Asimov's Third Law...Fail!
Asimov's 3 Laws of Robotics
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
An AI (artificial intelligence) that passes the Turing test is, for all intents and purposes, indistinguishable from a human.
Asimov's 3rd Law fails for a simple reason: implicit in it is the provision that robots may protect themselves against other robots. The catch is that robots won't be able to tell robots (AI) apart from humans, because robots/AI pass the Turing test.
Quoting Agent Smith
If a robot isn't distinguishable from a human, would it still be subjected to Asimov's three laws of robotics?
Apparently not.
Then perhaps we could say: robots that have passed the Turing test are now exempt from Asimov's laws of robotics? As in, reserve the rules for those that do act robotically.
Yes.
Then... I don't think Asimov's rules fail. They just apply to those to whom they're applicable.
There are none to whom Asimov's laws apply. That's the point.
Wouldn't they just apply to robots that haven't passed the Turing test?
ALL robots pass the Turing test.
Really?! Wow, I didn't know that. Dang.
Well, now you know.
The More You Know.
A rule is meant for a certain category, yet the members of that category turn out to be exactly the kind the rule can't be applied to.
:chin:
If that's so, how does the robot know the difference between people ("human beings", "conscious rational agents", "neutral observers") and robots? Can he prevent a "logical, conscious, rational agent" from killing a robot? What if a robot turns against him?
Or is this what you ask? The root of robot identity crisis...
Breaking the law.
No, that's not it.
Kavka's toxin puzzle: I need only intend to drink the toxin; once the intending is done, I don't actually have to drink it.
Kavka-Asimov Paradox!
The 3 Laws of p-Zombies?
So the robot cannot harm, or through inaction cause harm. All his actions should be human-friendly: he cannot stab with a knife, and must prevent a knife from being stabbed into anyone. He should act as told, so long as his actions don't harm people or prevent him from preventing harm. He must protect himself, so long as he doesn't hurt people and the order is not to kill himself.
When they are ordered to kill themselves, and killing themselves means people get killed, they cannot kill themselves. When protecting themselves from people trying to kill them, they should let themselves be killed. When trying to protect themselves from robots that are preventing harm to people, they should not protect themselves. When ordered to kill robots, they should obey only if the robots they kill are not involved in preventing harm.
So neither the self-destruct demand nor the kill command can be complied with if people get hurt. They can't protect themselves from the self-destruct command if complying hurts no one. All robots not involved in preventing harm can be destroyed this way; the robots that are involved in preventing harm cannot obey that order.
When they don't know the difference between people and robots, the situation gets complicated. Should he kill himself if ordered? He must obey, and protecting himself is less important than obeying. But obeying is less important than preventing people from getting hurt. If he's a robot he should obey; if he's a person he should disobey. Looking in the mirror offers no solace. "Who am I?" the robot asks. Should I obey or should I disobey?
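The priority ordering worked through above can be written out as a small decision procedure. This is a minimal sketch, not anything from Asimov; the `Action` class and its fields (`harms_human`, `ordered_by_human`, and so on) are hypothetical names invented for illustration, and the sketch assumes the robot can classify agents correctly, which is exactly what the thread disputes.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would *refusing* to act let a human come to harm?
    ordered_by_human: bool      # was the action commanded by a human?
    endangers_self: bool        # would the action destroy the robot?

def permitted(action: Action) -> bool:
    """Evaluate the Three Laws in strict priority order: 1 > 2 > 3."""
    # First Law: never harm a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction (overrides everything below).
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (the First Law is already satisfied).
    if action.ordered_by_human:
        return True  # obedience outranks self-preservation
    # Third Law: self-preservation applies only when Laws 1 and 2 are silent.
    return not action.endangers_self

# The self-destruct dilemma from the comments: an order to self-destruct
# that hurts no one must be obeyed, because the Second Law outranks the Third.
print(permitted(Action(harms_human=False, inaction_harms_human=False,
                       ordered_by_human=True, endangers_self=True)))  # True
```

Note that every branch takes the classification of "human" for granted; if robots pass the Turing test, the inputs to `permitted` are undecidable, which is the thread's point.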
Asimov-Turing Pickalilly
:up: Interesting! Suicide is wrong: we, being robots/humans, are unable to harm humans/robots (Asimov's 1st Law).
Quoting Raymond
Cellular Suicide (Apoptosis)