You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

Asimov's Third Law...Fail!

Agent Smith January 05, 2022 at 16:40 5050 views 21 comments
Asimov's 3 Laws of Robotics

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

An AI (artificial intelligence) that passes the Turing test is, for all intents and purposes, indistinguishable from a human.

Asimov's 3[sup]rd[/sup] Law fails for a simple reason: implicit in it is the provision that robots may protect themselves against other robots, but the catch is that robots won't be able to tell the difference between robots (AI) and humans, since robots/AI pass the Turing test.
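The argument above can be put in pseudo-algorithmic form. A toy sketch (all names are hypothetical, chosen only to illustrate the argument, not real robotics code): the Third Law permits self-defence only when it cannot conflict with the First Law, so a robot must know whether its attacker is human. If every AI passes the Turing test, the classifier can only ever answer "unknown", and self-defence is never licensed.

```python
from enum import Enum

class Kind(Enum):
    HUMAN = 1
    ROBOT = 2
    UNKNOWN = 3  # a Turing-passing AI is indistinguishable from a human

def may_defend_self(attacker_kind: Kind) -> bool:
    """Third Law: self-protection is allowed only when it cannot
    conflict with the First Law (no harm to humans)."""
    if attacker_kind == Kind.HUMAN:
        return False  # defending itself might injure a human
    if attacker_kind == Kind.ROBOT:
        return True   # no human involved, so self-protection stands
    # UNKNOWN: the attacker might be human, so the First Law forces
    # the robot to err on the side of not defending itself.
    return False

# If all AI passes the Turing test, every attacker classifies as
# UNKNOWN, and the Third Law never actually applies.
print(may_defend_self(Kind.UNKNOWN))  # False
```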





Comments (21)

john27 January 05, 2022 at 17:21 #639133
Quoting Agent Smith
Asimov's 3rd Law fails for a simple reason: implicit in it is the provision that robots may protect themselves against other robots, but the catch is that robots won't be able to tell the difference between robots (AI) and humans, since robots/AI pass the Turing test.


Quoting Agent Smith
An AI (artificial intelligence) that passes the Turing test is, for all intents and purposes, indistinguishable from a human.


If a robot isn't distinguishable from a human, would it still be subject to Asimov's three laws of robotics?
Agent Smith January 05, 2022 at 18:04 #639151
Quoting john27
If a robot isn't distinguishable from a human, would it still be subject to Asimov's three laws of robotics?


Apparently not.
john27 January 05, 2022 at 18:07 #639152
Quoting Agent Smith
Apparently not.


Then perhaps we could say that robots which have passed the Turing test are now exempt from Asimov's laws of robotics? As in, reserve the rules for those that do act robotically.
Agent Smith January 05, 2022 at 18:24 #639156
Quoting john27
Then perhaps we could say that robots which have passed the Turing test are now exempt from Asimov's laws of robotics? As in, reserve the rules for those that do act robotically.


Yes.
john27 January 05, 2022 at 18:27 #639158
Reply to Agent Smith

Then... I don't believe Asimov's rules fail. They just apply to those to whom they're applicable.
Agent Smith January 05, 2022 at 18:31 #639160
Quoting john27
They just apply to those to whom they're applicable.


There are none to whom Asimov's laws apply. That's the point.
john27 January 05, 2022 at 18:33 #639162
Reply to Agent Smith

Wouldn't they just apply to robots who haven't passed the Turing test?
Agent Smith January 05, 2022 at 18:33 #639163
Quoting john27
Wouldn't they just apply to robots who haven't passed the Turing test?


ALL robots pass the Turing test.
john27 January 05, 2022 at 18:35 #639164
Quoting Agent Smith
ALL robots pass the Turing test.


Really?! Wow, I didn't know that. Dang.
Agent Smith January 05, 2022 at 18:36 #639165
Quoting john27
Really?! Wow, I didn't know that. Dang.


Well, now you know.
john27 January 05, 2022 at 18:37 #639166
Agent Smith January 05, 2022 at 18:38 #639168
Agent Smith January 18, 2022 at 15:33 #644766
A robot to which the 3 laws of robotics applies is one that is sentient enough for the 3 laws of robotics not to apply to it. What's a name for this kinda situation? Catch-22? To want to be declared insane is to prove that you're sane. :chin: There must be a better, fancier, more eloquent way to express this state of affairs. Are there real-world examples?

A rule is meant for a certain category of people, but then people in that category are the exact kind of people the rule isn't meant for.

:chin:
Raymond January 18, 2022 at 15:50 #644774
Quoting Agent Smith
An AI (artificial intelligence) that passes the Turing test is, for all intents and purposes, indistinguishable from a human.


If that's so, how does the robot know the difference between people ("human beings", "conscious rational agents", "neutral observers") and robots? Can he prevent a "logical, conscious, rational agent" from killing a robot? What if a robot turns against him?

Or is this what you're asking? The root of the robot identity crisis...

Agent Smith January 18, 2022 at 19:07 #644845
Reply to Raymond While I am interested in knowing more about Asimov's 3 laws and how they stand up to deeper analysis, I'd also like to know a short phrase/name for situations like the one described in my last post, you know, like the Dunning-Kruger effect, the drunkard's search, rubber duck debugging, and so on.
Raymond January 18, 2022 at 20:04 #644862
Reply to Agent Smith

Breaking the law.



Agent Smith January 18, 2022 at 20:12 #644863
Quoting Raymond
Breaking the law.


No, that's not it.
Agent Smith January 18, 2022 at 20:27 #644866
A robot will intend to follow Asimov's laws, but then that means it doesn't have to follow Asimov's laws.

Kavka's toxin puzzle: I intend to drink the poison, but then, after that, I don't have to drink the poison.

Kavka-Asimov Paradox!
Real Gone Cat January 18, 2022 at 20:54 #644874
Um...

The 3 Laws of p-Zombies?
Raymond January 18, 2022 at 20:55 #644875
Quoting Agent Smith


First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


So the robot cannot harm, or through inaction cause harm. All his actions should be human-friendly. He cannot stab with a knife and must prevent a knife from being stabbed. He should act as told as long as his actions don't harm people or prevent him from preventing harm. He must protect himself as long as he doesn't hurt people and the order is not to kill himself.

When they are ordered to kill themselves, and killing themselves means people get killed, they cannot kill themselves. When protecting themselves from people trying to kill them, they should let themselves get killed. When trying to protect themselves from robots that are preventing harm to people, they should not protect themselves. When ordered to kill robots, they should obey only if the robots they kill are not involved in preventing harm.

So neither the auto-destruct demand nor the kill command can be complied with if people get hurt. They can't protect themselves from the auto-destruct command if complying doesn't hurt people. All robots that are not involved in preventing harm can be destroyed like this. The robots that are involved in preventing harm cannot obey that order.

When they don't know the difference between people and robots, the situation gets complicated. Should he kill himself if ordered? He must obey, and protecting himself is less important than obeying. But obeying is less important than preventing people from getting hurt. If he's a robot he should obey; if he's a person he should disobey. Looking in the mirror doesn't offer solace. "Who am I?" the robot asks. Should I obey or should I disobey?

Asimov-Turing Pickalilly
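The case analysis above can be written as an explicit priority check, with the First Law outranking the Second and the Second outranking the Third. A minimal sketch (the function and parameter names are hypothetical, invented here just to make the ordering visible):

```python
def comply_with_destruct_order(harms_humans_if_complied: bool,
                               robot_prevents_harm: bool) -> bool:
    """Should a robot obey an order to destroy itself?

    Priority: First Law (no harm to humans) > Second Law (obey
    orders) > Third Law (self-preservation).
    """
    # First Law: if complying would get people hurt, e.g. because
    # the robot is currently preventing harm, the order is void.
    if harms_humans_if_complied or robot_prevents_harm:
        return False
    # Second Law outranks the Third: obey, even at the cost of its
    # own existence. Self-preservation never overrides an order.
    return True

# A robot not involved in preventing harm must self-destruct on
# command; one that is shielding people must refuse.
print(comply_with_destruct_order(False, False))  # True
print(comply_with_destruct_order(False, True))   # False
```

Note that this check silently assumes the order-giver is known to be human; if robots pass the Turing test, even that premise of the Second Law becomes undecidable.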
Agent Smith January 19, 2022 at 06:01 #645010
Quoting Raymond
When they are ordered to kill themselves and killing themselves means people get killed they cannot kill themselves.


:up: Interesting! Suicide is wrong: we, being robots/humans, are unable to harm humans/robots (Asimov's 1[sup]st[/sup] law).

Quoting Raymond
auto destruct


Cellular Suicide (Apoptosis)