You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

Is it immoral to power down an AI?

Kenshin March 25, 2018 at 15:18 5575 views 11 comments
Suppose I create a conscious self-aware AI machine. Is it immoral for me to shut down the computer in which the AI lives without its permission?

Comments (11)

Ying March 25, 2018 at 15:41 #166422
How do you distinguish between a "conscious self aware AI" and a "p zombie AI"?
Kenshin March 25, 2018 at 15:59 #166430
Reply to Ying We usually assign rights to all people even if we can't prove that they are conscious. So if we assign rights to people even though we can't prove they're conscious, does it matter if we can prove the AI is conscious when deciding whether to give it rights? Nevertheless, in my question it is a postulate that the AI is conscious in this case.
charleton March 25, 2018 at 15:59 #166431
DEFINE....
Quoting Kenshin
self-aware


Kenshin March 25, 2018 at 16:01 #166432
Reply to charleton As in, the AI has the same subjective mental experience as a human being. The only difference is (1) it doesn't have a body, and (2) it was created artificially by man.
charleton March 25, 2018 at 16:04 #166434
Quoting Kenshin
As in, the AI has the same subjective mental experience as a human being. The only difference is (1) it doesn't have a body, and (2) it was created artificially by man.


This is a contradiction since the subjective mental experience of humans is generated by their bodies. We have every reason to think the characteristics of this experience are unique to each bodily experience.
Kenshin March 25, 2018 at 16:05 #166435
Reply to charleton Think of it this way. Imagine you went to a doctor for an operation, and the doctor told you that your body was actually a machine, and your mind was a computer. Would you deserve human rights?
BC March 25, 2018 at 16:17 #166437
Reply to Kenshin Please remember the first term in the acronym "AI" -- artificial not the genuine article. It isn't just a matter of "intelligence". We already have sort of "intelligent systems" that can do a number of important and complex procedures. What these systems are not, and what your AI - artificial intelligence - is not is "being". It takes more than code to make a being. A rat is far more of a being than AI. Biology is an essential part of being: a body, senses, pleasure, pain, motivation, mobility, will (even the small will of a rat), birth, and death.

As Charleton noted, "This is a contradiction since the subjective mental experience of humans is generated by their bodies."

Shut the damned thing off.
Kenshin March 25, 2018 at 16:22 #166439
Reply to Bitter Crank But as I asked Charleton, what if you personally found out that despite having a human body, your mind was actually an AI machine created by some mad scientists. Then all the experiences you've had up until now that you've thought were human experiences were actually AI experiences. All other people are real people, but what if you were the exception? Is this an idea you can entertain? In this case then, should you have rights like the other people?

BC March 25, 2018 at 17:19 #166469
Quoting Kenshin
Is this an idea you can entertain?


In a word, no.

I find the fascination with AI, and the fantasy that we are the creation of mad scientists to be exceedingly tedious. It has potential in science fiction, but in philosophy it's a bore.
charleton March 25, 2018 at 20:12 #166496
Quoting Kenshin
Think of it this way. Imagine you went to a doctor for an operation, and the doctor told you that your body was actually a machine, and your mind was a computer. Would you deserve human rights?


I'd rather not think in terms of absurdities.
Wayfarer March 25, 2018 at 21:34 #166519
Quoting Kenshin
Suppose I create a conscious self-aware AI machine. Is it immoral for me to shut down the computer in which the AI lives without its permission?


That's a very good question. People speak blithely about 'artificial intelligence' as if such devices really were beings, and not machines. But if they were self-aware - if they were in fact beings - then their rights ought to be taken into consideration. (Although dealing with some of the potential complexes that might arise from such a scenario could be challenging - 'what do you mean, "I hate being a machine"?!?')

But then ask yourself this - should the Uber self-driving vehicle that killed a pedestrian last week be held responsible for the death? Should the vehicle be charged and, if found guilty, punished? Because it is after all an AI unit.