At what point does something become a preference rather than a program?
This thread was sparked by a perusal of an essay concerning NPCs in video games and their ethical importance. Link.
At what point does something become a preference rather than a program?
Say I am typing on a word processor, like I am right now. I type the letter "A". The letter "A" exists continuously; it is programmed to continue to display on my screen. Now say I delete the letter "A". Was this going against the preference of the letter "A"? Or was I overriding the programming of the letter "A"?
From a computational perspective, humans are just like computers. But it's immoral in many cases to stop someone from pursuing their preferences.
An NPC in a video game is programmed to attack you and defend against your attacks. Is that a preference or just programming? Where do we draw the line?
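To make the "just programming" side of the question concrete, here is a minimal sketch of what an NPC's entire behavioral repertoire can amount to: a fixed stimulus-response lookup. The class and state names are hypothetical illustrations, not taken from any actual game.

```python
# A toy NPC whose whole "behavior" is a lookup table.
# There is no inner state that could plausibly count as a want;
# the question is whether anything more would change that.

class NPC:
    def react(self, player_action: str) -> str:
        # Each stimulus maps to exactly one canned response.
        if player_action == "attack":
            return "defend"
        if player_action == "approach":
            return "attack"
        return "idle"

npc = NPC()
print(npc.react("attack"))    # defend
print(npc.react("approach"))  # attack
print(npc.react("wave"))      # idle
```

Whether adding more layers of this kind of dispatch could ever turn a response into a preference is exactly the line the post asks about.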
I think we draw the line at the point at which a being has sentience; i.e. that being takes ownership of its programming and therefore takes it as its preferences. But it's impossible to know if something is truly conscious or not. So should we act as if these NPCs are conscious?
Comments (11)
One wouldn't want to get carried away with imputing reality to characters in games and epics. People get carried away with imputing complex mental processes to their pets. At least an actual dog can have actual preferences (our retriever had all sorts of preferences and balked when they were not honored). A character in a novel cannot have actual anything, EXCEPT that the author is an actual person. The same goes for video games -- there is a real author behind the artificial characters.
Part of the pleasure of fiction is to join with the author in a game of "let's pretend"--the necessary voluntary suspension of disbelief. Of course Frodo, Harry, and Darth don't exist. They are nothing more than ink spots on paper. BUT... we can "bring them to life" and sort of "share their reality". We take the author's text as a testament of their reality.
Aren't we just machines as well?
These have the appearance of substantial claims but both fall flat. The first falls apart once we recognize that it is a likeness by analogy. Humans are not just like computers, but we can make sense of cognition by making an analogy to computers. Analogies are only as strong as they approach equivalence, and in this case the difference between humans and computers is precisely what we are after, so we cannot use the analogy to make sense of that difference in cognition.
The second claim has a bit of a weasel word that intimates a strong position, but actually fails to take any position at all. By including the word "many" you implicitly suggest there is something inherent about the impermissibility of preventing someone from pursuing preferences, but "many" can also be used to imply that there is at least one case where it is not immoral to stop someone from pursuing preferences. We need to know what the salient features are that make this immoral or not before we can make a judgement about NPCs.
Take some random object in your hand, throw it up in the air, and then catch it. A computer, I understand, has to do some pretty fancy math to get a robotic arm to do that. But you didn't do any math when you caught it. We may do the same things that computers do, but the stuff going on in our brains is not like the stuff going on in the computer in some important ways.
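The "fancy math" the commenter alludes to can be made concrete: to catch a tossed object, a robot controller might explicitly solve projectile kinematics to predict the catch point. This is a toy sketch (no air resistance, and the function names are my own illustrations); the commenter's point is that a human catcher never computes any of this explicitly.

```python
# Toy kinematics a robot arm controller might compute to catch
# an object thrown upward at v0_up m/s while drifting
# horizontally at vx m/s. Humans do none of this arithmetic.

G = 9.81  # gravitational acceleration, m/s^2

def time_to_return(v0_up: float) -> float:
    """Time for the object to fall back to its launch height."""
    return 2 * v0_up / G

def catch_point(x0: float, vx: float, v0_up: float) -> float:
    """Horizontal position where the hand must be at catch time."""
    return x0 + vx * time_to_return(v0_up)

# Thrown up at 4.905 m/s, drifting at 1 m/s: airborne for 1 s,
# so the hand must move 1 m from the launch point.
print(round(catch_point(0.0, 1.0, 4.905), 3))  # 1.0
```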
But it's worth noting that the parallelism of many modern-day computers is commonly compared to that of the brain. Parallel computers can compute things that a serial computer would struggle to do.