The Robot Who was Afraid of the Dark
I originally posted this in Artificial Intelligence and the Intermind Model, but I think I caught the tail end of the thread, so I hope the moderators don't mind that I've put it up as a new post.
What if we designed a robot that could act scared when it saw a snake? Purely mechanical, of course. Part of the fear response would be that the hydraulic pump responsible for oiling the joints speeds up, and that higher-conduction-velocity wires are brought into play to facilitate faster reaction times. This control system is regulated through feedback loops wired into the head of the robot. When the snake is spotted, the control paths in the head of the robot suddenly reroute power away from non-essential functions, such as recharging the batteries, and into the peripheral sense receptors. Artificial pupils dilate to increase information through sight, and so on.
This robot has been programmed with a few phrases that let the programmer know what is happening in the circuits: "batteries low", that sort of thing. In the case of the snake, it reads all these reactions and gives the feedback "I'm scared."
Is it really scared?
Before you answer, and as you probably know, a long, long time ago they did live vivisections on dogs and other animals because they did not believe the animals actually felt pain. The pain response, all that yelping and carrying on, was nothing more than a set of reflexes programmed into the animals, the scientists and theologians argued. Only humans, designed in God's image, actually felt pain as we know it.
Comments (43)
It's theorized that interoception and affect are major aspects of human emotion. Also in the mix are our past experiences and emotion concepts, like the concept of fear. So for a machine to have a human-like experience of fear, at a minimum it would need the concept and its associated interoceptive sensations. As you describe it, the interoceptive sensations for the robot would be predictive feedback loops associated with "hydraulic pumps," "higher conduction velocity wires," "batteries," etc. And of course the robot would need the capacity to consciously recognize these sensations in the context of a snake, which it has learned to fear for some reason, to conclude "I'm scared."
The code of the robot has been divided into a simple executive program that can activate a range of other programs. All the executive code has to do is call the correct program and it will execute as a hard-wired reflex. As it sits atop these other programs, the executive code is not 'aware' of how they operate (one program is written in C++ and one in COBOL, and except for the interface they are incompatible). In a way, there is a mask separating them. All the executive program knows is that after identifying the snake, the hydraulic pump sped up, the large-conduction-velocity wires began to hum, its vision became brighter, and other background programs, such as vacuuming, have shut down. It is also aware that the snake may cause damage to its shell, and it is programmed to avoid that.
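A minimal sketch of that architecture, with all names invented for illustration: the executive layer can only trigger reflex subprograms through a narrow interface and read back status strings; it never sees their internals (the 'mask').

```python
class Reflex:
    """Stands in for a hard-wired routine (the real one might be C++ or COBOL)."""
    def __init__(self, status):
        self._status = status

    def run(self):
        return self._status  # only this status string crosses the interface

class Executive:
    def __init__(self):
        self.reflexes = [
            Reflex("hydraulic pump sped up"),
            Reflex("high-conduction-velocity wires humming"),
            Reflex("vision brightened"),
        ]
        self.background_tasks = {"vacuuming", "battery recharge"}

    def on_threat(self, percept):
        # The executive only sees the status strings, never the mechanisms.
        log = [f"{percept} identified"] + [r.run() for r in self.reflexes]
        self.background_tasks.clear()        # shut down non-essential programs
        log.append("background programs halted")
        # Reading the summed reactions, it reports a pre-programmed label:
        return "I'm scared", log

report, log = Executive().on_threat("snake")
print(report)   # I'm scared
```

The point the sketch tries to capture is that the "I'm scared" report is computed entirely from opaque status readouts, much as the comment describes.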
You mean, they don't have self-pity?
I guess not.
And I have a nice robot too. She's called 'Siri'.
I think self-awareness understood in the ordinary 'human' sense requires symbolic language. If you want to say that animals are, or might be, self-aware, then what could you mean by that?
Yes, I have no doubt animals' spirits may become depressed.
"Out" or "out of it"?
A very siri robot?
Animals may read signs, but they don't understand symbolism. Human languages, linguistic and visual, are the only symbolic languages we know of.
It's believed that this works the other way around in people. The predictive brain, operating subconsciously as in your model, would direct the release of adrenaline etc. after recognizing a serious threat. At some point the more 'executive' consciousness would realize what was going on and perhaps think something like "I'm scared."
Military applications are frightening.
An article about this appeared in today's Bloomberg.
https://www.bloomberg.com/view/articles/2017-09-05/take-elon-musk-seriously-on-the-russian-ai-threat
Clearly, my ineptitude, not that of the author of the diagram.
My point is, one should always make a point that he or she understands; otherwise the point is lost even on its maker. This one is clearly lost on me.
Didn't know there was an AI arms race. If it's true I imagine it will greatly accelerate development.
Yes, did you catch Elon Musk's tweet from yesterday?
He and 116 other international technologists have asked the UN to ban autonomous weapons, i.e. killer robots. The UK has already said it would not participate in a ban, and I am sure that the US, China, Russia and others will not participate either.
And to think this site is one of the biggest philosophical sites on the net, if I was trying to get an edge or new insight I'd be reading your posts very carefully. :)
People suck. :’(
Hi Nelson,
This is a well-considered point.
Why wouldn't a program that allows for illogical feelings count, though? Scientists are, after all, designing the 'self'. When the weighting of an electro-neural signal exceeds a threshold of 8, for example, we may program the neuron to start firing off in random fashion to random connections.
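A sketch of that idea; the threshold of 8 comes from the comment, while the names and the random fan-out rule are invented for illustration.

```python
import random

THRESHOLD = 8  # the comment's example threshold

def fire(weighted_signal, connections, rng):
    """Fire to a random subset of connections once the threshold is exceeded."""
    if weighted_signal <= THRESHOLD:
        return []                          # below threshold: neuron stays quiet
    k = rng.randint(1, len(connections))   # random fan-out size
    return rng.sample(connections, k)      # random target connections

rng = random.Random(0)
targets = ["fear", "memory", "motor", "speech"]
print(fire(5, targets, rng))   # [] -- sub-threshold, deterministic silence
print(fire(9, targets, rng))   # some random subset of targets
```

Below threshold the unit is fully predictable; above it, even the programmer cannot say which connections will light up, which is roughly the "illogical feelings" mechanism being proposed.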
I agree. A program that allows for illogical feelings is okay as long as the programmer cannot predict what it will result in. I also think that there is a difference between the robot actually having a faulty thought process that results in illogical feelings and illogical feelings just popping into existence. It is those faulty processes that will determine the personality of the robot. For example, two robots with the same code could develop different personalities.
Interesting. If the robot had previously learnt that eating pies was safe, but then saw a person die from eating a pie, is it fair to create an extreme fear response when it is presented with a pie to eat, rather than weighting it at a 'be aware' or simple-avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty.
Perhaps if the robot did not understand how the person died - maybe the pie killed them, then the weighting is justified. A pie might kill it. Then if you explained to the robot why the person had died, should the robot then reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?
What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than group them into subroutines with 'push me now' buttons on them? The robot would spend all day long trying to focus on walking across the living room.
What is the distinction from a person who reacts the same way? They too have over-weighted their code. The shock of seeing death caused by a pie, or any other death, has over-weighted their perception of pies. Perhaps their friend was the one that died. The entire incident caused the weighting, but the focus fell on the guilty pie.
Do you see it differently?
Of course you could also condition the robot to be afraid of the pie by beating it with a stick every time it saw a pie, but that is a slightly different matter.
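A toy version of the over-weighting question discussed above, with every number invented: a single association weight between "pie" and fear, saturated by one traumatic pairing and only partly decayed when the event is explained away.

```python
fear_weight = {"pie": 0.1}   # previously learnt: pies are safe

def witness_death(stimulus, shock=0.9):
    # One traumatic pairing saturates the association (over-weighting).
    fear_weight[stimulus] = min(1.0, fear_weight[stimulus] + shock)

def explain_away(stimulus, trust=0.5):
    # How much the weight drops depends on how far the explanation is trusted.
    fear_weight[stimulus] *= (1.0 - trust)

witness_death("pie")
assert fear_weight["pie"] == 1.0   # extreme fear response: over-weighted
explain_away("pie")
print(fear_weight["pie"])          # 0.5 -- reduced, but the phobia lingers
```

On this sketch, "should the robot reduce the weighting?" becomes a question about the trust parameter, which is exactly where the judgement (or misjudgement) lives.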
This doesn't appear to be true. Dolphins have this ability. See: http://www.actforlibraries.org/animal-intelligence-how-dolphins-read-symbols/
Also dolphins can recognize themselves in a mirror, which suggests that they have a self concept.
Yes, and that goes for all feelings. The robot may, through a faulty thought process, decide that it loved brick houses or hated cement even if it had no logical grounds.
Interesting. If the robot had previously learnt that eating pies was safe, but then saw a person die from eating a pie, is it fair to create an extreme fear response when it is presented with a pie to eat, rather than weighting it at a 'be aware' or simple-avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty.
What I'm saying is that the robot should be able, or even prone, to be illogical. The robot may develop a pie phobia or maybe it won't. It all depends on whether the robot assesses the situation correctly. Without a chance to misjudge situations or calculate them badly, it would remain static, never change its personality, and always reply with the same answers. It would be as aware as a chatbot.
Perhaps if the robot did not understand how the person died - maybe the pie killed them, then the weighting is justified. A pie might kill it. Then if you explained to the robot why the person had died, should the robot then reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?
It's up to the robot to judge if we are speaking truth and if it should listen. It may not be the reasonable thing to do and that is the point.
What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than group them into subroutines with 'push me now' buttons on them? The robot would spend all day long trying to focus on walking across the living room.
That is the problem. If the robot is totally logical, it would have total control over its feelings, and that is obviously not how real creatures work. And if reason is not connected with feelings, the feelings still have to be controlled by a programmed logical system. My solution is giving the robot a mostly logical thought process that sometimes misjudges and acts illogically. That way we get a bit of both.
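A sketch of that compromise, with the error rate and threshold invented: judgement is logical almost all of the time, but a small chance of misjudging lets illogical conclusions slip through.

```python
import random

def judge(threat_level, rng, error_rate=0.05):
    """Return True if the robot concludes the situation is dangerous."""
    logical_answer = threat_level > 0.5
    if rng.random() < error_rate:   # the occasional misjudgment
        return not logical_answer   # an illogical conclusion slips through
    return logical_answer

rng = random.Random(1)
verdicts = [judge(0.2, rng) for _ in range(1000)]
print(sum(verdicts))   # a few dozen illogical 'dangerous' verdicts out of 1000
```

Two robots running this same code with different random streams would accumulate different histories of misjudgements, which is one way to read the earlier point that identical code could yield different personalities.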
It seems to me that we should intentionally avoid emotional AI, and perhaps even consciousness, or allow just enough consciousness to learn or adapt so that it can accomplish our goals.
Hi Nelson, if a human has an illogical thought process, is that also the result of faulty wiring or code? What's the difference?
Of course, no one has consciousness figured out yet, and I'm certainly no expert, but I think I may know enough to caution against making too many assumptions about awareness and emotions.
How do we know that we are scared, if not through an awareness of our own physical characteristics (heart beating faster, adrenaline rush, the need to run, etc.) together with knowing the symbol for those characteristics occurring together, "fear", in order to communicate that we fear something?
You mention the physical characteristics of fear. All that is needed is an awareness of those physical characteristics and a label, or designation, for those characteristics - "fear". In this sense, the robot would know fear, and know that it fears if it can associate those characteristics with its self. A robot can be aware of its own condition and then communicate that condition to others if it has instructions for which symbol refers to which condition: "fear", "content", "sad", etc.
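A toy mapping along those lines, with sensor names and thresholds invented: a pattern of internal readings is associated with the symbol the robot reports.

```python
def label_state(pump_rate, damage_risk):
    """Associate a pattern of internal conditions with an emotion symbol."""
    if pump_rate > 0.8 and damage_risk > 0.7:
        return "fear"       # racing pump plus high threat
    if pump_rate < 0.3 and damage_risk < 0.2:
        return "content"    # idling and safe
    return "neutral"

# Snake in view: pump racing, threat high -> the robot reports fear.
print(label_state(pump_rate=0.9, damage_risk=0.9))   # fear
print(label_state(pump_rate=0.2, damage_risk=0.1))   # content
```

Whether such a lookup amounts to "knowing fear" or merely reporting it is, of course, the whole question of the thread.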
The fun thing to explain, as Nelson alluded to, is when we have a ton of positively weighted inputs that, when summed, lead to the opposite feeling. For example, you may hate the way a yellow beach house looks at sunset, yet independently love yellow, beaches, houses and sunsets. You might have to surmise that a contradiction has occurred (e.g. beaches are nature, nature is inviolable, houses are nice, but on a beach they violate nature, or something to that effect).
I believe it can be coded though - you can code the illogical without it being a fault.
Quoting MikeL
The robot isn't really scared if it's just programmed to say, "I'm scared."
But, if some genuine menace (a snake probably wouldn't menace a robot) triggered measures for self-protection, then it could be said that the robot is scared.
It's the old question of what you call "conscious".
The experience of a purposefully-responsive device is that device's surroundings and events, in the context of the purpose(s) of its purposeful response.
The robot can be scared.
Dogs, cats, and all other animals are, of course, much more like us than the robot is. For one thing, all of us animals result from natural selection, and the purposes and precautions that go with that. Harming an animal of any kind is very much like harming a human.
Michael Ossipoff