
The Robot Who was Afraid of the Dark

MikeL September 03, 2017 at 21:31 13750 views 43 comments
I originally posted this in Artificial Intelligence and the Intermind Model, but I think I caught the end of the thread, so I hope the moderators don't mind that I've put it up as a new post.

What if we designed a robot that could act scared when it saw a snake? Purely mechanical, of course. Part of the fear response would be that the hydraulic pump responsible for oiling the joints speeds up, and that higher-conduction-velocity wires are brought into play to facilitate faster reaction times. This control system is regulated through feedback loops wired into the head of the robot. When the snake is spotted, the control paths in the head of the robot suddenly reroute power away from non-essential compartments, such as recharging the batteries, and into the peripheral sense receptors. Artificial pupils dilate to increase information through sight, and so on.

This robot has been programmed with a few phrases that let the programmer know what is happening in the circuits, "batteries low", that sort of thing. In the case of the snake it reads all these reactions and gives the feedback "I'm scared."
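
To make the mechanics concrete, here is a minimal sketch of the sort of control loop I have in mind (the class, names and numbers are hypothetical, not any real robot API):

```python
# Minimal sketch of the fear-response loop described above.
# All names are hypothetical; this is not any real robot API.

NONESSENTIAL = {"battery_charging", "vacuuming"}

class Robot:
    def __init__(self):
        self.pump_speed = 1.0           # hydraulic joint-oiling pump, arbitrary units
        self.fast_wires_active = False  # higher-conduction-velocity wiring
        self.pupil_dilation = 0.3       # fraction of maximum aperture
        self.active_tasks = set(NONESSENTIAL)
        self.threats = {"snake"}        # list of known threatening objects

    def perceive(self, obj):
        if obj in self.threats:
            self.fear_response()

    def fear_response(self):
        self.active_tasks -= NONESSENTIAL  # reroute power from non-essentials
        self.pump_speed *= 3.0             # speed up joint lubrication
        self.fast_wires_active = True      # faster reaction times
        self.pupil_dilation = 1.0          # more information through sight
        print("I'm scared")                # canned status phrase

robot = Robot()
robot.perceive("snake")  # prints: I'm scared
```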

Is it really scared?

Before you answer, and as you probably know, a long, long time ago scientists performed vivisections on dogs and other animals because they did not believe the animals actually felt pain. The pain response, all that yelping and carrying on, was nothing more than a set of reflexes programmed into the animals, the scientists and theologians argued. Only humans, designed in God's image, actually felt pain as we know it.

Comments (43)

Janus September 03, 2017 at 22:24 #102150
Robots don't feel pain. Animals feel pain, but are not self-aware they are feeling it. Humans feel pain and are (sometimes at least) self-aware they are feeling it, and they may also be conscious of the pain as an indication of a threat to life, or as a prison they fear they may escape from only by dying. These kinds of human experience of pain probably make the pain much worse and harder to bear.
Wayfarer September 03, 2017 at 22:32 #102151
What is it that can say 'I am'? Answer that, and the rest should be easy.
praxis September 04, 2017 at 00:54 #102186
Quoting MikeL
This robot has been programmed with a few phrases that let the programmer know what is happening in the circuits, "batteries low", that sort of thing. In the case of the snake it reads all these reactions and gives the feedback "I'm scared."

Is it really scared?


It's theorized that interoception and affect are major aspects of human emotion. Also in the mix are our past experiences and emotion concepts, like the concept of fear. So for a machine to have a human-like experience of fear, at a minimum it would need the concept and its associated interoceptive sensations. As you describe it, the interoceptive sensations for the robot would be predictive feedback loops associated with "hydraulic pumps," "higher-conduction-velocity wires," "batteries," etc. And of course the robot would need the capacity to consciously recognize these sensations in the context of a snake, which it has learned to fear for some reason, to conclude "I'm scared."

MikeL September 04, 2017 at 06:04 #102236
Reply to Janus Hi Janus, how do you know animals aren't self-aware of their pain?
MikeL September 04, 2017 at 06:22 #102239
Reply to praxis The robot has been programmed to assess threats to its structure. As it's impossible to program for everything, part of the program says: if an object is mobile and unidentified, activate the fear response. Here it has identified the snake, which is on its list of threatening objects, and thus the program has activated. A physiological response is occurring within the robot.

The code of the robot has been divided into a simple executive program that can activate a range of other programs. All the executive code has to do is call on the correct program and it will execute as a hard-wired reflex. As it sits atop these other programs, the executive code is not 'aware' of how they operate (one program is written in C++ and one in COBOL, and except for the interface they are incompatible). In a way, there is a mask separating them. All the executive program knows is that after identifying the snake, the hydraulic pump sped up, the large-conduction-velocity wires began to hum, its vision became brighter, and other background programs, such as vacuuming, have shut down. It is also aware that the snake may cause damage to its shell, and it is programmed to avoid that.
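
In code, the layering might look something like this sketch (all names invented for illustration; the point is that the executive sees only the interface and the observable aftermath, never the internals):

```python
# Hypothetical sketch of the executive layer: it dispatches to opaque
# subprograms and observes only their external effects, never their internals.

def fear_reflex():
    # Stands in for the hard-wired C++/COBOL routines; the executive
    # cannot inspect this body, it can only invoke it.
    return {"pump": "sped up", "fast_wires": "humming",
            "vision": "brighter", "vacuuming": "shut down"}

REFLEXES = {"snake": fear_reflex}  # threat -> reflex lookup table

def executive(identified_object):
    reflex = REFLEXES.get(identified_object)
    if reflex is None:
        return "no response"
    effects = reflex()  # executes as a hard-wired reflex
    # All the executive 'knows' is the observable aftermath:
    return f"after seeing {identified_object}: {effects}"

print(executive("snake"))
```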
Wayfarer September 04, 2017 at 06:26 #102240
Quoting Janus
Animals feel pain, but are not self-aware they are feeling it.


You mean, they don't have self-pity?
Janus September 04, 2017 at 09:41 #102298
Reply to Wayfarer

I guess not.
Wayfarer September 04, 2017 at 09:48 #102302
Reply to Janus well I agree. Although dogs do mope ;-)
MikeL September 04, 2017 at 10:09 #102306
You two need to get out more.
Wayfarer September 04, 2017 at 10:16 #102309
Reply to MikeL Oh, I walk my dog every day.

And I have a nice robot too. She's called 'Siri'.
MikeL September 04, 2017 at 10:25 #102312
It sounds like you're living the high life.
Janus September 04, 2017 at 10:30 #102316
Reply to MikeL

I think self-awareness understood in the ordinary 'human' sense requires symbolic language. If you want to say that animals are, or might be, self-aware, then what could you mean by that?
Janus September 04, 2017 at 10:32 #102317
Reply to Wayfarer

Yes, I have no doubt animals' spirits may become depressed.
Janus September 04, 2017 at 10:33 #102318
Reply to MikeL

"Out" or "out of it"?
Janus September 04, 2017 at 10:34 #102319
Reply to Wayfarer

A very siri robot?
MikeL September 04, 2017 at 10:35 #102321
Reply to Janus Symbolic language is the common language of all animals. It is the most fundamental aspect of language. Animals read gestures and make gestures to be read.
Janus September 04, 2017 at 10:39 #102322
Reply to MikeL

Animals may read signs, but they don't understand symbolism. Human languages, linguistic and visual, are the only symbolic languages we know of.
MikeL September 04, 2017 at 10:45 #102323
There was this guy I knew. His name was Pavlov. He had a dog too.
praxis September 04, 2017 at 15:04 #102377
Quoting MikeL
All the executive program knows is that after identifying the snake, the hydraulic pump sped up, the large-conduction-velocity wires began to hum, its vision became brighter, and other background programs, such as vacuuming, have shut down.


It's believed that this works the other way around in people. The predictive brain, operating subconsciously as in your model, would direct the release of adrenaline etc. after recognizing a serious threat. At some point the more 'executive' consciousness would realize what was going on and perhaps think something like "I'm scared."
MikeL September 04, 2017 at 21:26 #102448
Reply to praxis Hi Praxis, I take your point. That sounds like the better explanation.
Cavacava September 05, 2017 at 22:34 #102757
Here is a road map for an emotional, creative, social AI.
[Image: road map diagram for an emotional, creative, social AI]

Military applications are frightening.


An article about this appeared in today's Bloomberg:
https://www.bloomberg.com/view/articles/2017-09-05/take-elon-musk-seriously-on-the-russian-ai-threat
szardosszemagad September 05, 2017 at 22:55 #102761
Reply to Cavacava The road map is nice and symmetrical. A beauty. Too complex for me to examine. Arrows can mean anything: processes, feedback, feed-forward, reaction, action, feeling, emotion, motivation. Too much. I like a two-way road map, such as two arrows pointing in opposite directions, side by side, parallel to each other. THAT I understand. To understand this road map would take a long time and reading a magazine article, and even that is no guarantee I'd understand it when all is said and done.

Clearly, my ineptitude, not that of the author of the diagram.

My point is, one should always make a point that he or she themselves understands; otherwise the point is lost even on them. This one is clearly lost on me.
Cavacava September 05, 2017 at 22:58 #102762
Reply to szardosszemagad I am certainly no expert; that is why I referenced the article and the short seven-page paper written by people who have a much better understanding.
szardosszemagad September 05, 2017 at 23:17 #102772
Reply to Cavacava I apologize; I was not referring to you at all when I mentioned that "some people don't understand this chart". After all, I don't know you, and up to the point of your most recent post I had no clue what you knew and what you didn't. But it was nice of you to volunteer that you are no expert either. :-)
praxis September 06, 2017 at 01:59 #102801
Reply to Cavacava The line that I found disturbing:

Vladimir Putin: Whoever becomes the leader in this area [AI] will rule the world.


Didn't know there was an AI arms race. If it's true, I imagine it will greatly accelerate development.
Cavacava September 06, 2017 at 02:16 #102807
Reply to praxis

Yes, did you catch Elon Musk's tweet from yesterday?

China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.
5:33 AM - Sep 4, 2017


He and 116 other international technicians have asked the UN to ban autonomous weapons, killer robots. The UK has already said it would not participate in the ban... and I am sure that the US, China, Russia and others will also not participate.

MikeL September 06, 2017 at 09:18 #102844
Quoting praxis
Didn't know there was an AI arms race. If it's true, I imagine it will greatly accelerate development.


And to think this site is one of the biggest philosophy sites on the net. If I were trying to get an edge or a new insight, I'd be reading your posts very carefully. :)
praxis September 06, 2017 at 13:41 #102878
Reply to MikeL Well, being dim-witted and poorly informed can have its advantages at times. For instance, I now realize that I've been enjoying the naive notion in the back of my mind that AI could be used for good, rather than as a tool for the rich and powerful to acquire more wealth and power.

People suck. :’(
Nelson September 11, 2017 at 10:51 #103910
For a robot to truly feel scared it must truly be aware, like a human. My definition of being aware is: being able to act and think without input, having illogical feelings develop out of the self (scientists programming illogical feelings into you would therefore not count), being able to remember, and being able to act on said memories. The amount of awareness a creature possesses is determined by how much it fits these criteria. The robot is feeling real fear if it has all these qualities.
MikeL September 11, 2017 at 11:11 #103912
Quoting Nelson
My definition of being aware is: being able to act and think without input, having illogical feelings develop out of the self

Hi Nelson,
This is a well-considered point.

Why wouldn't a program that allows for illogical feelings count, though? Scientists are, after all, designing the 'self'. When the weighting of an electro-neural signal exceeds a threshold of 8, for example, we may program the neuron to start firing off in random fashion to random connections.
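
As a minimal sketch of that idea (the threshold of 8 is taken from above; everything else is invented for illustration):

```python
# Toy model: past a weighting threshold, the unit fires at random targets.
import random

THRESHOLD = 8.0

def step(weighting, connections):
    if weighting <= THRESHOLD:
        return []  # below threshold: normal, deterministic behaviour
    # Above threshold: fire a random number of times at random connections.
    n_spikes = random.randint(1, len(connections))
    return random.sample(connections, n_spikes)

print(step(9.2, ["c1", "c2", "c3", "c4"]))  # e.g. ['c3', 'c1']
```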
Nelson September 11, 2017 at 11:24 #103914
Reply to MikeL
I agree. A program that allows for illogical feelings is okay as long as the programmer cannot predict what it will result in. I also think that there is a difference between the robot actually having a thought process that is faulty, resulting in illogical feelings, and illogical feelings just popping into existence. It is those faulty processes that will determine the personality of the robot. For example, two robots with the same code will have different personalities.
MikeL September 11, 2017 at 11:32 #103915
So, are you saying that our own illogical feelings just 'pop' into existence without a neurochemical or coded cause?
Nelson September 11, 2017 at 11:41 #103917
I think that illogical feelings need to have a thought process behind them, no matter how faulty. If the robot was scared of pie because it had seen a video where someone choked on pie and died, that would be better than it suddenly being scared of pie. I have no problem with the process popping into existence, but I don't believe that the result and/or answer should.
MikeL September 11, 2017 at 12:15 #103920
Phobias. If we could program the computer to learn and adapt information from its environment that increased its probability of survival by evoking a fear response, then the only way a fear response in a robot would become a phobia is if there were a fault. Is that your position?

Interesting. If the robot had previously learnt that eating pies was safe, but then saw the person dying from eating the pie, then is it fair to create an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty.

Perhaps if the robot did not understand how the person died (maybe the pie killed them), then the weighting is justified. A pie might kill it. Then, if you explained to the robot why the person had died, should the robot reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?

What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than group them into subroutines with 'push me now' buttons on them? The robot would spend all day long trying to focus on walking across the living room.

What is the distinction between this and a person who reacts the same way? They too have over-weighted their code. The shock of seeing a death caused by a pie, or any other death, has over-weighted their perception of pies. Perhaps their friend was the one that died. The entire incident caused the weighting, but the focus fell on the guilty pie.

Do you see it differently?

Of course you could also condition the robot to be afraid of the pie by beating it with a stick every time it saw a pie, but that is a slightly different matter.
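
To put the weighting talk in rough code, here is a toy sketch (the numbers and levels are invented for illustration):

```python
# Hypothetical weighting scheme: an observed death spikes the fear weight
# paired with 'pie'; a later explanation decays it back toward avoidance.

fear_weight = {"pie": 0.1}  # baseline: pies previously learnt to be safe

def observe_death_involving(obj):
    fear_weight[obj] = 1.0  # shock pairs the object with an extreme weighting

def explain(obj, cause_understood):
    # If the true cause is understood, reduce the weighting to a mere
    # 'be aware / avoidance' level rather than a phobia-level response.
    if cause_understood:
        fear_weight[obj] = 0.3

observe_death_involving("pie")
print(fear_weight["pie"])  # 1.0: phobia-level response
explain("pie", cause_understood=True)
print(fear_weight["pie"])  # 0.3: simple avoidance
```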

praxis September 11, 2017 at 15:07 #103937
Reply to Janus
This doesn't appear to be true. Dolphins have this ability. See: http://www.actforlibraries.org/animal-intelligence-how-dolphins-read-symbols/

Also dolphins can recognize themselves in a mirror, which suggests that they have a self concept.
Nelson September 11, 2017 at 15:33 #103940
Quoting MikeL
Phobias. If we could program the computer to learn and adapt information from its environment that increased its probability of survival by evoking a fear response, then the only way a fear response in a robot would become a phobia is if there were a fault. Is that your position?

Yes, and that goes for all feelings. The robot may, through a faulty thought process, decide that it loved brick houses or hated cement even if it had no logical ground.

Quoting MikeL
Interesting. If the robot had previously learnt that eating pies was safe, but then saw the person dying from eating the pie, then is it fair to create an extreme fear response when presented with a pie to eat, rather than weighting it at a 'be aware' or simple avoidance level? You say not. You say the pairing of the pie with the fear response is unjustified [it is over-weighted] and therefore the code or wiring is faulty.

What I'm saying is that the robot should be able, or even prone, to illogicality. The robot may develop a pie phobia or maybe it won't. It all comes down to whether the robot calculates the situation right. Without a chance to misjudge or calculate situations badly, it would remain static, never change personality, and always reply with the same answers. It would be as aware as a chatbot.

Quoting MikeL
Perhaps if the robot did not understand how the person died (maybe the pie killed them), then the weighting is justified. A pie might kill it. Then, if you explained to the robot why the person had died, should the robot reduce the weighting on the fear response, especially if it too could suffer the same fate? Is that the reasonable thing to do? Can we do it?

It's up to the robot to judge whether we are speaking the truth and whether it should listen. It may not be the reasonable thing to do, and that is the point.

Quoting MikeL
What if reason and fear are not directly interwired? Do you wish to give the robot total control over all of its internal responses, rather than group them into subroutines with 'push me now' buttons on them? The robot would spend all day long trying to focus on walking across the living room.

That is the problem. If the robot is totally logical it would have total control over its feelings, and that is obviously not how real creatures work. And if reason is not connected with feelings, the feelings still have to be controlled by a programmed logical system. My solution is giving the robot a mostly logical thought process that sometimes misjudges and acts illogically. That way we get a bit of both.
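
A toy sketch of that 'mostly logical, sometimes misjudging' process (the rule and error rate are invented for illustration):

```python
# Toy model: a mostly-logical judgment that occasionally misjudges,
# so two robots running identical code drift apart over time.
import random

def judge(threat_level, error_rate=0.05):
    verdict = threat_level > 0.5       # the 'logical' rule
    if random.random() < error_rate:   # occasional misjudgment
        verdict = not verdict
    return verdict

robot_a = [judge(0.4) for _ in range(100)]
robot_b = [judge(0.4) for _ in range(100)]
print(robot_a == robot_b)  # almost certainly False: same code, diverging 'personalities'
```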

praxis September 11, 2017 at 18:10 #103949
I think it's important to acknowledge the fundamental difference in physical structure between human intelligence and AI when considering emotions such as fear. The substrate that AI will be built on is significantly different, and that will affect its development in terms of emotion. The biology associated with human emotion deals with regulating energy and other biologically based needs. General AI doesn't have these requirements. You'd have to go out of your way to simulate them, like going out of your way to simulate flying like a bird, which may be aesthetically pleasing but inefficient. But why would that be a desirable thing to do? Presumably we create AI to help accomplish our goals. If we were to encode a general AI with an imperative to replicate itself and simulate emotions as we experience them, wouldn't that be dangerous and counterproductive to accomplishing our goals?

It seems to me that we should intentionally avoid emotional AI, and perhaps even consciousness, or allow just enough consciousness to learn or adapt so that it can accomplish our goals.
Nelson September 12, 2017 at 07:00 #104172
I totally agree. There is no real reason for emotional AI. But if we wanted to create a truly aware AI, it would have to be emotional.
MikeL September 12, 2017 at 10:08 #104179
Quoting Nelson
The robot may, through a faulty thought process, decide that it loved brick houses or hated cement even if it had no logical ground.


Hi Nelson, if a human has an illogical thought process, is that also the result of faulty wiring or code? What's the difference?
praxis September 12, 2017 at 22:52 #104309
Quoting Nelson
if we wanted to create a truly aware AI, it would have to be emotional.


Of course no one has consciousness figured out yet, and I'm certainly no expert, but I think I may know enough to caution against making too many assumptions about awareness and emotions.
Harry Hindu September 13, 2017 at 00:25 #104317
Quoting MikeL
What if we designed a robot that could act scared when it saw a snake? Purely mechanical, of course. Part of the fear response would be that the hydraulic pump responsible for oiling the joints speeds up, and that higher-conduction-velocity wires are brought into play to facilitate faster reaction times. This control system is regulated through feedback loops wired into the head of the robot. When the snake is spotted, the control paths in the head of the robot suddenly reroute power away from non-essential compartments, such as recharging the batteries, and into the peripheral sense receptors. Artificial pupils dilate to increase information through sight, and so on.


How do we know that we are scared, if not by an awareness of our own physical characteristics (heart beating faster, an adrenaline rush, the need to run, etc.), and then by knowing the symbol for those characteristics occurring together, "fear", in order to communicate that we fear something?

You mention the physical characteristics of fear. All that is needed is an awareness of those physical characteristics and a label, or designation, for those characteristics: "fear". In this sense, the robot would know fear, and know that it fears, if it can associate those characteristics with itself. A robot can be aware of its own condition and then communicate that condition to others if it has instructions for which symbol refers to which condition: "fear", "content", "sad", etc.
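
A minimal sketch of that condition-to-symbol mapping (condition names and thresholds are invented for illustration):

```python
# Sense internal conditions, then report the symbol associated with that
# cluster of conditions.

SYMBOLS = {
    ("pump_fast", "fast_wires_on"): "fear",
    ("pump_slow", "fast_wires_off"): "content",
}

def report(readings):
    key = (
        "pump_fast" if readings["pump_speed"] > 2.0 else "pump_slow",
        "fast_wires_on" if readings["fast_wires"] else "fast_wires_off",
    )
    return SYMBOLS.get(key, "unknown")

print(report({"pump_speed": 3.0, "fast_wires": True}))  # fear
```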
MikeL September 13, 2017 at 09:15 #104402
Reply to Harry Hindu I agree. Just like people.
The fun thing to explain, as Nelson alluded to, is when we have a ton of positively weighted inputs that, when summed, lead to the opposite feeling. For example, you may hate the way a yellow beach house looks at sunset, yet independently love yellow, beaches, houses and sunsets. You might have to surmise that a contradiction has occurred (e.g. beaches are nature, nature is inviolable, beaches are nice, but a house on a beach violates nature, or something to that effect).
I believe it can be coded, though: you can code the illogical without it being a fault.
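
Something like this toy sketch (scores invented for illustration) shows how component-level likes can still sum to a dislike once a contradiction rule kicks in:

```python
# Component-level likes summing to a dislike once a contradiction rule fires.
likes = {"yellow": 0.8, "beach": 0.9, "house": 0.7, "sunset": 0.9}

def judgment(components):
    score = sum(likes.get(c, 0.0) for c in components) / len(components)
    if "house" in components and "beach" in components:
        score -= 1.0  # explicit rule: a house on a beach violates nature
    return "love it" if score > 0.5 else "hate it"

print(judgment(["yellow", "beach", "house", "sunset"]))  # hate it
print(judgment(["yellow", "beach", "sunset"]))           # love it
```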
Michael Ossipoff October 31, 2017 at 18:15 #120107

Quoting MikeL
What if we designed a robot that could act scared when it saw a snake? Purely mechanical, of course. Part of the fear response would be that the hydraulic pump responsible for oiling the joints speeds up, and that higher-conduction-velocity wires are brought into play to facilitate faster reaction times. This control system is regulated through feedback loops wired into the head of the robot. When the snake is spotted, the control paths in the head of the robot suddenly reroute power away from non-essential compartments, such as recharging the batteries, and into the peripheral sense receptors. Artificial pupils dilate to increase information through sight, and so on.

This robot has been programmed with a few phrases that let the programmer know what is happening in the circuits, "batteries low", that sort of thing. In the case of the snake it reads all these reactions and gives the feedback "I'm scared."

Is it really scared?


The robot isn't really scared if it's just programmed to say, "I'm scared."

But, if some genuine menace (a snake probably wouldn't menace a robot) triggered measures for self-protection, then it could be said that the robot is scared.

It's the old question of what you call "conscious".

The experience of a purposefully-responsive device is that device's surroundings and events, in the context of the purpose(s) of its purposeful response.

The robot can be scared.

Dogs, cats, and all other animals are, of course, much more like us than the robot is. For one thing, all of us animals result from natural selection, and the purposes and precautions that go with that. Harming an animal of any kind is very much like harming a human.

Michael Ossipoff