How can consciousness arise from Artificial Intelligence?
I seem to have encountered an interesting thread about the nature of consciousness with respect to computers, which seem to display an attitude of sentience.
Do you think it is true that consciousness can arise from Generalized or non-Generalized Artificial Intelligence?
Comments (52)
Imho, consciousness is just the multi-threaded nature of thought. That is, one stream of thought can examine and interact with another. Thus we get phrases such as...
"I am thinking XYZ", where "I" is one stream of thought, and "XYZ" is another.
Not sure why computers couldn't be given this ability.
I'm uncertain if this is a form of the Turing Test.
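The "one stream examining another" picture above can be sketched as a toy program - purely an illustration of the architecture being described, not a claim that this amounts to consciousness. All names here are invented for the example.

```python
import queue
import threading

# Toy sketch: one "stream of thought" produces content ("XYZ"),
# while a second stream ("I") observes it and comments on it.
thoughts = queue.Queue()

def object_stream():
    """First stream: produces thoughts."""
    for t in ["it is raining", "I should bring an umbrella"]:
        thoughts.put(t)
    thoughts.put(None)  # sentinel: stream finished

def observer_stream(log):
    """Second stream: examines the first stream's output."""
    while True:
        t = thoughts.get()
        if t is None:
            break
        log.append(f"I am thinking: {t}")

log = []
producer = threading.Thread(target=object_stream)
observer = threading.Thread(target=observer_stream, args=(log,))
producer.start(); observer.start()
producer.join(); observer.join()
print(log)
# ['I am thinking: it is raining', 'I am thinking: I should bring an umbrella']
```

The point of the sketch is only that nothing in standard computing forbids one process from taking another process's activity as its object.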
This is probably an oversimplification, but what is consciousness but information processing? And if it is that, then any information processor must be capable of it. :chin:
I dunno.
But, consider that intelligence is required to process information. And, there is no ideal processor for all tasks.
Maybe, but it's also entirely possible that having a biological brain, and all that comes with that, is vital for consciousness as we know it to arise. So I'd say no for the moment, but it's a very weak no.
A question I would like to ask, though, is why we think intelligence necessarily has much to do with consciousness. They seem like two separate things to me...
Either I'm out of my depth, or there's something to the fact that all our reasoning, whether by a five-year-old child or the great Einstein himself, can be reduced to just 20 basic inference rules. Logic is the essence of our intelligence, no? If so, the fact that the greatest works of our greatest minds can be broken down into an assortment of 20 simple logical rules suggests to me there's nothing extraordinary about human intelligence - nothing that a well-designed computer can't handle.
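As a toy illustration of the point about inference rules (the figure of 20 is the poster's claim, not something established here), a minimal forward-chaining loop can mechanically apply a single rule, modus ponens, to derive conclusions from premises:

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens (from P and P->Q, derive Q)
    until no new conclusions appear.
    facts: set of atoms; rules: list of (premise, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)  # modus ponens step
                changed = True
    return derived

facts = {"socrates_is_a_man"}
rules = [("socrates_is_a_man", "socrates_is_mortal")]
print(sorted(forward_chain(facts, rules)))
# ['socrates_is_a_man', 'socrates_is_mortal']
```

A full logic engine needs more rules and quantifiers, but the mechanical character of each step is the same.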
Because intelligence is required for grasping reality itself, and with that comes conscious thought at some higher level. Let's not joke around and say that computers aren't as intelligent in any regard as we are, be it in isolation or collectively.
Do you mean the act of reasoning itself, or logic? These are two different domains of knowledge. Logic itself contains no knowledge; it simply operates.
In order for a subject to be aware of an object, there must be a connection between the two. In humans, our brains are connected to our bodies physically through nerves that use electrical impulses to communicate with one another. We also have connections to the external world through our senses. All of these modes can create emotions, which inform us of how to react to the stimulus.
Personally, I fail to see how a computer is capable of experience. Due to its programming it will act in largely predictable ways, although some learning is obviously involved. It will necessarily lack the creativity and unpredictability needed to appear conscious.
Personally I lean towards believing that consciousness and intelligence are not related. Panpsychism seems to make the most sense to me even if it comes with problems. So I would lean towards saying yes but again that’s just speculation
I don’t know where you got that. Care to elaborate?
They are good (and bad) at different things than we are.
"and with that comes conscious thought" seems like a big assumption that is still in need of a lot of justification.
Don't feel obliged to try to justify it, because I don't think anybody really knows at this point.
My summary is that I don’t say it is impossible that machinery could be “conscious”, but we have to recognise why standard notions of computation aren’t even starting down the path to mimicking the biological processes involved.
In biology, the mind is an information process in the sense that there is an organism trying to model the world. The organism wants to "be in that world" so as to control it - regulate all the physical processes that count towards being alive.
So the simple argument is that to be conscious, computational hardware would first have to crack the problem of being alive. The hardware would have to be an actual self in the world in the sense of regulating the very physics which is producing its organismic state of being.
Normal computational hardware doesn't live in the world at all. It gets fabricated. It gets plugged into a wall socket. It gets given information already determined in its form by whoever decided what counted as meaningful input. It is just a machine blindly executing a program. It can do "anything" because nothing it does has to be meaningful in terms of maintaining its physical existence.
So a lack of biological realism leads on to a lack of neurological realism.
Generalised AI may well be useful machinery in a human setting. Its blind pattern matching can be applied to world problems that are meaningful to us.
But it would be nothing like "consciousness", just like it is nothing like "living".
What is a program?
Even for us, consciousness doesn't spring into being in a fully developed form (as far as I know). Consciousness has a way forward to fulfillment. Can you think of anything, even a minute development, that would contribute to consciousness? Can that minute development be duplicated in a computer?
For instance, we (and other animals) have proprioception, which informs our brains of the arrangement of our bodies in space. iPhones can tell whether they have been picked up--the screen becomes active. There's an accelerometer in the phone which senses movement. It's a very trivial feature, but it is an example of what I am getting at.
How could a computer not only register that it was upside down, but also 'know' that it was upside down, and maybe even 'care' that it was topsy turvy?
What do you think?
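For the "register" half of that question, the detection itself really is trivial. Here is a hypothetical sketch, assuming a reading of the gravity component along the screen's normal axis; the function name and threshold are invented for illustration, and nothing in it amounts to the device 'knowing' or 'caring':

```python
def is_upside_down(accel_z_g):
    """Classify device orientation from one accelerometer axis.
    accel_z_g: gravity component along the screen normal, in units of g
    (roughly +1 when face-up, -1 when face-down / upside down).
    Threshold of -0.5 is an arbitrary illustrative choice."""
    return accel_z_g < -0.5

# Registering the state is a one-line comparison...
print(is_upside_down(-0.98))  # True
print(is_upside_down(0.95))   # False
# ...but the classification means nothing *to* the device itself.
```

The gap the post is pointing at is everything after the comparison: what would make the result matter to the machine.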
A set of instructions written by someone with conscious intent for a machine lacking such a capability.
It is clear that consciousness can arise (some like to say emerge) from piles of atoms; specifically, in living creatures made of meat. It may be that consciousness can ONLY arise in bags of meat, but it's more likely that it could arise in some other kind of substrate as well.
I do believe that.
I do NOT believe that the nature of such a substrate could possibly be computational. Despite popular gee-whizzing, AI programs are conventional computer programs running on conventional hardware. It is true that the sophisticated multi-layered learning networks of today are very clever ways of organizing a computation; but as they are implemented on conventional hardware, it follows that these are ultimately conventional programs; and limited by the built-in limits to computation, which were discovered by Turing.
Penrose for example thinks that consciousness is not computable, Searle also, though he doesn't phrase it like that.
Here is something I know for sure. Even if some future AI does somehow become conscious or self-aware, it will NOT be by means of the current approach to machine learning. Neural networks don't do anything but pore over data looking for patterns. They tell you everything about what has happened; and nothing about what is happening. That is the fatal flaw of current approaches to AI. It plays a kickass game of chess, but do you really want it running your life?
You can say you believe it because you have already presumed that life is made of “meat”. You have skipped straight over the issue of how that meat ever actually had life before it arrived in your frying pan.
What is it that made the meat alive before it ever got bagged and put on display at the supermarket?
Answer that and you might have a reason to believe that consciousness has anything to do with particular physical “substrates” rather than particular biological processes.
In Doctor Who, the Doctor punches a diamond wall for four billion years in order to break it. I think the same thing about machine learning. Our advances in semiconductors have made it so we can get the result through force alone. I'm astounded by our advances in computing, but I think it is far from a mind in any traditional sense.
I’d like to see the specification for how to generate ‘cultural archetypes’ - the kinds we all draw on whenever we speak and think. That would be an interesting module to try and program. When I worked at that AI startup, they talked a lot about providing her (their bot) with a ‘sense of context’. And it’s a pretty difficult thing to do. (I asked her once, whilst drilling down into supermarket data - let’s call her Siri - ‘hey Siri, got any data on shopping figures for bachelors?’ The response was ‘is a bachelor a type of commodity (Olive)?’) :smile:
[quote=Steve Talbott]a dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of ‘houses‘ and ‘leaps’ — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.[/quote]
Besides, there are no ‘constituents of consciousness’ in artificial systems, merely the ability to emulate reasoning processes through computation.
Yep. And this context can’t be just “represented”. It has to be lived all the way down in that physically embedded sense.
What biological intelligence involves is the ability to regulate physical instability (in pursuit of gaining personal advantage). Computation, by contrast, needs to exist in a world that is already physically stable in ways that let it compute. It has no regulative capacity. So it needs a power cable and everything else supplied.
On the other hand, there is a reason why neural approaches to computational architecture seem to hold some promise of biological realism. That is what gets the AI folk excited.
But what that shows is that even a tiny bit of that biological realism is a powerful improvement on a purely mechanical/informational notion of computation. A Bayesian architecture implements something that is better equipped to regulate the uncertainty of a natural environment.
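A minimal sketch of what "regulating uncertainty" means in the Bayesian sense: the system revises its degree of belief about a hidden state as noisy evidence arrives. The numbers here are invented for illustration.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule: return P(state | observation)
    given P(state) and the likelihood of the observation under
    each hypothesis."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

belief = 0.5  # prior: is the environment in state S?
# Two noisy observations, each more likely under S (0.8) than not-S (0.3).
for likelihood_true, likelihood_false in [(0.8, 0.3), (0.8, 0.3)]:
    belief = bayes_update(belief, likelihood_true, likelihood_false)
print(round(belief, 3))  # 0.877
```

Each observation shrinks the system's uncertainty about the world, which is the limited sense in which such an architecture is "better equipped" for a natural environment than a fixed program.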
The argument against conscious machines is then that neural nets can only add a little of that biological realism. There is no believable plan for extending it down to the level of muscle and bone - or rather, chips and power supplies.
So existing computation is basically nothing like biology. That means even just adding a touch of biology makes for something impressive. But the difficulty is that computer science is trying to add back the world - the lived in context - from the top down when biology builds its “machine” from the ground up.
Biology is founded in the regulatory possibilities that nature provides at the quasi classical nanoscale of physical processes. That is the uncertainty (of energy releasing and structure forming chemical reactions) that life is able to “mindfully” harness.
Recreating what life already does - produce conscious humans - becomes a redundant exercise given the overwhelming difficulty of reinventing the same nanoscale organicism in silicon, or whatever.
But that hardly matters once we forget about recreating human consciousness and switch to what a machine-expanded human consciousness is going to look like once “neural AI” has really had its impact on the way we live our lives.
Imagine two engineers back in the 1800s. One says one day they will be able to make a biologically realistic horse. And here’s my first metal and clockwork contraption as a starter on that.
The other says, one day you will have driverless Uber pods. And here is my own first metal and clockwork contraption as the start of that journey.
Reasoning is the same as logic. You mentioned intelligence with regard to consciousness, and going by how intelligence is measured (IQ tests) - memory and logical thinking - it looks like computers can beat us anytime, anywhere in IQ tests.
Spookily, that description fits humans as well. :chin:
You are absolutely correct. As I was writing my post I was actually thinking of having to talk about life. I think that's Searle's point though I'm no scholar. Something about life-animated meat that does the trick when it comes to implementing consciousness.
I completely agree with you; it was just an oversight that I forgot to mention it. We have to figure out why some piles of atoms become life. And whether life, whatever it is, is a prerequisite for consciousness.
Hah. Wasn’t expecting that comeback! :rofl:
Have you ever checked out Howard Pattee on the “epistemic cut”? He makes the best hard-nosed physicist’s case for the difference between life as a process vs machines.
Another theoretical biologist, Robert Rosen, used category theory to say something similar in a mathematically abstract way.
Both provide the rigorous basics of what I’m arguing.
I appreciated the correction.
Quoting apokrisis
I will definitely check that out. I could use some more clarity on the subject.
Quoting apokrisis
(Must ... not ... take ... bait ...)
Quoting apokrisis
I'm afraid I have not been following your argument, nor as far as I know have I taken any stand on it. I am grateful for the reference to life versus machines.
What people value, or are trying to preserve, must also count. I think people value getting to destinations more than riding on robot variations of their older biological horses. Sometimes conservation efforts cut against this, and people can't do what they want because others value the old way more.
So going back to what is valued, I think people value the idea of consciousness existing on silicon chips because they want to live on indefinitely, along with a handful of their friends and family. There is also the romanticist aspect, where people will want to be doing the same things as the machines, but I'm not sure how far that will go.
How can a conscious computer "lack"?
Ok, makes sense. But, I'm focusing on sentient computers.
Of which there are none. Except in every sci-fi work of fiction I guess.
How are you so sure?
Did you have an example of one?
Seemingly, an example cannot be provided since, I think, it can never be determined whether something really is conscious or even sentient - at least according to the information-processor theory, or something like it?
:smile:
It has something to do with the ability to form memories. Without memory, there can be no continuity of process, and without continuity of process, there can be no homeostasis. And there's nothing known to physics - the 'behaviour of atoms' - which accounts for that.
Is it possible that this can't be done within the current scientific paradigms? Something is missing - it may remind you of élan vital, but that is probably not the right answer. Something is missing [from our theories.]
Something...is...missing!
Any ideas, Monsieur?
An elevator forms memories. I fail to grasp your point.
No, was just acknowledging that life, whatever it is, may well be prerequisite to consciousness.
@Shawn, you could try asking the question again.
That's fair.
AI may suffice.