Artificial intelligence, humans and self-awareness
What distinguishes a human from a computer?
Not logical thinking. Computers are logic machines.
Not creativity. Creativity is randomness and can be replicated.
The only thing left is self-awareness - the observer realizes her own existence and role as an observer of both the self and the world outside.
But...we're not completely self-aware. We don't know what's happening in our brains or liver or our skin and so on. It's like consciousness or self-awareness is limited to the organism as a whole but not its parts. Therefore, we are NOT fully self-aware.
Now imagine a being x who is completely self-aware in every respect from the atomic realm to the macroscopic world we're familiar with. Such a being is what I call truly self-aware.
What about computers? They seem able to do logic flawlessly. However, they don't display any evidence of self-awareness like we humans do. Artificial intelligence attempts to replicate human-level self-awareness. I don't know if that's even possible, but what I want to present is a comparative analysis of
1. computers
2. humans
3. being x I described above.
Let's put these on a line that represents the spectrum of self-awareness, from completely oblivious (like a stone) to completely aware (like being x).
Would we be closer to the computer or being x?
Comments (91)
What makes you think a computer could ever be aware? Only software can do that.
Software must need hardware right? Anyway I'm questioning the basic premise that humans are self-aware. I think that's not true, at least not to the extent of being x I described in my OP.
Software and hardware are not the same thing. Your brain is hardware, your mind is software.
??? On the face of it, this is self-contradictory. Random is patternless, meaning it can never be precisely the same twice as an intended end.
Biology -- and the HUGE everything that biology implies.
Quoting TheMadFool
Computers are contraptions that carry out logical operations designed by humans. On their own they are just a pile of metal and plastic.
So we aren't fully aware. So what?
What about self-monitoring programs? Ones that can modify behavior or output when certain conditions are met, for example a robot that self-corrects its walking trajectory when it is not going in the proper direction? I think that involves some amount of self-awareness.
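The kind of self-monitoring loop described above can be sketched as a simple proportional feedback controller. This is a toy illustration, not any particular robot's code; the function name, gain value, and simulated drift are all invented for the example:

```python
def correct_heading(current_heading, target_heading, gain=0.5):
    """Return a proportional steering correction toward the target heading."""
    error = target_heading - current_heading  # how far off course we are
    return gain * error                       # nudge back by a fraction of the error

# Toy simulation: a robot starts 30 degrees off course and
# self-corrects a little on every step of its walk.
heading, target = 30.0, 0.0
for _ in range(20):
    heading += correct_heading(heading, target)
# after 20 corrections, heading has converged very close to the target
```

Whether "monitor an error signal and reduce it" counts as self-awareness is exactly the question the thread is debating, but the mechanism itself is this simple.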
Self awareness is hard to define. It is easiest to describe it as "what we have and the beasts do not" so yes, by definition, we have self awareness.
We could also break down the word. It is awareness of the self. I am aware that I exist, what about you? A dog is not aware that it exists. It is just the culmination of biological processes and chemical reactions. It can feel, but not ask why it feels. While we are made of the same material as them, we have self awareness because we are evolved enough to have such a thing.
What you described as "self awareness" is more the level of awareness a god would possess. I suppose we could label this "deity awareness".
Computers (software, at least) are definitely capable of that, eventually. We simply don't have the hardware to support them, the knowledge to create them, or the energy to maintain them. That can change soon.
I assume you have heard of the Turing test. If not, essentially it is a test where you are put in a room with a monitor and a keyboard. In one window on the monitor, you are talking to a human, and in another window, you are speaking to a computer. If you can't tell the difference, the computer passes the test and has "self awareness". You may ask, how can this be? They have just learned to mimic human interactions incredibly well! And I may interject, how is that any different from how you or I interact with people? You had to learn how to interact with people from a young age. Self awareness is not some tangible end goal, it is something that evolves over time until you finally comprehend your world.
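The protocol described above can be sketched as a toy imitation game: the judge sees two unlabeled windows and can only compare transcripts. The respondent functions and canned replies here are purely illustrative; the point is that if a perfect mimic produces indistinguishable transcripts, the judge is reduced to guessing:

```python
import random

def human_reply(prompt):
    return f"Hmm, about '{prompt}'... let me think."

def machine_reply(prompt):
    # A hypothetical perfect mimic: indistinguishable from the human.
    return f"Hmm, about '{prompt}'... let me think."

def imitation_game(prompts):
    # Randomly assign the human and the machine to windows A and B,
    # so the judge cannot rely on position.
    windows = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        windows = {"A": machine_reply, "B": human_reply}
    transcript = {w: [fn(p) for p in prompts] for w, fn in windows.items()}
    # If the transcripts are indistinguishable, the judge can only guess,
    # and the machine passes the test.
    return transcript["A"] == transcript["B"]

passed = imitation_game(["What is self-awareness?", "Do you dream?"])
```

Note the test says nothing about what is going on inside either window, which is exactly the objection raised above.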
One could say babies are not self aware. They haven't developed object permanence, they cannot speak, cannot write. Yet you consider the baby more human than the computer on your desk that can do all of the above?
To be human (or self aware) is not biological, in fact, you don't even have to be biological to be a human. You just have to be very good at seeming like a human. Consciousness is an illusion, but a very very good one. This is why most people would consider Superman a human, even though he is in fact extraterrestrial.
However you can only be aware of what you are aware of. You cannot be aware of what the stone is aware of or not aware of.
You even think, it appears, that you are aware of you.
However, you (or what arises as a thought of you) is merely what arises in what we define as "awareness".
The stone also arises within this same awareness as the thought "me". The "me" which comes into contact with the stone.
"me", however (as with "you") is merely an arising thought. A thought which arises in awareness.
All I'm saying is that consciousness or self-awareness isn't a deserving attribute of humans. We're NOT completely self-aware.
Quoting Bitter Crank
So, are we more like computers, or are we, in terms of awareness, very near to an entity that is completely self-aware?
I ask because if we're more like computers then it changes the whole idea of what it means to be human. We're more like machines than what would lie at the opposite end - total self-awareness.
Quoting gloaming
Take a look at how we really think. Personally, I have many random thoughts going on in my mind. Not totally random I agree. I guess these thoughts arise out of association e.g. when I think of rain I think of umbrella, etc. But then I get to choose which thoughts I think upon and that is random most of the time unless you're thinking of some goal to achieve. That's what I mean.
Quoting aporiap
Yes. That can be correctly classified as some level of self-awareness. This leads me to believe that most of what we do - walking, talking, thinking - can be replicated in machines (much like worms or insects). The most difficult part is, I guess, imparting sentience to a machine. How does the brain do that? Of course, that's assuming it's better to have consciousness than not. This is still controversial in my opinion. Self-awareness isn't a necessity for life and I'm not sure if the converse is true or not.
Quoting Arne
This logic fails I think. It doesn't make sense to say ''I'm aware that I'm aware that I'm aware...'' After some iterations we can't grasp the meaning of such statements. Anyway, the base of any such ordered awareness begins at ''I am self-aware''. We can try and achieve that first for computers.
What I'm really interested in is to show that we're more like machines than we think. We can imagine an entity x who possesses complete self-awareness of its being, from the atomic to the macroscopic, and we humans don't possess that level of consciousness, do we?
Quoting TogetherTurtle
Consciousness is an illusion you say but what is that which experiences this illusion?
It rests upon the unstated presumption that you have something beasts do not. If all H's (Humans) have A's and only A's and all B's (Beasts) have A's and only A's, then the statement that SA = that which H's have but B's do not produces a null set. And even if you could establish some sort of qualitative and/or quantitative difference between the awareness Humans have and the awareness Beasts have, that difference would not necessarily be a difference in a degree of awareness regarding awareness, i.e., self-awareness. Couldn't such a difference simply be a difference in awareness of how or how much? For example, if all Humans were aware to some degree as the result of having a visual sense of entities while all Beasts were aware to some degree as the result of having a sonic sense of entities, then under your formula visually sensing entities would be an awareness the Humans have and that the Beasts do not, and would therefore be, under your formulation, self-awareness.
Seriously, I share your interest in the subject matter. But I maintain the deeper issue is why some seem so insistent upon reserving to or creating for human (and only human?) some sort of unique normative ontological priority. This apparent need to preserve, reserve, and/or create a significant normative specialness for human is quite fascinating.
If animals possess qualia - i.e. they can create "what-it-is-like" knowledge, then what is to stop them from creating, as humans do, any kind of knowledge?
Humans are quite unique. We are the only known objects in the universe that create explanatory knowledge. In order to achieve this remarkable feat, there are strong arguments to the effect that we require at least two features: computationally universal hardware, and software that is not genetically determined. There is absolutely no evidence that animals possess either of these.
Animals have been on the planet a lot longer than humans, and none of them has developed a language, literature, culture or science. You may be fascinated that certain philosophers think this sets humans apart from other animals, but are you also fascinated that we don't ascribe morality to animals and put them on trial for their misdemeanors? Surely we have to prosecute them to escape the charge of "significant normative specialness"?
What does it mean to be "completely self aware" as opposed to just self aware?
It's not an assumption, it is a consequence of known physics. Stones are not and cannot be self aware, or even aware.
At what point is one "aware" without some sense of self?
What I am getting at is there not just general moment to moment experience which may as well be called "awareness"?
After all we cannot be aware of anything other than what is our immediate experience - the very same experience within which our self is assumed to arise with.
So the so-called "self" cannot really be other than every other apparent object which arises as the experience. Sometimes this experience may consist of what is defined as a stone.
During your experience of a stone the stone must also be you. You can never be outside of your experience.
So if you are "aware" then you as the experience is "aware", and the "experience" is every thing within that.
You cannot exist outside of experience and you do not exist within experience.
There is just Experience.
Try having an experience where you are not. It is just not possible. So we should just be dealing with reality rather than what we think may be real.
Thinking there is a you that exists that then goes about having an experience (awareness) is merely some idea.
To consider such an idea as real is essentially absurd.
Awareness must be whatever arises. Sometimes this will be a stone. Consequently a stone is equally awareness.
Where is one when conducting a physics experiment?
One cannot be other than whatever the experience happens to be, in this case the physics experiment experience.
Other species don't possess qualia, so we are different. Animals don't possess computationally universal brains, so we are different.
Quoting Arne
That is false and contrary to known physics.
Quoting Arne
There is no pattern except you making assumptions.
Humans are the only objects in the universe known to create explanatory knowledge and possess qualia. You are assuming that I claimed humans were the only objects that could do these things, when I did not. It's a pattern.
IOW, signalling of internal states (emotion) initially evolved as a co-ordination mechanism for social creatures, but at some point the neat trick of lying about internal states for some kind of advantage was discovered, and then the possibility of holding the falsehood (the counter-factual) and truth in mind at the same time, then we were off to the races.
So it's not just self-awareness as such (a machine can self-monitor) but it's more to do with an interpersonal game (something Turing was aware of with the Turing test - i.e. detecting intelligence would be closely connected to detecting cheating).
That's why I think that AI people, if they're really aiming at intelligence proper as we humans understand it, and not just at expert systems and machine learning systems, probably need to think more about intelligence as a function of sociality. Not "an AI" but a community of AIs. All the most intelligent animals (with the odd exception of the octopus) are social - corvids, parrots, wolves etc., humans.
(Another way of saying this might be that intelligence probably requires a limbic system analogue - there has to be some sense of something at stake, something mattering, to the AI. But then at that point, there's the danger of losing the crisp cleanliness that we associate with computers, and getting into the murky, shifty complexity that is genuine intelligence, so it hardly seems worth it to try and create a genuine artificial intelligence.)
2. I made no claims inconsistent with known physics. I stated we want to claim that the universe is better because of our presence. Apparently you agree with me that such a claim is absurd.
3. I did not assume that people claimed computers will never be able to beat a Grandmaster at chess. I was there when skeptics of AI made the claim. You can look it up. However, once Deep Blue did beat a Grandmaster, the claim then became computers will never be able to display emotions. That is not an assumption on my part. I was also there when AI skeptics made that claim. And now the new X is that computers will be unable to display awareness of self. That is not an assumption on my part. Every time computers are able to do the X that the skeptics say they will never be able to do, the skeptics come up with a new X. If you want to argue that X1 replaced by X2 replaced by X3 is not a pattern, then good luck with that. I have obviously failed in my attempt to persuade you to see the deeper issue regarding self awareness as pregnant with our need to treat uniqueness as a synonym for superiority. That failure is on me. I am done now.
Computer is a mathematical concept. Alan Turing defined it (the Turing machine) and also its limits (the halting problem). Human is the biological species of Alan Turing.
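To make "computer as a mathematical concept" concrete, here is a minimal Turing-machine simulator. The specific machine below (flip every bit, halt at the first blank) is just an illustrative example; the halting problem mentioned above says there is no general procedure that decides, for every machine and input, whether a loop like this one ever reaches the halt state:

```python
def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is +1 (right), -1 (left), or 0 (stay).
    """
    tape = dict(enumerate(tape))  # sparse tape; missing cells are blank "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape)).strip("_")
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return None  # did not halt within max_steps; in general this is undecidable

# Example machine: flip every bit, then halt on the first blank cell.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
result = run_turing_machine(flip, "1011")  # -> "0100"
```

Everything a physical computer does is, in this mathematical sense, a rule table and a tape.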
Being x. It's always rather odd to me people want to focus on computer models (computer as model) as representing intelligence or awareness instead of, say, the integrated processes (mind) of an old growth forest. My style of consciousness and components of mind communicate in a way unimpeachably closer to the minute feedback systems you find in the "cognitive network" of an ecologically complex superorganism (forests). Living compost on a forest floor is far more impressive and complex in its self-awareness than a computer could ever be (interspecies communication requires species; is a computer a species? nope). Yet this is only a small, local slice of what's going on "information-processing" wise in an organic superorganism, like any robust sylvan environment. Mycelial mats connect plants and trees and search for feedbacks that then determine what they will do in bringing balance to players of the network, locally and nonlocally. Mycelial mats can connect thousands of acres of forest in this way. This is very much like a neural network of nature.
Honestly, taking computers to be intelligent, or most absurdly, at all self-aware, and not nature, tends to gore my ox...so I'm apt to wax too emotional here, but perhaps I'll be back with some cool examples as to why computers cannot be self-aware compared to the far more self-aware "being x" that can be found in nature (of which I'm a part way more than a computer). That is to say, my self-awareness is far more an extension of the order and processes going on in the superorganism of a forest than anything in silicon. We can understand (a priori); computers don't understand anything. We are aware of our limitations; computers are not. Because we are aware of our limitations thanks to nature's gift of metacognition (note I'm not saying a computer's gift of metacognition), we can ask questions about how we are limited, such as the boundaries the subconscious puts on conscious awareness. You can even ask sci-fi questions about computer sentience thanks to nature's vouchsafing of self-awareness. Somehow, self-awareness is a part of having a mind that is informed nonlocally by interminably incomplete information. A machine only has to handle incompleteness according to its implied programming or manufacturing: algorithms and egos are very much alike, and both are chokingly narrow-minded, unreasoning. Seeing as the human brain-mind isn't invented or programmed and doesn't do calculations, and that it is likely nonlocally entangled with the universe in a way that remains forever incomplete (unless perhaps in deep sleep or dead), we think about thought and have thought about thinking, emote about emotions and have emotions about emoting: nothing is more sublime than wondering about wonder, however. I wonder if computers could ever wonder? What about the utterly unreasonable idea that a computer could have emotions reliant on programming...laughable. Reminds me of someone, having missed the punchline, laughing at a joke just because he knows it's supposed to be funny.
Null sets, though.
Quoting Arne
Quoting Arne
Humans have something other animals do not - a computationally universal brain, and a self aware mind.
Done? You mean hoist by your own petard.
It has been proved that, according to known physics, a universal computer can emulate any physical system exactly. It's not odd, it's reality.
We are not like computers, at all.
Our brains cannot be more than computers, according to physics.
The things that you see are more or less accurate, but some things are chosen arbitrarily. Color is just waves of the light spectrum being reflected into your eyes. If you were able to see those particles outside of the human mind, they wouldn't be red or green or blue or yellow. Color is made up, but it is useful, and that's why we have it. Color is a very good example of the brain processing the illusion of consciousness for the mind to observe and decide what to do next. Imaginary shades of color seem so real to us, but simply don't exist outside our minds.
What is a universal computer? I've heard of the Cosmology Machine and was taken aback at the level of hubris. It's amazing the "science" (pseudoscience) of meteorology continues to claim it can forecast, when all it does is update based on essentially current conditions. Don't meteorologists rely on computer simulations? Their computers, then, fail miserably in attempting to compute nonlinear conditions. Now, how in the world would I believe, if the weather forecast is always wrong for regions of our planet, it would ever be possible for a machine to simulate the physical conditions of the entire cosmos?
What does that question even mean?
Quoting Anthony
Nails? Computational universality has nothing to do with mathematicians, or nails.
Quoting Anthony
You've lost me.
There's only one way known for humans to actually 'create another being', and that is by reproduction.
Computers are devices, by definition. They're manufactured artefacts that in essence are large arrays of switches that are able to emulate or model various cognitive and computational processes.
But I have never seen anything to persuade me that a computer is a subject of experience.
Quoting tom
You said, see below, that humans have a computationally universal brain. Maybe I'm one of those jugheaded laymen that needs an explanation here. Perhaps I'll look it up. Apologies.
Quoting tom
I didn't know human brains differed that much from other mammals' brains, functionally? The human mind is what differs most patently, not the brain. As to why we are so self aware compared to other organisms is a question we should be very careful in limiting to any sort of computation.
Btw, I redacted my previous post.
Are algorithms physical? In what sense are you using the term physics to mean a scientific model of both hardware and software?
You can start with a Turing machine if you like.
I see the simple gate mechanism that switches the state of a symbol. I see the infinite length of tape on which those marks are recorded. I try hard not to mention the problems the second law creates for this imagined material device.
But then this machine needs someone to write it some rules, supply it some data, understand the results.
In what sense are you saying that all that rather mental stuff is reduced to the same materialistic physics used to imagine the hardware?
Seriously, what am I, chopped liver?
and please define "experience".
I will wait here.
That is only a partial definition. Really, defining or coming to an understanding of the meaning of fundamental terms like 'being' and 'existence' are a basic task in philosophy. Many people - presumably including yourself - simply assume that it is obvious what the word 'being' refers to, and that computers and beings are pretty much the same kind of thing. But when you analyse such beliefs, they rest on many unjustifiable assumptions.
For instance, the word 'being', in this context, can be used either as a noun, i.e. 'human being', or as the present participle of the verb 'to be'. What I'm saying is that computers are not 'beings' in the sense conveyed by the former. And in fact we don't refer to them as such. If you were standing outside a burning building, and were to ask, 'is there anyone in that building'? you wouldn't be asking 'are there any computers in that building?' If you said (although it would be a strange turn of phrase) 'are there living beings in there?', then it could be taken to be asking: are there humans, rats or pigeons in there?
So, the definition of 'being' as 'something that exists' doesn't capture something distinctive about living beings. And in fact, I am of the view that the words 'to be' and 'to exist' are not strictly synonymous, but I will leave that aside for now. But part of the meaning of 'being' in this context is precisely that beings are 'subjects of experience'. It can be said of humans, rats and pigeons, but not of artefacts or devices.
Seriously, you are going to presume that I have a shallow understanding of being?
You are the one whose understanding of being was shallow to the point that you presumed that by being I meant human being.
If you want to give it another try, I will continue to wait here.
There seems to be something special about it if one can make a difference between "a human" and "a human being". What is it that gets emphasized? This is not to say your definition was wrong nor that I'd think this should really make a difference in this context (aside for the sake of the argument).
My entry was simply based on the post that I was commenting on.
Quoting Arne
I wouldn't disagree, but it is a very broad definition which raises further questions. But in relation to the question at hand, what does it say about the question of whether computers are conscious subjects of experience? Because I take that question to be central to the OP.
Well, to start, I don't really know who you mean by the other guy. I guess someone else found the fallacy as well.
The point to us being special is that, yes we have self awareness, and yes, anything we make that has it as well is also special.
Therefore, yes, we have self awareness, yes, machines can be self aware, and no, animals are not self aware. That was my argument from the beginning and it seems that is the argument I will have at the end as well. You are simply being unreasonable at this point. While it seems wrong, we are special, we are different, we are self aware, and animals are not. To be frank, you should have more pride in being human. We have built every civilization on this planet and made all of its scientific breakthroughs. If you really don't think we are aware of ourselves, you are sorely mistaken.
I disagree for two reasons:
1. The original post posits self awareness as the issue rather than conscious[ness]. And I have no reason to presume the poster chose his terms carelessly. And though one could carelessly consider them synonymous, that would be a tough argument to make. All reasonable people would agree that my dog and I are both conscious beings. Yet I doubt all reasonable people would agree that my dog has a sense of self awareness. And if all beings who are conscious are not necessarily self aware, then conscious and self-awareness cannot be synonymous. So absent a reason to believe the original poster meant something other than what he said, it would be anti-philosophical to presume the central issue is other than self-awareness; and
2. It is where "human" stands on the spectrum of self-awareness relative to the computer that is the central question. As the poster clearly asks "Would we be closer to the computer or being x?" Again and with all due respect to the poster, it would be anti-philosophical to suggest a different question is "central to the OP."
                  ? <---- HUMAN ----> ?
Rock -------- Computer ------------------------------------------ Being X
And in an attempt to advance the issue, I suggest that Human is closer to the Computer. However, I suspect that Human is unlikely to move significantly (if at all) closer to Being X but that the Computer certainly will move closer to Being X. If that is the case, then the deeper issue becomes whether Computer will move past Human on the spectrum of self awareness.
Further, self-awareness rather than consciousness strikes me as an interesting twist to this now age old debate. In order for there to be self-awareness, there must be awareness. If we call awareness "AL1" (Awareness Level1) and self awareness "AL2" (Awareness Level2) and awareness of self awareness "AL3" (Awareness Level3), are we not already at AL3? And at what AL(x) is Being X?
Finally and most important of all, is this simply a more grown up version of the "I know" game?
I know
I know you know
I know you know I know.
I am aware
I am self aware
I am aware that I am self aware
Because that is the way it feels every time contemporary programming achieves that which yesterday's learned skeptics said it will never achieve. If we ever had a working definition of "conscious" (which we do not) and coders were able to represent it, you can bet your bottom dollar we would promptly change the definition.
Wrong.
You may rest assured that the other guy's mistakes are not as "unique" and "special" as yours.
How fallacious of me to expect people to actually make arguments in support of their claims.
When will I ever learn?
I agree that animals aren't reflexively self-conscious to the same degree that humans are. But I still say that a dog is a subject of experience.
Actually, your post made me go back and read the OP again - when I jumped in previously, it was in respect of a general view of the question of the difference between computers and sentient beings prompted by this remark:
Quoting Arne
So my point was simply that, 'beings' are of a different order to 'devices', including computers. And that furthermore, there is no instance of humans ever having 'created a being' other than by the act of procreation, if that counts as 'creation'. So you're correct in saying I wasn't really addressing the OP. And going back to the OP again, I would single out this paragraph:
Quoting TheMadFool
I think this is problematical, as I think that 'complete self awareness' of that kind is a logical impossibility. So the hypothetical 'being X' is not something that could ever exist, which renders the entire OP rather pointless, in my opinion. So, nothing further to add, at this point.
Perhaps you and the poster have a different understanding of imagine. It never occurred to me that imagination must be limited to the logically possible. Oh well.
I am going to bed now.
Seriously? Perhaps you should place your pride in who you are rather than what species you were born into. The former depends entirely upon your choices while the latter has absolutely nothing to do with anything you have ever done.
Quoting Arne
Quite to the contrary. The species I was born into is the whole reason I can be who I am. The human intellect is unmatched. If I were a dog, I would not be here typing this, I assure you. In fact, I just asked my dog if she would like to defend herself. She met me with annoyed silence, as she was trying to sleep. I won't go as far as to blame my dog's lack of sleep on you, but I will tell you this. You have a distinct smell of arrogance around you and your posts. I refuse to resort to name calling, and however hostile your response may be, I will not. However, I will give some examples of your assholery.
Quoting Arne
This one is interesting because you still never explain why you thought I saw someone else's argument against you, you continue to ignore the fact that everyone who has responded to you is trying to explain your arguments' faults and is making an argument, and you decide to add a sarcastic stinger on the end. If this were "Snarky Teenager Forum" I would applaud you. However, this is not such a place.
Quoting Arne
I don't think the writer ever implied that imagination had to stay within the realm of logic. Again, you like to end your posts with some kind of statement meant to irritate and provoke. It's almost as if you want attention?
Quoting Arne
You mean someone doesn't understand your idea? That couldn't be evidence that you are spouting nonsense and refuse to reason could it?
Quoting Arne
He of course meant the experience of living, of seeing, feeling, hearing, touching, tasting. Have you ever heard the phrase "I experienced ____"? It's really the only way you can take that. If I'm wrong I would gladly take an alternate explanation, but I know you wouldn't, so I'll stop here. If anyone reads this far, this man is a lunatic. Give him no more attention, he only thrives on it.
The fact that you take pride in your ability to type only proves my point. Your standards are too low. And stop with the type/token stuff. The human intellect may be unmatched, but it is clear that cannot be said of yours.
Quoting TogetherTurtle
Because the only difference in your equally ridiculous arguments is that he used the word "unique" while you used the word "special". Another mystery solved.
Quoting TogetherTurtle
Listen to Mr. Fallacy talk about wanting attention. You may rest assured, I would be more than happy with a little less attention from you. And how wonderfully philosophical of you to speak for others and to direct them how to respond to me. I am sure they appreciate that.
Dude, this ain't Facebook.
I believe it is customary to tell you that I'm "Going to bed now"
Also if I ever talk to you again, I'm calling you "Mr. No Clearer"
I argued in another thread that algorithms are not physical - they are logical. Of course, their instantiation must be physical, but given that this is arbitrary, the instantiation and the algorithm are different things. An identical algorithm may be instantiated on Babbage's analytic engine or on a, yet to be constructed, quantum computer. The instantiations will be subject to quite different physical laws, one effectively classical, the other quantum, but the algorithm itself is not subject to the laws of physics.
Quoting apokrisis
Usually when I use the term "physics" I am referring to that body of knowledge relating to the fundamental structure of reality.
Quoting apokrisis
Why would I do that? Turing machines don't exist, they are mathematical abstractions.
Quoting apokrisis
Pretty sure I made no such claim.
Did you give an argument that animals are not self aware, or did you just assert it?
So we agree that physics doesn’t account for that part of the structure of reality that is an algorithm?
Great.
Now what is it that says an algorithm is logical as such? The universe of randomly produced rule sets would be infinite. What would select among all those to create ones we would call a logical system?
Then I guess your algorithms have to have data to work on. Again, how would the input get selected so that it had physically relevant meaning?
That's not a particularly convincing argument.
Or I'm wrong. There's only two possibilities right? What do you think?
I think that most people, when confronted with the idea that animals are not sentient, do not possess qualia, don't even know they exist, etc., find that notion repulsive and experience various degrees of emotional outrage.
However, I gave an outline of various hints and arguments that this is indeed the case. There is a computational and epistemological argument that they cannot know anything beyond what they are programmed to know, and they are not programmed to be self-aware or other-aware, because they, lacking appropriate hardware, cannot be.
Another argument comes from the impressive work of the psychologist R. W. Byrne. Animals learn by behaviour parsing, not by understanding.
http://pages.ucsd.edu/~johnson/COGS260/Byrne2003.pdf
For some reason we find the notion that animals don't suffer horrifying, when it is in fact a blessing.
In that we are in agreement. Self awareness is simply the result of superior hardware and software.
Quoting tom
I think it would be impossible for animals not to have emotions. They are a product of evolution, and are useful in the wild. If animals didn't have them, it would be more convenient for us, but I don't really buy that my cat is faking it when he's glad to see me.
While I don't think that it is right to treat animals poorly on purpose, some killing is inevitable. Meat and its consumption is deeply ingrained into the culture of almost every people on earth. We are omnivores after all. Animals feel emotion, but in the human world, we overlook feelings for the greater good, so why wouldn't we apply that to animals as well? Death is simply the end of life, destined to happen from birth. Animals are our friends, and we should treat them well, but in the end, that's just how things are on our planet. Food chains and all. There is no reason to fear the facts of life.
As the more intelligent beings, I would like to believe it should be our responsibility to see to it that the life we are so closely related to and so dependent upon is treated well for as long as we can afford to let it live. Someday we will know enough to gift them with the blessings nature has given us naturally, and we will be able to very easily create identical copies of their meat just by having the elements that make them up. Today however, is not that day.
I fail to see a contradiction in the idea of complete self-awareness. Think of hunger, thirst, pain and the senses etc. These sensations are a form of awareness of the chemical and physical states of the body or the environment.
Why do you think total self-awareness is an impossibility?
We're NOT computers, I agree. But are we machines, just of a higher order? That's what I want to know.
I mean there must be an x for which consciousness or whatever else is an illusion. Is this x real or also an illusion?
Are you saying there is no such thing as consciousness?
So you think social existence contributes towards intelligence. I think so too but what about the ''fact'' that geniuses are usually depicted in culture as socially inept? Is this just one of those myths that have spawned out of movies and literature or is there some truth in it?
I suppose genius-social-misfits aren't completely normal.
Perhaps we've already achieved the greatest thing possible - duplicating rationality - with computers. What remains of our mind, its irrationality, self-awareness, and creativity, aren't as important as we think they are.
Complete self-awareness would be knowing the position, function and state of every atom within our bodies and knowledge of our subconscious.
In a way we're not actually free unless we know these things.
Like, suppose intelligence evolved to require the co-operation of A, B, C, D, E genes, with the total contributing to intelligence level, and the set being roughly in balance with most people, but then suppose in some people the E factor is much more heavily weighted than the other factors. That would produce a super-high intelligence. But what if the E factor happens to clash with other aspects of the total personality, making the person inhibited or socially inept?
Another possibility: human beings and animals generally are like these Heath Robinson contraptions, stuck together with duct tape, sticks and glue, that "pass muster" in the circumstances they evolved in for the bulk of their evolution, but don't necessarily function so well outside those conditions. For example, sociality in our ancestral environment would have meant knowing, say, about 20 people quite well, and half a dozen really well. What happens when a creature designed for that type of environment is rammed cheek by jowl with millions of strangers in a modern conurbation? Maybe they withdraw into themselves, or whatever.
Lots of possibilities here, of course one would have to know the science and investigate to figure out what's really going on.
We are not machines, either. We are organisms, and more, beings. We are born, not manufactured. Our biological design incorporates a billion years of evolution. Life exists without any designing agent: no owners, no designers, no factories, etc. Life is internally directed; machines are made, and have no properties of beings or organisms.
Machines are our human creations; we like our machines, and identify with the cleverness of their design and operation. Our relationship to the things we make was a subject of myth for the ancient Greeks: Pygmalion, a king of Cyprus in Greek mythology, carved and then fell in love with a statue of a woman, which Aphrodite brought to life as Galatea (Pygmalion is also the title of a play by George Bernard Shaw, the basis of the musical My Fair Lady, on the same theme). We pour our thoughts into our computers, and they deliver interesting viewing material to us -- none of it comprehended or created by our machine computers.
That there are "biological mechanisms" like DNA replication, respiration, oxidation, etc. doesn't in any way make us "machines" because "biological mechanisms" is itself a metaphor of a machine mechanism. We're victims of our language here. Because we call the body a machine, (levers, pulleys, engines, etc.) it's an easy leap to body status in things like office copiers and computers, ships, cars, etc.
So... No, we are not machines, not computers, not manufactured, not hardware, not software.
In a way, yes. It isn't a tangible thing. Consciousness is more of a culmination of our senses in a way that makes sense to us, and that we can question. Consciousness is just the brain translating for the mind, so to speak. I guess the question really is, how do you know you are conscious? You can think internally; you can see, hear, smell, feel, taste. I would argue a computer can do all of those things through various peripherals, so a computer of sufficient hardware and software capability could be conscious. If all else fails, you could build a human brain out of synthetic materials, and I would argue that would be conscious.
So I guess you are the x. All of your brain cells and your eyes and ears and mouth collect information, and that is the illusion. If we had more senses, there would be more of an illusion. All of this information is brought together in the brain, which decides what chemicals to send through your body, and what results is consciousness.
Touche.
That is funny.
This is a difficult point and I'm not claiming that I am correct in what follows. But one of the principles that I have learned from Vedanta is expressed aphoristically as 'the eye cannot see itself, but only another; the hand cannot grasp itself, but only another'. So I take from that that what we are aware of appears to us as an object, or the 'other'. It seems to me to be inherent in the nature of awareness itself.
Now obviously I can be aware of my internal states, like hunger or lust or depression, and so on. But even in all of those cases, the psyche is the recipient of sensations like the feeling of hunger, or is thinking about its circumstances, and so on. But the psyche cannot turn its gaze on itself, as it is the subject of experience, not the object of perception. And that subject-object relationship seems fundamental to the nature of awareness.
There's a Wikipedia entry on Kant's Transcendental Apperception which I think comes very close to expressing this same idea:
Now, number 5 is crucial here *: we're actually not aware of the 'act of synthesis' which underlies and indeed comprises conscious experience; that is what 'the eye not seeing itself' means. Which stands to reason, as I think these correspond to the role of the unconscious and sub-conscious. That is the process of 'world-making' which the mind is continually engaged in; it is in this sense that reality is 'constructed' by the subliminal activities of consciousness into what appears as a coherent whole. This kind of understanding is characteristic of the philosophy of Kant and Schopenhauer.
But it also has some similarities with Vedanta and Buddhism, which are also aware of the sense in which 'mind creates world'. But to say that in the context of secular Western culture is to invariably be misunderstood (at least in my experience), as the formative myth of secular culture is the so-called 'mind-independent' nature of the world. But what this precisely has lost sight of, is the role of the mind in the construction of reality. In fact the very idea is taboo (as explained in Alan Watts' 'The Book: On the Taboo against Knowing who you Are').
So - as to whether any intelligence can be 'completely self-aware', then in light of this analysis, it seems unlikely. And in fact I read somewhere not that long ago about it being understood in Eastern Orthodox theology, that even God does not know himself, that He is a complete mystery to Himself (although I suspect I won't be able to find the reference.)
--------
* I'm not at all sure I agree with 4 but it's not important for this analysis.
Brilliant point. The way I understand this is each level of what I call existence is separated from the other and awareness, as in knowledge of, may not be able to cross the boundaries between these levels. For instance the individual cells in our bodies don't know what ''love'' means. To know what ''love'' means requires different experiences and environments than the cell is exposed to, not to mention the cell's lack of machinery to comprehend.
That said, I do see a way in which cells may become aware of ''love'' by way of hormones, adrenaline, etc. And the process works in reverse too - cells in a low glucose environment signal hunger. Of course ''total self-awareness'' is a far cry yet.
I can't make sense of telling myself that I'm an illusion. Are you a Buddhist bringing up anattā here?
Well, I think we're more like machines than we think. Biology = chemistry+physics.
:up: thanks
No.
Long answer: software and the materials they are made with.
Is this evidential or just a gut feeling?
Agreed. I too believe our senses can be deceived or that the picture of the world we create out of them isn't the actual state of affairs. It's like taking a photograph with a camera. We have an image in our hands but it isn't the actual object the image is of.
Quoting TogetherTurtle
As far as I'm concerned there's a limit to illusion. EVERYTHING can't be an illusion, especially our sense of self. In the basic definition of an illusion we need:
1. an observer A
2. a real object x
3. the image (illusion) of the object x, x1
I can accept 3 but what is undeniable is the existence of the observer A who experiences the illusion x1 of the real object x.
Are you saying the observer A itself is an illusion? In what sense?
In the Buddhist context, the self is an illusion because it lacks any permanent existence. The self, according to Buddhism, is a composite "material" and when decomposed into its parts ceases to exist.
It is evidential to some extent. I apologize if I didn't make it clear before, but I don't believe nothing exists. I'm more on the line of thinking that how we view existing objects is arbitrary.
Quoting TheMadFool
I agree with this. When I said everything, I more meant every way we experience the world. Your sense of hearing, for instance, can be tricked by focused, weak soundwaves. That is what you are experiencing when you put on headphones. While no one else can hear your music or audio book or other media, you hear it like the performer was in the room with you. This of course, is not the case, and other senses verify that. Therefore, it is very possible some things in the natural world go unnoticed because we can't sense them. What we sense is very selective, labeled arbitrarily, and subject to trickery.
I may in time take interest in the Buddhist view on this subject. For a religion they have a strangely materialistic view on the concept of a soul.
Hmm, I would think self-awareness comes part and parcel with some level of sentience. I think a robot that can sense certain stimuli (e.g. light, color, and their spatial distribution in a scene) and can use that information to inform goal-directed behavior must have some form of sentience. It must hold some representation of the information in order to manipulate it and use it for goal-based computations, and it must have some representation of its own goals. All of that (i.e. having a working memory of any sort) presupposes sentience.
The AIs whose construction is inspired by the human brain are merely a bunch of matrices chained together, resulting in a map from an input to an output: m(X) = Y. These get trained (in supervised learning, at least) by supplying a set of desired (X, Y) pairs and using some mathematical algorithm to tweak the matrices towards producing the right Y values for the Xs. Once the training set is handled sufficiently well, chances are good the network will produce plausible outputs for new Xs.
The point here is: those things just "work" -- not meaning that they work well, but that the whole idea of the concept is not to implement specific rules but simply to train a "black box" that solves the problem.
Mathematically, such AIs partition the input space with hyperplanes, encircling regions for which certain results are to be produced.
These things do not exactly have a representation of their goals - they are that representation.
One cannot exactly forecast how such an AI develops without stopping alteration of the matrices at some point: the computation that would be needed to do so is basically said development of the AI itself.
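To make the above concrete, here is a minimal sketch of that idea (my own toy example, not any production system: the network size, the XOR task, and plain gradient descent are all assumptions for illustration). Two chained weight matrices form the map m(X) = Y, and a training loop "tweaks" them toward a set of desired (X, Y) pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set of desired (X, Y) pairs: learn XOR of two bits.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2, X):
    """Chain the two matrices: the 'map from an input to an output'."""
    H = sigmoid(np.hstack([X, np.ones((len(X), 1))]) @ W1)    # hidden layer (+ bias)
    out = sigmoid(np.hstack([H, np.ones((len(H), 1))]) @ W2)  # output layer (+ bias)
    return H, out

# Two weight matrices, randomly initialised, then "tweaked" by training.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(5, 1))

_, out = forward(W1, W2, X)
mse_before = float(np.mean((out - Y) ** 2))

# The "mathematical algorithm to tweak the matrices": plain gradient descent.
for _ in range(10000):
    H, out = forward(W1, W2, X)
    d_out = (out - Y) * out * (1 - out)        # output-layer error signal
    d_hid = (d_out @ W2[:4].T) * H * (1 - H)   # backpropagated to hidden layer
    W2 -= np.hstack([H, np.ones((4, 1))]).T @ d_out
    W1 -= np.hstack([X, np.ones((4, 1))]).T @ d_hid

_, out = forward(W1, W2, X)
mse_after = float(np.mean((out - Y) ** 2))
```

Note that nowhere in the program is there an explicit rule for XOR; after training, the matrices themselves *are* the learned behaviour, which is the "black box" point made above.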
Isn't this true for only a subset of AIs? I'm unsure if this is how, for example, a self-navigating, walking Honda robot works, or the C. elegans worm model, etc. And even in these cases, there is still a self-monitoring mechanism at play -- the optimizing algorithm. While 'blind' and not conventionally assumed to involve 'self-awareness', I'm saying this counts -- it's a system which monitors itself in order to modify or inform its own output. Fundamentally, the brain is the same, just scaled up, in the sense that there are multiple self-monitoring, self-modifying blind mechanisms working in parallel.
They have algorithms which monitor their goals and their behavior directed toward their goals no? So then they cannot merely be the representation of their goals.
Sure, there are other methods. But the ones that are derived from the functioning of the human brain, which generally means interconnected neurons passing on signals, are usually expressed that way.
Quoting aporiap
The whole program is written to fulfill a certain purpose. How should it monitor that?
I still think neural networks can be described as self-monitoring programs: they modify their output in a goal-directed way in response to input. There must be learning rules operating in which the network takes into account its present state and determines how this state compares to a more optimal state that it's trying to achieve. I think that comparison-and-learning process is an example of self-monitoring and modification.
I was wrong to say it monitors its own goals; rather, it monitors its own state with respect to its own goals. Still, there is such a thing as multi-task learning, and forms of AI that can do so can hold representations of goals.
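That "monitors its own state with respect to its own goal" idea can be sketched in a few lines (a toy example of my own devising, not any specific AI system): the program repeatedly measures how far its current state is from the goal and modifies itself until the gap closes.

```python
import numpy as np

# Goal: recover the slope 2.0 from data. The "state" the system monitors
# is its own error (loss) with respect to that goal.
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs

def loss(w):
    return float(np.mean((w * xs - ys) ** 2))

w = 0.0
history = []
while loss(w) > 1e-6:                       # self-monitoring: compare state to goal
    grad = np.mean(2 * (w * xs - ys) * xs)  # how the error responds to change
    w -= 0.01 * grad                        # self-modification step
    history.append(loss(w))
# w converges to approximately 2.0, the goal slope.
```

Whether one calls this "self-monitoring" or merely "optimization" is exactly the point of disagreement above, but structurally the loop does compare its present state to a more optimal one and adjust itself accordingly.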
If computer memory were designed like neurons, it would not be stable in storage. It would be subject to spontaneous change, as the chemical potential attempts to lower. The brain has a way to deal with this, allowing spontaneous creative change using the laws of physics and chemistry. At the same time, it maintains high energy continuity.
For example, say we designed a future computer using high energy memory. We would need a backup version of the memory, using traditional low energy memory. We allow the high energy memory to be triggered, so it spontaneously lowers potential. This movement of potential rearranges the furniture, so to speak. We then compare the two memories, to filter out any useful change. We then rewrite the high potential memory back to the starting point, while adding useful changes.
In this scenario, the change in the high energy memory is not based on computer instructions or software, but on the physical pathways needed to lower chemical potential. This gives the memory liberty to find the best paths, which may not be part of any previous logic: creativity. We continue the cycling until the pathways reach a steady state that maximizes energy flow.
Next, we add a secondary high energy memory that will use the energy-change profile of the primary as the trigger to ignite the spontaneous change in the secondary memory. Now we are getting closer to self-awareness. The brain does this through well-worn ancient genetic pathways in the primary, which trigger a wide range of self-feedback: feelings, sensations, emotions, etc. This occurs at the same time it triggers spontaneous change in the secondary.
The energy flow is based on free energy, which is composed of enthalpy and entropy. Free energy has a natural logic, based on the laws of physics, which are universal. This flow does not need man-made language, although man-made language impacts how the high energy memory of the secondary moves the potential around. This helps to create a disconnect with the secondary: consciousness. The primary cannot turn the secondary into a clone of itself, due to man-made language. One becomes self-aware of the separation while still feeling overlap.