Consciousness
What is consciousness? What does it mean, and entail? What are its characteristics, prerequisites, function, substance, or importance?
Consciousness can be seen as wakefulness: intentional, purposeful, internally driven activity, through metabolic processes, emotional impulses, imagination, reasoned motivation, and the like. A kind of alertness, a reactivity to the environment. It can also be thought of as a state of knowledge, or awareness: what one comprehends about oneself or the world. "Raising one's consciousness", or "states of consciousness" based on quasi-mystical, ideological, or insight-based comprehension or mind vision.
What is it? Is it functional, the cogs and wheels moving just right at one level emerging into consciousness at the other? A univocal cosm of consciousness, from the micro to the macro, existing at all stages or levels? A narrative reflection, or illusion?
What's it made of? Parts? Information? Sensory immersive experience? Fields? Plasma? Ectoplasm?
Tell me all about it, I'm a zombie, and only pretend to understand.
Comments (49)
I am very curious to hear a zombie's interpretation of that scene. Namely, who else could see the printed text on the visual field, and where did it reside?
Everyone who watched the movie, and everyone who designed the scene, saw the illusion. It does not, however, work on me, as I do not have an experiential visual field; I merely behave as if I do.
Consciousness doesn't seem to be always intentional, or possess intention. There are many things that appear in consciousness that weren't preceded by an intention. Intention seems to arise as a response to certain experiences. For instance, my intention to live only arises when my life is threatened. My intention to seek pleasure only arises when I'm feeling down or suffering to some extent. So it seems that the only persistent part of consciousness is its awareness. When we are conscious, we are aware, and what we are aware of, we can respond to with intent.
So with you, there is no understanding. Why would a thing which can't understand... ask for an explanation? I guess I would be posing that question to the wall. You would only pretend to understand it. It's kind of lonely to ask the wall questions. They just bounce back at me.
Of course it's easy enough to fake understanding in some cases, and we can write software that does this now, but it won't succeed in all cases. Indeed, nobody is convinced that Siri, Watson, or some clever bot is conscious. But there are AIs in fiction which would be able to behave convincingly, and then we would have to ask ourselves if it makes sense to think they are p-zombies.
You could have a potential p-zombie read a story with a novel twist on first person and ask them all sorts of questions. We know that humans, if they found the story interesting, would discuss and debate it at length. But how would a p-zombie make sense of it?
Because that's what a thing that was interested, and could understand, would do. Pretending to understand is often considered polite; walls aren't polite.
So it has some sort of software? Imagine you're walking through a garden and you come upon a statue. As you walk toward it you're startled by a voice. It seems to be coming from the statue. You peek behind it and laugh because there's a sound system strapped to the back of it. You realize there must be a motion sensor somewhere. As you listen, it tells you that it's just a statue. It affirms that you are also a statue with a sound system and a motion sensor. Then it asks you what you think about that.
I think I'd have various thoughts... how contemporary art installations aren't really my cup of tea and stuff like that. But would I express my thoughts to the statue? I guess if you were there too and I was trying to make you laugh.
So if I respond to the OP, that shows that I don't believe the last sentence. Can I prove it's wrong? No. I can't prove to myself that I'm conscious. I can't prove what I can't deny.
My thought process for making this thread was that we ought to have threads about the key philosophical issues, like consciousness, and what it is. I ended up posing it all as questions, asking for people's opinions about what they thought consciousness was, and for this reason, just thought that I'd add "before I'm a zombie" at the end for shits and giggles. Funnily enough, that's what got all of the attention, so I just ran with it.
My own position is not that we're all zombies, if you think that I could be meaning that.
In fiction they often go out of their way to remind us that the A.I. is an A.I. by having it, often nonsensically, be unable to grasp some emotional experience. Like the Terminator claiming to have extensive databases on human anatomy and function for medical purposes, and then five minutes later being all like "why is your face leaking? Crying is a mystery to me!". Problematic in two senses: first, it is incongruent with its claims of extensive anatomical knowledge, which would surely exceed even seasoned professionals in detail and accuracy because of memory; and secondly, because the question itself suggests an interest in, and recognition of, the face leaking as significant. So that ultimately it doesn't work to remind us that the A.I. isn't conscious so much as demonstrate an odd kind of robot ignorance (which is what sci-fi usually resorts to when attempting to display this), while artificially and implausibly inserting a gaping hole in the A.I.'s database.
Humans who found the story interesting may discuss it at length if they felt comfortable enough, but may otherwise wish to say very little, because of being overwhelmed by the pressure. Whereas some who didn't find it particularly interesting at all may find lots to say about it, because they talk a lot under pressure, or simply like attention and the opportunity to be listened to, or a million other motivations that may make people say much or little, regardless of interest. Much like a lie detector, such a test is arbitrary.
Modern A.I.s have no extensive memory or parsing of natural language, and are easy to detect by asking them questions about what has been said already, or meta questions seeking specific, non-general responses. If an A.I. did master natural language (and it is only a matter of time before we design one that does), I don't see what kind of test one could design to decide whether or not it was truly conscious -- and I don't think that the artificial, implausible movie scenarios give any answers towards this.
Maybe the concept is interesting. Maybe because it was the only thing resembling a positive opinion or position I put forward, and people prefer to address positive positions than put them forward. Maybe because the first person decided to focus on that, and I obliged, it just naturally became focused on that, but it could have gone many other ways, depending on how the first reply went. Who knows.
There are two recent AI movies that do a good job with this sort of thing. One is 'Her' and the other is 'Ex Machina'. In the second one, a programmer at a big software company wins a prize to become bait in Turing testing the secret robot the company's CEO has been building. It's a rather ingenious scheme, as it has several levels of deception built into the plot. In 'Her', it's easy enough at first to think that the operating system Samantha, as it names itself, is just a futuristic Siri, but it becomes impossible to maintain this belief as Samantha evolves and pursues goals on her own (and with other versions of the operating system).
I don't think an AI can do what either of those AIs did without attaining consciousness. Same goes for Data and the holographic doctor on Star Trek: The Next Generation and Star Trek: Voyager.
Aren't those just movies that go the other direction and give them magic, inexplicable consciousness? It is thus, again, just written into the plot to convince you one way or the other. But we do have -- albeit incomplete -- explanations for why organisms are goal-directed, and even organisms that arguably are not conscious are goal-directed. So this too fails on two fronts: the first being that they just inexplicably become conscious as a plot device (approaching the definition of fantasy rather than science fiction), and secondly that the quality presented to demonstrate their consciousness or awareness fails to do so.
Ex Machina does provide an explanation for how consciousness was built into the robot, even if it's somewhat dissatisfying. There is a fair amount of interesting conversation in the movie, since the point is to test the robot for genuine consciousness. The intriguing part is that it means the robot must deceive its unknowing interrogator in order to truly pass the test. Deception on that level requires an understanding of other minds.
Secondly, it merely presupposes, or question-begs, a computational theory of intelligence or consciousness, which implies, at base, that calculators are to some degree conscious and intelligent.
Data is my fav. Though, he does explicitly claim to possess qualia.
I'm not sure that it does. All that it requires is a mastery of language, in my view.
Clearly a foolproof test. We don't seriously question whether or not other people have minds until the question is brought up, mainly because of their history and origin, more so than their functionality.
Not indistinguishable, but rather fully capable. Data wouldn't pass a Turing Test (too easy to tell he is a machine the way he talks), but he is conscious.
No, it isn't, and I actually think that it will be more difficult for A.I.s to parse natural language in an active physical world than merely abstractly in a conversation. I still don't take seriously such movies, which merely create their universe and make whatever they want true, regardless of real-world feasibility; as Aristotle would tell you, they are rendered false the moment they're taken as true, and only retain their truth to the extent that they're understood to be false.
Neither movie presents a dumbed-down Turing Test. Anyway, there are plenty of examples in literature and movies. Some of the machines are very human-like, and some are very machine-like, but they all possess a deep understanding of the first person (meaning they're doing more than mimicking), because they're all conscious.
Dennett would have laughed.
I think the experience of qualia is directly related to consciousness, as in, consciousness is a necessary prerequisite for qualia.
However, Hume argued that we are qualia - the self is a conglomeration of sensory inputs. I have to ask what experiences these sensory inputs, though. For if there is nothing to experience these inputs, they aren't qualia, they are just electrical impulses.
So a computer that represents wavelengths of light, vibrations of air molecules, chemical inputs, etc. consistently, and all at once, would in effect be conscious.
Carrying on an intelligible conversation isn't a measuring stick for consciousness. If it were, then children, say below the age of 10, and some adults (just look at Facebook), aren't conscious. Those that speak a different language wouldn't be conscious. Carrying on an intelligible conversation requires learning the language, and we have all made mistakes using our native language, and our mistakes are what make us learn to use the language more intelligibly. Teaching and programming are basically the same thing. We re-program ourselves when we make mistakes. Computers need programmers to update their software, and eventually computer and software engineers will design a computer that can re-program itself.
That is oddly worded, the whole "sensory input" thing; it sounds as if we're sitting in a dark room sending out outputs and receiving inputs via our organs, and thus interacting with the world.
I do think that qualia is a necessary prerequisite for consciousness, but I'm not sure of the inverse. It isn't clear to me that some animals that definitely have sense are conscious, though I do believe that everything that is conscious has sense.
I don't know what you mean by the representational thing; it sounds like representational realism as a theory of consciousness, and if so then you just blew the representation I have of my mind! I don't see how they're related... but isn't it the case that a digital camera represents photos as digits on a storage drive? Are digital cameras conscious?
That consciousness plays a large role in the behaviour of conscious things does not mean that such behaviour is (necessarily) unique to conscious things.
Is there any evidence or reasoning to suggest that human-like behaviour (including conversation) cannot be explained by non-conscious physical influences (or that consciousness is a necessary by-product of such non-conscious physical influences)?
That nobody has been able to come up with a convincing physical or non-conscious explanation for consciousness, and philosophers such as Chalmers, Nagel, and McGinn have provided fairly strong reasons why all such attempts are doomed to fail, despite the efforts of Dennett and company.
As I see it, the explanatory gap arises because we start by abstracting objective properties from the first person perspective, such as number, shape, extension. And that has worked really well in science. But then we turn around and ask how those objective properties give rise to the subjective ones that we abstracted away from, such as colors, smells, pains, etc. And there just isn't a way to close that gap, other than as a correlation. Brain state ABC correlates with feeling XYZ. But why? Nobody can say convincingly.
The result is 25 (throwing out a number) different possible explanations, ranging from it being an illusion to everything being conscious. Of course one can take the idealist route and dispense with the problem, but at the cost of eliminating the third person properties as being objective, by which I mean mind-independent, despite appearances to the contrary (for us realists anyway). Of course if idealism were universally convincing, this wouldn't be a philosophical issue. But it's not. I would venture to say that realism is more convincing to a majority of people.
And so it will probably continue to be argued going forward, despite whatever progress neuroscience makes. The correlations will be stronger of course, but it's unlikely anyone will be able to answer why it's not all dark inside. Of course that lends credibility to Chalmers' arguments, but I'm not convinced by his either.
For good reason: there isn't a "why." Brains are not a description or explanation of feelings, and vice versa. They always fail to account for each other because they are distinct states. The mistake was to propose that they account for each other in the first place. No "gap" exists because neither has any role in explaining the other. The entire approach to consciousness which understands it as something to be explained by logic (the meaning of other states) is flawed. It ignores exactly what states of consciousness are: their own state of existence.
Experiences aren't "subjective." Like any state of the world, they are their own state, "objective" and within the realm of language (like any state of the world). They are even "mind-independent": the presence of an experience doesn't require that someone be aware of that experience. I can, for example, feel happy without being aware I am experiencing happiness. It doesn't take me thinking or talking about my own happiness for me to be happy. It just requires the existence of a happy state.
All the consternation over "first person" and "third person" is nonsensical. The controversy over "what is it like to experience" is one giant category error. By definition, the being of an experience is distinct from any description we might give, so to attempt a "first person" description is to literally try to turn language about something into the state being described. Is it any wonder it always fails? It is exactly what language never is.
So when we are asked, "But what is it like to be a bat (or bee, or rock)?", the question is really asking us to be the bat. Only then, it is assumed, can we understand the experience of a bat (or bee, or rock). It is an incoherent argument which makes a mockery of language and description. The very point of language, of description, is that it is an expression of meaning which is not the thing described. To understand something is, by definition, not to be the thing you know in your present state, but to be aware of it anyway. The absence of "first person" IS understanding (even within the one individual: if I understand that I am making this post, then my being has changed from making the post to a state of knowing about making the post. Making the post has been lost to my "first person." It is nowhere in this state of knowing about making the post. I am distant from it).
In other words: the "gap" argument utterly misunderstands what states of awareness are. It proposes that to understand involves being what is understood, as if knowledge, awareness, or understanding of something constituted its existence. It is no coincidence that the obsession with the authenticity of the "first person" is offered by the idealists. It is the ultimate expression of their position: (only) experience as existence.