Do Chalmers' Zombies beg the question?
I'm re-reading Chalmers' The Conscious Mind and, while I agree with his position, I find his arguments about logical supervenience lacking.
Chalmers contends that conscious experience does not supervene logically on the physical, so mind is not derivable from matter, which leads to his position of naturalistic dualism. He relies heavily on the case of the phenomenal zombie, which is functionally and psychologically identical to him, but has no phenomenal experience.
To me, this seems a very strained example, and I think Chalmers highlights the limitations of this example himself when he says, "My zombie twin does not have any conscious experience, but he claims that he does" (p. 174).
If zombie-consciousness is devoid of phenomenality, what possible set of conditions could give rise to the zombie asserting phenomenality? Isn't this a petitio principii?
Comments (110)
Are we to exclude deliberate deception? If so, how about innocent confusion?
Brain activity that triggers the vocalization of the expression "I am conscious." There's nothing special about those words. "I am conscious" is no more an indicator of consciousness than "one plus one equals two." They're just sounds that can result from mechanical operations.
Well, as long as he rests his case on that, I don't think there is much to discuss here. He starts from a totally wrong base, so his conclusion can hardly come out right either.
Being functionally and psychologically identical presupposes human phenomenal experience. They come as a package; you can't have the one without the other. However convenient it might be for him to build his case on this, it still can never be right.
Quoting dimosthenis9
But it is still possible to come to the right conclusion for the wrong reason.
Quoting bongo fury
I've been thinking along these lines. One difference must be that the actually conscious being can know that it is conscious (in the strong sense); while the zombie that knows it is conscious is wrong. So could you say that this is the exact dividing line between a reductionist and an emergent consciousness?
By pure luck?
For example, if knowledge that one is conscious is the phenomenal judgement sine qua non, then when the zombie effects this judgement, it can either judge correctly "I am not conscious"; or it can judge incorrectly "I am conscious". What this says is that the judgement of/about consciousness (conscious judgement) does not supervene on the physical since, if it did, it would be self-contradictory. Which explicitly contradicts Chalmers' second premise.
Even if his conclusion is correct, it would never be accepted if the way he tries to prove it starts from this false base. Anyone could push it with the tip of a finger and it would collapse.
So, to be taken seriously, he would have to choose a different, more solid path.
We could discuss what the appropriate base would be to make his conclusion right, or at least seem more reliable.
I consider myself an idealist and believe that mind is non-material, but I would never choose such a lame premise to convince a materialist. I would be easily defeated.
I've always claimed to be the zombie, without lying about it. I don't think I'm conscious, at least not by Chalmers' definition, so no, they don't necessarily lie about it. Sure, I can detect red, but so can the simple mechanical device.
But anyone who takes this view will likely be an eliminativist (or a reductive functionalist) about consciousness from the start. If one accepts that our immediate evidence does not rule out the possibility that we are zombies, then one should embrace the conclusion that we are zombies: it leads to a much simpler view of the world, for a start. But the reason there is a problem about consciousness is that our immediate evidence does rule out that possibility.
Chalmers on the "Irreducibility of Consciousness"
How does this work? Acquaintance of the head with an immaterial picture inside it?
Functionalist about consciousness seems closer to the mark, but they don't call themselves zombies, and neither do I except in the context of discussions such as this one.
The OP says they're functionally identical, so by definition, immediate evidence does not rule out the possibility.
How is that a problem? It simply leaves it open to interpretation (as does any position without empirical differences).
Well, axiomatically, if the zombie thinks it is not a zombie it is wrong, if a conscious being thinks it is not a zombie it is correct.
Chalmers does discuss this. One position he explores and admires involves ascribing phenomenal (or proto-phenomenal) states to physical entities. And also monism, not of the mental, but of an over-arching variety. I have strong affinities for both these directions also. Both of them are consistent with a systems-philosophical approach.
If a machine with no ghost thinks it has a ghost, it is wrong. Tick.
If a machine with a ghost thinks it is not a machine with no ghost, it is correct. Tick.
Does this mean you equate the 'experience of consciousness' as that of 'having a ghost'? That is interesting.....
Chalmers’ zombie twin is not “logically coherent”, to me. He can only assume, and not prove, that “conscious experience” is missing from the zombie. This is because he assumes, and never proves, that “conscious experience” is a fundamentally natural phenomenon. Of course he can imagine it missing from a zombie because he has long assumed it occurs elsewhere.
I think you’re right. He’s reasoning in a circle.
"P-zombie" incoherence.
:roll:
But if he assumes it then that is the case he is examining. A conscious zombie would contradict his example.
Zombies are functionally equivalent to conscious entities. Generically different entities have different evolutionary histories (because "you count to two when you count them"), but given the functional equivalent clause in the definition, any treatment of p-Chalmers as saying something Chalmers says is by definition fair game.
But at some point, there'd be a common ancestor, which means that a non-conscious entity bred a conscious one with this new relationship with the external entity.
In other words, what had functioned just fine in the parent (a mind that was self-contained and fit despite the lack of relationship with the 2nd entity) gave birth to one where all those advantages were cast aside, letting the choices be made by the external entity rather than itself as has been done by its parent.
This is probably unfair, since it paints a picture of a very binary, abrupt transition from one way of doing things to a completely new one. The child simply wasn't evolved to take advantage of the external entity, at least not in full, and so I imagine the process to take many generations, with a slow ceding of control to the external choices which for some reason made the phenomenal being more fit.
To me, this sounds like slow possession by a demon, however benevolent.
This also leaves the price paid by the parent (the extra metabolism to support an exceptionally large brain) still being paid by the child, despite all the function of that expensive brain having been contracted out to a 3rd party. If the 3rd party is doing all the work, why still cart around a brain whose function can obviously be accomplished by one with a tenth the metabolism, as is done by other creatures our size?
Agree with you. How could a zombie answer the question 'how are you?' or 'how are you feeling?' I don't see how it could. It could be scripted to regurgitate an answer, but surely it couldn't be hard to fool it. 'What was the most embarrassing thing that ever happened to you?', for example, because they allegedly don't have any experience or inner life. They are, therefore, automatons.
Actually I'm reminded of a passage I often quote from Descartes, and bear in mind he wrote this in the 17th Century:
[quote=René Descartes]if there were such machines with the organs and shape of a monkey or of some other non-rational animal, we would have no way of discovering that they are not the same as these animals. But if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs—for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action. [/quote]
:up:
Actually Chalmers touches on this because, if his zombie-twin has an "inverted spectrum" of any conscious experience, for example, (sees blue where the other sees red i.e.) then there necessarily will be different "causal histories" of that type of experience, even if the experiences themselves are the same. So isomorphic mapping of history can be problematic.
:up:
I like Chalmers' description of consciousness as something which determines the intension of its own concept.
I'm wondering how P-zombies could have a history that involves the development of words that refer to conscious experiences they don't have.
We have a word like “soul” despite the fact that we have no such thing.
If we're not conscious in the way that philosophers like Chalmers claim we are, then qualia would count as such a word in our universe. Idealism would be another. Platonism would be yet another. Not to conflate those three terms, but it demonstrates that if the world is physical, it doesn't prevent us from coming up with non-physical words.
We could in the future have f-zombies with mind-uploading. The science fiction book Permutation City explores the concept. Brain scanning and computing have progressed by the 2050s to the point where accurate digital copies can exist in simulated worlds. The question the physical main character wishes to explore is whether a digital copy is conscious, and how manipulating the simulation might distort that consciousness. Most of the copies commit suicide upon finding out that they are uploads. But the last one is prevented from doing so, and goes on to invent the dust theory of consciousness.
If digital uploads are not conscious, they could still be functionally equivalent and make the same claims about being conscious as we do. They would physically be different, which would mean that consciousness is not functional, and behavior is not a reliable indicator for being conscious.
Then he must be a phenomenal zombie with phenomenal experience.
It's possible that there's no beetle.
Truly conscious people have something that they call "beetle", but p-zombies have nothing in their boxes and still call it "beetle". In essence p-zombies are using words (meaning is use) and, if Wittgenstein is right, we're also doing the same. :chin:
Are there non-truly conscious people (apart from my wife)?
:lol: Don't say that I laughed at your joke. I really don't want to be in her bad books. :smile:
Yes, as I mentioned earlier, I think this is the sense in which Chalmers suggests that consciousness determines the intension of its own concept.....
If the zombie is the clone of a liar, ill educated, or mad person.
I'm more of the opinion that consciousness in this scenario constitutes a nescio quid, such that for a zombie to make a true qualia-claim it would be referring to something to which it in principle does not have access.
Indeed. Any claim to having an experience must be false if expressed by a zombie, very much a nescio quid for the zombie. But for the human, who has noticed he is conscious, it's more of a, er, conscio quid, or something.
Neither party is lying. For it to be a lie, each being (the zombie using nothing but physics, and the 'human', as y'all put it) needs to spend a moment in the other's shoes to compare. This is what the one is like, and this is the other. Now given that, one can select which most closely matches his experience (or choose to lie about it and claim the other). Until then, there's no lie about it, since each has only one experience to compare, and each has learned the vocabulary to describe it from places like this forum. I certainly would never have used the word 'qualia', for instance, had I not heard it from others.
So in the interest of not lying, I assert that the experience that I have is just the result of doing it the same way as would any physical device with sensory input and an information processor to make sense of it. There's nothing seemingly inexplicable about it, and hence I conclude that I'm one of the zombies and that I'm missing out on the full inexplicable-by-physics experience.
:up:
There's nothing it's like to be a zombie. So for us humans switching places is the same thing experientially as being unconscious. It would be like losing time once you switch back.
Chalmers eventually examines how information can plausibly link the physical and the phenomenal, since it presents aspects of both (the well-known issue of the two entropies). What I would like to consider is, extending Chalmers' approach of supervenience: if consciousness, while not supervenient on the physical, is in fact supervenient on the informational, then consciousness could be translated from one medium to another, exactly as information can be. The only question is, when I am thinking this thought now, is that exhaustively represented by the informational content, or is there something more? Is the thing which is producing or creating information itself a form of information? I'm inclined to think it is...some form of globally coherent informational history maybe. And so, yes, theoretically translatable between mediums.
One possible answer, is that the zombie is just programmed to say these kinds of things. If, for example, our reality is a kind of program of sorts, then it's quite possible that some being (what we refer to as a person) might just be part of the program. They act like us, they talk like us, but they lack the internal subjective experiences of a real self. It's certainly possible, but unless you were able to remove yourself from the program, it would be difficult if not impossible to tell the difference.
It's hard to see where he's committing a fallacy.
I believe Chalmers would disagree, because he would say that consciousness is not reducible to information. It does not logically supervene. Rather, there's an additional law of nature that binds conscious experience (or causes it to emerge) whenever there is an informationally rich stream, or whatever the criterion is.
One might object to this new arbitrary law that adds something additional to nature, but I think the even deeper issue is the status of laws making nature be a certain way. If we can allow such laws on the microphysical level, then I don't see what stops them from happening elsewhere. Because laws of nature are deeply mysterious.
I didn't really have in mind 'switching places', since, lacking something being me, there's nothing to switch. Perhaps the zombie (Phil) can be possessed by something (Bob) being it for a short while. But this only lets Bob know what it's like to be Phil (who is for a short while not a zombie), and Phil might not necessarily be aware of it.
I think I follow this, but disagree. A self-driving car, with driver in it, has something 'being' it and the car is 'conscious'. The car is an extension (an avatar) of the driver. Not sure how you're using 'experience' here. A self-driving car is capable of being aware of its surroundings and function on its own. That's 'experience' in my book, as distinct from 'conscious' which is the experience and control of the driver. If you use the word differently, then I need one to describe what a mechanical device does to measure the world.
Point is, the car can cede control to a conscious entity (driver) and become a car/driver system, and if you ask the system if it has phenomenal experience, it would be the driver that answers 'yes'. Perhaps the car still has its own experience and notes that it would have done that lane change better, but it's not in control.
If the same car is driving itself with the same person now acting epiphenomenally as passenger, then there's still something 'being' it, but it's the car in control, and thus the car that answers when asked if it has phenomenal experience. The car is unaware of the passenger, so it truthfully answers yes since it is quite aware of the vehicles around it and such and has no driver-phenomenal experience with which to compare.
There may be no passenger at all, and thus nothing 'being' the system, and the experience of the car is the same. So as long as it is in control, the car is going to answer the same way. If it's under control of an external driver agent, then it is the agent that answers, not the car, and the car cannot convey what that experience is like.
I claim to be the zombie car, not the driver/car system, because I have no evidence to the contrary and it seems more plausible than the physics-defying system otherwise posited.
So you see zombie colors, hear zombie sounds, think zombie thoughts, dream zombie dreams?
Except the zombie is supposed to be identical to me except for being conscious. I don't talk about my mental states due to any programming; when I talk about being in pain, say, it's because I want to inform someone about my mental state.
Doesn't coming up with words for mere possibilities require imagination?
So ... p-zombie Chalmers is imagining the redness of red and what it's not like for his zombie twin to lack that red sensation, and its implications for metaphysical possibility.
When p-zombie Mary leaves the black and white room and sees a red object for the first time, she learns a new fact that isn't a fact, because p-zombie Mary is mistaken about seeing red. In fact, her entire world is colored in combinations of p-red, p-yellow and p-blue. She learns about the p-redness of p-red. We can call that a p-fact. But she already knows all the p-facts. So she learns nothing.
My brain hurts now. I'll admit to having difficulties with the p-zombie argument when it comes time for the zombies to talk about consciousness.
Yes, and this is why I said, "...they lack the internal subjective experiences of a real self," which was meant to mean they are not conscious. It's difficult to know if such a zombie would really act like a conscious being. It seems that you could in theory make them respond just like us. It would be like playing a game, say, World of Warcraft, and not knowing if you're talking with a real person or not.
I don't think they could act entirely like a conscious being because conscious beings' actions are sometimes caused by their mental states.
The point of course would be, how could you tell the mental state apart from a programmed response? I don't think, in theory, you could.
Maybe not but that's an epistemological point. It seems to me that P-zombies can exist iff there are no actions caused solely by mental states.
No, I see colors, hear sounds, think thoughts, and dream dreams, but I do it the zombie way without help from the outside, just like the self-driving car does. OK, the car probably doesn't dream, but it does the other things, however reluctant you might be to ascribe such terms to such a device.
What's that like?
I don't feel pain. I'm a zombie, remember? I merely process the data received from my nerve endings and make the appropriate facial expressions and such.
Some cars don't have damage sensors. They're still awfully primitive. Ones that do have the sensors process the data, which can be interpreted as pain or not depending on your choice to characterize it with such language or not, a choice which doesn't alter what's actually going on one way or the other. But I assert there is no evidence of any fundamental difference between myself and the car. Our mutual refusal to use the word pain to describe the respective systems isn't evidence of anything.
Quoting Marchesk
To what? There isn't anything to which it is like something. That's the thing I deny. There's no 'I' (a thing with an identity say) that's being me.
Yeah, that's the real problem here. If qualia are epiphenomenal, how can we talk about them?
:up:
So you don't feel pain?
This phrase sounds suspicious. There's a me, but there's no I being me?
Also, there's definitely an "I" there. Something typed an entire grammatically correct, if not coherent, response in this thread with a unified theme conveying some particular form of skepticism to zombies.
Are you pretending for the thread, or do you actually think you're a p-zombie?
Quoting InPitzotl
Indeed, if one with qualia can talk about it, it isn't epiphenomenal. Those of us without the qualia might talk about it because we hear the rest of you talking about it and know no better.
Quoting RogueAI
Well, not pretending anything. Chalmers claims a conscious experience that does not supervene logically on the physical. I don't have that since what I do isn't a logical contradiction like that. So I can only presume Chalmers (and the rest of you non-zombies) has a conscious experience that is fundamentally different than me just "receiving data that could be interpreted as pain", as it was put in T2. I might use the word pain, not because I (like the other zombies) am lying, but because we've been provided with no other vocabulary to describe it.
Quoting Marchesk
'Pain' seems to be a word reserved to describe the experience had by the experiencer of a human. It would be a lie to say that I feel pain, in the context of this topic; so, lacking an experiencer, I cannot by definition feel pain any more than can a robot with damage sensors. Again, I may use the word in casual conversation (outside the context of this topic), not because I'm lying, but because I lack alternative vocabulary to describe what the pure physical automaton does, something which by your definition cannot feel pain since it lacks this experiencer of it.
Quoting InPitzotl
This is a better question. The 'me' is like the robot, the thermostat, the automaton. These things, in common language, have a sort of legal identity, but not an identity which holds up to close scrutiny such as Parfit demonstrates. The "I", on the other hand, refers to the experiencer of a conscious thing, something which gives it a true identity that doesn't supervene on the physical. My 'me' doesn't appear to have that. It seems inconsistent that something with an identity can be paired with something without one. The bijunction between the two doesn't work without a series of premises which I find totally implausible.
No, that's the legal 'me' doing that. Any toaster has one of those. Any automaton can type a similar response in a thread such as this.
I cry foul here. Imagine a believer of the classical elements telling you that he just fetched a pail of water from the well. When you ask the guy what water is, he explains that it is the element that is cold and wet. Analogously, you object... there is no "water"; for "water" refers to an element that is cold and wet, and we don't have such things. The problem is, the guy did in fact fetch the stuff from the well. This I believe is your error.
Slightly more analytical, the guy has a bad theory of water. When asked to describe what water is, the guy would give you an intensional definition of water that is based on the bad theory. It's proper to correct the guy and to say that there is no such thing as he described in this case; however, the guy is also ostensively using the term... the stuff in the well is an example of what he means by water. His bad theory doesn't make the stuff in the well not exist. So the guy is in a sense wrong about what water is, but is not wrong to have the concept of water. The stuff the guy goes out to fetch from the well really is there.
You're objecting to an intensional definition of "I", which is simply based on a questionable theory of self... but you still have the extension to which "I" refers.
Quoting noAxioms
I've no idea what you mean by legal me, but the ostensive I to which humans refer is not something a toaster has. I can't comment on the automaton... the term's too flexible.
How far does your skepticism go? Do you think there's a strong possibility you're the only mind in existence?
I can't tell which position you're actually arguing for or against. I assume it's a reductio?
In any case, I'm confident you do feel pain, and trying to argue that you don't via some objective comparison or description doesn't change the fact that you do in fact feel pain.
This is exactly why a physicalist like Dennett, who thinks the idea of Zombies is incoherent, says that if they were possible, then we would all be zombies. Why would a zombie say it is conscious if it didn't think it was conscious? (Why would it say anything at all if it didn't think anything, for that matter?).
If the zombie can think it is conscious (which itself is an act of consciousness), and this thinking is the result of brain activity, then what reason could we have to think consciousnesses would not also be a result of brain activity?
You would need to posit a programmer, which is no part of Chalmers' ideas.
An eliminativist about personal identity could hold up phlogiston as a counterexample. To be sure, phlogiston, identity, and the water element were posited not as idle fantasies, but in order to explain some manifest reality. But the preferred solution, at least in the case of phlogiston, was not to come up with a better theory of phlogiston, but to drop the stuff altogether as part of a better theory that accounts for the manifest reality of heat transfer.
I am pretty sure you're at least one step behind, not ahead, of the post you just replied to.
Quoting SophistiCat
This is clumsily phrased. Phlogiston theory is a theory about combustion. It was replaced by oxidation theory, a better theory about combustion. We dropped the notion of phlogiston, but not the notion of combustion.
I am not sure how to take this. Is this just a generic putdown, or did you mean something more specific? What am I missing?
Quoting InPitzotl
Well, referring to the phlogiston theory as a theory of heat transfer was perhaps clumsy, but you have ignored the substance of my response in favor of capitalizing on this nitpick.
The argument is just about conceivability. Your question shows you've gone beyond conceiving of the P-zombie to asking why it's like that.
That's all that's needed to drive the wedge in.
The only way I can parse it, it is the followers of Chalmers that are making the error you point out, where a human is privileged in being allowed to call something water/cold/wet, but anything else (a sump pump moving the stuff) doing the exact same thing is not allowed to use such privileged language (the pump moves a substance which could be interpreted as water). A mechanical device with damage sensors to which it reacts lacks the privilege to say it feels pain. I didn't make those privilege rules, so cry foul to the ones that make those rules. I decline to use the word because I don't consider myself to be in the privileged class.
Bad analogy. In the case in question, nobody is ostensively using a term. You can't point to your subjective feeling of warmth and assert the toaster with thermostat doesn't feel anything analogous. Sure, it's a different mechanism, but not demonstrably fundamentally different.
No, wrong to have the concept of water since the term 'water' is not in fact being ostensively used. Perhaps not wrong, since there may be water in his well, but I detect none in mine and he cannot show me the water in his.
I think so.
Legal identity: There is a rock placed at X, and you move it to a new location Y. Is it the same rock, or merely a different arrangement of matter in the universe with only language suggesting a binding between the prior arrangement and the later one? Is that toaster under your arm the same toaster as was stolen from me a moment ago, or a different one to which I have no claim? I shake a rope, sending a wave down its length. Is the wave I created the same wave that reaches the other end, despite not involving motion of a single bit of the original perturbed material? That's what I call legal identity, and it has nothing to do specifically with life forms. It seems mostly language based, not based on anything physical, and it doesn't always work. A cell divides by mitosis. Which is the original cell? Language has no obvious answer and physics doesn't care.
This seems to be an example of the privileged language mentioned above. What I see as the 'bad theory' asserts privileged status to humans, raising them above a mere physical arrangement of matter, and assigns language reserved only for objects with this privileged status. I'm denying the status, and thus sit in the group with the toaster, forbidden to use the sacred language. My son has one of those 'hey google' devices sitting on its table, and it might reply to a query with "I cannot find that song" or some such. But such usage seems to refer to the legal identity (something I don't deny) and not to "I, the experiencer of the device", which neither the toaster nor the physical arrangement of matter referred to as 'noAxioms' has.
Quoting RogueAI
I don't consider my position on this to be abnormally skeptical. I simply deny the non-physical experiencer, which is a fairly standard monist position. I differ from the mainstream position in that I'm willing for others to have the dual relationship (and hence all the talk about it), thus forcing me to use alternate terms to describe how I work. Most monists probably believe that every mind supervenes on the physical, not just some of them. My position explains why the zombies talk about pain when they don't actually 'feel' (privileged definition) it. They are not lying, merely drawing from the limited vocabulary available to them.
Quoting Marchesk
Not if a mechanical device is forbidden from using the word. If a thermostat doesn't feel warmth, then neither do I. I admit to pain being a rare one, with few devices having sensors to provide it.
Quoting Janus
:up:
Or not even necessarily brain activity, but any information-processing activity.
It was not a put-down. I'm not just generically using braggart language here; you're literally one step behind. The water example is a response to the response you just gave, and it does not negate it. We did not discard the notion of water when we discarded classical elements, and there is a good reason we did not do so. That we discarded phlogiston on replacing it with a better theory, does not negate this good reason not to discard water when dropping classical element theory.
Quoting SophistiCat
That's not quite the clumsiness I was referring to. "X is a bad theory of Y" is to be understood in the sense of X being an explanans and Y an explanandum. In this sense, phlogiston theory is not a theory of phlogiston because phlogiston is an explanans. The explanandum here is combustion; so phlogiston in this sense is a theory of combustion. When we got rid of phlogiston theory, we did get rid of phlogiston (explanans), but we did not get rid of combustion (explanandum).
There is an infidelity in my phlogiston analogy in that "phlogiston" and "self" are not on the same level in terms of their pedigree and epistemic centrality. They are, however, on the same level in that both are theoretical entities that have played a role as explanans, and it is that which eliminativists attack. They do not deny that which gives rise to our habitual concept of "self"; rather they question the validity of the conceptualization.
Here I should disclose that I have been playing something of a devil's advocate, because I am not on board with the kind of eliminativism that blithely rejects concepts like "self" as merely illusory. Personal identity may be nothing over and above a psycho-social construct, a legal fiction, as @noAxioms might say, but it does exist at least qua construct, and as such it has very real consequences. And that is existence enough, as far as I am concerned. Where I am on board with eliminativism is in not granting habitual mental categories roles in science or metaphysics without first subjecting them to critical evaluation.
A p-zombie is a hypothesized being physically identical to a human but bereft of consciousness.
So, yes, a p-zombie begs the question; after all, it's defined in such a way that assumes consciousness is nonphysical.
However, it can still be used in an argument like so:
1. If consciousness is nonphysical then p-zombies are possible
Ergo,
2. If p-zombies are impossible then consciousness is physical.
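The step from 1 to 2 is just contraposition, with the additional assumption that "physical" means "not nonphysical". A minimal sketch of that inference in Lean (the proposition names are placeholders of my own, not Chalmers' terms):

```lean
-- Contraposition: from premise (1) "nonphysical → zombies possible"
-- we validly infer (2) "zombies impossible → not nonphysical".
-- `Nonphysical` and `ZombiesPossible` are placeholder propositions.
example (Nonphysical ZombiesPossible : Prop)
    (h1 : Nonphysical → ZombiesPossible) :
    ¬ZombiesPossible → ¬Nonphysical :=
  fun hz hn => hz (h1 hn)
```

So the argument form is valid; everything turns on whether premise 1 is true and on whether p-zombies are in fact impossible.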
Of course they are. This is why they tend to say we have these properties, but these things over here, they don't. They are ostensively pointing to the properties, and they are formulating an incomplete theory in an attempt to explain the properties they are pointing to. And I even agree it's a bad theory about what they're ostensively including.
Quoting noAxioms
The notion that either we have an immaterial driver in the driver's seat experiencing things or the toaster feels warmth sounds like a false dichotomy to me.
Quoting noAxioms
Yes, it's the same rock...
Quoting noAxioms
...and it's probably that too. Most of the toaster's mass is in gluons. They're constantly obliterating and reforming. And the next grand TOE may even do something more weird with the ontologies.
Nevertheless, if you had your name scratched onto the toaster when I stole it, it would tend to still be scratched on there unless I scratched it off.
Identity need not be fundamental; it can be "soft"... emergent, pragmatic versus universal by necessity, constructed from invariances, and the like.
Quoting noAxioms
Actually, yes, I can. The toaster reacts to warmth. "Legal me" reacts to warmth as well. But "legal me" also reacts to an increase in blood acidity.
But there's a difference between how I react to warmth and how I react to an increase in blood acidity. I can subjectively report on my feeling of warmth; I cannot subjectively report on my feeling of high blood acidity. Ostensively speaking, warmth is an example of something I subjectively feel; acidity is an example of something I react to but do not subjectively feel. There is no good reason for me to suspect that because the toaster reacts to warmth like I react to warmth, that it is subjectively feeling warmth like I subjectively feel warmth in contrast to how I do not subjectively feel blood acidity.
Quoting noAxioms
Your sales pitch here is a dud. I can play Doom on this computer. I might could even play Doom on my Keurig. But I cannot play Doom on this bottle of allergy pills.
Different physical objects have different physical arrangements, and some arrangements have properties other arrangements don't have. We might could even say certain arrangements of physical objects have privileged status, raising them above other arrangements, and that we are justified in assigning language reserved for some classes of objects.
The "I" I accused you of having is simply a unit of theory of mind as it applies to the linguistic aspect of your posts. Humans can be thought of as objects susceptible to be described in terms of units of theory of mind, at least in the typical sense.
Quoting noAxioms
There are particular arrangements of physical matter that come in individual "toaster"-like bodies, which are embedded in their environments and must navigate them, and which regularly participate in conversations of various sorts with other entities. "Hey google" is not one of these things. But noAxioms is one of these things.
TOM can probably be extended to work in some version on "hey google", but it's distinct enough to reassess how we want to discuss its identity. An automaton of the right type might work better (SDC's are a bit out... the networked trend confuses the information-complex-to-body relation so requires the reassessment). You, OTOH, meet the requirements to apply theory of mind to as humans above the age of five regularly do.
But for some reason, instead of asking me what I mean by "I", or getting this plain reference to the notion that you as a unit consistently type out the same themed argument throughout single posts and across time, you keep going to this "experiencing the device" thing. I understand you're rejecting a bad theory of "I"; I too reject it. But I cannot play Doom on my bottle of allergy pills, and I cannot play debate-the-zombies with my toaster (at least yet).
Not really. I'm saying that the absurdity of the idea that the p(urported) zombies could say things about their experience, even though they have no experience, is the central point that establishes the inconceivability of such a being.
A computer can't tell you it's conscious?
Why would that be a problem for the argument?
It's kind of blatantly obvious that you aren't familiar with Chalmers or philosophy of mind in general.
Not an ad hominem. It's just a fact.
If you read a little about Chalmers' p-zombie argument it will become clear to you why your objections are irrelevant.
I have no interest in explaining it to you just to have you repeat your nonsense.
When someone has enough interest to make a comment but then claims they don't have enough interest to explain it, I smell something distinctly rotten.
I'm a bit lost. This is what a zombie is according to Chalmers:
http://consc.net/zombies-on-the-web/
...that doesn't sound like a computer. So what's the objection here?
That was a reply to Janus, who found it impossible to conceive of an entity that tells you it's conscious when it isn't.
Is the thermostat biologically equivalent to you? But this isn't about what words you can and can't use if you subscribe to this or that. It's about the fact that you do feel warmth. Whether you want to grant or deny that to thermostats is a different matter. I tend to think they're incapable of feeling warmth, although I wouldn't rule some form of panpsychism completely out.
I simply don't believe you when you claim skepticism about feeling warmth just because you can't say the same for thermostats in this discussion.
Would it make it not the same toaster if the name got scratched off, or was never there in the first place? Despite my calling it a 'legal identity' (an old habit), I'm not talking about being able to prove the fact to a court of law. I'm talking about it actually being the toaster in question or not.
Perhaps 'pragmatic identity' fits better.
That's not the story being pushed:
Quoting InPitzotl
Per this assertion, the lack of privilege does not come from a defect or other difference in the physical arrangement.
Not how I'm using it when I make a distinction. The "I" refers to the non-physical experiencer, the thing that gives the privilege.
Age of five eh? Does that imply you were a zombie until some sufficient age? What do you experience before then?
Quoting frank
Quoting Janus
While there are plenty of computers running fixed algorithms that just play pre-recorded messages (a typical phone tree, for instance), a true AI isn't programmed to say any specific words. It learns them, same as you do. Its programming may even have been done by another computer, probably better than a so-called 'conscious programmer' would have managed. Your assertions will rapidly be demonstrated false as capabilities improve.
Irrelevant. This is ostensive.
Quoting noAxioms
Yes.
Quoting noAxioms
If you say so, but all I can talk about is what I mean by "being the same".
Quoting noAxioms
That's David Chalmers' story. I'm not David.
Quoting noAxioms
You said the toaster feels warm. It doesn't matter how you're using the word "I"... the toaster doesn't feel warm. It lacks the parts.
Quoting noAxioms
Yes; by that age, most humans learn theory of mind.
Quoting noAxioms
No, it implies that you can for example pass the Sally Anne test. Theory of Mind has nothing to do with p-zombies.
Most thermostats are not biological, and are thus not the biological equivalent of anything.
I am, however, presenting one as a crude mechanical equivalent of "processing data which could be interpreted as 'feeling warmth'", which is what I claim noAx does.
Sorry for the 3rd person reference, but some of the prior posts have been getting a bit undefined as to what exactly 'I' refers to. noAx is the physical biological human.
Yes it is. Chalmers forbids the usage of 'feels warmth' for the zombie, and the thermostat is a zombie, lacking the added bit that is the difference between zombies and humans. So the word is forbidden, despite the fact that the thermostat measures (via physics!) temperature and reacts to it, exactly as the zombie does. The vocabulary is reserved (by proponents of the existence of the 'additional bit') for objects that have that additional supernatural bit, as evidenced by assertions of 'lies' when the zombie claims that he also feels warmth.
I chose the thermostat since it is the ultimate in trivial data processing. A sensor and a single mercury switch is enough, the opposite end of the complexity spectrum compared to noAx, but fundamentally doing the same thing.
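The thermostat's "data processing" really is this trivial. A minimal sketch (the function name and setpoint are my own illustration, not from the thread): one sensor reading, one threshold, one switch, the same sense-and-react loop with all complexity stripped away.

```python
def mercury_switch(temperature_c: float, setpoint_c: float = 20.0) -> bool:
    """Close the heating circuit when the sensed temperature is below the setpoint.

    This is the entire 'information processing' a mercury-switch
    thermostat performs: one comparison between a sensed value and
    a fixed threshold.
    """
    return temperature_c < setpoint_c

# The thermostat "reacts to warmth": a fixed input-output mapping,
# whatever vocabulary we allow ourselves to describe it with.
print(mercury_switch(15.0))  # below setpoint: circuit closed, heat on
print(mercury_switch(25.0))  # above setpoint: circuit open, heat off
```

Whether one is allowed to call that comparison "feeling warmth" is exactly the vocabulary dispute at issue; the mapping itself is uncontroversial.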
A rock also reacts to warmth, possibly by breaking, but it lacks both explicit sensor and an information processor, at least of the sort with which we're typically familiar.
Chalmers claims that the feeling of warmth can only be had by some supernatural experiencer, and since noAx is not one of those, noAx no more can feel warmth than can the thermostat. I refuse to be placed in a privileged category over it, unearned.
Oooh, magic sauce!
I'm merely skeptical of this immaterial experiencer/possessor, or of the magic sauce, or however it's presented. Surely I'm not the first person on these forums skeptical of dualism.
Use whatever terms you like, but your first person experiences of warmth, pain, color, etc. are not part of the physical descriptions of the world. Thus the motivation for p-zombies. I think it's a problem with the description of the world. As in it's leaving something out, not that p-zombies are actually possible.
Quoting noAxioms
The mercury responds to thermal energy. Cold and warmth are relative to organisms that need to survive in a certain temperature range.
Your opinion is noted, but it provides negligible evidence falsifying an alternate one.
I find the p-zombie description not only possible, but a more accurate description of what's going on. A biological being such as you describe (with agency) would have evolved differently than what we see. Everybody tends to selection-bias that away. I did an old thread on the subject on the defunct PF. Can't reference it anymore. :sad:
Last week @180 Proof recommended the book Descartes' Error in the thread on emotional intelligence (thanks!). I just finished it, and right at the end Damasio describes an interesting case of a pre-frontal leucotomy (where portions of the pre-frontal cortex are removed) which was performed in the attempt to alleviate debilitating neuralgia:
Two days after the operation, when Lima and I visited on rounds, he was a different person. He looked relaxed, like anyone else, and was happily absorbed in a game of cards with a companion in his hospital room. Lima asked him about the pain. The man looked up and said cheerfully: "Oh, the pains are the same, but I feel fine now, thank you." Clearly, what the operation seemed to have done, then, was abolish the emotional reaction that is part of what we call pain. It had ended the man's suffering. His facial expression, his voice, and his deportment were those one associates with pleasant states, not pain.
Does this example support the notion of a p-zombie? Or the opposite?
The two are closely linked. For metaphysical possibility, you can just posit a god that makes it happen.
Physical possibility is trickier. That's whether it's possible in our world.
Yes, the example I give happened in our world. That was what I was thinking. Chalmers calls them logical, metaphysical and natural possibility.
P-zombies would seem to be logically possible in the sense that a robot, for example, could be programmed to talk about all the same things we do, without any of the kinds of experiences which prompt us to talk about them. On the other hand, the idea that an entity could be just like us, and talk about all the same experiences we do, without experiencing any of them, and without being programmed to do so, seems illogical. It also seems physically impossible that an entity could be physically and behaviorally identical to a human in every respect and yet not experience any of the things we do, in fact not experience anything at all.
I get the distinction between being able to feel pain and being bothered by feeling the pain. Mine was a difficult birth: my mother was thirty-seven hours in labour, and I was yanked out with forceps in the end. She told me the pain was extreme and they gave her morphine. She said she still felt the pain when under the influence of the morphine, but that it didn't bother her at all.
Nice example though. Be interesting to have the dualists give their explanation of it. Somehow the surgery seems to have cut off the 'uplink' for 'mal-data' to wherever it gets 'processed' in a way that could be interpreted as pain. The data still gets through to say the rational area, but not to the area which causes distress. How might a dualist explain the 'mind' still getting the data for pain, but not feeling it? It's not like the mind was modified, only some connection in the brain.
Just for info, I do notice that there's multiple 'minds' in me. They hold different (contradictory even) beliefs, and one of the two is clearly in charge, but the other isn't epiphenomenal. But both of them seem physical. No funny external magic thingy.
Right, so I guess it highlights the problem with the p-zombie hypothesis: is it plausible that a p-zombie could accurately report on phenomenal experiences without actually having them? It seems like a p-zombie would actually be in this boat....