Neurophenomenology and the Real Problem of Consciousness
Neuroscientist Anil Seth discusses what he calls the real problem of consciousness in this Philosophy Bites podcast: https://philosophybites.com/2017/07/anil-seth-on-the-real-problem-of-consciousness.html
He defines the real problem as building explanatory bridges between brain mechanisms and phenomenal descriptions. Neurophenomenology is this mapping between rich descriptions of conscious experience and brain processes. It allows us to chip away at the explanatory gap between neuroscience and the hard problem, and may end up suggesting causes rather than just detailed correlations.
Anil also calls this consciousness science, similar to there being a vision science. He goes on to suggest that perception is a process of inference bridging the noisy, ambiguous, indirect relationship with the world and the sensory signals the brain receives. This includes signals from the body.
So the brain combines prior expectations of what's likely to be out there in the body and world with sensory data to come up with its best guess as to what's out there, which is a form of Bayesian inference. He claims there is emerging evidence that the brain is doing something like this.
So instead of perception being outside --> in, it is inside --> out with top-down predictions about what the sensory signals will be. Also, this sort of inferential perception makes sense of disorders like schizophrenia or experiences such as the rubber hand illusion.
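To make the prior-plus-evidence combination described above concrete, here is a toy sketch of my own (not from Seth's work): for Gaussian beliefs, the "best guess" is a precision-weighted average of the prior expectation and the noisy sensory measurement, so the estimate slides toward whichever source is more reliable. The function name and parameter values are illustrative assumptions.

```python
# Toy illustration of Bayesian perception: the "best guess" combines a
# prior expectation with a noisy sensory measurement. Both beliefs are
# Gaussian: prior N(mu_prior, var_prior), sensation N(sensed, var_sense).

def posterior_estimate(mu_prior, var_prior, sensed, var_sense):
    """Posterior mean and variance for two Gaussians
    (a precision-weighted average of prior and evidence)."""
    precision_prior = 1.0 / var_prior
    precision_sense = 1.0 / var_sense
    var_post = 1.0 / (precision_prior + precision_sense)
    mu_post = var_post * (precision_prior * mu_prior + precision_sense * sensed)
    return mu_post, var_post

# Prior: expect an object at position 0.0; the (noisy) senses report 2.0.
mu, var = posterior_estimate(mu_prior=0.0, var_prior=1.0, sensed=2.0, var_sense=1.0)
print(mu, var)  # equal precisions -> the guess lands halfway: 1.0, variance 0.5
```

Note that as sensory noise (`var_sense`) grows, the estimate stays closer to the prior expectation, which is the flavor of top-down prediction the OP describes.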
What this inference allows is a mapping of phenomenology onto mechanism. Anil can then ask questions like, "Is it visual predictions that are represented in visual experience, or the prediction errors?"
I think this is exciting stuff, combining cutting-edge science and philosophy, where the science informs the philosophy and possibly paves the way for a scientific answer to a previously intractable problem, or at least results in a much more specific version of the problem that philosophers can continue to mull over.
Comments (95)
A comedian on stage will tell jokes that mock & insult people's irregularities. A corrupt leader will sentence to death or imprison any citizen or person who does not think like them or act in the correct manner.
There was one person who was told by the doctor that he had terminal cancer & so he did gradually wither & die. But when the autopsy was done it was found that he didn't have cancer after all.
Another person had an incurable disease that many suffer from, but a person who did not know this tried hypnotising the ill person & telling him that the disease would heal now; which it did completely.
So an incurable disease was cured by suggestion, & a healthy man was killed by the simple suggestion that he had cancer.
This demonstrates the power that people can have over us if we believe them. Even when they are 100% wrong.
There is also the story of the girl who fell asleep by the fire & when she awoke she had a different personality entirely. She was a completely different person, to the alarm of her parents, & then a year later she fell asleep by the fire & when she woke up she was her old self again. This happened a number of times, & she was taken to a hypnotist who put her into a trance & ordered her old self to come back to her.
She went limp like a dead person & a loud voice came out of her, & all the people in the building & waiting room heard it & ran. They stated later it sounded like the voice of God. The voice told the two hypnotists to leave the girl alone or It would take the girl's soul away & leave her dead body behind.
There are countless stories like that which prove that mankind's medicine & knowledge are not very developed after all & know next to nothing about the true nature of "The Self".
Psychology & psychiatry were also something the Nazis used to torture & kill people. Anyone who did not look & think like them was obviously an inferior life form with an inferior mind that must be removed etc.
If God appears to you they will say you imagined it. Because all they can perceive about God is their own imaginings & beliefs etc. And if they are atheists or Nazis then that is all they will project onto others. It happens 24/7 on the entire web & world & there is no exception. So be very careful about what you allow yourself to believe. If you put your faith blindly in someone else then you are nothing more than a cult-type member who believes the most ridiculous things.
You have to only believe in yourself & develop & expand your own mind by opening it to God. It's like a flower opening to the sun's light. You'll see things much clearer when you let God be your teacher & guide.
I think it's worth asking: why would people who think there's an "explanatory gap" be likely to accept explanations that are a "mapping between rich conscious descriptions and brain processes"?
Because many of them like Chalmers want a science of consciousness where it's taken seriously, and they think there is a strong correlation between brain activity and consciousness, so it would be informative to map that out. Also, I think philosophers like Chalmers would change their mind on the hard problem if science showed them a way the gap might be explained.
Presumably, proponents of the hard problem became convinced there was a gap because of arguments in favor of a gap, so they could become unconvinced. That's how it should work. We should change our views when good arguments/evidence become available.
No matter how many turns of feedback one supposes the brain has, there is still the question of how the final turn is registered into our awareness without creating another loop. And, if we claim awareness/consciousness or even perception is a mere activity within the cause-effect or action-reaction paradigm, then what is there to prevent plants from being conscious?
My impression has always been that the folks who stress that there's an "explanatory gap" would feel that way no matter what explanation is forwarded, especially because they'll give no clear criteria for what they require of explanations.
That raises the question of whether the explanatory gap lies with the limitations of explanation. Does an explanation have to cause you to experience the transition from explanation to experience?
I'm not even sure what I'm saying here, but part of the problem with not knowing what it's like to be a bat is that no description is going to put you into the state of having a sonar experience. At least, I can't see how it would.
Exactly, and sometimes it seems like that's what critics are demanding.
That is something to take into account, but it's also because we can't say why any brain process would have a conscious correlate other than some just do. Which then limits us epistemologically from knowing about other animals, machines and aliens.
Well, it's akin to asking why any physical stuff has just the properties it does.
Imagine though if atoms had a special property only under certain situations, and we couldn't give a scientific reason for that.
Anyway, nobody has jumped on the inferential view of perception yet. Perhaps I should have made the thread about that instead of another hard problem one.
No, it's not; because physical properties are generally well understood within a coherent system of causality-based theory which comprises the sciences. No one has any idea how physical processes can cause phenomenal perception as it is subjectively experienced.
Quoting Marchesk
There doesn't seem to be a problem with the idea of inferential perception, even AIs may be able to do that. The real puzzle is as to what could give rise to experiencing oneself perceiving; the reflexive experience that probably leads to the notion of the much maligned "Cartesian Theatre".
[quote=Anil Seth]In my own research, a new picture is taking shape in which conscious experience is seen as deeply grounded in how brains and bodies work together to maintain physiological integrity – to stay alive. In this story, we are conscious ‘beast-machines’, and I hope to show you why.[/quote]
Which is an immediate red flag for me, as I don’t see humans as either. If we’re talking such crude generalisations, then I would refer to the well-known popular philosopher, Gordon Sumner, better known as Sting, who insists that ‘we are spirits in a material world’. (Just like René!)
Another question - through the natural sciences we have discovered many capabilities and powers. What capability and power could we expect to discover through this science? When Socrates stood before the portal of the Oracle of Delphi and read the inscription Gnothi Seauton, should he have declared it impossible, due to the lack of the appropriate apparatus? So I can’t help but think this effort is part of the attempt to bring the question of the nature of being itself under the purview of the human sciences. I mean, they’re having a hard enough time figuring out the exact nature of the simplest thing in the Universe, namely, the atom, so good luck with that.
Why is there a mapping at all? Talking about mappings implies some kind of dualism.
I want to know why minds appear as brains when we look at them.
All that matters in regards to the phenomenological investigation is the ‘experience’ as it is regardless of its parcelling into ‘real’ and/or ‘physical’. This is useful to anyone looking into the neurological lens of consciousness as it allows for an agenda-less perspective that can help piece together, or pry apart, relations between data gathered in the cognitive neurosciences and the phenomenological approach that is wholly unconcerned with ‘physicality’ yet never in denial of the physical sciences.
Note: I don’t believe ALL cognitive neuroscientists regard neuro-phenomenology in the same manner as some drift from Husserl’s stance (Varela is how I found Husserl and the phenomenological approach to the cognitive neurosciences).
I’ll pull up the little article Varela wrote about the cognitive neuroscience and the different ‘philosophical’ approaches later when I find the link. I think it will be very useful for this particular discussion - couldn’t be MORE fitting :)
The inferential part about perception where the brain is guessing at what the sensory inputs will be is different than what people arguing philosophy say perception is, and the idea that you could arrange experiments to help map that indirect computation onto experience might possibly lead to discovering a causal link, instead of just supposing that argument has determined a priori that such a thing can't exist.
Of course the arrow goes both ways, as the brain updates its guessing with new inputs it receives as it tells the body to move about. Maybe this view of perception would find some agreement from the Kantians, with the inference mechanism being part of categorizing the sensory manifold.
The Oracle (from the Matrix Reloaded movie):
I think the whole "brain-predicts our reality" thing is just another way to say that our consciousness/awareness is efficient, harmonious or works along the path of least resistance (chaos). I don't think it's a new approach to understanding consciousness, just a scientific version of an idea already existing in the philosophical paradigm.
I definitely agree with this. This is the essence of pragmatism really and pragmatism seems to be the dominant philosophy within science itself.
“So the brain combines prior expectations of what's likely to be out there in the body and world with sensory data to come up with its best guess as to what's out there, which is a form of Bayesian inference. He claims there is emerging evidence the brain is doing something like this.”
An altogether terrifying prospect. This seems to imply that the real universe is different from the one that exists inside our heads. It would begin to explain why a group of people can witness the same event yet perceive and remember it quite differently. For me this really highlights the importance of a collaborative approach to understanding reality.
“My impression has always been that the folks who stress that there's an "explanatory gap" would feel that way no matter what explanation is forwarded, especially because they'll give no clear criteria for what they require of explanations.”
I wouldn’t be overly concerned with these critics. Seeing as philosophy is a long-term collaboration, the real test of any argument or explanation comes from repeated critique over multiple generations. If the explanation or argument has value, somebody will point it out eventually. There might even be an explanation to be made about what we infer from our expectations of reality, based on a critic’s expectation that this explanation will be unsatisfactory.
But we already knew this was the case, at the very least because our senses are limited, and many things we only learned about the world after we had the technology to perform experiments and gather data to tell us how the universe was different.
All materials have unique properties in certain situations. I'm not sure what makes some of those properties "special." And the scientific reason never amounts to seeming like the properties in question. We just describe the materials, their structures and relations, how those are different in particular situations, and then report the properties exhibited.
Yet again you don't understand what I'm saying. All we can do is talk about the physical stuff in question. Talk about it structurally, relationally etc. None of that ever seems like any of the properties we ascribe to it. Or if it does to anyone, it's simply because they're so used to making the association and not questioning it that they take it for granted.
So if we replaced your brain with sawdust, you'd still be conscious?
For a phenomenologically minded empiricist, the very meaning of a scientific hypothesis is the sense in which experience is said to corroborate or refute the said hypothesis and this sense cannot be transcendent of experience. Therefore this empiricist is likely to reject your question as meaningless and inapplicable in the first-person.
Nevertheless, neuro-phenomenology can potentially have sense in the first-person, in terms of an association between sensory experiences and brain-probing-experiences, as for example in an experiment in which the subject records his experiences when probing his own brain.
Please provide a link to your paper when published.
The OP was referring to one neuroscientist's approach to explaining consciousness, or at least providing more detailed correlation. My question would be the hard problem, I take it. That problem comes about because of the expectation that science can provide an explanatory framework for everything.
The hard problem also has to do with the fact that we are trying to understand the very thing we are using to understand anything in the first place. Consciousness is the very platform for our awareness, perception, and understanding, so this creates a twisted knot of epistemology. Indeed, the map gets mixed into the terrain too easily and people start thinking they know the hard problem when they keep looking at the map again!
We cannot help it, as a species with language. Language itself is a form of secondary representation of the terrain. We largely think in and with language, so to get outside of that and then reintegrate it into a theory using language is damn near impossible. For example, let's take a computer. The very end result is some "use" we get out of a computer. The use is subjective though. Someone can use the computer as a walnut cracker, and they would still get use out of it. The user's experience and use of the computer is what makes the computer the computer. Otherwise it is the raw existence of a thing. A computer is nothing outside of its use to the user.
Then, let's go down to the other end to its components. Computers are essentially electrical signals/waves moving through electrical wires- moving on/off signals. These electrical signals are just impulses of electricity through a wire. That is it. However, because we quantized and represented things into a MAP of 0 and 1, and further into logic gates that move information to make more quantified information, we now have a way of translating raw existing metaphysical "stuff" into epistemically represented information. Every time we look at any piece of raw stuff, we are always gleaning it informationally.
One way to try to answer the hard problem is to call consciousness raw "stuff" rather than information (panpsychism or some sort of psychism). It is a placeholder for simply metaphysical "existing thing" that we then represent as "mind stuff" or "mentality" or "quale". Other than panpsychism, which is just a broad view of "mind stuff", there is not much else one can do to answer the hard problem, because it will ALWAYS have a MAP explanation of the terrain. What is an electrical signal if not simply represented as a mathematical equation, an on/off piece of data, a diagram, an output of usefulness (the use of a computer discussed earlier)? The raw stuff of existence can never be mined. The terrain is always hidden by the map.
Or to paraphrase that last thing in terms of your post, it's the essence of the things themselves that we can't get at, and it remains unspeakable.
For those who prefer to conflate mystery (a form of ignorance) and incommensurability, consciousness is a hard problem.
I view the "Hard Problem of Consciousness" as a conceptual mystification, and consciousness (mass noun) as a set of sensitivity-awareness (body-mind) conditions which vary.
For example, human beings may be:
1) Conscious: uninhibited (physiologically unconstrained) and actively aware,
2) Semi-Conscious: inhibited (physiologically constrained) and passively aware, or
3) Non-Conscious: inhibited (physiologically constrained) and unaware.
:cheer:
It isn't so surprising that the hard problem is unsolvable when the capacity to give an account attempting to solve it apparently undermines any account.
Just as in the other thread I have no idea what point you are trying to make here.
For once--well, twice really--we agree.
Exactly.
Yes. The things-themselves, the terrain. Panpsychism is simply a broad term for "that" when talking of terrain... that darn mysterious beetle that we know but cannot describe. I don't know which is more odd, panpsychism or neuropsychism. Panpsychism has the oddity of all processes or matter having some sort of experientialness. Neuropsychism becomes some sort of property dualism related to the neurochemistry/physiology interacting with the environment of an organism that begs the question of where and why the psychism aspect of the neurology arises, or what is the nature of this psychism that is arising.
Much of your post seems to be mischaracterization of the problem and/or a poor choice of terms.
Take your first sentence for instance:
Quoting schopenhauer1
What do you mean by "understand"? How do you understand something you use, if not by using it?
Quoting schopenhauer1
What do you mean by "map"? I can use the map to get around the world because the map is about the world. Also, the map is part of the world!
Quoting schopenhauer1
What do you mean by "use"? How can you use something without knowing, or understanding, anything about it? Could you use a pillow as a walnut cracker? Using a computer instead of a pillow for cracking walnuts says something about the nature of walnuts, pillows and computers, no?
Quoting schopenhauer1
An analogy would be digitizing an analog signal. Our brains seem to compartmentalize the stream of information from the environment. The mental objectification of "external" processes and relationships makes it easier to think about the world to survive in it.
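To flesh out the digitization analogy (a minimal sketch of my own, not from the original post): digitizing samples a continuous waveform at discrete moments and snaps each sample onto a fixed grid of values, trading fine detail for tractable, discrete representation, much as the mind is said to carve continuous processes into discrete objects. The function and parameter names are illustrative.

```python
import math

def digitize(signal_fn, duration, sample_rate, levels):
    """Sample a continuous signal in [-1, 1] and quantize each
    sample onto a grid of `levels` evenly spaced values."""
    samples = []
    n = int(duration * sample_rate)
    step = 2.0 / (levels - 1)           # quantization step across [-1, 1]
    for i in range(n):
        t = i / sample_rate
        value = signal_fn(t)            # continuous "analog" value
        quantized = round(value / step) * step
        samples.append(quantized)
    return samples

# A 1 Hz sine wave sampled at 8 Hz, quantized to 5 levels
# (-1.0, -0.5, 0.0, 0.5, 1.0): smooth motion becomes a short list of
# discrete states, losing the in-between detail.
wave = digitize(lambda t: math.sin(2 * math.pi * t), duration=1.0, sample_rate=8, levels=5)
```

The quantized list is strictly poorer than the original sine function, yet far easier to store, compare, and reason about, which is the trade-off the analogy gestures at.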
Quoting schopenhauer1
This is kind of what I was wanting to get at when talking about the monistic solutions. If the rest of reality is really made of the same "stuff" as the mind, then I don't see a mind-body problem. I don't see a reason to be using terms like "physical" and "mental" to refer to different kinds of "stuff", rather than different kinds of arrangements, processes, or states, of that "stuff". This is also saying that the experiential property isn't necessarily a defining property of the "stuff", rather a particular arrangement of that "stuff". So you can have objects without any experiential aspect to them. In other words, realism can still be the case and the world and mind still be made of the same "stuff".
Physicalism and Panpsychism aren't really saying anything different. They are both saying that the mind and world are made of the same "stuff" that can interact. There are simply different kinds of arrangements, processes, or states of this "stuff". The only difference is what they call the "stuff" - "physical" or "mental".
Unlike everything else we try to understand, with consciousness we are investigating the very phenomenon that allows anything else to be understood. A computer is a computer only in relation to how the user perceives it, or what the user is conscious of about the computer. A computer may not be a computer in and of itself without a consciousness. The thing itself is perhaps unknowable outside of a human consciousness and is certainly defined by it epistemically.
Quoting Harry Hindu
The map is the secondary layer we create to make meaning of the world; it is not necessarily the world as it is in itself. It is a representation of what's going on, but not what is going on. We conceptualize and quantize what is going on, but it would be a mistake to say the conceptualization and the quantization is the metaphysical thing itself. It is just something we do to make epistemic sense of things, perhaps because we are a naturally linguistic species. It was also my claim that we can never get past an epistemic sense.
Quoting Harry Hindu
The question is why experience comes out of a particular arrangement of stuff. We know that it does, but why this particular characteristic of experientialness comes out of it is the question begging to be answered here. By saying that this happens, there is now some sort of metaphysical dualism: arrangement of stuff/experientialness. This presents a bifurcation like any dualism.
Quoting Harry Hindu
That makes a major difference though. It is a pretty big philosophical leap to suggest that most forms of matter or processes have an experiential aspect to them. That is what panpsychism holds.
There is, but better mapping/measurements could lead us to clues and reduce the explanatory gap. Assuming this is impossible is assuming that our a priori arguments for the hard problem are bullet proof. And history isn't kind to that sort of certainty.
The "consciousness problem" in philosophy has always been more ontological: how something that exists as neurons firing (or, to go further, their atomic and subatomic constituents) appears as a unified phenomenal experience. A revolution like that is probably not going to appear in neuroscience.
As it is, over the last few decades we have amassed a wealth of mapping, for example fMRI correlations with mental events and general mental status (it can help predict future mental health issues, for example). Has that helped anyone who buys into the "hard problem" and who thinks there's an explanatory gap come any closer to believing that that's not the case? I don't think so.
The "hard problem" arises due to a combination of (a) a bias against seeing mentality as something physical, (b) bad analysis of what explanations are and what they can and can't do in the first place, and (c) sundry other ontological misconceptions.
Mentality cannot be seen as "something physical", so it's not a matter of bias. Bias would be to say that mentality simply cannot be something physical.
And again with your red herring about "explanations". Explanations must explain, that is all. If you want to say that mentality definitely is something physical, then the burden is on you to explain that in such a way that any unbiased interlocutor would be able to see that, oh yes, mentality really is something physical after all. And part of that explanation would be to address the question as to why it does not appear to be something physical. Without such an explanation the claim that mentality must be physical is itself merely an expression of bias.
And then your little hand-waving gesture about "sundry other ontological misconceptions". But by all means carry on with your vapid unargued assertions.
You're not being an Aspie about the word "seen" there, are you?
The problem is that identifying the mental with the physical is a category error, since they are two different domains. And it doesn't explain why there would be an identity for some brain states and not others, nor does it tell us whether other physical systems different from our own would be conscious.
The domain of mental is: belief, desire, pleasure, cold, taste, color, sound, emotion, dreams, hallucinations, etc.
The domain of the physical is: physics, chemistry, biology, function, structure, brain states, etc.
The problem is that it's not a category error. The mistake is thinking that they're "two different domains." That's very addled thinking that has no justification aside from ancient confusions that should have long been abandoned.
If you're talking about different conventional ways to talk about things, surely you're not suggesting that ontology (or more importantly what ontology is about) in some way hinges on how people normally talk about things, are you?
Quoting Marchesk
"bad analysis of what explanations are and what they can and can't do in the first place"
I'm saying our making ontological arguments does.
Quoting Terrapin Station
They're not conceptually the same sort of "things" at all. On the one hand you have abstracted, objective descriptions, and on the other, you have experiences.
Quoting Terrapin Station
Maybe, but then we're still stuck with the limits of what physicalism can explain, and not being able to say whether some physical system different from our biology is conscious.
You're saying that what ontology is about, what it's addressing, somehow hinges on the conventional language used in the ontological arguments we make?
Quoting Marchesk
They're not conceptually the same things when your concepts on this issue are confused or based on confusions, incorrect and incoherent beliefs, etc. Sure. And plenty of people have confused concepts about it. The solution to that is to make those people no longer confused.
Quoting Marchesk
We can and do say that for plenty of things. Do you mean with certainty or something?
I think that you must reach that understanding first, then you are able to use that understanding.
I'm saying that your ability to make an identity claim of consciousness to brain states is based on ontological talk. But I'm criticizing that on the grounds of a category error. Obviously, reality doesn't care what we say about it.
Ability to make a claim is "based on talk" in the sense that it's talk and one has to use recognizable language to make an intelligible claim.
What's being claimed, however, is in no way based on talk. It's based on what the world is like. Talk is secondary to that.
Category errors are not about conventional language usage. The idea that anything should conform to conventional language usage rather than conforming to what the world is like is ridiculous.
I can say the world is like a square circle, and you can rightfully tell me that's a contradiction.
Ohhhhkay . . . and?
(That trope is typically a misunderstanding of something, by the way--it's not actually about the words "square" and "circle" as in the definitions of the shapes. It's about a geometry problem re area instead.)
You're making a claim about the world that's problematic for several reasons. If it wasn't, there wouldn't be a hard problem, for all the reasons that have been stated many times before.
Goddammit man, I just explained why there's a "hard problem."
You tried, but I think there's a hard problem without the quotes, and that's why I'm explaining it to you.
That's fine that you think that, but that you do is a combo of the reasons I explained. Including that you are confused in thinking that it's a category error. That was part of my explanation.
Since you're not suffering any confusions on the matter, can you:
1. Explain why only certain brain states are conscious?
2. Say whether a machine like Data would be conscious?
3. Draw a line on which animals are conscious?
4. Say whether a perfect simulation of your brain would be conscious?
I can, but we're going to go over (so that we agree on) what explanations are and what they can and can't do first, so that you don't just say, "That's not an explanation" afterwards. Are you prepared to do this?
Remember that the second part of why there's a so-called "hard problem" is "bad analysis of what explanations are and what they can and can't do in the first place".
Sure, so let's start with how you'd characterize explanations in general. What are they? What do they do?
By learning, but not by use. You can't use something until you have it there to be used.
I feel like invoking @Banno at this point. Definition of explanation?
So, first let's make clear then that you apparently don't even have a view about just what counts or doesn't count as an explanation in general, including why it counts or doesn't count, yet you're offering criticism on the grounds of whether something is an explanation.
If we're going to criticize something on the grounds of whether it's an explanation, we'd better have some idea of what counts/doesn't count as an explanation and why.
One conventional dictionary definition of "explanation" is "a statement or account that makes something clear." Does that seem good, or would you say it's problematic for some reason or another?
No need to play Socrates. You already know how to use the word; setting out a definition will only lead to showing off.
Hence Quoting Marchesk
leads to Quoting Terrapin Station
when it should have led to a story about the difference between being awake and asleep.
This doesn't inspire confidence in me that using the language game approach can solve philosophical problems.
However, I was reminded of it when trying to think of what explanation means, and not having a good answer come to mind without consulting a dictionary. Or at least, not one which didn't lead to murky waters.
Unfortunately, "you know it when you see it" won't cut it for something highly disputatious where we're trying to avoid biases/prejudices that folks have.
I didn't suggest the dictionary definition that I did because I thought it was good or that it would work for our purposes.
Re chemistry, you say it "explains the properties." How, exactly? Figuring that out will help us figure out what explanations are, what they can do, how they can do it, etc. We need to figure that out if we're going to forward philosophical critiques based on whether something is an explanation. Simply intuitively saying "I don't feel this is an explanation" sets up a non-winnable situation if someone is not intuitively inclined to believe that consciousness is physical.
So explanations of how automobile engines work, for example, or how to make toast, etc. have something to do with the difference between being awake and being asleep?
What I'm rather doing is highlighting what the real problem is when it comes to the "hard problem." A real problem that no one wants to address.
Well then, just spell out the real problem. Give your analysis of what an explanation is. I can't think of a definition that isn't either controversial or overly simple.
I did. That's what started this tangent.
Quoting Marchesk
I didn't give one. But my analysis would stress the subjectivity of what counts or doesn't count as an explanation. However, I'm not hinging any argument on whether there's an explanation for something. You (and others who accept the "hard problem" at face value) are hinging an argument on that. Hence you should have some plausible demarcation criteria for what counts as an explanation, why it counts, etc. Those criteria would need to differ from mine by not stressing subjectivity; otherwise we're really just saying something about relative psychological dispositions. That's what my analysis would be about, because that's what I believe is really going on with explanations whenever someone objects that something descriptive "isn't really an explanation."
You're certainly safe for William Tell purposes.
And there wouldn't be any need for further philosophical debate on the matter. There would be a consensus and it would be resolved. There would be no more mystery. It would be like the sun rising and setting, in that we understand what gives rise to the experience of the sun moving through the sky, and there's no debate.
For the record, you seem to be arguing in good faith and earnestly, Terrapin seems to be constantly just jerking you around and not addressing the questions at hand. In fact, he doesn't even address his own "prerequisite" questions that are supposedly going to "dissolve" the mystery. This is most likely because he doesn't have any good arguments, so it's just a long trolling holding pattern. I'd like to actually see some arguments from him for once, but I suspect he's bereft.
You're missing the point that this isn't just about explanations of consciousness. If we're critiquing something in terms of whether there's an explanation, then we'd better have a general account of what explanations are, what they can and can't do, how they do it, etc.
But you don't have to map everything to understand the Gap Problem. Just take one example of Conscious Experience and study it. I like to study the Gap between the Mapping of Neural Activity for Red and the Experience of Redness. There is a huge Explanatory Gap in between the Neural Activity and the Experience in this case. No amount of other kinds of Mapping closes this particular Gap to any degree. If Science can solve just one particular Gap it will solve all the other Gaps in one giant leap of discovery.
Yes, too much reliance on definition can do that to you. Not everything can be precisely defined. ... Not everything should be precisely defined.
Some things are intrinsically vague. Or at least the terms we use to describe them are. We all know well enough what an explanation is. There is no need to be more precise than this:
Quoting Terrapin Station
Isn't that more than sufficient for our needs? If not, what does it lack? :chin:
What's to stop anyone from effectively arbitrarily saying that something is or isn't an explanation in that case? And if that's what we're doing, how would whether there's an explanation for something serve as a hinge for philosophical or scientific claims?
Awkwardness?
That doesn't seem to be working. ;-)
An argument explaining why the purported explanation fails to explain the phenomenon in question.
If you say that consciousness is identical to brain states, as identity theorists do, then I can ask: what is it about those brain states that makes them identical to consciousness, and not all the other brain states, which are unconscious?
And all the other related questions that go with that. What I want from an identity explanation is an account of what makes the identity an identity. Just asserting an identity doesn't work, because there are brain states which aren't identical to consciousness, and it leaves us in the dark about robots and animals with different brain states.
Right, except that in philosophical discussions the lack of clarity leads to much semantic wrangling. However I wonder if that isn't more to win the debate than it is really seeking clarity. And I'm as guilty of semantic wrangling as anyone else.
My favorite is, "I can't make sense of X." Yeah you can. You just don't want to because of the philosophical lens you're viewing it through.
No one does that in a way that implies an actual demarcation criterion they have for explanations in general.
Yes, semantic misunderstandings are common, and I think the reasons for this are obvious. ( :chin: ) But there is also the paralysing effect of people holding up the discussion by demanding definitions, sometimes of the most commonly-used and -understood terms. I think there is room for a middle-path compromise here? :chin: