The Churchlands
I have been studying Philosophy of Mind for years. I came on here and made a fairly lame argument in favour of eliminative materialism, and was raked over the coals. No worries. But I've recently sensed (I think?) a bit more acceptance of a neuroscientific approach to understanding consciousness. Or maybe not?
I do know the member who told me "we eat materialists for breakfast here" is gone, or lurking. Anyway I'll make this statement for discussion
The statement "eventually consciousness and qualia will be explained with neuroscience" is speculative, but no more so than "consciousness and qualia will never be explained by neuroscience."
Oh... and are you folks familiar with the Churchlands, Paul and Patricia?
They can be considered already explained: they are simply a product of the activity of our neurons. What else do you want to be explained?
I will tell you what else: your inner feeling of being an "I", a subject. Any explanation cannot but be objective and, as such, will always leave you unsatisfied.
So, actually, we are just pretending that they have not been already explained.
How about grammar? Syntax? Semantics? Do you think they will be explained in terms of neuroscience?
Actually, one of the well-worn Einstein sayings comes to mind here: “It would be possible to describe everything scientifically, but it would make no sense; it would be without meaning, as if you described a Beethoven symphony as a variation of wave pressure.”
It's also worth noting that the *only* time you read the word 'qualia' is in connection to this particular clique of American academic philosophers, of whom the Churchlands comprise about half.
And what does "neuroscience explanation is unable to meet our subjectivity" mean?
....
it's rejected. It's like saying "it's just air blowing through a bent tube" regarding John Coltrane playing sax. Something can have two descriptions.
Why not? "Qualia" and "quale" have been used by philosophers of mind for many years. And the Churchlands are Canadian, to be anal about it!
I think you are right. And maybe there are many descriptions for the same thing. What is interesting is that some descriptions of things upset or trigger people. Generally this is when the description appears to violate their value system. Tip: avoid using the word spiritual on an atheist forum.
Quoting GLEN willows
Probably true. But here's the thing. People keep talking about how the 'miracle' of consciousness is one of the barriers to accepting physicalism as a viable worldview. But I suspect that even if we can demonstrate that consciousness and our sense of self is a product of the brain, the way digestion is the product of the stomach, there will still be people unconvinced, or who will find other ways to advocate for some notion of soul. And vice versa.
But it's pointless. That's the point! But if you don't see the point, then there's no point.
The serious point is that when something is scientifically disproven, we simply move along. Abandoning dualism is a hard one, I admit. It forces us to admit that there's only one thing, the brain, and that everything is matter. All our thoughts and ideas, our appreciation of beauty, are just the result of a piece of lumpy grey matter. Definitely takes the romance out of things (ha), but I'm still surprised that so many philosophers cling to the notion that consciousness is in a special category, a mystical or at least special thing.
Neuroscientists, and neurosurgeons, need to understand the workings of the brain, but that tells us nothing about the problems of philosophy. So, yes, I've heard of the Churchlands, and I agree with their numerous critics. Over and out.
"We" is anybody thinking that neuroscience has not explained consciousness and qualia.
"Meet our subjectivity" is my formulation of what is considered not explained by those who think that neuroscience has not been able yet to explain consciousness and qualia.
I think you'll find a lot of people here who are sympathetic to the position you support. I certainly am. At the same time, I can understand why people resist. Consciousness is so personal, intimate, fundamental, experientially mysterious. How could it all just be mechanical. How the heck do electrical impulses become the movies I see in my head? The brain is outside, but I'm here inside. They are obviously different types of things. They are in different categories.
I strongly believe that everything I experience results from processes that take place in my body, primarily in the nervous system. If I may steal a concept from another discipline, there are no hidden variables. But saying that "explains" consciousness is like saying chemistry explains life. And the answer is (drumroll) emergence. Yes, it has become a cliché, but that doesn't mean it's not true. Saying I can break down (analyze) mental phenomena into biological processes isn't the same as saying I can build (synthesize) mental processes up from biological Lego blocks. It doesn't work both ways. That is the hallmark of strong emergence. As P.W. Anderson wrote, "More is Different."
We've had lots of discussions about this here. If you're lucky, Apokrisis will come along and explain downward constraints in hierarchical systems.
A practical example. Consider a neurological expert who claims that data shows that some area within the brain performs a function. You won’t see anything like ‘a function’ when you look at the data, which presumably consists of graphical images of neural activity and so on. You must take the expert's word for it that this data means such-and-such. That ‘meaning’ is always internal to the act of judgement - you won’t see it in the data, not unless you are likewise trained in the interpretation of what the data means.
See also this article on the neurology of mice.
That's mice, right? With smells. So good luck with working out the neurology of Justice, or Truth, or Beauty!
And yet you've just waffled on for half a page 'explaining' to us exactly the sort of thing 'judgement' is...
Quoting Wayfarer
Quoting Wayfarer
Quoting Wayfarer
Quoting Wayfarer
Quoting Wayfarer
...all statements about the nature of judgement, presumably arrived at using judgement. So it seems you can use judgement to explain some things about judgement after all. Which means it's the method you object to, not the mere fact of using judgement to reach conclusions about the nature of judgement.
Are you jealous of my fMRI?
How else could it be explained? We are what we eat. Material! A huge and complex collection. With a face that can smile, arms and legs, a brain to be aware and think, etc. It's the nature of the material that's important.
Of course. They can be explained as neuronally structured, dynamic, parallel patterns of spike potentials. Every process in the physical world can be simulated and ordered, or analyzed, or used, or creatively varied into new patterns. Which happens in structured ways. All perceptions are related to, and partially shaped by, other ones. With appropriate constraints, usually holonomic.
and without reference to neuroscience, which is the point.
Quoting Isaac
Do you believe in God, or is that a software glitch?
No, the point was that you claimed judgement about 'judgement' was begging the question, then performatively contradicted yourself by making judgements about 'judgement'. You did so from your armchair; I do so by studying people in more controlled situations and examining brain images. So I'm asking - if the whole 'judgement about judgement' issue is now cast aside, then all you're left with is an objection to my methods. So what's so damn special about your armchair that trumps my lab?
That's not what I said. I said
Quoting Wayfarer
Quoting Isaac
And what I'm saying is that, in order to do that, you need to employ judgement. You have to judge what the data means, and so on. So if you're claiming to explain the entire cognitive function of man - which is the claim that is at stake - then you are employing the very faculty which you're attempting to explain.
Of course, if you're merely conducting neuroscientific research, then you're not doing that. But that is not what is at issue.
You said it right here...
Quoting Wayfarer
As to...
Quoting Wayfarer
I agree. Since when has scientific inquiry proceeded only on the ground that such an inquiry was 'required'? I don't think even the Churchlands are making the claim that their inquiry is 'required' (though I wouldn't put it past them; I've no great sympathy for their approach).
That out of the way, this is where I'm at on consciousness vis-à-vis materialism:
1. Physical: matter & energy
2. For consciousness to be physical, those who think so need to demonstrate that it is either matter or energy or some combination of both. If this is impossible to do, nonphysicalism remains a viable alternative as to the nature of mind.
3. That consciousness is matter (has mass & volume) seems a bit farfetched. Does the thought I'm entertaining as I pen this post have mass & volume? Shouldn't I be gaining and losing weight continuously then, and shouldn't my brain swell and shrink? "A bit naïve," a (neuro)scientist might say, "that's an (over)simplification." My response: Possibly, but the ball is in your court as to what precisely is meant by "mind is physical" in the sense that it is matter.
4. Next, energy. Heat is energy: it can do work (steam engines) and it can be measured (joules). If the mind is energy, then explain how it can, for example, lift a feather off the table, and how many joules it amounts to.
5. Mind is patterns in the physical (matter & energy), but then patterns are substrate-independent (punchcards, logic gates, cellphone radio signals can all encode the same info). Doesn't that imply the mind is, at a minimum, quasi-independent of matter & energy? It can be transferred, like me Xeroxing a document, from one substrate to another. That's a win in my humble opinion for nonphysicalism, for the simple reason that as per physicalism the mind perishes with the body at death.
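The substrate-independence point in (5) can be made concrete with a small sketch (purely illustrative; the encodings and message are my own choices, and this claims nothing about minds): the same pattern of information survives translation across physically different carriers.

```python
# One "pattern" (a short message), three physically different encodings.
message = "MIND"

as_bytes = message.encode("ascii")               # voltages in memory cells, say
as_bits = "".join(f"{b:08b}" for b in as_bytes)  # holes punched in a card
as_ints = [int(as_bits[i:i + 8], 2) for i in range(0, len(as_bits), 8)]

# The substrates differ, yet the same pattern is recoverable from any of them.
recovered = bytes(as_ints).decode("ascii")
assert recovered == message
```

Whether this kind of transferability licenses the nonphysicalist conclusion is, of course, exactly what's in dispute.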
And I stand by it. If the claim is
Quoting GLEN willows
Then part of what will be explained is the faculty that provides the capacity to explain.
Yep. The apparent circularity of which you've just shown to be unproblematic by your doing exactly that from your armchair.
So the question remains. If you can explain "the faculty that provides the capacity to explain" from your armchair (using that very capacity), why can the Churchlands not do so from their lab (also using that very capacity)?
[quote=Ed Feser]One aspect of the mind that philosophers have traditionally considered particularly difficult to account for in materialist terms is intentionality, which is that feature of a mental state in virtue of which it means, is about, represents,points to, or is directed at something, usually something beyond itself. Your thought about your car, for example, is about your car – it means or represents your car, and thus “points to” or is “directed at” your car. In this way it is like the word “car,” which is about, or represents, cars in general. Notice, though, that considered merely as a set of ink marks or (if spoken) sound waves, “car” doesn’t represent or mean anything at all; it is, by itself anyway, nothing but a meaningless pattern of ink marks or sound waves, and acquires whatever meaning it has from language users like us, who, with our capacity for thought, are able to impart meaning to physical shapes, sounds, and the like.
Now the puzzle intentionality poses for materialism can be summarized this way: Brain processes, like ink marks, sound waves, the motion of water molecules, electrical current, and any other physical phenomenon you can think of, seem clearly devoid of any inherent meaning. By themselves they are simply meaningless patterns of electrochemical activity. Yet our thoughts do have inherent meaning – that’s how they are able to impart it to otherwise meaningless ink marks, sound waves, etc. In that case, though, it seems that our thoughts cannot possibly be identified with any physical processes in the brain. In short: Thoughts and the like possess inherent meaning or intentionality; brain processes, like ink marks, sound waves, and the like, are utterly devoid of any inherent meaning or intentionality; so thoughts and the like cannot possibly be identified with brain processes.[/quote]
Quoting Isaac
There's something they're not acknowledging, because of the blind spot of science. Because in a lab situation, you're concerned with objective and measurable phenomena. First person consciousness is not objective, it is 'what observes'. That is why Churchlands, Dennett, et al, are called 'eliminativists' - it is the first-person nature of consciousness which they are obliged to deny. Hence, the blind spot.
Yes, in the same way that one day, neuroscience will eventually be explained by neuroscience.
I just want to make this point - if you read my first post my statement for discussion was roughly "saying neuroscience can explain consciousness is speculation," but so is "neuroscience will never explain consciousness." At this point no one understands consciousness - everything is speculation. That's why I find it the most interesting aspect of philosophy/science.
I'm being careful NOT to claim it WILL be explained by science, just that it could. I would argue that inductively speaking, science has proven many phenomena in history that were considered "mystical" or simply outside of the realm of the scientific method. At one point the concept of "life" was thought of that way.
Don't forget Einstein thought quantum entanglement was "spooky."
I'm actually curious about what it is about their method that you dislike?
[quote=Ms. Marple]Most interesting![/quote]
The issue you refer to (intentionality/aboutness) is what AI researchers are presently struggling with. It appears that computers, present state-of-the-art AI, can in a certain sense "understand" syntax. Even a cheap PC can be programmed in such a way that it'll make, at most, say, one grammatical error in (hyperbole alert) 10[sup]100[/sup] years. It's with semantics that AI and computers in general trip up.
An aside: An interesting corollary of this fact is that it's likely that syntax evolved way before semantics in the primate brain. Yet, oddly, there's what's called broken English (an inability to get the grammar correct). It's quite a puzzle.
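The syntax/semantics split above can be illustrated with a toy recogniser (the mini-grammar and lexicon below are invented for the example): a program can check that a string conforms to grammatical rules while attaching no meaning whatsoever to the words it manipulates.

```python
# A toy grammar: Sentence -> ADJ* NOUN VERB [ADV]
# The checker validates word order against categories; it attaches no meaning.
LEXICON = {
    "colorless": "ADJ", "green": "ADJ",
    "ideas": "NOUN",
    "sleep": "VERB",
    "furiously": "ADV",
}

def is_grammatical(sentence):
    tags = [LEXICON.get(word) for word in sentence.split()]
    if None in tags:
        return False  # unknown word: can't even parse, never mind understand
    while tags and tags[0] == "ADJ":  # strip any leading adjectives
        tags = tags[1:]
    return tags == ["NOUN", "VERB"] or tags == ["NOUN", "VERB", "ADV"]

# Chomsky's famous example: syntactically well-formed, semantically absurd.
print(is_grammatical("colorless green ideas sleep furiously"))  # True
print(is_grammatical("furiously sleep ideas green"))            # False
```

The program happily blesses a meaningless sentence, which is roughly the sense in which machines "do" syntax without semantics.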
"Saying I can break down (analyze) mental phenomena into biological processes isn't the same as saying I can build (synthesize) mental processes up from biological leggo blocks." I believe that we will eventually build robots with consciousness. Do you really think that's impossible?
But the Churchlands (and by the way I don't agree with everything they write) use the term eliminative materialism. Anti-materialists seem to have this image of science "destroying" things by explaining them. I hope I don't have to list all the misconceptions humans had before science took off the blinders. And each time, removing the false beliefs - in this case, I would argue, the belief that consciousness is a separate "thing" from the brain - caused great fear and disbelief.
I don't agree with the Churchlands about folk psychology. We still say "the sun rises every day" when we know that's not how it works. We still say "love is in the heart" when it's not. And we WILL still talk about consciousness and what-it's-likeness if/when it's proven to be the action of neurons or other brain processes.
Historically, explaining nature scientifically, and disproving God to many (I'm an atheist) caused a lot of panic. Ex. "How will I know right from wrong without God??" In my opinion, we will eventually realize consciousness is part of the amazing processes swirling through our brains. And we'll still have intentionality, qualia, and philosophy, and the world will still be beautiful.
ps - I think consciousness may require a NEW method of study - that we don't yet have. You know - like when they invented those "microscopes" to study bacteria? But - sigh - I could be wrong.
That does not follow, though, because evolution did not have to mimic computer science. My understanding is that apes and even birds have a vocabulary, but they lack syntax -- the capacity not just to say a word but to combine several words into a meaningful whole.
Yeah, one point of interest is that syntax has, how shall I put it, semantic functions as well. I mean grammar serves to remove semantic ambiguity that can crop up in language without grammar.
I was kinda working from a biomimetic angle: all that humans have invented and are capable of creating are mirrored in nature (planes - birds, rockets - octopus siphon, so on and so forth). I would've expected our copies of nature's creations to be hi-fi so to speak, right down to the sequence in which they occurred.
More likely, the sequence reflects what was necessary or possible. You cannot invent syntax before vocabulary, because the latter is needed for the former to exist.
I don't know how computers handle syntax so well and score zero on semantics then! I can't quite wrap my head around that. They (computers) seem to be able to mimic semantic capabilities though but that could be a case of infinite monkey theorem actualizing on a small scale.
A computer is basically a sophisticated abacus, right? It can mimic logic as an abacus mimics arithmetic.
Reification of metaphysical predicates with empirical principles, destroys both.
Perhaps, but abaci, by my reckoning, don't possess even the basic architecture to do logic-apt Boolean algebra like a calculator or a computer. In a sense abaci are like crutches (you have to do the work), whereas computers are bionic prostheses, like (but not exactly like) the robotic arms of Otto Octavius (they do the work for you). :confused:
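The architectural point here can be sketched with a standard textbook construction (rendered in Python purely for illustration): a computer's primitives are Boolean gates, and arithmetic is built out of them, whereas an abacus only stores positions the user manipulates.

```python
# A one-bit half-adder built from Boolean primitives only:
# sum = a XOR b, carry = a AND b. Chaining adders yields arithmetic "for free".
def half_adder(a, b):
    return a ^ b, a & b  # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert a + b == s + 2 * c  # the gates really do add
```

Whether mechanising the gates amounts to the machine "doing logic" in any richer sense is, of course, the very question being debated.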
They're neither obliged to deny it, nor blind to it. They voluntarily deny its relevance as part of their theory. It's a decision made using that judgement you're so fond of telling us about. You, clearly, have reached a different conclusion. Saying they are blind to it is begging the question.
The issue is the means by which such differences can be resolved. Adding controls and statistical analysis is just a method for such resolution we've found agreeable. You can derive theories by whatever means you like, but it's nothing but storytelling if you can't come up with an agreeable method for resolving disagreements between them. The approach taken by the Churchlands is a framework for resolving such disagreements between theories. Unless you can produce an equally agreeable method, their progress on the problem exceeds yours, their frame is the more useful.
Quoting GLEN willows
I'm most familiar with their work on belief, where I don't think they've sufficiently accommodated the extent to which we take an active part in the environmental modelling process in the lower brain hierarchies. Their approach tends to minimise the role of noise reduction in neural signalling to more of a 'housekeeping' role, whereas I see it as an integral, two-way process between the somato-sensory system itself, the lower sensory systems, and the higher cognitive modelling areas.
It doesn't really impact their work on consciousness much, but I think the degree of stress one places on noise reduction is crucial to any understanding of brain function and so that disagreement, although slight, does rather crop up quite often.
Just as an abacus cannot count and a crutch cannot walk, a computer cannot think.
Yes. I find their reductive materialism to be boring. Explains nothing.
And will consciousness then be "eliminated"? Could you give an entirely speculative picture of how that might happen?
I'm kind of lost. Is this intended as an argument against rational inference about mental phenomena or against any rational inference at all?
Quoting Wayfarer
Again, inference from observations is the way we know almost everything. When I'm reading about some scientific finding, I often say to myself "How did they get that conclusion from that data?" I assume that they know what they are talking about. Is that the problem?
Quoting Wayfarer
This is not a very convincing story. Some thoughts:
I've tagged an article about the Churchlands in the Atlantic, but I haven't read it yet. I'll see if I have anything to add after I read it.
Quoting GLEN willows
I agree with this.
Quoting GLEN willows
There is already a method of study for consciousness and other mental processes - psychology, of which cognitive science is a branch. Advances in brain imaging in the last 20 years or so have changed psychology in the same way that microscopes changed biology. That doesn't mean more new methods won't be found.
I don't find this very convincing. Again, it comes back to the hierarchical knowledge. It is not reasonable to expect processes and phenomena at one level to fully explain those at a higher level. Chemistry does not fully explain living organisms, but life can certainly "be identified with" chemical processes.
Quoting Wayfarer
Calling consciousness "first person consciousness" is redundant. It is possible to study consciousness objectively just like it is possible to look at eyes, think about minds, etc. Introspection is a valid method of inference, as is observing conscious actions and speech.
By the way, I'm always ambivalent about your posts. They are generally well-written, interesting, and well-thought out, and sometimes even right. On the other hand, you always have long quotes and links to articles that I have to read or I'll miss something important. I'm a busy man!! Why can't you write facile, snarky, trivial, brief posts like...well, you know who?
Quoting T Clark
Forgot to ask: do you have a reference for a relatively brief discussion of the Churchlands' ideas that you like?
A computer is, to my knowledge, an automated algorithm-executing device. What happens, as far as I know, is that some aspects of our thinking can be reduced to an algorithm (a step-by-step sequence of instructions on what to do with what). This instruction-based thinking, it was realized by luminaries such as Babbage and Turing, could be done by mechanical and electronic devices. If we call it thinking when we follow an algorithm, aren't computers doing the same, and thus also, sensu amplo, thinking?
Another way to present what a computer does -- and one that gets closer to your initial puzzlement at the fact that teaching a machine syntax is easier than teaching it the meaning of words -- is the Chinese room argument of John Searle. Which is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
I think Searle is right: following the right instructions, he could process Chinese sentences into other Chinese sentences (or into English sentences) without understanding a word of Chinese. And there lies a response to your question: even a human being (like Searle in his Chinese room) will find it easier to learn a few rules of grammar than the nuanced meanings of thousands of words. You could learn Chinese grammar in a week or two. But the vocabulary will take you half a lifetime.
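Searle's thought experiment can be caricatured in a few lines (the "rulebook" entries below are invented for illustration): a lookup table maps input strings to output strings, producing apt replies while nothing anywhere understands Chinese.

```python
# A miniature "Chinese room": pure symbol manipulation via a rulebook.
# Nothing in here understands Chinese; shapes go in, shapes come out.
RULEBOOK = {
    "你好吗?": "我很好。",            # "How are you?" -> "I am fine."
    "天是什么颜色?": "天是蓝色的。",  # "What colour is the sky?" -> "The sky is blue."
}

def room(symbols):
    # Look the input shape up; if no rule matches, hand back a stock shape.
    return RULEBOOK.get(symbols, "对不起,我不明白。")  # "Sorry, I don't understand."

print(room("你好吗?"))  # emits the matched reply, which means nothing to the machine
```

Strong-AI defenders would reply that understanding might reside in the whole system rather than any component; the sketch only shows why the intuition pump bites.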
Quoting Wayfarer
And the distinction between valid and sound inferences is completely lost on you (or are you just disingenuously obfuscating the distinction under the label "rational" to make a quixotic point)? Intelligible demonstrations – historical, juridical, clinical, technical, scientific – about matters of fact require soundness. Otherwise, mere validity suffices.
I have to admit, your choice of words is from time to time, impressive.
That is what I think.
As usual, another facile distortion. "Eliminativists" argue that folk psychological concepts (e.g. "consciousness", "qualia", "intention", etc.) occult more than they elucidate and are therefore useless in formulating explanatory models of (meta)cognition, which in no way "obliges them to deny" subjectivity, or first-person phenomenal awareness. If your disagreement with the eliminativist argument is rational (i.e. philosophically non-trivial), Woofarer, make a counter-argument which soundly concludes that folk psychological concepts are needed to explain (meta)cognition in neuro-cognitive science. :chin:
Quoting RogueAI
Just as digesting doesn't emerge from guts and walking doesn't emerge from legs.
That depends on how "matter" is defined as well as what you mean by "exist".
:sweat: This spectre of Lord Kelvin needs exorcizing.
Right. Which is optometry, psychology, and cognitive science. Not philosophy per se.
[quote=SEP, Eliminative Materialists]Eliminative materialism (or eliminativism) is the radical claim that our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist and have no role to play in a mature science of the mind. Descartes famously challenged much of what we take for granted, but he insisted that, for the most part, we can be confident about the content of our own minds. Eliminative materialists go further than Descartes on this point, since they challenge the existence of various mental states that Descartes took for granted.[/quote]
The reason they assert that the first-person nature of consciousness 'occults more than elucidates' is because it cannot be accommodated by third-person description as a matter of principle. As scientific method relies on third-person description, it therefore can't be accommodated, and so is to be eliminated.
[quote=Thomas Nagel]Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”
[/quote]
Very simple. No point trying to obfuscate.
Not at all! It is of course perfectly valid across all kinds of subjects. But think about the subject of this particular claim - that consciousness - let's say thought - can be wholly explained in neuroscientific terms. Among the things that are purportedly being explained, then, is the very process of rational inference which is used to draw inferences from data. In other words, the process of reasoning itself. That's what makes this claim different from other scientific claims.
The Churchlands (they are a married couple), Daniel Dennett, and other philosophers of that school generally hold to a materialist theory of mind: mind is what the brain does, so that if you sufficiently understand neural science, then you will understand the nature of thinking.
Of course, many, probably even most, philosophers take issue with this attitude - I could provide yet more quotes, but I'm trying to keep it short and snarky. Suffice to say, the particular argument that I am trying to marshal is that reason comprises the relation of ideas - all the way down! In other words, you can't deduce the primitive articles of reason, such as the rules of valid inference, from neuroscience. And you can't see them from the outside - you won't literally see the operations of reason in neural data - you have to make inferences about how neural systems instantiate reason, in order to explain how reason is the product of such operations*. And, I say, there's an unavoidable circularity involved in doing that, because in this case, the mind is both the object of analysis, and the analysing subject. So it's not the same as other branches of science - whereas the whole point of the Churchlands' argument is that it has to be the same, and any conceptual difference has to be eliminated.
----
* That is where the article about 'representational drift' in mice is relevant: the neural areas involved in mice reacting to smells keep drifting throughout the rodent brain, in a way that can't be predicted or understood by the models. So, if something that simple defies neurological explanation, then how to account for such abstractions as reason in the human brain?
Perfectly correct, but not germane to the point.
Quoting 180 Proof
Refer to the Stanford Encyclopedia of Philosophy article, and Thomas Nagel's comment. What is being 'eliminated' by eliminative materialism, is the idea of there being a first-person point of view which cannot be completely explained without residue.
Just to be unequivocal, Dennett himself says it:
[quote=Daniel Dennett; https://ase.tufts.edu/cogstud/dennett/papers/JCSarticle.pdf]What, then, is the relation between the standard ‘third-person’ objective methodologies for studying meteors or magnets (or human metabolism or bone density), and the methodologies for studying human consciousness? Can the standard methods be extended in such a way as to do justice to the phenomena of human consciousness? Or do we have to find some quite radical or revolutionary alternative science? I have defended the hypothesis that there is a straightforward, conservative extension of objective science that handsomely covers the ground — all the ground — of human consciousness, doing justice to all the data without ever having to abandon the rules and constraints of the experimental method that have worked so well in the rest of science.[/quote]
So, I understand quite clearly what eliminative materialism proposes, and I stand by my argument against it.
Maybe I didn't clarify that enough. As I said, all the folk expressions (as the Churchlands call them) will eventually be just empty descriptors. Like saying "he's got some balls" as a descriptor of a man's bravery, not a simple physical fact.
Thanks for your responses. To whoever said "many philosophers will disagree" with Dennett and the Churchlands: I think Patricia Churchland would agree wholeheartedly. She, and I as well, started in philosophy but leaned increasingly towards neuroscience. I think that might stick in the craw of many philosophers. I have a long list of philosophers who value science highly - but there's probably a longer list of those who are skeptics.
So if consciousness and the material brain are not literally the same thing, how do we avoid dualism?
From what else? But not purely from the brain's matter. In the brain, the intrinsic nature of matter is connected to the body, which walks around in the physical world seen on the outside, projecting itself continuously into the brain via the senses. This makes the intrinsic nature of matter resonate, and a conscious simulation of the physical world appears - an appearance shaped by our modes of thinking, worldview, expectations, feelings, a priori or inborn reactions, culture, etc. In principle, a complete material rendering of this process of a body with a brain walking in the world can be given, but such a rendering will never be able to explain the intrinsic features of matter, which can only be felt on the inside.
[quote="Wayfarer;697465"]Rational inference depends wholly and solely on the relations of ideas - ‘is’, ‘is not’, ‘is greater than’, ‘is the same as’, and so on. Judgements based on those simple elements are intrinsic to any rational claim about anything whatever, including the claim that thought can be explained in physical terms.[/quote]
I don't think that these judgements are inherent to the claim that thought can be explained in physical terms. Why should it? That claim is just a claim about the nature of reality.
Ya - I haven't mentioned Dennett because I don't feel comfortable with all his views. But if you read the PS in my final msg from yesterday, I actually DO think we may have "to find some quite radical or revolutionary alternative science" to explain consciousness, as opposed to your Dennett quote.
Is that a stretch? Well...isn't there something called "quantum mechanics" that requires a different approach to particle physics? (not to mention reality itself)
As has been observed in popular literature, there's a meme that 'hey, consciousness is mysterious, and quantum mechanics is mysterious, so maybe there's a connection! :yikes: '
But seriously - the problem with the Churchlands and Daniel Dennett is actually simple. It's just a matter of perspective. The point of David Chalmers' famous paper Facing Up to the Problem of Consciousness is the impossibility of accounting for the first-person reality of experience in third-person, objective terms.
Science generally deals with what is objectively the case, right? What can be measured and predicted according to mathematically-structured hypotheses. So, ask yourself, why is this approach also called 'physicalism'? Isn't it because physics provides the ideal paradigm, in that the objects of physics can be described wholly in terms of measurable physical quantities, such as velocity, mass, vector, and so on? This is why many of the other areas of science are said to have 'physics envy' - they want to be able to deal with things that are perfectly predictable, like those of physics.
If that breaks down at the quantum level, its relevance to this argument might only be that physics itself doesn't go 'all the way down'. But the main point is actually much simpler than that: humans are subjects of experience, not simply objects of scientific analysis. They're out of scope for the objective sciences for that reason. But for some reason that is just unacceptable to some people: if it can't be made an object of scientific analysis, then it can't be considered real. And that is the exact reason they're called 'eliminativists'.
Quoting Hillary
:up: (Although your 'in principle' assumes that science can perfectly reproduce a living being de novo, which so far is not even close to happening.)
Folk descriptions have specific meanings for people, just as the technical terms used by eliminativists like the Churchlands do. Their empirical jargon may be found pragmatically useful in ways that the folk speech they disparage is not, but to imply, as the term ‘folk’ usually does when used by cognitive scientists, that the neural concepts the eliminativists employ are more scientifically ‘correct’ is part of the problem I see with their brand of psychological modeling. The Churchlands’ concepts are just as contestable as the folk concepts they want to replace, and in my opinion have already been replaced by what I consider to be more satisfying accounts by enactivist cognitive theorists. It’s not just subjective mystical ‘woo’ the eliminativists eliminate; it’s the appreciation that reducing cognitive processes to internal computational bits misses the interactive web tying together mind, body and world.
I do. It's not because consciousness is something that lies outside purely mechanical processes, but rather because consciousness is a muddled notion to begin with: Boolean logic consists of all true statements; inanimate objects have no emotion; emotion is part of thought and belief; and consciousness includes an ability to suspend one's judgment as well as change one's mind about things previously held true. That's just skimming the top of the problems involved with any claims of artificial 'intelligence' (scarequotes intentional).
Some think intelligence is a function of human biology. So, there is natural intelligence and artificial intelligence. But artificial intelligence does not merely mimic human intelligence or consciousness.
Turing's paper, "Computing Machinery and Intelligence", shows that the question is what intelligence is, regardless of whether it is natural or artificial.
Mind/body dualism? Internal/external dualism? Material/immaterial dualism? Mental/physical dualism? Another, perhaps?
Which dualism?
I'm quite fond of Dennett's paper "Quining Qualia". Quite indeed!
There are two points here I think are worth noting. First, Chalmers doesn't claim that science cannot possibly explain consciousness (otherwise he would have named it the Impossible Problem rather than the Hard Problem); he says that a new way of doing science will likely be needed, whatever we might think he has in mind with that.
And second, accounting for first-person experience in third-person terms does not equate to giving a first-person account of experience in third-person terms, which would be an obvious contradiction. It may be possible to come up with a coherent physical theory that explains why we have first-person experience, but that account will not itself be a first-person account of experience - in case you missed the distinction.
If the term means anything at all, it must include biological machinery. Our intelligence is most certainly a result of our biological machinery including our physiological nervous system, of which the brain is just one part.
Why? There are lots of things in the universe. Why privilege the human biological mechanism?
No. I never really understood what Aristotle meant by "art imitates nature." But he considers nature to be a process so art imitates that process. So, nature exhibits intelligence. The modern preoccupation with subjectivity obscures this idea.
Quoting Janus
:fire:
Quoting Wayfarer
:roll:
Before I could post my own reply ...
Quoting Janus
He calls it a first person science, which Dennett dismisses as a fantasy. But it must be something very like phenomenology.
We are part of nature, aren't we?
Exactly. So, "artificial" intelligence is just an extension of natural intelligence.
Hey my man! Hope all is good in peach country. Get rid of that effin lunatic Taylor Greene... Phew!
:smirk:
Artificial intelligence is not even close to being the same sort of thing that human intelligence is. Not even close. The point here is that it is a misnomer that renders the term intelligence meaningless.
How is AI different from human intelligence?
Quoting creativesoul
He doesn't mention it in the essay we're talking about; 'phenomenology' only appears in the references. The salient passage is this:
Although the question this flags for me is, if these fundamental laws need to capture intentionality, then already they're significantly different from the kinds of laws we're familiar with from post-Galilean science.
And also, as you're presumably familiar with Husserl's criticism of naturalism, I think he would call into question the sense in which such a description could be 'entirely naturalistic'. But I suppose, in the Anglosphere in particular, if you stray from the 'entirely naturalistic', then you're already going out of bounds.
From the passage:
I think this goes without saying. Dennett and the Churchlands don't deny that we experience, and it seems obvious that any theory of consciousness must explain how consciousness (experience) is possible.
And this is an important point:
Yes, a physical theory of consciousness would give an account of how experience is possible, as I said earlier, and not why we have experiences, or how those experiences seem (the latter being the province of phenomenology).
Quoting Jackson
So, you seem to be saying there is no first person experience. In any case I think it is a poor analogy; there seems to be every reason to believe that consciousness is not a substance or substantive medium, but that it is a process.
No. I don't know how you inferred that.
Yes, that is what I am saying.
You said "there is no ether" and since you were drawing an analogy between the ether and consciousness, it seemed reasonable to think you were suggesting that there is no consciousness.
Couldn't wait – made my covid-delayed move to Washington State (across the Columbia River from Portland, OR) a couple of months ago. No more assbackwards, sunstroke belt, Red State hate for me in this life! The scuttlebutt is it's better than even odds MTG will lose the primary next week (like that other trump-stain Madison Cawthorn did this week in NC). I didn't live in her district anyway ... :smirk:
Yes. As a general rule, what is the name of the philosophical outlook which has the tendency to take "experience", as distinct from, say, "matter", as fundamental?
//oh, wait. I suppose the answer is 'empiricism' but that is not what I had in mind. :sad: //
Electromagnetic waves do not move through ether; that is the analogy. I am saying that consciousness is no more mysterious than the fact that we think, walk, and talk.
In reference to AI: A machine can think without what we call consciousness. That is, you do not need consciousness to think or have intelligence.
:pray:
Quoting Jackson
Just like thought does not move through consciousness, eh? Consciousness is still mysterious, though, since there are no satisfying theories as to how it is possible that we should be conscious.
Quoting Jackson
It depends on how you conceive of thinking and intelligence.
No more mysterious than that the universe exists.
One property of intelligence is the ability to respond to the environment and make new things. I would also call the evolution of the universe from the Big Bang to now an intelligent process.
What makes this "an intelligent process"?
Purposeful in constructing more complex objects. Diversifying itself and being different from itself. Producing objects which are new.
The purpose of an electron is to meet other charges to fall in love with or to push away.
No. Anthropomorphism is the idea that humans are the only reality and thus purpose can only be a function of human agency.
Leibniz criticized mechanism because it excluded purpose (Aristotle's telos) from explanations. Nature is purposeful. Not always, not always in a good way, but it exhibits purpose--accomplishing an end.
:roll: https://en.m.wikipedia.org/wiki/Anthropomorphism
No. Mechanistic science did not use it to explain motion. That's all.
And I am not a scientist so I do not care what they do.
Insults end discussion.
From a Wittgensteinian POV it seems that all we're capable of is syntactic manipulation (language + logic); semantics "drops out of consideration". That's why I believe he was hell-bent on proving private languages are nonsensical or can't exist.
Suppose we discuss god, the two of us. I say blah, blah, blah, god exists and you claim yada, yada, yada, god doesn't exist. We can save ourselves a whole lot of trouble by, philosophers will kill me for this, not trying to understand what "god" means but by looking at the validity of the argument put forth by both sides. Forget meaning, focus on the syntax (linguistic & logical). This is, in my humble opinion, exactly what computers do: for a computer, there's no difference between a variable and a constant (the form is more important than the content). I dunno, just sayin'
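The "form over content" point above can be sketched in code. This is only an illustrative toy (the function name and tokens are mine, not from any poster): a program applies modus ponens as a purely formal rule, and never needs to know what "god" or "p" mean.

```python
# A validity checker that applies modus ponens purely by form.
# The tokens are opaque strings: the program never interprets them.

def modus_ponens(conditional, antecedent):
    """Given a pair (p, q) read as 'if p then q', and premise p, derive q."""
    p, q = conditional
    if antecedent == p:
        return q  # the conclusion follows by form alone
    return None   # the form doesn't match; nothing follows

# The same rule works whatever the tokens stand for:
print(modus_ponens(("god exists", "worship is rational"), "god exists"))
print(modus_ponens(("p", "q"), "p"))
```

The computer "reasons" identically over theological and meaningless tokens, which is the poster's point about variables and constants being indistinguishable in form.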
FWIW I agree with you.
Quoting Jackson
Well then AI is not intelligent according to that definition, so it seems you are contradicting yourself. Even if AI can create new things, it is only on account of the fact that we have programmed it to do so, which means it is really us creating the new things, utilizing the AI to augment our creativity. If the development of the universe is an intelligent (as opposed to merely intelligible) process, then everything is intelligent; are you a panpsychist?
Quoting Jackson
Leibniz was a theist. Nature itself is not purposeful by any normal definition unless it has intentions. Either you believe nature has intentions (panpsychism) or you think it is driven by transcendent intentions (theism).
Of course AI intelligence is different from human intelligence. What we’re debating is whether AI will eventually develop consciousness and I say why not? It could take hundreds or thousands of years of course. I just believe history is littered with the bodies of men who said “your ‘science’ will never explain ….”
That’s what i’ve said that he says, with a quote from him showing him saying that exact thing.
I’m not ‘straw-manning’, I think it’s more the case that you don’t understand anything I’ve said.
Well, mind/body initially, but really if the mind and consciousness are one and the same, which I believe, what's the need for any dualism? I find Chalmers' phenomenology a sort of dualism/materialism compromise. Consciousness is an emergent property of the brain but then is separate... where? Floating above the head like a halo?
Quoting Jackson
Quoting creativesoul
Quoting GLEN willows
Quoting GLEN willows
Sure, I suppose we could say that it is logically possible. That there are certain conditions and/or circumstances that would lead up to robots developing consciousness. Logical possibility alone is insufficient for justified and/or warranted belief. It is logically possible that we are the creation of the Flying Spaghetti Monster too. Logical possibility plus adequate explanation is better.
So, what would it take for an inanimate object that operates on Boolean logic to form, have, and/or hold thought and belief about the universe and/or itself?
This requires already having a good ontological understanding of thought and belief, in terms of what it consists of as we know it. Anthropomorphism is popular... and mistaken. We tend to personify things prior to grasping what sorts of features, qualities, and/or characteristics are uniquely human and which sorts are not. This is the measure of consciousness. This is what informs our viewpoint with regards to which sorts of things can be conscious and which sorts of things cannot.
I don't know how long you've had the pleasure of 'interacting' with @Wayfarer, but you've nailed his M.O. to a tee (and after more than a dozen years 'interacting' I'm here to tell you, GLEN, he's immune to even the friendliest persuasion, correction or shaming).
Ironically I find the arguments against materialism similar to those for intelligent design. “The eye is just far too complex to have been developed naturally”. And if a natural process developed consciousness, it can be repeated. In theory, of course, but it’s not illogical.
Whether or not it is logical depends upon the possible world in which robots develop consciousness. Saying it does not make it logical.
Logical possibility alone does not warrant belief. I remain unconvinced that we, as humans, can produce a biological creature replete with thought and belief out of inanimate material.
No irony to speak of there. That's known and deliberate. As far as creationism goes, Occam's razor applies.
Oh. I thought you were trying to say something along those lines here:
Quoting GLEN willows
---
Quoting GLEN willows
That's not the argument that I used. I'll try one more time. At issue is the fact that consciousness is not an objective phenomenon for science. You can study cognitive function through science - cognitive science, evolutionary and regular psychology. But the scientific study of consciousness is based on observation of objective and measurable data, whereas the key attribute of consciousness is feeling; it is a first-person phenomenon, cognizable only in the first person, not as an object. So it's not that it's too complex to study; it's that it's not a satisfactory object of scientific analysis. And that's nothing like an intelligent design argument.
I've just been reading (actually, listening to) a book called Silicon: From the Invention of the Microprocessor to the New Science of Consciousness by Federico Faggin. Note the title: - 'a new science of consciousness'. So he says there is such a thing.
Federico Faggin was the designer of the world's first microprocessor, the Intel 4004, and later the Zilog Z80 microprocessor, which is still in production after 40 years. So he's one of the principals of the Information Revolution. The first part of the book is all about chip design, and rather hard to follow unless you have some knowledge of microelectronics. But from there, he founded a start-up to commercialise AI software. By this stage he was already immensely wealthy and never had to work again. But it was during this phase that he realised that consciousness is something that can't be realised in a computer. In fact he refers to David Chalmers by name, and basically recapitulates what I said above. This is followed by a completely unexpected spiritual awakening, which transformed his life's direction. He went on to form the Faggin Foundation, about which he says 'The Foundation is interested in the scientific investigation of consciousness under the assumption that it is an irreducible property of nature.' Read on for more. A far more fruitful line of enquiry than the Churchlands, in my opinion.
Hi Glen, I don't really like it when people here refer me to some external article, book or paper, but these few pages from the introduction to Searle's "Mind: A Brief Introduction" say it far better than I can.
https://www.amazon.co.uk/Mind-Brief-Introduction-Fundamentals-Philosophy-ebook/dp/B00VQVP8V8?asin=B00VQVP8V8&revisionId=cf1f5c33&format=1&depth=1
Many of the discussions on this forum are weighed down by the conceptual baggage he identifies, and this one is no exception.
He also deals, in my view conclusively, with the question of the possibility of computer consciousness, which has also cropped up in the present discussion and many others. Can digital computation produce consciousness? No, because digital computation is an observer-dependent phenomenon, while consciousness is observer-independent.
I apply the razor to materialism. Simply put, the simplest explanation for consciousness is that it's in the brain. It isn't visible, and doesn't have an obvious single "home," but as I said, neither do other brain processes (memory, sight, etc.). When you chop out a piece of the brain, or damage the brain, it affects consciousness. When you chop out any other part of the body, it does not.
I understand the arguments that consciousness may reside in the brain, but is not "just" the brain. This can only conjure up some sort of mystical "mist" floating around somewhere. Unless someone can illuminate that aspect to me.
The other options, phenomenology and panpsychism, have too many conditions and problems (dualism).
This I see as Witty's greatest failing. Linguists are better at cracking those language nuts than philosophers, IMO. I also note that dictionaries exist and are quite useful to folks, including to computers I would think, i.e. in machine translation I imagine the coders include a lexical (or semantic, I don't know) map between the two languages.
Quoting GLEN willows
Can anyone help Glen out?
Pat is very active on Twitter. If you hate Twitter, just look at hers - most of the comments and arguments are intelligent and made by philosophy profs and philosophy writers.
I quit other social media because too many people just came on to argue. Getting too old for that, nothing personal.
I know that many objections have been raised, but I haven't been convinced by any of them. Searle's argument from the observer-dependent nature of digital computation seems straightforward and decisive to me: whether a computer is carrying out computation or not is determined by us, outside observers, and is not inherent in the physics of the computer. In exactly the same way, arithmetic is not inherent in the physics of an abacus.
Can you present an argument against that?
And did you understand what he was saying in that introduction about the futility of "how do we avoid dualism?" discussions?
"...it is a very strong argument for reductive or eliminative materialism: we can do without consciousness and meaning and still have the capacity to reason." Paul Churchland
...it obviously doesn't mean that, if eliminative materialism is correct, our minds will abruptly stop having consciousness or meaning. I have this horror-movie image of people immediately turning into zombies without intelligence or goals in their lives.
This is a silly image, but there seems to be a whiff of it in everyone's feelings about devaluing folk psychology. It's why I think it's the weakest chink in the EM's armour - might have been better to downplay it.
https://www.researchgate.net/publication/224905185_The_nature_of_the_present
So the elimination of the present, which is what physics does, entails the elimination of 'process' in favour of "an ordered system of events" and the elimination of flow and dynamics in favour of a static world of 4 dimensions.
"the interaction of perceiving self-conscious individuals with their environment" is nothing but such a static ordered system. Thus at every moment I am equally conscious, while at each moment I am located in that particular moment of time by the state of memory and imagination that constitutes knowledge of past and future respectively.
Science here achieves the god's eye view from 'outside time'; it is a view shared by some mystics:
https://jkrishnamurti.org/content/cleansing-mind-accumulation-time
Methinks there is merit to the idea that we are our brainwaves.
If so, then the argument is that it's obviously true, to anyone but an uneducated person. It seems like a reaction to the initial amazement with computers. Regardless, it still doesn't mean that an AI device can't eventually develop such advanced capacities, and even consciousness. This isn't something that would necessarily be a positive thing, would it? It would definitely be an ethical dilemma at the very least.
The way I would put it is that thought is not consciousness, but a mechanical process of symbol manipulation that one is sometimes half conscious of, rather as I am half conscious of the kettle coming to the boil in the kitchen, but don't mistake it for the essence of consciousness.
First of all, one should examine situations in which they claim not to have been conscious of anything. For example, take the claim that one wasn't conscious of anything while one was sleeping last night. If this were indeed the case, then how could one possibly know it? Isn't empiricism, i.e. conscious verification, supposed to be the most authoritative methodology for making epistemological claims, in which case, isn't unconsciousness an inadmissible concept? Or are we supposed to put faith in pure reason here and unquestionably accept the testimony of others?
One is tempted to say that one can only deduce one's state of unconsciousness in retrospect via a rational reconstruction of what happened in the past. This conclusion ought to encourage a focusing of attention on the meaning of retrospection, including the nature and existence of the past itself , a topic which impinges upon debates concerning the nature and existence of time, space and causation.
At the very least, thinking about sleep and unconsciousness illuminates the conceptual inter-dependencies of philosophies of consciousness with philosophies of time and philosophies of causation, in which the debate between realism and idealism is ever present. Sadly, most neuroscientists appear not to grasp the conceptual scope of their investigations and instead derive trite and unenlightened conclusions.
As in ‘the really, really real’? Or as in ‘pragmatically useful ways of interacting with the world’? I side with those philosophers of science (Kuhn, Feyerabend) who see the latter as the role of science, which isn't that different from the role of the arts, whereas writers like Dennett have not been able to peel themselves away from a certain realism that is not far enough removed from correspondence notions of empirical truth.
Hm. Well then I don't think you do understand the Chinese Room. What it shows is that semantics is not intrinsic to the computer. Some ten years later, however, Searle came to the realisation that syntax is not intrinsic to the computer either. He said he ought to have realised that years earlier. It's this point that I believe makes computer consciousness a logical impossibility.
Once again I find it useful to consider the analogy with an abacus, and how one is used:
The user chooses how the positions of the beads are to be interpreted. Neither the syntax (the rules for moving the beads) nor the semantics (the meaning of the position of the beads) are intrinsic to the physics of the abacus.
The situation is no different with a digital computer. The designers and users of the computer choose how the physical states and features of the computer are to be interpreted. So whether or not a computer is carrying out computation is an observer-dependent matter.
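The observer-dependence point can be made concrete with a toy model. This is only an illustrative sketch (the bead state and interpretation functions are mine, not Searle's): the "physics" of the device is a fixed tuple of bead counts, and two observers read the very same state as two different numbers, because the numeral convention lives in the observer, not in the device.

```python
# The "physics" of a toy abacus: just bead counts per rod, nothing more.
beads = (2, 7, 5)

def as_decimal(state):
    """One observer reads the rods as base-10 digits, most significant first."""
    value = 0
    for count in state:
        value = value * 10 + count
    return value

def as_octal(state):
    """A different observer reads the very same rods in base 8."""
    value = 0
    for count in state:
        value = value * 8 + count
    return value

# Identical physical state, different 'computations' - the arithmetic
# is fixed by the observer's convention, not by the beads themselves.
print(as_decimal(beads))  # 275 under the decimal convention
print(as_octal(beads))    # 189 under the octal convention
```

Nothing in the tuple `(2, 7, 5)` settles which reading is "the" computation; that is the sense in which the computation is observer-dependent.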
Quoting GLEN willows
According to the argument from the observer-dependence of computation itself, the computer has no such capacities, advanced or primitive.
Note that consciousness, in humans, or dogs, is not an observer-dependent phenomenon. Whether you (or your dog if any) are conscious is not a matter of interpretation by an external observer.
This is the argument that I take to decisively rule out computer consciousness.
We don't need arguments for that. Just point at the impossibility to create even a single neuron from scratch.
Chalmers invented a weird game. He proposed that if you replaced each neuron with a tiny computer computing the right output signals from its inputs, you wouldn't notice that your whole brain had been replaced by this procedure...
That seems to presuppose that neurons operate in this kind of linear fashion. They don't. The idea is scientifically naive.
I totally agree! There is no calculation going on in neurons in the first place. No written program. Even if you could compute non-linear processes, it would still be a programmed simulation of a neuron. Not really a fresh wind blowing into your brain!
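For readers unfamiliar with what "a tiny computer computing the output signals" would even mean, here is a minimal sketch of a standard textbook stand-in, a leaky integrate-and-fire unit. It is not from Chalmers or any poster, and it is itself a drastic simplification of real neurons (which is exactly the objection raised above); it is nonlinear, though, since it has a threshold and reset rather than a simple linear input-output mapping.

```python
# Leaky integrate-and-fire unit: membrane potential leaks, integrates
# input current, and emits a spike (then resets) when it crosses threshold.
# A toy model only - real neurons are far richer than this.

def lif_step(v, input_current, leak=0.1, threshold=1.0):
    """Advance one time step; return (new_potential, spiked?)."""
    v = v * (1 - leak) + input_current  # leak, then integrate
    if v >= threshold:
        return 0.0, True   # spike and reset
    return v, False

v, spikes = 0.0, []
for current in [0.3, 0.3, 0.3, 0.3, 0.0, 0.0]:
    v, fired = lif_step(v, current)
    spikes.append(fired)

print(spikes)  # the unit fires once, after enough input has accumulated
```

Whether such a programmed stand-in could ever do what a living neuron does is precisely what is in dispute in the exchange above; the sketch only shows what the replacement scenario is imagining.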
No, it learns and is not just repetitive.
As a process. Not merely a function of the physical.
Not necessarily. It is perfectly consistent to adopt an anti-realist stance regarding the existence of other minds, where the existence of other minds is considered to be ontologically dependent on the perceptions of the observer.
This position has the advantage of being able to refute skepticism regarding the existence of other minds, in identifying the recognition of another mind as partially constituting the very definition of said 'other' mind.
AI is programmed. It appears to understand, learn, be creative, feel, think, or be intelligent because of a programmed series of hyperfast operations on collections of data.
Same with humans. Not being sarcastic. Most humans are not creative. Most don't have awareness of their feelings.
Consciousness is not the process itself. It's the content.
Quoting Jackson
The brain processes are not programmed. There is a free rolling process going on. Only connection strengths are varied.
Both.
Human processes are programmed. We don't cause our brain to have thought.
What an idiotic remark! Have you actually met and interacted with humans?
Goodbye.
You will have to elaborate as to why you consider functions to be observer dependent, but not the existence of other minds.
After all, we recognise the existence of other minds in terms of behavioural stimulus-responses we relate to, which are in turn correlated to the ability of said body to perform computation. How is it consistent to regard the computation to be observer-dependent, but not the existence of said 'other' mind?
Also consider borderline AI cases. Suppose that 50% of the population believe an artificial agent to be conscious, but the other 50% disagree. A realist regarding the existence of other minds will conclude that half of the population is right and that the other half is wrong. But there is no reason to assume the existence of a transcendental fact of the matter concerning the consciousness of the agent, above and beyond the observable behaviour. An anti-realist can simply conclude that the agent is 50% human-like in its observable responses as judged by aggregated public opinion.
I still think you are misinterpreting the meaning of "observer-dependent" as I'm using it.
Money and marriage are observer-dependent phenomena, in that something is only money or a marriage because we say so. Whether an abacus or a PC are carrying out computation is observer-dependent in this sense.
Metals, mountains, molecules and minds are observer-independent. Something is a metal and acts as a metal does regardless of what any external observer may say or think about the matter. And a conscious entity is conscious (has experiences, feels stuff) regardless of the views of external observers.
But has A.I. solved the frame problem yet?
What is the frame problem?
Thank you.
But does anti-realism deny that even just as a “pragmatically useful” way of understanding consciousness, it’s important to learn about the process?
Sometimes I find philosophers (not all) get mired in realism/anti-realism arguments, while scientists just keep working on new ideas. I'm not anti-philosophy or blind to all of science's flaws; I just think a lot of philosophy of science tends to dismiss the strides made in, for example, neuroscience. It seems to be mostly about poking holes, in my opinion.
I know this is a simplistic perspective, but that doesn’t mean it isn’t true. I’m definitely not as accomplished a philosopher as many of you here. But I do think it would benefit academia and probably the world if the disciplines could work together on these major issues. And some are coming around to that conclusion.
I admitted right in my first post that this whole discussion is speculative. No one here KNOWS what consciousness is, so none of us can predict whether it will be something that can be explained with neuroscience, or created and put into a robot or computer.
This “I don’t believe consciousness could ever be…etc” just seems like circling the philosophical wagons and a lack of imagination.
Thanks for the comments
:up: ... or advanced neural nets (post von Neumann systems).
:clap:
Do they deal with the argument I raised about the observer-dependent nature of digital computation?
How?
But it isn't the case that we know NOTHING about what consciousness is, we have gained a vast amount of knowledge about it in my lifetime and that knowledge is growing at an accelerating rate. We know enough about consciousness to know what can't cause it, and the observer-dependent nature of computation means that computation can't be the cause.
Don't you think it is now being explained by neuroscience?
Perhaps you would like to elaborate on how you know this? Do you think there's anyone on the forum here who doesn't have awareness of their feelings? All the children I know have been creative from an early age, creating drawings and imaginative stories, and they all have clearly had awareness of their feelings. If most humans are not creative and don't have awareness of their feelings, have I just been incredibly lucky with the humans I've met?
Here are children's drawings from around the world: https://multiculturalkidblogs.com/2015/08/12/wordless-wednesday-kids-drawings-around-world/
But let me ask you: is there any reason, according to this argument, that it’s impossible to learn about the interrelation between the brain and consciousness? And how we might get insights into the process through neuroscience?
No, if anything it's the opposite. The argument aims to show that computation can't cause consciousness because computation is an observer-dependent phenomenon, whereas the brain (and body) and the consciousness they produce are observer-independent phenomena. They are what they are and they do what they do regardless of what external observers think and say.
The argument seems relatively simple and clearly decisive to me, which is one reason I have had a long-term interest in the topic. Another reason is that it is very widely believed that computation could some day cause consciousness, and I feel that I know it can't. In the same way that I know a weather simulation on a computer can't cause rain. It intrigues me that so many people don't see this.
Can you answer my questions?
Well I thought you were saying quantum computation might cause consciousness, and 180 Proof seemed to think that neural nets might do it, and as far as I know neural nets are still computational (and observer-dependent). Chalmers may have said that about the thermostat.
Well I'll try, but not any more tonight, I'm going to sleep.
What is your argument?
I made use of some of their work on neural networks in a master's thesis concerning knowledge and ethics in organisations. I rejected their eliminative materialism as a basis for decision making on the grounds of impracticality. In the absence of a usable Neuro-physiological theory, we must rely on our “folk Psychology”.
"In the absence of Neuro-physiological theory..." So far. Isn't it possible that that could change?
I do t think consciousness is an ‘it’, some special facility that some living things happen to have produced. Instead, the basis of consciousness is present in even single-celled organisms, and I strongly believe that this is a continuum that can even be traced from the non-living to the living. The point is, in consciousness we’re dealing with a feature of the world that is not all or nothing but becomes more complex in tandem with the evolution of living systems. We will never produce something that is conscious in the same way as living creatures, just as we won’t produce flying machines that exactly duplicate what flying animals do. Our inventions build upon what has already been produced in nature rather than recapitulating it. But that means that, just as consciousness is a developing product of evolution, our thinking machines will evolve in their own way. We will produce ever more complex devices that will achieve a kind of ‘consciousness’ that does not duplicate but mimics the consciousness of living forms. One could say that it will be parasitic on our own consciousness.
The statement "eventually consciousness and qualia will be explained with neuroscience" is speculative, but no more so than "consciousness and qualia will never be explained by neuroscience."
It's all speculation and I'm getting the VERY REAL sense that most of the members here are firm in their belief that it's literally impossible that science might provide a solution to the "Hard Problem" of consciousness.
And if you read other posts, I also say this seems pretty close-minded, considering that history has a plethora of examples of science doing/discovering/proving things that were previously considered impossible to understand.
So no more questions, thanks for engaging.
Doesn’t Dennett believe that a thermometer has a bit of consciousness? That is, that consciousness only makes sense when we take the intentional stance, which is just a convenient fiction, and from within this stance, a thermometer does indeed have intentionality.
Personally, I suspect that eventually we will abandon this fetishizing notion of ‘consciousness’ as some special capacity that a thing has or doesn’t have. I think all living systems, including plants, have consciousness, in that what we are looking for when we play around with this seemingly mysterious notion is the general autopoietic self-organizing capacities of living systems. When we ask if a thing is conscious, what we are really asking about is the level of organizational complexity of a living thing. If we could come anywhere near to understanding how a single-celled organism works, we would be well on our way to understanding consciousness. The difference between an amoeba and a human mind is just icing on the cake. Contrary to the views of Dennett and Nick Bostrom, a computational, representational device will never come close to what even the simplest living system can do. We need a very different kind of architectural model. And when we begin utilizing such a model, we will likely dump the silicon chips in favor of genetically engineered wetware, and interact with these wetware systems in ways more like how we interact with animals than with machines. Once we forget about our superstitious ‘consciousness’ fetish, the question won’t be whether they are conscious but how conscious we can make them. That is, how complex can we make these wetware self-organizing systems.
And anyway, AI itself stands for ARTIFICIAL intelligence, and whatever consciousness we (or it itself) will produce will be an artificial version, different from human consciousness.
But can you not imagine an AI with enough consciousness to plan, make decisions, use logic, discuss politics, play chess (oops already happened)? Even those things, which I think are a pretty low bar to set, would be amazing. Other things like emotions and other qualia definitely SEEM impossible, but I think they're possible. Though the ethical issues would be huge.
I remember when the concept of a computer recognizing the English language and then printing it out on paper in front of my eyes seemed crazy. Yet here we are.
Ok it seems like we're in agreement - and thanks for the info that I lacked!
I just realized I read the typo in your post
"I do t think consciousness is an ‘it’" as I DO think etc. ha!
Sorry - Friday night, just being silly.
I would put a question mark in front of that.
Even when projected into the future, it will be to no avail. Computers will never be conscious, regardless of what kind of computer. Consciousness can't be computed. It can't be forced. It's inherent to freely evolving processes. AI only looks intelligent, and only to a very small extent. Computers are good at the hyperfast execution, on the rhythm of the hyperclock, of a sequence of operations on massive data streams. In the brain a totally different process is happening. The running of spike potentials on the neural network is wrongly compared with the potentials and currents used by computers. The spike potential patterns that travel in parallel, en masse, on the network are not information as used in a computer. The patterns running in the brain form mental objects themselves, without being information as in computers, which refers to other objects, informs about objects. A pattern of freely running spike potentials is intrinsically a different pattern than a pattern of ones and zeroes being pushed around by a program. In a computer, a pattern of ones and zeroes that holds information about, say, a football is different from a brain pattern that simulates the ball. The simulation in the brain is the ball in mental form. The information in the computer just refers to a real ball without itself having ball features. The mental ball floats around in the brain. Information about a real ball, in a computer, is pushed around by a program, without itself showing ball-like behavior.
Whereas, I personally find that to be a very poor example of an explanation. I would also argue against the idea that consciousness is the sort of thing that has such a precisely ascertainable spatiotemporal location.
That's based upon a notion of consciousness that I find is a bit emaciated.
Cut off the foot, and it affects the nervous system, the belief system, etc. All of these are integral parts of human consciousness. Cut out the tongue and it will certainly affect the individual's worldview.
It "learns" because we programmed it to do so. Algorithms...
We know what they can do now; we don't know what they can do in the future. We can only assess what seems plausible now.
Why does it have to come into existence?
Yes.
To me this is the best and most logical response.
So you feel consciousness exists in all parts of the body? Taken ad absurdum this means there's consciousness in your toenail.
Anyway I think it's more logical to say that taking out a chunk of your body other than your brain will affect your consciousness the same way seeing something sad does. It has an effect on it, but doesn't actually remove part of it.
When my brother died, it definitely affected me, but not in the same way - or as much for that matter - as a frontal lobotomy would. My consciousness remained fully intact - and in the long haul might have even improved my conscious decision-making and life choices.
Beyond that, the definition of "what is consciousness" is kinda the topic of discussion here.
:cool:
And hey, it ain't brain surgery :wink:
:up:
Well, you couldn't do it yourself, but as the brain has no nerve endings, it is impervious to the pain caused by surgical incision, so patients can experience having their brain operated on while fully conscious. There was a pioneering Canadian neurosurgeon by the name of Wilder Penfield who conducted thousands of these procedures over decades. And, whilst only peripheral to his main body of work, his discoveries caused him to form a rather dualist view of the brain. He noted that he could, by stimulating areas of a subject's brain while conscious, cause them to have vivid recollections of past experiences, experience sensations, and even to move parts of their body. But, intriguingly, all of those subjects reported that they knew when what they were experiencing was being triggered by the stimulus - they would invariably be able to report that 'you (the surgeon) are doing that', could distinguish those effects from voluntary actions and recollections of their own volition. He published a book on it, called Mystery of the Mind (which regrettably, but probably predictably, has become canon-fodder in the psi wars.)
As though a chess grandmaster is more conscious than a waiter. The usual class bias!
This is the first question they ask on the emergency medical line.
To be alive is to be interacting with the environment - sucking it in and squeezing it out, and to be conscious is to actively respond to pain, to noise, to voice, to touch, to light in the eyes, etc.
Computation is not necessary.
You mean, you didn't read what I reported Penfield to have said?
:smirk:
:blush: I'll get to it. Au revoir.
Is that the bit where you don't know what it means?
I think you need to stop making sweeping statements about what members here believe. We're individuals, we have different beliefs and approaches. You should also stop making similar statements about what philosophy thinks and does. Again, different philosophers have very different and often opposing views.
Instead, deal with what individuals have said, here and elsewhere. And also tell us what you think yourself, and why.
It's not all speculation. It's not speculation to state that a weather simulation on a computer will not cause rain, for example.
Yes.
A proper semantics, such as the Chinese Room lacks, is a game of pretend: the pretending of appropriate connections, between words and things, and between tokens of the "same" word. And it depends on observers, because it's in the nature of an act of pretending that there can't be any inherent and un-observed connections between the act itself (the brain shiver) and any of its many plausible interpretations.
And if semantics is observer-dependent, perhaps consciousness is what we call the fleeting (or more persistent) occasions of forgetting that it was all pretend. It's an aspect of our pretending, and hence observer-dependent as well.
The Chinese Room lacks the skills to play the game, and other players may or may not discover that it isn't really joining in. It doesn't understand. It depends, like an abacus, on the involvement of the skilled players, to perform its computations, or conversations. They have to do the pretending of appropriate connections, between words and things, and between tokens of the "same" word. So, lacking the skills, the Room lacks the confusion we call consciousness.
But it isn't obvious that the Room's limitations result from its digital machinery: that it couldn't be enhanced so as to be able to learn to pretend. Some way down the line.
I do think consciousness is a phenomenon some living things have produced. I think that a certain prerequisite for consciousness is present in single-celled organisms, but I don't think it is present in the non-living world.
Living things are individuated in a way non-living things aren't. There's an inside and an outside. The single-celled organism is an entity in a way non-living things aren't.
I think you need that before you can have consciousness. There has to be an experiencing entity.
What do you think of that?
Of the mind forever"
Quoting Agent Smith
Celebrating ignorance.
We're encouraged to apply the Principle of Charity here. In brief, we are to take an interlocutor's arguments in their strongest form, and assume that the interlocutor is competent and rational. In greater detail:
1. While temporarily suspending our own beliefs, we actively seek a thoughtful understanding of presented ideas, exposition, theory, or argument prior to assessing their justifiable merits or weaknesses.
2. We assume for the moment the proposed ideas are true even though our initial reaction might be, or is, to disagree; we provisionally seek to tolerate ambiguity for the larger aim of understanding the presented statements which might prove useful and helpful.
3. Preliminary emphasis is placed on seeking to understand rather than on searching for inconsistencies or other confusions.
4. We seek to understand the ideas in their most cogent form and actively attempt to extract an accurate interpretation in an effort to resolve contradictions. If more than one view is presented, we choose the most cogent emerging perspective — and, if possible, secure the key ideas interactively with the presenter.
5. Since the meanings of translations or interpretations depend upon an interdependence with background beliefs and behavior, some indeterminacy or uncertainness is unavoidable.
6. Once the ideas, exposition, or argument have been reliably identified and articulated with any irrelevancies dismissed, the resulting account can be properly critiqued.
Money and marriage are examples of observer-dependent phenomena: something is only money or a marriage because we say so.
Both brains and minds are observer-independent phenomena. They are what they are and they do what they do regardless of what an external observer says or thinks.
The metal conductors, silicon chips and the bumps and hollows on an optical disk in a PC are observer-independent items.
But whether or not the PC is carrying out computation, that is, the meanings of the states of the metal conductors, the chips, and the optical disk, are observer-dependent matters. We ascribe meaning to the states of the computer; the meaning is not intrinsic to the machine.
This is equally true of an abacus, of the device you are using to read this, and of the most advanced post-Von Neumann quantum computer. The physical components of all these devices are observer-independent. But whether they are carrying out computation and what computations they are carrying out, that is, the meanings of the observer-independent physical states of the device, are observer-dependent matters.
Computation therefore cannot cause consciousness. To think so is to make a category error.
Computation as we know it or as we presently define it -- the mechanical stacking of 0's and 1's -- cannot cause consciousness. To think so is to make a category error.
I think it follows that a machine that would be conscious would need to be more than a sophisticated abacus. It would need quite a few things more perhaps. E.g.
1. A first person experience of the world. It needs to be an entity that can wander around and see by itself what elephants and rainbows look like.
2. The understanding that words relate to this world, that sentences can describe it. That text has a meaning in the world, and also a force. Words have consequences in the world. It's called communication.
3. A capacity for infinite self-reflecting loops. It needs to be conscious of itself, but also conscious of the world and of being in the world, and conscious also of what it means (roughly) to be conscious, i.e. be consciously conscious. And be conscious of being consciously conscious. And conscious that others around it (them humans) are conscious, etc. etc.
More?
Why does a machine have to be conscious?
Every computation has the property of not being able to bear conscious life. Conscious life can appear only in a freely evolving process, without a program forcing the process. Be it quantum computed, or however computed.
Does this apply to the conscious life of animals?
If AI is part of the evolution of the universe then it can be understood as part of nature.
Of course. But it's not a freely evolving process.
Why not?
I believe so, yes.
Because there is a split between the process and the program directing it. Of course the program evolves freely when set in motion, but the process it directs is programmed and thus not free. It depends on the program inserted by us.
Just as humans cannot fly, teleport, or calculate the distance to the moon by looking at it. Programmed.
You really think a dog is conscious about the fact she's conscious? It would impair the playful character.
That's not programmed. That's part of the free process.
Ok, same as AI.
No. In AI, everything is programmed by a program you can point to. Where is the program in our brain?
The way the brain functions. It's a finite object.
And yet, something does. A physical system, the body, the brain, underpins conscious thought.
Quoting Jackson
It doesn't have to be but it's conceivable.
Yes, but where is the program directing the process?
I understand that I don't want to be "celebrating ignorance" like this:
https://thephilosophyforum.com/discussion/comment/698236
The program is the process.
But there is no program guiding it.
The process is not a program. A program resides external to the process. That's exactly my point.
Which I do not agree with. No need to repeat this debate.
It's not a matter of opinion.
I did not say it was.
Then where is the external program in our brain?
Not external.
Good point. But she's conscious. And she knows you're conscious. It's just that she can't reflect about it.
What does the program say? Where in the brain is it situated? If there is such an internal program, how does it apply to the process in the brain? From the inside or from the outside? Either way, the two are separated.
She can act though. The jealous little lady once faked a painful hindleg when I paid attention to another dog. She just dropped to the ground, crying like a little baby. She got mad at me when I tried to help her! Poor thing... What a bitch!
No. Crudely put, the body is one necessary part of consciousness.
Quoting GLEN willows
I'd be interested to see what you think count as all necessary parts of consciousness.
Quoting GLEN willows
Are those the only two options? On my view it does not make sense to talk about consciousness having a spatiotemporal location. As before, I think the notion is muddled to begin with. Consciousness seems to me to consist entirely of thought and belief. Thus, it emerges with individual creatures capable of forming thought and belief about the world and/or themselves.
Necessary parts of consciousness? Any of it. Some of these are awareness, emotions, memory - all of which have been proven to be affected by damage to the brain, not the foot, hand, etc.
If you think about this discussion, I’m actually NOT trying to define consciousness - you are. My stance at this point is just garden variety common sense. I don’t know what consciousness is - that’s the point. You can describe it, but can’t explain its origin, how it developed. I find your attempts at defining it on the one hand vague, while also being obvious. It’s awareness, intentionality, memory, and qualia. We all know that.
But despite not really knowing what it is, you - and other non-materialists - insist it will never be explained. Especially by science. I just find this puzzling. I can’t think of anything else that so many people insist is unexplainable.
Or am I wrong? Do you agree that science could eventually explain consciousness as a process of the brain? Possibly?
This is something I touched on in a response to Josh's above.
I think only living things are entities of the appropriate kind. A single-celled organism is an entity because of the way the cell wall separates it from its environment. It isn't conscious, but this individuation is a necessary step on the way to consciousness.
Maybe achieving that individuation, creating genuine entities, is the real Hard Problem.
You're doing the generalisation thing again. It's not a democratic decision, it doesn't matter how many people believe something. Focus on what individuals have said.
There's nothing wrong with generalizing, in principle. I'm surprised at how much rain we've had here lately. I'm surprised at how many people believe in conspiracy theories. I'm surprised at the number of people (I didn't even say philosophers) who believe consciousness will never be explained.
I have the right to an opinion.
Please stop the lecturing, I've tried to be accommodating, and I apologized to you. Let's agree to disagree on consciousness, we've covered the territory. Please?
Don't be silly Glen, we've barely scratched the surface. Of course there's nothing wrong with generalising, but it doesn't make for an interesting philosophical discussion. Neither does agreeing to disagree!
I agree Life is a deeper mystery than Thought because it already includes self-consciousness as a logical possibility.
Life is based on the genetic code, itself a form of language in which the recipes for various proteins are 'coded', i.e. written down in DNA code. The code is interpreted by ribosomes. They play the role Peirce called "the interpretant" in his theory of signs. But of course they are far from being the only ones, because there are other codes at play in life than just the genetic one. There are hormones and their receptors, for instance, and an endless list of regulators and receptors of myriads of processes. A lot of these processes have to do with what gets in and out of the cell via the membrane.
All this to say that life makes language possible. Literally it is a form of chemical language, and it creates (or is based on) many interpreters of language.
Then consciousness is not the same as intelligence. A system can be intelligent without being conscious.
Yes, I can imagine a computer that isn't conscious easily passing a Turing Test and giving intelligent answers to questions. Would that be true intelligence, though, or an example of Searle's Chinese Room?
In more ways than one regarding what you think and believe about our 'conversation' here. You're confusing what I've said with what others have said. I've no time for this.
Be well.
Intelligence. I don't agree with Searle on the Chinese room.
The modifier "true" doesn't function, or add anything meaningful, in your sentence. By definition, if an entity passes the Turing Test, then that entity functions indistinguishably from other intelligent humans to intelligent human observers. That the entity "is not conscious" implies merely that the entity is not interacting with its environment by generating a phenomenal self-model as the 'experiential focus' (or axis) of a phenomenal, continuously-updating environment/world-model within which it is an embodied agent (with a "theory of mind"). Like, for instance, an active, high-functioning sleepwalker, no? :chin:
By "true" intelligence, I mean actually intelligent, as opposed to mimicking intelligence.
Can you further explain this? What does "mimicking" intelligence mean?
Sure. Suppose you have a computer with no programming at all. All the switching operations inside it happen randomly. It is possible, though incredibly unlikely, that that computer could pass a Turing Test, just by random chance. It would seemingly be intelligent, but it wouldn't really be intelligent.
What if an artificial system can be productive, isn't that real intelligence?
Further: The distinction between natural and artificial is the problem.
There is some possible world where a computer randomly puts words together and produces great works of literature. Is that artificial system intelligent? I don't think so.
The unlikelihood isn't the issue. The issue is that we're getting an intelligent response from a "dumb" system (a system that produces things randomly). The response (passing the Turing Test) is seemingly intelligent, but fundamentally, the system just produces gibberish. It just happens that the gibberish it produces matches the gibberish we expect. That's not intelligence.
Okay, I was thinking of deliberately constructed machines.
It's possible a computer producing random words could pass the Turing Test a million times in a row. It's incredibly unlikely, but there's a possible world where that happens.
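Just how unlikely is "incredibly unlikely"? Here's a toy back-of-the-envelope sketch (the 27-symbol alphabet and the sample reply are my own illustrative assumptions, not anything from the thread) of the chance that a machine emitting uniformly random characters produces even one specific short coherent reply:

```python
import string

# Illustrative assumption: the machine emits characters uniformly at random
# from the 26 lowercase letters plus a space (27 equally likely symbols).
alphabet = string.ascii_lowercase + " "

# Illustrative assumption: one specific 22-character reply it would need to hit.
reply = "i think therefore i am"

# Each character matches independently with probability 1/27, so the chance
# of producing this exact reply is (1/27) raised to the reply's length.
p_single_reply = (1 / len(alphabet)) ** len(reply)
print(f"P(one specific {len(reply)}-char reply) = {p_single_reply:.3e}")

# A Turing Test involves many such replies, and passing a million tests in a
# row would raise this already-vanishing probability to a further power.
```

The probability of matching one short sentence is already around 10^-32; a full conversation, let alone a million of them, multiplies such factors together, which is why "possible world" here means possible only in the logical sense.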
We are free, but guided by some programme alright. Instinctual fears and desires are examples of it.