Does ontological eliminative materialism ascribe awareness to everything or nothing?
First, a disclosure. I am not a professional philosopher, but a layperson looking for an answer. But that should not stop a more qualified respondent from taking the discussion to an academic level, should they so choose.
To my understanding, there are three major positions regarding the nature of the mind. The first is spiritualism, which asserts that mental states cannot be completely explained physically. Another is reductive materialism, which asserts that both environmental and emotional sense experience can be explained by an entirely physical, logically consistent, maximally accurate predictive and regressive model. However, it does allow that there might be qualities of being human that it does not encompass: what I would venture to call existential incompleteness. Eliminative materialism does away with that last addendum.
I assume, however, that even eliminative materialism recognizes consciousness as a concept referring to our awareness. So, my question is: to what does eliminative materialism ascribe awareness, in what amount, and by what physical criteria?
For example, are inanimate objects more aware than a vacuum? Do larger structures, such as societies and solar systems, inherit some of the awareness properties of the individuals populating them? Does self-awareness exist in the components of an aware life form, for example in neurons, the instant they resolve to a sense of self-differentiation for a particular person? Will machines with human-level intelligence (designed with similar values) possess qualitatively indistinguishable existential experience? If a self-aware life form is disassembled and reassembled, does it inherit all its existential properties, as if it were instantly transported to its reconstruction environment? If a life form is disassembled and reassembled in copies, do all copies inherit at the initial instant all the existential properties of the original? Is there a difference between being and not being for a subject, assuming the sense data is incorporated in its mental state (i.e. the brain, computer memory, etc.) in the end?
I believe some eliminative materialists contend that “consciousness” doesn’t even exist, that it is folk psychology. To them, the concept should, or will, be eliminated in time, with new discoveries in neuroscience.
But check out “embodied cognition”, which I believe is superseding the computational theory of mind.
Let me elaborate on where I see the parallels. According to relationalist interpretations, such as that of Leibniz, time is an ordering. Concurring with some post-relativistic ideas, time is not a changing property, but merely a human faculty through which the subject rationalizes its temporal beliefs. Thus the intentionality (if I use the term correctly) is not changing, but is associated with a particular state of mind that the subject possesses in a temporal continuum of chronologically ordered versions of itself. Essentially, the assertion is that time separation is an illusion, a consequence of our evolution and the natural law. However, if this is the case, it follows by analogy that space separation is an illusion as well: we are subject to spatial relations governed by natural law, which our mental faculties have evolved to appreciate. Assuming that time and space are indeed an illusion (although I am not necessarily making the claim), it follows that our self-concept has to be an illusion as well. It is merely a mental faculty that appreciates the natural law affecting our embodiment. But the embodiment is not particular to anything, except its self-awareness.
If this is what eliminative materialists mean, it would seem to resemble a sort of Spinozistic pantheism or Leibnizian relationalism. Or do they mean that the self-concept, as a mental faculty, is unnecessary and must be eliminated?
Quoting NOS4A2
I will, thanks. From a brief reading, I am not sure whether they are trying to localize awareness to physical form or delocalize it, but it is related.
'Spiritualism' is a strange choice of word in the context. It is usually used in reference to the Victorian interest in spirit-mediums, seances, and the scientific analysis of psychic phenomena.
I think a better route into the particular issue is via the terminology of the 'explanatory gap':
Another route is David Chalmers's original paper, Facing Up to the Problem of Consciousness; in fact, I think this is probably the first thing to read. As he puts it:
[quote=David Chalmers]It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.[/quote]
Daniel Dennett's entire career is dedicated to arguing that the hard problem doesn't really exist, and/or is not really a problem. He argues that the cumulative consequence of billions of cellular reactions is to give rise to the persuasive illusion of conscious experience. Now, of course, there is an immediate objection to this claim, which is that an illusion is something that can only be entertained by a subject. To me, this objection is total, fatal, and complete. But Dennett has ploughed on regardless, writing book after book about his proposal which according to critics is so preposterous as to verge on the deranged.
Another tack on the whole problem is the modern iteration of panpsychism. That is nearest to your attempt, in the last paragraph, to ascribe consciousness to primitive objects, even such as sub-atomic particles. That is the subject of a well-known paper by Galen Strawson.
And finally, mention has to be made of Thomas Nagel, whose well-known paper What is it Like to be a Bat? is amongst the most famous contemporary philosophy essays.
I hadn't heard the term "eliminative materialism" so I looked in Wikipedia:
Eliminative materialism (also called eliminativism) is the claim that people's common-sense understanding of the mind (or folk psychology) is false and that certain classes of mental states that most people believe in do not exist. ... Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire. ... Eliminativists argue that modern belief in the existence of mental phenomena is analogous to the ancient belief in obsolete theories such as the geocentric model of the universe.
I don't believe in the mind, I experience it and observe its effects in the behavior of myself and other people. Eliminative materialism sounds like a regurgitation of behaviorism. It's like saying life doesn't exist, just chemistry and associated electrical activity. Come to think of it, some people believe that.
I can study belief and desire and, based on the results of that study, predict human behavior. What else does it take to exist?
:up: That’s exactly what it is.
... hold noses... access handkerchiefs, gas masks...
Which is such a bizarre idea. All their conclusions are based on, well, experience. Or they are based on direct intuition. Either way, since they would still have to experience the intuition and be aware of it, ALL their conclusions are fruit of a poison tree.
I think they mean something different when they say "exist" than I, and apparently you, do.
Thanks. You have provided me with quite a few pointers. I first have to check them out before I can get back with a meaningful reply.
To be honest, though, I am still left puzzled as to what exactly eliminative materialists believe. I suspect that I won't find out without some thorough research. Initially, I thought that they were denying the self-concept, but now I am starting to think that they are avoiding questions of existence.
I think I probably wasn't clear. We know the mind - what it is and how it works - the same way we know other things, by observing the world, in this case, primarily the behavior of other people, including their words. We also know it from the inside, from our own personal experience. Then, those two get combined as we imaginatively come to understand that other people have internal experiences that are similar to ours.
Is that common sense? I wouldn't have called it that but maybe you would.
Edit: Of course, I understand that to some of those questions, "we don't know yet" is a perfectly valid response. But it just diminishes the analytic value of the statement somewhat.
It's clear to me that the mind is different from the brain. I guess I'd say "obvious," although I acknowledge that what's obvious to one person isn't to another. When I talk about the brain, I use words like "neuron," "cortex," and "cerebellum." When I talk about the mind, I use words like "understand," "love," and "perceive." The metaphor I often use is of a television. When I talk about the television device, I talk about LEDs, antennas, and speakers. When I talk about the program I'm watching on the TV, I talk about the sound quality, the colors, the images, and I guess even the basketball game I'm watching.
Does that seem obvious to you?
Quoting T Clark
That is why I used the term "common-sense" previously. I meant that, albeit privately experienced, the mind is a widely observed phenomenon. But I still struggle to find the scientific value of this statement.
Quoting T Clark
But since the facets are related, you might be talking about picture quality but mean a leaked capacitor. How do you differentiate, unless you can switch the program broadcast or change the TV? For the analogical mind-body case, I think this is the real problem: it cannot be done.
Going back to your first statement: if the brain can manifest without a mind, because they are separate, then how do you know that you share your experience of the mind with the other people around you? After all, if all brains have developed a faculty for self-differentiation, they would reaffirm your belief. If the brain cannot manifest without a mind, what would be the distinguishing feature between your conceptualization of the brain and that of an eliminative materialist? (That is as much a question to you as to the materialist, and goes back to my original post. After the clarification by Wayfarer, I can ask: are materialists panpsychists?)
Quoting T Clark
Nothing sounds obvious to me anymore. Epistemologically, that is. :)
Well, clearly the brain can exist without the mind. People die or sink into a permanent vegetative state. The mind is gone, but the brain continues. As for the mind existing without the brain: life cannot exist without chemical processes. Do you think life is just chemistry? Can you tell the difference between chemistry and biology? If not, I doubt you and I will be able to discuss this subject very productively.
Quoting simeonz
"The mind is a widely observed phenomenon" says everything that needs to be said. Everything is "a widely observed phenomenon." That's how they come to exist for the observers.
Quoting simeonz
Sorry - but a leaked capacitor is (I imagine) a piece of metal with goo all over it. Poor color quality is a term applied to an image of something when the color of the image doesn't match the color of the original. They're completely different things. Is an iron bar something different from 10^24 iron atoms? "Hey, please hand me 10^24 iron atoms."
Would I be accurate in saying then, that eliminative materialists don't deny personal experience, but just deny private aspects of personal experience that do not manifest in nature?
Then what would differentiate eliminative materialists from pantheists or panpsychists, aside from sentiment? In particular, does materialism deny awareness and self-awareness as a continuous spectrum for systems of different complexity? Would they consider an ecosystem or a social system to be aware, or to have sense experience at least in principle similar to ours? Is my awareness, and my personal experience thereof, related to that of the ecosystems and social systems of which I am part? Or to put it in simpler terms: assuming the position of eliminative materialism, how would one precisely differentiate our sense experience from that of any other abstract system, simpler or more complex?
(This unintentionally alludes to the hard problem of consciousness that Wayfarer wrote about.)
I am sorry if I ramble a bit.
This says nothing about explanations. Whether something can be explained hinges significantly on whom we're explaining it to, what their criteria are for explanations (if they have any, and their judgment about whether it's a successful explanation isn't essentially arbitrary), their psychological biases, etc.
In other words, explanations, and whether any set of words counts as an explanation, are a completely different can of worms than what things are ontologically.
So, the light from my flashlight is identical with the flashlight.
Edit: For this purpose, we can call a dead brain a non-brain; at least no more a brain than a matchbox is.
Quoting T Clark
The problem is that, according to a materialist, the mind is not perceived first-hand by itself, but is only attested to by the brain. Since the brain does not always attest to externalities, but sometimes to emotions and intuitions, the mind stops being an observation and becomes a shared sentiment. (Mind you, I am not necessarily defending materialism as a belief, just its deductive method.)
Quoting T Clark
Ok. But the image is ultimately the result of LEDs and liquid crystals and capacitors and antennas and electromagnetic processes. The "image quality" is just an aspect of the end result presented on the screen, which is also a facet of the events produced by the underlying mechanisms. The term conceptualizes this aspect, but it does not change the nature of the televising process in substance. An eliminative materialist would argue that there is no image (even less so, image quality) as a separate phenomenon, just a variety of actual events and mechanisms, treated as a unit when conceptualized.
Quoting T Clark
Let me restate my question. Do you think that a properly functioning brain, in all its biological aspects, can exist without manifesting a mind?
For this purpose, a dead brain can be assumed to be a non-brain. No more a brain than a matchbox is.
Quoting T Clark
Yes, but if I were a materialist, I would claim that the mind is not perceived first-hand (such as by itself), but is merely attested to by the brain. And the brain does not always attest to externalities; sometimes it purports intuitions and emotions. A materialist would then argue that the mind is simply a shared sentiment or concept.
Quoting T Clark
Yes, but the TV is still a system of LEDs, liquid crystals, capacitors, antennas, electromagnetic events, etc. The image is an aspect of the end result as seen by the viewer, which is one facet produced by the underlying processes. A materialist would argue that "image quality" is just a term or conceptualization. That there is no "image quality", but just a state of the screen crystals and a number of preceding steps that evoke it.
I don't know and I don't see why it matters.
Quoting simeonz
Either I don't understand this, I don't agree with it, or both.
Quoting simeonz
If you really don't think that a television set is different from the image produced on the set, you and I are too far apart to have a fruitful discussion.
Insofar as we're talking about it at or inside of the surface of the flashlight I'd agree with that.
Behaviourist after torrid love-making session: “That was wonderful for you, dear. Was it good for me?”
Well, a pantheist or panpsychist believes that all things have mental activity, while an eliminative materialist might believe that only things and beings capable of decision-making and autonomous action have the capacity to experience things. This is because the only way that a thing made of atoms can do those high-level tasks is if it has some sort of software. Software is strange to the materialist in the same way that consciousness is strange. 100 years ago, it would have been hard for anyone to believe that we could create something like AI from an object made only of atoms. People used to claim that you would need a soul or some other weird thing to create an intelligent entity. We now know that intelligent and autonomous entities can be created from mere atoms. So, why couldn't consciousness be created from mere atoms? It's strange to imagine, but it's also strange to imagine that intelligence can be created from atoms. We now know that matter can arrange itself in very complex and interesting ways.
Quoting simeonz
They do not deny that it is a spectrum but they don’t have to think that it begins on a molecular level or that all objects are part of the spectrum.
Quoting simeonz
Probably not, because ecosystems and social systems are not unified systems in the same way that an organism is. An organism is a unified embodied system composed of parts called organ systems, which are composed of smaller parts called organs. All these organs work very closely together to maintain the organism. The same cannot be said of social systems. People who are part of a social system sometimes contribute to it and sometimes don't, and they don't make their entire existence about the social system. The very concept of a social system or ecosystem is a lot more vague than the concept of an organism. Scientists rarely disagree about where an organism's body begins and ends. They do disagree about where an ecosystem or social system begins and ends, though. So, you basically also need to be part of a compact system to experience mental states.
Quoting simeonz
I don’t fully understand this question. What do you mean by an abstract system?
Quoting simeonz
On the Wikipedia page is Quine's question:
This neatly distinguishes a strong eliminativism (ascribing consciousness to nothing) from mere identity-ism (ascribing consciousness to some things, some brain states). The former would be what causes horrified reactions from many (see above), and the latter is accepted by @Terrapin (I think), and @TheHedoMinimalist (I think).
Note we are also given an intermediate option of ascribing some modified notion of consciousness to some things, some brain states. That would be my choice, and the likely scale of modification is enough to make me often want to side with the strong camp, especially when identity-ists embrace the folk-psychology of consciousness (e.g. mental words and pictures) with so little care or modification as to suggest Cartesian dualism.
Then there is, as the OP says, the further option of ascribing consciousness to everything, and the subsequent question whether this could tell us anything that ascribing it to nothing wouldn't have told us anyway. (Or not, as argued here.)
My question:
Doesn't ascribing consciousness to any machines with "software" set the bar a bit low? Are you at all impressed by Searle's Chinese Room objection?
Other questions arise. Assuming we treat consciousness as a spectrum, rather than a binary property (which, to my way of thinking, concurs with our understanding of how brain conditions affect our awareness), it seems natural to ask how much capacity for memorization, analysis, and responsiveness a system has to have in order to be considered minimally conscious. Some such criterion has to exist, at least in principle, for both materialists and pantheists, even if the threshold is set at the vacuum state.
Let's examine some actual cases. The human brain has a greater overall capacity for information processing than that of animal species. Both have (in general) greater analytical performance compared to plants. Doesn't it follow that animals are more conscious than plants? If so, doesn't it follow that they are actually conscious? Plants, on the other hand, are capable of some sophisticated behavior (both reactive and non-reactive), if their daily and annual routines are considered on their own time scale. Doesn't that make them more conscious than, say, dirt?
But is dirt completely unconscious? Particles cannot capture a substantial amount of information, because their states are too few, but they have reactions as varied as can be expected. After all, their position-momentum state is the only "memory" of past "observations" that they possess. But it isn't trivial. One could ask why they wouldn't be considered capable of a microscopic amount of awareness; not by virtue of having mass, but because of their memory and responses. If not, there has to be some specific point on the scale of structural and behavioral complexity at which we consider awareness to become manifest.
A different approach that leads me to similar conundrums is to think of the possibility of brain engineering. How many neurons (or similar structures) would we need to create an organism whose behavior can be considered minimally sentient: five, five hundred, five million? How many neurons can process information in a manner that appears minimally intelligent? Why not one neuron? Wouldn't a single neuron carry sentience alone, assuming it had some suitable interface to a not immediately hostile environment?
Quoting TheHedoMinimalist
That is completely fair. But then they must, at least in principle (even if it is currently unknown), hypothesize a function that maps states of matter to degrees of being aware/conscious/sentient, a set of states that are considered non-sentient, and a boundary between the two. There is nothing incoherent in that, but it poses interesting questions.
Quoting TheHedoMinimalist
I would like to illustrate how I think societies and ecosystems are similar with respect to consciousness using a thought experiment. Suppose that we use a person for each neuron in the brain, and give each person orders to interact with the rest as a neuron would, but using some pre-arranged conventional means of human interaction. We instruct each individual what corresponding neuron state it has initially, such that it matches the one from a living brain (taken at some time instant). Then we also feed the peripheral signals to the central nervous system, as the real brain would have experienced them. At this point, would the people collectively manifest the consciousness of the original brain, as a whole, the way it would have manifested inside the person? Or to put it differently, do eliminative materialists allow for consciousness nesting?
While societies and ecosystems are qualitatively very different from biological organisms, they have some resemblances: traditions and legislation as collective memory; public sentiments, herd mentality, and group thinking as analytical processes; and various internal and external effects, such as politics and environmental changes caused by human activities. Assuming that, for the eliminative materialist, consciousness exists in a continuum, manifesting as state and structure, I was wondering if it would be applicable (in some amount) to supra-structures made of other conscious entities?
Quoting TheHedoMinimalist
The term "abstract" was probably inaccurate, but the idea was to be able to describe all types of conscious structures not by exhaustion, but by a principle. In other words: not to name the human condition as conscious, or the animal one, but to use a rule that incorporates structures of various scales and appearances.
For language translation or anthropomorphic simulations, AI can focus on a fixed training set with fixed evaluation criteria. Artificial general intelligence (AGI), on the other hand, which is more akin to human sentience, is about reinforcement learning. The software agent has to pursue a goal, which is to gradually minimize some penalty, and it learns to do so by acting imperfectly (or by projecting the outcome of its actions first, if possible) and by simple trial and error. That is, a lot of the behavior is constructed through experience and goal reevaluation.
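To make the trial-and-error loop concrete, here is a minimal sketch of tabular Q-learning on a toy corridor world. The environment, state count, and reward values are invented for illustration; this is only meant to show the shape of "act imperfectly, observe the penalty/reward, update" that distinguishes reinforcement learning from training on a fixed data set.

```python
import random

def q_learning(n_states=6, n_episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Toy corridor: start at state 0, action 0 moves left, action 1 moves
    right; reaching the last state ends the episode with reward 1."""
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action] estimates
    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit current estimates,
            # occasionally explore at random (the "trial and error")
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # move the estimate toward observed reward plus discounted
            # estimate of the best future value (goal reevaluation)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)  # for reproducibility of this illustration
q = q_learning()
# Greedy policy per interior state: after training it should move right.
policy = [max((0, 1), key=lambda act: q[s][act]) for s in range(5)]
```

Nothing here is pre-coded about *which* moves are good; the agent starts with all estimates at zero and converges on the right-moving policy purely from interaction, which is the point of the contrast with a fixed training set.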
In this respect, the classical Turing test is outdated, because it limits the scope of the observations to static behavior. It cannot evaluate the progress the system makes in attuning itself. In the experiment, the man in the room does not exert any effort to evolve with respect to some goal; he acts more like a classical pre-trained AI, which distances him from human sentience as well.
Also, it is worth noting that in the experiment, the man in the room acts like a cog in a machine. Akin to a neuron in the brain, but possessing individual consciousness, the man infers that no other consciousness manifests, because his own consciousness is not employed. For this to be true, consciousness must apply in only one way in any given situation. This relates to my previous post, where I asked whether eliminative materialists allow consciousness to nest (and used a thought experiment to illustrate what I mean).
But in any case, translation requires only very primitive intelligence. Not to mention that I agree that human intelligence may be extended into the environment (extended cognition, from the post above), so even AGI, being individually engineered, cannot encompass some evolutionary aspects (unless we hard-code them): empathy, social instinct, self-preservation, etc. In other words, any system thrown out of historical and social context can be considered unintelligent. The man in the closed room is starved of environmental interactions.
I sort of agree with Dennett's postulation. The "persuasive illusion", however, just simply is consciousness. There's nothing illusory about it.
I don't think that he adequately answers Chalmers's objections, though.
Has anyone in the Philosophy of Mind taken up Merleau-Ponty? I think that the problem will begin to disappear as embodied cognition becomes more prevalent.
Edit: To avoid starting another thread, I just wanted to bring up that bivalves can feel pain and therefore have consciousness. This, for me, has partially resulted in a crisis of faith as a vegetarian, but I think it may offer something useful for anyone concerned with the philosophy of mind. That a decentralized network can still be conscious has interesting implications for the field.
They are, therefore, subjects of experience, not simply objects, even if very simple examples. Life could be seen as the emergence of the subjective. And there’s no place in Dennett’s philosophy for the subjective.
[quote=Thomas Nagel]The scientific revolution of the 17th century, which has given rise to such extraordinary progress in the understanding of nature, depended on a crucial limiting step at the start: It depended on subtracting from the physical world as an object of study everything mental – consciousness, meaning, intention or purpose. The physical sciences as they have developed since then describe, with the aid of mathematics, the elements of which the material universe is composed, and the laws governing their behavior in space and time.
We ourselves, as physical organisms, are part of that universe, composed of the same basic elements as everything else, and recent advances in molecular biology have greatly increased our understanding of the physical and chemical basis of life. Since our mental lives evidently depend on our existence as physical organisms, especially on the functioning of our central nervous systems, it seems natural to think that the physical sciences can in principle provide the basis for an explanation of the mental aspects of reality as well — that physics can aspire finally to be a theory of everything.
However, I believe this possibility is ruled out by the conditions that have defined the physical sciences from the beginning. The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms or how the world appears to their different particular points of view. There can be a purely physical description of the neurophysiological processes that give rise to an experience, and also of the physical behavior that is typically associated with it, but such a description, however complete, will leave out the subjective essence of the experience – how it is from the point of view of its subject — without which it would not be a conscious experience at all.
So the physical sciences, in spite of their extraordinary success in their own domain, necessarily leave an important aspect of nature unexplained. [/quote]
Mind and Cosmos
Which is, in my opinion, the same issue as that stated in ‘the hard problem of consciousness’.
I don't know that I would say that Dennett necessarily rejects subjectivity, but, then again, I honestly haven't read too much Dennett, and, so, couldn't tell you too much about him either way.
Nagel makes a good point about the physical sciences. I had always thought of the hard problem of consciousness as a critique of AI, but may have just conflated a set of theories at the time.
I don't think that Dennett's critique is necessarily on point, but I do sort of agree that qualia do not necessarily refute physicalism. Qualia can just describe aspects of physical states.
Do you here allude to, or have you just re-invented, the China brain?
Also relevant: this speculative theory of composition of consciousnesses. It also attempts to quantify the kind of processing complexity with which you (likewise) appear to be proposing to correlate a spectrum of increasingly vivid consciousness.
Especially since it is immediately sequential to certain brain results, and contains and represents those products in a unified way, interrelating all the objects and providing seamless continuity with the previous brain analyses.
Quoting bongo fury
Interesting. Thank you.
That, I am unsure of. Plants respond to things in nature. Why shouldn't plants feel? I would bet that the fact that any living thing responds to stimuli means that it does in some way feel. In order not to starve, however, you do just have to make an arbitrary distinction. Sentience only extends to decentralized networks because I can't figure out how to get the nutrients that I need to survive otherwise.
I am a vegetarian as well and I have made a similar arbitrary commitment. What I mean is, that at least logically, I cannot deny that plants may have some degree of feeling.
In a discussion of this nature - concerning generalizations of consciousness, I think one cannot make hard assertions. I am just trying to examine (for my own sake) the consistency of the arguments and the value of the statements. But the validation, by whatever means become available, if they become available, will probably not happen in my lifetime.
You’re right, I misunderstood what eliminativism was. I was surprised to learn that some eliminativist philosophers even deny the existence of pain. This is certainly not what I believe. I actually think the proper term for my position is functionalism.
Quoting bongo fury
I would like to respond to Searle’s Chinese Room argument with the following argument:
P1: If an AI is capable of learning through exposure to stimuli, then it likely has some mental understanding of what it has learned.
P2: AIs are capable of learning through exposure to stimuli.
C: Therefore, AIs likely have some mental understanding of what they have learned.
To explain why I think P1 is true, imagine that you are training a dog. The dog listens to your commands and learns to respond appropriately to them. You would likely believe that the dog is indeed capable of hearing. Similarly, if there were an AI that could learn Mandarin by interacting with Mandarin speakers without the need to pre-code the knowledge of Mandarin into the AI, then it is likely capable of some type of mental understanding of Mandarin. It’s not clear to me why we have more reason to believe that the dog is capable of mental understanding when learning something but not an AI program. To show that P2 is true, I would like to mention that AI programs which are capable of learning through interaction already exist. For example, AlphaZero is an AI program which taught itself how to play chess while only being pre-coded with the rules of chess. I imagine that it must be capable of some sort of mental understanding to constantly improve its strategy and adapt its playing style to beat every human and AI player. It might even receive positive and negative reinforcement through the experience of positive emotion after winning a game and negative emotion after losing a game. Overall, I would say that we likely already have AI with some mental capacity.
Quoting simeonz
Yes, I think it does because they are capable of more autonomous action and more complex decision making.
Quoting simeonz
Well, I’m not sure if plants have mental activity of any sort. This is because plants do not seem to be capable of autonomous action or decision making which is remotely similar to that of humans. They also probably do not possess sufficient energy to support something like mental activity. Plants are more likely to have mental activity than dirt though. This is because dirt doesn’t seem to be sufficiently compact to form an embodied entity which could support a mind.
Quoting simeonz
I don’t think that the view that particles could have some microscopic amount of mental activity could completely be dismissed, but I think the most reliable hypotheses about the types of things which are conscious come from the most certain beliefs which we hold about our own consciousness. I cannot know if you are really conscious, but my best educated guess is that you are, since you are the same type of thing as me (a human). Furthermore, it seems that animals are capable of experiencing certain things as well. This is because if I tell my dog that it’s time to eat, he will respond by running to his food bowl. This implies that he is capable of listening the same way that I and other people are. AI programs also display characteristics indicative of mental activity. For example, if I am speaking to an AI chat bot which seems to respond to me as though it is reading and comprehending what I am saying, then it’s hard for me to conclude that the AI bot is less likely to have mental activity than an animal without simply being prejudiced in my judgements. On the other hand, I don’t observe plants or particles performing tasks which I can recognize as being indicative of the presence of mental activity. Therefore, my best educated guess is that only humans, animals, and AI have mental activity.
Quoting simeonz
This is difficult to precisely answer but I would make an educated guess and say enough to form a microscopic insect. I don’t think that my theory has to explain everything precisely in order to be a plausible theory. The same epistemic difficulties exist for the binary view of consciousness which you accept. The binary view also has to explain which things or beings are conscious. It also has to explain why animals should be considered just as conscious as humans or whether humans with serious neurological issues are just as conscious. Accepting the spectrum view would allow you to demonstrate that human beings are more complex in their mental activity than animals.
Quoting simeonz
So, I would like to start by distinguishing between an unrealistic thought experiment and an absurd thought experiment, explaining why I feel the thought experiment you are presenting me belongs to the latter category and why only the former category is relevant for most metaphysical discussions. Unrealistic thought experiments cannot actually occur in the real world but are still within the realm of possibility. For example, if you have a theory that a star will always be bigger than a planet, then I could ask you to imagine a planet that is bigger than a star. Because this scenario appears to be within the realm of possibility, it has some point to make even if we cannot ever find a planet which is bigger than a star. An absurd thought experiment, on the other hand, is not only unrealistic but is also not within the realm of possibility. For example, if you have a theory that all round shapes have no sides and I respond by asking about whether round squares have sides, then I am giving you an absurd question/scenario because round squares are not within the realm of possibility. The reason why I think that the thought experiment you are providing me is absurd is because humans cannot remotely behave like neurons while maintaining their identity as humans or even humanoid creatures. This is because humans would have to carry out interactions as rapidly as neurons do with unrealistically perfect synchronization. This would require humans to have radically different body shapes and brain composition. In other words, they would have to look like giant neurons. Thus, the resulting creature would not resemble an ecosystem or a social system. It would just be a giant monster. I cannot even properly imagine your thought experiment, similarly to how I cannot imagine a round square, and so I will have to abstain from responding.
No correction intended, I was just trying to orient myself on the wikipedia map of positions. (I think I'm happy here, but in some respects also here and here.)
Quoting TheHedoMinimalist
Yes, point taken. You are more likely to ascribe consciousness to non-fleshy as well as fleshy brains. (?)
Thanks to you and @simeonz for the Chinese room arguments. I will be pleased to respond later, for what it's worth.
Hi.
Quoting simeonz
Wasn't that Searle's point? That the test was useless already, because an obvious zombie (an old-style symbolic computer) would potentially pass it?
Not that everyone then or now finds it obvious that an old-style computer would be a zombie, but the man-in-the-room was meant to pump the intuition of obvious zombie-ness. That was my understanding, anyway, and I suppose I tend to raise the Chinese room just to gauge whether the intuition has any buoyancy. Lately I gauge that it doesn't, much.
Quoting TheHedoMinimalist
I agree, and I don't know that I could without a rather clear intuition that all current machines are complete zombies.
Quoting TheHedoMinimalist
I must say, I find it easy to intuit that all insects are complete zombies, largely by comparing them with state of the art robots, which I likewise assume are unconscious (non-conscious if you prefer). I admit there is an element of slippery slope logic here - probably affecting both "sides" and turning them into extremists: the "consciousness deniers" (if such the strong eliminativists be) and the "zombie-deniers" (panpsychists if they deserve the label).
I agree it is interesting to poll our educated guesses (or to dispute) as to where the consciousness "spectrum" begins (and zombie-ness or complete and indisputable non-consciousness ends). I vote mammals.
Related to that, it might be useful to poll our educated guesses (or to dispute) as to where the zombie "spectrum" ends (and consciousness or complete and indisputable non-zombie-ness begins). I vote humans at 6 months.
Quoting TheHedoMinimalist
I would like to contribute to my earlier point with a link to a video displaying vine-like climbing of plants on the surrounding trees in the jungle. While I understand that your argument is not only about appearances, and I agree that analytico-synthetic skills greatly surpass plant life, it still seems unfair to me to award not even a fraction of our sentience to those complex beings.
Quoting TheHedoMinimalist
My thinking here is probably inapplicable to philosophy, but I always entertain the idea of a hypothetical method of measurement, a system of inference, and conditions for reproducibility. If we were to observe that our muscles strain when we lift things, and conclude that there is a force compelling objects to the ground, this assertion wouldn't be implausible. Yet, it wouldn't have the aforementioned explanatory and analytical qualities. But I acknowledge that philosophy is different from natural sciences.
Quoting TheHedoMinimalist
I don't accept any view at present. I am examining the various positions from a logical standpoint. But, speaking out of sentiment, I am leaning more towards a continuum theory.
Quoting TheHedoMinimalist
The peripheral input could be fed in as slowly as necessary to allow a relaxed scale of time that is comfortable for the human beings involved. This doesn't slow the brain down relative to the sense stimuli it receives, only to time proper. But real time does not appear relevant for the experiment.
This seems to me to suggest that John Searle wanted to reject machine sentience in general.
Quoting bongo fury
For me personally, the value of the discussion is the inspection of the logical arguments used for a given position and the examination of its distinguishing qualities. Without some kind of method of validation, meaning - any kind of quality control, it is difficult to commit. I would like a scale that starts at nothing, increases progressively with the analytico-synthetic capacity of the emergent structures, and reaches its limit at a point of total comprehension, or has no limit. It simply would make interpretations easier.
[quote=Daniel Dennett] through the microscope of molecular biology, we get to witness the birth of agency, in the first macromolecules that have enough complexity to ‘do things.’ ... There is something alien and vaguely repellent about the quasi-agency we discover at this level — all that purposive hustle and bustle, and yet there’s nobody home ...
...Love it or hate it, phenomena like this exhibit the heart of the power of the Darwinian idea. An impersonal, unreflective, robotic, mindless little scrap of molecular machinery is the ultimate basis of all the agency, and hence meaning, and hence consciousness, in the universe.[/quote]
Daniel Dennett, Darwin’s Dangerous Idea: Evolution and the Meanings of Life (New York: Simon and Schuster, 1995), 202-3.
The philosophy of mind based on this view is that the mind is simply the harmonised output of billions of neurons, which produces the illusion of subjectivity.
The clever thing about this argument is that it echoes (probably unconsciously) other philosophies that declare the illusory nature of the self (which in some sense Christian and Buddhist philosophy also does). But the downside is that the human sense of being responsible agents also becomes part of the machinery of illusion. This is why Daniel Dennett only half-jokingly says that humans are really robots:
[quote=Daniel Dennett]I was once interviewed in Italy and the headline of the interview the next day was wonderful. I saved this for my collection it was... "YES we have a soul but it's made of lots of tiny robots" and I thought that's exactly right. Yes we have a soul, but it's mechanical. But it's still a soul, it still does the work that the soul was supposed to do. It is the seat of reason. It is the seat of moral responsibility. It's why we are appropriate objects of punishment when we do evil things, why we deserve the praise when we do good things. It's just not a mysterious lump of wonder stuff... that will out-live us.[/quote]
"Atheism Tapes, part 6", BBC TV documentary.
So in the same way that Richard Dawkins says that the Universe exhibits the 'appearance of design', so too do human beings exhibit the 'appearance of agency'. But really, there is neither design nor agency, except in the sense of molecular behaviour that creates the illusion of them.
Well spoken.
It is odd, though, that such illusory architecture should be the fundamental prerequisite for the human condition.
The jealous god dies hard. :smile:
Fair enough, I mostly suspect that insects are conscious because they are capable of moving. They also appear afraid when I try to squash them.
Well, I suppose that there’s some significant autonomous action from some plants so some plants might indeed have some mental activity. It’s hard for me to say but I’ll have to do more research and think about this topic more.
Quoting simeonz
Fair enough, I should have been more careful about ascribing a viewpoint to you.
Quoting simeonz
Well, in that case, it’s not clear to me if the giant being made of human neurons would have any mental activity because he would be thinking and acting extremely slowly. This is because the slow speed of the human neurons would imply that the giant being would be taking an eternity to even perform a really basic cognitive task like responding to a stimulus. His extreme slowness and largeness would make him seem more like a mountain that can make gradual movements rather than a being of any sort. This would make mental activity seem less likely for this being since it’s probably not necessary for such basic functions. Otherwise, we might as well conclude that a lifeless rock like Mars is conscious because it’s capable of micro-movements like tectonic plate activity. So, we could say that there must be a minimum speed of processing for mental activity to occur. Just like water molecules have to move rapidly in order for water to boil. Even if you think that functionalism does imply that the giant being would be conscious, it would still make functionalism, at most, a theory which is just as plausible as this minimum-speed view.
Quoting TheHedoMinimalist
This is still the question, for me. I think the OP is quite right that consciousness denial and zombie denial will both tend to lead to replacement of the vague binary (conscious/non-conscious) with an unbounded spectrum/continuum of umpteen grades (of consciousness by whatever name).
I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards.
In this case my complaint against the unbounded spectrum is,
Quoting simeonz
Apparently not, at least not by way of the Chinese room. He does say he suspects consciousness is inherently biological, but for other reasons.
Quoting TheHedoMinimalist
But wouldn't they appear that way if they were zombie robot insects?... if you can imagine such a thing... could zombie actors help? :lol:
https://www.imdb.com/title/tt0088024/
I think that appearance of consciousness is some evidence for consciousness. Insects could be zombies but they could also be conscious. The fact that they are capable of moving and looking afraid means that there is greater evidence of insects being conscious than there is evidence that they are zombies. My knowledge of my own experience while I’m behaving a certain way provides evidence that the observation of such behavior patterns likely indicates consciousness. To make an analogy, I don’t have to taste a particular piece of candy to have a reasonable belief that it is sweet. My past experiences with similar candies would suffice as evidence for a hypothesis that the candy is more likely to be sweet than non-sweet. Similarly, my past experience of having behavioral patterns and seeing that they are influenced by my mental activity provides evidence for the hypothesis that insects are more likely to be conscious than zombies. Why do you think they are more likely to be zombies?
The brain from the thought experiment (the China brain idea, as pointed out) includes all marks of sentience that Mars does not have - great memorization, information processing, and responsiveness (through simulated peripheral output.) The time scale is off, but I do not see how this affects the assumption of awareness. In the post-Einsteinian world, time is flexible, especially when acceleration is involved, so I wouldn't relate time and sentience directly.
I can better understand such claim at a smaller scale however. One might argue that sentience requires a number of discrete aspects, which might imply a minimal quantity of retention and processing units inside a sentient reasoning apparatus. But the claim would be in the dozens, not the thousands or millions, I imagine.
1) Theistic attitude. Eliminative materialists have a stoic disposition conceptually, whereas pantheists have a reverential or deifying one.
2) Subjectivity. Eliminative materialists may deny that the subjective is an intrinsic property of nature. Instead, they might argue that it is an emergent phenomenon, compelled by biology, by adaptation.
3) Metaphysics. Eliminative materialists might argue that existence does not inherently pose further questions - that metaphysical inquiries can be explained by the inability of the human species to articulate harmony (broadly speaking) with their environment and themselves.
A few personal remarks.
I do not think that the subjective can be considered more illusory than, say, hunger is. It has an evolutionary role. I do appreciate that not all awareness has to be bound to one's self, however. While researching this, I found out that octopuses have a distributed nervous system, such that their tentacles individually possess a sense of their environment. Yet the organism has a unified sense of self-preservation, which might imply that its intelligence still operates under a singular concept of self. I am reasonably accepting of space-time relationalism, which to some extent implies the same attitude.
Regarding metaphysical questions - I see them as a struggle for total comprehension. Which to me is an essential duty for a sentient being. They may not always produce constructive answers, but create productive attitudes.
If the brain from the thought experiment is supposed to have all the marks of sentience then I would have to disagree with that thought experiment. Perhaps my example of Mars wasn’t a very good example though. I think the marks of sentience require that the information processing and responsiveness happen in a somewhat timely manner. If the giant being takes literally like 1000 years to respond to a stimulus like having water quickly thrown at him because the human neurons are taking forever to follow their instructions, then it’s hard to imagine what the giant being would even experience. Would he experience 1000 years of neutral emotion followed by 900000 years of being pissed off because someone threw water at him, and then would he experience the emotional states associated with calming down for another 200000 years? It’s kinda hard to imagine such slow responses being influenced by mental states. Unless the being experiences time really fast. But how would he experience time fast with such a slow brain? Having a slow brain doesn’t seem to make time go fast. So, I think it’s more plausible to think that the being is simply not conscious.
But then, they don't claim to be philosophically important. They're what we have in common with all animals. Dennett is quite happy to grant us animal nature.
Quoting simeonz
But he doesn't, really. He says we appear to be subjects, but the appearance of subjectivity is, in reality, the sum of millions of mindless processes.
Because of my past observations of robots whose behavior, while suggestive of mental influence, was soon explained by a revelation: either that there was no robot after all because a human actor operated from inside; or of mechanics and software inside, whose operation I recognised as obviously non-conscious. The twist in the Chinese room, I guess, is to reveal a human (Searle) who is then revealed to be, in relation to the outer behaviour of the creature, a mere machine himself.
None of this can impress you if you have lost all intuition of the non-consciousness of even simple machines. I'm not sure how to remedy that, although...
... see the motor car analogy, below, and...
... also, at least bear in mind that arguments for "other minds" (arguing by analogy with one's own behaviour and private experience) were (I'm betting... will check later) designed to counter the very healthiest intuition of zombie-ness, which might otherwise have inclined us to doubt consciousness in even the most sophisticated (e.g. biological) kinds of machines (our friends and family).
So you might at least see that your intuition of zombie-ness is likely to have depleted drastically from a previous level, before your interest in AI perhaps? Not that that justifies the intuition. Perhaps zombie denial is to be embraced.
Quoting simeonz
Short answer: my pet theory here.
More generally, how about this silly allegory... Post-apocalypse, human society is left with no knowledge or science but plenty of perfectly formed motor vehicles, called "automobiles", and a culture that disseminates driving skills through a mythology about the spiritual power of "automotovity" or some such.
Predictably, a primitive science attempts to understand and build machines with true "automotivity". The fruits of this research are limited to sail-powered and horse-powered vehicles, and there is much debate as to whether true automotivity reduces ultimately to mere sail-power, so that car engines will eventually be properly understood as complicated sail-systems. And even now the philosophers remark sagely that engines may appear to be automotive, but the appearance of automotivity is, in reality, the sum of millions of sailing processes.
Do we hope that this society replaces its vague binary (automotive/non-automotive) with an unbounded spectrum, and stops worrying about whether automotivity is achieved in any particular vehicle that it builds, because everything is guaranteed automotive in some degree?
I told you it was silly.
I’m not really understanding how this twist is relevant. If the Chinese AI was pre-programmed with knowledge of Chinese then sure I agree it is likely simply following its instructions (of course, you could never simply pre-program a machine to speak perfect Chinese). The Chinese AI would have to be programmed to know how to learn Chinese instead through interactions with Chinese speakers because it’s impossible to simply hard code the knowledge of Chinese into the AI. It would probably require you to type more lines of code than the number of atoms in the universe. But I actually think that being able to follow very complicated instructions would also require consciousness. The question of whether the AI or the human really understand Chinese is seemingly irrelevant because the functionalist could simply claim that the ability to follow really complicated instructions mentioned in the thought experiment would require you to mentally understand those instructions in some way. Just as the human in the thought experiment cannot follow his instructions without mentally understanding them, the AI couldn’t do so either.
Quoting bongo fury
Well, I actually don’t consider cars to be autonomous or having consciousness as a whole. I think the car sensors are probably conscious and self-driving cars might be conscious as a whole. So, let me ask you a question. If the post-apocalyptic world had self-driving cars, how would the reductionist sages of that world explain them in terms of simpler mechanical processes? How would they even explain a seat belt sensor through simple mechanical processes?
If you ran into one, do you think you would owe it an apology?
Now, I could not contend whether, if the environment operates at a normal pace but the peripheral or central nervous system slows down, the subject might feel as if in a haze or slowed. However, if the environmental stimuli slowed down together with the entire nervous system, I do not see how the subject would notice any difference. And in my thought experiment, the stimuli are artificially slowed down in their arrival to the brain - as if reality slows down together with the cognitive and perceptual functions.
And lastly, I do not think that the pace of thinking is the right criterion for consciousness. Reacting fast is common for insects and animals. I realize that their instincts are wired to operate faster, but the information is still handled rapidly, and yet it does not make them as sentient as we are.
Did I say that I believe that self-driving cars can be upset about me running into them? I wouldn’t apologize if I ran into a fish in the water. This doesn’t mean that fish aren’t conscious.
Quoting simeonz
I have just thought of a different concern about the thought experiment. While physics doesn’t recognize absolute time, time is relevant to its study since it could impact physical processes. For example, humans can only survive for about 80 human years. How long could the giant being survive in human years? Unless their larger size would imply a much larger lifespan in human years than the normal human lifespan, they might die before they experience anything. I suppose you could imagine a hypothetical immortal giant being, but I think the conditions of the thought experiment would have to be pretty outlandish for there to even be a possibility of consciousness. It’s hard for me to comment on consciousness in a scenario which is so alien to me. Either way, I’m skeptical that this thought experiment would imply that ecosystems or social systems might have mental activity.
Some free will arguments have a similar logical issue. To me at least, the distinction between influencing your own decisions and being compelled by nature seems artificial. If you are an eliminative materialist or pantheist, you already manifest as part of nature, and therefore you would be acting as compelled by yourself. Similarly, in the case of self-awareness: how can you be tricked by yourself (your biological embodiment) into believing that you are yourself (a person), while you are in fact yourself (your biological embodiment)? If your embodiment is completely coextensive with you, and you are equivalent, how can you not be yourself? Or why should you be considered mechanical, any more than your embodiment should be considered conscious?
Quoting TheHedoMinimalist
I am not claiming soundness, only the following implication: that if machines can develop mental states, and since we can build machines out of people, it follows that mental states can be composed from other mental states with separate experiences. This would apply in the context of eliminative materialism, panpsychism, functionalism, etc. Although, I fail to distinguish those very well.
In any case, I am not forcing a statement. Everyone has the right to reserve their judgement.
Ask yourself this question - what does eliminative materialism eliminate? Unless you want to beat around the bush, the answer is one word: mind. The word ‘mind’ doesn’t correspond to anything real: what we take to be ‘mind’ is simply the snap, crackle and pop of billions of neural connections programmed by Darwinian algorithms for the sole purpose of propagation of the genome. That’s all there is to it.
It brings out how processing of meaningful symbols by a machine may be no more meaningful for the machine than processing of any other materials. Even a component of the machine obviously capable of attaching meaning to certain (e.g. English) symbols might be oblivious as to any meaning attaching (for others present) to other (e.g. Chinese) symbols in its possession. It brings out how symbolic processing may be merely syntactic and not semantic. Devoid of understanding. Unconscious.
Quoting TheHedoMinimalist
Yes, putting the thought experiment on a more realistic footing could suggest conditions under which we might expect genuine semantic processing to occur. Searle would insist on interactions with Chinese speakers and the environment spoken of... so that the alleged added semantics needn't turn out to be just more syntax.
Quoting TheHedoMinimalist
But my PC fits that requirement?! But I forget, you are happy to attribute consciousness in such a case. :gasp:
Quoting TheHedoMinimalist
But remember that a premise (not necessarily realistic) of the thought experiment is that his understanding is purely of the syntax, so any mental aspect to it is surplus to requirements.
Quoting TheHedoMinimalist
Nor did I, nor did the post-apocalypse society. We (I and they) consider them to be "automobiles"... whatever that means - which (what that means) was meant to be the problem analogous to that of "consciousness". But maybe that word is too dated to work, and too easily confused with the more up to date problem of the consciousness (or otherwise) of self-driving cars. Which is just the problem of the consciousness of any AI. So I may need a different analogy.
Quoting TheHedoMinimalist
If you're talking about AI and consciousness directly and not my analogy of "automotivity" then I guess my answer would be the same as previously: their explanation is too bland and uninformative.
Quoting simeonz
Again, apparently my analogy mis-fired. No metaphysics intended. Only trying to save "conscious/non-conscious" as a vague binary.
The analogy was, "when does a vehicle become truly automotive i.e. a true automobile?".
"Ask yourself this question - what does eliminative materialism eliminate? Unless you want to beat around the bush, the answer is one word: mind. The word ‘mind’ doesn’t correspond to anything real: what we take to be ‘mind’ is simply the snap, crackle and pop of billions of neural connections programmed by Darwinian algorithms for the sole purpose of propagation of the genome. That’s all there is to it."
That is one of the absurdities I was talking about. Denying the existence/reality of minds or conscious experience is a losing move from the start. There are very few things I can be completely sure about, but here are two: I'm not mindless, and I have conscious experience. I can't be mistaken about that.
I've met materialists who have insisted they were p-zombies. That's how crazy it can get.
Of course, I agree. Here is a snippet from Thomas Nagel's review of Dennett's most recent book.
[quote=Thomas Nagel] Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”[/quote]
I think the interesting question is: why is this taken seriously? Why is it considered a philosophical argument?
//ps// a non-paywalled review here. Likewise notes:
To which my answer is, emphatically not. It turns the rhetorical techniques and lexicon of philosophy against philosophy, and tries to show that humans are instead machines, automatons, or robots. //
"I think the interesting question is: why is this taken seriously? Why is it considered a philosophical argument?"
I think things like "consciousness is an illusion/consciousness doesn't exist" are taken seriously because people are emotionally invested in a materialistic model of reality and don't want to give it up.
Also, if materialism isn't true, then some type of dualism or idealism is true, and that has very profound implications. Maybe people don't want to go there.
That said, I must agree to some extent. The spectrum of sentient qualities may have a sharp slope at some point, even with a lot of structural complexity. I do not consider this likely - a sophisticated information-processing structure suddenly being vastly less aware than a somewhat more complex one. But I cannot fully disregard the possibility.
Quoting Wayfarer
If the mind is coextensive with its embodiment's behavior, how can it not be real? (Not that I personally claim that the mind coincides with its embodiment, necessarily.) If the person is a metaphysical solipsist, it wouldn't be real. But then he wouldn't be an eliminative materialist at the same time.
This quote from Quine (lamely borrowed by me from Wikipedia) actually describes my attitude towards the elimination exactly:
And the text following right after that (Wikipedia's own narrative) also presents my objection to the possibility of complete elimination, such as not just the elimination of the dualism aspect:
The Wikipedia article then confirms your observations:
, which then refers to Stanford Encyclopedia of Philosophy here, where the following statement is made:
I still cannot fathom the nuance here. Isn't this just a rephrasing with a different attitude? Unless the first group denies experience and existence. But I doubt it.
Let me ask this - if eliminative materialists demote human beings, then what about functionalists or pantheists? I don't see how any non-dualist position would be different. Alternatively, which position would assert that we do experience, such as through our mind, except that our mind is the same as our brain (coextensive with it, and material in nature)?
Again, I think that the mere existence of consciousness should not be considered at the same time as questions about mind-body duality, the nature of the subjective, how free will manifests, the essential vs existential attitude to purpose in life, etc. I think that eliminative materialists recognize freedom and experience. They simply claim that freedom manifests (to a narrow extent) exactly due to our bodies, whereas other positions claim that it manifests despite our bodies.
Otherwise, if they deny their existence ("I think, therefore I am not"), despite me being liberal as I am, I agree that their position would be confusing.
Demons haven’t been eliminated at all. They’ve just morphed into mass shooters and terrorists and crack dealers. Their medieval depiction simply reflected the popular imagination of the culture of the day.
Notice something here. 'Mental states = brain states'. Now, I ask you, what kind of physical object is '='? Where in the physical world, where in nature, do you find anything at all remotely resembling "="? You won't find it, because it relies on abstraction, on assigning values to things, and then saying that ‘this means that, therefore this equals that.’
What kind of 'brain state' could equal 'equal'? And how would you go about finding that out? Even to ask the question, you have to make a lot of judgements about neural images and incredibly complex data - the brain being the most complex thing known to science. And so on. 'See, this area here, we think that this is the part that processes language' (or whatever). But all of this is highly reliant on abstraction and reasoned inference. And how can you explain those capacities in terms of 'brain states', without actually using the very capacities that you're trying to explain, and thereby begging the question?
Really you should realise this is a massive dead end, this eliminativism. The very best thing they could eliminate is their project. :smile:
A sort of logical desperation on both sides: the materialist insists the brain is the source of the illusion of subjective experience because both reside between the ears but only the brain can be found there, and the non-solipsistic idealist insists any illusion that appears so real must be treated as real enough to warrant the preemptive significance no one is actually foolish enough to deny.
Both are met with an impossible circumstance: the one cannot prove with apodeictic certainty that the mind is nothing but illusion, and the other cannot prove its apodeictically certain reality, so they both fall back on insisting they don't have to.
All of which raises the question......what good is it when science eliminates the free thinker?
There will be nothing to notice this, if they are right. IOW brains will be affected, but there will be no subjective experience of 'oh, they were right, we no longer think about those mental states.'
Ah, thanks for trying to get on board with my rickety analogy. But no, that difference is a red herring, or misunderstanding. I did say (although mention of horse-drawn in that sentence may have muddied things) sail-powered vehicles, not vessels. I appreciate sail-powered vehicles never were a common sight on the road, but in my story they are the nearest that the society has come to building their own cars - which they have inherited, ready-built, in plenty. So my point is the same as yours when you suggest,
Quoting simeonz
Yes!... if you mean motor-ship. Then that's parallel, because I was equating the human/insect comparison to the automobile/sail-powered go-cart comparison. But there was no vehicle/vessel comparison for me.
One could re-tell it as being about both (or either) vehicles and vessels, except there isn't a ready-made extension of "automobile" for that purpose (that I can think of, although there could have been).
Quoting simeonz
Yes, a tempting compromise! My sharp slope, parallel to the progression from top-notch sailing to motorisation, is the journey from chimp or dog to human: from ability to follow the pointing of sticks or balls at targets to the ability to follow the (usually not actual) pointing of words or pictures at targets.
Any interest shown in this positive matter and I'll happily roll over and tolerate what strike me as more or less unacceptable consequences of an unbounded spectrum... e.g. conscious phones, insects etc. at one end, and literal talk of mental pictures, concepts, beliefs etc. at the other.
Would eliminative materialists actually use such a machine? Even if you paid them a lot of money? Or would they view it, as I do, as the equivalent of death? I think, when push comes to shove, you'd have to drag them to it, kicking and screaming.
Quoting Wayfarer
For me, this does not yet falsify eliminative materialism, but makes it a theory that awaits further judgement. Isn't that true for most of philosophy?
Quoting RogueAI
The assumption that there is such a machine already renders eliminative materialism wrong, which voids the question. If you are asking whether they would take the chance without knowing - this would be like a "sell me your soul for a dollar" type of child prank. Some people would refuse on principle.
How can it NOT include epistemology? It concerns something fundamental to the nature of knowledge.
Quoting simeonz
As most people would instinctively say, our cognition is shaped by evolution so as to maximize our reproductive ability. But this is simply one of the dogmas of evolutionary materialism which seeks to understand every human ability in terms of evolutionary fitness. The problem with that - this might come as a shock, so brace yourself - is that evolutionary biology is not actually a philosophical doctrine at all, but a biological theory which purports to explain the phenomenon of speciation. The fact that it is so widely and casually wielded as a 'theory of everything' doesn't legitimize it. (This is the thrust of Thomas Nagel's 2012 book, Mind and Cosmos and also his earlier essay, Evolutionary Naturalism and the Fear of Religion, published in the book The Last Word.)
Besides, for there even to be a 'theory of evolution', science already has to rely on the capacity to make rational inferences, to say that 'because of this, then that must be the case'. That is fundamental to the faculty of reason and speech. And to explain that as a matter of adaptation, to say that such capacities are only trustworthy as the by-product of biology, is already to reduce reason to mere utilitarianism. Of course, modern culture does that so readily that it's almost impossible to notice.
Quoting simeonz
Most philosophy will never and can never be validated in terms that will satisfy modern science, as their criteria for success are incommensurable.
In summary - the differences are not always completely immaterial. For example, a dualistic free will theory would confront a materialistic one on the basis of what is physically possible. Idealism and materialism might be compatible as a practical matter, but mostly to the extent to which the former is skeptical.
If you think that 'knowing you're alive' is a matter of faith then there's something the matter with your logic. :wink:
At the risk of sounding annoying, I will ask once more. Assuming one starts from a solipsist attitude, what position states that the mind is capable of completely witnessing its own operation and construction (and ultimate demise), in a manner which appears extrinsic (through physical sensation, not emotion), but is ultimately just another perception of the mind by itself, as it also relates to other minds and to the substances from which those minds emerge, as they exist together in a comprehensive orderly fashion (an order that we conceptualize as nature)?
PS: And how is this different from what eliminative materialists essentially claim?
Not at all.
Quoting simeonz
Solipsism is dissolved by empathy.
Quoting simeonz
I don't think materialists would acknowledge that. And by asking these questions, you're already outside the reductionist circle.
Quoting Wayfarer
My point was - assuming one treats the existence of the mind, rather than the body, as a starting point, wouldn't the theory I described be equivalent to eliminative materialism? If it isn't, what would that position be called?
Quoting Wayfarer
In retrospect, projection may not have been the right term, because it implies some kind of codomain - a space to project onto. But a brain substate is a very primitive and basic notion of awareness that doesn't require it to be separate from the body, and corresponds to the assumption of a medical model of psychology. Wouldn't that satisfy at least some eliminative materialists?
No, because it's not a theory at all. It is, as Descartes said it was, apodictic.
The point of materialist theories of mind is that 'mind is what brain does'. So they're saying that what we experience as a sense of self is really better understood as the collective output of various neural processes. You have to be clear about that. This philosophy, so-called, is rooted in a very specific historical process; it was one of the French atheist "philosophes" of the Enlightenment who said that 'the brain secretes thought like the liver secretes bile'. Even though it is a very crude expression, it is really what eliminativism believes and wants to prove.
Another one of Daniel Dennett's books is called 'Darwin's Dangerous Idea'. It spells out the philosophical implications (although again, they're actually the anti-philosophical implications) of this point of view. 'The crux of the argument is that, whether or not Darwin's theories are overturned, there is no going back from the dangerous idea that design (purpose or what something is for) might not need a designer. Dennett makes this case on the basis that natural selection is a blind process, which is nevertheless sufficiently powerful to explain the evolution of life. Darwin's discovery was that the generation of life worked algorithmically, that processes behind it work in such a way that given these processes the results that they tend toward must be so.'
There was a similar book written called Chance and Necessity, by Jacques Monod (around 1970), a Nobel-winning biochemist and also remorseless materialist. It makes very similar points. They are both canonical works of what is called 'neo-Darwinian materialism', which is the idea that life itself is a kind of runaway chemical reaction.
The argument I am deploying against such ideas is that reason itself cannot be understood in Darwinian terms, or reduced to anything understood by the laws of physics, or any other science*. Reason itself - the ability to argue from premisses to a conclusion - is naturally assumed by materialists to be explained by the same principles as other forms of adaptation that enable species to survive and therefore propagate. But I say that the biological theory of evolution never set out to provide an account of the nature of reason in the first place, but because of the circumstances of culture and history, the biological theory of evolution has now assumed, especially for the 'militant atheists' such as Daniel Dennett, a kind of quasi-religious status, as that which is finally going to destroy religion and any form of idealist philosophy once and for all. (Dennett refers to 'Darwin's Dangerous Idea' as a 'universal acid' for exactly that reason, and has written other polemical books to this end.)
So the whole 'eliminative materialist' project is basically driven by the fact that the nature of the mind itself is fundamentally irreconcilable with materialism. The reality of mind can't be acknowledged. That's all there is in this argument, there's nothing else to it.
-----
* This is a kind of transcendental argument, basically Kantian in orientation.
Well, I understand that Mr. Dennett may have overused natural selection as an explanatory device, and may have gone overboard with his derogatory metaphors on the human condition, but that still does not render eliminative materialism an irreconcilably (Daniel Dennett aside) "mindless" position to me. If one subtracts the aspect of personal attitude, and leaves only the ontological content, I still cannot distinguish eliminative materialism from Spinozian or Leibnizian pantheism. And the latter are certainly not devoid of mental phenomena. I certainly can distinguish eliminative materialism from mind-body dualism. I will surrender my attempts to elucidate the distinction, at least for the time being. Maybe I just need to get familiar with the theories and give the matter further thought.
Good subject for a term paper!
Quoting simeonz
It might be of relevance that the term 'ontology' is derived from the first-person form of the Greek verb 'to be' (namely, 'I am'); which has somewhat different connotations from today's definition.
Maybe not, but see the quagmire up ahead?
I suggest the choice, eventually, is between a physical binary distinction of conscious vs unconscious on the one hand, or a metaphysical binary distinction of mind vs matter on the other...
Which of these seems to you potentially the more enlightening?
While I might agree some brain states are experimentally quantifiable, insofar as reactive indicators are present for observation, I disagree that purely abstract mental conditions, that which is theorized as reason and its integrated particulars, will ever be displayed on a screen or graph. That is to say, the result of thought may be externally witnessed, but the machinations of its implementation won't. I mean.....how does one even look for "understanding"? And because such is altogether quite impossible, I arrive at my view that the e.m.-ist's position is that they don't need to measure it, because there's no such thing as understanding, e.g., corresponding to a physical brain state. Which of course drives speculative metaphysicians straight up a very tall wall.
Me, I just think it’s kinda funny, that physicalists/materialists in general tend to deny the philosophical paradigm, all the while employing the very thing for which the philosophical paradigm stands. Still, one should be really careful in his declarations favoring one side or the other, for the sheer complexity of the human brain does not easily submit itself for definitive examination.
——————————-
Quoting simeonz
I have no such concern; I think the proposition has no meaning, because of my idea of what mind is. Mind is merely a word, a placeholder for some immaterial totality, a sort of catch-all for which we have no better word. If I reduce my thinking to an unconditioned necessity, I arrive at mind. But I don't need the concept of mind, in and of itself, for my reason to proceed as it does simply because I exist as a thinking subject. This modus operandi completely eliminates any possibility of partial awareness, because there is no doubt I am fully aware of that which affects my thinking, and if it were the case that I was not fully aware, the very idea of the possibility of knowledge itself becomes moot. I could never be certain of anything whatsoever, and certainty is precisely what reason seeks.
And the beat goes on............
I should add here that Spinoza and Leibniz are quite different - Spinoza is a more clear-cut pantheist, whereas Leibniz can be considered a pantheist or an idealist. I am stretching the bracket too much already by including them both in the same category. One might say that I cannot differentiate some varieties of pantheism and idealism from each other, which explains why I cannot distinguish eliminativism in its own right.
Quoting Wayfarer
You probably allude to the fundamental inconsistency between the pursuit of philosophy and any denial of being. But, as I said, I am not sure that eliminativists are denying the existence of the mind. I think that they deny any distinction between it and nature - they strip it of transcendence. So far, you have not said what your position on pantheism is. Do you oppose materialism, but tolerate pantheism? Because, if you consider the co-extensiveness of matter and mind that eliminativists prescribe appalling, I assume that you feel the same about pantheists.
My comprehension, at the moment, is that eliminativists do not consider our emotions, senses, and thoughts to be descriptive of who we are. According to them, the proper way to describe our inner selves is also through empirical observation. But, as I said, I might be wrong.
In other words, treat beings as objects, no?
Quoting Mww
For me, the question here is whether the mind is first and foremost a collection of unprocessed emotions and senses, aka intentionality, or whether we are biased to prioritize these experiences because they require less mental effort, whereas the more laborious means of self-reflection that involve logical inspection of the natural world, whilst much more intricate, taxing, and sometimes unreliable, can offer further detail of our state of mind, which we are not capable of perceiving directly through emotions.
Quoting Mww
There is indeed a complication. Empirical observations, as eliminativists would have them, are recursive. For the brain to observe itself, it has to already be capable of sensation. But I am not sure that this is a contradiction. After all, the brain does not present a different image - looking at your brain scan does imply that your neurons are processing data about themselves. Also, the same recursion exists in reverse. A person can think or emote, or they can think or emote about the nature of their thoughts and emotions. While the latter actions involve greater sophistication, they are usually assumed to be fundamentally realized (whether metaphysically or biologically) in the same way as the former. The entire process could be described thus: through nature we can observe ourselves observing, or observe others observing us, etc.
Quoting Mww
Still, don't you feel compelled to increase the comprehensiveness of your conceptualization? I mean, are you apathetic towards this particular type of knowledge as opposed to others, because you don't trust that it is substantive? Or are you just indifferent towards the issue? I ask, because this is not how human curiosity generally operates. One could imagine what the world would be if Newton had said - force is just a notion about a totality of interesting natural phenomena, and I don't have to investigate it any further.
On the other hand, I do acknowledge that we cannot validate claims in this area with certainty, probably because we don't even have a clear understanding of what the claims are. I sure don't.
I would say we shouldn’t. Any sound philosophical position would seem to require either subscribing to a method which grounds it, or actually being a method in itself from which something else is grounded. But from the context, it appears you are saying brain states that are not observable are hypothetical, and the epistemic substance of a hypothetical is questionable. I would substitute logical for hypothetical, from which a valid method may follow necessarily, and there arises something on which to base our philosophical positions. Still, I see the arbitrariness of epistemic substance, because one person may find such method satisfactory and another find such method faulty.
————————-
Quoting simeonz
Actually, with respect to mind, no. One can use the principles of reduction and of sufficient reason only to a certain point, after which he gets himself into absurdities and self-contradictions. If one treats mind as an unconditioned necessity, then tries to elaborate on the unconditioned, which is, as you say, to increase the comprehensiveness of conceptualizations, he has defeated the primary logical justification of absolute necessity, which translates to making the mind conditioned by whatever the elaboration becomes. A self-contradiction, which negates the entire thesis prescribing mind as the rationally unconditioned. Maybe it's merely the lesser of two philosophical evils: it's better to accept one immanent possibility than to require more than one transcendent possibility. In other words, grant one unprovable hypothetical rather than regress into a morass of unprovables.
So, yes, I suppose it could be said I am indifferent towards the issue. I really don’t care about mind that much; it is enough that I exist as a thinking subject and if I happen to think about mind, I can only think so far and no further without venturing into the irrational.
————————-
Quoting simeonz
I guess you could say there is greater sophistication, insofar as thinking about the nature of thinking is the actual dissection of the thought process itself, theoretically, that normally occurs just short of instantaneously. But, yes, thought and thinking about the nature of thought are both fundamentally realized the same way. Thinking about thinking is, after all, just thinking. And to say from that, that the brain is observing itself, may be a conventional easement, it is nonetheless philosophically bankrupt, because it invokes a categorical error. Thinking is one thing, observing is quite another.
Still interesting.
For example - despite appearances, I don't logically oppose dualism and idealism (even if I don't believe that all variants are sound), but I challenge the distinction between pantheism (consciousness as intrinsic matter potentiality) and eliminativism (matter in its own right), because I find that such distinction lacks reasonable explanation or definite value.
Quoting Mww
Observable, but not necessarily in the empirical sense. What I meant was that the notion should have a definite meaning and a clearly expressed value. Note that I don't consider it necessary for all notions to be of this variety, only those which are subject to critical thinking. But if they are not subject to critical thinking, how can they be a subject of philosophy?
Quoting Mww
Being hypothetical is not an issue for me. My problem is lacking any kind of critical evaluation - logical (because of logical independence), empirical (because of disembodiment), experiential (because of indefiniteness). I am not opposing the idea that the mind can exist independently of reason - many things do. But if it is not planted in some kind of analytical framework, as you propose, then I cannot see how it can be a component of philosophy.
Quoting Mww
You meant something more constrictive than I originally imagined. That your notion of the mind can be compared to a kind of bondage, similar to one's reaction to physical pain. Whatever its nature and substance might be, pain does provoke an adverse reaction in us, as this is its intended function.
Edit: I should be clear here. I don't oppose the mind as an instinctive notion, as long as it is treated skeptically in philosophical discussions. I don't oppose incorporating the mind as a philosophical hypothesis either, as long as it comes with some analytical content.
Quoting Mww
Actually, I was not trying to prove that sensation is recursive, but to weaken the argument that it couldn't be, because of unfoundedness. I wanted to make the argument that if we assume that sensation cannot articulate its own structure, then we could make a similar argument that thinking about thinking, or emoting about emotions, is impossible. And the parallels, I think, hold well, because when I articulate my thoughts by thinking reflectively, the two thoughts are not coincident; rather, the latter contains expressions of the former. I am able to investigate the nature of my thoughts precisely because I am able to think about them, not merely to think in its own right. Similarly, I would not be able to examine the nature of sadness if I didn't feel any regret for it. But my regret for sadness, despite being a similar kind of emotion, is not coincident with the sadness that provokes it. It is an expression of it. And our mental faculties are interwoven, such that we can think about our emotions, and emote about our thoughts. My argument is that it would not be contrary to nature (i.e. paradoxical) if we could observe sensation through sensation. Our senses could express how our mental states work, just like the rest of our mental faculties can self-reflect. Except that this self-reflection involves a space of greater complexity - the material world, which overwhelms our emotional and contemplative capacity.
Quoting simeonz
.....compels defensive thinking, yes. Tries to plant theories into sacred foundational principles, not assumptions. A theory may indeed arise from an assumption, but it can never consequently be defended by one at its foundation.
I would have agreed if you’d said logical reductionism may try to plant an idea into a sacred foundational assumption, re: the infinite, any sort of unconditional, an uncaused cause, and so on. Still, even then, a theory developed to justify such ideas, should be grounded in something immutable, which no assumption can be.
In addition, I would think logical reductionism would promote “logical interrelations between different kinds of statements”, by analyzing conclusions. While that in itself may not prevent partiality, it certainly shouldn’t be said to invite it.
Maybe logical reductionism compels defensive thinking impartially. While that may seem self-contradictory, it also seems that defense-by-law must be impartial by definition. Then it becomes an issue of partiality to a particular law, but not partiality for defense by logical reductionism.
And finally, the epitome of logical reductionism is of course, the Aristotelian laws of thought, which makes explicit any theory defended by them does try to plant it right squarely into a sacred foundational principle.
————————
Quoting simeonz
Absolutely. 1.) not all idealism is sound; 2.) matter and consciousness (as it is metaphysically described) are mutually exclusive; 3.) eliminativism is at worst self-contradictory and at best explanatorily deficient.
And I submit, Good Sir or Madam (unabashedly stolen from “Paperback Writer”) if you try to logically oppose dualism, you’ll be met with an exercise in absolute futility. ‘Tis the nature of the rational beast, and there ain’t no way around it.
—————————
Quoting simeonz
Constrictive yes; a kind of bondage...ehhh, ok. But any relation to pain is beyond the scope. Physical pain has purely empirical predicates and emotional pain is not a cognition, but a feeling, hence neither has to do with the intricacies of a rational mind.
A test for you: even if an itch is not a pain, it is still the same thing in principle. Next time you have an itch.....don’t scratch. Takes awhile, a few times, but after that, the itch just goes away. Bug bites, ant walks, sweat drops included. Bacon spatters....not included. (Grin)
—————————
Quoting simeonz
Therein lay the key: thought without content is meaningless.
But this.......
Quoting simeonz
.....I don’t quite understand. I fail to grasp how one follows from the other. I hold that sensation cannot articulate its own structure, which implies sensation has the capacity for reason, but to reason with respect to our thoughts cannot be impossible. Right? What did I miss?
Anyway......good stuff. Point/counterpoint. A proper philosophic dialectic. Socrates would be proud.
To be honest, I think that our opinions may be irreconcilable. I feel that the majority opinion is that humanity is entitled to some kind of exceptionalism, which I never understood. I admit that people may be exceptional, but I see no certain proof of it. My arguments are apparently, and probably justifiably (due to the flaws in them), not appealing enough.
I will however comment on this, because I thought it is interesting:
Quoting Mww
I make my assertions corrigibly. I don't believe that I am capable of obtaining immutable principles. I possess a modus operandi that is subject to continuous validation and refinement, and that is what makes it a comprehension. Is logic corrigible? Could be. I rely on it, because I don't know any better, and if I don't foster my conviction in some sensible image of reality, I will not be able to utilize my reason. But I doubt that immutable comprehension of any kind will be obtained by simple organisms such as ourselves anytime soon.
In any case, this was not a logical argument (obviously), just my point of view.
language, technology, science, arts, literature, philosophy.....what more evidence would you need?
Echolocation, controlling the sea's salt content, maintaining atmospheric oxygen levels, hibernation, cloud-creation, pollination, soil-making... What more evidence do you need? Non-human life is exceptional.
Maybe.......
———————-
Quoting simeonz
......maybe not. There’s nothing exceptional about humanity. It only does what it is capable of doing, just as does every other natural object. It would be exceptional if humanity did something it wasn’t capable of doing. If anything, I suppose we’re exceptional at self-aggrandizement. Just because we’re apex intelligentsia and apex predator in this environment through sheer evolutionary happenstance says nothing about any other.
———————-
Quoting simeonz
If you mean by “immutable comprehension” an irreducible understanding, is there no one simple thing for which you have no doubt at all? Or, is there no one simple thing for which the doubt of it contradicts something....or possibly everything.....else?
I see what you’re getting at, though, I think. That human empirical knowledge is transitory, to say the least, gives rise to the idea that understanding, which is always antecedent to knowledge, might be the cause of doubt of immutable comprehension. We would certainly have irreconcilable opinions on that, if you respond in the negative to the questions above.
Just my point of view.......
Quoting Mww
This seems to be an accurate interpretation of the known facts.
Quoting Mww
Are you hinting that truth implies existence? That any universal statement is implicitly about a model of reality, and that without reality all universal statements are equivalent, making non-existence the mathematical definition of contradiction? Or am I misreading you? On a purely technical note - what about existential statements?
I am not able to articulate what I mean right now, but while I do believe that I exist, there are qualities of existence that I question. Therefore it is difficult to analyze the relationships between existence and the particular states of being, such as reasoning, sensation (which I place in the same category as reason), rational truth, pragmatic truth, etc. Whose existence - the subject's or the object's - is a requirement for a statement to be true? Is truth a pragmatic or a rational quality - is the correct anticipation of your environment a sufficient operational equivalent to the ideal of truth? How does truth apply to different states of being - a person with a damaged brain, a baby, a genius - does the notion apply to them in equal measure?
As I said, this line of reasoning is a little overwhelming and I fail to set it in order at the moment.
I think there are genuine ontological distinctions between minerals, plants, animals and humans. Whereas, post-Enlightenment philosophy tends to reject ontological distinctions altogether.
Because humans are free agents who are capable of making choices, their actions have consequences, and the consequences have real significance.
A lot of modern thinking comes about because evolutionary biology occupies the place that was once occupied by religion. But whereas Western religion incorporated a sophisticated moral philosophy, derived from the Greeks as well as Biblical lore, evolutionary theory is really only a biological theory. So the attempt to shoehorn an explanation of all human nature into evolutionary theory is biological reductionism which is the default view of the secular-scientific culture. About which see this comment.
You do realise that the historical origins of an idea determine neither its accuracy nor its practicality?
So often you make these claims as if they constituted an argument... "oh such and such an idea came about only after enlightenment thinking replaced religious thought". So what? What difference does it make to the quality of an idea what point in history it first became popular?
If you actually have an argument that biological reductionism is either a less accurate or a less useful representation of reality, then just make the argument.
Of course. But it’s a comment on something specific which I believe is germane to the discussion I’ve been having with the OP.
Quoting Isaac
‘Representation of reality’ already contains the assumption of representative realism. But, leaving that aside, Darwinism when taken as a philosophy can only ever be a form of utilitarianism. Why? Because there’s only one criterion for success in Darwinism, which is the ability to propagate. Everything is implicitly subordinated to the ends of the propagation of the genome. An apt comparison is the ‘Procrustean bed’. (If you’re unfamiliar, there’s a handy summary in Wikipedia.)
Yes, but again you're merely describing, not reasoning. Why is measurement by the ability to propagate a bad thing? What use is the alternative? How are you justifying any claims to utility? If not utility, nor accuracy, then by what measure do you propose we judge competing philosophical positions?
It's no good simply saying a position is shallow simply because it dismisses fields of thought. It is also necessary to argue that those fields of thought deserve not to be dismissed. To do that you must have some criteria, agreed with by your interlocutor, by which to judge these things. Their mere existence to date is not sufficient to justify their continued existence.
I had assumed some background to the issues at stake which perchance you don't have. Do you know who the most well-known proponent of eliminative materialism is, and what his books are about? Do you know of his main critic, a philosopher by the name of Nagel, and what the basis of his criticism is? If you're interested, I can spell a lot of this out, but it might require a separate topic and quite a bit of writing.
Yes, I'm aware of both. The query I was raising was a meta-philosophical one about the approach to discussion. If the sum total of a post's contribution is to point out that there exists a counter-argument to a position, then I think we're in for a fairly boring discussion. I, and I think most other posters here, simply presume that a counter-argument exists. I don't think many of us are naive enough to think that our ad hoc thoughts have no counter.
The point I'm making about your posts in particular, is that they seem to presume the mere existence of a counter-argument constitutes an argument in itself. You quote Nagel as if the mere fact that he has said something on the subject should close the matter, without forwarding a reason why. If I wanted to know what Nagel thought about the matter, I would read Nagel. What I want to know here is why you personally find his arguments more compelling than that alternatives.
I’m suggesting lines of enquiry, that’s all. The OP is a very smart poster, I’m pointing something out.
Quoting Richard Polt
The evolutionary hypothesis does not mean to guide the population in its choice of ethics. It tries to support an explanation of how ethical choices are formed in a large statistical sample. What ethical choices should receive one person's privileged consideration is beyond its scope.
Quoting Richard Polt
Considering only the biological level of the individual when interpreting natural selection is artificially limiting. There certainly are sustainable and unsustainable types of collective behaviors and group interactions. In that sense, choices are influenced by both biological and cultural speciation. I use the latter term in the sense that, for the purposes of natural selection, we don't inherit just our genes, we inherit our culture, our social context, even the state of the environment, which interact with the survival of the species in pretty much the same way. That is, selection of the fittest is capable of explaining what drove people to complex social order, cultural conservatism, and in the ethical plane (ontology of divine miracles set aside), religion as well.
Quoting Richard Polt
Natural selection recognizes that a cooperative (but competitive) member contributes to the thriving of its group. Species that act in pure chaos, driven only by self-interest, are not likely to persevere. Of course, evolution doesn't prescribe the range of ethical choices, but it recognizes that choices that resolve poorly for the group won't have a continuing place in history, as their presence will be eliminated through social ostracization or general extinction.
Quoting Richard Polt
The author frequently refers to some ant analogy, which apparently has been used to illustrate an evolutionary approach to social behaviors. Whoever used ants as an explanatory device, I am sure, did not mean to assert that the human species is similar in its social dimension, but only that ants can be used to illustrate the formation of collective behavior or herd instinct from an evolutionary standpoint.
Quoting Richard Polt
Human beings are at the top of the evolutionary scale for a reason. The formation of social attitudes, of cultural norms, of instructional ideologies and religions produces more coherent group behavior (albeit not in every single instance). We all know that homo sapiens defeated (and ate) the neanderthals, because the latter, being averse or ill-suited to the formation of large social groups, were forced to defend themselves in isolation. (Come to think of it, I am more of a neanderthal.)
Quoting Richard Polt
The author takes for granted that the structure of machines is incapable of sentience. This may or may not be true, but the author elaborates on the particulars of electrical circuitry, as if there is something inherently profane about them, which makes it unworthy of hosting sentient life. However, why electrical construction is fundamentally incompatible with life is not discussed.
Quoting Richard Polt
I am not aware of any accepted test that determines the presence of those attitudes in a non-human. And if the fact that we cannot test those qualities in machines with certainty is cause to withdraw speculations of machine sentience, then what tests have we used to confirm the universality of human sentience? Shouldn't the author present, in the context of his contrasting comparison, a balanced empirical criterion for people and machines, and illustrate its failed application to machine behavior? Otherwise, what criteria were used here - instinct?
Quoting Richard Polt
Again, essentially the same issue. The author is vague about what type of demonstration would be sufficient. Admittedly machines today are still rather primitive, but in the hypothetical future when machines start to behave more elaborately, what test would satisfy the author, or will he reject machine sentience purely definitionally? (On the other hand, machines may not be capable of sentience. But from my point of view, the author did not attempt to rationally prove this point.)
Quoting Richard Polt
I wouldn't limit the causes of natural selection to genetics and biological structures. The factors are all-encompassing - sociology, ecology and even cosmology can ultimately play a role. But even if ethical choices are explained by natural selection, that still [s]does[/s] doesn't necessarily compel an individual to alter them. If my affection for my loved ones is explained, I won't erase them from my phonebook just because my feelings have been reduced to primitives.
The author finishes by putting his views in historical context and then concludes that modern naturalism is an oversimplification of human life. Science indeed has the tendency to work in a narrow scope. It is abstract by design. But the author has not convinced me that science has chosen the wrong methodology to explain the emergence of ethical considerations, from an empirical standpoint. If the argument was non-empirical, then the essay should have established what logic would be used to validate it.
Mmmmm..........no. Never crossed my mind. If it had, I would’ve had to say that which exists truly does exist, but that which is true does not exist necessarily. So, truth does not imply existence.
Nahhhh.....I was just wondering if you held some unassailable truth. So as not to extend the concept of existence into the far reaches of dumb, an analytic proposition, which begins merely as something one thinks, would be true when its negation is impossible. The most famous one of all being cogito ergo sum.
—————————
Quoting simeonz
I think the criterion for truth is the relation between subject and object, not always the existence of one or the other. The statement “every effect has a cause” is true, but neither cause nor effect exist. At least in the strictest sense. If we mean anything that is an object of thought exists just as objects of experience exist, such as concepts or ideas, then the answer to your question would have to be....both.
Quoting Mww
My point is - the subject (speaking now in a narrower, conventional sense) need not even comprehend the correlation as it applies, for it to be true. But if the subject doesn't comprehend the statement, what is their relation to the statement? One might say that they still experience the truth as an effect, but this binds truth to experience. I am not sure this is the case, as it seems to me that truth may exist in its realization, without requiring knowledge. For example, simple mechanisms form a correct expression of some reality. An electronic thermoregulator can be "correct", in the sense that its internal representation of the environment state is isomorphic to the actual state. The two states can be observed, and their correlation or mutual information measured. In contrast, an "incorrect" thermoregulator would have states that exhibit less correlation, independently of any notion of its utility. So, in some primitive sense, one can talk about truth even without awareness, just based on state correlations. There still has to be some state space for those correlating states; thus even in this primitive case, you need some realization or existence. I am not arguing here whether correctness has value without a sentient subject, nor whether reality without a sentient subject can ever be validated, even if truth would hypothetically still have a proper mathematical definition.
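The thermoregulator point above can be made concrete. What follows is my own illustrative sketch, not anything from the thread; the three-state temperature model and the device names are assumptions for the example. It estimates the mutual information, in bits, between an environment's state and a device's internal state from joint samples: a device whose internal state mirrors the environment scores near the maximum, while one whose state is unrelated noise scores near zero, regardless of what either state is "for".

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Estimate mutual information (in bits) between two discrete
    variables from a list of joint (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    # I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))
    # With counts: p(x,y)/(p(x)p(y)) = c * n / (px[x] * py[y])
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

random.seed(0)
STATES = ["cold", "warm", "hot"]  # assumed toy environment states
env = [random.choice(STATES) for _ in range(10000)]

# A "correct" thermoregulator: its internal state mirrors the environment.
correct = [(e, e) for e in env]

# An "incorrect" one: its internal state is unrelated noise.
broken = [(e, random.choice(STATES)) for e in env]

print(mutual_information(correct))  # near log2(3) ≈ 1.585 bits
print(mutual_information(broken))   # near 0 bits
```

On this framing, the first device "tells the truth" about its environment in the purely structural sense of high state correlation, and the second does not, without either device being aware of anything.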
You along with every physicalist/materialist worth his lab coat, property herein meaning something that belongs to a real substance. Not that you gave any indication you are one, just that experience informs me they think along the same lines you just spoke. I sympathize; it’s pretty hard to posit as certain, a thing that has no quantifiable predicates. I rationalize the situation by coming at it from behind....I don’t have to prove subjectivity, but rather all I have to do is show how everything else becomes immediately unintelligible if there isn’t such a thing.
———————
Quoting simeonz
Exactly right!!! Well done, I must say. I take that “something thinks, therefore something exists” and turn it into “I” am that which exists as thinking subject. “I” taken to represent the spontaneity of all thought in general, also called “ego” in empirical psychology, and the thinking subject taken to represent consciousness itself, which is the totality of conscious thought in general.
Hey....it’s a theory, for whatever that’s worth.
———————-
Quoting simeonz
Perhaps, but what good would a truth be if it wasn’t comprehended as such?
Right - just what I mean. That is an example of what I regard as the misapplication of biological principles to matters beyond their scope. True, these issues are not overtly 'biological', but this style of argument applies the guiding principle of evolutionary biology (paraphrased by Herbert Spencer as 'survival of the fittest' and later adopted by Darwin) to account for characteristics that are intrinsically beyond the scope of biology. But then, for us, nothing is beyond the scope of biology, as we're material beings, and so ultimately explicable in scientific terms.
Quoting simeonz
As do I. Machines are devices, and devices are not beings. But again, your explanatory framework may not permit the distinction.
Quoting simeonz
I believe it's a reference to E. O. Wilson, Of Ants and Man. The same Wilson who says 'the final decisive edge enjoyed by scientific naturalism will come from its capacity to explain traditional religion, its chief competition, as a wholly material phenomenon.' You don't seem aware of the relationship between evolutionary theory and modern materialist philosophy of mind that underpins this whole issue; I suggest you're not aware of it, because you're looking through it rather than at it.
Quoting simeonz
That's a 'burden of proof' argument. If you start from the presumption that naturalism is the explanatory paradigm, then you will want an argument to show that this is not so. The problem is, naturalism tends to rule out the premisses of its critics. In other words, if you claim that empirical method is the arbiter in such issues, then you basically want an empirical argument against empiricism.
Quoting simeonz
A property of what? Perceived by whom?
Quoting Isaac
That's because in some ways, the perspective I'm advocating is incommensurable with that of the OP (and, by extension, with the perspective of a lot of secular, analytical philosophy). I have had OPs on this forum that have run to hundreds of pages and occupied months, but ultimately what's at issue is fairly simple, so often I will just refer to some examples.
But if I wanted to mount a methodical criticism of biological reductionism, I think by far the best arguments are those under the heading of 'the argument from reason'. This hinges on the claim that reason itself can't be equated with or reduced to any known physical laws or phenomena.
Quoting Mww
Depending on our definition, it might be possible to stretch the subjective (in ways that actually interest me) while still maintaining the capacity for reason.
Quoting Mww
This certainly leans closer towards a definition of subject that will resist attack if awareness turned out to be potentially (or in some sense actually) impersonal.
Quoting Mww
Tautologically, without the subject, the truth has no value to that subject. The necessity or capacity to make distinctions is lost to a non-extant subject, but does that preclude the truth from being in its own right? One could say that we don't have to make such a judgement, assuming the nature of truth does not impact our use of it. I am not sure of that.
Quoting Wayfarer
This is a very substantial postulate that needs some rational grounds for me to accept it.
Quoting Wayfarer
What is the alternative methodology? The last thing I want is to conclude at my premises, but I will not be convinced through sentiment either.
Quoting Wayfarer
As I said before, I am willing to allow that matter could be self-perceiving, under certain conditions. Our familiarity with matter is insufficient to make such a judgement, but for me, it is a possibility. And I think that it requires the least amount of extraneous philosophical content. Considering the states of mind that an individual can experience, due to illness or age, I can hypothesize a plethora of mental states. And since I am skeptical that our characteristic mental state is the only one that can sustain reason, I prefer to generalize philosophical arguments beyond the typical frame of mind.
But natural selection is a theory of the origin of species, and, as such, a biological theory. (Although it might be relevant to note that Alfred Russel Wallace, credited as co-discoverer of the principle, did not accept that it amounted to an in-principle explanation of the intellectual or rational faculties of h. sapiens. (Ask yourself - what is sapience?)).
Quoting simeonz
The dictionary should suffice. The definition of machines, devices, beings, and organisms, demonstrate that they are different in kind.
Quoting simeonz
As far as science knows, this is only ever evident in the case that it forms the physical aspect of sentient beings. Took several billions of years, and stellar explosions, to happen, however ;-)
Quoting Wayfarer
You were arguing about the ontological content associated with different physical forms. You resolved this question by a dictionary lookup?
Quoting Wayfarer
We have developed the skill of engineering and have the resolve to embody the material expression of our intelligence into an artificially produced vessel. This will change the time scale significantly. For better or for worse, it has become essentially unavoidable at this point.
It’s that simple.
Quoting simeonz
Is 'eliminative materialism' an empirical hypothesis? Is there any conceivable way of determining whether it's true by empirical means?
Edit: What methodology do you use to justify your disagreement with eliminativism's self-consistency and plausibility, or is it a matter of incompatible premises of your philosophical position?
Edit 2: If you mean that empiricism cannot be validated externally by empiricism itself - this is true. But why do you think that it ought to be?
Edit 3: I just thought of another quality of eliminativism, as a hypothesis, that appeals to me. Minimalism. It assumes the least amount of unobservable externalities. I am assuming the moderate form of eliminativism the Oxford encyclopedia of philosophy refers to - that the mind is real, but is directly embodied.
Yeah, that’s pretty much standard, isn’t it? The more one voids philosophical predicates the more he leaves room for empirical predicates, if he chooses to furnish the room at all. Still, “most void” is not empty, and as long as one reasons, the philosophical position can never be empty.
————————
Quoting simeonz
Awareness is impersonal, for it merely indicates an arbitrary condition of that which is in possession of it, but cannot define it. What a subject is aware of, serves as sufficient determination of what kind of subject it is, which does define the personal.
————————
Quoting simeonz
Truth is a distinction, insofar as it is a member of a complementary pair, and if the capacity to make distinctions, that is, recognize a complementary pair, becomes lost, doesn’t that make the complement itself moot? If there is no making or comprehending a distinction, how can it be said there is one?
And, truth is already a being, the being of true. The loss of distinction of being true is exactly the same as the loss of distinction of truth. But is the truth precluded from being in its own right? So, no, I guess not. A truth will be true whether it is known or not, but that still gives us nothing. We still have to know what is true in order to know a truth.
Semantic word games....BOOOO!!! Sound logical reductionism......YEA!!!!!
I gave an argument, very early in this thread, which is that there is no physical equivalent of the "=" sign. It can be extended to the argument that symbols generally, which are the basis of language and abstract thought, can't be meaningfully reduced to physical laws (an observation which is the basis of the discipline of biosemiotics). So there's the 'hard problem' of how to understand the nature of experience, 'what it is like to be...', on the one hand, as articulated by David Chalmers et al. But I think the much harder problem is how to account for the nature of reason, language and abstract thought
- which are foundational to all attempts to arrive at any theory whatever, be it materialist or other. Put another way, if the universe is, as materialism tells us, intrinsically meaningless, then how is meaning and reason grounded in it?
Now I know the instinctive answer is that linguistic capability evolved and that, therefore, this problem can be addressed through the perspective of evolutionary biology. But as I've been arguing, I think this amounts to a kind of category error, as evolutionary biology is first and foremost a biological theory to account for the origin of species. To apply it to the questions of epistemology - a project called 'naturalised epistemology' - is to implicitly equate language and thought with biological adaptation, which I claim is intrinsically reductionist. To put it another way, yes, h. sapiens evolved, but at the point of becoming rational, language- and tool-using beings, they crossed a threshold which is no longer within the scope of biological theory per se. [sup] 1[/sup]
Certainly, h. sapiens evolved, but the question is, can the elements of reason (such as logical laws, natural numbers and so on) be meaningfully viewed as the product of a process of biological evolution? Evolutionary materialism answers in the affirmative - but this is the very point at issue. And this attitude embodies many philosophical assumptions that I (and many others) consider unwarranted.
This also is why evolutionary theory is bound up with that of eliminative materialism. Evolutionary theory is supposed to provide a de facto 'philosophy of mind', which many take for granted nowadays. This theory is that the exigencies of survival are such that our intellectual capacities have been shaped to entail truth-bearing perceptions and cognitions. But if you really think that through, there's no guarantee that such perceptions will be true in distinction from being merely well-adapted. And, as remarked by one of Dennett's critics, 'if reason is a product of natural selection, then how much confidence can we have in a rational argument for natural selection [sup] 2 [/sup]?' This is an argument that has been developed at great length by several philosophers of religion, specifically in Alvin Plantinga's 'evolutionary argument against naturalism [sup] 3[/sup]'. Another version is 'the argument from reason', originally associated with C.S. Lewis. And there's also Thomas Nagel, whose work has the distinction of *not* being religiously motivated, but seeks to criticize the self-contradictory nature of neo-Darwinian materialism on its own terms [sup] 4[/sup].
What it all comes down to, is that eliminative materialism doesn't succeed in eliminating the subjective reality of being; it basically ignores it, and then says 'what's the problem?' But the only reason it can ignore the subjective nature of being, is because it is fundamental to everything seen, said and done - so it can be, and is, taken for granted! (Which is precisely the blind spot of modern science.) In this way, eliminative materialism actually reverses or undoes the whole project of philosophy, which is to make explicit what is usually assumed, to expose our deep and taken-for-granted presuppositions about the nature of being.
So, the methodology, or meta-methodology, that I am recommending is that of critical philosophy, which I believe exposes eliminativism's fatal shortcomings.
Abstract notions, such as equivalence, causation, correspondence, etc., can be considered predicated on nature's reproducible conditions. For a naturalist, abstraction can be explained as the emergence of generalizing faculties in the human cognitive apparatus, as an optimal response to the exhaustible external varieties. If you imply that human sentience is irreducible to information processing, then you will not be satisfied with this answer.
Quoting Wayfarer
The implied premise of the question, however, is that the search for (universal and eternal) meaning is not a rational need for foundational permanence. If it were, then it would make the universe meaningful by definition for a naturalist, because it is permanent and a foundation unto itself. Such a hypothesis may be overreaching, but so is the idea that the universe was conceived by an omnipotent creator that is permanent and a foundation unto itself. The constructions are so similar, they end up differing on what seems like a technical note.
Quoting Wayfarer
Human knowledge is fragile. You are correct, that any theory that supports that knowledge is corrigible attacks its own foundations. But given that human knowledge is indeed imperfect, shouldn't any sound epistemic theory have the obligation to model our understanding in such a way, as to seed reasonable doubt in its own validity, in consideration of its origin?
Quoting Wayfarer
I don't understand how reducing subjective states (mind) to objective states (matter) attacks the existence of the mind. This is the same as believing that allowing your doctor to examine your cough will make it vanish from existence. I think it merely attacks the mind-body distinction. Allowing the mind to be understood empirically does not disregard it. It rather appears to me that dualism has a notion of the human soul, which is being attacked. But this is not the same thing as attacking the mind, except for a dualist. I dare say, I am an existentialist. I believe that if your values depend on your irreducibility, your ethical choices rest on the wrong premises. This explains why I don't subscribe to the objections against the "meaninglessness" of the material world unto itself.
But from there on, how the subject emerges, and what additional faculties are necessary for it to experience and evaluate this fact awareness, I don't know. We know that belief is not the same as truth. (I am adamant that the mentally ill, the very immature, the very elderly, etc., should be part of any meaningful epistemic discussion.) Also, we know that understanding requires awareness, but not self-awareness. To define the relationship of truth and subject, we have to decide what a subject is. By which I mean - determine what kinds of subjects there are (or could be), and how the type of subject affects the aforementioned relationship.
I don't want to examine my particulars. If I am going to talk about what a subject is, I would prefer a definition that doesn't rely on the actual existence (but rather just the plausibility) of the subject, creating an epistemic reference context of some kind and a mapping to a material context.
Edit: In summary. I believe that the distinction between fact awareness and fact obliviousness can be defined objectively, as representation through material symmetry of some kind, whether with an implied emerging subject or not. The assignment of truth values is a different matter, which requires something more, which if materially expressed, is very hard to define.
Yes, I can see that. But thanks for your reply.
That is fine. People cannot agree all the time.
On the other hand, courtesy is its own argument. Becoming personal can discredit your statements.
I’d be very surprised if anybody does. I certainly don’t. Could be just a natural result of the plurality of extant physical conditions that makes it seem like a subject emerges. Maybe the subject only emerges because of the human propensity to explain everything, and when the notion of “subject” first came about, no explanations related to the brain were even possible, never mind sufficient, so we invented one.
No matter what, we cannot remove ourselves from the subjective condition, its reality or illusory appearance notwithstanding. That being the case, it doesn’t really matter where it comes from nor does it matter how it makes its presence felt. Thus, it would seem much the more productive to concentrate on what “subject” does, rather than what “subject” is.
———————-
Quoting simeonz
Ok, true enough, insofar as understanding works by means of integral synthesis, so doesn’t hold any consideration for the source of that which it synthesizes, that source being the thinking, conscious subject who by definition is certainly self-aware.
———————
Quoting simeonz
As well you should, because it’s highly doubtful the subject has an actual existence anyway. Rational existence, of course, because we can conceive it, conception being the standard-bearer for plausibility, but to call a rational existence an actual existence casts epistemic shadows on objective reality, and just sustains the notion that the primary defect in human reason is its proclivity for confusing itself.
Can we agree there cannot even be talk, which implies communication, about what a subject is without the ubiquitous subject/copula/object process? I mean, that is the condition upon which our language is built and we are never going to understand each other if we don’t both use that very specific propositional construction, irrespective of its content. But before either of us speaks anything, we have to think it, and the thought which becomes intelligible communication absolutely must adhere to the same propositional construction. It follows that when I think about what a “subject” is, the “subject” immediately becomes the object in the propositional construction, re: for me, that which exists as the necessary condition for all rational enterprise exists as “subject”, and when I communicate the thought, the objective nature of “subject” holds.
That being said, there is an intrinsic epistemic reference context, because what I think is known to me to be true; but for a concept that is itself immaterial, my talk about what a subject is cannot have a material context. Objective context, certainly; material.....not so much.
————————
Quoting simeonz
Truth values are relative to human intelligence alone, and the something more for its assignment is reducible to, “....the accordance of the cognition with its object...”. Because this is merely a logical representation of what truth is, a material expression of it cannot hold universally because it is absolutely impossible to cognize the manifold of all possible objects. Very hard to define indeed.
I will need some time to answer coherently, but I think that the difference between our points of view about the significance of the subjective is not one of essence, but one of purpose. I am looking to "understand" (which, given the lack of data, is a strong word) what a subject is, how it becomes a subject, and what kinds of subjects, with what kinds of qualities, are potentially possible. You are looking into the application of the subject: what subjects can do better or worse, and how to improve their performance.
I am not approaching the discussion anthropocentrically. In a sense, I consider ignorance a disease that experience (collective evolutionary as well as personal) cures with time. To understand the disease, I don't want to latch onto the present condition of the existing species. I want to understand what drives the process and, arrogant as it may sound, where it means to converge. That is why I frequently make references to animal cognition, insanity, dementia, infant language-absent thought, etc., because those are the closest examples to the characteristic mental state of a human being that are relatively well known, yet deviate enough to affect the capacity for rational thought.
Ehhh.....I’m not really interested in subject qua subject, it being merely a necessary condition for the human cognitive process. And I’m not really interested in deviant human reason, for that merely tells me what it isn’t when I want to know what it is in its purest form.
If you come up with something you think might be interesting......throw it at me.....see if it sticks.
Cool.
Regarding chiming in later: I am skeptical that I will make much progress, but I may look into semiotics (a good pointer from Wayfarer) and see what crystallizes out of it.