My biggest problem with discussions about consciousness
I don't understand why, but the vast majority of people I talk to take it as a given that:
A) humans are conscious (not just me)
B) animals are conscious
C) machines are not conscious (no matter how complex)
D) inanimate objects are not conscious
And I'm baffled, because I see no explanation for any of these. In fact, I think if you accept A, then B, C, and D shouldn't even make sense. What makes THIS assortment of molecules (humans) conscious where THAT one isn't (machines, inanimate objects)? If you exclude supernatural explanations (souls and the like), then you see that humans are nothing more than data-processing, self-duplicating biological machines. To assume that these biological machines are conscious whereas mechanical ones are not seems downright unreasonable to me. Do people who think this believe that consciousness is inherent in carbon atoms but not silicon? If so, are hunks of coal conscious? Can anyone please explain logically why B, C, and D are true or not?
Comments (65)
You seem to base your reasoning on this observation, but how are you so sure the statement is true? Can you be certain that what you see is all there is?
Surely to assume that humans are ‘nothing more’ than what they see is equally unreasonable.
In the physical sense of things:
Humans and animals are similar.
Humans and machines less so.
Humans and inanimate objects even less so.
A: "Such and such is consciousness"
B: "I can relate to such and such".
Notice that nobody disagrees with you whenever you use B in a situation, because they tend to view B as an assertion you are making about yourself, rather than an assertion you are making about 'such and such' in itself.
On the other hand, whenever you say A in a situation, people normally interpret it to be an objective assertion you are making about 'such and such' in itself, regardless of whatever personal feelings and intuitions you harbor towards such and such.
In my opinion, this common realist belief that A and B refer to different things, which is itself a consequence of assuming an ontological distinction between subject and object, is the root cause of philosophical skepticism about the existence of other minds.
Philosophy of mind discusses a variety of aspects of consciousness, such as the holding of beliefs, intentionality, qualia,... and there's no evidence of such things being present in objects other than organisms with brains. However, if you believe in some form of dualism, I see no reason to rule out minds being attached to anything.
I think that typically, the line is drawn somewhere between B and C (or perhaps C and D for some) not because of what the systems are made of but because of what they can do. Functionalists typically say that thinking, knowing, feeling, and perceiving are things that brains can do but other kinds of systems cannot. Their evidence for this seems to be that when we knock out certain functions in the brain, the corresponding subjective capabilities disappear; for example, we lose consciousness altogether when whacked in the head. The IIT theory of consciousness draws the line in a different place. It says a system is conscious if and only if it integrates information. Brains integrate lots of information, and so are the most conscious systems. Simple atoms and molecules integrate minimal information, and so are minimally conscious. If there were a system that integrated no information, it would not be conscious.
However,
Quoting khaled
...I agree with you. I think attempts to draw lines (either sharp or fuzzy) in nature separating the conscious from the non-conscious involve conceptual errors.
Not an assortment, an arrangement. And what arranges them? It is customary to believe nowadays that this arrangement is something that just falls out of the rules that govern molecules - but is it? That account was arrived at by dividing the whole being into two halves - mind and body - and then declaring that the former is dependent on the latter, and that molecules themselves are the ground of agency. That is philosophical materialism - one of the reigning myths of modernity.
Quoting khaled
I would suggest that we generally don’t understand what ‘supernatural’ refers to, other than in the sense determined by social custom and tradition. But we no longer have adequate metaphors for conceptualising such ideas, so the metaphor of ‘mechanism’ prevails. But it’s only a metaphor; beings are not literally devices or machines. You’re actually running up against the inherent limitations of that metaphor, i.e. you can sense that it’s inadequate, but you can’t see an alternative.
That’s what I think you’re dealing with.
Because the idea that consciousness is exclusive to humans gratifies yearning and fear, in relation to the unknown, creating an illusion of intrinsic meaning and leading even to an unwarranted feeling of euphoria. It gratifies our egoism and supports our ancient concept of creationism, of which even the most steadfast skeptic doesn't want to let go.
I believe that if a machine is able to convince us that it's conscious, it's approximately the same as a human convincing us of their consciousness.
It also seems that we are comprised of so much inanimate material, and so many microorganisms, that we should query more thoroughly into their involvement in our "consciousness", or at least into the supportive biological system(s) we claim consciousness inhabits.
I was considering starting a new thread about free will and consciousness today, but not many respond to my threads, so here we are.
I've been wondering how we determine which species have consciousness and which species don't and how we feel we've securely established such egoistic claims concerning our importance in the grand scheme of things. How does my dog not have free will, or fish, or bacteria? Why have we drawn a line in the sand between ourselves and literally everything else we can observe and said to ourselves "we're the only thing of any intrinsic consequence"? It's a bit stupid, to say the least.
However there is no logical problem with believing, as Descartes did, that only humans are conscious. Descartes believed that a human was made of a body - atoms as you describe it - and a spirit, which we do not have the means to detect. It is the spirit that gives consciousness, and he believed the spirit was injected into the body by God.
There are also non-theistic hypotheses about consciousness, such as Emergence - when atoms achieve a certain special arrangement, consciousness arises.
Part of the problem in the OP comes from taking a reductive approach to life - saying that describing the atoms that make up a living organism is exhaustive.
Did you read Philosophy in the Flesh? Because that sounds exactly like it. But still, you haven't offered an alternative metaphor and haven't answered my question.
The reason I say that is just to be fair. We describe only the atoms that make up machines, animals, and inanimate objects, detect no "spirit", and then use that to conclude that they have no consciousness. In the meantime we can't detect a spirit in ourselves, and yet see no problem asserting that we're conscious. That's hypocritical and downright stupid to me.
We directly detect our own consciousness. There is no supposition going on there.
The supposition first starts when we use the observation that other humans seem to be very similar organisms to ourselves, to assume that they also have consciousness. We take a further step to do that with various non-human animals, based on shared characteristics that are deemed to be relevant, such as a brain and nervous system. That sounds pretty reasonable to me, although it is well-understood that the presence of consciousness in others cannot be proven.
So I don't think we do describe other humans and animals only in terms of their bodies. We also assume they have consciousness. I suspect we spend more time talking and thinking about the conscious feelings and beliefs of other humans and non-human animals than we do about the activities of their bodies.
The question would then be whether such a consciousness is possible only on account of being a suitably competent symbolic language user, and whether some other animals, presuming that they are not symbolic language competent, nonetheless may be self-reflectively aware.
When it comes to machines, the question would be about affective response and its role in being able to intuit context; in other words as to whether it is possible to be consciousness in the self-reflective sense, or for that matter any other sense, if nothing matters to you.
If it's any consolation, they laughed at man flying. And now aeroplanes are quite mundane.
A bit off-topic:
People also laughed at the idea of flight in ancient times.
Even though the Quimbaya Airplanes have been tested and the Nazca Lines seem to mimic 'modern' trails left in the sky.
Just goes to show, humanity is one foot dragging the other.
I don’t believe a rock is conscious, BUT I do think that there is some degree of consciousness at a molecular and/or perhaps even subatomic level. While a rock has no sense of being a rock, it consists of molecules that interact and exchange energy/information with their surroundings - they are individually ‘aware’ of something more than this, here and now, at least - even if only in each fleeting, indistinct moment.
As for your description of humans as ‘data-processing, self-replicating biological machines’, while I agree that we are nothing more than other animals are, except perhaps a more developed system, I would argue that the biological system itself is more elaborately interconnected, and therefore potentially aware of itself, than any one machine. A digital system, however, may be another story.
I think the question of how chemistry interacts with information processing might be worth exploring here. Non-living molecules seem to process information uni-dimensionally (they simply internalise it), whereas living molecules process the same information bi-dimensionally - that is, they can relate information to other information, leading them to internalise it in a different way, and eventually to distinguish between instances of that, there and then, and build a ‘picture’ of their environment.
I have nothing to back this up, mind you. Unfortunately I don’t know enough about chemistry, biology or information theory to either confirm or deny these wild speculations I have. If someone more knowledgeable could set me straight or point me towards studies in this area, I’d appreciate it...
We infer, via induction, that things other than ourselves are probably conscious, relative to their material/functional and behavioral similarities to ourselves. So it's a pretty safe bet that other humans are conscious. Re other animals, the more different they are in terms of brains and behavior, the less comfortable we are assuming that they might have anything like our consciousness.
With computers, robots, etc., we're able to program some at least superficially similar behavior, but materially, they're very different. So it's not going to be clear at what point, if any, it would make any sense to attribute consciousness to computers or robots.
Could you give an example of the evidence?
Sure. Here's one example of evidence: https://www.popularmechanics.com/science/health/a27102/read-thoughts-with-brain-scan/
Brains are made of different materials than rocks, and that is one good reason why rocks don't "experience" consciousness relative to brains. Brains are composed of particular materials interacting in particular ways relative to other stuff in the universe. And we only discover consciousness at the locations where brains are present.
I'm very suspicious of those brain-scan studies. Sure, once you build up a big enough database then you can infer meaning from the data, but what is the nature of 'that which infers meaning'? Is that something you're ever going to find in the data, or do you already have to have it to infer anything? In which case, it's internal to thought.
Have a look at Do you believe in God, or is that a software glitch?
I don't think I understand what you're asking there. If I read the question literally, you're asking 'what is the nature of "individuals thinking about x in a semantic manner"', but then I'm not sure why you'd be asking that.
At any rate, the example I gave is just one example. Other sorts of examples include people ingesting substances that have effects on their consciousness or thinking, brain injuries having effects on the same, etc.
I don't understand why they're calling it an "illusion" in that article.
The only place we discover consciousness is in ourselves. Or more strictly, the only consciousness I can discover is my own. The consciousness we 'discover' in others rests on inferences: we observe behaviour in others, assume that similar effects have similar causes, know that our own behaviour is caused by our experience, and conclude that they must therefore have consciousness too. But if we allow these assumptions, that opens a can of worms when it comes to deciding what behaviour is sufficiently similar to our own to validly infer consciousness. Sure, we are maximally similar to other human beings, but we are also similar to rocks in a whole load of ways. We still need a principle to tell us when we can make the inference and when we can't. Do you have a way to decide?
Of course no one would deny these well established facts about the relationship between human brain function and human experience. What I'm struggling with is what you can conclude from these, other than such and such experience in humans is dependent on such and such brain function in humans. Can you spell out your conclusion with the reasoning?
Why isn't that enough? What else are you looking for?
There’s a basic principle which I think defeats ‘brain-mind identity’ theory. This is that symbolic representation and abstraction literally cannot be understood as a physical process. They can be instantiated physically, which is how written symbols and codes are possible (not to mention computers and calculators). But the fundamental intellectual acts that form the basis of abstraction, logic and rational inference inhere wholly and solely in the relations of ideas. They are purely and only intellectual in nature, they are not physical.
One argument for this is that exactly the same ideas can be represented in completely different symbolic forms. Not only via different languages, but also via different media, like binary code, written text, and so on. In all such cases the information remains invariant, but the representation is completely different. So, what is different, and what stays the same? The representation and the medium differ, but the information content remains exactly the same. And this is only possible because information inheres in a different logical domain to the physical.
Don't you have to not be a nominalist to believe that that can be literally true?
There are degrees of similarity with regards to (bio)chemical composition and functionality. Rocks are less similar to us than other humans are in this regard.
Are you fishing for certainty with regards to "needing a principle" to make inferences about where consciousness is located?
Not necessarily certainty, a tentative hypothesis would be fine. It's the obvious question to ask someone who thinks that we can infer consciousness in other brains, but not in rocks.
It's not enough for a more general conclusion, such as the one you give:
Quoting Terrapin Station
The problem with it on my view is that you're positing numerically distinct identicals (as in different instances of "the same (exact/identical) thing"), and there are no such things on my view (as I'm a nominalist).
Why not? It seems to be more than enough for that. You'd have to try to support why you feel otherwise
https://m.youtube.com/watch?v=GLIol6viKkI
What you have is:
A subjective experience of one of the 42 concepts is reliably correlated with brain event A
Brain injury affects subjective experience in humans in systematic predictable ways
Drugs affect human experience in systematic predictable ways
Therefore, consciousness is a property of our brains
...there's too much missing. I'm not insisting on a strictly deductively valid argument, but I'd like to see some of the gaps filled in.
You could try something more precise, for example:
Brain event of type A is necessary and sufficient for subjective experience of type a in humans
Brain event of type B is necessary and sufficient for subjective experience of type b in humans
and so on, for C, D , E, F etc
Therefore, all subjective experiences in humans are dependent on and necessitated by corresponding brain events
The conclusion there has a clearer connection with the premises. We've moved from the particular to the general in a reasonably transparent manner.
But even that doesn't tell us much that's interesting about consciousness, apart from that at least one thing in nature is conscious, namely, brains. It doesn't get us anywhere nearer to figuring out if, say, a rock is conscious, or not.
EDIT: typos fixed
Gaps such as?
The conclusion is a general statement about consciousness, but the premises are all about experiences in humans.
Is that addressing my question?
First off, I'm not forwarding anything in the manner of a deductive argument. Why would you be reading it that way?
Yes
Quoting Terrapin Station
I thought you were trying to say something, offering evidence for a conclusion.
lol
Exactly. Which doesn't imply anything like a deductive argument.
If your principles are challenged by an argument, then you've either got to defeat the argument or change your principles.
It's not my "principles," per se, it's the way the world factually happens to be (on this view, of course, but it's an empirical matter). The argument is defeated because it's positing something false about the world.
But, you haven't given any argument for it. You've simply said 'Because of nominalism, it can't be true'. Whereas I'm actually making an argument! And I think it's a good argument. So far, your response is basically, 'well, I don't like it'. And as this is supposed to be a philosophy forum, I think if you're going to bother saying something, it has to add up to more than that.
The issue here is this: the mind itself, whatever it is that 'makes humans conscious', may or may not be 'supernatural', but it is not an object of cognition. There is nothing anywhere that you can point at or objectify and say is 'mind'. You can only infer the reality of mind in others from their behaviour. As for knowledge of your own mind: well, it's kind of contradictory to say that you know your mind, since the mind is the subject of knowledge, "that which is knowing". But you can never really know it, in the same sense that the eye cannot see itself, and the hand can't grasp itself. The mind is the unknown knower.
The OP is simply an expression of the wish to avoid this conclusion, which is distinctively non-scientific, and therefore threatening.
So, nominalism isn't the case because of an argument for it. It's the case because it's what the world is like factually.
It's just like the surface of the Earth being 75% water isn't the case because of an argument for it. It's the case because that's what the Earth is like factually.
Now, if someone doesn't believe that the surface of the Earth is 75% water, we'd have to figure out why they believe something other than that, and we can try to find ways to convince them otherwise, which might include something like deductive arguments, but that's all about trying to persuade someone. The facts in no way hinge on an argument.
To say that it has anything to do with what I like or dislike is comical. I don't like/dislike that the Earth is 75% water. It's just a fact that it is. By the same token, I don't like or dislike that nominalism is the case. It just is. Those are the facts. It's what the world is like whether we are fond of the fact or not, whether we're aware of it or not, etc.
First - your argument is not empirical, but metaphysical. It is not an argument about any state of affairs in the world, but a statement about the way language and thought proceed.
Now I say that for arguments to be coherent, they have to rely on general terms. And general terms are those which are true in all cases, across the board. Now I'm arguing that these general terms are in some important sense the same as universals. Universals are true of whole classes of things. So if nominalism denies this, saying that all general terms are only names, then I don't see how you can ever really present a coherent argument about anything. Every argument only deals with a particular thing, and never with classes, kinds, and general causes.
You have a wonderful way of expressing concepts, I always enjoy reading them )
Tibetan buddhism defines concentric spheres of knowledge:
1. that which we know but don't know that we know
2. that which we know that we know
3. that which we know we cannot know
4. that which we don't know that we cannot know.
Strangely, most people in the west find themselves in the outermost sphere. Consciousness more properly belongs in the third, as you say.
No, first, I'm not forwarding an argument in the sense of premises implying some conclusion.
Secondly, it is empirical, and metaphysical and empirical are not distinct in the manner in which you're trying to suggest.
Let's sort through those two things before moving on.
Kind of you to say so!
Quoting ernestm
It rings true, but I'd be interested if you could dig up a reference for that.
Quoting Terrapin Station
How is nominalism an empirical argument, then?
We're talking about what the world is like factually, empirically. Either only particulars exist or universals exist too (or instead). That's a matter of what sorts of things there are, just like any other empirical matter.
The two aren't mutually exclusive.
Quoting Wayfarer
And that's how many species of x exist, how many types of y exist, etc.
Just to re-focus: the discussion about whether only particular things are real, or whether universals are also real, is not an empirical question, because it's *not* about existing phenomena. In fact 'what the world is like' is also not an empirical question, because you don't have another empirical world to compare this world to. All of this is in the domain of metaphysics, not empiricism; not an empirical matter.
This is incorrect. It is about what phenomena exist, what those phenomena are like. That's empirical and metaphysical.
It's the spheres of knowledge drawn around Amitabha.
I've heard them described by a number of monks, and there is a mention of it in 'Foundations of Tibetan Mysticism' by Lama Govinda which, despite its title, is the most detailed and profound book on the subject I've ever found.
Sense data is processed as patterns. The patterns are processed to form invariant representations. An invariant representation or concept of an apple, for instance, is comprised of sense data from various senses, like color, shape, texture, smell, taste, etc., as well as the various kinds of apples and states (such as fresh or rotten) of apples. All of that just to form the simple concept of ‘apple’.
For there to be some kind of nonphysical apple concept, it would need to perfectly mirror the physical world, like a nonphysical world matching and perfectly aligned with the physical world. The ultimate redundancy.
Not at all -- the opposite, in fact. What you recognise when you recognise an apple, is a type, which can then be generalised to all such types, and a superset of types.
Animals can recognise and respond to shapes, but I don't believe they can abstract from shapes to form a concept. A concept is derived from the ability to perceive likenesses and differences across a whole range of slightly different particulars and to see what is common to all of them. It is precisely that ability for which there is nothing corresponding in the physical domain.
And I suggest that if you try and explain that in terms of pattern recognition, then you need to already draw upon a stock of concepts to make the argument.
This is not clear. You’re saying there’s a nonphysical representation of ‘apple’ that is subdivided to match the physical world as need be? If so, it is still completely redundant, unless you’re saying that there is no matter and everything is mental, but that’s just the inverse of materialism, which would be materialism in all but name.
First, define it. From there we can talk about a problem.
Second, imagine these two idiots arguing about a car:
Tweedle Dee talks only about the feeling, joy, experience, and act of driving a car, pointing out that one cannot just ram up the backside of a car; the space around the car is as much a part of the experience of driving. A full diatribe of the horse and rider as one, Jinba Ittai.
Tweedle Dum, on the other hand, says that without the schematic of the car, the chemistry, physics, engineering, and all the parts coming together to give you a machine, one doesn't have a car to drive. It is meaningless to talk about the fiction of a driver without the vehicle itself, because there is nothing first to contain it. He points out that one learns to drive a car from exterior sources, and that the validation of being a legal driver is one of pure bureaucracy.
Both Tweedle Dee and Tweedle Dum understand each other implicitly but from a point of self-invested emphasis, talk right past one another. Imagine really sitting there listening to this conversation….
Would you be interested? Would you find it insightful?
As far as I am concerned:
Tweedle Dee of consciousness can go get a lobotomy and let me know how that worked out for their conscious experience of the world.
Tweedle Dum of consciousness can go and catch a thought and show it to me.
The entire topic of consciousness is complicated enough without this being the frame of the debate.