Mental States from Matter but no Matter from Mental States?
I think there's tension in the claim that matter can produce consciousness but not vice versa. For example, it is claimed by many that if you arrange brain-stuff a certain way and run a current through it, you can produce the feeling of stubbing your toe. But if you combine the feeling of stubbing your toe with the beauty of a sunset while listening to a Bach symphony, you don't get a working brain from that. You never get anything material from mental states. Isn't this a problem for physicalists who believe in matter/energy conversion? Why not mental/physical conversion? Why is it a one-way street?
Comments (163)
That premise rests on the assumption that mental states aren't physical states. There is no reason to believe that physical stuff isn't mental stuff. There's no other intelligible option given what we know.
So the case presented, of stubbing your toe while looking at the sunset, can't be stated in those terms, because absent the modified physical stuff found in brains, you couldn't even stub a toe or look at a sunset. There would be nothing there.
The reason it is a one-way street is that mind is not opposed to physical stuff; it is physical stuff. It's the physical stuff with which we are most acquainted, simply in having experience.
So even if you couldn't get a working brain from phenomenal qualities, you can certainly create completely new and unique aspects of physical stuff just by thinking about anything - flying fish, Paris, a golden mountain, or anything you can think of.
There's much more to say about this, such as the topic of intentionality, the property of mind which is about the postulated objects we experience in the world. Without such a property, we couldn't even construct a world.
So in short, the dichotomy between mind and matter doesn't hold. Physical stuff just "works" the way that it does, which is astonishing enough as it is.
Why should we assume physical states even exist? What evidence do you have for the existence of the non-conscious stuff these physical states are supposedly made of?
Sure there is. Think of some music. Is there music playing in your skull right now? Does your mind seem to have weight? Does it seem to be about the size of both your hands put together? Is your imagination bound by the size of your brain? Why are some parts of the brain conscious and some parts not? What is the explanation for how consciousness arises from matter? If you don't know that, then what is the framework for the emerging explanation for how consciousness arises from matter? If you don't know that, then your belief system is severely lacking in explaining something as fundamental as consciousness. That's catastrophic, as far as I'm concerned.
There is no forthcoming explanation because no explanation is possible. Non-conscious stuff doesn't produce consciousness. It's a category error that leads to absurdities. I don't know if you in particular think a functional equivalent to a human brain made of flushing toilets would be conscious, but I've met plenty of materialists who do think that, and it's not hard to get a materialist to agree to that absurdity.
Pretty much every other option is better than brain=mental states. I'll use a favorite example of mine. Imagine two ancient Greeks talking about their mental states. Pretty easy to do, right? Now, if mental states are the exact same thing as brain states, and if those ancient Greeks are meaningfully talking about their mental states (which they are), it follows they're also meaningfully talking about their brain states. But of course ancient Greeks had no idea what the brain was even for, let alone being able to describe brain states to each other. Therefore, mental states aren't brain states.
If you got a bunch of switches and ran a current through them and turned them on and off in a certain way...would consciousness be produced?
When alarmed, your body will produce adrenaline, when in love, oxytocin. The whole field of mind-body medicine relies on this.
You're barking up the wrong tree. What materialism can't provide a satisfactory explanation for is meaning, and the faculty that perceives it, namely, reason.
All of that is compatible with idealism. Science does NOT say that adrenaline is some non-conscious stuff. Science is mum on metaphysics.
Fair point. I will amend my claim to "you don't get new/additional matter from mental states".
His general point stands: legs are a prerequisite for walking; walking does not cause legs. Atomic structure is a prerequisite for materials; material structure is not a prerequisite for atoms. A prerequisite for atoms is massive, charged particles; atoms are not a prerequisite for massive, charged particles. Or, more simply, trees are a prerequisite for forests; forests are not a prerequisite for trees.
You have an invalid assumption: that every hierarchical relationship in physics is or ought to be a two-way street. That is not a peculiarity of physics (just your conception of it) so, no, it's not* a problem for physicalists that consciousness is a function of brains but cannot create brains.
*EDIT: thanks Shirley
Do you believe the brain is a prerequisite for consciousness? If so, why do you think it's taking so long to come up with an explanation for how the brain produces consciousness? Also, how long would you be willing to wait before giving up? For example, suppose 1,000 years from now the Hard Problem remains. Would you reexamine your belief that consciousness arises from matter? What about 10,000 years from now? Also, would you agree that anything that is functionally equivalent to a working brain should be conscious?
As for running and legs and brains, we have an explanation for running/walking. We have no explanation for the emergence of consciousness from the actions of neurons. Also, "Running" and "legs" exist in the same ontology, just like "wet" and "water" and "river". No new ontological categories are required for those examples. Not so with physical states and mental states. They are obviously ontologically different things.
If physical states can cause mental states, why not vice-versa? In other cases in physics where A causes B but B can't cause A, there's an explanation. What's the physicalist explanation for why matter can produce mental states, but not vice versa?
Sure methodological / pragmatic / phenomenological 'materialists' can ... via linguistics, semiotics, discursive pragmatics, embodied cognition, cultural anthropology, etc. Vide Peirce, Wittgenstein, Merleau-Ponty, Austin, Chomsky, Levi-Strauss, Eco, Deleuze, Haack, Bourdieu, Dennett, Lakoff, Flanagan et al.
Quoting RogueAI
If so, then how do 'mental states' interact with 'physical states' without a shared (causal) ontology? Not Malebranche's occasionalism ... :roll:
The resolution to the apparent paradox is that it's all just information. Matter is like data, mind is like code. Code is nothing but data being executed, data is just anything accessible to code... and all data can in principle be executed as code, though most of it does nothing interesting when executed.
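The data/code duality can be made concrete with a toy stack machine. This is a sketch I'm adding purely for illustration; the opcodes, names, and semantics are invented for this example, not anything from the thread. The point it shows: the very same list of integers is inert data to one routine and an executable program to another.

```python
# The same list of integers, read two ways: summed as data, or
# executed as a program on a minimal stack machine.
PUSH, ADD, MUL = 0, 1, 2  # hypothetical opcodes for this sketch

def run(program):
    """Execute a flat list of ints as stack-machine code."""
    stack, i = [], 0
    while i < len(program):
        op = program[i]
        if op == PUSH:          # push the literal that follows
            stack.append(program[i + 1])
            i += 2
        elif op == ADD:         # pop two, push their sum
            stack.append(stack.pop() + stack.pop())
            i += 1
        elif op == MUL:         # pop two, push their product
            stack.append(stack.pop() * stack.pop())
            i += 1
    return stack[-1]

prog = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL]  # computes (2 + 3) * 4

print(sum(prog))   # treated as data: 12
print(run(prog))   # executed as code: 20
```

Nothing about the list itself makes it "code"; being code is a matter of how something else treats it, which is roughly the claim above.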
How 'materialistic' (Turing computational) of you – Democritus ... and Wolfram / Deutsch, I'm sure, would approve. :up:
Yes, uncontroversially. This is a philosophy forum, I'm well aware of the difficulty in claiming to know anything beyond that I'm a thinking thing, but as much as one can be certain of anything else, I'm at least certain of that.
Quoting RogueAI
Those are not related things. There is no necessary cause for a brain to come to understand consciousness. If humans hadn't evolved, perhaps no brain would even have a concept of consciousness. I don't think rats, crows and dolphins spend their time thinking about this stuff.
Quoting RogueAI
The hard problem is not a problem, it's a protest. It's even worded by Chalmers as such. There is nothing to wait for.
Quoting RogueAI
An of-the-gaps fallacy again. Science hasn't explained it yet, therefore it must be God/panpsychism/dualism/whatever other ism I favour. If you find yourself making this argument, stop, catch yourself, and remember: no one finds this a good argument when it's not used in the service of their pet theory. And more honest people don't think it a good argument period.
Quoting RogueAI
You've already had the answer to this.
[quote=Howard Pattee]All signs, symbols, and codes, all languages including formal mathematics are embodied as material physical structures and therefore must obey all the inexorable laws of physics. At the same time, the symbol vehicles like the bases in DNA, voltages representing bits in a computer, the text on this page, and the neuron firings in the brain do not appear to be limited by, or clearly related to, the very laws they must obey. Even the mathematical symbols that express these inexorable physical laws seem to be entirely free of these same laws.[/quote]
Quoting Howard Pattee
Woo, Shirley?
The general argument seems to be that reason and the foundations of logic can only be possible if there is a god or higher consciousness as the guarantor of their fidelity. Physicalism is self refuting - isn't that what the pre-suppositionalist apologists say (and Kant and others)?
Wouldn’t go along. I’d just say that we must have the rational faculty in order to define the physical. That’s the sense in which the rational precedes the physical.
Yes, very much. I suspect that explaining reason via God is just another God of the gaps idea - no different than explaining why there is 'something rather than nothing' using God. Explaining that meaning is only possible if there is a God is functionally no different than saying the Magic Man did it. Explaining a mystery with another mystery.
I think Max Planck said that matter is a derivative of consciousness. Logically, you probably could make a good argument for it. The problem is how to find evidence that this is the case.
If consciousness does create matter, it is doubtful that it is the individual consciousness that does so. For if everyone’s consciousness created matter at will, there would be total chaos.
So, I think we need to posit the existence of some form of universal consciousness that actually creates matter as some monistic idealists (Platonists, etc.) have done. But universal consciousness is something that science has no access to - and has not been looking into - hence it can’t say anything about it.
You could say that both consciousness and matter consist of electromagnetic fields and construct a model as to how this actually works but it would remain just a hypothesis.
The problem here is the dualistic assumption that there are two incompatible states.
What is the difference between physical and mental? We know mind exists and only know brains exist by way of the mind. So which came first in the causal process? It seems to me that brains are the form the information/knowledge of other minds takes in our own mind. Brains are how our minds model other minds.
Why is my mind and not my brain observable from my end, but only my brain and not my mind observable from your end?
Not really. We know that, for instance, the laws of motion hold, but we don’t know why they hold. Asking why they hold, you could argue, is overstepping the mark - that’s when you get into all of the pseudo-scientific speculation about ‘why these laws’. Naturalism assumes that there are laws, but as soon as you ask ‘why are there these laws?’ you’re going beyond naturalism. That’s where circumspection is recommended. We know that f=ma but we don’t know why it is - that is not really ‘explaining a mystery with another mystery’, though. It’s recognising the limits of knowledge.
Quoting Apollodorus
The famous quote is:
Which can’t help remind me of the famous Einstein quote along similar lines.
I agree with this. Have I misunderstood you? I was saying it is explaining 'a mystery with another mystery' if you use God as an explanation.
The fact that we don't know why reason works is a philosophical question that may well have a physicalist answer one day. Who knows?
Thanks for the quote. Yes, I suppose a lot of physicists would be prepared to go along with that. And, possibly, a few neuroscientists too. But, as I said, the "mind" or "consciousness" in question would probably need to be a universal one that creates both matter and individual minds or consciousnesses. And the question is, how do you access that by scientific means?
And rightly so. Physics is science. Metaphysics concerns itself with things science can't explain. Hence the word "meta-", which means "after". It's not a temporal "after", but a conceptual one: a dumper or back-hoe that pushes all the elements of human knowledge and comprehension that do not fit in the sciences out of the area of the sciences and into the area we call metaphysics.
I don't agree with this, and I can't prove it's false. But the proponents of this sort of thinking can't prove that it's true. It is purely up to the individual's own intuitive inner world whether he accepts the above and the likes of it as true. It is futile to argue whether it is true that 'We are correct if we assume'.
But that we "must assume" is absolutely incorrect. It is equally possible without the assumption. So we "can assume", or "we are at liberty to assume", but we don't necessarily have to assume.
Hence I declare that the quote is false and misleading.
Dualism at least says there is an independent entity: a mind that lives inside the brain that can make a difference because it can have an independent effect on brain activity rather than being only an effect of brain activity. Dualism also explains the intuition there is a diachronic self - something more permanent that endures through sleep, anesthesia, aging and yet seems in some way to be the same old me. We know the brain creates much of the content of consciousness and we can have no viable perceptions or memories without it but that may just be a virtual reality presentation created for the benefit of an independent observer within the brain.
Mary Baker Eddy (1821-1910), the founder of a now well-established religion known as Christian Science, asserted in her seminal work Science & Health with Key to the Scriptures that Jesus' miracles were in accord with the "Science of God's unchangeable law." She also proclaimed that matter is a derivative of consciousness.
And here's the quote from Max Planck:
I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.
Erwin Schrödinger, on the same topic:
Although I think that life may be the result of an accident, I do not think that of consciousness. Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.
Notice please that Planck 'regards', and does not proclaim, his belief. He says it as a matter of science: something that is up for revision should the need arise to refine the theory. Mary Baker Eddy and Erwin Schrödinger differ from Planck here: they proclaim, rather than merely regard, consciousness as primary.
I think this is a debate that at this point is undecided. You can be an advocate of this, or that, but you can't claim that your theory is the only one and the only possibly good one.
Yes, that's one difference all right.
There are other differences, too, but this is one of them.
I don't think anyone disputed that. The quote or citation was simply to show that some leading physicists are inclined to accept the possibility of matter being a product of consciousness.
I dunno, what if we built the greatest microscope ever and found, written on each elementary particle:
Made in Heaven
Do not dry clean
You explain why it is
https://thephilosophyforum.com/discussion/comment/546426 (@ bottom)
Because of entropy.
It’s a lot easier for a person to produce energy from matter than it is to produce matter from energy. You would need a colossal amount of energy to produce even a gram of matter, let alone the amount you interact with on a day-to-day basis.
In this case your “mental energy-scape, mind or consciousness” would have to have the power of several million atomic bombs to manifest the physical world.
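For scale, the mass–energy relation gives the per-gram figure directly; the arithmetic below is standard physics, not anything specific to this thread:

```latex
E = mc^2 = (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^2
         = 9 \times 10^{13}\,\mathrm{J}
```

That is roughly 21 kilotons of TNT, on the order of one early atomic bomb, for a single gram of matter.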
It just seems more intuitive that the energy from glucose coursing in your blood fuels a highly energy-efficient biological machine which somehow generates awareness of pre-existing mass and external material.
It is a good question though in the sense that we should always work out why processes can’t be bi-directional because often many are.
Another clue that consciousness (at least as it is in the human mind) probably doesn’t cause the existence of the physical world: all of our sensory organs are clearly designed to capture and focus the scattered waves, signals and stimuli of the external world.
If consciousness manifested its reality surely the organs would have evolved with a transmitter type architecture rather than a receiver type architecture.
You're more certain that physical matter exists than of pretty much anything else? What do you base this high level of certainty on?
Also, regarding consciousness, do you believe that something that is functionally equivalent to the brain will be conscious, whatever the substrate? The example that is often given is setting up an enormous system of valves, water, pumps and pipes that is functionally equivalent to a working brain and then running it. Do you think such a system would produce consciousness? How about a system of electric switches opening and closing? Do you think that if you open and close the switches in some way, the system of switches will be conscious? If so, why? Also if so, why would that particular combination of switching actions give rise to a conscious moment of, say, stubbing your toe, while a different set of switching operations give rise to, say, the beauty of a sunset?
Sure they are. If science can't solve consciousness, then it's first going to appear as an "explanatory gap" until people realize science isn't equipped to solve it. I think that's where we're at at the moment and why we're seeing people like Christof Koch turn to panpsychism.
Also, you did not give an explanation for why consciousness has been such a tough nut to crack for so long. In an interview, Paul Davies called it the number one problem in science. I may be going out on a limb with idealism, but you are certainly going out on a limb denying there's a hard problem (which you do later on). Do you think that our brains just aren't equipped to handle the consciousness problem? But then that is ad hoc: we can detect gravitational waves now, but we're still in the dark about how brains produce consciousness? That shouldn't be. That's a problem for materialists.
But we do spend our time thinking about such stuff, and science prides itself on its explanatory power, and in this one area, there has been a definite lack of progress that is starting to become embarrassing, leading people like Giulio Tononi to speculate, without a shred of proof or way to verify, that consciousness is a result of information processing. That's pretty out there, but IIT is all the rage now.
How do brains produce consciousness? There is no answer, of course, which suggests there is something to wait for: the answer to how brains produce consciousness. If "there's nothing to wait for", why are so many people wasting their time trying to explain it? Your answer is not believable.
You're not reading what I said. The reason walking/running and legs isn't like consciousness emerging is because we have an explanation for walking/running and walking and running and legs all belong to the same ontological category. We don't have an explanation for consciousness (we don't even have an agreed upon definition of it), and mental states and physical states are ontologically different things.
I'm not making a god-of-the-gaps argument. I'm saying materialism will never explain consciousness because there's a category error going on: material things cannot, in principle, give rise to consciousness, just as consciousness cannot give rise to material things.
I suspect you're going to say that a collection of electric switches, if arranged some particular way and turned on and off some particular way, will produce consciousness. This goes to the heart of the matter. A conscious collection of switches is already an absurdity, and it entails an additional absurdity: that a collection of valves, pipes, and water, if functionally equivalent to those switches that produce a conscious moment, will also be conscious. I think the debate is over when you make that claim. I think it's an obvious absurdity, so my argument against materialism isn't "god-of-the-gaps", it's a reductio ad absurdum: physicalism leads to conscious systems of valves and pipes and water (among other things). To which I respond: absurd.
Most of my body is non-conscious. We can argue about chairs having or not having experience, but I don't see good reasons to think chairs have experience.
Quoting RogueAI
I am thinking of music. I don't know which parts of my brain are involved in experience. But I know that if a person lacks a brain, they won't be thinking much.
Quoting RogueAI
It does. Our failure to make sense of it is irrelevant. Most of us don't make sense of QM, it's inconceivable, but it happens.
Likewise, our failure to understand how matter produces experience is an example of our cognitive limitations, which must exist.
Quoting RogueAI
Here I agree. Not because I don't think mind is an outcome of brain, but because there's so much involved in experiencing the world and our state of knowledge is so rudimentary that we can't say, nor does it make sense to say, mind=brain.
It doesn't follow though, that mind is not physical.
You (probably) need a brain to arrange any of those.
Quoting RogueAI
No because your whole idea of a “mental stuff” that is different from physical stuff is rejected by physicalists in the first place.
This is becoming my mantra, but an objective physical universe is overwhelmingly the simplest and indeed currently only viable explanation for phenomena.
Quoting RogueAI
A brain doesn't have to be conscious, so I'd word it as: something functionally equivalent to my brain would have the capacity for consciousness. You're conveying incredulity but there's no way this is news to you.
Quoting RogueAI
Even whether science can fully explain it is irrelevant. Again, the advent of science is not a prerequisite for brains producing consciousness.
Quoting RogueAI
An of-the-gaps argument. Science hasn't explained it, therefore it's not scientifically explicable, therefore ...
Quoting RogueAI
There's been a huge amount of progress. Just saying there's been none doesn't make it true: this ain't religion, there's a paper trail. Your argument is reminding me of this lady:
https://youtu.be/YFjoEgYOgRo
Let's explore this because this is important. Take a look at this comic:
https://xkcd.com/505/
Do you believe it's possible to simulate a universe of conscious beings by moving a bunch of rocks around in a certain way? If not, where do you and that comic diverge?
:up:
Quoting Kenosha Kid
:up:
Quoting 180 Proof
:chin:
So, do you think it's possible to simulate consciousness?
This is a good answer. On our level, yes, it works. But on a creationist's level, one can say that yes, there had been X amount of energy available to produce the Y amount of matter that makes up our KNOWN universe. Just because it's a large number, a scary-shit large number, it's not impossible for it to exist.
My beef is a bit different; it's got nothing to do with the amount of energy available. It has to do with the fact that energy in and of itself can't exist. It is CARRIED by matter. Some say some wave forms carry it, too, but the waveforms may need matter to propagate. There is a raging battle of minds over that; the old idea of ether filling the empty space in the universe is thought of as a form of matter. It's wholly different from what Edgar Wells thought ether was, but the idea of some pervasive matter-like thing in otherwise empty space is gaining momentum.
Anyway. Matter is the only source of energy inasmuch as energy is sapped from it. Slow down a speeding cannonball, put a resistor in a current, put a dam in a flowing river. Without some sort of energy conversion, energy rests in matter's existence. So getting the energy to create matter, without losing matter, is hugely inefficient (creating matter from heat energy, or kinetic energy); but if you use energy conversion in the maximum way, that is, converting matter into energy and getting E = mc² out of it, that defeats the purpose, since you need to consume precisely as much matter to produce the energy as the matter you get out of it.
So the purest energy conversion to make matter presupposes consuming the same mass as you create, and every other source of energy consumes MORE mass than what gets created. The upshot: non-matter, energy, or consciousness can't create matter without using up at least as much matter as it creates.
Can you name them?
I am only asking because I've heard many preachers say, "I've met many such-and-such who said such-and-such." I think it's a rhetorical device and I am having a hard time believing it any more. If you met many materialists who said this or that, some names must have stuck in your mind.
I am fully aware that you can say, "Joe Montague, Harry Griffin, Michele Adieu, Robert Frankovic, Debbi Gaal, and Rosemary Thimble." I ask you to be honest. Did you actually meet MANY materialists who said what you claim they all said?
No, there was a group of computationalist materialists over at the International Skeptics Forum in its heyday who were completely invested in that comic (and Hofstadter's book Gödel, Escher, Bach). The line of thinking was pretty simple to follow: if consciousness can be simulated, then it can be produced through switching operations. Switching operations can occur in lots of different substrates, like a system of pipes, water, pumps, and valves (for some reason, the materialists over at the ISF preferred ropes and pulleys). Such a system, if it was functionally equivalent to a working conscious human brain, would also be conscious.
None of this so far is controversial. How they got to "you can simulate consciousness by moving rocks around" was exactly what the comic claims: you can build a Turing-complete computer by moving rocks around, if you have enough time and rocks. If you can compute by moving rocks around, and consciousness can be simulated on a computer, then consciousness can be simulated by moving rocks around. They even talked themselves into believing a bunch of people writing 1's and 0's on pieces of paper could also produce conscious moments.
I think they're actually correct in that chain of logic. If you're willing to believe in a conscious system of pipes and water, why not rocks being moved around in a certain way? Under materialism, consciousness shouldn't be substrate dependent, and if you can replicate the computational processes going on in some brain state(s) by moving enough rocks around, that collection of moving rocks, if it's computationally equivalent to brain state(s), should be conscious. The problem is that at the end of that chain of logic is an absurdity: the possibility that a universe of conscious beings is being simulated by someone moving rocks around.
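For what it's worth, the rocks-as-computer claim is just Turing-machine universality in disguise. Here is a minimal sketch of the idea; the machine, transition table, and names are my own illustration, not anything from the comic or the ISF thread. Each tape cell stands for "rock present" (1) or "rock absent" (0), and the "computer" is nothing but a rule for where to look next and whether to place or remove a rock.

```python
# A tiny Turing machine over a rock/no-rock tape. This particular
# machine increments a binary number (hypothetical example).

def run_tm(tape, rules, state="carry", max_steps=1000):
    """tape: list of 0/1 cells, least significant bit rightmost."""
    cells = dict(enumerate(tape))     # sparse tape: missing cells read as 0
    pos = len(tape) - 1               # start at the rightmost cell
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, 0)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write            # place or remove a "rock"
        pos += move
    lo, hi = min(cells), max(cells)
    return [cells.get(i, 0) for i in range(lo, hi + 1)]

# Binary increment: flip trailing 1s to 0s while carrying leftward,
# then write the first 0 (or blank cell) as 1 and halt.
rules = {
    ("carry", 1): (0, -1, "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", 0): (1,  0, "halt"),    # 0 + carry -> 1, done
}

print(run_tm([1, 0, 1, 1], rules))    # 1011 + 1 -> [1, 1, 0, 0]
```

Nothing in the rules cares whether the cells are transistors, water valves, or rocks in a desert grid, which is exactly the substrate-independence point at issue.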
I can name you message board names, but they won't mean anything to you. There was a substantial group of people who did buy in to what that comic was saying. I would be surprised if there weren't some materialists here who would agree with it.
I met many materialists who believe that it was possible to simulate consciousness by moving rocks around, yes.
Now, let me ask you, can consciousness be simulated on a computer?
He won't tell you. Even though it just involves writing one sentence. If he does write a sentence, it will have bold, italic, underline, quotation marks, and some kind of smiley.
Oh, and he'll quote himself.
But not before throwing a few invectives into it, just in case things don't go quite the way he wants them to, and then taking cover behind his supposedly intimidating selfie.
If I'm going to be a bully and a twat, I'd rather do it alone.
Youuuu... got it right, my friend. That IS correct.
Just what I said. Something must have upset him. I wonder what it was:
Quoting 180 Proof
Huge difference. In the pipes and water, much like in the human brain, there is a program, and the flow of water from the outside (data for the computer; outside stimuli for the brain) causes the program to react differently. The program is not changing; its response is changing with the changing data. Human reaction is different, too, whether the tongue senses sweetness or a pin prick.
In the case of stones, there is no program. The only change that occurs is due to the thought processes of the man who puts down the stones. The stones have no communicative power beyond what the man puts down. In a program, and in the brain, there is communicative power embedded into the program and into the brain.
This experiment in the desert, stuck there for eternity or for a long time, would only produce a computer with stones if there were (1) a moving mechanism to move the stones, (2) switches that responded to conditions, (3) data, and (4) a way of making sure that the data affected the behaviour of some of the switches.
In the comic there is no such thing. It is just stones laid down in a two-dimensional grid. That will produce no consciousness. But a machine that processes rocks, as described in the italics, does have a capacity to develop consciousness.
1. Why should we assume that consciousness can arise from switches? Why is that not a category error?
2. How can consciousness arise from switches? What is the explanation for how the switches become conscious? How many switches are needed? In what order? Why is the act of switching important? Why does one set of switching operations produce experience x, while a different set of switching operations produces experience y, while a different set produces no experience at all? Is electricity required?
3. How could you verify whether such a system is in fact conscious?
4. What other collections of switches are conscious? Phones? My desktop computer?
5. What other physical processes besides switching operations can produce consciousness?
It’s an interesting question, and I haven’t read the rest of the thread yet, but I think there’s a misunderstanding here. Both your descriptions here assume both consciousness and a working brain exists. Producing a feeling is not the same as producing consciousness, and I’m not sure how you would ‘arrange’ feelings or experiences as you’ve described without a working brain.
The ‘feeling of stubbing your toe’ is a complex interrelation of ideas, including notions of ‘self’, ‘body’, ‘toe’, ‘movement’ and ‘impact’ as well as ‘unpleasant’, ‘sharp’ and ‘pain’. Potentially, it can all be rendered as a pattern of electric current through matter without understanding any of these ideas - provided that matter has sufficient experience to recognise and describe the pattern as ‘the feeling of stubbing your toe’. Otherwise how would you confirm this?
Conversely, one can theoretically arrange all of the above ideas in a particular way to construct a mental state that matches this pattern of electric current - without anyone ever actually stubbing their toe.
Those are safe assumptions. Do you doubt brains and/or consciousness exists?
Assuming a working brain is required for consciousness is just that: an assumption. Idealists always concede that point. They shouldn't. I will concede that it appears the brain is a necessary condition for consciousness. Are we justified in assuming appearances are as they seem? Sometimes. Sometimes not. The materialist cannot just assume brains exist and are required for consciousness. They have to argue that what seems to exist external to our minds actually does exist external to our minds. Since we can't leave our minds and verify whether anything external to our minds exists, there's no way to prove materialism. It is simply taken on faith that external stuff exists. It's no different than refuting Berkeley by kicking a rock.
Anyway, even if brains are required for consciousness, there is still the issue of brain states producing new additional experiences, but experiences incapable of producing additional brain states. What do I mean by experiences? Simple: listen to music while you stub your toe looking at a sunset. Nothing additional is created by that arrangement of experiences. Nothing additional is ever added to the universe by mental states. What do I mean by additional? At time t, there are x number of experiences that have ever happened. At t+10 min, there will be many more additional experiences. That doesn't happen the other way around. Mental states never result in the addition of anything. Nothing physical is added to the universe from mental states. I think a materialist has to argue why that dichotomy exists. I think bringing up entropy is going to lead to substance dualism.
It sounds like you're talking about consciousness from neurons/switches. See my post right above yours for my concerns on that.
Do you think mental states can exist on their own, without any substrate? That's how I read what you're saying.
But fundamentally, yeah, I think you can simulate a universe with conscious beings interacting.
I never said you could build a functioning brain out of anything. Your question was regarding whether something with the same function as a brain would be conscious; my answer is yes. It doesn't follow that you can build a functioning brain out of rocks, liquorice or thin air: that is a purely technological problem. But _if_ you built something with the same functioning as a conscious brain out of rocks, then yes, that system would by definition be conscious.
I disagree. Photons are massless; nothing travelling at the speed of light can possess mass. And they are most definitely energy. The wavelength doesn’t carry the energy; it determines the energy per photon and its ability to penetrate matter. For example, the light you can see from the sun can’t generally harm you, but the shorter-wavelength UV (higher frequency) light can give you a sunburn, as it is of higher energy.
However, I do agree with you if what you meant is that the measurement of energy cannot exist without matter. This is definitely true. You can only measure/observe/be aware of light, heat, motion etc. by interacting with it. If a light particle never hits a solid surface (our eye), we cannot see it.
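For reference, the standard relation behind this exchange (not in dispute between the posters): a photon's energy is fixed by its frequency, or equivalently inversely by its wavelength,

```latex
E = h f = \frac{h c}{\lambda}
```

where $h$ is Planck's constant, $c$ the speed of light, $f$ the frequency and $\lambda$ the wavelength. Shorter wavelength means higher energy per photon, which is why UV can burn skin while visible light generally cannot.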
Because for all we know, the brain is the house of consciousness; and it is nothing but a processor with processes and switches. Therefore another processor with switches and processes that emulate it may, or would necessarily, become conscious. That's the theory. No empirical evidence yet, but the theory is solid.
By the Turing test.
However, that's only verification; not proof. You can't prove that anyone else aside from yourself has a consciousness, either.
No; not phones. Not computers. They lack certain elements. But other switching mechanisms exist that supposedly developed consciousness. The fox. The elephant. The bee. The mollusk.
Whatever processes our brains employ.
On the surface I agree with you. The newest thought, however, deals with this; I described it in my same post, so I don't know how you were able to miss that part. It has to do with the ether. It is not the same ether as they thought the universe was filled with up to and including the middle of the nineteenth century. They figure there is a material element to space, which enables light to travel as a wave form, and enables the bending of space.
If and only if that theory is true, light can't transmit energy without the employment of matter.
I don't know if that theory is true or not. There is a raging battle going on in physics circles regarding that. Ether has a bad connotation for its past use up to two hundred years ago; but it is very possible for it to exist.
I think it’s important here to remember that energy and mass really are equivalent. They are the same thing, and what distinguishes them is velocity and relativity (time), in accordance with the equation provided by the wise and noble Einstein.
It would be a true masterpiece to understand exactly how time, space, mass and energy all depend on one another, how they each come about, which came first, or perhaps whether they are all faces of the same fundamental substance.
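The Einstein relation alluded to here, in its general form, is

```latex
E^2 = (pc)^2 + (m c^2)^2
```

which reduces to $E = mc^2$ for a body at rest ($p = 0$) and to $E = pc$ for a massless photon. This is also how photons can be pure energy without mass: they carry momentum.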
I figured we would reach this point. I think this is where the materialist position collapses into complete absurdity. I know that's a personal opinion, but I have some questions that you won't be able to answer, and that kind of illustrate the absurdity of it all.
1. Why should we assume that consciousness can arise from rocks? Rocks are nothing like neurons, nothing like mental states, so why is that not an immediate category error?
2. How can consciousness arise from rocks? What is the explanation for how the rocks become conscious? How many rocks are needed? What do you do with the rocks to make the rocks conscious (see what I mean about the absurdity of this)? Why is the act of whatever you do with the rocks important? Why does one set of rock interactions produce experience x, while a different set of rock interactions produces experience y, while a different set produces no experience at all?
3. How could you verify whether such a system is in fact conscious?
4. What other physical processes besides rock interactions can produce consciousness?
(2) lays bare the absurdity of it all, but I think (3) is catastrophic. It's impossible to verify that anything other than you is conscious. That's just a brute fact about our epistemic position in the world. The existence of other consciousnesses is assumed but can never be proven. You can't leave your mind and check for the existence of other minds. That leads to the following problem for materialism: suppose you've got this awesome theory of consciousness and it predicts that that object over there is conscious. How do you prove it? You can't. No physicalist theory of consciousness will ever be verified. It's impossible in principle. No matter how clever the theory is, you can never get inside the object it says is conscious or isn't to examine its internal mental states or lack thereof. The physicalist project to understand consciousness is doomed to failure.
Note that this is not a god-of-the-gaps argument. There are a lot of things that cause physicalism to fail with regard to consciousness and none of them have anything to do with god:
1. The absurdity of consciousness coming from rocks
2. The total lack of explanation for how consciousness can possibly arise from doing stuff with rocks.
3. A category error in thinking that non-mental stuff can produce mental events
4. The impossibility of verification of any physicalist theory of consciousness
:100: :up:
RogueAI has to misread this, KK, or surgically remove his cranium from his sphincter which, I suspect, is much too difficult for him / her to do.
Anyway, suppose you built a machine that was functionally equivalent to a working brain. How would you test whether it's conscious or not?
I think the functionalist has to define 'consciousness' in such a way that a function can constitute it. For example, X is conscious if and only if X maps the world and can predict events. Brains can do that, therefore brains are conscious. The trouble is that's not the definition of consciousness that many philosophers are talking about (including me, and I think you). The problem is we can't agree on definitions before we start. This impasse has arisen dozens and dozens of times on this forum and the last. I don't think functionalism is really a theory of consciousness; it's a definition. Most of the time, anyway. Sometimes it's a theory, I think, depending on how it's formulated. With the walking and legs analogy, it's a definition. Walking just is how that action is defined. And that's not interesting.
EDIT: That wasn't very clear. I'll try and write a better one in a bit.
I think I get what you're trying to say here. Functionalism was what I figured Kenosha Kid would use to answer the questions I posed about "rock consciousness". If I put on my materialist hat, let me see if I can answer some of them:
1. Why should we assume that consciousness can arise from rocks?
Because if you make a functionally equivalent working brain, it will be conscious, and we can infer this from our knowledge of consciousness and brains.
Rocks are nothing like neurons, nothing like mental states, so why is that not an immediate category error?
I don't think there's a good answer the materialist can give for this. I think the best the materialist can say is, "yeah, but the system is functionally equivalent to a working brain. Who cares what it's made of?" Except consciousness could very well be substrate-dependent. It might only come about through the interactions of biological matter, for some reason. Since it's impossible to verify whether anything other than a working brain is conscious, the issue of substrates and consciousness will continue to bedevil physicalists, particularly as AI starts doing stuff like passing Turing Tests.
2. How can consciousness arise from rocks?
Because they're in an arrangement and interacting in a way that is functionally equivalent to a working brain.
What is the explanation for how the rocks become conscious? How many rocks are needed? What do you do with the rocks to make the rocks conscious (see what I mean about the absurdity of this)? Why is the act of whatever you do with the rocks important? Why does one set of rock interactions produce experience x, while a different set of rock interactions produces experience y, while a different set produces no experience at all?
This is the Hard Problem, and the materialist can't just give a functionalism argument. An explanation has to be given for why matter arrangement X,Y,Z gives rise to conscious experience. The explanation so far is that if you take a bunch of matter and arrange it in some fiendishly complex way, and have it share electrons (or interact in some way), voila! Consciousness! Needless to say, this explanation is lacking, hence the Hard Problem and Explanatory Gap.
3. How could you verify whether such a system is in fact conscious?
I think this is catastrophic to the physicalist project of explaining consciousness. Functionalism won't help here. Functionalism is the problem! Suppose we make a metal brain that is functionally equivalent to a working organic brain. If functionalism is right, it should be conscious. Time to test it! So, how do we test whether it's conscious or not? Suppose we eventually build something that can pass a billion Turing Tests simultaneously while composing an opera for the ages. Is it conscious? Just as we'll never know if anything outside of our minds is conscious, we'll never know if anything we build is conscious. This isn't a problem when we all look like each other. We just assume we're all conscious. But a machine? Are we just going to assume advanced AI is conscious without any way to verify it? I see problems with that.
4. What other physical processes besides rock interactions can produce consciousness?
I don't think functionalism helps here. I think the problem for the materialist here is that when they claim that consciousness is substrate-independent, they're going to end up at panpsychism (which, along with computation, is all the rage in consciousness studies these days). Because in a physical universe, there's nothing unique about what the brain does. If a conscious moment is neurons X,Y,Z doing A,B,C, you can replace the neurons with anything. And so long as you don't know what it is that the neurons are doing that actually produces consciousness, the materialist is going to be stuck saying that matter arrangement A,B,C (e.g., a bunch of rocks) doing X,Y,Z is conscious if it's functionally equivalent to a consciousness-producing brain-state. OK, so any arrangement of matter that is doing X,Y,Z is conscious? If a bunch of rocks can be conscious, what about a rock slide? Is there a chance the rocks in a rockslide can do X,Y,Z accidentally and produce a moment of consciousness? What about a rain storm? There's a lot of matter-interaction going on there. Are there conscious moments in storms? Meteor swarms? Are microbes conscious (Christof Koch thinks they are)?
As for definitions, I think that's a rabbit-hole we don't need to go down. I think we can just use a folk definition of consciousness to lay bare the problems and absurdities of materialism as it pertains to consciousness.
I get the impression from later chat that this clicked: We should NOT assume that consciousness can arise from rocks.
Quoting RogueAI
I don't think so. The hard problem allows for a bunch of rocks to be conscious, it just doesn't allow a complete third person description of that consciousness since it will not contain "what it is like to be a conscious bunch of rocks". And when I say "doesn't allow", I mean that Chalmers won't hear of it on grounds of taste.
Quoting RogueAI
This has nothing to do with non-organic consciousness as far as I can see. This problem already exists for discerning if an animal or even a person is conscious.
I don't predict this will be the difficult part given a more comprehensive model of consciousness. The issues here (in my experience) relate primarily to language. The concept of "consciousness" is vague and therefore arguable. For instance, some people don't like the idea of observing anything non-human as conscious, and that vagueness gives sufficient wiggle room to be able to say, "but that's not quite consciousness" about anything. I think this is also partly why people like Chalmers retreat to the first person in these arguments. It's possible to claim that something is lost when you transform to the third person view, as long as that something is suitably wishy-washy.
But if you really want to test for consciousness, you have to define in precise terms what consciousness is, not what it isn't.
I'm always interested in other people's ideas of what consciousness is.
Quoting RogueAI
N/A
"The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious."
https://iep.utm.edu/hard-con/
I think that definition is fine, and I think that if you're going to argue that there's a possible world where consciousness arises from rocks, you're going to have to explain why that physical state is conscious rather than non-conscious. That's going to involve answering the questions I already posed, that you did not answer: How can consciousness arise from rocks? What is the explanation for how the rocks become conscious? How many rocks are needed? What do you do with the rocks to make the rocks conscious (see what I mean about the absurdity of this)? Why is the act of whatever you do with the rocks important? Why does one set of rock interactions produce experience x, while a different set of rock interactions produces experience y, while a different set produces no experience at all?
Do you have answers to any of these?
Verifying consciousness has nothing to do with whether computers (which are non-organic) are conscious??? Of course it does. If scientists come up with a theory of consciousness and claim that that non-organic thing over there (computer) is conscious, they need a way to verify it. The problem of verifying consciousness is a problem for BOTH non-organic AND organic systems.
You don't need a precise definition of consciousness to verify whether something is conscious. You can verify you are conscious, correct? You're not in doubt about that, I assume. So, verification of consciousness using just folk terminology is possible on a personal level, but somehow the language fouls everything up when we try and verify whether other things are conscious? That's ad hoc. It's not a language problem, it's a verification problem- you can't get outside your own consciousness to verify whether anything external to you is conscious or not. As I said before, this isn't a problem at the moment because we all look like each other. It's going to become a hell of a problem when machines become as smart as us.
Do you need a precise definition of water to tell whether a glass has any water in it? Of course not. If a scientist says, "I have a theory of consciousness, and I say that that computer (or pile of rocks) is conscious!" we all know what he means. The next question for the scientist is: "How do you know it's conscious?" If he replies, "well, what do you mean by consciousness, exactly?", that's a copout. So how is a physicalist going to verify whether anything is conscious??? They can't. Positing unverifiable theories isn't science.
Physical processes besides rocks moving around are not applicable when it comes to producing consciousness? I can't be reading that right. What do you mean here? Do you think computers can be conscious? Because that would involve consciousness coming from "physical processes besides rock interactions", which would make those physical processes very applicable.
Also: you believe that consciousness is substrate-independent. What evidence do you have for that?
Yes. Now I suspect nothing clicked at all :cry:
Quoting RogueAI
Suspicion confirmed. I'm not claiming there's a possible world where consciousness can arise from rocks. I really need you to pay attention to the distinction between the two following sentences:
1. If a bunch of rocks could reproduce the function of my brain, that bunch of rocks could be conscious.
2. Consciousness can arise from rocks.
They are not the same and I have not claimed the second one.
Quoting RogueAI
That's correct. Are octopuses conscious? Does that question involve whether computers are conscious or not? No. So the question is not about computers (although a perfectly good example).
Quoting RogueAI
My answer regarded scientific descriptions of consciousness, which do require precise definitions. In order for a scientist to discover scientifically what water is, yes, she needs a definition of water. If she doesn't know what water is, she can't tell you what's in the glass. Even if she knows what water looks like, she needs to be able to differentiate it from alcohol, or any other transparent liquid. As it happens, you don't need to know _much_ about water to be able to distinguish it perfectly well from not-water (its appearance, fluidity, taste, lack of smell). This is the extent to which the definition of consciousness also needs to be precise: to distinguish it from unconscious things.
Quoting RogueAI
I mean your question was not applicable since it was based on a misunderstanding of my claim.
:starstruck: lol
If there's no possible world where consciousness arises from rocks, then it is impossible for consciousness to arise from rocks. That is to say, no matter what you do with rocks, no matter if the rock-based system is functionally identical to a working conscious brain, if you believe there's no possible world where consciousness can arise from rocks, you CANNOT get consciousness from rocks, no matter what.
I happen to agree. Is that your claim, though? Because now I'm going to ask you: why isn't a system of rocks that's functionally equivalent to a working brain conscious? What's stopping it from becoming conscious?
This is tough for the materialist, because on the one hand, if you say, "consciousness from rocks is possible", you open yourself up to a reductio ad absurdum and a bunch of questions about how on Earth you can get consciousness from rocks. But if you say that consciousness from rocks is impossible (as you are now seeming to do), you're making a claim that some substrates won't produce consciousness. So, which substrates besides rocks are off limits and how do you know this? But you have to make a claim one way or the other: either consciousness from rocks is possible or impossible. Which is it?
Is it possible for computers to be conscious? If yes, how would you verify whether a specific computer is conscious or not? If computer consciousness is impossible, why is it impossible?
You're arguing my point: you don't need to know _much_ about consciousness to be able to distinguish it perfectly well from non-consciousness. We don't need a rigorous definition of consciousness to determine whether that computer that just passed the Turing Test is conscious or not. We don't need to "know much" about consciousness to pose that question. Our basic understanding of consciousness is sufficient to make sense of the question: is that computer conscious or not? Just like we don't need to know much about water to measure how much is in the glass.
Agreed?
Absolutely not. We have no common "basic understanding" of consciousness. On this site alone you'll find a new one for every thread on the subject.
Are you conscious? Is your significant other(s) conscious? To not draw this out, I'll answer for you: yes, and yes.
Now, did we need a precise definition of consciousness to answer those questions? No. Did those questions and answers make sense to you and me? Yes. I know what you mean when you say you're conscious and vice-versa.
We all have a basic understanding of consciousness. Claiming otherwise is absurd. The materialist "game" is often to retreat to language difficulties when the going gets tough (you'll notice I never once talked about qualia). You're doing that here.
There are also some outstanding questions you haven't answered:
- Is it possible to get consciousness from rocks, yes/no?
- Is it possible to simulate consciousness, yes/no?
- Is consciousness substrate independent, yes/no?
This is precisely what I was talking about before. That sort of wishy-washy 'well, I know what I mean' way of communicating is no good for answering questions about consciousness in a scientific way. It's not useful for me to note that I am conscious when trying to determine if a dolphin, an octopus or a guppy are conscious. What is required is not an obvious, possibly extreme example of a conscious being like a human but a minimal set of requirements for something to be said to be conscious.
Quoting RogueAI
Generally I think we're all pretty ignorant about it, neuroscientists like Isaac aside.
Quoting RogueAI
I have answered this here:
Quoting Kenosha Kid
Quoting RogueAI
In principle, or with present technology? Probably, and definitely not, respectively.
Quoting RogueAI
Already answered this too. Yes. Even if the brain turns out to be the only natural or technological means of having consciousness, the answer would still be yes.
Do you have a definition in mind when discussing consciousness? When you discuss consciousness, what is it you are discussing?
The part you quoted wasn't about lay or philosophical discussion, but scientific testing. As I am not a neurologist, how I conceive of consciousness isn't pertinent. Also, establishing the need for a scientific definition of consciousness is not the same as defining it. One can recognise that a scientific definition of consciousness must discern between conscious and unconscious things without having such a definition.
But the consciousness discussed by neurologists afaik is along the lines of: cognitive awareness of one's environment and of one's own awareness of that environment. In more detail, (human, at least) consciousness is a process comprised of multiple components such as awareness, alertness, motivation, perception and memory that together give an integrated picture of one's environment and how one relates to it.
And these are presumably measurable in some way? If so, they would need to be functionally defined. You input something into the person, look at the output (how the person behaves, a reading from some kind of direct brain scan), and then the degree of awareness of the environment is observed. Is that the idea?
Is this sense of 'consciousness' a collective term for a number of related cognitive faculties? Each of which could be given functional definitions and associated with functional tests to measure their degree of presence? A bit like (Banno's favourite) the Glasgow Coma Scale? Is that the idea of consciousness as studied by scientists?
What do you think of the following rough definition:
"Consciousness is subjective experience — ‘what it is like’, for example, to perceive a scene, to endure pain, to entertain a thought or to reflect on the experience itself"
Would that do as a starting point for a scientific investigation?
That's an example. Another would be brain imaging. Or both in conjunction. A good example might be experiments that detect pre-cognitive decision-making. The lapse between initiating a response and being cognitively aware of it, if accurate (it's disputed) would fit with the idea that cognition is validating other mental outputs.
Quoting bert1
Or rather an emergent function of interacting parts. For instance, you might want to put your finger in the pretty flame, but last time you did that it really hurt. This is memory working in conjunction with motivation, not just a bundle of memory, motivation and other processes operating in parallel.
Quoting bert1
No, because it doesn't say anything. It relies on the reader having their understanding of what it is like to perceive a scene, endure pain, etc. and that does the heavy lifting without examination.
Scientists are reductionists, and reductionists will look for the fundamental elements of a thing and how they produce that thing when so arranged. Philosophers can be reductionists or non-reductionists. I expect reductionist philosophers and scientists will probably speak the same language, but non-reductionists tend to take such concepts as irreducible, so speak a different language.
Things like love and consciousness are multifaceted collections of things. Two people could be in agreement with a given statement about love, but one person was thinking about attachment and the other romance, or one romance and the other sex. Consciousness is similar. Treating it as a simple thing is apt to produce ambiguity and confusion.
The Nagel/Chalmers type of approach does this. It treats What it's like as a simple thing, separable from having a bat's body, including its brain, a bat's needs, a bat's habitat, a bat's social structure, a bat's senses, a bat's memories, all the tiny things that individually and in conjunction produce what it's like to be a bat. And, worse, it tells you that because you don't have a bat's body, a bat's brain, a bat's habitat, a bat's social structure, a bat's senses, a bat's memories, you cannot imagine what it's like to be a bat (true), and that this is somehow proof of an irreducible quintessence of batness that will be left over if and when you have as complete a scientific account of the third person view of a bat as is possible. It doesn't deal with the precise elements of what it is talking about at all.
Invoking what it is like to be a bat is really a rhetorical device or thought-experiment to drive home the understanding of the fundamental nature of the way in which a creature is embodied and how it interacts with its environment.
I find the phrase 'what it is like to be' an awkward expression. I think what it means to articulate is, simply, 'being'. Humans are after all designated 'beings', as are other sentient creatures, including bats. And beings are not objects, in that they're conscious agents. This is precisely what is denied by reductionism, as reductionism has no category which corresponds with the notion of 'being'. That is why reductionists (such as Dennett) are obliged to deny that beings are in any basic sense different to objects.
It's possible to have a nearly complete scientific account of an object, although since the discovery of quantum mechanics, even that is now questionable. But you can't provide a purely objective account of a subjective state of being. That's really all there is to it.
I have always summed it up this way to myself: is it more unlikely that matter gives rise to consciousness, or that consciousness gives rise to matter?
I'm not asking questions about consciousness in a scientific way. Are you, Kenosha Kid, conscious??? That's not talking about consciousness in a "scientific way". We're at a pretty basic level when I'm asking you that.
Now, your claim is that the sophistication of the language has to increase when it comes to determining whether something other than ourselves is conscious. Why? If I can ask "Are you, Kenosha Kid, conscious???" in a meaningful way and get a meaningful answer (which I can), without defining consciousness in a scientific way, why can't I ask a scientist, "Hey, is that machine over there conscious? You say it is. Can it feel pain? Can it be happy? Sad? What is it like to be that machine?" The scientist has to answer those questions. Those aren't questions that are being asked "in a scientific way". Those are ground level questions that a small child can understand.
So, my point is that the regular folk definitions of consciousness and pain and happiness and "what is it like to be that?" that we all use are perfectly appropriate to inquire meaningfully about consciousness. If that's the case, and some scientist is claiming some machine is conscious (which will eventually happen), someone is going to say, "prove it". What's the scientist going to do in that case? Retreat behind a wall of jargon? Claim he can't prove it because there's a language problem? No, the scientist can't prove a computer is conscious because it's impossible to verify the existence of other consciousnesses.
Do you dispute the bolded? If so, explain how you can verify that other minds and/or consciousnesses exist. If not, then concede the point that any physicalist theory of consciousness will be unverifiable in principle.
Before we go on to the possibility of consciousness coming from rocks, I want to close off this point: it's impossible to verify the existence of other consciousnesses. Agreed or not?
Me too.
Quoting Wayfarer
Thanks for a third and fourth example... Love, consciousness, being, agency. Reductionism does not deny that we're conscious agents, but yes it does say we're made of objects, and therefore are objects. There's no contradiction between being an object and being a conscious agent: we're just objects with higher order properties of consciousness and agency.
Quoting Wayfarer
This nicely encapsulates the hard problem. It isn't a problem, rather an insistence.
Nope. Not true. There's a rhetorical description, 'nothing but-ism' or 'nothing but-ery', which is precisely that.
There's nothing in the scientific description of objects - which is physics - in terms of which affective states and so on, can even be described.
That’s because consciousness is only ever known in the first person.
[quote=Erwin Schrodinger, Nature and the Greeks] The scientific world-picture vouchsafes a very complete understanding of all that happens — it makes it just a little too understandable. It allows you to imagine the total display as that of a mechanical clockwork which, for all that science knows, could go on just the same as it does, without there being consciousness, will, endeavor, pain and delight and responsibility connected with it — though they actually are. And the reason for this disconcerting situation is just this: that for the purpose of constructing the picture of the external world, we have used the greatly simplifying device of cutting our own personality out, removing it; hence it is gone, it has evaporated, it is ostensibly not needed.[/quote]
A child, or adult, can _think_ they understand. Asking a conscious person if they are conscious is not comparable to asking a scientist if a machine is conscious. All the person has to do to achieve the former is know how to use the word 'conscious' in a sentence. To achieve the latter, a scientist has to know what consciousness really is, what parts and processes constitute the minimal agreed criteria for consciousness.
Quoting RogueAI
I don't think there's any insurmountable barrier to determining whether another human is conscious or not, because we know a lot about what to look for. The difficulty is more likely in knowing the same about other animals whose brains we understand less well. Non-animal systems might be easier, since we're free to design the hardware, or bypass it in software with the knowledge that the kind of conscious thing you're building might not be naturally or technologically realisable.
Consider an ecological microsystem, small enough to be simulated. Place within this microsystem a sprite and assign that sprite needs: food, water, procreation, survival, with observable meters. Place within the microsystem elements that will satisfy or thwart those needs: fruits and smaller food sprites, a lake, predators, other sprites of the same class as our subject. Define a set of input channels to the sprite and couple those to the state of the environment in a fixed and incomplete way. This defines the surface of the sprite. Give the sprite a set of subroutines that a) build a model of the environment from those inputs, b) perform things like edge detection to discern other objects in that environment, c) run some retrainable pattern-recognition models for enriching that view with higher order data. (As an example, you could have three small red things to eat that are nutritious and one that makes the sprite ill. This pattern-recognition package would allow the sprite to learn first that small red things are nutritious, and second that one particular small red thing is harmful.)
An important part of that package would be reflexes. For instance, if a predator sprite is detected, run home or, if not possible, run directly away from the predator. Thus we need some interface between this processor and the surface of the sprite, a sort of will. However we'll also add a second package of routines that also take input from the first. These will be algorithmic, for enriching the environmental model with metadata that can't be yielded by pattern-recognition, and for verifying the metadata from pattern-recognition. This will also feed back to the first processor so that the sprite can act on algorithmic outputs when firing the will.
Implicit in the above is somewhere to store outputs of models, pattern-recognition and algorithms so that the sprite doesn't have to work everything out from scratch every time. These will also be inputs to both processors.
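The architecture described above can be sketched in code. This is a minimal, hypothetical sketch, not a working simulation: all the class and method names (Sprite, reflex, evaluate, and so on) are my own stand-ins for the surface, reflex package, pattern-recognition package and stored-output memory described in the posts.

```python
class Sprite:
    """Toy agent: fixed sensory surface, hard-wired reflexes,
    and a trainable memory standing in for pattern recognition."""

    def __init__(self):
        # Observable need meters
        self.needs = {"food": 1.0, "water": 1.0}
        # Stored outputs: object label -> learned value, so the sprite
        # doesn't work everything out from scratch every time
        self.memory = {}

    def sense(self, environment):
        # Fixed, incomplete coupling to the environment: the sprite's "surface"
        return {k: v for k, v in environment.items()
                if k in ("objects", "predator_near")}

    def reflex(self, percept):
        # Hard-wired reflex package: flee if a predator is detected
        if percept.get("predator_near"):
            return "flee"
        return None

    def evaluate(self, label):
        # Pattern-recognition stand-in: recall the learned value of an object
        return self.memory.get(label, 0.0)

    def learn(self, label, outcome):
        # Retrain the stored association toward the observed outcome
        old = self.memory.get(label, 0.0)
        self.memory[label] = 0.5 * old + 0.5 * outcome

    def act(self, environment):
        percept = self.sense(environment)
        action = self.reflex(percept)
        if action:
            return action
        # No reflex fired: pick the remembered-best object, consume it, learn
        objects = percept.get("objects", [])
        if objects:
            best = max(objects, key=lambda o: self.evaluate(o["label"]))
            self.needs["food"] = min(1.0, self.needs["food"] + best["nutrition"])
            self.learn(best["label"], best["nutrition"])
            return f"eat {best['label']}"
        return "wander"
```

The interesting design question the posts raise sits in `act`: the reflex path bypasses the learned model entirely, while the learned path feeds back into memory, which is roughly the two-package arrangement described above.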
Obviously there's a lot of gaps to fill in there, but you get the gist. Would the sprite be conscious? Done right, if we had the tech to do it, with those gaps sensibly filled in, I'd argue yes.
How could we tell that it's conscious? Compare its behaviour to other sprites that are missing one or more of the key features outlined above. Additionally, make some of those meters inferable from the outside. Perhaps, if we gave the sprite capacity for language such that it could teach its daughter which small red thing not to eat, then removed all of its predators and placed food outside of its den every morning, it might even invent philosophy, science and art! :)
On a purely linguistic level you ought to be able to debunk your own argument here.
Quoting Wayfarer
It is if the scientist has the same definition/concept as the non-scientist. This definition:
"Consciousness is subjective experience — ‘what it is like’, for example, to perceive a scene, to endure pain, to entertain a thought or to reflect on the experience itself"
...is given at the very start of the neuroscientist Giulio Tononi's paper on the IIT. Some scientists do start with this concept. And it is those thinkers who I think do come up with a theory of consciousness (even if it is false), and these theories are interesting to me as genuine candidates for a true theory of consciousness. However some thinkers take 'consciousness' to mean a set of observable functions or behaviours etc. That's fine if it's useful, say for a paramedic. But I don't take these as theories of consciousness as I understand it. They are definitions by fiat, and philosophically uninteresting.
EDIT: An example of the latter is H Pattee in his Cell Phenomenology: The First Phenomenon, in which he says this:
(my bold)
To be clear the article he writes is extremely interesting in many other ways. I just don't think it touches the hard problem.
If you like. "On a purely linguistic level, you were able to debunk your own argument." :P
Quoting Wayfarer
I don't think so. Object is a class. Conscious being is a class. Even if you object to the specific claim that the latter is a subclass of the former, there'll always be some superclass you can define that includes conscious beings and oranges.
I suppose it's pointless to try and explain what I think is the matter with this, so I'll pass.
Although I think Bert1 has done a good job of it:
Quoting bert1
Are you fibberfab? Is your significant other fibberfab? How can you answer those questions without knowing what fibberfab is or is not?
You can say that you are conscious, but what makes you conscious? How can you tell if others are conscious when you can't observe their consciousness, only their actions? Are actions conscious? If not then what is conscious and how can you tell?
Quoting Kenosha Kid
Maybe. Maybe not. Either way, the scientific definition can't contradict other definitions, or else scientists and laymen would be talking about different things.
We can talk about water as it appears from consciousness as a clear liquid, or as a combination of hydrogen and oxygen molecules as it appears from a view from nowhere. We're talking about the same thing but from different perspectives, but not contradicting ones.
Can we do the same thing with consciousness? Can you talk about how consciousness appears from consciousness and as it appears from a view from nowhere? Your consciousness appears as a physical brain that drives various actions from my conscious perspective, which is not how my consciousness appears to me, so how do I know if you or I are actually conscious or not? What is consciousness like from a view from nowhere?
No, he's RogueAI. I'm bert1 and you are Harry Hindu.
Indeed. Assuming we actually want to discuss the same thing, of course.
These are all excellent questions to begin an enquiry into consciousness. :up:
Heaven forbid!
Quoting bert1
I agree with the sentiment (talking about the same thing from different perspectives), but my point was about the precision necessary to discern a conscious thing from a non-conscious thing. As I said above, "a clear liquid" does not discern water from vodka, and might leave me in the pitiful situation of having accidentally drunk water.
But there is a logical difficulty here in talking about a first person perspective from a third person perspective. Describing subjectivity in objective terms seems like a nonsense to me.
Just as describing objective reality from my point of view is also a nonsense.
As @Wayfarer has correctly said (imho), or quoted someone as saying, science typically proceeds by eliminating the subjective as much as possible in order to arrive at an unbiased, objective, point-of-view invariant view of the world. And that's great until the 'object' of enquiry is subjectivity itself. How are scientists supposed to proceed here? By eliminating subjectivity from the enquiry into subjectivity? Or do we have to do something other than science?
Vision isn't your only sense. You have the power to smell and taste. Using all of your senses it is simple to differentiate water from vodka.
So what is consciousness like when not being observed by any sensory apparatus, i.e. when it's not being measured?
Every field of inquiry has its limitations. Philosophy and logic can take over from science where science no longer has an answer. But we need to remember that some questions may remain unanswered.
If you believe there is a difference between a neuron firing and the owner having an experience, yes, there will be a logical difficulty. Personally I think that the logical difficulty lies in justifying that belief.
Quoting bert1
I think that's a very outdated picture of science.
Quoting bert1
I don't think a person would work through that when answering the question "Are you conscious?" It is sufficient to know, on a linguistic level, that I am necessarily conscious, in much the same way that I am a homo sapiens.
But there's an implicit point you're making that I should address. The above quote is clearly not a scientific definition of consciousness, i.e. one could not devise a set of experiments from it. As I said earlier, consciousness is multi-faceted, and individual singular definitions may capture qualities of some of those facets, but that's not sufficient to discern anything about having a conscious experience. You cannot get from "what it is like" to an experiment; one could potentially get from a precise definition of consciousness to vaguer qualities like 'what it is like'.
A paper on QM btw wouldn't typically start with the Schroedinger equation, even though that is exactly the starting point for a physicist. Introductions are hooks in science papers, not definitions of terms, and it is clear Tononi is being quite informal. My issue with 'what it is like' is not that it doesn't capture any of the facets of consciousness, but that it is an impression left by the uncountable processes that enumerate what consciousness does. Tononi's project is to apply his approach to things like clinical assessment. This is scientific: he talks about consciousness in terms of discernible difference.
This seems backwards to me. Prima facie, a neuron firing is a neuron firing, and a conscious experience is a conscious experience. The first step is to give a reason why we would think these two things are, in fact, the same.
That would be my way to discern water from vodka. It's a terrible way to discern water from ethylene glycol.
Worth thinking about what smelling and tasting the unknown clear liquid entails. These are extremely sensitive chemical analysers that can usually uniquely identify most naturally occurring things.
But science doesn't proceed prima facie, it proceeds on the basis of evidence. If the model that has electricity and magnetism as two sides of the same coin is better at predicting results of experiments than the one that holds them as two distinct phenomena, proceed with the former.
In a broad sense of 'experiment' you can, I think. I can ask myself the question, "Is there something it is like to be me?" and I can consult myself and answer in the affirmative.
Sure, then there must be evidence to support this claim. Please give some examples of the evidence.
I don't think this conversation is going anywhere constructive, which is a shame as it started out interesting.
:up:
I'm interested in your views. In particular I'm interested in the relationship between neural events and particular experiences and what we can conclude from that about consciousness, if anything. That's central to this issue, no? The logic of this is interesting - arguments from analogy, tacit assumptions, alternative conclusions etc.
So continuing the analogy, you cannot have a change in an electric field without a corresponding and completely determined change in a magnetic field: this is evidence that they are "two sides of the same coin".
Same goes for the neurological correlates of consciousness: you cannot (referring back to prior discussions on this thread) have the "I see Halle Berry's face" experience without the Halle-Berry's-face-detector neuron firing and, conversely, you can't have the neuron fire without seeing Halle Berry's face. (There's citations on the older thread, can dig them out with some patience.)
This as far as I'm concerned makes the claim that they are distinct things, not the same thing from two perspectives, in need of justification, in the same way that if you turned an apple 180 degrees and expected me to believe it was a distinct apple, I'd expect a good justification. The model that fits the evidence is the one in which they're the same thing.
Yes! Sorry I was unclear. My bad. I'll get to the rest of your reply later, thanks.
:roll:
You completely missed the point.
If you can't discern the difference between water and vodka visually, but can only do so by smell or taste, then is the world as it appears visually, or as it smells or tastes? If we could ask a bat or a dog, what would they say? Does a brain exist how we see it, smell it, or taste it? I think we are confusing the way it appears to a particular sense with the way it actually is. This reminds me of how we have a difficult time discerning the difference of light being particles or waves. Maybe it depends on the sense (measuring device) being used.
Quoting Kenosha Kid
But the evidence only appears a certain way depending on what sensory device you are using to observe the evidence. I think that we are forgetting that any time we mention evidence, we are mentioning some conscious experience of some evidence, not evidence as it exists apart from our experience of it, or the way it appears to some sensory apparatus.
...
Quoting Harry Hindu
Apparently you completely failed to include the point.
Quoting Harry Hindu
Okay, so _you_ completely missed the point. Also these are spurious dichotomies. I can only hope you're speaking metaphorically.
Quoting Harry Hindu
Fortunately you can use many devices, simultaneously if you like: again, it's not either/or.
Quoting Harry Hindu
As I said to Judaka, this is a very outdated way of looking at science. Phenomenology is an important matter in modern physics. When someone says "a photon is a click in a photo detector," they are not talking about photons as they appear to the photon detector but how we experience the photon detector's behaviour. All scientific measurement is really a human measurement of a measuring instrument. This isn't problematic: it's been a couple of hundred years since scientists thought they had direct access to objective reality.
Do you really have no idea what someone is talking about when they ask "are you conscious"? You're not able to grok that sentence?
Nothing. Consciousness, mind, and ideas are all there is. Idealism makes everything so much easier.
You can't tell, you can only assume. Since we're all built the same way, there's been no problem assuming we're all conscious, but when computers get more sophisticated, and people start claiming things other than brains are conscious, the impossibility of verifying external consciousnesses is going to become a big problem.
Quoting Harry Hindu
Well said.
Quoting Harry Hindu
Can you unpack "view from nowhere"? Do you mean a god's eye view of your internal mental states?
Suppose we have an unconscious machine that knows all the physical facts about our universe. From that information, could it figure out that this thing called "consciousness" exists?
Strange. If you don't have "direct access" to "objective" reality then are you saying that you have indirect access to your own experiences? Are your experiences part of "objective" reality? It seems to me that you have "direct" access to some part of reality - namely your own mind - or else how can you ever claim that you have experiences with any certainty, much less that they are even about something else that isn't an experience. How does that even happen?
And if we can't adequately explain the part of reality that we have direct access to, or how it relates with the rest of reality, then how can you assert that we know so much about what we access indirectly?
I have an idea what someone might mean, but then that idea falls apart when subjected to logic and reason. The same goes for the word, "god". People use the word without a clear understanding of what it is that they are talking about. We need a definition in order to understand what each other are talking about so that we are not talking past each other.
Quoting RogueAI
Only because we've learned to associate consciousness with behaviors and haven't come up with an explanation of consciousness that allows us to detect consciousness more directly.
Quoting RogueAI
Yes, something like that.
Quoting RogueAI
I don't know what "physical" means, much less a physical fact. How about just facts, or information? I think it would be easier to figure out what consciousness is without the false dichotomy of "physical" and "mental".
Quoting RogueAI
I'm not so sure. Are you saying that my feet are conscious like my brain? Are you saying that molecules, as well as the atoms they are composed of, and then the quarks that the atoms are composed of, have points of view? What is a point of view, if not a structure of information?
I am privy to experiencing Halle Berry's face. Nothing in that experience suggests a particular neuron firing in my brain. So, no, I do not have access to the objective reality underlying my experiences.
By analogy, when I see an apple, I don't see the full apple. I cannot see the reverse side, or the inside. It's not that the objective reality of the apple is missing from my experience of it; rather, my experiencing of it is an incomplete and particular perspective.
I don't think we even need to use the word consciousness to poke some serious holes in materialism. For example, if scientists come up with a theory of consciousness and claim that some machine is conscious, instead of worrying about what consciousness means, we can just ask the scientists, "Is it capable of feeling anything, like pain or pleasure?" If the scientists say "yes", then they are still on the hook for proving that that machine can feel pain, and then we're back to the verification problem. People can throw up language barriers to questions like "Are you conscious?", but if they try to do so for something like "are you in pain?" it's not going to work. We all know what is meant by "are you in pain?"
For example, Kenosha Kid thinks it's possible for consciousness to arise from different substrates, like rocks or ice cream cones (I think he used that example). So, instead of getting bogged down in questions like, "How could a collection of x produce consciousness?", we can ask "how could a collection of x feel pain?" The same absurdity arises (e.g., a collection of rocks feeling pain), there's the same explanatory gap and hard problem (e.g., how could a bunch of rocks feel pain? How does that work?) and we don't even have to mention consciousness.
How would you detect consciousness in a machine, even in principle? How would you go about determining that a substrate other than neurons can generate the sensation of pain? I think this is, in principle, impossible to verify.
I'm sympathetic, and I think things are easier if we ditch physicalism altogether, but physicalism's central claim is that there is this non-conscious stuff that exists external to us and that it either causes consciousness or is consciousness. I don't think there's a problem understanding what physicalists mean when they say that. It's a pretty straightforward theory: mindless stuff exists and everything is made of it and it causes all phenomena. That's easy to understand. I happen to think it's wrong, but I don't think there's a meaning problem there.
In monistic idealism, there is only one cosmic mind, and we are dissociated aspects of it (think dissociative identity disorder, which used to be multiple personality disorder). So, would my feet be conscious? There's an assumption there that there are these things separate from us called "feet", and that they might be conscious. I don't think anything is separate. I think that separation is an illusion. There's only one thing that is conscious: the one mind. Our own focuses of awareness are, as I said, dissociated aspects of this one cosmic mind.
In other words, you have "direct" access to your experience of Halle Berry's face and your perspective of apples. In other words, your experiences and perspectives are part of "objective" reality. If not, then how can you talk about your experiences and perspectives like you can talk about faces and apples?
So I'm confused as to your use of "direct" and "objective". You have direct access to your experiences and perspectives and your experiences and perspectives are part of the objective world.
Like I said, if you can't explain the relationship between your experiences and perspectives and what they are about, then how do you even hope to explain the things your perspective and experiences are about, or of?
All you are doing is moving the goal posts. Now we need to define pain. What if I defined pain as being informed that you are damaged? Can a machine be informed that it is damaged and then take action to repair the damage? What form does the information take? What form does the information "damage to the body" take in you, if not pain? Feelings, visuals, smells, tastes, sounds, etc. all take forms which are all different due to the different sensory organs that are used to acquire the information. You can be informed that you are injured visually as well. Both vision and pain inform you of the same state-of-affairs, but in different forms.
Quoting RogueAI
Like I said, we first need to define what it is that we are looking for. If I define consciousness as a sensory information structure in memory, does this include machines with memory and sensory devices as having consciousness?
Quoting RogueAI
Again, they are using the terms consciousness and non-conscious as if they know the relationship between consciousness and non-conscious stuff (i.e. the relationship between brains and minds). How does a non-conscious thing cause consciousness? How does something cause its opposite? That is a serious problem. It's like asserting that something comes from nothing, or that good can come from evil acts.
Quoting RogueAI
Why would there be dissociated aspects of one mind? Are you saying solipsism is the case and we don't know that our minds really aren't conscious in and of themselves, rather there is only one consciousness - this cosmic mind?
It seems simpler to just say everything is information, or processes.
Nagel pointed out that the view from within is always from somewhere, from the here and now. Somehow this is a fact, a piece of direct observation, but it is not an objective fact. I am here in this location and you are there. Biden is in England and I am in New York. If our minds just switched locations there would be no objective difference, and yet there would be a difference to me. I would see things from Biden's location and be surrounded by world leaders. So even in a metaphysics that contains "points of view" as fundamental entities there would still be something missing: the subjective fact that I am one of those points of view.
Is there a view from nowhere? I don't see how there can be. The idea of objectivity may be closer to a view from everywhere.
If what you are saying is the case whether anyone from any point of view agrees or is aware of it or not, then does that not make it an objective fact?
I think you're confused. Your argument here is that subjective experience is proof that subjective experiences are objects.
OK, thanks. That experiences supervene on the physical is compatible with any theory of mind, including substance dualism (I'm not a substance dualist). To spell it out in terms of substance dualism, just to make the point, there might be a lawlike relationship between physical stuff and mental stuff, such that any change in the mental stuff corresponds to a change in the physical stuff, in a consistent, lawlike way. Substance dualism is wrong for other reasons, but it's consistent with the evidence that physical neural events correspond in a very regular manner with that subject's experiences.
Regarding the view that there is one thing with two perspectives, the problem just pops up again. Let's take a rock. No neurons, no wetware, no behaviour similar to human behaviour that would allow us to infer consciousness, no? So how many perspectives on the rock are there? Just one, presumably. It has no first person perspective, the only perspective that exists is the perspective of the conscious creature looking at it. Now let's take a neural function roughly corresponding with a subject tasting some coffee. You're saying that consciousness just is that thing. The neural function looks like a bunch of readings on a brain scanner of some kind from the scientist's point of view, but from the subject's point of view, those same functions are the experience of tasting coffee.
The question now is, why does a neural function have two perspectives, and a rock only one?
In other words, in claiming an identity in order to solve the hard problem (the mental just is a physical function) it becomes necessary to re-introduce a dualism in order to be able to talk about subjective experiences as distinct from neurons firing, namely, the distinction between two perspectives. But now we're back to square one. How can functional interactions of things with only one perspective result in something with two perspectives?
https://iep.utm.edu/identity/
Also: having a song stuck in your head, but no music playing inside your skull. This is one of those cases where materialism goes down a rabbit hole into absurdity.
I broadly agree with your posts in this thread I think. I prefer to avoid the term 'materialism' as it is vague and has a lot of baggage. It's also unclear to me how it is different from 'monism'. I think a more precise word to use is 'emergentism'.
Yes I see what you mean. I guess the physical brain state still has properties that its constituent neurons, or even molecules, do not have, for example it is the property of a whole system that it can see red, but the constituent parts don't have that property/capability. So there's still emergence there in that sense.
I'm asking a question, using your examples. You can clear up the confusion if you weren't trying so hard to be obtuse. Again, I'm asking what you mean by "objective" and "subjective". You're using the terms, not me. We don't have to use faces and apples as examples. We could also use racism and democracy as examples, which aren't objects but we can talk about them like we talk about experiences and perspectives. So, I'm waiting for you to clear up the confusion by simply answering my questions.
While you're at it maybe you could explain what you mean by "direct/indirect" as well, and what and where the "you" is in relation to your perspectives and experiences.
A dualism that cannot separate its dichotomy at all is an insistence rather than an explanation though, again like insisting some mysterious interaction between heads and tails when the obvious and most evident explanation is much simpler: they're the same object. I guess to a dualist, a dualist explanation would seem like the default, but it's one ism more than is necessary. (Tmk there's no proof against dualism, just no justification for it either.)
Quoting bert1
Sure, consciousness is not something a rock does. But it doesn't do a lot of things. It's a lousy printer, and it's notoriously bad at giving blood. Are we going to posit a distinct plane of reality for every possible capacity of an object?
Quoting bert1
Because perspective is an aspect of consciousness. We are conscious so that feels very special, but there's no objective reason that consciousness is more special -- deserving of its own kind of reality -- than any number of things that can do shit consciousness can't do, like forge galaxies, create atoms, swallow planets, go through two slits at once.
A similar notion to a perspective is a frame of reference, which is unique like a perspective but doesn't require the object in question to be conscious. If you look at something like a point particle falling toward a black hole, there's an extremely different picture from different frames of reference: for a stationary observer far from the event horizon, the particle will slow down as it approaches the horizon, where it will stop; in the rest frame of the particle, it will pass the event horizon and carry on forever (or rather the event horizon will pass and recede from it forever). The difficulty in reconciling different frames of reference is not dependent on consciousness: indeed, I find it easier to grasp that the Halle Berry's face neuron firing is identical to 'experiencing Halle Berry's face' than I do reconciling the consciousness-free black hole 'perspectives'.
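As an aside, the two black-hole 'perspectives' can be made quantitative. A minimal sketch, assuming the standard Schwarzschild picture for a stationary clock at radius r outside a horizon at radius rs (the function name is mine):

```python
import math

def dilation_factor(r, rs):
    """Rate of a stationary clock at radius r relative to a clock
    far from a Schwarzschild black hole with horizon radius rs.
    The factor sqrt(1 - rs/r) approaches 0 as r approaches rs:
    the distant observer sees clocks near the horizon freeze."""
    if r <= rs:
        raise ValueError("no stationary observer at or inside the horizon")
    return math.sqrt(1.0 - rs / r)

# Far from the hole the two frames nearly agree; near the horizon
# they diverge without limit, yet nothing 'special' happens locally.
print(dilation_factor(1000.0, 1.0))  # close to 1
print(dilation_factor(1.001, 1.0))   # close to 0
```

Neither frame is privileged: the same worldline looks like "freezing at the horizon" in one description and "falling straight through" in the other, which is the reconciliation problem gestured at above.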
Quoting bert1
I think I covered this above. Disclaimer btw: I currently have sunstroke. Everything I'm saying seems perfectly reasonable and lucid to me. It is possible I will reread this tomorrow and be appalled with myself :rofl:
Quoting bert1
The thing doesn't "have" two perspectives any more than an apple has infinite perspectives.
I think, as ever, your mode of communication is just, for me anyway, not conducive to anything more than me guessing what you're circling around and hoping for the best. Having been down this road before with you, I'm going to leave the option of you considering your posts more carefully on the table and otherwise have to ignore you. Because there's nothing in that post that demands more than the ridiculously obvious "a neuron firing" is not equal to "a neuron" and that's just going to lead to you saying that's not what you meant and I'm being an asshole or something. Make an effort or I won't either.
So why not just ask that? If that's your sole question, what's with all the bunkum? Standard definitions used throughout. If I was introducing exotic definitions I would have stated them, likewise a common thing to do when communicating.
EDIT: Anticipating the follow-up question "What do people generally mean by subjective and objective?", a subjective description of a thing is that given by information available to a subject observing that thing or something derived from that thing, while an objective description of a thing is a description that is (or would be) given by complete information about that thing.
Going back to the Halle Berry's face recognition neuron, the part of the brain that is aware of Halle Berry's face is not aware that a neuron that recognises Halle Berry's face has fired: it has less information about the causes of that awareness than an objective description. An objective description of that process is, in principle, more complete (assuming the mind is fundamentally physical) such that an external observer could detect "is experiencing Halle Berry's face" in a third party. In principle, not yet in practice.
That would not be a good definition of pain. The salient feature of pain is not information about damage to the system. You can have pain without any damage to the system (e.g., phantom limb pain). The main thing about pain is that it hurts, and any definition that doesn't mention this phenomenal aspect of pain is severely lacking. Wouldn't you agree? The main thing about pain is it feels bad?
Also, I don't think there's any goalpost moving going on. I might grant you that "is x conscious?" might get bogged down in definitions, but "is x in pain?", won't. Everyone knows what that means. Either a machine can feel pain or it can't. No fancy definition is required. Do you believe that machines will ever be able to feel pain?
If anyone is unsure, one way to learn is to hit one's thumb hard with a hammer. That's pain, and from that one might further intuit the concept of consciousness.