Explaining multiple realizability and its challenges
Multiple realizability is a feature of a non-reductionist theory of mind. I want to explain why its adherents like it and how its opponents reject it.
MR is a response to the flaw in brain-state reductionism: it doesn't appear to be possible to correlate a particular brain state with a psychological state (like pain). This flaw is particularly noticeable when we think about the broad range of creatures who can feel pain: their varying anatomy makes it seem impossible to make this correlation.
MR says we don't need to identify a particular brain state because pain can be realized by many different physical states.
If you're a proponent of MR, what would you say the basis for the claim is?
Next post: challenges to MR.
Comments (66)
MR is a theory based on unknowability. I reject that. I think the functioning of the brain is knowable, but we just haven't got there yet.
MR may have practical applications, up to the point when it becomes obsolete due to advances in knowing how the brain works. If it works in the first place.
This is why I don't like that analogy: if you're running the same program on computers with different hardware, it would still be simple (with a diagnostic device called a logic analyzer) to identify correlating states. They're all doing the same thing, just with different voltage levels and technological platforms.
If we change it to devices with different brands of microprocessors so the machine language is different, we could still discover the correlation diagnostically. IOW, I wouldn't have to identify an external state and trace it back to the state of the logic gates.
I think MR is a stronger thesis than: same software/different hardware. It's unrelated software and hardware that's only related by attachment to the same evolutionary tree.
Or is that wrong? Has a "software" format been discovered that allows us to correlate humans and octopuses?
I would like to know the extent to which MR is a shot in the dark vs based on evidence.
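The "same software, different hardware" analogy above can be sketched in code. In this minimal, hypothetical sketch (none of these class names come from any real library), two different "realizers" implement the same functional role, and an external observer sees only identical input-output behavior, much as the logic analyzer would:

```python
class AdderTTL:
    """One 'hardware' realization: adds two numbers bit by bit,
    the way a chain of logic gates might."""
    def add(self, a: int, b: int) -> int:
        while b:
            carry = a & b
            a = a ^ b
            b = carry << 1
        return a

class AdderCMOS:
    """A different 'hardware' realization: native integer addition."""
    def add(self, a: int, b: int) -> int:
        return a + b

def observe(device, a, b):
    """The external, functional description: input-output behavior only."""
    return device.add(a, b)

# Functionally identical, internally different:
assert observe(AdderTTL(), 19, 23) == observe(AdderCMOS(), 19, 23) == 42
```

The point of the sketch: at the functional level the two adders are indistinguishable, yet nothing at the gate level of one maps neatly onto the gate level of the other. Whether biology offers anything like this shared "program" is exactly the open question in the post above.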
I've always felt that there's a much stronger argument for MR than just pain, in that neuroscience can't find any objective correlation between 'brain states' and all manner of mental phenomena, including language. I mean, in individuals who suffer brain damage, the brain is able to re-route its functionality so that areas not typically associated with language abilities are re-purposed. Not only that (and I have some references for this), research has been entirely unable to correlate particular patterns of neural activity with even the simplest learning tasks.
(I'm interested in this topic, but have to go to work, but will follow the thread.)
When you’re performing a function or carrying out a calculation or in reference to a machine, then this makes sense. But how would a machine realise pain? At all? You could surely program a computer to respond in a particular way to a range of inputs that would output a symbol for 'pain', but the machine by definition is not an organism and cannot literally feel pain.
I don't mean that I agree with the machine-metaphor behind reductionism, but I think it needs a subtler critique than this, now that we can envisage quasi-organisms.
(Very well, thanks.) As I said, you could simulate pain or a 'pain-type-reaction'. But one of the key points of pain is that it is felt.
Which provides me the opportunity to post one of my all-time favourite stock quotes, from Rene Descartes, in 1637 (!):
[quote= Rene Descartes]if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs - for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action.[/quote]
And, can we 'envisage quasi-organisms'? I maintain that all such 'envisaging' is in fact 'projection', which is a consequence of our immersion in image-producing and computing technology, such that we lose sight of the fact that computers are neither organisms, nor beings, but devices. Yet from experience on this forum, people will fight that distinction tooth and nail.
Right. So I write a program and compile it for an Intel microprocessor, then compile it for some other processor. What in the biological world compares to that "same program"?
I hear you. I'm just exploring different aspects of the concept of emergence.
What I don't see, is how the symbolic representation of pain, like the word PAIN, is actually painful. Nor how it is possible to argue that computers are subjects of experience.
Could you hurt it? Cause it to feel physical or emotional pain?
For something to care for you, you need not be able to hurt it. You need only be able to care for it.
The stronger version allows the same pain, for example, to arise from different token physical states of the same system.
Horgan 1993:
"Multiple realizability might well begin at home. For all we now know (and I emphasize that we really do not now know), the intentional mental states we attribute to one another might turn out to be radically multiply realizable at the neurobiological level of description, even in humans; indeed, even in individual humans; indeed, even in an individual human given the structure of his central nervous system at a single moment of his life." (p. 308; author's emphases)
This stronger thesis is empirically supported by evidence of neural plasticity in trauma victims.
Next: non-reductive physicalism vs functionalism:
Well, maybe it would if only it had its own grandfather. :-) Which served in the war. :-) And had experience-ready capabilities. :-)
I am only proposing that you can give a social robot enough of the appearance of a carer for humans to feel comfortable interacting with it. It seems to me that AI is now sophisticated enough to give a machine, for example, parameters that would represent our two broad theories of other minds, i.e. simulation theory or theory theory. And the social robot would have a head start with its human because it would indeed appear to be reading the human's mind, as that would be its primary purpose: to provide the care needed, including anticipating future needs. For example, if a doddery person falls over when they stand on more than one occasion, a machine could perfectly well begin to anticipate and help out with that. Clever dogs are already trained to do that to a limited degree.
I'm not sure I'm understanding you, but spider eyes evolved separately from human eyes. Could arachnids continue to evolve into creatures with rich inner worlds with some commonality of visual experience with humans? If not, why not?
True enough, but it still doesn’t amount to being able to feel pain. So Putnam’s idea of ‘multiple realisability’ doesn’t extend to the domain of robots or AI.
I reiterate that I’m dubious about the effectiveness of referring to ‘pain’ as a ‘psychological state’. It seems to me a physiological state. I think a much more philosophically sophisticated argument could be constructed around the argument that the same ideas can be realised in multiple ways - different languages and even different media. So, the argument would go, if the same proposition can be represented in a vast diversity of symbolic forms, how could the meaning of the proposition be reduced to anything physical? In other words, if the information stays constant, while the physical form changes, how can you argue that the information being conveyed by the symbols is physical? To do so is to mistake ‘brain states’ for symbolic forms, which they’re surely not.
Pain interoception (nociception) is a type of corporeal state perception (sensation mental effect). So, pain is a psychological state caused by a physiological state (sensation).
In other words: the physical information of nociception becomes the semantic information of pain.
In some ways MR is compatible with functionalism, but Putnam used MR against functionalism. Exploring that would take me further down the path of functionalism than I really wanted to go. Any comments welcome, though.
'That was wonderful for you, dear. How was it for me?'
Functionalism isn't like that. It emphasizes outer causes and ramifications over internal neural states.
It does make sense to think about social norms when we think about psychological states, plus individual psychology. A person who experiences a lot of pain every day will rank a pain as minor when the same physical condition could be experienced as horrific by someone else.
An aspect of pain for humans is the so-called "pain of the pain." This is distress arising from the expectation or memory of pain.
Considering that kind of thing, functionalism makes sense.
"A transducer is a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another."
I'm not addressing the conversion of energy. I'm arguing that because the meaning of a proposition can be represented in different symbolic forms and even in different media, then the meaning or the intelligible content of the proposition, is separable from the physical representation. It's suggestive of a form of dualism. As far as I'm aware, it's a novel argument.
No - that's the point. Information is not reducible to energy. There's a famous aphorism to that effect by the creator of cybernetics, Norbert Wiener: 'Information is information, not matter or energy.'
And this doesn't necessarily imply Cartesian dualism. I'm not arguing for information as a substance. I think its nature is very elusive. But it can be shown that it can't be explained in terms of 'arrangements of objects', whether they are particles or whatever.
(And, who or what is Caminante??)
Explored at greater length in this post.
From Pylyshyn (1984):
Jim sees an auto accident. He goes to a phone and dials 91. What will he do next? Most likely he'll dial another 1.
The explanation for this is a systemic generalization between
A. What he recognized
B. His background knowledge
C. His resulting intentions, and
D. That action
A reductionist's explanation will be too weak because the specific neural events and muscular contractions involved here will only be associated with one way of learning, coming to know, and the action of dialing (he could dial with a pencil, a toe, voice recognition, etc).
Because of multiple realizability, a reductionist explanation can't capture all the generalizations that can be captured, and capturing every capturable generalization is a tenet of scientific methodology.
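Pylyshyn's point can be put in code. In this illustrative sketch (all names are hypothetical, invented for the example), the intentional-level generalization "having dialed 9-1, Jim will dial another 1" mentions only what was dialed, so it holds across physically unrelated realizations of "dialing":

```python
# Three physically unrelated ways of realizing the same act of dialing.
def dial_with_finger(digit): return ("finger press", digit)
def dial_with_pencil(digit): return ("pencil tap", digit)
def dial_by_voice(digit):    return ("spoken word", digit)

def next_digit(dialed_so_far):
    """The intentional-level generalization: it refers only to what
    was dialed, never to muscles, pencils, or microphones."""
    if dialed_so_far == [9, 1]:
        return 1
    return None

# The prediction is the same whichever realizer is used; a description
# pitched at the level of specific muscular contractions would have to
# state it three separate times.
for realize in (dial_with_finger, dial_with_pencil, dial_by_voice):
    assert realize(next_digit([9, 1]))[1] == 1
```

The reductionist's description, tied to one realizer, states three disconnected facts where the functional description states one law.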
I agree. As I understand Derrida, one of the deep fantasies of philosophy is meaning without 'physical support,' meaning without a vessel that is directly present 'in' or 'for' some mind. Can I talk to myself without an historically generated language? Can I talk to myself at all in the sense of learning anything from this monologue? I think that we do learn from talking to ourselves. The symbols don't refer to timeless entities but are caught up in time and recontextualization.
Clearly we think as individuals. Wouldn't you agree? But we think in the words of the tribe.
That's because language-using beings orient themselves to the world via meaning.
Quoting frank
Like I said before, and no-one seems to notice this, it's nuts to think that 'brain states' represent anything whatever. That's the hangover of Locke's representative realism, but it's completely untenable, because it mistakes neurology for semiotics, whereas neurology works at completely different level to semiotics, representation, language, and the like.
Quoting softwhere
The individual - 'me' - exists like the foam on a wave on an ocean. The most recently-arrived and most ephemeral of beings.
Indeed. The human being is a radically historical and social being. What I am pre-philosophically inclined to call 'my' reason is the work of centuries. More locally, the human being without a tribe is unthinkable. We are born helpless with necks too weak for our heavy heads. A human brain that doesn't learn a language is largely wasted.
Our quickly senescent bodies would be pathetic indeed were they not the vessels of a time-binding software or 'philosophical subject.' If philosophy is the religion of self-consciousness, then the self that is known is not primarily the helplessly mortal self (we have magazine quizzes for that) but the human in its/our unfolding potential. The materiality of the signifier and material in general are crucial for time-binding, for the human being to lift itself up from superstition and poverty (its immersion in nature, one might say).
We might say that this orientation is meaning. The mind/matter distinction is a historical contingency. The beetle in the box is problematic.
[quote=Wiki]
Wittgenstein invites readers to imagine a community in which the individuals each have a box containing a "beetle". "No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle."[16]
If the "beetle" had a use in the language of these people, it could not be as the name of something – because it is entirely possible that each person had something completely different in their box, or even that the thing in the box constantly changed, or that each box was in fact empty. The content of the box is irrelevant to whatever language game it is used in.
By analogy, it does not matter that one cannot experience another's subjective sensations. Unless talk of such subjective experience is learned through public experience the actual content is irrelevant; all we can discuss is what is available in our public language.
By offering the "beetle" as an analogy to pains, Wittgenstein suggests that the case of pains is not really amenable to the uses philosophers would make of it. "That is to say: if we construe the grammar of the expression of sensation on the model of 'object and designation', the object drops out of consideration as irrelevant."
[/quote]
https://en.wikipedia.org/wiki/Private_language_argument
If the notion of pure mind is threatened, then so is the notion of pure matter. Indeed, 'mind' and 'matter' are troubled in the same way by the argument above. Private meaning is problematic. And yet I depend on the same system of signs that I use to unveil the strangeness of this system.
With formal languages perfect translation (between media) is not only possible but common. And I agree that this is fascinating indeed. But non-formal languages are famously only imperfectly translated. The act of reading is also creative. Moreover the writings of the past are changed (recontextualized) by the writings that come after. What is 'the ideality of the literary object'? It's a 'spiritual realm,' as I see it. But this spiritual realm also seems to be dynamic, caught up in time, and subject to dissemination.
The dualism is still there, but isn't this culture versus nature?
More like it's clinging to and grasping of the sensory domain (which ends up being the meaning of 'empiricism'.)
I heard that Schrodinger's cat had eaten Wittgenstein's beetle, although others heard differently.
Quoting softwhere
It’s the ‘formal realm’, I think - the domain of laws, conventions, number, logic and the like. We ‘see’ it through the ‘eye of reason’. Whereas the spiritual realm is seen through ‘the eye of the heart’ according to mystic lore.
Then, in light of that, consider that the only perfect application of the word ‘is’ is the equals sign. Other usages of the word ‘is’ are only ever approximations.
We need the sensory domain, though. Since we are fundamentally social beings, it's our sense organs and our flesh generally that make language and thought possible.
Quoting Wayfarer
That looks like a dodge.
Quoting Wayfarer
Note the necessary appeal to metaphor. I understand the metaphor and agree with it. This metaphoricity is one of the ways that natural language exceeds formal language.
I also like 'eye of the heart.' This metaphor emphasizes the passion involved in the 'spiritual.' I realize that some might understand metaphor to be a reductive concept, but as Derrida noted: if metaphysics is metaphorical, then metaphor functions metaphysically within such an assertion. To compare God to a literary object is as much a promotion of literature as it is a demotion of God. Alternative approaches (justifying God as a scientifically defensible entity) seem the wrong way to go (I think you agree here.)
Quoting frank
The SEP entry on 'multiple realisability' says something similar:
I chipped in to say that
Quoting Wayfarer
I will add, I think the talk about 'electronic states' and 'slime states' is really typical of the kind of nonsense that passes for philosophy nowadays, even though I might be sympathetically inclined to the basic argument.
Frank then introduced a refinement to the OP, to wit:
Quoting frank
So I will bow out at this point as plainly the kind of argument I have in mind is completely different to anything intended by the OP.
I've never come across a panpsychist saying a whole must be conscious because some of the parts are (although no doubt there will be such people, I may even be one of them, although I don't recall making an argument of that form). If you are fallacy hunting, wouldn't the fallacy of division be more apt? Namely that the parts must be conscious because the whole is?
I could understand you crying foul in terms of a divisional fallacy. Are you sure that's not what you mean?
I'm actually just plowing through the SEP article you posted. It's kind of like homework so I can understand various angles on the concept of emergence.
Your contributions have been welcome.
Multiple Realizability is consistent with current Natural Science (inductive evidence).
Corporeal and mental events are mutually dependent, but incommensurable because:
1) While correlation can be demonstrated, causation cannot.
2) Corporeal and mental data are accessed at different levels of abstraction (i.e., Neurology and Psychology).
Also, neuroplasticity is a fact (ruling out the possibility of epiphenomenalism, which is consistent with psychoneural identity theories).
It is obvious that body and mind are open sub-systems of (at least certain) organisms (e.g., those having a central nervous system). Body is open to mind and environment, and mind is open to body. But mind cannot be a sub-system of body if neuroplasticity is a fact.
Right. This is the conclusion of the Pylyshin argument I discussed above. So the stance that best meshes with scientific methodology is non-reductive.
Quoting Galuchat
Yep. I mentioned neural plasticity, just didn't delve into it.
Which brings us to emergence.
Yep. I'd like to start a thread discussing emergence, I'm not quite there, though. Been busy.
So emergence has nothing to do with Multiple Realizability?
Fair enough.
I think you'll find, Galuchat, that every thing is related to everything else.
Inanity.
Sounds like you're spent, so I'm outta here.
Up next is challenges to multiple realizability. Please do be outta here.
So would a consequence of rejecting multiple realizability be that we have to let go of assumptions about other species' experiences?
More to the point, considering neural plasticity, should we drop the assumption that pain is something we all experience?
Or so it seems. Consider the reduction of temperature to molecular behavior. I can't predict what will be going on with any particular molecule at any particular time, yet I can say something about the mean. Per SEP:
"Following suggestions by Clifford Hooker (1981) and Enç (1983), Bickle (1998, Chapter 4) argues that the radical type of multiple realizability (in the same token system over times) is a feature of accepted historical cases of scientific reduction. It even obtains in the “textbook” reduction of classical equilibrium thermodynamics to statistical mechanics and microphysics. For any token aggregate of gas molecules, there is an indefinite number of realizations of a given temperature—a given mean molecular kinetic energy. Microphysically, the most fine-grained theoretical specification of a gas is its microcanonical ensemble, in which the momentum and location (and thus the kinetic energy) of each molecule are specified. Indefinitely many distinct microcanonical ensembles of a token volume of gas molecules can yield the same mean molecular kinetic energy. Thus at the lowest level of microphysical description, a given temperature is vastly multiply realizable in the same token system over times. Nevertheless, the case of temperature is a textbook case of reduction."
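The SEP passage can be illustrated numerically. In this toy model (arbitrary units, not real physics), "temperature" is just the mean of per-molecule kinetic energies, and two quite different microstates realize the same temperature:

```python
def mean_kinetic_energy(energies):
    """Toy 'temperature': the mean of per-molecule kinetic energies."""
    return sum(energies) / len(energies)

# Two distinct microstates of the same token volume of gas:
state_a = [5.0, 5.0, 5.0, 5.0]   # perfectly uniform
state_b = [1.0, 9.0, 2.5, 7.5]   # highly non-uniform

# Distinct realizers, same 'temperature' of 5.0:
assert mean_kinetic_energy(state_a) == mean_kinetic_energy(state_b) == 5.0
assert state_a != state_b
```

Since there are indefinitely many lists with the same mean, the same "temperature" is multiply realized at the molecular level, and yet temperature remains the textbook case of a successful reduction, which is exactly Bickle's point against treating MR as an automatic bar to reduction.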