You are viewing the historical archive of The Philosophy Forum.

Explaining multiple realizability and its challenges

frank December 08, 2019 at 17:14 9250 views 66 comments
Multiple realizability is a feature of a non-reductionist theory of mind. I want to explain why its adherents like it and how its opponents reject it.

MR is a response to the flaw in brain-state reductionism: it doesn't appear to be possible to correlate a particular brain state with a psychological state (like pain). This flaw is particularly noticeable when we think about the broad range of creatures who can feel pain: their varying anatomy makes it seem impossible to make this correlation.

MR says we don't need to identify a particular brain state because pain can be realized by many different physical states.

If you're a proponent of MR, what would you say the basis for the claim is?

Next post: challenges to MR.

Comments (66)

god must be atheist December 08, 2019 at 19:05 #360745
We simply don't know how the brain works, and our measuring techniques are not much help either.

MR is a theory based on unknowability. I reject that. I think the functioning of the brain is knowable, but we just haven't got there yet.

MR may have practical applications, up to the point when it becomes obsolete due to advances in our knowledge of how the brain works. If it works in the first place.
Pfhorrest December 08, 2019 at 19:25 #360751
MR is not itself a theory of mind, it’s just a feature of functionalism. Functionalism says that mental states correspond to functional states of (particular kinds of) state machines, which in general are multiply realizable: you can run the same program on a computer made of transistors, vacuum tubes, or pipes and valves, in principle, as it’s not about the hardware per se but about the functionality that it can implement. Functionalism is not itself even inherently reductionist: in principle the function of the mind could be realized in some kind of immaterial substance, if such things can even exist. Functionalism just doesn’t require that such a thing exist.
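[Editor's aside: the multiple realizability of a function that Pfhorrest describes can be made concrete with a toy sketch. The function names and the choice of a parity checker are illustrative only, not anything from the thread: the same abstract function is realized by two structurally different implementations.]

```python
# Two "realizations" of the same abstract function -- a parity checker --
# once as an explicit state machine driven by a transition table, and once
# as bare arithmetic with no explicit states at all.

def parity_table(bits):
    """Finite state machine: states EVEN/ODD driven by a transition table."""
    transitions = {("EVEN", 0): "EVEN", ("EVEN", 1): "ODD",
                   ("ODD", 0): "ODD", ("ODD", 1): "EVEN"}
    state = "EVEN"
    for b in bits:
        state = transitions[(state, b)]
    return state

def parity_arithmetic(bits):
    """The same function realized with different 'hardware': no state table."""
    return "ODD" if sum(bits) % 2 else "EVEN"

# Different internal structure, identical functional behavior on every
# input -- the sense in which one function is multiply realizable.
for bits in ([1, 0, 1], [1, 1], [], [1, 0, 0, 1, 1]):
    assert parity_table(bits) == parity_arithmetic(bits)
```

On the functionalist picture, what the two realizations share is not any internal state-for-state match but the input-output function they both implement.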
frank December 08, 2019 at 20:33 #360764
Quoting Pfhorrest
you can run the same program on a computer made of transistors, vacuum tubes, or pipes and valves, in principle, as it’s not about the hardware per se but about the functionality that it can implement.


This is why I don't like that analogy: if you're running the same program on computers with different hardware, it would still be simple (with a diagnostic device called a logic analyzer) to identify correlated states. They're all doing the same thing, just with different voltage levels and technology platforms.

If we change it to devices with different brands of microprocessors so the machine language is different, we could still discover the correlation diagnostically. IOW, I wouldn't have to identify an external state and trace it back to the state of the logic gates.

I think MR is a stronger thesis than: same software/different hardware. It's unrelated software and hardware that's only related by attachment to the same evolutionary tree.

Or is that wrong? Has a "software" format been discovered that allows us to correlate humans and octopi?
frank December 08, 2019 at 20:35 #360765
Quoting god must be atheist
MR is a theory based on unknowability. I reject that. I think the functioning of the brain is knowable, but we just haven't got there yet.


I would like to know the extent to which MR is a shot in the dark vs based on evidence.
Wayfarer December 08, 2019 at 20:38 #360766
Quoting frank
MR says we don't need to identify a particular brain state because pain can be realized by many different physical states.


I've always felt that there's a much stronger argument for MR than just pain, in that neuroscience can't find any objective correlation between 'brain states' and all manner of mental phenomena, including language. I mean, in individuals who suffer brain damage, the brain is able to re-route its functionality so that areas not typically associated with language abilities are re-purposed. Not only that (and I have some references for this), research has been entirely unable to correlate particular patterns of neural activity with even the most simple learning tasks.

(I'm interested in this topic, but have to go to work, but will follow the thread.)
Pfhorrest December 08, 2019 at 20:58 #360772
Reply to frank I think functionalism is more about implementing a protocol or format, or even more generally a... well, a function. AIM on Windows and Mac are different realizations of the same program; AIM for Mac on x86 or PPC likewise, even though the processors are different. AIM on Windows and iChat still both communicated over the same protocol, and all of those are still chat programs, just like ICQ, even though that's not directly compatible with them. Human pain and octopus pain could be comparable to iChat on Mac PPC and ICQ on a vacuum-tube emulation of Windows x86: they're very different tech stacks at every level, but they're both still doing the same thing.
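[Editor's aside: the same-protocol point can be sketched in code. This is a minimal illustration with a made-up wire format, "sender|message" -- none of it is real AIM or ICQ code: two independently written clients interoperate because they realize the same protocol, despite sharing no implementation.]

```python
# Two hypothetical chat clients that share no code but realize the same
# simple wire format: "sender|message" encoded as UTF-8 bytes.

def client_a_send(sender: str, message: str) -> bytes:
    # Client A builds the payload by string formatting.
    return f"{sender}|{message}".encode("utf-8")

def client_b_receive(payload: bytes) -> dict:
    # Client B parses with str.partition -- a completely different
    # implementation, but the same protocol, so the two interoperate.
    sender, _, message = payload.decode("utf-8").partition("|")
    return {"sender": sender, "message": message}

msg = client_b_receive(client_a_send("frank", "hello"))
assert msg == {"sender": "frank", "message": "hello"}
```

The protocol, not the code behind it, is what the two clients have in common; that is the level at which the functionalist analogy operates.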
Wayfarer December 08, 2019 at 22:11 #360786
Quoting Pfhorrest
Functionalism says that mental states correspond to functional states of (particular kinds of) state machines, which in general are multiply realizable: you can run the same program on a computer made of transistors, vacuum tubes, or pipes and valves, in principle, as it’s not about the hardware per se but about the functionality that it can implement.


When you’re performing a function or carrying out a calculation or in reference to a machine, then this makes sense. But how would a machine realise pain? At all? You could surely program a computer to respond in a particular way to a range of inputs that would output a symbol for 'pain', but the machine by definition is not an organism and cannot literally feel pain.
mcdoodle December 08, 2019 at 22:20 #360788
Reply to Wayfarer (HI Wayfarer, hope all is well) I don't see why a machine couldn't be developed that would know how to simulate the expression of pain, and would also know that 'pain' is usually, but not always, a sign that something is wrong, so would also either simulate the wrongness, or only express pain when something is wrong.

I don't mean that I agree with the machine-metaphor behind reductionism, but I think it needs a subtler critique than this, now that we can envisage quasi-organisms.
Wayfarer December 08, 2019 at 23:00 #360796
Quoting mcdoodle
I don't see why a machine couldn't be developed that would know how to simulate the expression of pain


(Very well, thanks.) As I said, you could simulate pain or a 'pain-type-reaction'. But one of the key points of pain is that it is felt.

Which provides me the opportunity to post one of my all-time favourite stock quotes, from Rene Descartes, in 1630 (!):

[quote= Rene Descartes]if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs - for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action.[/quote]

And, can we 'envisage quasi-organisms'? I maintain that all such 'envisaging' is in fact 'projection', which is a consequence of our immersion in image-producing and computing technology, such that we lose sight of the fact that computers are neither organisms, nor beings, but devices. Yet from experience on this forum, people will fight that distinction tooth and nail.
frank December 08, 2019 at 23:08 #360797
Quoting Pfhorrest
AIM on Windows and Mac are different realizations of the same program; AIM for Mac on x86 or PPC likewise, even though the processors are different


Right. So I write a program and compile it for an Intel microprocessor, then compile it for some other processor. What in the biological world compares to that "same program"?
frank December 08, 2019 at 23:16 #360800
Quoting Wayfarer
I'm interested in this topic, but have to go to work, but will follow the thread.)


I hear you. I'm just exploring different aspects of the concept of emergence.
Wayfarer December 08, 2019 at 23:16 #360801
Incidentally, the SEP article on the topic is here: https://plato.stanford.edu/entries/multiple-realizability/
Pfhorrest December 09, 2019 at 00:03 #360814
Reply to Wayfarer That is the question of the hard problem of phenomenal consciousness, and you already know my answer to that: everything has some phenomenal experience or another, and the specifics of that experience vary with the function of the thing, so anything that realizes the same function as a human brain has the same experience as a human brain.
Wayfarer December 09, 2019 at 00:23 #360823
Quoting Pfhorrest
That is the question of the hard problem of phenomenal consciousness, and you already know my answer to that: everything has some phenomenal experience or another, and the specifics of that experience vary with the function of the thing, so anything that realizes the same function as a human brain has the same experience as a human brain.


What I don't see, is how the symbolic representation of pain, like the word PAIN, is actually painful. Nor how it is possible to argue that computers are subjects of experience.
180 Proof December 09, 2019 at 00:38 #360831
bert1 December 09, 2019 at 09:26 #360952
180, are you approving of Pfhorrest's panpsychism or of his functionalism regarding the content of consciousness? Or both?
mcdoodle December 09, 2019 at 09:37 #360958
Reply to Wayfarer Great quote! To debate it thoroughly would take us off-topic. My feeling is that social robotics - not Siri and Alexa, but the robots that provide care and comfort - have progressed to the point where they defy Descartes' first point. If it feels like a carer, if it acts like a carer, then it's a carer. But (Descartes' second point) it won't, indeed, go off-piste as humans would and tell you how moved it was by its grandfather's wartime experiences.
ovdtogt December 09, 2019 at 09:45 #360964
Reply to mcdoodle Something does not have to be aware (think) to exist ( for us).
Wayfarer December 09, 2019 at 10:05 #360974
Quoting mcdoodle
If it feels like a carer, if it acts like a carer, then it's a carer.


Could you hurt it? Cause it to feel physical or emotional pain?
ovdtogt December 09, 2019 at 10:11 #360977
Quoting Wayfarer
Could you hurt it? Cause it to feel physical or emotional pain?


For something to care for you you need not be able to hurt it. You need only be able to care for it.
frank December 09, 2019 at 13:59 #361074
Per Fodor there are two degrees of MR: a weaker MR allows the same psychological state to arise from distinct structures: say electronic technology vs biological.

The stronger version allows the same pain, for example, to arise from different token physical states of the same system.

Horgan 1993:
"Multiple realizability might well begin at home. For all we now know (and I emphasize that we really do not now know), the intentional mental states we attribute to one another might turn out to be radically multiply realizable at the neurobiological level of description, even in humans; indeed, even in individual humans; indeed, even in an individual human given the structure of his central nervous system at a single moment of his life. (p. 308; author's emphases)"

This stronger thesis is empirically supported by evidence of neural plasticity in trauma victims.
frank December 09, 2019 at 16:24 #361116
This stronger version of MR is the prevailing view in philosophy of mind at this point. 'MR in token systems over time' is non-reductive physicalism. Horgan's comment makes clear why its prevalence is a matter of fashion and taste rather than a firm empirical foundation, which is really the question I was wondering about when I started this thread. But I'll continue by laying out the family of perspectives surrounding this issue.

Next: non-reductive physicalism vs functionalism:
god must be atheist December 09, 2019 at 17:27 #361138
Quoting mcdoodle
it won't, indeed, go off-piste as humans would and tell you how moved it was by its grandfather's wartime experiences.


Well, maybe it would if only it had its own grandfather. :-) Which served in the war. :-) And had experience-ready capabilities. :-)
180 Proof December 10, 2019 at 00:10 #361279
Reply to bert1 I agree with Pfhorrest's take on MR functionalism. His 'panpsychism', however, I don't accept; as far as I'm concerned, the notion posits an ad hoc appeal to ignorance (i.e. WOO-of-the-gaps) from which is 'derived' what amounts to nothing more than, in effect, a compositional fallacy (if some part has 'phenomenal experience', then the whole has (varying degrees of discrete(?)) 'phenomenal experience' :roll: ), which of course doesn't, even in principle, explain what it purports to explain.
armonie December 10, 2019 at 00:35 #361287
Whatever gave rise to the uniqueness of these emerging states is linked to the overall evolution of the structure.
mcdoodle December 10, 2019 at 16:17 #361531
Quoting Wayfarer
Could you hurt it? Cause it to feel physical or emotional pain?


I am only proposing that you can give a social robot enough of the appearance of a carer for humans to feel comfortable interacting with it. It seems to me that AI is now sophisticated enough to give a machine, for example, parameters that would represent our two broad theories of other minds, i.e. simulation theory or theory theory. And the social robot would have a head start with its human because it would indeed appear to be reading the human's mind, as that would be its primary purpose: to provide the care needed, including anticipating future needs. For example, if a doddery person falls over when they stand on more than one occasion, a machine could perfectly well begin to anticipate and help out with that. Clever dogs are already trained to do that to a limited degree.
frank December 10, 2019 at 16:37 #361541
Quoting armonie
Whatever gave rise to the uniqueness of these emerging states is linked to the overall evolution of the structure.


I'm not sure I'm understanding you, but spider eyes evolved separately from human eyes. Could arachnids continue to evolve into creatures with rich inner worlds with some commonality of visual experience with humans? If not, why not?
Wayfarer December 10, 2019 at 20:05 #361595
Quoting mcdoodle
I am only proposing that you can give a social robot enough of the appearance of a carer for humans to feel comfortable interacting with it.


True enough, but it still doesn’t amount to being able to feel pain. So Putnam’s idea of ‘multiple realisability’ doesn’t extend to the domain of robots or AI.

I reiterate that I’m dubious about the effectiveness of referring to ‘pain’ as a ‘psychological state’. It seems to me a physiological state. I think a much more philosophically sophisticated argument could be constructed around the argument that the same ideas can be realised in multiple ways - different languages and even different media. So, the argument would go, if the same proposition can be represented in a vast diversity of symbolic forms, how could the meaning of the proposition be reduced to anything physical? In other words, if the information stays constant, while the physical form changes, how can you argue that the information being conveyed by the symbols is physical? To do so is to mistake ‘brain states’ for symbolic forms, which they’re surely not.
Galuchat December 10, 2019 at 22:45 #361658
Reply to Wayfarer
Pain interoception (nociception) is a type of corporeal state perception (sensation mental effect). So, pain is a psychological state caused by a physiological state (sensation).

In other words: the physical information of nociception becomes the semantic information of pain.
Wayfarer December 10, 2019 at 23:39 #361688
Reply to Galuchat :up: Thanks. That makes it a little clearer. I still think it's a pretty lame argument, so I suppose I ought to butt out.
frank December 10, 2019 at 23:47 #361690
Functionalism in philosophy of mind is kin to behaviorism except it identifies psychological states as mediators of a pattern of causes and effects. The cause of the joy at hearing Beethoven is auditory stimulation (at least in part). The effect is smiling or clapping. It could be that the main thing all experiences of joy have in common is that they might inspire a person to say "I feel joy."

In some ways MR is compatible with functionalism, but Putnam used MR against functionalism. Exploring that would take me further down the path of functionalism than I really wanted to go. Any comments welcome, though.
Wayfarer December 11, 2019 at 01:42 #361702
Behaviourist after torrid love-making session with professional colleague:

'That was wonderful for you, dear. How was it for me?'
frank December 11, 2019 at 14:01 #361823
Quoting Wayfarer
Behaviourist after torrid love-making session with professional colleague:

'That was wonderful for you, dear. How was it for me?'


Functionalism isn't like that. It emphasizes outer causes and ramifications over internal neural states.

It does make sense to think about social norms, as well as individual psychology, when we think about psychological states. A person who experiences a lot of pain every day will rank a pain as minor when the same physical condition could be experienced as horrific by someone else.

An aspect of pain for humans is the so-called "pain of the pain." This is distress arising from the expectation or memory of pain.

Considering that kind of thing, functionalism makes sense.
armonie December 13, 2019 at 04:00 #362456
The meaning of the proposal is reduced by the transducer, but, if it argues that the information remains constant, in an ideal sense, always, without interference, then the formal objectifications could be the ones that give meaning to the meaning. Like to tell that it is not the transducer that means the idea, but rather that it can happen the other way around; that formal objectification means the transducer.
Wayfarer December 13, 2019 at 04:13 #362465
Quoting armonie
The meaning of the proposal is reduced by the transducer, but, if it argues that the information remains constant, in an ideal sense, always, without interference, then the formal objectifications could be the ones that give meaning to the meaning. Like to tell that it is not the transducer that means the idea, but rather that it can happen the other way around; that formal objectification means the transducer.



"A transducer is a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another."

I'm not addressing the conversion of energy. I'm arguing that because the meaning of a proposition can be represented in different symbolic forms and even in different media, the meaning, or intelligible content, of the proposition is separable from the physical representation. It's suggestive of a form of dualism. As far as I'm aware, it's a novel argument.
armonie December 13, 2019 at 04:20 #362469
The information, isn't it, energy?
Wayfarer December 13, 2019 at 04:31 #362477
Quoting armonie
. The information, isn't it, energy?


No - that's the point. Information is not reducible to energy. There's a famous aphorism to that effect by the creator of cybernetics, Norbert Wiener: 'Information is information, not matter or energy.'

And this doesn't necessarily imply Cartesian dualism. I'm not arguing for information as a substance. I think its nature is very elusive. But it can be shown that it can't be explained in terms of 'arrangements of objects', whether they are particles or whatever.

(And, who or what is Caminante??)
armonie December 13, 2019 at 04:48 #362484
Wayfarer December 13, 2019 at 04:52 #362487
Reply to armonie Well - I can communicate with you. Whether you understand what I'm trying to say is another matter, but I'm trying to convey an idea. And, I'm saying, an idea can be represented by many different symbolic systems, yet still remain identifiable, and I think that says something.

Explored at greater length in this post.
armonie December 13, 2019 at 05:54 #362513
Wayfarer December 13, 2019 at 05:59 #362522
Reply to armonie Music requires a listener. The sounds exist irrespective of whether there is a listener but they’re only music to a listener.
frank December 14, 2019 at 21:36 #363152
This is an interesting argument against reductionism:

From Pylyshyn (1984):

Jim sees an auto accident. He goes to a phone and dials 91. What will he do next? Most likely he'll dial another 1.

The explanation for this is a systemic generalization between

A. What he recognized
B. His background knowledge
C. His resulting intentions, and
D. That action

A reductionist's explanation will be too weak because the specific neural events and muscular contractions involved here will only be associated with one way of learning, coming to know, and the action of dialing (he could dial with a pencil, a toe, voice recognition, etc).

Because of multiple realizability, a reductionist can't capture all capturable generalizations, a tenet of scientific methodology.
softwhere December 15, 2019 at 00:02 #363172
Quoting armonie
You can communicate with me because we belong to a specific linguistic community, that is where the symbolic operates, in the specific, true, but it still has a physical support.


I agree. As I understand Derrida, one of the deep fantasies of philosophy is meaning without 'physical support,' meaning without a vessel that is directly present 'in' or 'for' some mind. Can I talk to myself without an historically generated language? Can I talk to myself at all in the sense of learning anything from this monologue? I think that we do learn from talking to ourselves. The symbols don't refer to timeless entities but are caught up in time and recontextualization.
armonie December 15, 2019 at 03:08 #363203
I wonder if I have ever been able to think, or If I was just repeat someone else's words?
softwhere December 15, 2019 at 03:30 #363205
Quoting armonie
I wonder if I have ever been able to think, or If I was just repeat someone else's words?


Clearly we think as individuals. Wouldn't you agree? But we think in the words of the tribe.
Wayfarer December 15, 2019 at 03:49 #363211
Quoting frank
Because of multiple realizability, a reductionist can't capture all capturable generalizations, a tenet of scientific methodology.


That's because language-using beings orient themselves to the world via meaning.

Quoting frank
because the specific neural events and muscular contractions involved here will only be associated with one way of learning,


Like I said before, and no-one seems to notice this, it's nuts to think that 'brain states' represent anything whatever. That's the hangover of Locke's representative realism, but it's completely untenable, because it mistakes neurology for semiotics, whereas neurology works at a completely different level from semiotics, representation, language, and the like.

Quoting softwhere
Clearly we think as individuals.


The individual - 'me' - exists like the foam on a wave on an ocean. The most recently-arrived and most ephemeral of beings.
softwhere December 15, 2019 at 04:01 #363216
Quoting Wayfarer
The individual - 'me' - exists like the foam on a wave on an ocean. The most recently-arrived and most ephemeral of beings.


Indeed. The human being is a radically historical and social being. What I am pre-philosophically inclined to call 'my' reason is the work of centuries. More locally, the human being without a tribe is unthinkable. We are born helpless with necks too weak for our heavy heads. A human brain that doesn't learn a language is largely wasted.

Our quickly senescent bodies would be pathetic indeed were they not the vessels of a time-binding software or 'philosophical subject.' If philosophy is the religion of self-consciousness, then the self that is known is not primarily the helplessly mortal self (we have magazine quizzes for that) but the human in its/our unfolding potential. The materiality of the signifier and material in general are crucial for time-binding, for the human being to lift itself up from superstition and poverty (its immersion in nature, one might say).
softwhere December 15, 2019 at 04:08 #363218
Quoting Wayfarer
That's because language-using beings orient themselves to the world via meaning.


We might say that this orientation is meaning. The mind/matter distinction is a historical contingency. The beetle in the box is problematic.

[quote=Wiki]
Wittgenstein invites readers to imagine a community in which the individuals each have a box containing a "beetle". "No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle."[16]

If the "beetle" had a use in the language of these people, it could not be as the name of something – because it is entirely possible that each person had something completely different in their box, or even that the thing in the box constantly changed, or that each box was in fact empty. The content of the box is irrelevant to whatever language game it is used in.

By analogy, it does not matter that one cannot experience another's subjective sensations. Unless talk of such subjective experience is learned through public experience the actual content is irrelevant; all we can discuss is what is available in our public language.

By offering the "beetle" as an analogy to pains, Wittgenstein suggests that the case of pains is not really amenable to the uses philosophers would make of it. "That is to say: if we construe the grammar of the expression of sensation on the model of 'object and designation', the object drops out of consideration as irrelevant."
[/quote]
https://en.wikipedia.org/wiki/Private_language_argument

If the notion of pure mind is threatened, then so is the notion of pure matter. Indeed, 'mind' and 'matter' are troubled in the same way by the argument above. Private meaning is problematic. And yet I depend on the same system of signs that I use to unveil the strangeness of this system.

softwhere December 15, 2019 at 04:25 #363220
Quoting Wayfarer
I'm arguing that because the meaning of a proposition can be represented in different symbolic forms and even in different media, then the meaning or the intelligible content of the proposition, is separable from the physical representation. It's suggestive of a form of dualism. As far as I'm aware, it's a novel argument.


With formal languages perfect translation (between media) is not only possible but common. And I agree that this is fascinating indeed. But non-formal languages are famously only imperfectly translated. The act of reading is also creative. Moreover the writings of the past are changed (recontextualized) by the writings that come after. What is 'the ideality of the literary object'? It's a 'spiritual realm,' as I see it. But this spiritual realm also seems to be dynamic, caught up in time, and subject to dissemination.

The dualism is still there, but isn't this culture versus nature?
Wayfarer December 15, 2019 at 06:31 #363242
Quoting softwhere
(its immersion in nature, one might say).


More like its clinging to and grasping of the sensory domain (which ends up being the meaning of 'empiricism').

I heard that Schrodinger's cat had eaten Wittgenstein's beetle, although others heard differently.

Quoting softwhere
It's a 'spiritual realm,' as I see it.


It’s the ‘formal realm’, I think - the domain of laws, conventions, number, logic and the like. We ‘see’ it through the ‘eye of reason’. Whereas the spiritual realm is seen through ‘the eye of the heart’ according to mystic lore.
Wayfarer December 15, 2019 at 06:47 #363244
Reply to softwhere I don’t know if I have mentioned the intriguingly-named philosopher Afrikan Spir, but do look him up on Wikipedia, specifically the paragraph on ‘ontology’.

Then, in light of that, consider that the only perfect application of the word ‘is’ is the equals sign. Other usages of the word ‘is’ are only ever approximations.
softwhere December 15, 2019 at 08:30 #363250
Quoting Wayfarer
More like its clinging to and grasping of the sensory domain (which ends up being the meaning of 'empiricism').


We need the sensory domain, though. Since we are fundamentally social beings, it's our sense organs and our flesh generally that make language and thought possible.

Quoting Wayfarer
I heard that Schrodinger's cat had eaten Wittgenstein's beetle, although others heard differently.


That looks like a dodge.

Quoting Wayfarer
It’s the ‘formal realm’, I think - the domain of laws, conventions, number, logic and the like. We ‘see’ it through the ‘eye of reason’. Whereas the spiritual realm is seen through ‘the eye of the heart’ according to mystic lore.


Note the necessary appeal to metaphor. I understand the metaphor and agree with it. This metaphoricity is one of the ways that natural language exceeds formal language.

I also like 'eye of the heart.' This metaphor emphasizes the passion involved in the 'spiritual.' I realize that some might understand metaphor to be a reductive concept, but as Derrida noted: if metaphysics is metaphorical, then metaphor functions metaphysically within such an assertion. To compare God to a literary object is as much a promotion of literature as it is a demotion of God. Alternative approaches (justifying God as a scientifically defensible entity) seem the wrong way to go (I think you agree here.)
Wayfarer December 15, 2019 at 10:04 #363254
Reply to softwhere Fine thoughts. However, I really feel as though we've derailed Frank's thread (mea culpa), so let's try a recap. From the OP:

Quoting frank
'Multiple realisability' is a response to the flaw in brain-state reductionism: it doesn't appear to be possible to correlate a particular brain state to a psychological state (like pain). This flaw is particularly noticeable when we think about the broad range of creatures who can feel pain: their varying anatomy makes it seem impossible to make this correlation.


The SEP entry on 'multiple realisability' says something similar:

The multiple realizability thesis about the mental is that a given psychological kind (like pain) can be realized by many distinct physical kinds: brain states in the case of earthly mammals, electronic states in the case of properly programmed digital computers, green slime states in the case of extraterrestrials, and so on.


I chipped in to say that

Quoting Wayfarer
I've always felt that there's a much stronger argument for MR than just pain, in that neuroscience can't find any objective correlation between 'brain states' and all manner of mental phenomena, including language.


I will add, I think the talk about 'electronic states' and 'slime states' is really typical of the kind of nonsense that passes for philosophy nowadays even though I might be sympathetically inclined to the basic argument.

Frank then introduced a refinement to the OP, to wit:

Quoting frank
Jim sees an auto accident. He goes to a phone and dials 91. What will he do next? Most likely he'll dial another 1.

The explanation for this is a systemic generalization between

A. What he recognized
B. His background knowledge
C. His resulting intentions, and
D. That action

A reductionist's explanation will be too weak because the specific neural events and muscular contractions involved here will only be associated with one way of learning, coming to know, and the action of dialing (he could dial with a pencil, a toe, voice recognition, etc).

Because of multiple realizability, a reductionist can't capture all capturable generalizations, a tenet of scientific methodology.


So I will bow out at this point as plainly the kind of argument I have in mind is completely different to anything intended by the OP.
bert1 December 15, 2019 at 11:25 #363263
Quoting 180 Proof
compositional fallacy


I've never come across a panpsychist saying a whole must be conscious because some of the parts are (although no doubt there will be such people, I may even be one of them, although I don't recall making an argument of that form). If you are fallacy hunting, wouldn't the fallacy of division be more apt? Namely that the parts must be conscious because the whole is?

I could understand you crying foul in terms of a divisional fallacy. Are you sure that's not what you mean?
frank December 15, 2019 at 15:37 #363299
Quoting Wayfarer
Frank then introduced a refinement to the OP, to wit:


I'm actually just plowing through the SEP article you posted. It's kind of like homework so I can understand various angles on the concept of emergence.

Your contributions have been welcome.
frank December 15, 2019 at 20:29 #363364
Another related avenue: psychological generalization vs situated cognition
Galuchat December 16, 2019 at 15:18 #363615
Quoting frank
I'm actually just plowing through the SEP article you posted. It's kind of like homework so I can understand various angles on the concept of emergence.


Multiple Realizability is consistent with current Natural Science (inductive evidence).

Corporeal and mental events are mutually dependent, but incommensurable because:
1) While correlation can be demonstrated, causation cannot.
2) Corporeal and mental data are accessed at different levels of abstraction (i.e., Neurology and Psychology).

Also, neuroplasticity is a fact (ruling out the possibility of epiphenomenalism, which is consistent with psychoneural identity theories).

It is obvious that body and mind are open sub-systems of (at least certain) organisms (e.g., those having a central nervous system). Body is open to mind and environment, and mind is open to body. But mind cannot be a sub-system of body if neuroplasticity is a fact.
frank December 16, 2019 at 15:29 #363617
Quoting Galuchat
Multiple Realizability is consistent with current Natural Science (inductive evidence).

Corporeal and mental events are mutually dependent, but incommensurable because:
1) While correlation can be demonstrated, causation cannot.
2) Corporeal and mental data are accessed at different levels of abstraction (i.e., Neurology and Psychology).


Right. This is the conclusion of the Pylyshin argument I discussed above. So the stance that best meshes with scientific methodology is non-reductive.

Quoting Galuchat
Also, neuroplasticity is a fact (ruling out the possibility of epiphenomenalism, which is consistent with psychoneural identity theories).


Yep. I mentioned neural plasticity, just didn't delve into it.
Galuchat December 16, 2019 at 15:33 #363619
Reply to frank
Which brings us to emergence.
frank December 16, 2019 at 15:51 #363628
Quoting Galuchat
Which brings us to emergence.


Yep. I'd like to start a thread discussing emergence, I'm not quite there, though. Been busy.
Galuchat December 16, 2019 at 16:01 #363634
Reply to frank
So emergence has nothing to do with Multiple Realizability?
Fair enough.
frank December 16, 2019 at 16:29 #363640
Quoting Galuchat
So emergence has nothing to do with Multiple Realizability?


I think you'll find, Galuchat, that everything is related to everything else.
Galuchat December 16, 2019 at 17:33 #363652
Reply to frank
Inanity.
Sounds like you're spent, so I'm outta here.
frank December 16, 2019 at 17:57 #363653
Quoting Galuchat
Inanity.
Sounds like you're spent, so I'm outta here.


Up next is challenges to multiple realizability. Please do be outta here.
frank December 19, 2019 at 20:55 #364693
One of the objections to multiple realizability is that identity of psychological states across species is speculative. Since we can question whether goats actually feel pain, the central argument for MR is undermined.

So would a consequence of rejecting multiple realizability be that we have to let go of assumptions about other species' experiences?

More to the point, considering neural plasticity, should we drop the assumption that pain is something we all experience?

frank December 21, 2019 at 14:24 #365162
The problem a reductionist faces is that instances of reduction will appear to be about individuals at specific times, which clashes with a scientific outlook. It seems to leave us without principles to apply or predictions to test.

Or so it seems. Consider the reduction of temperature to molecular behavior. I can't predict what will be going on with any particular molecule at any particular time, yet I can say something about the mean. Per SEP:

"Following suggestions by Clifford Hooker (1981) and Enc (1983), Bickle (1998, Chapter 4) argues that the radical type of multiple realizability (in the same token system over times) is a feature of accepted historical cases of scientific reduction. It even obtains in the “textbook” reduction of classical equilibrium thermodynamics to statistical mechanics and microphysics. For any token aggregate of gas molecules, there is an indefinite number of realizations of a given temperature—a given mean molecular kinetic energy. Microphysically, the most fine-grained theoretical specification of a gas is its microcanonical ensemble, in which the momentum and location (and thus the kinetic energy) of each molecule are specified. Indefinitely many distinct microcanonical ensembles of a token volume of gas molecules can yield the same mean molecular kinetic energy. Thus at the lowest level of microphysical description, a given temperature is vastly multiply realizable in the same token system over times. Nevertheless, the case of temperature is a textbook case of reduction."
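
The point is easy to illustrate numerically. Here's a minimal sketch (toy speeds, unit mass assumed, not a physical simulation): two distinct "microstates" — different per-molecule speed assignments — that nonetheless realize the same mean kinetic energy, i.e., the same "temperature".

```python
import math

def mean_kinetic_energy(speeds, mass=1.0):
    """Mean of (1/2) m v^2 over all molecules."""
    return sum(0.5 * mass * v * v for v in speeds) / len(speeds)

# Two distinct microstates of a four-molecule "gas" (hypothetical numbers):
ensemble_a = [1.0, 1.0, 1.0, 1.0]                      # all molecules alike
ensemble_b = [math.sqrt(2), 0.0, math.sqrt(2), 0.0]    # energy concentrated in two

print(mean_kinetic_energy(ensemble_a))  # 0.5
print(mean_kinetic_energy(ensemble_b))  # 0.5 (up to rounding)
```

Different realizers, identical higher-level property — yet temperature still reduces to statistical mechanics. That's the pressure the SEP passage puts on the claim that multiple realizability by itself blocks reduction.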