Is pencil and paper enough?
Let's say an algorithm were discovered that would give machines the ability to have experiences of color, sound, or whatever. Think of Data's first-person experience on Star Trek (there was an episode in which he dreams), or Ava from Ex Machina imagining being outside (the film shows her first-person experience of imagining it).
Now let's say we took this algorithm and put it to paper. A billion Chinese human computers calculate the experiences that Data or Ava have. Is there still an experience associated with the pencil-and-paper calculation? Does the computation imagine the colors, sounds and smells of being outside? The sensation of freedom?
If not, then what makes the processor(s) in Data, Ava or any potential computer different from pencil and paper? Is there something metaphysically magical about an electronic (or positronic) processor? What about neurons? Assuming they compute (which quite a few do), what makes them special? Does somehow hooking up a whole bunch of processors make the magic work?
This is the point that Jaron Lanier made in his paper, "You can't argue with a zombie", in which he used a meteor shower to imagine a computer that simulated a person.
Pencil and paper can't get you a cup of "Earl Grey Tea, Hot!" or play Chopin, or win against you in a Chess match, et cetera. Pencil and paper can't even read itself.
Why not just program qualia on your laptop?
Paper and pencil, or printed code, isn't enough. A human being can read instructions for a performance -- Swan Lake or making a cake -- but reading the instructions does nothing. Until the individual executes them -- dances, or breaks eggs -- nothing happens.
I don't quite understand how human beings performing computer code would result in an experience.
LtCdr Data has to execute instructions of some kind to notice what is going on around him. So do we. We are much less aware of our instruction set, until we try to do something new and difficult. Then we have to step our way through the instructions. A computer's memory contains instructions, but they have to be executed for the computer to "experience" anything.
Whether materials other than those that comprise brains can exhibit the properties in question is a good question, one to which we currently have no answer.
To be clear, Marchesk is not talking about a set of instructions once committed to paper and just sitting there. He is talking about a person or lots of people performing those instructions with pencil and paper instead of silicon and electric potentials.
P.S. The OP question is, of course, a variant of the Chinese Room problem.
If so, I agree wholeheartedly.
Also, I'm delighted that you used pencil and paper, which I think is still the greatest technological innovation that our species has accomplished.
Interesting. Why that one in your opinion over fire, clothing, or the printing press?
It is, except the focus is on conscious experience and not understanding. Arguably, a fair amount of progress has been made in computer understanding with machine translation, image recognition, search algorithms, etc. But no progress whatsoever, as far as anyone can tell, has been made on experience.
Humans were computers before electronic computers existed. Is there a reason why enough humans given enough time can't compute any algorithm? How is that different from a Turing machine with infinite tape?
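The "human computer" point can be made concrete. Below is a minimal sketch of a Turing machine simulator in Python: the machine itself is nothing but a lookup table of rules, and each step is mechanical enough that a person with pencil and paper could carry it out instead of a processor. The bit-flipping machine at the end is just my own toy example, not anything from the thread.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Follow (state, symbol) -> (write, move, next_state) rules until halt."""
    cells = dict(enumerate(tape))  # a sparse dict stands in for the infinite tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")              # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write                        # write to the current cell
        head += 1 if move == "R" else -1           # move the head one cell
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "1011"))  # → 0100
```

Nothing in the rule-following depends on the substrate; that's the whole question the OP is pressing.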
Quoting tom
Does anyone have any idea what sort of algorithm that would be? The point is to ask what it is about algorithms which could lead to experience.
You can take this as a criticism either against the computational theory of mind, or a criticism against universal computation (the substrate doesn't matter).
Because racism. Only a Chinese substrate will realize a true Turing machine. God is Chinese, and Searle messed up by having the room output Chinese, otherwise he had a solid argument. Silly Searle.
But really, probably because China is known for having more than a billion people. India would have worked. Africa is just too general. I don't even know how many people live on the continent, and I'm pretty sure there aren't 1 billion Europeans.
Why 1 billion? Because it's nice big number, but not too big for there to be that many people. So now you know!
1.216 billion. See, they could do it too.
I was imprecise. Most experts agree that 1.378 billion humans is needed to implement a universal Turing machine.
Far fewer Datas would be needed, though. Only about 575 Datas could emulate their dreams on paper.
So you subscribe to an identity theory of mind. The physical substrate is necessary for conscious experience. Has to be squishy meat.
True, but then neither can software. You have to have peripheral devices hooked up to your computer to do all that.
There are a great many practical difficulties in encouraging 1 billion people to cooperate in cranking out an algorithm. What makes you think 1 billion is enough?
Much easier to use a laptop, surely!
Quoting Marchesk
No. It is a pressing philosophical problem.
Quoting Marchesk
Denial of known physics is always an option, particularly when there are no consequences that matter.
Quoting Marchesk
Or it could be because of a very famous thought-experiment.
I'm not aware that physics requires universal computation to be the case, only that some have asserted that all physical processes can be computed. Sounds like an ontological claim to me, but maybe there is a mathematical proof for this?
Even if so, the big challenge would be to show that everything about the living brain is reducible to physics.
There exists a proof.
Quoting Marchesk
No! What is required is to demonstrate that any finite physical system can be simulated to arbitrary accuracy by finite means on a universal computer.
You think the brain has some non-physical aspect to it?
I don't know whether non-reductionism is the case or not. Some physicalists subscribe to emergentism at different levels. I'm also not sure whether physicalism is the case. Maybe someone will figure out how to give a physical explanation for consciousness, but maybe not.
In addition to that, I'm skeptical that functionalism is entirely substrate independent. I kind of think that the sort of bodies we have determines the kind of minds we have.
I'm happy to give ground to anybody that thinks Fire is more important (or maybe even spoken language - although I wonder whether some might class that as an evolved ability rather than an invention). I just love writing and drawing on paper with pencils, is all. :D
If we consider some of the claims by transhumanists or AI enthusiasts, then the right sort of computation will result in experience.
Consider the idea of mind uploading. If you could emulate your brain in software, would it have experiences? If so, then would the paper equivalent?
What is it about computation, or translations from some sets of symbols to other sets of symbols, that could produce a state of conscious awareness? I don't get it. Far more convincing is the idea of conscious awareness being, like photosynthesis, a biological phenomenon.
I don't know, but quite a few people think the mind is computable, and don't like the idea of some important mental aspect being unique to human physiology.
But what motivated the Chinese Room and similar thought experiments is the very idea that without conscious experience there is not "true" understanding. All along, it wasn't technical competence of the AI that was at issue.
Of course, what constitutes "true" understanding, as well as "true" conscious experience, is anyone's guess. I don't think there is a metaphysical truth of the matter here, because we are ultimately just stipulating how we are going to use words such as "understanding" and "conscious experience". That is, unless one intends to posit some positive metaphysics specific to consciousness - you know, the soul or some such.
We don't have to use those words. The sky looks blue to me on a clear, sunny day. But if I could see the rest of the EM spectrum in some range of color, it would look quite different. But what color is the sky when nobody's looking?
That might sound like a silly question, but consider that asking about other properties of light or the atmosphere when nobody is looking is answerable by physics. So then, where does the experience of color come from, if it's not in the sky or photons of visible light?
A tempting answer is to say that the visual cortex of the brain generates color. But when the brain is examined, there is no color to be found there, of course. So where is that color experience taking place?
Maybe it's in the interaction between the visual system and the environment. But that's just moving the problem from the brain to the entire visual system. There is still no color to be found. It's only there when someone is experiencing it.
So we end up with an objective/subjective divide. The objective account of vision leaves out the color experience.
That's why when we want to know what a bat experiences, if anything, when using echolocation, we have no way of answering that question, since we lack bat experiences, unless we can correlate bat neurophysiology for echolocation with our physiology for some experience we have.
It's the same problem a person blind from birth will have in trying to imagine what a rainbow experience is like. No amount of third-person explanation can relay color experiences.
I'm under the impression that modern philosophers don't appeal to the soul when defending versions of consciousness which aren't explainable in physical terms. Rather, they come to the conclusion that physicalism is false.
Do physicalists think consciousness is "explainable" in physical terms? Life isn't even explained in physical terms, but rather in terms of abstractions that supervene on the physical.
They think consciousness is explainable in term of abstractions that supervene on the physical, such as neuroscience. So if neuroscience can fully explain color experience (at some point in the future), then it's physical.
More broadly, it's about whether an objective account can be given for subjectivity. Tying this back to the OP, if there is such an objective account, then it might be computable, and if so, then there should be some algorithm for computing an experience of seeing blue. And if that's the case, then why wouldn't a pencil and paper computation of the algorithm result in that experience?
Inert things don't think or feel, and I think it is a mistake (voodoo) to attribute either property to inanimate objects. Pencil and paper are tools same as the computer. All tools have some designed function, pencil to write, paper to be written upon and computer to conserve paper and save tired wrists :). The fact that the computer has a drastically more complex design does not make it anything more than a tool.
I doubt that the science which will deal with the abstractions that are conscious will be neuroscience. When the philosophical breakthrough is achieved, the natural place for the science to be placed is within psychology. The theory will be at the appropriate level of abstraction.
Quoting Marchesk
Is there an objective account of life? Can a "pencil and paper" be alive?
How is the algorithm realised? i.e. turned into physical form? It requires an interpreter - otherwise it is just marks on paper. What device does that? Is there such a device?
Is a sufficiently sophisticated simulation of a living organism alive? At that point it might be a matter of how we wish to use words, although it could have ethical and legal ramifications at some point, if it's simulated human life.
If you watch or read any science fiction, you've probably come across advanced virtual worlds where characters in those worlds experience their digital reality like we do the physical world. In the book "Permutation City", set in the 2050s when brain scans are detailed enough, digital human copies live in virtual worlds.
We can ask a Chalmers type question about all such scenarios. Are our digital copies p-zombies, or does it make sense to suppose they see color, hear sounds, etc? I don't think saying it's just a matter of how we wish to use words helps here.
Consider that you were given the option of uploading your mind to a virtual world where you don't have the same physical limitations, such as growing old. But the process is destructive to your physical self. Do you do it in anticipation of experiencing the joys of digital life? Or do you suspect that your digital self is just a bunch of 1s and 0s that won't experience anything at all?
If you think that your digital self can have experiences, then why not a pencil and paper version? What difference does the substrate matter? By the 2050s, it could be a quantum computer server farm instead of silicon and electricity.
What is an algorithm computed by a processor? It's just shuffling around 1s and 0s, right? Or to be more precise, it's just moving electricity around.
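To make vivid what "shuffling 1s and 0s" amounts to, here's a sketch of addition reduced to nothing but XOR, AND, and shifts - the same carry-propagation steps a human computer could perform by hand with pencil and paper. This is my own illustration, not anything specific from the thread.

```python
def add(a, b):
    """Add two non-negative integers using only bitwise operations."""
    while b:
        carry = (a & b) << 1  # positions where both bits are 1 carry leftward
        a = a ^ b             # sum of the bits, ignoring carries
        b = carry             # repeat until no carries remain
    return a

print(add(19, 23))  # → 42
```

Whether the steps are performed by transistors or by hands moving graphite, the symbol-shuffling is identical.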
If we wanted, we could have a billion robot arms doing the writing on paper. Naturally, these would be of Chinese manufacture.
I agree, and that was Jaron Lanier's point to the functionalists who think that the mind can be computed, which is why he came up with a bizarre scenario of using a meteor shower instead of a billion Chinese to implement a digital simulation of a person. For functionalists, the substrate is immaterial, as long as it provides the functionality.
Right. Which is why a computer is essentially a highly powerful, miniaturised abacus. It's a box of switches, which outputs electrical signals. But I see no grounds to suppose such a device or its components are subjects of experience or 'have experiences'.
That is why I don't accept the claims of AI or trans-humanism. They make the fundamental mistake of equating computer operations with experience. So:
Quoting Marchesk
No. Humans might attribute such a simulation with life or agency, but that is a projection on their part.
The problem with this whole subject is that 'mind' is an implicit factor in human intelligence. The mind does considerably more than compute.
Logic, DNA and Poetry.
Exactly! But there has been an attempt to do that. The project is called Cyc. It's an attempt to codify human common sense, providing a program with the knowledge needed to reason like a human being. The philosophy behind the project is summarized as, "Intelligence is 3 million rules". So, a bunch of propositions linked together in appropriate ways, permitting the right sort of inferences.
I first read about this in the 90s, and it was immediately apparent to me that this is not what human intelligence is. But, Marvin Minsky, a founder of Artificial Intelligence, has stated recently that it has been the only real attempt in AI research to create common sense in a machine, which Minsky sees as fundamental to creating human level, or general purpose AI.
The brain doesn't generate color, it experiences color (or rather, your entire organism experiences color, since the brain does not function in isolation from the rest of the organism). It would be senseless to examine the brain looking for the experience of color - what would you expect to find? When you want to drive somewhere, do you just sit and stare at your car, expecting the driving to happen by and by?
But this is veering away from the OP and towards a well-worn debate about qualia. The OP was addressed to those who already accept that consciousness can be realized in a computer, and more broadly, in any system that possesses the same structure and undergoes the same processes as those that are supposedly responsible for producing consciousness in the brain. You didn't have many takers. Instead, some flatly stated that only "meat," so to speak, can be conscious. Or conversely, that "inanimate" things or "tools" cannot. But what are the reasons for such declarations? Or are they made by way of stipulating the very definition of consciousness? Something that Russell described (in a different context) as having "all the advantages of theft over honest toil?"
But let me ask in my turn: can any amount of "honest toil" yield objective criteria for having consciousness? I think the answer is "no". If you think that consciousness can only be realized in "meat," then that is so, by definition. But on the flip side, this doesn't resolve any interesting philosophical questions, this only resolves the meaning of the word "consciousness" in your preferred usage.
It may be fun to do some further thought experiments though. What if a few neurons in your brain were seamlessly replaced by a microcomputer (or a small Chinese city doing calculations with pencil and paper, if you like)? Would you still be conscious? Would you still be you? Well, you see where this is going...
My take is that there is no objectively right or wrong answer. And no way to justify any answer given.
Is a meteor shower computationally universal?
Lanier's argument was that any physical system is, if you squint at it just right. Meaning, we interpret (and build) our computing devices to be manipulating symbols because that's useful to us. But a computer doesn't really operate on 1s and 0s (or high/low or on/off). That's just an interpretation. The real functionality is driven by physics, not computer science or boolean algebra. As such, aliens might think our computers were heaters (they produce heat).
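That interpretive point is easy to demonstrate: the very same 32-bit pattern means entirely different things depending on the reading convention we bring to it. The physics fixes the bits; we supply the meaning. A small Python illustration (mine, not from Lanier's paper):

```python
import struct

# One fixed 32-bit pattern, written out as four bytes (big-endian).
bits = struct.pack(">I", 0x42280000)

# The same bytes, read under two different conventions we invented.
as_int = struct.unpack(">I", bits)[0]    # as an unsigned integer
as_float = struct.unpack(">f", bits)[0]  # as an IEEE 754 float

print(as_int)    # → 1109917696
print(as_float)  # → 42.0
```

Nothing in the hardware prefers one reading over the other; the "computation" is in the interpretation.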
If we wanted to, we could interpret other physical systems to be doing computations. But you have to read the paper to see how he goes about setting up the meteor shower computer thought experiment, and see whether you agree with him.
His fundamental point is that computation is cultural (physical systems don't actually manipulate symbols), not ontological, but that consciousness is ontological, and the role it plays is to select how we experience reality, out of the many ways it could be experienced (given what we know about physics). As such, conscious beings determine what computation is, not the other way around.
And so some physical systems have experiences, like my brain/body, and others don't, like my car (which could be smart and drive itself these days) or the rock I kicked.
That's why it remains problematic for physicalism.
Where is the problem? Some systems are cars and others are not. Is that a problem too?
Physicalism can't explain why some physical systems have experience and others don't. You might ask so what, but physicalism is supposed to present a comprehensive ontology. It can't leave anything out and be true.
It's like this. (Allegedly) there is a brain which is causing this conscious experience I am having. But in order to study this brain I only have at my disposal my sensory experiences (and my thoughts). The trouble here is that those things are themselves already a conscious experience caused by a brain. So let's say somehow I examine my brain (imagine I cut my skull open and start cutting into it or something). The problem here is that what I'm examining is entirely a conscious experience. It's a visual experience of a brain, a touch experience, my thoughts, etc. But these are all themselves conscious experiences which are ALREADY being generated by a brain.
So there's an access issue here. I cannot examine my brain without using conscious experience generated by that brain. But the conscious experience generated by that brain, is NOT, the brain which is causing the conscious experience - it is what that brain is doing.
It's like I cannot step outside of my own gaze, in order to examine the eye.
So I think there is an issue of access here, in that we really cannot get at what it really is that's (allegedly - this is all just a theory that there is a brain generating our conscious experience) causing our conscious experience. We are trapped within conscious experience, and cannot step outside of that in order to examine the cause.
Which metaphysical view explains subjectivity? Actually, which other metaphysical view offers an explanation for anything?
According to physicalism, subjectivity must be a software feature.
There's always idealism, where it's mind that matters, and not the other way around. Then there's dualism, panpsychism, and neutral monism.
They have their strengths and weaknesses. Idealism doesn't have a mind/body problem, but it sure seems like we experience a material world.
Does any of them give an account of what exists, how it behaves and why?
Physicalism also can't explain why some physical systems are cars and others are not. Take any summary of physicalism as a philosophical doctrine, and more likely than not, you won't see "cars" mentioned at all. Isn't that just as bad?
Quoting Marchesk
There's your problem. While there isn't anything like a received view of what "physicalism" stands for, I've never seen it claimed that physicalism is a theory of everything, capable of answering any question that you can think of. Physicalism posits answers to certain specific questions, and that's it.
Anyway, what question are you actually asking above? What sort of answer would you accept?
Supposing it would be blue if you were to look (it might not be, of course--it could be sunset, it could be a gray, cloudy day, etc.), then it's also blue when no one is looking, from the reference point that's the same as where your eyes would be located, and with respect to the range of electromagnetic radiation that comprises visible colors for humans.
That doesn't require a human looking at it. It's just that properties are always as they are relatively, including relative to reference points.
I don't think physicalism entails functionalism or the computational theory of mind, although they're compatible.
Right, it definitely doesn't entail functionalism.
What else could it be?
Physicalism is an updated version of materialism, not the science of physics. It just says that everything is made up of whatever physics posits. Cars being made up of physical parts isn't an issue for materialists. But experience is problematic.
I didn't make this stuff up.
How experience is made up of physical stuff. Saying that meat experiences color, while cars don't because meat, isn't an answer.
Quoting SophistiCat
An answer that would make the puzzlement go away, where we could see that experience is physical stuff, probably because we were tricked by a cognitive illusion about what experience is, or something.
Physicalism isn't necessarily framed in mereological terms (I personally dislike this approach).
Quoting Marchesk
Asking "how experience is made up of physical stuff" sounds as absurd as asking how the operation of a car is made up of physical stuff. Maybe you mean something by it, but if so, you need to explain.
You could say that brains are made up of physical stuff - an awkward statement, and not very informative. But it would be a better analogy here. But brains are not consciousness, brains are conscious [of stuff] - see the difference? It's not what the brains are made out of, it's what they do.
Quoting Marchesk
I am afraid I still don't understand the reason behind the puzzlement. I mean, consciousness is a wondrous thing and it certainly has plenty to be puzzled about, but let me remind you again that physicalism isn't supposed to be an oracle that will answer all of your questions.
Because nobody so far has come up with a way to show how experience is constituted by physical parts or processes. Neuroscience falls into that category, since it's positing neurons, neurotransmitters, etc, all of which are made up of physical parts.
To put it another way, the concepts of experience don't fit into the concepts employed by biology, neuroscience, chemistry, physics. It's really an issue of whether an objective account of the world can explain subjectivity. So it applies to computationalism as well.
Max Tegmark's mathematical world has the exact same problem. If the only real properties are mathematical ones, then how can some mathematical systems have experience, since experience isn't a mathematical property or concept?
If experience is actually mathematical, then someone needs to demonstrate how that's so.
No, but it's an ontological commitment to physical systems. So if anything can't be explained in terms of some physical system, process, or parts, then the ontology is in question.
It's possible for physicalism to be false. Maybe its ontological commitments are incomplete. Experience isn't the only challenge.
Sure, so it's not what a car is made of, it's what it does.
Physical processes are part of the ontological commitment to everything being physical. Brains not in action aren't experiencing anything.
So does this help explain experience, saying that brains in action are conscious of something, but hurricanes, meteor showers or smart cars in action are not?
Why would any physical system or process be accompanied with experience? Why is my active brain/body having experiences?
You can replace physical above with functional, computational, mathematical, or objective, depending on one's ontological commitments or preferred explanations.
It's not the brain, it's the software running on the brain that has the experience.
So pencil and paper implementing that software would also have the same experience.
It's not the hardware, it's the software, and all computationally universal hardware is equivalent - i.e. irrelevant.
Pencil and paper is not computationally universal.
When coupled with a hand to calculate the symbols, why isn't it?
Pencil and paper coupled to a hand?