You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

The flaw in the Chinese Room

hypericin November 26, 2020 at 21:35 10375 views 115 comments
A distinction is missing:
Computers are not just computing machines. They are very special, singular machines: they can simulate any computing machine, including themselves. There are many many computing machines found in nature, which while too powerful to be feasibly simulated by a computer, nonetheless lack this property of universal simulation.
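To make the universality claim concrete, here is a minimal sketch of one interpreter that runs a *description* of any computing machine handed to it (the rule-table encoding here is invented for illustration; a real universal Turing machine works the same way in principle, and could in particular be fed an encoding of itself):

```python
def run_tm(rules, tape, state, steps=1000):
    """Simulate any Turing machine given as a rule table:
    rules[(state, symbol)] = (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))   # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        sym = cells.get(head, "_")
        new_sym, move, state = rules[(state, sym)]
        cells[head] = new_sym
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells))

# One particular machine: flip every bit, halt at the first blank.
flip = {
    ("s", "0"): ("1", "R", "s"),
    ("s", "1"): ("0", "R", "s"),
    ("s", "_"): ("_", "R", "halt"),
}
result = run_tm(flip, "1011", "s")  # -> "0100_"
```

Nothing in `run_tm` is specific to `flip`; it runs any rule table it is handed. That interchangeability of program and data is what the universality property amounts to.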

Searle's argument is not against computing machines understanding language. It claims that computers which simulate computing machines which understand language do not themselves understand language. This much is plausibly argued by the Chinese Room. But even if you accept that, it by no means implies that computing machines cannot understand language.

Therefore his conclusion that consciousness is bound to some kind of biological excretion is totally unwarranted.

I'm not familiar with the secondary literature, is this objection discussed?

Comments (115)

apokrisis November 26, 2020 at 22:16 #474839
Quoting hypericin
But even if you accept that, it by no means implies that computing machines cannot understand language.

Therefore his conclusion that consciousness is bound to some kind of biological excretion is totally unwarranted.


Searle did offer the argument that consciousness is a physically embodied process. And that makes the crucial difference.

Against the AI symbol processing story, Searle points out that a computer might simulate the weather, but simulated weather will never make you wet. Likewise, a simulated carburettor will never drive a car. Simulations have no real effects on the world.

And so it is with the brain and its neurons. It may look like a computational pattern at one level. But that pattern spinning can't be divorced from the environment that is being regulated in realtime. Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.

So it is not that there isn't computation or syntax in play when it comes to life and mind. Organisms do exist by being able to impose their syntactic structure on their physical environments. But there also has to be an actualised semantics. The physics of the world needs to be getting rearranged in accordance with a "point of view" for there to indeed be this "point of view", and not some abstracted and meaningless clank of blind syntax inside a Chinese Room.




bongo fury November 27, 2020 at 00:07 #474884
Quoting apokrisis
Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.


Not stalking you @apokrisis, just interested in semiotics. (But normally purchase non-bio!)

I doubt that a carburettor will function as a referring symbol merely by functioning as an actual carburettor. It would need to perform a semantic, referential function, by being pointed at things.
apokrisis November 27, 2020 at 00:59 #474901
Quoting bongo fury
I don't think that a carburettor will function as a referring symbol merely by functioning as an actual carburettor, but only by performing a semantic, referential function, and being pointed at things.


I was just citing Searle's examples. A full semiotic argument would be more complex.

For car drivers, it is the accelerator pedal that is the mechanical switch which connects to their entropic desires. The symbol that is "in mind".

The carburettor is buried out of sight as just part of the necessary set of mechanical linkages that will actually turn an explosion of petrol vapour into me going 90 mph in a 30 mph zone.

For a driver, there are all kinds of signs involved. The ones on my speedo dial that I'm relishing, the ones on the roadside that I ignore. Even just the sign of the landscape whizzing past my window at my command. Or the feeling of my foot flat to the floor as the annoying sign I can't make the damn thing go faster.

But if I'm doing all this within a car simulator, I can drive carefully or smash into the nearest lamppost without it being an actually meaningful difference. It is only when this semiotic umwelt is plugged into the physics of the world that there can be consequences that matter.

The point of talking about simulated carburettors or simulated rain storms is just to say that the syntax exists to actually do a semantic job. And that job is to regulate the material world. That is what defines semiosis - a modelling relation - so far as life and mind go.




magritte November 27, 2020 at 02:05 #474920
Quoting apokrisis
For car drivers, it is the accelerator pedal that is the mechanical switch which connects to their entropic desires. The symbol that is "in mind".


Isn't that overly simplistic in that the point of intentional action just triggers a whole range of prearranged links in the machine and unknown and at times unknowable interfaces with the environment? Just to try a couple of unlikely but conceivable cases, how does the scenario work in space or in a lake?
hypericin November 27, 2020 at 02:20 #474922
Quoting apokrisis
Simulations have no real effects on the world.


This is just not true. You can plug a simulation into the world, for example a robot: feed it inputs, and it could drive its body and modify the world.
apokrisis November 27, 2020 at 02:35 #474926
Quoting magritte
Isn't that overly simplistic in that the point of intentional action just triggers a whole range of prearranged links in the machine and unknown and at times unknowable interfaces with the environment?


I would say it illustrates a general semiotic truth. Signs in fact have the aim of becoming binary switches. Their goal is to reduce the messy complexity of any physical situation to the simplest-possible yes/no, on/off, present/absent logical distinctions.

People talk about symbols as representing or referring - a kind of pointing. But semiosis is about actually controlling. And if your informational connection to the material world is reduced to an on/off button, some kind of binary switch to flick, then that is semiosis at its highest level of refinement. It is how reality can be controlled the most completely with the lightest conceivable touch.

So how much do I need to know about the mechanics of a car to drive it? One pedal to make it go and another pedal to make it stop is pleasantly minimalistic.

And even my body is seeking a similar simplicity in its metabolic regulation. Much of its homeostatic control boils down to the switch where insulin is the molecular message that is telling the body generally to act anabolically - store excess energy. And then glucagon is there to signal the opposite direction - act catabolically and release those energy stores.

Insulin is produced by the beta cells of the pancreas. Glucagon by neighbouring alpha cells. A simple accelerator-and-brake setup to keep the body motoring along at the right pace.

So semiosis is not passive reference, but active regulation. And for a mere symbol to control reality, reality must be brought to its most sharply tippable state. It must be holistically responsive to a black and white command.

Stop/go. Grow/shrink. Store/burn. Etc.
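As a toy illustration of this reduction to a binary switch (the thresholds and step sizes below are invented for the sketch, not physiology), the insulin/glucagon pair behaves like a bang-bang controller:

```python
def regulate(glucose, low=70, high=110):
    """Return the binary chemical 'message' for a glucose reading."""
    if glucose > high:
        return "insulin"   # act anabolically: store excess energy
    if glucose < low:
        return "glucagon"  # act catabolically: release energy stores
    return "none"          # within range: no corrective signal

readings = [140, 95, 60]
signals = [regulate(g) for g in readings]  # ['insulin', 'none', 'glucagon']
```

The messy continuous variable (blood glucose) is collapsed to one of two commands, which is the "lightest conceivable touch" point being made above.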

Mijin November 27, 2020 at 02:37 #474927
When people talk about "computers" within the context of strong AI, they usually mean Turing machines i.e. something which runs a program which can be run on any other Turing machine.
If strong AI is true, my PC can run a consciousness program (perhaps extremely slowly, but that's beside the point) and my PC would be conscious.

If we are just saying a computer is some machine that computes, and is not necessarily Turing complete, then sure, the Chinese room doesn't apply.
But Searle would agree with you. He agrees that the brain is a kind of machine, and would obviously agree that we are capable of computation. The Chinese room is not about trying to prove we have a soul or whatever, it's just about whether running a word-understanding program is the same thing as understanding words, which is relevant to whether running a consciousness program is the same thing as being conscious.

For this latter point, I am not saying that the argument necessarily works. I am just saying that the objection of the OP is possibly based on a misconception.
apokrisis November 27, 2020 at 02:40 #474928
Quoting hypericin
You can plug a simulation into the world, for example a robot, feed it inputs, and it could drive it's body and modify the world.


Sure. Plug syntax into the world - make it dependent on that relationship – and away you go. But then it is no longer just a simulation, is it?

A simulation would be simulation of that robot plugged into the world. So a simulated robot rather than a real one.

And to be organic, this robot would have to be building its body as well as modifying its world. There is rather more to it.



magritte November 27, 2020 at 03:02 #474931
Quoting hypericin
conclusion that consciousness is bound to some kind of biological excretion is totally unwarranted.


Searle's experimental conditions can always be tightened to meet specific objections. Also, words like consciousness, biological, and computer can be adjusted depending on the desired conclusion. Is a supercharged C3PO conscious even if it never sleeps?
hypericin November 27, 2020 at 04:22 #474937
Quoting apokrisis
But then it is no longer just a simulation, is it?


Really? As soon as you attach inputs and outputs to the robot brain, it is no longer a simulation?
So, if the Chinese room simulated a famous Chinese general, and it received orders which the laborers laboriously translated and then computed a reply, and based on this, orders were given to troops, it is not a simulation? Seems absurd.

hypericin November 27, 2020 at 04:29 #474938
Reply to Mijin
So then would Searle agree that it is possible to build a machine that thinks? Either via replicating nature (the thought experiment of replacing each brain cell one by one with a functionally equivalent replacement), or by a novel design that just fulfills the requirements of consciousness (whatever they may be)?
Mijin November 27, 2020 at 04:50 #474940
Reply to hypericin
Yes, that's right. According to the wiki:

[quote=Wiki]
Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines".[5] Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using machinery that is non-computational. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding.
[/quote]

The argument is just against certain computational theories of the mind, and he is just trying to show that:

1. A mind "program" is not necessarily itself a mind
2. We cannot infer from behaviour alone whether subjective states and understanding are taking place

Again, I'm not saying that the argument necessarily works. I think at this point even Searle has conceded that the original argument needs further refinement. Or, alternatively, that the argument is often applied way beyond its intended scope.
Changeling November 27, 2020 at 05:13 #474941
Reply to hypericin what about the floor in it?
apokrisis November 27, 2020 at 05:39 #474945
Quoting hypericin
Really? As soon as you attach inputs and outputs to the robot brain, it is no longer a simulation?


A robot has arms and legs, doesn't it? Or at least wheels. And sensors.

Quoting hypericin
So, if the Chinese room simulated a famous Chinese general, and it received orders which the laborers laboriously translated, and then computed a reply, and based on this orders were given to troops, it is not a simulation? Seems absurd.


I'm confused which side of the argument you are running. Do you mean emulation rather than simulation in the OP?

The universality of the Turing Machine allows the claim such a device can emulate any computer. But simulation is the claim that a computer is modelling the behaviour of a real physical system.

Strong AI proponents may then claim consciousness is just an emulatable variety of Turing computation. Biological types like myself view consciousness as a semiotic process - a modelling relation in which information regulates physical outcomes.

A Turing Machine designs out its physicality. And so it becomes, straightforwardly, “all information and no physics”. The argument continues from there.

Now if I am a Chinese soldier and I’m following orders from a book, is the book conscious? Or is it me that is consciously applying some information to my material actions?

And how is the Chinese room general more than the equivalent of a book in this thought experiment?




magritte November 27, 2020 at 11:09 #475010
Reply to hypericin
OK, let me say it another way,

Quoting SEP article
The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate.
Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.
The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes.


Consciousness can always be defined more or less stringently either to be included in or to be excluded from any finite set of experimental conditions. And if you define the Universe as a Turing machine, as you seem to do, then it has already computed everything there ever was.
Daemon November 27, 2020 at 13:55 #475021
Hi Hypericin,

This is a favourite topic of mine, for various reasons. It seems that the received wisdom nowadays is that digital computers are or could be conscious, and (or) that our brains are conscious because they work like digital computers. I think that Searle's argument, properly understood, shows decisively that the received wisdom is wrong in this case, and I always enjoy it when I think I know something most people don't.

Also I'm a professional translator, and I enjoy knowing that digital computers will never be able to do what I do, which is to properly understand natural language.

The crucial reason why digital computers (like the ones we are using to read and write our messages here) can never be conscious as a result of their programs is that the meanings of their inputs, processes and outputs are all, to use Searle's term "observer dependent".

You can see this in concrete, practical terms right from the start of the design of a computer, when the designer specifies what is to count as a 0 and what is to count as a 1.
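A sketch of that design decision, with illustrative voltage values: the same physical readings yield different bit strings under different observer-chosen conventions.

```python
voltages = [0.2, 4.8, 4.9, 0.1]  # hypothetical probe measurements

def decode(readings, threshold=2.5, active_low=False):
    """Map physical levels to bits under a chosen convention."""
    bits = [1 if v > threshold else 0 for v in readings]
    return [1 - b for b in bits] if active_low else bits

decode(voltages)                   # [0, 1, 1, 0] under an active-high convention
decode(voltages, active_low=True)  # [1, 0, 0, 1] under the inverted convention
```

The physics is identical in both cases; only the designer's stipulation of what counts as a 0 or a 1 differs.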

And the reason a digital computer can never understand natural language as we do is that our understanding is based on conscious experience of the world.

Any questions?
Harry Hindu November 27, 2020 at 13:58 #475022
Quoting apokrisis
Simulations have no real effects on the world.

Predictions are simulations in your head, and predictions have causal power. We all run simulations of other minds in our minds as we attempt to determine the reasoning behind some behaviour.
Harry Hindu November 27, 2020 at 14:02 #475023
Reply to hypericin
Quoting SEP article
Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.

The problem with the "Chinese" room is that the rules in the room are not for understanding Chinese. Those are not the same rules that Chinese people learned or use to understand Chinese. So Searle is making a category error in labeling it a "Chinese Room".

The question isn't whether a computer can understand Chinese by using those rules. It is whether anything, or anyone, can understand Chinese by using those rules. If nothing can understand Chinese using the rules in the room, then those are not the rules for understanding Chinese.
Daemon November 28, 2020 at 00:05 #475171
Quoting Harry Hindu
The problem with the "Chinese" room is that the rules in the room are not for understanding Chinese. Those are not the same rules that Chinese people learned or use to understand Chinese.


I think you're getting things back to front. The room is set up to replicate the way a computer works, the kinds of rules it works with. It's not trying to replicate the way humans work, the kinds of rules we use to understand language. So Searle is showing why a digital computer can't understand language.
hypericin November 28, 2020 at 01:44 #475195
Quoting Mijin
but that the brain gives rise to consciousness and understanding using machinery that is non-computational


Thanks for the quote, this is precisely the point where I disagree with Searle.
There is a middle ground between Turing Machine and physical process.
Searle argues that a Chinese Program (a strip of magnetic tape processed by a Turing Machine) does not (necessarily) understand Chinese. He then pivots from this, to say that the brain therefore gives rise to consciousness using *non-computational* machinery.

This then ties consciousness to biological processes, or to machines which can emulate the same physical process.

But there is a middle ground which Searle seems to overlook: computational machines which are not Turing machines, and yet are purely informational. Such a machine has no ties to the matter which instantiates it. And yet, it is not a Turing machine, it does not process symbols in order to simulate or emulate other computations. It embodies the computations. Just like us.
apokrisis November 28, 2020 at 01:59 #475198
Quoting Harry Hindu
Predictions are simulations in your head, and predictions have causal power. We all run simulations of other minds in our minds as we attempt to determine the reasoning behind some behaviour.


Of course. But you took that statement out of context. Here is the context....

Quoting apokrisis
Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.


Mijin November 28, 2020 at 07:05 #475216
Quoting hypericin
But there is a middle ground which Searle seems to overlook: computational machines which are not Turing machines, and yet are purely informational. Such a machine has no ties to the matter which instantiates it. And yet, it is not a Turing machine, it does not process symbols in order to simulate or emulate other computations. It embodies the computations. Just like us.


Sure but I don't think your point is actually different to Searle's, or something he's overlooked.

Because firstly, yes, when he is talking about "computers" and "computation" he really has a narrow idea in mind of what that means. He means a Turing-emulatable (probably) digital computer. If this seems a straw man, note that this is the same idea of "computer" that is usually in mind for areas of study like computational neuroscience, and so is a key part of the main theories that he is arguing against.

Now, if you're suggesting "What if there's a computer that's not Turing-compatible?" well sure there is, in Searle's view: the human brain. Because, if the human brain is not Turing-compatible, then it must be an example of a non-Turing compatible computer (a "hypercomputer") because humans are obviously capable of performing computation.

Finally, if your point is about the possibility of making a non-biological hypercomputer, well we don't know that right now. Searle himself speculates that it may well be possible by, if nothing else, copying the human brain.
But anyway, the point is, that the Chinese room is not intended to show that producing a hypercomputer is impossible, and Searle himself explicitly considered it out of scope.
hypericin November 28, 2020 at 11:15 #475248
Quoting Daemon
Any questions?


I don't think this is the right approach. There is nothing special going on with observer dependence. Yes, a bit, or an assembly instruction, has no meaning in itself. But neither does a neuron. All take their meaning from the assembly of which they are a part (rather than from an outside observer). In hardware, the meaning of a binary signal is determined by the hardware which processes that signal. In software, the meaning of an instruction derives from the other instructions in a program, which all together process information. And in a brain, the meaning of a neuron derives from the neurons it signals, and the neurons which signal it, which in totality also process information.


hypericin November 28, 2020 at 11:35 #475250
Quoting Mijin
... or something he's overlooked.

But he seems to have overlooked it.

Quoting Wiki
If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding


He presents a false dichotomy:
* Consciousness cannot be emulated by a Turing machine
* Therefore, it must be physical, not informational, and can only be reproduced with the right mechanical process.

But what if consciousness is informational, not physical, and is emergent from a certain processing of information? And what if that emergence doesn't happen if a Turing machine emulates that processing?


hypericin November 28, 2020 at 11:53 #475252
Quoting apokrisis
And how is the Chinese room general more than the equivalent of a book in this thought experiment?

Books merely present information, they don't process it.

You seemed to be making the argument that the Chinese room does not "push against the world", therefore it is a simulation and cannot be conscious.

But my point is that any simulation can trivially be made to "push against the world" by supplying it with inputs and outputs. But it is absurd to suggest that this is enough to make a non-conscious simulation conscious.
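A sketch of how trivial that wiring-up is (the sensor and motor names are hypothetical stand-ins): the computation itself is unchanged whether it runs over recorded data or is plugged into the world.

```python
def controller(distance):
    """The 'simulated brain': pure computation, no side effects."""
    return "brake" if distance < 1.0 else "accelerate"

# Run purely as a simulation, over recorded inputs:
commands = [controller(d) for d in [0.5, 2.0, 0.8]]  # ['brake', 'accelerate', 'brake']

# Or plug the identical function into the world (hypothetical I/O functions):
# while True:
#     send_to_motor(controller(read_distance_sensor()))
```

Only the adapters around the function change; if consciousness depended on "pushing against the world", swapping those adapters would have to make the difference, which is the absurdity being pointed at.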
Harry Hindu November 28, 2020 at 14:14 #475264
Quoting Daemon
I think you're getting things back to front. The room is set up to replicate the way a computer works, the kinds of rules it works with. It's not trying to replicate the way humans work, the kinds of rules we use to understand language. So Searle is showing why a digital computer can't understand language.

But that was my point... that there is only one set of rules for understanding Chinese, and both humans and computers would use the same rules for understanding Chinese. I don't see a difference between how computers work and how humans work. We both have sensory inputs, and we process those inputs to produce outputs based on logical rules.

Not only are we not acknowledging that the room does not contain instructions for understanding Chinese, but we are also ignoring the fact that the instructions are in a language that the man in the room does understand. So the question is how did the man in the room come to understand the language the instructions are written in?
Harry Hindu November 28, 2020 at 14:34 #475266
Quoting apokrisis
Against the AI symbol processing story, Searle points out that a computer might simulate the weather, but simulated weather will never make you wet. Likewise, a simulated carburettor will never drive a car. Simulations have no real effects on the world.

And so it is with the brain and its neurons. It may look like a computational pattern at one level. But that pattern spinning can't be divorced from the environment that is being regulated in realtime. Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.


I don't really understand what you're going on about here. Making the claim that simulations have no real effects on the world, when all you have to do is look at all of the imaginary ideas and their outcomes in the world, is absurd. Just look at all the Mickey Mouse memorabilia, cartoons, theme parks, etc. Isn't the memorabilia a physical simulation of the non-physical idea of Mickey Mouse, or is it vice versa?

Think about how simulated crashes with real cars, crash test dummies, and a simulated driving environment have an effect on insurance rates when it comes to covering certain cars with insurance and providing consumers with crash test ratings.

I've set up virtual machines, which are software simulations of computer hardware, for companies and their business functions on these simulated computers.

Isn't every event in the world a simulation of other similar events? No event is exactly the same as any other; all events are unique, but they can be similar or dissimilar, and how good a simulation is depends on how similar the simulating event is to the event being simulated. We use events (simulations) to make predictions about similar events.

A simulated carburettor isn't meant to drive a real car. It is meant to design a real car. Try designing a car well without using simulations/predictions. Weather simulations aren't meant to make you wet. They are meant to inform you of causal relationships between existing atmospheric conditions and subsequent atmospheric conditions that have an impact on a meteorologist's weather predictions. Try predicting the weather without using weather simulations. Our knowledge is just as much part of the world and simulations have a drastic impact on our understandings, knowledge and behaviors in the world.

I don't see how semantics could ever be divorced from the syntax and vice versa, or how you could have one without the other. Semantics is just as important of a rule as the syntax. Effects mean their causes and causes mean their effects, so the semantics is there in every causal process. The rules of which causes leads to which effects and vice versa, would be the syntax - the temporal and spatial order of which effects are about which causes. If some system is following all of the rules, then how can you say that they don't understand the semantics? Defining variables in a computer program is equivalent to establishing semantic relationships that are then applied to some logical function.

Mijin November 28, 2020 at 15:32 #475271
Quoting hypericin
He presents a false dichotomy:
* Consciousness cannot be emulated by a Turing machine
* Therefore, it must be physical, not informational, and can only be reproduced with the right mechanical process.

But what if consciousness is informational, not physical, and is emergent from a certain processing of information? And what if that emergence doesn't happen if a Turing machine emulates that processing?


Good point.
I need to give this more thought...does the Chinese room apply if consciousness is informational, but not Turing-compatible?
But at first glance it does appear that Searle made a claim there that goes beyond what the Chinese room actually demonstrates.
Harry Hindu November 28, 2020 at 15:40 #475272
Quoting hypericin
Therefore, it must be physical, not informational, and can only be reproduced with the right mechanical process.

I don't know what this means. Present physical states are informative of prior physical states. I don't see how you can have something that is physical that is also absent information. The only process that is needed for information to be present is the process of causation.
apokrisis November 28, 2020 at 19:06 #475295
Quoting hypericin
But my point is that any simulation can trivially be made to "push against the world" by supplying it with inputs and outputs. But it is absurd to suggest that this is enough to make a non-conscious simulation conscious.


A simulation processes information. A living organism processes matter. Its computations move the world in a detailed way such that the organism itself in fact exists, suspended in its infodynamic relationship.

So it is not impossible that this story could be recreated in silicon rather than carbon. But it wouldn’t be a Turing Machine simulation. It would be biology once again.

Howard Pattee wrote a good paper on the gap between computationalism and what true A-life would have to achieve.

Artificial life and mind are not ruled out. But the “inputs and outputs” would be general functional properties like growth, development, digestion, immunology, evolvability, and so on. The direct ability to process material flows via an informational relationship.

And a TM has that connection to dynamics engineered out. It is not built from the right stuff. It is not made of matter implementing Pattee’s epistemic cut.

This undifferentiated view of the universe, life, and brains as all computation is of no value for exploring what we mean by the epistemic cut because it simply includes, by definition, and without distinction, dynamic and statistical laws, description and construction, measurement and control, living and nonliving, and matter and mind as some unknown kinds of computation, and consequently misses the foundational issues of what goes on within the epistemic cuts in all these cases. All such arguments that fail to recognize the necessity of an epistemic cut are inherently mystical or metaphysical and therefore undecidable by any empirical or objective criteria

Living systems as-we-know-them use a hybrid of both discrete symbolic and physical dynamic behavior to implement the genotype-phenotype epistemic cut. There is good reason for this. The source and function of genetic information in organisms is different from the source and function of information in physics. In physics new information is obtained only by measurement and, as a pure science, used only passively, to know that rather than to know how, in Ryle's terms. Measuring devices are designed and constructed based on theory. In contrast, organisms obtain new genetic information only by natural selection and make active use of information to know how, that is, to construct and control. Life is constructed, but only by trial and error, or mutation and selection, not by theory and design. Genetic information is therefore very expensive in terms of the many deaths and extinctions necessary to find new, more successful descriptions. This high cost of genetic information suggests an obvious principle that there is no more genetic information than is necessary for survival.

If artificial life is to inform philosophy, physics, and biology it must address the implementation of epistemic cuts. Von Neumann recognized the logical necessity of the description-construction cut for open-ended evolvability, but he also knew that a completely axiomatic, formal, or implementation-independent model of life is inadequate, because the course of evolution depends on the speed, efficiency, and reliability of implementing descriptions as constraints in a dynamical milieu.

https://www.researchgate.net/publication/221531066_Artificial_Life_Needs_a_Real_Epistemology




Daemon November 28, 2020 at 19:13 #475296
Quoting hypericin
I don't think this is the right approach. There is nothing special going on with observer dependence. Yes, a bit, or an assembly instruction, has no meaning in itself. But neither does a neuron. All take their meaning from the assembly of which they are a part (rather than an outside observer). In hardware, the meaning of a binary signal is determined by the hardware which processes that signal. In software, the meaning of an instruction from the other instructions in a program, which all together process information. And in a brain, the meaning of a neuron derives from the neurons it signals, and the neurons which signal it, which in totality also process information.


Here's a rather lovely working model of a Turing Machine. https://youtu.be/E3keLeMwfHY?t=258

The whole 5 minute video is worth watching, but I've skipped to the part where the narrator explains that the machine is carrying out a simple binary counting program.

It's us outside observers, including the people who built the machine, who determine that those marks are to be read as 0s and 1s, and that the binary system is to be used. There's nothing in the physics of the machine that says that 1011 is a binary number equivalent to eleven in decimal notation.
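Daemon's point about observer-relative marks can be made concrete in a few lines: the same string of marks yields a different number depending entirely on the reading convention an outside observer brings to it. This is an illustrative sketch, not anything from the video.

```python
# The same physical marks, interpreted under different observer-chosen conventions.
# Nothing in the marks themselves fixes which reading is "the" number.
marks = "1011"

as_binary = int(marks, 2)                  # read as base-2: eleven
as_decimal = int(marks, 10)                # read as base-10: one thousand and eleven
as_reversed_binary = int(marks[::-1], 2)   # read right-to-left as base-2: thirteen

print(as_binary, as_decimal, as_reversed_binary)  # 11 1011 13
```

The physics of the marks is identical under all three readings; only the convention the observer applies differs.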

The situation is not the same where our brains (and bodies) are concerned. The processes in a brain are not dependent on what an outside observer says about them.


apokrisis November 28, 2020 at 19:53 #475302
Quoting Harry Hindu
there are only one set of rules for understanding Chinese, and both humans and computers would use the same rules for understanding Chinese. I don't see a difference between how computers work and how humans work.


But life and mind don’t “follow rules”. They are not dumb machine processes. They are not algorithmic. Symbols constrain physics. So as a form of “processing”, it is utterly different.

To understand language is to know how to act. That knowing involves constraining the uncertainty and instability of the physical realm to the point that the desired outcome is statistically sure to happen.

The connection between the information and the physics is intimate and fundamental. And with a TM, the physics is engineered out.

So you can’t just hand wave about reconnecting the computer to the physics. You have to show where this now hybrid device is actually doing what biology does rather than still merely simulating the physics required.

Daemon November 28, 2020 at 21:11 #475318
Reply to apokrisis

Hi Apokrisis,

I'm reading Cell Phenomenology, Olivier passed on your recommendation, so thanks for that. The idea of the self/non-self distinction originating with the cell has been floating around in my head for some time so this is a fascinating read for me.

I am slightly puzzled by the "epistemic" in "epistemic cut". I understand that this is to be distinguished from Descartes' ontological cut, but I don't see how "epistemic" relates to the subject/object distinction. Can you help?

Edit: I think I've got it, it's the cut between the observer and the observed??
apokrisis November 29, 2020 at 04:09 #475383
Quoting Daemon
Edit: I think I've got it, it's the cut between the observer and the observed??


Yep. Pattee was drawing the parallel with the observer issue in quantum mechanics. And they still talk about whether the wavefunction collapse - the act of measurement - is epistemic or ontic.

So it was a bit of jargon he imported to biology.
Harry Hindu November 29, 2020 at 13:11 #475455
Quoting apokrisis
But life and mind don’t “follow rules”. They are not dumb machine processes. They are not algorithmic. Symbols constrain physics. So as a form of “processing”, it is utterly different.

Of course life and minds follow rules. You are following the rules of the English language that you learned in grade school when you type your posts. Ever heard of the genetic code? Why do you keep saying stuff that only takes a simple observation to see that it isn't true?

Following rules doesn't mean that you are a dumb machine. It seems to me that only smart machines can create their own rules to follow, and to then get others to follow the same rules, as in the use of language as a means of communicating. After all, understanding is the possession of rules in memory for interpreting some sensory data. Understanding cannot be severed from that act of following rules, as it is the same process.

Quoting apokrisis
To understand language is to know how to act. That knowing involves constraining the uncertainty and instability of the physical realm to the point that the desired outcome is statistically sure to happen.

No. To understand language is to possess a set of rules in memory for interpreting particular scribbles and sounds. Like I said, understanding is the possession of a set of rules in memory for interpreting any sensory data. The man in the room has a different set of rules for interpreting the scribbles on the paper than the rules that Chinese people have for interpreting those same symbols. Hence, the instructions in the room are not for understanding Chinese because they are not the same set of rules that Chinese speakers learned or use. The room understands something. It understands, "write this symbol when you see this symbol." The room also understands the language the instructions are written in. How can that be if the room, or the man, doesn't understand language?

Quoting apokrisis
So you can’t just hand wave about reconnecting the computer to the physics. You have to show where this now hybrid device is actually doing what biology does rather than still merely simulating the physics required.
It seems like that is your problem to solve. You are the dualist, so you are the one that sees this as a hybrid device. As a monist, I don't see it as such. What is it about carbon that is so special in being the only element capable of producing a hybrid device in the sense that you are claiming here? Why do you think that natural selection is often confused as a smart process (intelligent design), rather than a dumb (blind) process?
apokrisis November 29, 2020 at 18:41 #475484
Quoting Harry Hindu
Of course life and minds follow rules. You are following the rules of the English language


There is a world of difference between rules as algorithms and rules as constraints.




Daemon November 29, 2020 at 18:48 #475485
Reply to Harry Hindu Quoting Harry Hindu
The man in the room has a different set of rules for interpreting the scribbles on the paper than the rules that Chinese people have for interpreting those same symbols. Hence, the instructions in the room are not for understanding Chinese because they are not the same set of rules that Chinese speakers learned or use. The room understands something. It understands, "write this symbol when you see this symbol." The room also understands the language the instructions are written in. How can that be if the room, or the man, doesn't understand language?


A digital computer can't understand language so that it can translate like a human. I'm a translator, I use a computer translation tool. It's excellent and amazing, but it can't do what I do. And the barrier is insurmountable.

You're right that the rules in the room are not those that Chinese speakers use. But that's the point: a computer can't understand language in the way we can. The reason is that we learn meaning through conscious experience.

My translation customers often want to make the reader feel good about something, typically to feel good about their products.

To truly understand what "good" means, you have to have felt good. Because you enjoyed food, or the sunshine, or sex, or being praised by your parents.

The same applies to technical translations. It can be very difficult even for a human to understand and translate instructions for the assembly of a machine for example, if they haven't had experience of assembling machinery.

And of course my PC hasn't assembled a machine, or enjoyed sex, or had any of the countless experiences we have had that allow us to understand language, and life.
apokrisis November 29, 2020 at 20:28 #475490
Quoting Daemon
My translation customers often want to make the reader feel good about something, typically to feel good about their products.


Yep. Words can constrain experience. But they can’t construct experience.

Of course words also construct those constraints in rule-constrained fashion. And the same brain, the same meat machine, is both acting out the linguistic habit and the sensorimotor habits that are the "experiences".

So it is recursive and thus irreducibly complex.

And that is the key when it comes to the debate over computational mind.

The semiotic argument is that the relationship between symbol and physics that biology embodies is irreducibly complex. It is a story of synergistic co-dependency. You can't actually break it apart in some neat reductionist fashion.

And once it is accepted that "mindfulness" is an irreducible triadic relation in this fashion - a knot in nature - then that rules out the simplicity of computational mind from the get-go. A Turing Machine is a clear category error.

Of course, a TM does require a physics to make it a real device.

It needs a gate mechanism to divide the continuity of real time and symbolise that flow as a series of discrete and equal steps.

The gate also has to be able to make marks and erase marks. It has to be able to symbolise the logical notation of digital information in a way that is fixed and remembered.

It needs an infinite length of physical tape to do this. And - usually unsaid - an infinite quantity of energy to operate the tape and the gate. And also usually unsaid, it must be isolated from a lot of other actual physics, such as the gravity that would collapse these infinite quantities into black holes, or the quantum fluctuations that would also overwhelm the algorithmic function of a tape and gate mechanism in physical reality.

So the TM is a hoax device. It is specified to minimise the actual physics - reduce the irreducible entanglement that must exist in any real semiotic system between symbol and physics. But in the end, such a reductionist move is physically impossible.

And yet then, the computationalists like to wave this TM about - boast about its universality as an engine of algorithms, its Platonic status as implementation of pure logical idea - and demand of biology, why shouldn't a really complicated calculation also be conscious like any neurobiologically organised creature?

Computationalists feel TMs have proved something about consciousness being just an information process, and all information processes being Turing computable, therefore the burden is on everyone else to disprove their claim.

A biologist - especially if they understand the irreducible complexity of the semiotic relation - can see that a TM never actually removes the physics from the real world story. All that physics - the real constraints that space, time and entropy impose on any form of material existence, even one organised by symbols - is merely swept under the carpet.

So the burden of explanation is really the other way around. The computationalists have to get specific about how they plan to re-introduce the physics to their de-realised realm of symbol shuffling.

Semiotics doesn't say that can't be done. It just says to the degree the computationalists rely on a TM architecture, it has been all about constructing a machine that is as physics-denying as they could imagine. So they have a little bit of a problem having gone so far out on that particular limb.

Neural network architectures, or even the analog computers that came before digital computers, are more embracing of actual physics. They reacted more directly to physical constraints on their informational habits. So it is not as if information technology can't be more lifelike in working with the irreducible complexity of a proper modelling relation with the world.

But the Chinese Room argument was about dramatising how physics-less the TM story actually is.

The problem was that it makes that criticism very plainly, but doesn't then supply the argument for life's irreducible complexity that makes the counter-position of biology so compelling.

If the semiotic relation between symbols and physics is formally irreducible - at the level of mathematical proof, as has been argued by CS Peirce, Robert Rosen, Howard Pattee, etc - then that trumps the more limited claim of TMs as "universal computers".

Universal computation applies only to the truly physics-less world that exists in the human imagination.

Meanwhile back here in the real world ...




Daemon November 29, 2020 at 21:04 #475495
Quoting apokrisis
Neural network architectures, or even the analog computers that came before digital computers, are more embracing of actual physics.


Is that right? I thought a neural network was just a program running on a digital computer. And no analog computer has any connection with the physics of consciousness either.

Quoting apokrisis
The problem was that it makes that criticism very plainly, but doesn't then supply the argument for life's irreducible complexity that makes the counter-position of biology so compelling.


Searle frequently talks about the biological nature of consciousness, he refers to his position as "biological naturalism". It's not unreasonable for him to leave the biology to the biologists.

apokrisis November 29, 2020 at 21:55 #475500
Quoting Daemon
Is that right? I thought a neural network was just a program running on a digital computer. And no analog computer has any connection with the physics of consciousness either.


It is very easy to head back into these kinds of confusions. That is why I advocate for the clarity of the formal argument - the irreducible complexity of a semiotic relation vs the faux reducible simplicity of universal computation.

When it comes to building technology inspired by either TM or semiotic models, the metaphysical issues always become obscured by the grubby business of implementing something of practical usefulness.

There are no actual TM computers in use. The impracticalities of a single gate and infinite tape had to be hidden using the architectural kluges of stored programs and virtual addressing spaces. Real computers have to live within real physical constraints.

So an epistemic cut - to use Pattee's term - has to be introduced between software and hardware. And if we zoom in close on any actual conventional computer, we can see the layers and layers of mediating mechanism - from microcode and instruction sets to operating systems and middleware - that are needed to negotiate what is supposedly a black and white distinction between the algorithms and a system of material digital switches burning electricity.

So when it comes to neural networks, originally those were imagined as actual hardware implementations. You would have to have physical circuits that were not just digital switches and more like the analog electronics of pre-WW2 technologies.

But then digital computers running conventional virtual machine emulations could simulate a network of weighted nodes, just as they could simulate any kind of physics for which the physical sciences have developed a theoretical description - the algorithms we call the equation of fluid mechanics, for example.
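As a hedged sketch of what "simulating a network of weighted nodes" amounts to on a digital machine (the weights, bias, and threshold below are invented for illustration):

```python
# A single weighted node emulated in ordinary discrete arithmetic.
# The weights, bias, and hard-threshold nonlinearity are illustrative choices.

def node(inputs, weights, bias):
    """Weighted sum of inputs pushed through a hard threshold."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

# With these weights the node behaves like logical AND.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(a, b, node([a, b], [1.0, 1.0], -1.5))
```

Everything here is digital emulation: the "analog" weighting is itself just more symbol shuffling on discrete hardware, which is exactly the distinction at issue.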

And so that is the trick - the ruse to keep this particular debate going.

Look, we can implement the algorithms that physics uses to make its descriptions of nature suitably algorithmic!

But then - if you look to physics - you find that this is another level of the great con.

Physics is good at constructing algorithmic descriptions of nature ... up to a point. But in the end - as with quantum collapse, or the ultimate non-computability of any actual complex dynamical system – the algorithms can only coarse-grain over the realities they model.

Physicists hate this being pointed out. Like computationalists, they like to imagine that reality is actually a deterministic machine. It is the metaphysical assumption built into the discipline. And - as a useful assumption - it is great. The mechanistic view is the best way to look at the irreducible complexity of the world if your over-arching purpose is to construct a higher level of control over that world.

To the degree you can mechanise your view of nature, you can employ that view to build a machinery for regulating nature in precisely that fashion.

But at root - as with the "weirdness" of quantum mechanics or deterministic chaos - there is always going to be a gap between a mechanical and algorithmic description of nature and the metaphysical truth of nature being an irreducibly complex (ie: semiotic) relation.

Quoting Daemon
Searle frequently talks about the biological nature of consciousness, he refers to his position as "biological naturalism". It's not unreasonable for him to leave the biology to the biologists.


But I was supporting Searle, not attacking him. My first post was about how he talked of simulated rain not making anyone wet, simulated carburettors being no use in an actual car.

Simulation - or symbols - are certainly part of the biological story. But they are irreducibly connected with the physics of life from the point of origin.

There is no sharp software/hardware division as is pretended by either computation or physics as sciences. There is instead always the necessity of a bridge that spans this epistemic divide, in the fashion that even the PC on your desk has layers and layers of mechanism to give effect to the idea of a virtual machine running on real hardware plugged into an uninterrupted power supply.



Daemon November 29, 2020 at 22:28 #475503
Quoting apokrisis
It is very easy to head back into these kinds of confusions


I don't think I was confused. The physics of analogue computers and digital computers is not related to the physics of consciousness.

apokrisis November 29, 2020 at 22:44 #475504
Quoting Daemon
The physics of analogue computers and digital computers is not related to the physics of consciousness.


What do you mean by the physics of consciousness then? Which part of physical theory is that?



Daemon November 29, 2020 at 22:46 #475505
Reply to apokrisis Neurons, synapses, that kind of thing.
apokrisis November 29, 2020 at 23:32 #475515
Reply to Daemon I'm puzzled as that would be exactly my point. Neurons and synapses can't be understood except as prime examples of the irreducible complexity of semiosis.

Neurons combine the physics of ion potential differences and the information of depolarisable membrane channels so as to create "signals". So there is some core bit of mechanism where the two realms interface.

But how those signals become actually a signal to an organism in its responses to an environment, rather than just an entropic and noisy bit of biophysics, is where the irreducible complexity bit comes into play.

Neither physics, nor information processing theories, can tell us anything useful in isolation. You need the third framework of biosemiosis that has the two viewpoints already glued together in formal fashion.

It may be too technical, but I wrote this post a while back on how biophysics actually has drilled down to ground zero on this score now. In just the past decade, the blanks have started to get filled in.

https://thephilosophyforum.com/discussion/comment/105999


Daemon November 29, 2020 at 23:49 #475519
Quoting apokrisis
I'm puzzled as that would be exactly my point. Neurons and synapses can't be understood except as prime examples of the irreducible complexity of semiosis.


Just a misunderstanding then.

Searle says the brain doesn't do information processing: https://philosophy.as.uky.edu/sites/default/files/Is%20the%20Brain%20a%20Digital%20Computer%20-%20John%20R.%20Searle.pdf

Page 34.
apokrisis November 30, 2020 at 00:10 #475520
Reply to Daemon Yeah. It was back in the 1980s that Searle was making his case. And even then a criticism was that he overplayed the physics at this point. Although given the strength of computationalism at the time, it was good to see any philosopher trying to argue so directly against it.

So you notice how Searle says the brain isn't handling information in the TM sense - binary 0s and 1s that can literally stand for anything as they intrinsically stand for nothing.

Instead, the brain is handling particular kinds of "experiential information" - visual, tactile, auditory, kinesthetic, gustatory, etc.

But that then becomes a dualistic framing of the situation because he is talking about qualia and all the metaphysical problems that must ensue from there.

So - from a mind and life sciences point of view - you don't want to shut down the computationalists by opening the door again for the idealists.

That is where the semiotic approach came in for me during the 1990s. It is a way to glue together the computational and material aspects of organismic complexity in one formally-defined metaphysics.




Harry Hindu November 30, 2020 at 11:46 #475656
Quoting apokrisis
There is a world of difference between rules as algorithms and rules as constraints.

I don't see a world of difference between them. Algorithms are a type of constraint.
Harry Hindu November 30, 2020 at 11:53 #475657
Reply to apokrisis
Quoting Daemon
You're right that the rules in the room are not those that Chinese speakers use. But that's the point: a computer can't understand language in the way we can. The reason is that we learn meaning through conscious experience.

The instructions in the room are written in a language - a different language than Chinese. How did the man in the room come to understand the language the instructions are written in? I've asked this a couple of times now, but you and Apo just ignore this simple, yet crucial, fact.
Harry Hindu November 30, 2020 at 11:59 #475658
Quoting Daemon
I'm a translator, I use a computer translation tool

If you are the translator then why do you need a translation tool? Where do the translations reside - in your brain or in your tool? If you need to look them up in a tool, then the understanding of that particular translation is in the tool, not in your brain.

Quoting Daemon
You're right that the rules in the room are not those that Chinese speakers use. But that's the point: a computer can't understand language in the way we can. The reason is that we learn meaning through conscious experience.

These are all unfounded assertions without anything to back them up. What are conscious experiences? What do you mean by "understand"?
Daemon November 30, 2020 at 12:33 #475666
Quoting Harry Hindu
If you are the translator then why do you need a translation tool? Where do the translations reside - in your brain or in you tool? If you need to look them up in a tool, then the understanding of that particular translation is in the tool, not in your brain.


I don't need the translation tool Harry, I can do the translation on my own, the tool just saves me typing. When I come across a word that isn't in my Translation Memory I add it to the memory, together with the translation. Then the next time that word crops up I just push a button and the translation is inserted. The translation tool doesn't understand anything.
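A translation memory of the kind Daemon describes can be sketched as nothing more than a key-value store; the segment pairs below are invented for illustration:

```python
# A translation memory: human-curated source -> target pairs.
# The tool only replays what a human put in; it grasps neither language.
memory = {}

def lookup(segment):
    """Return the stored translation, or None if no human has supplied one."""
    return memory.get(segment)

memory["Guten Tag"] = "Good day"    # the translator adds a pair once
print(lookup("Guten Tag"))          # -> Good day (replayed, not understood)
print(lookup("Auf Wiedersehen"))    # -> None (the tool has nothing to say)
```

The "spooky" fluency of such tools comes entirely from the human-supplied entries; the lookup itself ascribes no meaning to either side of the pair.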

Quoting Harry Hindu
These are all unfounded assertions without anything to back it up. What are conscious experiences? What do you mean by, understand?


A dictionary definition of "understand" is "perceive the intended meaning of". Another dictionary says "to grasp the meaning of".

What do you think conscious experiences are?





Daemon November 30, 2020 at 20:14 #475713
Quoting apokrisis
Instead, the brain is handling particular kinds of "experiential information" - visual, tactile, auditory, kinesthetic, gustatory, etc.

But that then becomes a dualistic framing of the situation because he is talking about qualia and all the metaphysical problems that must ensue from there.


This is the introduction to Searle's 2004 book Mind, A Brief Introduction.

INTRODUCTION
Why I Wrote This Book

There are many recent introductory books on the philosophy of mind. Several give a more or less comprehensive survey of the main positions and arguments currently in the field. Some, indeed, are written with great clarity, rigor, intelligence, and scholarship. What then is my excuse for adding another book to this glut? Well, of course, any philosopher who has worked hard on a subject is unlikely to be completely satisfied with somebody else's writings on that same subject, and I suppose that I am a typical philosopher in this respect. But in addition to the usual desire for wanting to state my disagreements, there is an overriding reason for my wanting to write a general introduction to the philosophy of mind. Almost all of the works that I have read accept the same set of historically inherited categories for describing mental phenomena, especially consciousness, and with these categories a certain set of assumptions about how consciousness and other mental phenomena relate to each other and to the rest of the world. It is this set of categories, and the assumptions that the categories carry like heavy baggage, that is completely unchallenged and that keeps the discussion going. The different positions then are all taken within a set of mistaken assumptions. The result is that the philosophy of mind is unique among contemporary philosophical subjects, in that all of the most famous and influential theories are false. By such theories I mean just about anything that has "ism" in its name. I am thinking of dualism, both property dualism and substance dualism, materialism, physicalism, computationalism, functionalism, behaviorism, epiphenomenalism, cognitivism, eliminativism, panpsychism, dual-aspect theory, and emergentism, as it is standardly conceived. To make the whole subject even more poignant, many of these theories, especially dualism and materialism, are trying to say something true. One of my many aims is to try to rescue the truth from the overwhelming urge to falsehood. I have attempted some of this task in other works, especially The Rediscovery of the Mind, but this is my only attempt at a comprehensive introduction to the entire subject of the philosophy of mind.

____________________________________________________________

There's also this: Why I Am Not A Property Dualist: https://faculty.wcas.northwestern.edu/~paller/dialogue/propertydualism.pdf


Banno November 30, 2020 at 20:35 #475715
I really want to start a thread called "The floor in the Chinese Room".
Harry Hindu December 01, 2020 at 01:40 #475782
Reply to Banno I'll bite.
What's so special about the floor, Banno?

Quoting Daemon
I don't need the translation tool Harry, I can do the translation on my own, the tool just saves me typing. When I come across a word that isn't in my Translation Memory I add it to the memory, together with the translation. Then the next time that word crops up I just push a button and the translation is inserted. The translation tool doesn't understand anything.

Isn't that how you learned the translation of a word and then use the translation? Didn't you have to learn (be programmed) with that information via your sensory inputs to then supply that information when prompted? How is the translation tool's understanding different than a brain's understanding?

Quoting Daemon
A dictionary definition of "understand" is "perceive the intended meaning of". Another dictionary says "to grasp the meaning of".

What do you think conscious experiences are?

You used the phrase. I thought you knew what you were talking about. I would define it as a kind of working memory that processes sensory information.
Daemon December 01, 2020 at 16:07 #475983
Quoting Harry Hindu
How is the translation tool's understanding different than a brain's understanding?


It doesn't have any understanding. It doesn't perceive the intended meaning, it doesn't perceive anything. It isn't equipped to perceive anything.

Semantics, meaning, is not intrinsic to the physics of my PC. The semantics is ascribed, in this case by me, when I tell it how to translate words and phrases.

The translation tool often produces quite spooky results, it certainly looks like it understands to a naive observer, but it's easy to see that it doesn't understand when you allow it to translate on its own without my intervention (which I never do in practice).

There's a very interesting paper here on the limits of machine translation: http://vantage-siam.com/upload/casestudies/file/file-139694565.pdf

One of the author's conclusions is that "linguistic meaning is derived from the role things and people play in everyday life". I said something about this above, using the word "good" and the translation of machine assembly instructions as examples.

If the translation tool's understanding was the same as mine, as you seem to want to believe, then machine translation would be as good as human translation. But it isn't!





bongo fury December 01, 2020 at 21:59 #476069
Reply to The Opposite Reply to Banno

A trap-door

in the floor

of the Chinese Room will eject the philosopher into a sea of woo below, immediately upon their confusing the semiotics of intelligence with the semiotics of simulation.

Quoting apokrisis
I was just citing Searle's examples.


Fair enough. Dare I say, he wanders perilously close.
Harry Hindu December 02, 2020 at 01:58 #476124
Quoting Daemon
It doesn't have any understanding. It doesn't perceive the intended meaning, it doesn't perceive anything. It isn't equipped to perceive anything.

So understanding has to do with perceiving meaning? What do you mean by, "perceive"? Is the computer not perceiving certain inputs from your mouse and keyboard? Does it not perceive the meaning of your keystrokes and mouse clicks and make the correct characters appear on the screen and windows open for you to look at?

What do you mean by "meaning"? Meaning is the same thing as information. Information is the relationship between cause and effect. Information/meaning exists wherever causes leave effects.

Computers contain information. They have memory. They have a processor that processes that information for certain purposes. The difference is that those purposes are not its own. They are for human purposes. It doesn't process information in order to survive and propagate. It isn't capable of learning on its own, for its own purposes. It can only be programmed for human purposes. But none of this is to say that there isn't some kind of mind there. If the hardware in your head can contain a mind, then what makes that type of hardware special compared with a computer brain that processes information via inputs and outputs, just like your brain does?

No, I'm not a panpsychist that believes everything has a mind. But I do think that we need to rethink what the mind is, because our current theories of materialism, idealism and dualism just don't work.

Quoting Daemon
Semantics, meaning, is not intrinsic to the physics of my PC. The semantics is ascribed, in this case by me, when I tell it how to translate words and phrases.

But the semantics weren't ascribed by you. They were ascribed by your teacher(s) who taught you how to translate words. You weren't born knowing any language, much less how to translate them. You had to be taught that. You also weren't the one that created languages, to define what scribble and sound refers to what event or thing. You had to be taught that. You used your eyes and ears (your inputs) and your brain (your processor) to learn, to discern the patterns, so that you may survive in this social environment (produce the appropriate outputs as expected by your peers).


Quoting Daemon
The translation tool often produces quite spooky results, it certainly looks like it understands to a naive observer, but it's easy to see that it doesn't understand when you allow it to translate on its own without my intervention (which I never do in practice).

That's because the only thing it knows is to spit out this scribble when it perceives a certain mouse click or key stroke. It has the same instructions as the man in the room - write this scribble when you perceive this scribble. It doesn't have instructions that actually provide the translation, of this word = that word, and then what that word points to outside of the room, which is how you understand it, because that is how you learned it.

Given that the man in the room can understand at least one language - the language the instructions are written in - a set of instructions that included each Chinese symbol and its equivalent in the language the man understands would go a long way toward helping him understand Chinese.
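The contrast Harry Hindu draws here, between symbol-to-symbol rules and rules that link a symbol to a word the man already understands, can be sketched like this (the symbols and glosses are invented for illustration):

```python
# Chinese Room style rules: symbol -> symbol, no reference outside the room.
room_rules = {"你好": "好"}

# A bilingual gloss: symbol -> word in a language the man understands.
gloss = {"你好": "hello", "好": "good"}

incoming = "你好"
print(room_rules[incoming])  # the room's reply: just another opaque symbol
print(gloss[incoming])       # a rule that connects the symbol to his own language
```

Both are lookup tables, but only the second ties the symbol to something the man can already interpret, which is the crux of the disagreement.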

I'm sure you produced quite spooky results when you first began learning how to translate a language.

Quoting Daemon

One of the author's conclusions is that "linguistic meaning is derived from the role things and people play in everyday life". I said something about this above, using the word "good" and the translation of machine assembly instructions as examples.

If the translation tool's understanding was the same as mine, as you seem to want to believe, then machine translation would be as good as human translation. But it isn't!

Because it doesn't have the set of instructions you had when you learned to translate, nor the need to learn them. But that doesn't mean it couldn't, if it had the need and the correct set of instructions.
Harry Hindu December 02, 2020 at 03:02 #476134
If meaning is the role words play, then what about how we use words to refer to how computers function, as if they had minds of their own? They have memory; they communicate, acknowledge messages, ignore, expect, monitor, and understand. Why is the computer such a good metaphor for the mind?
Sir Philo Sophia December 02, 2020 at 06:29 #476170
Quoting apokrisis
If the semiotic relation between symbols and physics is formally irreducible - at the level of mathematical proof, as has been argued by CS Peirce, Robert Rosen, Howard Pattee, etc - then that trumps the more limited claim of TMs as "universal computers".


I think you make many great points on this thread, which I tend to mostly agree with in spirit, if not exact details. If not formally, how does semiotics best deal with bridging physics to the symbols via and over the epistemic cut? Can you point me to the latest, best research white paper you think would answer that for me?
thx.
Sir Philo Sophia December 02, 2020 at 06:43 #476171
Quoting Daemon
A dictionary definition of "understand" is "perceive the intended meaning of". Another dictionary says "to grasp the meaning of".


Curious, and hypocritical, that to support your arguments you use definitions which are circular, and so deeply flawed: one would already need to know what "perceive" or "meaning" is in order to know the true definition of "understand". Yet when I tried to ground definitions away from such useless circular ones, you said that was a circular endeavor doomed to fail, since we cannot define a partial truth before knowing the full truth.

You should not cite things that you don't believe in as useful truths for the sake of arguments.

care to revise your position on that?
Daemon December 02, 2020 at 09:29 #476196
Quoting Harry Hindu
So understanding has to do with perceiving meaning? What do you mean by, "perceive"? Is the computer not perceiving certain inputs from your mouse and keyboard? Does it not perceive the meaning of your keystrokes and mouse clicks and make the correct characters appear on the screen and windows open for you to look at?


Thanks very much for this Harry.

No, the computer is not perceiving inputs in the way you and I perceive things. Press a finger against the back of your hand. You feel a sensation. When you press a key on the computer keyboard, the computer doesn't feel a sensation.

Shall we try to agree on this before we move on to the rest of your ideas?



Harry Hindu December 02, 2020 at 11:37 #476214
Quoting Daemon
No, the computer is not perceiving inputs in the way you and I perceive things. Press a finger against the back of your hand. You feel a sensation. When you press a key on the computer keyboard, the computer doesn't feel a sensation.

Shall we try to agree on this before we move on to the rest of your ideas?

No, because this is the primary point of contention, and you keep ignoring the contradiction in your position. What makes the hardware in your head special, in that it feels, while computer hardware can't? What does it mean to feel?

If there is no perceivable difference between "simulated" intelligence and "real" intelligence, then any difference you perceive would be a difference of your own making stemming from your human biases.

Daemon December 02, 2020 at 12:15 #476220
Do you think a piano feels something when you press the keys?
Harry Hindu December 03, 2020 at 11:36 #476535
Quoting Daemon
Do you think a piano feels something when you press the keys?
Again, what does it mean to feel?
You are driving all over the road, one lane at a time. We were talking about computers. You are the one using these terms that you then have a problem defining, so why use them?
Daemon December 03, 2020 at 11:42 #476537
Reply to Harry Hindu Is the piano not perceiving certain inputs from the keyboard? Does it not perceive the meaning of your keystrokes and make the correct sounds for you to listen to?
Daemon December 03, 2020 at 11:58 #476540
Reply to Harry Hindu Harry, I don't have a problem defining consciousness and suchlike. Like many words, they are defined ostensively.

Wikipedia:
An ostensive definition conveys the meaning of a term by pointing out examples. This type of definition is often used where the term is difficult to define verbally, either because the words will not be understood (as with children and new speakers of a language) or because of the nature of the term (such as colours or sensations).
Harry Hindu December 04, 2020 at 11:50 #476891
Quoting Daemon
the piano not perceiving certain inputs from the keyboard? Does it not perceive the meaning of your keystrokes and make the correct sounds for you to listen to?

It appears that you've answered your own question.


Quoting Daemon
Harry I don't have a problem defining consciousness and suchlike. Like many words they are defined ostensively.

Wikipedia:
An ostensive definition conveys the meaning of a term by pointing out examples. This type of definition is often used where the term is difficult to define verbally, either because the words will not be understood (as with children and new speakers of a language) or because of the nature of the term (such as colours or sensations)


There are terms that we currently have that can define these things. The problem is that you aren't even trying to think about it. What do you think the purpose of feelings and sensations are? Let's start there.

Harry Hindu December 04, 2020 at 12:46 #476913
Quoting Daemon
Harry I don't have a problem defining consciousness and suchlike. Like many words they are defined ostensively.

Wikipedia:
An ostensive definition conveys the meaning of a term by pointing out examples. This type of definition is often used where the term is difficult to define verbally, either because the words will not be understood (as with children and new speakers of a language) or because of the nature of the term (such as colours or sensations)

But words are just colored scribbles and sounds. It seems like you'd have a problem defining the nature of words, too.
Book273 December 04, 2020 at 13:53 #476924
Reply to hypericin Then it would not be a simulation. The premise of a simulation is that, no matter the outcome of the simulation, there is no change in the real world.
Daemon December 04, 2020 at 16:41 #476961
Quoting Harry Hindu
Is the piano not perceiving certain inputs from the keyboard? Does it not perceive the meaning of your keystrokes and make the correct sounds for you to listen to? — Daemon

It appears that you've answered your own question.


The paper perceives the meaning of your penstrokes and makes the correct words for you to read?

And when you walk across the beach, the sand perceives the meaning of your footsteps and makes the correct footprints for you to look at?


Pop December 04, 2020 at 20:33 #477018
Quoting Daemon
And when you walk across the beach, the sand perceives the meaning of your footsteps and makes the correct footprints for you to look at?


According to Fritjof Capra, the fundamental unit of cognition is a reaction to a disturbance in a state.
Daemon December 04, 2020 at 20:52 #477027
Reply to Pop
Was he saying that the sand on the beach (for example) was capable of cognition?
Pop December 04, 2020 at 20:57 #477030
Quoting Daemon
Was he saying that the sand on the beach (for example) was capable of cognition?


That would be the fundamental unit of cognition - basic cause and effect. The sand acknowledges the pressure of the footprint and gives way accordingly. It's a long way from the complicated cognition we enjoy, but it is the start of it.
Daemon December 04, 2020 at 21:23 #477038
Reply to Pop Then that explains nothing. The whole universe is cause and effect, but consciousness happens in individuated pockets. The "start of it" comes when the pockets are individuated, when there's a subject and an object, an inside and an outside, self and non-self. With the cell perhaps. But not with grains of sand washed by the waves or trodden by feet, not with a piano, and not with our PCs and smartphones. Those things are not appropriately individuated.
Pop December 04, 2020 at 21:31 #477044
Quoting Daemon
Then that explains nothing. The whole universe is cause and effect, but consciousness happens in individuated pockets.


The entire universe is a process of self organization - cause and effect, not individuated pockets, and every moment of consciousness is a moment of self organization.
Harry Hindu December 05, 2020 at 12:02 #477161
Quoting Pop
That would be the fundamental unit of cognition - basic cause and effect. The sand acknowledges the pressure of the footprint and gives way accordingly. It's a long way from the complicated cognition we enjoy, but it is the start of it.

Exactly! The relationship between cause and effect is information, and information is a fundamental unit of cognition.

Quoting Daemon
Was he saying that the sand on the beach (for example) was capable of cognition?

Isn't your footprint information that Daemon passed this way? Doesn't the sand have a memory of your passing - the persistent existence of your footprint in the sand? Once the footprint is washed away, the sand forgets you ever passed this way.
Pop December 05, 2020 at 20:38 #477291
Quoting Harry Hindu
Exactly! The relationship between cause and effect is information, and information is a fundamental unit of cognition.


:up: Brilliant.
Daemon December 07, 2020 at 20:21 #477863
Quoting Harry Hindu
Isn't your footprint information that Daemon passed this way? Doesn't the sand have a memory of your passing - the persistent existence of your footprint in the sand? Once the footprint is washed away, the sand forgets you ever passed this way.


The problem is that this approach explains nothing. What are footprints in the sand? Information. What is consciousness? Information. What is memory? Information.

Memory is something that goes on in conscious minds. It's associated with conscious experience. There's a lot of very specific biological machinery involved, which has evolved over billions of years. It's an aspect of living beings. It's not an aspect of pianos, beach sand, or digital computers.


Mijin December 08, 2020 at 05:28 #478010
Quoting Harry Hindu
What makes the hardware in your head special in that it feels, but computer hardware can't? What does it mean to feel?

If there is no perceivable difference between "simulated" intelligence and "real" intelligence, then any difference you perceive would be a difference of your own making stemming from your human biases.


That's a shift of the burden of proof.

I feel pain.
I assume other humans also feel pain for various practical reasons, but also because if other humans were p-zombies they would have no reason to say that they experience pain.

Any claim beyond that, needs supporting arguments and data. In the case of animals, there are lots of good arguments for why at least some animals feel pain, but of course that's a big topic in itself.

But if someone wished to claim that computers, or non-living systems experience pain, the burden is on that person to provide an argument and data for this claim.
Harry Hindu December 08, 2020 at 11:59 #478095
Quoting Daemon
The problem is that this approach explains nothing. What are footprints in the sand? Information. What is consciousness? Information. What is memory? Information.

Sure it does. It explains that everything is information. The problem is that you just don't like the idea because you haven't been able to supply a logical argument against it.

Quoting Daemon
Memory is something that goes on in conscious minds. It's associated with conscious experience. There's a lot of very specific biological machinery involved, which has evolved over billions of years. It's an aspect of living beings. It's not an aspect of pianos, beach sand, or digital computers.

This says nothing about what memory is, or how it is associated with biological machinery and not other types of machinery.

Do the footprints inform you of anything? What type of information can you acquire from footprints? What would a private detective use footprints for? Where are the footprints - in the sand or in the detective's brain? Where is the information that the footprints provide - in the detective's brain, or in the causal relationship between the footprint (the effect) and the person who made it (the cause)? In other words, meaning and information exist prior to any observer interacting with them. The footprints inform you that someone walked this way recently, which direction they were walking, how big the person was, whether they were running or walking, etc., all from an impression in the sand. Where does all of this information come from, if not from what caused the footprint?
Harry Hindu December 08, 2020 at 12:05 #478097
Quoting Mijin
That's a shift of the burden of proof.

I feel pain.
I assume other humans also feel pain for various practical reasons, but also because if other humans were p-zombies they would have no reason to say that they experience pain.

Any claim beyond that, needs supporting arguments and data. In the case of animals, there are lots of good arguments for why at least some animals feel pain, but of course that's a big topic in itself.

But if someone wished to claim that computers, or non-living systems experience pain, the burden is on that person to provide an argument and data for this claim.

No. The burden is upon you to explain what pain is.

You can only claim that others feel pain because of their behavior. If a computer behaved like it was in pain, would you say that it feels pain? You seem to be asserting that pain is a behavior. If not, then some behavior is informative of some state of pain. What is pain? Information about the state of your body. If a computer possessed information about the state of its body, and was programmed to engage in behaviors when that information appears in working memory, then how is that any different from what humans do?
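The functional picture being described here - body-state information landing in working memory and triggering behavior - can be sketched as a deliberately simple toy. This is purely illustrative; every name in it (`damage_report`, `react`, the threshold) is invented for the example, and nothing here is claimed to settle whether such a loop amounts to feeling pain:

```python
# Toy sketch: "pain" as information about the body's state that,
# once in working memory, triggers avoidance behavior.
# All names and thresholds are invented for illustration.

def damage_report(body):
    """Return the set of components whose condition is below a threshold."""
    return {part for part, condition in body.items() if condition < 0.5}

def react(body):
    """If 'working memory' holds a damage report, produce avoidance behavior."""
    working_memory = damage_report(body)
    if working_memory:
        return f"withdraw and protect: {sorted(working_memory)}"
    return "carry on"

body = {"left_arm": 0.9, "right_arm": 0.2}
print(react(body))  # the damaged component triggers the avoidance behavior
```

Whether this loop is relevantly "the same as" what humans do, or merely behaviorally parallel to it, is exactly the point in dispute in this exchange.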
Marchesk December 08, 2020 at 13:03 #478118
Quoting Harry Hindu
a computer possessed information about the state of it's body, and was programmed to engage in behaviors when that information appears in working memory, then how is that any different than what humans do?


Depends on whether the computer lacked a subjective experience of pain.
Daemon December 08, 2020 at 13:37 #478121
Quoting Harry Hindu
This says nothing about what memory is, or how it is associated with biological machinery and not other types of machinery.


https://www.the-scientist.com/reading-frames/book-excerpt-from-the-idea-of-the-brain-67502

In the 1970s, British researcher John O’Keefe revealed that as well as encoding memories, the hippocampus contains a map of the animal’s environment. This cognitive map, consisting of what are called place cells, also contains information about how to get from one location to another, enabling the animal to navigate the world and to predict what it will find in different places. In species with different ecologies these hippocampal maps have different forms—for example, while the maps are 2-D in rats, they are 3-D in bats—but they are always cognitive, not simply spatial.

Despite the key role played by the hippocampus and adjacent structures in creating or accessing memories, what we remember is not found in a single place. Memories are often multimodal, involving place, time, smell, light and so on, and they are distributed across the cortex through intricate neural networks.

Modern research can study these networks in exquisite detail, by controlling the activity of single neurons through optogenetics—using light to activate neurons. In Nobel Prize winner Susumu Tonegawa’s lab at MIT, false memories have been created in the rodent hippocampus, leading an animal to freeze in a particular part of the cage as though it had previously been shocked there, although it had never had any such experience.
Mijin December 08, 2020 at 15:43 #478146
Quoting Harry Hindu
No. The burden is upon you to explain what pain is.


Haha, what?
I didn't claim to know what pain is, why would I have a burden of proof on me?

What I know about pain is that it is an unpleasant subjective experience, following activation of specific regions of the parietal lobe, usually (not always) preceded by stimulation of nociceptors of the nervous system.
That's all I know about it. If you'd like me to break down what a subjective experience actually is, well I can't, and nor would any neuroscientist claim to be able to at this time. That's the hard problem that we'd like to solve.

Quoting Harry Hindu
You can only claim that others feel pain because of their behavior. If a computer behaved like they were in pain, would you say that they feel pain? You seem to be asserting that pain is a behavior.


I don't know where to begin with this. No, saying that X is evidence for Y is vastly different from saying X = Y.
If I say I think a murder happened because there are blood stains on the floor, that doesn't mean I am asserting that blood stains *are* murder.

I said that I assume (don't know) that other humans experience pain, because they freely claim that they do. P-zombies could of course claim to be in pain, but this would require the universe to be trying to fool me for some reason -- the simpler explanation for sentient beings claiming to have subjective experiences is that they actually do.

That's evidence and an argument for the existence of pain in other humans, not a claim that that is what pain *is*.

With regards to computers, yes, if an AI were able to freely converse in natural language, and it repeatedly made the claim that it felt pain, despite such sentiments not being explicitly part of its programming, and it having nothing immediate to gain by lying...then sure, I'd give it the benefit of the doubt. I wouldn't know that it felt pain, but I'd start to lean towards it being true.
Harry Hindu December 09, 2020 at 11:51 #478440
Quoting Marchesk
Depends on whether the computer lacked a subjective experience of pain.

What is a subjective experience, if not information in working memory about the environment relative to your body?

A subjective experience is when the world appears to be located relative to your sensory organs.
Marchesk December 09, 2020 at 12:03 #478442
Reply to Harry Hindu It's not all perceptual. A dream of a red apple isn't information about an apple in the external environment.
Harry Hindu December 09, 2020 at 12:03 #478443
Quoting Mijin
Haha, what?
I didn't claim to know what pain is, why would I have a burden of proof on me?

Haha, then why are you using a word when you don't know what it means? You literally don't know what you are talking about.

Quoting Mijin

That's all I know about it. If you'd like me to break down what a subjective experience actually is, well I can't, and nor would any neuroscientist claim to be able to at this time. That's the hard problem that we'd like to solve.

Then why do you use terms when you don't know what they mean? That is ludicrous.

Quoting Mijin
What I know about pain is that it is an unpleasant subjective experience, following activation of specific regions of the parietal lobe, usually (not always) preceded by stimulation of nociceptors of the nervous system.

What does it even mean to say that pain is "an unpleasant subjective experience that follows activation of specific regions of the parietal lobe, usually (not always) preceded by stimulation of nociceptors of the nervous system"? How do subjective states follow from physical states?

Quoting Mijin
I said that I assume (don't know) that other humans experience pain, because they freely claim that they do. P-zombies could of course claim to be in pain, but this would require the universe to be trying to fool me for some reason -- the simpler explanation for sentient beings claiming to have subjective experiences is that they actually do.

That's evidence and an argument for the existence of pain in other humans, not a claim that that is what pain *is*.

This makes no sense. You assume that other humans have it because they claim it, but don't assume it if a p-zombie or computer claims it. You assume IT exists in humans without even knowing what IT is. You're losing me.




Harry Hindu December 09, 2020 at 12:07 #478444
Quoting Marchesk
It's not all perceptual. A dream of a red apple isn't information about an apple in the external environment.

This just causes more confusion about what a subjective experience is. Why do people keep using terms when they have no idea what they mean? Is this not clear evidence that use and meaning are not one and the same? Can people use words that they don't know how to use?
Harry Hindu December 09, 2020 at 12:10 #478446
Reply to Daemon
Quoting Harry Hindu
This says nothing about how memory is associated with biological machinery and not other types of machinery.
Harry Hindu December 09, 2020 at 13:35 #478456
Reply to Marchesk Dreams could be simulated subjective experiences.
Daemon December 09, 2020 at 14:18 #478470
Reply to Harry Hindu Was that a mistake Harry?
Mijin December 09, 2020 at 14:18 #478471
Quoting Harry Hindu
Haha, then why are you using a word that you don't know what it means. You literally don't know what you are talking about.
[...]
Then why do you use terms that you don't what they mean? That is ludicrous.


All this started from me suggesting that your argument was a subtle shift of the burden of proof.
Call me naive, but I honestly expected a simple response like "oh, you're right, let me rephrase that" or "I don't believe it is, because..."

But instead of that we get this bizarre freakout of you claiming I don't know what "pain" means.
Well I just gave a definition of pain, in the very post you are replying to.
But, since pain sensation was a core part of my postgraduate degree I can actually talk a lot about it. At the end of that, would you respond to the point?

Quoting Harry Hindu
What does it even mean for "an unpleasant subjective experience that follows activation of specific regions of the parietal lobe, usually (not always) preceded by stimulation of nociceptors of the nervous system"? How do subjective states follow from physical states?


Nobody knows. There is no scientific model (meaning: having explanatory and predictive power) for that part. If this is a "gotcha" consider yourself, and every other human, "got".

Quoting Harry Hindu
You assume that other humans have it because they claim it, and don't assume it if a pzombie or computer claims it. You assume IT exist in humans without even knowing what IT is. You're losing me.


Possibly I am losing you because you don't read my posts? I just said I could believe that a computer could experience subjective states if it were to claim it i.e. the exact opposite of the thing you're accusing me of saying.

But on p-zombies, think through what you're saying. You're suggesting that I am wrong to assume p-zombies don't have subjective experience? Their definition is that they do not have subjective experience.
Harry Hindu December 09, 2020 at 14:23 #478474
Reply to Daemon Nope. Your reply doesn't address how memory is associated with biological machinery and not other types of machinery.
Harry Hindu December 09, 2020 at 14:33 #478478
Quoting Mijin
All this started from me suggesting that your argument was a subtle shift of the burden of proof.
Call me naive, but I honestly expected a simple response like "oh, you're right, let me rephrase that" or "I don't believe it is, because..."

But instead of that we get this bizarre freakout of you claiming I don't know what "pain" means.
Well I just gave a definition of pain, in the very post you are replying to.
But, since pain sensation was a core part of my postgraduate degree I can actually talk a lot about it. At the end of that, would you respond to the point?


Quoting Mijin
I didn't claim to know what pain is,


Quoting Mijin
That's all I know about it. If you'd like me to break down what a subjective experience actually is, well I can't, and nor would any neuroscientist claim to be able to at this time.

You keep contradicting yourself. You go back and forth between knowing what pain is and not knowing what pain is. You call it a subjective experience and then claim to not know what a subjective experience is. You aren't being very helpful.

Quoting Mijin
Nobody knows. There is no scientific model (meaning: having explanatory and predictive power) for that part. If this is a "gotcha" consider yourself, and every other human, "got".

I'm not playing "Gotcha". The fact that you think that I am just shows how you aren't even attempting to think about what you are saying. I am simply trying to get you to clarify the terms that you are using.

Quoting Mijin
Possibly I am losing you because you don't read my posts? I just said I could believe that a computer could experience subjective states if it were to claim it i.e. the exact opposite of the thing you're accusing me of saying.

Then all I have to do is program a computer to produce some text on your screen - "I have subjective states" - and you would assume that the computer has conscious states?

Quoting Mijin
But on p-zombies, think through what you're saying. You're suggesting that I am wrong to assume p-zombies don't have subjective experience? Their definition is that they do not have subjective experience.

Yet, you claim that no one knows what subjective experiences are. :roll:




Mijin December 09, 2020 at 14:44 #478480
Quoting Harry Hindu
You keep contradicting yourself. You go back and forth between knowing what pain is and not knowing what pain is. You call it a subjective experience and then claim to not know what a subjective experience is. You aren't being very helpful.


Not at all; those are different concepts: what pain is, how pain sensation works, what we mean by subjective experience, and how much we (don't) know about how exactly subjective experience works.
And I note that you still haven't said why your argument is not a shift of the burden of proof. i.e. The whole reason you and I are in this exchange in the first place.

Quoting Harry Hindu
Then all I have to do is program a computer to produce some text on your screen, "I have subjective states" and you would assume that the computer has conscious states?


Again, try reading my posts.
I said that under certain conditions I could gain belief that a computer was experiencing pain, and I mentioned what those conditions were. Does the program PRINT "Ouch!" fulfill those conditions?
If you read what I wrote, you would know the answer to this.
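The contrast Mijin is drawing can be made concrete with a toy program (purely illustrative, not anyone's actual proposal): a claim that is hard-coded into the output carries no evidential weight, because it is produced regardless of any internal state of the machine.

```python
# The trivial case being pointed at: a program whose "claim" is hard-coded.
# Its output is fixed in advance and independent of any state of the machine,
# so it fails the stated conditions (free conversation, unprogrammed sentiment).

def trivial_claimant():
    return "Ouch! I have subjective states."

print(trivial_claimant())  # same sentence, every time, no matter what
```

This is why the conditions Mijin listed turn on the claim *not* being an explicit part of the program.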

Quoting Mijin
You're suggesting that I am wrong to assume p-zombies don't have subjective experience? Their definition is that they do not have subjective experience

Quoting Harry Hindu
Yet, you claim that no one knows what subjective experiences are.


This response is a complete non sequitur.
Marchesk December 09, 2020 at 16:02 #478505
Reply to Harry Hindu All these arguments over consciousness might as well take place inside a simulation.

Harry Hindu December 09, 2020 at 23:38 #478623
Quoting Mijin
What pain is, how pain sensation works, what we mean by subjective experience and how much we (don't) know about how exactly subjective experience works.

If you can't tell me what pain is, then how do you expect to tell me how it works? Can you use a word when you don't know its meaning?

Quoting Mijin
And I note that you still haven't said why your argument is not a shift of the burden of proof. i.e. The whole reason you and I are in this exchange in the first place.

You haven't provided a consistent method of determining which type of system is conscious and which type isn't.

Quoting Mijin
I said that under certain conditions I could gain belief that a computer was experiencing pain, and I mentioned what those conditions were. Does the program PRINT "Ouch!" fulfill those conditions?
If you read what I wrote, you would know the answer to this.

What were those conditions?

Quoting Mijin
This response is a complete non sequitur.

No. If a p-zombie is defined as having no subjective experiences, and you can't define subjective experiences, then you haven't properly defined p-zombies, much less subjective experiences. How can you use words when you don't know what they mean?
Marchesk December 09, 2020 at 23:40 #478625
Quoting Harry Hindu
If you can't tell me what pain is then how do you expect to tell me how it works?


Don't you already know what pain is? Are you one of those rare individuals who can't feel pain? What's that like?
Harry Hindu December 09, 2020 at 23:45 #478627
Quoting Marchesk
Don't you already know what pain is? Are you one of those rare individuals who can't feel pain? What's that like?

I already said that it's information.

Mijin is the one who doesn't know what pain is.

Are you asking what pain feels like, or asking what pain is? Is it the same thing?
Mijin December 10, 2020 at 01:35 #478649
Quoting Harry Hindu
If you can't tell me what pain is then how do you expect to tell me how it works? Can you use a word when you don't know it's meaning?


:roll: This is beyond infantile at this point.
I defined pain. I've answered all your questions about pain. I've told you I can elaborate on the mechanisms of pain as much as you like, because it's a topic I've studied at postgrad level.
The only one of your questions I couldn't answer, was how physical mechanisms within the brain give rise to subjective experience because no-one can.

So drop this nonsense about me not knowing what pain is, unless you also mention that you're defining "knowing pain" in such a way that no living human knows what pain is.

Quoting Harry Hindu
You haven't provided a consistent method of determining what type of system is conscious and which type of system isnt.


That's still not responding to the point. We're probably at around 8-9 posts at this point with your only response to my original objection being "no", with zero elaboration, and these various dodges.

Quoting Harry Hindu
What were those conditions?


Here is what I said on that matter:

Quoting Mijin
With regards to computers, yes, if an AI were able to freely converse in natural language, and it repeatedly made the claim that it felt pain, despite such sentiments not being explicitly part of its programming, and it having nothing immediate to gain by lying...then sure, I'd give it the benefit of the doubt. I wouldn't know that it felt pain, but I'd start to lean towards it being true.


Your response to that post, was to then say I would not believe an AI could be conscious even if it claimed it was i.e. the exact opposite of what I said.

Quoting Harry Hindu
If a pzombie is defined as having no subjective experiences and you can't define subjective experiences, then You haven't properly defined P zombies much less subjective experiences. How can you use words when you don't know what they mean?


You were saying I was wrong to assume p-zombies don't have subjective experiences. This showed that it is you who does not understand what a word (p-zombie) means.

With regard to your point: in this context, there is absolutely no need to try to break down the mechanism of subjective experience. It's as if we had a term "Dalaxy", meaning a galaxy that contains no dark matter. That would still be a meaningful term even if we didn't know exactly what dark matter is yet.
Harry Hindu December 10, 2020 at 02:07 #478654
Quoting Mijin
I defined pain. I've answered all your questions about pain. I've told you I can elaborate on the mechanisms of pain as much as you like, because it's a topic I've studied at postgrad level.
The only one of your questions I couldn't answer was how physical mechanisms within the brain give rise to subjective experience, because no one can.

So drop this nonsense about me not knowing what pain is, unless you also mention that you're defining "knowing pain" in such a way that no living human knows what pain is.

You said:
Quoting Mijin
I didn't claim to know what pain is

Then you said:
Quoting Mijin
I feel pain.


Quoting Mijin
What I know about pain is that it is an unpleasant subjective experience, following activation of specific regions of the parietal lobe, usually (not always) preceded by stimulation of nociceptors of the nervous system.
That's all I know about it. If you'd like me to break down what a subjective experience actually is, well I can't, and nor would any neuroscientist claim to be able to at this time. That's the hard problem that we'd like to solve.

So what you seem to be defining pain as is an unpleasant subjective experience, and then you go on to say that you don't know what a subjective experience is. If pain is a subjective experience and you don't know what a subjective experience is, then you don't know what pain is. You aren't saying anything useful about pain by asserting that pain is a subjective experience when you don't know what a subjective experience is. It's really that simple.

I have defined pain without the use of the phrase, "subjective experience", because it's a meaningless term, as you point out. Pain is information. What is information? The relationship between cause and effect. Pain informs you of injury. Injury informs you of the cause of the injury, etc. If it's causal, it's information.

Quoting Mijin
Your response to that post, was to then say I would not believe an AI could be conscious even if it claimed it was i.e. the exact opposite of what I said.

My response was a question trying to confirm what you had said. I often paraphrase what people say, and they often recant what they said because the paraphrasing gives them a different perspective on what they said. But this is beside the point. The point being that if you don't know what subjective experiences are, then you aren't in any position to make judgements about who, or what, has them or not. It's like saying a blind person doesn't know what polka-dots are, but then they can pick out what has them and what doesn't have them. It's illogical.

Quoting Mijin
With regards to computers, yes, if an AI were able to freely converse in natural language, and it repeatedly made the claim that it felt pain, despite such sentiments not being explicitly part of its programming, and it having nothing immediate to gain by lying...then sure, I'd give it the benefit of the doubt. I wouldn't know that it felt pain, but I'd start to lean towards it being true.

What do you mean, "not explicitly part of its programming"?

Quoting Mijin
You were saying I was wrong to assume p-zombies don't have subjective experiences. This showed that it is you who does not understand what the word (p-zombie) means.

Where did I say that? I have only been questioning your use of the phrase "subjective experience" because you use it without knowing what it means. Why use terms when you don't know what they mean, especially if there are alternative ways of describing pain with words we do understand? :chin:
Mijin December 10, 2020 at 02:41 #478665
Quoting Harry Hindu
So what you seem to be defining pain as is an unpleasant subjective experience, and then you go on to say that you don't know what a subjective experience is. If pain is a subjective experience and you don't know what a subjective experience is, then you don't know what pain is.


I said no such thing -- you were asking me about the mechanism by which physical neurology causes subjective experience. That's what we don't know.

It's like I am saying we don't know exactly what dark matter is, and you're repeatedly saying "If you don't know what dark matter is, how can you use the word?". The word still has meaning in referring to a specific phenomenon, even if we have no concrete scientific model yet.

Quoting Harry Hindu
What do you mean, "not explicitly part of its programming"?


Well, the program PRINT "Ouch!" has an exclamation of pain as part of its programming, so it does not fulfill the requirements.
Beyond that, in very complex programs, sure, it may be much harder to say. I didn't claim we would be able to make such a judgement immediately.
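To make the point concrete, here is a minimal hypothetical sketch (the function name and its stimulus-to-response mapping are invented purely for illustration): when a program's pain-talk is explicitly scripted by its author, the output tells us nothing about whether the program feels anything.

```python
# A deliberately trivial, hypothetical sketch (not real AI): the "pain"
# response is hard-coded by the programmer, so the output carries no
# evidence about what, if anything, the program experiences.
def scripted_response(stimulus: str) -> str:
    # The stimulus-to-exclamation mapping is written in by hand.
    responses = {"pinch": "Ouch!", "tickle": "Hehe!"}
    return responses.get(stimulus, "...")

print(scripted_response("pinch"))  # prints "Ouch!" because we wrote it to
```

The exclamation is entirely determined by the lookup table, which is exactly why such a program "does not fulfill the requirements" above.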

Quoting Harry Hindu
Where did I say that?


Here:

Quoting Harry Hindu
You assume that other humans have [subjective experience] because they claim it, and don't assume it if a p-zombie or computer claims it.


Note that this single quote from you has two issues: firstly, chastising me for assuming that p-zombies don't have subjective experience, when this is true by definition; and secondly, saying I would not believe a computer that claimed to have subjective experience, when the post you are quoting actually says the precise opposite.
Harry Hindu December 10, 2020 at 12:04 #478761
Quoting Mijin
I said no such thing

I quoted you:
Quoting Mijin
What I know about pain is that it is an unpleasant subjective experience, following activation of specific regions of the parietal lobe, usually (not always) preceded by stimulation of nociceptors of the nervous system.
That's all I know about it. If you'd like me to break down what a subjective experience actually is, well I can't, and nor would any neuroscientist claim to be able to at this time. That's the hard problem that we'd like to solve.

:roll:

Quoting Mijin
you were asking me about the mechanism by which physical neurology causes subjective experience. That's what we don't know.

That's part of the problem - dualism. You're left with the impossible task of explaining how physical processes cause subjective processes.

Quoting Mijin
It's like I am saying we don't know exactly what dark matter is, and you're repeatedly saying "If you don't know what dark matter is, how can you use the word?". The word still has meaning in referring to a specific phenomenon, even if we have no concrete scientific model yet.

No one has ever observed dark matter. Dark matter is just an idea to account for the observed behavior of real matter, just like subjective experience is an idea to account for the observed behavior of human beings.

Quoting Mijin
Well the program PRINT "Ouch!" has an exclamation of pain as part of its programming, so does not fulfill the requirements.

You were programmed (learned) to say "Ouch" by copying the actions of those around you. If you were born in another country with a different language, you would have been programmed differently. Your genetic code is a program defining the limits of your behaviors, and logic defines the limits of your ideas.

What if we designed a robot to program itself (teach itself), or learn from its mistakes?

Quoting Mijin
You assume that other humans have [subjective experience] because they claim it, and don't assume it if a p-zombie or computer claims it.
— Harry Hindu

Note that this single quote from you has two issues: firstly, chastising me for assuming that p-zombies don't have subjective experience, when this is true by definition; and secondly, saying I would not believe a computer that claimed to have subjective experience, when the post you are quoting actually says the precise opposite.

My point was that you had already claimed to not know what a subjective experience is, yet you go on to claim that you know what has it and what doesn't. I already went over this in my last post, which you ignored. I'm done going back and forth with you.



Mijin December 10, 2020 at 13:21 #478770
Quoting Harry Hindu
That's part of the problem - dualism. You're left with the impossible task of explaining how physical processes cause subjective processes.


I am not a dualist, I am a neuroscientist. I want to understand pain because there are people with painful injuries and diseases, or indeed simply neurological issues that cause intense pain on their own (e.g. cluster headaches). Handwaving their subjective experiences away as nonexistent, or as merely "information", is completely unhelpful.

I will never understand why some people are happier to do a handwave than actually work on solving the problem.

Quoting Harry Hindu
No one has ever observed dark matter. Dark matter is just an idea to account for the observed behavior of real matter, just like how subjective experiences is an idea to account for the observed behavior of human beings.


How should science proceed in your view? Is it all-or-nothing, where the only way we can talk about a phenomenon is at the point where we have completely solved every aspect of it, and otherwise the very words are verboten?

"Dark matter" definitely refers to a real phenomenon, likely to be a form of matter because we can see things like gravitational lensing from it. No, it's not understood yet, but that's why we want to talk about it and talk about what to investigate next to tease out more data.

Quoting Harry Hindu
You were programmed (learned) to say "Ouch" by copying the actions of those around you.


The specific word or exclamation here is obviously completely irrelevant. We don't need to be taught to experience pain.

Quoting Harry Hindu
I'm done going back and forth with you.


You won't be missed. I would have preferred if you responded to the original point I put to you though.
Harry Hindu December 10, 2020 at 13:23 #478771
Quoting Marchesk
All these arguments over consciousness might as well take place inside a simulation.

Daydreams could very well be simulated subjective experiences, or views of some process or event. The only difference between daydreams and nightdreams is that you don't have the real world imposing itself on your senses. Daydreams are like an overlay on the real-world subjective experience. When sleeping, there is nothing but the simulation, so the mind assumes it is reality.
Harry Hindu December 10, 2020 at 13:33 #478773
Quoting Mijin
You were programmed (learned) to say "Ouch" by copying the actions of those around you.
— Harry Hindu

The specific word or exclamation here is obviously completely irrelevant. We don't need to be taught to experience pain.

You said that you are able to determine that something has subjective experiences by its behavior - by exclaiming, "Ouch!", yet now you are saying that the word or exclamation is completely irrelevant. If they exclaimed, "Yippee!", would you say that they are having a subjective experience of pain?
Mijin December 10, 2020 at 14:09 #478780
Quoting Harry Hindu
You said that you are able to determine that something has subjective experiences by its behavior - by exclaiming, "Ouch!", yet now you are saying that the word or exclamation is completely irrelevant. If they exclaimed, "Yippee!", would you say that they are having a subjective experience of pain?


I think perhaps you're trolling now, as this post contains numerous errors:

1. I never said that exclaiming "ouch" would be evidence that anything was in pain. In fact it was the opposite. "Ouch" was mentioned in the context of an example of a program that I would not believe had displayed evidence of subjective experience.

2. "determine that something has subjective experiences by its behavior" -- behavior was your wording, not mine. I said that if there was an AI capable of expressing itself in natural language, and it claimed to be in pain, I would have grounds for believing it to be true.

3a. I said that the word itself was irrelevant, because you were making some point about us learning to say "ouch". It's unclear if this new post is even trying to defend that point.
3b. I said that the word itself was irrelevant, so now you're asking me What if the word is "Yippee"? :roll:

Daemon December 10, 2020 at 22:48 #478869
Quoting Harry Hindu
The problem is that this approach explains nothing. What are footprints in the sand? Information. What is consciousness? Information. What is memory? Information. — Daemon

Sure it does. It explains that everything is information. The problem is that you just don't like the idea because you haven't been able to supply a logical argument against it.


I don't like it because it doesn't explain anything. What we need is to find the differences between things. Waves on the sea, footprints on the beach, a piano, a digital computer, a biological brain.

The circuitry in a digital computer is designed to do stuff like making my text appear on your screen. It isn't designed to have consciousness.

Your brain is designed (metaphorically, through evolution) to be conscious. That's what its machinery is for. It's the most complex machinery we know. Consciousness is the dynamic state of this machinery.

It isn't like the machinery in a computer. If we wanted to make a conscious machine, we would need to make something with the same capacities as a brain.

The fact that we can feel is what makes meaning. The waves on the sea can be interpreted as information, but there's no feeling so no meaning. The electrical flows in a computer can be interpreted as information, but again, no feeling so no meaning. No reason to think a computer can feel. Its machinery isn't designed to do that.

If we made a machine that could feel it would raise serious moral questions.

We know that other people feel, that's what life's about. We know that computers, pianos, the sand on the beach and the waves on the sea don't feel, that's why nobody worries about our moral obligations towards them. Saying "it's all just information" loses this all-important distinction.

You must know this Harry. You don't worry about hurting the beach by walking on it. You don't worry whether your computer is living a happy life. You do care about the beings around you that do have feelings.



Harry Hindu December 11, 2020 at 12:05 #478964
Quoting Daemon
I don't like it because it doesn't explain anything. What we need is to find the differences between things. Waves on the sea, footprints on the beach, a piano, a digital computer, a biological brain.

Then I would assume that you would also assert that saying everything is "physical" doesn't explain anything either.

You do realize that different causal relations would be different information? Of course you would if you had been paying attention to anything I have said.

Quoting Daemon
It isn't like the machinery in a computer. If we wanted to make a conscious machine, we would need to make something with the same capacities as a brain.

I've already asked numerous times: what makes the brain special, such that it has feelings and consciousness and other things can't? When you look at an image of someone's brain, do you see feelings and consciousness, or a mass of neurons? What about when you look at a computer - any difference in seeing a mass of circuits?

Quoting Daemon
The fact that we can feel is what makes meaning.

I have no idea what this means. How do brains feel? When you look at brains, do you see feelings?

Quoting Daemon
You don't worry about hurting the beach by walking on it

Who said the beach is bothered by someone walking on it? It could be that the beach likes being walked on.
Daemon December 11, 2020 at 23:44 #479218
Quoting Harry Hindu
Then I would assume that you would also assert that saying everything is "physical" doesn't explain anything either.


That's correct.

Quoting Harry Hindu
You do realize that different causal relations would be different information? Of course you would if you had been paying attention to anything I have said.


I have been paying attention to what you've said, but I'm sorry to say a lot of it doesn't make sense.

Quoting Harry Hindu
I've already asked numerous times: what makes the brain special, such that it has feelings and consciousness and other things can't?


Because everything is information right? And a computer has a lot of information so it should be able to have feelings too?

It just seems like crazy talk Harry. You must know something about the complexity of the brain, it's the most complex thing we know about. And you must know something about the highly specific, highly sensitive mechanisms that make it work, and how they can be affected by injury, disease. I sometimes think you young people nowadays don't take enough drugs.

Quoting Harry Hindu
The fact that we can feel is what makes meaning. — Daemon

I have no idea what this means.


Really no idea? Cool!

Well imagine a time before there were any conscious beings. To keep it simple, imagine a time before life. There was no feeling going on, and then there was life and eventually some creature came along that was able to feel, maybe it could feel heat and cold, and heat made it feel good and cold made it feel bad. So before that time, good and bad didn't exist, didn't have meaning, and after that time, they did.

Quoting Harry Hindu
Who said the beach is bothered by someone walking on it? It could be that the beach likes being walked on.


So there were beaches before there was life, before there was feeling. Beaches don't have feelings. You know that, right?
Harry Hindu December 12, 2020 at 14:05 #479367
Quoting Daemon
Then I would assume that you would also assert that saying everything is "physical" doesn't explain anything either.
— Harry Hindu

That's correct.

Then we at least agree on something.

Instead of "information", what if I said that everything is causal?

Quoting Daemon
I've already asked numerous times: what makes the brain special, such that it has feelings and consciousness and other things can't?
— Harry Hindu

Quoting Daemon
It just seems like crazy talk Harry. You must know something about the complexity of the brain, it's the most complex thing we know about. And you must know something about the highly specific, highly sensitive mechanisms that make it work, and how they can be affected by injury, disease. I sometimes think you young people nowadays don't take enough drugs.

So your answer to the question "What makes brains special from other things that allows them to possess feelings?" is that the brain is the most complex thing that we know? I don't think this is a very good answer to the question, if you don't mind me saying.

For instance, isn't the universe the most complex thing we know? After all it is composed of billions of brains, and an unknown number of other things, possibly other universes, etc. Does that mean that the universe, or multiverse has feelings? Does the Earth have feelings since it is where all these complex brains reside? What about dark matter and energy? Is that more complex than a brain, and what about when we find processes more complex than brains that aren't brains?






Daemon December 12, 2020 at 21:03 #479470
Quoting Harry Hindu
Instead of "information", what if I said that everything is causal?


None of these "everything is X" explanations are any good Harry. As I said before, an explanation needs to tell us what is different about different aspects of the world. Suppose you want to explain vision. A good explanation will tell us that it uses rods and cones on the retina, and so on. Suppose you want to explain hearing. A good explanation will tell us that it uses hair cells in the cochlea, and so on.

If we take your approach, all we can say is "vision is causal, hearing is causal".

Quoting Harry Hindu
For instance, isn't the universe the most complex thing we know?


This is just more of the same. We need to know about differences. Certain parts of the universe are more complex than others. The complexity isn't spread out everywhere like jam.

I do wonder what motivates you to think of things in this way. Are you a fan of Fritjof Capra, like Pop? Is it mysticism?





Harry Hindu December 13, 2020 at 14:33 #479683
Quoting Daemon
None of these "everything is X" explanations are any good Harry. As I said before, an explanation needs to tell us what is different about different aspects of the world. Suppose you want to explain vision. A good explanation will tell us that it uses rods and cones on the retina, and so on. Suppose you want to explain hearing. A good explanation will tell us that it uses hair cells in the cochlea, and so on.

If we take your approach, all we can say is "vision is causal, hearing is causal".

:confused: Saying that a dog or a cat is a pet isn't saying that they aren't different, only that they share a property of being a pet.

Quoting Daemon

I do wonder what motivates you to think of things in this way. Are you a fan of Fritjof Capra, like Pop? Is it mysticism?

No. It's just logic and the principle of Occam's Razor.


Daemon December 13, 2020 at 23:14 #479818

Quoting Harry Hindu
No. It's just logic and the principle of Occam's Razor.


Kent Holsinger:
Since Occam's Razor ought to be invoked only when several hypotheses explain the same set of facts equally well, in practice its domain will be very limited…[C]ases where competing hypotheses explain a phenomenon equally well are comparatively rare.


My hypothesis is that I am conscious as the result of very specific and highly organised brain states, and computers, pianos, beaches and waves on the sea are not conscious because they don't have the appropriate equipment to achieve such states.

What's your hypothesis?

Harry Hindu December 14, 2020 at 12:01 #479949
Reply to Daemon My hypothesis is that we currently don't know what is conscious or not, because we don't know what makes brains special in that regard.

When you have stories like this:



I tend to lean more toward the idea that information processing is related to consciousness.

Is the computer reading the woman's thoughts or brain signals? What's the difference?

What is the difference between computer memory and human memory?
Daemon December 14, 2020 at 21:32 #480055
Reply to Harry Hindu

By coincidence I am reading about exactly this amazing research right now, in an excellent book called The Idea of the Brain by Matthew Cobb. He is Professor of Zoology at the University of Manchester where his research focuses on the sense of smell, insect behaviour and the history of science. The book is described as "a monumental, sweeping journey from the ancient roots of neurology to the most astonishing recent research".

He discusses the lady drinking using the robot arm, and other related research, and then he says:

"Important as all these developments are, they do not imply that brains are actually computers, or that we know how they work. In reality they highlight the plasticity of our brains - Donoghue's group has not cracked the neural code in the brain for volition and planning; instead their computer programs are able to translate patterns of neuronal firing in the brain into the movement of the robot arm, and the patients are able to rapidly tune the activity of their brains so as to manipulate the arm in the desired way."

So the information processing in the computer is piggybacking on the neuronal activity in the brain, which is not like digital computation.

The book explores the similarities and the differences between brains and computers in some detail. It's not a philosophy book, but it, and the research you point to, support my view and not yours.