You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

The Turing P-Zombie

TheMadFool May 28, 2020 at 19:44 11450 views 158 comments
One well-known test for Artificial Intelligence (AI) is the Turing Test in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human. No mention of such an AI being conscious is made.

A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.

It seems almost impossible not to believe that a true AI is just a p-zombie, in that both succeed in making an observer think they're conscious.

The following equality based on the Turing test holds:

Conscious being = True AI = P-Zombie

If so, we're forced to infer either that true AI and p-zombies are conscious or that there is no such thing as consciousness.






Comments (158)

Hanover May 28, 2020 at 19:58 #417075
Quoting TheMadFool
If so, we're forced to infer either that true AI and p-zombies are conscious or that there is no such thing as consciousness.


If Bob shoots Joe and it in every way appears motivated by jealousy, does there still remain a possibility that it was not? If you concede there is such a possibility, then you are conceding that behavior is not a perfect reflection of intent, and more importantly, that intent is unobservable.

The point being that behavior does not tell us exactly what the internal state is, which means it's possible that one have a behavior and not have an internal state and it's possible that one have an internal state and have no behavior.
Outlander May 28, 2020 at 19:58 #417076
If something damp leaves moisture on my finger when touched is there no such thing as water?
bongo fury May 28, 2020 at 23:26 #417111
I heard there is a growing online campaign to seek a posthumous apology from Turing for his Test.

:snicker:

InPitzotl May 29, 2020 at 02:37 #417150
Quoting TheMadFool
One well-known test for Artificial Intelligence (AI) is the Turing Test in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human. No mention of such an AI being conscious is made.

A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.

I think it's important to point out that those are two completely different things.

"All" a computer needs to do to pass a Turing Test is say the right words in response to words a judge says. In the way it's typically imagined (there's debate on what a TT really is; let's just ignore that), the thing behind the terminal might be a human or might be a computer. Either way the only access the judge has to he/she/it per the rules is texting via a terminal. So a judge might ask something like, what's a good example of an oblong yellow shaped fruit? And if the response is "A banana", that's something a human could have said. Call that "level 1".

But here's the problem. If we take a "level 1" program and just shove it into a robot, what do you suppose we'd get? It'd be silly to presume you'd get anything other than this... a (hopefully) non-moving robot, incapable of doing anything useful, except possibly over a single channel of communication where it can receive and send sets of symbols equivalent to native language text (e.g. English). If, say, I brought a bowl of fruit, placed it in front of the robot, and typed into this channel, "show me which one is the banana"... then it doesn't matter how well the thing did in the Turing Test, I should expect no action. And that kind of feeble robot is certainly incapable of fooling anyone that it's a human. Before the "oh, that's just a minor detail... suppose we", let's actually suppose we. What do we have to add to this robot to get it to fool a human?

In this example, one thing we might expect the robot to be capable of is, when asked to pick out the banana from the bowl of fruit, that the robot would just reach out and either touch the banana or pick it up. So let's say it does that... then what more is it doing than level 1? Well, it's not just processing string data... now it's observing the environment, associating requests with an action, identifying the proper thing to do when asked to show me which is the banana, and being capable of moving its robot arm towards the banana based on its perception. Now it's not just spitting out words; it's a more semantic thing. It's not just the string "banana" that it has to respond with, but rather, it has to respond with "showing me" (aka, performing some action that counts as "showing me") one of a set of objects (i.e., be capable of identifying where that set is) that is the actual banana (associating the word banana with "the entity" that the word is about). That's a bit more involved than just passing a Turing Test... the two aren't equivalent. You need to do a lot more to build a good p-zombie than just trick people behind a terminal. P-zombies are at least "level 2."
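The gap between the two "levels" is easy to sketch in code. Here's a toy illustration (entirely hypothetical Python of my own; the canned table and function names just stand in for whatever machinery actually generates behavior):

```python
# "Level 1": pure string-in, string-out. This is all the Turing Test,
# as typically imagined, ever exercises.
def level1_respond(prompt: str) -> str:
    canned = {
        "what's a good example of an oblong yellow shaped fruit?": "A banana",
    }
    return canned.get(prompt.lower().strip(), "I'm not sure.")

# "Level 2": the same request now demands perception and action.
# 'scene' stands in for objects a real robot would have to recover
# from camera input before it could move an arm toward one of them.
def level2_show_me(scene: list[str], word: str) -> str:
    for obj in scene:
        if obj == word:
            return f"points at the {obj}"
    return "does nothing"
```

Note that nothing in the level-1 function helps with the level-2 task; the second skillset has to be built separately.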
Sir2u May 29, 2020 at 03:39 #417155
Quoting InPitzotl
So a judge might ask something like, what's a good example of an oblong yellow shaped fruit? And if the response is "A banana", that's something a human could have said. Call that "level 1".


I think that Turing meant that you could have a conversation as opposed to a question/answer session with it. It would have to access vast amounts of data quickly and come up with the correct sentences and phrases to be able to convince anyone that it was a human.

Quoting InPitzotl
But here's the problem. If we take a "level 1" program and just shove it into a robot, what do you suppose we'd get? It'd be silly to presume you'd get anything other than this... a (hopefully) non-moving robot,


I don't remember ever reading anything he wrote or that was written about him that could indicate that he thought AI would be judged by its use in robots. The test was actually based on a game.

Quoting InPitzotl
when asked to pick out the banana from the bowl of fruit, that the robot would just reach out and either touch the banana or pick it up. So let's say it does that... then what more is it doing than level 1? Well, it's not just processing string data... now it's observing the environment, associating requests with an action, identifying the proper thing to do when asked to show me which is the banana, and being capable of moving its robot arm towards the banana based on its perception.


The observation creates more strings of data for it to process, and make decisions about. Any artificial sense would produce data to be processed. A true AI would have to have a lot of processing power just for that. But for a robot to be able to move you need very little processing power. The two things are not equivalent: AI is not a robot, and robots do not have to be AI.

Quoting InPitzotl
That's a bit more involved than just passing a Turing Test... the two aren't equivalent.


If I just want to talk to the little black box on my desk because I am lonely, the test works fine. The test does not say that AI has to convince someone by being there in PERSON and convincing them that it is human. That would involve more than just AI, things like appearance, smell, body movement and lots of other human quirks.

InPitzotl May 29, 2020 at 05:07 #417171
Quoting Sir2u
I think that Turing meant that you could have a conversation

I think you're missing the point. Yes, the TT involves having a conversation; but the conversation is limited only to a text terminal... that is, you're exchanging symbols that comprise the language. But the TT involves being indistinguishable from a human to a (qualified) judge. And if your computer program cannot answer questions like this, then it can't pass the TT. Over a terminal, though, all you can do is exchange symbols, by design.
Quoting Sir2u
It would have to access vast amounts of data quickly and come up with the correct sentences,

Mmm.... it's a little more complex than this. Fall back to the TT's inspiration... the imitation game. Your goal is to fool a qualified judge. So sure, if it takes you 10 minutes to figure out that a banana is a good response to an oblong yellow fruit, that's suspicious. If it takes you 10 seconds? Not so much. But if it takes you 5 seconds to tell me what sqrt(pi^(e/phi)) is to 80 decimal places, that, too, is suspicious. You're not necessarily going for speed here... you're going for faking a human. Speed where it's important, delay where it's important.
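Mechanically, the delay part is the cheap bit... a hypothetical sketch (the question classifier and timing thresholds are invented for illustration):

```python
import random

# Hypothetical: fake human-like latency. A machine that answers a hard
# arithmetic question instantly gives itself away, so the question type
# drives the delay. The token list and ranges are invented examples.
def human_plausible_delay(question: str) -> float:
    hard = any(tok in question for tok in ("sqrt", "decimal places", "^"))
    # slow for "hard" questions, quick (but not instant) otherwise
    return random.uniform(8.0, 30.0) if hard else random.uniform(1.0, 5.0)
```

The hard part, of course, is generating the *answer* a human would give, not the pause.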
Quoting Sir2u
I don't remember ever

I'm not writing a paper discussing Turing; I'm responding to the OP in a thread on a forum. In that post, there was one paragraph talking about an AI passing a TT. The next paragraph, we're talking about p-zombies. All I'm doing is pointing out that these are completely different problem spaces; that passing the TT is woefully inadequate for making you a good p-zombie.
Quoting Sir2u
The observation creates more strings of data for it to process, and make decisions about.

Technically, yes, but that's a vast oversimplification. It's analogous to describing the art of programming as pushing buttons (keys on a keyboard) in the correct sequence. Yeah, programming is pushing buttons in the right sequence, technically... but the entire problem is about how you push the buttons in what sequence to achieve what goal.

Think of this as skillsets. Being able to talk goes a long way to being a good p-zombie, but it's only one skillset; and mastering just that skill isn't going to fool anyone into thinking "you're" conscious.

That "more strings of data" and "decisions about" you're talking about here is another skillset; say, it's a vision analog and you equipped your robot with a camera. That's all well and good, but that skillset is literally about discerning objects (and like "useful things") from the images. English is a one dimensional thing with properties like grammar; images are two dimensional things which convey information about three dimensional objects, which have no grammar... but rather, there are rules to both vision per se and to object behaviors.

There's also acting; and reducing that to just moving a part is another oversimplification. Touching a banana isn't a function of "moving a part", but "moving a part appropriately such that it attains the goal of touching the banana"... that involves the vision again, and planning, and the motion, and adjustment, and has to tie in to object recognition and world modeling as well as relating that English text from that first skillset to appropriate actions to respond with. That's another skillset, and it's a distinct one.

Furthermore, once your p-zombie starts interacting, it's not truly "just a computer" with "inputs" and "outputs" any more... it's really something more like "two computers" dancing... one with sensors and servos and silicon, but the other, the dance partner, is made of objects, object properties, and physical mechanics. The interaction required to just act towards attaining a goal, a must for fooling a human into thinking you're a conscious agent, is so mechanically interfused with what you're acting with that the entire system becomes important (consider that your actions have effects on your senses and that modeling that, which requires modeling your dance partner, is a required part of the skillset).
Quoting Sir2u
The test does not say that AI has to convince someone

Sure, but that's required to be a p-zombie.
Quoting Sir2u
That would involve more than just AI, things like appearance, smell, body

...well not quite. The p-zombie isn't trying to fool you into thinking that it's a human; it's just fooling you into thinking it's conscious.
TheMadFool May 29, 2020 at 05:07 #417172
Quoting Hanover
If Bob shoots Joe and it in every way appears motivated by jealousy, does there still remain a possibility that it was not? If you concede there is such a possibility, then you are conceding that behavior is not a perfect reflection of intent, and more importantly, that intent is unobservable.

The point being that behavior does not tell us exactly what the internal state is, which means it's possible that one have a behavior and not have an internal state and it's possible that one have an internal state and have no behavior.


So the Turing test is flawed? Behavior is not a reliable indicator of consciousness? Doesn't that mean p-zombies are possible and doesn't that mean physicalism is false?

Reply to InPitzotl You are taking the Turing test too literally. The idea of an AI fooling a human has a much broader scope - it includes all interactions between the AI and humans whether just chatting over a network or actual physical interaction.



InPitzotl May 29, 2020 at 05:16 #417176
Quoting TheMadFool
You are taking the Turing test too literally.

I can only reply that I've seen people choke on this point. Also, the term Turing Test is a term of art with a literal meaning, so I'm not sure how taking it literally can be a bad thing. p-zombie is also a term of art with a distinct meaning. Surely it's better to just be clear, especially if people get confused, right?
TheMadFool May 29, 2020 at 05:21 #417178
Quoting InPitzotl
I can only reply that I've seen people choke on this point. Also, the term Turing Test is a term of art with a literal meaning, so I'm not sure how taking it literally can be a bad thing. p-zombie is also a term of art with a distinct meaning. Surely it's better to just be clear, especially if people get confused, right?


In my humble opinion...

[quote=Bruce Lee]It's like a finger pointing away to the moon. Don't concentrate on the finger or you will miss all that heavenly glory.[/quote]
InPitzotl May 29, 2020 at 05:46 #417188
Quoting TheMadFool
In my humble opinion...
"It's like a finger pointing away to the moon. Don't concentrate on the finger or you will miss all that heavenly glory." — Bruce Lee

Alright, let's turn this into a question then. In your original post, you said this:
Quoting TheMadFool
the Turing Test in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human.

...after which you offered:
Quoting TheMadFool
The following equality based on the Turing test holds:
Conscious being = True AI = P-Zombie

...so, that reads like it possibly suggests this:

Conscious being = True AI = Passes Turing Test = Fools human interlocutors into believing you are having conversations with another human = P-Zombie

So the question is... was that your intent in this thread?
TheMadFool May 29, 2020 at 06:12 #417198
Reply to InPitzotl Indeed those are my words but surely you could've taken my words in a much broader setting.
InPitzotl May 29, 2020 at 06:22 #417202
Reply to TheMadFool
I'm not after a gotcha or a fight; just demonstrating that there's genuine room for confusion here. I'll take your response as a no, so hopefully that would clear things up about your intent.
TheMadFool May 29, 2020 at 06:27 #417205
Quoting InPitzotl
I'm not after a gotcha or a fight; just demonstrating that there's genuine room for confusion here. I'll take your response as a no, so hopefully that would clear things up about your intent.


Are you saying my post is confusing? Well, I did try to keep my wordcount to a minimum. Perhaps that's where the fault lies.

bongo fury May 29, 2020 at 09:18 #417244
Quoting TheMadFool
Well, I did try to keep my wordcount to a minimum.


:ok:

Quoting TheMadFool
Perhaps that's where the fault lies.


Never, ever.
VagabondSpectre May 29, 2020 at 09:41 #417245
Reply to TheMadFool What did the reanimated corpse Rene Descartes say when asked if he was conscious?

[hide]I zom, therefore I be![/hide]

On a serious note, I'm inclined to agree with your last statement from the OP.

Maybe our reports of our own conscious experiences are those of a P-zombie; we're hard-wired to believe that we are conscious.

Silly though it may seem, the notion does hold with the way we work on a neurological level: we have stored memories that exist as a somewhat static record, and every few milliseconds our brains generate new frames of cognition (perception, learning, inference, action).

If we imagine cutting the lifespan of an individual into a series of discrete frames (quite a lot of them, I guess) - if we could freeze time - is a single frame of a live person "conscious"? (by anyone's measure...)

If we merely juxtapose two adjacent frames and flick back and forth between two states (maybe there is some measurable change, like some neurons sending or receiving a new signal), does that create consciousness? Perhaps on the most minimal order of the stuff that consciousness is made of?

The hard problem is pretty hard indeed... Even panpsychism starts to make sense after too long...

I tend to err on the side of self-delusion. That consciousness is something "real" (as in, woo woo magic and other presumptive metaphysical rigamarole) is almost certainly delusional. That it's "something" at all beyond a mere report or belief does seem somewhat plausible, but I would not be surprised if it's just self-delusion all the way down.
TheMadFool May 29, 2020 at 11:58 #417273
Quoting VagabondSpectre
Maybe our reports of our own conscious experiences are those of a P-zombie; we're hard-wired to believe that we are conscious.


Someone in another thread had the opinion that consciousness/mind could be an illusion. I take it that he meant there's a physical basis for the phenomenon of consciousness.

This idea of consciousness being an illusion reminds me of Searle's Chinese Room argument. Searle contends that though true AI will be able to fool a human in conversation, that in itself doesn't prove that it has the capacity to understand like humans. All the AI is doing, for Searle, is mechanically manipulating symbols, and the AI never actually understands the meanings of the symbols and their combinations.

That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI. Speaking from my own experience, understanding/comprehension seems to start off at the very basic level of matching linguistic symbols (words, spoken or written) to their respective referents e.g. "water" is matched to "cool flowing substance that animals and plants need", etc. This is clearly something a computer can do, right? After such a basic cognitive vocabulary is built what happens next is simply the recognition of similarities and differences and so, continuing with my example, a crowd of fans moving in the streets will evoke, by its behavior, the word and thus the concept "flowing" or a fire will evoke the word/concept "not cold" and so on. In other words, there doesn't seem to be anything special about understanding in the sense that it involves something more than symbol manipulation and the ability to discern like/unlike things.
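The matching story above is trivially mechanizable. A toy sketch (the vocabulary, features, and threshold are all invented for illustration; nothing here settles whether this counts as understanding):

```python
# Toy model of the "matching" account of understanding: words map to
# feature bundles, and a new thing evokes a word when features overlap.
LEXICON = {
    "flowing": {"moving", "continuous", "fluid-like"},
    "not cold": {"hot", "radiating"},
}

def evoke(observed_features: set[str], threshold: int = 1) -> list[str]:
    """Return every word whose referent-features overlap the observation."""
    return [word for word, feats in LEXICON.items()
            if len(feats & observed_features) >= threshold]

# A crowd of fans moving through the streets shares features with water:
print(evoke({"moving", "continuous"}))  # ['flowing']
```

Searle's reply, presumably, would be that nothing in such a table understands anything; the sketch only shows that the mechanics are cheap.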

If we were to agree with Searle on this, then the onus of proving that understanding/comprehension is not just symbol manipulation falls on Searle's supporters.

Reply to bongo fury :smile:
Harry Hindu May 29, 2020 at 13:18 #417283
Quoting TheMadFool
A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.

What does it mean to be physically indistinguishable? Are there other ways of being distinguishable or indistinguishable?

Quoting TheMadFool
If so, we're forced to infer either that true AI and p-zombies are conscious or that there is no such thing as consciousness.

I don't know. What is "consciousness"?
TheMadFool May 29, 2020 at 13:22 #417287
Quoting Harry Hindu
What does it mean to be physically indistinguishable? Are there other ways of being distinguishable or indistinguishable?


Good question but how might I word it to be more explicit than that? Perhaps physical in the sense that the p-zombie has a head, trunk, limbs, internal organs - identical in every sense of bodily parts?

Quoting Harry Hindu
What is "consciousness"?


What is consciousness? Perhaps best elucidated as the difference between the sleeping and waking states.
Harry Hindu May 29, 2020 at 13:28 #417292
Reply to TheMadFool So are we talking about distinguishing between body types or waking and sleeping states?

My computer goes to sleep sometimes and then wakes up when I move the mouse or hit a key on the keyboard.

What if someone is dreaming? Are they conscious?
TheMadFool May 29, 2020 at 13:31 #417297
Quoting Harry Hindu
So are we talking about distinguishing between body types or waking and sleeping states?


Indeed, what else could "physical" mean?

Quoting Harry Hindu
My computer goes to sleep sometimes and then wakes up when I move the mouse or hit a key on the keyboard


Could the computer be conscious?

Quoting Harry Hindu
What if someone is dreaming? Are they conscious?


You overlooked non-REM sleep.
Harry Hindu May 29, 2020 at 14:04 #417329
Quoting TheMadFool
So are we talking about distinguishing between body types or waking and sleeping states?
— Harry Hindu

Indeed, what else could "physical" mean?

Waking and sleeping states aren't physical states?

Quoting TheMadFool
Could the computer be conscious?

Well, you did define consciousness as the difference between waking and sleeping states, so it seems to be the case, yes.
TheMadFool May 29, 2020 at 14:15 #417338
Quoting Harry Hindu
Waking and sleeping states aren't physical states?


I want to clarify what consciousness is.

Also, what are asleep and awake states then, if not physical?

Quoting Harry Hindu
Well, you did define consciousness as the difference between waking and sleeping states, so it seems to be the case, yes.


So, a standard issue computer is capable of consciousness? I guess we're not seeing eye to eye on what consciousness means.

Why don't you give it a go? What is consciousness to you?
Harry Hindu May 29, 2020 at 14:25 #417349
Quoting TheMadFool
So, a standard issue computer is capable of consciousness? I guess we're not seeing eye to eye on what consciousness means.

It's not you and I that aren't seeing eye to eye. You aren't seeing eye to eye with your previous statement.

What makes it impossible for a "standard issue computer" to be capable of consciousness if you defined consciousness as the difference between waking and sleeping states? If it is still impossible even though you defined it as such, then is consciousness something more than just the difference between waking and sleeping states, or something else entirely that has nothing to do with waking and sleeping states?

Quoting TheMadFool
Why don't you give it a go? What is consciousness to you?

I got to what consciousness is for me by asking these questions that I'm asking you to myself. I think that if I tell you what I think consciousness is, it would turn into an argument. Let's see where these questions lead us.

Quoting TheMadFool
Also, what are alseep and awake states then, if not physical?

Then why are you trying to determine if consciousness exists by distinction in body type and function, rather than being awake or asleep? I could build a humanoid robot that goes to sleep and wakes up, like a "standard issue computer". Is it conscious? If P-Zombies look and behave like humans, which includes going to sleep and waking up, then p-zombies are conscious.




TheMadFool May 29, 2020 at 14:35 #417356
Quoting Harry Hindu
What makes it impossible for a "standard issue computer" to be capable of consciousness if you defined consciousness as the difference between waking and sleeping states?


When I mentioned sleep and awake states I thought you'd immediately know that the domain of discussion is humans and not anything else.

Quoting Harry Hindu
I think that if I tell you what I think consciousness is, it would turn into an argument


I'm all eyes and ears. We disagree. I'd like to know why and for that I need to know what you think consciousness is.

Harry Hindu May 29, 2020 at 14:37 #417358
Quoting TheMadFool
When I mentioned sleep and awake states I thought you'd immediately know that the domain of discussion is humans and not anything else.

I think that's part of the problem - anthropomorphism.

I thought you were talking about p-zombies too, and the point still applies to them:
Quoting Harry Hindu
If P-Zombies look and behave like humans, which includes going to sleep and waking up, then p-zombies are conscious.


TheMadFool May 29, 2020 at 14:40 #417361
Quoting Harry Hindu
I think that's the problem.

I thought you were talking about p-zombies too, and the point still applies to them:
If P-Zombies look and behave like humans, which includes going to sleep and waking up, then p-zombies are conscious.


That's begging the question.
Harry Hindu May 29, 2020 at 14:43 #417364
Reply to TheMadFool It's applying your definition, not mine

I asked you this:
Quoting Harry Hindu
If it is still impossible even though you defined it as such, then is consciousness something more than just the difference between waking and sleeping states, or something else entirely that has nothing to do with waking and sleeping states?


If there is something more, then what is it - that it has to be a human? Then by definition p-zombies aren't conscious because they aren't humans.
TheMadFool May 29, 2020 at 14:56 #417370
Quoting Harry Hindu
It's applying your definition, not mine.


I didn't provide a definition. If I did anything, it's give you just a rough idea of what I think consciousness is. Again, with the hope of coming to some agreement with you, what is your definition of consciousness?
Harry Hindu May 29, 2020 at 16:00 #417403
Quoting TheMadFool
I didn't provide a definition. If I did anything, it's give you just a rough idea of what I think consciousness is.

Then you mistook what I was asking for. I wasn't asking for a rough idea, but a specific one, since you seemed to know the specifics well enough to act as the arbiter of what is conscious and what isn't. If you've already determined that you must be a human to be conscious, then you've answered your own question.

Your qualifiers were waking/sleeping and being human. P-zombies fit the former but not the latter, therefore p-zombies being conscious is false.

If you're going to restrict the discussion to only humans then you're not going to agree with my definition, but then that would exclude p-zombies from the discussion as well, and your thread is inadequately named.

Sir2u May 29, 2020 at 16:48 #417417
Quoting InPitzotl
I think you're missing the point. Yes, the TT involves having a conversation; but the conversation is limited only to a text terminal... that is, you're exchanging symbols that comprise the language. But the TT involves being indistinguishable from a human to a (qualified) judge.


The original game was to try to distinguish between a man and a woman using only written text answers to questions; obviously, speaking would have made it too easy.
I think that you missed the point of exactly what he said. Look it up.
Turing did not set any limit on how the test would be carried out with a computer; his vision of a computer capable of fooling a human was 50 years from his time. I doubt that he thought that computers would just stay as text terminals, even though they did not exist in his time. His statement was futuristic.

Quoting InPitzotl
Mmm.... it's a little more complex than this. Fall back to the TT's inspiration... the imitation game. Your goal is to fool a qualified judge. So sure, if it takes you 10 minutes to figure out that a banana is a good response to an oblong yellow fruit, that's suspicious. If it takes you 10 seconds? Not so much. But if it takes you 5 seconds to tell me what sqrt(pi^(e/phi)) is to 80 decimal places, that, too, is suspicious. You're not necessarily going for speed here... you're going for faking a human. Speed where it's important, delay where it's important.


You can program response time and delays into a computer quite easily, an AI would have a basic set of rules to follow for most of its operations. Just as it would need rules to follow when picking something up.

Quoting InPitzotl
Technically, yes, but that's a vast oversimplification. It's analogous to describing the art of programming as pushing buttons (keys on a keyboard) in the correct sequence. Yeah, programming is pushing buttons in the right sequence, technically... but the entire problem is about how you push the buttons in what sequence to achieve what goal.


So an AI would need trillions of lines of code instead of millions, which brings us back to the question of processing power. Sequences using IF/THEN/ELSE would decide which buttons were pressed and when - again, trillions of them.

Quoting InPitzotl
Think of this as skillsets.


Computer learning has come a long way in the last few years. Some of them can and do recognize objects. Some of them can and do pick them up and manipulate them, with great competence. Put the two together and there is your machine.

While accomplishing this has not happened in the 50 years Turing predicted it is getting closer everyday.

But one thing that most people seem to forget when talking about the theoretical p-zombie is that it is not written anywhere that it has to be a physical object, only that it has to be able to convince real people that it is a real person. A hologram would probably be able to do that.
If someone threw a banana at you, would you try to catch it or dodge it? Fear of contact from a physical object would be just as convincing as actually catching it.
InPitzotl May 29, 2020 at 22:11 #417525
Quoting Sir2u
Look it up.

Sure. Here's a pdf copy; and here's an html one. (Context for others... these are links to Alan Turing's article "Computing Machinery and Intelligence", 1950, which introduced what's now regarded as the Turing Test).
Quoting Sir2u
So an AI would need

That's reasonable; since it would involve more things, it likely would involve more code.
Quoting Sir2u
Computer learning has come a long way

Well, yeah, it has.
Quoting Sir2u
But one thing that most people seem to forget about

"Forget" is a strong word; that implies not remembering something said. Cite?

Sir2u May 30, 2020 at 01:14 #417570
Quoting InPitzotl
"Forget" is a strong word; that implies not remembering something said.


Sorry about that, definitely badly worded. I have been busy in classes all day and just skipped a few minutes to drop in here at lunch. Let me rewrite that part.

But one thing that most people don't seem to realize when talking about the theoretical P zombie is that it is not written anywhere that it has to be a physical object, only that it has to be able to convince real people that it is a real person. A hologram would probably be able to do that.

Is that the only problem?
TheMadFool May 30, 2020 at 06:16 #417652
Quoting Harry Hindu
Then you mistook what I was asking for. I wasn't asking for a rough idea, but a specific one as you seemed to know the specifics if you can behave like the arbiter of what is conscious and what isn't. If you've already determined that you must be a human to be conscious, then you've answered your own question.

Your qualifiers were waking/sleeping and being human. P-zombies fit the former but not the latter, therefore p-zombies being conscious is false.

If you're going to restrict the discussion to only humans then you're not going to agree with my definition, but then that would exclude p-zombies from the discussion as well, and your thread is inadequately named.


Firstly, why are you so coy about your definition of consciousness?

Secondly, I'd like to know what your analysis of the Turing test is vis-à-vis consciousness and p-zombies. The Turing test would have us believe that behavior alone (of the AI) suffices to conclude that the AI is conscious. Compare and contrast that with the p-zombie, in which case, if a p-zombie is possible, behavior alone is insufficient to infer consciousness.
Marchesk May 30, 2020 at 10:12 #417693
Quoting TheMadFool
Compare and contrast that to the p-zombie in which case, if a p-zombie is possible, behavior alone is insufficient to infer consciousness.


The problem with p-zombies is that they can debate consciousness in just as nuanced a manner as a philosopher like Chalmers or any of us discussing our everyday subjectivity. I find that bordering on incoherent.

So my assumption would be that if an AI can pass a robust Turing Test on consciousness, then it's probably conscious like us. But it has to be robust, and not just clever programming techniques. Humans are easily fooled since we have a tendency to see agency in things.
unenlightened May 30, 2020 at 12:13 #417738
The problem with p-s is that they conflate consciousness and thought. Zombies do not have problems.

The characteristic of consciousness is not a good line in bullshit, but giving a fuck.

So anyone not blinded by their own bull, has no difficulty recognising the consciousness of someone with Downs, or a cat or a bird. Philosophers and world leaders are the difficult borderline cases.
TheMadFool May 30, 2020 at 14:09 #417781
Quoting Marchesk
The problem with p-zombies is that they can debate consciousness in just as nuanced a manner as a philosopher like Chalmers or any of us discussing our everyday subjectivity. I find that bordering on incoherent.


But why is it "bordering on incoherent"?
TheMadFool May 30, 2020 at 14:11 #417782
Quoting Marchesk
not just clever programming techniques.


What do you mean?
Marchesk May 30, 2020 at 14:16 #417784
Quoting TheMadFool
What do you mean?


"Siri, what's the temperature?"

"It's 20 degrees outside. Brrrr, cold."
Marchesk May 30, 2020 at 14:19 #417785
Quoting TheMadFool
But why is it "bordering on incoherent"?


Think about a p-zombie telling other p-zombies about a dream they had. Now what could the dream teller mean, and what would the listeners understand, given that they have no dream experiences?

Or take a movie like one of the Terminator ones where at some point you see the world from the first person perspective of the terminator. Now how would a p-zombie understand that?
Heiko May 30, 2020 at 14:44 #417790
The p-zombie has no consciousness per definition. On the other hand, nobody has proven that a sheet of aluminum does not have feelings. So what was the point again? Ah yes! We can define things to have a certain property or not and then "prove" that it has it or not. Pretty scholastic.
ssu May 30, 2020 at 16:08 #417808
Quoting TheMadFool
The following equality based on the Turing test holds:

Conscious being = True AI = P-Zombie

If so, we're forced to infer either that true AI and p-zombies are conscious or that there is no such thing as consciousness.

Starting from the fact that we cannot agree on just what is consciousness and have big problems in deciding just what is and what isn't conscious, it's hardly surprising that even a brilliant mind like Turing would be vague on the subject.

After all, a far more simple and theoretical issue like calculation, computability / uncomputability puzzles us still quite a lot.

Yet the proponents of computers and computer theory have been very willing to declare AI to be actuality even now, whereas many laymen still consider real AI to be that truly conscious robotic chap that indeed has a mind of its own. Needless to say, a smart program can pass the Turing test many times. But with luck a clever recording would do that also sometimes...
Heiko May 30, 2020 at 19:04 #417838
Quoting ssu
Starting from the fact that we cannot agree on just what is consciousness and have big problems in deciding just what is and what isn't conscious, it's hardly surprising that even a brilliant mind like Turing would be vague on the subject.


I guess the point was: something is intelligent, if it is called intelligent because it appears to be intelligent.
The judgement is already made by people, not by some arbitrary criteria. :lol:
The written form of communication was chosen to prevent a bias based on seeing a human or not.

One of the best points is made in Gibson's "Neuromancer":
Case: "Are you sentient?"
AI: "Well, if you ask me then I am. But I guess that is some kind of philosophical question. hrhr"

Intelligent?
Harry Hindu May 30, 2020 at 21:55 #417895
Quoting TheMadFool
Firstly, why are you so coy about your definition of consciousness

Because you've restricted the domain of the discussion to humans.

One of the possible answers to "what is the definition of consciousness" is, "I don't know". From there we don't assume that it is necessary to be human to be conscious.

Quoting TheMadFool
Secondly, I'd like to know what your analysis of the Turing test is vis-a-vis consciousness and p-zombies? The Turing test would have us believe that behavior alone (of the AI) suffices to come to the conclusion that the AI is conscious. Compare and contrast that to the p-zombie in which case, if a p-zombie is possible, behavior alone is insufficient to infer consciousness.

What type of behavior is indicative of being conscious? Any human behavior? What about sleeping?


And what did we end up concluding here:
Quoting TheMadFool
What does it mean to be physically indistinguishable? Are there other ways of being distinguishable or indistinguishable?
— Harry Hindu

Good question but how might I word it to be more explicit than that? Perhaps physical in the sense that the p-zombie has a head, trunk, limbs, internal organs - identical in every sense of bodily parts?

If there are no other ways for something to be distinguishable or indistinguishable, then "physically" is a useless term, at least in the context of the distinguishable and indistinguishable.

Maybe the dichotomy between the physical and mental is just as useless and should be considered when defining consciousness.

Is consciousness something you have, something you do, or something you are?
ssu May 30, 2020 at 21:57 #417896
Quoting Heiko
I guess the point was: something is intelligent, if it is called intelligent because it appears to be intelligent.
The judgement is already made by people, not by some arbitrary criteria.

The reasoning in Turing's test is quite similar to yours: that we'd just notice it, because we are conscious. Yet the fact is that appearances can be deceptive.
TheMadFool May 30, 2020 at 23:34 #417916
Quoting Marchesk
Think about a p-zombie telling other p-zombies about a dream they had. Now what could the dream teller mean, and what would the listeners understand, given that they have no dream experiences?


del
TheMadFool May 30, 2020 at 23:44 #417918
Quoting Marchesk
Think about a p-zombie telling other p-zombies about a dream they had. Now what could the dream teller mean, and what would the listeners understand, given that they have no dream experiences?


Are you saying that certain behavior, e.g. talking about dreams, thoughts, feelings, etc., is sufficient to infer the presence of consciousness? If so, how do you reconcile your point of view with the Turing test, which basically claims that all a computer has to do is mimic a person, converse with us as if it has dreams, feelings and thoughts while not actually experiencing them? I mean the Turing test suggests, in no uncertain terms, that consciousness isn't necessary to pass as human.
TheMadFool May 30, 2020 at 23:46 #417919
Quoting Harry Hindu
Because you've restricted the domain of the discussion to humans.


Into what other areas would you like to extend this discussion into? I'm game.
TheMadFool May 30, 2020 at 23:49 #417920
Quoting ssu
Starting from the fact that we cannot agree on just what is consciousness and have big problems in deciding just what is and what isn't conscious, it's hardly surprising that even a brilliant mind like Turing would be vague on the subject.

After all, a far more simple and theoretical issue like calculation, computability / uncomputability puzzles us still quite a lot.

Yet the proponents of computers and computer theory have been very willing to declare AI to be actuality even now, whereas many laymen still consider real AI to be that truly conscious robotic chap that indeed has a mind of its own. Needless to say, a smart program can pass the Turing test many times. But with luck a clever recording would do that also sometimes...


If Turing thought that a computer AI has only to mimic a human to qualify as conscious then it seems he would also think the p-zombies are conscious.
Harry Hindu May 30, 2020 at 23:49 #417921
Quoting TheMadFool
If so, how do you reconcile your point of view with the Turing test which basically claims that all a computer has to do is mimic a person,

To say that a computer mimics a person is already defining consciousness as something that can be simulated or emulated. Can consciousness be mimicked, or is it that wherever some behavior exists consciousness necessarily exists and can't be something that is mimicked?
Harry Hindu May 30, 2020 at 23:51 #417923
Reply to TheMadFool Awesome. I'm looking forward to your response to the rest of my post that you just quoted, and one after that.
TheMadFool May 30, 2020 at 23:55 #417924
Quoting Harry Hindu
To say that a computer mimics a person is already defining consciousness as something that can be simulated or emulated. Can consciousness be mimicked or is it that wherever some behavior exists consciousness necessarily exists and can't be something that is mimicked?


It seems that Turing thought that behavior of an entity is adequate grounds to believe that entity to be conscious. Chalmers, because he thinks p-zombies are possible, doesn't share Turing's sentiments.
ssu May 31, 2020 at 00:35 #417928
Quoting TheMadFool
If Turing thought that a computer AI has only to mimic a human to qualify as conscious then it seems he would also think the p-zombies are conscious.

Did he have that in mind? I haven't read his papers well enough to make that specific conclusion. If you have a direct quote, feel free to enlighten me.

Because, again, how do I know I'm responding not to a very clever bot here but to another human being?

I can make the argument that your responses seem to be made by a conscious human being. But that assumption doesn't mean I think p-zombies are conscious. What I do know is that we don't understand consciousness yet, simple as that.
TheMadFool May 31, 2020 at 00:46 #417930
Quoting ssu
Did he have that in mind? I haven't read his papers well enough to make that specific conclusion. If you have a direct quote, feel free to enlighten me.

Because, again, how do I know I'm responding not to a very clever bot here but to another human being?

I can make the argument that your responses seem to be made by a conscious human being. But that assumption doesn't mean I think p-zombies are conscious. What I do know is that we don't understand consciousness yet, simple as that.


If you think, as Turing supposedly did, that consciousness can be inferred from the behavior of a computer, then it isn't much of a stretch to conclude that Turing would've come to the conclusion that p-zombies are impossible.
Heiko May 31, 2020 at 09:28 #418020
Quoting TheMadFool
If you think, as Turing supposedly did, that consciousness can be inferred from the behavior of a computer, then it isn't much of a stretch to conclude that Turing would've come to the conclusion that p-zombies are impossible.


Turing wasn't concerned with philosophy. And without reading his papers, I guess he spoke of intelligence, not consciousness.
If p-zombies give you a headache, may I ask: do you think other people are zombies or not, and why? This surely would make for a life-style...
TheMadFool May 31, 2020 at 10:50 #418032
Quoting Heiko
And without reading his papers I guess he spoke of intelligence, not consciousness


I'm afraid reading his papers isn't something I've done; I wouldn't understand it anyway. Do you suppose Turing wasn't talking about consciousness when he formulated the now famous Turing test?

I find that hard to believe because the test specifically mentions that the AI has to convince a human that it (the AI) is human, and being human involves consciousness - in fact consciousness is the defining feature of being human.

You mentioned "intelligence". Going with that, AI would have to mimic human intelligence. See below:

[quote=wikipedia]Human intelligence is the intellectual capability of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness[/quote]

Self-awareness is the cornerstone of consciousness.

Heiko May 31, 2020 at 11:08 #418034
Quoting TheMadFool
I find that hard to believe because the test specifically mentions that the AI has to convince a human that it (the AI) is human and being human involves consciousness - in fact consciousness is the defining feature of being human.

I guess this is more about transitivity. Humans are assumed to be intelligent. Commonly this is assumed to be indicated by intelligent behaviour and communication. Therefore the behaviour of an intelligent machine must be indistinguishable from human behaviour in this respect.
It is not really about pretending to be a human.
TheMadFool May 31, 2020 at 11:18 #418035
Quoting Heiko
I guess this is more about transitivity


What do you mean?Quoting Heiko
Humans are assumed to be intelligent.


Is this a false assumption?

Quoting Heiko
It is not really about pretending to be a human


The Turing test specifically states that all the AI has to do is give the impression that it's a human. Pretend?
Heiko May 31, 2020 at 11:27 #418037
Quoting TheMadFool
What do you mean?

If two subjects are the same and one of them is the same as yet another, then all three are the same.

Quoting TheMadFool
Is this a false assumption?

It is anthropocentric.

Quoting TheMadFool
The Turing test specifically states that all the AI has to do is give the impression that it's a human.

If it can, its behaviour must be intelligent.
TheMadFool May 31, 2020 at 11:44 #418047
Heiko May 31, 2020 at 12:24 #418053
Reply to TheMadFool As I see it there are numerous fields where computers generate intelligent solutions for particular problems. But there is no implementation yet that could do that ad hoc for any field (and yes, you can fail intelligently).
I would guess that (due to science-fiction) today's society is much more open to the idea of an intelligent machine than maybe was the case at Turing's time. I guess this is reflected in the whole setting. Today there are software engineers running around trying to tweak natural language processing as they know where the flaws are. I would take it pretty seriously if they once say: "Okay, now we really cannot tell the difference anymore."
Heiko May 31, 2020 at 13:44 #418075
And if I think about it: this is what is pretty annoying about this discussion and the question "But is it conscious?". First one should ask whether that would matter at all if the thing constantly behaved as if it were. Everyone knows it could not be decided by asking what is "true".
So to put it plainly: we want computer-slaves that can solve our problems without us doing much work.
What are the risks? What is the potential profit worth?
It seems the last thing "philosophers" would consider is asking the machine whether it had any problem with doing so. That is significant.
TheMadFool May 31, 2020 at 15:22 #418126
Quoting Heiko
As I see it there are numerous fields where computers generate intelligent solutions for particular problems.


Can you give me some examples?

Quoting Heiko
Today there are software engineers running around trying to tweak natural language processing as they know where the flaws are.


Kindly elaborate.Quoting Heiko
I would take it pretty seriously if they once say: "Okay, now we really cannot tell the difference anymore."


You would and even I would (probably) but how would one know whether a computer is conscious in the sense we are?
A Raybould June 13, 2020 at 16:36 #423501
Reply to Hanover There are a great many things that are unobservable, yet widely regarded as plausible, such as electrons, viruses, and, apparently, jealousy itself. One can, of course, take the position that it is possible that none of them are real, but that road, if taken consistently, leads only to solipsism. To invoke this line of argument only over just some unobservables is not necessarily wrong (skepticism is an important trait) but it can also be tendentious, or an excuse for avoiding the issue. In particular, I regard it as tendentious, if not an outright inconsistency, to invoke zombie-conceivability arguments in the case of putative future AI but not in the case of people (or people other than oneself, if you are certain about your own case.)

With regard to your specific claim that there can be behavior without internal state: certainly, but once you have observed more than a few behaviors that do seem to be dependent on internal state (e.g. learned behavior, or any behavior apparently using memories), then the possibility that none of the observed behaviors were actually state-dependent becomes beyond-astronomically improbable.
A Raybould June 14, 2020 at 01:46 #423625
Reply to TheMadFool
how would one know whether a computer is conscious in the sense we are?


Do you know that I am conscious in the same way that you are? (or that any other person is, for that matter.) If so, then apply whatever method you used to come to that conclusion to a computer - and if that method depends on me being human and is not applicable to computers, then you would be begging the question.
fishfry June 14, 2020 at 02:16 #423627
Quoting TheMadFool
how would one know whether a computer is conscious in the sense we are?


Haven't been following discussion but noticed this. If I may jump in, I would say that this is THE core question.

I put it in the following even stronger form: How do I know my neighbor is self-aware and not a p-zombie? I leave the house in the morning and see my neighbor. "Hi neighbor." "Hi ff. Looks like a nice day." "Yeah sure is." "See you later." "You too!" I cheerfully respond as I drive off with a wave. What a sentient fellow, I think to myself. He surely must possess what Searle would call intentionality. This is my little joke to myself, how little evidence we accept for the sentience of people. The interrogator is always the weak link in real life Turing test experiments. The humans are always TOO willing to accept that the computer's a human.

In truth, I operate by the principle that my neighbor is a fellow human, whatever other differences we may have; and that all fellow humans are self-aware. That's my unspoken axiom.

Computer scientist and blogger Scott Aaronson calls this attitude meat chauvinism and he has a point.

I have no way of knowing if my neighbor is self-aware, let alone some inanimate program. But at the same time I must admit that just because a thing is different from me, does not count as evidence that the thing is not intelligent. If self-awareness can arise in a wet messy environment like the brain; why shouldn't it arise in a box of circuit boards executing clever programs?

Personally I don't think machines can ever be conscious; but still I do admit my human-centric bias. I have no proof that self-awareness can only arise in wetware. Who could begin to know a thing like that? The wisest among us don't know.

And of course this was Turing's point. As a closeted gay man in 1950's England, he argued passionately for the acceptance of those who were different. That's how I read his 1950 paper, not only mathematically but also psychologically.

If we define self-awareness as a purely subjective experience, then by definition it is not accessible to anyone else. There is no hope of having an actual self-awareness detector. Turing offers an alternative standard: Behavioral. If we interact with it and can't tell the difference, then there is no difference.

Some days I do wonder about my neighbor.
TheMadFool June 14, 2020 at 06:41 #423653
Quoting A Raybould
Do you know that I am conscious in the same way that you are? (or that any other person is, for that matter.) If so, then apply whatever method you used to come to that conclusion to a computer - and if that method depends on me being human and is not applicable to computers, then you would be begging the question.


@fishfry

The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally.

The Turing test, as a way of identifying AI (conscious), simply states that if there's no difference between a candidate AI and a human in terms of a person who's assessing the AI being fooled into thinking the AI is human then, the AI has passed the test and, to all intents and purposes, is conscious.

P-zombies are treated differently: even if they pass the Turing test adapted to them, they're thought not to deserve the status of conscious beings.

In short, something (AI) that's worlds apart from a flesh-and-bone human is considered worthy of the label of consciousness, while something that's physically identical to us (p-zombies) isn't, and in both cases the test for consciousness is the same: behavior-based.
fishfry June 14, 2020 at 07:12 #423662
Quoting TheMadFool
The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally.


Yes.

Quoting TheMadFool

The Turing test, as a way of identifying AI (conscious), simply states that if there's no difference between a candidate AI and a human in terms of a person who's assessing the AI being fooled into thinking the AI is human then, the AI has passed the test and, to all intents and purposes, is conscious.


Yes but with a big caveat. You used the word conscious but Turing uses the word intelligent. Intelligence is a behavior and consciousness is a subjective state. So we could use the Turing test to assert that we believe an agent to be intelligent, while making no claim about its consciousness. In fact asking if an agent other than yourself is conscious is essentially meaningless, since consciousness (or self-awareness) is a subjective state.

So that's a semantic difference. The Turing test evaluates intelligence (whether it does that successfully or not is a matter for discussion). But it makes no claims to evaluate consciousness nor do we think any such test is possible, even in theory. Not for an AI and not for my neighbor, who's been acting strangely again.

Quoting TheMadFool

P-zombies are treated differently: even if they pass the Turing test adapted to them, they're thought not to deserve the status of conscious beings.


Thought by whom? You and me, I think. But Turing and Aaronson would say, "If it acts intelligent it's intelligent. And nobody can know about consciousness."

On what rational basis do we say our neighbors are conscious but that a general AI, if one ever passed a more advanced and clever version of the Turing test, is "intelligent but not conscious." That sounds like an unjustified value judgment; a prejudice, if you will. Meat chauvinism.

Quoting TheMadFool

In short, something, AI, that's worlds apart from a flesh-and-bone human, is considered worthy of the label of consciousness while something that's physically identical to us, p-zombies, aren't and in both cases the test for consciousness is the same - behavior based.


Isn't that just an irrational prejudice? They used to say the same about certain ethnic groups. It's the same argument about dolphins. Just because they don't look like us doesn't mean they're not "considered worthy of the label of consciousness." What is your standard for that?

I hope you see the problem here. If we can't tell a human from a p-zombie then what's the difference? I'm not advocating that argument, I disagree with it. I just admit that I can't articulate a rational defense of my position that doesn't come down to "Four legs good, two legs bad," from Orwell's Animal Farm.

TheMadFool June 14, 2020 at 11:17 #423710
Quoting fishfry
You used the word conscious but Turing uses the word intelligent. Intelligence is a behavior and consciousness is a subjective state.


I realize that AI = artificial intelligence and not consciousness.

Firstly, even if that's the case, we still have the problem of inferring things about the mind - intelligence, consciousness, etc. - from behavior alone. The inconsistency here is that on one hand (AI) behavior is sufficient to conclude that a given computer has a mind (intelligence-wise) and on the other hand, p-zombies, it isn't (p-zombies have no minds).

Secondly, my doubts notwithstanding, intelligence seems to be strongly correlated with consciousness - the more intelligent something is, the more capacity for consciousness.

In addition, and more importantly, aren't computers more "intelligent" in terms of never making a logical error? That is, Turing had something else in mind regarding artificial intelligence - it isn't about logical thinking, which we all know for a fact that even run-of-the-mill computers can beat us at.

What do you think this something else is if not consciousness? Consciousness is the only aspect of the mind that's missing from our most advanced AI, no? The failure of the best AI to pass the Turing test is not because they're not intelligent but because they're not, or are incapable of mimicking, consciousness.

In short, the Turing test, although mentioning only intelligence, is actually about consciousness.

Quoting fishfry
In fact asking if an agent other than yourself is conscious is essentially meaningless, since consciousness (or self-awareness) is a subjective state.


It's not meaningless to inquire if other things have subjective experiences or not.

Quoting fishfry
Isn't that just an irrational prejudice? They used to say the same about certain ethnic groups. It's the same argument about dolphins. Just because they don't look like us doesn't mean they're not "considered worthy of the label of consciousness." What is your standard for that?


All I'm saying is a p-zombie is more human than a computer is. Ergo, we should expect there to be more in common between humans and p-zombies than between humans and computers, something contradicted by philosophy (p-zombie) and computer science (Turing test).

Kenosha Kid June 14, 2020 at 11:25 #423714
I had a recent conversation wherein the responses I was getting were just weird, random-seeming, unassociated to anything I was saying, which made me wonder...

If many humans cannot pass the Turing test, is it really a good test?
A Raybould June 14, 2020 at 15:34 #423770
Reply to TheMadFool

Quoting TheMadFool
The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally.


Firstly, just to be clear, and as you say in your original question, p-zombies are imagined as not merely behaviorally indistinguishable from humans, but entirely physically indistinguishable (and therefore beyond scientific investigation - and if they could be studied philosophically, I don't think anyone, not even Chalmers, has explained how.)

Secondly, I don't think the Turing test should be considered as the only or ultimate test for consciousness - it was merely Turing's first shot at such a test (and, given that, it has endured surprisingly well.) For the purpose of this discussion, we can use Turing 's version to stand in for any such test, so long as we don't get hung up on details specific to its particular formulation.

I assume that you think other people are conscious, but on what is your belief grounded? Is it because they behave in a way that appears conscious? Or, perhaps, is there an element of "they are human, like me, and I am conscious?"

If you are going to throw out all behavioral evidence, in the case of AI, on account of p-zombies, then you would be inconsistent if you did not also throw it out in the case of other people. If you make an exception for other people because they are human, then you would be literally begging Chalmers' 'hard question of consciousness'. What does that leave? If you have another basis for believing other people are conscious, what is it and why would that not work for AIs? Suppose we find an enclave of Neanderthals, Homo Erectus, or space aliens - how would you judge if they are p-zombies?

This, I suspect, is what Dennett is alluding to when he says "we are all p-zombies" - he sees no reason to believe that we have these extra-physical attributes that p-zombies would lack.

Returning to your original question, you raise an interesting point: why should p-zombies not be considered conscious? After all, they were conceived explicitly to be indistinguishable from conscious entities. That they are allegedly lacking something that a philosopher says an entity must have, to be conscious, is not much of an argument; the philosopher might simply have an inaccurate concept of what consciousness is and requires.

Putting that aside, there is a third option to be considered: that p-zombies are ultimately an incoherent concept. When we look at how strange, ineffable, unique and evidence-free a concept Chalmers had to come up with in order to defeat physicalism, how selective he had to be in what he chooses to bless with being conceivable, in order to get there, and the highly doubtful leap he makes from conceivability to possibility (I can conceive of the Collatz conjecture being true and it being false, but only one of these is possible), I am simply going to apply Occam's razor to p-zombies, at least until a better argument for their possibility comes along.
TheMadFool June 14, 2020 at 16:27 #423778
Quoting A Raybould
I don't think the Turing test should be considered as the only or ultimate test for consciousness - it was merely Turing's first shot at such a test (and, given that, it has endured surprisingly well.)


What test do you propose? Any ideas?

Quoting A Raybould
If you are going to throw out all behavioral evidence, in the case of AI, on account of p-zombies, then you would be inconsistent if you did not also throw it out in the case of other people. If you make an exception for other people because they are human, then you would be literally begging Chalmers' 'hard question of consciousness'.


Yes, that is the sticking point. I don't see a light at the end of this tunnel.

Quoting A Raybould
That they are allegedly lacking something that a philosopher says an entity must have, to be conscious, is not much of an argument; the philosopher might simply have an inaccurate concept of what consciousness is and requires.


What, according to you, is an "accurate" concept of consciousness?

Quoting A Raybould
that p-zombies are ultimately an incoherent concept.


Why is it incoherent? Quoting A Raybould
I am simply going to apply Occam's razor to p-zombies, at least until a better argument for their possibility comes along.


How and where is Occam's razor applicable?
fishfry June 15, 2020 at 02:01 #423936
Quoting TheMadFool
I realize that AI = artificial intelligence and not consiousness.


I think we're agreed on that.

I jumped into this thread based on only one phrase from one post without reading the rest. I only posted to get out of my system pretty much everything I knew about the subject. I have no idea what the answers are to the question of consciousness and machines; but I do think I have a fair grasp of the questions, at both the technical and philosophical level.

So I said my piece, and if it's not clear, I'm not espousing or even expressing any kinds of opinions about anything. If you disagree with anything I write, that's perfectly ok. I disagree with a lot of it too. I probably won't engage though. I literally said, at a high level, everything I know abut the philosophy machine intelligence in my first post.

Quoting TheMadFool

Firstly, even if that's the case, we still have the problem of inferring things about the mind - intelligence, consciousness, etc. - from behavior alone. The inconsistency here is that on one hand (AI) behavior is sufficient to conclude that a given computer has a mind (intelligence-wise) and on the other hand, p-zombies, it isn't (p-zombies have no minds).


Yes ok. I'm not agreeing or disagreeing or having an opinion about this, beyond what I've already said. I perfectly well agree with your analysis of the problem.

Quoting TheMadFool

Secondly, my doubts notwithstanding, intelligence seems to be strongly correlated with consciousness - the more intelligent something is, the more capacity for consciousness.


AHA! But ... why do you say that? Until you give a rational reason for WHY you believe that to be the case, I regard it as an entirely bio-centric prejudice on your part. Meat chauvinism again. So on that point, I am pushing back a bit on your ideas. I want to know what is the rational reason we think that intelligence must correlate with consciousness? Other than that it's how our mind works?

Quoting TheMadFool

In addition, and more importantly, aren't computers more "intelligent" in the sense of never making a logical error? That is, Turing had something else in mind regarding artificial intelligence - it isn't about logical thinking, which we all know for a fact even run-of-the-mill computers can beat us at.


No, Gödel and Turing decisively delineated the hard limitations of what can be computed. The core of the argument that humans do something machines can't do is that WE can solve noncomputable problems. Sir Roger Penrose believes consciousness is not a computation. I personally believe consciousness is not a computation.

Computers aren't intelligent at all. They're dumb as rocks. In fact you can implement a computer with a bunch of rocks painted white on one side and black on the other. You can sit there flipping rock-bits according to programs, and what you are doing is computing. Computers don't know the first thing about anything. That's my opinion. Others have different opinions. That's ok by me.
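To make the rock-flipping point concrete, here is a toy sketch in Python (the names and setup are illustrative, not any real system): a half-adder computed by nothing but mechanical rules over two-state "rocks".

```python
# Two-state "rocks": False = white side up, True = black side up.
# The rules below flip rocks mechanically; the "computer" never
# knows it is adding numbers.

def half_adder(rock_a: bool, rock_b: bool) -> tuple[bool, bool]:
    """Return (sum_bit, carry_bit) by purely mechanical rules."""
    sum_bit = rock_a != rock_b      # XOR: result is black iff exactly one input is black
    carry_bit = rock_a and rock_b   # AND: carry only if both inputs are black
    return sum_bit, carry_bit

print(half_adder(True, True))  # 1 + 1 -> (False, True), i.e. binary 10
```

Chain enough of these flips together and you have a full computer, which is exactly the sense in which the rocks "compute" without knowing anything.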

Quoting TheMadFool

What do you think this something else is if not consciousness? Consciousness is the only aspect of the mind that's missing from our most advanced AI, no? The failure of the best AI to pass the Turing test is not because they're not intelligent but because they're not, or are incapable of mimicking, consciousness.

Funny you said "mimicking" consciousness instead of implementing it. As in faking it. Pretending to be conscious.

I think we're each using a slightly different definition of consciousness. I think it's purely subjective and can never be tested for. I gather you believe the opposite, that there are observable behaviors that are reliable indicators of consciousness. We my need to agree to disagree here.

Quoting TheMadFool
In short, the Turing test, although mentioning only intelligence, is actually about consciousness.


Nonsense. Turing never used the word. You're adding your own interpretation to what's not actually there. Do you know anything about how chatbots work? People have a tendency to think dumb chatbots are human. That means nothing.

Quoting TheMadFool

It's not meaningless to inquire if other things have subjective experiences or not.


It's not meaningless, it's just damned hard to investigate! I heard one thing they do is the "mirror test." If an animal or a fish or whatever can recognize its own reflection, we think it's got some kind of inner life going on.

I don't disrespect or downplay the importance of the question.

I do oppose overly glib and strongly asserted answers. Truth is nobody knows.


Quoting TheMadFool

All I'm saying is a p-zombie is more human than a computer is.


As I mentioned I haven't read the rest of the thread and wasn't really talking about p-zombies except as a shorthand for Turing machines passing as humans among us in society. Essentially the same idea as the standard meaning of "something that looks and acts exactly like a normal person, but has no inner life at all."

For purpose of anything I'm saying, these two ideas of p-zombies can be taken as the same. I'm not really up on any fine points of difference. A standard p-zombie looks human but has no inner life. A Turing machine operating a perfectly lifelike robot body would in my opinion BE a p-zombie; but I guess you'd say that if it behaves with intelligence, it must be conscious.

Quoting TheMadFool

Ergo, we should expect there to be more in common between humans and p-zombies than between humans and computers,


I'm afraid I don't see the distinction between p-zombies and computers. In my opinion a program running on a computer might possibly implement a true p-zombie -- something that behaves perfectly like a human; but that has no inner life whatsoever.

If all you mean is that the p-zombies are wetware, why do you have such a meat prejudice? Where does it say that being made of meat is better than being made of vegetable matter? It's meat chauvinism: believing that meat is superior because we are meat. That is not a rational argument.


Quoting TheMadFool

something contradicted by philosophy (p-zombie) and computer science (Turing test).


I am not aware of this interpretation, but I don't know much about p-zombies. It seems to me that a lifelike chatbot is exactly what philosophers mean by a p-zombie: a thing that behaves like a human but isn't and that has no inner life. It just operates on a program. Like a computer.

I see p-zombies and computer programs as being very closely related. Perhaps you can educate me as to what I'm missing about p-zombies.

A Raybould June 15, 2020 at 13:11 #424064
Reply to TheMadFool Quoting TheMadFool
What test do you propose? Any ideas?


I once half-jokingly suggested that devising a test that we find convincing should be posed as an exercise for the AI. The only reason I said 'half-jokingly' is that it would have a high false-negative rate, as no human has yet completed that task to everyone's satisfaction!

I do not think Turing, or anyone else until much later, anticipated how superficially convincing a chatbot could be, and how effectively a machine could fake the appearance of consciousness by correlating the syntactical aspects of a vast corpus of human communication. By limiting his original test to a restricted domain - gender roles and mores - Turing made his test unnecessarily easy to defeat by these means, and subsequent variations have extended the scope of questioning. These tests could be further improved by focusing on what a machine understands, rather than what it knows.

While there is a methodological difficulty in coming up with a test that defeats all faking, this is not the same problem as p-zombies allegedly pose, as that takes the form of an unexplained metaphysical prohibition on AI ever being 'truly' conscious (by 'unexplained', I mean that, in Chalmers' argument, we merely have to conceive of p-zombies, without giving any thought to how they might be so.)

Quoting TheMadFool
What, according to you, is an "accurate" concept of consciousness?


I don't know, any better than the next person, what consciousness is, and if anyone had come up with a generally-accepted, predictive, falsifiable explanation, we would no longer be interested in the sort of discussion we are having here! For what it's worth, I strongly suspect that, for example, theories linking consciousness to quantum effects in microtubules are inaccurate. More generally, I think that any argument insisting that physicalism must require a purely reductive explanation of conscious states in terms of brain states, without considering the possibility that the former may be emergent phenomena arising from the latter, is also inaccurate.

Quoting TheMadFool
Why is it incoherent?


I am not saying that p-zombies are definitely an incoherent concept, though I suspect they are - that it will turn out that it is impossible to have something that appears to be as conscious as a human without it having internal states analogous to those of humans.

Chalmers defends p-zombies as being "logically conceivable", but that is a very low bar - it means only that the concept is not self-contradictory (as 'male vixen' is, to quote one of Chalmers' examples), and that we don't know of any fact that refutes it - but that is always at risk of being overturned by new evidence, as has happened to many other concepts that were once seen as logically conceivable, such as phlogiston (actually, some form of metaphysical phlogiston theory might still be logically conceivable, but no-one would take seriously an argument based on that.)

Quoting TheMadFool
How and where is Occam's razor applicable?


Chalmers is looking forward to a time when neuroscience has a thorough understanding of how brains work, and is trying to say that no such explanation can be complete - that there is something non-physical or magical going on as well. He cannot say what that is or how it works, or offer any way for us to answer those questions, but he insists that it must be there. It is for exactly this sort of unfalsifiable claim that Occam's razor was invented (even though the concept of falsifiability was not explicitly recognized until centuries later!)
A Raybould June 15, 2020 at 13:53 #424075
Reply to fishfry Quoting fishfry
I see p-zombies and computer programs as being very closely related. Perhaps you can educate me as to what I'm missing about p-zombies.


Chalmers' canonical p-zombie argument is a metaphysical one that is not much concerned with computers or programs, even though they are often dragged into discussions of AI, often under the misapprehension that chatbots and such are examples of p-zombies. The argument is subtle and lengthy, but I think this is a good introduction.
TheMadFool June 15, 2020 at 14:29 #424084
Quoting A Raybould
I once half-jokingly suggested that devising a test that we find convincing should be posed as an exercise for the AI


That solves the mystery of who or what we consider to be more "intelligent"? I think, despite the who's who of computer science constantly reminding us that computers are not intelligent, people are still under the impression that computers are. I wonder why? Even you, presumably a person in the know about the truth of computer "intelligence", half-thought they were suited to a task humans have difficulty with.

Quoting A Raybould
These tests could be further improved by focusing on what a machine understands, rather than what it knows.


I see nothing special in understanding. For the most part it involves formal logic, something computers can do much faster and much better. I may be wrong, of course, and you'll have to come up with the goods showing me how and where.

Quoting A Raybould
I am not saying that p-zombies are definitely an incoherent concept, though I suspect they are - that it will turn out that it is impossible to have something that appears to be as conscious as a human without it having internal states analogous to those of humans.


If I may be so bold as to hazard a "proof": The idea that certain behavior is adequate grounds to infer consciousness is, I believe, an inductive argument. Each person is aware that s/he's conscious and that this state has a set of behavioral patterns associated with it. S/he then observes other people conduct themselves in a similar fashion and then an inference of consciousness is made. Nevertheless, this association between consciousness experienced in first person and some set of behaviors is not that of necessity (no deductive proof of it exists) but is that of probability (an inductive inference made from observation). Ergo, in my humble opinion, p-zombies are conceivable and possible to boot.





fishfry June 16, 2020 at 02:47 #424201
Quoting A Raybould
Chalmers' canonical p-zombie argument is a mataphysical one that is not much concerned with computers or programs, even though they are often dragged into discussions of AI, often under the misapprehension that chatbots and such are examples of p-zombies. The argument is subtle and lengthy, but I think this is a good introduction.


Can you summarize the main point please? What is a p-zombie if it's not a TM (or some alternate mechanism) that emulates a human without being self-aware? I confess that I have little inclination to dive into a subtle and lengthy piece by Chalmers. My limitation, I admit. The question I asked is simple enough though. What's a p-zombie if not a TM or super-TM (TM plus some set of oracles) that emulates a human without being self-aware?

ps -- Feeling guilty at my laziness for not clicking on your link, I clicked. And it's NOT an article by Chalmers at all. I skimmed quickly but didn't read it. I'd still like a simple answer to how a p-zombie differs from "a thing that is indistinguishable from a human but lacks self-awareness," such as a TM in a nice suit.
A Raybould June 16, 2020 at 10:49 #424294
Reply to TheMadFool Quoting TheMadFool
That solves the mystery of who or what we consider to be more "intelligent"?


No, it is intended to be what you asked for, an alternative to the Turing test, and the purpose of that test is to figure out if a given computer+program is intelligent.

Quoting TheMadFool
Even you, presumably a person in the know about the truth of computer "intelligence", half-thought they were suited to a task humans have difficulty with.


I don't see where you got that from. I am writing about a hypothetical future computer that at least looks like it might be intelligent, just as Turing was when he presented his test.

Quoting TheMadFool
I see nothing special in understanding. For the most part it involves formal logic, something computers can do much faster and much better.


If that is so, then how come the most powerful and advanced language-learning program has a problem with "common-sense physics", such as "If I put cheese in a refrigerator, will it melt?"

Consider Einstein's equation E = mc^2. A great many people know it, but only a tiny fraction of those understand how it arises from what was known of physics at the beginning of the 20th. century. Einstein did not get there merely (or even mostly) by applying formal logic; he did so through a deep understanding of what that physics implied. A computer program, if lacking such insight, could not find its way to that result: for one thing, the problem is vastly too combinatorially complex to solve by exhaustive search, and for another, it would not understand that this formula, out of the huge number it had generated, was a particularly significant one.

Quoting TheMadFool
Nevertheless, this association between consciousness experienced in first person and some set of behaviors is not that of necessity (no deductive proof of it exists) but is that of probability (an inductive inference made from observation). Ergo, in my humble opinion, p-zombies are conceivable and possible to boot.


For one thing, you seem to be making an argument that they are conceivable, but the controversial leap from conceivable to possible is not really argued for here; it is just asserted as if it followed automatically: "...and possible to boot."

More interestingly, if I am following you here, you do consider it possible that other people are p-zombies. That is very interesting, because Chalmers hangs his argument against physicalism on the assumption that they are not, and I know of no counter-argument that challenges this view (even when Dennett says, apparently somewhat tongue-in-cheek, that "we are all p-zombies", I think his point is that he thinks Chalmers' distinction between p-zombies and us (the non-physical element that they lack) is illusory.)

Having said that, I have three follow-up questions: firstly, if other people could be p-zombies, do you think that you are different, and if so, why? Secondly, if it is possible that other people are p-zombies, why would it matter that it would be possible for a p-zombie to pass the Turing test? Thirdly, if it is possible that other people are p-zombies, why did we evolve a highly-complex, physical state machine called the human brain? After all, if the p-zombie hypothesis is correct, our minds are independent of the physical brain. The most parsimonious hypothesis here seems to be that the p-zombie hypothesis is false, and our minds actually are a result of what our physical brains do.



A Raybould June 16, 2020 at 11:36 #424308
Reply to fishfry
Indeed, that article is not by Chalmers; is that a problem? Is reading Archimedes' words the only way to understand his principle?

If you want to read Chalmers' own words, he has written a book and a series of papers on the issue. As you did not bother to read my original link, I will not take the time to look up these references; you can find them yourself easily enough if you want to (and they may well be found in that linked article). I will warn you that you will find the papers easier to follow if you start by first reading the reference I gave you.

Quoting fishfry
I'd still like a simple answer to how a p-zombie differs from "a thing that is indistinguishable from a human but lacks self-awareness," such as a TM in a nice suit.


That is a different question than the one you asked, and I replied to, earlier. The answer to this one is that a TM is always distinguishable from a human, because neither a human, nor just its brain, nor any other part of it, is a TM. A human mind can implement a TM, to a degree, by simulation (thinking through the steps and remembering the state), but this is beside the point here.

If you had actually intended to ask "...indistinguishable from a human when interrogated over a teletype" (or by texting), that would be missing the point that p-zombies are supposed to be physically indistinguishable from humans (see the first paragraph in their Wikipedia entry), even when examined in the most thorough and intrusive way possible. This is a key element in Chalmers' argument against metaphysical physicalism.

As a p-zombie is physically identical to a human (or a human brain, if we agree that no other organ is relevant), then it is made of cells that work in a very non-Turing, non-digital way. Chalmers believes he can show that there is a possible world identical to ours other than it being inhabited by p-zombies rather than humans, and therefore that the metaphysical doctrine of physicalism - that everything must necessarily be a manifestation of something physical - is false.

Notice that there is no mention of AI or Turing machines here. P-zombies only enter the AI debate through additional speculation: If p-zombies are possible, then it is also possible that any machine (Turing or otherwise), no matter how much it might seem to be emulating a human, is at most emulating a p-zombie. As the concept of p-zombies is carefully constructed so as to be beyond scientific examination, such a claim may be impossible to disprove, but it is as vulnerable to Occam's razor as is any hypothesis invoking magic or the supernatural.

TheMadFool June 17, 2020 at 00:44 #424532
Quoting A Raybould
If that is so, then how come the most powerful and advanced language-learning program has a problem with "common-sense physics", such as "If I put cheese in a refrigerator, will it melt?"


How do you think a human processes this question?

Quoting A Raybould
A great many people know it, but only a tiny fraction of those understand how it arises from what was known of physics at the beginning of the 20th. century. Einstein did not get there merely (or even mostly) by applying formal logic


Is it possible to get to E = mc^2 without logic?

Quoting A Raybould
problem is vastly too combinatorially complex to solve by exhaustive search


What do you mean by that? Do you mean there's a method to insight? Or are insights just lucky guesses - random in nature and thus something computers are fully capable of?

Quoting A Raybould
For one thing, you seem to be making an argument that they are conceivable, but the controversial leap from conceivable to possible is not really argued for here,


What's the difference between conceivable and possible?

A Raybould June 17, 2020 at 15:06 #424642
Reply to TheMadFool
Quoting TheMadFool
How do you think a human processes this question?


A person who does not just know the answer might begin by asking herself questions like "what does it mean for cheese to melt?" "what causes it to do so?" "what does a refrigerator do?" and come to realize that the key to answering the question posed may be reached through the answers to two subsidiary questions: what is the likely state of the cheese initially, and how is its temperature likely to change after being put into a refrigerator?

At this point, I can imagine you thinking something like "that's just a deductive logic problem", and certainly, if you formalized it as such, any basic solver program would find the answer easily. The part that is difficult for AI, however, is coming up with the problem to be solved in the first place. Judging by the performance of GPT-3, it would likely give good answers to questions like "what causes melting" and "what is a refrigerator?", but it is unable to put it all together to reach an answer to the original question.
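To make the point concrete, here is a toy sketch of what "any basic solver would find the answer easily" means, once the problem has been formalized. The facts and rules are invented for illustration; getting from the English question to this formalization is precisely the part current AI struggles with.

```python
# Forward chaining over hand-written facts and rules: repeatedly fire
# any rule whose premises are all known, until nothing new is derived.

facts = {"cheese_is_solid", "refrigerator_cools"}
rules = [
    ({"cheese_is_solid", "refrigerator_cools"}, "cheese_stays_below_melting_point"),
    ({"cheese_stays_below_melting_point"}, "cheese_does_not_melt"),
]

def forward_chain(known: set, rules: list) -> set:
    """Derive everything reachable from the known facts."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print("cheese_does_not_melt" in forward_chain(set(facts), rules))  # True
```

The deduction itself is trivial; everything interesting happened before the first line of this program was written.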

It gets more interesting when we consider a slightly more difficult problem: for "cheese", substitute the name of a cheese that the subject has never heard of (there are some candidates here). There is a good chance that she will still come up with the right answer, even if she does not suspect that the object is a form of cheese, by applying suitable general principles and some inductive thinking. Current AI, on the other hand, will likely be flummoxed.


Quoting TheMadFool
Is it possible to get to E = mc^2 without logic?


That is beside the point. To think that the use of logic in getting to E = mc^2 somehow implies that, once you can get a machine to do logic, there's "nothing special" in getting it to understand things, is, ironically, a failure to understand the role (and limits) of logic in understanding things.

Ultimately, you are arguing against the straightforward empirical fact that current AI has trouble understanding the information it has.


Quoting TheMadFool
Do you mean there's a method to insight? Or are insights just lucky guesses - random in nature and thus something computers are fully capable of?


Neither of the above. There is a method to solving certain problems in formal logic, that does a breadth-first search through the tree of all possible derivations from the given axioms, but that is nothing like insight: for one thing, there is no semantic content to the formulae themselves. (One of the first successes in AI research, Logic Theorist, proved many of the early theorems from Principia Mathematica, and as doing so is considered a sign of intelligence in people, some thought that AI was close to being a solved problem. They were mistaken.)

What I was thinking is this: if you formalized the whole of classical physics, and started a program such as the above on discovering what it could deduce, the chances that it would come up with E=mc^2 before the world comes to an end are beyond-astronomically small (even more importantly, such a program would not understand the importance of that particular derivation, but that is a separate issue.) The reason for this is the combinatorial complexity of the problem - the sheer number of possible derivations and how fast they grow at each step (even the Boolean satisfiability problem 3-SAT is NP-complete.)
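A back-of-the-envelope sketch of that infeasibility: the branching factor and search rate below are invented, illustrative numbers, not measurements of any real theorem prover.

```python
# If each proof state allows even a modest number of next inference
# steps, the number of derivation paths grows exponentially with depth.

branching = 10          # possible inference steps per state (hypothetical)
depth = 50              # length of the derivation being searched for
paths = branching ** depth

per_second = 10 ** 9    # derivations examined per second (hypothetical)
years = paths / per_second / (3600 * 24 * 365)
print(years > 10 ** 30)  # True: vastly longer than the age of the universe
```

And a real derivation of E = mc^2 from formalized physics would be far deeper than fifty steps.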

Actually, I have since realized that even this would not be successful in getting to E = mc^2: to get there, Einstein had to break some 'laws' of physics, treat them as approximations, and substitute more accurate alternatives that were still consistent with everything that had been empirically determined. That's not just logic at work.

Lucky guessing has the same problem, and anyone dismissing Einstein's work as a lucky guess just does not understand what he did. There is something more to understanding than any of this, and the fact that we haven't nailed it down yet is precisely the point that I am making on this tangential issue of whether understanding remains a tough problem for AI.


Quoting TheMadFool
What's the difference between conceivable and possible?


Consider the example I gave earlier: I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible. This situation exists because it is either true or it is false, but so far, no-one has found a proof either way.
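For the curious, the iteration behind the conjecture is easy to state in code (a quick sketch; the function name is mine). Note that no amount of checking small cases settles the conjecture, which is exactly why both its truth and its falsity remain conceivable even though only one is possible.

```python
# Collatz iteration: n -> n/2 if n is even, 3n+1 if n is odd.
# The conjecture claims every positive integer eventually reaches 1.

def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False  # gave up; this proves nothing about the conjecture

print(all(reaches_one(n) for n in range(1, 1000)))  # True
```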

In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there's an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility - when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We should either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here.)

If the two words were synonymous, Chalmers would not have spent any time in making the distinction and in attempting to get from the former to the latter, and his opponents would not be challenging that step.
TheMadFool June 18, 2020 at 14:01 #424895
Quoting A Raybould
At this point, I can imagine you thinking something like "that's just a deductive logic problem", and certainly, if you formalized it as such, any basic solver program would find the answer easily


Quoting A Raybould
The part that is difficult for AI, however, is coming up with the problem to be solved in the first place. Judging by the performance of  GPT-3, it would likely give good answers to questions like "what causes melting" and "what is a refrigerator?", but it is unable to put it all together to reach an answer to the original question.


Your ideas are interesting but one thing we agree on is the necessity for logic. Now, the question that pops into my mind is this: what does anyone mean when s/he says something like, "I understand."? To me, understanding is just a semantics game that's structured with syntax. The latter, syntax, doesn't seem to be an issue with computers; in fact computers are veritable grammar nazis. As for the former, semantics, what's the difficulty in associating words to their referents in a computer's memory? That's exactly what humans do too, right? The question for you is this: is there a word with a referent that's impossible to be translated into computer-speak? I would like to know very much, thank you.

Quoting A Raybould
The reason for this is the combinatorial complexity of the problem


Quoting A Raybould
That's not just logic at work.


You talk of "combinatorial complexity" and the way you speak of Einstein's work suggests to me that you think E=mc^2 to be nothing short of a miracle. May I remind you that Newton once said that he achieved what he did only by standing on the shoulders of giants. There's a long line of illustrious predecessors that paves the way to many scientific discoveries in my opinion. You also seem to ignore serendipity - times when people simply get lucky and make headway with a problem. You seem to be of the view that there's a method behind sheer luck, as if to say there's a way to influence the outcome of a die when in fact it can't be done.

Quoting A Raybould
I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible.


Just curious here. I feel that I've not understood you as much as I'd have liked but bear with me a while...

If conceivability and possibility are different then the following are possible and I'd like some examples of each:

1. There's something conceivable that's impossible

2. There's something possible that's inconceivable

A Raybould June 18, 2020 at 16:46 #424957
Reply to TheMadFool

Quoting TheMadFool
To me, understanding is just a semantics game that's structured with syntax.


I have no idea what that means. I hope that it means more than "understanding is semantics with syntax", which is, at best, a trite observation that does not explain anything.

Searle says that syntax can not give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either.


Quoting TheMadFool
Is there a word with a referent that's impossible to be translated into computer-speak?


This is beside the point, as translation does not produce meaning, whether it is into "computer-speak" or anything else. Translation sometimes requires understanding, and it is specifically those cases where current machine translation tends to break down.

You have not, so far, addressed a point that is relevant here: the problem current AI has with basic, common-sense understanding is not in solving logic problems, but in formulating the right problem in the first place.

If you really think you have solved the problem of what it takes to understand something, you should publish (preferably in a peer-refereed journal), as this would be quite a significant advance in the study of the mind. At the very least, perhaps you could address an issue that I have raised twice now: if, as you say, there's nothing special to understanding, and semantics is just associating words to their referents in a computer's memory, how come AI is having a problem with understanding, as is stated in the paper I linked to? Do you think all AI researchers are incompetent?


Quoting TheMadFool
You talk of "combinatorial complexity" and the way you speak of Einstein's work suggests to me that you think E=mc^2 to be nothing short of a miracle.


Well I don't, so I think I can skip your arguments against it - though not without skimming them to see if you made any point that stands on its own. In doing so, I see that you put quotes around combinatorial complexity, as if you thought it was beside the point, but it is very much to the point that humans achieve results that would be utterly infeasible if the mind worked like current theorem solving programs.


Quoting TheMadFool
If conceivability and possibility are different then the following are possible and I'd like some examples of each:

1. There's something conceivable that's impossible

2. There's something possible that's inconceivable


They may be possible, but it is certainly not necessary that there must be something possible that's inconceivable - and if there is, then neither I, nor you, nor anyone else is going to be able to say what it is. On the other hand, in mathematics, there are non-constructive proofs that show something is so without being able to give any examples, and it seems conceivable to me that in some of these cases, no example ever could be found. If this is so, then whether these things should be regarded as inconceivable or not strikes me as a rather subtle semantic issue.

I have twice given you an example of the former: If the Collatz conjecture is true, then that it is false is conceivable (at least until a proof is found) but not possible, and vice-versa. It has to be either one or the other.

By the way, this example is a pretty straightforward combination of syntax, semantics and a little logic, so how do you account for your difficulty in understanding it?



TheMadFool June 18, 2020 at 17:39 #424963
Quoting A Raybould
I have no idea what that means. I hope that it means more than "understanding is semantics with syntax", which is, at best, a trite observation that does not explain anything.

Searle says that syntax can not give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either.


The problem here is simple: what does it mean to understand? Computers are symbol manipulators, and that means whatever can be symbolized is within the reach of a computer. If you believe there's more to understanding than symbol manipulation, you'll have to make that case, and I didn't find anything in your posts that does. In short, I contend that even human understanding is basic symbol manipulation, so the idea that it's somehow so special that computers can't handle it is, as far as I'm concerned, not true.

Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish.
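To make the "match-the-following" picture concrete, here is a toy sketch (the dictionary entries are invented; whether stored descriptions like these count as genuine referents, rather than just more symbols, is of course the point in dispute):

```python
# A toy of the "match-the-following" view of semantics:
# each word is matched to a stored description of its referent.
# (Invented data, purely for illustration.)
referents = {
    "dogs": "a particular kind of animal",
    "eat":  "the act of consuming food",
    "meat": "animal flesh used as food",
}

def interpret(sentence: str) -> list[tuple[str, str]]:
    """Match each word of the sentence to its stored 'referent' entry."""
    return [(word, referents[word]) for word in sentence.split()]

print(interpret("dogs eat meat"))
```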

Quoting A Raybould
This is beside the point, as translation does not produce meaning, whether it is into "computer-speak" or anything else. Translation sometimes requires understanding, and it is specifically those cases where current machine translation tends to break down.


How do you think machine translations work? Each word in a language is matched with another word in a different language. Language translation, as you rightly pointed out, is for humans an exercise in semantics and yet, despite some mistranslations, computers do a pretty good job. Ask yourself the question: what exactly does understanding semantics mean if a machine, allegedly incapable of semantics, can do as good a job as a human translator of languages?
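As a caricature of "each word in a language is matched with another word in a different language" (a made-up mini-lexicon; real machine translation systems are statistical or neural and do far more than table lookup):

```python
# Word-for-word translation as pure matching (invented mini-lexicon;
# real MT systems are far more sophisticated than this).
en_to_fr = {"dogs": "les chiens", "eat": "mangent", "meat": "la viande"}

def translate(sentence: str) -> str:
    """Replace each word by its matched counterpart, if any."""
    return " ".join(en_to_fr.get(w, f"<{w}?>") for w in sentence.split())

print(translate("dogs eat meat"))        # matching works for this sentence
print(translate("dogs eat humble pie"))  # idioms defeat pure matching
```

Even this caricature shows both sides of the dispute: matching gets you surprisingly far, and it also breaks down exactly where understanding seems to be required.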

Quoting A Raybould
if, as you say, there's nothing special to understanding, and semantics is just associating words to their referents in a computer's memory, how come AI is having a problem with understanding, as is stated in the paper I linked to? Do you think all AI researchers are incompetent?


While I'm not claiming I'm correct in all that I've said, I've heard that even the very best experts can and do make mistakes.

Quoting A Raybould
They may be possible, but it is certainly not necessary that there must be something possible that's inconceivable - and if there is, then neither me, you nor anyone else is going to be able to say what it is. On the other hand, in mathematics, there are non-constructive proofs that show something is so without being able to give any examples, and it seems conceivable to me that in some of these cases, no example ever could be found.

I have twice given you an example of the former: If the Collatz conjecture is true, then that it is false is conceivable (at least until a proof is found) but not possible, and vice-versa. It has to be either one or the other. Whether these things would be regarded as inconceivable or not strikes me as a rather subtle semantic issue.

By the way, this example is a pretty straightforward combination of syntax, semantics and a little logic, so how do you account for your difficulty in understanding it?


Kindly furnish the definitions of "conceivable" and "possible". I'd like to see how they differ, if you don't mind.
bongo fury June 18, 2020 at 19:11 #424984
Quoting TheMadFool
Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish.


Quoting bongo fury
I wish I could locate the youtube footage of Searle's wry account of early replies to his vivid demonstration (the chinese room) that so-called "cognitive scripts" mistook syntax for semantics. Something like, "so they said, ok we'll program the semantics into it too, but of course what they came back with was just more syntax".


I'll have another rummage.

I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world? Or else how ever did the "linking" seem to you something simple and easily accomplished, by a computer, even??? Weird.
TheMadFool June 18, 2020 at 19:50 #425000
Quoting bongo fury
I'll have another rummage.

I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world? Or else how ever did the "linking" seem to you something simple and easily accomplished, by a computer, even??? Weird.


Yes, I did consider that. Referents can be almost anything, from physical objects to abstract concepts. Most physical objects have perception-based qualities to them. Take red wine for example - it's red, has a certain taste, is stored in bottles in a cellar, etc. The set of qualities that define what wine is is linked to the word "wine" in the human mind - this is the essence of semantics, and computers, in my opinion, are up to the task.

A similar argument can be made for concepts.
bongo fury June 18, 2020 at 20:08 #425006
Quoting TheMadFool
I expect that you, like "they" in the story, haven't even considered that "referents" might have to be actual things out in the world?
— bongo fury

Yes, I did consider that.


Ok...

Quoting TheMadFool
Referents can be almost anything, from physical objects to abstract concepts.


Ah, so after due consideration you decided not. (The referents don't have to be things out in the world.) This was Searle's frustration.

You can be sure you are in the respectable company of his critics at the time. Also of a probable majority of philosophers and linguists throughout history, to be fair.
TheMadFool June 18, 2020 at 20:14 #425008
Quoting bongo fury
Ah, so after due consideration you decided not. (The referents don't have to be things out in the world.) This was Searle's frustration.


They don't have to be but they can be, no? I hope you reply soon to this query.
bongo fury June 18, 2020 at 20:28 #425015
Quoting TheMadFool
I hope you reply soon to this query.


Why? A quick reply isn't usually a thoughtful one. In my case at least. Actually, I think the site should institute a minimum time between replies, as well as a word limit.

Quoting TheMadFool
They don't have to be but they can be, no?


I don't think I've been understood, here. (Take more time?) I was trying to explain why @A Raybould was non-plussed by your statements about semantics. See also @InPitzotl's recent posts here.
TheMadFool June 18, 2020 at 21:43 #425034
Reply to bongo fury Well, I don't know why people make such a big deal of understanding - it's very simple. Referring to InPitzotl's example of chips and dip, I don't see any difficulty at all - a computer with the right programming can recognize these items if the necessary visual and other data have been linked to the words "chips" and "dip", i.e. if the words have been accurately matched to their referents.
bongo fury June 18, 2020 at 21:55 #425037
Quoting TheMadFool
Well, I don't know why people make such a big deal of understanding


It's about

Quoting TheMadFool
referents


They (and I) mean things out there, you mean just more words/data.
TheMadFool June 19, 2020 at 01:28 #425124
Reply to bongo fury What's the problem with referents? The clear liquid that flows in rivers and the oceans that at times becomes solid and cold, and at other times is invisible vapor is the referent of the word "water".
A Raybould June 19, 2020 at 02:24 #425140
Reply to TheMadFool Quoting TheMadFool
Computers are symbol manipulators and that means whatever can be symbolized, is within the reach of a computer.


"Within the reach" avoids precision where precision is needed. What do you mean, here?


Quoting TheMadFool
If you believe there's more to understanding than symbol manipulation...


Whether it is symbol manipulation is beside the point. What's at issue here is my statement that "[Turing-like] tests could be further improved by focusing on what a machine understands, rather than what it knows" and your reply that you don't see anything special in understanding. Being symbol manipulation does not automatically make it simple, and your explanations, which invoke symbol manipulation without showing what sort of manipulation, are just part of the reason for thinking that it is not.


Quoting TheMadFool
If you believe there's more to understanding than symbol manipulation...


That is a view I have not stated and do not hold. My position has consistently been that understanding is not a simple issue and that it remains a significant obstacle for AI to overcome. I have also taken the position that current logic solvers are not sufficient to give a machine the ability to understand the world, which should not be mistaken for a claim that no form of symbol manipulation could work. To be clear, my position on consciousness is that I suppose that a digital computer could simulate a brain, and if it did, it would have a mind like that of the brain being simulated.


Quoting TheMadFool
Understanding is simply a match-the-following exercise, something a computer can easily accomplish.


Please expand on 'match-the-following', as I cannot imagine any interpretation of that phrase that would lead to a computer being able to understand anything to the point where it would perform reasonably well on "common-sense physics" problems (in fact, perhaps you could work through the "if you put cheese in a refrigerator, will it melt?" example?)


Quoting TheMadFool
How do you think machine translations work?


You have taken hold of the wrong end of the stick here. I was replying to your question "Is there a word with a referent that's impossible to be translated into computer-speak?" by pointing out that it is irrelevant to the issue, because translation does not create or add meaning. In turn, it is also irrelevant to this point whether the translation is done by humans or machines: neither of them creates or adds meaning, which is delivered in the original text.


Quoting TheMadFool
Ask yourself the question: what exactly does understanding semantics mean if a machine, allegedly incapable of semantics, can do as good a job as a human translator of languages?


You are barking up the wrong tree here, precisely because translation does not modify the semantics of its input. To address the matter at hand, you would need an example that does demand understanding. We have that, at least to a small extent, in the common-sense physics questions discussed in the paper I linked to, and even here the performance of current AI is weak (note that this is not my assessment or that of critics; it is from the team which developed the program.) You have avoided addressing this empirical evidence against your claim that machine understanding is simple, until...

Quoting TheMadFool
I've heard that even the very best expert can and do make mistakes.


Really? Do you understand that, for this excuse to work, it would not take just one or two experts making a few mistakes; it would require the entire community to be mistaken all the time, never seeing the simple solution that you claim to have but have not yet explained?


Quoting TheMadFool
Kindly furnish the definitions of "conceivable" and "possible". I'd like to see how they differ, if you don't mind.


At last! Back to the main issue. I will start by quoting what I wrote previously:

Quoting A Raybould
In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there's an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility - when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We should either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here.)

If the two words were synonymous, Chalmers would not have spent any time in making the distinction and in attempting to get from the former to the latter, and his opponents would not be challenging that step.


To expand on that, one could hold that any sentence in propositional form is conceivable, as it is conceived of merely by being expressed (some people might exclude propositions that are false a priori, but a difficulty with that is that we don't always (or even often) know whether that is the case.)

In the context of modal arguments, of which the p-zombie argument is one, for the sentence to be possible, it must be true in a possible world. In modal logic, if you want a claim that something is possible to be accepted by other people, you either have to get them to accept it as an axiom, or you must derive it from axioms they have accepted.

I am not sure if the above is going to help, because the debate over whether Chalmers can go from conceivability to possibility is, in part, a debate over what, exactly, people have accepted when they accept the conceivability of p-zombies. What seems clear, however, is that neither side is prepared to say that they are the same.

By the way, your positions seem to be generally physicalist, except that you are troubled by p-zombies, which are intended to be anti-physicalist. AFAIK, this is quite an unusual combination of views.
bongo fury June 19, 2020 at 09:13 #425203
Quoting TheMadFool
What's the problem with referents?


Whether they are things out in the world, or merely more words referring to those things.

Quoting TheMadFool
The clear liquid that flows in rivers and the oceans that at times becomes solid and cold, and at other times is invisible vapor is the referent of the word "water".


Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description?
TheMadFool June 19, 2020 at 13:25 #425274
Quoting bongo fury
Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description?


The description consists of referents.
TheMadFool June 19, 2020 at 13:38 #425281
Quoting A Raybould
"Within the reach" avoids precision where precision is needed. What do you mean, here?


While the initial association of symbols may require human input, once the work is complete, a computer can use the databank just as humans do.

Quoting A Raybould
What's at issue here is my statement that "[Turing-like] tests could be further improved by focusing on what a machine understands, rather than what it knows" and your reply that you don't see anything special in understanding. Being symbol manipulation does not automatically make it simple, and your explanations, which invoke symbol manipulation without showing what sort of manipulation, are just part of the reason for thinking that it is not.


You insist that human understanding is not something a computer can do but what's the argument that backs this claim? I'd like to see an argument if it's all the same to you.

Quoting A Raybould
You are barking up the wrong tree here, precisely because translation does not modify the semantics of its input.


What I'm saying is very simple. Semantics is nothing more than associating words with something else - be it as concrete as a stone or as abstract as calculus. Associating two things is easily done by a computer and ergo, in my humble opinion, semantics can be handled by a computer.

To back up my position, I'll ask a question:

1. When you read the word "water" and understand it, what's going on in your head?

By the way, you haven't furnished the definitions for "conceivable" and "possible".
bongo fury June 19, 2020 at 14:09 #425296
Quoting TheMadFool
Yep. So what is it that a computer so easily (according to you) links to the word "water"? The referent you just described, or merely the description?
— bongo fury

The description consists of referents.


Ok, well to see "why people make such a big deal of understanding" you need to see that they are interested in how we link the word "water" to the water itself, and not merely to more words for water.

"Referent" usually refers to the designated object itself, not to other words, semantically related or not.
TheMadFool June 19, 2020 at 15:43 #425312
Quoting bongo fury
Ok, well to see "why people make such a big deal of understanding" you need to see that they are interested in how we link the word "water" to the water itself, and not merely to more words for water.

"Referent" usually refers to the designated object itself, not to other words, semantically related or not.


How do we do it, link the word "water" to the water itself, in your opinion?
bongo fury June 19, 2020 at 15:56 #425317
Quoting TheMadFool
How do we do it, link the word "water" to the water itself, in your opinion?


By learning to agree (or disagree) with other people that particular tokens of the word are pointed at particular instances of the object.

TheMadFool June 19, 2020 at 18:35 #425382
Quoting bongo fury
By learning to agree (or disagree) with other people that particular tokens of the word are pointed at particular instances of the object.


That's to say there is no meaning except in the sense of a consensus. What makes you think computers can't do that? Can't one computer use the same word-referent associations as another?
bongo fury June 19, 2020 at 19:21 #425405
Quoting TheMadFool
That's to say there is no meaning except in the sense of a consensus.


If you like. Is that an objection?

Quoting TheMadFool
What makes you think computers can't do that?


What, agree and disagree about where each other's words have 'landed', out in the world? If by computers you mean some future AI, then sure. This would no doubt be a few steps more advanced than, say, being able to predict where each other's ball has (actually) landed. Which I assume is challenging enough for current robots.

A Raybould June 19, 2020 at 22:34 #425430
Reply to TheMadFool
I am replacing my original reply because I do not think the nitpicking style that this conversation has fallen into is helpful.

From your explanations and 'water' question in your latest reply, it seems increasingly clear to me that we have very different ideas of what understanding is. For you, it seems to be something such that, if a person memorized a dictionary, they would understand everything that is defined in it. For me, it is partly an ability to find the significant, implicit connections between the things you know, and there is also a counterfactual aspect to it: seeing the consequences if things were different, and seeing what needs to change in order to get a desired result.

Given these different conceptions, it is not surprising that you might think it is an easy problem, while I see significant difficulties. I will not repeat those difficulties here, as I have already covered them in previous posts.

As for what I believe, no extant computer(+program) can perform human-like understanding, but I expect some future computer could do so.

With regard to conceivability versus possibility, I gave my working definitions in my previous post, though not spelled out in 'dictionary style.' For completeness, here are the stripped-down versions:

Conceivable: Anything that can be stated as a proposition is conceivable.

Possible: In the context of modal logic, which is the context of Chalmers' argument, something is possible if and only if it can be stated as a proposition that is true in some possible world.

I do not think reducing them to bare definitions is very helpful, and by doing so, perhaps I can persuade you of that. I urge you to take another look at the Collatz conjecture example from before.
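One way to see the modal-logic sense of "possible" is as a toy possible-worlds model (the worlds and propositions below are invented, and full Kripke semantics adds an accessibility relation between worlds that I omit here):

```python
# Toy possible-worlds semantics: a proposition is "possible" iff true in
# at least one world, "necessary" iff true in all of them.
# (Invented worlds and propositions; a full Kripke model also has an
# accessibility relation, omitted for simplicity.)
worlds = [
    {"zombies_exist": False, "water_is_h2o": True},  # our world, say
    {"zombies_exist": True,  "water_is_h2o": True},  # the world Chalmers posits
]

def possible(prop: str) -> bool:
    return any(w[prop] for w in worlds)

def necessary(prop: str) -> bool:
    return all(w[prop] for w in worlds)

print(possible("zombies_exist"))   # True, *given* the second world is admitted
print(necessary("water_is_h2o"))   # True in this toy model
```

Notice that `possible("zombies_exist")` comes out true only because the zombie world is in the list; the whole debate between Chalmers and the physicalists is over whether that world belongs in the list at all, and conceivability alone does not put it there.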



TheMadFool June 20, 2020 at 09:04 #425534
Quoting A Raybould
Given these different conceptions, it is not surprising that you might think it is an easy problem, while I see significant difficulties


I agree. For your kind information, I suspect my idea of understanding is simpler than yours; that probably explains why I feel computers are capable of it.

What exactly does understanding mean for you?

Let me illustrate what understanding means (for me):

1. "Trees need water" consists of three words viz. "trees", "need", "water". It's clear that all three of them have referents. Since "trees" and "water" refer to concrete objects, it may be easy for a computer to make the connection between these words and their referents. The referent for "water" will consist of the sensory data that are associated with water, and to that we can add scientific knowledge that water is H2O, has hydrogen bonds, etc.

The same can be done with "trees". To cut to the chase, understanding the words "trees" and "water" is simply a process of connecting a specific set of sensory and mental data to these words.

Coming to the word "need", we immediately recognize that the word refers to a concept, i.e. the word has an abstract referent. The concept of need is a pattern abstracted from instances such as the relationship between plants and water, animals and oxygen, fire and heat, cars and gasoline, etc. In other words, if computers can be programmed to seek patterns the way humans can, then even abstract concepts, and hence abstract referents, are within the reach and grasp of computers so programmed.

In summary, understanding is about 1) associating sets of [sensory & mental] data (sensory as in through the sense organs and mental as in knowledge within a given paradigm) and 2) pattern detection.
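The two aspects above could be caricatured in code like so (all data invented; whether this kind of matching amounts to understanding is, of course, exactly what is in dispute):

```python
# Aspect 1: words linked to bundles of sensory/mental data (invented).
referent_data = {
    "water": {"clear liquid", "flows in rivers", "H2O"},
    "trees": {"plant", "woody stem", "has leaves"},
}

# Aspect 2: "need" as a pattern abstracted from instances of dependence
# (plants/water, animals/oxygen, fire/heat, cars/gasoline).
dependence_instances = {("plants", "water"), ("animals", "oxygen"),
                        ("fire", "heat"), ("cars", "gasoline")}

def understands_word(word: str) -> bool:
    """On this account, a word is 'understood' once data is linked to it."""
    return word in referent_data

def needs(x: str, y: str) -> bool:
    """x 'needs' y iff the pair fits the abstracted dependence pattern."""
    return (x, y) in dependence_instances

print(understands_word("water"), needs("plants", "water"))
```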

Which of the two aspects of understanding delineated above are difficult or impossible for a computer in your opinion?

What's your definition of understanding?

Quoting A Raybould
For me, it is partly an ability to find the significant, implicit connections between the things you know, and there is also a counterfactual aspect to it: seeing the consequences if things were different, and seeing what needs to change in order to get a desired result.


I covered this above.

Quoting A Raybould
Conceivable: Anything that can be stated as a proposition is conceivable.

Possible: In the context of modal logic, which is the context of Chalmers' argument, something is possible if and only if it can be stated as a proposition that is true in some possible world.



Thanks but I have an issue with your definition of "conceivable". According to you, unlike possibility, conceivability has no logical significance at all. When I say, "p-zombies are conceivable" I'm making the claim about p-zombies. That I can say that sentence ("can be stated") is the least of my concerns. Your definition of "conceivable" falls short of being relevant to the issue of whether p-zombies are possible/conceivable. Thanks anyway.
TheMadFool June 20, 2020 at 09:13 #425535
Quoting bongo fury
What, agree and disagree about where each other's words have 'landed', out in the world? If by computers you mean some future AI, then sure. This would no doubt be a few steps more advanced than, say, being able to predict where each other's ball has (actually) landed. Which I assume is challenging enough for current robots.


Do you mean that human understanding is reducible to computer logic but that we haven't the technology to make it work? If yes then that means you agree with me in principle that human understanding isn't something special, something that can't be handled by logic gates inside computers.

bongo fury June 20, 2020 at 11:17 #425574
Quoting TheMadFool
Do you mean that human understanding is reducible to computer logic


Only in the almost trivial sense that neurons are quite evidently some kind of switch or trigger.

Quoting TheMadFool
but that we haven't the technology to make it work? If yes then that means you agree with me in principle that human understanding isn't [s]something special[/s], something that can't be handled by logic gates inside computers.


I roughly agree with you now (maybe, or maybe the switches will have to be actual neurons; we don't yet know), since you're talking about way off in the future.

But do you at last see the trouble here,

Quoting TheMadFool
Searle's argument doesn't stand up to careful scrutiny for the simple reason that semantics are simply acts of linking words to their referents. Just consider the sentence, "dogs eat meat". The semantic part of this sentence consists of matching the words "dog" with a particular animal, "eat" with an act, and "meat" with flesh, i.e. to their referents and that's it, nothing more, nothing less. Understanding is simply a match-the-following exercise, something a computer can easily accomplish.


?
Harry Hindu June 20, 2020 at 12:06 #425586
Quoting A Raybould
Searle says that syntax can not give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either.


The problem with Searle's Chinese room is that the man in the room does understand something - the rules he is following - write this symbol when you see this symbol. It's just not the same rules that Chinese speaking people follow when using those same symbols.

The meaning of symbols can be arbitrary. Just look at all the different words from different languages that refer to the same thing. When we aren't using the same rules for the same symbols it can appear as if one of us isn't understanding the symbols.

That's what understanding is - having a set of rules for interpreting symbols. In this sense, computers understand thanks to their programming (a set of rules).
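In code, the man in the room is just a lookup (a made-up rule table, using Searle's own "squiggle/squoggle" placeholder shapes):

```python
# The man in the Chinese Room as code: "write this symbol when you see
# this symbol". (Made-up rule table; the question Searle raises is
# whether following such rules amounts to understanding.)
rules = {
    "squiggle": "squoggle",
    "squoggle": "squiggle",
}

def chinese_room(symbol: str) -> str:
    """Apply the rule book; only the symbol's shape is consulted."""
    return rules.get(symbol, "blank card")

print(chinese_room("squiggle"))  # "squoggle" - produced without understanding
```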

TheMadFool June 20, 2020 at 12:23 #425600
Quoting bongo fury
Only in the almost trivial sense that neurons are quite evidently some kind of switch or trigger.


What would a non-trivial sense look like? I mean, your beliefs about human understanding seem to me to approach, oddly and also intriguingly, incomprehensibility for/by humans themselves. Unless, of course, you have something to say about it... what's your take on understanding?

Quoting bongo fury
But do you at last see the trouble here


To make it short, no.
fishfry June 22, 2020 at 00:57 #426175
Quoting A Raybould

If you want to read Chalmers' own words, he has written a book and a series of papers on the issue. As you did not bother to read my original link, I will not take the time to look up these references; you can find them yourself easily enough if you want to (and they may well be found in that linked article). I will warn you that you will find the papers easier to follow if you start by first reading the reference I gave you.


You're right, will do as the inclination strikes.


Quoting A Raybould

That is a different question than the one you asked, and I replied to, earlier. The answer to this one is that a TM is always distinguishable from a human, because neither a human, nor just its brain, nor any other part of it, is a TM. A human mind can implement a TM, to a degree, by simulation (thinking through the steps and remembering the state), but this is beside the point here.


Oh my. That's not true. But first for the record let me say that I agree with you. A TM could perhaps convincingly emulate but never implement an actual human mind. I don't believe the mind is a TM.

But many smart people disagree. You have all the deep thinkers who believe the entire universe is a "simulation," by which they mean a program running on some kind of big computer in the sky. (Why do these hip speculations always sound so much like ancient religion?) We have many people these days talking about how AI will achieve consciousness and that specifically, the human mind IS a TM. I happen to believe they're all wrong, but many hold that opinion these days. Truth is nobody knows for sure.

I've read many arguments saying that minds (and even the entire universe) are TMs. Computations. I don't agree, but I can't pretend all these learned opinions aren't out there. Bostrom and all these other likeminded characters. By the way I think Bostrom was originally trolling and is probably surprised that so many people are taking his idea seriously.

Quoting A Raybould

If you had actually intended to ask "...indistinguishable from a human when interrogated over a teletype" (or by texting), that would be missing the point that p-zombies are supposed to be physically indistinguishable from humans (see the first paragraph in their Wikipedia entry), even when examined in the most thorough and intrusive way possible. This is a key element in Chalmers' argument against metaphysical physicalism.


I'm perfectly willing to stipulate that a p-zombie is physically indistinguishable. I made the assumption, which might be wrong, that their impetus or mechanism of action is driven by a computation. That is, they're totally human-like androids run by a computer program.

If you are saying the idea is that they're totally lifelike and they have behavior but the behavior is not programmed ... then I must say I don't understand how such a thing could exist, even in a thought experiment. Maybe I should go read Chalmers.

Quoting A Raybould

As a p-zombie is physically identical to a human (or a human brain, if we agree that no other organ is relevant), then it is made of cells that work in a very non-Turing, non-digital way.


You and I would both like to believe that. But neither of us has evidence that the mind is not a TM, nor do we have hours in the day to fight off the legion of contemporary intellectuals arguing that it is.

Roger Penrose is famous for arguing that the mind is not a computation, and that it has something to do with Gödel's incompleteness theorem being solved in the microtubules. Nobody takes the idea seriously except as a point of interest. Sir Roger's bad ideas are better than most people's good ones.

But you can't be unaware that many smart people argue that the mind is a TM.

We don't know of anything that works in a "non-Turing, non-digital way." There are mathematical models of hypercomputation or supercomputation in which one assumes capabilities beyond TMs. But there's no physics to go along with it. Nobody's ever seen hypercomputation in the physical world, and the burden would be on you (and me) to demonstrate such.


Quoting A Raybould

Chalmers believes he can show that there is a possible world identical to ours other than it being inhabited by p-zombies rather than humans, and therefore that the metaphysical doctrine of physicalism - that everything must necessarily be a manifestation of something physical - is false.


I recall this argument. It's wrong. If our minds are a logical or necessary consequence of our physical configuration, and a p-zombie is identical to a human, then the p-zombie must be self-aware.

Otherwise there is some "secret sauce" that implements consciousness; something that goes beyond the physical. So much for the argument for physicalism - you just refuted it. Maybe I'm misunderstanding the argument. But if the mind is physical and a p-zombie is physically identical, then a p-zombie has a mind. If a p-zombie is physically identical yet has no mind, then mind is NOT physical. Isn't that right?

Quoting A Raybould

Notice that there is no mention of AI or Turing machines here.


Only, in my opinion, because not every philosopher understands the theory of computation.

What animates the p-zombie? If it's a mindless machine, it must have either a program, or it must have some noncomputable secret sauce. If the latter, the discovery of such a mechanism would be the greatest scientific revolution of all time. If Chalmers is ignoring this, I can't speak for him nor am I qualified to comment on his work.


Quoting A Raybould
P-zombies only enter the AI debate through additional speculation: If p-zombies are possible, then it is also possible that any machine (Turing or otherwise), no matter how much it might seem to be emulating a human, is at most emulating a p-zombie.


As I mentioned earlier, it's entirely possible that my next door neighbor is only emulating a human. We can never have first-hand knowledge of anyone else's subjective mental states.

I still want to understand what is claimed to animate a p-zombie. Is it a computation? Or something extra-computational? And if it's the latter, physics knows of no such mechanism.


Quoting A Raybould

As the concept of p-zombies is carefully constructed so as to be beyond scientific examination,

Ah. Perhaps that explains my unease with the concept. My understanding is that p-zombies are logically incoherent. They are physically identical to humans and emulate all human behavior, but they don't implement a subjective mind. In which case, mind must be extra-computational. Penrose's idea. I tend to agree that the mind is not computable. But how do p-zombies relate?

Quoting A Raybould
such a claim may be impossible to disprove, but it is as vulnerable to Occam's razor as is any hypothesis invoking magic or the supernatural.


I think you've motivated me to at least go see what Chalmers has to say on the subject. Maybe I'll learn something. I can see that there must be more to the p-zombie argument than I'm aware of.

ps -- I just started reading and came to this: "It seems that if zombies really are possible, then physicalism is false and some kind of dualism is true."

https://plato.stanford.edu/entries/zombies/

This tells me that my thinking is on the right track. If we are physical and p-zombies are physically identical then p-zombies are self-aware. If they aren't, then humans must have some quality or secret sauce that is non-physical.

I would raise an intermediate possibility. The mechanism of mind might be physical but not computable. So we have three levels, not two as posited by the p-zombie theorists:

* Mind is entirely physical and computable.

* Mind is entirely physical but not necessarily computable, in the sense of Turing 1936. It might be some sort of hypercomputation as is studied by theorists.

* Mind might be non-physical. In which case we are in the realm of spirits and very little can be said.


pps --

AHA!!

" Proponents of the argument, such as philosopher David Chalmers, argue that since a zombie is defined as physiologically indistinguishable from human beings, even its logical possibility would be a sound refutation of physicalism, because it would establish the existence of conscious experience as a further fact."

https://en.wikipedia.org/wiki/Philosophical_zombie

This is exactly what I'm understanding. And I agree that I probably made things too complicated by inserting computability in there. The p-zombie argument is actually much simpler.

Well that counts as research for me these days. A couple of paragraphs of SEP and a quote-mine from Wiki. Such is the state of contemporary epistemology. If it's on Twitter it's true.
A Raybould June 28, 2020 at 12:39 #429141
Reply to TheMadFool
There is no point in discussing your own private definition of 'understanding' - no-one can seriously doubt that computers are capable of performing dictionary look-up or navigate a predefined network of lexical relationships; even current AI can do much more than just that.

We can make no judgment, however, of whether AI is performing at human-like levels by only looking at simple examples, and the fact remains that AI currently has problems with certain more demanding cognitive tasks, such as with "common-sense physics" (as I mentioned previously, that is not just my opinion, it is a quote from those who are actually doing the work.) You have given no plausible explanation for how your concept of understanding, and of how it can easily be achieved, solves this problem, and in your only attempt to explain away why, if it is so easy, it remains an acknowledged problem in actual AI research, you implied that the whole AI community has consistently failed to see what is obvious to you.

There is no reasonable doubt that AI currently has a problem with something here; I just don't know what it is called in your personal lexicon.

Personal lexicons come up again in the issue of 'conceivable' vs. 'possible', where the definition I attempted of 'conceivable' apparently doesn't match yours. There is no point in getting into a "you say, I say" argument, but we don't have to: it is a straightforward fact that the distinction between 'conceivable' and 'possible' is widely accepted among philosophers and is central to Chalmers's p-zombie argument. I will grant that it is conceivable, and even possible, that you are right and they are all wrong, but I don't think it is probable.

You would be more convincing if you could explain where the example I gave earlier, using the current status of the Collatz conjecture, goes wrong.
A Raybould June 28, 2020 at 13:41 #429154
Reply to fishfry
Even in Bostrom's simulation argument, neither brains nor minds are TMs: in that argument, I (or, rather, what I perceive as myself) is a process (a computation being performed), and what I perceive as being the rest of you is just data in that process. To confuse a process (in either the computational sense here, or more generally) with the medium performing the process is like saying "a flight to Miami is an airplane." A computation is distinct from the entity doing the computation (even if the latter is a simulation - i.e. is itself a computation - they are different computations (and even when a computation is a simulation of itself, they proceed at different rates in unending recursion.))

I recognize that this locution is fairly common - for example, we find Searle writing "The question is, 'Is the brain a digital computer?' And for the purposes of this discussion I am taking that question as equivalent to 'Are brain processes computational?" - but, as this quote clearly shows, this is just a manner of speaking, and IMHO it is best avoided, as it tends to lead to confusion (as demonstrated in this thread) and can prime the mind to overlook certain issues in the underlying question (for example, if you assume that the brain is a TM, it is unlikely that you will see what Chalmers is trying to say about p-zombies.) To me, Searle's first version of his question is little more than what we now call click-bait.


TheMadFool June 28, 2020 at 15:53 #429233
Quoting A Raybould
There is no point in discussing your own private definition of 'understanding


What's the public definition of understanding? Since no definition was agreed upon, I thought I might just throw in my own understanding of understanding. What's your definition, if I may ask?

Quoting A Raybould
you implied that the whole AI community has consistently failed to see what is obvious to you.


I don't know what's so uncomputable about understanding. As far as I can tell, all that's required are:

1. Word-referent connection

2. Pattern recognition

Is there anything else to understanding? If there is I'd like to know. Thanks.

Quoting A Raybould
'conceivable' vs. 'possible'


What is the difference between the two? If they're the same, then something is conceivable if and only if it is possible.

If they're different, then something could be conceivable but not possible, or possible but not conceivable. Please provide examples of both scenarios for my benefit. Thanks.

fishfry June 29, 2020 at 22:08 #429884
Quoting A Raybould
Even in Bostrom's simulation argument, neither brains nor minds are TMs: in that argument,


If the word simulation means something other than computation, you need to state clearly what that is; and it has to be consistent either with known physics; or else stated as speculative physics.

I'll agree that Bostrom and other philosophers (Searle included) don't appear to know enough computability theory to realize that when they say simulation they mean computation; and that when they say computation they must mean a TM or a practical implementation of a TM. If not, then what?

When we simulate gravity or the weather or a first person shoot-'em-up video game or Wolfram's cellular automata or any other simulation, it's always a computer simulation. What other kind is there?

And when we say computation, the word has a specific scientific meaning laid out by Turing in 1936 and still the reigning and undefeated champion.

Now for the record there are theories of:

* Supercomputation; in which infinitely many instructions or operations can be carried out in finite time; and

* Hypercomputation; in which we start with a TM and adjoin one or more oracles to solve previously uncomputable problems.

Both supercomputation and hypercomputation are studied by theorists; but neither is consistent with known physical theory. The burden is on you to be clear on what you mean by simulation and computation if you don't mean a TM.

Quoting A Raybould

I (or, rather, what I perceive as myself) is a process (a computation being performed), and what I perceive as being the rest of you is just data in that process.


But what do you mean by computation? Turing defined what a computation is. If you mean to use Turing's definition, then you have no disagreement with me. And if you mean something else, then you need to clearly state what that something else is; since the definition of computation has not changed since Turing's definition.


Quoting A Raybould

To confuse a process (in either the computational sense here, or more generally) with the medium performing the process is like saying "a flight to Miami is an airplane."


I have done no such thing. I don't know why you'd think I did. A computation is not defined by the medium in which it's implemented; and in fact a computation is independent of its mode of execution. I genuinely question why you think I said otherwise.

If you agree that you and I are "processes," a term you haven't defined but which has a well-known meaning in computer science with which I'm highly familiar, then a process is a computation. You can execute Euclid's algorithm with pencil and paper or on a supercomputer, it makes no difference. It's the same computation.
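To make the medium-independence point concrete, here is Euclid's algorithm in a few lines of Python (a sketch; the notation is incidental, since the computation is the same whether you run it with pencil and paper or on silicon):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21, via the same steps a pencil-and-paper run would take
```

The trace of intermediate pairs (1071, 462), (462, 147), (147, 21), (21, 0) is identical no matter what executes it; that is what it means for a computation to be independent of its mode of execution.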

Quoting A Raybould

A computation is distinct from the entity doing the computation (even if the latter is a simulation - i.e. is itself a computation - they are different computations (and even when a computation is a simulation of itself, they proceed at different rates in unending recursion.))


You're arguing with yourself here. I have never said anything to the contrary. A computation is independent of the means of its execution. What does that have to do with anything we're talking about?

Quoting A Raybould

I recognize that this loqution is fairly common - for example, we find Searle writing "The question is, 'Is the brain a digital computer?'


Searle also, in his famous Chinese room argument, doesn't talk about computations in the technical sense; but his argument can be perfectly well adapted. Searle's symbol lookups can be done by a TM.

And again, so what? You claim the word simulation doesn't mean computation; and that computation isn't a TM. That's two claims at odds with reality and known physics and computer science. The burden is on you to provide clarity. You're going on about a topic I never mentioned and a claim I never made.

Quoting A Raybould

And for the purposes of this discussion I am taking that question as equivalent to 'Are brain processes computational?" - but, as this quote clearly shows, this is just a manner of speaking,


But a computation is a very specific technical thing. If I start going on about quarks and I say something that shows that I'm ignorant of physics and I excuse myself by saying, "Oh that was just a manner of speaking," you would label me a bullshitter.

If you mean to use the word computation, you have to either accept its standard technical definition; or clearly say you mean something else, and then say exactly what that something else is.


Quoting A Raybould

and IMHO it is best avoided, as it tends to lead to confusion (as demonstrated in this thread)


I'm not confused. My thinking and knowledge are perfectly clear. A computation is defined as in computer science. And if you mean that we are a "simulation" in some sense OTHER than a computation, you have to say what you mean by that, and you have to make sure that your new definition is compatible with known physics.


Quoting A Raybould

and can prime the mind to overlook certain issues in the underlying question (for example, if you assume that the brain is a TM, it is unlikely that you will see what Chalmers is trying to say about p-zombies.)


I understand exactly what Chalmers is saying about p-zombies now that I re-acquainted myself with the topic as a result of this thread.

But you're going off in directions.

What do you mean by simulation, if not a computer simulation? And what do you mean by a computation, if not a TM?

Quoting A Raybould

To me, Searle's first version of his question is little more than what we now call click-bait.


Whatever. I'm not Searle and he got himself into some #MeToo trouble and is no longer teaching. Why don't you try talking to me instead of throwing rocks at Searle?
A Raybould June 29, 2020 at 22:15 #429885
Reply to fishfry
What part of 'a computation is what a Turing machine does, not what it is' do you not understand? At least until we sort that out, I am not going to read any more of this jumble.
fishfry June 29, 2020 at 22:23 #429889
Quoting A Raybould
What part of 'a computation is what a Turing machine does, not what it is' do you not understand? At least until we sort that out, I am not going to read any more of this jumble.


Best you don't, since I couldn't have been more clear.

A TM is not a physical device. It's an abstract mathematical construction. A computation, by definition, is anything that a TM can do. This isn't me saying this, it's Turing followed by 80 years worth of computer science saying that.

If you think there's something that counts as a computation, that

a) Can not be implemented by a TM; and

b) Is consistent with known physics;

then by all means tell me what that is to you.
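For concreteness, here is a minimal sketch of such an abstract machine in Python (the function and machine names are my own invention; a real TM formalization has a finite alphabet, a blank symbol, and a transition table, which is all this models):

```python
def run_tm(tape, transitions, state="start", pos=0, accept="halt"):
    """Simulate a one-tape Turing machine.
    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left) or +1 (right); "_" is the blank symbol."""
    cells = dict(enumerate(tape))  # sparse tape
    while state != accept:
        symbol = cells.get(pos, "_")
        state, cells[pos], move = transitions[(state, symbol)]
        pos += move
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# A toy machine that inverts a binary string, halting at the first blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(run_tm("1011", invert))  # 0100
```

The machine itself is just the `invert` transition table; the Python interpreter, a pencil, or a room full of clerks could execute it equally well.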
A Raybould June 29, 2020 at 22:25 #429890
Reply to fishfry Quoting fishfry
I couldn't have been more clear.


I rather suspect that's true, unfortunately.
fishfry June 29, 2020 at 22:27 #429892
Quoting A Raybould
I rather suspect that's true, unfortunately.


I haven't seen your handle much before. People who know me on this board know that I'm perfectly capable of getting into the mud. I'm sorely tempted at this moment but will resist the urge. I say to you again:

A TM is not a physical device. It's an abstract mathematical construction. A computation, by definition, is anything that a TM can do. This isn't me saying this, it's Turing followed by 80 years worth of computer science saying that.

If you think there's something that counts as a computation, that

a) Can not be implemented by a TM; and

b) Is consistent with known physics;

then by all means tell me what that is to you.
A Raybould June 29, 2020 at 22:53 #429901
Reply to fishfry
Quoting fishfry
People who know me on this board know that I'm perfectly capable of getting into the mud.


I will treat that comment with all the respect it deserves.

Quoting fishfry
A TM is not a physical device. It's an abstract mathematical construction...


Regardless, the question I asked a couple of posts ago applies either way.

Quoting fishfry
...A computation

...but it is not the computation that the abstract machine is computing. I covered that in yesterday's post.
Do you think that in Bostrom's simulated universes, it's TMs all the way down? Clearly not, as his premises don't work in such a scenario - there's a physical grounding to whatever stack of simulations he is envisioning.


fishfry June 29, 2020 at 23:26 #429904
Quoting A Raybould
Do you think that in Bostrom's simulated universes, it's TMs all the way down?


I discussed this at length. You chose not to engage with my questions, my points, or my arguments. You failed to demonstrate basic understanding of the technical terms you're throwing around. You repeatedly failed to define your terms "process" and "simulation" even after my repeated requests to do so.

This is no longer productive.
A Raybould June 30, 2020 at 02:31 #429981
Reply to TheMadFool
First, let me make one thing clear (once again): The issue is not whether understanding is uncomputable, and if you think I have said so, you are either misunderstanding something I wrote, or drawing an unwarranted conclusion. The issue here is your insistence that there is nothing special about understanding and that it is a simple problem for AI.

I have already given you a working definition that you chose to ignore. Ignoring me is one thing, but if, instead, you were to look at what real philosophers are thinking about the matter, you would see that, though it is a work in progress, at least one thing is clear: there is much more to it than you suppose.

We can, however, discuss this matter in a way that does not depend on a precise definition. If, as you say, having AIs understand things is simple, then how come the creators of one of the most advanced AI programs currently written acknowledge that understanding common-sense physics, for one thing, is still a problem? Here we have a simple empirical fact that really needs to be explained away before we can accept that understanding (regardless of how you choose to define it) actually is simple - yet many posts have gone by without you doing so.

Of course, anyone reading your 'explanation' of how to do machine understanding will have a problem implementing it, because it is so utterly vague. It most reminds me of many of the dismissive posts and letters-to-the-editor written after IBM Watson's success in the Jeopardy! contest: "it's just database lookup" was typical of comments by ignoramuses who had no idea of how it worked and by how much it transcended "just" looking up things in a database.


Quoting TheMadFool
If they're different then it's possible that conceivable but not possible and possible but not conceivable. Please provide examples of both scenarios for my benefit. Thanks.


When I read this, I got the distinct feeling that I was dealing with a bot, which would be quite embarrassing for me, given the original topic of this thread! Things that tend to give away a bot include blatant non-sequiturs, a lack of substance, a tendency to lose the thread, and repetition of errors. You asked essentially the same question as this one here (complete with the same basic error in propositional logic) a few posts back, but when I provided just such an example (the same one as I had given more than once before) you ignored it and went off in a different direction, only to return to the same question now.

I am tempted to just quote my reply from then, but I will spell it out more formally, so you can reference the first part you don't agree with:

  • P1 Anything that has been conceived of is conceivable.
  • P2 I have conceived of the proposition 'The Collatz conjecture is true.'
  • L1 'The Collatz conjecture is true' is conceivable. (P1, P2)
  • P3 I have conceived of the proposition 'The Collatz conjecture is false.'
  • L2  'The Collatz conjecture is false' is conceivable. (P1, P3)
  • P4 Either the Collatz conjecture is true, or it is false; it cannot be both, and there are no other alternatives.
  • L3 If the Collatz conjecture is true, then the conceivable proposition 'The Collatz conjecture is false' does not state a possibility. (L2, P4)
  • L4 If the Collatz conjecture is false, then the conceivable proposition 'The Collatz conjecture is true' does not state a possibility. (L1, P4)
  • C1 There is something that is conceivable but not possible. (L3, L4)
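For concreteness, the Collatz map is trivial to compute for any single starting value (a sketch in Python; the helper name is invented). What is conceivable-but-undecided is the universal claim that this loop terminates for every n, which no finite amount of checking settles:

```python
def collatz_steps(n: int) -> int:
    """Iterate the Collatz map (n -> n/2 if even, else 3n+1) until n reaches 1,
    returning the step count. Termination for all n is exactly the open conjecture."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, despite the small starting value
```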

TheMadFool June 30, 2020 at 12:01 #430198
Quoting A Raybould
The issue here is your insistence that there is nothing special about understanding and that it is a simple problem for AI.


In other words, it's implied that you feel understanding is uncomputable, i.e. there is "something special" about it and for that reason it is beyond a computer's ability.

Quoting A Raybould
If, as you say, having AIs understand things is simple, then how come the creators of one of the most advanced AI programs currently written acknowledge that understanding common-sense physics, for one thing, is still a problem?


I'll make this as clear as I possibly can. Do you think humans are different from machines in a way that gives humans certain abilities that are not replicable in machines? Do we, humans, not obey the laws of chemistry and physics when we're engaged in thinking and understanding? It seems to me that all our abilities, including understanding, arise from, are determined by, chemical and physical laws all matter and energy must obey. The point being there's nothing magical going on in thinking/understanding - it's just a bunch of chemical and physical processes. We're, all said and done, meat machines or wet computers.

There's nothing physically or chemically impossible going on inside our heads. Hence, I maintain that thinking/understanding is, for sure, computable.

Quoting A Raybould
When I read this, I got the distinct feeling that I was dealing with a bot, which would be quite embarrassing for me, given the original topic of this thread! Things that tend to give away a bot include blatant non-sequiturs, a lack of substance, a tendency to lose the thread, and repetition of errors. You asked essentially the same question as this one here (complete with the same basic error in propositional logic) a few posts back, but when I provided just such an example (the same one as I had given more than once before) you ignored it and went off in a different direction, only to return to the same question now.

I am tempted to just quote my reply from then, but I will spell it out more formally, so you can reference the first part you don't agree with:

P1 Anything that has been conceived of is conceivable.
P2 I have conceived of the proposition 'The Collatz conjecture is true.'
L1 'The Collatz conjecture is true' is conceivable. (P1, P2)
P3 I have conceived of the proposition 'The Collatz conjecture is false.'
L2  'The Collatz conjecture is false' is conceivable. (P1, P3)
P4 Either the Collatz conjecture is true, or it is false; it cannot be both, and there are no other alternatives.
L3 If the Collatz conjecture is true, then the conceivable proposition 'The Collatz conjecture is false' does not state a possibility. (L2, P4)
L4 If the Collatz conjecture is false, then the conceivable proposition 'The Collatz conjecture is true' does not state a possibility. (L1, P4)
C1 There is something that is conceivable but not possible. (L3, L4)


obscurum per obscurius (explaining the obscure by the still more obscure)

Define the words "conceivable" and "possible" like a dictionary does.

A Raybould June 30, 2020 at 12:28 #430206
Reply to TheMadFool Quoting TheMadFool
In other words, it's implied that you feel understanding is uncomputable, i.e. there is "something special" about it and for that reason it is beyond a computer's ability.


Absolutely not. As you are all for rigor where you think it helps your case, show us your argument from "there's something special about understanding" to "understanding is uncomputable."

Quoting TheMadFool
Hence, I maintain that thinking/understanding is, for sure, computable.


As I have pointed out multiple times, that is not the issue in question. Here, you are just making another attempt to change the subject, perhaps because you have belatedly realised that you cannot sustain your original position? Until you have completed the above task, stop attempting to attribute to me straw-man views that I do not hold and have not advocated.

Quoting TheMadFool
obscurum per obscurius


Now you are just trolling, and using Latin does not alter that fact. Here we have a straightforward argument that you apparently don't agree with, but for which you cannot find a response.

Quoting TheMadFool
Define the words "conceivable" and "possible" like a dictionary does.


I see you are reverting to bot-like behavior, as outlined in my previous post. We have been round this loop before. I see, from other conversations, that you frequently use demands for definitions to browbeat other people when you have run out of arguments, to take the discussion in a different direction... Well, it won't work here: I am not going to follow you in another run around the rabbit-warren until you have addressed the specific argument here.
TheMadFool June 30, 2020 at 13:21 #430220
Reply to A Raybould :smile: I've made it amply clear what understanding means to me. It simply involves 2 things:

1. Connect symbols to referents (computable). Even Google search can do that. For instance when I type "water" into the search bar, Google takes me to articles that contain the word "water" and even images of glasses of water.

2. Recognize patterns (computable). Google search couldn't have taken me to articles on water and images of water without it being able to identify patterns.
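A toy sketch of those two ingredients in Python (all names invented for illustration; real search and AI systems are vastly more involved):

```python
# 1. Word-referent connection: a symbol maps to stored referents.
referents = {
    "water": ["glass_of_water.jpg", "article_on_h2o.html"],
    "tree": ["oak.jpg", "article_on_trees.html"],
}

def lookup(word: str) -> list[str]:
    """Return the referents connected to a word, if any."""
    return referents.get(word, [])

# 2. Pattern recognition, in its crudest form: substring matching.
def matches(query: str, documents: list[str]) -> list[str]:
    """Return the documents whose names contain the query pattern."""
    return [doc for doc in documents if query in doc]

print(lookup("water"))                                        # ['glass_of_water.jpg', 'article_on_h2o.html']
print(matches("tree", ["oak.jpg", "article_on_trees.html"]))  # ['article_on_trees.html']
```

Both operations are plainly computable; that is the point.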

As for the definitions of "conceivable" and "possible", I'd like to see them in a familiar format please, like in a dictionary.

Olivier5 June 30, 2020 at 13:29 #430222
Quoting TheMadFool
Conscious being = True AI = P-Zombie

One possible conclusion from this equation is that p-zombies, as defined, cannot exist.
TheMadFool June 30, 2020 at 13:46 #430227
Quoting Olivier5
One possible conclusion from this equation is that p-zombies, as defined, cannot exist.


:chin:
Olivier5 June 30, 2020 at 20:38 #430316
Reply to TheMadFool A p-zombie is defined as a human without consciousness. If it is equal to a human with consciousness, then either consciousness is equal to nothing, or p-zombies must be self aware in order to be able to behave as normal humans, and therefore true p-zombies cannot exist.

I think it's obvious that they cannot possibly exist. Mind you, they don't actually exist. They are just a thought experiment designed to probe the nature of consciousness.
A Raybould July 01, 2020 at 03:39 #430385
Reply to TheMadFool
I have noticed a pattern here: you will post a claim, I will respond, then you will raise a different issue as if you had no counter-argument. A post or two later, however, the first issue will rise again, zombie-like, as if it had never been discussed before.

For convenience, I will list these recurring arguments and my responses. That way, you can make a comment that merely states the number of the argument du jour, and I can reply by picking a corresponding response ID and just posting that. It will make things so much easier!


A1: Understanding is simple, because understanding is computable.

R1: Being computable does not necessarily entail simplicity. If this were the case, the whole of AI would be simple, the ABC conjecture would have a simple proof (if there really is one), etc.


A2: Understanding is simple, because understanding is just a matter of connecting symbols to referents and recognizing patterns.

R2.1 This is too vague to establish simplicity. It is so vague that you could make the same claim for any aspect of AI, or AI as a whole (like the people who dismiss current AI as "just database lookup"), but if it is that simple, how come there are still outstanding problems? 

R2.2: Regardless of what definition you put forward, the claim that it is simple to implement is inconsistent with the fact that current AI has ongoing difficulties with, for example, understanding common-sense physics.


A3: Understanding is simple, as shown by this simple example of what I understand from the word 'water'.

R3.1: You cannot establish simplicity through only simple examples, unless there are only simple examples. As it happens, there are difficult examples here, such as the aforementioned difficulty with understanding common-sense physics.

R3.2: If you could establish simplicity through simple examples, then the whole of mathematics would be simple, as established by the fact that you can teach five-year-olds to add two numbers.


Did I miss any? Your help in completing the list would be appreciated!

So, according to this list, your latest post is A2 A3, to which I reply R2.2 R3.1 (I have some flexibility here.)


Quoting TheMadFool
As for the definitions of "conceivable" and "possible", I'd like to see them in a familiar format please, like in a dictionary.


We have already been around this define / whats-the-difference? / define  loop once before, and as I made clear, I have no intention of going round again until you address the whats-the-difference example.

If you want to get into definitions, it's your turn to offer some, so how about if you proffer definitions which make the Collatz conjecture example/argument invalid or unsound? (Or, if you find that infeasible, you could simply say which premise or conclusion you first disagree with, and we can proceed from there.)
TheMadFool July 01, 2020 at 07:01 #430482
Quoting A Raybould
I have noticed a pattern here: you will post a claim, I will respond, then you will raise a different issue as if you had no counter-argument. A post or two later, however, the first issue will rise again, zombie-like, as if it had never been discussed before.


You've failed to notice your own pattern.

Quoting A Raybould
Being computable does not necessarily entail simplicity


I never claimed understanding was simple. I said it's computable.

Quoting A Raybould
This is too vague to establish simplicity


I never brought up the notion of simplicity. I don't know where you got that from. Also, 1)word-referent matching and 2) pattern recognition, as far as I can tell, aren't vague at all. If you insist they are, demonstrate them to be so.

Quoting A Raybould
We have already been around this define / whats-the-difference? / define  loop once before, and as I made clear, I have no intention of going round again until you address the whats-the-difference example.

If you want to get into definitions, it's your turn to offer some, so how about if you proffer definitions which make the Collatz conjecture example/argument invalid or unsound? (Or, if you find that infeasible, you could simply say which premise or conclusion you first disagree with, and we can proceed from there.)


I don't see a necessary connection between conceivable & possible and the Collatz conjecture. Are you implying the meanings of conceivable and possible are based on the Collatz conjecture? If yes, I'm more than happy to see a proof of that.
A Raybould July 02, 2020 at 03:25 #430793
Reply to TheMadFool
Quoting TheMadFool
I never claimed understanding as simple...


Quoting TheMadFool
I don't know why people make such a big deal of understanding - it's very simple.

Quoting TheMadFool
To cut to the chase, understanding the words "trees" and "water" is simply a process of connecting a specific set of sensory and mental data to these words.

...and so on. These are not 'gotcha' quotes taken out of context; the alleged simplicity of understanding is a big part of your claim that there is nothing special about it.

Quoting TheMadFool
...I said it's computable.

But the issue is not whether it is computable, as I have repeatedly had to remind you. Do you not remember this?
Quoting A Raybould

In other words, it's implied, you feel understanding is uncomputable i.e. there is "something special" about it and for that reason is beyond a computer's ability.
— TheMadFool

Absolutely not. As you are all for rigor where you think it helps your case, show us your argument from "there's something special about understanding" to "understanding is uncomputable."


I am quite willing to believe that initially, you may have merely misunderstood what I meant, by drawing an unjustified conclusion such as the one above, but to continue as if this is the issue in contention, after having been repeatedly corrected on the matter, is another example of trollish behavior, and I will continue to call you on it wherever I see it, whether it is in response to me or someone else (in fact, if it were not for this aspect of your replies, I would drop the issue as being merely a misunderstanding and a difference of opinion.)

Quoting TheMadFool
1)word-referent matching and 2) pattern recognition, as far as I can tell, aren't vague at all.

For one thing, they are vague, when considered as an explanation of understanding, in that they lack the specificity needed for it to be clear that anything having just those two capabilities would necessarily understand, say, common-sense physics or Winograd schema. I am willing to believe that a machine capable of understanding these things could be described as having these capabilities, but I am also pretty sure that many machines, including extant AIs such as GPT-3, could also be so described, while lacking this understanding. If so, then this description lacks the specificity to explain the difference between machines that could and those that cannot understand these things.

  
Quoting TheMadFool
Are you implying the meanings of conceivable and possible are based off of the Collatz conjecture?


No - I should have made it clear that the Collatz conjecture is just something for which neither a proof nor a refutation has been found so far; any other formal conjecture would do as well in its place. The essence is that there are two conceivable things here, and we know that only one of them is possible (even though we don't know which one), so the other (whichever one it is) is conceivable but not possible.
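For concreteness, the conjecture in question is easy to state: repeatedly halve even numbers and send odd n to 3n+1; the claim is that every positive integer eventually reaches 1. A minimal sketch (in Python, purely for illustration; the function name is mine):

```python
def collatz_steps(n: int) -> int:
    """Count applications of the Collatz map until n reaches 1.

    The conjecture asserts this loop terminates for every n >= 1;
    neither a proof nor a counterexample is known.
    """
    steps = 0
    while n != 1:
        # Odd n goes to 3n + 1; even n is halved.
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# e.g. 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1  (8 steps)
```

Both outcomes are conceivable to us: that the loop terminates for all n, and that some n makes it run forever. Exactly one of those is (mathematically) possible, which is the point of the example.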
TheMadFool July 02, 2020 at 09:06 #430890
Quoting A Raybould
?TheMadFool
I never claimed understanding as simple...
— TheMadFool

I don't know why people make such a big deal of understanding - it's very simple.
— TheMadFool
To cut to the chase, understanding the words "trees" and "water" is simply a process of connecting a specific set of sensory and mental data to these words.
— TheMadFool
...and so on. These are not 'gotcha' quotes taken out of context; the alleged simplicity of understanding is a big part of your claim that there is nothing special about it.


Firstly, I admit that I used the word "simple" in reference to understanding, but only to characterize its nature as not something magical and beyond the scope of computers. Understanding, as it appears to be, is probably a complex phenomenon, nevertheless computable. That's what I meant when I said "I don't know why people make such a big deal of understanding - it's very simple." Very simple in the sense of being reducible to logic, something computers are capable of. To clarify further, take aerodynamics - the science of flight - and you'll notice a simplicity in the fact that flying involves only 4 forces viz. lift, weight, thrust and drag, but building a plane is a complex affair. Similarly, the simplicity in understanding lies in it involving only a couple of actions viz. word-referent matching and pattern recognition, but the complexity lies in how these simple actions can be programmed at the required level in a computer.

Quoting A Raybould
For one thing, they are vague, when considered as an explanation of understanding, in that they lack the specificity needed for it to be clear that anything having just those two capabilities would necessarily understand, say, common-sense physics or Winograd schema. I am willing to believe that a machine capable of understanding these things could be described as having these capabilities, but I am also pretty sure that many machines, including extant AIs such as GPT-3, could also be so described, while lacking this understanding. If so, then this description lacks the specificity to explain the difference between machines that could and those that cannot understand these things.


I wouldn't go so far as to say the two conditions for understanding I mentioned in my post are complete. Some other conditions may need to be added.

Quoting A Raybould
No - I should have made it clear that the Collatz conjecture is just something for which neither a proof nor a refutation has been found so far; any other formal conjecture would do as well in its place. The essence is that there are two conceivable things here, and we know that only one of them is possible (even though we don't know which one), so the other (whichever one it is) is conceivable but not possible.


So, you're saying, with reference to the Collatz conjecture, that it's conceivable for the Collatz conjecture to be true AND false. Put differently, you're claiming:

1. It is conceivable that the Collatz Conjecture is true AND it is conceivable that the Collatz Conjecture is false

That's a contradiction.

Olivier5 July 02, 2020 at 10:38 #430904
Quoting TheMadFool
Understanding, as it appears to be, is probably a complex phenomena, nevertheless computable. That's what I mean when I said "I don't know why people make such a big deal of understanding - it's very simple. Very simple in the sense of being reducible to logic, something computers are capable of.

Computers as we know them are not aware of the world around them, and that means they cannot really understand anything, because they don't know that there exist referents out there for words like "trees" or "water".

Let me take your own example to illustrate the point. Here is how, as a human being, I understand the proposition "trees need water": it means to me "IN ORDER TO STAY ALIVE, trees need TO ABSORB SOME MINIMUM AMOUNT OF water PER UNIT OF TIME".

STAY ALIVE: Evidently, dead trees don't need water for anything. The need is related to life and its maintenance.

ABSORB: Evidently, trees don't need water that they can't absorb. They absorb water usually through their root system, so if you just place a glass of water standing next to a tree, you're not providing for its need.

SOME AMOUNT... PER UNIT OF TIME: Evidently their water needs are not infinite. If you don't water a tree for a day or two, it will be fine. And if you place a tree under water (or water-log its root system) it may well die. So they need SOME water, and some trees require more water than others.

So your seemingly simple proposition, "trees need water", cannot be properly understood by a machine that has no clue about trees and their biology.
A Raybould July 02, 2020 at 23:24 #431049
Reply to TheMadFool
I think we have reached the point that we can agree to differ over whether or not there is something special about understanding, because we are approaching the question from different perspectives.

With regard to conceivability: It would be a contradiction to say that 'the Collatz Conjecture is true AND the Collatz Conjecture is false', but 'It is conceivable that the Collatz Conjecture is true AND it is conceivable that the Collatz Conjecture is false' is not the same, at least formally.

To see this, consider sentences of the form 'It is P that the Collatz Conjecture is true AND it is P that the Collatz Conjecture is false'. Substituting 'true' for P leads to a contradiction, but substituting 'uncertain' does not. Without more precise semantics for 'conceivable', we cannot say that we get a contradiction when we substitute it for P (in the Collatz conjecture argument, I avoided giving a full definition of 'conceivable' by saying that everything that has been conceived of is a subset of everything conceivable. You can read 'conceived of' as 'thought of'.)
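The substitution point can be put in modal notation (my gloss, not Chalmers' own formalism: read the diamond as an epistemic "conceivable for all we know" operator, with p standing for "the Collatz conjecture is true"):

```latex
% Asserting both truth values outright is a contradiction:
p \land \lnot p \vdash \bot
% But conceiving of both remains consistent -- it merely records our ignorance:
\Diamond p \land \Diamond \lnot p \nvdash \bot
```

In any normal modal logic, the second conjunction is satisfiable in a model with two accessible worlds, one where p holds and one where it fails, so no contradiction follows from it.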

Chalmers' p-zombie argument is entirely dependent on taking the step from p-zombies being conceivable to being possible, so what he intends these two words to mean, and the relationship between them, is of critical importance. If they have the same meaning, then he is simply asserting that p-zombies are possible, without offering any argument for that claim; to put it another way, he would merely be inviting us to share his belief, without there being any risk of us falling into a contradiction if we decline to do so.

Chalmers, therefore, is walking a narrow path: his definition of 'conceivable' has to be distinct from 'possible', but not so distinct that he needs additional assumptions to get from the former to the latter, and especially not any contingent assumptions, which could be false as a matter of fact.

I know one thing you are thinking right now: So what is Chalmers' definition of 'conceivable'? I am not certain, and I don't think there is an easy answer; the first place to look would be his paper Does Conceivability Entail Possibility?, though it is not an easy read. For what it is worth, my impression of the paper was that he only says it does so in those cases where there are other, independent, reasons for saying that the conceivable thing is possible - which amounts to saying "no, conceivability by itself does not entail possibility", and therefore his 'argument' for the possibility of p-zombies is merely an unargued-for belief.
TheMadFool July 03, 2020 at 01:40 #431068
Reply to A Raybould:up: Thanks. Sorry if it seemed like I was trolling. I couldn't troll anyone as efficiently and as expertly as I troll myself.
A Raybould July 03, 2020 at 12:19 #431125
Reply to TheMadFool
Thanks for saying that. It is easy to get carried away when defending a point of view. I do, and I used to do it a lot more; I have to make a conscious effort to back away.

A Raybould July 03, 2020 at 12:39 #431127
Reply to TheMadFool
I had some more thoughts on conceivability vs. possibility. Most philosophers accept possible-world semantics for dealing with questions of possibility and necessity, in which to say something is possible is to assert that there is a possible world in which it is true, regardless of whether it is true in the actual world (that looks somewhat self-referential, but logicians seem to agree it's OK.)

It is also generally held that mathematical truths are necessary truths, and necessary a priori at that. A mathematical fact is true in all possible worlds, and always has been.

Putting these two things together gets tricky when we consider a mathematical conjecture. Given that we do not know whether it is true, we might want to say it is possible that it is true; at another time, we might want to say that it is possible that it is false. Under possible-world semantics, however, if mathematical truths are necessary truths, then one or the other of those statements must be false: if the conjecture is true, it is necessarily true, so there are no possible worlds in which it is false, and vice-versa.

What we want is a way of saying that something might be true, without invoking all the implications that come with possible-world semantics. Saying that it is conceivable is a way of doing that. (Note that even though, in everyday usage, 'might be true' and 'possibly true' usually mean more-or-less the same thing, they are different when 'possible' is being used in the context of possible-world semantics.)

Chalmers also says p-zombies are 'logically possible', which looks like a strong statement, but it really just says that he is unaware of any facts that could disprove them. Given that he has defined p-zombies in a way that makes them immune from being ruled out by any sort of scientific investigation or discovery whatsoever, this is not saying much.
TheMadFool July 04, 2020 at 11:23 #431472
Reply to A Raybould Let me get this straight. In reference to a mathematical conjecture like the Collatz one, you mean to say, that there are two conceivable outcomes (the conjecture is true or false) but only one outcome is possible. So, in a sense, conceivability in re a proposition is about available options as to its truth value but possibility has to do with which of these options obtain. Am I on the right track here?

What about the following statements then:

1. It is possible that a mathematical conjecture is true or false.

Nevertheless,

2. It's impossible that a mathematical conjecture is both true AND false

Compare statements 1 and 2 with:

3. It is conceivable that a mathematical conjecture is true or false

Nevertheless

4. It is inconceivable that a mathematical conjecture is both true AND false

As you can see, the notion of possibility does have, within it, the idea of all available truth values for a given proposition, just like conceivability does. Most importantly, contradictions are both impossible AND inconceivable which seems to suggest that conceivability supervenes, if that's the right word, on possibility.
ssu July 04, 2020 at 11:52 #431483
Reply to TheMadFool
But isn't the real question here, that in some certain cases there is no way for us to prove / compute if a conjecture is either true or false? Wasn't this what Turing showed in the first place?

That mathematics, to be logical, has to have unprovable statements, which still are either true or false.

And hence, it would be perhaps provable that consciousness is unprovable.
A Raybould July 04, 2020 at 14:12 #431546
Reply to TheMadFool
Yes, I think you are on the right track here, though it may be leading in a surprising direction.

Firstly, there is no difficulty with 'it is possible/conceivable that the Collatz conjecture is true (X)OR it is possible/conceivable that the Collatz conjecture is false', which is the correct way to express the fact of our incomplete knowledge (at least if it is decidable.)

Secondly, your assertion that 'it is inconceivable that a mathematical conjecture is both true AND false' depends on whether false statements are conceivable. You may find it inconceivable that they are, but others may logically disagree.

There is an alternative view here that is... conceivable? It says that it is conceivable that a mathematical conjecture is both true AND false, it is just that we can immediately refute it (i.e. immediately see that it is not possible.) In this view, we had to conceive of it first (to form the thought in our minds), in order to prove that it is not possible.

You might wonder if there is anything that is not conceivable in this view. For one thing, I can conceive of there being inconceivable ideas by virtue of them requiring too much physical information for a brain to contain (and, as information can grow exponentially with the medium in which it is expressed, I do not think this can be avoided by positing an AI larger than a brain.) Maybe the true Theory of Everything is like this. Also, as Eliezer Yudkowsky pointed out, two millennia of philosophizing over epistemology and metaphysics never conceived of the sort of non-local reality that is the only sort allowed by Bell's Inequality; it only became conceivable in the light of new knowledge.

In my previous post, I wrote "Given that we do not know whether [the Collatz conjecture] is true, we might want to say it is possible that it is true; at another time, we might want to say that it is possible that it is false." I have emphasized "at another time" here because in my first draft, I wrote 'and' instead, but changed it, as the original statement would be begging the question I was trying to address. The point I wanted to make in that post is that, under possible-world semantics, and regardless of any difficulties with conjunctions, one cannot even know[1] that 'it is possible that the conjecture is true', because if it is actually false, there are no possible worlds in which it is true.

This is mostly moot, however, because what matters here is not how you or I see it, but how Chalmers is using it. Chalmers, and apparently most philosophers, seem to take the view that obviously false ideas are not conceivable, but obviousness is in the mind of the beholder, and is dependent on what they believe, yet if we take 'obviously' out of the definition, then 'conceivable' is simply a synonym for 'possible'. Likewise, if we rule out concepts that are false by definition (such as Chalmers' example 'male vixen'), they are also dependent on what we know, and often on which definition we accept. This may not matter, as things that are true by definition are usually uninteresting (the vixen case, for example, is just a consequence of the contingent fact that the English language happens to have different words for the male vulpes vulpes and the female vulpes vulpes. There are no profound metaphysical truths to be found in this.)

I think you are trying to show that 'possible' and 'conceivable' are synonyms. If so, then fair enough, but you should realize that, as Chalmers' argument depends on a distinction between 'conceivable' and 'possible', you would be disputing Chalmers' p-zombie argument (and, furthermore, over the same issue that many other people dispute it.)

[1] I originally wrote 'assert' instead of 'know', but then realized that one can, of course, assert a counterfactual.
A Raybould July 04, 2020 at 14:26 #431550
Reply to ssu
I think you are alluding to the Lucas-Penrose argument against the possibility of there being algorithms that produce minds. If so, that is a separate argument from Chalmers' p-zombie argument. Chalmers is attempting to refute metaphysical physicalism, but Penrose is a physicalist.

I am not sure what you mean by 'It would be perhaps provable that consciousness is unprovable.' Specifically, I am not sure what it would mean to say that consciousness is provable - what is the premise that one would be proving?
TheMadFool July 04, 2020 at 16:03 #431590
Quoting ssu
And hence, it would be perhaps provable that consciousness is unprovable.


As far as I can tell, Nagel made a big deal of consciousness being just too subjective to be objectivity-friendly. Since proofs are objective in character, it appears that consciousness can't be proven to another for that reason. Nonetheless, to a person, privately, consciousness is as real as real can get.
ssu July 04, 2020 at 16:13 #431597
Quoting A Raybould
I think you are alluding to the Lucas-Penrose argument against the possibility of there being algorithms that produce minds.

No, just the basic mathematics in Turing's answer to the Entscheidungsproblem with his Turing Machine argument. Above Reply to TheMadFool was talking about mathematical conjectures.

The fact is that there exist unprovable conjectures, even if they are either true or false.

Quoting A Raybould
Specifically, I am not sure what it would mean to say that consciousness is provable - what is the premise that one would be proving?

Let me try to explain.

We can agree on a definition of what is a living organism and what isn't, and of when an organism is alive or dead. Do we agree on a clear definition of what consciousness is? I don't think there is one. We don't know what it is, and philosophers find it puzzling and controversial. Just look at everything that has been written about consciousness.

Now a little thought experiment:

a) Let's assume that mathematics models reality extremely well: hence mathematical conjectures and objects can, as models of reality, tell us something about it.

b) In mathematics there are unprovable but true objects. The problem is of course that we cannot give a direct proof of them (or calculate or compute them). We can give only an indirect proof: it cannot be that they are not true.

c) Assume these unprovable but true objects also model our reality. What would they look like?



ssu July 04, 2020 at 16:19 #431606
Quoting TheMadFool
I can tell, Nagel made a big deal of consciousness being just too subjective to be objectivity-friendly.Since proofs are objective in character, it appears that consciousness can't be proven to an other for that reason. Nonetheless, to a person, privately, consciousness is as real as real can get.

Wow.

Well, he's right on that thing. Because it is genuinely a problem about subjectivity and objectivity. Or to put it another way: the limitations of objectivity. And proving something has to be objective. You simply cannot make an objective model about something that is inherently subjective.

Can you give reference where Nagel said that? It would be interesting to know.
A Raybould July 04, 2020 at 16:53 #431629
Reply to ssu
It's not entirely straightforward to come up with a definition of what's alive and what's dead; there is some disagreement over whether viruses are truly living, and defining the exact point of death of a complex organism is not a simple matter.

Definitions are not proofs, and they are not generally provable, even though some of the arguments made for favoring one definition over another may be provable or disprovable. We don't have a clear, generally agreed-upon definition of consciousness because we don't know enough about it, and gaining sufficient knowledge will be an exercise in science, not logic.

Even if we accept that mathematics models reality extremely well, it does not follow that every mathematical entity models some aspect of reality. I think it is true to say that all unprovables require infinities, and it seems unlikely that modeling any finite aspect of reality, such as the human mind or the whole of the visible universe, requires infinities (for example, the singularities that appear in relativistic models of black holes are taken as evidence that the models are not complete, and the expectation is that they would be resolved in a more complete theory.)

I am not convinced that you simply cannot make an objective model about something that is inherently subjective. Qualia, for example, are widely regarded as subjective, yet it has been posited that they can be explained as a set of abilities.
ssu July 04, 2020 at 17:07 #431638
Quoting A Raybould
It's not entirely straightforward to come up with a definition of what's alive and what's dead; there is some disagreement over whether viruses are truly living, and defining the exact point of death of a complex organism is not a simple matter.

Sure, but it's easier than defining consciousness and what is conscious and what isn't. Doctors have some kind of definition that they apply to the issue.

Quoting A Raybould
Even if we accept that mathematics models reality extremely well, it does not follow that every mathematical entity models some aspect of reality.

Not every one, but true-but-unprovable mathematical objects could be useful, at least in explaining what the problem we face is.

Quoting A Raybould
I am not convinced that you simply cannot make an objective model about something that is inherently subjective.

Ok, I'll use from math/set theory negative self reference and Cantor's diagonalization to make an example.

a) Reply to my post with an answer that you, "A Raybould", will never give.

b) Do such answers exist as in a)?

Naturally you cannot give any answer that you never give, but as life is finite there obviously are answers or remarks that you, A Raybould, don't give. Notice the 1) subjectivity and the 2) negative self-reference. Notice that in Turing's example of the Turing Machine that never halts, the logic behind it is negative self-reference too.
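That negative self-reference is the same trick that drives Turing's halting argument. A very loose Python sketch (mine, not Turing's notation; the infinite loop is replaced by an exception purely so the example can run, and `claims_to_halt` stands for any hypothetical halting oracle):

```python
def make_diagonal(claims_to_halt):
    """Build the diagonal program g, which does the opposite of
    whatever the claimed halting oracle predicts about g itself."""
    def g():
        if claims_to_halt(g):
            # A real diagonal program would loop forever here;
            # we raise instead so the sketch stays runnable.
            raise RuntimeError("looping forever (simulated)")
        return "halted"
    return g

# Whichever answer the oracle gives about g, g does the opposite:
g_yes = make_diagonal(lambda f: True)   # oracle says "g halts"
g_no = make_diagonal(lambda f: False)   # oracle says "g never halts"
```

Calling `g_no()` returns "halted" even though its oracle said it never would, and `g_yes()` "loops" (here, raises) even though its oracle said it halts; no oracle can be right about its own diagonal program.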
A Raybould July 04, 2020 at 17:56 #431663
Reply to ssu
Quoting ssu
Not every, but true but unprovable mathematical objects could be useful. At least in explaining what the problem we face is.


Well, we'll see - or not.

With regard to your example, it is not clear to me what your argument here is - in fact, I cannot even see what your conclusion is. Can you state your conclusion, and the steps by which you reach it? And where is the subjectivity that you mention?
TheMadFool July 04, 2020 at 18:15 #431666
Quoting A Raybould
I think you are trying to show that 'possible' and 'conceivable' are synonyms. If so, then fair enough, but you should realize that, as Chalmers' argument depends on a distinction between 'conceivable' and 'possible', you would be disputing Chalmers' p-zombie argument (and, furthermore, over the same issue that many other people dispute it.)


One example of something being conceivable but impossible is FTL (faster than light) speed - a favorite leitmotif of sci-fi, clearly conceived of (conceivable) but, within the framework of the best available theory of science (relativity), impossible. Basically goes to show that what's conceivable may not be possible. Note that the meaning of conceivable here doesn't seem to go beyond just entertaining an idea, simply a matter of asking "what if?" questions. What is conceivable then is nothing more than having thoughts about a hypothetical that's removed from any and all contexts or frameworks that would force some kind of conclusion that could then be used to figure out possibility/impossibility.

The moment something conceivable, the hypothetical, is placed within a certain framework/context (of knowledge) it entails the truth/falsity of other propositions that render it possible or impossible. Take the FTL travel example I gave. Just asking the question "what if FTL travel were possible?", outside the theory of relativity, is what conceivable is all about. The instant the theory of relativity is used to give some context to the "what if?" question of FTL travel, we see that FTL is impossible. As you can see conceivability is utterly useless in an argument for it's logically inert.

If Chalmers, with his p-zombie argument, is concerned with conceivability, then it's imperative that we provide a backdrop, a context, to derive possibility/impossibility from the mere conceivable. Chalmers is attacking physicalism - the theory that consciousness is physical - and so we have the backdrop/context we're looking for viz. physicalism. If physicalism is true and consciousness is physical then p-zombies (all physical attributes present but lacking consciousness) should be impossible. Notice however, that for Chalmers to show that p-zombies are possible, he can't use physicalism as his context for, quite evidently, p-zombies are impossible in physicalism. If Chalmers thinks p-zombies are possible, he must use a non-physical theory of consciousness but that's precisely what needs to be proved - begging the question.
TheMadFool July 04, 2020 at 18:48 #431676
Quoting ssu
Wow.

Well, he's right on that thing. Because it is genuinely a problem about subjectivity and objectivity. Or to put it another way: the limitations of objectivity. And proving something has to be objective. You simply cannot make an objective model about something that is inherently subjective.

Can you give reference where Nagel said that? It would be interesting to know.


Unfortunately, I'm working with fragments, bits and pieces, of information. I can't assist you as much as I'd like. I can't quite get it right when it comes to who said what but I can tell you, with some confidence, that there's a subjectivity-objectivity issue people seem mighty concerned about regarding consciousness.
ssu July 04, 2020 at 20:15 #431697
Quoting TheMadFool
I can't assist you as much as I'd like. I can't quite get it right when it comes to who said what but I can tell you, with some confidence, that there's a subjectivity-objectivity issue people seem mighty concerned about regarding consciousness.

Obviously it's so when studying consciousness.

Yet Ernest Nagel was a logical positivist, hence a philosopher who knew his math & logic. He wrote a wonderful little book about Gödel's incompleteness theorems. If Nagel refers to proofs here, that is the really interesting part. I would bet five dollars that he's talking about a proof equivalent to a mathematical proof. Because proving something is indeed something objective. A subjective proof sounds not only like an oxymoron, but like something deeply illogical in general philosophy: you and I simply cannot have separate truths, truths that are true only to one but not the other (yeah, I know, we live in a time of post-truth or whatever).

Why am I so interested in the issue from the focus of math & logic? Well, if one can show it in mathematics, that's a very clear language to show it. And Turing was a mathematician.

(Well, if you remember some literature where you got the quote, please tell it.)
ssu July 04, 2020 at 21:00 #431709
Quoting A Raybould
Can you state your conclusion, and the steps by which you reach it? And where is the subjectivity that you mention?

I'll try to explain again.

1) You can write whatever kind of response here on PF, right? There are no limitations on what you can write. (The administrators might delete your answer or ban you, but that is a different matter.)

2) I will make the argument that there exist responses that A Raybould will never write on PF.

3) Can you then write a response that A Raybould will never write? No, you cannot, because you are yourself A Raybould (hence the subjectivity). You yourself define not only those responses that you write, but also an infinite set of responses that you don't write, which is likewise defined by you. Do you see the negative self-reference?


A Raybould July 04, 2020 at 23:09 #431738
Reply to ssu
No, I don't see a negative self-reference that has any relevance to whether subjective experience can be explained. Nor do I see any subjectivity. Nor do I see a conclusion, and nor do I see a coherent argument. Other than that, everything is clear.
AlienFromEarth September 19, 2021 at 21:00 #597621
ok yeah um, just my 2 cents.

What if consciousness is basically just a non-physical extra sensory kind of perception?

Ok say you have 2 roombas. Both roombas are brand new and identical. But you take one roomba now, and you install a bunch of sensors on it that can sense everything the roomba is doing, whereas the other roomba that is not modified can only sense with its factory sensors and doesn't know much about what it's doing or its environment, only enough to vacuum debris off the floor.

Now, the "super sensing" roomba doesn't actually do anything differently in its job to vacuum the floor. Yes it does sense way more things, but that information just sits there in memory and gets analyzed by the CPU and that's about it. Doesn't do anything physical with it.

So we have 2 identical looking roombas that do the same exact job in the same exact way, except that one has a very advanced sensing capability installed in it that is otherwise useless to its functionality.

What if consciousness mirrors this scenario? Let's say you have 2 identical twins in a room. You ask them to perform certain tasks, you realize they do these tasks exactly the same way. But what if one of the twins had some kind of extra sensory perception that is not physical and therefore cannot be measured with the use of microscopes or any other *physical* means? So they would do the same thing, except one would be aware of everything his body is doing, and the other would not be aware of anything at all. But because both bodies are physical, they physically do everything the same way.

So if it makes sense to say there could be a "higher sensing capability" which we call consciousness in one twin, then it is conceivable that there might be somehow a completely absent consciousness in the other twin.
Cabbage Farmer September 22, 2021 at 13:04 #598801
Quoting TheMadFool
One well-known test for Artificial Intelligence (AI) is the Turing Test in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human. No mention of such an AI being conscious is made.

A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.

It seems almost impossible to not believe that a true AI is just a p-zombie in that both succeed in making an observer think they're conscious.

It seems you've got your wires crossed.

AI that passes the Turing test need not be *physically indistinguishable* from a human. A p-zombie is physically (and behaviorally) indistinguishable from a human.

So an AI's passing the Turing test does not entail that the AI is a p-zombie. Not even close.

On the other hand, it's not clear whether a p-zombie should count as *artificial* intelligence at all, or rather as some strange (perhaps impossible) form of human.


Quoting TheMadFool
The following equality based on the Turing test holds:

Conscious being = True AI = P-Zombie

Perhaps the correction I've provided above is enough to persuade you that the Turing-AI and p-zombies are not the same sort of thing.

Moreover, how do you justify slipping consciousness into this three-part identity statement? Nothing you've said so far about Turing-AI or about p-zombies suggests that they are conscious. As you've noted, the p-zombie is not conscious by definition, and Turing-AI only fools people into mistakenly believing they are conversing with a human.

How does any of that get you to the inference that AI and P-zombies are conscious?
frank September 22, 2021 at 16:49 #598910
Reply to Cabbage Farmer
They're conscious by a functionalist definition, aren't they?
TheMadFool September 22, 2021 at 18:10 #598938
Reply to Cabbage Farmer Hi. You dug up a corpse of a thread.

My point is simple. Imagine an AI (X) that looks exactly like a human, i.e. externally; you don't have the option of examining its innards, which would be a dead giveaway.

Now, picture a p-zombie (Y). Though a p-zombie, unlike the AI, is physically indistinguishable from conscious humans, here too you're not allowed to do a thorough inspection of the specimen.

Both X and Y will convince you that each is conscious. In other words, if I (assuming I'm not an AI or a p-zombie) stand next to X and Y, and you interview the three of us, you simply won't be able to tell which is which.

You have two choices:

1. Declare that all 3 are conscious.

OR

2. Declare that all 3 are not conscious.

If 1, physicalism is true (p-zombies are impossible) BUT you'll have to concede AI is conscious and not just because they can mimic consciousness (pass the Turing test) but that AI is actually conscious.

If 2, physicalism is false (p-zombies are possible) BUT then you'll have to contend with the possibility that other people are p-zombies.

It's a dilemma: either true AI is conscious OR other people could be p-zombies.

Cabbage Farmer September 23, 2021 at 17:18 #599459
Quoting TheMadFool
You have two choices:

1. Declare that all 3 are conscious.

OR

2. Declare that all 3 are not conscious.

If I had good reason to believe that there were lots of human-like AI robots and lots of p-zombies tooling around the Earth, I would go with a third choice, which you neglected to mention:

3. Suspend judgment on whether these seeming sentient beings are genuine sentient beings, p-zombies, or mere simulations.

However, since I as yet have no reason to believe there are any AI systems that seem (physically and behaviorally) just like humans from the outside, and since I have no reason to believe that there is (or could be) any such thing as a p-zombie, I don't bother suspending judgment on this point.

Ordinarily I infer that the seeming-humans I encounter in my immediate vicinity are genuine human beings. Of course I could be wrong. But this conceivability of error is no different than the conceivability of error that attends all my ordinary perceptual judgments. Ordinarily I infer, further, that genuine human beings are sentient beings -- except perhaps when they seem unconscious, in which case I may suspend judgment on the matter.

In any case, surely my "declaring" that something is a genuine human (or a genuine dog, star, barn...) doesn't make it so. The facts are the facts, no matter what declarations I may be disposed to heap upon them on the basis of my own perceptual experience.

Quoting TheMadFool
If 1, physicalism is true (p-zombies are impossible) BUT you'll have to concede AI is conscious and not just because they can mimic consciousness (pass the Turing test) but that AI is actually conscious.

If 2, physicalism is false (p-zombies are possible) BUT then you'll have to contend with the possibility that other people are p-zombies.

It's a dilemma: either AI is true consciousness OR other people could be p-zombies.

In keeping with my preceding assessment of your list of "declarations", I'd have to say the rest of your argument doesn't get off the ground.
Cabbage Farmer September 23, 2021 at 17:21 #599461
Quoting frank
They're conscious by a functionalist definition, aren't they?

What sort of functionalist definition do you have in mind? And do you mean the p-zombies, the Turing-AI, or both?
frank September 23, 2021 at 18:34 #599488
Quoting Cabbage Farmer
What sort of functionalist definition do you have in mind? And do you mean the p-zombies, the Turing-AI, or both?


A functionalist says there are only functions of consciousness like reportability. There's no extra awareness. IOW, functionalists basically think we're all p-zombies or Turing AIs.
TheMadFool September 23, 2021 at 23:21 #599597
Quoting Cabbage Farmer
3. Suspend judgment on whether these seeming sentient beings are genuine sentient beings, p-zombies, or mere simulations.


Quoting Cabbage Farmer
I'd have to say the rest of your argument doesn't get off the ground.


In other words, you can't tell whether the three (other humans, true AI, p-zombies) are conscious or not. You can't commit or come to a definitive conclusion because to do so has implications that you're not willing to accept.

The dilemma, as I stated it earlier:

If you say other humans are conscious, you'll have to accept AI and p-zombies are conscious. We can forget about the p-zombie for the moment and focus our attention on AI - they have to be treated as conscious/sentient.

If you affirm that AI isn't conscious, you must concede that other humans may not be conscious/sentient or that other humans are p-zombies.

So, either AI is conscious or other humans are p-zombies.

Thus, I've demonstrated that AI and p-zombies are intimately linked together. That amounts to something, right?
Cabbage Farmer September 27, 2021 at 14:53 #601157
Quoting frank
A functionalist says there are only functions of consciousness like reportability. There's no extra awareness. IOW, functionalists basically think we're all p-zombies or Turing AIs.

I'm aware that sort of view has been fashionable among hard behaviorists, functionalists, computationalists, eliminative materialists, and their ilk. But I'm not sure all functionalists are committed to that sort of view.

Consider the definitive doctrine ascribed to the functionalist by the SEP, "that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part." Couldn't one hold this "doctrine" while remaining agnostic about the "extra awareness" you indicate? I suppose one might adopt a functionalist account of "mental states", and even of "mind", without denying that some or all minds have that "extra" awareness, and perhaps without any interest in that proposition.

That qualification aside: I agree that if one believes "there's no extra awareness", as you put it, then the distinction between sentient beings and p-zombies collapses.

So far as I can make out, that would mean there's no sense in talking about p-zombies for them. For the rest of us, it will seem as though their conception of sentience is akin to our conception of the p-zombie. I'm not sure where that leaves their talk of AI. I mean, on what grounds would they require that an AI system pass the Turing test in order to count as "conscious"? I'd expect them to count a much wider range of AI systems as "conscious". I'd ask them to provide some account of their distinction between conscious and nonconscious systems, regardless of whether they are natural or artificial.
frank September 27, 2021 at 16:01 #601174
Quoting Cabbage Farmer
Couldn't one hold this "doctrine" while remaining agnostic about the "extra awareness" you indicate? I suppose one might adopt a functionalist account of "mental states", and even of "mind", without denying that some or all minds have that "extra" awareness, and perhaps without any interest in that proposition.


I think so; it's just that the explanations provided by functionalism don't cover qualia. So it's left hanging, so to speak.

Quoting Cabbage Farmer
So far as I can make out, that would mean there's no sense in talking about p-zombies for them. For the rest of us, it will seem as though their conception of sentience is akin to our conception of the p-zombie.


The p-zombie was originally just a thought experiment indicating that consciousness doesn't reduce to function, so functionalism can't pass for a complete theory of consciousness.

A functionalist might take up the idea of the p-zombie to show that functionalism is conceivably adequate for explaining consciousness. But then, nobody has argued that it's inconceivable that we're all p-zombies. It just doesn't square with what most of us experience.

Quoting Cabbage Farmer
I mean, on what grounds would they require that an AI system pass the Turing test in order to count as "conscious"?


I don't know. I gather that consciousness is pretty thoroughly deflated for functionalists, so they can say the word applies or doesn't as they like. Maybe they would advise that it's all a matter of language game.