You are viewing the historical archive of The Philosophy Forum.

The important question of what understanding is.

Daemon September 29, 2021 at 23:03 8850 views 214 comments
Quoting TheMadFool
That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI. Speaking from my own experience, understanding/comprehension seems to start off at the very basic level of matching linguistic symbols (words, spoken or written) to their respective referents e.g. "water" is matched to "cool flowing substance that animals and plants need", etc. This is clearly something a computer can do, right? After such a basic cognitive vocabulary is built what happens next is simply the recognition of similarities and differences and so, continuing with my example, a crowd of fans moving in the streets will evoke, by its behavior, the word and thus the concept "flowing" or a fire will evoke the word/concept "not cold" and so on. In other words, there doesn't seem to be anything special about understanding in the sense that it involves something more than symbol manipulation and the ability to discern like/unlike things.


I have been a professional translator for 20 years. My job is all about understanding. I use a Computer-Assisted Translation (CAT) tool.

The CAT tool suggests translations based on what I have already translated. Each time I pair a word or phrase with its translation, I put that into the "translation memory". The CAT tool sometimes surprises me with its translations; it can feel quite spooky, as if the computer understands. But it doesn't, and it can't.
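To make vivid just how little is going on, here is a toy sketch of a translation memory in Python (purely illustrative: the segments and the French are invented, and my actual CAT tool certainly works differently):

```python
from difflib import SequenceMatcher

# Toy "translation memory": source segments paired with stored translations.
# (Both the segments and the French are invented for illustration.)
memory = {
    "close the valve": "fermez la vanne",
    "open the valve slowly": "ouvrez la vanne lentement",
}

def suggest(segment, threshold=0.6):
    """Suggest the stored translation whose source segment is most similar.

    This is pure string similarity: symbols matched against symbols, with
    no referents and no experience anywhere in the loop.
    """
    best, score = None, 0.0
    for source, target in memory.items():
        ratio = SequenceMatcher(None, segment, source).ratio()
        if ratio > score:
            best, score = target, ratio
    return best if score >= threshold else None
```

A near match like "close the valve now" still pulls up "fermez la vanne" - just the sort of suggestion that feels spooky, with nothing resembling understanding anywhere in the loop.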

I do a wide range of translation work. I do technical translations, operating and maintenance instructions for machines for example. To understand a text like that, you need to have had experience of work like that. Experience is the crucial element the computer lacks. Experience of all facets of our world. For example, to understand fundamental concepts like "up" and "down", "heavy" and "light", you need to have experienced gravity.

I translate marketing texts. Very often my clients want me to make their products sound good, and they want their own customers to feel good about their products and their company. To understand "good" you need to have experienced feelings like pleasure and pain, sadness and joy, frustration and satisfaction.

I translate legal texts, contracts, court documents.

A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

A computer can't understand that "they" applies to the protestors in A but the councillors in B, because it's not immersed in our complex world of experience.
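To see why a surface rule can't rescue the machine here, consider a deliberately naive pronoun resolver (a toy of my own devising, not any real translation engine):

```python
def resolve_they(sentence):
    """Naive, purely syntactic rule: bind "they" to the nearest preceding
    plural noun. Because the resolver inspects symbols only, it cannot
    tell the two sentences apart.
    """
    candidates = [w.strip(".,") for w in sentence.split()
                  if w.strip(".,") in ("councillors", "protestors")]
    return candidates[-1]  # nearest candidate before "they"

a = ("The councillors refused to allow the protestors to demonstrate, "
    "because they advocated violence.")
b = ("The councillors refused to allow the protestors to demonstrate, "
    "because they feared violence.")
```

It returns "protestors" for both sentences. A human reader, drawing on experience of how councillors and protestors behave, answers differently for each.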

Comments (214)

Tom Storm September 29, 2021 at 23:40 #602004
Reply to Daemon Was there a question? Language is never the thing it describes, so it is not surprising that language can be mastered and words mustered that never make contact with experience. It strikes me too that people are often not much better than computers and often master languages - management speak, or whatever - and have no idea what they are saying.
frank September 29, 2021 at 23:45 #602008
Quoting Tom Storm
It strikes me too that people are often not much better than computers and often master languages - management speak, or whatever - and have no idea what they are saying.


Nobody masters a language with no idea what they are saying. I don't know why you would suggest such a thing.
Daemon September 29, 2021 at 23:48 #602009
Reply to Tom Storm It was more of an answer than a question.

Yes, human translators sometimes don’t understand what they are translating. Everyone has been baffled by poorly translated product instructions I guess. And sometimes this is because the human translator does not have experience assembling or using the product, or products like it.
Tom Storm September 29, 2021 at 23:49 #602010
Quoting frank
Nobody masters a language with no idea what they are saying. I don't know why you would suggest such a thing.


Because it is true? But maybe you don't get my meaning and are making it too concrete. I've met dozens of folk who work in government and management who can talk for an hour using ready-to-assemble phrases and current buzzwords without saying anything and - more importantly - not knowing what they are saying.
Tom Storm September 29, 2021 at 23:50 #602011
Quoting Daemon
Yes, human translators sometimes don’t understand what they are translating. Everyone has been baffled by poorly translated product instructions I guess. And sometimes this is because the human translator does not have experience assembling or using the product, or products like it.


Yes, as long as you obey certain rules, understanding the meaning can be secondary.
Daemon September 29, 2021 at 23:52 #602012
Luckily though, there is still work for translators who do understand what they are talking about.
Daemon September 29, 2021 at 23:54 #602014
Quoting Tom Storm
Because it is true? But maybe you don't get my meaning and are making it too concrete. I've met dozens of folk who work in government and management who can talk for an hour using ready-to-assemble phrases and current buzzwords without saying anything and - more importantly - not knowing what they are saying.


You are speaking rather loosely here. Exaggerating.
frank September 29, 2021 at 23:56 #602015
Quoting Tom Storm
I've met dozens of folk who work in government and management who can talk for an hour using ready-to-assemble phrases and current buzzwords without saying anything and - more importantly - not knowing what they are saying.


One assumes when they're finished bullshitting you they go back to speaking a language they do understand, so you haven't addressed the OP, have you?
Tom Storm September 29, 2021 at 23:59 #602017
Reply to frank It was an aside, Frank - the idea being that it is not only computers that can assemble syntax without connecting to the content. You don't have to agree.
Tom Storm September 29, 2021 at 23:59 #602018
Quoting Daemon
You are speaking rather loosely here. Exaggerating.


Not by much.
frank September 30, 2021 at 00:02 #602019
Quoting Tom Storm
It was an aside, Frank - the idea being that it is not only computers that can assemble syntax without connecting to the content. You don't have to agree.


I think the OP's point was that context is important for translation. You seemed to be arguing to the contrary. Now I don't know what your point was.
Tom Storm September 30, 2021 at 00:03 #602020
Reply to frank Apologies if I got it wrong. I thought I was agreeing and extending the point.
TheMadFool September 30, 2021 at 02:41 #602061
Quoting Daemon
I have been a professional translator for 20 years. My job is all about understanding. I use a Computer-Assisted Translation (CAT) tool.

The CAT tool suggests translations based on what I have already translated. Each time I pair a word or phrase with its translation, I put that into the "translation memory". The CAT tool sometimes surprises me with its translations; it can feel quite spooky, as if the computer understands. But it doesn't, and it can't.

I do a wide range of translation work. I do technical translations, operating and maintenance instructions for machines for example. To understand a text like that, you need to have had experience of work like that. Experience is the crucial element the computer lacks. Experience of all facets of our world. For example, to understand fundamental concepts like "up" and "down", "heavy" and "light", you need to have experienced gravity.

I translate marketing texts. Very often my clients want me to make their products sound good, and they want their own customers to feel good about their products and their company. To understand "good" you need to have experienced feelings like pleasure and pain, sadness and joy, frustration and satisfaction.

I translate legal texts, contracts, court documents.

A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

A computer can't understand that "they" applies to the protestors in A but the councillors in B, because it's not immersed in our complex world of experience.


Thanks for your comment. I just watched a video on minds & machines - The Turing Test & Searle's Chinese Room Argument. As per the video, a computer that passes the Turing Test does so solely on the basis of the syntactic properties, and not the semantic properties, of symbols. So, at best, an AI that passes the Turing Test is simply a clever simulation of a human being, no more, no less.

My own view on the matter is that semantics is simply what's called mapping - a word X is matched to a corresponding referent, say, Jesus. I'm sure this is possible with computers. It's how children learn to speak, using ostensive definitions. We could start small and build up from there, as it were.

Daemon September 30, 2021 at 08:32 #602145
Quoting TheMadFool
My own view on the matter is that semantics is simply what's called mapping - a word X is matched to a corresponding referent, say, Jesus. I'm sure this is possible with computers. It's how children learn to speak, using ostensive definitions. We could start small and build up from there, as it were.


The examples I gave were intended to illustrate that semantics isn't simply mapping!

Mapping is possible with computers, that's how my CAT tool works. But mapping isn't enough, it doesn't provide understanding. My examples were intended to illustrate what understanding is.

Children learn some things by ostensive definition, but that isn't enough to allow understanding. I have a two-year-old here. We've just asked him "do you want to play with your cars, or do some gluing?"

He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire.



Hermeticus September 30, 2021 at 09:27 #602149
Quoting Daemon
Each time I pair a word or phrase with its translation, I put that into the "translation memory". The CAT tool sometimes surprises me with its translations, it can feel quite spooky, it feels like the computer understands.


It's not unlike what we humans do. Our "translation memory" simply is the mental act of association. I take input/sensation A (I see, feel or taste an apple) and link it to input/sensation B (the word apple in auditory or visual form).

Quoting Daemon
Experience is the crucial element the computer lacks.


Quoting Daemon
For example, to understand fundamental concepts like "up" and "down", "heavy" and "light", you need to have experienced gravity.


Quoting Daemon
He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire.


Machines can experience physical phenomena that reflect our perception - from cameras to thermometers to pressure and gyro sensors - not one of our senses is beyond digital replication. This means that fundamental concepts like "up" and "down", "heavy" and "light" can indeed be experienced by computers.

Your last example, though, is a whole different phenomenon, and this is where it gets interesting. Qualia - emotional sensations like happiness, sadness, desire and the like - cannot be found and measured in the physical realm. I have a hard time imagining that AI will ever get a good grip on these concepts.

I think it's a question of how machinized someone perceives our human organism. After all, the statement that there's no physical phenomenon corresponding to emotions is false. Strictly speaking, it's all chemistry - qualitatively and quantitatively measurable. If and how this could possibly translate to machines perceiving emotion is beyond me. All I know is that it raises one of the most interesting philosophical questions that I have seen in sci-fi: if a robot seemingly acts and feels like a human, how are we to know whether it is merely acting or whether it actually engages with sensation and stimulation in the same way we do?


Daemon September 30, 2021 at 09:50 #602153
Quoting Hermeticus
Machines can experience physical phenomena that reflect our perception - from cameras to thermometers to pressure and gyro sensors - not one of our senses is beyond digital replication. This means that fundamental concepts like "up" and "down", "heavy" and "light" can indeed be experienced by computers.


A camera does not see. A thermometer does not feel heat and cold. A pressure sensor does not feel pressure.

Wayfarer September 30, 2021 at 10:57 #602159
Reply to Daemon Very true. Computers don’t experience anything, any more than does an abacus. A computer is a vast array of switches, although they’re so effective that they emulate some aspects of experience.

I worked at an AI company for a few months, some time back. The biggest problem they had was in imparting 'context' to their agent. She (she was given a female persona) always seemed to lack a sense of background to queries.

One of my early experiences is illustrative. I had a set of test data to play with, from supermarket sales. I noticed you could request data for single shoppers or families with children. I asked 'Shirley' (not her name, but that's a trade secret) if she had information on bachelors. After a moment, she asked 'Bachelor: is that a kind of commodity: olive?' So she was trying to guess if the word 'bachelor' was a kind of olive. I was super-impressed that she tried to guess that. But then the guess was also kind of clueless. This kind of issue used to come up a lot. Like, I notice with Siri that there are certain contextual things she will never get. (I also have Alexa, but all she does is play the radio for me.)


Quoting Hermeticus
After all, the statement that there's no physical phenomenon corresponding to emotions is false. Strictly speaking, it's all chemistry - qualitatively and quantitatively measurable.


That is lumpen materialism. There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone. It's that they also encode memory which is transmitted in the form of DNA. Ernst Mayr, a leading theoretical biologist, said 'The discovery of the genetic code was a breakthrough of the first order. It showed why organisms are fundamentally different from any kind of nonliving material. There is nothing in the inanimate world that has a genetic program which stores information with a history of three thousand million years’. Furthermore, all beings, even simple ones, are also subjects of experience, not simply objects of analysis, which introduces a degree of unpredictability which no simple objective model can hope to capture.


Hermeticus September 30, 2021 at 11:23 #602163
Quoting Daemon
A camera does not see. A thermometer does not feel heat and cold. A pressure sensor does not feel pressure.


The physical principles behind these sensors and the senses of our body are literally the same. The difference is in the signal that is sent thereafter (and even then, both signals are electric) and how the signal is processed.

It goes way further than that, though. The field of bionic prosthetics has already managed to send all the right signals to the brain. There are robotic arms that allow the user to feel touch. They are working on artificial eyes hooked up to the optic nerve - and while they're not quite finished yet, the technology has already been proven to work.

Quoting Wayfarer
That is lumpen materialism. There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone.


When we talk about what is, it's easiest to speak in terms of materialism. If two processes are comparable, one biological, one mechanical, why shouldn't I be able to compare them? As I said:
Quoting Hermeticus
I think it's a question of how machinized someone perceives our human organism.


I was going to agree with "Living beings cannot be described in terms of chemistry alone", but the more I think about it, I'm not so sure. Your example doesn't make sense to me either way. What do you think deoxyribonucleic acid is, if not chemistry?


Daemon September 30, 2021 at 11:26 #602164
Quoting Wayfarer
There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone. It's that they also encode memory which is transmitted in the form of DNA.


Bacteria can swim up or down what is called a chemical gradient. They will swim towards a source of nutrition, and away from a noxious substance. In order to do this, they need to have a form of "memory" which allows them to "know" whether the concentration of a chemical is stronger or weaker than it was a moment ago.

https://www.cell.com/current-biology/comments/S0960-9822(02)01424-0

Here's a brief extract from that article:

Increased concentrations of attractants act via their MCP receptors to cause an immediate inhibition of CheA kinase activity. The same changes in MCP conformation that inhibit CheA lead to relatively slow increases in MCP methylation by CheR, so that despite the continued presence of attractant, CheA activity is eventually restored to the same value it had in the absence of attractant. Conversely, CheB acts to demethylate the MCPs under conditions that cause elevated CheA activity. Methylation and demethylation occur much more slowly than phosphorylation of CheA and CheY. The methylation state of the MCPs can thereby provide a memory mechanism that allows a cell to compare its present situation to its recent past.

The bacterium does not experience the chemical concentration.

The "memory" encoded by DNA can also be described entirely in terms of chemistry. So I think Mayr got this wrong.
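The adaptation mechanism the extract describes can even be captured in a few lines of code - a toy numerical model (my own simplification, with made-up rates, not the real CheA/CheR kinetics):

```python
def make_comparator(adaptation_rate=0.1):
    """Toy chemotaxis "memory": a slowly updated running average plays the
    role of MCP methylation, so the output tracks recent *change* in
    concentration rather than its absolute level.
    """
    state = {"memory": 0.0}

    def respond(concentration):
        # Fast pathway: compare the present to the remembered recent past.
        response = concentration - state["memory"]
        # Slow pathway: the "methylation" level drifts toward the present.
        state["memory"] += adaptation_rate * (concentration - state["memory"])
        return response

    return respond
```

A step up in attractant produces a transient response that decays back to zero as the "memory" catches up. The comparing is done entirely by this chemistry-like bookkeeping, with no experience anywhere.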
TheMadFool September 30, 2021 at 11:38 #602165
Quoting Daemon
The examples I gave were intended to illustrate that semantics isn't simply mapping!

Mapping is possible with computers, that's how my CAT tool works. But mapping isn't enough, it doesn't provide understanding. My examples were intended to illustrate what understanding is.

Children learn some things by ostensive definition, but that isn't enough to allow understanding. I have a two-year-old here. We've just asked him "do you want to play with your cars, or do some gluing?"

He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire


Thanks for trying to clarify the issue for me. Much obliged. Please tell me,

1. What understanding is, if not mapping?

2. Whatever thinking is, it seems to be some kind of pattern-recognition process. That looks codable. Semantics are patterns, e.g. dogs = domesticated (pattern) wolves (pattern).

In short, semantics seems to be within the reach of computers provided pattern recognition can be coded.

What say you?
Daemon September 30, 2021 at 11:38 #602166
Quoting Hermeticus
The field of bionic prosthetics has already managed to send all the right signals to the brain. There are robotic arms that allow the user to feel touch. They are working on artificial eyes hooked up to the optic nerve - and while they're not quite finished yet, the technology has already been proven to work.


But this can't be done without using the brain!


Hermeticus September 30, 2021 at 11:55 #602170
Quoting Daemon
But this can't be done without using the brain!



Quoting Hermeticus
The difference is in the signal that is sent thereafter and how the signal is processed.

Quoting Hermeticus
I have a hard time imagining that AI will ever get a good grip on these concepts.

Quoting Hermeticus
If and how this could possibly translate to machines perceiving emotion is beyond me.


It's hard to picture an artificial brain because we don't even fully understand how our brains work. It's a matter of complexity. Our understanding of it is getting better and better though. On what basis can we say that an artificial brain wouldn't be possible in the future?
Daemon September 30, 2021 at 12:07 #602175
Quoting Hermeticus
On what basis can we say that an artificial brain wouldn't be possible in the future?


We can't, but this is science fiction, not philosophy. I love science fiction, but that's not what I want to talk about here.

Daemon September 30, 2021 at 12:13 #602177
Quoting TheMadFool
Please tell me,

1. What understanding is, if not mapping?


My examples were intended to illustrate what understanding is.
Wayfarer September 30, 2021 at 12:19 #602179
Quoting Daemon
Bacteria can swim up or down what is called a chemical gradient. They will swim towards a source of nutrition, and away from a noxious substance.


Something which no inorganic substance will do. Nobody would deny that such behaviours involve chemistry, but they're not reducible to chemistry.

Quoting Daemon
The bacterium does not experience the chemical concentration.


Perhaps behaviour is what experience looks like from the outside.
Daemon September 30, 2021 at 12:22 #602180
Reply to Wayfarer But if you read that article, you can see that bacterial chemotaxis is entirely reducible to chemistry!
TheMadFool September 30, 2021 at 12:31 #602182
Quoting Daemon
My examples were intended to illustrate what understanding is


Automated Theorem Proving

[quote=Wikipedia]Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs.[/quote]

I'm no mathematician, but if math proofs are anything like proofs in philosophy, semantics is a cornerstone. One has to understand the meaning of words and sentences/statements.

In automated theorem proving, computers have been used to prove math theorems, but we know that computers are semantics-blind and can only manage syntax. Yet, they're doing things (proving theorems) which, in our case, requires understanding. My question is: can semantics be reduced to syntax?
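To make "syntax only" concrete, here is a toy proof checker (my own sketch - real theorem provers are vastly more sophisticated) that verifies modus ponens chains by matching symbol shapes alone:

```python
def check_proof(premises, steps):
    """Accept a line if it is already derived or follows by modus ponens
    from two derived lines P and ("->", P, Q). The check is shape-matching
    on symbols; the checker has no idea what "A" or "->" mean.
    """
    derived = list(premises)
    for line in steps:
        ok = line in derived or any(
            ("->", p, line) in derived for p in derived
        )
        if not ok:
            return False
        derived.append(line)
    return True
```

It happily derives C from A, A -> B and B -> C, yet nothing in it knows what A, B, C or the arrow are about.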
Daemon September 30, 2021 at 12:50 #602189
Quoting TheMadFool
Yet, they're doing things (proving theorems) which, in our case, requires understanding.


They aren't doing things, we are using them to do things.

It's the same with an abacus. You can push two beads to one end of the wire, but the abacus isn't then proving that 1 + 1 = 2.
Hermeticus September 30, 2021 at 13:01 #602193
Quoting Daemon
We can't, but this is science fiction, not philosophy. I love science fiction, but that's not what I want to talk about here


Well, we're talking about understanding and you made experience your premise. I've argued that it's absolutely possible for an AI to have the same experiences we have with our senses and that it's merely a question of how the content of these experiences is processed. If we're not talking about hypotheticals, then the answer is obviously no, AI cannot understand like humans do.

If you just want to talk about what understanding in general is, I'm totally with @TheMadFool here. Understanding is mapping. Complex chains of association between sensations and representations.

TheMadFool September 30, 2021 at 13:02 #602194
Quoting Daemon
They aren't doing things, we are using them to do things.

It's the same with an abacus. You can push two beads to one end of the wire, but the abacus isn't then proving that 1 + 1 = 2.


So, if we ask a group of grade 1 students to carry out the math operation 1 + 1, they aren't doing things, we are using them to do things.

TheMadFool September 30, 2021 at 13:02 #602196
Quoting Hermeticus
Understanding is mapping


Thank you!
Daemon September 30, 2021 at 13:06 #602198
Quoting Hermeticus
If you just want to talk about what understanding in general is, I'm totally with TheMadFool here. Understanding is mapping. Complex chains of association between sensations and representations.


But computers don't have sensations, they don't make associations, they don't use representations.
Daemon September 30, 2021 at 13:14 #602200
Quoting Hermeticus
I've argued that it's absolutely possible for an AI to have the same experiences we have with our senses


In science fiction?
Hermeticus September 30, 2021 at 13:25 #602204
Quoting Daemon
In science fiction?


I'm not gonna repeat myself forever.

Quoting Daemon
But computers don't have sensations, they don't make associations, they don't use representations.

Quoting Hermeticus
If we're not talking about hypotheticals, then the answer is obviously no, AI cannot understand like humans do.


If we were to talk in hypotheticals:

Sensation
Quoting Hermeticus
The physical principles behind these sensors and the senses of our body are literally the same.

We already have this.

Association
Quoting Hermeticus
The difference is in the signal that is sent thereafter and how the signal is processed.

We don't have this yet, hence I raised the point of artificial brain.

And as for representations - computers are literally built on them. They're a representational system. Everything you see in your browser is a representation of a programming language. The programming language is a representation of another programming language (machine code). Machine code is a representation of bit manipulation. Bits are a representation of electric current.
Daemon September 30, 2021 at 16:30 #602245
Quoting Hermeticus
If we were to talk in hypotheticals:


I'm not interested in discussing hypotheticals. The Cambridge Dictionary says hypothetical means "imagined or suggested, but perhaps not true or really happening".

Quoting Hermeticus
Sensation

The physical principles behind these sensors and the senses of our body are literally the same. — Hermeticus

We already have this.


We do not. Sensors do not have sensations.

Quoting Hermeticus
And as for representations - computers are literally built on them. They're a representational system. Everything you see in your browser is a representation of a programming language. The programming language is a representation of another programming language (machine code). Machine code is a representation of bit manipulation. Bits are a representation of electric current.


But the representation is to us, not to the computer. All there is in the computer is electric current. No bits, no languages. We say that the electric current represents something, in the same way that the beads on an abacus represent numbers.
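That representation lies in the reader rather than in the machine can be shown with the machine itself: the very same bit pattern "represents" different things depending on the convention we bring to it. A standard illustration, sketched here with Python's struct module:

```python
import struct

# One fixed 32-bit pattern, stored once.
raw = struct.pack("<I", 1078530011)

# Read under two different human conventions:
as_int = struct.unpack("<I", raw)[0]    # as an unsigned integer
as_float = struct.unpack("<f", raw)[0]  # as an IEEE-754 float: ~3.14159

# The bits don't "mean" either one; the meaning is in the convention
# the reader applies, just as with beads on an abacus.
```

Same bytes, same currents; whether they are the integer 1078530011 or (approximately) pi is decided by the reader's convention, not by the machine.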



Outlander September 30, 2021 at 16:54 #602250
Naturally, to understand what is, one must first be intimately acquainted with what isn't. The first step is understanding that whatever is can become what isn't. The second is understanding that you know very little, if anything at all. Observations of current circumstance are not knowledge but simple consciousness, which, as we know, is volatile and its state subject to change.
Daemon September 30, 2021 at 19:20 #602280
Quoting TheMadFool
Understanding is mapping — Hermeticus


Thank you!


Mapping is not understanding, as illustrated by my examples.

180 Proof September 30, 2021 at 20:18 #602287
Interesting discussion. From a previous thread:
Quoting 180 Proof
Understanding denotes conceptual reflection (i.e. metacognition) by which knowing is distinguished from, and contextualized by, not knowing.

I.e. performative competences developed by lived-experience (of failure and adaptation).

Deleted User September 30, 2021 at 21:51 #602316
This user has been deleted and all their posts removed.
TheMadFool October 01, 2021 at 02:27 #602358
Quoting Daemon
Mapping is not understanding, as illustrated by my examples.


Ok so, what's your definition of understanding?

Please don't repeat yourself by saying, "...as illustrated by my examples...".
Daemon October 01, 2021 at 08:49 #602412
Quoting tim wood
If I'm correct, maybe someone can do a better job than I explaining why.


No, I think you're not correct, Tim. You can take the comma out of both sentences or add parentheses to both if you wish, without affecting the meaning (or the grammaticality).



Daemon October 01, 2021 at 09:05 #602417
Quoting TheMadFool
Ok so, what's your definition of understanding?

Please don't repeat yourself by saying, "...as illustrated by my examples...".


But why not? As Wittgenstein famously observed "meaning is use". You can tell what I mean by "understanding" by the way I use it in my examples. I'm using it in the standard way. I could of course provide you with dictionary definitions of "understand", but it hardly seems necessary as you already know how the word is normally used. If you didn't already understand the word, you wouldn't understand the definition.
TheMadFool October 01, 2021 at 09:08 #602418
Quoting Daemon
But why not? As Wittgenstein famously observed "meaning is use". You can tell what I mean by "understanding" by the way I use it in my examples. I'm using it in the standard way. I could of course provide you with dictionary definitions of "understand", but it hardly seems necessary as you already know how the word is normally used. If you didn't already understand the word, you wouldn't understand the definition.


Red Herring :yawn:
Daemon October 01, 2021 at 09:11 #602420
Reply to TheMadFool If you want a dictionary definition, Google it. I'm using the word in the standard way.
TheMadFool October 01, 2021 at 09:12 #602421
Quoting Daemon
If you want a dictionary definition, Google it. I'm using the word in the standard way.


I kept my end of the bargain, you should keep yours.
Daemon October 01, 2021 at 09:13 #602422
I've done it for you:

Understand: perceive the intended meaning of (words, a language, or a speaker).
TheMadFool October 01, 2021 at 09:16 #602423
Quoting Daemon
Understand: perceive the intended meaning of (words, a language, or a speaker).


That's mapping words to referents.
I like sushi October 01, 2021 at 09:20 #602424
Quoting Daemon
A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

B. The councillors refused to allow the protestors to demonstrate, because they feared violence.


These are different meanings. In A the councillors advocate violence and in B they fear violence (which has two meanings in and of itself).

If something is poorly written then it is harder to translate. Don't blame the computer for someone's lack of clarity in their writing.
Daemon October 01, 2021 at 09:22 #602426
Reply to I like sushi

You have misunderstood the sentences!
I like sushi October 01, 2021 at 09:32 #602429
Reply to Daemon No, I have followed the meaning via the 'subject' - which in both cases is 'councillors'.

I believe the rule is that if it isn't clear who is giving the reason, we go with the subject, not the object. In day-to-day speech there is no need, as the sentence is usually understood within the given context. They are both open to a degree of interpretation that would be cleared up by the sentences that precede or follow.

As stand-alone sentences, I would assume the 'councillors' are the ones 'fearing violence' or 'advocating violence'.
Daemon October 01, 2021 at 10:06 #602433
Reply to I like sushi I don't know where you got your "rule" from but that isn't how language works.

The point of the example, which it seems is rather wasted on you, is that we already understand the context without needing to see preceding and succeeding sentences. We know how councillors and protestors behave.
I like sushi October 01, 2021 at 10:46 #602435
Here’s a better example: “The chicken is ready to eat”
I like sushi October 01, 2021 at 10:49 #602437
The point is that it is on the writer to avoid ambiguity in sentences when needed. As for the ‘rule’ I mentioned, I’m not sure whether it is an actual prescriptive grammatical rule or not.
I like sushi October 01, 2021 at 11:17 #602442
Reply to Daemon Just to add. People living in a society where the councillors have been pro violence for generations would certainly have a different interpretation. Computer translation is very limited because it generally doesn't deal with things like a double entendre or the context any given sentence is written in.

A program would certainly have to be programmed to better adjust to what is a living and changing language, not one that is set in stone. The advent of the internet has already dramatically changed the evolution of human languages.
Daemon October 01, 2021 at 13:00 #602476
Quoting I like sushi
Here’s a better example: “The chicken is ready to eat”


It's not a better example, it's just a slightly less interesting example.

Quoting I like sushi
The point is that it is on the writer to avoid ambiguity in sentences when needed.


No, that completely misses the point of my example! The point was to show how our immersion in a world of experience allows us to understand things which a computer can't understand.



InPitzotl October 01, 2021 at 13:19 #602485
Quoting Daemon
The CAT tool suggests translations based on what I have already translated.

"The store has bananas" might be translated by the CAT tool from another language; perhaps it's translating to French, and it would map "banana" to "banane". That's a mapping of symbols to symbols.

But the referents for bananas aren't in English or French dictionaries... they are in store shelves, inside pies, and so on. What TMF is talking about is a mapping from "banana" to the stuff on the store shelf, the stuff infused within banana bread, the stuff in banana cream pies.

I think TMF is just having problems expressing this... on a forum, we generally use words. But the referents here are not words.
I like sushi October 01, 2021 at 13:19 #602486
Computers don’t understand context. Or anything. Humans often confuse context.

Your example has different interpretations, which are ‘right’ or ‘wrong’ only in how you’ve framed them. If you cannot see that, it’s a problem.

I guess you wrote these sentences because you seem to be offended that I, and another above, have pointed out they are poorly written.

Computer translators are not programmed to understand slang, idioms or metaphors, right? I imagine they may have some in their database, yet they don’t ‘know’ when and when not to apply the rule - unless the writer has put the saying in special parentheses?
Daemon October 01, 2021 at 15:06 #602507
Quoting I like sushi
I guess you wrote these sentences because you seem to be offended that I, and another above, have pointed out they are poorly written.


I did not write those sentences, I am not offended, they are not poorly written, and you are still completely missing the point.

frank October 01, 2021 at 15:08 #602510
Reply to Daemon
I guess a question would be: how do you know your experiences are similar enough to allow understanding?
Daemon October 01, 2021 at 15:18 #602514
Reply to InPitzotl Quoting InPitzotl
What TMF is talking about is a mapping from "banana" to the stuff on the store shelf, the stuff infused within banana bread, the stuff in banana cream pies.


Well, in my translation memory, in my computer, I would have a Dutch word, "banaan", and an English translation, "banana". Can you tell me how I could get all that stuff about the store shelf and banana bread into my translation memory? Or the stuff about councillors and protestors?
Daemon October 01, 2021 at 15:27 #602517
Quoting frank
I guess a question would be: how do you know your experiences are similar enough to allow understanding?


Not a very clear question Frank. But in 20 years of full time work as a translator I've translated around 10 million words, I very rarely receive complaints about my translations, my work is checked by an editor and I very rarely receive corrections from them, and my customers keep coming back to me and paying for my services. Does that answer your question?

Outlander October 01, 2021 at 15:40 #602518
Quoting I like sushi
These are different meanings. In A the councillors advocate violence and in B they fear violence (which has two meanings in and of itself).


Hm, I can see the point. Why not:

Out of fear of violence, the councilors refused to allow the protestors, who are known to advocate violence, to demonstrate.

Or something of the like. Granted, not everyone speaks casually in such a manner, so it is useful for any application that plans to be relevant to be able to recognize as much variation in sentence construction as possible. Which, as has been noted, is quite difficult.

Edit: And of course technically both sentences can mean either one, with a little thought. Granted, we know and should assume the same meanings as in the OP, but there's nothing that prevents the opposite.
Daemon October 01, 2021 at 15:46 #602521
Quoting Outlander
Hm, I can see the point


No you can't. You're missing the point completely.

Outlander October 01, 2021 at 15:51 #602522
Reply to Daemon

Would you perhaps mind explaining it then, seeing as you now hold the minority viewpoint of 'understanding' in this discussion?
frank October 01, 2021 at 15:52 #602523
Quoting Daemon
Not a very clear question Frank. But in 20 years of full time work as a translator I've translated around 10 million words, I very rarely receive complaints about my translations, my work is checked by an editor and I very rarely receive corrections from them, and my customers keep coming back to me and paying for my services. Does that answer your question?


Is it possible that this is happening in spite of a rift in understanding? Could it be that you're applying certain rules correctly, and so there are no complaints about your service, and yet there is no communing of intent?

How would you prove that this extra thing beyond rule following, this 'understanding' exists?
Daemon October 01, 2021 at 16:00 #602526
Reply to Outlander
A. The councillors refused to give the protestors permission for their demonstration as they advocated violence.
B. The councillors refused to give the protestors permission for their demonstration as they feared violence.

In A. "they" refers to the protestors, in B. it refers to the councillors. We know this because of our experience of the world. It's an example of something a computer couldn't know.
Daemon October 01, 2021 at 16:09 #602528
Quoting frank
How would you prove that this extra thing beyond rule following, this 'understanding' exists?


Well, for example, there are sometimes mistakes in the source text. Maybe somebody writes "the saw blade must be touched with the fingers while it is still rotating". So I write to the customer and say "I think you missed out the word 'not' here". And they say "yes, thank you, you're right".

Does that answer your question?

Outlander October 01, 2021 at 16:32 #602538
Quoting Daemon
In A. "they" refers to the protestors, in B. it refers to the councillors. We know this because of our experience of the world. It's an example of something a computer couldn't know.


They only refer to the protestors and councilors respectively because the father, or author, of the sentence determined so. Or, I suppose, "it simply happened that way", or as you say, that's just how "the world" (generally) works. There are numerous scenarios, one of which has been posted previously, where it could easily be the opposite.

The same (likely) context recognition could be achieved, albeit haphazardly, with a 'word map' database.

Councilor = government, order, ruler, leader, society, peace, stability

Protestor = worker, grievance, anger, rebellion, uprising, turmoil, injustice

The more general words ("violence") matched with context-specific words ("they"), which happen to match a subsequent 'word map' of words relevant/associated with each party or subject, can more often than not determine which party to apply said word to. It would take a great deal of finagling, sure. But it's doable. Not with any laser accuracy, of course. Which I suppose was your point.
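That haphazard 'word map' matching could be sketched roughly like this (a hypothetical toy; the association sets and scoring are invented for illustration, and real coreference resolution is far more involved):

```python
# Toy 'word map' pronoun resolver: score each candidate referent by
# how many words in the clause containing the ambiguous pronoun
# appear in that referent's hand-built association set.
word_map = {
    "councillors": {"government", "order", "peace", "stability", "fear", "refuse"},
    "protestors": {"grievance", "anger", "rebellion", "turmoil", "advocate"},
}

def resolve(clause: str) -> str:
    """Return the candidate whose association set best matches the clause."""
    scores = {
        referent: sum(word in assoc for word in clause.lower().split())
        for referent, assoc in word_map.items()
    }
    return max(scores, key=scores.get)

print(resolve("advocate violence"))  # -> protestors
print(resolve("fear violence"))      # -> councillors
```

As said above: doable after a fashion, but with no laser accuracy, and with no understanding anywhere in it.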

What exactly are we discussing, and for what purpose? I fail to see the profoundness or any possible fruit of this topic. Computers, AI =/= human comprehension. I doubt there was any disagreement at any point.
Deleted User October 01, 2021 at 16:38 #602540
This user has been deleted and all their posts removed.
I like sushi October 01, 2021 at 16:40 #602541
Quoting Daemon
In A. "they" refers to the protestors, in B. it refers to the councillors. We know this because of our experience of the world. It's an example of something a computer couldn't know.


To most people. Why do you keep refusing to accept this? In a place where the councillors are corrupt/vicious, why not the opposite?

A computer cannot understand anything. It is CODED, not THINKING. Other than that, what is your point? I don't actually see one, but I'm assuming there is one somewhere; that is why I'm persisting.
I like sushi October 01, 2021 at 16:41 #602543
Quoting tim wood
In a perfect world an editor marks them for rewrite.


Precisely
Alkis Piskas October 01, 2021 at 16:41 #602544
Quoting Daemon
I have been a professional translator for 20 years. My job is all about understanding.

I have been a professional translator (freelance) for the same amount of years!
However, I never thought of my job as something that is all about understanding. Understanding is of course essential, but it is only part of the whole process. The most important things in translation are 1) to be proficient in the language you translate into (the target language) and 2) to be able to relay information as accurately as possible without being literal. The level of accuracy depends of course on the subject: the more technical the subject, the more accurate one has to be. On the other hand, if the subject is literary, one can relax on accuracy and rely more on expression. But still, the meaning of the source text always has to be relayed.

All this is an art, and this is how I see translation. Writing is an art. So is translation. Only that here the ideas come from someone other than yourself.

Quoting Daemon
The CAT tool suggests translations based on what I have already translated.

Yes, CAT tools are very good, but mainly for technical subjects. I used them extensively in translating manuals (75% of my total workload!). But on general text, I use Google translation, which I call "pre-translation". In the past Google translations were quite inferior --in Greek, which is my native language, they were actually deplorable, because of the complexity of Greek grammar-- but these days they are really excellent, even in Greek! Most probably because of their hugely increased database of words/terms, phrases and even full sentences. So, after that, your task is only to correct minor mistakes and trim the text in general. It's there that proficiency in your native language comes in as the most important element. Understanding becomes of secondary importance. It's a fact.

That's enough about translation! :smile:

***

Now, I don't know how you have reduced such an interesting topic as "The important question of what understanding is" to a translation subject! I have a lot to say about "understanding", what it is, how it works, etc., but it seems that it is not what matters anymore! :smile:

Daemon October 01, 2021 at 17:09 #602551
Quoting Alkis Piskas
Now, I don't know how you have reduced such an interesting topic as "The important question of what understanding is" into a translation subject!


Because I was responding to something TheMadFool said, which I quoted at the very start of this thread:

Quoting TheMadFool
That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI?


We could of course talk about a single language, but discussing computer translation is an excellent way to address the question of understanding.


Daemon October 01, 2021 at 17:16 #602555
Reply to tim wood You still aren't getting the point Tim. The two sentences are there solely to provide an example of the limits of machine translation.

Quoting Outlander
What exactly are we discussing and for what purpose? I do fail to see the profoundness or any possible fruit of this topic. Computers, AI =/= human comprehension. I doubt there was any disagreement at any point.


Quoting Daemon

That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI? — TheMadFool


frank October 01, 2021 at 20:45 #602620
Quoting Daemon
Well, for example, there are sometimes mistakes in the source text. Maybe somebody writes "the saw blade must be touched with the fingers while it is still rotating". So I write to the customer and say "I think you missed out the word 'not' here". And they say "yes, thank you, you're right".

Does that answer your question?


It wasn't really a question. :razz:

You don't know for sure that you and your client have the same understanding.

In exactly the same way, you don't know that the world is out there as it appears to be.

You get by just fine not knowing these things. Or we could say you know one just as well as you know the other.
InPitzotl October 01, 2021 at 22:53 #602662
Quoting Daemon
Can you tell me how I could get all that stuff about the store shelf and banana bread into my translation memory?

I can sketch it out.

You need some bootstrap capabilities outside of dictionaries... things like what humans have; e.g.:
Quoting Daemon
A camera does not see.

...the ability to see. Add to that some basic sapience. The general idea is that this should have the ability to interact with reality in real time on scales roughly approximating that of your typical language using naked apes. Some of this interaction would involve exploiting "seeing" (or other kinds of sensations) in the attainment of goal oriented behaviors analogous to how we "intentionally do things"; i.e., at roughly the same levels of abstractions as the "things we do" or, more to the point, at roughly the same levels of abstractions as the "things we talk about".

Once you have such a thing, we need two more ingredients to make it final: (a) a banana, (b) a shelf. All of this, or something akin to it, would need to be in place before you can have something to map "banana" to and call it understanding.

I skipped a few steps, but it's not like I wouldn't have had to skip steps anyway at some point; I have never built such a thing.
Srap Tasmaner October 02, 2021 at 02:23 #602716
Quoting Daemon
A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

A computer can't understand that "they" applies to the protestors in A. but the councillors in B, because it's not immersed in our complex world of experience.


I like this very much. Whether one could somehow, someday develop an artificial system that could deal with such a case, who knows. I lean toward your view, but I wouldn't put money on it either way.

But it is a lovely example of the sort of thing we manage easily every day, only noticing when it goes wrong for some reason. Funny things, pronouns.

Quoting Daemon
The examples I gave were intended to illustrate that semantics isn't simply mapping!


Of course it isn't. I'm surprised anyone would think it is. In point of fact, I'm not even sure what it's supposed to mean: people look up the meanings of words in dictionaries, sure, but you can't look up the meaning of a sentence in the sentence-dictionary, so if sentences have meanings, they must not "map" to them, or they must have a different kind of meaning.
Daemon October 02, 2021 at 11:42 #602851
Quoting I like sushi
To most people. Why do you keep refusing to accept this? In a place where the councillors are corrupt/vicious, why not the opposite?


That doesn't affect the point of the example. If there were such a place, the computer wouldn't have access to that external set of circumstances.

The point is that we are able to make a judgement about the meaning of the sentences which a computer can't possibly make.
Daemon October 02, 2021 at 11:54 #602856
Quoting InPitzotl
Can you tell me how I could get all that stuff about the store shelf and banana bread into my translation memory? — Daemon

I can sketch it out.

You need some bootstrap capabilities outside of dictionaries... things like humans have; e.g.:

A camera does not see. — Daemon

...the ability to see.


This is just a waste of everybody's time. I mean, come back to us when there's a camera that can see and we'll have something to talk about.
Alkis Piskas October 02, 2021 at 12:08 #602858
Quoting Daemon
Because I was responding to something TheMadFool said, which I quoted at the very start of this thread:

I read this of course. But it's still about understanding ... and my wondering is still unanswered! :grin:

Quoting Daemon
discussing computer translation is an excellent way to address the question of understanding.

Well, in that case, even if you had used a more specific title, like "Computers and understanding" or something like that, it would still be inappropriate, because computers do not possess any understanding!

Well, maybe it's not so important, generally and for most people. But it just happens that understanding and communication are among my favorite subjects. I have studied them extensively and I even taught about them (theory and practice) in the past ...

Daemon October 02, 2021 at 12:30 #602860
Quoting Srap Tasmaner


A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

A computer can't understand that "they" applies to the protestors in A. but the councillors in B, because it's not immersed in our complex world of experience. — Daemon


I like this very much.


So do I Srap! It seems to have caused nothing but confusion above though.

Whether one could somehow, someday develop an artificial system that could deal with such a case, who knows.


According to my theory the artificial system would need to be able to experience and interact with the world in the way we do. It would need to experience such things as pain and pleasure in order to understand what "good" and "bad" mean. Do we really want to create artificial beings that can experience pain? Surely there's enough trouble in the world with the experiencing beings we can already create?

InPitzotl October 02, 2021 at 12:31 #602861
Quoting Daemon
I mean, come back to us when there's a camera that can see

You were the one who asked me the question. You were also the one opening this thread with your OP, where you wrote this:
Quoting TheMadFool
matching linguistic symbols (words, spoken or written) to their respective referents

...and you were the one talking about CAT tools as if that had anything to do with referents.

There's a giant difference between responding to "Can you pick up some bananas from the store?" ...by showing me the phrase translated (poorly or greatly) to Dutch; and responding to "Can you pick up some bananas from the store?" ...by showing up on my doorstep with a bunch in your hand.
Daemon October 02, 2021 at 12:40 #602865
Quoting Srap Tasmaner
The examples I gave were intended to illustrate that semantics isn't simply mapping! — Daemon


Of course it isn't. I'm surprised anyone would think it is.


Well there are at least two in this discussion, and I was attempting to apply the Principle of Charity, which asks us to:

"Assume that the opponent is making the strongest argument and interpret others as rational and competent."

Despite, I suppose, all the evidence to the contrary.

Daemon October 02, 2021 at 12:55 #602866
Reply to Alkis Piskas What are you still wondering?
Daemon October 02, 2021 at 13:07 #602869
Quoting InPitzotl
You were the one who asked me the question.


I was kinda hoping you'd realise you couldn't answer the question. In other words, you'd realise that you can't get a computer to understand things in the way we can.

You were also the one opening this thread with your OP, where you wrote this:

matching linguistic symbols (words, spoken or written) to their respective referents — TheMadFool


TheMadFool wrote that, I was quoting him. I'm arguing against him.

InPitzotl October 02, 2021 at 13:38 #602876
Quoting Daemon
I was kinda hoping you'd realise you couldn't answer the question. In other words, you'd realise that you can't get a computer to understand things in the way we can.

Ultimately that's correct, but the gaps are really in the details.
Quoting Daemon
TheMadFool wrote that, I was quoting him. I'm arguing against him.

Sorry, I misspoke here... what I meant was that in the OP that was what you quoted. TMF did indeed write that, but he didn't explain what a referent was too well; the way he explained it, a referent could be interpreted as a phrase... so the proposal could be understood as saying that your CAT tool might understand what "water" is if it mapped "water" to the phrase "cool flowing substance that animals and plants need".

But that's not what the word "referent" means. The referent for "water" isn't another word (it's not "agua"); nor is it another phrase (it's not "cool flowing substance that animals and plants need"). There's no set of shapes you can squiggle on a sheet of paper that is the referent for water; instead, you're going to have to go turn your taps on, point to the stuff falling from the sky outside, or go find that stuff fish swim in. Humans that know what "water" means map that word to that stuff... and to do that, we form a concept of that stuff that comes out of taps, that stuff that falls from the sky, that stuff that fish swim in. The idea of such things is an abstraction; it's a model of the stuff we're made aware of by, say, seeing it; swimming in it; drinking it; and so on. And it is that model that we map "water" to when we understand it; not more words.
bongo fury October 02, 2021 at 14:11 #602884
Quoting InPitzotl
Humans that know what "water" means map that word to that stuff...


:100:

Quoting InPitzotl
and to do that, we form a concept


Careful now...

Quoting InPitzotl
The idea of such things


Quoting InPitzotl
a model of the stuff


Are concepts and ideas and models any more harmlessly, less misleadingly identified as the referent of "water" than are phrases like "cool flowing substance"?

I like sushi October 02, 2021 at 14:13 #602886
Quoting Daemon
The point is that we are able to make a judgement about the meaning of the sentences which a computer can't possibly make.


And?
InPitzotl October 02, 2021 at 14:15 #602887
Quoting bongo fury
Are concepts and ideas and models any more harmlessly, less misleadingly identified as the referent of "water" than are phrases like "cool flowing substance"?

Oooooh! What a great question! I think this naturally falls out of our agency. We use our senses to sense the world; as we do so, we create world models. We refer to these world models, in real time even, to "do things". But we also as part of this model "project" it as something independent from us and, well, it winds up that that's a good theory of what the world is. I think something along these lines (at least for claims about the state of the external world) is what gives rise to intentionality.

ETA: Just to close the loop here... when we act in the world, we're not merely using our world models... we're literally using that world. By this I mean that we don't simply imagine ourselves walking to the sink, we walk over there. These interactions are in real time, and they are updated by real time world sensations... any difference between what our world model is and these sensations is updated by deferring to the sensed world. This is the long form of what I mean by "project" here.
Daemon October 02, 2021 at 18:17 #602938
Quoting frank
You don't know for sure that you and your client have the same understanding.

In exactly the same way, you don't know that the world is out there as it appears to be.

You get by just fine not knowing these things. Or we could say you know one just as well as you know the other.


I agree that I don't know for sure that my translations can be correctly understood, that's part of my own philosophical position, and I also believe that, in a certain philosophical sense, words do not carry or convey meaning. But I don't tell my translation clients about any of this.

I do think there's an abundance of evidence that we are able to understand a great deal of what we say to one another. You couldn't get people to land on the moon and come back without shared understanding of language. And on a smaller scale, you and perhaps a couple of others have understood at least some of what I've said in this discussion, as evidenced by your coherent responses.

Whether the world is as it appears to be is another (vast) question, and perhaps off topic for the Philosophy of Language forum. Personally I'm satisfied that the world is enough like it appears to be for us to travel to the moon and back, and for me to make the pasta dish I'm going to eat soon.

Alkis Piskas October 02, 2021 at 18:25 #602940
Quoting Daemon
What are you still wondering?

You have replied to me that your topic was a kind of answer to @TheMadFool about understanding. This didn't change at all my wondering about how the subject of "understanding" in the title came to be replaced by the subject of "translation" in the description! But after this, I'll stop wondering! So, don't worry! :smile:
frank October 02, 2021 at 18:49 #602943
Quoting Daemon
Whether the world is as it appears to be is another (vast) question, and perhaps off topic for the Philosophy of Language forum


My point was that skepticism about our ability to communicate (Quine, for instance), is very like skepticism about the world.

In both cases the skeptic belies her supposed beliefs with her behavior. And I think both kinds of skepticism are very lonely places to be. :grin:

Daemon October 02, 2021 at 20:09 #602960
That's interesting about Quine. How absolute is his scepticism about communication?

(I think the curly c is much prettier than the kicking k in the word "scepticism").

My own provisional position is that when we say for example that a word or a sentence has or carries or conveys meaning, that is a metaphor, one we find difficult to rekognise as such.



frank October 02, 2021 at 20:22 #602964
Quoting Daemon
That's interesting about Quine. How absolute is his scepticism about communication?


He would say there's no fact of the matter regarding whether your translations are correct. I think he would want us to deflate the concept of correctness.

Quoting Daemon
My own provisional position is that when we say for example that a word or a sentence has or carries or conveys meaning, that is a metaphor, one we find difficult to rekognise as such.


Nietzsche agrees and so do I, except there's a special brand of language use called propositions. I think this is communication, not between people, but between an individual and the world. IOW, I think we relate to the world as if it's a person and true propositions are its utterances. I think this has its roots in the time when people really did think the world was alive.

Daemon October 02, 2021 at 21:55 #602996
Reply to frank But you still think that propositions are special, and the world issues utterances, even though you don't think the world is alive??

frank October 02, 2021 at 22:05 #602997
Quoting Daemon
But you still think that propositions are special,


They're special because they're supposed to transcend any particular speaker. You and I can express the same proposition in different ways at different times. This invites questions about the nature of propositions, especially false ones.

I think we relate to the world as if it can talk: we ask it questions and expect answers even if we may not understand those answers at first, as with quantum physics.

When we don't understand what the world is telling us, we proceed as if it's just a matter of asking more questions. In fact some physicists celebrate the fact that the world has not yet revealed all its secrets.

I'm saying this general framework of interrogation is something we've inherited, and it explains the nature of propositions.

It's not normal in my time to believe the world can really talk, so that's why propositions are philosophically confusing.
Daemon October 02, 2021 at 22:58 #603010
Is that "transcending any particular speaker" just a metaphor, a fiction?
frank October 02, 2021 at 23:07 #603017
Quoting Daemon
Is that "transcending any particular speaker" just a metaphor, a fiction?


You and I can assert the same proposition. Logically, that means the proposition is neither our utterances nor the sentences we use. See what I mean?
Daemon October 02, 2021 at 23:09 #603019
Tomorrow
I like sushi October 03, 2021 at 01:51 #603063
@frank Perhaps you can point out more clearly what this thread is about?

All I can see at the moment is someone stating the obvious (nothing wrong with that!) and trying to look beyond the obviousness ... it is the latter part I'm having trouble seeing.

Computers don't understand and humans do. Translation programs don't 'think'. Our experience of language within a given context helps us choose the better/correct meaning behind statements made - computers are limited to what they're programmed to do.

We are self correcting and constantly learning and relearning the world about us.

Where in here is the OP's idea/point/question?
frank October 03, 2021 at 02:40 #603074
Reply to I like sushi

Daemon gives two sentences:

Quoting Daemon
. A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.

B. The councillors refused to allow the protestors to demonstrate, because they feared violence.

A computer can't understand that "they" applies to the protestors in A. but the councillors in B, because it's not immersed in our complex world of experience.



Some would say human language is a matter of rule following, but making sense of the above sentences seems to require experience with a point of view.

It may be that you're so closely allied with Daemon's outlook that it seems he's stating the obvious, but some might argue that Daemon is wrong: no experience with life is necessary for translation. The motive for arguing that would be to eliminate any reliance on experience to explain anything, because the goal is to deny that there is any such thing as experience.
I like sushi October 03, 2021 at 03:32 #603079
Quoting frank
Some would say human language is a matter of rule following, but making sense of the above sentences seems to require experience with a point of view.


Well, there is no 'seems to' about it. There is no manual for language. Anyone 'arguing' against that is just plain wrong! I think maybe some people take Chomsky's view of language as saying that there are strict rules. That isn't at all what he is saying, though. Undoubtedly there are certain elements that constitute what we commonly refer to as 'language', but there is still a lot of work to do in the cognitive neurosciences. Sadly, a large section of the 'Philosophy of Language' group were a bit slow catching up with the science and were still occupied with problems that had been solved by neuroscience ... it takes time for things like that to bed down. Ironically, habituation is a huge element of our experience and understanding about-the-world.

The only space where confusion arises is within what we're framing as 'language'. I have big issues with that. Also, some people view 'thinking' as purely about the spoken/written word, whereas within actual studies of language this divide is not always applied (context dependent, given what is actually being considered for study).

It might help us, @Daemon, if you told me whether you'd loosely say that these here words are 'translations' of my 'cognitive capacities', expressed with the purpose of elucidating some common meaning/understanding?

I think we might be slipping into semiotics here.

@frank Any chance you could look at thread about 'Choice: The problem with power' and see if you can disagree with me or highlight something?
Daemon October 03, 2021 at 09:25 #603137
Quoting I like sushi
Computers don't understand and humans do. Translation programs don't 'think'.


I Googled the phrase "Can computers think". I got 21,000 hits, including this, from Oxford University's Faculty of Philosophy (my italics):

Quoting Oxford University's Faculty of Philosophy
Can Computers Think?

The Turing Test, famously introduced in Alan Turing's paper "Computing Machinery and Intelligence" (Mind, 1950), was intended to show that there was no reason in principle why a computer could not think. Thirty years later, in "Minds, Brains, and Programs" (Behavioral and Brain Sciences, 1980), John Searle published a related thought-experiment, but aiming at almost exactly the opposite conclusion: that even a computer which passed the Turing Test could not genuinely be said to think. Since then both thought-experiments have been endlessly discussed in the philosophical literature, without any very decisive result.


It seems it's still very much a live question.

Daemon October 03, 2021 at 09:28 #603138
Quoting frank
You and I can assert the same proposition. Logically, that means the proposition is neither our utterances nor the sentences we use. See what I mean?


From what you said previously though, we can't know if we are asserting the same proposition?
I like sushi October 03, 2021 at 09:53 #603145
Quoting Daemon
Computers don't understand and humans do. Translation programs don't 'think'.
— I like sushi

I Googled the phrase "Can computers think". I got 21,000 hits, including this, from Oxford University's Faculty of Philosophy (my italics):

Can Computers Think?

The Turing Test, famously introduced in Alan Turing's paper "Computing Machinery and Intelligence" (Mind, 1950), was intended to show that there was no reason in principle why a computer could not think. Thirty years later, in "Minds, Brains, and Programs" (Behavioral and Brain Sciences, 1980), John Searle published a related thought-experiment, but aiming at almost exactly the opposite conclusion: that even a computer which passed the Turing Test could not genuinely be said to think. Since then both thought-experiments have been endlessly discussed in the philosophical literature, without any very decisive result.
— Oxford University's Faculty of Philosophy

It seems it's still very much a live question.


I can only suggest that you reread and ask yourself what you're referring to above^^

If you can read into what I write something that explicitly isn't there then you probably don't get paid much for your work (or shouldn't) :D

Jibing aside; have fun I'm exiting :)
Daemon October 03, 2021 at 10:36 #603156
Quoting I like sushi
I can only suggest that you reread and ask yourself what you're referring to above^^


I don't know what you're on about.

If you can read into what I write something that explicitly isn't there then you probably don't get paid much for your work (or shouldn't) :D


But oddly enough I do.

Jibing aside; have fun I'm exiting :)


Oh good.

frank October 03, 2021 at 10:44 #603159
Quoting I like sushi
Any chance you could look at thread about 'Choice: The problem with power' and see if you can disagree with me or highlight something?


Sure!

Quoting Daemon
From what you said previously though, we can't know if we are asserting the same proposition?


Yes. I don't think there's any logic that overcomes skepticism there, you just have to look at the cost of it: how much do you actually lose if you embrace that skepticism?

I think one result is that you can't know whether you agree with yourself from one moment to the next.

As some have noted (I think Chomsky did) if you adopt Quine's skepticism, meaning of any kind breaks down, so there would be no understanding.

Skepticism about the external world also results in a breakdown in meaning if you note Heidegger's point: that you are inextricable from your world. So if you deny the external world, the thing that's left isn't you. It's some foreign entity.
Daemon October 03, 2021 at 19:06 #603271
Quoting frank
Yes. I don't think there's any logic that overcomes skepticism there, you just have to look at the cost of it: how much do you actually lose if you embrace that skepticism?


You can't just pick and choose though, can you? I mean if the scepticism is justified, then it doesn't matter if you embrace it or not.
frank October 03, 2021 at 21:33 #603324
Quoting Daemon
You can't just pick and choose though, can you? I mean if the scepticism is justified, then it doesn't matter if you embrace it or not.


The human mind has a flair for justification. You can justify pretty much any belief you like. Make your own religion and build a community of believers who will support you all the way to the Kool Aid.

Identity and emotion are in charge. Logic is a brittle autumn leaf in a hurricane.



Daemon October 03, 2021 at 21:41 #603329
Reply to frank I did start my own religion on the internet, years ago. I attracted some adherents! Can't remember much about it but it was called "The New Religion". I offered people a chance to be in at the start of a new religion, and the opportunity to help develop its tenets. It was fascinating how people did want to be involved in it!

But do you think identity and emotion are in charge of you?

frank October 03, 2021 at 21:47 #603334
Quoting Daemon
But do you think identity and emotion are in charge of you?



I think so. You?
Daemon October 03, 2021 at 21:52 #603337
Not entirely. I apply some logic. I don't think "identity" is important to me...not national identity or any group identity...
TheMadFool October 04, 2021 at 02:30 #603461
Quoting InPitzotl
matching linguistic symbols (words, spoken or written) to their respective referents
— TheMadFool
...and you were the one talking about CAT tools as if that had anything to do with referents.

There's a giant difference between responding to "Can you pick up some bananas from the store?" ...by showing me the phrase translated (poorly or greatly) to Dutch; and responding to "Can you pick up some bananas from the store?" ...by showing up on my doorstep with a bunch in your hand.


The word banana is mapped to the fruit banana - every word has a referent.
Daemon October 04, 2021 at 08:41 #603566
Reply to TheMadFool What's the referent of "almost"?
TheMadFool October 04, 2021 at 09:00 #603572
Quoting Daemon
What's the referent of "almost"?


A pattern (the referent) which we can extract from the following scenarios:

1. I tried to jump over the fence, my feet touched the top of the fence but I couldn't clear the fence.

2. Sara tried eating the whole pie, she ate as much as she could but a small piece of it was left.

3. Stanley tried to run 14 km but he managed only 13.5 km, he had to give up because of a sprained ankle.
Daemon October 04, 2021 at 09:04 #603574
Reply to TheMadFool So for a computer to understand "almost" it has to somehow extract it from that load of drivel? Come on man. Do you think because you don't know anything about this, nobody else does either?
TheMadFool October 04, 2021 at 09:39 #603586
Quoting Daemon
So for a computer to understand "almost" it has to somehow extract it from that load of drivel? Come on man. Do you think because you don't know anything about this, nobody else does either?


What drivel? You asked me a question, I answered it. If you have any issues with the way I view semantics (as mapping of word to its referent), please be specific about where exactly I go wrong. Kindly refrain from derailing the discussion from something worthwhile to something puerile.
Daemon October 05, 2021 at 22:01 #604247
Quoting TheMadFool
A pattern (the referent) which we can extract from the following scenarios:

1. I tried to jump over the fence, my feet touched the top of the fence but I couldn't clear the fence.

2. Sara tried eating the whole pie, she ate as much as she could but a small piece of it was left.

3. Stanley tried to run 14 km but he managed only 13.5 km, he had to give up because of a sprained ankle.


Extracting "almost" from those three sentences is a good example of something a computer couldn't do! If you asked a human to identify what the sentences have in common, they might say "they are all about people trying and failing". There's no "mapping" from those sentences to the word "almost", even for us.

Your ideas are simplistic and naive.

TheMadFool October 06, 2021 at 02:33 #604332
Quoting Daemon
Extracting "almost" from those three sentences is a good example of something a computer couldn't do! If you asked a human to identify what the sentences have in common, they might say "they are all about people trying and failing". There's no "mapping" from those sentences to the word "almost", even for us.

Your ideas are simplistic and naive.


I'm surprised that you're ignoring important details in my examples that help you abstract the meaning of "almost". Also, try and use the word "almost" in some sentences and reason from them the pattern which the word refers to.
180 Proof October 06, 2021 at 03:51 #604354
In general, understanding consists in being able to effectively orient oneself – find critical paths – around or through complexities and uncertainties inherent in some existential situation or discursive domain (i.e. how the fly finds its way out of the fly-bottle).

Can a 'thinking machine', according to this definition(?), 'understand'? I suspect, if so, it can only understand to the degree it can recursively map itself within a map of a domain (or domains) recursively nested within a situation (or diachronic process).

Btw, particularly in philosophy, I think understanding
Quoting 180 Proof
[results from] making explicit ordinarily implicit (i.e. unreflective) discursive uses, misuses and abuses of e.g. concepts, criteria, questions, problems, knowledge, etc.
Daemon October 06, 2021 at 08:10 #604389
Reply to 180 Proof A fly in a fly-bottle has no understanding of bottles. But it does at least exist as an entity, which is a prerequisite for understanding. A machine (a computer) is not an entity in the appropriate sense.

The first entities on Earth were single-celled organisms. The cell wall is the boundary between the organism and the rest of the world. No such boundary exists between a computer and the rest of the world.

Quoting 180 Proof
Can a 'thinking machine', according to this definition(?), 'understand'? I suspect, if so, it can only understand to the degree it can recursively map itself


It isn't appropriate to talk (in the present context) about the computer "itself".
InPitzotl October 06, 2021 at 12:20 #604424
Quoting Daemon
But it does at least exist as an entity, which is a prerequisite for understanding.

Why is it a prerequisite?
Daemon October 06, 2021 at 12:49 #604433
Because understanding can't take place without an entity which understands.
180 Proof October 06, 2021 at 14:36 #604445
Quoting Daemon
A fly in a fly-bottle has no understanding of bottles

Someone ought to tell that to Wittgenstein.
Daemon October 06, 2021 at 14:37 #604446
Reply to 180 Proof I think he knew already.
180 Proof October 06, 2021 at 14:44 #604448
Reply to Daemon I know, I'm just pointing out that it's you, not I, who misunderstands a metaphor for understanding to the point of taking it literally.
Daemon October 06, 2021 at 14:51 #604449
Reply to 180 Proof Your use of the metaphor wasn't very helpful. We don't reach understanding in the way the fly gets out of the bottle.

180 Proof October 06, 2021 at 14:52 #604450
Reply to Daemon When you're literal-minded, metaphors can't be helpful to you.
Daemon October 06, 2021 at 15:12 #604454
Reply to 180 Proof Say something interesting ffs.
Ennui Elucidator October 06, 2021 at 15:15 #604456
Reply to Daemon But isn't the point that understanding is a demonstration of proficiency? To the extent that a fly can escape from a bottle by other than chance, is that evidence of understanding?

Not that I understand Wittgenstein or much of anything else.

SEP on Psychology of Understanding: When we turn to understanding, by contrast, some have claimed that a new suite of cognitive abilities comes onto the scene, abilities that we did not find in ordinary cases of propositional knowledge. In particular, some philosophers claim that the kind of mental action verbs that naturally come to the fore when we think about understanding—“grasping” and “seeing”, for example—evoke mental abilities “beyond belief”, i.e., beyond simple assent or taking-to-be-true (for an overview, see Baumberger, Beisbart, & Brun 2017).



Ennui Elucidator October 06, 2021 at 15:24 #604458
And just because I think only one person briefly mentioned it, let's be a bit express about the Chinese Room and how it relates to minds and understanding.

4.4 The Other Minds Reply

Related to the preceding is The Other Minds Reply: “How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.”

Searle’s (1980) reply to this is very short:

The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In ‘cognitive sciences’ one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.

Critics hold that if the evidence we have that humans understand is the same as the evidence we might have that a visiting extra-terrestrial alien understands, which is the same as the evidence that a robot understands, the presuppositions we may make in the case of our own species are not relevant, for presuppositions are sometimes false. For similar reasons, Turing, in proposing the Turing Test, is specifically worried about our presuppositions and chauvinism. If the reasons for the presuppositions regarding humans are pragmatic, in that they enable us to predict the behavior of humans and to interact effectively with them, perhaps the presupposition could apply equally to computers (similar considerations are pressed by Dennett, in his discussions of what he calls the Intentional Stance).
180 Proof October 06, 2021 at 16:19 #604466
Reply to Daemon FFS. I try not to waste words on the literal-minded. :yawn:
Daemon October 06, 2021 at 16:22 #604468
Reply to Ennui Elucidator

Just for clarity, the part from "Critics hold" onwards is the SEP and not Searle.

The evidence we have that humans understand is not the same as the evidence that a robot understands. The problem of other minds isn't a real problem, it's more in the nature of a conundrum, like Zeno's paradoxes. The Arrow paradox, for example, is supposed to show that a flying arrow doesn't move. But it does.

The nature of consciousness is such that I can't experience your understanding of language. But I can experience my own understanding, and you can experience yours. It would be ridiculous for me to believe that I am the only one who operates this way, and it would be ridiculous for you to believe that you are the only one.

With a robot, we know that what looks like understanding isn't really understanding, because we programmed the bloody robot. My translation tool often generates English sentences that make it look like it understands Dutch, but I know it doesn't, because I programmed it.
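(A toy illustration, in case it helps: this hypothetical sketch is not my actual CAT tool, and the phrase pairs are invented, but it shows how a translation memory can return fluent-looking output by pure string matching, with no referents or experience anywhere in the loop.)

```python
# Toy "translation memory": pure string matching, no referents, no experience.
# All phrase pairs are illustrative, not drawn from any real CAT tool.
memory = {
    "de kat zit op de mat": "the cat sits on the mat",
    "de hond": "the dog",
}

def suggest(source: str) -> str:
    """Return a stored translation if the exact phrase was seen before,
    otherwise stitch together translations of any remembered fragments."""
    source = source.lower().strip()
    if source in memory:
        return memory[source]  # looks like understanding; is a lookup
    # Fall back to piecing together known sub-phrases.
    parts = [memory[p] for p in memory if p in source]
    return " / ".join(parts) if parts else "(no match)"

print(suggest("De kat zit op de mat"))  # -> "the cat sits on the mat"
```

The output can be fluent English, but nothing in the program has ever encountered a cat or a mat.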

Ennui Elucidator October 06, 2021 at 17:52 #604490
Reply to Daemon

It is hard to not completely change topics and respond to your point. Suffice it to say that minds are really hard stuff. This may be due to the fact that people are generally unwilling to treat the idea inclusively (things are in till proven out vs. things are out till proven in). Accepting for a moment that understanding is a function of an agent demonstrating a particular capability, I think it is easy enough to say that understanding can not be discrete, i.e. that a system that can only do one thing (or a variety of things) well lacks agency for this purpose. However, at some point, a thing can do enough things well that it feels a bit like bad faith to say that it isn't an agent because you understand how it was constructed and how it behaves (indeed, if determinism obtains, the same could be said of people). Being a bit aggressive, I might suggest that you can't rule out panpsychism and so despite your being responsible for the behavior and assemblage of a computer, it may very well be minded (or multiply minded) sufficiently to understand what it is doing. We have no present way to demarcate minded from non-minded besides interpreting behavior. If something behaves like it understands (however strictly or loosely you want to define demonstrating a skill/ability/competency), bringing up whether it has a mind sufficient for agency doesn't do much work - it merely states the obvious: we don't know what has a mind.

I suppose if being explicable renders a thing mindless, an increasing number of things that previously were marginally minded (after we admitted that maybe more than just white men could have minds) would go back to not having minds. I just don't know how our minds will survive the challenge 10,000 years from now (when technology is presumably vastly superior to what we managed to create in the last hundred or so years). Before you know it, we will be arguing about p-zombies. For my part, I might approach the thing with humility and err on the side of caution (animated things are minded) rather than dominion (people are special and can therefore subjugate the material world aside from other people).

Daemon October 06, 2021 at 20:29 #604525
Quoting Ennui Elucidator
I think it is easy enough to say that understanding can not be discrete, i.e. that a system that can only do one thing (or a variety of things) well lacks agency for this purpose. However, at some point, a thing can do enough things well that it feels a bit like bad faith to say that it isn't an agent because you understand how it was constructed and how it behaves (indeed, if determinism obtains, the same could be said of people).


In the case of a computer, it isn't just that we know how it was constructed and how it behaves, the point is that we know it is not using understanding.

Not only that: a computer is not an agent, we are the agents making use of it. It doesn't qualify for agency, any more than an abacus does.
InPitzotl October 06, 2021 at 22:55 #604596
Quoting Daemon
With a robot, we know that what looks like understanding isn't really understanding, because we programmed the bloody robot. My translation tool

I'm a bit confused here. Is your translation tool a robot?
Daemon October 07, 2021 at 07:44 #604709
Reply to InPitzotl There's no significant difference.
InPitzotl October 07, 2021 at 09:54 #604752
Quoting Daemon
There's no significant difference.

There absolutely is a significant difference. How are you going to teach anything, artificial or biological, what a banana is if all you give it are squiggly scratches on paper? It doesn't matter how many times your CAT tool translates "banana", it will never encounter a banana. The robot at least could encounter a banana.

Equating these two simply because they're programmed is ignoring this giant and very significant difference.
Daemon October 07, 2021 at 10:21 #604756
Reply to InPitzotl A robot does not "encounter" things any more than a PC does. When we encounter something, we experience it, we see it, feel it, hear it. A robot does not see, feel or hear.
Daemon October 07, 2021 at 10:43 #604764
I shouldn't be having to say this stuff. It feels like you are all suffering from a sort of mass delusion.
InPitzotl October 07, 2021 at 12:50 #604793
I'm detecting a few problems here.
Quoting Daemon
A robot does not "encounter" things any more than a PC does. ...

The question isn't about experiencing; it's about understanding. If I ask a person, "Can you go to the store and pick me up some bananas?", I am not by asking the question asking the person to experience anything. I am not asking them to be consciously aware of a car, to have percepts of bananas, to feel the edges of their wallet when they fish for it, etc. I am asking for certain implied things... it's a request, it's deniable, they should purchase the bananas, and they should actually deliver it to me. That they experience things is nice and all, but all I'm asking for is some bananas.
Quoting Daemon
When we encounter something, we experience it, we see it, feel it, hear it.

I disagree with the premise, "'When humans do X, it involves Y' implies X involves Y". What you're asking me to believe is in my mind the equivalent of that asking "Can you go to the store and pick me up some bananas?" is asking someone to experience something; or phrased slightly more precisely, that my expectations that they understand this equate to my expectations that they (consciously?) experience things. And I don't think that's true. I think I'm just asking for some bananas.

The other problem is that you missed the point altogether to excuse a false analogy. A human doesn't learn language by translating words to words, or by hearing dictionary definitions of words. It's kind of impossible for a human to come to the point of being able to understand "Can you go to the store and pick me up some bananas?" by doing what your CAT tool does. It's a prerequisite for said humans to interact with the world to understand what I'm asking by that question.

IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do.
Daemon October 07, 2021 at 14:28 #604825
Quoting InPitzotl
The question isn't about experiencing; it's about understanding.


As I emphasised in the OP, experience is the crucial element the computer lacks, that's the reason it can't understand. The same applies to robots.

Quoting InPitzotl
If I ask a person, "Can you go to the store and pick me up some bananas?", I am not by asking the question asking the person to experience anything.


But in order to understand your question, the person must have experienced stores, picking things up, bananas and a multitude of other things.

Quoting InPitzotl
IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do.


Neither my CAT tool nor a robot do what I do, which is to understand through experience.

InPitzotl October 07, 2021 at 17:04 #604869
Quoting Daemon
As I emphasised in the OP, experience is the crucial element the computer lacks, that's the reason it can't understand.

Nonsense. There are people who have this "crucial element", and yet, have no clue what that question means. If experience is "the crucial" element, what is it those people lack?

I don't necessarily know if a given person would understand that question, but there's a test. If the person responds to that question by going to the store and bringing me some bananas, that's evidence the person has understood the question.
Quoting Daemon
The same applies to robots.

Quoting Daemon
But in order to understand your question, the person must have experienced stores, picking things up, bananas and a multitude of other things.

Your CAT tool would be incapable of bringing me bananas if we just affix wheels and a camera on it. By contrast, a robot might pull it off. The robot would have to do more than just translate words and look up definitions like your CAT tool does to pull it off... getting the bananas is a little bit more involved than translating questions to Dutch.
Quoting Daemon
Neither my CAT tool nor a robot do what I do, which is to understand through experience.

Neither your CAT tool nor a person who doesn't understand the question can do what a robot who brings me bananas and a person who brings me bananas do, which is to bring me bananas.

I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words makes something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for.
Ennui Elucidator October 07, 2021 at 18:41 #604900
Quoting InPitzotl
I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words makes something bring me bananas.


I'm not so sure that Daemon accepts that the understanding is in the doing. A person and a robot acting identically on the line (see box, lift box, put in crate A, reset and wait for next box to be seen) do not both, on his view, understand because the robot is explicable (since he, or someone else, built it from scratch and programmed it down to the last detail). He is after minds as the locus of understanding, but he seems unwilling to accept that what has a mind is not based on explicability. It is a bit like a god of the gaps argument that grows ever smaller as our ability to explain grows ever larger. We will have minds (and understanding) only so long as someone can't account for us.
Daemon October 07, 2021 at 22:26 #604976
Reply to Ennui Elucidator Thank you, that is interesting, but it is definitely not what I am saying.

My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits in to the world which you have experienced.

Because we can explain the robot, we know that its actions are not due to understanding based on experience.

We will continue to have minds and understanding even after we understand our minds.

Daemon October 11, 2021 at 08:55 #605814
Quoting InPitzotl
I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words makes something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for.


We're not trying to explain how you get bananas, we're trying to explain understanding.
InPitzotl October 11, 2021 at 11:39 #605858
Quoting Daemon
We're not trying to explain how you get bananas, we're trying to explain understanding.

Quoting InPitzotl
"Can you go to the store and pick me up some bananas?"

Quoting Daemon
My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits in to the world which you have experienced.

A correct understanding of the question consists in relating it to a request for bananas. How this fits in to the world is how one goes about going to the store, purchasing bananas, coming to me and delivering bananas. You've added experiencing in there. You seem too busy comparing CAT tools not understanding with an English speaker understanding to relate understanding to the real test of it: the difference between a non-English speaker just looking at me funny and an English speaker bringing me bananas.

So what you've tried to get me to do is accept that a robot, just like a CAT tool, doesn't understand, even if the robot brings me bananas; and the reason the robot does not understand the question is because the robot does not experience, just like the CAT tool. My counter is that the robot, just like the English speaker, is bringing me bananas, which is exactly what I meant by the question; the CAT tool is just acting like the non-English speaker, who does not understand the question (despite experiencing; surely the non-English speaker has experienced bananas, and even experiences the question... what's missing then?). "Bringing me bananas" is both a metaphor for what the English speaker correctly relates the question to that the non-English speaker doesn't, and how the English speaker demonstrates understanding the question.
Daemon October 12, 2021 at 09:17 #606160
Reply to InPitzotl You can redefine "understanding" in such a way that it is something a robot or a computer can do, but the "understanding" I am talking about is still there. The kind a robot can't do.

"If the baby fails to thrive on raw milk it should be boiled".
Josh Alfred October 12, 2021 at 10:12 #606171
Reply to Daemon A) Artificial intelligence can utilize any sensory device and use it to compute. If you understand this you can also compare it to human sensory experience. There is little difference. Can you understand that? There is no doubt in my mind that B) even if computers cannot understand everything understandable by humans now, in the future they will be able to. This (B) is clearly demonstrated by the advancements in computing devices that have taken decades, whereupon their intelligences have gone through such testing as the Turing Test and others.
InPitzotl October 12, 2021 at 10:13 #606172
Quoting Daemon
You can redefine "understanding" in such a way that it is something a robot or a computer can do, but the "understanding" I am talking about is still there.

The concept of understanding you talked about on this thread doesn't even apply to humans. If "the reason" the robot doesn't understand is because the robot doesn't experience, then the non-English speaker that looked at me funny understood the question. Certainly that's broken.

I think you've got this backwards. You're the one trying to redefine understanding such that "the robot or a computer" cannot do it. Somewhere along the way, your concept of understanding broke to the point that it cannot even assign lack of understanding to the person who looked at me funny.
Josh Alfred October 12, 2021 at 10:19 #606173
Reply to Daemon There is some kind of break and convergence between A) being able to translate languages and B) understanding languages. I am not sure what those differences and similarities are, as I have never posited the two for comparison. Computers are capable of both. I think @TheMadFool is right on defining understanding. It requires referents, and those referents require some kind of experience of their objects. That is linguistic empiricism in a pure form, but what it doesn't account for is 1) how we know things through rational deduction, where one lacks experience yet knows the premises and conclusions to be valid or invalid deductively, and 2) probably a whole different milieu of other cognitive quandaries.
Daemon October 12, 2021 at 11:27 #606195
Quoting Josh Alfred
A) Artificial intelligence can utilize any sensory device and use it to compute. If you understand this you can also compare it to human sensory experience. There is little difference. Can you understand that?


I can understand what you're saying, but it is quite wrong. When you experience through your senses you see, feel and hear. A computer does not see, feel and hear. I shouldn't need to be telling you this.
InPitzotl October 12, 2021 at 12:07 #606202
Quoting Daemon
When you experience through your senses you see, feel and hear.

And yet, Josh (guessing) does not understand Sanskrit, and you do not understand understanding. A person who does not understand something does not understand it. I shouldn't need to be telling you this.

You've convinced yourself that experience is the explanation for understanding. The problem is, experience does not explain understanding. A large number of animals also experience; but somehow, only humans have mastered human language. Experience cannot possibly be the explanation for understanding if it isn't even an explanation of understanding.
Daemon October 12, 2021 at 13:30 #606222
Reply to InPitzotl

I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

To understand what "pain" means, for example, you need to have experienced pain.
InPitzotl October 12, 2021 at 23:04 #606446
Quoting Daemon
I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

To understand what "pain" means, for example, you need to have experienced pain.

Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas.

There's a gigantic difference between claiming that X is necessary for understanding, and claiming that X is necessary to understand X.

ETA: Your claim is that experience is necessary for understanding. I interpret this claim as equivalent to saying that there can be no understanding without experience. The expected justification for this claim would be to show how understanding at all necessarily involves experience (because if it doesn't, the claim is wrong). This is quite different than pointing out areas of understanding that require experience (such as your pain example).

Explain to me, for example, how you connect the requirement of experience to the example question requesting some bananas.
Daemon October 13, 2021 at 10:46 #606656
Quoting InPitzotl
I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

To understand what "pain" means, for example, you need to have experienced pain. — Daemon

Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas.

I don't really see what you're getting at here. I'm not saying you need to experience pain to understand shopping. You need to experience pain to understand pain.

To understand shopping, you would need to have experienced shopping.

InPitzotl October 13, 2021 at 11:59 #606668
Quoting Daemon
To understand shopping, you would need to have experienced shopping.

Pain is a feeling. Shopping is an act.

If I see a person walking through the store, looking at various items, picking up some of them and putting them into the cart, the person is shopping. If I see a robot walking through the store, looking at various items, picking up some of them and putting them into the cart, the robot is shopping. It's hard to say what a robot feeling pain is by comparison, but since that is all shopping is, that robot is shopping.

Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?)
Daemon October 13, 2021 at 17:02 #606751
Reply to InPitzotl

A robot is not an individual, an entity, an agent, a person. To say that a robot is shopping is a category error.

Of course in everyday conversation we talk as though computers and robots were entities, but here we need to be more careful.

You could say that the robot is simulating shopping.

Do you think the robot understands what it is doing?
Daemon October 13, 2021 at 17:28 #606767
Quoting InPitzotl
Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?)


I wrote this above: [i]My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits in to the world which you have experienced.[/i]

So a person can understand instructions to shop for your bananas if they have had sufficiently similar experiences.

If the baby fails to thrive on raw milk, boil it.

InPitzotl October 14, 2021 at 04:58 #606961
Quoting Daemon
A robot is not an individual, an entity, an agent, a person.

Just a quick reminder... we're not talking about robots in general. We're talking about a robot that can manage to go to the store and get me some bananas.

I don't believe such a robot can possibly pull this off (with any sort of efficacy) without being an individual, an entity, or an agent.

But, sure... it need not be a person.

I suspect that your concept of individuality/agency drags in baggage I don't myself drag in.
Quoting Daemon
Of course in everyday conversation we talk as though computers and robots were entities, but here we need to be more careful.

Okay, so let's be careful.
Quoting Daemon
To say that a robot is shopping is a category error.

Quoting Daemon
You could say that the robot is simulating shopping.

Imagine this theory. Shopping can only be done by girls. I say, that's utter nonsense. Shopping does not require being a girl; I'm a guy, and I can certainly pull it off. But the objection is raised that it's a category error to claim that a guy can shop; you could say that I am simulating shopping.

I don't quite buy that said argument counts as being careful. I'm certainly, in this particular hypothetical scenario, not committing a category error simply by claiming that I, a guy, can shop; it's not me that's claiming only girls shop. In fact, the suggestion that an event that actually occurs should be considered a simulation seems to raise red flags to me.

This sounds like exactly the opposite of being careful.

ETA: You've managed to formulate a theory that makes unjustified distinctions. There's now real shopping, where real bananas get put into real carts, money from real accounts changes hands, and the real bananas are brought to me; and then there's simulated shopping, where all of that stuff also happens, but we're missing vital ingredient X. By this logic there's real walking, where one manages to perform a particular choreography of controlled falling in such a way as to invoke motion without falling over, and simulated walking, where all of this stuff happens, but you're not doing it with the right stuff. There's real surgery, where a surgeon might slice me open with a knife, remove a tumor, and sew me up while managing not to kill me; and simulated surgery, where all of this stuff happens... the tumor's still removed, I'm still alive... but the thing slicing me open didn't quite have feels in the right way.

It seems to me there's no relevant difference here between the real thing and what you're calling a simulation... which is also the real thing, but is missing the ingredient you demand the real thing requires to call it real. All of this stuff still gets done... so to me, this is the ultimate test demonstrating that the thing you demand must be there to do it isn't in fact necessary at all. Are you sure you want this to be your standard that vital ingredient X is necessary? Because it sounds to me like this is the very definition of ingredient X not being vital.

A genuine argument for ingredient X's vitality should not look like a No True Scotsman fallacy. If experience isn't doing any work for you to explain something crucial about understanding, it is, as I said, superfluous... and your inclusion of it just to include it is simply baggage. If you have a good reason to suspect experience is necessary, that is what you should present; not just a narrative that lets you say that, but an explanation for how it critically fits in.
Quoting Daemon
Do you think the robot understands what it is doing?

In a nutshell, yes. But again, to be clear, this does not stem from a principle that doing things is understanding. Rather, it's because this is precisely the type of task that requires understanding to do with any efficacy.

Daemon October 15, 2021 at 09:15 #607425
Quoting Josh Alfred
There is some kind of break and convergence between A) Being able to translate languages B) Understanding languages. I am not sure what those differences and similarities are, as I have never posited the two for comparison. Computers are capable of both.


Researchers have compared the results of machine translation to a jar of cookies, only 5% of which are poisoned.

Computers can do an amazingly good job of translating, but they don't do what we do when we translate. We use our understanding, and you can see from the faults in machine translation that that is what a computer lacks.

If a computer could do what I can do, people would use Google Translate and I wouldn't have any work. Google Translate is free and I am quite expensive.

What the computer lacks is involvement with the world.

I put this sentence into Google Translate: "If the baby fails to thrive on raw milk, boil it."

Google translated this into Dutch as "Als de baby niet gedijt op rauwe melk, kook hem dan."

That means "If the baby fails to thrive on raw milk, boil him."

Google Translate is extremely ingenious, but it lacks understanding, because it is not involved with the world as we are, through experience. QED.
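The boiled-baby mistranslation turns on pronoun resolution. A toy Python sketch (the candidate list and both heuristics are invented purely for illustration; real MT systems are far more complex but face the same choice) shows why no surface rule settles it:

```python
# Toy sketch of the ambiguity in "If the baby fails to thrive on raw
# milk, boil it." Candidates and heuristics are invented for
# illustration only.
candidates = ["baby", "milk"]  # nouns preceding "it", in order

def resolve_by_recency(cands):
    # Surface heuristic 1: "it" refers to the most recent noun.
    return cands[-1]

def resolve_by_subject(cands):
    # Surface heuristic 2: "it" refers to the clause subject.
    return cands[0]

print(resolve_by_recency(candidates))  # milk (the correct antecedent)
print(resolve_by_subject(candidates))  # baby (the translation's choice)

# Both heuristics are defensible on purely grammatical grounds; picking
# the right one requires knowing that milk is boilable and babies are
# not, which is world knowledge, not syntax.
```

Both answers are grammatically eligible; only knowledge of the world favors the milk.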
TheMadFool October 15, 2021 at 13:04 #607469
Quoting Daemon
My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience


Mary's Room?

The question is, does Mary learn anything new?

I recall mentioning this before, but what is red? Isn't it just our eyes' way of perceiving 750 nm of the visible spectrum of light?

Look at it in terms of language. This :point: 0 is sifr in Persian, zero in English and sunya in Hindi but do we claim that the Persian knows something more from the word "sifr" or that an Englishman got an extra amount of information from the word "zero" and so on?

Likewise, does Mary get ahold of new information when she sees the color red? It's just 750 nm in eye language.

I dunno. :chin:
Daemon October 15, 2021 at 21:17 #607642
Reply to TheMadFool

Mary's deficit in the room is only that she hasn't seen red. Apart from that she is a normal experiencing human being.

A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.
Varde October 15, 2021 at 21:21 #607645
Understanding is knowing a subject mechanically, such as with C++: if - there is - any number here - do - X.

A tree is/can be green and brown...

If you know each part of the former sentence, you therefore understand what is meant. To get a machine to understand, then, it must be programmed concisely.
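The "if there is any number here, do X" picture can be rendered as a few lines. A minimal sketch in Python; the rule and its action (doubling) are invented purely for illustration:

```python
# A literal rendering of "if there is any number here, do X":
# understanding as concisely programmed mechanical rules.

def do_x(n):
    # "X" stands for whatever the rule prescribes; doubling is arbitrary.
    return n * 2

def mechanical_rule(token):
    if isinstance(token, (int, float)):  # "if there is any number here"
        return do_x(token)               # "do X"
    return None  # no programmed rule matches: nothing to respond with

result = mechanical_rule(21)      # a number, so the rule fires: 42
missed = mechanical_rule("tree")  # not a number: no rule, returns None
```

On this picture the machine "understands" exactly as far as its programmed rules reach, and no further.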
TheMadFool October 15, 2021 at 21:28 #607647
Quoting Daemon
Mary's deficit in the room is only that she hasn't seen red. Apart from that she is a normal experiencing human being.

A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.


As I tried to explain with the Mary's Room thought experiment, redness is just 750 nm (the wavelength of red) in eye dialect. Just as you can't claim that you've learned anything new when the statement "the burden of proof" is translated into Latin as "onus probandi", you can't say that seeing red gives you any new information.
Daemon October 15, 2021 at 21:34 #607651
Quoting TheMadFool
As I tried to explain with the Mary's Room thought experiment, redness is just 750 nm (the wavelength of red) in eye dialect. Just as you can't claim that you've learned anything new when the statement "the burden of proof" is translated into Latin as "onus probandi", you can't say that seeing red gives you any new information.


What you've set out here is just one side of the disagreement about Mary's Room, but I am suggesting that not just red but everything you have learned comes from experience. Do you have a counter to that?
TheMadFool October 15, 2021 at 21:56 #607663
Quoting Daemon
What you've set out here is just one side of the disagreement about Mary's Room, but I am suggesting that not just red but everything you have learned comes from experience. Do you have a counter to that?


Yes, I think so. I'll give you an argument Socrates made.

1. Nothing in our experience is truly, precisely, equal. Everything we encounter around us is either never equal or only approximately equal.

Yet,

2. We have the concept of perfect equality.

Ergo,

3. Not everything we know is drawn from experience.

InPitzotl October 16, 2021 at 00:25 #607748
Quoting TheMadFool
I recall mentioning this before but what is red? Isn't it just our eyes way of perceiving 750 nm of the visible spectrum of light?

Eyes do not perceive, so the answer to the question is no (I'm sure you didn't literally mean that eyes perceive, but you have to be specific here enough for me to know what you did mean).

Color vision in most humans is trichromatic; to such humans, 750nm light would affect the visual system in a particular way that contrasts quite a bit with 550nm light. The tristimulus values would be X=0.735, Y=0.265, Z=0 and X=0.302, Y=0.692, Z=0.008 respectively. A protanope is dichromatic; the protanope's visual system might have [s]tristimulus[/s] distimulus values of X=1.000, Y=0.000 for 750nm light and X=0.992, Y=0.008 for 550nm light.

Assuming Jack is typical, Jane has an inverted spectrum, and Joe is a protanope, Jack and Jane agree 750nm light is red and 550nm light is green; and Joe doesn't quite get what the fuss is about.
Quoting Daemon
A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.

Imagine a test. There are various swatches within 0.1 units of each other from X=0.735, Y=0.265, Z=0; and this is mixed in with various swatches within 0.1 units from X=0.302, Y=0.692, Z=0.008. Jack, Jane, Joe, and a robot affixed with a colorimeter are tasked to sort the swatches of the former kind together and the swatches of the latter kind together into separate piles. Jack, Jane, and the robot would be able to pass this test. Joe will have some difficulty.

Jack and Jane do this task well using their experiences of seeing the swatches. Joe will have great difficulty with this task despite experiencing the swatches. The robot can be programmed to succeed at this test with success rates rivaling Jack and Jane, despite having no experiences.

I'll grant that all of the information Jack, Jane, and Joe have ever acquired has come from experience. I'll grant that the robot here does not experience. But granting this, with regard to this test, Joe's the odd one out, not the robot.

Maybe Jack, Jane, and Joe only being able to sort swatches using their experiences does not demonstrate that experience is the critical thing necessary to sort swatches correctly.
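The swatch test can be written out directly. A minimal sketch in Python, using the two tristimulus reference points and the "within 0.1 units" tolerance from the description; the Euclidean distance metric and all names are illustrative assumptions, not anything a real colorimeter pipeline dictates:

```python
import math

# InPitzotl's swatch-sorting test, made literal. Reference tristimulus
# values are taken from the post; everything else is illustrative.
RED_REF   = (0.735, 0.265, 0.0)    # ~750nm light, typical trichromat
GREEN_REF = (0.302, 0.692, 0.008)  # ~550nm light

def dist(a, b):
    # Euclidean distance in tristimulus space ("within 0.1 units").
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sort_swatches(swatches, tol=0.1):
    # The robot's task: pile swatches near RED_REF separately from
    # swatches near GREEN_REF, using colorimeter readings alone.
    red_pile, green_pile = [], []
    for s in swatches:
        if dist(s, RED_REF) <= tol:
            red_pile.append(s)
        elif dist(s, GREEN_REF) <= tol:
            green_pile.append(s)
    return red_pile, green_pile

reds, greens = sort_swatches([(0.73, 0.27, 0.0), (0.31, 0.69, 0.01)])
# reds == [(0.73, 0.27, 0.0)]; greens == [(0.31, 0.69, 0.01)]
```

The robot passes by measuring; it needs no experience of the swatches to sort them, which is the point of the contrast with Joe.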
TheMadFool October 16, 2021 at 01:59 #607783
Quoting InPitzotl
Eyes do not perceive, so the answer to the question is no (I'm sure you didn't literally mean that eyes perceive, but you have to be specific here enough for me to know what you did mean).

Color vision in most humans is trichromatic; to such humans, 750nm light would affect the visual system in a particular way, that contrasts quite a bit from 550nm light. The tristimulus values for each would be X=0.735, Y=0.265, Z=0 and X=0.302, Y=0.692, Z=0.008 respectively. A protanope would be dichromatic; the protanope's visual system might have tristimulus values for each color as X=1.000, Y=0.000 and 550nm light as X=0.992, Y=0.008.

Assuming Jack is typical, Jane has an inverted spectrum, and Joe is a protanope, Jack and Jane agree 750nm light is red and 550nm light is green; and Joe doesn't quite get what the fuss is about.


Languages may be mutually unintelligible, but nothing new is added in translation from one to another. Joe's knowledge that red is 750 nm, even when he's blind to red, is equivalent to Jack and Jane seeing/perceiving red. Red is, after all, light of 750 nm in eye dialect.

Here's a little thought experiment:

If I say out loud to you "seven" and then follow that up by writing "7" and showing it to you, is there any difference insofar as the content of my spoken and written message is concerned?

No!

Both "seven" (aural) and "7" (visual) contain the same information - seven-ness.

Likewise, seeing the actual color red is equivalent to knowing the number 750 (nm) - they're both the same thing and nothing new is learned by looking at a red object.

InPitzotl October 16, 2021 at 03:44 #607856
Quoting TheMadFool
Joe's knowledge that red is 750 nm,

There's language translation, and there's wrong. What color is a polar bear, Santa's beard, and snow?
Quoting TheMadFool
If I say out loud to you "seven" and then follow that up by writing "7" and showing it to you, is there any difference insofar as the content of my spoken and written message is concerned?

Your thought experiment is misguided. 7 is a number. Seven is another name for the number 7. But 7 aka seven is not a dwarf. There might be seven dwarves, but seven isn't a dwarf.
Quoting TheMadFool
Likewise, seeing the actual color red is equivalent to knowing the number 750 (nm) - they're both the same thing and nothing new is learned by looking at a red object.

Seeing the actual color red is not equivalent to knowing the number 750nm. Colors are not wavelengths of light; wavelengths of light have color (if you isolate light to said wavelength photons and have enough to trigger color vision), but a wavelength of light and a color aren't the same thing. A polar bear is white, not red (except after a nice meal), despite his fur reflecting photons whose wavelength is 750nm. There's no such thing as a white photon. White is a color. Colors are not wavelengths of light.

Joe also sees a color, in a color space we don't tend to name (because we're cruel?), when he sees 750nm light. But the color he sees is pretty much the same color as 550nm light. We call the former red, and the latter green.
TheMadFool October 16, 2021 at 03:50 #607859
Reply to InPitzotl Before we go any further, what do you think of the idea that perception is a language? It seems to be one; after all, the brain is interpreting the neural signals pouring into it through the senses.
InPitzotl October 16, 2021 at 03:53 #607864
Quoting TheMadFool
Before we go any further, what do you think of the idea that perception is a language?

It might work as a metaphor, but I wouldn't go further than that.
TheMadFool October 16, 2021 at 03:54 #607866
Quoting InPitzotl
It might work as a metaphor, but I wouldn't go further than that.


Why?
InPitzotl October 16, 2021 at 03:58 #607874
Quoting TheMadFool
Why?

It's not really the same thing, in short. Language does more than what perception does, and perception does more than what language does. They deserve different concepts. I don't think I want to elaborate here; I haven't bothered with the other thread yet (and once I do, I might just lurk, as I typically do way more often than comment).
TheMadFool October 16, 2021 at 04:06 #607878
Quoting InPitzotl
It's not really the same thing, in short. Language does more than what perception does, and perception does more than what language does. They deserve different concepts. I don't think I want to elaborate here; I haven't bothered with the other thread yet (and once I do, I might just lurk, as I typically do way more often than comment).


I hadn't thought it through too. It just seemed to make sense to me, intuitively that is. I guess it's nothing. G'day.
Daemon October 22, 2021 at 14:21 #610290
I thought some examples of Gricean Implicature might amusingly illustrate what computers can't understand (and why):

A: I broke a finger yesterday.
Implicature: The finger was A's finger.

A: Smith doesn’t seem to have a girlfriend these days.
B: He has been paying a lot of visits to New York lately.
Implicature: He may have a girlfriend in New York.

A: I am out of petrol.
B: There is a garage around the corner.
Implicature: You could get petrol there.

A: Are you coming out for a beer tonight?
B: My in-laws are coming over for dinner.
Implicature: B can't go out for a beer tonight.

You can complete the remaining examples (a computer can't).

A: Where is the roast beef?
B: The dog looks happy.

A: Has Sam arrived yet?
B: I see a blue car has just pulled up.

A: Did the Ethiopians win any gold medals?
B: They won some silver medals.

A: Are you having some cake?
B: I'm on a diet.

InPitzotl October 22, 2021 at 16:13 #610329
Quoting Daemon
I thought some examples of Gricean Implicature might amusingly illustrate what computers can't understand (and why)

I think you're running down the garden path.

I'm a human. I experience things. I also understand things. I can do things like play perfect tic tac toe, go to the store and buy bananas, and solve implicature puzzles.

I'm also a programmer. I have the ability to "tell a computer what to do". I can easily write a program to play perfect tic tac toe. Not only can I do this, but I can specifically write said program by self reflecting on how I myself would play perfect tic tac toe; that is, I can appeal to my own intuitive understanding of tic tac toe, using self reflection, and emit this in the form of a formal language that results in a computer playing perfect tic tac toe.

But by contrast, to write a program that drives a bot to go to the store and buy bananas, or to solve implicature puzzles, is incredibly difficult. Mind you, these are easy tasks for me to do, but that tic tac toe trick I pulled to write the perfect tic tac toe player just isn't going to cut it here.

I don't think you're grasping the implication of this here. It sounds as if you're positing that you, a human, can easily do something... like go to the store and buy bananas, or solve implicatures... and a computer, which isn't a human, cannot. And that this implies that computers are missing something that humans have. That is the garden path I think you're running down... you have a bad impression. It's us humans that are building these computers that have, or don't have as the case may be, these capabilities. So when I show you my perfect tic tac toe playing program, that is evidence that humans understand tic tac toe. When I show you my CAT tool that can't even solve an implicature problem, this is evidence that humans have not solved the problem of implicature.

And maybe they will; maybe in 15 years you'll be surprised. Your CAT tool will suddenly solve these implicatures like there's no tomorrow. But that just indicates that programmers solved implicatures... the CAT tool still wouldn't know what a banana is. How could it?

The whole experience thing is a non-sequitur. I have just as much "experiencing" when I write tic tac toe as I do when I fail to make a CAT tool that solves implicatures. I don't think if I knew how to put experiences into the CAT tool that this would do anything to it that would help it solve implicatures. I certainly don't make that tic tac toe perfect player by coding in experiences. It's really easy to say humans have experiences, humans can do x, and computers cannot do x, therefore x requires experiences. But I don't grasp how this can actually be a justified theory. I don't get what "work" putting experiences in is being theorized to do to pull off what it's being claimed as being critical for.
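The tic-tac-toe point, that a programmer can write out their own intuitive perfect play as a formal procedure, is the textbook minimax construction. A minimal sketch, assuming a 9-cell board representation; this is an illustration of the idea, not code from the thread:

```python
# Minimax tic-tac-toe: writing out "how I would play perfectly" as a
# formal procedure. Board: list of 9 cells, each 'X', 'O', or ' '.
# 'X' maximizes, 'O' minimizes; the representation is illustrative.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    # Returns (score, best_move): +1 if X can force a win, -1 if O can,
    # 0 for a forced draw.
    w = winner(b)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i in range(9) if b[i] == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        b[m] = player
        score, _ = minimax(b, 'O' if player == 'X' else 'X')
        b[m] = ' '
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

# X to move with two in a row: minimax finds the winning square (index 2).
score, move = minimax(['X', 'X', ' ', 'O', 'O', ' ', ' ', ' ', ' '], 'X')
```

Nothing here encodes an experience; it encodes the programmer's understanding, exhaustively unrolled, which is exactly the asymmetry being described: tic tac toe yields to this trick, implicature so far has not.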
Varde October 22, 2021 at 16:28 #610336
To understand means to, get, mentally/spiritually.

To know means to, have, mentally/spiritually.

Though you may understand something, you may lose it when further complexities concerning its concept arise.

You understand shape, but at the mention of adv. Shape you seem to lose what you got.

When you understand fully a concept, you can know about it - you can secure what you get from it.

Knowing is halving; it's as simple as looking at this word - example - and being able to halve all aspects of it (its symbol, its meaning, its reality, etc.). Halving in mind is not directly about the fraction (though it is), but more the process.

I look up at the sky, I am able to say I know what it is, because I can quickly decompose it - half.

Daemon October 22, 2021 at 16:30 #610338
Reply to InPitzotl You seem to be contradicting yourself. The other day you had a robot understanding things, now you say a computer doesn't know what a banana is.

I've been saying from the start that computers don't do things (like calculate, translate), we use them to do those things.

TheMadFool October 22, 2021 at 16:31 #610340
Me (to a stranger): Sir, can you give me the directions to the nearest hotel?

Stranger (to me): Yeah, sure. Take this road and turn left at the second junction. There's a hotel there, a good one.

---

Me (to Siri): Siri, can you give me the directions to the nearest hotel?

Siri: The shortest route to the hotel nearest you is take x street, turn left at y street . It should take you about 3 minutes in light traffic.


Both Siri and the kind stranger (seem to have) understood my question. A mini Turing Test.
InPitzotl October 22, 2021 at 17:26 #610371
Quoting Daemon
You seem to be contradicting yourself.

I'm pretty sure if you understood what I was saying, you would see there's no contradiction. So if you are under the impression there's a contradiction, you're missing something.
Quoting Daemon
The other day you had a robot understanding things, now you say a computer doesn't know what a banana is.

Quoting InPitzotl
the CAT tool still wouldn't know what a banana is.

Your CAT tool doesn't interact with bananas.
Quoting Daemon
I've been saying from the start that computers don't do things (like calculate, translate), we use them to do those things.

What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?

I assume you're just talking about some of these things... so what makes the stuff I "do" when I do it, what I'm "doing", versus stuff I'm "not doing"? (ETA: Note that experience cannot be the difference; I experience shaking when I have coffee just as I experience shaking when I dance).
Daemon October 22, 2021 at 22:33 #610466
Quoting TheMadFool
Both Siri and the kind stranger (seem to have) understood my question. A mini Turing Test.


Yes. The Stanford Encyclopedia says that the Turing Test was initially suggested as a means to determine whether a machine can think. But we know how Siri works, and we know that it's not thinking in the way we think.

When we give directions to a hotel we use a mental map based on our experiences.

Siri uses a map based on our experiences. Not Siri's experiences. Siri doesn't have experiences. You know that, right?

Quoting TheMadFool
Me (to a stranger): Sir, can you give me the directions to the nearest hotel?

Stranger (to me): Yeah, sure. Take this road and turn left at the second junction. There's a hotel there, a good one.


Because the stranger understands the question in a way Siri could not, he is able to infer that you have requirements which your words haven't expressed. You aren't just looking for the nearest hotel, he thinks you will also want a good one. And he knows (or thinks he knows) what a good one is. Because of his experience of the world.

That's what it's like when we think. We understand what "good" means, because we have experienced pleasure and pain, frustration and satisfaction.

Daemon October 26, 2021 at 09:28 #612065
Quoting InPitzotl
Your CAT tool doesn't interact with bananas.


But neither does a robot.

Quoting InPitzotl
What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?


I'm talking about acting as an agent. That's something computers and robots can't do, because they aren't agents. We don't treat them as agents. When your robot runs amok in the supermarket and tears somebody's head off, it won't be the robot that goes to jail. If some code you write causes damage, it won't be any good saying "it wasn't me, it was this computer I programmed". I think you know this, really.

They aren't agents because they aren't conscious, in other words they don't have experience.

InPitzotl October 26, 2021 at 11:18 #612141
Quoting Daemon
But neither does a robot.

Quoting Daemon
You seem to be contradicting yourself.

Just to remind you what you said exactly one post prior. Of course the robot interacts with bananas. It went to the store and got bananas.

What you really mean isn't that the robot didn't interact with bananas, but that it "didn't count". You think I should consider it as not counting because this requires more caution. But I think you're being "cautious" in the wrong direction... your notions of agency fail. To wit, you didn't even appear to see the question I was asking (at the very least, you didn't reply to it) because you were too busy "being careful"... odd that?

I'm not contradicting myself, Daemon. I'm just not laden with your baggage.
Quoting InPitzotl
What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?

...this is what you quoted. This was what the question actually was. But you didn't answer it. You were too busy "not counting" the robot:
Quoting Daemon
They aren't agents because they aren't conscious, in other words they don't have experience.

I'm conscious. I experience... but I do not agentively do any of those underlined things.

I do not agentively generate a particular body temperature, but I'm conscious, and I experience. I do not agentively radiate in the infrared... but I'm conscious, and experience. I do not agentively shake when I have too much coffee (despite agentively drinking too much coffee), but I'm conscious, and experience. I even am an agent, but I do not agentively do those things.

There's something that makes what I agentively do agentive. It's not being conscious or having experiences... else why aren't all of these underlined things agentive? You're missing something, Daemon. The notion that agency is about being conscious and having experience doesn't work; it fails to explain agency.
Quoting Daemon
When your robot runs amok in the supermarket and tears somebody's head off, it won't be the robot that goes to jail.

Ah, how human-centric... if a tiger runs amok in the supermarket and tears someone's head off, we won't send the tiger to jail. Don't confuse agency with personhood.
Quoting Daemon
If some code you write causes damage, it won't be any good saying "it wasn't me, it was this computer I programmed"

If I let the tiger into the shop, I'm morally culpable for doing so, not the tiger. Nevertheless, the tiger isn't acting involuntarily. Don't confuse agency with moral culpability.
Quoting Daemon
I think you know this, really.

I think you're dragging a lot of baggage into this that doesn't belong.
Daemon October 26, 2021 at 12:29 #612186
Reply to InPitzotl That isn't a difficult question. Only conscious entities can be agentive, but not everything conscious entities do is agentive.
InPitzotl October 26, 2021 at 13:04 #612201
Quoting Daemon
That isn't a difficult question.

So answer it.

The question is, why is it agentive to shake when I dance, but not to shake when I drink too much coffee? And this:
Quoting Daemon
Only conscious entities can be agentive, but not everything conscious entities do is agentive.

...doesn't answer this question.

ETA: This isn't meant as a gotcha btw... I've been asking you for several posts to explain why you think consciousness and experience are required. This is precisely the place where we disagree, and where I "seem to" be contradicting myself (your words). The crux of this contradiction, btw, is that I'm not being "careful" as is "required" by such things. I'm trying to dig into what you're doing a bit deeper than this hand waving.

I'm interpreting your "do"... that was your point... as being a reference to individuality and/or agency. So tell me what you think agency is (or correct me about this "do" thing).
Daemon October 26, 2021 at 15:47 #612245
Reply to InPitzotl It might be better to take a clearer case, as you drinking the coffee is agentive, which muddies the water a little. So we'll ask "why is it agentive when you go to the store for bananas, but not agentive when you exert a tiny gravitational pull on Jupiter".

Agency is the capacity of an actor to act. Agency is contrasted to objects reacting to natural forces involving only unthinking deterministic processes.
Daemon October 26, 2021 at 17:47 #612298
Quoting InPitzotl
What you really mean isn't that the robot didn't interact with bananas, but that it "didn't count".


What I really mean is that it didn't interact with anything in the way we interact with things. It doesn't see, feel, hear, smell or taste bananas, and that is how we interact with bananas.
InPitzotl October 26, 2021 at 23:16 #612502
Quoting Daemon
It might be better to take a clearer case, as you drinking the coffee is agentive, which muddies the water a little.

I've no idea why you think it muddies the water... I think it's much clearer to explain why shaking after drinking coffee isn't agentive yet shaking while I dance is. Such an explanation gets closer to the core of what agency is. Here (shaking because I'm dancing vs shaking because I drank too much coffee) we have the same action, or at least the same descriptive for actions; but in one case it is agentive, and in the other case it is not.
Quoting Daemon
Agency is the capacity of an actor to act. Agency is contrasted to objects reacting to natural forces involving only unthinking deterministic processes.

Agentive action is better thought of IMO as goal directed than merely as "thought". In a typical case an agent's goal, or intention, is a world state that the agent strives to attain. When acting intentionally, the agent enacts behaviors selected from schemas based on its self models; as the act is carried out, it uses world models to monitor the action and accommodates the behaviors in real time to changes in those models. This means the agent is constantly updating its world models, including while it is acting.

This sort of thing is involved when I shake while I'm dancing. It is not involved when I shake after having drunk too much coffee. Though in the latter case I may still know I'm shaking, by updating world models, I'm not enacting the behavior of shaking by selecting schemas based on my self models in order to attain the goal of shaking. In contrast, in the former case (shaking because I'm dancing), I am enacting behaviors by selecting schemas based on my self model in order to attain the goal of shaking.
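The loop described here (a goal as a target world state, behaviors selected from schemas by predicted outcome, and re-selection as observations come in) can be sketched in a few lines. This is only a toy illustration of the shape of that loop; all names (`select_schema`, `agent_loop`) and the integer "world" are my own, not anything from the discussion or a standard API.

```python
def select_schema(schemas, world, goal):
    """Pick the behavior whose predicted outcome lands closest to the goal state."""
    return min(schemas, key=lambda s: abs(s(world) - goal))

def agent_loop(world, goal, schemas, max_steps=20):
    """Enact, observe, and accommodate until the goal state is attained."""
    for _ in range(max_steps):
        if world == goal:
            break
        behavior = select_schema(schemas, world, goal)  # selection from schemas
        world = behavior(world)                         # enact, then observe the result
        # the observed world state feeds back into the next selection,
        # which is what lets the agent accommodate unpredicted drift
    return world

# Toy world: an integer state; the schemas nudge it up or down.
schemas = [lambda w: w + 1, lambda w: w - 1]
print(agent_loop(world=3, goal=7, schemas=schemas))
```

The point of the sketch is the feedback: the same code reaches the goal whether the world starts below it or above it, because selection is re-run against the observed state rather than choreographed in advance.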

So, does this sound like a fair description of agency to you? I am specifically describing why shaking because I've had too much coffee isn't agentive while shaking because I'm dancing is.
Daemon October 27, 2021 at 09:47 #612726
Quoting InPitzotl
Agentive action is better thought of IMO as goal directed than merely as "thought".


But where do the goals come from, if not from "mere thought"?

GraveItty October 27, 2021 at 11:03 #612740
"The important question of what understanding is."

Why classify this question as important? It is as important or unimportant as any other question. I'm not trying to troll or provoke here, but what if you have the answer? Then you understand what it is. Does that make people more understanding? Knowing what understanding is presupposes a theoretical framework to place the understanding in. Understanding this is more important than knowing what it is inside this framework. Of course it's important to understand people. It connects us. Makes us love and hate. The lack of it can give rise to loneliness. Though it's not a sine qua non for a happy life. You can be in love with someone you can't talk to, because of a language barrier. Though harder, understanding will find a way. No people are completely non-understandable. Only truly irrational ones, but these people are mostly imaginary, although I can throw a stone over the water without any reason. You can stress the importance of understanding, but at the same time non-understanding is important as well. Like I said, knowing the nature of understanding doesn't help in the understanding itself. It merely reformulates it and puts it in an abstract formal scheme, doing injustice to the frame-dependent content. It gives no insight into the nature of understanding itself.
InPitzotl October 27, 2021 at 11:10 #612745
Quoting Daemon
But where do the goals come from, if not from "mere thought"?

In terms of explaining agentive acts, I don't think we care. I don't have to answer the question of what my cat is thinking when he's following me around the house. It suffices that his movements home in on where I'm going. That is agentive action. Now, I don't think all directed actions are agentive... a heat seeking missile isn't really trying to attain a goal in an agentive way... but the proper question to address is what constitutes a goal, not what my cat is thinking that leads him to follow me.

My cat is an agent; his eyes and ears are attuned to the environment in real time, from which he is making a world model to select his actions from schemas, and he is using said world models to modulate his actions in a goal directed way (he is following me around the house). I wouldn't exactly say my cat is following me because he is rationally deliberating about the world... he's probably just hungry. I'm not sure if what my cat is doing when setting the goal can be described of as thought; maybe it can. But I don't really have to worry about that when calling my cat an agent.
Daemon October 27, 2021 at 11:30 #612752
Quoting InPitzotl
But where do the goals come from, if not from "mere thought"? — Daemon

In terms of explaining agentive acts, I don't think we care. I don't have to answer the question of what my cat is thinking when he's following me around the house. It suffices that his movements home in on where I'm going. That is agentive action. Now, I don't think all directed actions are agentive... a heat seeking missile isn't really trying to attain a goal in an agentive way.


But a robot buying bananas is?

InPitzotl October 27, 2021 at 12:09 #612769
Quoting Daemon
But a robot buying bananas is?

Why not?

But I want you to really answer the question, so I'm going to carve out a criterion. Why am I wrong to say the robot is being agentive? And the same goes in the other direction... why are you not wrong about the cat being agentive? Incidentally, it's kind of the same question. I think it's erroneous to say my cat's goal of following me around the house was based on thought.

Incidentally, let's be honest... you're at a disadvantage here. You keep making contentious points... like that the robot doesn't see (in the sense that it doesn't experience seeing); I've never confused the robot for having experiences, so I cannot be wrong by a confusion I do not have. But you also make wrong points... like that agentive goals require thought (what was my cat thinking, and why do we care about it?)
Daemon October 27, 2021 at 12:29 #612773
Quoting InPitzotl
Agentive action is better thought of IMO as goal directed than merely as "thought". In a typical case an agent's goal, or intention, is a world state that the agent strives to attain.


You're wrong because the robot doesn't have a goal. We have a goal, for the robot.
InPitzotl October 27, 2021 at 12:31 #612775
Quoting Daemon
You're wrong because the robot doesn't have a goal.

Ah, finally... the right question. But why not?

Be precise... it's really the same question both ways. What makes the robot not have a goal, and what by contrast makes my cat have a goal?
Daemon October 27, 2021 at 12:53 #612784
Reply to InPitzotl

The cat wants something. The robot is not capable of wanting. The heat seeking missile is not capable of wanting.

InPitzotl October 28, 2021 at 22:56 #613740
Quoting Daemon
The cat wants something.

I'm not sure what "want" means to the precision you're asking. The implication here is that every agentive action involves an agent that wants something. Give me some examples... my cat sits down and starts licking his paw. What does my cat want that drives him to lick his paw? It sounds a bit anthropomorphic to say he "wants to groom" or "wants to clean himself".

But it sounds True Scotsmanny to say my cat wants to lick his paws in any sense other than that he enacted this behavior and is now trying to lick his paws. If there is such another sense, what is it?

And why should I care about it in terms of agency? Are cats, people, dragonflies, or anything else capable of "trying to do" something without "wanting" to do something? If so, why should that not count as agentive? If not, then apparently the robot can do something we "agents" cannot... "try to do things" without "wanting" (whatever that means), and I still ask why it should not count as agentive.
Quoting Daemon
The robot is not capable of wanting.

The robot had better be capable of "trying to shop and get bananas", or it's never going to pull it off.
Varde October 29, 2021 at 10:46 #613898
Understanding is what you are given whilst knowing is what you give, concerning intellect.

An apple ~can be *red* is an intellectual statement, and I am giving it to you.

I am given a kiwi, and hypothetically I know nothing about it, I have nothing to give, but I understand it, in so much as I have a sense of it.
Daemon October 29, 2021 at 22:28 #614088
Quoting InPitzotl
I'm not sure what "want" means to the precision you're asking.


Because you're an experiencing being you know what "want" means. Like the cat you know what it feels like to want food, or attention.

This is the Philosophy of Language forum. My contention is that a computer or a robot cannot understand language, because understanding requires experience, which computers and robots lack.

Whether a robot can "try" to do something is not a question for Philosophy of Language. But I will deal with it in a general way, which covers both agency and language.

In ordinary everyday talk we all anthropomorphise. The thermostat is trying to maintain a temperature of 20 degrees. The hypothalamus tries to maintain a body temperature around 37 degrees. The modem is trying to connect to the internet. The robot is trying to buy bananas. But this is metaphorical language.

The thermostat and the robot don't try to do things in the way we do. They are not even individual entities in the way we are. It's experience that makes you an individual.

InPitzotl October 30, 2021 at 02:28 #614297
Quoting Daemon
It's experience that makes you an individual.

No, being agentively integrated is what makes me (and you) an individual. We might say you're an individual because you are "of one mind".

For biological agents such as ourselves, it is dysfunctional not to be an individual. We would starve to death as Buridan's asses; we would waste energy if we all had Alien Hand Syndrome. Incidentally, a person with AHS is illustrative of an entity where the one-mindedness breaks down... the "alien", so to speak, in AHS is not the same individual. Nevertheless, an alien hand illustrates very clear agentive actions, and suggests experiencing, which in turn draws a question mark over your notion that it's the experiencing that makes you an individual.
Quoting Daemon
In ordinary everyday talk we all anthropomorphise. The thermostat is trying to maintain a temperature of 20 degrees. The hypothalamus tries to maintain a body temperature around 37 degrees. The modem is trying to connect to the internet. The robot is trying to buy bananas. But this is metaphorical language.

Not in the robot case. This is no mere metaphor; it is literally the case that the robot is trying to buy bananas.

Imagine doing this with a wind-up doll (the rules are, the wind-up doll can do any choreography you want, but it only does that one thing when you wind it up... so you have to plan out all movements). If you try to build a doll to get the bananas, you will never pull it off. The slightest breeze turning it the slightest angle would make it miss the bananas by a mile; it'd be lucky to even make it to the store... not to mention that other shoppers are grabbing random bananas while stockers are restocking bananas in random places, and shoppers are constantly walking in the way, and so on.

Now imagine all of the possible ways the environment could be rearranged to thwart the wind-up doll... the numbers here are staggeringly huge. Among all of these possible ways not to get a banana, the world is, and during the act of shopping will evolve to be, one particular way. There does exist some choreography of the wind-up doll for this particular way that would manage to make it in, and out, of the store with the banana in hand (never mind that we expect a legal transaction to occur at the checkout). But there is effectively no way you can predict the world beforehand to build your wind-up doll.

So if you're going to build a machine that makes it out of the store with bananas in hand with any efficacy, it must represent the telos of doing this; it must discriminate relevant pieces of the environment as they unpredictably change; it must weigh this against the telos representation; and it must use this to drive the behaviors being enacted in order to attain the telos. A machine that is doing this is doing exactly what the phrase "trying to buy bananas" conveys.

I'm not projecting anything onto the robot that isn't there. The robot isn't conscious. It's not experiencing. What I'm saying is there is something that has to genuinely be there; it's virtually a derived requirement of the problem spec. You're not going to get bananas out of a store by using a coil of two metals.
Quoting Daemon
My contention is that a computer or a robot cannot understand language, because understanding requires experience, which computers and robots lack.

And my contention has been throughout that you're just adding baggage on.

Let's grant that understanding requires experience, and grant that the robot I described doesn't experience. And let's take that for a test run.

Suppose I ask Joe (human) if he can get some bananas on the way home, and he does. Joe understands my English request, and Joe gets me some real bananas. But if I ask Joe to do this in Dutch, Joe does not understand, so he would not get me real bananas. If I ask my robot, my robot doesn't understand, but can get me some real bananas. But if I ask my robot to do this in Dutch, my robot doesn't understand, so cannot get me real bananas. So Joe real-understands my English request, and can real-comply. The robot fake-understands it but can real-comply. Joe doesn't real-understand my Dutch request, so cannot real-comply. The robot doesn't fake-understand my Dutch request but this time cannot real-comply. Incidentally, nothing will happen if I ask my cable modem or my thermostat to get me bananas in English or Dutch.

So... this is the nature of what you're trying to pitch to me, and I see something really weird about it. Your experience theory is doing no work here. I just have to say the robot doesn't understand because it's definitionally required to say it, but somehow I still get bananas if it doesn't-understand-in-English but not if it doesn't-understand-in-Dutch, just like with Joe, but that similarity doesn't count because I just have to acknowledge Joe experiences whereas the robot doesn't, even though I'm asking neither to experience but just asking for bananas. It's as if meaning isn't about meaning any more; it's about meaning with experiences. Meaning without experiences cannot be meaning, even though it's exactly like meaning with experiences save for the definitive having the experiences part.

Daemon! Can you not see how this just sounds like some epicycle theory? Sure, the earth being still and the center of the universe works just fine if you add enough epicycles to the heavens, but what's this experience doing for me other than just mucking up the story of what understanding and agency means?

That is what I mean by baggage, and you've never justified this baggage.
Daemon October 31, 2021 at 17:47 #615110
Thanks for the thoughtful and thought-provoking response InPitzotl.

Quoting InPitzotl
We might say you're an individual because you are "of one mind".


That's broadly what I meant when I said that it's experience that makes you an individual, but you seem to think we disagree.

For me "having a mind" and "having experiences" are roughly the same thing. So I could say having a mind makes you an individual, or an entity, or a person. Think about the world before life developed: there were no entities or individuals then.

Quoting InPitzotl
The robot is trying to buy bananas. But this is metaphorical language. — Daemon

Not in the robot case. This is no mere metaphor; it is literally the case that the robot is trying to buy bananas.


Suppose instead of buying bananas we asked the robot to control the temperature of your central heating: would you say the thermostat is only metaphorically trying to control the temperature, but the robot is literally trying? Could you say why, or why not?

For me, "literally trying" is something only an entity with a mind can do. They must have some goal "in mind".

Quoting InPitzotl
It's as if meaning isn't about meaning any more; it's about meaning with experiences. Meaning without experiences cannot be meaning, even though it's exactly like meaning with experiences save for the definitive having the experiences part.


The Oxford English Dictionary's first definition of "mind" is: "The element of a person that enables them to be aware of the world and their experiences".

The word "meaning" comes from the same Indo-European root as the word "mind". Meaning takes place in minds.

The robot has no mind in which meaning could take place. Any meaning in the scenario with the bananas is in our minds. The robot piggybacks on our understanding, which is gained from experience. If nobody had experienced bananas and shops and all the rest, we couldn't program the robot to go and buy them.

InPitzotl October 31, 2021 at 22:48 #615314
Quoting Daemon
That's broadly what I meant when I said that it's experience that makes you an individual, but you seem to think we disagree.

My use of mind here is metaphorical (a reference to the idiom "of one mind").

Incidentally, I think we do indeed agree on a whole lot of stuff... our conceptualization of these subjects is remarkably similar. I'm just not discussing those pieces ;)... it's the disagreements that are interesting here.
Quoting Daemon
Think about the world before life developed: there were no entities or individuals then.

I don't think this is quite our point of disagreement. You and I would agree that we are entities. You also experience your individuality. I'm the same in this regard; I experience my individuality as well. Where we differ is that you think your experience of individuality is what makes you an individual. I disagree. I am an individual for other reasons; I experience my individuality because I sense myself being one. I experience my individuality like I see an apple; the experience doesn't make the apple, it just makes me aware of the apple.

Were "I" to have AHS, and my right hand were the alien hand, I would experience this as another entity moving my arm. In particular, the movements would be clearly goal oriented, and I would pick up on this as a sense of agency behind the movement. I would not in this particular condition have a sense of control over the arm. I would not sense the intention of the arm through "mental" means; only indirectly through observation in a similar manner that I sense other people's intentions. I would not have a sense of ownership of the movement.
Quoting Daemon
Suppose instead of buying bananas we asked the robot to control the temperature of your central heating: would you say the thermostat is only metaphorically trying to control the temperature, but the robot is literally trying?

Yes; the thermostat is only metaphorically trying; the robot is literally trying.
Quoting Daemon
Could you say why, or why not?

Sure.

Consider a particular thermostat. It has a bimetallic coil in it, and there's a low knob and a high knob. We adjust the low knob to 70F, and the high to 75F. Within range nothing happens. As the coil expands, it engages the heating system. As the coil contracts, it engages the cooling system.

Now introduce the robot and/or a human into a different environment. There is a thermometer on the wall, and a three-way switch with positions A, B, and C. A is labeled "cool", B "neutral", and C "heat".

So we're using the thermostat to maintain a temperature range of 70F to 75F, and it can operate automatically after being set. The thermostat should maintain our desired temperature range. But alas, I have been a bit sneaky. The thermostat should maintain that range, but it won't... if you read my description carefully you might spot the problem. It's miswired. Heat causes the coil to expand, which then turns on the heating system. Cold causes the coil to contract, which then turns on the cooling system. Oops! The thermostat in this case is a disaster waiting to happen; when the temperature goes out of range, it will either max out your heating system, or max out your cooling system.

Of course I'm going for an apples to apples comparison, so just as the thermostat is miswired, the switches are mislabeled. A actually turns on the heating system. C engages the cooling system. The human and the robot are not guaranteed to find out the flaw here, but they have a huge leg up over the thermostat. There's a fair probability that either of these will at least return the switch to the neutral position before the heater/cooler maxes out; and there's at least a chance that both will discover the reversed wiring.

This is the "things go wrong" case, which highlights the difference. If the switches were labeled and the thermostat were wired correctly, all three systems would control the temperature. It's a key feature of agentive action to not just act, but to select the action being enacted from some set of schemas according to their predicted consequences in accordance with a telos; to monitor the enacted actions; to compare the predicted consequences to the actual consequences; and to make adjustments as necessary according to the actuals in concordance with attainment. This feature makes agents tolerant against the unpredicted. The thermostat is missing this.
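The miswired scenario described above can be simulated in a few lines to make the contrast concrete. This is only a rough sketch of the scenario; the controller names, the one-degree-per-step plant, and the agent's repair rule are all my own illustration, not anything specified in the discussion.

```python
# Both controllers face a miswired plant: the "heat" action actually cools
# and the "cool" action actually heats, mirroring the reversed wiring above.

def plant(temp, action):
    return temp + {"heat": -1.0, "cool": +1.0, "neutral": 0.0}[action]

def thermostat(temp):
    # Fixed rule, no monitoring of consequences: the coil just engages a system.
    if temp < 70: return "heat"
    if temp > 75: return "cool"
    return "neutral"

def agent(temp, last_temp, last_action):
    # Compares actuals against predictions: if the last action moved the
    # temperature the wrong way, switch to the opposite action.
    if 70 <= temp <= 75:
        return "neutral"
    wanted_up = temp < 70
    if last_action != "neutral":
        moved_up = temp > last_temp
        if moved_up != wanted_up:  # prediction failed: wiring must be reversed
            return "cool" if last_action == "heat" else "heat"
        return last_action
    return "heat" if wanted_up else "cool"

def run(controller, temp=68.0, steps=10, stateful=False):
    last_temp, last_action = temp, "neutral"
    for _ in range(steps):
        action = controller(temp, last_temp, last_action) if stateful else controller(temp)
        last_temp, temp = temp, plant(temp, action)
        last_action = action
    return temp

print(run(thermostat))            # runs away: keeps "heating", which cools
print(run(agent, stateful=True))  # notices the reversal and recovers into range
```

The thermostat drives the temperature further out of range every step, while the agent, after one failed prediction, reverses its action and settles into the target band. That single comparison of predicted against actual consequences is the feature the thermostat lacks.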

ETA:
Quoting Daemon
The word "meaning" comes from the same Indo-European root as the word "mind". Meaning takes place in minds.

The word "atom" comes from the Latin atomus, which is an indivisible particle, which traces to the Greek atomos meaning indivisible. But we've split the thing. The word "oxygen" derives from the Greek "oxys", meaning sharp, and "genes", meaning formation; in reference to the acidic principle of oxygen (formation of sharpness aka acidity)... which has been abandoned.

Meaning is about intentionality. In regard to external world states, intentionality can be thought of as deferring to the actual. This is related to the part of agentive action which not only develops the model of world states from observation, and uses that model to align actions to attain a goal according to the predictions the model gives, but observes the results as the actions take place and defers to the observations in contrast to the model. In this sense the model isn't merely "about" itself, but "about" the observed thing. That is intentionality. Meaning takes place in agents.
Joshs November 01, 2021 at 22:19 #615699
Reply to InPitzotl Quoting InPitzotl
Meaning is about intentionality. In regard to external world states, intentionality can be thought of as deferring to the actual. This is related to the part of agentive action which not only develops the model of world states from observation, and uses that model to align actions to attain a goal according to the predictions the model gives, but observes the results as the actions take place and defers to the observations in contrast to the model. In this sense the model isn't merely "about" itself, but "about" the observed thing. That is intentionality. Meaning takes place in agents.


This may be off topic, but that's one definition of intentionality, but not the phenomenological one. In phenomenology, objects are given through a mode of givenness, so the model participates in co-defining the object.
InPitzotl November 01, 2021 at 22:39 #615713
Quoting Joshs
This may be off topic, but that's one definition of intentionality, but not the phenomenological one.

I don't think it's off topic, but just to clarify, I am not intending to give this as a definition of intentionality. I'm simply saying there's aboutness here.
Daemon November 08, 2021 at 15:19 #618260
Quoting InPitzotl
This feature makes agents tolerant against the unpredicted. The thermostat is missing this.


I've looked at this many times, and thought about it, but I just can't see why you think it is significant. Why do you think being tolerant against the unpredicted makes something an agent, or why do you think being tolerant against the unpredicted means that the robot is trying?

But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way. The computer or robot is not intrinsically an entity, in the way you are.

I was thinking about this just now when I saw this story "In Idyllwild, California a dog ran for mayor and won and is now called Mayor Max II".

The dog didn't run for mayor. The town is "unincorporated" and doesn't have its own local government (and therefore doesn't have a mayor). People are just pretending that the dog ran for mayor.

In a town which was incorporated and did have a mayor, a dog would not be permitted to run for office.
InPitzotl November 08, 2021 at 22:57 #618392
Quoting Daemon
I've looked at this many times, and thought about it, but I just can't see why you think it is significant.

I think you took something descriptive as definitive. What is happening here that isn't happening with the thermostat is deference to world states.
Quoting Daemon
But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way.

You're just ignoring me then, because I did indeed address this.

You've offered that experience is what makes us entities. I offered AHS as demonstrating that this is fundamentally broken. An AHS patient behaves as multiple entities. Normative human subjects by contrast behave as one entity per body. Your explanation simply does not work for AHS patients; AHS patients do indeed experience, and yet they behave as multiple entities. The defining feature of AHS is that a part of the person seems to act autonomously, independently of the rest. This points to exactly what I was telling you... that being an entity is a function of being agentively integrated.

So I cannot take it seriously that you don't believe I have addressed this.
Quoting Daemon
The computer or robot is not intrinsically an entity, in the way you are.

I think you have some erroneous theories of being an entity. AHS can be induced by corpus callosotomy. In principle, given a corpus callosotomy, your entity can be sliced into two independent pieces. AHS demonstrates that the thing that makes you an entity isn't fundamental; it's emergent. AHS demonstrates that the thing that makes you an entity isn't experience; it's integration.
Quoting Daemon
I was thinking about this just now when I saw this story "In Idyllwild, California a dog ran for mayor and won and is now called Mayor Max II".

Not sure what you're trying to get at here. Are you saying that dogs aren't entities? There's nothing special about a dog-not-running for mayor; that could equally well be a fictional character or a living celebrity not intentionally in the running.
Daemon November 09, 2021 at 11:36 #618581
Reply to InPitzotl You're correct to point out that I ignored your discussion of AHS etc, apologies. I ignored it because it doesn't stand up, but I should have explained that.

Cutting the corpus callosum creates two entities.
InPitzotl November 09, 2021 at 12:49 #618588
Quoting Daemon
Cutting the corpus callosum creates two entities.

So if experience is what makes us an entity, how could that possibly happen?

If integration makes us an entity, you're physically separating two hemispheres. That's a sure fire way to disrupt integration. The key question then becomes, if cutting the corpus callosum makes us two entities, why are we one entity with an intact corpus callosum, as opposed to those two entities?

Incidentally, we're not always that one entity with an intact corpus callosum. A stroke can also induce AHS.
Daemon November 09, 2021 at 13:05 #618594
Quoting InPitzotl
So if experience is what makes us an entity, how could that possibly happen?


Two experiencing entities.

But I don't really think the effects of cutting the corpus callosum are as straightforward as they are sometimes portrayed, for example, the person had a previous life with connected hemispheres.

Quoting Daemon
But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way.


I don't think the "corpus callosum/AHS" argument addresses this.

InPitzotl November 09, 2021 at 13:33 #618602
Quoting Daemon
Two experiencing entities.

Alright, I think we're talking past each other a bit. The two (mind you, not one) experiencing entities are a result of corpus callosotomy. The notion that experience is what makes you an entity cannot account for the fact that a corpus callosotomy should make two entities. Agentive integration by contrast explains why you are a single entity. The notion that you are an entity due to agentive integration does account for the fact that a corpus callosotomy should make two entities. Once again, experience is doing no work for you here; it's an epicycle.

Quoting Daemon
But I don't really think the effects of cutting the corpus callosum are as straightforward as they are sometimes portrayed, for example, the person had a previous life with connected hemispheres.

No idea what you're saying here. Are you suggesting there are two individuals before the corpus callosotomy?
Quoting Daemon
I don't think the "corpus callosum/AHS" argument addresses this.

Quite the opposite; see above. We can take an external view as well:

https://movie-usa.glencoesoftware.com/video/10.1212/WNL.0000000000006172/video-1
From: https://n.neurology.org/content/91/11/527

This person did not have a corpus callosotomy; she had a stroke (see article). It is very obvious that she has AHS. That's also curious... why is it obvious? What behaviors is she exhibiting that suggest AHS?
Daemon November 09, 2021 at 13:59 #618610
Quoting InPitzotl
The two (mind you, not one) experiencing entities are a result of corpus callosotomy


I don't understand why you are telling me that, as if it was a point against me.

Quoting InPitzotl
This person did not have a corpus callosotomy; she had a stroke (see article). It is very obvious that she has AHS. That's also curious... why is it obvious? What behaviors is she exhibiting that suggest AHS?



I don't understand why you are asking me that.

InPitzotl November 09, 2021 at 14:09 #618613
Quoting Daemon
I don't understand why you are telling me that, as if it was a point against me.

Because you keep asking about being an entity, but you're not accounting for the number here, even while you keep saying that I haven't accounted for things.
Quoting Daemon
I don't understand why you are asking me that.

Because we can indeed tell by her behaviors. The subject talking to us is behaving as if her alien hand is a stranger. And you aren't diagnosing her alien hand by counting how many "experiences" there are. Her behavior is distinct from a normative case, but also distinct from that of someone who has half their body paralyzed after a stroke. There's still agency in there, just not integrated. Apparently you think that's a bad description, but it's kind of definitive of the condition.
TheMadFool November 10, 2021 at 11:58 #618906
It's true that understanding is, from the standpoint of someone intent on conveying (say) a point, restricted to that point. For instance, if I have a certain idea, call it x, that I want to share with you, I would only deem you to have understood when you too have, let's just say, an exact copy of x in your mind. In a sense, then, understanding boils down to minds xeroxing the contents of other minds. What are the ramifications of such an interpretation? Your guess is as good as mine.

However, I don't see why understanding should be limited/constrained in this way. The Buddha, it's said, once saw a pot of gold and exclaimed to his disciples "look, a snake!"