The important question of what understanding is.
Quoting TheMadFool
That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI? Speaking from my own experience, understanding/comprehension seems to start off at the very basic level of matching linguistic symbols (words, spoken or written) to their respective referents e.g. "water" is matched to "cool flowing substance that animals and plants need", etc. This is clearly something a computer can do, right? After such a basic cognitive vocabulary is built what happens next is simply the recognition of similarities and differences and so, continuing with my example, a crowd of fans moving in the streets will evoke, by its behavior, the word and thus the concept "flowing" or a fire will evoke the word/concept "not cold" and so on. In other words, there doesn't seem to be anything special about understanding in the sense that it involves something more than symbol manipulation and the ability to discern like/unlike thing.
I have been a professional translator for 20 years. My job is all about understanding. I use a Computer Assisted Translation or CAT tool.
The CAT tool suggests translations based on what I have already translated. Each time I pair a word or phrase with its translation, I put that into the "translation memory". The CAT tool sometimes surprises me with its translations; it can feel quite spooky, as if the computer understands. But it doesn't, and it can't.
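The mechanism described here, storing source/target segment pairs and suggesting matches for new text, can be sketched in a few lines. This is a hypothetical toy, not how any real CAT tool is implemented; the threshold and the example sentences are invented for illustration:

```python
import difflib

# Toy "translation memory": stores source/target segment pairs and suggests
# the stored translation whose source is most similar to a new segment.
# Purely illustrative; real CAT tools use far more sophisticated matching.
class TranslationMemory:
    def __init__(self):
        self.pairs = {}  # source segment -> target segment

    def add(self, source, target):
        self.pairs[source] = target

    def suggest(self, segment, threshold=0.7):
        best, best_score = None, 0.0
        for src, tgt in self.pairs.items():
            score = difflib.SequenceMatcher(None, segment, src).ratio()
            if score > best_score:
                best, best_score = tgt, score
        return best if best_score >= threshold else None

tm = TranslationMemory()
tm.add("Press the start button.", "Druk op de startknop.")
# A near-match is enough to trigger a suggestion:
print(tm.suggest("Press the stop button."))   # suggests "Druk op de startknop."
print(tm.suggest("The weather is lovely."))   # no match above threshold: None
```

Note that the near-match suggestion here is wrong ("stop" is not "start"), which is arguably the point of the post: the tool matches character strings, it does not understand buttons.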
I do a wide range of translation work. I do technical translations, operating and maintenance instructions for machines for example. To understand a text like that, you need to have had experience of work like that. Experience is the crucial element the computer lacks. Experience of all facets of our world. For example, to understand fundamental concepts like "up" and "down", "heavy" and "light", you need to have experienced gravity.
I translate marketing texts. Very often my clients want me to make their products sound good, and they want their own customers to feel good about their products and their company. To understand "good" you need to have experienced feelings like pleasure and pain, sadness and joy, frustration and satisfaction.
I translate legal texts, contracts, court documents.
A. The councillors refused to allow the protestors to demonstrate, because they advocated violence.
B. The councillors refused to allow the protestors to demonstrate, because they feared violence.
A computer can't understand that "they" applies to the protestors in A but the councillors in B, because it's not immersed in our complex world of experience.
Nobody masters a language with no idea what they are saying. I don't know why you would suggest such a thing.
Yes, human translators sometimes don’t understand what they are translating. Everyone has been baffled by poorly translated product instructions I guess. And sometimes this is because the human translator does not have experience assembling or using the product, or products like it.
Because it is true? But maybe you don't get my meaning and are making it too concrete. I've met dozens of folk who work in government and management who can talk for an hour using ready to assemble phrases and current buzz words without saying anything and - more importantly - not knowing what they are saying.
Yes, as long as you obey certain rules, understanding the meaning can be secondary
You are speaking rather loosely here. Exaggerating.
One assumes when they're finished bullshitting you they go back to speaking a language they do understand, so you haven't addressed the OP, have you?
Not by much.
I think the OP's point was that context is important for translation. You seemed to be arguing to the contrary. Now I don't know what your point was.
Thanks for your comment. I just watched a video on minds & machines - The Turing Test & Searle's Chinese Room Argument. As per the video, a computer that passes the Turing Test does so solely based on the syntactic properties, and not the semantic properties, of symbols. So, at best, an AI that has passed the Turing Test is simply a clever simulation of a human being, no more, no less.
My own view on the matter is that semantics is simply what's called mapping: a word X is matched to a corresponding referent, say, Jesus. I'm sure this is possible with computers. It's how children learn to speak, using ostensive definitions. We could start small and build up from there, as it were.
The examples I gave were intended to illustrate that semantics isn't simply mapping!
Mapping is possible with computers, that's how my CAT tool works. But mapping isn't enough, it doesn't provide understanding. My examples were intended to illustrate what understanding is.
Children learn some things by ostensive definition, but that isn't enough to allow understanding. I have a two-year-old here. We've just asked him "do you want to play with your cars, or do some gluing?"
He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire.
It's not unlike what we humans do. Our "translation memory" simply is the mental act of association. I take input/sensation A (I see, feel or taste an apple) and link it to input/sensation B (the word apple in auditory or visual form).
Quoting Daemon
Quoting Daemon
Quoting Daemon
Machines can experience physical phenomena that reflect our perception - from cameras to thermometers, pressure and gyro sensors - every one of our senses can be replicated digitally. This means that fundamental concepts like "up" and "down", "heavy" and "light" can indeed be experienced by computers.
Your last example though is a whole different phenomenon, and this is where it gets interesting. Qualia - emotional sensations like happiness, sadness, desire and the like - cannot be found and measured in the physical realm. I have a hard time imagining that AI will ever get a good grip on these concepts.
I think it's a question of how mechanized someone perceives our human organism to be. After all, the statement that there are no physical phenomena corresponding to emotions is false. Strictly speaking, it's all chemistry - qualitatively and quantitatively measurable. If and how this could possibly translate to machines perceiving emotion is beyond me. All I know is that it does raise one of the most interesting philosophical questions that I have seen in sci-fi: if a robot seemingly acts and feels like a human, how are we to know whether they are merely acting or if they actually engage with sensation and stimulation in the same way we do?
A camera does not see. A thermometer does not feel heat and cold. A pressure sensor does not feel pressure.
I worked at an AI company for a few months, some time back. The biggest problem they had was in imparting 'context' to their agent. She (she was given a female persona) always seemed to lack a sense of background to queries.
One of my early experiences is illustrative. I had a set of test data to play with, from supermarket sales. I noticed you could request data for single shoppers or families with children. I asked 'Shirley' (not her name, but that's a trade secret) if she had information on bachelors. After a moment, she asked 'Bachelor: is that a kind of commodity: olive?' So she was trying to guess if the word 'bachelor' was a kind of olive. I was super-impressed that she tried to guess that. But then the guess was also kind of clueless. This kind of issue used to come up a lot. Like, I notice with Siri, that there's certain contextual things she will never get. (I also have Alexa, but all she does is play the radio for me.)
Quoting Hermeticus
That is lumpen materialism. There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone. It's that they also encode memory which is transmitted in the form of DNA. Ernst Mayr, a leading theoretical biologist, said 'The discovery of the genetic code was a breakthrough of the first order. It showed why organisms are fundamentally different from any kind of nonliving material. There is nothing in the inanimate world that has a genetic program which stores information with a history of three thousand million years’. Furthermore, all beings, even simple ones, are also subjects of experience, not simply objects of analysis, which introduces a degree of unpredictability which no simple objective model can hope to capture.
The physical principles behind these sensors and the senses of our body are literally the same. The difference is in the signal that is sent thereafter (and even then, both signals are electric) and how the signal is processed.
It goes way further than that though. The field of bionic prosthetics has already managed to send all the right signals to the brain. There are robotic arms that allow the user to feel touch. They are working on artificial eyes hooked up to the optic nerve - and while they're not quite finished yet, the technology already is proven to work.
Quoting Wayfarer
When we talk about what is, it's easiest to speak in terms of materialism. If two processes are comparable, one biological, one mechanical, why shouldn't I be able to compare them? As I said:
Quoting Hermeticus
I was going to agree with "Living beings can not be described in terms of chemistry alone" but the more I think about it - I'm not so sure. Your example doesn't make any sense to me either way. What do you think DeoxyriboNucleic Acid is, if not chemistry?
Bacteria can swim up or down what is called a chemical gradient. They will swim towards a source of nutrition, and away from a noxious substance. In order to do this, they need to have a form of "memory" which allows them to "know" whether the concentration of a chemical is stronger or weaker than it was a moment ago.
https://www.cell.com/current-biology/comments/S0960-9822(02)01424-0
Here's a brief extract from that article:
Increased concentrations of attractants act via their MCP receptors to cause an immediate inhibition of CheA kinase activity. The same changes in MCP conformation that inhibit CheA lead to relatively slow increases in MCP methylation by CheR, so that despite the continued presence of attractant, CheA activity is eventually restored to the same value it had in the absence of attractant. Conversely, CheB acts to demethylate the MCPs under conditions that cause elevated CheA activity. Methylation and demethylation occur much more slowly than phosphorylation of CheA and CheY. The methylation state of the MCPs can thereby provide a memory mechanism that allows a cell to compare its present situation to its recent past.
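The methylation "memory" in that extract can be caricatured as a fast signal compared against a slowly adapting baseline. The sketch below is a deliberately crude toy model with invented rate constants, just to show the adaptation dynamic the article describes: a transient response that decays back to its pre-stimulus value despite the continued presence of attractant.

```python
# Crude toy model of bacterial chemotaxis adaptation: the "response"
# (analogous to CheA inhibition) reacts instantly to attractant, while a
# slow variable (analogous to MCP methylation) catches up over time, so
# the response decays back toward zero despite constant attractant.
def chemotaxis_response(concentrations, adapt_rate=0.1):
    baseline = concentrations[0]  # slow "memory" of recent concentration
    responses = []
    for c in concentrations:
        responses.append(c - baseline)           # fast comparison to recent past
        baseline += adapt_rate * (c - baseline)  # slow adaptation
    return responses

# Step increase in attractant at t=5: strong transient response, then decay.
resp = chemotaxis_response([1.0] * 5 + [2.0] * 20)
print(resp[5])    # immediate response to the step: 1.0
print(resp[-1])   # nearly fully adapted, even though attractant is still present
```

This is exactly the comparison of "present situation to recent past" the article attributes to methylation state; whether that mechanism amounts to experience is, of course, the question under dispute here.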
The bacterium does not experience the chemical concentration.
The "memory" encoded by DNA can also be described entirely in terms of chemistry. So I think Mayr got this wrong.
Thanks for trying to clarify the issue for me. Much obliged. Please tell me,
1. What understanding is, if not mapping?
2. Whatever thinking is, it seems to be some kind of pattern recognition process. That looks codable? Semantics are patterns e.g. dogs = domesticated (pattern) wolves (pattern).
In short, semantics seems to be within the reach of computers provided pattern recognition can be coded.
What say you?
But this can't be done without using the brain!
Quoting Hermeticus
Quoting Hermeticus
Quoting Hermeticus
It's hard to picture an artificial brain because we don't even fully understand how our brains work. It's a matter of complexity. Our understanding of it is getting better and better though. On what basis can we say that an artificial brain wouldn't be possible in the future?
We can't, but this is science fiction, not philosophy. I love science fiction, but that's not what I want to talk about here.
My examples were intended to illustrate what understanding is.
Something which no inorganic substance will do. Nobody would deny that such behaviours involve chemistry, but they're not reducible to chemistry.
Quoting Daemon
Perhaps behaviour is what experience looks like from the outside.
Automated Theorem Proving
[quote=Wikipedia]Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs.[/quote]
I'm no mathematician, but if math proofs are anything like proofs in philosophy, semantics is a cornerstone. One has to understand the meaning of words and sentences/statements.
In automated theorem proving computers have been used to prove math theorems but we know that computers are semantics-blind and can only manage syntax. Yet, they're doing things (proving theorems) which, in our case, requires understanding. My question is, can semantics be reduced to syntax?
They aren't doing things, we are using them to do things.
It's the same with an abacus. You can push two beads to one end of the wire, but the abacus isn't then proving that 1 + 1 = 2.
Well, we're talking about understanding and you made your premise experience. I've argued that it's absolutely possible for an AI to have the same experiences we have with our senses and that it's merely a question of how the content of these experiences are processed. If we're not talking about hypotheticals then the answer is obviously no, AI can not understand like humans do.
If you just want to talk about what understanding in general is, I'm totally with @TheMadFool here. Understanding is mapping. Complex chains of association between sensations and representations.
So, if we ask a group of grade 1 students to carry out the math operation 1 + 1, they aren't doing things, we are using them to do things.
Thank you!
But computers don't have sensations, they don't make associations, they don't use representations.
In science fiction?
I'm not gonna repeat myself forever.
Quoting Daemon
Quoting Hermeticus
If we were to talk in hypotheticals:
Sensation
Quoting Hermeticus
We already have this.
Association
Quoting Hermeticus
We don't have this yet, hence I raised the point of artificial brain.
And as for representations - computers are literally built on it. They're a representational system. Everything you see on your browser is a representation of a programming language. The programming language is a representation of another programming language (machine code). Machine code is a representation of bit manipulation. Bits are a representation of electric current.
I'm not interested in discussing hypotheticals. The Cambridge Dictionary says hypothetical means "imagined or suggested, but perhaps not true or really happening".
Quoting Hermeticus
We do not. Sensors do not have sensations.
Quoting Hermeticus
But the representation is to us, not to the computer. All there is in the computer is electric current. No bits, no languages. We say that the electric current represents something, in the same way that the beads on an abacus represent numbers.
Mapping is not understanding, as illustrated by my examples.
Quoting 180 Proof
I.e. performative competences developed by lived-experience (of failure and adaptation).
Ok so, what's your definition of understanding?
Please don't repeat yourself by saying, "...as illustrated by my examples...".
No, I think you're not correct Tim. You can take the comma out of both sentences or add parentheses to both if you wish, without affecting the meaning (or the grammaticality).
But why not? As Wittgenstein famously observed "meaning is use". You can tell what I mean by "understanding" by the way I use it in my examples. I'm using it in the standard way. I could of course provide you with dictionary definitions of "understand", but it hardly seems necessary as you already know how the word is normally used. If you didn't already understand the word, you wouldn't understand the definition.
Red Herring :yawn:
I kept my end of the bargain, you should keep yours.
Understand: perceive the intended meaning of (words, a language, or a speaker).
That's mapping words to referents.
These are different meanings. In A the councillors advocate violence and in B they fear violence (which has two meanings in and of itself).
If something is poorly written then it is harder to translate. Don't blame the computer for someone's lack of clarity in their writing.
You have misunderstood the sentences!
I believe the rule is that if it isn't clear who is giving the reason, we go with the subject, not the object. In day-to-day speech there is no need, as the sentence is usually understood within the given context. They are both open to a degree of interpretation that would be cleared up by the sentences that precede or follow.
As stand-alone sentences I would assume the 'councillors' are the ones 'fearing violence' or 'advocating violence'.
The point of the example, which it seems is rather wasted on you, is that we already understand the context without needing to see preceding and succeeding sentences. We know how councillors and protestors behave.
A program would certainly have to be programmed to better adjust to what is a living and changing language, not one that is set in stone. The advent of the internet has already dramatically changed the evolution of human languages.
It's not a better example, it's just a slightly less interesting example.
Quoting I like sushi
No, that completely misses the point of my example! The point was to show how our immersion in a world of experience allows us to understand things which a computer can't understand.
"The store has bananas" might be translated by the CAT tool from another language; perhaps it's translating to French, and it would map "banana" to "banane". That's a mapping of symbols to symbols.
But the referents for bananas aren't in English or French dictionaries... they are in store shelves, inside pies, and so on. What TMF is talking about is a mapping from "banana" to the stuff on the store shelf, the stuff infused within banana bread, the stuff in banana cream pies.
I think TMF is just having problems expressing this... on a forum, we generally use words. But the referents here are not words.
Your examples have different interpretations and are 'right' or 'wrong' depending on how you've framed them. If you cannot see that, it's a problem.
I guess you wrote these sentences because you seem to be offended because myself, and another above, have pointed out they are poorly written.
Computer translators are not programmed to understand slang, idioms or metaphors right? I imagine they may have some in their database yet they don’t ‘know’ when and when not to apply the rule - unless the writer has put the saying in special parenthesis?
I did not write those sentences, I am not offended, they are not poorly written, and you are still completely missing the point.
I guess a question would be: how do you know your experiences are similar enough to allow understanding?
Well, in my translation memory, in my computer, I would have a Dutch word, "banaan", and an English translation, "banana". Can you tell me how I could get all that stuff about the store shelf and banana bread into my translation memory? Or the stuff about councillors and protestors?
Not a very clear question Frank. But in 20 years of full time work as a translator I've translated around 10 million words, I very rarely receive complaints about my translations, my work is checked by an editor and I very rarely receive corrections from them, and my customers keep coming back to me and paying for my services. Does that answer your question?
Hm, I can see the point. Why not:
Out of fear of violence, the councillors refused to allow the protestors, who are known to advocate violence, to demonstrate.
Or something of the like. Granted not everyone speaks casually in such a manner so it is useful for any application that plans to be relevant to be able to recognize as much variation in sentence construction as possible. Which as has been noted, is quite difficult.
Edit: And of course technically both sentences can mean either or, with a little thought. Granted we know and should assume the same meanings as in the OP, but there's nothing that prevents the opposite.
No you can't. You're missing the point completely.
Would you perhaps mind explaining it then, seeing as you now hold the minority viewpoint of 'understanding' in this discussion?
Is it possible that this is happening in spite of a rift in understanding? Could it be that you're applying certain rules correctly, and so there are no complaints about your service, and yet there is no communing of intent?
How would you prove that this extra thing beyond rule following, this 'understanding' exists?
A. The councillors refused to give the protestors permission for their demonstration as they advocated violence.
B. The councillors refused to give the protestors permission for their demonstration as they feared violence.
In A. "they" refers to the protestors, in B. it refers to the councillors. We know this because of our experience of the world. It's an example of something a computer couldn't know.
Well, for example, there are sometimes mistakes in the source text. Maybe somebody writes "the saw blade must be touched with the fingers while it is still rotating". So I write to the customer and say "I think you missed out the word 'not' here". And they say "yes, thank you, you're right".
Does that answer your question?
They only refer to the protestors and councilors respectively, because the father, or author, of the sentence determined so. Or I suppose "it simply happened that way" or as you say, that's just how "the world" (generally) works.. There are numerous scenarios, one of which has been posted previously, where it could easily be the opposite.
The same (likely) context recognition could be achieved, albeit haphazardly, with a 'word map' database.
Councilor = government, order, ruler, leader, society, peace, stability
Protestor = worker, grievance, anger, rebellion, uprising, turmoil, injustice
More general words (violence), matched with context-specific words (they) that happen to match a subsequent 'word map' of words relevant to or associated with each party or subject, can more often than not determine which party to apply said word to. It would take a great deal of finagling, sure. But it's doable. Not with any laser accuracy, of course. Which I suppose was your point.
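The 'word map' idea above can be sketched as a crude association-scoring heuristic. The association sets below are invented for illustration (a real system would derive them from corpus statistics), and the approach is, as conceded, haphazard: it only works while the context matches the stereotypes baked into the lists.

```python
# Crude sketch of the 'word map' heuristic: resolve "they" by checking
# which candidate referent's association set contains the clause's key verb.
# The association sets are invented for illustration and encode nothing
# but stereotypes about how councillors and protestors usually behave.
WORD_MAPS = {
    "councillors": {"govern", "order", "permit", "fear", "stability"},
    "protestors": {"advocate", "anger", "rebel", "uprising", "turmoil"},
}

def resolve_they(key_verb):
    matches = [ref for ref, assoc in WORD_MAPS.items() if key_verb in assoc]
    return matches[0] if len(matches) == 1 else None  # ambiguous/unknown: give up

print(resolve_they("advocate"))  # picks the protestors (sentence A)
print(resolve_they("fear"))      # picks the councillors (sentence B)
```

The heuristic reproduces the expected readings only because the expected readings were hand-coded into it; in the hypothetical town with corrupt, violent councillors, the same tables give the wrong answer.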
What exactly are we discussing and for what purpose? I do fail to see the profoundness or any possible fruit of this topic. Computers, AI =/= human comprehension. I doubt there was any disagreement at any point.
To most people. Why do you keep refusing to accept this? In a place where the councillors are corrupt/vicious why not the opposite.
A computer cannot understand anything. It is CODED, not THINKING. Other than that, what is your point? I don't actually see one, but I'm assuming there is one somewhere; that is why I'm persisting.
Precisely
I have been a professional translator (freelance) for the same amount of years!
However, I never thought of my job as something that is all about understanding. Understanding is of course essential, but it refers only to part of the whole process. The most important things in a translation are 1) being proficient in the language you translate into (the target language) and 2) being able to relay information as accurately as possible without being literal. The level of accuracy depends of course on the subject: the more technical the subject, the more accurate one has to be. On the other hand, if the subject is literary, one can relax on accuracy and rely more on expression. But still, the meaning of the source text always has to be relayed.
All this is an art, and this is how I see translation. Writing is an art. So is translation. Only that here the ideas come from someone other than yourself.
Quoting Daemon
Yes, CAT tools are very good, but mainly for technical subjects. I used them extensively in translating manuals (75% of my total workload!). But for general text, I use Google translation, which I call "pre-translation". In the past Google translations were quite inferior --in Greek, which is my native language, they were actually deplorable, because of the complexity of Greek grammar-- but these days they are really excellent, even in Greek! Most probably because of their hugely increased database of words/terms, phrases and even full sentences. So, after that, your task is only to correct minor mistakes and trim the text in general. It's there that proficiency in your native language comes in as the most important element. Understanding becomes of secondary importance. It's a fact.
That's enough about translation! :smile:
***
Now, I don't know how you have reduced such an interesting topic as "The important question of what understanding is" to a translation subject! I have a lot to say about "understanding", what it is, how it works, etc., but it seems that is not what matters anymore! :smile:
Because I was responding to something TheMadFool said, which I quoted at the very start of this thread:
Quoting TheMadFool
We could of course talk about a single language, but discussing computer translation is an excellent way to address the question of understanding.
Quoting Outlander
Quoting Daemon
It wasn't really a question. :razz:
You don't know for sure that you and your client have the same understanding.
In exactly the same way, you don't know that the world is out there as it appears to be.
You get by just fine not knowing these things. Or we could say you know one just as well as you know the other.
I can sketch it out.
You need some bootstrap capabilities outside of dictionaries... things like what humans have; e.g.:
Quoting Daemon
...the ability to see. Add to that some basic sapience. The general idea is that this should have the ability to interact with reality in real time on scales roughly approximating that of your typical language using naked apes. Some of this interaction would involve exploiting "seeing" (or other kinds of sensations) in the attainment of goal oriented behaviors analogous to how we "intentionally do things"; i.e., at roughly the same levels of abstractions as the "things we do" or, more to the point, at roughly the same levels of abstractions as the "things we talk about".
Once you have such a thing, we need two more ingredients to make it final: (a) a banana, (b) a shelf. All of this, or something akin to it, would need to be in place before you can have something to map "banana" to and call it understanding.
I skipped a few steps, but it's not like I wouldn't have had to skip steps anyway at some point; I have never built such a thing.
I like this very much. Whether one could somehow, someday develop an artificial system that could deal with such a case, who knows. I lean toward your view, but I wouldn't put money on it either way.
But it is a lovely example of the sort of thing we manage easily everyday, only noticing when it goes wrong for some reason. Funny things, pronouns
Quoting Daemon
Of course it isn't. I'm surprised anyone would think it is. In point of fact, I'm not even sure what it's supposed to mean: people look up the meanings of words in dictionaries, sure, but you can't look up the meaning of a sentence in the sentence-dictionary, so if sentences have meanings, they must not "map" to them, or they must have a different kind of meaning.
That doesn't affect the point of the example. If there were such a place, the computer wouldn't have access to that external set of circumstances.
The point is that we are able to make a judgement about the meaning of the sentences which a computer can't possibly make.
This is just a waste of everybody's time. I mean, come back to us when there's a camera that can see and we'll have something to talk about.
I read this of course. But it's still about understanding ... and my wondering is still unanswered! :grin:
Quoting Daemon
Well, in that case, even if you had used a more specific title, like "Computers and understanding" or something like that, it would still be inappropriate, because computers do not possess any understanding!
Well, maybe it's not so important, generally and for most people. But it just happens that understanding and communication are among my favorite subjects. I have studied them extensively and I even taught about them (theory and practice) in the past ...
So do I Srap! It seems to have caused nothing but confusion above though.
According to my theory the artificial system would need to be able to experience and interact with the world in the way we do. It would need to experience such things as pain and pleasure in order to understand what "good" and "bad" mean. Do we really want to create artificial beings that can experience pain? Surely there's enough trouble in the world with the experiencing beings we can already create?
You were the one who asked me the question. You were also the one opening this thread with your OP, where you wrote this:
Quoting TheMadFool
...and you were the one talking about CAT tools as if that had anything to do with referents.
There's a giant difference between responding to "Can you pick up some bananas from the store?" ...by showing me the phrase translated (poorly or greatly) to Dutch; and responding to "Can you pick up some bananas from the store?" ...by showing up on my doorstep with a bunch in your hand.
Well there are at least two in this discussion, and I was attempting to apply the Principle of Charity, which asks us to:
"Assume that the opponent is making the strongest argument and interpret others as rational and competent."
Despite, I suppose, all the evidence to the contrary.
I was kinda hoping you'd realise you couldn't answer the question. In other words, you'd realise that you can't get a computer to understand things in the way we can.
TheMadFool wrote that, I was quoting him. I'm arguing against him.
Ultimately that's correct, but the gaps are really in the details.
Quoting Daemon
Sorry, I misspoke here... what I meant was that in the OP that was what you quoted. TMF did indeed write that, but he didn't explain what a referent was too well; the way he explained it, a referent could be interpreted as a phrase... so the proposal could be understood that your CAT tool might understand what "water" is if it mapped "water" to the phrase: "cool flowing substance that animals and plants need".
But that's not what the word "referent" means. The referent for "water" isn't another word (it's not "agua"); nor is it another phrase (it's not "cool flowing substance that animals and plants need"). There's no set of shapes you can squiggle on a sheet of paper that is the referent for water; instead, you're going to have to go turn your taps on, point to the stuff falling from the sky outside, or go find that stuff fish swim in. Humans who know what "water" means map that word to that stuff... and to do that, we form a concept of that stuff that comes out of taps, that stuff that falls from the sky, that stuff that fish swim in. The idea of such things is an abstraction; it's a model of the stuff we're made aware of by, say, seeing it, swimming in it, drinking it, and so on. And it is that model that we map "water" to when we understand it; not more words.
:100:
Quoting InPitzotl
Careful now...
Quoting InPitzotl
Quoting InPitzotl
Are concepts and ideas and models any more harmlessly, less misleadingly identified as the referent of "water" than are phrases like "cool flowing substance"?
And?
Oooooh! What a great question! I think this naturally falls out of our agency. We use our senses to sense the world; as we do so, we create world models. We refer to these world models, in real time even, to "do things". But we also as part of this model "project" it as something independent from us and, well, it winds up that's a good theory of what the world is. I think something along these lines (at least for claims about the state of the external world) is what gives rise to intentionality.
ETA: Just to close the loop here... when we act in the world, we're not merely using our world models... we're literally using that world. By this I mean that we don't simply imagine ourselves walking to the sink, we walk over there. These interactions are in real time, and they are updated by real time world sensations... any difference between what our world model is and these sensations is updated by deferring to the sensed world. This is the long form of what I mean by "project" here.
I agree that I don't know for sure that my translations can be correctly understood, that's part of my own philosophical position, and I also believe that, in a certain philosophical sense, words do not carry or convey meaning. But I don't tell my translation clients about any of this.
I do think there's an abundance of evidence that we are able to understand a great deal of what we say to one another. You couldn't get people to land on the moon and come back without shared understanding of language. And on a smaller scale, you and perhaps a couple of others have understood at least some of what I've said in this discussion, as evidenced by your coherent responses.
Whether the world is as it appears to be is another (vast) question, and perhaps off topic for the Philosophy of Language forum. Personally I'm satisfied that the world is enough like it appears to be for us to travel to the moon and back, and for me to make the pasta dish I'm going to eat soon.
You replied to me that your topic was a kind of answer to @TheMadFool about understanding. That didn't resolve my wondering about how the subject of "understanding" in the title got replaced by the subject of "translation" in the description! But after this, I'll stop wondering! So, don't worry! :smile:
My point was that skepticism about our ability to communicate (Quine, for instance), is very like skepticism about the world.
In both cases the skeptic belies her supposed beliefs with her behavior. And I think both kinds of skepticism are very lonely places to be. :grin:
(I think the curly c is much prettier than the kicking k in the word "scepticism").
My own provisional position is that when we say for example that a word or a sentence has or carries or conveys meaning, that is a metaphor, one we find difficult to rekognise as such.
He would say there's no fact of the matter regarding whether your translations are correct. I think he would want us to deflate the concept of correctness.
Quoting Daemon
Nietzsche agrees and so do I, except there's a special brand of language use called propositions. I think this is communication, not between people, but between an individual and the world. IOW, I think we relate to the world as if it's a person and true propositions are its utterances. I think this has its roots in the time when people really did think the world was alive.
They're special because they're supposed to transcend any particular speaker. You and I can express the same proposition in different ways at different times. This invites questions about the nature of propositions, especially false ones.
I think we relate to the world as if it can talk: we ask it questions and expect answers even if we may not understand those answers at first, as with quantum physics.
When we don't understand what the world is telling us, we proceed as if it's just a matter of asking more questions. In fact some physicists celebrate the fact that the world has not yet revealed all its secrets.
I'm saying this general framework of interrogation is something we've inherited, and it explains the nature of propositions.
It's not normal in my time to believe the world can really talk, so that's why propositions are philosophically confusing.
You and I can assert the same proposition. Logically, that means the proposition is neither our utterances nor the sentences we use. See what I mean?
All I can see at the moment is someone stating the obvious (nothing wrong with that!) and trying to look beyond the obviousness ... it is the latter part I'm having trouble seeing.
Computers don't understand and humans do. Translation programs don't 'think'. Our experience of language within a given context helps us choose the better/correct meaning behind statements made - computers are limited to what they're programmed to do.
We are self correcting and constantly learning and relearning the world about us.
Where in here is the OP's idea/point/question?
Daemon gives two sentences:
Quoting Daemon
Some would say human language is a matter of rule following, but making sense of the above sentences seems to require experience with a point of view.
It may be that you're so closely allied with Daemon's outlook that it seems he's stating the obvious, but some might argue that Daemon is wrong: no experience with life is necessary for translation. The motive for arguing that would be to eliminate any reliance on experience to explain anything, because the goal is to deny that there is any such thing as experience.
Well, there is no 'seems to' about it. There is no manual for language. Anyone 'arguing' against is just plain wrong! I think maybe that some people confuse Chomsky's view of language as saying that there are strict rules. That isn't at all what he is saying though. Undoubtedly there are certain elements that constitute what we commonly refer to as 'language,' but there is still a lot of work to do in terms of the cognitive neurosciences. Sadly a large section of the 'Philosophy of Language' group were a bit slow catching up with the science and were still occupied with problems that had been solved by neuroscience ... it takes time for things like that to bed down. Ironically habituation is a huge element of our experience and understanding about-the-world.
The only space where confusion arises is within what we're framing as 'language'. I have big issues with that. Also, some people view 'thinking' as purely about the spoken/written word where within actual studies of language this divide is not always applied (context dependent given what is actually being considered for study).
It might help us @Daemon if you told me if you'd loosely say that these here words are 'translations' of my 'cognitive capacities' expressed with the purpose of elucidating some common meaning/understanding?
I think we might be slipping into semiotics here.
@frank Any chance you could look at thread about 'Choice: The problem with power' and see if you can disagree with me or highlight something?
I Googled the phrase "Can computers think". I got 21,000 hits, including this, from Oxford University's Faculty of Philosophy (my italics):
Quoting Oxford University's Faculty of Philosophy
It seems it's still very much a live question.
From what you said previously though, we can't know if we are asserting the same proposition?
I can only suggest that you reread and ask yourself what you're referring to above^^
If you can read into what I write something that explicitly isn't there then you probably don't get paid much for your work (or shouldn't) :D
Jibing aside; have fun I'm exiting :)
I don't know what you're on about.
But oddly enough I do.
Oh good.
Sure!
Quoting Daemon
Yes. I don't think there's any logic that overcomes skepticism there, you just have to look at the cost of it: how much do you actually lose if you embrace that skepticism?
I think one result is that you can't know whether you agree with yourself from one moment to the next.
As some have noted (I think Chomsky did) if you adopt Quine's skepticism, meaning of any kind breaks down, so there would be no understanding.
Skepticism about the external world also results in a breakdown in meaning if you note Heidegger's point: that you are inextricable from your world. So if you deny the external world, the thing that's left isn't you. It's some foreign entity.
You can't just pick and choose though, can you? I mean if the scepticism is justified, then it doesn't matter if you embrace it or not.
The human mind has a flair for justification. You can justify pretty much any belief you like. Make your own religion and build a community of believers who will support you all the way to the Kool Aid.
Identity and emotion are in charge. Logic is a brittle autumn leaf in a hurricane.
But do you think identity and emotion are in charge of you?
I think so. You?
The word banana is mapped to the fruit banana - every word has a referent.
A pattern (the referent) which we can extract from the following scenarios:
1. I tried to jump over the fence, my feet touched the top of the fence but I couldn't clear the fence.
2. Sara tried eating the whole pie, she ate as much as she could but a small piece of it was left.
3. Stanley tried to run 14 km but he managed only 13.5 km, he had to give up because of a sprained ankle.
What drivel? You asked me a question, I answered it. If you have any issues with the way I view semantics (as mapping of word to its referent), please be specific about where exactly I go wrong. Kindly refrain from derailing the discussion from something worthwhile to something puerile.
Extracting "almost" from those three sentences is a good example of something a computer couldn't do! If you asked a human to identify what the sentences have in common, they might say "they are all about people trying and failing". There's no "mapping" from those sentences to the word "almost", even for us.
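As a toy illustration of that last point (my own sketch, not anyone's proposal in this thread): if you take the three example sentences literally and look for shared surface tokens, the overlap is just ordinary words like "tried" and "but"; nothing in the surface text maps to "almost".

```python
# Toy check: does any surface token shared by all three example
# sentences point to the concept "almost"? (It doesn't.)
sentences = [
    "I tried to jump over the fence, my feet touched the top of the fence "
    "but I couldn't clear the fence.",
    "Sara tried eating the whole pie, she ate as much as she could "
    "but a small piece of it was left.",
    "Stanley tried to run 14 km but he managed only 13.5 km, "
    "he had to give up because of a sprained ankle.",
]

def tokens(sentence):
    """Lowercased words with surrounding punctuation stripped."""
    return {word.strip(".,'").lower() for word in sentence.split()}

# Tokens common to all three sentences
shared = set.intersection(*map(tokens, sentences))
print(shared)  # a handful of ordinary words -- "almost" is not among them
```

Whatever pattern a reader extracts here, it isn't sitting in the words themselves.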
Your ideas are simplistic and naive.
I'm surprised that you're ignoring important details in my examples that help you abstract the meaning of "almost". Also, try and use the word "almost" in some sentences and reason from them the pattern which the word refers to.
Can a 'thinking machine', according to this definition(?), 'understand'? I suspect, if so, it can only understand to the degree it can recursively map itself within a map of a domain (or domains) recursively nested within a situation (or diachronic process).
Btw, particularly in philosophy, I think understanding
Quoting 180 Proof
The first entities on Earth were single-celled organisms. The cell wall is the boundary between the organism and the rest of the world. No such boundary exists between a computer and the rest of the world.
Quoting 180 Proof
It isn't appropriate to talk (in the present context) about the computer "itself".
Why is it a prerequisite?
Someone ought to tell that to Wittgenstein.
Not that I understand Wittgenstein or much of anything else.
Just for clarity, the part from "Critics hold" onwards is the SEP and not Searle.
The evidence we have that humans understand is not the same as the evidence that a robot understands. The problem of other minds isn't a real problem; it's more in the nature of a conundrum, like Zeno's paradoxes. The Arrow paradox, for example, is supposed to show that a flying arrow doesn't move. But it does.
The nature of consciousness is such that I can't experience your understanding of language. But I can experience my own understanding, and you can experience yours. It would be ridiculous for me to believe that I am the only one who operates this way, and it would be ridiculous for you to believe that you are the only one.
With a robot, we know that what looks like understanding isn't really understanding, because we programmed the bloody robot. My translation tool often generates English sentences that make it look like it understands Dutch, but I know it doesn't, because I programmed it.
It is hard to not completely change topics and respond to your point. Suffice it to say that minds are really hard stuff. This may be due to the fact that people are generally unwilling to treat the idea inclusively (things are in till proven out vs. things are out till proven in). Accepting for a moment that understanding is a function of an agent demonstrating a particular capability, I think it is easy enough to say that understanding can not be discrete, i.e. that a system that can only do one thing (or a variety of things) well lacks agency for this purpose. However, at some point, a thing can do enough things well that it feels a bit like bad faith to say that it isn't an agent because you understand how it was constructed and how it behaves (indeed, if determinism obtains, the same could be said of people). Being a bit aggressive, I might suggest that you can't rule out panpsychism and so despite your being responsible for the behavior and assemblage of a computer, it may very well be minded (or multiply minded) sufficiently to understand what it is doing. We have no present way to demarcate minded from non-minded besides interpreting behavior. If something behaves like it understands (however strictly or loosely you want to define demonstrating a skill/ability/competency), bringing up whether it has a mind sufficient for agency doesn't do much work - it merely states the obvious: we don't know what has a mind.
I suppose if being explicable renders a thing mindless, an increasing number of things that previously were marginally minded (after we admitted that maybe more than just white men could have minds) would go back to not having minds. I just don't know how our minds will survive the challenge 10,000 years from now (when technology is presumably vastly superior to what we managed to create in the last hundred or so years). Before you know it, we will be arguing about p-Zombies. For my part, I might approach the thing with humility and err on the side of caution (animated things are minded) rather than dominion (people are special and can therefore subjugate the material world aside from other people).
In the case of a computer, it isn't just that we know how it was constructed and how it behaves, the point is that we know it is not using understanding.
Not only that: a computer is not an agent, we are the agents making use of it. It doesn't qualify for agency, any more than an abacus does.
I'm a bit confused here. Is your translation tool a robot?
There absolutely is a significant difference. How are you going to teach anything, artificial or biological, what a banana is if all you give it are squiggly scratches on paper? It doesn't matter how many times your CAT tool translates "banana", it will never encounter a banana. The robot at least could encounter a banana.
Equating these two simply because they're programmed is ignoring this giant and very significant difference.
Quoting Daemon
The question isn't about experiencing; it's about understanding. If I ask a person, "Can you go to the store and pick me up some bananas?", I am not by asking the question asking the person to experience anything. I am not asking them to be consciously aware of a car, to have percepts of bananas, to feel the edges of their wallet when they fish for it, etc. I am asking for certain implied things... it's a request, it's deniable, they should purchase the bananas, and they should actually deliver it to me. That they experience things is nice and all, but all I'm asking for is some bananas.
Quoting Daemon
I disagree with the premise, "'When humans do X, it involves Y' implies X involves Y". What you're asking me to believe is in my mind the equivalent of that asking "Can you go to the store and pick me up some bananas?" is asking someone to experience something; or phrased slightly more precisely, that my expectations that they understand this equate to my expectations that they (consciously?) experience things. And I don't think that's true. I think I'm just asking for some bananas.
The other problem is that you missed the point altogether to excuse a false analogy. A human doesn't learn language by translating words to words, or by hearing dictionary definitions of words. It's kind of impossible for a human to come to the point of being able to understand "Can you go to the store and pick me up some bananas?" by doing what your CAT tool does. It's a prerequisite for said humans to interact with the world to understand what I'm asking by that question.
IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do.
As I emphasised in the OP, experience is the crucial element the computer lacks, that's the reason it can't understand. The same applies to robots.
Quoting InPitzotl
But in order to understand your question, the person must have experienced stores, picking things up, bananas and a multitude of other things. Quoting InPitzotl
Quoting InPitzotl
Neither my CAT tool nor a robot do what I do, which is to understand through experience.
Nonsense. There are people who have this "crucial element", and yet, have no clue what that question means. If experience is "the crucial" element, what is it those people lack?
I don't necessarily know if a given person would understand that question, but there's a test. If the person responds to that question by going to the store and bringing me some bananas, that's evidence the person has understood the question.
Quoting Daemon
Quoting Daemon
Your CAT tool would be incapable of bringing me bananas if we just affix wheels and a camera on it. By contrast, a robot might pull it off. The robot would have to do more than just translate words and look up definitions like your CAT tool does to pull it off... getting the bananas is a little bit more involved than translating questions to Dutch.
Quoting Daemon
Neither your CAT tool nor a person who doesn't understand the question can do what a robot who brings me bananas and a person who brings me bananas do, which is to bring me bananas.
I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words makes something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for.
I'm not so sure that Daemon accepts that the understanding is in the doing. A person and a robot acting identically on the line (see box, lift box, put in crate A, reset and wait for next box to be seen) do not both, on his view, understand because the robot is explicable (since he, or someone else, built it from scratch and programmed it down to the last detail). He is after minds as the locus of understanding, but he seems unwilling to accept that what has a mind is not based on explicability. It is a bit like a god of the gaps argument that grows ever smaller as our ability to explain grows ever larger. We will have minds (and understanding) only so long as someone can't account for us.
My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.
I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits in to the world which you have experienced.
Because we can explain the robot, we know that its actions are not due to understanding based on experience.
We will continue to have minds and understanding even after we understand our minds.
We're not trying to explain how you get bananas, we're trying to explain understanding.
Quoting InPitzotl
Quoting Daemon
A correct understanding of the question consists of relating it to a request for bananas. How this fits in to the world is how one goes about going to the store, purchasing bananas, coming to me and delivering bananas. You've added experiencing in there. You seem too busy trying to compare CAT tools not understanding and an English speaker understanding to relate understanding to the real test of it: the difference between a non-English speaker just looking at me funny and an English speaker bringing me bananas.
So what you've tried to get me to do is accept that a robot, just like a CAT tool, doesn't understand, even if the robot brings me bananas; and the reason the robot does not understand the question is because the robot does not experience, just like the CAT tool. My counter is that the robot, just like the English speaker, is bringing me bananas, which is exactly what I meant by the question; the CAT tool is just acting like the non-English speaker, who does not understand the question (despite experiencing; surely the non-English speaker has experienced bananas, and even experiences the question... what's missing then?). "Bringing me bananas" is both a metaphor for what the English speaker correctly relates the question to that the non-English speaker doesn't, and how the English speaker demonstrates understanding the question.
"If the baby fails to thrive on raw milk it should be boiled".
The concept of understanding you talked about on this thread doesn't even apply to humans. If "the reason" the robot doesn't understand is because the robot doesn't experience, then the non-English speaker that looked at me funny understood the question. Certainly that's broken.
I think you've got this backwards. You're the one trying to redefine understanding such that "the robot or a computer" cannot do it. Somewhere along the way, your concept of understanding broke to the point that it cannot even assign lack of understanding to the person who looked at me funny.
I can understand what you're saying, but it is quite wrong. When you experience through your senses you see, feel and hear. A computer does not see, feel and hear. I shouldn't need to be telling you this.
And yet, Josh (guessing) does not understand Sanskrit, and you do not understand understanding. A person who does not understand something does not understand it. I shouldn't need to be telling you this.
You've convinced yourself that experience is the explanation for understanding. The problem is, experience does not explain understanding. A large number of animals also experience; but somehow, only humans have mastered human language. Experience cannot possibly be the explanation for understanding if it isn't even an explanation of understanding.
I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.
To understand what "pain" means, for example, you need to have experienced pain.
Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas.
There's a gigantic difference between claiming that X is necessary for understanding, and claiming that X is necessary to understand X.
ETA: Your claim is that experience is necessary for understanding. I interpret this claim as equivalent to saying that there can be no understanding without experience. The expected justification for this claim would be to show how understanding at all necessarily involves experience (because if it doesn't, the claim is wrong). This is quite different than pointing out areas of understanding that require experience (such as your pain example).
Explain to me, for example, how you connect the requirement of experience to the example question requesting some bananas.
I don't really see what you're getting at here. I'm not saying you need to experience pain to understand shopping. You need to experience pain to understand pain.
To understand shopping, you would need to have experienced shopping.
Pain is a feeling. Shopping is an act.
If I see a person walking through the store, looking at various items, picking up some of them and putting them into the cart, the person is shopping. If I see a robot walking through the store, looking at various items, picking up some of them and putting them into the cart, the robot is shopping. It's hard to say what a robot feeling pain is by comparison, but that being all shopping is, that robot is shopping.
Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?)
A robot is not an individual, an entity, an agent, a person. To say that a robot is shopping is a category error.
Of course in everyday conversation we talk as though computers and robots were entities, but here we need to be more careful.
You could say that the robot is simulating shopping.
Do you think the robot understands what it is doing?
I wrote this above: [i]My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.
I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits in to the world which you have experienced.[/i]
So a person can understand instructions to shop for your bananas if they have had sufficiently similar experiences.
If the baby fails to thrive on raw milk, boil it.
Just a quick reminder... we're not talking about robots in general. We're talking about a robot that can manage to go to the store and get me some bananas.
I don't believe such a robot can possibly pull this off (with any sort of efficacy) without being an individual, an entity, or an agent.
But, sure... it need not be a person.
I suspect that your concept of individuality/agency drags in baggage I don't myself drag in.
Quoting Daemon
Okay, so let's be careful.
Quoting Daemon
Quoting Daemon
Imagine this theory. Shopping can only be done by girls. I say, that's utter nonsense. Shopping does not require being a girl; I'm a guy, and I can certainly pull it off. But the objection is raised that it's a category error to claim that a guy can shop; you could say that I am simulating shopping.
I don't quite buy that said argument counts as being careful. I'm certainly, in this particular hypothetical scenario, not committing a category error simply by claiming that I, a guy, can shop; it's not me that's claiming only girls shop. In fact, the suggestion that an event that actually occurs should be considered a simulation seems to raise red flags to me.
This sounds like exactly the opposite of being careful.
ETA: You've managed to formulate a theory that makes unjustified distinctions. There's now real shopping, where real bananas get put into real carts, money from real accounts changes hands, and the real bananas are brought to me; and then there's simulated shopping, where all of that stuff also happens, but we're missing vital ingredient X. There's by this logic real walking, where one manages to perform a particular choreography of controlled falling in such a way as to invoke motion without falling over, and simulated walking, where all of this stuff happens, but you're not doing it with the right stuff. There's real surgery, where a surgeon might slice me open with a knife, remove a tumor, and sew me up while managing not to kill me; and simulated surgery, where all of this stuff happens... the tumor's still removed, I'm still alive... but the thing slicing me open didn't quite have feels in the right way.
It seems to me there's no relevant difference here between the real thing and what you're calling a simulation... which is also the real thing, but is missing the ingredient you demand the real thing requires to call it real. All of this stuff still gets done... so to me, this is the ultimate test demonstrating that the thing you demand must be there to do it isn't in fact necessary at all. Are you sure you want this to be your standard that vital ingredient X is necessary? Because it sounds to me like this is the very definition of ingredient X not being vital.
A genuine argument for ingredient X's vitality should not look like a No True Scotsman fallacy. If experience isn't doing any work for you to explain something crucial about understanding, it's, as I said, superfluous... and your inclusion of it just to include it is simply baggage. If you have a good reason to suspect experience is necessary, that is what you should present; not just a narrative that lets you say that, but an explanation for how it critically fits in.
Quoting Daemon
In a nutshell, yes. But again, to be clear, this does not stem from a principle that doing things is understanding. Rather, it's because this is precisely the type of task that requires understanding to do with any efficacy.
Researchers have compared the results of machine translation to a jar of cookies, only 5% of which are poisoned.
Computers can make an amazingly good job of translating, but they don't do what we do when we translate. We use our understanding, and you can see from the faults in machine translation that that is what a computer lacks.
If a computer could do what I can do, people would use Google Translate and I wouldn't have any work. Google Translate is free and I am quite expensive.
What the computer lacks is involvement with the world.
I put this sentence into Google Translate: "If the baby fails to thrive on raw milk, boil it."
Google translated this into Dutch as "Als de baby niet gedijt op rauwe melk, kook hem dan."
That means "If the baby fails to thrive on raw milk, boil him."
Google Translate is extremely ingenious, but it lacks understanding, because it is not involved with the world as we are, through experience. QED.
Mary's Room?
The question is, does Mary learn anything new?
I recall mentioning this before, but what is red? Isn't it just our eyes' way of perceiving light at 750 nm in the visible spectrum?
Look at it in terms of language. This :point: 0 is sifr in Persian, zero in English and sunya in Hindi but do we claim that the Persian knows something more from the word "sifr" or that an Englishman got an extra amount of information from the word "zero" and so on?
Likewise, does Mary get ahold of new information when she sees the color red? It's just 750 nm in eye language.
I dunno. :chin:
Mary's deficit in the room is only that she hasn't seen red. Apart from that she is a normal experiencing human being.
A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.
A tree is/can be green and brown...
If you know each part of the former sentence, you therefore understand what is meant. To get a machine to understand, then, it must be programmed precisely.
As I tried to explain with the Mary's room thought experiment, redness is just 750 nm (the wavelength of red) in eye dialect. Just as you can't claim that you've learned anything new when the statement "the burden of proof" is translated into Latin as onus probandi, you can't say that seeing red gives you any new information.
What you've set out here is just one side of the disagreement about Mary's Room, but I am suggesting that not just red but everything you have learned comes from experience. Do you have a counter to that?
Yes, I think so. I'll give you an argument Socrates made.
1. Nothing in our experience is truly, precisely, equal. Everything we encounter around us is either never equal or only approximately equal.
Yet,
2. We have the concept of perfect equality.
Ergo,
3. Not everything we know is drawn from experience.
Eyes do not perceive, so the answer to the question is no (I'm sure you didn't literally mean that eyes perceive, but you have to be specific enough here for me to know what you did mean).
Color vision in most humans is trichromatic; to such humans, 750nm light affects the visual system in a particular way that contrasts quite a bit with 550nm light. The tristimulus values for each would be X=0.735, Y=0.265, Z=0 and X=0.302, Y=0.692, Z=0.008 respectively. A protanope is dichromatic; the protanope's visual system might have [s]tristimulus[/s] distimulus values for 750nm light of X=1.000, Y=0.000 and for 550nm light of X=0.992, Y=0.008.
Assuming Jack is typical, Jane has an inverted spectrum, and Joe is a protanope, Jack and Jane agree 750nm light is red and 550nm light is green; and Joe doesn't quite get what the fuss is about.
Quoting Daemon
Imagine a test. There are various swatches within 0.1 units of each other from X=0.735, Y=0.265, Z=0; and this is mixed in with various swatches within 0.1 units from X=0.302, Y=0.692, Z=0.008. Jack, Jane, Joe, and a robot affixed with a colorimeter are tasked to sort the swatches of the former kind together and the swatches of the latter kind together into separate piles. Jack, Jane, and the robot would be able to pass this test. Joe will have some difficulty.
Jack and Jane do this task well using their experiences of seeing the swatches. Joe will have great difficulty with this task despite experiencing the swatches. The robot can be programmed to succeed at this test with success rates rivaling Jack and Jane, despite having no experiences.
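The robot's side of this test can be sketched as a simple nearest-target classifier over colorimeter readings. This is an illustrative sketch only; the two target XYZ values are taken from the example above, and the function name and distance rule are my own assumptions.

```python
# A toy sorter for the swatch test: classify each XYZ colorimeter reading
# by whichever target tristimulus value it lies closest to.
from math import dist  # Euclidean distance (Python 3.8+)

# Target tristimulus values from the example above.
RED_TARGET = (0.735, 0.265, 0.0)
GREEN_TARGET = (0.302, 0.692, 0.008)

def sort_swatches(readings):
    """Sort XYZ readings into two piles by nearest target."""
    red_pile, green_pile = [], []
    for xyz in readings:
        if dist(xyz, RED_TARGET) <= dist(xyz, GREEN_TARGET):
            red_pile.append(xyz)
        else:
            green_pile.append(xyz)
    return red_pile, green_pile
```

Nothing in this procedure experiences anything; it only measures distances in a color space, which is the point of the test.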
I'll grant that all of the information Jack, Jane, and Joe have ever acquired has come from experience. I'll grant that the robot here does not experience. But granting this, with regard to this test, Joe's the odd one out, not the robot.
Maybe Jack, Jane, and Joe only being able to sort swatches using their experiences does not demonstrate that experience is the critical thing necessary to sort swatches correctly.
Languages may be mutually unintelligible, but nothing new is added in translation from one to another. Joe's knowledge that red is 750 nm, even when he's blind to red, is equivalent to Jack and Jane seeing/perceiving red. Red is, after all, light of 750 nm in eye dialect.
Here's a little thought experiment:
If I say out loud to you "seven" and then follow that up by writing "7" and showing it to you, is there any difference insofar as the content of my spoken and written message is concerned?
No!
Both "seven" (aural) and "7" (visual) contain the same information - seven-ness.
Likewise, seeing the actual color red is equivalent to knowing the number 750 (nm) - they're both the same thing and nothing new is learned by looking at a red object.
There's language translation, and there's wrong. What color are a polar bear, Santa's beard, and snow?
Quoting TheMadFool
Your thought experiment is misguided. 7 is a number. Seven is another name for the number 7. But 7 aka seven is not a dwarf. There might be seven dwarves, but seven isn't a dwarf.
Quoting TheMadFool
Seeing the actual color red is not equivalent to knowing the number 750nm. Colors are not wavelengths of light; wavelengths of light have color (if you isolate light to said wavelength photons and have enough to trigger color vision), but a wavelength of light and a color aren't the same thing. A polar bear is white, not red (except after a nice meal), despite his fur reflecting photons whose wavelength is 750nm. There's no such thing as a white photon. White is a color. Colors are not wavelengths of light.
Joe also sees a color, in a color space we don't tend to name (because we're cruel?), when he sees 750nm light. But the color he sees is pretty much the same color as 550nm light. We call the former red, and the latter green.
It might work as a metaphor, but I wouldn't go further than that.
Why?
It's not really the same thing, in short. Language does more than what perception does, and perception does more than what language does. They deserve different concepts. I don't think I want to elaborate here; I haven't bothered with the other thread yet (and once I do, I might just lurk, as I typically do way more often than comment).
I hadn't thought it through either. It just seemed to make sense to me, intuitively that is. I guess it's nothing. G'day.
A: I broke a finger yesterday.
Implicature: The finger was A's finger.
A: Smith doesn’t seem to have a girlfriend these days.
B: He has been paying a lot of visits to New York lately.
Implicature: He may have a girlfriend in New York.
A: I am out of petrol.
B: There is a garage around the corner.
Implicature: You could get petrol there.
A: Are you coming out for a beer tonight?
B: My in-laws are coming over for dinner.
Implicature: B can't go out for a beer tonight.
You can complete the remaining examples (a computer can't).
A: Where is the roast beef?
B: The dog looks happy.
A: Has Sam arrived yet?
B: I see a blue car has just pulled up.
A: Did the Ethiopians win any gold medals?
B: They won some silver medals.
A: Are you having some cake?
B: I'm on a diet.
I think you're running down the garden path.
I'm a human. I experience things. I also understand things. I can do things like play perfect tic tac toe, go the store and buy bananas, and solve implicature puzzles.
I'm also a programmer. I have the ability to "tell a computer what to do". I can easily write a program to play perfect tic tac toe. Not only can I do this, but I can specifically write said program by self reflecting on how I myself would play perfect tic tac toe; that is, I can appeal to my own intuitive understanding of tic tac toe, using self reflection, and emit this in the form of a formal language that results in a computer playing perfect tic tac toe.
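That tic tac toe trick can be sketched concretely. Below is a minimal negamax player, an illustrative sketch of how such a program might look rather than anyone's actual code; the board representation and function names are my own assumptions.

```python
# A perfect tic tac toe player via exhaustive negamax search.
# The board is a list of 9 cells: 'X', 'O', or ' ' (indices 0-8, row-major).

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def best_move(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0, None  # board full: draw
    best_score, best_cell = -2, None
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score = -best_move(board, opponent)[0]  # opponent's best is our worst
        board[m] = ' '
        if score > best_score:
            best_score, best_cell = score, m
    return best_score, best_cell
```

With perfect play tic tac toe is a draw, so on an empty board this returns score 0; the whole game tree is small enough to search exhaustively, which is exactly why the self-reflection trick works here and not for buying bananas.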
But by contrast, to write a program that drives a bot to go to the store and buy bananas, or to solve implicature puzzles, is incredibly difficult. Mind you, these are easy tasks for me to do, but that tic tac toe trick I pulled to write the perfect tic tac toe player just isn't going to cut it here.
I don't think you're grasping the implication of this here. It sounds as if you're positing that you, a human, can easily do something... like go to the store and buy bananas, or solve implicatures... and a computer, which isn't a human, cannot. And that this implies that computers are missing something that humans have. That is the garden path I think you're running down... you have a bad impression. It's us humans that are building these computers that have, or don't have as the case may be, these capabilities. So when I show you my perfect tic tac toe playing program, that is evidence that humans understand tic tac toe. When I show you my CAT tool that can't even solve an implicature problem, this is evidence that humans have not solved the problem of implicature.
And maybe they will; maybe in 15 years you'll be surprised. Your CAT tool will suddenly solve these implicatures like there's no tomorrow. But that just indicates that programmers solved implicatures... the CAT tool still wouldn't know what a banana is. How could it?
The whole experience thing is a non-sequitur. I have just as much "experiencing" when I write tic tac toe as I do when I fail to make a CAT tool that solves implicatures. I don't think if I knew how to put experiences into the CAT tool that this would do anything to it that would help it solve implicatures. I certainly don't make that tic tac toe perfect player by coding in experiences. It's really easy to say humans have experiences, humans can do x, and computers cannot do x, therefore x requires experiences. But I don't grasp how this can actually be a justified theory. I don't get what "work" putting experiences in is being theorized to do to pull off what it's being claimed as being critical for.
To know means to have, mentally/spiritually.
Though you may understand something, you may lose it when further complexities concerning its concept arise.
You understand shape, but at the mention of adv. shape you seem to lose what you got.
When you fully understand a concept, you can know about it - you can secure what you get from it.
Knowing is halving; it's as simple as looking at this word - example - and being able to halve all aspects of it (its symbol, its meaning, its reality, etc.). Halving in mind is not directly about the fraction (though it is), but more the process.
I look up at the sky, and I am able to say I know what it is, because I can quickly decompose it - halve it.
I've been saying from the start that computers don't do things (like calculate, translate), we use them to do those things.
Stranger (to me): Yeah, sure. Take this road and turn left at the second junction. There's a hotel there, a good one.
---
Me (to Siri): Siri, can you give me the directions to the nearest hotel?
Siri: The shortest route to the hotel nearest you is take x street, turn left at y street . It should take you about 3 minutes in light traffic.
Both Siri and the kind stranger (seem to have) understood my question. A mini Turing Test.
I'm pretty sure if you understood what I was saying, you would see there's no contradiction. So if you are under the impression there's a contradiction, you're missing something.
Quoting Daemon
Quoting InPitzotl
Your CAT tool doesn't interact with bananas.
Quoting Daemon
What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?
I assume you're just talking about some of these things... so what makes the stuff I "do" when I do it, what I'm "doing", versus stuff I'm "not doing"? (ETA: Note that experience cannot be the difference; I experience shaking when I have coffee just as I experience shaking when I dance).
Yes. The Stanford Encyclopedia says that the Turing Test was initially suggested as a means to determine whether a machine can think. But we know how Siri works, and we know that it's not thinking in the way we think.
When we give directions to a hotel we use a mental map based on our experiences.
Siri uses a map based on our experiences. Not Siri's experiences. Siri doesn't have experiences. You know that, right?
Quoting TheMadFool
Because the stranger understands the question in a way Siri could not, he is able to infer that you have requirements which your words haven't expressed. You aren't just looking for the nearest hotel, he thinks you will also want a good one. And he knows (or thinks he knows) what a good one is. Because of his experience of the world.
That's what it's like when we think. We understand what "good" means, because we have experienced pleasure and pain, frustration and satisfaction.
But neither does a robot.
Quoting InPitzotl
I'm talking about acting as an agent. That's something computers and robots can't do, because they aren't agents. We don't treat them as agents. When your robot runs amok in the supermarket and tears somebody's head off, it won't be the robot that goes to jail. If some code you write causes damage, it won't be any good saying "it wasn't me, it was this computer I programmed". I think you know this, really.
They aren't agents because they aren't conscious, in other words they don't have experience.
Quoting Daemon
Just to remind you what you said exactly one post prior. Of course the robot interacts with bananas. It went to the store and got bananas.
What you really mean isn't that the robot didn't interact with bananas, but that it "didn't count". You think I should consider it as not counting because this requires more caution. But I think you're being "cautious" in the wrong direction... your notions of agency fail. To wit, you didn't even appear to see the question I was asking (at the very least, you didn't reply to it) because you were too busy "being careful"... odd that?
I'm not contradicting myself, Daemon. I'm just not laden with your baggage.
Quoting InPitzotl
...this is what you quoted. This was what the question actually was. But you didn't answer it. You were too busy "not counting" the robot:
Quoting Daemon
I'm conscious. I experience... but I do not agentively do any of those underlined things.
I do not agentively generate a particular body temperature, but I'm conscious, and I experience. I do not agentively radiate in the infrared... but I'm conscious, and experience. I do not agentively shake when I have too much coffee (despite agentively drinking too much coffee), but I'm conscious, and experience. I even am an agent, but I do not agentively do those things.
There's something that makes what I agentively do agentive. It's not being conscious or having experiences... else why aren't all of these underlined things agentive? You're missing something, Daemon. The notion that agency is about being conscious and having experience doesn't work; it fails to explain agency.
Quoting Daemon
Ah, how human-centric... if a tiger runs amok in the supermarket and tears someone's head off, we won't send the tiger to jail. Don't confuse agency with personhood.
Quoting Daemon
If I let the tiger into the shop, I'm morally culpable for doing so, not the tiger. Nevertheless, the tiger isn't acting involuntarily. Don't confuse agency with moral culpability.
Quoting Daemon
I think you're dragging a lot of baggage into this that doesn't belong.
So answer it.
The question is, why is it agentive to shake when I dance, but not to shake when I drink too much coffee? And this:
Quoting Daemon
...doesn't answer this question.
ETA: This isn't meant as a gotcha btw... I've been asking you for several posts to explain why you think consciousness and experience are required. This is precisely the place where we disagree, and where I "seem to" be contradicting myself (your words). The crux of this contradiction, btw, is that I'm not being "careful" as is "required" by such things. I'm trying to dig into what you're doing a bit deeper than this hand waving.
I'm interpreting your "do"... that was your point... as being a reference to individuality and/or agency. So tell me what you think agency is (or correct me about this "do" thing).
Agency is the capacity of an actor to act. Agency is contrasted to objects reacting to natural forces involving only unthinking deterministic processes.
What I really mean is that it didn't interact with anything in the way we interact with things. It doesn't see, feel, hear, smell or taste bananas, and that is how we interact with bananas.
I've no idea why you think it muddies the water... I think it's much clearer to explain why shaking after drinking coffee isn't agentive yet shaking while I dance is. Such an explanation gets closer to the core of what agency is. Here (shaking because I'm dancing vs shaking because I drank too much coffee) we have the same action, or at least the same descriptive for actions; but in one case it is agentive, and in the other case it is not.
Quoting Daemon
Agentive action is better thought of IMO as goal directed than merely as "thought". In a typical case an agent's goal, or intention, is a world state that the agent strives to attain. When acting intentionally, the agent is enacting behaviors selected from schemas based on said agent's self models; as the act is carried out, the agent utilizes world models to monitor the action and tends to accommodate the behaviors in real time to changes in the world models, which implies that the agent is constantly updating the world models including when the agent is acting.
This sort of thing is involved when I shake while I'm dancing. It is not involved when I shake after having drunk too much coffee. Though in the latter case I may still know I'm shaking, by updating world models, I'm not in that case enacting the behavior of shaking by selecting schemas based on my self models in order to attain goals of shaking. In contrast, in the former case (shaking because I'm dancing), I am enacting behaviors by selecting schemas based on my self model in order to attain the goal of shaking.
So, does this sound like a fair description of agency to you? I am specifically describing why shaking because I've had too much coffee isn't agentive while shaking because I'm dancing is.
But where do the goals come from, if not from "mere thought"?
Why classify this question as important? It's as important or unimportant as any other question. I'm not trying to troll or provoke here, but what if you have the answer? Then you understand what understanding is. Does that make people more understanding? Knowing what understanding is presupposes a theoretical framework to place the understanding in. Understanding this is more important than knowing what it is inside this framework. Of course it's important to understand people. It connects us. Makes us love and hate. The lack of it can give rise to loneliness, though it's not a sine qua non for a happy life. You can be in love with someone you can't talk to because of a language barrier; though harder, understanding will find a way. No people are completely non-understandable, only truly irrational ones, and those are mostly imaginary, although I can throw a stone over the water without any reason. You can stress the importance of understanding, but at the same time non-understanding is important as well. Like I said, knowing the nature of understanding doesn't help with the understanding itself. It merely reformulates it and puts it in an abstract formal scheme, doing injustice to the frame-dependent content. It gives no insight into the nature of understanding itself.
In terms of explaining agentive acts, I don't think we care. I don't have to answer the question of what my cat is thinking when he's following me around the house. It suffices that his movements home in on where I'm going. That is agentive action. Now, I don't think all directed actions are agentive... a heat seeking missile isn't really trying to attain a goal in an agentive way... but the proper question to address is what constitutes a goal, not what my cat is thinking that leads him to follow me.
My cat is an agent; his eyes and ears are attuned to the environment in real time, from which he is making a world model to select his actions from schemas, and he is using said world models to modulate his actions in a goal directed way (he is following me around the house). I wouldn't exactly say my cat is following me because he is rationally deliberating about the world... he's probably just hungry. I'm not sure if what my cat is doing when setting the goal can be described of as thought; maybe it can. But I don't really have to worry about that when calling my cat an agent.
But a robot buying bananas is?
Why not?
But I want you to really answer the question, so I'm going to carve out a criterion. Why am I wrong to say the robot is being agentive? And the same goes in the other direction... why are you not wrong about the cat being agentive? Incidentally, it's kind of the same question. I think it's erroneous to say my cat's goal of following me around the house was based on thought.
Incidentally, let's be honest... you're at a disadvantage here. You keep making contentious points... like that the robot doesn't see (in the sense that it doesn't experience seeing); I've never confused the robot for having experiences, so I cannot be wrong by a confusion I do not have. But you also make wrong points... like that agentive goals require thought (what was my cat thinking, and why do we care about it?)
You're wrong because the robot doesn't have a goal. We have a goal, for the robot.
Ah, finally... the right question. But why not?
Be precise... it's really the same question both ways. What makes the robot not have a goal, and what, by contrast, makes my cat have one?
The cat wants something. The robot is not capable of wanting. The heat seeking missile is not capable of wanting.
I'm not sure what "want" means to the precision you're asking. The implication here is that every agentive action involves an agent that wants something. Give me some examples... my cat sits down and starts licking his paw. What does my cat want that drives him to lick his paw? It sounds a bit anthropomorphic to say he "wants to groom" or "wants to clean himself".
But it sounds True Scotsmanny to say my cat wants to lick his paws in any sense other than that he enacted this behavior and is now trying to lick his paws. If there is such another sense, what is it?
And why should I care about it in terms of agency? Are cats, people, or dragon flies or anything capable of "trying to do" something without "wanting" to do something? If so, why should that not count as agentive? If not, then apparently the robot can do something us "agents" cannot... "try to do things" without "wanting" (whatever that means), and I still ask why it should not count as agentive.
Quoting Daemon
The robot had better be capable of "trying to shop and get bananas", or it's never going to pull it off.
"An apple can be [I]red[/I]" is an intellectual statement, and I am giving it to you.
I am given a kiwi, and hypothetically I know nothing about it, I have nothing to give, but I understand it, in so much as I have a sense of it.
Because you're an experiencing being you know what "want" means. Like the cat you know what it feels like to want food, or attention.
This is the Philosophy of Language forum. My contention is that a computer or a robot cannot understand language, because understanding requires experience, which computers and robots lack.
Whether a robot can "try" to do something is not a question for Philosophy of Language. But I will deal with it in a general way, which covers both agency and language.
In ordinary everyday talk we all anthropomorphise. The thermostat is trying to maintain a temperature of 20 degrees. The hypothalamus tries to maintain a body temperature around 37 degrees. The modem is trying to connect to the internet. The robot is trying to buy bananas. But this is metaphorical language.
The thermostat and the robot don't try to do things in the way we do. They are not even individual entities in the way we are. It's experience that makes you an individual.
No, being agentively integrated is what makes me (and you) an individual. We might say you're an individual because you are "of one mind".
For biological agents such as ourselves, it is dysfunctional not to be an individual. We would starve to death as Buridan's asses; we would waste energy if we all had Alien Hand Syndrome. Incidentally, a person with AHS is illustrative of an entity where the one-mindedness breaks down... the "alien" so to speak in AHS is not the same individual. Nevertheless, an alien hand illustrates very clear agentive actions, and suggests experiencing, which draws in turn a question mark over your notion that it's the experiencing that makes you an individual.
Quoting Daemon
Not in the robot case. This is no mere metaphor; it is literally the case that the robot is trying to buy bananas.
Imagine doing this with a wind up doll (the rules are, the wind up doll can do any choreography you want, but it only does that one thing when you wind it up... so you have to plan out all movements). If you try to build a doll to get the bananas, you would never pull it off. The slightest breeze turning it the slightest angle would make it miss the bananas by a mile; it'd be lucky to even make it to the store... not to mention the fact that other shoppers are grabbing random bananas while stockers are restocking with bananas in random places, shoppers constantly are walking in the way and what not.
Now imagine all of the possible ways the environment can be rearranged to thwart the wind up doll... the numbers here are staggeringly huge. Among all of these possible ways not to get a banana, the world is, and during the act of shopping will evolve to be, one particular way. There does exist some choreography of the wind up doll for this particular way that would manage to make it in, and out, of the store with the banana in hand (never mind that we expect a legal transaction to occur at the checkout). But there is effectively no way you can predict the world beforehand to build your wind up doll.
So if you're going to build a machine that makes it out of the store with bananas in hand with any efficacy, it must represent the telos of doing this; it must discriminate relevant pieces of the environment as they unpredictably change; it must weigh this against the telos representation; and it must use this to drive the behaviors being enacted in order to attain the telos. A machine that is doing this is doing exactly what the phrase "trying to buy bananas" conveys.
I'm not projecting anything onto the robot that isn't there. The robot isn't conscious. It's not experiencing. What I'm saying is there is something that has to genuinely be there; it's virtually a derived requirement of the problem spec. You're not going to get bananas out of a store by using a coil of two metals.
Quoting Daemon
And my contention has been throughout that you're just adding baggage on.
Let's grant that understanding requires experience, and grant that the robot I described doesn't experience. And let's take that for a test run.
Suppose I ask Joe (human) if he can get some bananas on the way home, and he does. Joe understands my English request, and Joe gets me some real bananas. But if I ask Joe to do this in Dutch, Joe does not understand, so he would not get me real bananas. If I ask my robot, my robot doesn't understand, but can get me some real bananas. But if I ask my robot to do this in Dutch, my robot doesn't understand, so cannot get me real bananas. So Joe real-understands my English request, and can real-comply. The robot fake-understands it but can real-comply. Joe doesn't real-understand my Dutch request, so cannot real-comply. The robot doesn't fake-understand my Dutch request but this time cannot real-comply. Incidentally, nothing will happen if I ask my cable modem or my thermostat to get me bananas in English or Dutch.
So... this is the nature of what you're trying to pitch to me, and I see something really weird about it. Your experience theory is doing no work here. I just have to say the robot doesn't understand because it's definitionally required to say it, but somehow I still get bananas if it doesn't-understand-in-English but not if it doesn't-understand-in-Dutch, just like with Joe, but that similarity doesn't count because I just have to acknowledge Joe experiences whereas the robot doesn't, even though I'm asking neither to experience but just asking for bananas. It's as if meaning isn't about meaning any more; it's about meaning with experiences. Meaning without experiences cannot be meaning, even though it's exactly like meaning with experiences save for the definitive having the experiences part.
Daemon! Can you not see how this just sounds like some epicycle theory? Sure, the earth being still and the center of the universe works just fine if you add enough epicycles to the heavens, but what's this experience doing for me other than just mucking up the story of what understanding and agency means?
That is what I mean by baggage, and you've never justified this baggage.
Quoting InPitzotl
That's broadly what I meant when I said that it's experience that makes you an individual, but you seem to think we disagree.
For me "having a mind" and "having experiences" are roughly the same thing. So I could say having a mind makes you an individual, or an entity, or a person. Think about the world before life developed: there were no entities or individuals then.
Quoting InPitzotl
Suppose instead of buying bananas we asked the robot to control the temperature of your central heating: would you say the thermostat is only metaphorically trying to control the temperature, but the robot is literally trying? Could you say why, or why not?
For me, "literally trying" is something only an entity with a mind can do. They must have some goal "in mind".
Quoting InPitzotl
The Oxford English Dictionary's first definition of "mind" is: "The element of a person that enables them to be aware of the world and their experiences".
The word "meaning" comes from the same Indo-European root as the word "mind". Meaning takes place in minds.
The robot has no mind in which meaning could take place. Any meaning in the scenario with the bananas is in our minds. The robot piggybacks on our understanding, which is gained from experience. If nobody had experienced bananas and shops and all the rest, we couldn't program the robot to go and buy them.
My use of mind here is metaphorical (a reference to the idiom "of one mind").
Incidentally, I think we do indeed agree on a whole lot of stuff... our conceptualization of these subjects is remarkably similar. I'm just not discussing those pieces ;)... it's the disagreements that are interesting here.
Quoting Daemon
I don't think this is quite our point of disagreement. You and I would agree that we are entities. You also experience your individuality. I'm the same in this regard; I experience my individuality as well. Where we differ is that you think your experience of individuality is what makes you an individual. I disagree. I am an individual for other reasons; I experience my individuality because I sense myself being one. I experience my individuality like I see an apple; the experience doesn't make the apple, it just makes me aware of the apple.
Were "I" to have AHS, and my right hand were the alien hand, I would experience this as another entity moving my arm. In particular, the movements would be clearly goal oriented, and I would pick up on this as a sense of agency behind the movement. I would not in this particular condition have a sense of control over the arm. I would not sense the intention of the arm through "mental" means; only indirectly through observation in a similar manner that I sense other people's intentions. I would not have a sense of ownership of the movement.
Quoting Daemon
Yes; the thermostat is only metaphorically trying; the robot is literally trying.
Quoting Daemon
Sure.
Consider a particular thermostat. It has a bimetallic coil in it, and there's a low knob and a high knob. We adjust the low knob to 70F, and the high to 75F. Within range nothing happens. As the coil expands, it engages the heating system. As the coil contracts, it engages the cooling system.
Now introduce the robot and/or a human into a different environment. There is a thermometer on the wall, and a three way switch with positions A, B, and C. A is labeled "cool", B "neutral", and C "heat".
So we're using the thermostat to maintain a temperature range of 70F to 75F, and it can operate automatically after being set. The thermostat should maintain our desired temperature range. But alas, I have been a bit sneaky. The thermostat should maintain that range, but it won't... if you read my description carefully you might spot the problem. It's miswired. Heat causes the coil to expand, which then turns on the heating system. Cold causes the coil to contract, which then turns on the cooling system. Oops! The thermostat in this case is a disaster waiting to happen; when the temperature goes out of range, it will either max out your heating system, or max out your cooling system.
Of course I'm going for an apples to apples comparison, so just as the thermostat is miswired, the switches are mislabeled. A actually turns on the heating system. C engages the cooling system. The human and the robot are not guaranteed to find out the flaw here, but they have a huge leg up over the thermostat. There's a fair probability that either of these will at least return the switch to the neutral position before the heater/cooler maxes out; and there's at least a chance that both will discover the reversed wiring.
This is the "things go wrong" case, which highlights the difference. If the switches were labeled correctly and the thermostat were wired correctly, all three systems would control the temperature. It's a key feature of agentive action not just to act, but to select the action being enacted from some set of schemas according to their predicted consequences in accordance with a telos; to monitor the enacted actions; to compare the predicted consequences to the actual consequences; and to make adjustments as necessary according to the actuals, in service of attaining the goal. This feature makes agents tolerant against the unpredicted. The thermostat is missing this.
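The contrast can be made concrete with a toy simulation (all names here are illustrative inventions, not anything from the thread): a hardwired thermostat enacts a fixed schema regardless of consequences, while an agent that compares predicted to actual consequences, and defers to the actual, recovers from the mislabeled switches.

```python
def simulate(controller, wiring, steps=30, start=80.0):
    """Run a closed loop: each step the controller picks an action and
    `wiring` determines that action's actual effect on temperature."""
    temp = start
    for _ in range(steps):
        action = controller(temp)
        temp += wiring.get(action, 0.0)
    return temp

def thermostat(temp):
    """Fixed schema targeting 70F-75F; it never checks consequences."""
    if temp < 70.0:
        return 'heat'
    if temp > 75.0:
        return 'cool'
    return 'off'

# Miswired as in the example: engaging "cool" actually heats, and
# vice versa, so the thermostat runs away from the target range.
miswired = {'heat': -1.0, 'cool': +1.0, 'off': 0.0}

class Agent:
    """Selects an action by its predicted consequence, observes the
    actual consequence, and revises its beliefs when they disagree."""
    def __init__(self):
        # Believed effects, taken from the (mislabeled) switch labels:
        # A is labeled "cool", C is labeled "heat", B is neutral.
        self.belief = {'A': -1.0, 'C': +1.0}
        self.last_temp = None
        self.last_action = 'B'

    def __call__(self, temp):
        # Compare the predicted consequence of the last action with the
        # observed one; on contradiction, defer to the actual world
        # state rather than the label.
        if self.last_temp is not None and self.last_action in self.belief:
            actual = temp - self.last_temp
            if actual * self.belief[self.last_action] < 0:
                self.belief[self.last_action] *= -1.0
        if 70.0 <= temp <= 75.0:
            action = 'B'
        else:
            want = +1.0 if temp < 70.0 else -1.0
            action = 'A' if self.belief['A'] * want > 0 else 'C'
        self.last_temp, self.last_action = temp, action
        return action

# Mislabeled switch: A actually heats, C actually cools.
mislabeled = {'A': +1.0, 'B': 0.0, 'C': -1.0}

print(simulate(thermostat, miswired))   # runs away: 110.0
print(simulate(Agent(), mislabeled))    # recovers into range: 75.0
```

Starting at 80F, the miswired thermostat maxes out the heat, while the agent briefly makes things worse, notices the contradiction between prediction and observation, flips its beliefs about the switches, and settles into the target range.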
ETA:
Quoting Daemon
The word "atom" comes from the Latin atomus, which is an indivisible particle, which traces to the Greek atomos meaning indivisible. But we've split the thing. The word "oxygen" derives from the Greek "oxys", meaning sharp, and "genes", meaning formation; in reference to the acidic principle of oxygen (formation of sharpness aka acidity)... which has been abandoned.
Meaning is about intentionality. In regard to external world states, intentionality can be thought of as deferring to the actual. This is related to the part of agentive action which not only develops a model of world states from observation, and uses that model to align actions to attain a goal according to the predictions the model gives, but observes the results as the actions take place and defers to the observations in contrast to the model. In this sense the model isn't merely "about" itself, but "about" the observed thing. That is intentionality. Meaning takes place in agents.
This may be off topic, but that's one definition of intentionality, not the phenomenological one.
In phenomenology, objects are given through a mode of givenness, so the model participates in co-defining the object.
I don't think it's off topic, but just to clarify, I am not intending to give this as a definition of intentionality. I'm simply saying there's aboutness here.
I've looked at this many times and thought about it, but I just can't see why you think it is significant. Why do you think being tolerant against the unpredicted makes something an agent, or means that the robot is trying?
But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way. The computer or robot is not intrinsically an entity, in the way you are.
I was thinking about this just now when I saw this story: "In Idyllwild, California, a dog ran for mayor and won and is now called Mayor Max II."
The dog didn't run for mayor. The town is "unincorporated" and doesn't have its own local government (and therefore doesn't have a mayor). People are just pretending that the dog ran for mayor.
In a town which was incorporated and did have a mayor, a dog would not be permitted to run for office.
I think you took something descriptive as definitive. What is happening here that isn't happening with the thermostat is deference to world states.
Quoting Daemon
You're just ignoring me then, because I did indeed address this.
You've offered that experience is what makes us entities. I offered AHS as demonstrating that this is fundamentally broken. An AHS patient behaves as multiple entities. Normative human subjects, by contrast, behave as one entity per body. Your explanation simply does not work for AHS patients; AHS patients do indeed experience, and yet they behave as multiple entities. The defining feature of AHS is that a part of the person seems to act autonomously, independently of the rest. This points to exactly what I was telling you... that being an entity is a function of being agentively integrated.
So I cannot take it seriously that you don't believe I have addressed this.
Quoting Daemon
I think you have some erroneous theories of being an entity. AHS can be induced by corpus callosotomy. In principle, given a corpus callosotomy, your entity can be sliced into two independent pieces. AHS demonstrates that the thing that makes you an entity isn't fundamental; it's emergent. AHS demonstrates that the thing that makes you an entity isn't experience; it's integration.
Quoting Daemon
Not sure what you're trying to get at here. Are you saying that dogs aren't entities? There's nothing special about a dog not running for mayor; that could equally well be a fictional character or a living celebrity not intentionally in the running.
Cutting the corpus callosum creates two entities.
So if experience is what makes us an entity, how could that possibly happen?
If integration makes us an entity, you're physically separating two hemispheres. That's a surefire way to disrupt integration. The key question then becomes: if cutting the corpus callosum makes us two entities, why are we one entity with an intact corpus callosum, as opposed to those two entities?
Incidentally, we're not always that one entity with an intact corpus callosum. A stroke can also induce AHS.
Two experiencing entities.
But I don't really think the effects of cutting the corpus callosum are as straightforward as they are sometimes portrayed; for example, the person had a previous life with connected hemispheres.
Quoting Daemon
I don't think the "corpus callosum/AHS" argument addresses this.
Alright, I think we're talking past each other a bit. The two (mind you, not one) experiencing entities are a result of corpus callosotomy. The notion that experience is what makes you an entity cannot account for the fact that a corpus callosotomy should make two entities. Agentive integration by contrast explains why you are a single entity. The notion that you are an entity due to agentive integration does account for the fact that a corpus callosotomy should make two entities. Once again, experience is doing no work for you here; it's an epicycle.
Quoting Daemon
No idea what you're saying here. Are you suggesting there are two individuals before the corpus callosotomy?
Quoting Daemon
Quite the opposite; see above. We can take an external view as well:
https://movie-usa.glencoesoftware.com/video/10.1212/WNL.0000000000006172/video-1
From: https://n.neurology.org/content/91/11/527
This person did not have a corpus callosotomy; she had a stroke (see article). It is very obvious that she has AHS. That's also curious... why is it obvious? What behaviors is she exhibiting that suggest AHS?
I don't understand why you are telling me that, as if it was a point against me.
Quoting InPitzotl
I don't understand why you are asking me that.
Because you keep asking about being an entity, and you keep saying that I haven't accounted for things, yet you're not accounting for the number here.
Quoting Daemon
Because we can indeed tell by her behaviors. The subject talking to us is behaving as if her alien hand is a stranger. And you aren't diagnosing her alien hand by counting how many "experiences" there are. Her behavior is distinct from a normative case, but also distinct from someone who has half their body paralyzed after a stroke. There's still agency in there, just not integrated. Apparently you think that's a bad description; but it's kind of definitive of the condition.
However, I don't see why understanding should be limited/constrained in this way. The Buddha, it's said, once saw a pot of gold and exclaimed to his disciples, "Look, a snake!"