Why aliens will never learn to speak our language
The main reason we have not been able to replicate human conversation with computers is that we use mirroring in human speech. That is, we trust that our phrases produce almost the same associations in the minds of the other participants in the conversation, and then we only have to adjust these associations a little to understand one another.
The problem is that our associations depend on almost everything that makes up a human mind. They are affected by the mood of the situation, how things look, what the current events are and how they affect the particular group that is talking, our human needs and priorities, and other things that are very particular to human programming.
As a result, our speech only works between systems that share almost the same human programming, so that the phrases produce almost the same associations. We can see this even between humans of different cultures. Even if the cultures speak the same language, it becomes hard for them to understand each other if the phrases and contexts produce different associations in those cultures.
Because of this:
A - we will never have a fluent conversation with aliens unless they are programmed almost exactly like us.
B - we will not program an AI that can speak a human language in the foreseeable future, because we lack the empirical knowledge of how the human mind is programmed that would be needed to replicate that programming in an AI and thus enable the AI to use mirroring.
C - if a single human changed his programming in a major way (for example by emphasizing logic in his thinking beyond the normal degree), he would gradually lose his ability to communicate fluently with other people unless they changed at the same rate.
None of this means that we can't communicate at all in these situations. Logical languages like mathematics are still a way to communicate even without mirroring.
Comments (47)
Trying to replicate the programming of a particular neuron-based system, one we know is full of unnecessary quirks and flaws, is probably an inefficient and needlessly complex way to achieve general intelligence in a transistor-based system. This is why we should remove human speech from the list of things we are trying to make our general-intelligence AIs able to do.
Well put, but I don't see why all aliens must lack the ability for human-like mirroring. Some aliens may have had experiences and developments in their evolutionary past that are similar to human ones. This is what you need to show is impossible, and I don't think it can be shown in an a priori manner.
I'm not actually saying that no aliens are similar enough to use mirroring with us - just that our coming into contact with those particular, very rare aliens would be so improbable that in practice it will never happen. Although, I suppose they would never need to be in live contact with us in order to learn our language. Still, I think those aliens would be so rare that even the recordings we leave behind will never be discovered by them.
The text even specifies that "unless the aliens are programmed almost exactly like us".
The point is that many objects in the world, even given some differences in senses, are commonly perceivable, and many of the problems faced by different kinds of life overlap. So it makes sense that the vocabulary and language of those organisms, if they exist, would also overlap enough to allow communication. I don't see why an alien that can sense and perceive us and our surroundings, and ascribe value to those different things, couldn't communicate with us in terms of those things.
Perhaps of some relevance is our ability to "understand" animals. I don't know how much we've progressed in the field of animal communication, but there are various clearly unambiguous expressions, e.g. a dog's growl, that we seem to have understood. Whether we can extrapolate from animal-human communication to human-alien exchanges is an open question.
Personally, if the universe is really as uniform as we assume, then language would be either visual or audio based, which narrows the possibilities sufficiently to permit alien-human communication.
However, as you mentioned, we know for a fact that human languages are mutually unintelligible, and the distance between human languages is likely much, much smaller than that between human languages and alien languages.
If I understand what you mean by "mirroring", it plays an important part when the subject of discussion is privileged in some sense, i.e. there exists a certain association that isn't common knowledge, and it's that particular link you want to convey. Under such circumstances communication can break down, but these are rare occasions; otherwise, how on earth would people be able to make sense of each other? Civilization would collapse if this problem were just a tad more common.
Unfortunately, depending on your outlook, "important" discourses are highly susceptible to the "mirroring" problem. For instance, in difficult subjects we need to make the right associations, and that may be hard, especially for novices and sometimes even for experts.
So you're right that alien-human communication may be harder than imagined, but I'm going to bet my money on the "higher" intelligence of ET to see us through that roadblock.
The mirroring isn't just about associations, which can be learned. It's also about the way things are processed by the brain of your species. For example, if your brain processes visual information by prioritizing colours first and then using that information to find lines of contrast, the resulting associations in your system will differ from those of systems which use brightness to find lines of contrast. And when the millions of such systems that create a human mind end up creating our particular associations, the probability that an alien will have a system capable of learning similar enough associations for the mirroring we use in our language is almost zero.
This is why we can't just define the correct associations for a given word or phrase and expect an AI or another species to be able to use it. The underlying programming that ends up choosing those associations in any given context is, in humans, as complex as the human mind itself, and therefore we don't even know it ourselves. And therefore we can't teach it.
A good point. But we have to understand that our evolutionary history with animals is not just similar - it is for the most part the exact same history. And it is also a history in which we share a common environment, where evolution has simply created ways for different species to communicate things like "danger", "no threat" or acceptance to each other.
Because of this, we can "understand" animals and communicate with them about certain simple things through this simple interspecies mirroring. (I'm not even sure this is mirroring, since the same associations don't seem to arise from similar programming, but because we have learned them from other sources.) Nothing as complex as our language could be used between beings with such major differences in programming, though.
If that's the case, then evolution on other planets would also proceed in a similar enough way that the communication systems of all life in the universe would converge rather than diverge. This would mean that, contrary to your argument, the "mirroring" abilities of lifeforms in the universe may not be so radically different from each other as to render communication impossible.
The problem isn't that evolution doesn't cause things to converge on large scales. The problem is that evolution never creates any kind of "ultimate" or "perfect" design. Different ways of processing information can work better in different environments. They can work differently if things defined very early in evolution, like the replication mechanisms of cells, are different. They can simply be non-optimal vestiges of earlier evolution. And many times different systems can all work just as well - making no difference to evolution, but still changing the particularities of how associations work in your species (assuming that the species even uses association-based communication).
Combine this with the fact that our language requires extremely similar associations to occur. When basic things like "shape" start to mean fundamentally different things simply because one system uses colour to define contrasting lines and another uses brightness (which is not an inferior method in many cases), it doesn't take much of the programming being different. Even a fraction of a percent of difference in programming can reliably cause enormous changes in the end result. This is why complex mirroring requires such precise similarity from the systems that use it.
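The colour-versus-brightness point can be sketched as a toy program. This is purely illustrative, not a model of real vision: the pixel data and the threshold are made up for the example. Two "perceivers" segment the same strip of pixels, one by hue difference and one by brightness difference, and end up with different boundaries, i.e. different "shapes".

```python
# Each pixel is a hypothetical (hue, brightness) pair on a 0-10 scale.
strip = [(1, 9), (1, 2), (5, 2), (5, 9), (9, 9)]

def edges(pixels, channel, threshold=3):
    """Mark a boundary wherever adjacent pixels differ enough in one channel."""
    return [abs(a[channel] - b[channel]) >= threshold
            for a, b in zip(pixels, pixels[1:])]

hue_edges = edges(strip, channel=0)         # "colour-first" perceiver
brightness_edges = edges(strip, channel=1)  # "brightness-first" perceiver

print(hue_edges)         # [False, True, False, True]
print(brightness_edges)  # [True, False, True, False]
```

Neither perceiver is wrong about the input; they simply carve it up differently, so any associations built on top of those segments diverge from the start.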
I very much disagree that our definition of general intelligence should be tied to the Turing test. That would be like defining "a car" to mean only those things that are nearly exactly the same as a Volkswagen Beetle, since fluent human speech requires one to be capable of reproducing human programming almost exactly. Even a human with all of our quirks and flaws removed could not speak with us fluently - he would be confused by most of the associations we make with our language, and for the most part could use it only through definitions and logic, which is not mirroring. Are you saying that a human without our quirks and flaws is not intelligent?
If a system is capable of gathering information about its environment and making predictions based on it, and if it is capable of independently creating complex technologies and solutions based on that information, and of generally doing the things that we humans do with our "intelligence", then it is intelligent whether or not it can speak a human language.
As for the future: never say never. I saw 'Arrival', so I know Amy Adams will know how to communicate with the aliens, if no-one else can.
There's enough elbow room in convergent evolution to make inter-species communication impossible, and in fact there have been no recorded cases of such communication. Each species seems confined to its respective domain as far as language is concerned.
However, if there's anything in favor of communication still being possible, it's the shared environment. Arguably, hydrogen on Earth is identical to hydrogen anywhere else in the universe. In fact, this assumption was used in an attempt at alien communication: the golden record on the Voyager spacecraft.
Recall that in the Turing test, a human evaluator has to decide, purely on the basis of reading or hearing a natural language dialogue between two participants, which of the participants is a machine. If he cannot determine the identities of the participants, the machine is said to have passed the test. Understood narrowly, as referring to a particular experimental situation, yes, the Turing test fails to capture the broader notion of intelligence. But understood more broadly, as an approach to the identification of intelligence, the Turing test identifies, or rather defines, intelligence pragmatically and directly in terms of the behavioural propensities that satisfy human intuition. The test therefore avoids metaphysical speculation as to what intelligence is or is not in an absolute sense, independent of human intuition.
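The setup just described can be sketched as a toy program. The participant functions here are trivial stand-ins invented for the example; in the real test they would be an actual person and a candidate machine, and the evaluator would be a human asking questions interactively.

```python
import random

def human(question):
    return "I'd rather not say."      # stand-in human participant

def machine(question):
    return "I'd rather not say."      # stand-in machine, indistinguishable here

def imitation_game(evaluator, questions):
    """The evaluator reads two anonymous transcripts and names the machine.
    Returns True when the machine is correctly identified."""
    participants = [("human", human), ("machine", machine)]
    random.shuffle(participants)      # hide the identities
    transcripts = [[answer(q) for q in questions] for _, answer in participants]
    guess = evaluator(transcripts)    # index the evaluator believes is the machine
    return participants[guess][0] == "machine"

# With indistinguishable answers, a fixed guess is right only about half the
# time over repeated games, which is exactly the "machine passes" condition.
random.seed(0)
wins = sum(imitation_game(lambda t: 0, ["Are you a machine?"]) for _ in range(1000))
print(wins)  # roughly 500
```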
Yes, and that is exactly a form of communication that doesn't use mirroring - a logical language based on definitions. Definitions don't need mirroring, since they mean the same thing regardless of what you associate with them. And that's what our communications with aliens and AIs will be like: making definitions and saying things strictly in terms of those definitions. It's much slower, and the things we don't know how to define by purely logical means become nearly impossible to talk about.
Just curious, what exactly do you mean by "mirroring"?
If the "natural language" is specifically defined not to use mirroring, I might agree with the broader definition of the Turing test. Mirroring would always give an advantage to the human participant, since the evaluator, being programmed in such a similar way, would understand his words better.
But no - even then, the test simply doesn't work as anything but a way of finding things that can reproduce the particular way humans are programmed. It is much harder to replicate the behavior of a thing that is on your level or lower than it is to simply be on its level or higher. The test just can't be defined in any way where a human evaluator decides which participant is human: mirroring makes that too easy no matter how intelligent the other participant is, and no matter what language is used, since every kind of expression causes associations in the human mind.
Under this test, a system which can do everything a human can, except predict some particular associations humans get from specific phrases in specific contexts for reasons even they don't know, would literally not pass. Even if it solved every big problem we humans have not yet solved and explained the reasons for its own goals, it would not pass the Turing test, since the evaluator could identify it as the machine.
Yes the Turing test is anthropomorphic, but why is that a problem in the absence of an 'objective' alternative?
Not even a logical language can be identified without mirroring. Recall Wittgenstein's example of an alien tribe stamping their feet and grunting in a way that is compatible with the rules of Chess. Only if we recognised their culture as being similar to ours might we assert that they were playing Chess.
Quoting sime
Fame has this mystical quality of turning shit into gold. I think this is what the alchemists were looking for all along.
Often the difference between something being profound or crazy is the person saying it.
The difference between great and mediocre contemporary art is the artist who made it.
My clock is intelligent. It can tell me the time.
"Prediction" seems the wrong concept to apply to language; I thought that was an astrologer's domain. Language is about information, isn't it? And while that may be useful for making predictions, language itself is solely about transmitting information, so your version of "mirroring" seems a bit off the mark. Perhaps you'll enlighten me.
:rofl:
Yes, and it makes us wonder if we're mistaking one for the other, which I think can happen in only 2 ways - and what a coincidence that number 2 means shit.
When writing "predict", I actually considered using the word "evaluate", but it felt a little off. I agree that I probably should have used "evaluate" or "judge" instead. What I meant was: if you consider something to be some way because you are some way, you are using mirroring. In our communication we need a way to evaluate what someone means with their language, and we usually use a lot of mirroring to make those evaluations.
It shows that the need for Gods persists in modern society.
I think your OP should read "Why aliens will never learn to understand our language", for understanding precedes speaking.
Every new born child is an alien.
Whether or not a particular Turing test is appropriate in a given situation is largely a question of the breadth of the test. For example, if testing whether a computer "really" understands Chess, should the test be very narrow and concern only its ability to produce good chess moves? Or should the test be broad enough to include even the computer's ability to produce novel metaphors relating chess to the human condition?
Personally, I don't interpret the spirit of the Turing test as making or implying ontological commitments regarding how AI should be programmed or trained, or as to how intelligence should represent sensory information with language, or even as to what intelligence is or whether it is ultimately reducible to metrics. Neither do I understand the spirit of the Turing test as being prescriptive in telling humans how they ought to judge a participant's actions. Rather, I understand Alan Turing as very modestly pointing out the fact that humans tend to recognise intelligence in terms of situationally embedded stimulus-response dispositions.
In other words, the specifics of what goes on inside the 'brain' of a participant is considered to be relevant only to the functional extent that the brain's processes are a causal precondition for generating such situationally embedded behavioural repertoires; the meaning of language and intelligence being undetermined regarding the implementation of stimulus-response mappings.
Indeed, an important criterion of intelligence is the ability to generate unexpected stimulus-responses. Hence any formal and rigid definition of intelligence solely in terms of rules - whether internal, describing computational processes inside the brain, or situational, in terms of stimulus-response mappings - would be to a large extent an oxymoron.
The slug that crawls over my carpet and leaves a trail behind in the morning is intelligent because I haven't been able to find out where he hides himself. Or maybe I am just dumb.
For a good reason or bad?
Do we take aspirin for a good reason or bad? Both I think. Good that we have it, bad that we need it.
If given a choice would you adopt atheism because of the bad reasons or become a theist for the good reasons?
If I could, I would prefer to believe in a benevolent God.
:up:
Wouldn't that be great? Like having super powerful parents. Anytime you need something you can go and binge off them and stay a child for the rest of your life.
Why couldn't you have a fluent conversation? I mean, as humans, we can appreciate how valuable a bone toy is to a dog, or how essential a nest is to the life of a bird. Surely, if we could converse, we could comment on those things even though they aren't associations we hold in common. We can see and understand associations that are unrelated to us. Why couldn't a hypothetical intelligent extraterrestrial capable of learning about us do the same, and why couldn't we do the same with them?
You're kinda hitting on the reason we have communication problems with other humans: different mental associations. I'd say that no two people have exactly the same associations, not when even moderately complex concepts are involved. The more unlike the people are, the less effective the communication. I agree that with aliens the differences would be stark. On the other hand, it seems that some communication would be possible - I'd expect there would still be recognizable referents for objects and actions, and the relations between them. Discussion of art or politics would probably be hopeless.