You are viewing the historical archive of The Philosophy Forum.

How did Google use Wittgenstein to redefine meaning?

Shawn June 05, 2021 at 18:09 4550 views 17 comments
In your opinion what kind of relationship do you think exists between Wittgenstein and Google?

I once read a (now lost) scientific article claiming that Wittgenstein had a prominent influence on how Google organizes search results and even translates from one language to another.

Here is a recent article on Google and Wittgenstein.

https://qz.com/1549212/google-translate-is-a-manifestation-of-wittgensteins-theory-of-language/

With a better frame of mind, I was wondering if Google utilizes meaning as use, given its enormous knowledge about how language is utilized by its users, and whether it then organizes certain search results on that basis.

Comments (17)

Banno June 06, 2021 at 03:58 #546989
Reply to Shawn Ah, an excellent OP.

One puzzle may be that in Wittgenstein meaning is pretty much replaced by use in a form of life; that is, it is not to be separated from the everyday activities in which you and I engage. But arguably that is what Google does in abstracting a vector representation of a word.
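As a toy illustration of what "abstracting a vector representation of a word" from use might look like: the sketch below builds word vectors from simple co-occurrence counts. This is an illustrative simplification, not Google's actual method.

```python
def cooccurrence_vectors(sentences, window=2):
    """Build toy word vectors by counting which words appear within
    `window` positions of each other."""
    vocab = sorted({w for s in sentences for w in s.split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = {w: [0] * len(vocab) for w in vocab}
    for s in sentences:
        words = s.split()
        for i, w in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[w][index[words[j]]] += 1
    return vectors

corpus = [
    "the dog likes walks",
    "the cat likes naps",
    "the dog likes naps",
]
vecs = cooccurrence_vectors(corpus)
# "dog" and "cat" end up with similar vectors because they occur in
# similar contexts -- a crude rendering of meaning-as-use.
```

Words used in similar contexts ("dog" and "cat" here) get similar vectors, which is the sense in which such representations are distilled from use rather than from definitions.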

It will be interesting to see how the divide between Chomsky and Bengio plays out - the link in your cited article is unfortunately broken.
Shawn June 06, 2021 at 17:04 #547079
Quoting Banno
One puzzle may be that in Wittgenstein meaning is pretty much replaced by use in a form of life; that is, it is not to be separated from the everyday activities in which you and I engage. But arguably that is what Google does in abstracting a vector representation of a word.


I think that's pretty much true. Yet, what do you think about their hierarchical organizing? Is it really science or facts that get the most views in their vector's directionality? And even more interesting, what about saying and showing for Google Images search results?

Quoting Banno
It will be interesting to see how the divide between Chomsky and Bengio plays out - the link in your cited article is unfortunately broken.


Care to elaborate?

Here are some more links, regarding Natural Language Processing and Wittgenstein:
https://towardsdatascience.com/neural-networks-and-philosophy-of-language-31c34c0796da
Shawn June 06, 2021 at 17:11 #547081
The link in the OP is for paid membership, sorry, can't do anything about that.

I still can't find them; my investigations into Google and Wittgenstein turn up so few results!
Shawn June 06, 2021 at 17:13 #547082
OK, here they are:

https://slate.com/technology/2011/10/google-translate-will-google-s-computers-understand-languages-better-than-humans.html

https://slate.com/human-interest/2015/09/take-a-wittgenstein-class-he-explains-the-problems-of-translating-language-computer-science-and-artificial-intelligence.html
Banno June 06, 2021 at 21:09 #547171
Quoting Shawn
The link in the OP is for paid membership,


It was this: https://amplifypartners.com/machines-that-dream-an-interview-with-yoshua-bengio/

linked from:
but deep learning pioneer Yoshua Bengio has noted that deep learning so far entirely contradicts these theories.


The theories claimed to be contradicted are those of Noam Chomsky.

Now the notion that "certain features of language, such as grammar, are biologically and innately rooted in the mind" is a Kantian one; that is, it's a transcendental argument, along the lines that we somehow get started on grammar, grammar consists in rules, therefore there must be some innate rules. Kant used similar thinking to argue that time and space somehow precede experience, and various other things.

The logical structure of transcendental arguments is inherently fallible. They require a leap of faith. I don't accept them in Kant, but had given some consideration to their use by Chomsky. There seems to be potential for language to be in some way inherent. The considerations in the article you cite lead away from that view, and taken more broadly away from Kant as well as Chomsky.

More broadly, this thread is relevant to those who hold that words have a definition that gives their meaning - there are far too many of them on this forum; and to those who hold that what philosophers need to do is to find the essence of things. Google shows the poverty of both positions.

Oh, and the Chinese Room - Google might be showing how the room could be constructed without a rule book... showing that Searle's basic insight, that language use is not merely syntactic, is correct, but without dismissing machine intelligence.

All stuff worth considering.

Writing this in a rush, and so without detail. But there's lots here to consider.

Shawn June 06, 2021 at 22:16 #547203
Here's a link that works:

oreilly.com/content/machines-that-dream/

  • Natural language processing has come a long way since its inception. Through techniques such as vector representation and custom deep neural nets, the field has taken meaningful steps toward real language understanding.
  • The language model endorsed by deep learning breaks with the Chomskyan school and harkens back to Connectionism, a field made popular in the 1980s.
  • In the relationship between neuroscience and machine learning, inspiration flows both ways, as advances in each respective field shine new light on the other.
  • Unsupervised learning remains one of the key mysteries to be unraveled in the search for true AI. A measure of our progress toward this goal can be found in the unlikeliest of places—inside the machine’s dreams.


Connectionism and Searle seem intertwined. I have come to the conclusion that it seems more like redundancy theories (which inherently rely on Chomsky or a generally accepted syntax to reinforce) and then the correspondence theory to make Connectionism work out.

Just reading it in more detail.
Shawn June 06, 2021 at 22:17 #547205
Quoting Banno
Oh, and the Chinese Room - Google might be showing how the room could be constructed without a rule book... showing that Searle's basic insight, that language use is not merely syntactic, is correct, but without dismissing machine intelligence.


Yeah, I can see how this is true, and think it makes sense.
Shawn June 07, 2021 at 21:33 #547597
The Chinese Room is some kind of Turing oracle, no, @Banno?
Banno June 07, 2021 at 21:50 #547604
Reply to Shawn That change might work.

Searle used the Chinese room to argue that there was more to meaning than could be captured by mere syntax. A bloke in a room with a book of rules that could translate any piece of Chinese text into English does not understand Chinese.

Does Google Translate understand Chinese?
bongo fury June 07, 2021 at 22:34 #547626
Quoting Banno
Searle used the Chinese room to argue that there was more to meaning than could be captured by mere semantics.


If by "mere" you mean fake.

Quoting bongo fury
I wish I could locate the youtube footage of Searle's wry account of early replies to his vivid demonstration (the chinese room) that so-called "cognitive scripts" mistook syntax for semantics. Something like, "so they said, ok we'll program the semantics into it too, but of course what they came back with was just more syntax".


Never found the clip, but I don't think I badly misquote.

Can Google Translate play the human (and genuinely semantic) game of agreeing which words are (pretended to be) pointed at which objects? (Or even more abstruse pretences about speech acts?) One doubts it, as yet.
Pfhorrest June 08, 2021 at 07:03 #547745
Quoting Banno
more to meaning than could be captured by mere semantics


I think you mean syntax? Searle’s mantra was that syntax is not enough for semantics: being able to manipulate symbols successfully doesn’t equal understanding the meaning of them.
Banno June 08, 2021 at 07:45 #547758
Reply to Pfhorrest Yep - written in a hurry.
creativesoul June 09, 2021 at 02:03 #548141
Reply to Shawn Reply to Banno

Interesting stuff!

:smile:
Antony Nickles June 09, 2021 at 07:41 #548194
Reply to Shawn Quoting Shawn
I was wondering if Google utilizes meaning as use, given their enormous knowledge about how language is utilized by its users


I think it's important to unpack the idea of "utiliz[ing] meaning as use"; which here I take comes from monitoring how language is used to understand what people mean, or at least want. First, it might help to see this is trading one thing for another: "meaning as[/is] use", is the same as meaning as... say, metaphysical yada yada (positivism). Wittgenstein is looking at the way we picture meaning; he wants us to look at why we picture it that way. So a 'use' is something we are asked to see, not a way to explain language. A word (or expression, or concept) can have different, say, categories, Wittgenstein calls them "senses"--e.g., "knowledge" as information, knowledge as skill, or knowing as recognizing something in another. These are not different "meanings", but the associated contexts of our lives. So 'use' is not our language being manipulated, nor the job to which it is put--as if we always do something, or that we control (i.e., 'use') language--as the article quotes the Stanford Encyclopedia claiming that Wittgenstein wants us to see "the variety of uses to which the word is put." That language is put to use inverts its public meaningfulness with our wants and desires, trading history and the ways of the world for intention and personal causality.

Wittgenstein has a method to see what part of our life an expression touches on--which 'sense'; which 'use'. The various contexts in which words can (or can not) be: appropriate, fitting, recognized. This is not a fixed, singular, or limited connection which makes the "meaning" of words understood. It is a new kind of philosophical posture (Witt says "attitude"); to look at an expression and differentiate one 'sense' from another forces us to examine, and in the process become aware of, our ordinary criteria, judgments, expectations, implications, etc., embedded in the life we participate in; thus, to know ourselves. To know what matters is to know how something can be meaningful.

Now is this what Google is doing? I think the article is justified in saying the perspective is Wittgensteinian in that it is turning away from: word = meaning (like a definition or corresponding object), but it appears to want to replace 'use' there as a way language has meaning. As if people using language in certain ways allows it to have meaning. This description fits Google's framework because it must work backwards from the way words are 'used' in sentences, i.e., what goes before and after what, in order to try to flesh out the different contexts of our expressions. But it is forced to do this, whereas we are born into the world already; we are trained in and absorb the ways and failures of it. Its vast variability and evolution make it impossible for Google (or math) to map, describe, or encompass. But what Google is doing is not a substitute definition; the important part for it is the prediction and association. It is guessing what you are going to say next, but just not in the same way we have an expectation of the appropriate next expression given the context.

Let's leave it that our expression associates us with its uses which come from the situation we are in. Again, these different uses are not the 'meanings' of the words--except in the way a veiled threat or insincere apology or insecure boasting have meaning. The world is meaningful to us. The attempt to construct context from leftover signs reduces reading to morbidly sifting through evidence for a world already dead. The outcome narrows because the computer is without vision; it is facing the wrong way. Its experiences (its historical attempts) are in isolation from the human experience. Predictive text is a time saver, amazingly apt at finding the appropriate thing to say, taking the words right out of our mouths, but we still wouldn't call this writing without minimizing the endeavor (which some are happy to do).
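The predictive-text point can be made concrete with a toy bigram model that guesses the next word purely from counts of what followed it before. This is a deliberately simple sketch (the training sentence is made up, echoing the pub examples elsewhere in the thread); real predictive keyboards use far richer models.

```python
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    follows = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Suggest the continuation most often seen after `word`."""
    options = follows.get(word.lower())
    return max(options, key=options.get) if options else None

model = train_bigrams("let us go down the pub and down the road to the pub")
# predict_next(model, "the") suggests "pub": pure association from past
# use, with no expectation of what would be appropriate in context.
```

The model "takes the words right out of our mouths" only in the statistical sense: it has frequencies where we have situations.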
Kenosha Kid June 09, 2021 at 08:02 #548198
Quoting Banno
But arguably that is what Google does in abstracting a vector representation of a word.


Sorry, this is late and may now be irrelevant, but Google doesn't store vector representations of words in isolation but rather in their semantic context. So the vector for "dog" in:

"My dog likes walks and belly rubs"

is different to the vector for "dog" in:

"Spielberg's Lincoln (aka Amistad 2) is an absolute dog of a film"

Quoting Banno
Searle used the Chinese room to argue that there was more to meaning than could be captured by mere syntax. A bloke in a room with a book of rules that could translate any piece of Chinese text into English does not understand Chinese.

Does Google Translate understand Chinese?


I think the claim that Google has no such rule book is over-egging a little bit. An equivalent Chinese argument might be: you feed in Chinese texts and their English translations; the man in the room knows neither English nor Chinese, and constructs rules on how and when to translate Chinese symbols into English ones, then attempts to translate the Chinese text according to his rules; when the man starts outputting English texts sufficiently similar to the ones fed in, you stop feeding him the English translations.

I'd say that since the man does not know English, he doesn't know the meaning of any of the symbols he translates, so does not learn Chinese.
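The rule-constructing man described above can be sketched as dictionary induction from sentence-aligned text, a drastically simplified cousin of how statistical translation models were trained. The data and scoring here are made up for illustration; this is not Google Translate's method.

```python
from collections import defaultdict

def induce_dictionary(pairs):
    """Guess word-for-word translations from sentence-aligned pairs by
    counting co-occurrences, normalised by target-word frequency."""
    co = defaultdict(lambda: defaultdict(int))
    tgt_total = defaultdict(int)
    for src, tgt in pairs:
        tgt_words = tgt.split()
        for t in tgt_words:
            tgt_total[t] += 1
        for s in src.split():
            for t in tgt_words:
                co[s][t] += 1
    # Score relative to how common each target word is overall, so that
    # function words like "the" don't swamp every source word.
    return {
        s: max(tgts, key=lambda t: tgts[t] / tgt_total[t])
        for s, tgts in co.items()
    }

# A tiny, invented French-English "parallel corpus".
pairs = [
    ("le chien dort", "the dog sleeps"),
    ("le chat dort", "the cat sleeps"),
    ("le chien mange", "the dog eats"),
]
rules = induce_dictionary(pairs)
# The procedure recovers plausible rules ("chien" -> "dog") from the
# aligned pairs alone, without understanding either language.
```

The induced rules work without the inducer knowing what any symbol means, which is exactly the point of the thought experiment.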
Banno June 09, 2021 at 08:13 #548202
Quoting Kenosha Kid
Google doesn't store vector representations of words in isolation but rather in their semantic context.


Sure. I don't think I suggested otherwise.
unenlightened June 09, 2021 at 08:55 #548206
New Zealanders have this annoying habit of inflecting every sentence at the end to make it sound like a question. In Wales, the equivalent is achieved by tacking an actual question onto the end of every statement. The favoured question varies, by region. Thus in Cardiff, one will hear "Let's go down the pub, is it?" whereas, in the Valleys, it would be, "Let's go down the pub, aren't we?" Here in the north, it would be "let's go down the pub, aye?" or a little further East "Let's go down the pub, d'you know what I mean?"

"D'you know what I mean?" is particularly annoyingly redundant and of course, impossible to answer. The best I can translate this into philosophistry is something like "But I won't bite your head off if you disagree." In the mountains, this is important information, but not a question that needs answering, d'you know what I mean?

Words don't have meaning, rituals do. I heard it on a program on BBC 2.