Philosophical Computer
Describe how you might program a computer, or robot, to philosophize, or mimic a philosopher - such that it would be able to fool a group of experts on a standardized test. I envision a test similar to the Turing Test for intelligence, but more in line with what this Philosophy Forum would consider a good test. I'm looking for input on both: how to program the computer, or how to test the computer.
Comments (42)
Computer: a shop window!
Quoting Don Wade
No, it's a dialogue typical of a Turing test - and an indication of what you're up against trying to program a computer to do philosophy. A great deal of what is meant is unspoken; or ambiguously expressed, and a computer doesn't have the real world embodied experience to be able to discern that buying a puppy is something someone might do, whereas, buying a shop window is not.
Buzzwordy bullshit (*) taken for profundity, which is all any such philosophical chatbot could ever produce.
(*) I use the word in the sense of Harry Frankfurt, and not as a barnyard epithet. "Speech intended to persuade without regard for truth." Since machines don't do semantics, that's all that could be output by any such computer program as you propose.
https://en.wikipedia.org/wiki/On_Bullshit
Seriously, dude, this is contradiction at its best!
Philosophize and standardized test? Which one? You can't have both!
Quoting Don Wade
Yeah! Maybe your philosophical computer can also make remarks dripping with obvious sarcasm! That's always good in a debate!
Praise the Lord for that distinction. :wink:
And therein lies the misplaced belief about philosophy. Philosophy thrives in pointing out distinctions, in defining a domain, in laying foundations, even in definition. Anyone who proclaims alignments and agreements in just about anything is probably lazy.
So in a philosophical discussion the computer must find which meaning and definition the human philosopher intended when a word is used. Take a philosophical statement: the computer must know all the meanings and definitions of each word and then (somehow) detect which meaning of each word the human philosopher had in mind in that statement. So it's a lot of work just to give each individual word all its possible meanings, and then hard work (perhaps impossible?) for the AI to find which of those meanings the human philosopher is choosing at that instant.
Another way this might work is if the AI asks for a definition of each term (kind of annoying but ok).
Let's say a statement from the human has three important terms and the AI asks for a definition of each. But then there needs to be a relation between those three terms in the AI's system for it to give a somewhat acceptable response, which is still better than just choosing a random meaning for each term.
I don't know much about computing, but this seems to me a lot of work. Still interesting, though.
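The meaning-matching idea described above resembles the classic Lesk approach to word-sense disambiguation: pick the sense whose dictionary gloss overlaps most with the words surrounding the ambiguous term. Here is a minimal sketch; the tiny sense inventory (`SENSES`) and the stopword list are invented purely for illustration, not taken from any real dictionary:

```python
# Toy Lesk-style word-sense disambiguation.
# The sense inventory below is a made-up example, not a real lexicon.

STOPWORDS = {"a", "an", "the", "of", "or", "for", "in", "to", "is"}

SENSES = {
    "ground": [
        ("soil",       "the surface of the earth; soil or land"),
        ("reason",     "a basis or reason for a belief or argument"),
        ("electrical", "a conducting connection to the earth in a circuit"),
    ],
}

def content_words(text):
    """Lowercase a string and drop common function words."""
    return set(text.lower().split()) - STOPWORDS

def disambiguate(word, context):
    """Return the sense label whose gloss shares the most content words
    with the surrounding context."""
    ctx = content_words(context)
    best_label, best_overlap = None, -1
    for label, gloss in SENSES[word]:
        overlap = len(ctx & content_words(gloss))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

print(disambiguate("ground", "what is the ground or basis of your argument"))
# -> reason
```

Even this crude overlap count picks the "reason" sense rather than "soil", which hints at why the full problem is so hard: real philosophical usage rarely hands the machine such convenient overlapping words.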
In a Kierkegaardian sense, the inherent dynamics of being (philosophy) are such that it will overflow any box (definition) in which you try to contain it.
There can be no philosophy if the definition of philosophy is not itself an issue for philosophy.
And does not the entire project rest upon the unstated presumption that a particular type of entity (human?) is uniquely situated to decide what is and what is not philosophy?
Interesting topic.
I believe we could say the same about "truth". How well can anyone define truth in philosophy - yet we still search for it.
Searching for truth is not the same as defining truth.
And defining truth is not the same as defining philosophy.
My point remains the same. I suspect your project would be more worthwhile if you let go of the mistaken belief that its success depends upon a definition of philosophy.
It is your project, you solicited opinions, I provided mine, and I wish you nothing but success.
and what if you don't see it because it doesn't fit your definition?
there is a wide range between having an idea of what you are in search of and having a clear definition of what you are in search of.
searching and defining form an interactive process.
unless you have already been there, you cannot be certain how it will look until you get there.
And why would you want to?
I agree. You and I could not even talk about truth (let alone define it) without having at least a vague and average understanding of truth.
There are other things implied in a "lazy philosopher". C'mon man, we are discussing philosophically here.
Saying, we can bring everything to agreement, for example, violates a lot of philosophical principles.
Absolutely not. But are we talking "vague" now?
I disagree vehemently! Vagueness is a normal feature of philosophical discussion, laziness is not.
Within the realm of philosophical discussion, we can avail ourselves of various treatments for vagueness -- vagueness frequently appears in epistemology and metaphysics. But, laziness is a human condition that existed prior to philosophy and exists outside of it.
Quoting Caldwell
There are vaccines being developed in the MRL to protect against this very affliction. :cool:
Surely some of us don't need that vaccine. haha! :blush:
At its core, I think it needs an ability to self-evaluate its own source code or state and build inferences. Fooling is easy. I would say we all use the fooling tactic to some degree in our lives, and that's part of being a philosopher. But if you are suggesting that philosophical discourse is fooling oneself, I disagree for sure.
Also, the Turing test has already been beaten by AI, and not by AI that is groundbreaking. Just by fooling, as you call it. But for me, all this proves is that the Turing test is not very useful.
What we should have learned from it was to ask better questions about what makes human intelligence.
The Turing test is routinely "beaten" by plain old chatbots. The problem isn't that the chatbots are intelligent, but that humans are easily fooled. If you say "Hello" to a program and it outputs "Hi there, how are you today?", most people are willing to believe they're talking to their neighbor; after all, they never have a deeper conversation than that with their actual neighbor, yet they still credit the neighbor with sentience.
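The "plain old chatbot" being described is essentially ELIZA-style pattern matching: a handful of regex rules with canned replies and a generic fallback. A minimal sketch (the rules here are invented examples, not any real chatbot's rule set):

```python
import re

# ELIZA-style pattern matching: each rule is (regex, reply template).
# Captured groups can be echoed back to simulate attentiveness.
RULES = [
    (re.compile(r"\bhello\b", re.I),      "Hi there, how are you today?"),
    (re.compile(r"\bi feel (.+)", re.I),  "Why do you feel {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def reply(utterance):
    """Return the first matching canned reply, or a generic prompt."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # generic fallback keeps the conversation going

print(reply("Hello"))                # -> Hi there, how are you today?
print(reply("I feel uncertain"))     # -> Why do you feel uncertain?
print(reply("What is philosophy?"))  # -> Tell me more.
```

There is no semantics anywhere in this loop, which is exactly the earlier point about bullshit in Frankfurt's sense: the output is produced without any regard for truth, yet shallow exchanges like these are enough to pass casual Turing-style judging.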