You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

Philosophical Computer

Don Wade January 29, 2021 at 22:32 8700 views 42 comments
Describe how you might program a computer, or robot, to philosophize, or mimic a philosopher - such that it would be able to fool a group of experts on a standardized test. I envision a test similar to the Turing Test for intelligence, but more in line with what this Philosophy Forum would consider a good test. I'm looking for input on both: how to program the computer, or how to test the computer.

Comments (42)

180 Proof January 29, 2021 at 23:35 #494469
AlphaGoZeno ...
jgill January 29, 2021 at 23:44 #494476
Maybe this is somehow connected to your question. Maybe not.
fishfry January 29, 2021 at 23:59 #494483
https://en.wikipedia.org/wiki/Sokal_affair
counterpunch January 30, 2021 at 00:11 #494487
Person: I saw some puppies in a shop window - so I bought one! What did I buy?
Computer: a shop window!
Don Wade January 30, 2021 at 01:39 #494514
Reply to jgill Thanks for the post. Nope, not what I was looking for, but still a good article. I believe computers can mimic philosophers well enough to pass a test made specifically to test them - but it will not be easy to program the computer, or test it, unless we can first understand more about how to define philosophy.
Don Wade January 30, 2021 at 01:44 #494515
Reply to fishfry I don't see a philosophical computer as a hoax. Not sure I know what you mean otherwise.
Don Wade January 30, 2021 at 01:46 #494517
Reply to counterpunch A play on words?
counterpunch January 30, 2021 at 01:56 #494520
Reply to Don Wade

Quoting Don Wade
A play on words?


No, it's a dialogue typical of a Turing test - and an indication of what you're up against in trying to program a computer to do philosophy. A great deal of what is meant is unspoken, or ambiguously expressed, and a computer doesn't have the real-world embodied experience to discern that buying a puppy is something someone might do, whereas buying a shop window is not.
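The failure in that dialogue is easy to reproduce. Here is a deliberately naive sketch (the noun list and heuristic are invented for illustration) that binds "one" to the nearest preceding noun - which is exactly how a program ends up buying the shop window:

```python
# A naive anaphora resolver that binds "one" to the most recent noun,
# reproducing the mistake in the dialogue above. The tiny noun lexicon
# is hard-coded for illustration only.
NOUNS = {"puppies": "puppy", "shop": "shop", "window": "window"}

def resolve_one(sentence):
    """Bind 'one' to the nearest preceding noun (a deliberately bad heuristic)."""
    antecedent = None
    for token in sentence.lower().replace("!", "").replace("-", " ").split():
        if token == "one":
            return antecedent
        if token in NOUNS:
            antecedent = NOUNS[token]
    return antecedent

print(resolve_one("I saw some puppies in a shop window - so I bought one"))
# -> "window", not "puppy": the machine lacks the world knowledge to prefer
#    the plausible purchase over the nearest noun.
```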
fishfry January 30, 2021 at 02:06 #494522
Quoting Don Wade
I don't see a philosophical computer as a hoax. Not sure I know what you mean otherwise.


Buzzwordy bullshit (*) taken for profundity, as any such philosophical chatbot must necessarily be.

(*) I use the word in the sense of Harry Frankfurt, and not as a barnyard epithet. "Speech intended to persuade without regard for truth." Since machines don't do semantics, that's all that could be output by any such computer program as you propose.

https://en.wikipedia.org/wiki/On_Bullshit
Caldwell January 30, 2021 at 02:27 #494526
Quoting Don Wade
Describe how you might program a computer, or robot, to philosophize, or mimic a philosopher - such that it would be able to fool a group of experts on a standardized test.


Seriously, dude, this is contradiction at its best!

Philosophize and standardized test? Which one? You can't have both!
Don Wade January 30, 2021 at 03:16 #494546
Reply to Caldwell A point of view may at first sound like a contradiction with another point of view. That in itself is part of philosophy until alignments can be made to show agreement. That's where discussion can help.
Don Wade January 30, 2021 at 03:24 #494553
Reply to counterpunch Good points! Ambiguity is part of philosophy. That's what separates it from science. A philosophical computer would realize that and make use of it in debate.
counterpunch January 30, 2021 at 03:43 #494560
Reply to Don Wade

Quoting Don Wade
Good points! Ambiguity is part of philosophy. That's what separates it from science. A philosophical computer would realize that and make use of it in debate.


Yeah! Maybe your philosophical computer can also make remarks dripping with obvious sarcasm! That's always good in a debate!

jgill January 30, 2021 at 05:29 #494595
Quoting Don Wade
Ambiguity is part of philosophy. That's what separates it from science


Praise the Lord for that distinction. :wink:
Caldwell January 30, 2021 at 06:46 #494606
Quoting Don Wade
A point of view may at first sound like a contradiction with another point of view. That in itself is part of philosophy until alignments can be made to show agreement. That's where discussion can help.

And therein lies the misplaced belief about philosophy. Philosophy thrives in pointing out distinction, in defining a domain, in laying foundation, even in definition. Anyone who proclaims alignments and agreements in just about anything is probably lazy.

Don Wade January 30, 2021 at 13:34 #494646
Reply to counterpunch Good point!
Don Wade January 30, 2021 at 13:36 #494648
Reply to Caldwell A lazy philosopher can still be a philosopher!
TheMadMan January 30, 2021 at 13:45 #494652
Since philosophy is of the brain/mind, an AI may pretend to be a philosopher. As for how to do it, go back to basics: philosophy is a collection of words with different meanings and definitions.
So in a philosophical discussion the computer must find which meaning and definition the human philosopher intended when a word is used. Take a philosophical statement: the computer must know all meanings and definitions of each word and then (somehow) detect which meaning of each word the human philosopher had in mind in that statement. So it's a lot of work just to give each individual word all its possible meanings, and then hard work (perhaps impossible?) for the AI to find which meaning the human philosopher is choosing at that instant.
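That procedure can at least be sketched. The classic cheap version is Lesk-style gloss overlap: score each dictionary sense of a word by how many words its gloss shares with the surrounding statement, and pick the best-scoring sense. The tiny sense inventory below is invented for illustration; a real system would need a full dictionary:

```python
# Naive word-sense disambiguation sketch: pick, for each ambiguous word,
# the dictionary sense whose gloss shares the most words with the context.
# The tiny sense inventory below is invented for illustration.
SENSES = {
    "form": [
        ("shape", "the visible shape or configuration of something"),
        ("document", "a printed document with blanks to fill in"),
        ("platonic", "an abstract ideal of which real things are imperfect copies"),
    ],
}

def best_sense(word, context):
    """Return the sense label whose gloss overlaps most with the context words."""
    ctx = set(context.lower().split())
    scored = []
    for label, gloss in SENSES.get(word, []):
        overlap = len(ctx & set(gloss.split()))
        scored.append((overlap, label))
    return max(scored)[1] if scored else None

print(best_sense("form", "Plato held that every real thing is an imperfect copy of an ideal"))
# -> "platonic"
```

This handles TheMadMan's "which meaning did the human mean" step only crudely; it is exactly the kind of shortcut that produces plausible-looking rather than understood answers.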
Don Wade January 30, 2021 at 14:05 #494658
Reply to TheMadMan Good points, but it seems you are looking at a "scientific method" of examining the discussion. Using your discussion: In philosophy a heavy object will fall faster than a light object. In science (from experiments) we understand both objects fall at the same rate. A philosophical computer needs only to understand the probability of what you might believe when given information. One might even argue we don't know anything, but we believe we do.
TheMadMan January 30, 2021 at 14:20 #494662
Reply to Don Wade If you want to rely on the probability of what one might believe, I don't think it would work in a serious discussion, but it may work in a vague, superficial one. Though I wouldn't care much for a "mumbo jumbo" discussion.

Another way this might work is if the AI asks for a definition of each term (kind of annoying but ok).
Let's say a statement from the human has 3 important terms and the AI asks for definitions; but then there needs to be a relation between those 3 terms in the AI's system for it to give a somewhat acceptable response, which is still better than just choosing a random meaning for each term.

I don't know much about computing, but this seems to me like a lot of work. Still interesting, though.
Arne January 30, 2021 at 14:25 #494663
Reply to Don Wade if you must first define philosophy before you can simulate philosophy, then you will define philosophy in such a way that you can simulate philosophy. And then your computer will be unable to entertain any philosophical notions not contained within its definition. It is the classic garbage-in/garbage-out dilemma.

In a Kierkegaardian sense, the inherent dynamics of being (philosophy) are such that it will overflow any box (definition) in which you try to contain it.

There can be no philosophy if the definition of philosophy is not itself an issue for philosophy.

And does not the entire project rest upon the unstated presumption that a particular type of entity (human?) is uniquely situated to decide what is and what is not philosophy?

Interesting topic.
Don Wade January 30, 2021 at 14:53 #494683
Reply to Arne

Quoting Arne
There can be no philosophy if the definition of philosophy is not itself an issue for philosophy.


I believe we could say the same about "truth". How well can anyone define truth in philosophy - yet we still search for it.
Arne January 30, 2021 at 15:00 #494688
Quoting Don Wade
How well can anyone define truth in philosophy - yet we still search for it.


Searching for truth is not the same as defining truth.

And defining truth is not the same as defining philosophy.

My point remains the same. I suspect your project would be more worthwhile if you let go of the mistaken belief that its success depends upon a definition of philosophy.

It is your project, you solicited opinions, I provided mine, and I wish you nothing but success.
Don Wade January 30, 2021 at 15:05 #494690
Reply to TheMadMan I believe a good philosophical computer would first need to be a good psychologist. It is important to know the probability of someone accepting your statement because it may seem true to them. Example: If I make a statement: "Columbus discovered America". Many people may believe that to be true and would accept the statement. But, it's not true at all. Close, but not true. It is more important in philosophy to have someone believe your statement, rather than have scientific proof.
Don Wade January 30, 2021 at 15:14 #494694
Reply to Arne Searching for truth is not the same as defining truth. How can one search for something they can't define?
TheMadMan January 30, 2021 at 15:20 #494696
Reply to Don Wade A computer cannot be a philosopher or a psychologist; it can only appear to be one through coding. Statements like "Columbus discovered America" are easy for AI to deal with since they are matters of fact (whether correct or incorrect); the difficulty is dealing with opinions and viewpoints, which is what philosophy centers on.
Don Wade January 30, 2021 at 15:26 #494700
Reply to TheMadMan We agree. Now, back to my original post: Can we build a computer that may "seem" to be philosophical? That is, it could fool a board of judges. Please note the difference between seems to be and is - such as in the Turing Test.
Arne January 30, 2021 at 15:27 #494701
Quoting Don Wade
Searching for truth is not the same as defining truth. How can one search for something they can't define?


and what if you don't see it because it doesn't fit your definition?

there is a wide range between having an idea of what you are in search of and having a clear definition of what you are in search of.

searching and defining is an interactive process.

unless you have already been there, you cannot be certain how it will look until you get there.

And why would you want to?

Don Wade January 30, 2021 at 15:33 #494705
Reply to Arne We agree. However, one can be in search of a "vague" concept/idea - and not necessarily have a clear definition.
TheMadMan January 30, 2021 at 15:36 #494708
Reply to Don Wade Oh yes, one can easily build one that "seems", but whether it can fool judges I'm not so sure. I guess it depends on the person the AI is communicating with.
Arne January 30, 2021 at 15:40 #494709
Quoting Don Wade
We agree. However, one can be in search of a "vague" concept/idea - and not necessarily have a clear definition.


I agree. You and I could not even talk about truth (let alone define it) without having at least a vague and average understanding of truth.
Don Wade January 30, 2021 at 16:16 #494723
Reply to TheMadMan Yes, that would define the problem. The same could be true of the Turing Test. It seems "statistics and probability" would be a requirement for the memory of the "bot". All the bot needs to do is "convince the judges" - so to speak - similar to IBM's Watson on the game show Jeopardy.
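A minimal sketch of that Watson-style idea, assuming the bot already has candidate replies tagged with estimated acceptance probabilities (the candidates, scores, and threshold below are made up for illustration): answer only when the best candidate clears a confidence threshold, otherwise hedge rather than bluff.

```python
# "Convince the judges" sketch: like Watson's answer scoring on Jeopardy,
# attach a confidence estimate to each candidate reply and only answer
# when the best one clears a threshold. Scores here are hypothetical.
def pick_reply(candidates, threshold=0.5):
    """candidates: list of (reply_text, estimated_acceptance_probability)."""
    best_reply, best_score = max(candidates, key=lambda c: c[1])
    if best_score < threshold:
        return "Could you clarify what you mean?"  # hedge instead of bluffing
    return best_reply

candidates = [
    ("Truth is correspondence with fact.", 0.62),
    ("Truth is whatever the judges believe.", 0.31),
]
print(pick_reply(candidates))
# -> "Truth is correspondence with fact."
```

The whole scheme optimizes for being believed, not for being right - which is precisely the distinction the thread keeps circling.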
Caldwell January 30, 2021 at 17:01 #494742
Quoting Don Wade
A lazy philosopher can still be a philosopher!


There are other things implied in a "lazy philosopher". C'mon man, we are discussing philosophically here.
Saying we can bring everything to agreement, for example, violates a lot of philosophical principles.
Don Wade January 30, 2021 at 17:05 #494743
Reply to Caldwell Is a vague philosopher a lazy philosopher?
Caldwell January 30, 2021 at 17:09 #494748
Quoting Don Wade
Is a vague philosopher a lazy philosopher?


Absolutely not. But are we talking "vague" now?
Don Wade January 30, 2021 at 19:00 #494791
Reply to Caldwell Vagueness and laziness are both properties that can support a philosopher.
Caldwell January 30, 2021 at 19:29 #494805
Quoting Don Wade
Vagueness and laziness are both properties that can support a philosopher.


I disagree vehemently! Vagueness is a normal feature of philosophical discussion, laziness is not.

Within the realm of philosophical discussion, we can avail ourselves of various treatments for vagueness -- vagueness frequently appears in epistemology and metaphysics. But, laziness is a human condition that existed prior to philosophy and exists outside of it.


Deleted User January 30, 2021 at 19:49 #494815
This user has been deleted and all their posts removed.
jgill January 31, 2021 at 00:35 #494917
Quoting tim wood
Something about the name, "Metaphysics Research Lab," strikes me at the same time as hilarious and terrifying.


Quoting Caldwell
we can avail ourselves of various treatments for vagueness


There are vaccines being developed in the MRL to protect against this very affliction. :cool:
Caldwell February 06, 2021 at 02:44 #497317
Quoting jgill
There are vaccines being developed in the MRL to protect against this very affliction. :cool:


Surely some of us don't need that vaccine. haha! :blush:
Paul S February 18, 2021 at 19:31 #501040
Reply to Don Wade
At its core, I think it needs an ability to self-evaluate its own source code or state and build inferences. Fooling is easy. I would say we all use the fooling tactic to some degree in our lives, and that's part of being a philosopher. But if you are suggesting that philosophical discourse is fooling oneself, I disagree for sure.

Also, the Turing test has been beaten already by AI - and not by AI that is groundbreaking, just with fooling, as you call it. But for me, all this proved was that the Turing Test is not very useful.

What we should have learned from it was to ask better questions about what makes up human intelligence.
fishfry February 18, 2021 at 20:57 #501084
Quoting Paul S
the Turing test has been beaten already by AI.


The Turing test is routinely "beaten" by plain old chatbots. The problem isn't that the chatbots are intelligent, but that humans are easily fooled. If you say, "Hello" to a program and it outputs, "Hi there, how are you today?" most people are willing to believe they're talking to their neighbor, with whom they never have any deeper conversation than that yet credit their neighbor with sentience.