You are viewing the historical archive of The Philosophy Forum.

Gödel's Theorem and Artificial Intelligence

deepideas August 30, 2017 at 09:36
Dear fellow philosophers,

Recently I have been interested in arguments against the possibility of strong artificial intelligence arising from mere syntactic symbol manipulation by a computer. There are several such arguments, the most compelling of which is I think Roger Penrose's argument based on Gödel's Incompleteness Theorem. The argument roughly goes as follows:

1. Assume (for the sake of contradiction) that there is some formal system F that captures the thought processes required for mathematical insight.
2. Then, according to Gödel’s theorem, F cannot prove its own consistency.
3. We, as human beings, can see that F is consistent.
4. Therefore, since F captures our reasoning, F could prove that F is consistent.
5. This is a contradiction and, therefore, such a system F could not exist.
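For readers who prefer symbols, the steps above can be sketched in standard proof-theoretic notation. (Con(F) denotes the arithmetized statement "F is consistent"; this formalization is my own sketch of the argument, not a quotation from Penrose.)

```latex
% A sketch of the argument, assuming F is consistent and strong enough
% for Gödel's second incompleteness theorem to apply.
\begin{align*}
&\text{(1) Assume human mathematical insight is captured by such an } F.\\
&\text{(2) G\"odel II: } F \nvdash \mathrm{Con}(F).\\
&\text{(3) Humans can see that } \mathrm{Con}(F) \text{ holds.}\\
&\text{(4) By (1) and (3): } F \vdash \mathrm{Con}(F).\\
&\text{(5) (2) and (4) contradict, so no such } F \text{ exists.}
\end{align*}
```

Note that step (3) is where most critics push back: it asserts that humans can establish Con(F), which is exactly what the replies below dispute.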

I'd be interested in starting a discussion about this. A more thorough treatment of the argument can be found here: http://www.deepideas.net/godels-incompleteness-theorem-and-its-implications-for-artificial-intelligence/

Sincerely,
Daniel

Comments (4)

noAxioms August 30, 2017 at 11:35 #101083
Quoting deepideas
2. Then, according to Gödel’s theorem, F cannot prove its own consistency.
3. We, as human beings, can see that F is consistent.
4. Therefore, since F captures our reasoning, F could prove that F is consistent.
Line 3 is worded entirely differently from the others: you've not stated that F cannot see that F is consistent, nor that humans can prove that F is consistent. So nothing seems to have been demonstrated, and line 4 does not follow.

Perhaps you can restate your steps using consistent terms.
I did not read the linked page. The argument posted here has not enticed me to do so.
Tzimie August 30, 2017 at 15:16 #101102
We prove Gödel's theorem in a higher-level logic, so #2 and #3 occur on different levels.
#4 is false: we don't know whether our higher-level reasoning is consistent. A good example is naive set theory, which was built on our common-sense reasoning and is inconsistent.
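The inconsistency of naive set theory alluded to here is Russell's paradox, which can be stated in one line (standard notation, added for illustration):

```latex
% Russell's paradox: unrestricted comprehension admits the set R below,
% and asking whether R contains itself yields a contradiction.
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```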
T_Clark September 24, 2017 at 19:32 #107918
Quoting deepideas
the most compelling of which is I think Roger Penrose's argument based on Gödel's Incompleteness Theorem.


I've read in various places that Gödel's Incompleteness Theorem invalidates a number of things, but I've never understood it. I thought GIT applies only to natural numbers, i.e. integers greater than or equal to 0. How would that relate to artificial intelligence or much of anything else?
fdrake September 24, 2017 at 19:53 #107921
Human thought is consistent? Human mathematical thought isn't consistent. Within a specific formal system, sure, but there are intuitionist, constructivist and paraconsistent versions of mathematical theories. For example, in paraconsistent mathematics the real numbers turn out to be both countable and not countable, whereas in standard mathematics they are simply not countable. In constructivist mathematics the existence of non-measurable sets cannot be established, whereas such sets exist in classical mathematics. Paraconsistent and intuitionist/constructivist mathematics also disagree on proof by contradiction (fine in paraconsistent mathematics, not fine in intuitionist).