Gödel's Theorem and Artificial Intelligence
Dear fellow philosophers,
recently, I have been interested in arguments against the possibility of strong artificial intelligence arising out of mere syntactic symbol manipulation by a computer. There are several arguments against this possibility, the most compelling of which, I think, is Roger Penrose's argument based on Gödel's Incompleteness Theorem. The argument roughly goes as follows:
1. Assume (for the sake of contradiction) that there is some formal system F that captures the thought processes required for mathematical insight.
2. Then, according to Gödel's second incompleteness theorem, F cannot prove its own consistency (assuming F is in fact consistent).
3. We, as human beings, can see that F is consistent.
4. Therefore, since F captures our reasoning, F could prove that F is consistent.
5. This is a contradiction and, therefore, such a system F could not exist.
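The steps above can be written schematically. Here $\mathrm{Con}(F)$ denotes the arithmetical sentence asserting F's consistency; the notation is mine, not Penrose's, and premise (1) is the contested idealization that F proves exactly what human mathematical insight can establish:

```latex
% Penrose's Godelian argument, schematically (amsmath align* environment).
% Con(F) abbreviates the arithmetical sentence "F is consistent".
\begin{align*}
&\text{(1)}\ \ F \vdash \varphi \iff \text{humans can establish } \varphi
  && \text{(assumption, for contradiction)}\\
&\text{(2)}\ \ \mathrm{Con}(F) \implies F \nvdash \mathrm{Con}(F)
  && \text{(G\"odel's second theorem)}\\
&\text{(3)}\ \ \text{humans can establish } \mathrm{Con}(F)
  && \text{(Penrose's key premise)}\\
&\text{(4)}\ \ F \vdash \mathrm{Con}(F)
  && \text{(by (1) and (3))}\\
&\text{(5)}\ \ \text{(2) and (4) contradict; no such } F \text{ exists.}
\end{align*}
```

Laid out this way, the load-bearing premises are (1) and (3), which is where most replies to Penrose concentrate.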
I'd be interested in starting a discussion about this. A more thorough treatment of this argument can be found here: http://www.deepideas.net/godels-incompleteness-theorem-and-its-implications-for-artificial-intelligence/
Sincerely,
Daniel
Comments (4)
Perhaps you can restate your steps using consistent terms.
I did not read the linked page. The argument posted here has not yet enticed me to do so.
So #2 and #3 occur on different levels.
Step 4 is false: we don't know whether our higher-level reasoning is consistent. A good example is naive set theory, which was built on our common-sense reasoning and turned out to be inconsistent.
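For concreteness, the inconsistency in naive set theory that this comment alludes to is presumably Russell's paradox: unrestricted comprehension lets us form the set of all sets that are not members of themselves, and asking whether that set contains itself yields a contradiction:

```latex
% Russell's paradox in naive set theory.
% Unrestricted comprehension permits defining R below;
% membership of R in itself is then contradictory.
\[
  R \;=\; \{\, x \mid x \notin x \,\}
  \qquad\Longrightarrow\qquad
  R \in R \iff R \notin R.
\]
```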
I've read in various places that Gödel's Incompleteness Theorem invalidates a number of things, but I've never understood how. I thought the theorem only applies to the natural numbers, i.e., the integers greater than or equal to 0. How would that relate to artificial intelligence, or to much of anything else?