This is the title of a discussion about self-reference
Banno and I had a short discussion about self-reference in another thread. Whenever I think about self-reference I have two responses. First - it’s fun and a bit exciting. You get the feeling that you’ve stumbled on something profound and important. Second - once you look into it, it’s still fun, but it’s clear it’s neither profound nor important.
Here's what the Stanford Encyclopedia of Philosophy (SEP) says:
[i]In the context of language, self-reference is used to denote a statement that refers to itself or its own referent. The most famous example of a self-referential sentence is the liar sentence: “This sentence is not true.” Self-reference is often used in a broader context as well. For instance, a picture could be considered self-referential if it contains a copy of itself (see the animated image above); and a piece of literature could be considered self-referential if it includes a reference to the work itself. In philosophy, self-reference is primarily studied in the context of language. Self-reference within language is not only a subject of philosophy, but also a field of individual interest in mathematics and computer science, in particular in relation to the foundations of these sciences.
The philosophical interest in self-reference is to a large extent centered around the paradoxes. A paradox is a seemingly sound piece of reasoning based on apparently true assumptions that leads to a contradiction. The liar sentence considered above leads to a contradiction when we try to determine whether it is true or not. If we assume the sentence to be true, then what it states must be the case, that is, it cannot be true. If, on the other hand, we assume it not to be true, then what it states is actually the case, and thus it must be true. In either case we are led to a contradiction. Since the contradiction was obtained by a seemingly sound piece of reasoning based on apparently true assumptions, it qualifies as a paradox. It is known as the liar paradox.
Most paradoxes of self-reference may be categorised as either semantic, set-theoretic or epistemic. The semantic paradoxes, like the liar paradox, are primarily relevant to theories of truth. The set-theoretic paradoxes are relevant to the foundations of mathematics, and the epistemic paradoxes are relevant to epistemology. Even though these paradoxes are different in the subject matter they relate to, they share the same underlying structure, and may often be tackled using the same mathematical means.
The text references the importance of self-reference to the foundations of mathematics. I assume it is talking about Russell’s paradox. In computer science, there was a brief discussion of the importance of programs that can modify themselves.[/i]
So, my impression is that most self-reference is useless. It seems cool because it’s about us thinking about ourselves, but there is little of substance there. It has never seemed to me that the liar’s paradox has anything interesting or important to say about truth or language. I’d be interested in hearing about situations where self-referential ideas actually contribute rather than obscure.
Comments (64)
You're probably asking about philosophy, and I can't really help there. However, as someone who knows a bit of programming and mathematics, self-reference can certainly be interesting in those spheres and even sometimes useful (recursive functions provide concise ways to code certain things).
As the article alludes to at the end, things get even more interesting when thinking about self-modification of programs or self-specialising compilers (I've lost a bookmark to an interesting and not too technical blog post about this, maybe I can find it again...)
In terms of mathematics, the book "Vicious Circles" by John Barwise and Lawrence Moss seems to be a good reference for what they call "hyperset" theory, an extension of set theory that allows for self-referencing and circularity. I haven't read much, and it's very dense. Working understanding of set theory required. I wonder if there are any mathematicians here that could break it down for us.
Reflection is not recursion. I can reflect on the past, but I can't change it. Imagination and reflection are closely linked, it's true. And that's an interesting topic in its own right.
Seems like a different approach to that dictum than the usual, ontological one.
Whereas this variant involves a "vicious circle":
Interestingly, we still seem to understand it.
Yes, I am primarily talking about philosophy. I tried to be careful not to be too dismissive of self-reference. I had read that the kind of programming uses you describe are valuable. I guess I'm trying to separate the wheat from the chaff - uses with real value as opposed to just a bunch of gee whiz stuff.
Quoting the affirmation of strife
My attitude toward self-reference in math is ambivalent. First off, I'm good at the math required to be an engineer. That's really different from what we're talking about here. When I look at Russell's paradox, for example, it seems like a trick, yet many mathematicians seem to think it undermines math as a whole. We had a discussion about a conversation between Wittgenstein and Turing a week or so ago. Turing proposed that Russell's paradox undermined math to the point that it might lead to a bridge falling down. That seems goofy to me, but my level of expertise is too limited for me to have any confidence in my judgement.
But yes, the goofiest part of self-reference for me is its use in philosophy. The liar's paradox seems like a little joke that people have decided to take seriously. I can't see how it gives any insight into meaning or truth, as some propose.
This paragraph right at the end of the book gives an idea of the conclusions they draw from their maths shenanigans:
So I think that matches your intuition and it at least gives confidence that the kind of separation you talk about should be possible. I'll need to look into it more to give better examples of "useful self-reference".
I appreciate your input. I didn't start this discussion because I have a particular end in mind. I just want to see where it goes.
Human self-reference
Sarah says:
1. I am a bad, bad girl! (1st person)
2. You are a bad, bad girl, Sarah! (2nd person)
3. Sarah is a bad, bad girl, isn't she? (3rd person)
Linguistic self-reference
4. I am false (1st person) ???
5. This sentence is false (2nd/3rd person?)
Why are self-referential sentences like the liar sentence (5) only in the 2nd/3rd person, while we humans can do the same in 3 different ways (1, 2, 3)?
Another issue:
If I say "this bag is black", I have to actually point at the bag in question. That is to say we need another piece of information ( :point: ) to clarify what "this" refers to.
Consider now the liar sentence "this sentence is false". How do I know "this" refers to the liar sentence itself? Where's the :point: ?
4. This :point: "Paris is on the moon" sentence is false.
5. This :point: "This sentence is false" sentence is false.
Ambiguity?
"This" is not second person.
Your other point is about incomplete information, which is indeed the first hurdle for most "silly" kinds of self-referential paradoxes.
Third person?
:ok:
They say that in later chapters they prove that circularity is not the villain here... I'm way out of my depth though.
But, consideration of that set is useful because:
So, it looks like the value of the liar's paradox or Russell's paradox etc. comes from the insight into how we can or can not formulate truth. The authors give a plain-language summary of Tarski's Undefinability Theorem for Truth:
parentheses added
I kind of get that, but it seems like a joke. A meaningless technicality. I can't see how it tells us anything useful about truth for any other propositions.
For completeness (chuckles), I've just found that the SEP also has an article on non-wellfounded set theory (aka hyperset theory). They have "Vicious Circles" in their references, and a lot of the same topics seem to be briefly covered. Fairly technical, but maybe something useful is there.
As for the relevance of self-reference: it draws attention to the event of language, its taking place. It's the institution, at the level of the proposition, of what is extra-propositional in language. When language takes itself as an object, the separation of language 'here' and object 'there' evaporates: language becomes enthinged, enworlded. Or rather, the always-already enworldedness of language shows itself and stops being obscured, for the briefest of moments. Self-reference is puzzling only to those who want to treat language as a pure, self-enclosed system, sterilized from any imbrication in the world.
I think the fascination with self-referential paradoxes specifically comes from their use as a way to refute arguments, especially in epistemology.
:up:
Cool what you did with the title.
Aww... shucks.
Quoting the affirmation of strife
I'm interested in what you and @StreetlightX have to say about the Russell paradox as opposed to the liar sentence. From what I have seen, mathematicians and philosophers of mathematics claim that the Russell paradox undermines the credibility of mathematics in general. We had a discussion a few weeks ago about a conversation between Wittgenstein and Turing where Turing claimed the inconsistencies in math might cause a bridge to fall down. That seems silly to me, to believe that an anomaly in number theory could contaminate calculus.
What are you guys' thoughts?
Quoting T Clark
I struggle with this idea. I think of mathematics as a concise language for encoding models of reality[1]. The symbols and rules are invented, but what they describe is discovered[2]. Would it make sense to talk about, for example, the "credibility" of the Japanese language?
I think W. has it right: there are only two causes for the bridge to fall down. Either the model (physics) is wrong, or the mathematical rules were not followed. The same reasons for a failure in communication: either you misunderstand what I am talking about, or I am talking gibberish. The first of those problems has nothing to do with language, so we'll move on to the second.
The problem: what should we do if we are presented with contradictory mathematical rules? For the language analogy, this is like finding a contradiction in your Japanese grammar book. On page 24 it tells you to say X in situation Y, but on page 135 (it's not an easy language, you understand) it instructs you to say the opposite, i.e. (not X), in situation Y. Solution: buy a new grammar book.
In addition to what @StreetlightX said about the "enworlded-ness" of language (arising from the fact that it is invented by humans), I would like to then add a second point: language is dynamic. It will evolve. We didn't have mathematical rules for talking about circularity in set theory, so we invented hyperset theory. It just takes a bit of coffee and head-scratching.
That's not to say that contradictions are completely harmless (and circularity is hard to think about, so it can easily lead to contradictions). I think some of Turing's fear was justified. It's not nice to end up in a situation where the rules are contradictory. You have to go back to the drawing board and maybe throw out a lot of work. But I fail to see how someone could even construct a bridge, or anything else, based on contradictory instructions. The best that I could offer would be a stream of colorful language directed at whatever theorist had handed me the instructions (actually, it's more likely that I would be the theorist...)
---
[1]: Is this still controversial? I mean, Einstein called it a language. My first year lecturer did the same.
[2]: Without getting bogged down in ontology, I just mean to say that there is some kind of distinction between these processes.
“This sentence is not true.” I'll just change it to "This sentence is false" for less typing.
Let's look at this from a logic perspective. We could say, "If this sentence is true, then it's false."
A -> ~A
If A is true, then we get A is not true.
A = (A -> ~A)
Now negate the formula, and assume the sentence is false.
~A = ~(A -> ~A) = (A and A)
~A = A
(If I did my logic right, it's been a while)
So if the sentence is false, it's true, and if it's true, it's false. We definitely have a contradiction.
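For what it's worth, the contradiction can be checked mechanically. Here's a minimal Python sketch (the `implies` helper is just my own shorthand for material implication): it enumerates both truth values and shows that no assignment satisfies A = (A -> ~A).

```python
# Material implication: (p -> q) is equivalent to (not p) or q.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# The liar, formalized: A is true exactly when (A -> not A) holds.
# Try both possible truth values for A.
satisfying = [a for a in (True, False) if a == implies(a, not a)]
print(satisfying)  # -> [] : no consistent truth value exists
```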
As we can see, there's something weird going on. But why? Our intuitions say the sentence makes sense, but logically it doesn't, because we're being too general. We realize we've said nonsense by being too imprecise. That's the lesson we can glean: just because we can say or posit an idea in language doesn't mean it makes sense. You've previously posted the question, "What is metaphysics?" Many times people use metaphysics to disguise liar's paradoxes. Ambiguous terms are a great way to hide nonsense terms and conclusions. If you can pick them out, you can ask for clarification.
Solving the liar's paradox can give us a tool to solve other nonsense points while keeping within the spirit of the discussion. Nonsense arguments are often unintentional, and often times hide an underlying meaning that wasn't quite nailed with the language. So I could propose this to someone instead:
"I don't think we're being specific enough with our words. Do you mean perhaps, 'This sentence is a false sentence'? Because at that point, we can look at the sentence and see: no, that is a viable and correct sentence. It is false that that is a false sentence."
Or
Proposal:
A = a sentence
~A = not a sentence
A, therefore
A = ~A
And we can see that it's a contradiction right off the bat, and that A must be a sentence.
Liar's paradoxes are a great teaching tool about the ambiguity of language, but also about seeing through to the intention of a person's argument. When discussing philosophy with others, we should be generous towards the other person's argument. Sometimes we're not just trying to show that a person's argument is viable; we're also trying to see if we can use language correctly to better cast what they are intending to argue as well.
That's the heart of the argument. Many people, I guess some really great mathematicians and logicians, don't agree. I have a feeling it has something to do with mathematicians being natural idealists. You can't futz with the ideal world. It's perfect. If it's not, somehow the whole thing falls apart.
Quoting the affirmation of strife
I don't think this analogy applies. Seems like with the Russell paradox, we start with what appear to be consistent rules and get contradictory results.
Quoting the affirmation of strife
Is this the issue, that mathematicians and logicians don't believe math was invented by humans? That they think it is intrinsic to the world?
Quoting the affirmation of strife
I don't get it. I'm not sure I can even see the connection between number and set theory and calculus. But then, my math is of the practical, engineering sort.
Quoting the affirmation of strife
There are certainly people who believe that the Russell paradox says something profound about math and logic.
That's just it. The liar's paradox only shows up when we are talking about sentences that we would never use in normal speech. They are grammatically and semantically correct, but they don't make any sense. Or can you think of a counter-example?
Quoting Philosophim
Agreed. It's the significance of the contradiction that we are questioning. That I am questioning.
Quoting Philosophim
I don't find this a very convincing argument. As you note, there are plenty of ways to do bad philosophy and logic without needing this paradox to show us another. The liar's paradox seems trivial and I don't see how it's connected with any substantive logical issue. Do you have examples of when "...people use metaphysics to disguise liar's paradoxes"?
Quoting Philosophim
I guess my solution is realizing there isn't anything to solve. Yes, I know that's not what you meant. I don't see any solution but to ignore the paradox as an interesting and fun, but ultimately meaningless, pastime.
Quoting Philosophim
I think this discussion, and all the other ones about this and similar subjects, are evidence that the subject obscures rather than clarifies language, mathematics, and logic.
The analogy is contrived, I agree. I've lost the circularity aspect for one. We start with consistent premises and get contradictory predictions (I feel like those are still not the right words, but it's all I've got at the moment). But they are still predictions. Someone has to go out and build the bridge. It ties into this:
Quoting Philosophim
where I agree with your response:
Quoting T Clark
Or to put it another way: there is no way to "accidentally" draw well-founded conclusions from a paradox, otherwise there would be a way to resolve it, meaning that it is not a paradox.
Quoting T Clark
Yes, I think this could be the case, especially historically. They love the runes so much (talking about the "beauty" of an equation, for example), and why not. It seems like it could easily lead to the emotional conclusion: "maths is discovered". It's too beautiful to be our own work. And us laypeople are partly to blame. Imagine being told over and over: "Oh, you study maths? That's like magic to me." I think here of Tolkien and other fantasy settings where uttering a phrase in some ancient language unlocks an otherwise unattainable power. How fitting, that Spock had ears like an elf...
I'm losing track. Back on topic:
Quoting T Clark
You are right: there is only a danger if this paradox within set theory has an effect within the practical mathematics (which I suggested would necessarily always be detectable, but maybe not trivially apparent). I don't have an example to hand, although they might be found in e.g. differential geometry (foundation for General Relativity) or, where this all came to light, in computability theory (foundation for, well, computers).
I wonder if they have the same reaction to division by zero. After all it is just as "dangerous" (undefined vs contradictory, both impossible to execute), just more boring. If they don't then I can finally say I completely agree with your sentiment, that recursive paradoxes are basically useless, and are artificially raised above other mathematical impossibilities.
I think you and I are mostly in agreement except for this paragraph. It seems pretty clear to me that the math paradoxes we're talking about are trivial. This is not my area of expertise, to put it mildly. I'd be willing to change my mind if there were people who disagree and provide an argument which is more than just arm-waving.
I see, you are looking for examples of subtle vicious circles. I might have one for you, although I'm not sure how "dangerous" it is in practice.
Define a vector. What is it?
It has magnitude and direction? Cool, so what's a direction?
1. There are no truths. If true then it is false. Ergo, There are truths! I wish this could be used as a starting point to tackle radical skepticism.
2. Nothing is certain. This can't be certain - sawing off the branch you're sitting on aka self-refuting statement. Still in skeptical territory.
3. Everything is relative. Is that itself relative? If yes, whatever the problem is with relative positions is also a problem for relativism.
4. Cotard's delusion (walking corpse syndrome). "I'm dead" says the patient but he has to be alive to say that!
5. This sentance has 3 erors. Two errors within the sentence and one error is the sentence itself (a counting error).
6. I'm a Cretan and all Cretans are liars.
There's no contradiction there. You only need a good definition.
Also, you've brought up circularity several times and I haven't responded. As far as I can see, circularity is not the same thing as self-reference, although I can see they have things in common.
Have you revised this view?
No, but there really hasn't been much in the way of arguments supporting self-reference. Those there have been were lukewarm.
As to the usefulness of self-reference, it was pointed out that it is pivotal to iteration. Any iterative procedure by definition calls itself. Now that's indispensable in coding, but it also leads to many a curiosity. So for example, this beast:
...is calculated using iterative procedures.
Douglas Hofstadter made use of iteration in his discussion of consciousness, a notion that has not dissipated over the years. Chaos theory in general relies on iteration.
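Assuming the "beast" above was a fractal image along the lines of the Mandelbrot set (the image itself didn't survive here), the iteration behind such pictures is simple to sketch in Python: each point c is classified by repeatedly applying z → z² + c and watching whether z escapes.

```python
# Escape-time iteration for the Mandelbrot set: z_{n+1} = z_n**2 + c.
# A point c is treated as in the set if |z| stays bounded (<= 2) for
# max_iter steps; otherwise we record how quickly it escaped.
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(mandelbrot_iterations(0j))      # -> 100 (never escapes)
print(mandelbrot_iterations(1 + 1j))  # -> 2 (escapes almost immediately)
```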
Also self-reference is not pivotal to semantic paradoxes. There is at least one paradox that does not make use of self-reference.
Self-referentiality points to our tendency to conflate the thing with our thoughts about said thing.
Also, more generally, it points to the possibility of saying one thing and meaning two things.
(Of course, this works because we take into consideration other statements that contextualize the one under scrutiny, but we do not verbalize those others.)
I thought about fractals. I've read that many features of the world involve fractal geometry. I don't know what to do with that.
Quoting Banno
As for iteration. I thought about that too. One of the first things I thought of was a do loop in a computer algorithm. I don't think iteration and self-reference are the same thing. I'm not sure of that.
Confusing "the moon" with the moon doesn't strike me as a self-reference issue.
Quoting baker
I don't understand what you mean.
No, they don't seem to be. Languages such as LISP depend on iteration using self-reference. I'm not sure if a do loop avoids, or just hides, that self-reference.
It can, depending on one's epistemic theory. The problem is also known as "confusing the map for the territory".
Saying "There's a draft" when you're in a room with another person and there is a draft, can mean 'There's a draft' and 'Close the window'.
Question: If you pick an answer at random, what are the chances that the percentage written in the pick is equal to the chance of picking that percentage?
There were four answers given from which you could pick at random. One said 50%. One said 25%. One said 60%. And another one said 25%. Altogether there were four answers from which a random choice would be made.
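The puzzle can be brute-forced (a Python sketch, taking the four options exactly as listed above): for each candidate value p, the chance of randomly picking an answer that says p is just the count of such answers over four. No option turns out to be consistent with itself.

```python
# The four options, as percentages.
answers = [25, 50, 60, 25]

# An option p is self-consistent if the chance of randomly drawing an
# answer that says p equals p itself.
consistent = []
for p in set(answers):
    chance = 100 * answers.count(p) / len(answers)
    if chance == p:
        consistent.append(p)

print(consistent)  # -> [] : no answer is consistent with itself
```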
I looked up perturbative quantum field theory. I'll spend some more time with it.
Your comment made me think - Are all iterative processes self-referential? Maybe someone else brought this up previously. Is that the same kind of self-reference we're talking about?
Thanks.
Percentage = 0. Right?
For some reason, that made me think of a yo mama joke:
Yo mama is so fat, her reflection weighs 5 pounds.
All recursive processes are, and calculation of the Green's function is recursive. But no, not all iterative ones.
I'm not sure I know the difference between "recursive" and "iterative."
So something like G = g + g S G is recursive, because you can take the whole RHS and substitute it into the G on the right:
G = g + g S G
= g + g S ( g + g S G )
= ...
ad infinitum.
Whereas something like
du(t)/dt = u(t)
has to be solved iteratively, but isn't expandable recursively as above. Something like that may have exact solutions, whereas G has to be solved as a power series and terminated arbitrarily.
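A scalar toy version of the two cases might make the distinction concrete (a sketch; the values of g and S and the step counts are purely illustrative, not anything physical):

```python
import math

# Recursive case: G = g + g*S*G unrolls to g + gSg + gSgSg + ...
# For scalars with |g*S| < 1 this geometric series converges to g / (1 - g*S).
def G_series(g: float, S: float, terms: int) -> float:
    total, term = 0.0, g
    for _ in range(terms):
        total += term
        term *= g * S  # each substitution of the RHS appends one more g*S factor
    return total

g, S = 0.5, 0.3
print(G_series(g, S, 50), g / (1 - g * S))  # truncated series vs closed form

# Iterative case: du/dt = u stepped forward with Euler's method.
# With u(0) = 1 this approximates the exact solution u(t) = e**t.
def euler(t: float, steps: int) -> float:
    u, dt = 1.0, t / steps
    for _ in range(steps):
        u += dt * u
    return u

print(euler(1.0, 100000), math.e)  # the iteration approaches e
```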
Thanks.
Right.
If that is correct, self-reference occurs in recursion.
I think the interesting question that remains for me here, is if we can find non-trivial self-referential paradoxes, such that they could arise from seemingly well-founded frameworks. I'm no longer sure that it is even possible, and I think @T Clark was right to distrust my intuition about that.
Although I found the discussion helpful and interesting, it didn't resolve, for me at least, the answer to your question.
Regular quines are fixed points of a programming language; programs which when executed can print their source code without reflection (i.e. without needing to be able to read their source code from the hard-disk).
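A minimal Python example of such a fixed point (a classic construction, not tied to any particular source): running these two lines prints exactly those two lines, without the program ever reading its own file.

```python
# A quine: s is a template for the whole program, and (s % s)
# fills the template with s's own repr, reproducing the source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```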
Radiation hardened quines are similar, but are also robust to the removal of one character. This is a useful property in environments where bits can be flipped/damaged on a regular basis (e.g. code on satellites - which are not shielded by the atmosphere); the program can repair itself.
Here's an example comparing an iterative vs. recursive implementation of the factorial function:
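(The code block from the original post didn't survive; the following is a minimal Python sketch of the comparison being described.)

```python
# Iterative factorial: accumulate the product in an explicit loop.
def factorial_iter(n: int) -> int:
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Recursive factorial: the function calls itself on a smaller input.
def factorial_rec(n: int) -> int:
    return 1 if n <= 1 else n * factorial_rec(n - 1)

print(factorial_iter(5), factorial_rec(5))  # -> 120 120
```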
Full disclosure, the iterative function could be written more compactly than it is above depending on the language - but using just regular language features, the recursive solution is more easily made concise.
As another example, writing a method to navigate a maze is naturally suited to recursion.
You could write this algorithm in an iterative form, but the recursive way below seems more intuitive to me.
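(The original code didn't survive either; the following is my own Python sketch of a recursive depth-first maze solver, with a made-up toy maze.)

```python
# Depth-first maze solver: '#' is a wall, '.' is open floor.
# The recursion explores each neighbour; backtracking happens
# automatically as recursive calls return.
def solve(maze, pos, goal, visited=None):
    visited = visited if visited is not None else set()
    r, c = pos
    if pos == goal:
        return True
    if (r < 0 or r >= len(maze) or c < 0 or c >= len(maze[0])
            or maze[r][c] == '#' or pos in visited):
        return False
    visited.add(pos)
    # Recurse into the four neighbouring cells.
    return any(solve(maze, (r + dr, c + dc), goal, visited)
               for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))

maze = ["..#",
        ".##",
        "..."]
print(solve(maze, (0, 0), (2, 2)))  # -> True: a path exists down the left side
```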
Another thing I remembered is that the self-referential paradox known as the Berry Paradox is used to prove that the Kolmogorov complexity of an algorithm is not computable. This could be considered "practical" in the sense that we can be sure that trying to calculate the minimum required "complexity" of a computer program is a waste of effort (although there may be other ways to estimate it). Though again, this is all very much in the field of computer science and mathematics rather than philosophy per se, although simplicity plays a (large?) role in the philosophy of science.
Quoting Clarky
Reading some Penrose lately. I think I see now where this is coming from, it has to do with the idea that some mathematicians have of mathematical "entities" inhabiting a sort of world of ideal forms (see e.g. Plato). That is probably a topic for another thread, but I would agree that paradoxes (incl. self-referential) can say something about the limitations of mathematics, even if it is just regarded as a language.
Alas, I'm not able to bring any surprising yet practical paradoxes this time, just a little more rambling...
:snicker: ...and it's neither!
Quoting Clarky
:grin:
You read my mind!
Self-reference in re the liar sentence, as you would've already noticed, is in the third person ("this"). Second, it involves negation of some kind that contradicts a property that's necessary to selfhood, assuming such a word exists and is imbued with the meaning that I have in mind.
Caesar used to refer to himself in the third person which is in a way quite noble of him - he alludes to the position that he holds (Emperor) instead of himself (Julius). Which leader can do that? :snicker:
I wonder what implications this has on the so-called hard problem of consciousness which is premised on the alleged restriction on consciousness to the first person mode?
@Wayfarer [math]\uparrow[/math]
[math]{{g}_{n}}(z)=z+{{\rho }_{n}}{{\varphi }_{n}}(z)[/math]
[math]{{G}_{1}}(z)={{g}_{1}}(z),\text{ }{{G}_{n+1}}(z)={{g}_{n+1}}({{G}_{n}}(z))[/math],
[math]n\to \infty [/math]
It's basically a catch-22 situation: For x such that Px, Px [math]\to[/math] ~Px.
The liar sentence uses true/false, false to be precise, as a predicate. Is truth value a valid predicate? [s]If no, how did Gödel break math with his incompleteness theorems?[/s]
:chin: