Mathematical Conundrum or Not? Number Four
This one is commonly known as Russell's paradox and it has to do with set theory. Naive Set Theory (a branch of mathematics which attempts to formalize sets) defines a set as any definable collection.
Here is the paradox:
Let R be the set of all sets that are not members of themselves
Is R a member of itself? If it is, then by its defining condition it must not be a member of itself, so it is not. If it is not, then it meets the condition of not being a member of itself, so it must be a member of itself.
Can this apparent paradox in set theory be resolved?
You can think of this one as either mathematics or logic, as both branches share a root in set theory.
-----
Here is a common example that may be easier to digest for some.
The town barber, who is a man, shaves exactly every man in the town who does not shave himself.
Does the barber shave himself?
Anyway, in modern mathematics Russell's Paradox isn't so much "solved" as it is avoided. The unrestricted comprehension scheme allows the paradox, despite Frege having thought it self-evident. Russell tried to get away from it, but the whole motivation behind this, Logicism, died with Gödel's Incompleteness Theorems.
But really, it's an open question. The Incompleteness Theorems didn't so much destroy Logicism as make the lay of the land clear. If you want to endorse Logicism, your mathematical system must be inconsistent. If you want to avoid triviality (e.g. proving all sentences true), then you have to adopt a paraconsistent logic to go with your inconsistent set theory. But if you want to keep consistency, you cannot accept Logicism. Of course, for those like the Intuitionists this isn't an issue; they never accepted Logicism.
Probably why foundationalism died a horrible death for mathematicians and logicians. There's a whole panoply of options for constructing mathematical and logical systems which are equally open to interesting and valid mathematical investigation.
Anyway, I simply accept the paradox. Nothing has gone wrong with it, it's a veridical argument.
Great, then tell me, does the barber shave himself?
Hello. I can see that the first statement is logically impossible, because whether the barber shaves himself or not, the statement contains a contradiction. But why call it a paradox, as opposed to a mere impossibility?
As I say, that's a somewhat naive view. The specification scheme allows one to avoid the paradox, but it doesn't necessarily solve the paradox. The whole point of regimenting set theory this way was to make math consistent (or at least not provably inconsistent). But it comes with well-known issues, like a number of unsolved questions that have known answers in other systems (e.g. the Continuum Hypothesis).
It's kind of like saying you solve the Liar paradox by simply disallowing self-reference in your language. It's true in a sense (you can no longer articulate the paradox), but the debate is over the merits and justification of doing so. It's simply incorrect to say these foundational issues were solved; no one studying modern foundational mathematics would say it with such certainty, in any case.
Contradictions are impossible.
The statement “the town barber ...” contains a contradiction either way we look at it.
Therefore the statement is impossible.
“The town barber, who is a man, shaves exactly every man in the town who does not shave himself.”
Either the barber shaves himself or he does not. If he does, then he shaves a man who shaves himself, violating the rule. If he does not, then he fails to shave a man who does not shave himself, again violating the rule. The statement is therefore impossible.
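The case analysis above can be run mechanically. Here is a minimal sketch in Python (the boolean encoding is my own illustration, not part of the puzzle):

```python
# The rule: the barber shaves x if and only if x does not shave himself.
# Apply the rule to the barber himself and check both possible answers.
results = {}
for shaves_self in (True, False):
    rule_requires = not shaves_self  # what the rule says the barber must do
    results[shaves_self] = (rule_requires == shaves_self)

# Neither assumption is consistent with the rule: no such barber can exist.
assert results == {True: False, False: False}
```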
He fits in the group and does not fit in the group, and that can obtain only in a fiction, even for metaphysical dialetheists. To motivate anything more you'd have to show such a barber does exist. So even if you're like me and think certain aspects of reality might be inconsistent, the Barber Paradox doesn't show there is or can be such a barber.
A paradox is an apparent, but not real, contradiction. The barber can shave himself or not, and the group as defined can exist, but the barber cannot fit the group without contradiction. Now you claim it is a paradox and not a contradiction. Why is the contradiction not real?
Here is another simpler example for clarity:
A triangle exists in a world where all shapes have four sides. A triangle is possible, and a world where all shapes have four sides is possible, but the triangle cannot exist in the world without contradiction.
Oh but it is. Perhaps you are unable to see your unnecessary, superfluous, psychological baggage that you pile upon these threads of yours.
You are trolling, that's the only reason you even join my threads. You never contribute. You are trying to bait me.
https://en.oxforddictionaries.com/definition/paradox
You do realize this has been a recognized paradox since 1901?
That is literally what I said. My post:
[quote='Me']I'm saying that even as a metaphysical dialetheist I do not believe a "barber who shaves all and only those who do not shave themselves" can exist.[/quote]
I already said I accept the paradox as a valid argument, but unlike most that's because I accept naive set theory. In ZFC, the set is not a valid one, sets cannot contain themselves in ZFC.
This statement is false.
If we are working with a logic that allows proof by contradiction, it is false because it allows deduction of contradictory propositions.
If we are not, we can in any case deduce the negation of the sentence.
This case is not the same as Russell's Paradox. Russell's Paradox arises from the inconsistency of the axioms of Naïve Set Theory, whereas this statement is meaningful and false in any reasonable logic.
Since the barber statement is false, the answer to the question 'who shaves the barber' is 'anybody, including the barber, could shave the barber, but we don't know who does it'.
Are you serious? Your OP:
Quoting Jeremiah
I have to assume there's some communication issue here. I accept Russell's Paradox, but the Barber doesn't seem difficult: no such barber exists, problem solved. If the barber is tangential to your question, fine. But you did in fact bring it up.
Quoting MindForged
You added an "and"? It is not "all and"; he shaves " every man in the town who does not shave himself." He does not shave people who shave themselves. That is what creates Russell's Paradox.
I find it very interesting you are now calling it a paradox. You realized your mistake.
This is ridiculous.
The barber shaves P and Q
The barber shaves Q.
Those are two different things.
At the very least your choice of wording is very ambiguous.
I can tell that writing is not your strong point.
MindForged, you are completely misunderstanding the difference between a veridical paradox and a plain old proof by contradiction. Moreover, Russell's paradox has absolutely nothing to do with Gödelean incompleteness. Simply nothing.
Let's start by reviewing how proof by contradiction works. I'll use Euclid's classic proof of the infinitude of primes.
Claim: There is no largest prime.
Proof:
Assume the negation of our claim: Assume to the contrary that there is a largest prime.
Then we may number the primes p1, p2, ..., pn. [Moderators: It's so easy to add MathJax to a website. Pretty please? It would greatly enhance mathematical discussions here].
Form the number P = (p1 x p2 x ... x pn) + 1.
Clearly P is not divisible by p1; it leaves remainder 1. Likewise P is not divisible by any of p2, p3, ..., or pn.
Therefore P must either be prime, or else be divisible by some prime other than the ones we listed.
Therefore since the assumption that pn is the largest prime leads to a contradiction, we must conclude that in fact there is no largest prime.
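For the curious, the construction can be checked numerically. A quick sketch (the choice of the first six primes is just for illustration):

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]
P = prod(primes) + 1  # 30030 + 1 = 30031

# P leaves remainder 1 when divided by every prime on the list...
assert all(P % p == 1 for p in primes)

# ...so any prime factor of P must be a prime we did not list.
# Here P = 30031 = 59 * 509: composite, but both factors are new primes.
assert P == 59 * 509
```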
Ok, that's the basic pattern. Now observe that Russell's "paradox" follows the exact same form.
Note that calling something a paradox doesn't make it a paradox. Naming conventions are generally historical accidents. For example the Axiom of Choice, Zorn's lemma, and the well-ordering theorem are respectively an axiom, a lemma, and a theorem. However they are all logically equivalent. The names are just accidental conventions. If we called it "Russell's theorem," or "Russell's smackdown of Frege," much less confusion would ensue. I hope this point is clear. From now on I'll refer to Russell's argument as Russell's smackdown.
By the way Frege is credited as being the inventor of the universal and existential quantification operators. He was really quite a bright fellow, notwithstanding his public humiliation at the hands of Russell in this particular matter.
Claim: We cannot form sets out of arbitrary predicates.
Proof:
Assume the negation of our claim: That is, assume that we can always form a set out of a predicate.
Consider the predicate P(x) = "x ∉ x".
Now we let R be the set R = {x : P(x)}. We see (following Russell) that we must have both R ∈ R and R ∉ R. That's a contradiction.
Therefore we conclude that our assumption is false; and that we may not arbitrarily form sets out of predicates.
Now we see that Russell's smackdown is nothing more than a traditional proof by contradiction; a basic pattern of logical reasoning that goes back at least two millennia.
How you get from this to invoking Gödel's incompleteness theorem I simply don't see.
Nor do I see how Russell's smackdown is a genuine paradox. After all, one could argue about primes as follows:
Primes get increasingly rare as numbers get big. The farther out you go, the more distance there is on average between primes. If you go out far enough, it's reasonable that you simply run out of primes, and that from some point onward, every number is composite.
If one has this intuition, one would regard Euclid's beautiful proof as a veridical paradox. But very few people would call the infinitude of primes a paradox. Rather, it's a mathematical fact that's often proved via the method of contradiction. If one said, "But I REALLY have this intuition that there must be a largest prime, so Euclid's proof is a paradox that must be resolved," they might indeed have strong feelings about the matter, but their point would not get any traction among mathematicians or even logicians.
Likewise, Russell's smackdown shows that our naive intuition about sets -- that they arise from predicates -- is wrong. Perhaps one has this intuition; but with a little mathematical training, one quickly realizes that to form a set we must apply a predicate to an existing set. That's the axiom (schema) of specification.
Let's see how this works in practice. If N is the natural numbers, what is the set R = {x ∈ N : x ∉ x}? Well, is 0 ∈ 0? No, so 0 is in R. Is 1 ∈ 1? No, so 1 is in R. Continuing in this manner we see that in fact R = N. The axiom of specification has completely resolved the matter. We CAN'T necessarily form sets out of predicates, but we CAN form a set by applying a predicate to an existing set. Done and done.
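The walk through 0, 1, 2, ... above can even be acted out in code. A sketch using Python frozensets as von Neumann ordinals, so that membership questions like "is 0 a member of 0?" are literal set-membership tests (the encoding is my own illustration):

```python
# Build 0, 1, 2, ... as von Neumann ordinals: 0 = {}, n+1 = {0, ..., n}.
def ordinals(n):
    out = []
    current = frozenset()
    for _ in range(n):
        out.append(current)
        current = frozenset(out)
    return out

N = ordinals(5)

# Specification: select from the EXISTING set N those x with x not in x.
R = {x for x in N if x not in x}

# No ordinal is a member of itself, so R turns out to be all of N.
assert R == set(N)
```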
So when one says, "My intuition is that the primes get so rare that eventually there's a largest one and then no more after that," or "My intuition is that you can always form a set from a predicate," these are indeed intuitions that an untrained person might have. But with a little mathematical training, one comes to understand and internalize that there are infinitely many primes, and that in order to form a set from a predicate, one must first start with a known set.
This has NOTHING to do with Gödel, nor is the Continuum hypothesis even remotely, by any stretch of the imagination, the same type of phenomenon.
I didn't mistake anything. The paradox resulted from rules that Frege thought were indubitable. And you completely misrepresented what I said. I didn't say RP had anything whatsoever to do with the Incompleteness Theorems. I said the regimentation of set theory was motivated by a desire to prove mathematics was consistent and complete, but Gödel showed you could only have one or the other, not both (if your system is capable of articulating arithmetic truths).
I know how a proof by contradiction works, please don't patronize me. What you don't seem to get is that avoiding a paradox is not the same as proving your solution ought to be adopted. I can avoid the Liar paradox by disallowing all self-reference. But then all hell breaks loose, because perfectly sensible sentences like "This is an English sentence" get the axe. The solution has to be justified. Modern maths avoided the paradox to get on with the business of mathematics; no one disputes that the resolution was ad hoc. They simply take that theoretical black mark as preferable to an inconsistent theory. If the principles which cause the paradox are more theoretically virtuous than the consequences of evading the paradox, then one ought to accept the paradoxical conclusion. I mean, just throwing up "It's a proof by contradiction" is easily abusable:
Early calculus was indisputably contradictory. One had to treat infinitesimals as a non-zero value at one step of the proof and then as having a value of zero at another step. Ergo, we should have dropped calculus as a viable mathematical branch immediately instead of it knowingly being left to languish as an inconsistent theory for 150+ years.
Long story short, it's a lot more complicated than you seem to think. Paired with a Paraconsistent Logic, one can work with Naive Set Theory (and thus with Russell's Paradox) and draw interesting, non-trivial conclusions. Proof by contradiction is only going to work this way if you're already assuming that contradictions are off the table, meaning you're already rejecting Russell's Paradox before you even do the proof.
The barber either has a beard, or leaves town in order to get a shave. Whether he does it himself or visits a colleague, for sure he doesn't shave himself while in town.
From the OP:
Quoting Jeremiah
The question is "Is R a member of itself?"
There are two possible statements we can derive from that question. Our statements to be proven would be: R is not a member of itself, and R is a member of itself.
Proof by contradiction would lead us right back to Russell's Paradox.
It seems you have another contradiction on your hands.
Argument from authority? really?
Argument from authority is only wrong if the authority is misplaced. I think over 100 years of history is a very strong authority.
Nonsense, argument from authority is wrong whenever one of the participants doesn't accept the claimed authority. You ought to provide the actual argument, not appeal to the authority, especially if the authority gets questioned.
Having said that, personally I'm not fighting your paradox other than in my other post, which, if you read it correctly, substantiates the perceived paradox and attempts to push you to improve its formulation, not to question said authority.
I don't accept your authority on the argument of authority, guess that means you are wrong.
Also you forgot the link to the OED, which provided the definition of a paradox. Try reading it, as it turns out contradictions can be paradoxes. Why did I use the OED? Because it is an authority. Appeal to authority is not a reason to shrug off a valid authority.
Lol, you troll (at least for your sake I hope you were trolling). You are trying to substantiate your appeal to authority by refuting your unwarranted assumption of my appeal to authority. Now even if I made such an appeal, you still couldn't use your refusal of that to substantiate your own. Two wrongs don't make a right.
Now I could provide the actual argument for why you shouldn't be appealing to authority, especially when said authority is questioned. But I think you already know (if I'm wrong about this, please say so and I'll provide the actual argument); you seem smart and educated enough in most instances, you are just a bit sloppy/lazy on occasion, which I like to point out when it happens, since you also seem to have the capability to understand me correctly. Perhaps I overestimated you though; time will tell.
Secondly, you are conflating the conclusion with the type of argument used. I objected to the argument you used there, not to its conclusion. It's most unfortunate you have this tendency, since otherwise you could have benefited from my remarks on your posts to improve the formulations of your positions, instead of picking a fight with an ally.
So my advice, stop guessing, you have demonstrated your logic has more merits than your guesses.
I didn't object to the conclusion, I objected to the type of argument used. It's rather irrelevant how long people have perceived something to be the case or not; what is relevant are the arguments for or against it. Now, your first paradox we don't disagree on, I think. The barber paradox could be more accurately formulated.
Quoting Jeremiah
I didn't even question the authority in this case, so I didn't see any merit in addressing this, but it seemed someone else might have. Hence I objected to the appeal to authority, not its conclusion. You were doing great until you made this argument. I saw it as weakening your case, hence I objected.
You raised a number of interesting points. Before I respond in detail, it would help me to understand your point of view if you could tell me in clear and unambiguous terms what you find different about these two situations.
a) There is no largest prime. Proof: We assume there is a largest prime and derive a contradiction. Hence there is no largest prime.
b) We can't define a set using an arbitrary predicate. Proof: We assume we can define a set using an arbitrary predicate and derive a contradiction. Hence we can not define a set using an arbitrary predicate.
Why is it that in the case of (a) you regard this as a basic mathematical truth; yet in the case of (b) you regard this as a philosophical conundrum perhaps susceptible to attack via paraconsistent logic?
I assume (although you have not confirmed this) that you don't regard the infinitude of primes as being subject to modification or revision based on paraconsistent logic. Why is (b) different?
Could there perhaps be some recency bias? Frege and Russell worked just a little over a century ago; and Euclid's proof is over 2000 years old.
But human nature doesn't change. It's reasonable that there was a contemporary of Euclid, an ur-Frege if you will, who was brilliant and accomplished and who maintained that the primes were finite in number. After all there is a perfectly sensible and compelling heuristic in support of that proposition, namely the fact that the primes get more and more rare the farther out you go; and that there are in fact arbitrarily large runs of consecutive composite numbers.
Perhaps ur-Frege published his masterwork; and right on the eve of publication, Euclid showed that there is no largest prime. Perhaps this caused a big stir back in the day. The historical record is lost, but it's certainly plausible. The fact that Euclid felt the need to write down a proof shows that the question was in the air at the time.
So just tell me please, what is the difference in your mind between (a) and (b)?
By the way I did not intend to appear patronizing. I carefully walked through these two proofs by contradiction in order to elucidate their structural similarity. Assume the contrary, derive a contradiction, learn a truth.
You see a great difference between these two famous proofs, and I don't see a difference at all, except for the antiquity of one and the recency of the other. If you can clearly explain to me why you see a profound difference, I'd understand your viewpoint better.
I addressed that point in my earlier response to @MindForged. Naming is generally a matter of historical accident. Is the Axiom of Choice an axiom, Zorn's lemma a lemma, and the well-ordering theorem a theorem? But they are logically equivalent, and often introduced to students in relation to one another. Do you regard the infinitude of primes as a paradox? It's often (though to be fair, not necessarily) proved via contradiction, just as Russell's smackdown of Frege is. Historical names mean nothing. One man's freedom fighter is another man's terrorist. What you call things is not the same as what those things are.
Lewis Carroll and many others have made the distinction between the name of a thing and the nature of that thing. Shakespeare noted that a rose by any other name would smell as sweet. Abe Lincoln used to ask, If you call a tail a leg, how many legs does a dog have? Answer: Four. Calling a tail a leg does not make it a leg.
Even the Beatles made this philosophical point: "Her name was Magill, and she called herself Lil, But everyone knew her as Nancy."
Because in the case of A, we have every reason to believe we are in a consistent domain (that of classical mathematics), where proof by contradiction is necessary (on pain of triviality), and we know we can give examples of larger primes. In B, we get a paradox unless we rewrite the rules of naive set theory to get something like ZFC. With A, we have a counterexample that lets us dismiss the initial supposition; with B, we get a contradiction from what seem like reasonable assumptions on their face. The assumption that there's a largest prime doesn't rest on comparably reasonable principles, such as a set being any collection defined by whatever condition you have in mind.
Quoting fishfry
I don't think the infinitude of primes will be much affected by a transition in the logic. Paraconsistent logic dispenses with proof by contradiction and tends to instead rely on proof by non-triviality (these are identical in other logics but not with PLs).
Quoting fishfry
I suppose the simplest way is to point out there are other concerns that bear on a theory besides consistency. I can't remember if it was in this thread that I mentioned this, but for example it's just a fact that the early calculus was inconsistent. One had to treat infinitesimals as a non-zero value at one step of a proof and then treat them as having a value of zero at another step of the same proof. This was acknowledged by Newton and Leibniz, criticized by Berkeley, etc., and it remained that way for more than 150 years. Now as far as I can tell, if you really tried to insist on this way of proceeding, you would have been rationally required by your standards to have rejected calculus (and therefore everything learned and built because of it) during that century and a half of it being inconsistent. But that's obviously ridiculous; there are other theoretical virtues besides consistency which made calculus tenable to accept despite the contradictions it required one to adopt.
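For concreteness, here is the offending pattern in the early-calculus computation of the derivative of x²:

```latex
\frac{(x+h)^2 - x^2}{h} = \frac{2xh + h^2}{h} = 2x + h
% Step 1 divides by h, which requires h \neq 0.
% Step 2 then sets h = 0 to conclude the derivative is 2x:
% the same increment h is treated as nonzero and as zero.
```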
That's what I'm arguing, sort of. Sure, Russell's paradox is a paradox. That was never the dispute. The issue was always that the principles that gave rise to the paradox in naive set theory seem pretty damn reasonable. So the way out of it was to come up with ad hoc restrictions on what constituted a set. There were extra-mathematical considerations which led to that response, not simply a proof by contradiction, because that argument itself relies on already dismissing the possibility of paradoxes, which is the very thing under dispute if you accept Russell's Paradox. There has to be a reason (besides arguing against the conclusion) for why you reject the principles that give rise to the paradox, otherwise the objection seems circular. One can get around it the way ZFC does, but the question is whether that is more rational, or whether it results in a more theoretically virtuous theory. Perhaps it does, but it's certainly not answered by pointing out there's a contradiction.
Every single one of these threads I have made, someone jumps out and goes, "Oh, it is not a paradox, therefore paradox resolved." It gets old and I get tired of going back and forth on that point. I mean, it is actually moot whether it is officially a paradox or not; the conundrum doesn't fade away just because someone decided not to call it a paradox. So it is easier just to tell people it is widely recognized as a paradox, or something along those lines, and I am not lying: these are well-known paradoxes.
So call it an appeal to authority if you like. I don't really think it falls as neatly in those lines as you do, but either way it is an effective approach to move the discussion off a moot line of discussion.
You can't resolve a paradox by simply stating that it is not a paradox. A paradox by any other name is still a conundrum.
Do you regard the proof by contradiction that there's no largest prime a conundrum or paradox? Why or why not?
In other words: The assumption that there's a largest prime leads to a contradiction, so we conclude that there's no largest prime. The assumption that you can define a set with an arbitrary predicate leads to a contradiction, therefore we have a powerful paradox that must be addressed by philosophers. I simply do not understand the difference except as a manifestation of psychological recency bias.
@MindForged You raised some good points that I'm taking some time to think about.
I need a good explanation of why you think that is relevant before I spend my time on it. I am not trying to evade or be rude, I just don't have a lot of free time and I need to hear a good justification of the parallels.
Uh ... LOL. That made me chuckle.
It's because the form of the two proofs is identical:
* Assume there's a largest prime.
* Derive a contradiction.
* Conclude there's no largest prime.
versus
* Assume you can form a set from an arbitrary predicate.
* Derive a contradiction.
* Conclude that you have a deep paradox that must be addressed or resolved.
I don't see the difference. In the 20th century the smartest mathematicians in the world regarded these two patterns as the same. In the case of Russell's smackdown of Frege, everyone realized that you CAN'T always make a set from a predicate, hence the need for better rules of set formation.
So myself, I don't see the difference between the two proofs. If your assumption leads to a contradiction, you ditch the assumption. That's exactly what all the mathematicians did.
It wouldn't make any sense to say, "Oh Euclid's proof by contradiction shows there's a terrible paradox." Rather, Euclid's proof shows that there's no largest prime. And Russell's proof shows that we can't form sets from arbitrary predicates. It's as simple as that.
And -- admittedly an argument from authority -- every mathematician agrees with me.
Now of course that doesn't make me right, that's just an argument from popularity or authority. But it does place the burden of argument on you to say why everyone's wrong and you and @MindForged are right.
I actually already addressed this argument of yours.
Link please, I didn't see it. But it wasn't an argument, since I'm merely stating what every single mathematician agrees with. I'm asking you a question. Why do YOU find the two cases so radically different? Two proofs by contradiction, but only one is a paradox in your view.
@MindForged has the same opinion and he gave a longer post that I'm working through before I respond. If you did respond to this question, just point me at the response please.
That didn't even make sense. I do remember reading it now. I don't follow your point at all.
We assume there's a largest prime and derive a contradiction, so we conclude there's no largest prime.
We assume we can form sets out of arbitrary predicates and that leads to a contradiction, so we conclude we can't form sets out of arbitrary predicates.
This seems perfectly sensible to me. And (argument by authority and popularity) every mathematician in the world agrees. That doesn't mean they're right, but you have to make a much stronger argument, which you haven't done.
By the way, are you and/or @MindForged making some kind of constructivist or intuitionist argument that rejects the law of the excluded middle and/or proof by contradiction? That would at least make some sense, but intuitionists aren't trying to resurrect naive set theory, as far as I know. The modern neo-intuitionists have given up on set theory entirely and are working with some flavor of type theory. Type theory was Russell's own solution to his discovery.
It makes complete sense and I am not interested in your straw-man.
I think I've already articulated my position without recourse to intuitionism. Once you think it over (need not agree obviously) let me know what you think.
I'll admit, I'm something of a logical pluralist, so it's not like I'm advocating a wholesale abandonment of standard maths. Honestly, I wonder what mathematicians who think about this sort of thing believe (it's rare-ish to see it done in depth; most don't bother with the foundations of maths these days). Really, it seems like Gödel's Incompleteness Theorems in particular, and the death of Logicism (using classical logic), killed foundationalism in the eyes of mathematicians and logicians, so I wonder if they're pluralists of a sort?
Ok. Just wanted to make sure you accept law of excluded middle and proof by contradiction.
Quoting MindForged
Will do. I got in trouble once around here when I deferred responding to someone's long and complex posts while responding quickly to other people's short posts. The poster whose long posts I was trying to give serious and considered thought to, got more and more impatient and finally abusive. Just wanted to be clear that I'm deferring my thoughts till I have a block of time tomorrow.
Quoting MindForged
I think Category theory and homotopy type theory are getting most of the foundational work these days. Homotopy type theory as I understand it actually relates to the resurgence of intuitionism. And the set theorists study large cardinals and are still hard at work on CH. You can Google names like Woodin and Hamkins to see what the set theorists are up to. But nobody worries about Russell's paradox because there's nothing to worry about. It just shows that we can't use unrestricted set comprehension. And I still don't know why you think people should be concerned about a run of the mill proof by contradiction. Sure it ruined Frege's day, but it revealed a mathematical truth about the nature of sets. But that's what we're talking about so I'll try to respond to your specific points soon.
Also, in neither math nor logic are straw men valid methods of proof.
Yea, I don't have much issue with Excluded Middle. Then again, I've only passing familiarity with intuitionism. ;)
Quoting fishfry
I'm not talking about Russell's Paradox in that bit, I'm talking about the general outlook regarding mathematics post-Incompleteness Theorems. ZFC's development was intentionally practical: we need to get on with the business of doing sensible maths but classical logic cannot function sensibly with an inconsistent set theory. Once it became clear that there was no strict necessity in picking one formalism over another (i.e. no privileged set of indubitable axioms), it seems like mathematicians and logicians became a bit more cavalier about the whole thing. Rightly so, in my view, the interest shifted to the virtues of particular formal systems applied in specific domains, particularly when such systems are fruitful.
Like from the Incompleteness Theorems, we know you can (for systems expressive enough to articulate arithmetic truths) either have an inconsistent but complete mathematics (Paraconsistent mathematics) or you can have a consistent but incomplete maths (Classical math, Intuitionistic math, etc.). Classical logic is so preferred because of its wide usability, but there are known issues and domains where it's questionable (quantum mechanics, representing human reasoning, databases, some evidence paraconsistent logic operations are faster to compute, etc.).
So I wonder if this modern openness to more or less any non-trivial logic/math indicates some kind of pluralism. What do you think?
This is exactly how I got in trouble last time. Conversating back and forth while deferring responding to the important earlier post. But a few thoughts ...
Quoting MindForged
I don't see why. Classical logic goes back to Aristotle. And even math doesn't need set theory. There wasn't any set theory till Cantor and there was plenty of great math getting done before that. Archimedes, Eudoxus, the medieval guys Cardano and so forth, Newton, Gauss, Euler, Cauchy, and all the rest. None of them ever heard of set theory and did fine without it.
If set theory were discovered to be inconsistent tomorrow morning, the foundationalists would get busy patching it and nobody else would care. As an example, how would group theory change? The group axioms and their logical consequences would still be the same.
As far as incompleteness, that's already been verified and sliced and diced via computer science, information theory, and almost another century of study. Gödel published in 1931, that's almost a century already. Incompleteness is literally a classical result now. Everyone's moved past it. So we can't use the traditional axiomatic method to determine what's true. If anything, that's perfectly sensible. We have to find other paths to truth. That's exciting, not worrisome I think.
Quoting MindForged
I don't think that's completely true. People don't study random sets of axioms. See Maddy's great articles Believing the Axioms parts 1 and 2, in which she works through the axioms of ZFC and discusses the philosophical reasons why they have gained mindshare. I really don't believe that incompleteness is any kind of nihilistic disaster. Interesting math is being done every day.
Quoting MindForged
Right. All of this is thrilling intellectual stuff. It's not the end of the road for reason. On the other hand, perhaps it's related to postmodernism and the reaction against reason. Reason has given us better ways to wage war and promote economic and social inequality. There are good reasons (!?) to distrust reason.
Quoting MindForged
Pluralism. Yes. Crisis = opportunity. Something new is coming. Hilbert's program failed, but that doesn't lead to people being cavalier as you put it. Alternatives are being explored. I think 100 years from now all this will be more clear. Reason and logic are going through some kind of revolution that we can't see the outlines of yet. Computers and the computational way of looking at things. We're in some kind of transitional period.
Hamkins has something called the set-theoretic multiverse. It's (to the extent I understand it, which isn't much) the consideration of all possible set theories considered as a whole. The worlds where CH is true, where CH is false, and so forth. There's no one true set theory, they're all part of some grand structure. These are my words, not any claimed description of what Hamkins is thinking.
Here's his "popular" exposition, which isn't what I'd call elementary or comprehensible. But for what it's worth, contemporary set theorists are already way past Gödel. By the way (rambling on now), I think the really big breakthrough wasn't Gödel. It was Cohen, who showed how to cook up nonstandard models. That's when things really started getting crazy in the set theory business.
http://jdh.hamkins.org/the-set-theoretic-multiverse/
The axiom schema of specification blocks Russell. Would I be right in thinking that one reason to be cool with that approach (the truth learned) is that we don't need unrestricted quantification?
No, Aristotle created Syllogistic. Classical logic was invented in the 1870s by Frege. These are not the same system, Classical Logic validates a different set of arguments than Syllogistic, it has logical connectives and quantifiers that Syllogistic lacked, and funnily enough, Syllogistic was a type of paraconsistent logic since according to Aristotle you cannot derive anything from a contradiction. Without set theory, we wouldn't understand a lot of maths, it came as part of the program to understand how various kinds of numbers were defined and related to each other. The "classical" in "classical logic" is misleading, if not propagandistic, heh.
Quoting fishfry
I didn't say it was worrisome, I was just pointing out a consequence of the theorems. You can use the traditional axiomatic methods, you'll just have an inconsistent theory.
Quoting fishfry
Now I certainly didn't say it resulted in nihilism, and I don't deny good math is being done. As I said, I don't reject standard math.
Quoting fishfry
Eh, I don't think it's Po-Mo at all. It's just that the landscape of possible formal systems worthy of mathematical investigation turned out to not be so limited.
Thanks for the link, looks interesting.
Well that's the conventional wisdom, pretty much universally accepted.
But I wouldn't say that we don't need unrestricted comprehension (I don't know why they use the word comprehension, I'd just say "set formation by predicates"). We simply discovered that set formation by arbitrary predicates leads to a contradiction. So we are FORCED to abandon it, reluctantly.
I do agree that this is psychologically or intuitively unpleasant. We want to think of sets as Cantor originally did:
A set is a gathering together into a whole of definite, distinct objects of our perception [Anschauung] or of our thought—which are called elements of the set.
That's how we teach school children about sets. It's how we think of sets. The collection of things that satisfy a predicate. But Cantor's definition fails. It leads to a contradiction. So we learn our lesson, we move on, we abandon naive set theory.
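To make the contrast concrete, here is a small sketch (my own illustration, not from the thread) using Python frozensets as stand-ins for finite sets. Separation raises no contradiction because the separated set simply need not belong to the universe it was carved out of:

```python
# Sketch: contrasting unrestricted comprehension with ZF's axiom schema
# of specification (separation), modeling sets as Python frozensets.

def not_member_of_itself(x):
    """Russell's predicate: x is not a member of x."""
    return x not in x

def separate(universe, predicate):
    """Restricted comprehension: {x in universe : predicate(x)}."""
    return frozenset(x for x in universe if predicate(x))

# A small universe of finite sets.
empty = frozenset()
one = frozenset([empty])
two = frozenset([empty, one])
universe = frozenset([empty, one, two])

# Separation gives a set R with no contradiction...
R = separate(universe, not_member_of_itself)
print(R == universe)   # True: every set here satisfies the predicate
# ...because R need not be a member of the universe it came from.
print(R in universe)   # False: R escapes the universe, so no paradox
```

Unrestricted comprehension would demand that R range over *all* sets, including itself, and that is exactly the step separation forbids.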
I do empathize with those who are troubled by Russell's refutation of naive set theory. But I don't agree with anyone who gets stuck on their intuition so firmly that they can't move past it. It was John von Neumann who said that we don't understand math, we just get used to it. That's a great insight.
Sorry, I overstepped my knowledge. I don't know anything about Aristotle. Poor Frege, such a brilliant and original thinker, forever remembered for his big mistake.
I better leave this be for tonight. Now I'm two posts behind you.
(I know this wasn't directed at me but I can't resist)
Depends on what you mean by "need". If you aim to prove Logicism then you do need unrestricted comprehension. Without it, we end up with a lot of unprovable hypotheses. For example, it has been demonstrated that naive set theory + a paraconsistent logic lets you prove the Continuum Hypothesis is false. However, in standard maths it's unprovable.
This isn't to say that because one formalism can solve a problem the other can't that we should ditch one for the other. It's just that there are extra-mathematical considerations to what we pick (i.e. assessing theories for their worth/virtue). Most mathematicians prefer working in a consistent system (and a fruitful one at that) so they privilege one which is consistent but lacks a bit over one where some contradictions are provable.
Ah, you must be working from knowledge of paraconsistent logic that I lack. Reference for the above fascinating factoid?
https://plato.stanford.edu/entries/mathematics-inconsistent
Maybe restricted quantification is not ad hoc at all though. Maybe "Some Gs are F" is a better paradigm than "Some xs are F".
I can understand your frustration about this, but please don't box me into that group beforehand. The fact that I disagreed with an argument used doesn't necessarily mean I disagree with its conclusion. If I disagree with a conclusion, I'll provide an argument leading to a different conclusion.
I can also understand why you do it. However in my opinion this is a debating tactic, which has no place in discussions. On the other hand, it's quite possible that the persons you usually use this tactic on made a comment that has no place in a discussion either, so I can understand why you choose to do so.
If you can get away with it, it's usually a sign the person you used it on wasn't discussing either, but instead was debating. So I would say that it's effective in a debate, but has no place in a discussion. Though of course, when having people in your discussion who don't know how to discuss and are debating instead, it can be an easy tool to get rid of them.
Personally I prefer to stick to the argument: as long as someone doesn't provide actual counterarguments but comes up with a fallacy instead, I merely point out the fallacy rather than trump it with another fallacy. But again, if it works for you, fine; just don't try it on me. I resent debates, I love discussions.
Now lets get back to the barber paradox. The way you formulated it, it seems to have a backdoor.
Quoting Jeremiah
The backdoor lies in that it's not clear whether the 'in the town' part refers to the shaving happening in town, or to the men living in the town. So to close it, I suggest either a formulation like:
"The town barber, who is a man who never leaves town, shaves exactly every man in the town who does not shave himself."
or
"The town barber, who is a man, shaves exactly every man living in the town who does not shave himself."
Otherwise "the barber shaves himself when out of town" seems to be a valid solution. The second formulation isn't airtight either though, since the town barber could be someone living in another town.
Say this barber was a real person, and this is the task he defined to set out to do. Then when he got down to trying to decide if he should shave himself or not, based on his predetermined conditions, would that make him pop out of existence? No, of course not. For the record.
It just means this is a task the barber cannot carry out. It is an invalid way of specifying a task.
What's curious is that if you consider these two commands
(a) Shave all and only those who do not shave themselves; and
(b) Form a set by selecting as members all and only sets that do not contain themselves as members,
then many people will conclude it is impossible to obey (a), but are confused by (b) and think it should be possible.
Agreed, except that the existence of such a set is a presupposition, and it is that presupposition that must be denied. (In this way it's analogous to being asked if the present king of France is bald, or if you've stopped beating your children.)
Ahhhh, very interesting article. I learned something.
I do feel a tiny bit sandbagged in the sense that you've had this somewhat obscure topic in mind as you've been debating. Had you presented this article and its point of view up front, it would have made your posts much more clear to me. Minor issue, now I'm educated and I see what you're talking about.
To summarize the article as I understood it:
* We really really really want to save naive set theory, so we have to rehabilitate unrestricted set formation via predicates, aka unrestricted comprehension.
* The reason we care is that we [not me, actually, the people doing this work] would like to rehabilitate logicism, the idea that math is derivable from logic.
* Unrestricted comprehension leads to a contradiction, and in standard logic a contradiction implies any given proposition. That's the principle of explosion. So we need to abandon explosion.
* For various technical reasons we need to also abandon or modify some other logical principles.
* Once we've done this, we can in fact allow unrestricted comprehension and save naive set theory and perhaps even logicism. Although in my opinion you're wrecking logic to save logicism, which might arguably be self-defeating. Nevertheless, this work can be done.
* Now having saved unrestricted comprehension and perhaps logicism [at the expense of wrecking logic IMO] we can also patch up standard math: number theory, analysis, topology, and so forth. Surprisingly, quite a bit of math can be preserved even at the expense of allowing the contradiction of unrestricted comprehension.
* This project is relatively new, and work continues as we speak.
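The explosion step in the summary above is itself a two-line truth-table check. Here is a throwaway sketch (mine, not the article's) verifying that the material conditional (A ∧ ¬A) → B holds under every classical valuation, which is why a single contradiction classically entails everything:

```python
# Mechanical check of the principle of explosion: in classical
# (truth-functional) logic, (A and not A) -> B is true under every
# valuation, so one contradiction entails any proposition B.

from itertools import product

explosion = all(
    (not (a and not a)) or b      # material conditional (A ∧ ¬A) → B
    for a, b in product([False, True], repeat=2)
)
print(explosion)   # True: the conditional holds in all four cases
```

Paraconsistent logics block exactly this inference, which is what lets them tolerate the contradiction of unrestricted comprehension without triviality.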
Have I got this about right? A couple of comments.
First, this does remind me a bit of the constructivist project to rebuild math with a countable set of real numbers, each of which can be explicitly constructed. A lot of classical theorems fail in this scenario, so the constructivists patch and hammer and sing and dance and try to fix everything up.
Yes it's true that it's all logically correct, but it seems like so much trouble just to avoid the truths of 20th century math: that unrestricted comprehension fails and that there really are important mathematical objects that can be proven to exist but that can not be explicitly constructed.
So yes, the paraconsistent project is interesting and I'm sure the professors are getting their grants and doing their work and getting tenure and serving on academic committees and having fine old careers.
But if I met one of these distinguished characters, I would ask them the same question I've asked you and @Jeremiah: Why don't you hack logic to allow the existence of a largest prime? Why does one easily proved mathematical fact annoy you so much yet you accept the proof of the infinitude of primes? [Sorry didn't mean to imply you personally are annoyed, you already said you're not. I mean the generic "you," the people trying to rehabilitate naive set theory].
You know we could create a system of math with only finitely many primes. For example let 7 be the largest prime. We want the fundamental theorem of arithmetic (unique factorization into prime powers) to be true. So 1, 2, 3, 4 = 2 x 2, 5, 6 = 2 x 3, 7, 8, 9, and 10 are allowable numbers. But 11 falsifies the FTA, so it's abolished from the number system. 12 ok, 13 is abolished, 14's ok, 15 and 16 are ok, 17 is abolished. And so forth.
Now we have a system of arithmetic that obeys the fundamental theorem of arithmetic and in which there is a largest prime. There's a little problem, which is that the integers are no longer closed under addition, since for example 8 is a number and 9 is a number but 8 + 9 = 17 is no longer a number. Well I guess we'll just drop the rule that says the integers are closed under addition. You're already perfectly willing to abolish the truth table for material implication which says that False implies True, and rejecting the additive closure of the integers doesn't seem much worse.
But notice that we can still preserve the fact that the integers are closed under multiplication! Any product of powers of 2, 3, 5, and 7 is also such a product. See, we are making progress! With a little effort we can probably make this system work very nicely with a few such modifications.
You might object that abolishing 11 will cause practical problems in the world. I agree with that point, and I only used 7 as a simple example. In practice we can just take the largest number anyone could possibly care about, say maybe 10^80, the number of hydrogen atoms in the observable universe, or maybe Graham's number, or Skewes's number. Any old finite number that's so big that nobody could ever care much about it in real life. Then take the next prime after that, define that as the largest prime, and I claim this is a perfectly serviceable system of arithmetic.
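For what it's worth, the toy system sketched above is easy to play with. Here is a small illustration (my own, assuming "allowable number" means one whose prime factors are all at most 7, i.e. a 7-smooth number):

```python
# Sketch of the toy arithmetic described above, where 7 is decreed the
# largest prime: a number is "allowable" iff its prime factorization
# uses only 2, 3, 5, 7 (i.e. it is 7-smooth).

def allowable(n):
    """True iff n's prime factorization involves no prime above 7."""
    if n < 1:
        return False
    for p in (2, 3, 5, 7):
        while n % p == 0:
            n //= p
    return n == 1

# 1..10 survive; 11, 13, 17 are "abolished".
print([n for n in range(1, 18) if allowable(n)])
# prints [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16]

# Not closed under addition: 8 and 9 are numbers, but 8 + 9 = 17 is not.
print(allowable(8), allowable(9), allowable(8 + 9))   # True True False

# But closed under multiplication: a product of 7-smooth numbers is 7-smooth.
print(all(allowable(a * b)
          for a in range(1, 50) if allowable(a)
          for b in range(1, 50) if allowable(b)))     # True
```

The point stands: with enough tweaking the system is perfectly workable, which is exactly the rhetorical parallel being drawn to rescuing naive comprehension.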
So why does everyone care so much about naive set theory but nobody cares about hacking logic and math to allow a largest prime?
I would ask these guys this question if I ever met them.
Now I will allow that I may be one of those old dinosaurs that has to die so that younger people can simply grow up accepting inconsistent math. And I certainly agree that non-Euclidean geometry, relativity and quantum theory, Heisenberg's uncertainty and Gödel's incompleteness, postmodern philosophy and the ills of late-stage capitalism have brought the project of western rationality to a moment of crisis. There's no point defending rationality when the world is so clearly irrational. I take all these points.
But still. Why unrestricted comprehension and not a largest prime? Why the emotional attachment to naive set theory? Maybe my professors were too effective at beating standard mathematics into my brain. But I really don't get it. Naive set theory is intuitively appealing but it fails. Accept it and move on.
Thanks for the link though, I certainly did find it interesting.
You have not given any argument as to why it must be denied.
Well summarized, couldn't have done it better myself. My only quibble is that it's "wrecking logic" only insofar as one already has an idea of what the correct logic is beforehand. The Thomists believed those who started using Classic Logic post-Frege were "wrecking logic" by abandoning what Aristotle left for us, but no one ought take that seriously in light of all the standard mathematics can do for us! Granted, paraconsistent mathematics hasn't reached that level (yet, perhaps) so my comparison probably lacks the persuasive force I'd like it to have. :-) Also, I didn't intend to sandbag you, I should have linked it earlier, though I do believe I referenced paraconsistent math.
Quoting fishfry
Because a paradox is not simply a contradiction. The contradiction "It's raining outside and it's not raining outside", as with the supposition there's a largest prime, lacks any persuasive force for it. It doesn't follow from seemingly reasonable principles. Frege believed unrestricted comprehension was "self-evident", though I loathe that term. But of course we know that the very logic he created cannot be paired with that principle of set theory. So we have two choices on offer: remove/rework the principle as best we can or switch logics. At the very least, the Incompleteness Theorems leave the door open about which one you pick, as in either case you will necessarily lose a very desirable trait for a mathematical system (either completeness or consistency).
Dialetheists hold that classical math/logic fails to account for some pretty crucial data: the truth-value of the Liar sentences, the geometry of Escher spaces, etc. They trivialize once you try to consider such things. Or take unsolved problems that are unprovable (or at least unproven) and we know standard maths leaves one wanting where answers are concerned (Continuum Hypothesis, for example). However, they argue that paraconsistent logics can handle these elegantly and give some real answers, but you can't do it while making an appeal to retain the standard formalism. And so, keeping naive set theory but changing the logic lets you retain a principle that seems very reasonable and you have an explanation for certain data and a reason to give about why some principles and inference rules in standard maths ought to be dropped.
It's a philosophy of science issue, I suppose. Which theory is more theoretically virtuous? That's what they hang their hats on, perhaps.
Let's talk about the Barber.
Suppose we have a town with three men: a barber (B), a philosopher (P) who doesn't shave himself, and a mathematician (M) who does.
Now define a set R as all and only men who shave all and only men who don't shave themselves.
1. M is never a member of R because he shaves a man who shaves himself.
2. P can't be a member either: since P doesn't shave himself, being a member would require him to shave himself (he'd have to shave every man who doesn't), but he doesn't.
3. What about B? He would have to shave P and not M. No problem. If he shaves himself, he'd be out, like M, but if he doesn't, he'd be out like P. So B can't be a member no matter what he does.
So R = { }. No one shaves all and only men who do not shave themselves, therefore the barber does not shave all and only men who do not shave themselves. The three cases are exhaustive, in fact: no one can be a member of R whether they shave themselves or not.
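The case analysis above can be confirmed by brute force. This sketch (my own, in Python) tries every possible shaving relation among the three men and checks that R comes out empty each time; the diagonal case (does x shave x?) sinks every candidate:

```python
# Brute-force check of the three-man town: for every possible shaving
# relation, no man shaves all and only the men who do not shave
# themselves, so R is always empty.

from itertools import product

men = ["B", "P", "M"]
pairs = [(x, y) for x in men for y in men]

def russell_set(shaves):
    """Men who shave all and only the men who don't shave themselves."""
    return {x for x in men
            if all(((x, y) in shaves) == ((y, y) not in shaves)
                   for y in men)}

# Try every one of the 2^9 possible shaving relations.
always_empty = all(
    not russell_set({p for p, b in zip(pairs, bits) if b})
    for bits in product([False, True], repeat=len(pairs))
)
print(always_empty)   # True: R = {} in every case
```

Note that membership fails for each x already at y = x: "x shaves x" would have to equal "x does not shave x", which is impossible, so the stipulations about P and M aren't even needed.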
Your presentation is to start by defining R as {B}, and then saying
Quoting Jeremiah
But we've already seen that B cannot be a member of R, so the premise is just false.
Now what about Russell? In the analysis given above, R was not the Russell set, but the set of all Russell sets, and it has been shown to be empty. It does not contain any set that contains all and only sets that do not contain themselves, because there can be no such set.
Therefore if you present the paradox by beginning, "Let S be the set of all sets that do not contain themselves as members," then I will deny the premise. No set can be formed in this way, which is exactly Russell's point.
I plead tragic ignorance of Aristotelian logic. Perhaps I over-identify the word logic with the standard predicate logic used in mathematics. The paraconsistentists (that word is used in the SEP article I believe) are wrecking what I think of as logic, but clearly my perspective is too narrow.
Quoting MindForged
Appreciate that! Of course that doesn't mean that 20 or 30 years from now we won't be teaching paraconsistent logic to the undergrads. But it doesn't have much debating force today. You can't sensibly say, "Ok, our assumption of X has led us to a contradiction, so X might be true if we abolish the principle of explosion and tweak a few other things in logic." We don't say that. We say, "We have just shown that X is false."
Of course specialists in logic-tweaking may bend the rules to allow that X is true. But it's hard to argue that this is how we should think. If X leads to a contradiction, X gets rejected.
Quoting MindForged
Oh but it does. The primes get exceedingly rare on average the farther out you go. And there are arbitrarily long runs of composites. You name a number n, and I'll show you a run of n consecutive composite numbers. I find it perfectly reasonable that back in Euclid's time, nobody knew whether there was a largest prime, and many learned and brilliant thinkers might have believed that there is a largest one.
And I have seen for myself that many students still ask this question. The infinitude of primes is NOT obvious at all. Of course once one has seen the proof and has fully internalized the infinitude of primes, one can no longer conceive of anyone else's doubt on the matter. But before one proves otherwise, it's perfectly sensible that there might be a largest prime.
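Both facts are easy to check by machine. Here is a small sketch (my own illustration, not from the thread) of the Euclid-style argument and of a run of consecutive composites built from n!:

```python
# Sketch of two facts: (1) Euclid-style, any finite list of primes is
# missing a prime; (2) n!+2, ..., n!+n is a run of n-1 consecutive
# composites, since n!+k is divisible by k for 2 <= k <= n.

from math import factorial

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# (1) Multiply a finite list of primes and add 1: the result is divisible
# by no prime on the list, so some prime must be missing from it.
candidate = 2 * 3 * 5 * 7 + 1                          # = 211
print(all(candidate % p != 0 for p in (2, 3, 5, 7)))   # True
print(is_prime(candidate))                             # True (211 happens to be prime)

# (2) A run of 9 consecutive composites starting at 10! + 2.
n = 10
run = [factorial(n) + k for k in range(2, n + 1)]
print(all(not is_prime(m) for m in run))               # True
```

In general the candidate need not itself be prime; it merely has a prime factor outside the original list, which is all the contradiction requires.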
Ok next post is the response I've been putting off so let me just post that so at least I can feel like I've caught up.
Quoting MindForged
But no, you're just restating your bias, not explaining it. "Q: Why is primes a simple proof by contradiction, and sets a paradox? A: Because primes is a simple proof by contradiction, and sets is a paradox." You have not explained your position, you've only rephrased it. We "rewrite the rules of set theory?" Well we "rewrite the rules of primes" to outlaw a largest prime, once we see there isn't one.
This is recency bias, not a reasonable explanation IMO.
Quoting MindForged
Unrestricted comprehension "seems reasonable" till we prove it's not. You're privileging an incorrect intuition and saying, "Who are you going to believe, an absolute logical proof, or my vague intuitions?"
Before Euclid there may well have been a strong intuition that there is a largest prime. There ARE in fact good heuristic reasons for believing so, which I've mentioned a couple of times.
Quoting MindForged
But it does. The farther out we go in the integers, the more rare primes become. And there are arbitrarily large runs of composites. Before one receives any mathematical training, it's perfectly reasonable that there are only finitely many primes; and in fact this question does come up among the mathematically naive.
Quoting MindForged
Nice intuition, turns out to be false. No reason to privilege this intuitive error. You assume it and you derive a contradiction, so it's false.
Quoting MindForged
You can make this argument when paraconsistent logic gains mindshare. I already showed that by dispensing with the principle of explosion and making a few minor tweaks to number theory, we can let 7 be the largest prime and things work out fine. Just as you can crowbar naive comprehension into submission if you're willing to tweak the rules of logic.
Quoting MindForged
Of everything you've written, this is the one point that made me stop and think. It's a good point. I have a response.
Newton was doing physics, not math. He had a method that worked to give him correct answers, but as Berkeley pointed out, Newton did not have a rigorous mathematical justification for his method of fluxions. We have a modern parallel in renormalization, for which Feynman, Tomonaga and Schwinger got the Nobel prize. At the time, they had no mathematical justification. I believe the mathematical rigorization of renormalization is a relatively recent development.
The moral of the story is simply that physics leads mathematics by decades or even centuries. Physicists leap in where mathematicians fear to tread.
It would not be reasonable for a physicist to reject a method that works in practice simply because it lacks mathematical rigor. "Lacking mathematical rigor" describes a lot of physics even today. Physicists think in infinitesimals, yet the theory of limits rejects infinitesimals.
And for what it's worth, nobody rejected calculus; but they worked very hard for 200 years after Newton to get it straightened out. Even so, the theory of limits is a bit of a kludge. It depends crucially on the completeness of the real numbers, something for which there is no known analog in the physical world. It's fair to say that the underlying philosophical problem is still open.
This is a mystery, not a paradox. Those are different things.
Quoting MindForged
Arggg! That's EXACTLY what I'm disputing. And even though it's still called a paradox, nobody treats it that way. We treat it as a rigorous and convincing demonstration that naive comprehension must be rejected.
Quoting MindForged
Falling back on naive intuition again. The Banach-Tarski paradox seems unreasonable, but it's mathematically true and is nothing more than a clever repackaging of the fact that the group of rigid motions of three space contains a copy of the free group on two letters. The proof sketch given in Wikipedia is actually quite simple. Nobody doubts its truth. We just note that "math isn't physics" and move on. By the way this is yet another perfectly correct theorem that's NAMED a paradox that actually ISN'T a paradox. It's simply an intuition-defying demonstration. Math is full of them.
The entire history of math is the triumph of rigorous demonstration over naive intuition. It's only when it comes to set formation that some (you and @Jeremiah and maybe a few others) dig in your heels and say, "No, my naive intuition is more true than mathematical proof." The mathematical community does not share that view. In the course of studying math, many naive intuitions are shattered and replaced by proof. Naive set theory is just one of them.
Quoting MindForged
Not ad hoc at all, but rather the product of over thirty years, say from 1900 to the 1930s, give or take, during which the modern axioms of ZF were developed. The process was anything BUT ad hoc, and again I would refer you to Maddy, Believing the Axioms.
Quoting MindForged
Where would I start here? You've already said you do not reject the law of the excluded middle. So there are not extra-mathematical considerations. You assume a proposition and show it leads to a contradiction, hence the proposition is false, no matter how intuitively appealing it seemed five minutes ago. Poor Frege. He got the point right away. You agree that Frege himself got the point right away. Yes?
What do you mean by dismissing the possibility of paradoxes? Is Euclid's proof of the infinitude of primes a paradox? No, it's simply a demonstration that a common belief (that the primes are finite in number) is false. You say nobody believes this, but I spend a lot of time on Quora and Reddit and this question DOES come up often among beginners.
Quoting MindForged
I could not understand that remark. What principles? The pattern is clear. If an assumption leads to a contradiction, we must reject the assumption, no matter how intuitively appealing.
Quoting MindForged
One "gets around" the finitude of primes by accepting their infinitude. You are simply using different words to describe two identical phenomena. Two proofs by contradiction.
Quoting MindForged
It is never a question of virtue, but only of truth and proof. [Two different things in general, but in this instance, the same]. There are infinitely many primes and naive comprehension fails.
Ah ... a while back you objected that I misquoted you saying that incompleteness was on point here. But in fact I believe I was originally correct. You think this is about incompleteness. It's not. In incompleteness we fix a given system of logic (first-order predicate logic in fact) and draw conclusions about sets of axioms. In paraconsistent logic we alter the logical rules to obtain different theorems. That is not the same thing at all.
In that bit, I was referring to Paraconsistent logic lacking as many practical applications (in comparison to standard maths) that would serve to justify its use. It still has such practical uses (models of human reasoning, databases and some digital logic stuff) and uses in mathematics where standard maths drops the ball. It's just not as inherently persuasive given how much more standard maths does across a broader spectrum of applications.
Quoting fishfry
Not really. That primes might be finite would take a lot of work to justify believing, accepting that sets are extensions of properties seems far simpler.
Quoting fishfry
(Answering these together)
This seems like the circular reasoning I mentioned before (and this will answer a similar question you ask elsewhere in your post). You are in effect pre-excluding the axiom when the question is exactly whether the axiom is acceptable. If I'm entertaining the possibility of accepting Russell's Paradox then pointing out the contradiction obviously isn't the defeater for me. Yes, it leads to a contradiction, but to avoid it you have to spell out *why* the principles causing the contradiction are to be rejected. We know there are more primes beyond whatever prime one thinks is the largest one. They become rarer, not nonexistent. That's evidence against the view that there's a largest one. So in attempting to reject unrestricted comprehension, you have to have something beyond pointing out the obvious fact that there's a contradiction. We want to know why something has gone wrong (if indeed it has).
And on that point:
Quoting fishfry
This seems a little silly. Russell's Paradox is a paradox *in naive set theory*. When I said no one disputes that, I meant simply that it was the conclusion of the rules of naive set theory, but it wasn't clear why those rules were faulty beyond "we get a contradiction". We knew there was a contradiction; what made (and makes) the Russell Set a paradox in naive set theory is that the principles that give rise to it seem reasonable, of a kind mathematicians employ routinely. We can avoid it of course, but in doing so Zermelo was explicit that he was trying to avoid the paradox in a new theory, not solve it (i.e. provide an explanation for why the axiom of separation is a more reasonable axiom). This plays off my earlier comparison, which I'd like you to address:
I can avoid the Liar Paradox. It's easy. I'll simply define a formal language (because natural languages are known to be inconsistent) and by some means disallow self-reference. Poof, no more Liar paradox. After all, via a simple proof by contradiction I know something has gone wrong and so some assumption must be discharged. So I dismissed a feature necessary to create the paradox and voila.
This is obviously missing the point, in exactly the way you did. Evading it is easy; giving a justification for how you did so and why your method ought to be adopted is not easy. That's what a paradox is, and that's why tossing up a proof by contradiction is silly. There are consequences to whatever method you use to dissolve such issues, and dismissing self-reference has a large number of negative repercussions, far more than a contradiction does. Sometimes those black marks are acceptable, but that has to be shown, and a proof by contradiction does not do that. That's my point in referencing theoretical virtues. The theory resulting from the change in axioms has to be assessed for its worth and consequences. If you address nothing else, this is the main point I'd like to see you tackle.
Quoting fishfry
Sure? Think I already said so. But as I also said, Frege was reasoning with the classical logic he'd just created and had no other logical theory (besides the old Syllogistic) with which to explore alternate possibilities. Classically, Russell's Paradox is unacceptable because the set theory trivializes everything, but we know that's possible to contain with Paraconsistency, even if you don't go to dialetheism.
Quoting fishfry
This is exactly the point I was making. Calculus was so useful and good a theory that in spite of known contradictions, it was more reasonable to keep it. But you didn't really answer the problem I posed. Neither of us disputes that calculus was inconsistent for a while. On your view, it seems like you'd have no recourse but to dismiss the theory on grounds of inconsistency. Pointing out that physics blazes the trail is beside the point; physicists still use logic. In effect, I can't see how your view isn't inherently flawed, because a single contradiction justifies any change needed to avoid it. You certainly didn't reference any limit on what one is rationally committed to doing when faced with a contradiction. So without further explanation, you'd be committed to dispensing with it and other developments depending on it. Ditto for any inconsistent scientific theory.
I think you are mixing up points. Earlier I referenced how Russell's Paradox led to a reformulation of set theory, and I further said that in spite of this reformulation, the Incompleteness Theorems still left the door open to Paraconsistent logic. After all, paraconsistent logics can tolerate inconsistency and, as per Gödel's theorems, the resulting system can even be complete (the opposite of standard maths).
He's saying that there cannot exist a barber who shaves all and only men who do not shave themselves. No barber is a "Russell barber".
And so there cannot exist a set that contains all sets that are not members of themselves. No set is a "Russell set".
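To make the barber claim precise: the nonexistence is a theorem of pure first-order logic, no set theory required. Writing [math]S(b,x)[/math] for "b shaves x", the claim is [math]\neg \exists b\, \forall x\, \big( S(b,x) \leftrightarrow \neg S(x,x) \big)[/math]. The proof is short: suppose such a [math]b[/math] existed; instantiating [math]x := b[/math] in the quantified formula gives [math]S(b,b) \leftrightarrow \neg S(b,b)[/math], a contradiction, so no such [math]b[/math] exists.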
Let S be a square circle. How many sides does S have?
Is this a paradox? If not, what makes Russell's set different? I say nothing. In both cases there's just a contradictory premise, and so the "paradox" is resolved by dismissing that premise as incoherent.
S being a square circle isn't a paradox. It's just nonsense.
R being a set that contains all sets that do not contain themselves isn't a paradox. It's just nonsense.
The fact that it can be shown by existing foundations is what sets it apart.
Also, and this is very important, straw men are in no way a proof. Russell's paradox concerns set theory; saying things like "there can't be a round square" is just babbling unrelated observations.
It just means that the axiom schema of unrestricted comprehension – [math]\exists y \forall x (x \in y \leftrightarrow P(x))[/math] – is inconsistent, and so additional qualifications are required to maintain consistency.
I'm somewhat confused as to what you're arguing, and I probably go further than you do on this issue. Surely the classical mathematician agrees that naive set theory was fundamentally wrong. Specifically, that it was fundamentally wrong in taking the Unrestricted Comprehension Schema as an axiom. In ZF, the paradox cannot be articulated, because the axiom of separation only lets us carve subsets out of sets we already have, which blocks the construction of the Russell set.
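The instability can be made vivid with a toy model (an illustrative sketch in Python, not a claim about any formal set theory): represent membership as an explicit relation over a tiny finite universe, form the Russell collection R = {x : x not in x}, add R to the universe, and compare R's defining condition with R's actual self-membership. They can never agree.

```python
# Illustrative sketch: membership as an explicit relation over a tiny
# finite "universe". We form the Russell collection R = {x : x not in x},
# add R to the universe, and check whether R's defining condition can
# agree with R's actual membership in itself. It cannot.

def russell_demo():
    member = {
        "a": set(),    # a contains nothing
        "b": {"b"},    # b contains itself
        "c": {"a"},    # c contains a
    }
    # The Russell collection over the current universe: {"a", "c"}
    member["R"] = {x for x in member if x not in member[x]}

    should_be_in_R = "R" not in member["R"]  # the defining condition for R
    actually_in_R = "R" in member["R"]       # the actual membership fact
    return should_be_in_R, actually_in_R

# The condition says R belongs in R, yet R is not in R; and if we "fixed"
# that by inserting R into itself, the condition would flip the other way.
print(russell_demo())
```

There is no way to assign R a self-membership status that satisfies its own definition, which is the contradiction in miniature.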
I understand that some feel the axiom schema of specification resolves the paradox; I was speaking generally to the notion that if it is nonsense we can ignore it, as that argument seems to repeat itself in each of my threads. Mainly I just wanted a thoughtful reply instead of just a brush-off and a straw man. We can't just dismiss mathematical paradoxes as nonsense; that is not a valid solution.
It's nonsense that you cannot seem to accept the very clear arguments in favour of dismissing some of your purported paradoxes. Just because you're able to state an inherently contradictory sentence, does not mean that expression is a paradox.
Srap was clear:
Quoting Srap Tasmaner
"A married bachelor drew a round square" is as meaningful as "the set of all sets that do not contain themselves as members". E.g. nonsense that we can and should dismiss.
So a paradox is just a persuasive contradiction.
The Russell set is not what anybody had in mind when they first had the idea of sorting the world into classes. The Liar is not what anybody had in mind when they first had the idea of communicating a fact about the world to another person. What you're both missing is how perverse the paradoxical cases are. As I've said elsewhere, the reaction of the average layperson will almost certainly be, "Oh, it's a trick." In my part of the world it might be worse: "I always figured math was bullshit -- guess I was right."
Both of these cases reveal the dangers of unrestrained generalization. We find an approach that works for some purpose, see that generalizing it allows us to use the same technique for many purposes, invent mathematics and conquer the world with Francis Bacon proudly leading the way.
But do paradoxes show that abstract thought is fundamentally flawed?
We rush ahead with our limited understanding of classes, or of truth, and prematurely declare a principle because the first phases of generalizing worked so well. The paradoxes are a warning not to be less ambitious but to be more careful. They are creatures of the work we've done so far -- this is why they have the peculiar form they do. If they go in a box with "incompleteness", "uncertainty", "undecidability" and all that jazz, the label on that box isn't "Not such a big shot now, are you human?" It's just "Hey, you're not done."
Why is that relevant? I certainly never said people intended to create such paradoxes. My point is precisely that the existence of such things is what motivates changes in the logic. They are perverse, at least in the sense of being unintended consequences of seemingly reasonable principles (hence the designation "paradox"). The layperson will not understand it if you tell them there are different sizes of infinity, yet we know that's true in math; we don't take lay bafflement as evidence against Cantor's work on infinity.
This is just to say that I'm not suggesting people blindly forge ahead. To make sensible use of Unrestricted Comprehension, you have to use a paraconsistent logic. Classically (and in every other logic), doing so makes the resulting mathematics trivial and therefore useless.
Ah, no, I definitely wasn't saying that the recherche nature of things like the Liar or the Russell set is some kind of evidence they should be shrugged off.
We broadly agree, I think, that there is something reasonable and something unreasonable at work in producing the paradoxes. If forced to choose, my allegiance is with the LNC rather than naive set theory, but whatever. I do wholeheartedly approve of dialetheist tinkering. It's healthy.
But I am suspicious of a kind of magician's (or conman's) patter you see around these things. "I can have a set of numbers, a set of things that are red, a set of bald men. Perfectly natural, right? A set of cars. A set of cars that are blue. A set of all sets that don't have themselves as members. Most natural thing in the world..." I just want to pause over the "Wait, what?" reaction here, while you're always emphasizing how naturally these things arise. (I do understand that it's the principles not the example that's supposed to be natural.)
I guess emphasizing the weirdness of the counterexamples is holding onto the possibility that the counterexample itself is where the sleight-of-hand takes place, rather than in the principles it exploits. I certainly feel that way about the Liar. (Likewise Gettier, which is a whole lot like a magic act; Fitch's; the Slingshot.) Russell strikes me as something a little different, that absolutely unrestricted comprehension is bizarre and not what anyone needed or wanted and it's no surprise that if you explicitly let in anything at all, you'll get some pathological cases.
I shared this paradox with lots of people, non-math people, not a single one of them reacted that way. I share every paradox I post here with friends and co-workers, it acts as a short conversation piece sometimes; some find them interesting, most don't really care.
Maybe you are underestimating the "average layperson".
Wait, let me get the dead horse for you. . . .
Call it whatever you like, but changing the name is not a real argument.
They are here to generate discussion and I don't care if they are paradoxes, contradictions or whatever, just as long as they are meaty enough to get a fruitful discussion going. I also don't care if the side I have aligned myself with is right; I hate it when everyone agrees with one another, it is the most unproductive form of discussion. The purpose of this is to walk away with just a bit more understanding and the paradox is merely the raft we use to cross those waters.
We are at seven pages here now, so I think this one was a good one.
From where I sit, these threads have raised the level of discussion on the forum. Specific problems are way more interesting than a battle of isms.
Appreciate it.
The bachelor statement is not a direct contradiction. One has to deduce the contradiction by a series of steps, so the only difference between that and the assertion of the existence of a Russell set is the length of the deduction by which one arrives at a contradiction from the statement.
To see this, note that a contradiction is a statement of the form
P and not P
Now a person is said to be a bachelor at time t if at that time they are an adult, male human that has never been married. We can write this as:
bachelor(x, t) <-> adult(x, t) and male(x, t) and human(x, t) and for all t' (t' <= t -> not married(x, t'))
Then the statement 'Paul is a married bachelor at time t' is formalised as:
married(Paul, t) and bachelor(Paul, t)
which is the same as
married(Paul, t) and adult(Paul, t) and male(Paul, t) and human(Paul, t) and for all t' (t' <= t -> not married(Paul, t'))
Now intuitively we feel confident that we will be able to deduce a contradiction of the form
married(Paul, t) and not married(Paul, t)
from this.
But that deduction will take quite a number of steps. For a start we need to get rid of two other conjuncts adult(Paul, t) and male(Paul, t) using 'AND elimination'. We also somehow need to deduce
not married(Paul, t)
from
for all t' (t' <= t -> not married(Paul, t'))
That is going to involve using an instance of the axiom schema of substitution as well as the axiom that A -> A or B ('OR introduction').
I expect that deducing the contradiction will require a proof of at least ten steps. So the contradiction is certainly not direct.
For the contradiction to be direct, we would need to define bachelor(x, t) to simply mean x is not married at time t. But that is not how bachelor is defined. Under that definition a divorcee, a newborn, a widow, a frog, a rock and the number 3.45 would all be bachelors.
The same analysis can demonstrate that the statement that x is a square circle is not a direct contradiction, but rather a statement from which a contradiction can be deduced by a series of steps - just like the assertion of the existence of a Russell set.
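The point that the contradiction is derived rather than direct can be checked mechanically. The sketch below (my own toy formalization, not andrewk's) expands bachelor(x, t) per the definition above and searches every truth-value assignment: nothing satisfies "married(Paul, t) and bachelor(Paul, t)", but establishing that requires the expansion and the search, not a single glance.

```python
# Brute-force check: expand bachelor(x, t) as
#   adult and male and human and never-married-up-to-now,
# then verify that no truth-value assignment makes
#   married(Paul, t) and bachelor(Paul, t)
# true. The contradiction is real, but it takes steps to reach.

from itertools import product

def satisfies(married_now, adult, male, human, never_married):
    # In any coherent world, "never married at any t' <= t" covers t
    # itself, so it rules out being married now.
    coherent = not (never_married and married_now)
    is_bachelor = adult and male and human and never_married
    return coherent and married_now and is_bachelor

satisfiable = any(
    satisfies(*assignment) for assignment in product([True, False], repeat=5)
)
print(satisfiable)  # the conjunction has no model
```

The search visits all 32 assignments and finds none that works, mirroring the multi-step deduction described above.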
It may be telling then that most of your acquaintances are not philosophy students. Most students who had to take a course in Analytic Philosophy will be well informed of the context of Frege's and Russell's correspondence.
If you present Russell's Barber to someone who doesn't have a background in philosophy as a problem to resolve, outside of Frege's Begriffsschrift or Cantor's set theory, then you are wasting their time. Outside of the context of Frege's attempt at a lingua universalis following Leibniz, Russell's paradox is just a pure contradiction. It is a paradox (or more accurately, an antinomy) for Frege's Grundgesetze der Arithmetik because of its acceptance of unrestricted comprehension.
Quite simply, I think you have not put in the work, so to speak, to understand the context of Russell's argument to Frege. Russell's Barber has no value outside of this context; it's not even an interesting problem.
No, rather, you have been missing the context ever since you started this thread. You cannot ask (at least, meaningfully) someone to resolve Russell's Barber if that person hasn't shown an acceptance of unrestricted comprehension. The problem doesn't present itself to them then.
Even less for the layperson. 99.99% of the human population doesn't know that you could possibly provide a foundation to mathematics with set theory.
Have you ever studied Frege's Begriffsschrift?
I am fairly sure I did, so clearly I can.
Btw, do you know what ad hominem is?
I study math, and I pay very little attention as to where it came from. However, as I always do, I spent time studying this paradox before posting it. I was well aware of the proposed resolution.
That's a poor troll's answer.
Quoting Jeremiah
I'm not trying to insult you. But you have clearly shown a lack of understanding of the role of thought experiments in philosophy. This thread and the Zeno's Paradox thread show it well. Outside of a very specific context, Russell's Barber is just useless. Its purpose is specifically to tell us that we need Zermelo's restriction.
Quoting Jeremiah
You should. Context is important. Frege's Begriffsschrift makes no sense outside of Frege's criticism of syllogistic logic. Russell's set theory is written directly in the movement of Frege's research. Your paradox is just otiose to everything if you don't put it back into its historical and theoretical setting.
It's also why it's a bit disingenuous for you to come and claim authority on the subject. I mean, some of us will have had around 40+ hours of courses on this very subject alone.
I get the context I need to understand the underlying concepts and do the math. Honestly, do you have any idea how many theories and theorems a student of mathematics has to cover? You are not suggesting a realistic approach. I get a brief history, a full proof and then we go. All I really care about personally is the proof and the notation. As long as I can follow proofs and read notation nothing is out of scope.
Quoting Akanthinos
I have easily spent more than 40 hours on set theory. It's math and if you speak the lingo I'll understand it. I reviewed all the notation and the underlying concepts for this paradox before posting, I knew before hand the axiom schema of specification would be the main counter point. I also know that several of the arguments presented in this thread don't hack it.
Your lording is not convincing me.
Math is pretty far back in the background of the context of the logicism of Frege. It's there, and it's more relevant than, say, natural language paradoxes, but the whole point of Russell's barber is not to try and resolve its apparent contradiction via maths; it's what it means for Logicism's project. That is the only important thing about Russell's barber, and clearly you didn't figure that out in your research on set theory, otherwise the OP would be wildly different.
I did acknowledge that people are doing this research and that they're serious people. And I simply stated that if I met one of them I'd offer up the prime example and ask them to explain to me why they care about the one and not the other. It's a question I'm trying to understand.
I don't know much about philosophy but I do have a bit of a math background. I try to give my perspective. I'm generally pretty upfront about my areas of ignorance. I can be ignorant yet have an opinion, and people may find it interesting or not. I claim no authority I don't have. Are you referring perhaps to your interpretation of my writing style? If I express an opinion that's my opinion. You don't have to agree and I don't even claim that I'm right, and I never claim to have any authority I don't have, or any at all. I do have opinions. And I do have some knowledge of math that bears on philosophical issues from time to time.
So when you say the "authority you adopt," are you referring to my style of expressing my opinions? Or are you thinking that I have claimed authority I don't have? If the latter, please point these instances out so that I can correct them. But if only the former, you should take into account that that's just my style.
I apologize, fishfry, because your posts are, on the contrary of Jeremiah's, very well thought out. I was replying exclusively to him.
Russell's and Zeno's paradoxes are not riddles to be solved. They would have trivial interest to philosophers if they were, because despite appearances, philosophy is pretty much nothing like riddle solving. They show us why it's important to think in some contrarian ways at some time in order to test some otherwise untouchable biases. Zeno's is about the need to think about infinite series. Russell's is about Frege's mistake in using unrestrained set theory as a foundation for the concept of number.
What's unproductive is taking sides and not accepting valid counterarguments.
You raised a lot of really good points in this post and it's late so I only want to respond to this one point and I'll aspire to get back to the rest of your interesting post later.
Newton's calculus was never inconsistent in the sense of logic. You are equivocating on the word inconsistent. See, this is something I do happen to know about Aristotle! That he listed some rhetorical fallacies, one of which was equivocation: using the same word with two different meanings within the same argument.
You mentioned calculus was inconsistent in an earlier post, and I didn't push back on it then, but it's important to clarify this point now.
I hope we can agree that a logical system (some axioms along with some inference rules) is inconsistent if there is a proof (a step-by-step application of the inference rules to the axioms) that results in a proposition P, and also a proof of not-P. I'm certain we agree on that.
I will now argue that Newton's confusion over the nature of (what we now call) the limit of the difference quotient is NOT such an inconsistency.
I believe that if I asked you to name the P for which both P and not-P have proofs, you would say, "[math]\Delta x[/math] and [math]\Delta y[/math] [in modern notation] are both nonzero and both zero." But that's not really the same thing, as I hope I can explain clearly enough to earn your agreement.
So in Newton's calculus (using modern terminology and notation) we have a difference quotient [math]\frac{\Delta y}{\Delta x}[/math] where [math]\Delta x[/math] is not zero and [math]y[/math] is a function of [math]x[/math]. [It's perfectly legitimate for [math]\Delta y[/math] to be zero, as in the case of a constant function].
Now as [math]\Delta x[/math] gets very close to zero, it may be the case that the quotient [math]\frac{\Delta y}{\Delta x}[/math] seems to get very close to some number, which Newton called the fluxion and that we now call the derivative. The derivative can be naturally interpreted as, for example, the instantaneous velocity of a moving particle. So whether we can mathematically formalize it or not, it's clearly an important concept in need of elucidation.
[Just as with the proof of the infinitude of primes, I'm going over familiar territory in detail just to make sure everyone's on the same page].
So we can sort of think of what the quotient does "when both [math]\Delta x[/math] and [math]\Delta y[/math] are zero," yet we know that this does not actually make any mathematical sense, because the expression [math]\frac{0}{0}[/math] is not defined and cannot be defined consistently with the usual laws of arithmetic. So it's a puzzler. Berkeley's "ghosts of departed quantities" is a great line, a rhetorical zinger that shines the spotlight on Newton's problem.
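The predicament can be sketched numerically (my own illustration, not part of anyone's post): for [math]f(x) = x^2[/math] at [math]x = 3[/math], the difference quotient visibly approaches 6 as [math]\Delta x[/math] shrinks, yet at [math]\Delta x = 0[/math] the expression is literally 0/0 and undefined.

```python
# Numerical sketch of Newton's predicament for f(x) = x^2 at x = 3:
# the difference quotient tends toward the derivative 6 as dx shrinks,
# but at dx = 0 the quotient itself is the undefined expression 0/0.

def f(x):
    return x * x

def difference_quotient(x, dx):
    return (f(x + dx) - f(x)) / dx

for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, difference_quotient(3.0, dx))  # values approach 6

# difference_quotient(3.0, 0.0) would raise ZeroDivisionError:
# there is no number 0/0, however strongly the pattern suggests 6.
```

The pattern screams "6", but nothing in the arithmetic of quotients licenses saying so; that gap is exactly what the limit concept later filled.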
For what it's worth, Newton himself perfectly well understood the problem and struggled over the rest of his career to try to explain it, but without success. It did take 150 years, as you mentioned earlier, to develop the concept of a limit; and it was well into the twentieth century before we saw the complete path from ZF to calculus.
What would we call Newton's problem? It's not an inconsistency in the sense of being able to prove both P and not-P. At no time did Newton ever say that [math]\Delta x[/math] and [math]\Delta y[/math] are both nonzero AND they are both zero. Newton knew better than to say that. We do NOT have a logical inconsistency in the formal sense.
What we have is something that clearly works, but we haven't got the vocabulary to express it mathematically. That's a mental state familiar to everyone who's ever had to construct a proof. We get to the point where we can SEE what's going on, but we can't mathematically SAY what's going on. That's where Newton and the mathematicians of the 18th and 19th century got stuck till they finally worked out a proper formalization.
I hope you can agree that this is not a case of a system that can derive a proof of some proposition P and not-P. That was not the case in Newton's calculus. Rather, Newton just saw a truth that he could not formalize, either with existing concepts or even by inventing new ones.
So there is an equivocation between
* Inconsistency as a formal proof of both P and -P; and
* Inconsistency as in getting to a point where it's intuitively obvious what's true yet you can't figure out how to formalize it properly.
Calculus was never inconsistent, just un-formalizable for a couple of centuries.
I do take your point that it's noteworthy that mathematicians kept at it till they developed a conceptual and symbolic framework to explain calculus. But that's not exactly the same as keeping at it to resolve a direct P and not-P contradiction as in the case of Russell's demonstration.
On the other hand I see that in both cases, we are keeping at it in order to get to the bottom of some antinomy in which we perceive a larger truth that we can't properly express. I will grant you that much. The Newton difference quotient isn't an actual inconsistency, but it's still a pretty thought-provoking datapoint for your case.
What do you think?
Thank you @Srap Tasmaner for the MathJax pointer.
Oh sorry I didn't realize that. Thanks for clarifying.
The only thing you have really expressed is your disapproval of me; then you made some lame subjective argument about how you think I should have approached this. Your opinion has been noted, as an opinion.
What is oddly backwards about your entire argument is that I am using these paradoxes to generate discussion, which is their central reason for engagement, while you are pretending they belong to your fantasy club of those you consider your fellows. You seem to think there is a specific context, under certain terms and with select people, in which these matters should be discussed. I, on the other hand, discuss these topics and much more complicated notions with everyone and anyone. I have had conversations about multi-variable calculus with people who have no more than high school algebra, because I assume that if I can figure it out and understand it then they can as well.
Yes, I can be a troll, I am well aware of that and fully admit it, but clearly you are not without your egotistical hang-ups either.
I'm not making an elitist argument. What I'm saying is that you lose all the philosophical value of these historical/theoretical artefacts if you do not present them as such, but as riddles to resolve. You may not have realized this because you have not been discussing this mainly with philosophers or philosophy students.
Anyone with an Analytic Philosophy course completed, upon being asked "who shaves Russell's Barber" out of context, will simply look at the interlocutor with amused pity. Just like I would look at someone if he were to tell me that he's trying to derive quantum physics from Democritus's atomism.
I can't find anything in your post I'd agree with. Certainly not the "amused pity" or the treatment of Russell's argument as an artifact.
And you claim you are not an elitist. . . .
Then you should reread your early Analytic history, starting with the Grundlagen... Russell's barber is a historical/theoretical artefact. It is not a riddle. Treating it as such simply devalues the discipline.
:brow:
The vast majority of my work colleagues did not believe or know that there was such a thing as logic classes taught at university (or anywhere, actually). To them, what philosophers do is pelleter des nuages, literally shoveling clouds.
When you present these little problems as riddles, this is what you do: shovel clouds. If philosophy reduced itself to this, then the layman's opinion would be correct; we'd be stupidly useless. It's fine to be unproductive and time-wasting, but at least realize that you are being so.
I disagree.
The Russell set is manifestly pathological. That it is, I could know even without knowing its role in the history of philosophy, but the converse does not hold: I have to understand what it is to understand why it played the role it did.
The more interesting question might be why it is pathological, and that could be a matter of historical context. If, in the fullness of time, dialetheism wins out, the Russell set might no longer seem pathological.
As things stand, we have a predicate composed of simple, well-behaved elements to all appearances assembled in an acceptable way, and yet this predicate cannot possibly be predicated of anything. If we could say why this abomination is no predicate at all, we could regain the Paradise in which predicates always pick out classes.
Your entire argument boils down to a child throwing a fit because the neighbor kid is playing with his/her toys.
Yeah, and by doing so you'd end up, like in the Zeno's thread, talking about Planck's constant and other otiose elements.
Quoting Srap Tasmaner
Well, by the same account, P & -P is a proposition composed of simple, well-behaved elements to all appearances assembled in an acceptable way.
And you have no arguments. Only a ridiculous attachment to the meaningfulness of these threads. It's telling that you chose a playground metaphor; that is basically what you are doing: being a child pretending at having authority.
The Russell predicate [math]x \notin x[/math] most definitely picks out a class: the class of all things that are not members of themselves. This class just doesn't happen to be a set. It simply turns out to be the case that some collections defined by predicates are sets; and others are not. Those collections that aren't sets are called proper classes.
In ZFC, there are no official proper classes, so we use the phrase informally. We say, "The collection of all things not members of themselves is a proper class," by which we mean that it's not a set. Or as we sometimes say colloquially, it's "too big" to be a set.
On the other hand there are set theories such as Gödel-Bernays set theory, or NBG, in which the concept of proper class is officially formalized.
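The well-founded picture fishfry describes can be illustrated concretely (a toy, not a claim about ZFC itself): Python's frozensets can never contain themselves, so the Russell predicate x ∉ x holds of every one of them, and the "Russell collection" over any universe of frozensets is the whole universe — exactly the sense in which it is "too big" to be a set.

```python
# Toy illustration: frozensets are well-founded, so x not-in x holds for
# every one of them. The "Russell collection" over a universe of frozensets
# is therefore the entire universe -- matching the ZFC picture in which the
# Russell class coincides with the (proper) class of all sets.

empty = frozenset()
universe = [
    empty,                                   # {}
    frozenset([empty]),                      # {{}}
    frozenset([empty, frozenset([empty])]),  # {{}, {{}}}
]

russell_like = [x for x in universe if x not in x]
print(russell_like == universe)  # every set satisfies x not-in x
```

However many hereditarily finite sets you add to `universe`, the comprehension always returns all of them.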
The 17-year-old may feel like he has hit gold, so to speak, but he is wasting time. Until he starts reading the history of debates, proofs and counter-proofs, until he can actually develop a critical opinion on these, he is just masturbating intellectually.
Replace theology by logic, existential proofs by logicism, and we have the same situation here.
I never read any Nietzsche, and I am likely much older than you think.
Yeah, that's a good point. I had forgotten about the proper class stuff and was reaching for "class" as the generic term. (I guess here we have to use "collection" for a generic term?) Not an overly satisfying distinction, this, but it is what it is.
Wikipedia has the quote from Russell's letter:
It's interesting how unlike something like [math]\small \{x \mid Fx \land \neg Fx \}[/math] Russell's set is. If it were "the set of sets that contain themselves and don't contain themselves", it would not be so peculiar. Instead it's simply "the set of sets (that are sets) and don't contain themselves". All you have to do to get into trouble is take the collection as itself an object, something that could be the value of [math]\small x[/math].
You can almost hear the other sense of "totality" in our English-language descriptions: there's no problem defining a set that contains only sets that don't contain themselves; you just can't define a set that contains all and only sets that don't contain themselves. You can tell a barber to shave only men who don't shave themselves, just not all of them.***
Pages back I mentioned the word "heterological" but got no takers. Right before the quote above, Russell says this:
"... is heterological" sure looks like a predicate, and this time there's no use saying it picks out a "proper class" instead of a "set". In Russell's opinion it's not a predicate at all. What's up with that?
*** ADDED: Nah, it works the other way around too, "all" but not "only".
We need only go back to 1696 to see that yes, a formal contradiction was provable. The only way, as far as I can see, to say it was just an inability to formalize is to say that it was understood but couldn't be made explicit. From Analyse des Infiniment Petits (with some modern fixes in notation and such)
I don't know how to do the math notation stuff on this forum, but the paper "Handling Inconsistencies in the Early Calculus" (use Sci-hub to download the paper) goes over the example I had in mind when trying to find a tangent to a curve. When calculating the differential, in the early calculus dy had to be unequal to zero, yet once the fraction had been simplified it needed to be equal to zero. Prior to limits being formalized, this was contradictory: dy's value was necessarily non-zero at one step and then zero afterwards.
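The "nonzero, then zero" move can be mimicked exactly for y = x^2 (my own toy reconstruction, not the notation of the paper): form the quotient with the increment e assumed nonzero, cancel e algebraically, then declare e = 0 — the step Berkeley mocked.

```python
# Toy reconstruction of the early-calculus tangent computation for y = x^2.
# The numerator (x + e)^2 - x^2 is expanded as a polynomial in the increment
# e; dividing by e presumes e != 0, yet the final step sets e = 0.

def early_slope(x):
    # numerator expanded in powers of e: 0 + (2x)*e + 1*e^2
    numerator = [0, 2 * x, 1]        # coefficients of e^0, e^1, e^2
    assert numerator[0] == 0         # no constant term, so division by e is exact
    quotient = numerator[1:]         # (2x) + 1*e  -- valid only if e != 0
    return quotient[0]               # now declare e = 0: slope = 2x

print(early_slope(3))  # tangent slope of x^2 at x = 3
```

The two steps jointly require e ≠ 0 and e = 0, which is exactly the contradiction the limit concept later dissolved.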