Musings On Infinity
Grandi's series
I think this series tells us that infinity does not have a size and therefore infinite sets do not have a cardinality. The series is:
{ 1, -1, 1, -1, 1, -1, … }
To evaluate the infinite series requires knowledge of whether the above infinite set has an odd or even cardinality. That mathematics cannot produce (a sensible) evaluation of this series is tacit admission by mathematics that infinite sets do not have a cardinality.
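The parity dependence can be seen by computing partial sums; a quick Python sketch (the function name is mine, just for illustration):

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
# The running total never settles: it depends on how many terms are taken.
def partial_sums(n_terms):
    total, sums = 0, []
    for k in range(n_terms):
        total += (-1) ** k  # +1 on even k, -1 on odd k
        sums.append(total)
    return sums

print(partial_sums(8))  # [1, 0, 1, 0, 1, 0, 1, 0]
```

The sum is 1 after an odd number of terms and 0 after an even number, which is exactly the odd/even dependence described above.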
What does Divide By Zero Signify?
Division by an infinitesimal is associated with infinity:
[i]lim (n→0) 1/n = ∞[/i]
Dividing by zero is however undefined. It is nonsensical to divide an object zero ways:
1/0 = UNDEFINED
Also, division being the inverse of multiplication, there is no number x such that x * 0 = 1.
What is going on here? A finitist might say the above two equations correspond to potential and actual infinity respectively:
- The limit represents potential infinity; we can potentially continue to arrive at larger and larger numbers (by setting n smaller and smaller)
- But we can never ‘complete’ the process of arriving at larger and larger numbers - we can never reach ‘Actual Infinity’ - if we try to, the expression becomes UNDEFINED - exactly what we’d expect if actual infinity does not exist.
So actual infinity is the same as 1/0 is the same as UNDEFINED.
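The two behaviours can be seen directly in Python (a sketch; floating point stands in for the reals here):

```python
# 1/n grows without bound as n is made smaller and smaller ...
for n in (1.0, 0.01, 1e-6, 1e-12):
    print(n, 1 / n)

# ... but division by exactly zero is not given a value at all.
try:
    result = 1 / 0
except ZeroDivisionError as err:
    print("1/0 is", err)  # 1/0 is division by zero
```

The shrinking-n loop is the "potential" process; 1/0 itself is simply rejected, mirroring UNDEFINED.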
Is There an Actual Infinity of Points on a Line Segment?
An illustration that UNDEFINED corresponds to actual infinity: it is conventional in mathematics to regard the number of points on a line segment as actually infinite. A finitist would disagree with this definition:
- Consider a line segment length 1
- The points on it have length zero (They are just logical labels on a line)
- So the length of the interval (1) divided by the length of a point (0) gives an UNDEFINED number of points on the segment
The same argument applies to the number of real numbers in the interval 0…1: real numbers have length zero, so the number of real numbers in the interval is UNDEFINED.
Comments (303)
First, we realise that if two different types of infinity existed, they would each have to be larger than the other. That's a logical contradiction, so we must have made a wrong assumption; only one kind of infinity can exist.
Then:
∞ + 1 = ∞
Leads to:
1 = 0
Which is another logical contradiction; infinity is not a number.
Who said infinity was a number, other than four year olds?
I can see why his mathematics are "of great philosophical interest". They mean nothing and lead nowhere.
“I have never assumed a ‘Genus Supremum’ of the actual infinite. Quite on the contrary I have rigorously proved that there can be no such ‘Genus Supremum’ of the actual infinite. What lies beyond all that is finite and transfinite is not a ‘Genus’; it is the unique, completely individual unity, in which everything is, which contains everything, the ‘Absolute’, unfathomable for human intelligence, thus not subject to mathematics, unmeasurable, the ‘ens simplicissimum,’ the ‘Actus purissimus,’ which is by many called ‘God.’” - Cantor.
He thought God was talking to him and giving him these ideas about infinity. It's all marsh gas.
Point out my instances of ignorance then...
Cantor's "diagonal" theorem on the existence of an infinite hierarchy of infinities can be expressed in a quite convincing way: "for every set A, the set of all subsets of A is bigger than A".
Of course, we have to give a concrete definition of what "bigger" means:
"the set B is bigger than the set A if there isn't any function that for every element of A gives an element of B and covers all B" ( i.e. each element of B corresponds to some element of A )
The demonstration is quite easy (https://en.wikipedia.org/wiki/Cantor%27s_theorem). But there is a problem with the statement of the theorem: Russell's paradox (https://en.wikipedia.org/wiki/Russell%27s_paradox). The concept of "set of all subsets" is contradictory.
So, Russell's idea is, basically, to avoid talking about sets, and talk only about intuitively well-defined things: functions defined on "recursive types". In this way, Cantor's theorem becomes: "for every recursive type A, the type A->Bool is bigger than A".
Put in this way, Cantor's theorem only means that it's impossible to build a "surjective" function that associates to every natural number ( "Nat" ) a second function from natural numbers to booleans ( "Nat -> Bool" ). So ( "Nat -> Bool" is bigger than "Nat" ). At the same way, "Nat -> (Nat -> Bool)" is bigger than "Nat -> Bool", "Nat -> (Nat -> (Nat -> Bool))" is bigger than "Nat -> (Nat -> Bool)", etc..
This is what "an infinite hierarchy of infinite sets, each one bigger than the other" means.
Everything is built using functions on natural numbers and very basic logic deductions, and that's the whole point: it makes sense to speak about infinite objects because they are only built by finitely defined functions. So, in a sense, from the point of view of logic, all infinites are only "potential"
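The diagonal construction for Nat -> Bool can be written out directly. A sketch (the sample enumeration f is an arbitrary choice of mine, just to have something concrete to diagonalise against):

```python
def diagonal(f):
    """Given any enumeration f: int -> (int -> bool), return a function
    that differs from f(k) at input k, so it is missed by the enumeration."""
    return lambda n: not f(n)(n)  # flip the diagonal entry

# An arbitrary sample enumeration: f(k)(n) = "n is a multiple of k+1"
f = lambda k: (lambda n: n % (k + 1) == 0)
g = diagonal(f)

# g is not f(k) for any k, since they disagree at input k:
print(all(g(k) != f(k)(k) for k in range(1000)))  # True
```

Whatever enumeration you feed in, the same flip defeats it; that is the whole content of "Nat -> Bool is bigger than Nat".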
You have a detailed argument for that false claim? I'd like to understand your thinking here. The set of all subsets of a given set is in no way refuted by Russell's paradox.
Quoting Mephist
Math goes beyond logic. Even Russell accepted the axiom of infinity, which posits an actual infinite set whose elements include all the natural numbers. If you deny the axiom of infinity you have Peano arithmetic, which is fine as far as it goes, but does not allow a satisfactory theory of the real numbers. So you have to throw out modern physics along with most of modern math.
The solution to Russell's paradox is the axiom schema of specification. And regardless, Russell's paradox does not contradict or invalidate the powerset axiom.
"In 1908, two ways of avoiding the paradox were proposed, Russell's type theory and the Zermelo set theory, the first constructed axiomatic set theory."
Russell's paradox is about "the set of all sets that don't contain themselves", but the problem was how to limit the language (axioms and rules of logic) to ensure that no sentences of the type "the set of all sets that don't contain themselves" are allowed. One of the solutions is type theory.
In the standard contemporary mathematics based on set theory you can't speak of the "set of all sets" because it's not a set itself, but rather you have to speak about the "class" of all sets.
If you use first-order logic on the domain of real numbers, the set of all subsets of real numbers is the same thing as "the set of all sets"
No not at all. For example let us consider the set of all real numbers that are not members of themselves. That's a legal set formation according to the axiom schema of specification. That is, we start with a known set, the reals, and then reduce it by a predicate.
So, what is the set of all real numbers that are not members of themselves? Well, 14 is not a member of 14. Pi is not a member of pi. The cosine of 47 is not a member of the cosine of 47. In short, the set of all real numbers that are not members of themselves is ... drum roll ... the real numbers.
That's exactly how specification avoids Russell's paradox.
You have claimed that Russell's paradox invalidates the powerset axiom but I still don't follow your logic. In fact if the powerset axiom were false, I would have heard about it.
You will note that the formal expression of the powerset axiom is in fact first order. But perhaps there are some subtleties that you can elucidate in this regard.
https://en.wikipedia.org/wiki/Axiom_of_power_set
ps -- The Wiki article says:
Perhaps you are making a constructivist argument. It's true that the constructive powerset of the natural numbers is countable and Cantor's theorem doesn't apply. If you are making that type of argument it would be helpful to say so up front so as to not confuse the issue. If you are a constructivist and therefore don't believe in the full powerset as given by ZF, your remarks would make more sense.
pps -- Regarding my claim that the powerset axiom is first order, I refer you to the SEP article on set theory.
https://plato.stanford.edu/entries/set-theory/#AxiSetThe
The article then goes on to list the standard axioms, one of which of course is powerset.
ZF Set theory is another way to limit the use of "naive set theory", in my opinion much more complicated than Type theory.
The point is that both of them are "axiomatic" theories: ZF set theory speaks about something called a "set", giving a complicated set of rules on how to operate with them. But with the same set of rules the word "set" can be interpreted as a different finitely defined structure (for example as recursively branching trees). Every non-contradictory axiomatic theory based on first order logic has a finite (non-standard) model (https://en.wikipedia.org/wiki/Compactness_theorem)
What I wanted to say is that Russell's paradox invalidates the use of "naive set theory", that is, the kind of set theory used in Principia Mathematica
Of course. Perfectly correct. Perhaps you're thinking of the kind of set theory used by Frege, not by Russell. But that's a historical point and I'm not familiar enough with the specifics of Russell.
Regarding type theory, sure. No problem there. Type theory's even making a modern comeback. But I don't think I wrote a word in opposition to your remarks on type theory. You're defending a point I didn't even disagree with.
Quoting Mephist
Oh my, no. Not at all. You should reread the Wiki page you linked: it says nothing in support of the false claim you made. There is no nonstandard finite model of ZF. That's not what the compactness theorem says.
Yes, exactly. not in "Principia Mathematica" ( my mistake ).
OK, I am glad to hear that you agree :smile:
I'll add something else that will probably cause a lot of dissent.. :smile:
In my opinion the existence of infinite sets (such as the uncountably infinite number of points in a line) is not "demonstrable" by logic or mathematics, but is more related to physics. So I think that axioms like the axiom of choice or the continuum hypothesis, which are logically independent of the rest of the "purely combinatorial" axioms, should be treated much like Euclid's parallel postulate in Euclidean geometry: they are not decidable on the basis of logic alone.
Sorry, I wanted to write "finitary", in the sense of "recursively enumerable" (of course not finite, if you can build natural numbers with sets)
However, this idea is not mine: (https://www.youtube.com/watch?v=UvDeVqzcw4k) see at about min. 8:23
This is a nonsensical definition: for instance, it claims the even numbers are the same size as the natural numbers (as there is a one-to-one correspondence between the two). But the even numbers are a proper subset of the natural numbers. If either had a size, the size of the natural numbers must be greater than the size of the even numbers.
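For reference, the correspondence being criticised is n ↦ 2n, which pairs each natural with exactly one even number; a small sketch of the pairing on a finite prefix:

```python
# The pairing n <-> 2n behind "the evens are the same size as the naturals".
pairs = [(n, 2 * n) for n in range(10)]
print(pairs[:4])  # [(0, 0), (1, 2), (2, 4), (3, 6)]

# On any finite prefix the map is one-to-one: no even number is hit twice.
images = [2 * n for n in range(10)]
print(len(images) == len(set(images)))  # True
```

The dispute above is precisely over whether this pairing property deserves to be called "same size".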
Quoting Mephist
If you use a reasonable definition of infinity - ‘a number bigger than any other number’ - then it is clear that there can only be one such number: if there were a second infinity, both would have to be larger than the other - a contradiction - so there can be only one infinity.
Quoting Mephist
The set of all sets does not exist! Standard ‘Proof’:
1. Let S be the set of all sets; then |S| < |2^S|
2. But 2^S is a subset of S, because every set in 2^S is in S.
3. Therefore |S|>=|2^S|
4. A contradiction, therefore the set of all sets does not exist.
What is wrong with this ‘proof’?
- The problem is that the cardinality of the set of all sets does not exist (infinite sets do not have a cardinality; infinity is unmeasurable)
- This proof does not prove that the set of all sets does not exist. This ‘proof’ is a sham
- Once it is acknowledged that the cardinality of an infinite set does not exist the whole of infinite set theory collapses like a pack of cards.
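Whatever one makes of the infinite case, the inequality used in step 1 is easy to check by brute force for finite sets; a sketch using the standard itertools powerset recipe:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as tuples (the standard itertools recipe)."""
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

S = {1, 2, 3}
P = powerset(S)
print(len(S), len(P))         # 3 8
print(len(P) == 2 ** len(S))  # True: |2^S| = 2**|S| strictly exceeds |S|
```

The contested question in the thread is whether this strict inequality survives the passage to infinite sets.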
Quoting Mephist
It is the assumption that infinite sets are measurable that invalidates naive set theory. ZF set theory is a patchwork of hacks that tries to cover all the holes and fails - the solution is to acknowledge that infinite sets do not have a cardinality / size.
How do you justify transfinite arithmetic? The rules of transfinite arithmetic assert that:
∞ + 1 = ∞
This assertion says in English:
’There exists something that, when changed, does not change’
That’s a straight contradiction.
If you read any of the above, you would have gathered that I maintain infinity does not have a size so it is impossible to measure the size of infinite sets.
All we can do is induction - for any reasonably sized interval, there are more natural numbers than even numbers.
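That interval-by-interval comparison is essentially the idea of natural density; a sketch of the count on [1, N] (function name mine):

```python
# On the interval [1, N] there are N naturals but only N // 2 evens,
# so within every finite window the evens make up about half the naturals.
def counts(N):
    naturals = N
    evens = N // 2  # evens in [1, N]: 2, 4, ..., 2 * (N // 2)
    return naturals, evens

for N in (10, 1000, 10**6):
    n, e = counts(N)
    print(N, n, e, e / n)  # the ratio stays at (or near) 0.5
```

Density gives the evens "half the size" of the naturals on every finite window, in contrast with the bijection-based answer.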
[b]Infinity is not a number:
if infinity were a number, it would be a number X greater than all other numbers. But X+1 > X.[/b]
This isn't news to mathematicians. When the concept of infinity was invented, there was a (perceived) need for it to be integrated into mathematics. (The alternative was to leave infinity standing alone and lonely, and this (apparently) was unacceptable.) The mess you observe is the result of that 'integration'.
Basic arithmetic does not work at all with the transfinites:
∞+1=∞ implies there exists something that when changed, does not change
∞/2=∞ falls foul of the axiom 'the whole is greater than the part'
(both cases flout logic)
That's a realistic attitude, but my personal experience is it is not shared by many mathematicians. They tend to get very defensive whenever infinity is questioned.
1. Creating anything infinitely large is impossible; we would never finish
2. Creating anything infinitely small is impossible; no matter how small it is made, it could still be smaller
3. Space cannot be infinite because it is expanding
4. Time cannot be infinite because an infinite temporal regress is impossible
5. Only in our minds can things continue ‘forever’; in reality this would be akin to magic
If we can establish that actual infinity is not a number and is not part of our universe, there will be an opportunity to return to set theory and clean up the mess.
- I think potential infinity (calculus) is obviously very useful in maths and science
- I think actual infinity has no useful applications
And yet, merely as a concept, it enhances our thinking, or at least it can enhance our thinking.
The problems start once you try to do anything logical with it. It's not a logical concept, so it leads to paradoxes - particularly if you insist (like set theory does) that infinity is measurable and has a size.
It is also a problem that the inclusion of actual infinity in set theory legitimises it: cosmologists and other scientists then come along and build actually infinite models because they think actual infinity has a sound basis in logic when it does not.
I would not use bijection to arrive at a conclusion about a set's size; bijection claims that there are the same number of rationals as naturals, so it is clearly wrong (the naturals are a proper subset of the rationals, so bijection is giving wrong results).
I would recognise that infinity has no size so is not measurable.
I would recognise that what Cantor was trying to do was to assess the size of infinity (which is impossible). Instead I'd look to compare the density of rationals and reals on the real number line. Bijection is one possible way of measuring density but it's frankly pretty useless: we have aleph-naught and aleph-one but there is nothing else!
Well, the problem is to give a definition of "size" for sets that you cannot count. And we want this definition to have some "reasonable" properties:
1. it should be the number of elements if the set is finite
2. it should be applicable to both countable and uncountable sets
3. any uncountable set should be bigger than any countable one
4. (as you required) if A is a proper subset of B, then A should be smaller than B
- To satisfy 3 you have to add an extra element (that we call "infinite") to the integers as the "size" of uncountable sets, because all the integers are already "taken", and by definition we require this element to be greater than every natural number.
- To satisfy 4 you have to add a lot more elements of the "infinite" kind, all of which have to be different from each other.
I don't know if it's possible to define a "size" that respects these four properties, but even if it is, you cannot do it with only one "infinite" size.
If instead you drop the property 4, you can require (by definition) that two sets have the same size if there exists a bijection between their elements. This is uglier, but not contradictory: it's only an equivalence relation between the sizes of the sets. You could even require that all the sizes between 10 and 20 are the same, and all other sizes behave normally, and that wouldn't be contradictory either.
The point is that this is only a definition: the point is not if it is "true" or not, but if it is not contradictory and if it can be used to build "interesting" theorems.
True. But you can't use such a number system (integers plus "infinite") to define a "size" of sets that respects properties 1 to 4.
I don't see the need to assign a size to something that is unmeasurable - it leads to nothing useful, just paradoxes.
I think it would have been better if it had been recognised by Cantor that there are two different sorts of set - finite and infinite. An infinite set is not a-kind-of finite set and vice-versa; as can be seen by the two object types having very different properties:
- A finite set has a completely defined list of members. An infinite set does not have this property
- An infinite set does not have a cardinality property: cardinality or size implies the ability to measure something. Infinity is by definition unmeasurable so infinite sets have no cardinality/size property
Instead Cantor just made up fictitious numbers (the transfinites) for the non-existent cardinality property of infinite sets. I'm an ex-computer programmer, and what Cantor did is regarded in software as a cardinal sin (pardon the pun).
A finite set can be formed in reality. Infinite sets are not part of reality and are just a conceptual thinking aid. I think maths should not try to treat the two the same.
The conclusion could even be that "the measure of the set of all sets does not exist"; this is an assumption too.
Well, how come nearly everything gets classified as aleph-naught:
the set of all square numbers, the set of all cubic numbers, the set of all fourth powers, ...
the set of all perfect powers, the set of all prime powers,
the set of all even numbers, the set of all odd numbers,
the set of all prime numbers, the set of all composite numbers,
the set of all integers,
the set of all rational numbers,
the set of all constructible numbers (in the geometric sense),
the set of all algebraic numbers,
the set of all computable numbers,
the set of all definable numbers,
the set of all binary strings of finite length, and
the set of all finite subsets of any given countably infinite set.
Then there is aleph-one the continuum. No-one has ever found any other aleph number (that was not generated from a previous aleph).
What use is a measurement when it is so crude?
OK, I didn't understand at first that you wanted to use it as a "false proof". I agree, this proof is surely not acceptable, for a number of reasons. First of all, we know from Russell that we should be very careful when proving something about the set of all sets, because "naive set theory" is contradictory. So we should use a formalization of set theory such as Zermelo-Fraenkel set theory, and complete formal proofs in ZF set theory are not so simple..
Well.. I don't like it either, but nobody has shown that it is inconsistent yet, and it has been in use for a very long time. So I would guess that it's not inconsistent!
Sorry, I don't know transfinite arithmetic.. :sad: But if you have some good links to documents that explain what it is, I would be interested!
Maybe I'm using "inconsistent" in the wrong way; set theory is not inconsistent within itself, but to me, Galileo's paradox and the other results bijection gives are inconsistent with common sense. This inconsistency with logic is especially noticeable in transfinite arithmetic.
Quoting Mephist
This document is reasonable (see page 23):
http://www.thatmarcusfamily.org/philosophy/Course_Websites/Readings/Yarnelle%20-%20Transfinite%20Math.pdf
But isn't an infinite set bigger than any finite set? Doesn't that imply size, even if obviously you cannot measure it like a finite set?
And what is your problem with the reductio ad absurdum proof of there not being a bijection between the natural numbers and the reals?
Quoting Devans99
How do you come to this conclusion? Give a proof that this isn't so, because the proof that it is so is for me quite understandable (if I remember it correctly): you can well-order the rationals, hence there is a bijection between the rationals and the natural numbers.
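That enumeration of the rationals can be made concrete by walking the diagonals p + q = 2, 3, 4, ... and skipping repeats; a sketch (the function name is mine):

```python
from math import gcd

def first_rationals(k):
    """First k positive rationals in the diagonal (zig-zag) order, duplicates skipped."""
    out = []
    s = 2  # s = numerator + denominator, walked diagonal by diagonal
    while len(out) < k:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:  # skip non-lowest-terms repeats like 2/4
                out.append((p, q))
                if len(out) == k:
                    break
        s += 1
    return out

print(first_rationals(6))  # [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4)]
```

Every positive rational appears exactly once at some finite position, which is the bijection with the naturals in question.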
Quoting tim wood
Yeah, maybe, but you cannot just say that set theory is wrong. It would be like me simply declaring quantum physics wrong. You would actually need to give a proof of it in mathematics.
An infinite set is unmeasurably bigger than a finite set. An infinite set therefore has no size.
Or: size is an integer property. Infinity is not an integer; if it were, it would be an integer X greater than all other integers. But X+1 > X. So infinity is not any sort of size or number.
Quoting ssu
I'll define infinity as ‘a number bigger than any other number’; then it is clear that there can be only one such number - if there were a second infinity then both would have to be larger than the other - a contradiction - so there can be only one infinity.
That disproves the assertion of multiple infinities in set theory. From that we can deduce that:
∞+1=∞
implies
1=0
I.e. infinity is not a number... disproving the transfinites from set theory.
Ok, let's think about what you said here.
So infinite sets are unmeasurably bigger. Fine, they are bigger, we agree with this.
How then couldn't you simply say that infinite sets have an unmeasurably bigger size? Meaning that they do have a size, but the size is obviously unmeasurable, because (accurate) measuring is something that you can do in the finite realm.
So, could it be then there would be Absolute Infinity?
If the size is unmeasurable then it is not a size. Size has to be an integer.
Quoting ssu
In our minds yes but it can't exist as a consistent mathematical object:
∞+1=∞
implies
1=0
And it can't exist in the real world:
- Creating anything infinitely large is impossible; not enough time / we would never finish
- Creating anything infinitely small is impossible; no matter how small it is made, it could still be smaller
- Spacetime is a creation, so nothing in it is infinite
Aha!
This is the definition that you base your thinking on. An integer shows size, yes, but size doesn't have to be an integer! Just to take one definition:
Let's remember (again) the heap of rock pebbles. A heap is more than one, but it's not exact. Yet it can perfectly well perform the task of defining size. Likewise a < b < c naturally defines size, but doesn't give 'exact sizes' the way the bijection a + 1 = b does. Now a < b < c isn't a bijection, yet it obviously does define size!
(And we already had the discussion about how impossible it is to count the natural numbers from 1 to googolplex. Of course, the concept of size is often applied to ideas that have no physical reality, just as in mathematics.)
A heap is an estimated size - looking at the heap is an act of rough measurement. That is possible only with finite objects; you cannot estimate the size of infinity, so I maintain infinity is unmeasurable and hence has no size.
I just have to disagree with this.
You don't have to be able to make a bijection to have size; a < b does give a size comparison, and size doesn't somehow evaporate without a bijection. The ability to measure something exactly simply isn't a prerequisite for size to exist. You are creating your own math, so good luck with that.
[i]"Size is the magnitude or dimensions of a thing. Size can be measured as length, width, height, diameter, perimeter, area, volume, or mass.
In mathematical terms, "size is a concept abstracted from the process of measuring by comparing a longer to a shorter". Size is determined by the process of comparing or measuring objects, which results in the determination of the magnitude of a quantity, such as length or mass, relative to a unit of measurement. Such a magnitude is usually expressed as a numerical value of units on a previously established spatial scale, such as meters or inches."[/i]
https://en.wikipedia.org/wiki/Size
So you specifically compare two objects, one of which is a ruler, the other being the object to measure, and derive a size. That is not possible with infinity (no ruler is long enough).
Also note that size determination results in a 'magnitude' - infinity is not a magnitude.
I read the book that you suggested to me (thanks for the link!)
I think that the "trick" that avoids inconsistency is that the inverse of infinite is not unique.
From wikipedia (https://en.wikipedia.org/wiki/Cardinal_number):
Assuming the axiom of choice and given an infinite cardinal σ and a cardinal μ, there exists a cardinal κ such that μ + κ = σ if and only if μ ≤ σ. It will be unique (and equal to σ) if and only if μ < σ.
So, choosing μ = σ = ∞, there exists a cardinal κ such that ∞ + κ = ∞
κ will be unique if and only if μ < σ. But ∞ < ∞ is false, so κ is not unique.
So, it's impossible to deduce that ∞ - ∞ = 0, and it's not allowed to use the expression ∞ - ∞ as if it were a definite cardinal number.
To derive 1 = 0 from ∞ + 1 = ∞ you would have to subtract ∞ from both sides of the equation, but
"1 + ∞ = ∞" does not imply "(1 + ∞) - ∞ = ∞ - ∞", because ∞ does not have a unique inverse.
So, no contradiction! :-)
By the way, "∞ + 1 = ∞" does not mean ’There exists something that when changed, does not change’, but 'there exists something that doesn't change when you add 1 to it'
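As an aside, IEEE-754 floating point (an analogy, not cardinal arithmetic) blocks the very same cancellation step: inf absorbs finite addends, and inf - inf is deliberately NaN rather than 0.

```python
import math

inf = float('inf')
print(inf + 1 == inf)         # True: adding 1 changes nothing
print(math.isnan(inf - inf))  # True: inf - inf is NaN, not 0

# So "subtracting inf from both sides" of inf + 1 == inf is blocked:
print((inf + 1) - inf == inf - inf)  # False: NaN is unequal even to itself
```

Here too, inf + 1 == inf holds without 1 == 0 ever following, because subtraction of inf is not defined to undo addition.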
Oh yes ok, thanks for clarifying.
Quoting Mephist
Quoting Mephist
Oh HOTT. Very interesting. I don't know much about it beyond the basics. I only watched the vid starting a little before the 8:23 mark and didn't find what you were quoting. I saw where he was showing that every mathematical object is a tree once you expand its set-theoretic nature into the sets that contain sets and so forth. Very nice little insight. Maybe I'll watch more of this. Very tragic about Voevodsky's premature demise. If he said there are finite models I'm sure there are!
I see, thanks. If you believe there is only one infinity (like I do), then ∞ - ∞ = 0 is fair play, leading to 1 = 0.
First of all, I used the term "finitary model" meaning "a model that is built from a finitary theory", but probably it's a misused term. So, let's speak only about "finitary first-order theories", with this definition: "A first-order theory is called finitary when its expressions contain only finite disjunctions or conjunctions".
Of course I could be wrong, and I could have misinterpreted what Voevodsky says in the video, but the fact that first-order set theories have a model whose elements can be put in a one-to-one relation with the natural numbers is a theorem of model theory. It follows basically from the fact that you can always take as a model the strings that correspond to well-formed terms of the language: in other words, the theory can be interpreted as speaking about "strings of characters".
As usual, a citation from the good old Wikipedia can come to the rescue :-)
"Set theory (which is expressed in a countable language), if it is consistent, has a countable model; this is known as Skolem's paradox, since there are sentences in set theory which postulate the existence of uncountable sets and yet these sentences are true in our countable model."
(https://en.wikipedia.org/wiki/Model_theory)
So, you could even say that "every mathematical object is a string of characters", or (as Gödel did in his famous incompleteness theorem) that "every mathematical object is a natural number" (because he proved that every theorem can be translated into another theorem speaking about natural numbers).
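A toy version of that translation: pack a string into a single natural number via prime-power exponents, and recover it by unique factorisation (this captures only the flavour of Gödel numbering, not his exact scheme; all function names are mine):

```python
def primes(k):
    """First k primes by trial division -- fine for a toy encoding."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def godel_number(s):
    """Encode a string as 2^c1 * 3^c2 * 5^c3 * ... with ci the character codes."""
    n = 1
    for p, ch in zip(primes(len(s)), s):
        n *= p ** ord(ch)
    return n

def decode(n):
    """Invert the encoding by reading off prime exponents (unique factorisation)."""
    chars, p = [], 2
    while n > 1:
        e = 0
        while n % p == 0:
            n, e = n // p, e + 1
        if e:
            chars.append(chr(e))
        p += 1
    return ''.join(chars)

print(decode(godel_number("x=1")))  # x=1
```

Since the encoding is invertible, any statement about strings (and hence about formulas) becomes a statement about a single natural number.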
This of course doesn't mean that uncountable sets "do not exist", but only that you cannot use a finitary first-order logic theory to prove that they exist.
But, in the end, you find that every formal logic based on rules and axioms has the same problem (I know, I should explain why, but it would take too long to start discussing this here). For this reason, I believe that the existence of non-denumerable infinite sets should be treated as a problem to be decided by physics, and not by mathematics.
I know the obvious objection: how do you make a physical experiment to check if something (like for example space-time) is made of a countable or uncountable set of points? Well, I don't know :-)
But the fact that calculus becomes much simpler and more "beautiful" if it is interpreted using infinitesimals instead of limits (look for example at smooth infinitesimal analysis: https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis) is for me a good indication that, in some sense, infinitesimal "objects" exist in nature, because the laws of physics are written with differential equations.
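A computational cousin of those nilpotent infinitesimals is dual-number arithmetic, where a formal eps with eps**2 = 0 makes a derivative fall out of plain evaluation; a minimal sketch (the class is mine, not from any library):

```python
class Dual:
    """Numbers a + b*eps with eps**2 == 0 (a nilpotent infinitesimal)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b   # real part, infinitesimal part

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x  # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

y = f(Dual(2.0, 1.0))        # evaluate at 2 + eps
print(y.a, y.b)              # 12.0 14.0 -> f(2) = 12, f'(2) = 14
```

The derivative appears in the eps-coefficient with no limits taken anywhere, which is exactly the appeal of the infinitesimal picture.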
∞ - ∞ = 0 is not true, because ∞ - ∞ is not a term of the language.
Let me explain better: let's take the function "int", defined as "the integer part of a decimal number":
- for example, int( 5.65 ) = 5
I want to invert this function, and call the inverse "unint", such that unint( 5 ) = 5.65
I cannot do it, because int( 5.7 ) = 5, and int( 5.8 ) = 5, etc..
This is the same situation:
I define ∞ + 0 = ∞, ∞ + 1 = ∞, ∞ + 2 = ∞, etc.
So, every finite number is a neutral element for ∞, whereas 0 is THE neutral element for the natural numbers.
Now, if for transfinite numbers the neutral element for addition is not unique, you can't define subtraction as the inverse of addition, simply because addition is not invertible!
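The non-invertibility point in code (a sketch; "unint" is the hypothetical inverse discussed above):

```python
# int() maps many inputs to the same output ...
print(int(5.65), int(5.7), int(5.8))  # 5 5 5

# ... so building the inverse "unint" from observed pairs is ambiguous:
unint = {}
for x in (5.65, 5.7, 5.8):
    unint[int(x)] = x  # each assignment overwrites the previous candidate
print(unint)  # {5: 5.8} -- two of the three preimages have been lost
```

A many-to-one function simply has no well-defined inverse; the same goes for addition once every finite number acts as a neutral element.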
Quoting Devans99
Sorry to disappoint you, but (as I just wrote in the previous post), I believe that at least infinitesimal objects do exist in nature in some sense, but you cannot decide if they exist or not using mathematics.
From the point of view of mathematics, I don't think there is a reason why transfinite numbers should be not allowed. Yes, you can say "I don't like set theory because it's a complicated way to define something that seems intuitively simple as sets", but a hundred years ago this was the only theory that allowed to reason in an unambiguous way on real numbers (that, as the name suggests, appear to be "real") and to say such an obvious thing as the fact that the square root of two (i.e. the measure of the diagonal of a square whose side has measure one) is a number.
And without real numbers how do you build limits and calculus? And calculus has to be "real", since it works so well with the laws of nature.
So, my point is that you can use logic to speak about infinity even if you cannot decide from logic itself if infinity really exists in nature.
Because addition with transfinite 'numbers' does not make sense, nor does subtraction. This is indicative of the fact that the transfinite are not numbers.
Quoting Mephist
I do not believe infinitesimal objects exist in maths or nature:
1. We have no examples from nature of actual infinity
2. Creating anything infinitely large is impossible; not enough time / would never finish
3. Creating anything infinitely small is impossible; no matter how small it is made, it could still be smaller
4. Basic arithmetic says infinity is not a number: if it were a number, it would be a number X greater than all other numbers. But X+1>X
5. Numbers are magnitudes or sizes. Infinity is not measurable: for infinity in the large, we would never finish measuring it. For infinity in the small, a finely graduated enough ruler is not available. So infinity has no size; so it is not a number.
6. Numbers have a fixed value; that is their defining characteristic. Numbers are not variables. Infinity has no fixed value so is not a number.
But then people started to discover that you can use infinitesimals to derive lots of correct results about the laws of nature (law of gravitation, for example). So, there had to be some way to allow the calculations that gave correct results using infinitesimals!
The solution to the problem was to define real numbers as "limits" (Cauchy) or "cuts" (Dedekind). But, in both cases, they are made of actually infinite objects (infinite sequences or infinite sets). So, at least for what I know, if you want calculus to be part of mathematics you have to allow the existence of actually infinite objects!
Formal logic and set theory were mainly created as a way to reason about the counter-intuitive concepts of "infinite set of things". If you do not allow infinite sets, set theory becomes trivial, and fundamentally useless: you don't need an axiomatic set theory to reason about finite sets.
With formal logic, you don't need a "concrete model" of mathematical things: a "set" is whatever "thing" on which you can operate following a (carefully chosen) given set of rules. So you can reason about infinite things avoiding contradictions.
But in this way, you cannot say that "this thing should not be allowed because it does not make physical sense". Everything that is not contradictory should be allowed, since we are not speaking about physical objects any more.
For what I know, there is no other way to make sense of calculus or differential equations (and other useful concepts in mathematics) without using formal logic or without "concrete" infinitesimal objects.
I don't see why calculus cannot be defined purely in terms of potential infinity. A limit should approach but never reach actual infinity:
lim 1/n
n->0
It's not possible to make this expression equal actual infinity; n is either non-zero (leading to a finite number) or zero (leading to UNDEFINED). There is no place for actual infinity between the large numbers and UNDEFINED; actual infinity should be UNDEFINED IMO.
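The dichotomy described above (finite values for nonzero n, UNDEFINED at n = 0, with no place in between for "actual infinity") can be seen directly in Python, which treats division by exactly zero as an error rather than as infinity:

```python
# As n shrinks toward 0, 1/n grows without bound (potential infinity).
# The n values are exact powers of two, so the divisions are exact:
for n in [1.0, 0.5, 0.25, 2**-20]:
    print(1 / n)          # 1.0, 2.0, 4.0, 1048576.0

# But at n = 0 exactly, the expression is undefined, not "actually infinite":
try:
    1 / 0
except ZeroDivisionError:
    print("1/0 is undefined")
```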
I think that calculus worked just fine before Cauchy and Dedekind; it's just actual infinity has been used in the logical justification of calculus when it should not have been.
All irrational numbers are defined as infinite objects (usually infinite sums or products):
pi = lim ( 4 * sum for k=1..n ((-1)^(k+1) / (2*k - 1)) )
n->infinity
If infinite sums do not exist, pi is not a number, since there is no way to write it with a finite expression using only + and *
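The series above is the Leibniz series (the alternating sum converges to pi/4, hence the factor 4). A quick Python sketch of its partial sums (function name mine) makes the point concrete: no finite partial sum equals pi, the sums only approach it.

```python
# Partial sums of the Leibniz series: 4 * sum of (-1)^(k+1) / (2k - 1)
def leibniz_partial(n: int) -> float:
    return 4 * sum((-1) ** (k + 1) / (2 * k - 1) for k in range(1, n + 1))

print(leibniz_partial(10))      # still far from pi
print(leibniz_partial(10000))   # closer, but never exactly pi
```

The convergence is slow (the error after n terms is on the order of 1/n), which makes the series a poor way to compute pi but a good illustration of a limit.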
How do you define the function sin(x) if you can't use infinite expressions?
Integrals are defined as sums of infinite terms and most of them do not converge to rational numbers.
If a limit does not converge to a rational number you have to treat the limit itself (its expression) as a number, because there is no way to write it avoiding limits. Then, you have to say that real numbers are the usual rationals plus some "objects" that are defined as infinite sums of rationals.
That's similar to how transfinite numbers are defined: take natural numbers and add to them other "objects", such that the operations with preexisting natural numbers give the same result as before.
Calculus can actually be summarized with the Greek method of exhaustion. Look it up on Google or Bing. Calculus always was and always will be a close-approximation principle, not an exact science. People have written calculus-solving software using the Greek method of exhaustion, or at the very least have produced the pseudocode for it. I've come up with an algorithm for it.
See my profile or click on my name if you like.
If you use a 2-dimensional array and produce enough entries in that 2-dimensional array, the Greek method of exhaustion is just a few steps away. If anyone needs the algorithm I can provide it.
Click on my name or see my profile. No wrong answer.
lim sin x
n-> infinity
It already works; just read the above as 'n tends to but never reaches actual infinity'. That's a better definition.
For example, with π: it's irrational, so it can never be fully defined - it's impossible to know all the digits, so saying an expression tends to π rather than is equal to π is actually more accurate. We can never know π exactly.
IMO, whenever an infinite sum is evaluated, it is more correct for maths to use ~= (approx equals) rather than = (equals).
Since the results are the same and logic says that it's no more contradictory using infinities than avoiding using them, it seems to me more "convenient" to use them. Even if in the real universe they do not exist.
In the end, even real perfect circles do not exist (everything is made of atoms), and Euclidean space does not exist (space-time is not flat), but it's much simpler and more convenient for reasoning about real objects to "pretend" that they exist.
Right. A countable model. Not a finite one.
Quoting Mephist
I don't believe this is true. ZF is a first-order theory with a countably infinite language that easily proves uncountable sets exist. See Cantor's theorem, as simple a proof as one can imagine for a fact so profound.
Quoting Mephist
No I don't believe so. This is confusing syntax with semantics. A formal theory consists of finite-length strings of symbols. But models of those theories are not strings of symbols. They're sets, often quite large ones.
OK, probably this is not the conventional point of view in mathematics, but I'll try to explain my point of view:
In ZF Set theory you have the "axiom of infinity" that says ( in a simplified form ):
There exists a set A such that:
-- ∅ ∈ A,
-- for every x ∈ A, the set x ∪ {x} ∈ A.
This, from my point of view, is a complicated definition for a recursively defined structure that is equivalent to Peano's natural numbers:
You can build ∅ (equivalent to 0), ∅ ∪ {∅} (equivalent to S0), (∅ ∪ {∅}) ∪ {∅ ∪ {∅}} (equivalent to SS0), etc...
Then, first order logic postulates the existence of functions.
Basically, Cantor's theorem proves that, for every set A, the set of functions A -> Bool is always bigger than A (in the usual sense of no one-to-one correspondence). Then, since (A -> Bool) is itself a set, you can build an infinite chain of sets of the form (((A -> Bool) -> Bool) -> Bool) -> ...
So, from the infinite set you can build an infinite hierarchy of infinites.
The crucial point here, in deciding the cardinality of "A -> Bool", is what you take as a model for functions.
If functions are only recursive functions (what I can evaluate using a decidable set of rules applied to the symbols of the language), I can never build anything that has non-numerable cardinality.
In this case, it is true that for every set you can find a bigger set, but their cardinality will be numerable: in the same way that, for every integer, you can always find a bigger integer.
If instead you define functions in the usual way as "sets of couples", then there are 2^|A| possible functions of type A -> Bool, and there exists a non-numerable set of sets. But, in any way, inside your language you are able to describe only an enumerable set of them.
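Cantor's diagonal construction, which drives all of this, can be sketched on a finite set (the example mapping is mine): given ANY function f from A to subsets of A, the set D = {x in A : x not in f(x)} differs from every f(x), so f cannot be a surjection onto the powerset.

```python
# Cantor's diagonal set: D disagrees with f(x) at x for every x in A.
def diagonal(A, f):
    return {x for x in A if x not in f(x)}

A = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {2}}

D = diagonal(A, lambda x: f[x])
# 0 is in f(0) -> excluded; 1 not in f(1) -> included; 2 in f(2) -> excluded
assert D == {1}
# D is not in the image of f, so f misses at least one subset:
assert all(f[x] != D for x in A)
```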
(https://en.wikipedia.org/wiki/Skolem%27s_paradox):
"From an interpretation of the model into our conventional notions of these sets, this means that although u maps to an uncountable set, there are many elements in our intuitive notion of u that don't have a corresponding element in the model. The model, however, is consistent, because the absence of these elements cannot be observed through first-order logic"
Since both models for functions are consistent, what's the reason to take the standard interpretation as the "true" one?
Quoting fishfry
See Herbrand interpretation: https://en.wikipedia.org/wiki/Herbrand_interpretation.
The demonstration of Godel's incompleteness theorem uses a similar technique to map natural numbers onto the syntactic structures of the language, and properties of arithmetic operations onto syntactical rules of logic.
Pi is computable and therefore has a finite description. https://en.wikipedia.org/wiki/Computable_number
'Most' irrational numbers are not computable, since the computable numbers (being countable) have measure zero. There are indeed some rich philosophical issues here. Personally I'm an anti-foundationalist when it comes to math. I think we learn to use it just as we learn to use English. A few people concern themselves with the formalities of set theory. Most just use it as an economical language.
The danger of worrying too much about the metaphysics of infinity is in repeating the 'Wittgenstein' mistake (not his so much as those in the grip of his charisma)--the 'cure' is assimilated by the disease. The finitist who rails against infinity is charging at his own shadow, the infinitist. Both are minority figures with an obsession in common.
Second Ant: If you want to fool around with nonsense and “what if’s”…go do it. Just do it somewhere else. We intelligent ants are discussing REALITY. We are discussing why we can figure out what is logical and what is not...what can exist and what cannot.
Thanks for this interesting post.
If I am reading you correctly you are espousing a computational or constructivist point of view. Perfectly valid, and even trendy lately as computer science begins to influence math. Homotopy type theory, etc. I like to think of it as the revenge of Brouwer!
Quoting Mephist
Yes and no. The axiom of infinity does provide a model of PA inside ZF.
However there is a crucial distinction. PA says that if n is a number then S(n), the successor of n, is a number. That gives us (after the standard naming conventions) a handy collection 0, 1, 2, 3, 4, ... which we can use to do a fair amount of number theory.
What the axiom of infinity says is that {0, 1, 2, 3, ...} is a set, not just a collection. That's a much stronger statement. Without the axiom of infinity we still have PA and everything that we can do with it. But we don't have the powerset of the natural numbers nor do we have an easy way to get the theory of the real numbers off the ground.
The axiom of infinity is a strong statement that says that we not only have successors; but that the collection of all successors forms a set. In fact this gives us an easy way to visualize proper classes. In ZF minus infinity, the collection of all natural numbers is a proper class because it's too big to be a set.
I asked on math.stackexchange once what functions are in PA, since we can't be sure we have enough sets. I don't recall getting an answer that satisfied me. But if we naively regard functions as mappings, we can probably get away with it. I'm sure logicians have a good answer for this point.
Quoting Mephist
Let me be picky here (as if I'm ever any other way!) First, it's not the function that's bigger, it's the powerset. But Cantor doesn't prove that P(A) is bigger than A. He proves that there is no surjection from A to P(A); then we DEFINE "bigger" to mean there's no surjection. This is a common point of confusion among those who complain that it's absurd to call one infinite set bigger than another. They are quite right! Rather, "bigger" and "smaller" in this context refers to the existence or nonexistence of injections and surjections. I see that you basically said that by mentioning 1-1 correspondence, but I wanted to emphasize this point. There is no surjection from the positive integers to the reals. Whether the reals are "bigger" in any meaningful sense, I have no idea.
Quoting Mephist
This blew my mind when I first saw it. Still does. Cantor really put a zap to the world of math. This was quite a revolutionary discovery. Or fraudulent sophistry, depending on one's point of view.
Quoting Mephist
Well again, yes and no. We do NOT need to know what the cardinality of P(N) is. We only know that there is an injection from N -> P(N) and no surjection. So we say P(N) has a larger cardinality. In fact nobody has the foggiest idea what the cardinality of P(N) is. That's the Continuum Hypothesis.
Quoting Mephist
Perfectly well agreed. I hope you don't think I'd be shocked or would object. The computable powerset of N is countable of course, since there are only countably many Turing machines. And now we're into constructivist philosophy. And since we live in the age of computation, there's growing support for this point of view. Can't fight the tide.
Quoting Mephist
Yes. But then you can't get the theory of the real numbers off the ground and the intermediate value theorem is false. The constructivists counter: the IVT becomes true again if we only consider computable functions. And so the argument goes. I'm sure we both understand each other's point of view here. There's no right or wrong to the matter. Just history and philosophy.
Quoting Mephist
Yes agreed. We agree on everything. I just think the world of the full powerset of N is richer and more interesting than the computable powerset. That's an aesthetic judgment. And it's more useful. That's a pragmatic judgment.
Quoting Mephist
Right. The model thinks it's uncountable but from "outside" we can see it's countable.
As I understand it, Skolem himself thought that his paradox showed that our notion of set is murky and not entirely coherent. This is a point of view I can't disagree with. Nobody knows what a set is. Sets as conceived in set theory are much more strange than the "collections of objects" that we teach in high school.
Quoting Mephist
Aha! Well, I don't think Cantorian set theory is true any more than I think chess is true. Does that knight "really" move that way? It's a silly question. Chess is a formal game. And when pressed into my Cantorian corner, I put on my formalist hat and say that we can adopt the axiom of infinity or reject it. Accept the full powerset of N or only the computable powerset. Accept the law of the excluded middle or reject it. Accept the axiom of choice or reject it.
Now the question is: Why adopt the full powerset? The answer is pragmatic. In the 20th century, full Cantorian set theory has proven supremely useful. It lets us develop a logically rigorous theory of the real numbers. The physicists use the real numbers to build their theories. Throw out Cantorian math and you lose a lot. Nobody has yet succeeded in developing a computable or constructive theory of physics. People are working on it. Perhaps we'll need another few decades to get more insight into this question.
Quoting Mephist
I read that article and didn't fully understand it or the point you're making. But whether math refers to anything outside itself is another one of those philosophical questions. Chess doesn't. Why should formal math?
Quoting Mephist
Math is more than Gödel numbering. But now I'm making a Platonist argument when a couple of paragraphs ago I was making a formalist one. I tend to use whichever argument is handy for my point. But Gödel's incompleteness theorems are about syntax and not semantics. Gödel himself was a Platonist. I found that surprising when I first learned it. He believed there is mathematical truth "out there" that's beyond the limitations of formal systems.
Infinity is not a number. Infinity just means unboundedness. The idea of infinity is referred to using a noun, and I think this is why most people think infinity is a "thing".
Of course, this is not even remotely rigorous, but it is the best way to explain to ordinary people why infinity is not a number and why they thought it might be.
You are right. I think I should be more careful with this kind of statement. I wrote this without really checking the recursive structure generated by the axiom of infinity. I didn't have much time and I wanted to give a quick answer :-) ( mathematics is only my hobby, not my work... )
Well, Quoting fishfry
Well, if you expand the definition "there is no surjection from A to B", it becomes:
forall F: A->B, exists y:B, forall x:A, F(x) != y [ I prefer to use the syntax of Coq ( https://coq.inria.fr/ ) because I didn't take the time to learn writing symbols :-) ]
This translates to the English: "however you choose to match elements of A with elements of B, there will always be an element of B that is not matched by any element of A"
I think this is a quite intuitive definition of the fact that B has at least one element more than A.
Quoting fishfry
Sorry, I don't understand what you mean here: constructivist (intuitionist) logic is only a "more general" logic than classical logic, since it has fewer axioms. As a result, it is valid for a bigger set of models:
If you substitute "true" with "provable" and "false" with "contradictory", there are three possibilities:
-- A proposition is provable ( then it's true in all models )
-- A proposition is contradictory ( then it's false in all models ) ( equivalent to "it is provable that it's not provable": if you give me a proof, I can derive a contradiction from your proof )
-- A proposition is not provable, but it's not contradictory ( then, it's true only in some models ).
-- A is not derivable from not( not A ), because this means ("A is not provable" is not provable), and that is not true for all models (it's not true if functions are partial recursive functions)
For example, if you can interpret functions as partial recursive functions (Turing machines) and propositions as assertions on the behavior of Turing machines:
-- forall is a loop that stops returning false if it finds a false value, and its value as a proposition is false if it stops and true if it loops forever
-- exists is a loop that stops returning true if it finds a true value, and its value as a proposition is true if it stops and false if it loops forever
Since the halting problem is undecidable, there are theorems that are not provable and not decidable.
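The "exists as a search loop" interpretation above can be sketched in Python (names mine): the loop halts and returns a witness if one exists, and would run forever otherwise, which is exactly why such propositions are only semi-decidable.

```python
from itertools import count

# "exists n : P(n)" over the naturals, read as a search: halts with a
# witness when the proposition is "true"; never returns when it is not
# (so calling this with an unsatisfiable predicate would loop forever).
def find_witness(predicate):
    for n in count():
        if predicate(n):
            return n

# A witness exists here, so the search terminates:
assert find_witness(lambda n: n * n > 50) == 8
```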
The model of Turing machines (functions are partial recursive functions) is countable, but you can interpret the types and functions of type theory in uncountable models too ( including real numbers ):
The axiom of excluded middle is independent from the others, so if you want to speak only about models in which all functions are total (not necessarily recursive), such as standard analysis, you can assume not( not A ) => A as an axiom. In this way, you recover the full classical logic and all propositions can only be true or false (even if some of them - many fewer than before - can still be non provable).
Did I miss something? What is the objection to this argument?
Quoting fishfry
That's reassuring :-)
Quoting fishfry
Yes, that's exactly the point that I wanted to discuss when I started the other topic https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented
( look for example at my post number 38 )
In fact, I started writing on thephilosophyforum.com a couple of weeks ago because I was looking if somebody had an answer to this question: https://thephilosophyforum.com/discussion/5789/is-it-possible-to-define-a-measure-how-interesting-is-a-theorem
but it seems that nobody knows, or considers the question important :-(
Quoting fishfry
Voevodsky showed that Homotopy Type Theory can be used as a foundation for mathematics in a very natural way. What do you mean by "constructive theory of physics"?
Quoting fishfry
I wrote "you could even say that "every mathematical object is a string of characters", and you replied that "This is confusing syntax with semantics".
Herbrand interpretation says that you can build a model of your theory (semantics) made from the syntactic structures of the language itself (strings of characters).
Quoting fishfry
I agree. And I think Godel himself would agree: he built this model to confute Hilbert's idea that there is no truth except formal logic.
Quoting fishfry
I think Godel's idea was something similar to what I tried to express in (https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented): there exists a meaning of truth based on a physical world that is more "fundamental" than the physical world that we observe.
That's an interesting point. Here's why IVT is false in constructive math. IVT, you will recall, says that if a function from the reals to the reals is continuous, and if it takes a negative value at one point x and a positive value at another point y, then it must be zero at some point between x and y.
That is if we have f(x) < 0 and f(y) > 0 then there exists p with x < p < y and f(p) = 0. This is intuitively correct about a continuous function f.
If you take the standard real line and omit all the noncomputable points, you get the computable real line. The computable real line is full of holes where the noncomputable reals used to be. You can have a continuous function whose graph passes through the x axis at one of the holes. To a constructive mathematician that zero does not exist. You can drive a continuous function through a hole in the computable real line.
For example for a noncomputable p, there's a straight line that passes through the x-axis at p. But the point (p,0) does not exist for constructivists. We have a straight line that crosses the x-axis without ever taking on the value zero.
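A noncomputable p can't be exhibited in code, but the same "hole" phenomenon appears one level down, with the rationals playing the role of the computable reals (this analogy, and the names in it, are mine): f(x) = x^2 - 2 changes sign on [1, 2], yet no rational is ever a root, because the crossing happens at the irrational sqrt(2).

```python
from fractions import Fraction

# f changes sign on [1, 2] over the rationals, but has no rational zero:
def f(x: Fraction) -> Fraction:
    return x * x - 2

a, b = Fraction(1), Fraction(2)
assert f(a) < 0 < f(b)

# Bisection narrows the sign-change interval forever without ever
# landing on a root: the graph passes "through a hole" at sqrt(2).
for _ in range(50):
    m = (a + b) / 2
    assert f(m) != 0        # no rational midpoint is ever a zero of f
    if f(m) < 0:
        a = m
    else:
        b = m
```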
I personally find such a model of the continuum unsatisfactory in the extreme. It violates every intuition about what a continuum should be. I think that's both a mathematical and a philosophical problem for the constructivists.
You mentioned a lot of other interesting things that I'll try to get to later. One was about LEM and HOTT. I was not under the impression that HOTT is a constructivist theory. I'm sure Voevodsky believed in uncountable sets. They are a simple consequence of the first-order axioms of set theory. A machine could verify Cantor's theorem. So I am not sure that HOTT and constructive math are the same. On the contrary I'd assumed they're different. But I'm mostly ignorant regarding HOTT.
One thing I do know is that the axiom of choice implies LEM. To reject the axiom of choice involves throwing out quite a lot of modern math.
https://en.wikipedia.org/wiki/Diaconescu%27s_theorem
I'm not sure what that means. You can't iterate through a set unless it's well-ordered. And there are sets that are not well-ordered, unless you accept the axiom of choice, which implies LEM. How would you iterate through the real numbers, for example? Here's how I'd do it. First, well-order the reals. Second, use transfinite recursion. Those are two powerful principles of math not available to constructivists. I don't know how constructivists feel about transfinite recursion, but it applies to very large sets that constructivists can't possibly believe in.
But the "for all" operator can be applied to the reals without the need for such powerful machinery as the axiom of choice and transfinite recursion. You just say "for all reals" and you're good. That's much more powerful than iteration IMO.
I do not believe that the universal quantifier is the same thing as iteration, for this reason. There are sets that you can apply "for all" to but that you can't iterate through in any obvious manner.
Quoting Mephist
I can't agree that intuition is useful here. Intuitively, the even numbers adjoined with a single odd number are "larger" than the even numbers; and in fact this is confirmed by the proper subset relationship. But these two sets have the same cardinality. This is specifically a point of confusion for beginners. I reject naive intuition here and insist that "bigger" means injection but no surjection, end of story and no intuition allowed! Intuition is exactly what gets students into trouble when they first learn the math of infinity.
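The cardinality claim in that example is witnessed by an explicit bijection (the function name here is mine): map the evens {0, 2, 4, ...} onto the evens with the odd number 1 adjoined, by shifting every element down one slot.

```python
# Bijection from {0, 2, 4, 6, ...} onto {1, 0, 2, 4, ...}:
# 0 goes to the adjoined odd number, everything else shifts down a slot.
def g(e: int) -> int:
    if e == 0:
        return 1
    return e - 2          # 2 -> 0, 4 -> 2, 6 -> 4, ...

targets = [g(e) for e in range(0, 12, 2)]
assert targets == [1, 0, 2, 4, 6, 8]   # no repeats, every target hit in turn
```

So the "larger looking" set is matched one-to-one with the smaller looking one, which is exactly why naive intuition fails here.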
Moreover if you reject the axiom of choice (necessary if you reject LEM) there are "amorphous" sets that have no sensible cardinality at all. Such sets defy anyone's intuition.
https://en.wikipedia.org/wiki/Amorphous_set
I should add that constructivists have considered these issues regarding the axiom of choice.
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics)#Axiom_of_choice
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics)
"In the philosophy of mathematics, constructivism asserts that it is necessary to find (or "construct") a mathematical object to prove that it exists. In classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. This proof by contradiction is not constructively valid. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.
There are many forms of constructivism.[1] These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as CZF and the study of topos theory."
Well, the constructivist theory that I know quite well is the "Calculus of Inductive Constructions", or CIC ("https://en.wikipedia.org/wiki/Calculus_of_constructions"), implemented in the Coq proof assistant (https://coq.inria.fr/).
This is a particular version of "Martin-Lof Type Theory" (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)
Here's a summary of the main properties of Coq's logic: https://github.com/coq/coq/wiki/The-Logic-of-Coq
Basically, Coq is a programming language (let's call this the "internal language" of the logic) based on dependently typed lambda calculus (https://en.wikipedia.org/wiki/Dependent_type) that only allows building total functions: using this language it's impossible to define a function that does not terminate (so it's less powerful than the full untyped lambda calculus, or a Turing machine).
However, using the internal language you can operate on inductive (and co-inductive) data structures and on functions (the models you can speak about and operate on using the internal language) that ARE NOT REQUIRED TO BE COMPUTABLE.
For example, you can say "let R be a field and let F be a function such that forall x:R, F(x)^2 = x; let x be F(2)". You can represent real numbers, for example, as functions from natural numbers to the finite set {0...9 and comma} to make things simpler (the infinite decimal expansion of the number), and then you can prove that the algorithm that produces the digits of the square root of two IS a real number that solves your equation. Of course "x" is not a recursive function (of course, it does not terminate if you execute it on all natural numbers!), but to build it you only need a finitely defined algorithm.
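The "number as digit-producing algorithm" idea is easy to sketch outside Coq too. Here is a Python version (names mine): the n-th decimal digit of sqrt(2), computed with exact integer arithmetic, so the "real number" is just a finitely defined total function from positions to digits.

```python
from math import isqrt

# n-th decimal digit of sqrt(2): floor(sqrt(2) * 10^n) mod 10, computed
# exactly via the integer square root of 2 * 10^(2n). No infinite object
# is ever completed; the number IS the algorithm.
def sqrt2_digit(n: int) -> int:
    return isqrt(2 * 10 ** (2 * n)) % 10

# sqrt(2) = 1.41421..., so the first digits are:
assert [sqrt2_digit(n) for n in range(6)] == [1, 4, 1, 4, 2, 1]
```

(math.isqrt requires Python 3.8+.)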
In a similar way, you can say "Axiom excluded_middle: forall A:Prop, A \/ ~A." meaning: excluded_middle is a function that takes as input the proof of a proposition A and produces as output the proof of the proposition A \/ ~A. This function is not implementable in the internal language of Coq, but you can treat it as a pre-defined "external" function that you can call, using the result without knowing how it works (something like a function defined in an external library of a programming language).
The logic doesn't make sure that the function exists (and it doesn't even guarantee that it's consistent: the proof of consistency is related to the meta-theory of the internal language), but only says that if you have this function, you can use it. And the resulting logic is equivalent to classical higher order logic (not set theory).
The correspondence between set theory and the various versions of dependent type theory is rather complex:
See for example (https://mathoverflow.net/questions/69229/proof-strength-of-calculus-of-inductive-constructions):
"IIRC, the calculus of inductive constructions is equi-interpretable with ZFC plus countably many "inaccessible cardinals" -- see Benjamin Werner's "Sets in Types, Types in Sets". (This is because of the presence of a universe hierarchy in the CIC.)"
And this is a very good reference for intuitionistic type theory: https://plato.stanford.edu/entries/type-theory-intuitionistic/
========= end of the "definitions" :-) =========
Quoting fishfry
I read that you can't define Cauchy or Dedekind real numbers in intuitionistic logic (without additional axioms), but you can use other definitions of "continuity":
http://www.alternatievewiskunde.nl/QED/brouwer.htm
"Any function which is defined everywhere on an interval of real numbers is also continuous on the same interval. In other words: for real valued functions, being defined is very much the same as being continuous."
and MY FAVOURITE ONE:
https://plato.stanford.edu/entries/continuity/#9 "Smooth Infinitesimal Analysis"
What is your objection to this axiomatization of real numbers?
Quoting fishfry
Intuitionistic type theory implies a somewhat "weaker" version of the axiom of choice (the "good" part of it :-) )
[ ..going to continue next time! ]
HOTT is Martin-Lof Type Theory with higher inductive types, but with the addition of Voevodsky's "univalence axiom". So, it's not so simple:
first of all: univalence does not imply excluded middle nor the axiom of choice. So, in this sense, it could be called intuitionistic. But if by intuitionistic you mean that all the internal functions of the language are computable, then the presence of an axiom makes it become not computable (if it were computable it wouldn't be needed as an axiom).
However, both the axiom of choice and excluded middle are consistent with Martin-Lof Type Theory. So there is no problem of including them in the theory (if you don't require that the theory should be intuitionistic).
But the main distinctive point with HOTT is the fact that it is a Type Theory, and not first order logic with axioms from set theory. This makes it much easier to use in practice for real mathematics ( I am pretty sure that nobody uses formal set theory for real mathematics: take a look at the post on this site on how to prove 1 + 1 = 2 :-) ). But on the other side, there were a couple of things that were not completely "compatible" with standard mathematics. And both of them are magically "fixed" with the univalence axiom: one of them (the simpler one) is that functions in type theory are not directly interpretable in the usual way of input-output correspondence, but rather as lambda functions. The difference is that two functions can be different because they are implemented in a different way (different algorithms) but are equal as input-output (this is called "function extensionality"). The other, most important point, is that there are two kinds of "equality" in type theory, and neither of them matches exactly the equality of set theory. I don't think I can explain this in a few words, and this post is just becoming too long... :-)
Quoting fishfry Yes of course, but this in my opinion is not really related to intuitionism.
Quoting fishfry
A machine could verify any theory based on a formal system: this, I believe, can be taken as a definition of "formal system".
Quoting fishfry
I wanted to take a very simple countable model for an intuitionistic theory. You can use this model to "understand" the reason why the rules of logic are those particular ones. This is useful because if a theory has a model it is surely not contradictory. And if you are sure that the rules are not contradictory, you can reinterpret them on another model that you don't understand (for example one that has an infinite set of elements) and assume that they have to make sense for this model too. You can't iterate through real numbers with a procedure made of finite steps. But you can give it some physical meaning (covering a 1-dimensional segment with a 2-dimensional figure, for example) that works following the same rules.
OK, you are right. Let's stick to a formal logic that uses only the forall-exists language. But we have to keep in mind that the meaning of "forall" and "exists" is only determined by the rules of the language. And that we can trust these rules because they are verified in a "physically verifiable" (intuitionistic?) model (such as the "iterate and check on countable sets" model that I described before), but they ARE NOT REQUIRED TO HAVE THAT PHYSICAL MEANING, in the same way as the word "larger" doesn't have to have the intuitive meaning related to our physical world.
Quoting fishfry
I didn't know what amorphous sets are. I searched on Wikipedia:
"After Cohen's initial work on forcing in 1963, proofs of the consistency of amorphous sets with Zermelo–Fraenkel were obtained.[3]".
So, I looked for the proof of consistency with Zermelo–Fraenkel to see for which model amorphous sets make sense.. but I don't have access to [3] :-(
However, I would guess that the crucial point here is that you define "infinite" as "existence of a one-to-one correspondence with a proper subset", but the definition of subset is given by a proposition of the language, and everything depends on what propositions you can express in the language.
The axiom of choice expressed in set theory expands the set of propositions that you can express, and this has the effect of limiting the sets that you can define.
But the set-theoretical axiom of choice is not verifiable in any "concrete" (intuitionistic) model, in the same way as amorphous sets aren't.
Wow you wrote a lot and I'm a little overwhelmed. I'll try to respond where I have some value to add. FWIW I have heard about HOTT. I do in fact know what a homotopy is from topology. I don't know the details of how homotopy is used to do logic or foundations. I've heard of the univalence axiom and my understanding is that it says, "Equivalence is equality" or some such. I'm sure I'm missing 100% of the philosophical and mathematical depth of the statement. I know enough category theory to vaguely imagine what a topos might be. I know who Grothendieck is, but not quite exactly what a sheaf is. I've heard of constructive math and I know about Brouwer. As I've said I regard HOTT and neo-intuitionism in general as Brouwer's revenge. So I know enough to make a little joke but not enough to discuss any of the technical aspects. I know of Bishop's book and I even know a book where a physicist tried to do constructive physics. That's a project in its infancy but you constructivists will have to face the problem someday. Physics is founded on classical math, ie continuity and the real numbers as defined in set theory.
This is all by way of indicating some of the boundaries of my knowledge so that you'll get a better understanding of the deep pool your posts just threw me into!
Quoting Mephist
Yes I've perused that article.
Quoting Mephist
I know a little about Brouwer and once tried to grok the idea of a free choice sequence. That's as far as I ever got with classical intuitionism. I find it murky in the extreme.
I wouldn't call Hilbert any kind of finitist, but you might know more about that than I do. Bernays I only know from Gödel–Bernays set theory but beyond that I know nothing.
Shanin and Markov I've never heard of, I'll Google. Bishop's book I've heard of. Constructive set theory I know nothing about. As I said I know enough category theory to have a vague notion of what a topos is. I'd love to get to that point someday.
Quoting Mephist
I'll check out the Wiki link. Martin-Löf type theory I've heard of about a million times and have no idea what it is. I confess I give priority to trying to understand more of the classical math I wish I'd studied harder in my youth, and not so much with all the neo-intuitionist stuff. I like Steve Awodey's category theory book and Youtube videos and he's a big name in HOTT so I'm sure I'm missing a lot of good stuff. I'm just buried under an avalanche of stuff that I know a little "about" but that I'll never know. It's tragic, really. All this is part of that avalanche.
Quoting Mephist
I know a little about Coq as a proof assistant but no details. I am grateful for your summary and I will try to work through it at some point. I have to confess I did not dive into this exposition but I'm glad you wrote it.
Quoting Mephist
Note to self, read and understand all of the above someday.
But if I'm skimming reasonably, you are saying that someone's figured out how to do a satisfactory theory of the real numbers using constructive math? Well ok if you say so, but where are all the noncomputable numbers? Maybe you can explain that more simply. Also what about the various constructions that require the axiom of choice? The nonmeasurable set, the Hahn-Banach theorem used in functional analysis which is used in physics? Zermelo's well-ordering theorem. The fact that every vector space has a basis, and that every surjection has a right inverse? That every unital commutative ring has a maximal ideal. That every infinite set is Dedekind-infinite. That every field has an algebraic closure. Mathematicians are not going to give all those things up and they all depend on the axiom of choice hence LEM.
Quoting Mephist
Ok I'll add this to my reading list. You know I could spend the summer just working through your post! But ... are you telling me you can believe in countably many inaccessible cardinals but not a noncomputable real number? I didn't even think constructivists believed in uncountable sets.
Quoting Mephist
More for my reading list.
Quoting Mephist
Ok right. This is perhaps a sticking point for me because I am so steeped in the traditional set-theoretic real numbers that I have a very hard time taking any other approach seriously. If you remove the noncomputable reals then there are Cauchy sequences that don't converge. Your "continuum" has more holes than points. I personally find that a serious problem. However I'm certain that all the smart people working on HOTT are perfectly aware of the problem and are ok with it somehow.
Quoting Mephist
Yes, I've heard that. If you restrict to continuous functions you can make all this work. Will add the link to my reading list. Seriously, this is an awesome research program you've outlined for an aspiring neo-intuitionist.
Quoting Mephist
Oh yes another one I've heard of. Well differential geometers are closet physicists so they believe in infinitesimals and I gather you can take a category-theoretic approach that makes all this work. More stuff I know a little about without knowing any of the details. It's like my brain is filled with a skeleton but not enough is filled in. But yes I'll perfectly well agree that smarter people than me can make infinitesimals work in a categorical framework. I'm not saying they can't, LOL!!
Quoting Mephist
I'm sure I have no objection. Lawvere I know from his elementary theory of the category of sets, ie you can do set theory without ever talking about elements. I'm sure in a hundred years nobody will remember set theory anymore, it's clearly doomed. I have no objection. I can only plead ignorance of even more stuff I wish I knew.
Quoting Mephist
I'm gratified that we are in agreement on the areas where our knowledge overlaps. Weak forms of choice are often sufficient. However the facts that every vector space has a basis and every surjection has a right inverse (ie section) are both equivalent to full choice. I suppose the constructivists have a fix.
Quoting Mephist
I'm out of breath just typing. Let me tackle the other posts later.
Quoting fishfry
I don't believe that physics is founded on ZF Set theory, if that's what you mean by "classical math".
Physics is based on calculus (for the most part), which was created starting in the 17th century (Newton's "Principia" was published in 1687), at a time when logic wasn't even considered part of mathematics.
In fact, I believe that most physicists don't even know the exact formulation of the axioms of ZF set theory, and that's because these axioms are never used in physics directly. The thing that really matters is that you can use logic to prove that your results follow from the algebraic rules of calculus, differential geometry, group theory, etc...
If this were true, the distinction between the various axiomatizations of real numbers would be irrelevant for physics. ZF set theory only happens to be the first axiomatization to be discovered.
But this is in fact not completely true. There are some physical results that are not the same in all axiomatizations. For example, the Banach–Tarski theorem. If you believe that physical space can't be split into pieces and then reassembled with double the volume, you have to conclude that ZFC's axiomatization of the real numbers is not the right model for physical space.
A more "practical" example is Feynman's path integrals: they can be used to calculate physical quantities with extreme precision, as long as you don't take literally the result that derives from ZFC's axioms. In fact, from ZFC's axioms you can derive that the result is "infinite". But if you take only some terms of your infinite (divergent) series, the result is experimentally correct with a very high accuracy!
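The phenomenon described here, truncating a divergent series in the right place gives an excellent answer while "summing everything" gives nonsense, can be illustrated with a standard textbook toy case rather than Feynman integrals themselves: the Stieltjes function F(x) = ∫₀^∞ e^(-t)/(1+xt) dt has the divergent asymptotic series Σ (-1)^n n! x^n. A minimal Python sketch (my example, not from the post):

```python
import math

def F(x, t_max=60.0, n_steps=60000):
    """Numerically integrate F(x) = ∫₀^∞ e^(-t)/(1+x*t) dt (trapezoid rule).
    The tail beyond t_max is smaller than e^(-60), so it is negligible."""
    dt = t_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        t = i * dt
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * math.exp(-t) / (1.0 + x * t) * dt
    return total

def partial_sum(x, n_terms):
    """Truncation of the (divergent!) asymptotic series Σ (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

x = 0.1
exact = F(x)                   # close to 0.9156
good = partial_sum(x, 11)      # truncated near the smallest term: accurate
bad = partial_sum(x, 35)       # "summing more terms" makes it blow up
```

Truncating near the smallest term reproduces the integral to a few parts in ten thousand; pushing the sum further diverges wildly, exactly the behavior of renormalized perturbation series.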
I believe that the most popular explanation of this fact among physicists today is that physical space is in fact discrete. But there is no experimental evidence of this until now.
( The same thing happens with energy: if energy is continuous you get infinite results for black-body radiation. With Planck's quantization you get the correct results )
But with physics you can assume nothing until you find experimental evidence, because nature has surprised us so many times! However, if I had to guess, discrete space (like pixels of a video game) is too "ugly" to be true: nature has shown to prefer "beautiful" mathematical structures.
I think an interesting question is: is there an axiomatization of a continuous space (where you can take measures represented as real numbers) where Feynman integrals make sense and are convergent? I don't know, but I don't see a reason why this shouldn't be possible.
Quoting fishfry
I'll try to give you a simpler explanation: In logic you have propositions, terms and relations. Martin-Löf type theory ( and HOTT too ) reduces all these three concepts to only one: functions. That's basically the reason why it has a simple categorical model.
The functions that you consider in the theory are, exactly as for morphisms in category theory, not limited in any way. Now the question is: how is it possible to use and reason about functions that are not computable? Obviously, you can't "execute" the function on a given argument to check what's the result. Well, the "trick" is: lambda calculus! With lambda calculus you can "execute" a function by what's called "beta reduction". That means that you can transform and compose functions from one form to another (equivalent to the previous one) using very simple rules that are supposed to be respected by everything that can be called "function".
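The beta-reduction rule described here can be sketched very compactly. A minimal illustrative Python model of untyped lambda terms (not Coq; the substitution is deliberately naive and assumes all bound variable names are distinct, so no variable capture can occur):

```python
# A lambda term is a tuple:
#   ("var", name) | ("lam", name, body) | ("app", function, argument)

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`.
    (Naive: assumes all bound variable names are distinct, so no capture.)"""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        if term[1] == name:          # `name` is re-bound here: stop
            return term
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def beta_step(term):
    """Perform beta reductions: (lam a. body) arg  becomes  body[a := arg]."""
    if term[0] == "app":
        f, arg = term[1], term[2]
        if f[0] == "lam":
            return subst(f[2], f[1], arg)     # the beta rule itself
        return ("app", beta_step(f), beta_step(arg))
    if term[0] == "lam":
        return ("lam", term[1], beta_step(term[2]))
    return term

def normalize(term, max_steps=100):
    """Keep reducing until nothing changes (this may loop forever for
    general terms, hence the step bound)."""
    for _ in range(max_steps):
        reduced = beta_step(term)
        if reduced == term:
            return term
        term = reduced
    return term

# Example: ((lam x. lam y. x) a) b  reduces to  a
K = ("lam", "x", ("lam", "y", ("var", "x")))
result = normalize(("app", ("app", K, ("var", "a")), ("var", "b")))
```

Note that `normalize` never "runs" anything: it only rewrites formulas by substitution, which is the point Mephist makes about executing functions purely formally.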
The whole point about constructivism is that you can assume to exist for sure only the functions that you can effectively compute. All the others can enter the theory only as premises of your deductions (no axioms!). And that is enough to prove all mathematical theorems that can be proved with classical logic, if you add the appropriate axioms corresponding to the "non computable" parts of classical logic. In this way you make clear the distinction between the computable and non-computable parts of logic!
[ that's enough for today.. :-) ]
Oh no now I am falling even further behind in my replies!
Quoting Mephist
Yes, but by the 19th century it became clear that the logical gaps in calculus were becoming serious and needed to be addressed. We needed to know exactly what a real number and a continuous function were in order to resolve the delicate convergence issues that were arising.
I agree of course that the physicists don't care and I didn't mean to imply that they do. But physics is based on the real numbers and mathematical analysis in general; and those things were discovered to need a foundation; and that foundation is currently set theory.
Quoting Mephist
Of course. In fact most working mathematicians couldn't state the axioms and don't use set theory directly! That's a fact. My point still stands. Once you get picky about which things converge and which things don't, a problem that arose in the infinite trigonometric series that arose out of 19th century physics, you need a foundation and set theory is it. Currently of course.
Quoting Mephist
Sure, not disagreeing at all.
Quoting Mephist
Perfectly well agreed. But we are not talking about whether Dedekind cuts or equivalence classes of Cauchy sequences are a better way to represent the real numbers. You are advocating nonconstructive foundations. It's in that context that my remarks make sense. We can NOT currently (as far as I understand it) get modern physics off the ground using nonconstructive math. The 't' in the Schrödinger equation is taken to be a variable that ranges over the standard mathematical real numbers, noncomputables and all.
I am aware of only one book that attempts to get physics working on nonconstructive math. That's a tiny drop of water in an ocean that needs to be filled.
I will agree that synthetic differential geometry is some kind of counterexample to my claim but I don't know enough about it. But SDG is not the same as nonconstructive approaches, is it? I thought SDG is just a categorical approach to infinitesimals so it's more like a complementary idea to nonstandard analysis. Not something directly related to homotopy type theory or neo-intuitionism or whatever. But I could be all wrong about all this, I have very little to no actual knowledge.
Quoting Mephist
I would never call B-T a "physical" result. On the contrary it's only a technical result regarding the isometry group of Euclidean 3-space, which happens to contain a copy of the free group on two letters, which has a paradoxical decomposition. It has nothing to do with the real world. As far as we know, of course.
But if earlier you said that physics doesn't depend on set theory, then surely we can't apply the axiom of choice to regions of physical space!
Quoting Mephist
I've often raised this point myself. If anyone seriously believed it was, then we'd see physics postdocs applying for grants to count the number of points in the unit interval and thereby discover the truth of the continuum hypothesis. The fact that the idea is absurd shows how far ZF, let alone C, is from the real world.
I'm not claiming the world's based on set theory. I apologize if my earlier wording made it sound like I did.
I claim that our best physical theories are based on the real numbers and the theory of limiting processes that go under the name of mathematical analysis; and that our current formulation of analysis is in terms of set theory. I don't state this as a metaphysical point, only as a historically accurate one. I don't care if you want to replace one definition of the real numbers with another. I do care that there is as yet no fully worked out constructive theory of the real numbers that will support modern physics. That is my belief. Certainly I could be wrong.
Quoting Mephist
I've heard about renormalization and I am not aware of whether it's been mathematically formalized or not. I found a stackexchange thread saying that "some" versions of QFT are mathematically sound and others not! This is all inside baseball in the physics biz.
https://physics.stackexchange.com/questions/86597/qft-as-a-rigorous-mathematical-theory
Regardless, I take renormalization as a modern example of Newton's great use of calculus to work out gravity, two hundred years before we had a logically rigorous foundation for calculus. Physics often leads math that way. That doesn't mean the math of renormalization won't be fully patched up someday in terms of ZFC or some future foundation. Maybe tomorrow morning someone will derive renormalization from homotopy type theory. If that happens I'll have to shut up about constructivism.
Quoting Mephist
Yes we are now back to the ancient question of atomism. Zeno. All that stuff. We are not going to solve it today. The Planck scale doesn't say the world's a discrete grid. It only says that our contemporary theories don't allow us to speculate below that scale. Maybe there are little thingies in there.
Personally I do not think there are Euclidean points. I don't regard the real numbers as a good model of continuity. I'm not making any metaphysical claims for math at all. But I am making historical claims that our best continuum theories DO need an underlying Euclidean model.
[By Euclidean I don't mean the geometry, I mean the Cauchy completeness of n-space].
Quoting Mephist
I hope my Planck point was clear. I didn't deny the Planck scale. It's that my own personal understanding is that the Planck scale is that scale of space and time below which we can't sensibly measure or calculate or speculate about. I'm agnostic on whether there's anything actually there.
Quoting Mephist
I think we're meeting in the middle. I don't think mathematical continuous space is quite the right model for the world, and you admit that neither is a discrete grid. Most likely it's something even stranger, yet to be discovered by a genius not yet born.
Quoting Mephist
Don't know enough about the subject, more stuff to read. But quantum physics is probably not something I'll be tackling in this lifetime. I do know a little functional analysis, enough to know what Hilbert space is. So if you tell me that an observable is a linear operator on some complex Hilbert space, I know what that means mathematically. Just not physically. And I'll probably never know.
Quoting Mephist
I understand the words but not the ideas. I suspect it's the kind of thing where mere Wiki reading wouldn't help. I've heard of lambda calculus but don't know anything about it except that everyone thinks it's terribly important. I gave the Wiki a quick scan and they pointed out in their section on beta reduction that "The lambda calculus may be seen as an idealised version of a functional programming language ..."
Ok. But a functional programming language can't do anything a Turing machine can't, and in fact Turing proved that TMs are equivalent to the lambda calculus, and Turing machines can't crank out the digits of noncomputable numbers. So once again I'm up to my original brick wall of understanding. Or misunderstanding. I'm sure that if I grokked this subject I'd suddenly realize that I've been ignorant all these years, and that you can in fact do everything you need to do with mere computability.
I just don't see it yet.
Quoting Mephist
Thank you much!
ps --
Quoting Mephist
If I could only understand this. You are saying that if I can prove a theorem at all, I can prove it using constructive methods, by adding certain axioms. This could be within my grasp. What are the axioms that make this work?
pps --
Ah this is that Curry-Howard business I bet. Programs are proofs. Fine, I believe that. A proof has to be computable, but the thing it's talking about need not be. But doesn't that mean something? Proofs aren't sufficient to know mathematical reality, Gödel showed that. Same with physics. Restricting our epistemology to what we can prove with axiomatic systems or computers is not sufficient to understand reality. That perhaps would be my working thesis.
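The "programs are proofs" reading can be made concrete in a few lines. A hedged Python sketch (the function names are mine; propositions become types, implications become functions, disjunctions become tagged values, and negation becomes a function into the absurd proposition):

```python
# Curry-Howard reading: a proposition is a type, and a proof of it is a
# program of that type. "A -> B" is proved by a function taking proofs of A
# to proofs of B; "A \/ B" by a tagged value ("inl", ...) or ("inr", ...);
# "~A" by a function from A to the absurd proposition.

def modus_ponens(a):
    """A proof of  A -> ((A -> B) -> B)."""
    return lambda f: f(a)

def compose(f):
    """A proof of  (A -> B) -> ((B -> C) -> (A -> C))."""
    return lambda g: (lambda a: g(f(a)))

def not_not_lem(k):
    """A proof of  ~~(A \/ ~A): even without assuming LEM, any claimed
    refutation k of (A \/ ~A) can be turned against itself."""
    return k(("inr", lambda a: k(("inl", a))))

# The proofs are ordinary programs: instantiating A, B, C with concrete
# types lets us run them.
mp_example = modus_ponens(3)(lambda n: n + 1)          # 4
comp_example = compose(len)(lambda n: n * 2)("abc")    # 6
```

The last function is the classic intuitionistic result that LEM, while not provable, is irrefutable; it never needs to be executed on real evidence to be a valid proof term.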
Wait a moment: I am advocating Martin-Löf type theory, and one of its particular versions, called the "Calculus of Inductive Constructions", which is, as the name says, CONSTRUCTIVE mathematics.
From https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics):
"Constructive mathematics
Much constructive mathematics uses intuitionistic logic, which is essentially classical logic without the law of the excluded middle."
Quoting fishfry
OK, maybe this is the best point to start with.
Here's MY PERSONAL (not sure if common) interpretation of the "meaning" of classical and intuitionistic logic.
First of all, definitions:
- a model is something on which you can perform "experiments" and verify the results.
- a proposition is the description of an experiment performed on a model.
The result of every experiment (if you can perform it) is always uniquely determined: True or False.
This is the same thing as a physical experiment: if it's doable and you prepared the experiment to obtain a true or false result, you'll always get an answer: it can't happen that the world stops working because it doesn't know how to answer to your experiment!
But there is a difference from physics, and that's the difference between mathematical functions and physical experiments: in a mathematical model, for the same experiment you'll always get the same result: a function is supposed to be some fixed, well-identified mechanism that always produces the same output if you put in the same input.
In a physical model instead, as we know from quantum mechanics, this is not always true.
Let's begin from what is probably the most controversial point: the law of excluded middle (LEM).
So, LEM says that "all experiments are performable" (giving as a result True or False).
Not assuming LEM means that in some cases A is a performable experiment, but (A \/ ~A) is not.
The obvious example is the termination of a Turing machine:
A means: "the program A terminates".
You run the program and you verify that it's true (we assume that A always does the same thing, so it's enough to run it one time)
NOT A means: the program A never terminates:
You run the program and wait... forever! Your experiment will never end, so you'll never be able to perform the experiment that corresponds to the proposition "A is false".
Nevertheless, we know that an answer "exists": either the program terminates or it doesn't, no other possibilities! It's just that the answer is not implementable as an experiment on the model.
(of course in most cases you can prove that A is false with logic, but here we are not speaking about "proving" propositions but "verifying" on the model if the proposition is true or false)
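The asymmetry in this example, termination can be verified by running, non-termination cannot, is easy to make concrete. A minimal Python sketch (the step-function encoding of "programs" is my own convention for the example):

```python
def terminates(step, state, fuel):
    """Semi-decision of halting: run the program for up to `fuel` steps.
    Returns True if it reached the halting state (marked by None);
    returns None ("don't know") otherwise. No finite run of this checker
    can ever establish non-termination."""
    for _ in range(fuel):
        state = step(state)
        if state is None:
            return True
    return None

# Program A: the Collatz iteration started at 27. It does halt, and
# running it long enough verifies that.
collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)

# Program B: a trivial loop. It never halts, but the checker can only
# ever answer "don't know", no matter how much fuel we give it.
loop = lambda n: n

verified_halts = terminates(collatz, 27, 10_000)   # True
still_unknown = terminates(loop, 0, 10_000)        # None
```

The proposition "A terminates" is experimentally verifiable when true; its negation is never verifiable by this kind of experiment, which is exactly the gap LEM papers over.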
So, in classical logic we assume that EVERY proposition of the language describes an "experiment" that is executable and, when executed, returns an answer True or False.
In intuitionistic theory instead, we assume that there are propositions A for which "NOT A" is not a feasible experiment. So, you shouldn't be allowed to write "NOT A" as an assumption for an arbitrary A, because for some propositions A, "NOT A" has no meaning in our model.
The "trick" is that in intuitionistic logic "NOT" is not defined as the boolean function {True -> False; False -> True}, but as "A implies Absurd" (A -> _|_), where Absurd is the "impossible experiment": the experiment that does not return any result, because it's associated with a contradictory proposition. Note that IF A is a possible experiment, then "A -> _|_" is always a possible experiment in intuitionistic logic, because it's implemented as a logical demonstration made of a finite number of steps starting from A (that's the reason why the internal language of logic should be allowed to implement only total functions), and the provability of a theorem is a syntactic operation made of a finite number of steps, independent of the model.
In this way, the rules of logic guarantee that you can never build an "impossible" experiment IF you start from a possible one.
Now, in classical logic "A -> _|_" is equivalent to "NOT A", because there are no "impossible" experiments ( except _|_ that is not derivable in any logic ).
But in intuitionistic logic you can assume as an axiom the proposition "A \/ (A->_|_)" (usually written "A \/ ~A", but with the intuitionistic definition of the NOT function).
"A \/ (A->_|_)" in intuitionistic logic is some kind of "oracle" function that takes as an input a proposition A and ALWAYS returns one of the two propositions: A or (A->_|_).
If this function exists, then you can use it to decide the result of every experiment, and (A->_|_) becomes equivalent to the "NOT A" of classical logic (no more impossible experiments). If this function does not exist, there are impossible experiments, and "A \/ (A->_|_)" cannot be built in any way starting from intuitionistic rules.
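For decidable propositions, though, the "oracle" is just an ordinary program. A small illustrative sketch (the evenness example and the witnesses are mine):

```python
def lem_even(n):
    """A concrete instance of A \/ (A -> _|_) for the decidable
    proposition "n is even": computing always produces one side or the
    other, each with explicit evidence."""
    if n % 2 == 0:
        return ("inl", n // 2)        # witness k with n == 2*k
    return ("inr", (n - 1) // 2)      # witness k with n == 2*k + 1,
                                      # which refutes evenness
```

Assuming LEM as an axiom amounts to pretending that such a total decision function exists for every proposition, including undecidable ones like halting.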
So, this means that:
- "classical logic" = "intuitionistic logic" + LEM.
- "models of classical logic" = "models of intuitionistic logic" - "non realizable experiments"
You can use intuitionistic logic to separate the finitary deductions (not using any axiom) from the non finitary ones (using LEM, or some other axiom).
(of course, this is a super-simplification, since there are many kinds of "classical" and "intuitionistic" logic, even without considering the axioms of set theory)
There are even other models in which not all propositions of classical logic make sense: for example, if you interpret the propositions as experiments of quantum mechanics, the proposition A /\ B does not make sense if A and B are incompatible observables: because both A and B separately are measurable quantities, but the couple (A, B) is not.
So, when you use classical logic to reason about experiments in quantum mechanics you have to be careful not to use the expression A /\ B when A and B correspond to incompatible observables.
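A concrete instance of incompatible observables, sketched in Python with plain lists (the standard Pauli matrices; my example, not from the post):

```python
def matmul2(a, b):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pauli matrices: the observables "spin along x" and "spin along z".
sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]

xz = matmul2(sigma_x, sigma_z)
zx = matmul2(sigma_z, sigma_x)
# xz != zx: the two observables do not commute, so there is no joint
# measurement assigning sharp simultaneous values to both. The
# conjunction "A /\ B" corresponds to no performable experiment.
```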
OK, I am not an expert either :-)
As far as I can say for sure, there is a full formalization of real numbers in the COQ standard library (https://coq.inria.fr/distrib/current/stdlib/).
I have used the library for a long time (at first at university, and afterwards because I like to play with math), but to be honest I didn't even know what axiomatization it was based on :-)
I searched for some of the documentation (easier than digging into the library searching for axioms):
(https://hal.inria.fr/hal-00860648v1/document at page 6)
"Note that the standard library sometimes makes use of the excluded-middle axiom in addition to the previous ones. The CoqTail project, however, has shown that it was unneeded. So, for the purpose of this work, we consider that this axiom is not part of the ones defining real numbers in Coq's standard library"
From the point of view of calculus (and physics) you can use limits or derivatives perfectly well without knowing how they are defined in the base libraries (you only apply theorems).
Quoting fishfry
SDG represents space as a topos. And a topos is the model (or at least the "standard" model) of Martin-Löf type theory (lots of details omitted...)
(From https://ncatlab.org/nlab/show/topos)
"toposes are contexts with a powerful internal logic. The internal logic of toposes is intuitionistic higher order logic. This means that, while the law of excluded middle and the axiom of choice may fail, apart from that, every logical statement not depending on these does hold internal to every topos."
P.S. In reality, the "standard" model for Martin-Löf type theory with extensional equality is "believed" to be an (∞,1)-topos, but I don't know if anybody found a proof of this.
Simple explanation:
In ZF Set theory, everything is made of sets (or, if you prefer, everything is a set)
Of course you can represent (or build) whatever you want with sets, by composing them with inclusion and equality. But the fundamental "objects" described by the theory are sets.
In Martin-Löf type theory, everything is made of types (objects) and functions (morphisms).
Essentially, the rules of logic describe what you can "build" starting from an arbitrary finite collection of objects and morphisms.
The "thing" that results from building ALL THE POSSIBLE CONSTRUCTIONS starting from this arbitrary collection turns out to have the algebraic structure of a topos (the free topos built from the given initial objects and morphisms).
In a similar way, for example, the axioms of classical propositional logic correspond to a distributive lattice, and the axioms of simply typed lambda calculus correspond to a cartesian closed category.
Well, SDG is an attempt to add to a topos enough structure (corresponding to axioms of the internal logic) to be able to define on it differential operators that respect the same rules of calculus that we use in physics.
The main difference is that this space is not made of an infinite set of "points" of zero size, but of an infinite set of lines of infinitesimal size. From the point of view of logic, the model is not contradictory. From the point of view of "intuition", I am not sure if it's more "crazy" to believe that you can "attach" one after the other (continuity) an infinite set of objects of length zero to obtain a finite segment, or that you can "attach" one after the other an infinite set of objects of infinitesimal length to obtain a finite segment.
Quoting fishfry
I don't know what an isometry group is :worry:
Quoting fishfry
:-) Lambda calculus (let's say simply typed lambda calculus, to fix the ideas) is a little recursive set of rules that says how you can build some objects starting from other objects that have just been built: exactly like Peano natural numbers, recursive trees, and all other recursive constructions that you can think of.
The objects that you can build in this way have exactly the same structure as the arrows and the objects of a cartesian closed category. Every arrow can start from a type and end in another type, and there is the possibility to build types from all arrows that start from the same source object and end in the same target object (the exponential objects of the category).
So, category theory and lambda calculus represent the same structure, seen from two different points of view:
- category theory describes the collection of all possible objects and arrows seen from a GLOBAL point of view
- lambda calculus describes how you can build the whole structure putting together one piece at a time, from a LOCAL point of view.
About beta reduction: If you are operating on a universe of functions that you don't know, you obviously cannot execute them, because you don't know how they are "implemented". And, in general, they don't have to be "computable" functions at all. You can only execute the functions that you build within the internal language of logic.
Well, beta reduction is a rule that says: if you apply the function that takes the argument a into the formula ".... a ...." ( lambda a -> ..... a .... ) to the argument "whatever formula you want", you obtain as a result ( ..... "whatever formula you want" .... ). That is: you substitute the expression of the argument for the symbol that represents the input inside the formula. It's the most obvious and basic thing that you can say a function "must" do. This rule does not care how the function is implemented; it only operates formally, composing functions together. Well, the "strange" thing is that this rule can be used to calculate the value of the function IF the function is computable. It means that you can "execute" the function and obtain a result without really executing it, but by transforming it recursively from one form to the next, until you reach a "standard form". This standard form "represents" the result of the execution of the function (the output). In other words, this is a very clever way to mix computable functions that you built yourself with other "external" functions that you assume to exist, but you have no idea what they are doing. The same thing you do in logic: ( A /\ B ) \/ ( not C ). You can reduce this to a normal form made only of conjunctions without having any idea of what the propositions A, B and C say.
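The point that computation falls out of pure formal substitution can be seen with Church numerals. In this Python sketch (my illustration), the language's own function application plays the role of beta reduction, and arithmetic happens without ever inspecting how any function is implemented:

```python
# Church numerals: the number n is the function that applies f n times.
# Python's function application performs exactly the substitution that
# beta reduction prescribes.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    """Read back a Church numeral by applying it to integer successor."""
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
five = add(two)(three)     # computed purely by composing functions
```

Nothing in `add` "knows" what numbers are; reaching the normal form of the composed term is what produces the answer.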
Quoting fishfry
Exactly. All that you said is correct. You can operate with logic only using a Turing machine. But the model on which you can operate is a category, without ANY limitation of size. This is exactly the same thing that you do in set theory: you assume that there exists a universe made of sets of whatever size you want. And you say that if there exist two sets A and B, then there must also exist "A intersect B", "A union B", etc.. The things that you can build with logic operators (conclusions) starting from a finite number of terms of the language (hypotheses) always form a countable set in whatever formal language you use.
And by describing the finite structures that you can build from arbitrary finite sets of other objects, you define a structure on the underlying universe on which you operate.
But you can't impose a limit on the "infiniteness" of the size of the underlying universe only by imposing rules on finite constructions that are part of it. And that is true for intuitionistic logic AND for classical logic. You cannot "build" an irrational number in set theory either. You can only assume that it exists because it's compatible with the rules that you use to build sets starting from other sets!
It's interesting that you used Coq in school. You are a HOTT native. I of course learned my set-theoretic foundations long ago. As Max Planck said, science progresses one funeral at a time.
Quoting Mephist
The isometry group of 3-space is just the collection of all rigid motions of space. These are all the combinations of rotations, translations, and reflections. Simple idea. They form a group in the sense of group theory, hence the name.
I only mentioned it because Banach-Tarski came up. BT is everyone's go-to example that proves that abstract math is completely divorced from common sense.
It turns out that the proof of Banach-Tarski is quite simple, at least in its outline. There are a number of steps but no step is particularly difficult. The core idea is encapsulated in the phrase, "The isometry group of 3-space contains a copy of the free group on two letters." When you unpack the meaning (which would take us far afield here) it is very simple and natural. No set-theoretic magic at all.
The Wiki article on the subject has a nice outline in case anyone's interested in demystifying this theorem for themselves. Point being that BT is much less esoteric than people think.
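If anyone wants to poke at the phrase "free group on two letters" computationally, here's a small Python sketch (mine, not from the Wiki outline): it enumerates reduced words and checks the paradoxical identity F2 = S(a) ∪ a·S(a⁻¹) up to a finite word length, where S(x) is the set of reduced words beginning with x.

```python
GENS = "aAbB"  # A = a^-1, B = b^-1
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    """All reduced words in the free group on {a, b}, up to max_len letters."""
    words = {""}
    frontier = [""]
    for _ in range(max_len):
        nxt = []
        for w in frontier:
            for g in GENS:
                if not w or w[-1] != INV[g]:   # never write g next to g^-1
                    nxt.append(w + g)
        words.update(nxt)
        frontier = nxt
    return words

def reduce(w):
    """Cancel adjacent inverse pairs until the word is reduced."""
    out = []
    for g in w:
        if out and out[-1] == INV[g]:
            out.pop()
        else:
            out.append(g)
    return "".join(out)

# The paradoxical identity: F2 = S(a)  union  a * S(a^-1),
# where S(x) = reduced words starting with x.
words = reduced_words(6)
S_a = {w for w in words if w.startswith("a")}
S_A = {w for w in words if w.startswith("A")}
shifted = {reduce("a" + w) for w in S_A}       # left-translate S(a^-1) by a

# Every word of length <= 5 is covered by the two pieces (we check one
# length below the enumeration cutoff to avoid boundary effects).
short = {w for w in words if len(w) <= 5}
assert short <= S_a | shifted
print("two pieces cover the group (up to word length 5)")
```

This is the "commonplace paradoxical decomposition" in miniature: one piece plus a translate of another piece already covers the whole group.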
Quoting Mephist
I wanted to put in a word for the standard real numbers.
* Set theory's not needed for the standard reals. The idea of Cauchy completion arose historically long before Cantor's work on set theory. Cauchy completeness is a very natural idea, and the formality of defining the real numbers as the (equivalence classes of) Cauchy sequences of rationals would occur even to the most diehard constructivists. They might reject the real numbers from their ontology; but they could not deny its importance as an abstract idea.
The standard reals are the only model of the reals that is Cauchy complete.
The constructive reals are not complete. The sequence whose n-th element is the real number represented by the first n digits of the binary representation of Chaitin's Omega is a Cauchy sequence of computable numbers that fails to converge to a computable number.
On the other hand the other famous alternative model of the reals, the hyperreals of nonstandard analysis, are also not complete. Any non-Archimedean field (one that contains infinitesimals) is necessarily incomplete.
The constructive reals fail to be complete because there are too few of them. The hyperreals fail to be complete because there are too many of them. The standard reals are the Goldilocks model of the real numbers. Not too small and not too large. Just the right size to be complete.
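As a concrete illustration of the idea (a sketch, not part of the argument above): exact rational arithmetic yields a Cauchy sequence of rationals whose limit, sqrt(2), is not itself rational — the kind of "hole" that Cauchy completion fills.

```python
from fractions import Fraction

def sqrt2_cauchy(n):
    """First n terms of a Cauchy sequence of rationals converging to sqrt(2),
    via Newton's iteration x -> (x + 2/x) / 2 started at 1."""
    x = Fraction(1)
    terms = [x]
    for _ in range(n - 1):
        x = (x + 2 / x) / 2
        terms.append(x)
    return terms

terms = sqrt2_cauchy(6)
# The sequence is Cauchy: successive gaps shrink rapidly...
gaps = [abs(terms[i + 1] - terms[i]) for i in range(len(terms) - 1)]
assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))
# ...yet no rational can be its limit, since the limit squares to 2.
print(float(terms[-1]))  # 1.4142135623730951
```

Every term here is a rational number; the limit is not. Completing the rationals means adding exactly the limits of such sequences.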
* Cauchy completeness is a second order property. It's equivalent to the least upper bound property, which says that every nonempty set of reals that's bounded above has a least upper bound. It's second-order since it quantifies over subsets of reals and not just over the reals.
The second order theory of the reals is categorical. That means that every model of the reals that includes the least upper bound axiom is isomorphic to the standard reals. Up to isomorphism there is only one model of the standard reals; and it is the only model that is Cauchy complete. Even a committed constructivist would have to acknowledge these facts, and hold the standard real numbers as important at least as an abstraction.
* As a final point, as far as I can tell not every HOTT-er is a diehard constructivist. In some versions of HOTT there are axioms equivalent to Cauchy completeness and even the axiom of choice.
If I'm understanding correctly, one need not commit to constructivism in order to enjoy the benefits of HOTT and computer-assisted proof.
Hmmm... When we say divide by 2, we mean make two of that, or cut once: 2 - 1. But had our convention been that "divide by 2" meant cut two times, then to divide by 0 would mean to cut 0 times, i.e. leave it as it is, which is 1 piece. So then to get 0 pieces you would have to cut:
i) a negative number of times, which is impossible;
ii) an infinitesimal number of times, which is impossible not just because of the unreal nature of infinitesimal numbers, but also because you cannot cut a fractional number of times;
iii) an infinite number of times, so that every piece is so small as to be the same as nothing, which is impossible not just because of the unreal nature of infinite numbers, but also because you'd never get to the end of it even at infinite speed of cutting and forever in time.
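The cutting convention above amounts to a one-line rule: k cuts yield k + 1 pieces. A tiny Python sketch (the function name is ad hoc) that makes the impossibilities explicit as rejected inputs:

```python
def pieces_after_cuts(cuts):
    """Under the 'divide by k means cut k times' convention described above,
    cutting a whole object k times leaves k + 1 pieces."""
    if cuts < 0:
        raise ValueError("cannot cut a negative number of times")
    if cuts != int(cuts):
        raise ValueError("cannot cut a fractional number of times")
    return int(cuts) + 1

assert pieces_after_cuts(0) == 1   # "divide by 0" = leave it whole: 1 piece
assert pieces_after_cuts(2) == 3
# No finite number of cuts ever yields 0 pieces:
assert all(pieces_after_cuts(k) >= 1 for k in range(1000))
```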
OK, I read the article and finally understood the point about isometry group! :smile:
Actually, I tried to look for a formal proof of Banach-Tarski in ZFC set theory, but the only one I found uses Coq... :-) (https://hal.archives-ouvertes.fr/hal-01673378/document)
- By the way, there is a formal logic computer system using ZFC Set theory with a very extensive library (http://mizar.org/), but there is no Banach-Tarski theorem there (at least at the moment)
I am looking for formalized proofs because proofs in ZFC related to real numbers are usually too long to be completely written by hand, and tend not to be clear about the steps that make concrete use of the ZFC axioms.
Anyway, the reasons for my doubts about ZFC in geometry are more "fundamental", and not related to the existence of the isometry group.
The thing that I don't like is that in BT (as in all topology based on ZFC), a segment (or a surface, or a solid) is DEFINED to be a set of points.
(well since in ZFC everything is defined as a set, you don't really have much choice with this definition...)
The problem is that you can't use the property of a segment being a set in any way, because equivalences between sets (one-to-one functions) do not preserve any of the essential properties of the segment:
- the size of the set is not preserved, because additivity doesn't work for uncountable sets (and I believe there is no way to define a sum of an uncountable set of terms that gives a finite result, because the elements of an uncountable set are not "identifiable" with a computable function, and so you can't "use" them to compute a sum).
- the topology of the set is not preserved, since every set can be reordered to have a least element (https://en.wikipedia.org/wiki/Well-ordering_theorem).
For this reason, the use of bijective functions with the "meaning" of moving a 3-dimensional object by breaking it into pieces and then reassembling it in another place is wrong.
I mean: it's not contradictory, but it doesn't reflect the intuitive meaning that you give to the transformation, since neither topology nor measure are invariant under this transformation.
The "trick" of breaking the transformation into two parts, so that the second part is only the composition of four pieces, doesn't change the fact that the first part of the transformation does not preserve measure (the measure of each of the four intermediate pieces is undefined).
[TO BE CONTINUED]
Ok! And I'm so glad you responded. I come to this forum to discuss math, and when there's no interesting math content going on I go over and get in trouble on the political forums. Discussing math is much more productive. As I saw on a mathematician's office door once: Good sense about trivialities is better than nonsense about things that matter!
Quoting Mephist
Arghhh. In my opinion you are totally missing the point of math. I don't mean that to be such a strong statement, but in this instance ... yes.
The point isn't to have a pristine logical proof. The point is the beautiful lifting, via the axiom of choice, from the very commonplace paradoxical composition of the free group on two letters, up to a paradoxical composition of three-space itself. It's the idea that's important, not the formal proof. That is the actual point of view of working mathematicians.
I recently ran across an article, link later if I can find it. Professional number theorists were asked by professional logicians: wouldn't you be interested in seeing a computerized formal version of Wiles's proof of Fermat's last theorem? And the logicians were stunned to discover that the mathematicians had no interest in such a thing!
What's important about Wiles's proof is the ideas; not every last bit and byte of formal correctness.
This is something a lot of people don't get about math. It's the overarching ideas that people are researching. Sure, you can work out a formal proof if you like, but that's more like grunt work. The researchers are not interested.
Likewise, you are interested in a formal computer proof of BT, and that is not the point at all. The point is that the paradoxical decomposition of the free group on two letters induces a paradoxical decomposition of three space. That's the point. It's beautiful and strange. Formal proof in Coq? Ok, whatever floats your boat. But that is not the meaning of the theorem. The meaning is in the idea.
I hope I'm explaining the modern viewpoint here. The ideas are important, the proofs are not. I know, that's the opposite of what they tell people in the philosophy of math classes. But it's how mathematicians think.
I found the article.
It's a great read, it makes my point much better than I can.
https://www.quantamagazine.org/why-the-proof-of-fermats-last-theorem-doesnt-need-to-be-enhanced-20190603/
Quoting Mephist
That may well be true, but it is in no way relevant to the interestingness of the theorem!
Quoting Mephist
That's perfectly cool if you are interested in detailed step-by-step proofs derived from first principles. But that's a different interest than the math itself.
Quoting Mephist
Two points here, one, I didn't realize you have "doubts" about ZFC, but if you do I might try to alleviate them or even perhaps agree with them. "ZFC in geometry" I didn't follow, you mean modern geometry ie algebraic geometry (very category theoretic), geometric algebra, etc.? Not catching your reference.
But secondly, you spoke of the "existence of the isometry group" as something strange or questionable. Perhaps you mean something else. If you take Euclidean 3-space straight out of multivariable calculus class, it's clear that if you have an object or set of points, you can transform them in a distance-preserving manner; and that the collection of all such transformations forms a group. Surely nothing is radical or counterintuitive about that.
If you don't like the idea of "space as a set of points" I do understand that philosophical objection. But then you are objecting to a huge amount of modern math and physical science too. I'm not sure where you're coming from. If you don't like point sets, then you wouldn't like set theory. I can certainly see that.
Quoting Mephist
Yes that is certainly the case. And I get that you are objecting to that perspective. Which is fine. If you are engaged in the century-long project of replacing set theory, which won the 20th century; with HOTT (or category theory or Smooth Infinitesimal Analysis, etc)., which will probably win the 21st; I can't argue the point, we are all just passing through history.
But surely your objection then is not to BT, but to virtually all of modern mathematics and physics. Is that your viewpoint? How far does your rejection of using point sets to model mathematical ideas go?
Quoting Mephist
Even if set theory were dethroned as the foundation of mathematics -- and of course it already has been dethroned in much of modern math -- it would still be of interest. Just as group theory is the study of all the mathematical structures that satisfy the axioms for groups; set theory is becoming the study of all the mathematical objects that satisfy the axioms for sets.
In other words now that set theory is "relieved of its ontological burden," as I saw it expressed once, we will now see a great flourishing of set theory, not its demise. Set theory is fascinating whether you regard it as foundational or not!
So if you deny set theory as a foundation, I will not disagree with you! Rather, I will point out that we can now study set theory for its own sake. Do you object?
Quoting Mephist
Correct, the intervals [0,1] and [0,2] have the same cardinality but different measures. But so what? A quart of water weighs a different amount than a quart of oil. You can have two systems of measurement that are independent of each other. So what? When we care about cardinality we care about cardinality. When we care about measure we care about measure.
Quoting Mephist
Sure, that's the age-old problem of the continuum. How can a big pile of dimensionless points give rise to dimension? Uncountable additivity would lead to a contradiction because the measure of the union of singleton points would be 0 but the measure of the unit interval is 1. So we can't have uncountable additivity and nobody knows how points make up a line. You might as well complain to Euclid.
Quoting Mephist
It's a theorem that if you have an uncountable sum that converges to a finite number, all but countably many of the terms must be zero. Not that much of a surprise.
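A sketch of the standard argument (for nonnegative terms, with the uncountable sum defined as the supremum of its finite partial sums):

```latex
\text{Suppose } \sum_{i \in I} x_i = S < \infty \text{ with all } x_i \ge 0.
\text{ For each } n \in \mathbb{N}, \text{ let } I_n = \{\, i \in I : x_i > 1/n \,\}.
\text{Any finite subset } F \subseteq I_n \text{ satisfies } |F|/n < \sum_{i \in F} x_i \le S,
\text{ so } |I_n| \le nS \text{ and each } I_n \text{ is finite.}
\text{The set of indices with } x_i \ne 0 \text{ is } \bigcup_{n} I_n,
\text{ a countable union of finite sets, hence countable.}
```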
Quoting Mephist
I don't think that has anything to do with it. It's not about the inability to compute a sum. It's a simple proof but it doesn't involve computability.
Quoting Mephist
The w.o. theorem does not come into play here at all. It's an interesting theorem though, worth a separate thread but a little off topic here. It's logically equivalent to the axiom of choice anyway so complaints about one are complaints about the other.
By the way you don't need the power of the well-ordering theorem to change the order type of a set via bijection. Just take the natural numbers 0, 1, 2, 3, ... and reorder them to 1, 2, 3, ..., 0. So it's the usual order but n '<' 0 for every nonzero n, where '<' is the new "funny" order relation. The natural numbers in their usual order have no greatest element; but in the funny order they do. We've changed the order type with a simple bijection. You don't argue that this simple example should be banished from decent mathematics, do you? The funny order is the ordinal number [math]\omega + 1[/math], or omega plus one. Turing knew all about ordinals, he wrote his Ph.D. thesis on ordinal models of computation. Even constructivists believe in ordinals.
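Here's a small Python sketch of that funny order (just an illustration on a finite initial segment of the naturals, which is enough to see the reordering): we keep the same underlying elements and change only the comparison.

```python
from functools import cmp_to_key

def funny_cmp(m, n):
    """Order the naturals as 1 < 2 < 3 < ... < 0 (ordinal omega + 1):
    0 is moved past every other element; everything else keeps its order."""
    if m == n:
        return 0
    if m == 0:
        return 1    # 0 is the new greatest element
    if n == 0:
        return -1
    return -1 if m < n else 1

naturals = list(range(10))
reordered = sorted(naturals, key=cmp_to_key(funny_cmp))
print(reordered)            # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
# Same underlying elements (the bijection does not change them),
# different order type: now there IS a greatest element.
assert max(reordered, key=cmp_to_key(funny_cmp)) == 0
```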
I would like to call something out here. In your objecting to bijections, you are seemingly objecting to the operation of changing the order of an ordered collection. I wonder if you are going too far. Of course we can rearrange things. With infinite collections (don't call them sets if you like) we can change their order properties by rearranging them. Are you objecting to this as a concept or idea? What would be left in mathematics if we threw out rearranging things?
Quoting Mephist
It can't be right or wrong any more than the game of chess can be right or wrong. BT is undeniably a theorem of ZFC. That's true even if you utterly reject ZFC. The novel Moby Dick is fiction, yet it is "true" within the novel that Ahab is the captain of the Pequod and not the cabin boy. Banach-Tarski is a valid derivation in ZFC regardless of whether you like ZFC as your math foundation.
Suppose I stipulate that ZFC is banned as the foundation of math.
Ok. Then Banach-Tarski is still a valid theorem of ZFC. So what is your objection to that? B-T is still fascinating and strange and its proof is surprisingly simple. One could enjoy it on its own terms without "believing" in it, whatever that means. I don't think the game of chess is "true," only that it's interesting and fun. Likewise ZFC.
Quoting Mephist
AHA! Yes you are right. But these are RIGID MOTIONS. That is the point. We are not applying topological or continuous transformations in general, which can of course distort an object.
We are applying RIGID MOTIONS. Isometries:
* Rotation around a point;
* Translation in a direction;
* Reflection through a plane.
These are rigid motions. They preserve measure. That is where you get the counterintuitive power of the theorem.
Let me say this again. We are NOT applying general bijections or topological transformations. We are applying only rigid motions in three-space, which are perfectly intuitive and reasonable. If you move an object from one location to another, or rotate it around a point, or reflect it in a plane, you preserve its measure if it has a measure. If not, then there is nothing to preserve.
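The distance-preservation is easy to sanity-check numerically. A small Python sketch (a rotation about the z-axis followed by a translation, standing in for a general rigid motion):

```python
import math

def rotate_z(p, theta):
    """Rotate point p = (x, y, z) by angle theta around the z-axis."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def translate(p, v):
    return tuple(a + b for a, b in zip(p, v))

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

p, q = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0)
d0 = dist(p, q)
# Apply a rigid motion: rotate, then translate.
motion = lambda pt: translate(rotate_z(pt, 0.7), (5.0, -1.0, 2.0))
d1 = dist(motion(p), motion(q))
assert abs(d0 - d1) < 1e-12   # distances (hence measures) are preserved
```

Since such motions preserve all pairwise distances, they preserve the measure of any set that has one. The counterintuitive content of BT is entirely in the pieces, not in the motions.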
Quoting Mephist
Well yes. Distance-preserving transformations (isometries) have the following properties:
* They preserve the measure of measurable sets; and
* They map nonmeasurable sets to nonmeasurable sets.
So clearly the concept of nonmeasurable set is in play. And these are a consequence of the axiom of choice. HOWEVER! If you reject AC and thereby banish nonmeasurable sets, you lose all of modern probability theory, which takes with it a big chunk of modern physics, not to mention the social sciences. People didn't get in a room and say, "Let's foist nonmeasurable sets in the world because we're evil." They did it to get formal probability theory off the ground. You'll have to take up the foundations of modern probability theory with Andrey Kolmogorov.
Quoting Mephist
I hope you won't get so far ahead of me that I despair of keeping up!
To focus the discussion, are you objecting to point-sets and ZFC in general? Then you object to a lot more than just B-T. But if you are interested in B-T for its own sake as a theorem of ZFC, we can talk more about that.
First of all, the point about formal systems. I completely agree that the formal proof is not the essential part of a theorem about geometry! I'll tell you more: I am pretty convinced that the formal proof alone does not contain the essential information necessary to "understand" the theorem, and I started to write on this forum because I was looking for somebody that has some results/ideas on this point. Please take a look at my last post on the discussion that I opened about two months ago: "Is it possible to define a measure how 'interesting' is a theorem?" (https://thephilosophyforum.com/discussion/5789/is-it-possible-to-define-a-measure-how-interesting-is-a-theorem).
Then, since nobody seemed to find the subject interesting, I started to reply to posts about infinity.. :-)
But the point of the discussion about infinity was more or less this one: "are the ZFC axioms about infinity right?", and my answer was: "you cannot use a formal logic system to decide which kind of infinity exists". And I told you that I "don't like" ZFC because, for example, of the Banach-Tarski paradox.
Let me try to explain this point: I am not saying that ZFC is wrong and Type Theory, or Coq, is right.
I am saying that the encoding of segments (and even surfaces and solids) as sets of points is not "natural", because you can build functions that have as input a set of definite measure and as output a set where measure is not definable: the transformations on sets and on measures are not part of the same category! Of course, you can say that this is because there really exist geometric objects that are not measurable, and it is really a feature of geometry, and not of ZFC.
Well, in that case you have to admit that there exist several different geometries, because there are sound logic systems based on type theory where all sets are measurable, lines are made of a countable set of "elementary" lines (integrals are always defined as series; elementary lines have the property that their squared length is zero), and Banach-Tarski is false.
So, we can ask which one of the possible geometries is the "right" one. The problem is that they are only mathematical models. From my point of view, this is exactly the same thing as mathematical models for physics (or maybe you want to consider Euclidean geometry as a model, but in that case non-measurable sets surely do not exist): the best mathematical model is the one that encodes the greatest number of physical results using the smallest number of physical laws, and none of them corresponds perfectly to physical reality anyway.
Quoting fishfry
I was looking for a formal proof of BT in ZF set theory because the critical part of BT is the function that builds the non-measurable sets (the one defined using the axiom of choice). You are right that in most cases the formal logic is not necessary at all, but in this case the result of the theorem depends in a critical way on the actual encoding of a continuous space as a set of points. In fact, I am pretty convinced that it's possible to encode Euclidean geometry in ZF set theory in a way that is perfectly sensible for all results of Euclidean geometry but makes BT false (I think I can do it in Coq, but surely you wouldn't like it.. :-) )
Quoting fishfry
That is not true: the definition of measure is purely algebraic, and can be used with ANY definition of sets: it's the same thing as for calculus, derivatives, integrals, etc..
Quoting fishfry
Well, I think you can easily guess: I studied electronic engineering (Coq can be used to prove the correctness of digital circuits), but I am working as a software developer. But I am also interested in mathematics and physics, and even philosophy, so I think I am not the "standard" kind of software developer.. :-)
I didn't say that I don't like point sets: I said that point sets are not a good model for 3-dimensional geometric objects, and I am not very original with this idea: in HOTT segments are not sets but 1-spaces. The word "set" is defined as a synonym for 0-space (or "discrete" space).
Quoting fishfry
Point-set topology and algebraic topology are equivalent, and all modern mathematics and physics since Grothendieck (https://en.wikipedia.org/wiki/Alexander_Grothendieck) are based on category theory: as you just said, they really don't care much about foundations... :-)
Both string theory and loop quantum gravity (the two most trendy at the moment....) (https://en.wikipedia.org/wiki/Loop_quantum_gravity) make heavy use of algebraic topology: not only do they not care what the objects of the theory "are made of", it is not even clear whether they are something geometric, or can perhaps be interpreted as emergent structures coming from something even more fundamental. This is the point of view of category theory: don't describe how objects are made, but only how they relate to each other.
Quoting fishfry
I was speaking about changing the topology of a set: any open set (that has no minimum or maximum) can be transformed into a set with a minimum (lower bound), not open and not closed. If the isometries used in BT are continuous transformations, they should preserve the topology of the sets that are transformed. My guess is that the ones used in BT cannot be continuous on sets that are not discrete (that is one of the things that I wanted to understand from the formal proof..). In other words, I think that to make that transformation you have to destroy the topological structure of the object (but I am actually not sure about this). If this wasn't possible (and, as I understand, it's not possible without axiom of choice) I think BT would be false.
Quoting fishfry
Absolutely, I agree.
Quoting fishfry
Yes, it's even more interesting because of the fact that it is an (apparent) paradox, and paradoxes are the most informative and interesting parts of mathematics.
Quoting fishfry
Well, I have my doubts here. These are rigid motions on countable subsets of points, that is, only on subsets with zero measure. I am quite sure that they cannot be continuous transformations on measurable subsets with non-zero measure. If you have a proof that they are, I am very interested in it (even an informal one :-) )
P.S. Algebraic definition of measure: see https://en.wikipedia.org/wiki/Sigma-algebra
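To illustrate the "purely algebraic" point: on a finite set the power set is a sigma-algebra, and the measure axioms can be checked exhaustively. A toy Python sketch (the weights are arbitrary choices, just for illustration):

```python
from itertools import combinations

X = frozenset({1, 2, 3})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1)
                         for c in combinations(s, r)]

sigma = powerset(X)          # on a finite set, the power set is a sigma-algebra
weight = {1: 0.5, 2: 0.25, 3: 0.25}
mu = lambda A: sum(weight[x] for x in A)

# Measure axioms, checked exhaustively on this toy space:
assert mu(frozenset()) == 0.0            # measure of the empty set is 0
for A in sigma:
    for B in sigma:
        if not (A & B):                        # disjoint sets...
            assert mu(A | B) == mu(A) + mu(B)  # ...add up (additivity)
print("mu is a measure on the power set of X; mu(X) =", mu(X))
```

Nothing in these axioms says what the elements of X "are" — which is the sense in which the definition is algebraic and independent of the encoding of sets.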
I tried to google for "constructive real numbers are not complete", or something similar.
I found this, for example: https://users.dimi.uniud.it/~pietro.digianantonio/papers/copy_pdf/RealsAxioms.pdf
And I found this: https://www.encyclopediaofmath.org/index.php/Constructive_analysis
I think this is what you refer to by "constructive reals". Is it?
Can you give me a link where it is written that they are not complete?
I am convinced that my definition of "constructivism" is not the same as your definition.
Well, here's a simple definition of what I mean by constructive logic:
=== A logic is called constructive if every time that you write "exists t" it means that you can compute the value of t. ===
I believe that you can define real numbers that are complete in a constructive logic. I think the example that I gave you using Coq is one of these. But I could be wrong: I am not completely sure about this.
Quoting fishfry
I googled this: "non archimedean fields are not complete" and the first link that came up is this one:
https://math.stackexchange.com/questions/17687/example-of-a-complete-non-archimedean-ordered-field
Probably, as they say, "The devil is in the detail". I read several times in the past about Abraham Robinson's hyperreal numbers, and I believe that I read somewhere that non archimedean fields are not complete. So I believe that, under appropriate assumptions, this is true. But why is this a problem?
Quoting fishfry
Hmmm... I understand what you mean:
- "constructive" reals are computable functions. So there are countably many of them.
- standard reals are the set of all (equivalence classes of) Cauchy sequences of rationals, so their cardinality is [math]2^{\aleph_0}[/math], the cardinality of the continuum
- nonstandard reals are much more than this (not sure about cardinality), since for each standard real there is an entire real line of non-standard ones.
Well, here's how I see it:
- "constructive" reals (with my definition) can be put in one-to-one correspondence with standard reals, only with a different representation (but I don't know a proof of this), and do not correspond to computable functions. It is true that if you can write "Exists x such that ..." then you can compute that x, but for most real numbers x there is no corresponding formula to describe them (and this is exactly the same thing that happens for non-constructive reals).
- Robinson's nonstandard reals are more than the standard reals because you exclude the induction principle as an axiom (so that "P(0)" and "P(n) -> P(n+1)" do not imply "forall n, P(n)"). But there are objects used in mathematics that are treated as if they were real numbers, but DO NOT have the right cardinality to be standard real numbers: for example the random variables used in statistics: https://en.wikipedia.org/wiki/Random_variable. So, they are more similar to nonstandard reals.
- The real numbers of smooth infinitesimal analysis are fewer than the standard real numbers, and even the set of functions from reals to reals is countable: basically, every function from reals to reals is continuous and expandable as a Fourier series. And there are infinitesimals.
Why such a strange thing? Well, for example, they correspond exactly to what is needed for the wave-functions and linear operators of quantum mechanics: there are as many functions as real numbers, and real numbers correspond to experiments (so there is a countable quantity of "real" numbers). And what's more important, a wave function contains a definite quantity of information, which is preserved by the laws of quantum mechanics.
So, from my point of view, there is not one "good" model of the real numbers, in the same way as there is not one "good" model of geometric space.
[ END OF PART TWO :-) ]
Yes, and this clarifies a lot of things about infinity:
"In first-order logic, only theories with a finite model can be categorical." (from https://en.wikipedia.org/wiki/Categorical_theory). ZFC is a first-order theory and it has no finite model (obviously), so it cannot be categorical. Ergo, you cannot use ZFC to decide the cardinality of the real numbers: "if a first-order theory in a countable language is categorical in some uncountable cardinality, then it is categorical in all uncountable cardinalities." (from the same Wikipedia page).
Then, you can say, the problem is in the language: let's use a second or higher order language, and you can discover the "real" cardinality of real numbers.
Well, in my opinion this is only a way of "hiding" the problem: it is true that if you assume the induction principle as part of the rules of logic, you get a limit on the cardinality of possible models (the induction principle quantifies over all propositions, so it's not expressible in first-order logic), but this is exactly the same thing as adding an axiom (in second order logic) and not assuming the induction principle as a rule. Ultimately, the problem is that the induction principle is not provable by using a finite (recursively computable) model: it's not "physically" provable.
That's exactly the same situation as for the parallel postulate in Euclidean geometry: you cannot prove it with a physical geometric construction (finite model), because it speaks about something that happens at infinity, and whether it is true or not depends on the physical model that you use: if computers that have an unlimited amount of memory do not exist, or, equivalently, if infinite topological structures do not exist, then the induction principle is false, and infinitesimals are real!
So, the sentence "if you use uncomputable (non-constructive) axioms in logic, you can decide the cardinality of the real numbers" sounds to me like "if you use Euclidean geometry, you can prove the parallel postulate".
HOTT is not a constructivist theory (with my definition of constructivism) because it uses a non computable axiom: the univalence axiom (https://ncatlab.org/nlab/show/univalence+axiom).
This was considered by Voevodsky as the main "problem" of the theory, and there are currently several attempts to build a constructive version of HOTT. One of them is cubical type theory (https://ncatlab.org/nlab/show/cubical+type+theory), but I don't know anything about it.
[ THIS WAS THE LAST PART :-) ]
Ok! In the meantime I just read through your first reply and I believe I can respond concisely and beneficially to it. Let me get to that. It might take me a day, or basically five minutes after putting it off for a day. But when I wait a day the five minutes worth is generally better. So let me work on reply #1 and read through the others.
I do think it's very cool that you learned Coq in EE. I had no idea! My background is that I learned set theory-based math back in the day, but my education did include introductory category theory as part of grad level algebra. In fact it is one of the high-water marks of my mathematical career that I totally grok the definition of the tensor product of modules as a universal property. I get a lot of mileage out of that one example!
I spent my professional career as a working software engineer and programmer. Along the way I was on Usenet in 1995 and started reading John Baez's amazing This Week's Finds in Mathematical Physics, which was essentially the world's first math blog. I was astonished to see that "n-categories" were being used to do loop quantum gravity. When I was in school categories were only for mathematicians, there were no applications yet. Now it's all over physics and computer science and biology and who knows what else.
I've followed all this at least via Baez over the years, and now that I've recently been reviewing and learning some more category theory, I am starting to get a vague understanding of what an n-category might be. And along the way of course I watched Voevodsky's videos and vaguely kept up with HOTT at least at a buzzword level. It helps a lot that I already knew what a homotopy is from algebraic topology back in the day. So I get the thing about paths and paths between paths.
I think we have a good meeting of backgrounds coming from totally different directions. Now I'll go see what I can do with the reply queue.
Because to me it seems infinity is not impossible. It just doesn't manifest in this world apparently, if one is to fully trust modern science.
The same is true for geometry: Archimedes discovered that you can calculate the circumference of a circle (knowing its radius) by treating it as a polygon with a very big number of sides; but to describe the symmetries of the circle you cannot treat it as if it were a polygon: you have to treat it as an "infinite" polygon.
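Archimedes' idea is easy to check numerically. A minimal sketch (my own illustration, not from any post in this thread): start from an inscribed hexagon and keep doubling the number of sides; the perimeters creep toward 2*pi but no finite polygon ever reaches it.

```python
import math

def inscribed_perimeter(doublings):
    """Archimedes-style estimate of a unit circle's circumference.

    Start from a regular hexagon inscribed in a circle of radius 1
    (side length 1, perimeter 6) and repeatedly double the number of
    sides using the half-side recurrence s' = sqrt(2 - sqrt(4 - s^2)).
    """
    sides, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        sides *= 2
    return sides * s

# The perimeters increase toward 2*pi but never reach it at a finite stage.
for k in (0, 4, 8, 12):
    print(k, inscribed_perimeter(k))
```

Each step is plain finite arithmetic; the "infinite polygon" only appears as the limit of the sequence, which is the potential-infinity reading discussed above.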
Actually, even flat geometric figures do not exist in reality, because everything in nature is 3-dimensional, and it's obvious that considering one of the dimensions to be of measure 0 is only an approximation. So, in reality, not a big deal...
The problem with infinity started to become disturbing with calculus: you need a single measure (a number) that is approximated to zero in one place of a formula and not approximated (treated as a finite number) in another place. For example, if you take a very big number of very little segments that form a circle, it's clear that their sum is finite (the length of the circle); but if you build a little square on top of every little segment, you get a very thin ring whose area is approximated to zero. In the end, a rational explanation of this fact was found: you have to split at the same time in all dimensions, so that at all times you work only with finite objects. This is a logically coherent explanation, except that if you have a lot of dimensions to consider at the same time, you get very complex formulas, because they contain a lot of terms of the "wrong" dimensionality (for example you have to sum areas with lengths). You know you could throw those terms away (and in practice everybody with some experience in calculus does throw them away in the calculations), but to be logically coherent with the underlying explanation of limits you "pretend" to have included them.
But the "elimination" of infinity from mathematics started to become really complicated when Cantor discovered that something similar happens with infinitely big discrete objects (instead of infinitely small continuous ones): natural numbers are not enough to enumerate all discrete sets. In fact, there is no way to enumerate the set of all infinite sequences of natural numbers. So, the infinity of all such sequences is bigger than the infinity of all numbers, and in the same way you can build infinitely many infinities, each one bigger than the previous one.
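Cantor's diagonal argument can be sketched in a few lines. Here an "enumeration" of infinite binary sequences is modeled as a function `listing(i, j)` giving the j-th bit of the i-th sequence (my own toy encoding, not Cantor's notation): whatever the listing is, the diagonal sequence is missing from it.

```python
def diagonal(listing):
    """Cantor's diagonal construction.

    `listing(i, j)` returns the j-th bit of the i-th sequence in some
    claimed enumeration of all infinite binary sequences.  The returned
    sequence differs from the i-th listed sequence at position i, so it
    cannot appear anywhere in the listing.
    """
    return lambda j: 1 - listing(j, j)

# Toy listing: the i-th sequence has a single 1, at position i.
listing = lambda i, j: 1 if i == j else 0
d = diagonal(listing)

# d disagrees with every listed sequence at the diagonal position.
print([d(j) for j in range(5)])  # prints the all-zero prefix [0, 0, 0, 0, 0]
```

The same flip-the-diagonal trick works against any candidate listing, which is why no enumeration can exhaust the sequences.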
Now, it's clear that you can neither count the uncountable discrete sets nor measure the infinitesimally small measures; but nevertheless, these imaginary infinite models have very interesting symmetries, which happen to correspond with surprising accuracy to the symmetries that are present everywhere in nature. Without infinities there are no symmetries and everything is a big mess (it's something like trying to discover the laws of Euclidean geometry using only polygons).
So, nobody really knows if infinities exist in nature or what they are really like, but they are essential in mathematics for discovering very elegant models that often correspond to laws of nature, and I think the idea that mathematics should work only with finite numbers, because infinities don't really exist, has been abandoned forever.
Quoting Mephist
I didn't read that thread because I thought the answer was trivial. You can measure citations of a paper, and that is commonly done. It works the same way as Google's page rank algorithm. Alternatively, it's unanswerable, because sometimes it takes decades or even centuries for the value of a mathematical insight to become apparent. But I'll glance at that thread if you like, once I catch up with my mentions. Which never happens.
Quoting Mephist
Well of course no axiomatic foundation for math can be right. People did math for thousands of years without any foundation at all, and now we've got several. But I would draw a line in the sand here and say that set theory has been extraordinarily successful as a theory of infinity, and I don't know that HOTT or Category theory are any improvement at all.
Of course no foundation can be "right" about infinity until the physicists discover actual infinity in the universe and report back to us. Just as we didn't know whether Euclidean or non-Euclidean geometry was right until Einstein's great breakthrough.
Regarding B-T, surely it's not right about the world (unless it is -- work for the physicists again). But as a theorem of ZFC, it's as right as a legal position in chess. Which is to say that it's the endpoint of a legal sequence of moves in a formal game.
Quoting Mephist
That's good, because the claim is meaningless. There is no right or wrong about it. Only useful or not, beautiful or not, interesting or not, fashionable this century as opposed to last.
Quoting Mephist
There was a poster here a while back who gave me some great insights along these lines, before becoming so insufferably rude that I could no longer interact with him. I learned that Charles Sanders Peirce noted that the set-theoretic continuum can't possibly be the right model of a continuum, because a continuum must have the property that every part of it is the same as the whole; and a set, being decomposable into the union of its singleton points, fails this test.
If that's what you mean, I take your point. But knowing what's the "right" model of the continuum is beyond my own personal pay grade. I know that Weyl and Brouwer and others had many deep thoughts along these lines. I know little of these matters.
What I do know is that the set-theoretic continuum, having won the 20th century, underpins all modern physical theory. If one seeks to overthrow that foundation, one has much work to do.
Quoting Mephist
If we accept the point-set foundation, then it's reasonable that not every point-set is measurable. After all, there are some wild point-sets. If you reject point-sets, then I guess not. But nonmeasurable sets in ZFC depend on the axiom of choice, and perhaps that's what you object to. It's AC that gives us sets whose only property is existence, with no clue as to how to construct them or identify their elements.
Quoting Mephist
Well yeah, but the Euclidean/non-Euclidean example gives us a good historical datapoint. There are lots of geometries but only one "true geometry" of the universe. But I'm not interested much in applications. The physicists can do their thing. I'll wear my formalist hat and steer clear of ontological issues.
Quoting Mephist
I'll take your word for that. Are you talking about SIA? What of it? There are variations of chess too. It doesn't bother me.
But why are you hung up on Banach-Tarski? If you don't like point-sets, your objections to standard math would be much wider than that.
Quoting Mephist
As meaningless as the question of which variant of chess is the right one. Or whether in some fan fiction, Ahab is the cabin boy and not the captain of the Pequod, and whether that's right or wrong.
What do you mean by right one? Right as in best foundation for math? Or right as in true about the actual world? I don't share your focus on trying to figure out what's right about things that you and I will never be able to know, unless the next Einstein hurries up and gets born.
Quoting Mephist
That's a pragmatic requirement I don't share. You think math is required to be bound by physics. Riemann kicked the hell out of that belief in the 1840's. You think math is physics. Mathematicians don't share that opinion. Math is whatever mathematicians find interesting. If the rest of the world finds an application, so much the better, but that is never the point of research in pure mathematics.
Quoting Mephist
There could never be such. ZFC is necessary.
Quoting Mephist
Right. So why look for a proof in ZF that could never exist?
Quoting Mephist
I really don't follow your point here at all. B-T is the least of it. If you reject point-sets as the foundation of math, that's fine, but then you have to rebuild a LOT of stuff. B-T is the very least of it. I don't understand why you regard B-T as uniquely interesting in the context of completely re-founding math on non point-set principles.
Quoting Mephist
Why wouldn't I like it? Why would I even care? You seem to be tilting at windmills that I'm not defending. All I've ever said about set theory is that it won the 20th century and that Planck noted that science progresses one funeral at a time. Surely that shows my open-mindedness about historically contingent trends in foundations.
And as I already pointed out, if set theory is no longer required to be foundational, it becomes interesting for its own sake. It makes no difference to practitioners of set theory whether it's foundational or not.
Quoting Mephist
I don't know what you mean but reading ahead I think I do so let's defer this for a moment ...
Quoting Mephist
Well, engineers definitely have a pragmatic view of math. And one of my ongoing theses is that there are a lot of people studying computer science these days who thirty years ago would have studied math. So the mathematical point of view is somewhat lost among many people these days. But when you said that the purpose of a math foundation is to be able to found physics, that's just wrong. It's not how mathematicians view it at all.
Quoting Mephist
Ok. I can't argue the point but even if I could I don't have much reason to.
Quoting Mephist
Ok. I wasn't aware that HOTT had gotten to the point of replacing differential geometry in general relativity, nor functional analysis in quantum physics. Are you making those claims? If so I am not sure I believe you. I watch a lot of physics videos these days and they're all differential geometry and functional analysis. The Schrödinger equation is functional analysis, not homotopy type theory. I think you're making claims that are more aspirational than true of the current state of the art.
Quoting Mephist
Well yes and no. Analysis hasn't been categorized much. But physics? Again I think this is an aspirational claim. Loop quantum gravity uses n-categories but as far as I know relativity and quantum theory use classical 20th century math.
Quoting Mephist
Yes ok but LQG is not the standard theory yet. You're being aspirational again.
Quoting Mephist
You are misapplying the well-ordering theorem. Bijections can radically change the order properties of a set. And the well-ordering theorem is equivalent to the axiom of choice, so what of it? Reject one, reject the other. Take it up with Zermelo.
Quoting Mephist
Isometries preserve measure. They're rigid motions in space.
Quoting Mephist
I'm not sure exactly what you mean by continuous on sets that aren't discrete. All isometries are continuous and measure preserving (on sets that have a defined measure), period.
Quoting Mephist
I don't think so. Isometries are continuous since they preserve distances, and they preserve measure.
Quoting Mephist
Isometries aren't mysterious, they're rigid motions of space. Flips, rotations, translations. Clearly continuous, measure preserving, "shape-preserving" if you like. If you translate a triangle it's still a triangle. This is not mysterious nor does it require any esoteric philosophical assumptions.
Quoting Mephist
It's a veridical paradox, meaning that it is NOT actually a contradiction, it merely violates our intuition. It should be named the Banach-Tarski theorem, since that's what it is. Now if you want to argue from that that the axiom of choice should be rejected, or points should be rejected, or sets should be rejected, that's fine. But it doesn't alter the fact that B-T is a theorem of ZFC, and moreover, not an extremely difficult one.
And as I mentioned earlier, it rests on the paradoxical decomposition of the free group on two letters, which at some point I should talk about, because this is a very strange paradox that has nothing at all to do with points or the axiom of choice or anything else questionable. It's very very simple and natural.
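To make that concrete, here is a small sketch of the paradoxical decomposition of the free group F2 (my own illustration, using generators a, b and inverses A, B on a finite window of the group; the B-T proof uses concrete rotation matrices instead). Left-multiplying the piece of words starting with A by a yields every word that does not start with a: one of four pieces swells to cover three of them plus the identity.

```python
from itertools import product

GENS = "aAbB"                                  # a, b and their inverses A, B
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(w):
    """Cancel adjacent inverse pairs (e.g. 'aA') until the word is reduced."""
    out = []
    for c in w:
        if out and out[-1] == INV[c]:
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def words(max_len):
    """All reduced words of length <= max_len: a finite window into F2."""
    found = {""}
    for n in range(1, max_len + 1):
        for tup in product(GENS, repeat=n):
            w = "".join(tup)
            if reduce_word(w) == w:
                found.add(w)
    return found

# W(c) = the "piece" of words beginning with the letter c.
F6 = words(6)
W = lambda c: {w for w in F6 if w.startswith(c)}

# Left-multiplying the A-piece by 'a' gives every word NOT starting with 'a'
# (within the shorter window, since the multiplication shortens these words).
shifted = {reduce_word("a" + w) for w in W("A")}
assert shifted == {w for w in words(5) if not w.startswith("a")}
print(len(W("A")), "words expanded to cover", len(shifted), "words incl. identity")
```

No points and no axiom of choice anywhere: the "one piece becomes most of the group" phenomenon is already present in pure word combinatorics.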
Quoting Mephist
No not so. An isometry is a rigid motion on countable and uncountable sets. If I take the unit square and translate it 3 units to the right and 3 units up, the shape and measure are preserved. All measures are preserved.
I think you should open up a separate thread on Banach-Tarski so we could talk about these issues without all the philosophical baggage. You are misunderstanding a lot of basic issues. If you have a point set of any cardinality and measure, an isometry preserves its cardinality and measure. And shape, even though that's not a technical definition.
Quoting Mephist
You're simply wrong on this technical point. You should open a separate thread on B-T because we can't walk through that proof while you're claiming physics is based on HOTT and that mathematicians have point sets wrong. These are very different issues.
Regarding B-T you are misinformed on the basic math, on issues that are very clear and simple, and that we should discuss separately.
Quoting Mephist
That a reflection, rotation, and translation are continuous? A freshman calculus student could work out those proofs. Measure preserving? A little trickier but not difficult.
Come on. Are you doubting that a translation is continuous? A reflection? A rotation? Why are you making these claims whose disproof is so elementary? I don't follow your logic here. Don't mean to be piling on, but after all this heavy-duty category theory and philosophy you are missing some very elementary freshman calculus.
You have a set of points which you can think of as vectors in the plane or in space. You add a constant vector to each point. The resulting point set has the same shape and measure, and the transformation is continuous. I cannot accept that this is not obvious to an engineering major.
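For whatever it's worth, the claim is a one-screen computation. A minimal sketch (my own, using a unit square and the translation by (3, 3) mentioned earlier): every pairwise distance, and hence shape and measure, is unchanged.

```python
import math

def translate(points, v):
    """Add the constant vector v to every point: a rigid motion."""
    return [tuple(a + b for a, b in zip(p, v)) for p in points]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
moved = translate(square, (3.0, 3.0))

# Every pairwise distance is preserved exactly (these floats are exact),
# so the image is congruent to the original: same shape, same measure.
for p1, m1 in zip(square, moved):
    for p2, m2 in zip(square, moved):
        assert dist(p1, p2) == dist(m1, m2)
print(moved)
```

Rotations and reflections pass the same check, since they too preserve all pairwise distances; that is exactly what "isometry" means.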
Quoting Mephist
Yes ok what of it? From the basic axioms of measure one proves the existence of a nonmeasurable set. And from that, Banach-Tarski.
I gather you are trying to make the claim that measure theory is "algebraic." That's a point of semantics. From countable additivity plus the axiom of choice, the existence of a nonmeasurable set falls out. Have you seen the proof? It's not difficult. Well it's not difficult after one's been through it a few times. Like everything else. But it's a very cool proof. It's a paradigm of all axiom of choice proofs.
But the bottom line, and this has been a long post so here is the tl;dr:
Your interest in Banach-Tarski is orthogonal to all the philosophical and foundational issues; and the confusion generated by conflating these things is causing you to misunderstand some extremely elementary points of math, such as the continuity of translations and rotations and reflections.
Ok one post down, so many more to go ... thanks for reading.
ps -- Here is the proof that an isometry preserves a Hausdorff measure, of which the measure on Euclidean 3-space is an example. In the general case it's not true, but that case doesn't concern us. https://math.stackexchange.com/questions/695492/isometry-vs-measure-preserving
pps -- Simpler theorem on point. Isometries preserve Lebesgue measure of Euclidean space.
https://math.stackexchange.com/questions/242837/lebesgue-measure-is-invariant-under-isometry
Yes, I really wasn't interested in speaking about Banach-Tarski. I took it only as an example, maybe the wrong one. But on the other hand I am convinced that what I wrote is correct, so I am a little upset at not being able to convince you (and it's not about you: probably I didn't convince anybody...). I could start a separate discussion and try to use formal proofs instead of explanations, but I am afraid it's not appropriate for a philosophy forum (I still haven't read how to write symbols on this site). Maybe I'll make a last attempt tomorrow, and then stop talking about BT. But I have no time now. However, thank you for replying to my posts.
Well I'm three of your posts behind I think so I'm planning to get to them soon.
But I'm not sure what you mean by "I am convinced that what I wrote is correct, so I am a little upset to not being able to convince you". About what? There are two things going on:
1) The business about ZFC and point-sets being "wrong" in some way; and
2) Your misunderstanding of isometries, which preserve measure of all measurable sets.
I don't know which one you want to talk about. I don't know what you think you didn't convince me of. I see foundations as historically contingent and not all that important to what working mathematicians do anyway. And you certainly didn't convince me that foundations are for the purpose of modeling physics; that's not mathematically true at all, it's something only a physicist or an engineer would believe :-)
So maybe you can concisely tell me exactly what thesis you are trying to convince me of. No, formal proofs would be the wrong way to go without a simple one or two sentence description of the thesis being put forth.
But if I understood you, you believe that the rightness of a foundation can be measured by how well it lets us model physics. And that's 100% opposed to what I think foundations are. Math is not held to the standard of physics, hasn't been since Riemann.
So just tell me as clearly as you can what it is you're trying to convince me of. That will be helpful to me because you covered a lot of different topics in the post I replied to and I spent a lot of time trying to explain that rigid motions are continuous and preserve measure, and if you don't care about those things we shouldn't lengthen our posts with them.
I have a lot more to say about this second of your posts (of the four that I'm working through) but I wanted to just catch this up first because I already proved this earlier but with so much verbiage back and forth it was easy to miss.
You know Chaitin's Omega? Very cool number, because it's actually a specific example of a noncomputable real number, or rather any one of a specific class of noncomputable reals. We can define it because it's a definable real even though it's not computable. It's not computable because if it were, it would solve the Halting problem, which Turing showed we can't do.
Now, consider a sequence whose n-th term is simply the n-th truncation of Omega's decimal expression. In other words if we take pi, we can form the sequence 3, 3.1, 3.14, 3.141, ... This is a Cauchy sequence that converges to pi. This is an example of a Cauchy sequence of rationals that fails to converge to a rational, showing that the rational numbers are not Cauchy complete.
Likewise we can form the Cauchy sequence of rational numbers that are the n-th truncations of the decimal digits of Omega. This is a Cauchy sequence of computable numbers, because it's easy to see that a rational number is computable. The algorithm that cranks out the decimal digits is just grade school long division.
But this is a Cauchy sequence of computable numbers that fails to converge to a computable number! So the computable real numbers are not Cauchy-complete. QED.
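Omega itself cannot be computed, but the same construction can be watched in miniature with sqrt(2) in place of Omega (my substitution; the logic of the argument is identical): each decimal truncation is rational, hence computable, the sequence is Cauchy, and yet the limit lies outside the field the terms live in.

```python
from fractions import Fraction
from math import isqrt

def truncation(n):
    """n-th decimal truncation of sqrt(2), as an exact rational.

    Each term is rational (hence computable); the sequence is Cauchy
    (consecutive terms differ by less than 10**-n), but its limit,
    sqrt(2), is irrational: the rationals are not Cauchy-complete.
    """
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

terms = [truncation(n) for n in range(6)]
print([str(t) for t in terms])

# Cauchy condition on consecutive terms.
assert all(abs(terms[n + 1] - terms[n]) < Fraction(1, 10 ** n) for n in range(5))
```

For Omega the truncations are still individually rational and the sequence is still Cauchy; the only difference is that no algorithm can generate the sequence, which is what punches the "holes" in the computable real line.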
Now the big problem IMO with the computable reals is that you can do this for every noncomputable real. Every noncomputable real is the limit of a sequence of rational (hence computable) numbers that fails to converge to a computable real number. The computable real line has only countably many points and uncountably many holes. It's the swiss cheese of number lines and a terrible model of the ancient idea of the continuum. I confess that I do not understand why Brouwer and Weyl were not greatly troubled by this.
The other thing I wanted to mention about your post is that you found an example of a non-Archimedean ordered field that's Cauchy complete. I stand corrected!! I'm apparently wrong on this point. But I thought otherwise and now I have to try to understand why I've seen a proof that a non-Archimedean field can't be complete. This is a mystery but thank you for digging up this example, I have to study it.
Either way it's not too critical. In fact the Robinson hyperreals are NOT Cauchy-complete, so my Goldilocks remark still stands even if I am wrong about the general case.
I think in that in an ideal mathematical language, Chaitin's Omega wouldn't be stateable. To say 'Omega is definable but non-computable' is surely not a statement about a number, but a statement about the syntactical inadequacy of our mathematical language for permitting the expression of Omega.
For if a sentence is understood to express logical impossibility, then it cannot be an empirically meaningful proposition. It can only serve as a rule of grammar for forbidding the syntactical construction of certain sentences in order to preserve the semantic consistency of the respective language.
First of all, let's use the definition of real numbers as Cauchy sequences.
The definition of the Omega number is this one: $\Omega_F = \sum_{p \in P_F} 2^{-|p|}$ (taken from here: https://en.wikipedia.org/wiki/Chaitin%27s_constant - sorry, I still haven't learned how to correctly type formulas..). The partial sums form a Cauchy sequence, but there is a missing piece: the function "halt(p)" that takes a string representing the program p and returns true if it halts and false if it doesn't. Since the function halt(p) is not computable (but well defined), Omega is not computable either.
In a non constructivist logic, you define Omega in this way: Let "halt(p)" be the function that returns true if the program p halts. Then, Omega is defined to be the previous formula.
In a constructivist logic, you do exactly the same thing: let "halt(p)" be the function that returns true if the program p halts (which in constructive logic means: let's assume that the function "halt(p)" exists). Then Omega is defined to be exactly the same formula.
That means: IF you give me a function "halt(p)", THEN I can give you Omega. This is simply a function that takes as input a function from strings to booleans (strings represent programs) and returns a Cauchy sequence. Whether "halt(p)" is computable or not makes no difference to the constructivist aspect of the logic: you ASSUME you are given this function (it's a hypothesis); you don't have to build it! (neither in constructivist nor in standard logic).
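That conditional reading can be written down directly. A sketch (the name `omega_approx`, the toy program list, and the fake oracle are all my own illustration): the function itself is perfectly constructive; all the non-computability lives in the `halt` argument you are assumed to be handed.

```python
from fractions import Fraction

def omega_approx(halt, programs):
    """Partial sum of Chaitin's Omega_F = sum over halting p of 2**-|p|.

    `halt` is a claimed halting oracle (program string -> bool) and
    `programs` a finite list of prefix-free program strings.  Both are
    hypotheses, exactly as in the constructive reading above: IF you
    hand me halt(p), THEN I hand you back approximations to Omega.
    """
    return sum(Fraction(1, 2 ** len(p)) for p in programs if halt(p))

# Toy stand-ins (hypothetical): three prefix-free "programs", and an
# oracle that says the first two halt.  A real halt() cannot be computed.
programs = ["0", "10", "110"]
halt = lambda p: p in ("0", "10")
print(omega_approx(halt, programs))  # prints 3/4
```

Extending `programs` with longer and longer strings yields better and better rational approximations, i.e. the Cauchy sequence mentioned above, always conditional on the oracle.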
The other essential point is: the fact that you cannot build a given real number (because you cannot build the function "halt(p)") DOES NOT MEAN that there is a missing real number, and thus a hole in the real line, which would then no longer be continuous! The real numbers are uncountable (using the standard definition, of course), so it's obvious that there is an uncountable quantity of real numbers that ARE NOT DEFINABLE. If they are not definable, it means that you cannot prove that they exist (classical logic), you cannot construct them (constructivist logic), and you cannot even prove that they don't exist! You simply cannot speak about them in the language!
P.S. To be even more clear (I know that I am repeating myself):
constructivist logic does NOT mean that the only numbers that exist are the ones that can be built! It only means that to prove "for all x such that ... there exists y such that ..." you have to build a computable function that takes an x as input and produces a y as output. x can be a number that is not computable, or even not definable. You simply assume that somebody gives you x, and you can use it to build y.
( meaning: how is it possible to change the volume of an object by applying only isometric transformations? )
Let's take as reference the most complete proof of the theorem that I was able to find: http://www2.math.uconn.edu/~solomon/BTFinal.pdf
STEP 1. We define a group of rotations G (page 11: "The Group G").
The group G is defined as the set of all matrices that can be obtained as a finite product of the matrices "phi" and "psi" (Theorem 3.3 on page 12).
So, G is a discrete group. "psi" contains a parameter "theta" (a real number), which is assumed to be a FIXED arbitrary number, with only one condition: "cos theta" is a transcendental number (last paragraph on page 16). So, by the way, theta cannot be zero.
G is a discrete (countable) set of rotations around the origin (point (0,0,0)), that are ISOMETRIES ON R3. No doubt about this.
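That agreed-upon step is easy to verify numerically. A sketch (my own: I use theta = 1 radian, since cos 1 is transcendental by Lindemann-Weierstrass, and generic z-axis and x-axis rotations as stand-ins for the paper's phi and psi): every finite product of the two generators preserves distances.

```python
import math, itertools

def rot_z(t):
    """Rotation by angle t about the z-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    """Rotation by angle t about the x-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    return tuple(sum(m[i][k] * p[k] for k in range(3)) for i in range(3))

def dist(p, q):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(p, q)))

theta = 1.0                      # cos(1) is transcendental, as required
phi, psi = rot_z(theta), rot_x(theta)

p, q = (1.0, 0.0, 0.0), (0.0, 0.6, 0.8)
d0 = dist(p, q)

# Every finite product of phi and psi is an element of G, and each one
# leaves the distance between p and q unchanged (up to float round-off).
for word in itertools.product([phi, psi], repeat=6):
    g = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for m in word:
        g = matmul(g, m)
    assert abs(dist(apply(g, p), apply(g, q)) - d0) < 1e-9
print("all", 2 ** 6, "words checked; distance preserved")
```

So, up to numerical error, every element of G acts as a rigid motion on the whole of R3, not only on selected subsets; the dispute below is about the pieces the sphere is cut into, not about the rotations themselves.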
STEP 2. Theorem 3.5 on page 13.
(This is the surprising part about the decomposition of G, but I will not describe it in detail, since you know very well how it works)
We can decompose each element of G (each rotation, which we supposed to be built using TWO basic rotations) in a UNIQUE way as a sequence of THREE rotations: the two that we considered before ("phi" and "psi"), plus "phi squared", that is, "phi" applied twice.
As a consequence of this, you can split the set of rotations G in 3 parts, named G1, G2 and G3 ( Theorem 3.7 on page 17 ).
STEP 3. We use the partition of G to define a similar partition of the unit sphere in R3, named S (that is a 2-dimensional surface) - ( Theorem 4.1 on page 21 ).
First of all, we consider all possible rotations contained in G (which, remember, is a countable set). Each of them fixes a pair of points of the sphere (the ones that lie on the axis of rotation). So, we define P to be the set of all points that lie on the axis of rotation of some rotation in G. This is obviously a discrete (countable) set of points, which has zero measure: it's not a 2-dimensional object.
Then, we consider all other points of S, excluding the set P (the set S \ P)
From page 22: "For each x ∈ S \ P, let G(x) = {ρ(x) : ρ ∈ G}".
It means: take a point x of S \ P and build the set G(x), the set of all points you can reach by applying all possible rotations in G. This is obviously a discrete (countable) set too. And it is a set made of points that are distant from each other (the angles of the rotations are finite). It is not a connected piece of S \ P, and it is not an open set. It has zero measure.
Here's the critical part (still on page 22): "We can also notice, by conducting the following calculation, that any two sets G(x) and G(y) are either disjoint or identical". OK, so the orbits induced by G are a partition of S \ P.
A few lines later: "This proves that the family of sets F = {G(x) : x ∈ S \ P} is a partition of S \ P".
The sets of this family are the orbits induced by G. How many orbits are there? Obviously, there are uncountably many orbits, right? (I think this is why the author calls it a "family" and not a "set").
So, now we can prove that "this partition is equivalent to one formed by the desired sets S1, S2, and S3". OK, but what are the 3 parts of this partition? Each part is made of an uncountable number of points, and contains at least one point from each orbit. In other words: the set S \ P is split by splitting the single orbits (each made of a countable set of points), and NOT by splitting the surface S \ P into an uncountable number of open sets. It is not proved that the partition of S \ P preserves its topology.
On the contrary, I think that it can be proved that this partition cannot preserve the topology of S in any point of S. Or in other words, it cannot be continuous. That means that for each point x in S and for each real number epsilon greater than zero, you can always find a point of each of the sets S1, S2 and S3 contained in the circle with center x and radius epsilon.
Why? Because the orbits induced by the rotations of G are unstable: if you start from two points arbitrarily near to each other, you can end up at points whose distance is greater than 1/2, if you take a big enough number of steps.
(OK, to prove that each open set contains at least one point of each of the subsets you should show that the points of an orbit are distributed over all the sphere without leaving "holes", but this is a detail: for my argument to be valid it is enough that this happens for at least a measurable portion of the sphere)
That is the point: there is no doubt that the rotations of G are isometric operations. The problem is that they are not performed on connected pieces of the sphere. They are performed on discrete sets of points (distant from each other) with zero measure. They do preserve that 0 measure, but the measure that you should preserve is the one of open subsets of S, and that measure is not preserved.
The "trick" of regarding an infinite sequence of points as the same set of points minus the first one can be applied only to countable sets, and G is a countable set. And the orbits of G are countable sets. But the transformation that we use to split S into 3 pieces is NOT a rotation, it is NOT isometric, and it is NOT even continuous!!
OK, this is the end. I don't think I can explain my argument about BT any better than this.
Can you show me a physical theory, or a result of a physical theory, that is somehow derived from the fact that a continuous line is made of an uncountable set of points?
Formal logic (currently assumed as the foundation of mathematics) is only dependent on one very fundamental fact of physics (that usually is not regarded as physics at all): the fact that it's possible to build experiments that give the same result every time they are performed with the same initial conditions.
Mathematics (what is called mathematics today) is the research of "models' factorizations" that are able to compress the information content of other models (physical or purely logical ones). A formal proof makes only use of the computational (or topological) part of the model. The part that remains not expressed in formal logic is usually expressed in words, and is often related to less fundamental parts of physics, such as, for example, the geometry of space.
Riemann understood that the concepts of "straight line", measure, and the topological structure of space are not derivable from logic, but should be considered as parts of physics.
In the future, when mathematicians start to use quantum computers to perform calculations, I believe that even the existence of repeatable experiments will no longer be taken "a priori", but will be seen as an even more fundamental part of physics. So, there will be a quantum logic that is more powerful than standard (or even constructivist) logic, at the price of not being able to be 100% sure that a proof is correct (but you will be able, for example, to say that we are sure about this theorem with 99% probability).
Surely your (and most other people's) reply to what I just said is that "this is no longer mathematics". Well, at the time of Euler topology was not mathematics either.
OK, leaving aside BT. In my opinion there are no interesting results that depend in an explicit way on the fact that a continuous line is defined as an uncountable set of points. Are there? You could assume that a line is made of infinite trees, or of functions from integers to integers, and everything could be made to work in the same way: if you assume the axiom of choice and the existence of non-measurable "gadgets", you can take a line to be made of whatever "gadgets" you want.
A point has length zero. How many length zero things can you fit on a line segment length one?
1 / 0 = UNDEFINED
Anything with length zero does not exist, so it's like asking how many non-existing things fit within an existing thing. An answer of UNDEFINED seems right.
But how do you think you can use a value that is not defined? Not defined simply means that you did not define what that thing is: you cannot reason about something without defining what it is, can you?
If we instead use a more sensible definition of a point, say length 0.1, then we can reason about it - we get 10 points on a line segment length 1.
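The counting argument above has a direct mechanical analogue. A sketch (the point lengths are illustrative): dividing by ever smaller point lengths only ever yields larger finite answers, while dividing by a literal zero produces no answer at all.

```python
# How many points of a given length fit on a unit segment?
def points_on_unit_segment(point_length):
    return 1 / point_length

# With a finite point length the answer is finite:
print(points_on_unit_segment(0.1))  # 10 points on a segment of length 1

# Shrinking the point length only yields ever larger finite answers
# (the "potential infinity" of the limit):
for length in [0.01, 0.0001, 0.000001]:
    print(points_on_unit_segment(length))

# A length-zero point makes the question itself ill-posed:
try:
    points_on_unit_segment(0)
except ZeroDivisionError:
    print("UNDEFINED")
```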
You can say that idealizing the real world is misleading, but after several centuries of mathematical models that work extremely well, I think you can be convinced that this is the right way to go.
The maths that physicists use (calculus, mainly) should only need to depend on the concept of potential infinity; actual infinity should not be needed anywhere in the physical sciences.
There are differing definitions of the actually infinite, one is: 'limitless or endless in space, extent, or size; impossible to measure or calculate'. The definition itself essentially precludes its existence in maths.
There are lots of reasons why actual infinity is illogical. Looking at just one - the division operation on transfinite numbers:
We can imagine the division operation as compressing space: for example, division by 2 would compress the numbers on the real number line by a factor of 2: 10 would become 5, 4 would become 2, and so on. The convention is that ∞ / (any number) = ∞, so dividing the real number line by any number does not decrease its 'overall' size - the endpoints at 0 and ∞ are unaffected, but every finite number on the real number line must move closer together - this is clearly illogical from a physical standpoint.
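The convention that infinity divided by any finite nonzero number is still infinity is baked into IEEE-754 floating point, which Python's `math.inf` follows. This is just a demonstration of the convention, not an endorsement of it:

```python
import math

inf = math.inf

# Division by any finite nonzero number leaves infinity unchanged:
print(inf / 2)          # inf
print(inf / 1_000_000)  # inf

# Every finite number, by contrast, is genuinely compressed:
print(10 / 2, 4 / 2)    # 5.0 2.0

# And inf / inf is not assigned any value at all -- it is "not a number":
print(inf / inf)        # nan
```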
What I am arguing about the model of real numbers based on ZFC is not that it's not consistent because of actual infinity, but that it's unnecessarily complex: the same thing can be achieved with simpler axiomatic theories, that seem to be more "natural" to encode the relevant theorems of mathematics.
This, as fishfry is trying desperately to make me understand :wink:, doesn't make much sense from the point of view of logic, because a formal logic system is like a definition of a mathematical structure: there are no good or bad definitions, only definitions that are more or less appropriate because of their complexity.
Well, in my opinion there is a concrete way to judge whether a definition, or a formal logic system, contains unnecessary superstructure or not. And what I was trying to understand with my poorly understood discussions "Is it possible to define a measure of how 'interesting' a theorem is?" and "Is mathematics discovered or invented?" is whether somebody knows about other results/ideas in that direction.
P.S. http://mathworld.wolfram.com/ContinuumHypothesis.html
I think that something can be consistent and illogical at the same time. For example, if I define a simple maths system with only one number, 1, and one operator, +, then I can axiomatically define 1+1=1. It's consistent but not logical. That sums up my feelings about actual infinity in set theory: maybe the axioms are such that it's consistent (or appears to be consistent)... but it's not logically correct.
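The one-number system described above can be written down as a tiny structure: an addition table over the single element 1, with 1 + 1 = 1 as its only rule. Nothing in it ever produces two conflicting answers for the same expression, which is the minimal sense in which it is consistent, whatever one thinks of its logic. A hypothetical sketch:

```python
# A toy arithmetic with one number, 1, and one operator, +,
# axiomatically defined so that 1 + 1 = 1.
ADD_TABLE = {(1, 1): 1}

def toy_add(a, b):
    return ADD_TABLE[(a, b)]

# The system never yields two different answers for the same expression,
# so it is consistent in that minimal sense:
assert toy_add(1, 1) == 1
assert toy_add(toy_add(1, 1), 1) == 1  # nesting stays inside the system
print("consistent:", toy_add(1, 1))
```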
https://en.wikipedia.org/wiki/Finitism
Lots of people doubt the reality of actual infinity. It just happens that not so many finitists frequent this site.
Actual infinity means in a strict mathematical sense a number greater than any other. But there is no greatest number. So there is no actual infinity.
Actual infinity in a physical sense means something that goes on forever. But only in our minds can things go on forever / be infinitely dense etc... these things are impossible in reality.
Yes. Consistency and truth are not the same thing. That is one of the main consequences of Gödel's incompleteness theorems https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
Truth is about the interpretation of a proposition in a concrete model. I am not even sure if there is a commonly accepted definition of what a model is, or a widely accepted definition of truth. I believe the definition of truth is more in the domain of philosophy: working mathematicians treat their (abstract) mathematical models as if they were real, concrete physical models, and the propositions as concrete physical experiments that can be tested to obtain a result, true or false.
Consistency, on the contrary, is defined as a property of a formal logic system: a formal system is inconsistent if a proposition can be proved to be both "true" and "false". "True" and "false", in this case, are purely syntactic objects: strings of characters that are the result of the execution of an algorithm.
If a proposition cannot be proved to be "true" or "false", it (its interpretation as an experiment, let's say) can be true for some models and false for other models. If a proposition can be proved, it has to be true for all models. If its negation can be proved, it has to be false for all models.
Quoting Devans99 I believe that the word "logical" is really used as a synonym of "consistent", but only in colloquial English. In mathematics I think it's not used at all.
A simple math system may be consistent even if it doesn't have any model (except the "purely syntactical" one: every consistent logical system has a model obtained by regarding the syntactical symbols of the language as the objects of the model). You can prove the proposition "1+1=1", but you cannot say that it's true if you don't specify in which model. (By the way, a formal logic system is not a completely arbitrary set of rules and symbols: symbols for negation, true, and false have to be defined; otherwise the definition of "consistency" itself makes no sense.)
P.S. Sorry, I just realized that what I wrote about the definition of consistency is wrong: a formal system is inconsistent if, for some proposition, both the proposition and its negation can be proved. I think the strings "true" and "false" (or something equivalent) are not required to be part of a formal logic system (well, everything depends on which logic we are speaking about: there are too many of them... :-) ).
Any truth in an axiomatic system is only as true as its axioms (and the axiom of infinity in set theory seems false), whereas truth in the real world means more than that - it means it fundamentally reflects the way the real world is.
So set theory, or any axiomatic system, can be consistent but not true at the same time. But it is (or was) the habit of mathematics, physics, and logic to choose axioms that are very likely true in the real world. IMO set theory has an axiom that is very likely false in the real world. It should come with some sort of health warning attached, because it has confused a lot of people into believing that actual infinity is a physical reality.
There are other axiomatic systems that may or may not be true in the real world - for example, non-Euclidean geometries. But set theory stands out as choosing an axiom that is clearly false in the real world - so it has no real-world applications because of that (the part of set theory relating to infinite sets, I mean; finite set theory is useful).
I feel that axiomatising something that is clearly false in the real world (IMO) is the basis of my beef with set theory.
I feel this puts what you said slightly better. When we expand on a mathematical concept with new information, we can produce new axioms. The problem with 'infinite' is that it is a polymorphic word or concept. When you use 'infinite' in one statement it might have a slightly different meaning than when used in a different context. To expect the same axioms to apply to each different (polymorphic) situation is not mathematically or logically sound. I don't know if you stated what I just said previously. Sorry if I duplicated information.
Whatever the exact definition, actual infinity leads to problems... how can a subset be the same 'size' as its containing set? The whole is greater than the parts. How can something be bigger than anything else possible? How can something go on forever? IMO we are in the realm of make believe if we countenance these... unicorns, pixies, magic and actual infinity are all of the same mould.
The problem is that certain concepts such as 'infinite' have not been systematically defined for different contexts. In software development you will use the same function or method in different ways simply by changing the arguments that go into it (the method can be defined within the program completely differently by the programmer, and the compiler, for example, will know which one to use simply from the arguments or context). 'Infinite' and other concepts are not always systematically defined. You and I might use the word 'infinite' and expect a common understanding, but because (subtly) we are talking about two different contexts, we will be in violent disagreement. If God hits me over the head with a rock and I get mad but never stop to look at the rock, I might have different results than if I immediately look at it and discover it looks a lot like gold. People misunderstand your comments on infinity because they don't realize it's a very important subject to investigate, and classifying the contexts in some text might make coming to a common conclusion more likely.
- Potential Infinities. The limit concept in calculus could be regarded as an example
- Actual Infinities. As represented by infinite sets / transfinite numbers in set theory
Finitists (of which I am one) might object to the first usage, but always object to the second. Strict finitists do not even allow potential infinities; in fact, some even hold that the number system should be restricted to only those numbers that are comprehensible to the human mind. Classical finitists allow potential infinities and do not believe there is a 'greatest number'. I am a classical finitist.
So for example, I do not object to future time potentially going on forever as that would be an example of a potential infinity.
But I would object to the idea of an actual, completed infinite set existing in the real world - so I would object to a collection of objects whose number is not finite.
Hope this is clearer...
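The potential/actual distinction above has a close programming analogue, sketched below: a generator can hand out a next number for as long as you keep asking, but a completed infinite collection never exists in memory.

```python
def counting():
    """Potential infinity: there is always a next number,
    but the process of counting is never completed."""
    n = 0
    while True:
        n += 1
        yield n

numbers = counting()
print(next(numbers), next(numbers), next(numbers))  # 1 2 3

# Trying to "complete" the process -- e.g. list(counting()) -- would
# never terminate: the actually infinite set is never realised.
```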
I understand. A lot of the stuff you write has to be copied into a text file like Notepad and analyzed line by line. That last statement I agree with.
Ok herewith my response to your Part 2. I'm buried in mentions again. Working hard to catch up.
Quoting Mephist
I posted earlier a proof that the computable reals are not complete. That's indisputable.
But it turns out that you are right, the constructive reals are not quite the computable reals, and I found many points of inconsistency in the literature.
First:
"The concept of a real number used in constructive mathematics. In the wider sense it is a real number constructible with respect to some collection of constructive methods. The term "computable real number" has approximately the same meaning."
https://www.encyclopediaofmath.org/index.php/Constructive_real_number
Ok, the constructive reals are sort of like the computable reals. Then this:
"The main message of the notes is that computable mathematics is the realizability interpretation of constructive mathematics."
http://math.andrej.com/2005/08/23/realizability-as-the-connection-between-computable-and-constructive-mathematics/
Ok, the computable reals are the "realizability interpretation" of the constructive reals. I'm not familiar with that technical phrase but it seems to indicate once again that the computable reals aren't too far off from the constructive reals.
But then there's this link you gave me:
https://users.dimi.uniud.it/~pietro.digianantonio/papers/copy_pdf/RealsAxioms.pdf
That paper (badly written and/or badly translated, and confusing) gives a completeness axiom that they claim makes the constructive reals complete. But if that's true, then this is nothing more than an alternative axiomatization of the standard reals. Good for Coq, fine; but devoid of philosophical interest, since all complete totally ordered fields are isomorphic to the standard reals.
By this paper, which you asked me to look at, the constructive reals ARE the standard reals and contain uncountably many noncomputable reals.
So I admit I have no idea what the constructive reals are and I can't find two explanations that are consistent with each other.
Perhaps the problem is with the underlying murkiness of intuitionism. Or perhaps I haven't seen the right article.
But I'll concede that evidently the constructive reals are different from the computable reals; but how, I can't say and don't know.
Quoting Mephist
After a day's research, I agree. I have no idea what the constructive reals are.
Quoting Mephist
Well, then that is inconsistent with the article you asked me to read. Because that article's constructive reals are Cauchy-complete, and therefore must include many noncomputable real numbers. So your definition is not consistent with the definition in that article.
But if YOU are falling back on the requirement that constructive reals must be computable, then I already proved (twice) that they are not Cauchy complete.
Quoting Mephist
Yes, that's the problem. The link you gave me did do this. But then the constructive reals in that article must (a) contain lots of noncomputable reals, and (b) must be isomorphic to the standard reals, depriving them of any philosophical interest regardless of how Coq-compatible they may be.
Quoting Mephist
Yes, I agree I was wrong about my claim. However the hyperreals do happen to fail to be Cauchy-complete, so my Goldilocks remark stands.
Quoting Mephist
Why is not being Cauchy complete a problem? Because then they fail to satisfy the ancient intuition of the continuum. I thought I explained all that.
Quoting Mephist
Yes. But there's even another wrinkle. The computable real numbers are countable; but they are not effectively countable. That's because any enumeration of the computable reals must itself be noncomputable; otherwise you'd solve the Halting problem. So a strict computabilist can claim that the computable reals are NOT countable, because there's no computable enumeration of them! I've seen this argument in print.
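The point can be sketched with the classic diagonal construction: given any allegedly complete computable listing of computable reals (each represented here as a digit function; the three entries are illustrative stand-ins, not a real enumeration), we can compute a number that differs from the n-th entry in the n-th digit, so the listing cannot have been complete and computable at once.

```python
# Each "computable real" in [0, 1) is represented by a function
# digits(n) -> n-th decimal digit. These are illustrative stand-ins
# for an alleged computable enumeration of all computable reals.
listing = [
    lambda n: 3,                   # 0.3333...
    lambda n: n % 10,              # 0.0123456789...
    lambda n: 1 if n == 0 else 0,  # 0.1000...
]

def diagonal_digit(n):
    """n-th digit of a computable real that differs from listing[n] at digit n."""
    d = listing[n](n)
    return (d + 5) % 10  # guaranteed different from d

# The diagonal number disagrees with every entry in the list:
for n in range(len(listing)):
    assert diagonal_digit(n) != listing[n](n)
print("diagonal digits:", [diagonal_digit(n) for n in range(len(listing))])
```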
Quoting Mephist
Not a real line, a "cloud." Let's please not go off talking about the hyperreals. I only used them to point out that of the three famous models of the reals, only the standard reals are Cauchy complete.
Quoting Mephist
If that's true then they necessarily contain many noncomputable reals, contradicting your own definition.
Quoting Mephist
You're already contradicting your own definition. The constructive reals, whatever they are, are quite murky.
Quoting Mephist
At this point I have to decline to go off in yet another direction. I actually wish you would make a brief, simple, clear point that I can work with.
Quoting Mephist
I'll pass on this remark but I don't agree with it.
Quoting Mephist
Fine, whatever. I would prefer at this point to constrain the discussion, not widen it. I was greatly disheartened after reading your own link and finding out that their model of the constructive reals is isomorphic to the standard reals. That undercuts everything I know about the constructive reals.
Quoting Mephist
There's exactly one model (up to isomorphism) that is Cauchy complete; and that is the standard reals.
Quoting Mephist
Ok. Next up, parts 3 and 4, and a big stack of mentions in the politics forum, which I'm afraid to read.
But can you do us both a favor and write a short, clear, and concise thesis that we can discuss? Part 2 was terrible for me because it contradicted itself in so many ways, giving multiple characterizations of the constructive numbers, at least one of which is isomorphic to the standard reals, modulo an awful translation from the original Italian.
I have to defer this for now. You already said you're not interested in the proof of B-T and now you've written a lengthy post about it. The proof in ZFC is unassailable, there is no point in your trying to find fault with it. And no point in my trying to explain it, since you shook my confidence in our conversation by claiming a translation in Euclidean space can't be continuous. I wish you'd keep your posts simple.
Right now my plan is:
* Catch up to parts 3 and 4 of your earlier replies;
* Catch up my mentions on the political threads;
* Try to get you to state a simple thesis so that I know what you're talking about. As it is, you have given me a definition of the constructive reals that contradicts itself, along with a badly translated paper that defines the constructive reals as isomorphic to the standard reals. I cannot keep up with all this as much as I'd like to.
* Urge you to split out a SEPARATE THREAD on Banach-Tarski. And start with a study of the paradoxical decomposition of the free group on two letters. That's the heart of the paradox and it doesn't require any set theory. It would be true in anyone's universe.
Quoting Mephist
There's little point in your trying to find fault with an 80 or so year old theorem that's been checked and rechecked. But I would love for you to split out your B-T concerns into a separate thread so we can focus on them. Your claim that isometries aren't continuous or don't preserve topology is just terribly wrong, I honestly don't follow your reasoning at all. You also confuse measure 0 with nonmeasurable.
By the way I truly commend you for diving into the proof. You seem to understand most of it. That's impressive. I'm not sure what you mean about your objection but I'll try to understand it.
You'll have to take that up with Chaitin and the mathematical logicians. As it happens, the class of real numbers definable in first-order logic is slightly larger than the class of real numbers computable by Turing machines. This is indisputable. I have no idea what an ideal mathematical language is. But in fact I only used Chaitin's Omega because it's a noncomputable real that we can visualize. The proof would go through with any noncomputable number. The n-th truncations of any noncomputable number form a Cauchy sequence of computable numbers that fails to converge to a computable number, proving that the computable reals are not Cauchy-complete.
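The truncation argument is easy to demonstrate mechanically. Since an actually noncomputable number can't appear in code, the sketch below uses √2 as a stand-in target: its n-digit truncations are each rational (hence computable) and form a Cauchy sequence. For a noncomputable target like Chaitin's Omega the truncations would still each be computable, but the limit would not be.

```python
from decimal import Decimal, getcontext, ROUND_DOWN

getcontext().prec = 50
target = Decimal(2).sqrt()  # stand-in for a noncomputable real

def truncation(n):
    """First n decimal digits of the target: a rational, computable number."""
    return target.quantize(Decimal(1).scaleb(-n), rounding=ROUND_DOWN)

# Consecutive truncations differ by less than 10**-n: a Cauchy sequence.
for n in range(1, 10):
    gap = abs(truncation(n + 1) - truncation(n))
    assert gap < Decimal(1).scaleb(-n)
print(truncation(5))  # 1.41421
```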
Ok, herewith my reply to Part 3.
Quoting Mephist
Ok fine. But now you're spinning off into new topics. I can't keep up with this. I've learned a lot from this thread, but once again I urge you to focus your concerns so that I can address them. Who the hell is arguing that first-order logic can determine the cardinality of the reals? Who is claiming that????
Quoting Mephist
But I haven't said that!!!! You are arguing with strawmen. We already know that ZFC does not determine the cardinality of the reals. Who the hell is saying any different?
Quoting Mephist
I have no idea what you are talking about. Honestly. I can't relate this to anything we've been discussing.
Quoting Mephist
Once again I fail to relate this to anything we've been discussing.
Physical model? Physics has nothing at all to do with the independence of the parallel postulate.
Quoting Mephist
My friend @Mephist, that is word salad and nonsense. We've come this far, but you're losing me. What you said is just false, to the extent that it has any meaning at all.
Quoting Mephist
Well at least Part 3 was easy. It's incoherent. I don't mean to be provocative. Only that your other posts to me have been insightful and interesting; and Part 3 was nonsense.
But now I'm ready for Part 4 and my catching up to your posts will be a great accomplishment for me. I urge you, I beg you, to try to be concise. You're burying your own points in words.
I didn't know that. But ok. If HOTT is not constructive, what are we talking about?
And now after all this: What ARE we talking about? I no longer know what we're discussing. You lost me somewhere and you've refused to throw me a lifeline.
It's been interesting as hell, but the last two of your 4-part post really lost me.
Quoting Mephist
Ok that's very interesting. Poor Voevodsky.
Quoting Mephist
Yay! I made it!!!! Not sure what I accomplished, but at least I got caught up in this thread.
You mean besides relativity and quantum physics? The 't' in the Schrödinger equation is a continuous parameter over the real numbers. What math do you think the physicists are using?
Of course I'm not saying that the physicists MUST use continuous math; only that to date, they do.
Wow. I don't know where to start. That couldn't be more false.
Quoting Mephist
Not sure what you're getting at, but nothing I relate to.
Quoting Mephist
His work on non-Euclidean geometry was purely mathematical. But if you understand that Riemann knew that math wasn't physics; why won't you make that same distinction?
Quoting Mephist
Mathematicians don't use computers to do calculations. This is a very naive view of what mathematicians do. It's often been remarked that of all the STEM fields, math departments are the ones that don't use computers! The proof of the four-color theorem was an anomaly in 1976 and remains an anomaly today.
Quoting Mephist
You've persistently claimed that mathematicians are doing physics, but this is an engineer's view of math. Mathematicians don't do physics. Just ask the physicists!
Quoting Mephist
It's true that there are probabilistic proofs, but those are a subset of math, not all of math. Not even most of math. Mostly, a little bit of math.
Quoting Mephist
Not the first time you've put words in my mouth that I didn't say, wouldn't say, and don't agree with. But nobody would disagree that probabilistic proofs exist. And even the guy who wrote the famous Death of Proof article in Scientific American had to backpedal.
Quoting Mephist
You seem to be grinding an ax but I'm not sure about what. That future math will be different than current math which is different than past math? Ok. Why would anyone disagree?
I hope going forward that you'll write less but with more focus. You really wore me out and in the end you yourself agree that you're not sure what the constructive reals are. As Gauss said: Few, but ripe.
That seems to be the common point of all your arguments about real numbers, so I wanted you to show you this: https://mathoverflow.net/questions/128569/a-model-where-dedekind-reals-and-cauchy-reals-are-different
I know that there is a proof of uniqueness.
Yes but first, that in no way invalidates the fact that Cauchy-completeness uniquely characterizes the standard real numbers up to isomorphism; and secondly, that the comments and answers in that thread are "inside baseball" remarks from professional specialists in the field. I can understand the words but not the meaning. I have to assume the same is true for you, unless I'm wrong. I'm aware that Cauchy completeness is not the same as Dedekind completeness.
Example from one of the comments: "You won't find a model of ZF where they are different but there are models of IZF where they are different."
Now this is not something I'm prepared to say I understand even if I can parse the words. I assume the same must be true for you. This thread is very inside baseball, it's not for amateurs like us. IZF is evidently intuitionistic ZF. More mysteries beyond my pay grade.
But gee, I only meant to put in a good word for the standard reals. I'm not driving a stake in the ground and vowing to defend them to the death. Everyone knows they're murky.
The one thing I do put a stake in the ground about is that there aren't enough Turing machines to plug the holes in the computable reals. But apparently this is less clear when it comes to the constructive reals, which at least that one Italian paper seems to think are Cauchy complete. But how can that be? Apparently there are constructive approaches to Cauchy completeness. I confess great bafflement in this regard.
Thanks man I'll get to this later. I am most definitely worn out for the day. What I did learn from all this is that the computable reals are NOT the constructive reals; and that the constructive reals can be Cauchy-complete. Now THAT I did not know, and it causes me to confront the depths of my ignorance. I'm really mystified on this point.
One article I'll definitely be rereading is Andrej Bauer's Five Stages of Accepting Constructive Mathematics.
http://math.andrej.com/2016/10/10/five-stages-of-accepting-constructive-mathematics/
P.S. After reading this, please read my explanation on what is "wrong" with Banach-Tarski paradox, if you have time.
No. Math is not physics. That seems to be a theme today.
In fact whenever mathematicians run into infinities, they do NOT say Aha here's an actual infinity in the real world. Some amateurs do that but professional physicists never do.
Rather, they say: "Our model has broken down. We don't know what happens below a certain scale, which is small but greater than zero."
There's an elementary example that most people should be familiar with. Consider classical Newtonian gravity,
[math]F = G \frac{m_1 m_2}{r^2}[/math]
where [math]F[/math] is the gravitational force, [math]G[/math] is the gravitational constant (just some number that makes the units work out), [math]m_1[/math] and [math]m_2[/math] are the masses of the two bodies, and [math]r[/math] is the distance between their centers of mass.
Newton proved a theorem that if the bodies are spherical, you can just use the center of masses and it's the same as if you added up the gravitational attraction between all the respective pairs of points in the two spheres, a much harder calculation. Basically Newton did multivariable calculus before there was multivariable calculus.
Because of this theorem, it's common to think of massive bodies as point masses; that is, dimensionless points that nevertheless have mass.
Now if you have two point masses, they can get arbitrarily close together. As the distance between the two point masses gets close to zero, the gravitational energy between them gets arbitrarily large. It "goes to infinity" as they say.
Physicists do NOT say, "Wow, a black hole with infinite energy results from two point masses getting too close together." We might think of it that way in a late night session at the dorm under the influence. But in the cool light of day we must say: "Newtonian gravity breaks down below a certain distance scale. When two sufficiently small masses are very close together, the gravitational energy is greater than all the energy in the universe. The equation may not be applied below a certain scale of distance."
You can see that even without quantum theory, we can show that there's a minimum distance, below which the theory breaks down. There's a sort of Planck length even in Newtonian gravity.
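The breakdown is easy to see numerically. A sketch with two hypothetical 1 kg point masses; the values just follow the inverse-square formula above:

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def newton_force(m1, m2, r):
    """Newtonian gravitational force between two point masses at distance r."""
    return G * m1 * m2 / r**2

# Two 1 kg point masses: the force explodes as the separation shrinks.
for r in [1.0, 1e-10, 1e-20, 1e-30]:
    print(f"r = {r:.0e} m  ->  F = {newton_force(1, 1, r):.3e} N")

# At r = 0 the model doesn't yield infinity -- it yields no answer at all:
try:
    newton_force(1, 1, 0)
except ZeroDivisionError:
    print("model breaks down: force undefined at r = 0")
```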
Numbers exist in our mind only. Actual infinity can exist in our minds only along with other things that are not logical / do not exist in reality like square circles. But just because you can think of actual infinity does not mean you have realised/completed actual infinity in your mind.
How many numbers are there?
- First thoughts: there is a potential infinity of numbers. We can go on counting forever, so it's a potential infinity. But of course we would lose track after a while. So this potential infinity has a finite, mind-related limit.
- Second thoughts: in no way is the human mind capable of realising an actual infinity of numbers - the mind is finite, after all - there is a limit on the largest number that can be conceptualised - the number of bits representing the number must fit into the storage capacity of the human mind. A very large number, then, and dependent on the mind that's conceptualising it.
So actual infinity is not realisable in our minds or in reality.
Are we the standard by which we measure infinity? Isn't this like saying tortoises that live up to 300 years don't exist because humans only live up to 100 years?
Numbers exist in the mind only so I guess you are correct - we should not limit our consideration to the human mind. An alien with a larger mind would be able to mentally realise larger numbers than a human. But they would still be limited to conceptualising finite numbers as per the storage capacity of their minds.
Minds exist in reality so all minds are a finite collection of storage elements (I hope we agree that realising an actually infinite collection in reality is impossible). So whatever the actual size of the mind, the largest possible number is still finite.
Grandi series: [math]1-1+1-1+1-1+...[/math]
It's used to prove that [math] 1+2+3+4+...= -1/12[/math].
Does the above have anything to do with your views on infinity?
A more physical demonstration of Grandi's series is Thomson's lamp - the lamp is on for the first second, off for the next 1/2 second, on for the next 1/4 second, and so on. After 2 seconds, is the lamp on or off? It must be one or the other; it cannot be 1/2 on.
Both problems are examples of 'supertasks'. In both cases, knowledge of whether actual infinity is odd or even is required, which requires knowledge of the size of the associated sequence representing the series. But an actually infinite collection of objects is an impossibility - so the sequence cannot be said to have a size - so we cannot evaluate it.
I think these are illustrations that all supertasks are impossible - anything with an actually infinite number of steps cannot happen in finite time. In the case of the lamp, something physical in reality would limit the number of steps in the supertask to a finite number - probably time is discrete - the Planck time, for example - then the series could not go on forever, and the lamp would definitely be on or off at the end of 2 seconds. Discreteness addresses Zeno's paradoxes too (which are also examples of supertasks).
I think the fact that maths cannot evaluate the series is supportive of the view that actual infinity does not exist as a (sound) mathematical concept and does not exist in reality.
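The undecidable on/off state corresponds exactly to the oscillating partial sums of Grandi's series, and the Cesàro averages of those partial sums are the sense in which mathematicians assign the series the value 1/2 without claiming the lamp is half on. A quick numerical sketch:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
terms = [(-1) ** k for k in range(20)]  # 1, -1, 1, -1, ...

partial_sums = []
s = 0
for t in terms:
    s += t
    partial_sums.append(s)

print(partial_sums[:8])  # [1, 0, 1, 0, 1, 0, 1, 0] -- never settles

# Cesaro means (running averages of the partial sums) do settle, toward 1/2:
cesaro = [sum(partial_sums[: n + 1]) / (n + 1) for n in range(len(partial_sums))]
print(cesaro[-1])  # 0.5
```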
The above series is called divergent, meaning it tends towards infinity. Yet using the Grandi series we can 'prove' it equals a finite value, [math]-1/12[/math]. In the YouTube video (Numberphile?) it's mentioned that some equations in physics involving [math]1+2+3+4+...[/math] can yield "true" results using [math]\frac{-1}{12}[/math]. Does this agree with your views that infinity isn't actual or that there are no real infinities?
I'm going to take a look at this to understand it better:
https://www.youtube.com/watch?v=YuIIjLr6vUA
I had an insight. A formula in the Italian paper is the link between what I know and what we're trying to understand.
That's this paper:
https://users.dimi.uniud.it/~pietro.digianantonio/papers/copy_pdf/RealsAxioms.pdf
It's written for specialists and I didn't relate to most of it. They're proposing axioms for the constructive reals that they claim have some benefits over other axiomatizations.
One thing caught my eye. They have a Completeness axiom.
Before this I had no idea constructivists were concerned with Cauchy completeness, but it turns out they are. That validates my earlier point that among all models of the real numbers, the standard (ZF) reals are privileged by virtue of being Cauchy complete.
The constructivists agree! They are trying to develop a context in which they can say that the constructive reals are complete.
[From now on complete means Cauchy complete and not any other mathematical meaning of complete].
Here's their axiom verbatim. [Slight grammatical inaccuracies as in the original, as well as some mathematical ambiguity that's not important at the moment.]
After learning that constructivists care about completeness, my second surprise is that this axiom appears noticeably weaker than standard Cauchy completeness. Indeed in their very next paragraph they say:
" our axiom could appear weak at a first glance."
So we are on the same page. This is encouraging. Also encouraging is that that's the kind of math I understand! I can compare their definition to standard Cauchy completeness and try to figure out if they're the same or not.
Now, three points:
* How can the constructive reals be complete? If they are complete they must contain many noncomputable reals. How can that be regarded as constructive?
* Does their formulation actually imply standard Cauchy completeness? Or if not, are they reasoning in some kind of alternative context that allows them to claim Cauchy completeness in a way that standard math wouldn't?
* In what sense is their formulation constructive? I gather this may have something to do with the rate of convergence, in which case perhaps the theory of computational complexity may come into play. Big-O, P = NP and all that. I'm a big fan of Scott Aaronson's site.
So that's it. To sum up:
* Constructivists do care about Cauchy completeness.
* They have a funny way of expressing it that at first glance appears weaker than standard Cauchy completeness, but that they claim gets the job done.
* Even if it does, in what way does it count as constructive?
* And if the constructive reals are Cauchy complete, they must contain a lot of noncomputable real numbers. How can that be called constructive?
Sorry, but you have to give me some time to learn about how to write symbols on this site, and I have not much time for this right now. Otherwise we keep speaking using words without meaning, never reaching any conclusion.
I promise that I'll answer very clearly to all your questions, but I am going to need some time (probably some days), even because I have a lot to do at work these days.
Not trying to be rude, but write out the formula on a piece of paper, take a picture of it with your phone, and post it to this site that way. There are other ways too. Thought this might help. I assume everyone has a phone these days, and I know you have access to a computer.
No worries. Working out if their definition implies Cauchy is something I can take a run at. Math markup's not hard but it does have a bit of a learning curve. Lots of info online.
You can also copy/paste from a site like https://math.typeit.org/ but of course that's not as good as learning a little [math]\LaTeX[/math].
Also I like this site https://www.codecogs.com/latex/eqneditor.php which lets you enter a string of markup to see what it looks like.
Not sure what you mean.
Suppose I have an axiom system that includes axiom X. Then X is a theorem with a one line proof: "X". Any axiom is a theorem with a one-line proof. The question is what their completeness axiom means, and in what context.
Note that Cauchy completeness, in the form of the least upper bound principle, is an axiom of the standard real numbers. It can't be proven without an axiom. The rationals are an ordered field that's not complete. We need an axiom to ensure completeness.
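The incompleteness of the rationals can be seen concretely. Here is a small sketch in Python using exact rational arithmetic (the sequence and the tolerance are my own illustrative choices): Newton's iteration for the square root of 2 produces a Cauchy sequence of rationals whose squares get as close to 2 as you like, yet no rational number can be the limit.

```python
from fractions import Fraction

# Newton's iteration for sqrt(2), kept in exact rational arithmetic.
# Every term is a rational number and the sequence is Cauchy, but the
# would-be limit, sqrt(2), is not rational: the rationals are missing it.
def newton_sqrt2(steps):
    x = Fraction(2)
    for _ in range(steps):
        x = (x + 2 / x) / 2   # x_{k+1} = (x_k + 2/x_k) / 2
    return x

x = newton_sqrt2(6)
close = abs(x * x - 2) < Fraction(1, 10**20)  # squares approach 2 ...
exact = (x * x == 2)                          # ... but never reach it
print(close, exact)
```

Within the rationals this sequence is Cauchy but converges to nothing; the completeness axiom is exactly what rules such gaps out of the reals.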
Of course Cauchy completeness is a provable theorem from ZF via the standard constructions like Dedekind cuts and equivalence classes of Cauchy sequences. But the article is an axiomatization and not a construction.
Sure, just like the ordered field axioms in ZF. If you omit completeness you get the rationals. If you toss in completeness you get the reals. I understand what you're saying but I'm missing your point. The paper gives an axiomatization. If completeness were provable without an axiom, they wouldn't need an axiom. That's clear.
P.S. OK, sorry. They have Dedekind as an axiom, that is equivalent in ZFC, so it's the same thing. My mistake! Just ignore what I said... :confused:
How do you know it's wrong?
First of all the ''proof'' seems to have been reviewed and accepted by mathematicians. Second the result is useful in physics.
https://en.wikipedia.org/wiki/Convergent_series
So the series has no sum. This article explains it quite well:
https://plus.maths.org/content/infinity-or-just-112
As far as the physics is concerned, it is something called the Casimir effect that is associated with assigning a finite value to a divergent series. I don't know the maths or physics well enough, but from a layman's perspective, I think some of the maths in QM looks suspect. There is an alternative explanation for the Casimir effect that does not involve questionable maths:
https://en.wikipedia.org/wiki/Casimir_effect#Relativistic_van_der_Waals_force
Or it could be that the physicists have the precise maths wrong but it is close enough to being right that with some questionable manipulations, it gives results that agree with experiment. QM maybe is not quite the finished article.
What do you mean by "the standard (ZF) reals"?
Maybe I am saying obvious things, but at the risk of being pedantic, I prefer to make everything clear about some basic facts. (That's one of the reasons why I like computer-based formal systems: everything has to be declared. No implicit assumptions!)
Fakt N.1. ZFC is NOT an axiomatic theory of real numbers. ZFC is an axiomatic theory of SETS.
In fact, in first-order logic all functions and relations can be applied to all variables: there cannot be some functions (like addition and multiplication) applied only to numbers and other functions (like union and intersection) applied only to sets. In ZFC, all variables are interpreted as SETS.
Fakt N.2. The standard representation of real numbers in ZFC is the following: (https://www.quora.com/How-is-the-set-of-real-numbers-constructed-by-using-the-axiomatic-set-theory-ZFC-set-theory)
- Natural numbers are sets
- Integers are pairs of natural numbers (a pair is a function with two arguments)
- Rational numbers are pairs of integers
- Real numbers are sets of rational numbers
Here's the definition of real numbers in ZFC in Bourbaki: https://math.stackexchange.com/questions/2210592/in-which-volume-chapter-does-bourbaki-define-the-real-numbers
Fakt N.3. There is a more "usable" definition of real numbers given by Tarski (usable in the sense that the proofs are simpler): https://en.wikipedia.org/wiki/Tarski%27s_axiomatization_of_the_reals
- Tarski's real numbers are defined axiomatically, and make use of set relations (similar to the ones of ZFC). But they are not based on first order logic.
The difference is that quantification over all subsets of a given set is allowed (and not only over all elements of the universe), but the meaning of the subset relation is encoded in the rules of the logic, and not given axiomatically.
The use of second-order logic is essential to be able to express the property of being Dedekind-complete (Axiom 3).
So, going back to what you wrote:
Quoting fishfry
OK, let's check the definition of being "Cauchy complete":
"Cauchy completeness is the statement that every Cauchy sequence of real numbers converges." (https://en.wikipedia.org/wiki/Completeness_of_the_real_numbers)
What's a Cauchy sequence? "a Cauchy sequence (French pronunciation: [koʃi]; English: /ˈkoʊʃiː/ KOH-shee), named after Augustin-Louis Cauchy, is a sequence whose elements become arbitrarily close to each other as the sequence progresses." (https://en.wikipedia.org/wiki/Cauchy_sequence)
As you said, in the Coq formalization of real numbers there is this axiom:
[math]completeness: \forall f: N \to R. \exists x \in R. (\forall n \in N. near(f(n), f(n+1), n+1)) \to (\forall m \in N. near(f(m), x, m))[/math]
In the logic of Coq, "completeness" IS A FUNCTION, in the same way that "near" is a function and "+" is a function.
The function "completeness" takes as input a sequence of real numbers "f" and returns two things:
1. a real number "x" -- let's call it "first_part"
2. a proof that IF "f" is a Cauchy sequence, THEN "x" is the limit of "f" -- let's call it "second_part"
So, for every sequence of real numbers "f", the term "(completeness f).first_part" is a real number. If I have a proof that "f" is a Cauchy sequence, then I can use "(completeness f).second_part" (which is a proof) to obtain a proof that the Cauchy sequence "f" converges to the real number "(completeness f).first_part".
That's all I need to get a COMPLETE field of real numbers: for all sequences "f", if you have a proof that "f" is a Cauchy sequence, you can produce a proof that "f" is convergent to the real number "(completeness f).first_part".
However, you can't explicitly compute it, because axioms are functions that cannot be reduced (https://stackoverflow.com/questions/34140819/lambda-calculus-reduction-steps)
Then, IF you ASSUME that you can get a real number for every Cauchy sequence, THEN you can prove that there is a real number for every Cauchy sequence. Magic! :-)
Quoting fishfry
Yes, that's the same old question that I thought I just answered many times... :-)
Here's the "computation" of pi:
Definition my_sequence := func n -> sum of 1/n bla bla...
Definition pi := "(completeness my_sequence).first_part"
Here's the proof that my_sequence converges to pi:
1. proof that my_sequence is a Cauchy sequence (let's call this proof my_proof)
2. from "(completeness my_sequence).second_part" (the proof that IF "my_sequence" is a Cauchy sequence, THEN "(completeness my_sequence).first_part" is its limit) and "my_proof" (the proof that my_sequence is a Cauchy sequence) I get a proof that pi is the limit of my_sequence
(by applying the rule of cut)
I "computed" the noncomputable real number pi, and the result is "(completeness "func n -> sum of 1/n bla bla...").first_part".
If "completeness" were a theorem instead of an axiom, I would have had to provide the implementation of the program that computes the function "completeness". But an axiom is treated as an "external function" of a programming language:
I ASSUME to have some external machine that is able to compute the function "completeness", but I don't have to show how that machine works. That's cheating! :-)
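Here is a toy rendering of this "axiom = external machine" idea, a purely illustrative Python sketch (the names and the sequence are my own): we declare a function but never implement it, so terms mentioning it can be written down freely, yet any attempt to actually evaluate them gets stuck.

```python
# A toy model of "axiom = external function": `completeness` is declared
# but never implemented, so terms that mention it can be *written* but
# can never be *run*, just as an axiom cannot be reduced.
def completeness(f):
    """Assumed external machine returning the limit of the sequence f."""
    raise NotImplementedError("an axiom has no computational content")

# We can freely write down the term for a limit ...
pi_term = lambda: completeness(
    lambda n: 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))
)

# ... but evaluating it gets stuck, exactly like an unreducible axiom:
try:
    pi_term()
    stuck = False
except NotImplementedError:
    stuck = True
print("term reduces?", not stuck)
```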
However, remember that this is an AXIOMATIC DEFINITION of what real numbers are, NOT A MODEL of real numbers.
Axiomatic definitions, in whatever logic (intuitionistic or not), are not guaranteed to be consistent: you have to be careful on what axioms you assume to be true.
So, in principle it's not guaranteed that the real numbers that you defined make sense in some concrete model.
And that is true even for Tarski's real numbers, that are described using classical (non intuitionistic) logic.
Instead, this is not true for the description of real numbers in ZFC. In that case, real numbers are a model built from sets, and completeness is PROVED as a theorem, and not assumed as an axiom. Real numbers are concrete objects made of sets!
The problem is that sets are defined axiomatically in ZFC. So, IF sets make sense (are not contradictory), then real numbers make sense. But IT'S NOT GUARANTEED THAT THE SETS DEFINED IN ZFC MAKE SENSE IN SOME CONCRETE MODEL.
So, again, to be sure about the sets of ZFC you should define them as a model in some other axiomatic theory that you trust more than ZFC. Or you could use a FINITE MODEL: in a finite model you can verify a proposition "by hand" (as one of my favourite professors of analysis used to say) simply by inspecting the model!
But, obviously, there is no finite model that verifies all the axioms of ZFC. The best that you can do is to verify ZFC axioms on a model built from natural numbers. But the natural numbers are NOT a finite model (you cannot check theorems on natural numbers "by hand"). And, as Gödel proved, there is no axiomatic definition of natural numbers, in any formal logic, that is guaranteed to make sense.
OK, I'll stop it here because it's just become too long and I am only at the first question.
Quoting fishfry
Let me rephrase this question: "if a given Cauchy sequence has a limit in intuitionistic logic, does it have a limit even in ZFC?"
The answer is YES, because the axioms of intuitionistic logic correspond to theorems in ZFC, and the rules of intuitionistic logic are just a subset of the rules of classical logic. So, if you can prove that a given sequence is convergent in intuitionistic logic, you can use EXACTLY THE SAME PROOF in classical logic. You can map any proposition of one logic to a corresponding proposition of the other logic and every rule of one logic to a corresponding rule of the other logic. It doesn't matter what's the interpretation of the rules: if the rules are the same (or a subset of them), whatever you can prove in intuitionistic logic you can prove even in classical logic: just apply the same rules!
Quoting fishfry
Nothing to do with the rate of convergence! Constructive, in the particular interpretation of Coq, means that every object (every real number in this case) can be built using recursive functions ( except axioms... :-) )
Quoting fishfry
This sentence has a hidden presupposition: that real numbers are a concrete set of objects and you can check if a given noncomputable real number is present or not. This is not true: any model of real numbers is ultimately based on an axiomatic theory that cannot be checked "by hand".
[ I know, I wanted to use formulas to be more precise and in the end I didn't do it (mostly for lack of time). And probably I still wasn't able to be clear enough about what I meant. So, please repeat the questions that I wasn't able to be clear about, or where you think that I am wrong. Maybe that way it will be easier to arrive at some conclusion ]
I just wanted to jump in quickly and tell you that your lengthy post (1) mostly consisted of things I already know; (2) completely failed to address anything I wrote; and (3) well, I haven't time for all my other concerns. Honestly (and I say this in the context of finding most of our conversation very interesting and valuable), this post said nothing at all to me, as you completely ignored everything I wrote.
I'll get to a full reply later. For now please note that my earlier post specifically focuses on the Italian paper. So even your invocation of the constructive completeness axiom shows that you didn't read or understand a word I wrote. The axiom you gave is very different from the one in the Italian paper. The axiom in the Italian paper is very important to me because it's written in traditional math language and I can work with it mathematically.
Quoting Mephist
Funny that the part you shouted in upper case directly contradicts Gödel's completeness theorem. An axiomatic system is consistent if and only if it has a model. So if there IS a concrete model, then the sets DO make sense.
https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem
Finally, you consistently misunderstand the relationship between axiomatic models and constructions. For example the axioms for the real numbers (ordered field plus least upper bound) completely characterize the real numbers. The Dedekind cut construction, on the other hand, explicitly constructs a model of these axioms within ZF.
The axiomatic definition is sufficient to do everything one needs to do with the real numbers. Its only drawback is that it suffers from the following refutation: "Oh yeah? How do we know there EVEN IS such a thing?" That's why once in one's life, one is shown a specific construction such as Dedekind cuts or equivalence classes of Cauchy sequences. Then one forgets the messy construction(s) and works forever after with the axiomatic definition.
When I have time later on I'll comment paragraph by paragraph, but frankly at least 2/3 of your exposition was just a recapitulation of what every undergrad math major knows; and the rest completely ignored my specific comments on the Italian paper, which is the ONLY thing I'm talking about at this point.
By the way, I have proven to my own satisfaction that the characterization of a Cauchy sequence in the Italian paper does imply the standard Cauchy condition. I still have to write up the proof, which has a number of interesting points regarding constructive thinking. The next step is to find a proof or counterexample of the other direction, that standard Cauchy implies Italian-paper Cauchy.
Finally my tl;dr: At this point I am ONLY talking about the completeness axiom in the Italian paper, because it has the property of being written in standard math language (ie no undefined "near" function, no reference to Coq, etc.) Your pointedly ignoring my entire post is depressing to me. We can't have a conversation if I write, "Here is what the Italian paper says, I have the following questions," and you reply with a lengthy recapitulation of an undergrad class in real analysis followed by a discussion of Coq completely unrelated to what I wrote.
Quoting Mephist
Well worth your time to learn. The SINGLE most important and magic development in math from the time I was in grad school to today, is LaTeX math markup and its web cousin MathJax. Back in my time the math departments had staffs of secretaries to type up the professors' papers, and the students were stuck with writing longhand. The true impact of computers on math isn't just Coq, it's [math]\LaTeX[/math]!
Ok, here's my detailed response to your post. But I fear we're diverging, since right now I'm 100% focused on understanding the completeness axiom from the Italian paper. The reason being that it's written in standard mathematical style so I can work with it.
The standard (ZF) reals are the real numbers as understood in mainstream math, say at the level of an undergrad math major. Either axiomatically, as a Cauchy complete ordered field; or by construction as equivalence classes of Cauchy sequences or Dedekind cuts. There are other equivalent constructions as well.
Quoting Mephist
There are no implicit assumptions in ZFC. I don't understand why you think there are.
Quoting Mephist
You're right. You're stating the obvious. But ok.
Quoting Mephist
Yes ok. Why are you spelling fact as fakt?
Quoting Mephist
Perfectly familiar to undergrad math majors. It's a nice theory in fact, served the 20th century well.
Quoting Mephist
Not sure what is the point of this link.
Quoting Mephist
All of modern math as well as physics and all other physical sciences find the standard definition perfectly serviceable, to the extent they care about it at all. Not following your point.
Quoting Mephist
As are the standard reals, as a complete ordered field.
Quoting Mephist
Nor are the standard reals, as the least upper bound property is second order.
Quoting Mephist
We're in perfect agreement on all this. Except for my puzzlement as why you're mentioning it.
Quoting Mephist
Yes. This was my great recent revelation. I always thought the constructive reals were the computable reals, but as this is only vaguely true (very murky area here), apparently the constructive reals are claimed to be complete in some sense.
Quoting Mephist
I only mentioned that since I don't want to keep writing Cauchy complete.
Quoting Mephist
Perfectly well agreed.
Quoting Mephist
That's a vague handwavy statement. The precise statement is that the sequence of real numbers [math]\langle s_n \rangle [/math] is Cauchy if:
[math]\forall \epsilon > 0 \ \exists N \text{ such that } n, m > N \implies |s_n - s_m | < \epsilon[/math]
If you want to be pedantic, that's the official definition. It says (informally) that a sequence is Cauchy if the tails get arbitrarily close together. The intuitive idea is this. Suppose that we live in the open unit interval (0,1). We know about the sequence 1/2, 1/3, 1/4, 1/5, 1/6, ... But we can't say it "converges" because zero is not in our universe. Nevertheless there is SOMETHING about this sequence that "morally" converges, even if the point of convergence doesn't exist. That something is the property of being Cauchy. It defines a sequence that "should" converge if only there were something in our universe that it could converge to. That's why the standard reals are the Cauchy completion of the rationals. The reals contain the limits of sequences of rationals like 3, 3.1, 3.14, ... that "should" converge but don't, because pi isn't rational.
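The epsilon-N definition can be poked at numerically. A finite Python sketch (not a proof, just a truncated illustration of the idea): for the sequence 1/n from the example, the spread of the terms past index N shrinks as N grows, which is exactly what "the tails get arbitrarily close" means.

```python
# Finite numeric sketch of the epsilon-N idea for a_n = 1/n: the spread
# (max minus min) of the terms past index N shrinks as N grows. A finite
# window stands in for the infinite tail, so this illustrates rather
# than proves the Cauchy property.
a = [1 / n for n in range(1, 10_001)]

def tail_spread(N):
    window = a[N:]
    return max(window) - min(window)

spreads = [tail_spread(N) for N in (10, 100, 1000)]
print(spreads)  # each spread smaller than the one before
```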
Quoting Mephist
Here is where your post totally went off the rails. As I said? I said nothing of the sort! I quoted the definition in the Italian paper. I have no idea what your formulation means since I have no idea what near() means. The entire point of the Italian formulation is that it's written in standard math language and makes perfect sense in the context of standard math.
When you say, "As you said ..." and then quote something I NEVER SAID, I despair. Why did you do that?
Quoting Mephist
Yes but I'm not talking about Coq.
Quoting Mephist
Yes but I didn't ask about Coq.
Quoting Mephist
Yes but I didn't ask about Coq, nor does what you wrote help me because it's written in a nonstandard language.
Quoting Mephist
I'll take your word for it, but I didn't ask about Coq and this verbiage doesn't help me understand the constructive reals.
Quoting Mephist
All well and good, but nothing I asked about or talked about.
Quoting Mephist
Doesn't help me understand anything, doesn't respond to any of the questions and points I raised earlier.
Quoting Mephist
No you haven't.
Quoting Mephist
Pi is computable, so this is all completely beside the point. It's computed by the famous Leibniz formula, for example, which can be coded up by a beginning student of programming.
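The Leibniz formula mentioned here, pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..., really is beginner-level code. A quick sketch:

```python
import math

# Partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - 1/7 + ...).
# It converges to pi, though slowly: the error is on the order of 1/terms.
def leibniz_pi(terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

approx = leibniz_pi(1_000_000)
print(approx, abs(approx - math.pi))
```

The slow convergence is itself instructive: pi is computable, but not every algorithm that computes it is practical.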
Quoting Mephist
You completely misunderstand the relation between axiomatic definitions and constructions. Completeness is a theorem about Dedekind cuts and an axiom of the axiomatic definition of the reals as a complete ordered field.
Quoting Mephist
I have no idea what you're talking about. Is an external machine like an oracle? What do you mean? Why is this relevant?
In any event all of this has NOTHING AT ALL to do with the specific points I raised about the completeness axiom in the Italian paper that you asked me to read.
Quoting Mephist
Ok whatever. Same remarks as all along. Even if true, it's seriously off course relative to the concerns I raised.
Quoting Mephist
As Gödel pointed out, if there's a model then the axioms are consistent. That's the purpose of the Dedekind and Cauchy models: to show that the real number axioms are not vacuous.
Quoting Mephist
Ok fine, but it would be far better if you'd try to respond to the specific points I made.
Quoting Mephist
Not news to me but if it seems worth an exclamation point to you, ok.
Quoting Mephist
This is a problem?
Quoting Mephist
As I pointed out earlier, if there's a model of the reals then the axioms for the reals are consistent.
I perfectly well agree that nobody can prove within ZFC that ZFC is consistent. What of it? That's an 88-year-old result and everyone's made their peace with it long ago.
Quoting Mephist
You're just complaining about incompleteness. Like I said, 88 years and counting. You'll just have to get over it. I honestly don't know what to say.
Quoting Mephist
Ok, for a moment there I thought you had one in your back pocket that you were going to throw at me. But no, there isn't any such thing.
Quoting Mephist
Right.
Quoting Mephist
You haven't answered any of my questions. You've gone backward by changing the subject. I am ONLY interested at the moment in the completeness axiom in the Italian paper, because it is written in the language of standard math and I can work with it technically.
I've already convinced myself that their axiom implies standard Cauchy completeness, but not yet the converse.
Quoting Mephist
Yes that's the one direction. But it's the other direction that's harder. If a sequence is Cauchy in standard math, is it Cauchy in intuitionist math? That's a good question and I'm sure the constructivists have an answer, I just don't happen to know what it is.
Quoting Mephist
We agree on one direction but I haven't seen a proof of the other direction.
Intuitionistic convergence implies standard convergence, but is the converse true?
Quoting Mephist
My understanding is that intuitionist logic doesn't have enough rules to do standard math. That's my question.
Quoting Mephist
Tell it to the authors of the Italian paper, which specifically references the rate of convergence. Their axiom is explicitly phrased as a claim about the rate of convergence.
Didn't you even read my post that you claim to be replying to? I'm really baffled here. Please go back and read what I wrote.
[b]They explicitly talk about the rate of convergence as being significant[/b]. That's one of my questions.
Quoting Mephist
Surely this is patently false; the noncomputable numbers serving as witnesses.
Quoting Mephist
They're implied by Cauchy completeness. I can check. I have checked. I could check here in public. There are only countably many Turing machines and uncountably many reals. That's the proof of the existence of noncomputable reals.
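The counting argument can even be sketched in code. A finite illustration of Cantor's diagonal (the toy listing below is my own invention, standing in for an arbitrary enumeration): flipping the n-th bit of the n-th sequence yields a sequence that differs from every listed one.

```python
# Cantor's diagonal argument, sketched on a finite chunk. `listing` is a
# hypothetical toy enumeration of 0/1 sequences; the diagonal construction
# works against ANY enumeration, which is the whole point.
def listing(n):
    return lambda k: (n >> (k % 8)) & 1   # the n-th "sequence"

def diagonal(length):
    # Flip the n-th bit of the n-th sequence.
    return [1 - listing(n)(n) for n in range(length)]

d = diagonal(100)
differs = all(d[n] != listing(n)(n) for n in range(100))
print(differs)  # d is not the n-th sequence, for any n inspected
```

Since the Turing machines can be enumerated but the reals (by this argument) cannot, noncomputable reals must exist.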
Quoting Mephist
I don't know what that means. I can prove there are infinitely many primes and I don't have to check them all by hand. I can prove that every even number is divisible by 2 without checking them all by hand. Wiles can prove Fermat's last theorem without checking every quadruple (a,b,c,n) of positive integers by hand. So what?
Quoting Mephist
Completely separate topic, but math markup is the best thing ever and anyone studying math should take the time to learn it. The basics are very simple.
[math]e^{i \pi} + 1 = 0[/math]
On some websites you can Quote that and see how it's done; but this particular website quotes the marked up form and not the original markup, unfortunately.
The markup for the above is: e^{i \pi} + 1 = 0, enclosed in math begin and end tags in square brackets.
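Incidentally, the identity itself can be checked numerically. A one-liner sketch; floating point makes it only approximately zero, but astronomically close:

```python
import cmath

# Euler's identity e^(i*pi) + 1 = 0, checked in floating point.
# The result is not exactly zero (pi is rounded), but it is tiny.
value = cmath.exp(1j * cmath.pi) + 1
print(abs(value))
```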
Quoting Mephist
I know you're passionate about your point of view, but I don't know what your point of view is. I've asked you repeatedly to state a clear and concise thesis so that I can understand where you're coming from.
I gather that your point is that ZF is not constructive and that Coq is. That's fine. But my understanding is that the constructive reals can't contain all of the standard reals and that therefore they can't be Cauchy complete. That's the point I'm trying to understand. You're claiming the constructive reals are Cauchy complete and I most certainly have not seen a proof of that fact nor do I believe any proof could exist. I'm happy to be shown the error of my thinking.
Quoting Mephist
Ok. Question: How can any model of the reals built on constructive principles be Cauchy complete?
I have made progress on the Italian completeness axiom and that's the focus of my efforts at the moment.
ps -- I found a paper of interest:
https://arxiv.org/abs/1510.00639
On the Cauchy Completeness of the Constructive Cauchy Reals
Abstract: "It is consistent with constructive set theory (without Countable Choice, clearly) that the Cauchy reals (equivalence classes of Cauchy sequences of rationals) are not Cauchy complete. Related results are also shown, such as that a Cauchy sequence of rationals may not have a modulus of convergence, and that a Cauchy sequence of Cauchy sequences may not converge to a Cauchy sequence, among others."
I think I may find some clues here.
So, I'll try to change my style of writing: go straight to the answer of only one specific question and keep the post short and focused on one question at a time.
The Italian paper is about a formalization of real numbers in Coq:
"We have formalized and used our axioms inside the Logical Framework Coq" (from the first page).
That's why I was speaking about Coq
Answer: because Cauchy completeness is assumed as an axiom of the theory (this is not a model of the reals, because the real numbers are described axiomatically). You can argue about the consistency of the theory, but you cannot argue about Cauchy completeness. Cauchy completeness is assumed as a hypothesis.
Let me rephrase the question: "If a sequence is Cauchy in ZFC, is it Cauchy in intuitionist math?"
This question is too vague to have a yes/no answer:
"a sequence is Cauchy in ZFC": we know what that means.
"a sequence is Cauchy in intuitionist math" I don't know what you mean.
The adjective "intuitionist" is a property of a logic. You should say of what theory of real numbers (formulated in that logic) you are referring to.
I can try to interpret it as "a sequence is Cauchy in the theory of real numbers of the Italian paper".
Answer: If you take a Cauchy sequence in ZFC, you have a set of sets such that... (a proposition about that set of sets). Not all sets of sets that are expressible in the language of ZFC have a corresponding term of type R (the type of real numbers) in the language used in the theory of the Italian paper. So, you cannot really compare them.
Put it another way: which logic do you want to use to compare the two sequences? The first one is expressed in first order logic, the other one in Coq. (If you don't want to speak about Coq, please choose another concrete intuitionistic logic and model of real numbers. There are too many of them to be able to speak in general.)
The definition of real numbers in the Italian paper is on the third page. The last axiom of that definition is this one:
[math]completeness: \forall f: N \to R. \exists x \in R. (\forall n \in N. near(f(n), f(n+1), n+1)) \to (\forall m \in N. near(f(m), x, m))[/math]
Is that what we are speaking about? ( finally I learned how to write symbols :-) )
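To get a feel for the shape of the axiom, here is a numeric sketch. Assumption flagged loudly: I read near(x, y, n) as |x - y| < 2^(-n), which is only one plausible convention; the paper's actual definition may differ, and the concrete sequence below is my own choice.

```python
# ASSUMPTION: near(x, y, n) is read as |x - y| < 2**(-n). The paper may
# define it differently; this is one plausible convention, used only to
# see the shape of the axiom on a concrete sequence.
def near(x, y, n):
    return abs(x - y) < 2.0 ** (-n)

def f(n):
    # Partial sums of 4**(-k), converging to 4/3.
    return sum(4.0 ** (-k) for k in range(n + 1))

x = 4.0 / 3.0  # the limit

# Hypothesis of the axiom: consecutive terms are near at rate n+1.
hypothesis = all(near(f(n), f(n + 1), n + 1) for n in range(40))
# Conclusion: every term is near the limit x at rate m.
conclusion = all(near(f(m), x, m) for m in range(40))
print(hypothesis, conclusion)
```

Under this reading, the axiom packages a rate of convergence into the very definition of "Cauchy", which seems to be what makes it constructive in flavor.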
[ I don't want to address too many points because I'll go off the road again. So, I'll wait for an answer about these ones for the moment ]
Hi @Mephist. I didn't read your latest posts but I had another little insight.
* First, I must say that as much as I've been aggressively rejecting your remarks about Coq, that is only because I'm not yet ready to receive the information. First I need to grok the essence of this constructiveness business; and I have found that Cauchy completeness is a bridge from math that I know, to constructive math that I'm trying to understand. So I'm "On a Mission" and can't be distracted.
* On the other hand, if and when the day comes that I am ready to learn about Coq -- which I confess I've been interested in from afar since I started watching Voevodsky videos -- I will start at the beginning of this thread and read every word you've written and follow every link! Because you are giving a master class in how someone can think about Coq in the framework of constructive math.
So I didn't want to give you the idea that I think your Coq material is anything less than awesome. It's just that I'm on a Mission right now and must focus on one thing.
* And what is that thing? It's Cauchy completeness.
* And I figured something out tonight. Just from eyeballing that paper I linked earlier about Cauchy completeness in the constructive reals. This paper.
https://arxiv.org/abs/1510.00639
And this is what I'm getting, if I am getting this right. You know how these days it's so easy to learn "about" things, without learning the things. One reads a Wiki page, one gets the gist; but without putting in the years it takes to actually own the material. But we can still make connections among our "knowing about" facts and ideas. This is kind of epistemological. Knowing stuff (years of study) versus knowing about stuff, as in eyeballing Wiki for five minutes.
So with that disclaimer, this is what I think I understand:
The constructive reals are Dedekind complete but not Cauchy complete.
Now this is something I can sink my teeth into because it's math I can investigate. That's why I have to find my own learning path through this material. When people start talking about Martin-Löf type theory and lambda calculus, it does me no good because I never learned those things. But I know a lot of other things.
For example, in the standard real numbers, Cauchy and Dedekind completeness are equivalent. You have one if and only if you have the other. And so, here is a confession: I don't even know what Dedekind completeness is. That's because I know what Cauchy completeness is and Dedekind completeness is just another equivalent form. So I never bothered to pick it up.
But now it turns out (if I'm reading that latest paper correctly, and this is all from a very cursory read) that in the intuitionist setting, they are not the same. That was a direct quote, I think, from the paper.
So now I need to look up Dedekind completeness, get warmed up by proving its equivalence to Cauchy completeness, and then start to attempt to grok what it means to be Dedekind complete but not Cauchy complete in an intuitionist setting (whatever that means!)
* I hope you can see that I'm on a very focused path right now. I want to grok the "meaning" of constructivism. If it's subtly different from computability, I want to at least understand the difference.
So someday I will read all the Coq stuff; but from now on you should just be aware that you're writing it for yourself and others that may now or someday be interested. But it's going right over my head.
* Finally I just want to say that with all the back and forth, and I do note that we both tend to the wordy side, this current post of mine represents my latest thinking about everything; and all prior comments are null and void.
* This is it: The constructive reals are Dedekind complete but not Cauchy complete. When I understand that I will be enlightened.
Quoting fishfry
OK! as you wish :smile:
Quoting fishfry
No problem!
Quoting fishfry
Actually, I find Coq much easier than the paper that you found! But everything depends on your background, I guess.
Quoting fishfry
OK, let's focus on Cauchy completeness.
Quoting fishfry
Hmm.. I am sorry that I always have to disagree with what you say, but in my opinion this is not exactly what that paper says :sad:
Let's read the abstract: "It is consistent with constructive set theory (...) that the Cauchy reals (...) are not Cauchy complete".
That is exactly equivalent to saying this: "It is not possible to prove with constructive set theory (...) that it is not true that the Cauchy reals (...) are not Cauchy complete."
To say that a proposition is consistent with a theory means that it's not possible to prove that the proposition is false in that theory. It doesn't mean that it's possible to prove that the proposition is true.
In fact, the proposition "Cauchy reals (...) are not Cauchy complete" cannot be proved with the constructive set theory that he considers.
How do I know? Because the axioms of IZF_Ref (the constructive set theory that the paper is speaking about) are provable in ZFC, and the rules of IZF_Ref are the same rules of ZFC without Excluded Middle (here is a good reference for the axioms: https://plato.stanford.edu/entries/set-theory-constructive/axioms-CZF-IZF.html).
Then, all theorems that are provable in IZF_Ref are provable even in ZFC: just use ZFC axioms to prove IZF_Ref axioms, and then apply the same rules as the original theorem (ZFC has all rules of IZF_Ref, then it can be done).
So, if "Cauchy reals (...) are not Cauchy complete" were provable in IZF_Ref, it would be provable even in ZFC. But, as we know, in ZFC this is provably false.
In fact, in some models of IZF_Ref the reals are not Cauchy complete (the two models that he considers), and in other models of IZF_Ref (the standard ZFC reals) they are Cauchy complete.
Quoting fishfry
Dedekind completeness is simpler than Cauchy completeness to formulate in set theory, but practically impossible to use in analysis. Basically, this is the thing: if you build "Dedekind cuts" of rational numbers you obtain the real numbers (this is not part of the theorem, but the definition of real numbers in ZFC), but if you build "Dedekind cuts" of real numbers you obtain again the same real numbers. The same thing happens with Cauchy: taking limits of rationals you obtain reals, but taking limits of reals you obtain again reals (the set is closed under the operations of taking limits and forming Dedekind cuts).
A Dedekind cut is simply a partition of an ordered set into two non-empty sets that respects the order relation (any element of the first set is smaller than any element of the second set).
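A quick sketch of that definition in Python (my own illustration; the predicate name is made up). A lower cut can be represented by a membership test on the rationals; the classic example below has its "gap" exactly at √2:

```python
from fractions import Fraction

def in_lower_cut(q: Fraction) -> bool:
    """Lower half of the Dedekind cut for sqrt(2): every rational q with
    q < 0 or q*q < 2. Every element here is smaller than every rational
    outside the cut, and the cut has no largest element -- that "gap" in
    the rationals is exactly the real number sqrt(2)."""
    return q < 0 or q * q < 2

# The partition respects the order: 1.41 falls below the cut, 1.42 above it.
print(in_lower_cut(Fraction(141, 100)), in_lower_cut(Fraction(142, 100)))  # True False
```

The point of the construction is that even though every test here happens inside the rationals, the cut itself pins down an irrational number.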
Quoting fishfry
OK. In my opinion, intuitively it means that they can be the same thing as the standard ZFC reals, but can even be something very different. Simply, there are more possible "forms" for the object called "set of real numbers". And there is even another problem with the definition of convergent Cauchy sequences in this article: they are not sequences of rational numbers (as in the standard definition) but sequences of pairs made of a real number plus a function (from page 2: "So a real is taken to be an equivalence class of pairs
Quoting fishfry
OK, I'll not look at your previous posts any more :wink:
1. Imagine a backwards travelling, counting, eternal, time traveller
2. Assume that past time is infinite (I do not think it is, but for the sake of argument)
3. Then the traveller will, from our perspective, have counted every number
4. Seems unbelievable (no greatest number), but that's the nature of infinity
5. If this is not an actually infinite process, I’m not sure what would qualify?
6. Every number has been counted and no number corresponding to 'actual infinity' has been encountered
7. So it seems we can conclude that actual infinity is not a number?
You will have to explain that.
I believe there is constructivism - a minority view in maths - which rejects actual infinity.
I believe the vast majority of mathematicians accept actual infinity as a number - see the transfinite numbers from set theory.
"Transfinite numbers are numbers that are "infinite" in the sense that they are larger than all finite numbers, yet not necessarily absolutely infinite. The term transfinite was coined by Georg Cantor, who wished to avoid some of the implications of the word infinite in connection with these objects, which were, nevertheless, not finite. Few contemporary writers share these qualms; it is now accepted usage to refer to transfinite cardinals and ordinals as "infinite". However, the term "transfinite" also remains in use."
- https://en.wikipedia.org/wiki/Transfinite_number
???
1. Aleph-naught is the size of the set of naturals
2. Sets contain a positive number of whole items only
3. So Aleph-naught must be a natural number
4. But there is no largest natural number
5. So Aleph-naught cannot exist (or be larger than all the natural numbers)
I see that there is a lot of misunderstanding about constructivism meaning the rejection of actual infinity.
Constructivism is not about the rejection or acceptance of actual infinity; it's about choosing computable functions as a fundamental logical concept (that is, implemented in the rules of logic), as opposed to the more abstract idea of functions that is implied by the classical axiom of choice.
But this, in my opinion, is more a question of practical convenience in simplifying the proofs (mainly in topology), rather than a philosophical point of view: you can always reason about computable functions by using the internal logic of a "topos" in category theory, even taking non constructive logic as fundamental. And you can easily axiomatize the standard functions (the one corresponding to the classical set theory with the axiom of choice) by using constructivist logic. Only that, in my opinion, the latter choice is conceptually simpler and easier to use, at least for the branches of mathematics directly related to topology.
If you choose logical rules to represent computable functions, you get constructive logic.
If you choose logical rules to represent abstract functions (in the sense of input-output correspondence, not necessarily computable), you get the standard non-constructive logic.
In summary, it's only about the choice of which kind of functions you choose to be "fundamental".
"computable functions are exactly the functions that can be calculated using a mechanical calculation device given unlimited amounts of time and storage space"
https://en.wikipedia.org/wiki/Computable_function
It strikes me that it is impossible to specify a function that a computer could execute that would result in actual infinity as the output. No amount of successive addition or multiplication yields actual infinity.
Any size of a set must be a natural number. So the definition of Aleph-naught as 'not a natural number' is plainly contradictory.
https://www.youtube.com/watch?v=oBOZ2WroiVY
Goodstein's theorem is a theorem about a computable function that cannot be proved without assuming the existence of actual infinity.
P.S. Don't ask me why: I don't understand it either.. :smile:
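For what it's worth, the Goodstein sequence itself is easy to compute for small seeds; it's only the proof that it always reaches 0 that needs transfinite ordinals. A minimal Python sketch (the function names are my own):

```python
def bump(n, base):
    """Write n in hereditary base `base`, then replace every occurrence
    of `base` by `base + 1` (including in the exponents, recursively)."""
    if n == 0:
        return 0
    result, exponent = 0, 0
    while n > 0:
        n, digit = divmod(n, base)
        if digit:
            result += digit * (base + 1) ** bump(exponent, base)
        exponent += 1
    return result

def goodstein(seed, max_steps=100):
    """The Goodstein sequence: repeatedly bump the base, then subtract 1."""
    seq, base, n = [seed], 2, seed
    while n > 0 and len(seq) <= max_steps:
        n = bump(n, base) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3))      # [3, 3, 3, 2, 1, 0] -- reaches 0 quickly
print(goodstein(4)[:4])  # [4, 26, 41, 60] -- this one takes astronomically many steps
```

The theorem says every such sequence eventually hits 0, yet (as the video explains) that fact is unprovable in Peano arithmetic alone.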
I’m considering the number of points between 0->1 and 0->2. I’ll denote these as functions: points(0,1) and points(0,2).
One of the following must hold true:
1. points(0,1) = points(0,2)
2. points(0,1) < points(0,2)
3. points(0,1) > points(0,2)
I think by common sense, we can eliminate [3] and focus on [1]:
points(0,1) = points(0,2)
points(0,1) = points(0,1) + points(1,2)
If there is a finite number of points in each interval, space must be discrete, so I will assume an infinite number of points in each interval:
[math]\lim_{n\rightarrow\infty} n = \lim_{n\rightarrow\infty} n + \lim_{n\rightarrow\infty} n[/math]
[math]\lim_{n\rightarrow\infty} n = 2 \lim_{n\rightarrow\infty} n[/math]
(1+1+1+1+…) = 2 x (1+1+1+1+…)
1 = 2
This last step, dividing through by ∞, is not conventionally allowed, presumably because it's argued that there are different kinds of infinity. I would have thought (wrongly?) that because I am dividing through by an identical kind of infinity to the infinity in the expression then it should be allowed?
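For what it's worth, the conventional answer isn't about different kinds of infinity so much as about cancellation. In Cantor's cardinal arithmetic it is a theorem that [math]\aleph_0 = \aleph_0 + \aleph_0 = 2 \cdot \aleph_0[/math], but the cancellation law [math]a \cdot x = a \cdot y \Rightarrow x = y[/math] is only valid when [math]a[/math] is finite and non-zero: [math]\aleph_0[/math] has no multiplicative inverse, so "dividing through by infinity" is not a defined operation, even when both sides contain the very same infinity.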
So [1] does not seem possible, that just leaves [2]:
points(0,1) < points(0,2)
Which makes sense, but it implies that space cannot be a true continuum. There are more points in the larger interval, implying that the larger interval is structurally different from the smaller interval, whereas for a true continuum they must be identically structured.
It could be that a point is not the fundamental unit of space, but the argument I think would be identical for a segment.
If you want to show that continuum space implies a contradiction, you should start by assuming as hypothesis that "space is continuum AND one of the three points is true".
So how do you show that "space is continuum and point 1 is true" implies a contradiction?
The argument that you are making with the limits assumes that space is infinite but discrete, right?
How do you rule out that "space is infinite and continuum" ?
A piece of space is discrete if it allows only a finite number of possible positions (points). So I've assumed an infinite number of possible positions - if it's not discrete, it must be continuous. From Wikipedia:
"Formally, a linear continuum is a linearly ordered set S of more than one element that is densely ordered, i.e., between any two distinct elements there is another (and hence infinitely many others), and which "lacks gaps" in the sense that every non-empty subset with an upper bound has a least upper bound."
So all continua are (in the above sense) alike in that they can be subdivided forever, so we can write:
points(0,1) = points(0,2)
At this point, there is a one-to-one correspondence between the left and right side so they have the same 'size' so the equals sign seems maybe justified.
points(0,1) = points(0,1) + points(1,2)
Now there is still a one-to-one correspondence between the left and right side; however, when it is written like this, it appears to depart from common sense. I know this corresponds to the convention ∞ = ∞ + ∞. I also know that if two things are identical and you change one of them, then they cannot be identical anymore.
I just cannot get my head around continua, they just seem impossible. It is possibly the use of the equals sign to represent a one-to-one correspondence that is the problem. It may be that it is invalid logically to write:
points(0,1) = points(0,2)
It comes back to Galileo's paradox - the above are equal in the sense of a one-to-one mapping but at the same time, one is clearly twice the other. I think that it is not valid to compare the size of two infinities (as Galileo believed) - they are fundamentally undefined, so have no size and cannot be compared. If something never ends, then it can never have a size and never be fully defined. I do not believe Cantor has added anything to our understanding of infinity - he has detracted from it - Galileo was on the right lines.
So coming back to my original starting place, my assumption that one of the following must be true:
1. points(0,1) = points(0,2)
2. points(0,1) < points(0,2)
3. points(0,1) > points(0,2)
seems incorrect. It seems I should instead have written:
1. UNDEFINED != UNDEFINED
2. UNDEFINED !< UNDEFINED
3. UNDEFINED !> UNDEFINED
So I think that maths cannot model actual infinity or continua. Does that mean these things do not exist in the real world? I think that may be the case. If continua exist, then that implies that the informational content of 1 light year of space is the same as the informational content of 1 centimetre of space - in the sense that both 'containers' record the position of a particle to an identical, infinite, precision. This flouts 'the whole is greater than the parts'. I trust that axiom more than I trust Cantor's math.
But to be "discrete" does not mean to be "finite" (https://en.wikipedia.org/wiki/Discrete).
Discrete means that it is made of parts that are distinct from each other: it can be finite, but not necessarily. The set of natural numbers is discrete but infinite. In fact, I think discrete is probably always synonymous with "countable": it means that you can associate each element of a discrete set with a natural number, so there are as many elements as there are natural numbers.
A linear continuum instead is made of "non-separable" elements: you can shrink or expand the length of a segment, but you can't "separate" its points from each other: the elements of a continuous set are not "countable".
Quoting Devans99 OK
Quoting Devans99
Galileo's paradox is about positive integers, not about continuous sets. In fact, I believe that the idea of a "continuous set" had not even been invented in the XVII century. For what I know, Euclidean geometry never speaks of a line being a set of points: for Euclidean geometry, 1-dimensional objects (lines) are a completely different kind of thing than discrete (countable) objects. And Galileo does not even consider the idea that a line can be made of a set of distinct objects. For what I know, the idea of continuous (uncountable) sets was introduced by Cantor, 200 years after Galileo.
Quoting Devans99
So, if I understand correctly, you are trying to prove that the set of points of a line is finite (not countably infinite). Is it right?
If that's what you are arguing (that a line is made of a finite set of points), the obvious question is: how many points are there in a given segment?
In this instance, I am referring to finite distances of space. If a finite distance of space is also discrete then it cannot be decomposed into infinite sub-sections.
Quoting Mephist
I think Galileo's paradox is applicable to continua. For example, to establish a one-to-one mapping between two continua:
{0, 1} maps to {0, 2}
{0, 1/2, 1} maps to {0, 1, 2}
{0, 1/4, 1/2, 3/4, 1} maps to {0, 1/2, 1, 3/2, 2}
etc...
At the same time we can note that 0->1 is half the length of 0->2. So Galileo's paradox appears to apply - the continua are equal in terms of a one-to-one mapping but unequal in terms of one being twice the length of the other.
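That pairing can be made completely explicit: the map x ↦ 2x matches the two intervals point for point, and each finite stage listed above is just this map restricted to a grid. A throwaway Python sketch (names are mine):

```python
from fractions import Fraction

def f(x):
    """One-to-one pairing of [0, 1] with [0, 2]: each x is matched with 2x."""
    return 2 * x

def f_inv(y):
    """The inverse pairing, showing the correspondence runs both ways."""
    return y / 2

def stage(n):
    """The n-th refinement from the post: pair the k/2^n grid of [0, 1]
    with the corresponding (twice as coarse-looking) grid of [0, 2]."""
    return [(Fraction(k, 2**n), f(Fraction(k, 2**n))) for k in range(2**n + 1)]

print(stage(2))  # pairs {0, 1/4, 1/2, 3/4, 1} with {0, 1/2, 1, 3/2, 2}
```

So both facts from Galileo's paradox are visible in the same code: the pairing never misses or reuses a point, yet every paired value is twice its partner.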
Quoting Mephist
Yes I suspect that space is discrete... proving it would be nice (so would winning the lottery!).
If reality is discrete and a line segment (in reality) is composed of points (or fixed sized line segments) then the length of each point/sub-segment is some number greater than zero. The Planck length is often mentioned in this regard.
But this means that points in geometry should have a size that is not zero, right? So, another question: what's the shape of a point? A 3-dimensional sphere? But you cannot fill space with 3-dimensional spheres (there would be gaps between them). They should rather be like cubes.
In 1 dimension it could work: finite segments can be attached to each other with no gaps. But if you try to build 2-dimensional geometry with squares in place of points, everything becomes much more complex:
- two "intersecting" lines do not necessarily have a point in common
- space cannot be isotropic (meaning: the same in all directions)
- the concept of a straight line becomes very difficult to define if the slope of the line with respect to the grid is not a rational number.
Basically, you lose most of the symmetries of space. And symmetries seem to be a fundamental fact of nature. How do you deal with this problem?
With QM, we have waves (and I suspect a particle is just a compressed wave) and so the waves would be centred on one of the grid points.
All of this would take place down near Planck length, so the universe would appear completely continuous to us.
Loop quantum gravity - the competitor of string theory - has space as discrete.
Are you familiar with these concepts? Let me know what you think of the videos.
Imagine a backwards travelling, counting, eternal, time traveller. Assume that past time is infinite. Then the traveller, should have, from our perspective, counted every number! If this is not an actually completed infinite process, I’m not sure what would qualify. But it is impossible to count all numbers - no matter how many you count, you are always 0% of the way to completion. So it seems that even given an actual, completed infinity of time, actual infinity is not realisable/completable. Which makes me think an actual infinity of time is impossible.
Hell, neither of us in our lifetime could ever write enough zeros to write a googolplex properly, 10^10^100. We could never realise it in writing, sure, but it is a concept within our universe of discourse, and we don't know if the universe is of a finite or infinite size. Can't know is more accurate here, really.
But as you mention, it's impossible to count to infinity (from the traveller's perspective, its impossible to get more than 0% of the way to counting to infinity).
I guess to me infinity does not make sense as a concept, perhaps this scenario brings that out.
The reality is that numbers as we understand them will always be finite as long as we are: each new highest number written in sequence can consider itself discovered by those of us who use numbers and are counting them out. However, in between 0 and 1 you could also count out decimal places without end, never reaching 1. Numbers are infinitely divisible, with the exception of zero.
The universe expanding and the universe being actually infinite seem to me contrary.
I am a finitist and I do not believe in different sizes of infinity - a minority viewpoint.
Who said anything about different sizes of infinite? Are you just trying to be provocatively edgy without a logical argument to back up the belief? Because you come off as incredibly arrogant when you do this.
Our best observations to date strongly suggest that the universe has no spatial curvature. It may be expanding in time, but the geometry of space at any given time, is Euclidean.
The simplest topology that corresponds to Euclidean geometry is that of flat, infinite space. So by Occam’s razor, we can conclude that in the absence of evidence to the contrary, the universe appears infinite.
Now the desire to engage with these ideas is great, but I sincerely hope that if I'm taking the time to recommend reading to you (like Cohen's Preface to Logic, for the third time) then you are at least going to look it up. If you aren't going to listen and are just here to waste time with edgy nonsense for the sake of placating your ego, then I will no longer interact with you. I'm here to learn and help others learn.
But the wave functions in quantum mechanics are not supposed to have the size of the particle that they represent. For example, electrons are supposed to be point-like (or at least smaller than any detectable size), but the relative wave functions inside atoms are around [math]10^{-10}[/math] meters, and are even visible using scanning tunneling microscopes.
And it gets even worse: in quantum mechanics, if you make the space variables discrete, the momentum variables become continuous and periodic: a zero size for both position and momentum is forbidden by Heisenberg's uncertainty principle.
Quoting Devans99
I don't know loop quantum gravity (except from some vague descriptions), but I am quite sure that it makes use of differential equations defined on continuous domains, like in all physics. If you use discrete values for the space variables, you cannot use derivatives on space variables.
Anyway, in any theory compatible with quantum mechanics events must be associated with a probability amplitude (https://en.wikipedia.org/wiki/Probability_amplitude), that is a complex number. And complex numbers are continuous. And you even need variables for fields, energy, momentum, etc.. I don't think there is any realistic theory of physics that has been defined without making use of continuous variables.
I think that our theories use continuous variables as an excellent approximation for a reality that is discrete on such a fine level that it appears continuous to us (and thus continuous theories give results very close to reality). So continuous calculus gives good results but does not reflect the micro nature of reality correctly. This seems to be how classical physics works - the classical approximation of (what we now know to be) discrete matter are good enough because discreteness of matter only manifests its effects in the micro world.
I believe that if we ever hit on a ToE then it will be a discrete theory and utterly useless for predicting behaviours in the macro world - a quantum theory of gravity would never be used to try to predict the orbits of the planets for example - we would stick with Newton/Einstein for that. It would be useful in understanding the early moments of the Big Bang though.
If I had a reasonable explanation of what the collapse of the wave function means I would immediately publish it and I would become the most famous physicist of the world :grin:
In my opinion the most likely interpretation of quantum mechanics should be a somehow refined version of the many-worlds interpretation ( https://en.wikipedia.org/wiki/Many-worlds_interpretation ). But, as it is now, even this interpretation doesn't make sense. A collapsed wave is not smaller than an uncollapsed one: a collapsed wave is part of classical mechanics, and all experiments are part of classical mechanics. Quantum mechanics simply does not describe experiments. It describes the evolution of any physical system as the evolution of an abstract mathematical object (a vector in an infinite-dimensional vector space, called the state-vector) that does not correspond to any real physical object. And you can use that object to predict the probability of an experiment by the "trick" of making the state-vector (or wave-function) collapse, but the experiment itself must be described using classical mechanics, not quantum mechanics. So, practically, inside quantum mechanics there are no "collapsed wave functions". The collapsed wave function is the mathematical "trick" used to make a state-vector (or wave-function) in some way "visible" to a classical-physics observer.
Quoting Devans99
Yes, exactly. But I think that in any case, whatever theory you use, physical results will always be only approximations. You can say that if you measure quantities that are fundamentally discrete, you can measure them with exact precision. But in quantum mechanics there is the problem that for very high degrees of precision you cannot be sure that the measure is correct: results have only a statistical meaning.
However, even if it were possible to create a model that is fundamentally discrete in all variables and described reality with absolute precision, it probably would be so complex that it would be impossible to use!
The advantage of using infinity and differential equations instead of finite-difference equations (https://en.wikipedia.org/wiki/Finite_difference) is that calculations become much, much simpler. Without infinities, there are no symmetries. And without symmetries the laws of physics become a complete mess, impossible to use (even if they were the most accurate possible description of the real world).
P.S. I have a very good example of this kind related to computer-science that probably is easier to understand: computers are not able to represent natural numbers exactly, because memory locations are finite. But when you use them, you treat them as if the memory locations were infinite. You don't take into account what happens if the digits don't fit into memory: simply you take a bigger memory location so that you don't have to worry about the size.
Why? Because finite-precision arithmetic is much more complex than regular arithmetic, and if you had to take into account the finite size of memory locations, it would become impossible to reason about programs. Well, a very similar thing happens with the use of infinities in physics.
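The contrast can be made concrete in a few lines. Python integers act as if memory were unlimited, while real machine words wrap around (the 8-bit adder below is a toy model of fixed-width hardware, not any particular machine):

```python
# Unbounded integers: arithmetic behaves like ideal, "infinite-memory" arithmetic.
big = 2**64 + 1
print(big * big == 2**128 + 2**65 + 1)  # True -- exact, no overflow

def add_u8(a, b):
    """8-bit unsigned addition: results wrap around modulo 256, which is
    what honouring 'finite memory locations' actually looks like."""
    return (a + b) % 256

print(add_u8(255, 1))  # 0 -- wraps around instead of reaching 256
```

Reasoning about the first style is simple algebra; reasoning about the second means tracking overflow everywhere, which is exactly the kind of mess Mephist describes.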
Quoting Mephist
Ok I've satisfied myself on the use of the rate of convergence in the Italian paper. The short answer is that by controlling the rate of convergence, the function that inputs [math]\epsilon[/math] and outputs a suitable [math]N[/math] in the Cauchy definition is a computable function. Now that makes sense and I see what they're doing.
But this necessarily leaves out all those Cauchy sequences whose convergence rates are not so constrained; and the limits of those sequences cannot be Italian-Cauchy reals. But they are standard reals (being the limits of Cauchy sequences). So what the constructivists call Cauchy complete cannot be actual Cauchy completeness. I am pretty sure about this. If the constructivists don't care, all fine and well. A lot of smart people are constructivists these days so I'm sure the fault is my own. But the Italian definition of a Cauchy sequence omits a lot of more slowly converging sequences, unless I am mistaken (I haven't fully gotten to the bottom of this yet).
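To make "computable modulus" concrete, here is a toy Python sketch (my own example, not from the paper): for the partial sums of 1/2^k, the epsilon-to-N function is a short, obviously computable program.

```python
from fractions import Fraction

def a(n):
    """Partial sums of 1/2^k: a Cauchy sequence of rationals with limit 2,
    satisfying |a(n) - 2| = 1/2^n."""
    return sum(Fraction(1, 2**k) for k in range(n + 1))

def modulus(eps):
    """A computable modulus of convergence: returns N such that
    |a(n) - 2| < eps for every n >= N."""
    N = 0
    while Fraction(1, 2**N) >= eps:
        N += 1
    return N

N = modulus(Fraction(1, 1000))
print(N, abs(a(N) - 2) < Fraction(1, 1000))  # 10 True
```

The restriction being debated is exactly that a Cauchy sequence with no such computable `modulus` does not count as Cauchy in the constructive setting.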
Quoting Mephist
Aha! Well that makes perfect sense. I didn't read the whole paper, just latched onto their completeness axiom.
Quoting Mephist
Yes but what the Italian paper calls Cauchy completeness is not what I would call Cauchy completeness. They include only the limits of a small class of Cauchy sequences, namely those that converge at a particular rate. Again if this actually includes more than I think it does then I'll correct myself later, but as I understand it, they require the function that inputs epsilon and outputs N to be computable, and that must necessarily leave out a lot of Cauchy sequences. As I understand it.
Quoting Mephist
The Italian paper definition. Or in general, any theory that requires the epsilon-N mapping to be computable.
Quoting Mephist
That works. Or in general, in any constructive theory in which the "modulus" of the Cauchy sequence, which is their name for the epsilon-N mapping, must be a computable function.
Quoting Mephist
I don't know what your set of sets refers to. But if the modulus of a Cauchy sequence must be computable, that necessarily omits all those Cauchy sequences whose modulus isn't computable. That seems to be the entire point.
Quoting Mephist
The meaning of a computable modulus is perfectly clear to me. It's a significant restriction, or at least seems like a significant restriction, in the definition of a Cauchy sequence. It seems that there must exist sequences (of rationals, say) such that the sequence is standard-Cauchy but not constructive-Cauchy.
Quoting Mephist
Yay on the symbols!! Best thing since sliced bread. My handwriting always sucked. I wish they'd had that back in the day.
Sorry I didn't read that far in the paper but I'll review that part. But my main point stands. If the modulus must be computable then the constructive reals are missing a lot of limits of standard-Cauchy sequences.
Quoting Mephist
Ok it's all good!
Quoting Mephist
Ok I'll stop here because the next thing you wrote is lengthy and I have to digest it.
I don't want to get sidetracked into all that but every time I take a look at these ideas, I come away thinking that the physicists have a very different concept of infinity than the mathematicians do. The "flat universe" argument leaves me ... flat. If I could only make one brief response to that idea, it's that the supporting observations only apply to (by definition) the observable universe, which is a tiny part of the universe. So at best the observable universe appears to have a flat geometry, which leaves a lot of options on the table. An infinite universe is by no means the Occam choice. The assumption of infinity is very strong and is not necessary. It can be replaced by, for example, a toroidal topology. Everybody knows this. The infinite universe is a weak argument.
Quoting Mark Dennis
And besides .. you can never observe the evidence to the contrary if it's outside the observable universe. So this is one of those self-fulfilling ideas that is popular but unprovable.
Yes, you are right. This leaves out all Cauchy sequences whose convergence rates are not computable. Because in ZFC you can define non-computable functions by using the axiom of choice, which is not available in constructive mathematics. These (the ones left out) are the Cauchy sequences that are not expressible as an explicit formula (or algorithm), and therefore are not expressible at all in constructivist mathematics. Put another way: non-constructive mathematics has more functions than constructive mathematics. The additional functions are the ones that you can define only by using the axiom of choice.
But I would like to stress an essential point that maybe you missed: the axiom of choice is not incompatible with constructivist mathematics: it's only an independent axiom (not refutable and not provable). So, if you add it to constructivist logic, you obtain... non-constructivist logic! This is not a limitation; it's only a choice to treat this axiom not as a "logical axiom" (one you are forced to use because it's built into the logic), but as an axiom of your theory (the theory of real numbers, for example). In this case the axiom can be seen as an "oracle function": a function that is not computable, that you assume to exist. In constructivist logic (at least the one used in Martin-Löf type theory, the one that I know) there are no logical axioms at all. Only logical rules! That, in my opinion, is an advantage, not a limitation: keep axioms out of logic and all logical symbols become mere definitions. The only primitive concept that remains as an integral part of logic is... computable functions!
Let me repeat it once again: you can get classical logic out of constructivist logic in a very simple way: just add an axiom to the theory! But if you want to reason about a universe where only computable functions exist using classical logic, the only way that I know is to use the internal language of topos theory (part of category theory). Because you can easily add an axiom to a theory, but you are not allowed to take an axiom out of the logic. It's just a much nicer "factorization" between what's part of the logic and what's part of the mathematical theory that you are building.
However, It would be a rather strange universe if it stops being flat outside of the observable universe. Unless you’re suggesting that it only appears to be Euclidean due to our observing of state/vectors?
I’m a pragmatist so I have a hard time really engaging with ideas until I know what purpose they serve to us now.
Still working on my response to the lengthy final part of your earlier post about whether the constructive reals are Cauchy complete. Such an interesting topic!
I just wanted to clarify one technical thing about what you wrote. This will be a standalone post so I can present a somewhat involved example at the end.
ZF is already nonconstructive. That is, there are three levels of specifying a set: computability, ZF, and ZFC. In other words, we don't just jump from computability to the axiom of choice. ZF already gives noncomputable sets. You made that minor error in your earlier post and then repeated it, so I wanted to clarify this. The three levels of identifying the elements of a set (if you think of it that way) are:
* Computability. We have a Turing machine that generates all and only the elements of the set;
* ZF. We have statements of existence of sets we can't compute. The powerset axiom for example is very powerful. Cantor's theorem is a theorem of ZF so that the powerset of the naturals must be uncountable; yet there are only countably many Turing machines. So most of the subsets of the naturals are noncomputable sets. No axiom of choice is needed for ZF-noncomputability.
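The diagonal argument behind Cantor's theorem can be made concrete. Here is a minimal Python sketch (the particular enumeration is hypothetical, standing in for any claimed complete listing of subsets of the naturals, with each subset encoded as its 0/1 characteristic function):

```python
# Each subset of the naturals is encoded as its characteristic
# function: a map n -> 0 or 1.

def claimed_enumeration(i):
    """A hypothetical 'complete' listing of subsets of the naturals.
    Here the i-th set is the set of multiples of i+1; any listing
    whatsoever would serve the argument equally well."""
    return lambda n: 1 if n % (i + 1) == 0 else 0

def diagonal(n):
    """Cantor's diagonal set: flip the n-th bit of the n-th listed set.
    The resulting set differs from every listed set somewhere."""
    return 1 - claimed_enumeration(n)(n)

# The diagonal set disagrees with the i-th listed set at position i,
# so it cannot appear anywhere in the listing:
for i in range(100):
    assert diagonal(i) != claimed_enumeration(i)(i)
```

Since the Turing machines can be listed but the subsets of the naturals cannot, most subsets of the naturals have no machine that decides them.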
One little fact I know about this is that although there are only countably many Turing machines, there are not constructively countably many. Although in ZF we can enumerate the Turing machines, no such enumeration can be computable! That's because a computable enumeration of TMs would effectively solve the Halting problem, which Turing showed can't be done. So a constructivist could deny that the set of TMs is countable. That's kind of what they're doing with Cauchy sequences, subtly redefining a technical term then saying they still satisfy the definition.
* Then the axiom of choice gives you yet another level of noncomputability. Have you seen the standard example? Let me just present it since this is a standalone post and perhaps some reader hasn't seen this.
Start with the real numbers [math]\mathbb R[/math]. Define the following relation [math]\sim[/math] on them. For two reals [math]x[/math] and [math]y[/math] we say that [math]x \sim y[/math] if and only if it happens to be the case that [math]x - y \in \mathbb Q[/math]; that is, the difference of [math]x[/math] and [math]y[/math] is a rational number.
We can verify that [math]\sim[/math] is an equivalence relation (proof by reader), so that it therefore partitions the reals into a collection of equivalence classes. In fact each equivalence class is countable, and there are uncountably many equivalence classes (again proof by reader).
The set of equivalence classes is usually denoted [math]\mathbb R / \mathbb Q[/math] and called "the reals mod the rationals."
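The relation [math]\sim[/math] can be played with concretely on a small slice of the reals. A sketch (the representation is mine, for illustration only): take numbers of the form [math]q + r\sqrt{2}[/math] with [math]q, r[/math] rational, stored as exact pairs. Two such numbers differ by a rational exactly when their [math]\sqrt{2}[/math]-coefficients agree, so the equivalence classes are visible directly.

```python
from fractions import Fraction

# A toy slice of the reals: numbers q + r*sqrt(2), stored exactly
# as pairs (q, r) of rationals.  (Illustrative only; the Vitali
# construction needs the full set of reals.)

def equivalent(x, y):
    """x ~ y  iff  x - y is rational.  For q1 + r1*sqrt(2) and
    q2 + r2*sqrt(2), the difference is rational iff r1 == r2."""
    (_, r1), (_, r2) = x, y
    return r1 == r2

a = (Fraction(1, 2), Fraction(0))   # 1/2
b = (Fraction(3), Fraction(0))      # 3
c = (Fraction(0), Fraction(1))      # sqrt(2)
d = (Fraction(7), Fraction(1))      # 7 + sqrt(2)

assert equivalent(a, b)        # 1/2 ~ 3: difference 5/2 is rational
assert equivalent(c, d)        # sqrt(2) ~ 7 + sqrt(2): difference 7
assert not equivalent(a, c)    # 1/2 - sqrt(2) is irrational
```

Here all the rationals land in one class, and each value of the [math]\sqrt{2}[/math]-coefficient gives another.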
Now by the axiom of choice there is a set, call it [math]V[/math], consisting of exactly one element from each equivalence class.
We know nothing about the elements of this set. Is 1/2 in it? We don't know, only that it contains exactly one rational (proof by reader). Is pi in it? I have no idea. The only thing I know about this set is that it exists. That's the classic example of pure mathematical existence. It exists, but not only can't its elements be computed, they can't even be identified by ZF. All we know is that the set exists.
[math]V[/math] by the way is the Vitali set, named after Giuseppe Vitali, who discovered this example in 1905.
It turns out that this set is nonmeasurable. There is no sensible way to assign it a length, as we can with intervals or complicated combinations of intervals of reals. Yet if we ban the axiom of choice, it's consistent that the real numbers are a countable union of countable sets. This would destroy the foundation of modern probability theory and everything that sits above it (statistical mechanics, quantum field theory, sociology, the state lottery, etc.)
I'm sure the constructivists have some workarounds but I gather they sometimes adopt weak forms of choice, because they need to. So the constructivists aren't so pure after all!
So to sum up, the next thing past computability is ZF, which already gives you sets that are arguably not computable (depending on who's arguing). Then AC gives you sets that are not only noncomputable, but beyond even the reach of ZF.
Then of course there are all the large cardinal axioms ... it's turtles all the way up!
Why???
What part of probability theory is inconsistent with the negation of the axiom of choice?
Here's an extract from https://en.wikipedia.org/wiki/Probability_theory (I don't have time to fix the formatting of the formulas now):
Modern definition: The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by [math]\Omega[/math]. It is then assumed that for each element [math]x \in \Omega[/math], an intrinsic "probability" value [math]f(x)[/math] is attached, which satisfies the following properties:

[math]f(x) \in [0,1] \text{ for all } x \in \Omega;[/math]

[math]\sum_{x \in \Omega} f(x) = 1.[/math]
Ok let me pick up my reply to your earlier post here so I am one post behind you. I am paddling as fast as I can through these murky (to me) constructive waters.
Quoting Mephist
Yes, yes, I misspoke a bit, and I understood at the time what it was saying. However I must admit that I don't believe them, which is why I paraphrased them in a way you feel is unfair. I think that the only way they can have a model of the constructive reals that's Cauchy complete is by modifying the definition of a Cauchy sequence along computable lines, that is, by constraining the rate of convergence. And in so doing, they must necessarily omit some sequences that are Cauchy in ZF but not in their computable formulation. This is my strong belief, but of course I may well be wrong, and that's what I'm trying to figure out.
In other words: They define a Cauchy sequence constructively, that is with a convergence rate that allows them to deduce a computable modulus. Remember the modulus is just the function that inputs [math]\epsilon[/math] and outputs a suitable [math]N[/math] in the definition of a Cauchy sequence.
Then they define the constructive reals as the completion of all these computable Cauchy sequences; and then they can prove that their reals are Cauchy complete. But it's not the same Cauchy. This is my working thesis for what's going on. If it turns out to be more subtle then I'll learn something.
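The sequence-plus-modulus packaging can be sketched with a concrete example (the names are mine, not from any constructive library): the partial sums of [math]\sum 1/2^k[/math], together with an explicit modulus that turns epsilon into a suitable N.

```python
from fractions import Fraction

# A constructive Cauchy sequence is a pair: the sequence itself,
# plus a modulus function that inputs epsilon and outputs a
# suitable N from the Cauchy definition.

def seq(n):
    """Partial sum 1/2 + 1/4 + ... + 1/2^n, converging to 1."""
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

def modulus(eps):
    """Return N such that |seq(m) - seq(n)| < eps for all m, n >= N.
    The tail beyond N is bounded by 1/2^N, so we search for the
    first N with 1/2^N < eps."""
    N = 0
    while Fraction(1, 2**N) >= eps:
        N += 1
    return N

eps = Fraction(1, 1000)
N = modulus(eps)                      # N = 10, since 1/1024 < 1/1000
for m in range(N, N + 20):
    for n in range(N, N + 20):
        assert abs(seq(m) - seq(n)) < eps
```

The constructive point is that the modulus is itself a computable function handed over with the sequence, rather than a mere existence claim.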
Quoting Mephist
Yes I understand. Sorry I was deliberately imprecise so as to make my point, leading you to believe I didn't understand what they were saying.
Quoting Mephist
I believe you, but I feel that they must be using the restricted definition and not the standard one.
Quoting Mephist
Ok. This is my next assignment then. I'll read the SEP article and try to grok IZF a little. I need to understand this point you are making. I don't understand it at the moment so I owe us both a response to this paragraph.
Quoting Mephist
Isn't that just because the constructive reals can't see all those ZF-Cauchy sequences that it's leaving out? This MUST be the case. Isn't it?
It's like, if I close my eyes I can prove there are no elephants in the room, even though if I were to open them, I'd see the elephant! The restricted definition of Cauchy sequences in the Italian paper bothers me greatly. It's missing a lot of ZF-Cauchy sequences. Sure it can prove that it doesn't see them. That doesn't prove they're not there, only that the constructivists need to open their eyes!
I must really be missing something deep and basic about constructivism. I hope you can take pity on the struggles of an old timer who had ZFC beaten into me by eminent mathematicians at some of our finest universities.
Quoting Mephist
Ah ... you are saying that the standard reals are a model of IZF_Ref. That is a very enticing remark. I'll spend some time on this. That's a clue for me to hang some understanding on.
Quoting Mephist
Yes I probably shouldn't have confessed to any ignorance, it's just that Dedekind completeness is too trivial to even think about in standard math. If you define the reals as the set of Dedekind cuts then every real is automatically a Dedekind cut. Having done that, they then prove the reals are Cauchy complete and have the least upper bound property. But nobody ever draws the implication chart among these properties because they're all trivially equivalent in ZF. I'd love to get more insight in this area.
Quoting Mephist
Right, Dedekind completeness is baked in so there's no need to talk about it.
Quoting Mephist
Yes and a good thing too!
Quoting Mephist
Yes yes.
Quoting Mephist
Throwing a marshmallow to a drowning man! I still don't see the distinction between Cauchy and Dedekind in the constructive setting.
Quoting Mephist
Ok perhaps this will be more clear when I dive into IZF, which I hadn't heard about till you mentioned it.
Quoting Mephist
Right. A constructive Cauchy sequence consists of the sequence, along with a computer program that lets you prove that it is in fact a Cauchy sequence. I can see why a constructivist would care. I just need to understand how they can convince themselves that they're still ZF-Cauchy complete. That's the part that's bothering me.
Wait I think you said that wrong. It's not a sequence of pairs (rational number, modulus function). Rather it's a pair (entire sequence, single modulus function). It's the sequence, plus the function that inputs epsilon and outputs a suitable N.
Quoting Mephist
Not that we haven't hit on a lot of interesting side topics, but we're making a lot of progress talking about Cauchy sequences so I'm happy to focus on that.
[quote="Mephist;304932"]Constructivism is not about the rejection or acceptance of actual infinity, but it's about choosing computable functions as a fundamental logic concept (that is implemented in the rules of logic), as opposed to the more abstract idea of functions that is implied by the classical axiom of choice.[/quote]
I believe you responded this to someone else's post but it brings up a question.
You talk about computable functions. Every time I drill down, constructivism turns out to be about computability. But the constructivists seem to regard the computable reals as somewhat different than the constructive reals. One article referred to the constructive reals as the "realization" of the computable reals. Or the other way 'round? Either way the point is that computability is not exactly the same as constructivity. I'd like to understand this.
The other minor point is that you jumped from computability to the axiom of choice, but as I pointed out in my previous post there's an intermediate step. ZF is already noncomputable, even before we get to ZFC.
Your markup didn't render but no matter. I'm talking about the Kolmogorov axioms:
https://en.wikipedia.org/wiki/Probability_axioms
Here's the argument.
One of the axioms is countable additivity. It says that the measure of a countable disjoint union of sets is equal to the sum of their measures. For example suppose I throw a dart at the unit interval [math](0,1)[/math]. What's the probability of hitting a point in [math](0, \frac{1}{2})[/math]? It had better be [math]\frac{1}{2}[/math], right? Likewise the probability of hitting a point in [math](\frac{1}{2}, \frac{3}{4})[/math] is [math]\frac{1}{4}[/math] and so forth.
Now what is the probability of hitting some point in [math](0,1)[/math]? It's 1. And we must be able to calculate this as the probability of [math](0,\frac{1}{2}) \cup (\frac{1}{2}, \frac{3}{4}) \cup \dots = \frac{1}{2} + \frac{1}{4} + \dots = 1[/math].
[Endpoints don't matter in this discussion but to be more accurate I should fix up the intervals to be half-closed so I don't leave out any points].
So ok, countable additivity.
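The arithmetic in the dart example can be checked exactly with rationals. A quick sketch of the countable-additivity computation, truncated at a finite stage:

```python
from fractions import Fraction

# Measures of the intervals (0,1/2), (1/2,3/4), (3/4,7/8), ...
# The k-th interval has length 1/2^(k+1).

def partial_sum(terms):
    """Exact sum of the first `terms` interval lengths."""
    return sum(Fraction(1, 2**(k + 1)) for k in range(terms))

# The partial sums approach 1, the measure of the whole interval:
assert partial_sum(10) == Fraction(1023, 1024)
assert 1 - partial_sum(50) == Fraction(1, 2**50)
```

The shortfall after n terms is exactly [math]1/2^n[/math], which is what countable additivity sweeps up in the limit.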
Now in the absence of the axiom of choice, there is a model of ZF in which the reals are a countable union of countable sets. Every countable set must have measure zero (proof by reader, or just ask) and therefore the measure of the reals is zero. That's bad. No more probability theory. The axiom of choice is essential for probability. Of course I'm sure the specialists can work around this in various ways. And I'm sure they'd still run the state lottery.
https://mathoverflow.net/questions/100717/zf-the-reals-are-the-countable-union-of-countable-sets-consistent
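For the parenthetical claim that every countable set has measure zero, the standard epsilon-covering argument runs as follows (a sketch, for the reader who asked):

```latex
% Cover the countable set {x_1, x_2, x_3, ...} by intervals,
% putting an interval of length epsilon/2^n around x_n.
\mu\bigl(\{x_1, x_2, x_3, \dots\}\bigr)
  \;\le\; \sum_{n=1}^{\infty} \frac{\epsilon}{2^n} \;=\; \epsilon
% Since epsilon > 0 was arbitrary, the measure is zero.
```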
If you take as sets the sets of finite segments, all sets have a measure and Kolmogorov axioms work perfectly well: zero-length segments will have zero measure and non zero segments will have a non zero measure. Where is the contradiction?
Power Set in ZF:
[math]\forall x\, \exists y\, \forall z\, \big[ z \in y \leftrightarrow \forall w\, (w \in z \rightarrow w \in x) \big][/math]
Power Set in Coq:
Inductive Power_set (A:Ensemble U) : Ensemble (Ensemble U) :=
Definition_of_Power_set :
forall X:Ensemble U, Included U X A -> In (Ensemble U) (Power_set A) X.
The powerset of A is the set of all functions from A to a 2-element set (a subobject classifier in topos theory).
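For a finite set the two views of the powerset coincide and are easy to compute. A Python sketch (names mine), identifying each subset of A with a function A → {0, 1}:

```python
from itertools import product

A = ['a', 'b', 'c']

# Each function f : A -> {0, 1} picks out the subset {x : f(x) = 1}.
# Enumerating all such functions enumerates the powerset.
subsets = [frozenset(x for x, bit in zip(A, bits) if bit)
           for bits in product([0, 1], repeat=len(A))]

assert len(subsets) == 2 ** len(A)   # 2^|A| subsets in total
assert frozenset() in subsets        # the empty set
assert frozenset(A) in subsets       # A itself
```

In a topos the 2-element set is replaced by the subobject classifier, but the "subsets are maps into a truth-value object" idea is the same.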
Coq uses inductive types (a generalization of recursive types, such as the one used to define the integers in Peano arithmetic) to define infinite data types. In ML Type theory, which does not include inductive types, it depends on whether you consider the predicative or the impredicative version of the theory (https://ncatlab.org/nlab/show/predicative+mathematics).
In the predicative version of ML Type theory, this is true: to define the set of all subsets you have to use an axiom to define a "recursor function", that is assured to be consistent by the meta theory of the logic, but is still an axiom. So, in that case you are right: there are really 3 levels: finite, inductively definable and non-inductively definable.
This is the same as for the definition of integers: you have to use the induction principle, that is not provable in the predicative version of ML Type theory.
Are you sure about this? I think turing machines are simply strings that can be enumerated as integers. This is not the same thing as solving the halting problem.
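The point that machine descriptions are just strings can be made concrete: the strings over any finite alphabet can be listed in length-then-lexicographic order, so the descriptions themselves (as opposed to their halting behavior) are computably enumerable. A sketch:

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield every finite string over the alphabet, shortest first.
    Turing machine descriptions, being finite strings over a fixed
    alphabet, all appear somewhere in this list."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield ''.join(chars)

gen = all_strings('01')
first = [next(gen) for _ in range(7)]
assert first == ['', '0', '1', '00', '01', '10', '11']
```

What this listing does not tell you is which descriptions encode machines that halt; that is where the Halting problem enters.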
If R/Q is the quotient, 1/2 is contained in only one of its equivalence classes: the one that contains all the rational numbers.
The sets that you say are not computable, but computable in ZF, are the ones that in type theory are called Inductive types, and correspond to initial algebras in category theory. The simplest example is the set of natural numbers. You say the set of natural numbers is not computable. It's not so simple: an inductive set is not a function, and the recursor on this inductive set is a computable function, because the recursion is done counting down to zero.
And there are even co-recursive sets, that correspond to co-initial algebras in category theory (not sure if the name is correct). Co-inductive functions are not required to terminate, but only to consume data at every step. However, you never get an infinite loop because there are restrictions on how co-inductive functions can be used. So, it's not so easy to define what is "computable" from the point of view of a constructive type theory.
There are too many points, and I have the impression that this discussion doesn't make sense if we don't agree on the definitions of the words. So, let me start from the most ambiguous one: what do you mean by "computable reals"? How do you "compute" a number that is not representable as a fraction? I'll wait for this answer before going ahead with the other points, because I really don't know what the "computable reals" are.
You quoted yourself there.
Quoting fishfry
Yes, you are right.
Quoting fishfry
Yes, exactly!
But I have to tell you that, in my opinion, the road that you've chosen to learn about constructive mathematics is the most difficult one you could choose. I didn't know what IZF_Ref is; I had to look up the definition to understand the paper that you pointed out, and in my opinion IZF_Ref is not relevant at all, and the axiomatization of real numbers that it discusses is even less relevant: you can choose another slightly different axiomatization and you'll get different results. I'll tell you more: I don't think that Cauchy completeness is in some "objective" way weaker or stronger than Dedekind completeness (I could be wrong on this, I am not sure at all!), as you seem to deduce from this paper: I think that if you defined the real numbers in ZFC using Cauchy sequences, then Dedekind completeness would become not provable in IZF_Ref.
In my opinion ZFC, or ZF, makes sense only in first order logic (of course!), and intuitionistic logic makes the most sense in the context of type theory. But of course you can choose whatever random combination of axioms and rules, and find their implications, but I doubt that in this way you can discover something meaningful about what the "real" real numbers are...
Quoting fishfry I don't know...
If you consider a weaker logic (taking out axioms) you get more possible models, but more abstract.
Consider this question: why don't modules have a basis? Is it because they can't see all those multiplicative inverses of scalars that vector spaces see? Well, yes! But does it mean that modules are vector spaces when you close your eyes so as not to see the inverses?
I know the objection to this argument: vector spaces and modules are abstract algebraic structures, while the real numbers are a concrete, unique object, because there is only one model of the real numbers. Well, there is only one model in ZFC, but there are a lot of models of ZFC itself! Consider the Yoneda lemma in category theory built on top of ZFC: in every rich enough category (let's say in every topos) you can build a different real numbers object (https://ncatlab.org/nlab/show/real+numbers+object), and using the Yoneda lemma you can map them to different sub-categories of sets. All the theorems about real numbers will remain valid in all categories that have a real numbers object, but if you consider a specific category there will be properties that are "visible" only in that category, and not in the standard real numbers.
Quoting Mephist
Basically, the point is that building proofs in intuitionistic logic is equivalent (it's really the same thing) to building computable functions: there is a one-to one correspondence between intuitionistic proofs and terms of lambda calculus (sorry, lambda calculus again...). This is really a very simple and practical thing, but I don't know why all explanations that I am able to find on the web are extremely abstract and complicated!
I can't speak to non-pointed set-theoretic probability theory. I know about ETCS (elementary theory of the category of sets) so I understand that sets don't require points. But as to probability, I can't say. However if you take finite line segments as sets, you seem to lose intersections. Are these closed or open segments? You have a reference for this interpretation of probability theory?
You totally missed or ignored my point. You said that (in ZF presumably) we have computable sets and then we can introduce the axiom of choice to get noncomputable sets. You said it twice in two different posts. I pointed out that there is an intermediate level, that of noncomputable sets in ZF. My remark was minor, just correcting a minor error. Yes?
I see your point. I confused that with the fact that there is no computable enumeration of the computable numbers. To enumerate the computable numbers you have to be able to enumerate the TMs that halt. That you can't (computably) do.
I believe that preserves my larger point, which is that a constructivist might say that I can't constructively prove that there are only countably many computable numbers, because I can't computably enumerate them.
I did not say there are any sets that are not computable but computable in ZF.
Quoting Mephist
I neither said nor believe such a thing. Are you saying that's true?
Quoting Mephist
You lost me a while back in this post.
Quoting Mephist
This does not seem to bear on the extremely trivial point I made in correcting an error you made twice. I reiterate my original remarks and don't understand your two off-point responses. I'm sure I must be missing something vital but I thought I made a very trivial point whose importance was only that you committed the same minor error twice and I wanted to clarify the issue. Which clearly I didn't.
Ah, you don't know what the computable reals are! Why didn't you say so about ten or twenty posts ago when I first mentioned them?
The definition is due to Turing, whose famous 1936 paper is called, ON COMPUTABLE NUMBERS, WITH AN APPLICATION TO THE ENTSCHEIDUNGSPROBLEM
(caps in original).
https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
A computable real is a real number with the property that there is a TM that inputs a positive integer n and halts, outputting the n-th decimal digit of the number (or binary, same thing). With this definition I hope you will agree that pi is computable (earlier you thought it wasn't) and that Chaitin's Omega isn't (since its computability would amount to solving the halting problem).
An equivalent definition is that given epsilon there is a TM that cranks out a rational approximation to the number within epsilon.
All of the familiar mathematical constants such as pi, e, sqrt(2), etc. are computable. However there are only countably many computable numbers (since there are only countably many TMs in the first place). But Cantor's theorem shows that there are uncountably many reals. Hence there are uncountably many noncomputable reals.
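Turing's definition can be illustrated directly. A sketch of a "digit machine" for sqrt(2), using only exact integer arithmetic so that every call provably halts (the function name is mine):

```python
from math import isqrt

def sqrt2_digit(n):
    """Return the n-th decimal digit of sqrt(2), with n = 0 giving
    the units digit 1.  isqrt is the exact integer square root, so
    the computation always halts: sqrt(2) is a computable real."""
    return isqrt(2 * 10**(2 * n)) % 10

# sqrt(2) = 1.41421356...
digits = [sqrt2_digit(n) for n in range(9)]
assert digits == [1, 4, 1, 4, 2, 1, 3, 5, 6]
```

No such machine can exist for Chaitin's Omega: producing its digits on demand would decide the Halting problem.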
Point being that there are only countably many constructive Cauchy sequences (if we require that the modulus must be a computable function) hence the constructive real line is full of holes. It cannot be Cauchy complete.
I have made or referenced this argument probably a dozen times already, so if you didn't know what I meant, I can certainly see why you don't take my point about the constructive Cauchy sequences.
But I'm not choosing to learn constructive math. How we got onto this was that I made the rather trivial point that the standard reals are the Goldilocks model of the reals with respect to Cauchy completeness. The constructive (i.e., computable) reals are too small: there are only countably many of them. The hyperreals are too big: they're not Archimedean. Only the standard reals are Cauchy complete.
I thought this was a very harmless, lighthearted observation. You replied by claiming that the constructive reals are Cauchy complete. You can only prove that (as far as I can tell) by changing the definition of a Cauchy sequence to include only the computably Cauchy sequences, then defining the constructive reals as the limits of all the computably Cauchy sequences. This is what we've been discussing for the last many posts.
But my intention was not to learn constructive math. There's too much standard math I'm interested in and that keeps me busy. My point was to challenge your claim that the constructive reals could ever legitimately be called Cauchy complete. That's what we've been talking about.
You are the one who pointed out that paper to me! Didn't you? I really thought so.
I can't comment on this. But if you haven't understood all along what a computable real is, and that there are only countably many of them, and that the standard reals are not countable, then you have not understood, let alone challenged, my argument.
Which is: The constructive reals can not legitimately be called Cauchy complete. There aren't enough of them to do the job.
Pretty funny. The statement "Every vector space has a basis" is logically equivalent to the full axiom of choice. So every time I give you a vector space and you say, "Well of course it has a basis," you are asserting AC.
To answer your question directly, I suppose that if someone asked me, why don't modules always have a basis, I suppose I'd say, "Some modules aren't free." Which doesn't answer the question, since a free module is just a module that has a basis. But even a module with a basis is not necessarily a vector space, so I think your analogy is a little off. The integers are a free module over the integers, with a basis, but are not a vector space.
But why do all vector spaces have a basis? It's because of the axiom of choice! Without choice, there's a vector space that has no basis. Consider how strange such a thing must be, before you casually abandon choice.
I said no such thing and of course believe no such thing.
Quoting Mephist
Is this the Curry-Howard correspondence?
Ok I believe I've caught up to your series of posts replying to my most recent of the other day. I still owe you a response to the second long post you replied to me a few days ago.
But wait let's stop here for a moment. If you didn't know what I meant by a computable real, then let's go back to the basic argument. The computable reals can't be Cauchy complete because there are far too few of them to plug all the holes with only computably Cauchy sequences as defined in the Italian paper.
And again, I only mentioned it to kind of wrap up our original discussion. Saying in effect that, "Ok constructivism is all the rage and there's a ton of stuff I'll never know, but the standard reals are the only Cauchy complete model of the reals." That was my original point. If you haven't known what I meant by a computable real all along, this has been a vacuous discussion with an empty intersection.
Also of course there are many (many!) exotic models of ZFC, each with their own version of the reals. That doesn't bear on my simple and basic point.
Sorry, but I was thinking that you used the term "computable reals" in an intuitive way, without a concrete definition!
Yes, now I think a lot of what you said starts to make more sense! :smile:
I did study Turing machines a long time ago, but I didn't remember about this definition at all!
OK, well, I am at work now.. :razz:
I'll read again what you write later, but now I think it's going to be much easier to agree!
In the end, math is not an opinion, right? :smile: :smile: :smile:
Let's look at the Kolmogorov axioms (http://mathworld.wolfram.com/KolmogorovsAxioms.html).
Everything that is needed is a set [math]W[/math], some subsets [math]Q_i[/math] that can be "anything", a function [math]P[/math] from the [math]Q_i[/math] to the real numbers, and a "complement" operation on the [math]Q_i[/math].
Let's consider as our probability space the segment [0, 1].
I can take for the [math]Q_i[/math] the closed sets included in [0, 1] made of a countable number of non-overlapping segments with nonzero length, and for [math]W[/math] the set of all these sets. The complement of a [math]Q_i[/math] will be the closure of the remaining part of [0, 1] when I remove the [math]Q_i[/math]. There are no [math]Q_i[/math] of zero measure (and this is very reasonable for a probability theory: every event that can happen must have a nonzero probability of happening).
The complement of a [math]Q_i[/math] overlaps with [math]Q_i[/math] only on the end points, and that is compatible with the axioms: the sum of measures adds up to 1.
The elements of [math]W[/math] are built from ordered pairs of real numbers instead of single real numbers, but everything works the same way: from the point of view of set theory, two segments are equal if and only if their endpoints are equal: no mention of overlapping segments at all.
The definition of overlapping segments is the usual one: two segments overlap when the upper endpoint of the first pair is greater than the lower endpoint of the second pair.
There is no need to consider infinite sets of points, and for probability theory there is no need to speak about points at all: probability theory does not need zero-measure events, and no physically possible event has zero probability.
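The segment-only probability space described above can be prototyped in a few lines. A toy sketch (the representation is mine): events are finite unions of disjoint closed segments in [0, 1], the measure is total length, and the complement is taken segment-wise, overlapping the original event only at endpoints.

```python
from fractions import Fraction

# An event is a sorted list of disjoint closed segments
# [(a1, b1), (a2, b2), ...] inside [0, 1], each of nonzero length.

def measure(event):
    """Total length of the segments."""
    return sum(b - a for a, b in event)

def complement(event):
    """Closure of [0, 1] minus the event: the gaps between segments."""
    gaps, prev = [], Fraction(0)
    for a, b in event:
        if a > prev:
            gaps.append((prev, a))
        prev = b
    if prev < 1:
        gaps.append((prev, Fraction(1)))
    return gaps

E = [(Fraction(1, 4), Fraction(1, 2)), (Fraction(3, 4), Fraction(1))]
assert measure(E) == Fraction(1, 2)
assert complement(E) == [(Fraction(0), Fraction(1, 4)),
                         (Fraction(1, 2), Fraction(3, 4))]
# Event and complement share only endpoints, and the measures add to 1:
assert measure(E) + measure(complement(E)) == 1
```

This is only a finite-union toy, of course; the pointless-topology machinery of locales generalizes the same idea.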
P.S. This is only a very simple example to show that it's not contradictory to define probability without points. Pointless topology is much more general than this and makes use of the concept of "locales" (https://en.wikipedia.org/wiki/Pointless_topology)
Yes... well, half of it: the "proofs-as-programs" interpretation is valid even in the standard first order natural deduction logic, if you don't use excluded middle (the intuitionistic part). Here is a summary of all the rules of first order intuitionistic logic with the associated expressions in lambda calculus: https://en.wikipedia.org/wiki/Natural_deduction#/media/File:First_order_natural_deduction.png
The only rule that doesn't fit with this interpretation is excluded middle. You can take a look at the paragraph "Classical and modal logics" in https://en.wikipedia.org/wiki/Natural_deduction for an explanation of why this happens.
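The proofs-as-programs reading can be shown in miniature. A sketch (the types are informal, and the names are mine): a proof of A ∧ B → B ∧ A is literally the program that swaps a pair, and a proof of A → (B → A) is the constant-function maker, while no program can inhabit excluded middle.

```python
# Curry-Howard in miniature: propositions as types, proofs as programs.

def swap(proof_of_a_and_b):
    """A proof of (A and B) -> (B and A): swap the components of
    the pair that witnesses the conjunction."""
    a, b = proof_of_a_and_b
    return (b, a)

def const(a):
    """A proof of A -> (B -> A): given a witness of A, ignore the
    witness of B and return the witness of A."""
    return lambda b: a

assert swap((1, 'x')) == ('x', 1)
assert const(42)('anything') == 42

# By contrast, no program inhabits excluded middle (A or not-A):
# it would have to decide an arbitrary proposition A.
```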
Quoting fishfry
OK, I see your point now!
But consider this (taken from https://en.wikipedia.org/wiki/Real_number under "Advanced properties"):
"It is not possible to characterize the reals with first-order logic alone: the Löwenheim–Skolem theorem implies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves."
Even ZFC can't distinguish between countable and uncountable reals!
You often used this argument: Quoting fishfry
And my answer is: you can assume without contradiction that constructive reals are uncountable, exactly as you do in ZFC! The fact that you can build only a countable set of real numbers only means that you can only consider a countable set of real numbers in the propositions of logic (the same as in ZFC), not that the logic does not allow the existence of an uncountable set of real numbers.
Now, let's consider "computable reals". Computable reals are functions from natural numbers to natural numbers (you know this better than me: take as input the position of the digit and produce as output the digit). Well, not all of these computable functions are valid terms in Martin-Lof type theory. The set of functions from Nat to Nat that are allowed as valid expressions depends on the details of the rules, and in Coq it sometimes even changes from one version of the software to another. The reason is that the rules of the language must allow you to build only total recursive functions (functions that always terminate), otherwise the logic becomes inconsistent.
So, the functions that you can actually build using the rules of the logic correspond only to the Turing machines that are provably guaranteed to terminate (by using the meta-logic of the language, not the language itself), and that of course is a strict subset of all the Turing machines. But this IS NOT the entire set of functions from Nat to Nat that is assumed to exist, in the same way that the set of real numbers you can speak about in ZFC is not the whole set of real numbers that is assumed to exist.
About the Cauchy completeness problem, I don't know how to address it, because in constructivist logic there are different axiomatizations of the real numbers that are not equivalent to each other, and not even equivalent to the ZFC reals. You can consider it a defect of the logic, but you can also consider it an advantage, because the thing that all these axiomatizations have in common (well, I'm not sure if all, but at least the two or three that I have seen) is the set of theorems that is sufficient for doing analysis. So, the degree of freedom that is left out, and that makes the difference between the various constructivist axiomatizations, corresponds to the aspects of the ZFC reals that are not so "intuitive", such as the difference between point-based and pointless topology. This, in my opinion, means that there is not only one intuitively correct definition of what the real numbers are.
Here's the book: https://homotopytypetheory.org/book/
Chapter 11.3 Cauchy reals:
"""
There are three common ways out of the conundrum in constructive mathematics:

(i) Pretend that the reals are a setoid [math](C, \approx)[/math], i.e., the type of Cauchy sequences C with a coincidence relation attached to it by administrative decree. A sequence of reals then simply is a sequence of Cauchy sequences representing them.

(ii) Give in to temptation and accept the axiom of countable choice. After all, the axiom is valid in most models of constructive mathematics based on a computational viewpoint, such as realizability models.

(iii) Declare the Cauchy reals unworthy and construct the Dedekind reals instead. Such a verdict is perfectly valid in certain contexts, such as in sheaf-theoretic models of constructive mathematics. However, as we saw in §11.2, the constructive Dedekind reals have their own problems.

Using higher inductive types, however, there is a fourth solution, which we believe to be preferable to any of the above, and interesting even to a classical mathematician. The idea is that the Cauchy real numbers should be the free complete metric space generated by Q.
"""
Well, I see that the problem is much more complex than I thought...
However, solution (i) is the one used in Coq standard library (the one that I knew). And, as I said, in that case Cauchy completeness is an axiom, so.. no problem! :smile:
However, I don't know how to prove that for each ZFC real there exists only one constructive real that verifies all possible propositions expressible in ZFC, and maybe you are right that this cannot be done.
The book proposes a fourth solution, based on the hierarchy of infinite sets typical of Homotopy Type Theory, and in 11.3.4 it proves Cauchy completeness. But I don't know HOTT well enough to prove anything in it..
So, well, in the end I don't have a definitive answer :sad: (but maybe somebody on https://mathoverflow.net/ could have it)
P.S. I found an answer about countability of constructivist real numbers here: https://stackoverflow.com/questions/51933107/why-are-the-real-numbers-axiomatized-in-coq
"""
As for your second question, it is true that there can be only countably many Coq terms of type R -> Prop. However, the same is true of ZFC: there are only countably many formulas for defining subsets of the real numbers. This is connected to the Löwenheim-Skolem paradox, which implies that if ZFC is consistent it has a countable model -- which, in particular, would have only countably many real numbers. Both in ZFC and in Coq, however, it is impossible to define a function that enumerates all real numbers: they are countable from our own external perspective on the theory, but uncountable from the theory's point of view.
"""
Hi @Mephist
I've been percolating on all this for a few days. The insight I'm taking away is that constructivists care about Cauchy completeness. I hadn't realized that before. And I now have a bridge from what I know, the detailed bits and bytes of Cauchy sequences (ie "epsilonics"), to the corresponding notions in constructive math.
I've learned about as much as I'm going to about constructivism at the moment; and much more than I ever knew in the past.
I still don't see how constructivists can claim (if they do) that they have a model of constructive reals that are Cauchy complete. If that is true, to me it seems that it cannot also be "morally" Cauchy complete. I'm wearing my Platonist hat now, also called realism. In particular, I don't see how constructivists deal with the holes that I claim must be present in the real line absent noncomputable reals. I will most likely need to defer learning more about this till a future time.
But you did point me at a reference that you say sheds some light, so I'll read that at least.
I will now dive into the thread since I was last here and see what there is to see. My intention is to answer any questions that I can, but not make too many assertions that I would have to defend.
I believe the main source of confusion here is the concept of a model. If you take ZFC and remove some axioms (the axiom of choice and the logical axiom of excluded middle) the set of theorems that you can prove will be smaller, but the set of models of the theory will be bigger.
All models that were valid in ZFC will be still valid in the "reduced" ZFC, because all valid proofs in the reduced ZFC are still valid proofs of ZFC: you can't prove propositions that are not true in the model if you couldn't do it with the full ZFC.
But yes, it is. It's opinion. What math is, how mathematicians think about and practice math, is under constant change.
In a trivial sense a proposition, once proved, is proved forever. But which proofs and which proof frameworks are considered meaningful, changes over time.
Techniques of proof vary over the years, as do standards of rigor. Gauss was the first person credited, in his doctoral thesis of 1799, with a fully correct proof of the fundamental theorem of algebra (FTA), which says that a polynomial in one variable with complex coefficients has a root in the complex plane; and as a corollary, that every n-degree polynomial has n roots, counted with multiplicity. (ie [math]x^2[/math] has one root but it's counted twice).
But by today's standards, Gauss's proof wouldn't be acceptable from an undergrad. It is no longer seen as rigorous. In fact the final subtlety in this theorem was nailed down as recently as 1920.
Likewise the categorical revolution started by MacLane in the 1940's has already caused set theory to lose its foundational status in many parts of math. Brouwer's intuitionistic dreams failed to become mainstream in the 1930's, but may end up doing so in the near future. I think of HOTT and constructivism as "Brouwer's revenge."
Likewise type theory, which Russell proposed in order to solve Russell's paradox; but instead, the axiom schema of specification won the 20th century (that's the axiom that says that you can form a set from a predicate only if you have an existing set to quantify the predicate over). And now type theory's a big deal again.
Math is not about the theorems. It's about how mathematicians think about mathematical objects. And in that respect math is constantly changing. Math is a historically contingent human endeavor. It's social and it can be political, just like anything else.
At the moment I haven't bandwidth to engage with the probability topic. So to roll this back, I made the following claim:
Claim: Without the axiom of choice you can't do modern probability theory.
I'll now retreat to saying that this is my belief, but if you have refuted my point then so be it. However skimming your argument I see that you refer to pointless topology and locales. These are modern concepts and somewhat specialized. So at worst, my Claim is operable up to a certain level of math, which is still relatively high up the chain in the scheme of things.
Ok I read the next two of your posts, and they contain a lot of meat. I want to respond to them but not tonight, it's late here. I will just say ahead of time that I reject all Lowenheim-Skolem arguments for the following meta-reason. The theorem dates from the 1920's or whatever and can always be used to trump any argument. "Oh yeah, well for all you know the reals are countable because L-S."
Now, here is the thing. For a hundred years, mainstream math hasn't cared. L-S is regarded as a curiosity and of course it's technically true, but nobody thinks that way.
So you now have to be making the claim that the constructivists are making the following argument: "For a hundred years we didn't take L-S all that seriously; but now we do." So I'm not convinced.
Also looking ahead to my more detailed comments to come, I believe that there IS a canonical intuitively correct model of the real numbers, and it's the standard reals. The second order standard reals. With the least upper bound axiom, the axioms for the reals become second order (I've always been a little bit shaky on this point so correct me if I'm wrong) and therefore are not subject to L-S.
The standard reals are the model of time in the Schrödinger equation. Everyone thinks of them the same way. The high school analytic geometry reals. Those are the morally correct reals. All other models are not. I believe that even a constructivist must be able to conceive of the standard reals (and is thinking of the same structure as the rest of us) and must recognize it at some level as the "real" reals. The only way to avoid this is to fall back on formalism. This is my belief. It may well be that I'm nothing but a product of my education, but I have a very strong intuition of the standard reals and I refuse to believe that others don't.
Anyway those are two points I wanted to make after a quick read of your two lengthy posts, but there is a lot of stuff in there and I'll get to it tomorrow.
However, as I wrote here (https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented/p1), I am convinced that what are now considered the most relevant parts of math are in reality fundamental facts of nature that are not invented by us but only discovered, and that would be discovered in an equivalent form even by an alien civilization.
OK, so I have a question: what is this morally correct model of real numbers? The set of all infinitely long decimal representations?
There is an absence of a bijection between the natural numbers and the real numbers.
So the "dx" infinitesimals in integrals and derivatives should be the morally correct model? This is how real numbers are intuitively used since the XVII century.
Ok, first of two responses to your lengthy posts of page 9.
I didn't understand this para but you said your point was half of Curry-Howard so I'll trust you.
Quoting Mephist
I'll have to stipulate that I didn't follow any of this. I can't detour into modal logic right now.
Quoting Mephist
Ok!! This was regarding the fact that there aren't enough computable numbers to plug the holes in the leaky boat of the constructive real line.
Quoting Mephist
First, that's not true. Cantor's theorem is a theorem of ZF and it shows that the powerset of the naturals is uncountable. And it's easy to prove the powerset of the naturals is bijectively equivalent to the set of all infinite binary sequences, which can be coded as the reals. So your point appears wrong.
Second, but really first, all Lowenheim-Skolem objections are suspect, because nobody seems to care. So why do constructivists suddenly care? Skolem as I understand it used his theorem to claim that sets are not sufficiently clear from the axioms. Perhaps so. But nobody in a hundred years has ever said, "Oh noes, set theory is destroyed, we have to start over!" Except the constructivists I guess. But I don't see why L-S is suddenly relevant.
Third, the standard reals are (as I understand it but I confess confusion on this point) a second-order theory because of the least upper bound property.
The second order reals are (again, as I understand it, and I am not an expert on these logical matters) categorical, meaning that any two models are isomorphic.
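Since Cantor's theorem came up: the diagonal argument behind it is easy to make concrete. Here is a minimal Python sketch; the `toy` enumeration is just a made-up stand-in for any claimed list of infinite binary sequences (each represented as a function from positions to bits).

```python
# Cantor's diagonal argument, sketched: given any claimed enumeration of
# infinite binary sequences, build a sequence that differs from the n-th
# listed sequence at position n, so it appears nowhere in the list.

def diagonal(enumeration):
    """Return a binary sequence (as a function) absent from `enumeration`."""
    return lambda n: 1 - enumeration(n)(n)  # flip the n-th bit of sequence n

# Toy "enumeration" for illustration: sequence n is constantly n % 2.
toy = lambda n: (lambda k: n % 2)
d = diagonal(toy)

# d differs from every listed sequence at its own index:
assert all(d(n) != toy(n)(n) for n in range(1000))
```

Identifying binary sequences with subsets of the naturals (or with reals via binary expansions) is what connects this to the uncountability of the powerset mentioned above.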
Quoting Mephist
Yes and no. Yes, I do make that point. But I am not making a detailed technical argument. Just stating a curiosity that the standard reals have the Goldilocks property with respect to Cauchy completeness.
I mentioned that as a curiosity, not as an argument. I never imagined anyone would disagree.
Still, what I say is true, and you have not convinced me that the constructive reals are Cauchy complete unless you change the definition of a Cauchy sequence in such a way as to change its essential meaning. This is my working thesis and you have not convinced me otherwise.
Of course there are funny models but funny models are not arguments, they're just funny models.
Quoting Mephist
If a constructive real requires a computable Cauchy sequence, there can't be enough of them to make up an uncountable set except in the "funny model" sense. I'm rejecting funny models in favor of the standard models.
Quoting Mephist
Martin-Lof is a buzzword I hear a lot in these discussions but it doesn't mean anything to me. But of course there are far more functions on the naturals than there are computable numbers. You agree with that.
Quoting Mephist
Well you are claiming that constructivists now regard noncomputable reals as existing. That's certainly contrary to my understanding. But if you allow noncomputable numbers in your ontology you have the standard real numbers and not something else.
That is: If you now claim that the constructive reals are the computable reals plus the noncomputable reals, you've completely conceded my point; and, I imagine, horrified all the constructivists who don't believe that at all.
Quoting Mephist
I can see that!! (lol)
Quoting Mephist
That's an awful lot of handwaving IMO. But there IS an intuitively correct definition of the real numbers: a Cauchy-complete totally ordered field. That's a second-order, categorical axiomatization. It was familiar to Newton and to every high school student.
I can believe that constructivists prefer a different model or models. I can NOT believe that anyone trained in math claims to not have an intuition of the standard reals. That is, I can imagine a constructivist saying, "The standard reals can't be right because you can't explicitly construct most of them." Of course that's true. But I can't believe anyone saying they have no mental picture of the standard reals as locations on the real line.
Ok second my replies to your two longer posts of 6 days ago.
Quoting Mephist
Ok true confession I haven't had a chance to look at that yet but it's on my reading list.
Quoting Mephist
Setoid. Buzzword of the day. From Wiki:
"In mathematics, a setoid (X, ~) is a set (or type) X equipped with an equivalence relation ~."
Well that tells me nothing at all. Given a set there are lots of equivalence relations on it. I assume they must mean some particular equiv rel of interest to someone. Type theorists or constructivists or such. But what you quoted didn't explain much I'm afraid, it seems to be missing details. What's a coincidence relation? There's a key detail missing. But ok, onward.
Quoting Mephist
Ok this relates to something I ran across, that Cauchy and Dedekind completeness are equivalent if you accept countable choice. I haven't understood yet why that is but I gather it's not too difficult. And if you accept countable choice, which a lot of constructivists do, then things are not so bad. Countable choice (or sometimes dependent choice) is often enough for the theorems we care about.
Quoting Mephist
Unworthy! LOL. "We're not worthy!!"
Quoting Mephist
As far as I can see, the constructivists acknowledge the problems I'm mentioning and are trying to work their way out, but without complete success. Literally: Without "complete" success!
No version of constructively-defined real numbers can possibly be Cauchy complete unless you change the definition.
As far as I can see, I'm not wrong about that even though I'm ignorant of the mighty constructivist struggles to finesse their way out of the problem.
Quoting Mephist
Oh what a cool buzzphrase.
I know what a free group is, and a free Abelian group, and a free module over a commutative ring. I can believe that one can define a free complete metric space over Q and I would love to see that definition, since I would probably understand it. Perhaps that's the answer, or at least an answer, to this dilemma.
Quoting Mephist
We have a meeting of the minds on this. It's certainly more complex than I thought.
Quoting Mephist
Don't know. Haven't ever looked at Coq. It seems to me that the desire to be able to mechanically verify proofs (Voevodsky's original motivation) is not exactly the same as Brouwer's desire to constrain mathematical objects to the realm of the constructible.
In other words Coq is all well and good, but if it's just to verify proofs, must we abandon standard math to use it? Perhaps I'm missing the subtleties here.
Quoting Mephist
I find it fascinating that constructivists claim to be able to prove the Cauchy completeness of a set, namely the real numbers, whose elements manifestly can not all be computed. But at least I've learned that people are making the effort.
Quoting Mephist
It's worth a shot on math.stackexchange since Andrej Bauer hangs out there I believe. Maybe I'll ask one of these days.
Quoting Mephist
I read that but I don't know Coq so had to skip the details. But the checked answer invoked Lowenheim-Skolem and adding extra axioms; and also made the point you did (which I completely agree with) that even in ZFC we can only define or compute countably many subsets of the naturals. I'll take away that these points are regarded as valid by constructivists and that if I knew more about all these things I'd be enlightened. But then I'd lose my apparently naive faith in the standard real numbers. I'd hate to lose that!!
Quoting Mephist
Yes yes, perfectly well agreed. Nevertheless the noncomputable (or nondefinable, which is a slightly different notion) reals have a purpose, which is to plug the holes. But clearly my argument hasn't gotten any more sophisticated than that, and in the end I'm probably just not appreciating constructivism.
I can live with that. In fact I've read something along those lines in some of the constructivist references. The decimal representations (modulo dual-representations, of which there are only countably many) are a perfectly good model of the reals. They have the least upper bound property so they are in fact the standard reals.
I wanted to comment on the concept of moral correctness. In 1940 Gödel developed the constructible universe, denoted [math]L[/math], which is a model of ZF in which both the continuum hypothesis and the axiom of choice are true. In so doing, he showed that these two propositions are at the very least consistent with ZF. [In 1963 Cohen proved that their negations are consistent as well, showing their full independence].
[Note: The constructible universe is a proper class and not a set, therefore it is not actually a model of anything. I've always been confused on this point].
Now everyone could just go, well, we have a nice model, it's a nice universe of sets that decides the truth of these two propositions. Why don't we just assume that [math]L[/math] is the true universe of sets and be done with it?
That is, if we denote by [math]V[/math] the von Neumann universe, why don't we just adopt the axiom [math]V = L[/math], the axiom of constructibility?
The fact of the matter is that we don't do that. Mathematicians feel that [math]L[/math] is not the "morally correct" universe of sets. For one thing it doesn't have enough sets.
This is a good example of the concept of moral correctness: That although mathematicians may act like formalists when they're writing proofs (or computerized proof verification systems), deep down inside they are all raging Platonists. Nobody does math because they truly believe it's a meaningless formal game like chess. Mathematicians believe that their work is about something. They have intuitions about what these things are; and just because there is a funny model with some property, they do NOT stop there. If the model doesn't match the intuition, the model becomes a handy formal device but the search for mathematical truth continues unabated.
NO! A constructive real DOES NOT REQUIRE a computable Cauchy sequence!
ALL Cauchy sequences of rational numbers (computable AND INCOMPUTABLE) are PERFECTLY VALID real numbers in constructive logic.
If you read my posts I have always said the same thing: constructivist logic DOES NOT MEAN assuming that only computable functions exist!
If somebody that is reading this knows about constructivist logic and thinks that this is not true, PLEASE reply to this post and say that it's not true!
P.S. Now I know what computable reals are, but I still don't have a definition of non computable reals: I imagine that you mean functions from naturals to naturals that are not computable
P.S. Maybe I'll try to explain the difference between assumptions and definitions (I know that it's an elementary concept, but maybe it's not so clear).
In constructive logic I can write "Let [math]F[/math] be a function from natural numbers to natural numbers." This is an assumption.
I know nothing about [math]F[/math]: it can even be non computable.
Then I can write "Definition: [math]G[/math] is the function that takes a number [math]x[/math] and returns the number [math]x^2 + F(x)[/math]".
Now, if [math]F[/math] is not computable, even [math]G[/math] will be not computable, but I don't need to know how [math]F[/math] is defined in order to define [math]G[/math]. I can compute [math]2*G(x) - F(x)[/math] and obtain [math]2 x^2 + F(x)[/math].
[math]F(x)[/math] will never be expanded or computed, because it is treated as a "black box" that I assume to exist but can't be used to calculate actual natural numbers.
So, the point is that I can assume the existence of non computable functions, but I must use computable functions in my definitions (in this case the square and the addition functions).
In standard logic this is not true, because I can use the axiom of choice even in my definitions.
In constructivist logic instead you should add the axiom of choice as an assumption (or axiom), and then you can use it as an "oracle function" (a black box, such as F(x)) inside your definitions. In the end, using constructivist logic you can even do exactly the same proofs that you do in standard logic, but you have to add the appropriate axioms explicitly as assumptions, because they are not built into the logic itself.
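The assumption-versus-definition distinction above can be mirrored in ordinary code: the assumed function F becomes an opaque parameter that definitions may mention but never inspect. A minimal Python sketch (the names `make_G` and the particular stand-in F are mine, purely illustrative):

```python
# G(x) = x**2 + F(x) is defined without ever looking inside F: F is
# treated as a "black box" oracle, exactly as in the post above. F could
# in principle be noncomputable; the definition of G doesn't care.

def make_G(F):
    """Given any F : nat -> nat (treated as opaque), return G(x) = x**2 + F(x)."""
    return lambda x: x**2 + F(x)

# The algebraic identity 2*G(x) - F(x) = 2*x**2 + F(x) holds whatever F is.
# We can only *run* the check with a computable stand-in for the oracle:
F = lambda x: 7 * x + 3          # arbitrary stand-in; any F works
G = make_G(F)
assert all(2 * G(x) - F(x) == 2 * x**2 + F(x) for x in range(100))
```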
You can use exactly the same definition of Cauchy-complete totally ordered field in constructivist logic. Even rational numbers are locations on the real line. The problem is with continuity: points are not "attached" to each other, right?
It's not about verifying proofs. In every formal logic proofs can be verified mechanically, otherwise it wouldn't be called "formal" logic. It's about the complexity of rigorously written proofs. In ZFC complete formal proofs become so long that nobody uses them in practice. In Coq the situation is much better. In HOTT it is even better.
You don't have to abandon standard math to use Coq or HOTT. You can use HOTT to do perfectly standard math with a much simpler formalism for proofs: that's why it's proposed as a new foundation of ALL mathematics.
Quoting fishfry
I think I'll give up with this discussion because I see that is leading nowhere!
Quoting fishfry
The intuition of a line being made of points and having no gaps is VERY unintuitive, and it's NOT used in standard mathematics: integrals and derivatives are defined as limits of sequences: no sets of points at all. But I am sure I can't convince you about this and I don't see the point of going in circles repeating the same things...
Something like a perfectly balanced, circular orbit with zero friction on the other hand. That's where infinity is. You can't write it down or spell it out but you also have to accept it will theoretically go on forevermore.
The universe is infinite in all directions. If you try to think of infinity as linear, it will always hit a wall.
Ok, I see that I said something that induced you to write in caps. Before responding to the interesting technical points in your following posts, let me just go meta for a moment.
I'm trying to wind down my involvement in this thread.
* I have not said anything new for quite some time. *
* My next move, if I were interested enough to make it a priority, would be to dive into the wonderful references you've been giving me all along.
So if it happens that I'm totally wrong about constructivism, totally wrong about everything for that matter, it's ok with me. I've learned more than I ever knew about constructivism and it still baffles me. I understand Coq and the desire to mechanize the checking of published proofs so as to avoid error. That was Voevodsky's original motivation.
I DON'T necessarily understand the mathematical metaphysics that seems to accompany neo-intuitionism. I've looked into Brouwer a little bit over the years and I confess I just don't get it. Why tie your hands with constructibility? Much less computability, which is even more restrictive. Computability is trendy right now. After all we mastered the technology only in the past 50 years, and it wasn't till ten years ago that everyone started walking around with a supercomputer in their pocket named after a fruit. So I'll forgive a certain amount of computational metaphysics. "We're a simulation," "We can upload our minds to a computer," etc. It's nonsense but it's perfectly understandable. Just as everyone thought the world was a machine in the century after Newton. It will pass. Turing showed that there are naturally expressed problems that can't be solved by computer. This lesson hasn't been internalized in our zeal to reinterpret reality in terms of the computable.
Hence neo-intuitionism. The revenge of Brouwer. I'm all for it.
But I just haven't got any level of advocacy or passion in this thread at the moment, since I'm perfectly well out of things to say. So I'm startled to see that anything I said induced a response involving a lot of upper case typing. I assure you that eliciting such a response was not my intention. I'm not dogmatic about anything, I've pretty much said my piece and learned some things.
I'll try to comment on your posts but I'm most definitely not wanting to engage in any disagreements that involve a lot of capitalization.
Ok that said. This is different from what I understood from the Italian paper, which is that a real number is characterized by (or identified with) a funny-Cauchy sequence defined by a particular rate of convergence that allows us to show that the modulus of the sequence is computable. If there's anything technical I got from this thread it was exactly that.
But I understand as you've explained to me that there are many different approaches to constructivism; and that in some of them, all Cauchy sequences of rationals represent real numbers, not just the computable ones.
I believe you if you tell me this. But the collection of all [equivalence classes of] Cauchy sequences of rationals is exactly the standard real numbers.
So I believe you if you tell me, but actually, I don't believe you. Because you've just constructed the standard real numbers. So I'm confused again.
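On the "particular rate of convergence" idea mentioned a couple of posts up: in the constructive style a Cauchy sequence typically comes packaged with an explicit, computable modulus of convergence. A small Python sketch under my own naming (`sqrt2_approx`, `modulus` are illustrative, not from any particular formalization), using interval bisection so every term is an exact rational:

```python
# A Cauchy sequence of rationals converging to sqrt(2), together with a
# computable modulus N(k): terms with index >= N(k) agree to within 2**-k.

from fractions import Fraction

def sqrt2_approx(n):
    """n-th rational approximation to sqrt(2) by bisection; error <= 2**-n."""
    lo, hi = Fraction(1), Fraction(2)     # invariant: lo**2 <= 2 < hi**2
    for _ in range(n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo                              # within 2**-n below sqrt(2)

def modulus(k):
    """Computable modulus of convergence for this sequence."""
    return k

# Check the Cauchy condition with the stated modulus on a few indices:
for k in range(1, 10):
    N = modulus(k)
    assert abs(sqrt2_approx(N + 5) - sqrt2_approx(N)) <= Fraction(1, 2**k)
```

The point of the packaged modulus is exactly the "rate of convergence" business: nothing about the sequence's behavior is left to an existence claim.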
Ok. I will stipulate to being thoroughly confused on this point. But that's ok.
Quoting Mephist
I'll stipulate you are correct on this point. I just don't understand the mechanism of construction. Perhaps that's what I'm missing.
Quoting Mephist
A computable real is a real number that is computable in the sense of Turing 1936. A noncomputable real is a real that isn't. This is perfectly sensible, isn't it?
And of course you can identify noncomputable reals with noncomputable maps from the naturals to themselves, just as you can identify all the reals with that set.
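To make "computable in the sense of Turing 1936" concrete: a computable real is one for which a program can produce a rational approximation to any requested precision. A sketch for e = sum 1/k! (the name `e_approx` and the stopping rule are my own choices):

```python
# A computable real: a program that, given n, returns a rational within
# 2**-n of e = sum over k of 1/k!.

from fractions import Fraction

def e_approx(n):
    """Partial sum of 1/k!, with enough terms that the error is < 2**-n."""
    total, term, k = Fraction(0), Fraction(1), 0
    while term >= Fraction(1, 2**(n + 1)):
        total += term
        k += 1
        term /= k
    # tail is bounded by twice the first omitted term, hence < 2**-n
    return total

# Successive approximations agree to within the requested precision:
assert abs(e_approx(10) - e_approx(30)) < Fraction(1, 2**9)
```

A noncomputable real is then precisely a real for which no such program exists; there are only countably many programs, so almost all reals are of that kind.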
Quoting Mephist
Is a black box like an oracle, a device that can solve a noncomputable problem?
Quoting Mephist
Ok, you can refer to noncomputable functions and then computable expressions involving them. That makes sense.
Quoting Mephist
You haven't related anything to the axiom of choice, I totally didn't get this reference here. It doesn't seem to apply to anything you said. You don't need the axiom of choice to have noncomputable functions.
Quoting Mephist
I think you are using AC in a funny way. I tried to correct a couple of instances of this earlier. Nothing we're talking about has anything to do with the axiom of choice.
I do not believe that an oracle in computer science is in any way analogous to the axiom of choice. If you know of a connection perhaps you could explain it. An oracle allows you to compute something that was formerly noncomputable. The axiom of choice doesn't let you compute anything. In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word.
Quoting Mephist
I believe you but there's a kernel I'm missing. If the constructive reals let you prove all the same theorems, what is the point? What's wrong with the original proofs? And by "built into the logic," you have a lot of meaning for that phrase in your mind but I don't know what you mean.
Perhaps you need to make your point more simply because I'm not seeing it at all.
Right. The real numbers are a continuum. The rational numbers aren't because they're not Cauchy complete. The computable reals are not because they are not Cauchy complete. (I proved this earlier). And the constructive reals ... well I can't say, because I still don't understand the difference between the computable reals and the constructive reals. That's another point of mystery for me.
But even in the standard reals the points are not attached to each other like bowling balls. It's more like infinitely stretchy taffy. You can take any two points, no matter how close together, and stretch the line between them to any length you like.
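The "infinitely stretchy taffy" remark has a standard precise form: any open interval can be mapped bijectively, preserving order, onto the whole real line. One common choice is a scaled tangent; a quick Python sketch (the function name `stretch` is mine):

```python
# Order-preserving bijection from the open interval (a, b) onto all of R,
# via a rescaled tangent: the midpoint goes to 0 and points near the
# endpoints are sent arbitrarily far out.

import math

def stretch(x, a=0.0, b=1.0):
    """Map x in (a, b) to the real line; strictly increasing and onto."""
    t = (x - a) / (b - a)                  # rescale to (0, 1)
    return math.tan(math.pi * (t - 0.5))

assert abs(stretch(0.5)) < 1e-12           # midpoint lands at 0
assert stretch(0.999) > 100                # near b: far to the right
assert stretch(0.001) < -100               # near a: far to the left
```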
Yes yes this is the part I understand. Coq is a tool for helping to avoid errors in published math. That was Voevodsky's idea.
It's the mystical mathematical metaphysics of Brouwer I don't understand. The claim that every mathematical object should come with a recipe for building it. I've never been excited by that idea. It seems unnecessarily restrictive. After all there is no reason to believe the world is a computer, no matter how many TED talkers assert the proposition.
Quoting Mephist
On the contrary. It's been very valuable to me. It's just that I have no more points to make. And you've tried to explain things to me that I haven't understood. The fault is mine.
Quoting Mephist
You're right. You could never convince me of that since you're wrong. What if there were a sequence that morally should converge, but there was no limit there? That's the point of Cauchy completeness. You need it to ensure the existence of the limits you need for your integrals and derivatives.
But I sense that you are unhappy or frustrated. At my end I've found this very enlightening and helpful. I appreciate the conversation. If we're done that's good. I may peruse your links from time to time.
A line is made of points with no gaps? Now I don’t know any theory, but what I intuitively grasp from taking calculus is that a point has zero dimensions. A line is not made of points. A point just divides a line into two segments.
Forgive my elementary level knowledge on the subject.
In the standard set-theoretic account of the real numbers, the real line is made of points. That is, the line is the union of all the sets that each contain one point, if you think of it that way.
That doesn't mean there's a point and then a "next" point. They're not lined up like bowling balls. Between any two real numbers there's another. In fact between any two real numbers, the points between can be stretched into a new copy of the entire real line. So it's like maple syrup, not bowling balls if that helps.
There are no gaps. "Every Cauchy sequence converges." That's the technical condition that ensures there are no gaps. It means that every sequence that should converge, does converge.
As a helpful example, consider the rational numbers. Between every two rational numbers there is another. But there are still gaps! For example the sequence of rational numbers 3, 3.1, 3.14, ... that converges to pi, doesn't converge in the rationals. There are gaps in the rationals at every irrational.
But the reals have no gaps. That's their defining property. All the gaps in the rationals are filled in.
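The 3, 3.1, 3.14, ... example can be checked directly with exact rational arithmetic: the truncations form a Cauchy sequence of rationals, yet their limit (pi) is not rational, so within the rationals the sequence heads toward a gap. A small sketch (the digit string is just the first decimals of pi, hard-coded for illustration):

```python
# Decimal truncations of pi as exact rationals: a Cauchy sequence in Q
# whose limit is irrational, i.e. a sequence "converging to a gap".

from fractions import Fraction

PI_DIGITS = "31415926535897932384626433832795"  # leading digits of pi

def truncation(n):
    """Rational given by the first n+1 digits: 3, 3.1, 3.14, 3.141, ..."""
    return Fraction(int(PI_DIGITS[: n + 1]), 10**n)

# Cauchy condition: past index N, all terms differ by less than 10**-N.
N = 5
assert all(abs(truncation(m) - truncation(N)) < Fraction(1, 10**N)
           for m in range(N, 30))
```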
I can't speak to constructivist or intuitionist conceptions of the real line. And there's also the hyperreal line, in which every real number is surrounded by a cloud of infinitesimals.
Quoting Noah Te Stroete
It's a deep question, the nature of the mathematical continuum. People have been arguing about it forever.
I apologize for the intrusion.
No intrusion at all, I love to talk about the standard real numbers! I'm not sure which part of your question I didn't clarify, I'd be glad to try again if you have more questions. As I understand it you asked how the real numbers can be made of points. You take all the real numbers and collect them into a set. That's the set-theoretic viewpoint. So the real numbers are made of points just as the set of natural numbers {0, 1, 2, 3, ...} is made up of the individual numbers 0, 1, 2, 3, ...
There are no gaps in the real numbers, but I'm not sure which part of this you are asking about.
This to me seems to be what @Dfpolis was saying that mathematics must be instantiated in nature first otherwise it is pointless to talk about numbers existing in a Platonic ideal realm. That’s what I got from what you both were saying. Pardon the intrusion.
But of course in this case the axioms are chosen with the intent to be the minimal constraints that have to be satisfied by any definition of probability that makes sense in physics.
This is very common: the mathematical definition is derived from the physical intuition, but is used as an axiomatic theory completely disconnected from physics. That's why it often happens that after several years somebody finds a "model" of the axiomatic theory that does not correspond to the physical intuition at all but verifies all the axioms, so from the point of view of mathematics it is a valid model.
The point is that axiomatizations are needed if you want to use formal logic (and you want to use formal logic because physical intuition is not enough to avoid paradoxes), but the axiomatization very often does not capture exactly the physical concept that you have in mind. And that happens even for the most elementary physical concepts, such as the concept of natural numbers.
You mean the decimal representations of real numbers? Yes, they're infinitely long. For example 1/3 = .333... and sqrt(2) = 1.4142... and so forth. Even terminating decimals like 1/2 = .5 = .4999... have infinitely long representations.
But the representation isn't the real number. The real number is the abstract thing pointed to by the representation, just like 2 and 1 + 1 are two different representations of the same abstract number.
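The 1/2 = .5 = .4999... claim can be made concrete with exact rationals. A minimal Python sketch (the helper name is mine):

```python
from fractions import Fraction

def partial(n: int) -> Fraction:
    """Partial sum of .4999... up to the n-th digit: 4/10 + 9/100 + ..."""
    return Fraction(4, 10) + sum(Fraction(9, 10 ** k) for k in range(2, n + 1))

# The distance from 1/2 is exactly 1/10^n, which shrinks to zero, so
# ".4999..." and ".5" are two representations of the same real number.
for n in range(1, 8):
    assert Fraction(1, 2) - partial(n) == Fraction(1, 10 ** n)
```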
A line is only one dimensional. A plane is two dimensional. Points are zero-dimensional. How a bunch of zero dimensional points make a line is a bit of an ancient mystery. Newton thought of a point as tracing out a curve as the point moves through space. But that only pushes the mystery to space itself.
There's really only one standard real number line that most people care about, the one from high school math used by physicists and engineers. The other stuff is esoteric. Constructivism is the idea that every mathematical object must be able to be explicitly constructed. That leads to a different kind of notion of the real line. But I really can't speak for constructivist philosophy, since I don't know much about it. You can see the difficulties @Mephist is having in explaining it to me!
Thank you both for your patience with me. This is interesting stuff.
But probably this is the opposite of how mathematics and logic are usually taught, and I have to confess that when I studied analysis at university (I studied engineering), I learned the rules to do the calculations and memorized how to prove the theorems, but I never studied anything about the "foundations" of mathematics, such as Zermelo-Fraenkel set theory.
So, I don't really feel qualified as a "teacher" to explain to @fishfry or to you about philosophical questions related to infinite sets, but I have very clear ideas of how formal logic systems work, and even my own theory of how mathematics "works".
First of all, does our imprecise use of real numbers warrant a precise account?
What does it even mean to say that a real number denotes a precise quantity?
For in what sense can a formal system speak of universal quantification over all of the natural numbers?
Recall that in a computer program, an infinite loop is merely an imprecise specification of the number of iterations the program will actually perform; it is only the a priori specified number of iterations, that is to say the number fixed before the program is executed, that isn't upper-bounded in the logic of the program.
Therefore, if the syntax and semantics of a formal system were to mimic a programmer's use of infinity, it wouldn't conflate the 'set' of natural numbers with an a priori precise set; it would instead treat this 'set' as an a priori, imprecise specification of an a posteriori finite construct whose future construction is generally unpredictable and isn't guaranteed to exist for a given number of iterations.
A universal quantifier over N would no longer be interpreted as representing a literally infinite loop over the elements of N, but as representing a terminating loop over an arbitrary and unspecified finite number of elements, where the loop's eventual termination is guaranteed, even if only through external intervention outside of the constructive definition of the program or formal system.
By interpreting 'infinite' quantification as referring to an arbitrary finite number of elements, the law of double negation is then naturally rejected; for an arbitrarily terminated loop cannot conclusively prove anything in general.
This interpretation also has the philosophical advantage that semi-decidable functions are no longer informally misinterpreted as talking about their 'inability to halt', and likewise for formal systems supposedly 'talking about' their inability to prove certain statements.
So in short, an imprecise and pragmatic model of the real numbers is what is important, corresponding to how numerical programmers implement them in practice, with particular attention paid to numerical underflow.
I am not sure what you mean by "imprecise use of real numbers"; however, the rules of calculus give precise results:
* The volume of a sphere inscribed in a cylinder is exactly 2/3 of the volume of the cylinder (https://en.wikipedia.org/wiki/On_the_Sphere_and_Cylinder)
* [math]e^{\pi i}[/math] is exactly equal to -1
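Both results can be sanity-checked numerically in Python; the check is approximate only because machine pi is rounded, while the identities themselves are exact:

```python
import cmath
import math

# Sphere inscribed in a cylinder of radius r and height 2r (Archimedes):
# the ratio of the volumes is exactly 2/3.
r = 1.0
v_sphere = (4 / 3) * math.pi * r ** 3
v_cylinder = math.pi * r ** 2 * (2 * r)
assert abs(v_sphere / v_cylinder - 2 / 3) < 1e-12

# Euler's identity, exact in the reals, approximate in floating point:
z = cmath.exp(1j * math.pi)
assert abs(z - (-1)) < 1e-12
```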
Quoting sime
The standard way is assuming the principle of induction: if a proposition is true for n=0, and from the fact that it is true for n you can prove that it is true for n+1, then you assume that it is true for all natural numbers. (https://www.encyclopediaofmath.org/index.php/Induction_axiom)
It cannot be verified in a finite number of steps, so it's assumed as an axiom.
But assuming it to be false is not contradictory: you can assume the existence of numbers that can never be reached by adding +1 an indefinite number of times. (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic)
Quoting sime
I don't understand what you mean by "a pragmatic model". Do you mean that there should be some sort of "approximate" logic, like some sort of fuzzy logic? (https://en.wikipedia.org/wiki/Fuzzy_logic). The standard "epsilon-delta" definition of limits is not "pragmatic"?
So the question is: in what sense does the principle of induction speak of universal quantification over all of the natural numbers?
Take any predicate P with a single argument, then take as axioms
i. P(0)
ii. (x) ( P(x) => P(x+1)) ,
where ii. denotes universal quantification over x, where x is a bound variable.
Then we say, "By informal convention, this predicate is now said to be 'true' for 'all' x."
But what exactly has our demonstration achieved in this extremely small number of steps?
Do we really want to insist that our use of these axioms is equivalent to a literal step-by-step substitution of every permissible number into P?
Do we even want to say that we are assuming the existence of this inexhaustible substitution?
All that i and ii define in a literal sense is a rule that permits the substitution of any number for x.
Nowhere does this definition imply that induction is a test for the truth of every substitution.
Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject its interpretation as a soothsayer of theoremhood.
Sorry, that's confusing and misleading; by theoremhood I was referring to truth in the respective model of the axioms.
What I mean is that mathematical induction 'proves' a universally quantified formula in a completely vacuous sense, in that the 'conclusion' of the induction, namely the universally quantified formula (x) P(x), is merely an abbreviation of the very axioms i and ii.
On the other hand, for universally quantified formulas over infinite domains that do not possess a proof by mathematical induction, there is no reason to support their informal interpretation as enumerating the truth or construction of the predicates they quantify over, by the very fact that they do not correspond to a rule of substitution.
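The reading of induction as a rule of construction rather than an infinite test can be sketched in Python (the predicate and all helper names are mine, purely illustrative): for any particular n, the 'proof' of P(n) is literally built by a finite chain of applications of the step rule.

```python
# Induction as construction: given a proof of P(0) and a step turning a
# proof of P(n) into a proof of P(n+1), any particular P(n) is reached by
# finitely many substitutions; no infinite enumeration ever happens.
# Illustrative predicate: P(n) = "1 + 3 + ... + (2n-1) equals n^2",
# where a 'proof' is just the checked pair (n, running_sum).

def base():
    return (0, 0)                        # P(0): the empty sum equals 0^2

def step(proof):
    n, total = proof
    return (n + 1, total + (2 * n + 1))  # P(n) => P(n+1)

def prove(n):
    proof = base()
    for _ in range(n):                   # finitely many rule applications
        proof = step(proof)
    return proof

k, total = prove(10)
assert total == k ** 2 == 100            # P(10), constructed rather than foreseen
```

The universally quantified conclusion then just abbreviates the availability of this finite construction for each chosen n.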
I would say that "forall x in S", where S is an arbitrary set, is not interpreted as a step-by-step substitution, because in general it is not possible to define a total order on every set: so not only is it impossible to visit all the elements during an iteration, it's not even possible to define what step n+1 should be for an arbitrary step n.
In general the interpretation is "choose whatever element x in S", or "choose a random element x of S".
The meaning is given only by the rules of logic, and the rules say that if you have "forall x exists y", then you have a function from x to y, and a function is assumed to be a primitive concept. So I guess there is no real definition of what "forall" means.
Quoting sime
Well, the meaning of the axiom is exactly this: there are no other natural numbers except the ones that you can reach by iterating the successor function starting from zero. So, there are no "unreachable" natural numbers.
Or, put another way: if a property is true for zero and is inherited by each n+1 from n, then whatever natural number you choose will have that property.
P.S. If you don't assume the induction principle to be true, there could be, for example, several infinitely long "chains of numbers" starting from different points, not only from zero. Or you could assume that there is an integer so far away from zero (infinitely far) that it will never be reached by counting, even though each number is "attached" to a following one, as in nonstandard arithmetic
(https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic).
Yes, exactly. Not every domain of quantification (set in set theory or type in type theory) is supposed to be enumerable, or iterable.
Hi @fishfry.
Reading again what you wrote, I think that maybe I am able to explain what I meant here.
Here's the axiom of choice, taken from wikipedia: (https://en.wikipedia.org/wiki/Axiom_of_choice)
[math]\forall X [ \emptyset \notin X \Rightarrow \exists f:X \to \bigcup X, \forall A \in X ( f(A) \in A ) ] [/math]
In a constructivist logic interpretation, it says that if you have built a set of nonempty sets X, using this axiom you can build a function f.
In other words, this axiom is like a black box function (or an oracle), where you put as input the set of nonempty sets X and you get as an output the choice function f.
Then, you can use the choice function f as you do in ZFC. The resulting proof is the same, only that in ZFC you don't treat the axiom as if it were a function: you simply apply it.
In this sense the axiom is an oracle: it allows you to compute f without knowing how the function that produces f from X is implemented, and f does not have to be computable or incomputable: you simply don't care about its computability.
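A hedged Python sketch of the oracle reading (function names are mine): for a finite family a choice function is trivially computable, and the axiom's role is to posit such an f as a black box even when no selection rule can be written down. Downstream, the only property one may use is membership, f(A) ∈ A.

```python
# For a finite family of nonempty sets a choice function is computable;
# the axiom of choice posits such an f as an opaque oracle even when no
# selection rule exists. Here the "oracle" is just a computable stand-in.

def choice_oracle(family):
    """Return a choice function f with f(A) in A for every A in the family."""
    return {frozenset(A): next(iter(A)) for A in family}

X = [{1, 2}, {"a"}, {3.0, 4.0, 5.0}]
f = choice_oracle(X)
# The only property we may rely on downstream is membership: f(A) is in A.
assert all(f[frozenset(A)] in A for A in X)
```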
You write: "In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word". But in constructivist logic it is the same thing: if X is the class of all subsets of real numbers determined by the equivalence relation "two numbers are equivalent if their quotient is rational", you can define as A the class determined by [math]\pi[/math], and then write f(A) to get a real number. "f" is treated in a purely symbolic way, never computed.
I think that the main source of confusion is that usually a logic is called "constructivist" when it doesn't have any axioms: in that case there will be no "oracle" functions, and "forall x ..., exists y ..." will correspond to really computable functions from x to y. But even in this case, a real number will typically be defined as a Cauchy sequence of rationals (a limit, in the usual way), not as a function that computes the digits of the decimal representation of the number, and the fact that Cauchy sequences converge will be part of the axiomatic definition of the real numbers: it will not be a "logical axiom", but it will still be treated the same way as an "oracle", because it's part of the definition of what real numbers are.
It's the same thing as when you define the natural numbers: you don't have to prove the principle of induction: it's simply part of the (axiomatic) "definition" of what natural numbers are.
If you add the missing logical axioms to constructivist logic (excluded middle and choice) treating them as "oracles" (excluded middle can be seen as a function that returns "P1 or P2"), you obtain back classical logic, only with a slightly different "interpretation" of the rules.
I don't know if this explanation makes sense: maybe the best way to explain would be to show some examples of proofs..
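As one such example: the remark above that excluded middle can be seen as a function returning "P1 or P2" admits a toy Python sketch (all names are mine, not any proof assistant's API). Double-negation elimination becomes an ordinary program whose only "classical" ingredient is the single call to the em oracle, and for a decidable proposition the oracle is actually implementable:

```python
# Excluded middle as an oracle: em(P) returns ("left", proof of P) or
# ("right", refutation of P), where a refutation is a function sending
# any proof of P to a contradiction.

def dne(em, prop, not_not_p):
    """Double-negation elimination; classical only via the em oracle call."""
    tag, payload = em(prop)
    if tag == "left":
        return payload                      # we hold a proof of P outright
    # payload refutes P, and not_not_p refutes that refutation:
    raise AssertionError(not_not_p(payload))  # contradiction, unreachable

# For a decidable proposition the oracle is implementable, e.g. "n is even",
# with half of n serving as the proof witness:
def em_even(n):
    return ("left", n // 2) if n % 2 == 0 else ("right", lambda proof: proof)

assert dne(em_even, 10, lambda refutation: refutation) == 5  # 10 == 2 * 5
```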
As I think of it, perhaps a new axiom is an oracle. I understand that point of view.
So I'm willing to be agreeable on this point, but still no less confused on the issues I've raised earlier.
Also ... no form of the AC could possibly be construed as a computation. That, I just do not see at all. It's a pure statement of existence. A particular set exists with no means to possibly compute or construct or identify any of its elements. Whether this is worse or the same as the powerset axiom I can't say. Perhaps it's no worse. But a lot of people historically regarded it as worse. I can't put any of that in perspective.
I've re-read the SEP article on constructive math with a much deeper level of comprehension than I've had in the past. Your efforts have not been in vain.
I don't remember which one is the SEP article. Could you send me a link?
Quoting fishfry
Yes, the AC can't be construed as a computation, and it's not part of constructivist logic. What I am saying is that AC can be added to a constructivist logic framework such as Coq if you want to use standard logic: standard logic can be obtained as an extension of constructivist logic (in a similar way as metric geometry can be seen as an extension of projective geometry): if you want to use classical logic in Coq (and I think that's how Coq is used most of the time) you just have to import the Classical_Prop library: https://coq.inria.fr/library/Coq.Logic.Classical_Prop.html#
The possible forms of the axiom of choice (without assuming EM) and the relations between them are here: https://coq.inria.fr/library/Coq.Logic.ChoiceFacts.html
https://plato.stanford.edu/entries/mathematics-constructive/
They echoed many of the things you've been saying, such as that there are several (four in fact) variations of constructivism, and that there's a constructivist logic ... well basic stuff, but stuff I've been reading without comprehension before and can now understand. Also I've learned that a lot of this goes back to the nineteenth century. Poincaré for example was troubled by nonconstructive thinking.
That's why you should never be frustrated about not being able to get through to me. You are getting through by osmosis. I've never bothered to try to understand constructivism at a technical level before and this is an education. The article talked about putting explicit bounds on the rates of convergence of Cauchy sequences, and that related back to the Italian paper, so it's starting to feel familiar. One learns by repetition.
I will say one thing though. I still just don't get constructivism as a philosophy. I DO understand that certain formulations of constructive math lend themselves beautifully to computerized proof systems. Nobody would deny the digital deluge that's flooding the world right now, why should math be spared?
But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it." But the world is full of things that have no algorithm. The world itself is not an algorithm.
And here I think is the source of my discomfort with constructivism. Many these days believe that the world IS an algorithm. I disagree with that proposition. To the extent that some -- not all, ok, but some -- aspects of constructivism are in support of the belief that everything that exists must be an algorithm; I oppose that aspect. I instinctively believe that the constructivists are missing a lot: namely, everything that is important but that can't be explicitly constructed.
Quoting Mephist
You've said this to me probably several times and I finally understand it. If I've got this right, you can add various axioms to constructive math and recapture standard math. So we get the benefit of computerization and we can still do the things standard math cares about. Which is pretty interesting, if true.
Ah, "Classical_Prop." Coq has a library you can import that makes it act like standard math. You've connected adding axioms in a formal system with importing a library in a program. Wow. Thanks. Great insight. I got it.
I completely agree. I don't believe that the world is an algorithm either. And mathematical objects (and of course physical objects too) are not algorithms. But proofs of existence are algorithms, in any formal logic.
In my opinion, the reason for treating the axiom of excluded middle as less "fundamental" than the others is that it is related to the more philosophical and abstract notion of truth: a proposition is either true or false.
In classical logic, when you derive a proposition P you interpret it as a proof that "P is true". When you derive a proposition "not P" you interpret it as a proof that "P is not true (or false)".
Now, it turns out that if you replace "P is true" with "P is provable" and "P is false" with "P is absurd" (meaning P implies a logical contradiction), all the rules and axioms of classical logic will still be intuitively justified, except one: the axiom of excluded middle. A proposition can be neither provable nor absurd.
So, the axioms of logic without excluded middle still make sense, but with a different interpretation of the meaning of propositions. This is in some way similar to what often happens in mathematics: a simplified model appears to be in some way more "fundamental" if it is consistent and still compatible with the full model. For example in projective geometry there is no concept of length, but the concepts of points and intersecting lines still make sense.
Well, in the same way you can see intuitionistic logic as the part of classical logic that doesn't depend on the notion of truth. And even negation does not have the same meaning: "not P" does not mean "P is false", but "P implies a contradiction". In fact, the "not" operator is not even needed in intuitionistic logic: just replace every occurrence of [math]\neg P[/math] with [math]P \Rightarrow \bot[/math].
Following the analogy with projective geometry, it is true that not all theorems can be proved without using excluded middle, but it still makes sense to distinguish the ones that can be proved without it from the ones that can't, because the first ones are valid for a larger number of models (or, put in another way, are true for more "fundamental" reasons).
The same thing happens in all of mathematics: why do you want to isolate group theory from Galois theory about the solution of polynomial equations? Because group theory is more general (since its theorems depend only on a smaller set of assumptions), and so can be seen as more fundamental.
[ I have to go now: I'll explain in the next post what all this has to do with algorithms ]
Here's the quick answer: in intuitionistic logic a proposition P can be interpreted as the "description" of the algorithm that has been executed (using the rules of logic) to build the proof of P. And this of course has to be a deterministic algorithm that terminates in a finite number of steps, because every proof must be made of a finite number of steps.
Now, the problem is that the long answer is much, much longer. So, I'll describe the consequences of the short answer before.
The first thing to clarify is that intuitionistic logic (more concretely, these rules: https://en.wikipedia.org/wiki/Natural_deduction#/media/File:First_order_natural_deduction.png) is only a first-order logic, and not a theory of sets.
ZFC, instead, is classical first order logic (intuitionistic + EM), plus the axioms of set theory.
Now, the axioms of ZFC "work well" with classical logic, but not with intuitionistic logic. In particular, the correspondence between algorithms and proofs works only in the very particular formulation of "natural deduction", with no axioms at all. The addition of ZFC axioms would introduce the [math]\in[/math] predicate, that does not respect the "propositions as algorithms" interpretation. Some of the reasons why ZFC doesn't "fit well" with intuitionistic logic can be found here: (https://en.wikipedia.org/wiki/Constructive_set_theory)
"""
In 1973, John Myhill proposed a system of set theory based on intuitionistic logic[1] taking the most common foundation, ZFC, and throwing away the axiom of choice (AC) and the law of the excluded middle (LEM), leaving everything else as is. However, different forms of some of the ZFC axioms which are equivalent in the classical setting are inequivalent in the constructive setting, and some forms imply LEM.
"""
Basically, the problem is that ZFC axioms imply EM. So even if you exclude EM from logic axioms, you can still derive it from ZFC axioms.
However, there is a "natural" extension of the rules of intuitionistic logic that respects the correspondence between algorithms and proofs and is interpreted in a similar way as a theory of some sort of "sets": intuitionistic type theory. (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)
In intuitionistic type theory sets are of course replaced by types (which are not exactly the same thing), but there is "practically" a one-to-one correspondence between propositions expressed in the two languages. From a formal point of view, however, there is a big difference: ZFC is implemented as axioms on top of an independent first-order logic. Intuitionistic type theory instead is an extension of the rules of intuitionistic first-order logic to a higher-order logic (basically by adding quantification on functions), still making no use of axioms at all.
Now, back to the answer about algorithms.
As I wrote in the previous post, intuitionistic logic does not speak about truth but about provability and proofs. And proofs are, as in every formal logic, algorithms.
The fundamental difference is that proofs in classical logic are not an essential part of the theory: the important thing is that a proof of a proposition P exists. If a proof exists, the proposition P is true, and that is all what matters about P. The algorithm that has been used to prove P is not considered as part of the language.
In intuitionistic logic, instead, the language speaks about proofs, and the terms of the language are meant to represent proofs (that are algorithms), and then the language speaks about algorithms.
For example, in type theory I can write the expression "x : P", meaning "x is the proof of the proposition P". "x" in this case represents an algorithm.
Then I can have the expression "f : P -> Q", meaning that "f" is the proof that "P implies Q". This means that "f" is an algorithm that takes as input a proof of P (another algorithm) and returns as output a proof of Q (a third algorithm).
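A minimal Python mimicry of this reading (the names and the toy "evidence" are mine, purely illustrative): implication elimination is function application, and chaining two implications is function composition.

```python
from typing import Callable, TypeVar

P = TypeVar("P")
Q = TypeVar("Q")
R = TypeVar("R")

# Under propositions-as-types, "f : P -> Q" is a program transforming
# evidence for P into evidence for Q; modus ponens is just application,
# and a proof of "P -> R" from "P -> Q" and "Q -> R" is composition.

def modus_ponens(f: Callable[[P], Q], x: P) -> Q:
    return f(x)

def compose(f: Callable[[P], Q], g: Callable[[Q], R]) -> Callable[[P], R]:
    return lambda x: g(f(x))

# Toy instance: evidence for "n is even" is half of n; evidence that an
# even number's square is divisible by 4 is the quotient n^2 / 4.
half = lambda n: n // 2                  # even(n) -> witness
sq_div4 = lambda h: h * h                # witness -> (n^2)/4
assert modus_ponens(half, 10) == 5
assert compose(half, sq_div4)(10) == 25  # 10^2 / 4 == 25
```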
The "side effect" of this interpretation is that I never have to verify how a proposition has been proven, because proofs are always "recorded" together with the propositions. The propositions are interpreted as types, the terms are interpreted as proofs (always written in the form "x : P"), and checking the proof of P means checking whether the term x has type P (this is done with an algorithm that is part of the meta-theory).
A full explanation of how all logical operators are interpreted would take me probably several pages, and I can't write a book here.. :smile: (but there are a lot of good books on type theory)
However, back to the point of existential propositions: if I build a proof of "exists x:A, P(x)", it means that I built a proof of A called x, and a proof of the proposition P(x) (P(x) is what is called a "dependent type").
If A is the type of real numbers, a Cauchy sequence is a "proof" of A. Meaning: a real number is treated formally as if it was a proof that a real number exists, and corresponds to the algorithm that has been executed (applying the rules of logic) to build that real number.
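A hedged Python sketch of "a real number as the algorithm that builds it" (the choice of Newton's method is mine, purely illustrative): sqrt(2) is represented by a program that, for each requested precision k, returns an exact rational within 10^-k of the limit.

```python
from fractions import Fraction

def sqrt2(k: int) -> Fraction:
    """A rational within 10^-k of sqrt(2): the 'real number' is this algorithm."""
    x = Fraction(2)
    # |x^2 - 2| <= 10^(-2k) implies |x - sqrt(2)| = |x^2 - 2|/(x + sqrt(2)) < 10^-k
    while abs(x * x - 2) > Fraction(1, 10 ** (2 * k)):
        x = (x + 2 / x) / 2              # Newton step; stays exactly rational
    return x

a = sqrt2(6)
assert abs(a * a - 2) <= Fraction(1, 10 ** 12)
```

Asking for more precision just runs the algorithm longer; there is no completed infinite object anywhere, only the rule producing ever-better rationals.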
P.S. Reading what I just wrote here, I doubt that it's in some way understandable by anybody who doesn't already know about type theory. But I am not really able to explain it better in the space of a post...
I should quit now!
I just wanted to say that in the cold light of day I no longer believe what I wrote last night. If you start with classical math and remove LEM you get some form of constructivism. If you add LEM back in, you get back classical math. So there's much less than meets the eye when you say you can add in axioms to recover classical math.
Secondly, you said you can import a library into Coq to recover standard math. But if that were true, standard math would be computerizable, and then that would remove the biggest benefit of constructivism.
I confess to being just as confused as ever, but at a somewhat higher level.
It's too hot to type here today so I'll save the rest for later.
ps:
Quoting Mephist
Ok let me just try to get some clarification. If I use the axiom of choice to whip up the Vitali set, the set of representatives of the equivalence classes of [math]\mathbb R / \mathbb Q[/math], that set is not "constructive" by any conceivable definition of the word.
But if you code up the axioms of formal ZFC into a computer, the proof of the existence of the Vitali set can be cranked out by a program. Even by an undergrad! So in that sense, proofs are constructions even if they are constructing things that are in no way constructive.
Have I got that right? What I mean is, am I confused about something sensible?
Yes! Exactly!! Proofs are constructions in ANY formal logic, because that's how formal logic is defined!
There are computer-based proof verification systems for a lot of different logics: https://en.wikipedia.org/wiki/Proof_assistant
You can look here for a short history of the development of these systems: https://en.wikipedia.org/wiki/Automated_theorem_proving
These systems are called "proof assistants" because they generally allow the user to skip some steps of a proof and have the missing steps inferred, so that the user is not forced to write every single step, which can be very tedious (depending on what logic you use); but the main functionality that all of them provide is proof verification.
However, the truth is that this is all "theoretical" stuff: in practice, at least before Voevodsky introduced HoTT, mathematicians have always ignored these systems (as they have ignored formalized logic), because they are way too tedious and time-consuming to use in practice. The problem is that they force you to concentrate on absolutely inessential steps of the proof, and to use a language that does not describe at all what the important steps of the proof really "mean". But this problem is not due to the use of computers (formal systems were in a sense "made" for computers, even if they were invented when computers didn't exist yet); it is a problem with the formal languages themselves, and that's the main problem HoTT is trying to solve.
However, you can for example look at the HOL proof assistant (https://en.wikipedia.org/wiki/HOL_(proof_assistant)). This is a very clear implementation of proofs in classical higher-order logic as programs. In HOL you don't even enter a special environment where you write proofs: you write a program directly in the ML programming language (a perfectly "normal" programming language) to literally "build" the propositions that you are proving. There is a data type in ML called "proposition", and the rules of classical logic correspond exactly to the type constructors of the "proposition" data type. And the logic (higher-order logic) is probably the closest formal language to the "informal" classical logic used in practice by mathematicians in real (non-formalized) proofs.
The point is that "computerizable" does not mean "computable", because terms built in classical logic in general don't correspond to computable functions, and that is due to the fact that some of the rules, or some of the axioms, allow you to build mathematical functions in an indirect way, and that is what corresponds to the use of "oracle" (or non computable, or axiomatically defined) functions in constructive logic.
What constructive logic does is to move out of the logic all the non computable parts (leaving only the rules that build terms in a "direct" way), that can be however added back to the logic as axioms (usually by importing them as libraries) to recover classical logic.
What remains after removing the non computable parts of standard logic is a language that speaks only about proofs (not about truth), and proofs are purely syntactical entities that can be reasoned about using only computable functions.
Now, just before leaving for vacations, the metaphysical part, that surely I'll not be able to defend in a philosophy forum :razz:
The algorithms that represent "all the things that exist" are not only the finite algorithms, but even infinite ones: they are ALL the functions (even non-recursive or non-total ones). So the rules of logic are based on the model of computable algorithms, but are supposed to work for any function (which can be seen as a kind of generalization of an algorithm).
If you think about it, that's the same kind of "generalization" that is done in set theory: the rules of logic are based on the model of finite sets (I am speaking about the rules of first order logic - that are the ones that define the "meaning" of logical operators - not about the axioms of ZFC), but are supposed to work even for the infinite sets of set theory.
So, in constructivist type theory you replace the concept of infinite sets with the concept of infinite constructions. The rules of logic allow you to add "pieces" to these constructions and to compose them with each other, but of course you can't "decompose" the infinite ones: you have to treat them as a "black box", or an "oracle", or a function of a programming language belonging to an external library that "runs" on some kind of "machine" that can compute even uncomputable functions.
So, as in set theory you can think of everything as made of sets, in constructivist type theory (meaning the Martin-Lof kind of type theories) you can think of everything as made of functions.
And that's basically the reason for the strict correspondence between type theory and category theory: category theory is an axiomatic theory of functions (based on set theory). It axiomatizes the properties of functions so as to generalize them to morphisms, which are "anything" that "works" like a function.
As the physical intuition for sets is that every object is made of parts, the physical intuition for functions is that all of our experience can be seen as an interaction, or an experiment.
The only assumption is that functions are something that always give the same result if you prepare them in the same way.
But in the end, neither of these intuitions is really true at a fundamental level of physics:
- it is not true that everything is made of parts, because the "particle number" is not conserved.
- it is not true that there are experiments that can always be prepared in the same way to always give the same result.
But if you don't assume the idea that everything is made of parts, you don't have to assume the unintuitive :wink: idea that a line is made of points: you can think of lines and points as two different types, and an intersection of two lines as a function from pairs of lines to points.
The revenge of Russell. Type theory triumphant.
I'm all typed out, no pun intended, nothing else to say at the moment.
Take as an example "My local postbox contains some letters". This proposition isn't a full specification of a set, and hence is not representable as a specific algorithm and hence cannot be constructed, and therefore it must be stated axiomatically as a particular set which exists but without having any internal details. Some people might already interpret set theory, type theory or category theory in this way but this isn't thanks to those theories themselves.
- The Axiom of Choice is wrongheaded in two respects:
i) It isn't a logical rule permitting the universal existence of non-constructible sets. Rather, it is used as a non-logical proposition to refer to a specific set in a domain of inquiry whose number of elements is potentially infinite, that is to say, isn't specified in the logical description.
ii) The axiom has nothing to do with choice. On the contrary, it is usually used as a reference to sets for which a choice-function isn't specified.
Whereas classical logicians often mistake AOC for a logical rule, constructive logicians often mistake the Axiom of Choice for a tautology, for they conflate the existence of a choice function with the existence of a set. Yes, it is true that 'constructed set', 'computable set' and 'choice function' are synonyms; but this misses the point as to how the Axiom of Choice is used, namely to represent 'externally provided sets' that provide elements on demand, but whose mechanism and quantity of contents are unspecified.
To nevertheless insist that every physical set must be constituted by a computationally describable mechanism should be regarded as a physical thesis rather than a logical theorem. For logic is a purely descriptive language without physical implications. Physicists might use set theory as a means to document their epistemological certainty and uncertainty regarding the internal operation of physically important sets, but this epistemological interpretation of set theory isn't part of set theory per se.
The correct semantics of logic and mathematics is game semantics, which describes the interaction between entities created and controlled by the logician on the one hand, and the entities of his external environment that he is aware of but has only partial control over or specification of on the other. This semantics serves to reinforce the notion that logic and mathematics are languages of empirical description that document the interactions of the logician with his external world.
Quoting Mephist
Ok. I didn't go through it in detail. You shouldn't get hung up on the particular rotations, that's not relevant. What's important is that the isometry group of 3-space contains a copy of the free group on two letters, which already has a paradoxical decomposition.
I haven't time tonight (and I'm kind of busy for a few days and probably haven't the concentration anyway to go through your post line by line) but I recommend the Wiki outline of the proof, which I find pretty clear. https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
But you're asking for things to be continuous or preserve topologies or whatever, and those things have nothing at all to do with the proof. As I say I don't have the time/energy right now to go through your post but you're missing some of the overall structure. All of the transformations generated by the two matrices are isometries, or distance-preserving transformations. They're all rotations around the origin in fact. And the group of isometries contains a copy of the paradoxical free group. That's the main idea to absorb about the proof.
See https://en.wikipedia.org/wiki/Paradoxical_set
https://en.wikipedia.org/wiki/Free_group
I'll look at your proof some more but actually from what you wrote you're probably working from a source that emphasizes the particular matrices, but that's a troublesome way to go. All that's important is that there exist a pair of rotations that are independent, in the sense that no finite combination of the rotations and their inverses can ever be the identity rotation. That's why they are an example of the free group on 2 letters.
Def see this page.
https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
ps -- I glanced at your link. That paper's way too detailed and technical. Use the Wiki outline. There are four steps:
* Grok the sublime beauty of the paradoxical decomposition of the free group on 2 letters, [math]\mathbb F_2[/math]. In the proof sketch you gave, you did this part in the context of the two rotation matrices and that's terribly confusing. You can see from the free group that the paradox is already inherent in simply composing any two invertible operations. It's better to see this part first, before diving into the rotations.
* Take on faith that there is a pair of rotations of the unit sphere that are independent, hence a model of [math]\mathbb F_2[/math]. We don't care what the two rotations are. This is a technical part of the proof that's perfectly ok to simply accept.
* Let H be the copy of [math]\mathbb F_2[/math] generated by the two rotations of the unit sphere. Let H act on the sphere. The action partitions the sphere into disjoint orbits. You are correct that each orbit is countable and that there are uncountably many of them. So you're getting most of this right.
* Pick a choice set consisting of one element from each orbit. We recover the sphere by translating the choice set; analogous to the way we recover the reals by translating the Vitali set. If you have not studied the example of the Vitali set, start here. It's not directly relevant to Banach-Tarski but it's the same process of taking an equivalence relation and translating back a choice set on the equivalence classes to recover the original set. This is how to think about B-T.
https://en.wikipedia.org/wiki/Vitali_set
* Finally you apply the original paradoxical decomposition of the free group. I think I'm a little fuzzy on this point tonight but I believe I understood it at one point a few months ago. That was five steps. Wiki has it as four.
ps -- Ah I think I remember. After translating the choice set, every point on the sphere can be uniquely expressed as some particular rotation applied to some particular point in the choice set. Now when you apply the paradoxical decomposition of the free group to the set of rotations, you get a paradoxical decomposition of the sphere, modulo some details.
If you're having intuitions of topologies or continuity, those are the wrong intuitions to be having. What's interesting though is that each orbit is dense: it comes arbitrarily close to every point on the sphere. That's why I said earlier that the orbits are disjoint but in no way separated. They're like the rationals in the reals. Arbitrarily close to every real, but you couldn't cleanly separate them out.
I hope some of this helps. I'm not an expert. I can see that some of my exposition is fuzzy. I've been at this proof for quite some time. A few months ago I started writing up a simplified explanation and at one point convinced myself I understood about 95% of everything that was going on. Tonight I've got maybe 50 or 70%. But perhaps some of what I wrote will connect with your intuition. But forget continuity. The "pieces" of the decomposition are wildly discontinuous.
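The paradoxical decomposition of the free group described above can actually be checked mechanically on a finite slice of the group. Here's a minimal Python sketch (my own illustration, not part of the Wiki proof): words over a, b and their inverses (written A, B), reduced by cancelling adjacent inverse pairs. S(g) denotes the reduced words starting with g.

```python
from itertools import product

# Letters a, b and their inverses A, B; a reduced word has no adjacent
# inverse pairs. This is a finite window into the free group F2.
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduce_word(word):
    """Cancel adjacent inverse pairs (e.g. 'aA') until the word is reduced."""
    out = []
    for ch in word:
        if out and out[-1] == INV[ch]:
            out.pop()
        else:
            out.append(ch)
    return ''.join(out)

def words_up_to(n):
    """All reduced words of length <= n: a finite slice of F2."""
    ws = {''}
    for k in range(1, n + 1):
        ws |= {reduce_word(''.join(t)) for t in product('aAbB', repeat=k)}
    return ws

F2 = words_up_to(6)
S = {g: {w for w in F2 if w.startswith(g)} for g in 'aAbB'}

# The identity plus the four "starts with" pieces partition the slice:
# F2 = {e} ∪ S(a) ∪ S(a⁻¹) ∪ S(b) ∪ S(b⁻¹)
assert F2 == {''} | S['a'] | S['A'] | S['b'] | S['B']

# Yet left-multiplying S(a⁻¹) by a absorbs everything outside S(a):
# S(a) ∪ a·S(a⁻¹) already covers the whole group (checked up to length 5
# to avoid boundary effects at the length cutoff).
a_SA = {reduce_word('a' + w) for w in S['A']}
assert words_up_to(5) <= S['a'] | a_SA
```

Two of the five pieces reassemble into the whole group, and by the same argument so do S(b) and b·S(b⁻¹); that's the "one group becomes two copies" engine behind the paradox.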
Have you seen the Vitali set? You should start there. Define an equivalence, take a choice set, translate the choice set back to recover the original set.
* Consider the integers [math]\mathbb Z[/math]. Define two integers equivalent if their difference is a multiple of 5. This partitions the integers into five equivalence classes, each class countably infinite.
Now consider each equivalence class. One is {-10, -5, 0, 5, ...}, another is {-9, -4, 1, 6, ...} and so forth.
Now "using the axiom of choice," form a choice set consisting of one element from each equivalence class. Of course in this case we don't actually need choice, we could just take, say, the smallest positive element of each class. But we'll use the sledgehammer of choice just to get into the proper spirit.
Such a choice set would look like, for example, {-10, 6, 12, 18, -1}. One element from each congruence class mod 5.
Now we translate the choice set by every multiple of 5: {-10, 6, 12, 18, -1} + 0, {-10, 6, 12, 18, -1} ± 5, {-10, 6, 12, 18, -1} ± 10, etc. In this way we recover all the integers.
In fact every integer is the unique sum of one particular element of the choice set plus one particular multiple of 5.
What we've done is start by partitioning the integers into five classes, each countably infinite; and end up with a countable union of classes of size 5. We've "reversed the cardinalities."
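Here's a small Python check of the mod-5 construction (my own sketch): translating the choice set by multiples of 5 hits every integer in a test window exactly once.

```python
# Finite check of the mod-5 example: every integer in a window is written
# uniquely as (choice-set element) + (multiple of 5).
choice = [-10, 6, 12, 18, -1]   # one representative per residue class mod 5

window = range(-50, 51)
hits = {n: 0 for n in window}
for c in choice:
    for k in range(-30, 31):    # translate the choice set by 5*k
        n = c + 5 * k
        if n in hits:
            hits[n] += 1

# Uniqueness of the representation: each integer is covered exactly once.
assert all(count == 1 for count in hits.values())
```

The disjointness of the translates is the discrete shadow of what the Vitali translates do to the reals below.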
I have some pictures to go with this, is the exposition relatively comprehensible?
* Ok next example, the Vitali set. We start with the real numbers and declare two reals equivalent if their difference is rational. I hope you see that while the formalities are clear, the intuition is hellacious. The equivalence classes are very hard to visualize. That's why this is such a cool example.
Note that there are uncountably many equivalence classes; and each equivalence class is countable.
We call the resulting set of equivalence classes [math]\mathbb R / \mathbb Q[/math]. Now we do need the axiom of choice to form a choice set consisting of exactly one member of each equivalence class. Call the choice set [math]V[/math] in honor of Giuseppe Vitali, who cooked up this example in 1905.
https://en.wikipedia.org/wiki/Vitali_set
Ok now we do the same trick. Using the fact that [math]\mathbb R / \mathbb Q[/math] is an Abelian group under addition, we can show that by translating [math]V[/math] by each rational number in turn, we recover the entire set of reals.
Again, each real number is now expressed uniquely as the sum of a particular element of [math]V[/math] and some rational.
But now what we've done is partition the reals into an uncountable union of countable sets; and by translating the choice set, we ended up with a countable union of uncountable sets! Again we've reversed the cardinalities.
Why does this matter? In Vitali's example he restricted the operations to the unit interval, with the operation of "addition mod 1", or circular addition of reals. So 3/4 + 1/2 = 1/4, you wrap around. And now, when we have the unit interval expressed as a countable union of translations of [math]V[/math], we can apply countable additivity to show that the measure of [math]V[/math] can not possibly be defined. If the measure of [math]V[/math] is zero, by countable additivity the measure of the unit interval must be zero. If it's positive, the measure of the unit interval is infinite.
So there is no translation-invariant, countably additive measure on the real numbers that assigns 1 to the unit interval. [math]V[/math] is in fact a non-measurable set: a set of real numbers that can not possibly be assigned a size or a probability in any sensible way.
https://en.wikipedia.org/wiki/Non-measurable_set
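The wrap-around addition used in Vitali's argument can be sketched in a few lines of Python (illustrative only; the Vitali set itself cannot be constructed explicitly, since it needs the axiom of choice):

```python
# Circular ("mod 1") addition on the unit interval, as in Vitali's
# construction: translates wrap around instead of leaving [0, 1).
def add_mod1(x, y):
    return (x + y) % 1.0

assert add_mod1(0.75, 0.5) == 0.25   # the 3/4 + 1/2 = 1/4 example above

# Translating by a rational q with wraparound is a bijection of [0, 1),
# and a translation-invariant measure must give V + q the same measure
# as V. Countable additivity then forces
#     mu([0, 1)) = sum over rationals q in [0, 1) of mu(V + q)
# which is 0 if mu(V) = 0 and infinite if mu(V) > 0 -- never 1.
```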
* With these two examples in mind, we see how we partition a set into equivalence classes; take a choice set; then translate the choice set to either reverse cardinalities or get some other nice property that we care about.
So in Banach-Tarski, we start with the set [math]H[/math] of rotations generated by our two magic matrices. We take a choice set on the collection of orbits. Now we do NOT have a nice Abelian group; but we can use the properties of group actions to show that each element of the original set is expressed uniquely as a rotation applied to some element of the choice set. So we have the unit sphere expressed as a union of rotations in [math]H[/math] applied to the choice set. And since [math]H[/math] is a copy of the free group on two letters, the paradoxical decomposition of [math]H[/math] induces a paradoxical decomposition of the sphere.
I hope this makes some sense. I find that it helps me to understand the structure of the final step of the proof.
If topology has nothing to do with it, why do all the proofs that decompose objects without preserving volume decompose them into pieces that are not open sets?
Can you find an example of decomposing an object and then recomposing it with a different volume, where the pieces are open sets?
Question doesn't compute. IMO you are understanding some of the technical steps but not grokking the overall structure of the proof. I actually can't parse your question. Which proofs? What pieces? What not open sets? You're reading things into it that aren't there. I strongly urge you to examine the Wikipedia proof outline and put aside the paper you're reading because it's too detailed and doesn't show the big picture.
Quoting Mephist
Like f(x) = 2x stretching (0,1) to (0,2)? Yes. Open sets are easy if you don't require the maps to be isometries.
You should start with the free group on two letters, that's the key to the whole thing. Forget continuity, nothing here is continuous. Rotations are continuous; but what's being decomposed paradoxically is the group of rotations. It's a little hard to get a grasp on this the first n times you go through it, for some large value of n.
It's true that each individual rotation is continuous and carries open sets to open sets. But what's being decomposed is the collection of all the rotations; and this collection has a paradoxical decomposition by virtue of being a model of the free group on 2 letters. That is the one-sentence version of the proof.
Start with the free group on 2 letters. Go through the Wikipedia proof. Abandon the difficult paper you chose to work with. My two cents.
Quoting Mephist
Because topology has nothing to do with it! The rotations are continuous but the group of rotations has a paradoxical decomposition. Any open interval is topologically equivalent to any other, regardless of length. Topology is too weak to preserve measure. We need isometries for that.
In fact another one sentence summary of the proof is that while isometries preserve the measure of measurable sets; they do not necessarily preserve the measure of nonmeasurable sets.
Yes of course they have to be isometries. I meant: there is no way of decomposing an object into an infinite set of open sets and then recomposing them in a different way so that each piece has the same measure but the sum of the measures of all the pieces is different. If this were possible, the theory of integration would be inconsistent.
I know your objection: if there is an infinite number of pieces, the measure of each piece cannot be finite. OK, but you can build the limit of a sequence of decompositions, as you do with regular integration.
I am not arguing that the BT theorem is false; I am arguing that it works only because you perform the transformation on pieces that are not measurable. If the pieces were made using a decomposition into open sets, as with regular integration, it couldn't work. I know that you can even define a Lebesgue integral that works on sets that are not open: this is not a necessary condition, but it is a sufficient condition to preserve additivity.
Yes, ok.
Quoting Mephist
I'm not objecting, I'm agreeing.
Quoting Mephist
Yes of course! That's the point. Which reminds me of another reason the Vitali set is a warmup to B-T. The theorem only works because at least one of the pieces is nonmeasurable. If a set is measurable, an isometry must preserve its measure. If that's what you're saying, you're absolutely right.
Quoting Mephist
Sure. Agreed. All open sets are measurable. Can I prove that? In the reals it's easy because every open set is a finite or countable union of open intervals, intervals are measurable, and countable unions of measurable sets are measurable. In the plane or 3-space there's an analogous theorem but I'm not sure exactly what the wording is. Cartesian products of intervals are measurable so n-rectangles are, and you can probably express any open set as a countable union of rectangles but that's the part I'm fuzzy on. So I'm pretty sure all open sets in n-space are measurable but I'm only certain of the proof in one dimension.
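The one-dimensional argument can be made concrete. A minimal Python sketch (my own, restricted to finite unions for simplicity): merge overlapping open intervals, then sum lengths. This well-defined total is exactly why the measure of an open set of reals causes no trouble.

```python
# Measure of a finite union of open intervals (a, b): merge overlaps,
# then sum lengths. Open sets of reals are countable unions of intervals,
# which is why they are measurable; this sketch handles the finite case.
def measure_of_union(intervals):
    """Total length of the union of open intervals given as (a, b) pairs."""
    merged = []
    for a, b in sorted(intervals):
        if merged and a <= merged[-1][1]:
            # Overlaps (or abuts) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return sum(b - a for a, b in merged)

# Overlap is only counted once: (0,1) ∪ (0.5,2) ∪ (3,4) has measure 3.
assert measure_of_union([(0, 1), (0.5, 2), (3, 4)]) == 3.0
```

Nothing like this is possible for the Banach-Tarski pieces: they have no length to sum in the first place.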
The axiom of choice lets Vitali cook up a nonmeasurable set, and that's what's happening here. The choice set on the orbits I believe isn't measurable, but don't quote me on that.
:smile: :smile: :smile:
Quoting fishfry
Yes!! I was starting to despair that there was any way to make me understand...
My objection was only this one: BT doesn't make integration inconsistent. You can reason about infinitesimal parts and be confident of the fact that integration works, if you decompose the object in open sets. That's all I wanted to say!!!
Oh I see. You were concerned that you'd change the value of an integral by moving things around. But right, that can't happen because the pieces aren't measurable so they can't be integrated anyway. Good point.
Quoting Mephist
Yes I see your concern now. Integration doesn't break and open sets are well-behaved. That's why B-T is a technical result that doesn't actually break anything important.