The Pythagoras Disaster
In one of Robert Rosen's always fantastic essays, he begins by writing that "It is my contention that mathematics took a disastrous wrong turn some time in the sixth century B.C." - a 'disastrous wrong turn' he associates primarily with Pythagoras and his students, and one that extends all the way to today. So what 'wrong turn' does Rosen identify? He names it commensurability. The idea is this: for any two objects of measurement (lengths A and B, say), there will always be a third length (C), or common measure, that could, in principle, measure both A and B exactly and without remainder. Lengths A and B would thus be commensurable with respect to length C. In the example below, for instance, length C is the common measure of lengths A and B:
A: --
B: -- -- --
C: -
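The procedure for finding such a common measure can be sketched in a few lines of Python (a toy illustration, not anything from Rosen's essay): Euclid's repeated-subtraction algorithm returns the greatest common measure of two commensurable lengths, of which the diagram's unit length C is one instance.

```python
from fractions import Fraction

def common_measure(a: Fraction, b: Fraction) -> Fraction:
    """Euclid's 'anthyphairesis': repeatedly measure the smaller
    length off against the larger until nothing remains. The result
    measures both original lengths exactly, without remainder."""
    while b:
        a, b = b, a % b
    return a

print(common_measure(Fraction(2), Fraction(6)))        # 2
print(common_measure(Fraction(3, 4), Fraction(1, 2)))  # 1/4
```

Here Fraction(2) and Fraction(6) stand in for lengths A and B above; the procedure returns the greatest common measure, though any unit dividing both (like the diagram's C) also counts.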
As Rosen points out, the Pythagoreans assumed that any measurement whatsoever, anywhere in the universe, would be amenable to just the kind of common measure outlined above. This is the Pythagorean disaster. The problem is multiple. Not only is the assumption just that - an assumption - but it has also caused all sorts of issues in math itself. Ironically, it was the Pythagorean school itself that provided the first proof of the falsity of commensurability, with its discovery of the irrational numbers - numbers that, practically by definition, cannot be expressed as a ratio of whole numbers, and thus do not admit of a common measure. And as Rosen points out, it is the (false) assumption of commensurability that actually underlies almost all of our deepest mathematical problems: from the paradoxes of Zeno to the paradoxes of Godel, all of which, far from speaking to anything in reality, are nothing more than mathematical artefacts.
Aside from this, though, the real issue with commensurability is that it erases any reference to the real world. Recall that, in order to measure one length, A, using another length, B, one had to begin with two 'real world' things to be measured 'side by side' (in our example, B measures three As). However, once one assumes that all things are commensurable, one can actually drop the reference to any real-world measure whatsoever: all one has to do is assume that, having fixed an (arbitrary) common measure for all things, all things would be subject to just that measure. The ultimate advantage of this is that it allows math to become an essentially self-contained system.
As Rosen puts it, "Pythagoras [enabled us to] create worlds without external referents; from within such a world, one cannot entail the existence of anything outside. The idea behind seeking such a formalized universe is that, if it is big enough, then everything originally outside will, in some sense, have an exact image inside. Once inside, we can claim objectivity; we can claim independence of any external context, because there is no external context anymore." However, "if we retain an exterior perspective [outside the system in which commensurability is assumed], what goes on within any such formalized universe is at best mimesis of what is outside. It is simulation... [but] mimesis is not science, it only mimics science."
To the degree that there are those who still believe the world can indeed be exhaustively subjected to a common measure, we remain heirs to the Pythagorean disaster (alternative thread title: A Critique of Pure Math).
Comments (76)
What is real science then?
Non-quantitative analysis isn't objective enough. Take an object with mass 4 kg. When I hold it in my hand it feels heavy, but to a bodybuilder it feels light. What is unchanging and quantifiable is its mass: 4 kg.
How are we to even discover the laws of nature without mathematics given that the laws themselves are mathematical?
Perhaps there's a difference between 'commensurable' and 'quantifiable', and Robert Rosen has an issue with the former and not the latter.
If that's the case then Rosen's observation isn't anything new because, as you said, Pythagoras already identified the problem.
Maybe I misunderstood.
It is true that many things about the Pythagoreans seem eccentric, such as the well-known story of the Pythagorean student who was drowned or strangled for discovering the irrational numbers, or the legend that Pythagoras was captured because he refused to flee across a bean field, such was his hatred of beans. But nevertheless their emphasis on pure reason, on things that could be known simply by virtue of the rational mind, is one of the major sources of Western philosophy. I haven't read the paper and am not inclined to, but calling it 'a disaster' seems to me nothing more than empiricist polemics. So, sure, the particular notion in question might be completely wrong-headed, but write off Pythagoreanism and you'd be left communicating your OP via ink or smoke signals.
So much the worse for Western philosophy.
"Above all, we must give up reductionism as a universal strategy for studying the material world. But what can we do in a material world of complex systems if we must give up every landmark that has heretofore seemed to govern our relation to that world? I can give an optimistic answer to that question, beginning with the observation that we do not give up number theory simply because it is not formalizable. Godel’s Theorem pertains to the limitations of formalizability, not of mathematics. Likewise, complexity in the material world is not a limitation on science, but on certain ways of doing science."
It's only by opposing the Pythagorean disaster that one can, for instance, demonstrate the invalidity of the Church-Turing thesis as it pertains to the world. But, given that you're an arch-reductionist yourself, I suppose it's not surprising that you'd so willingly crawl into bed with the enemy.
So I think the Church-Turing thesis has problems because it regards human intelligence itself as something that can be digitised and written in binary code. That is reductionist in the extreme, but I think trying to attribute all of that to Pythagoras [if that is what is being said] is at the very least drawing a long bow.
Why? Pythagoras was the mathematical reductionist par excellence. It is not for nothing that the notion that 'everything is number' is chiefly associated with his name. There's barely a bow-string to pull, let alone draw long.
Consequently I'd argue that science has been reductionist because it has for so long been enmeshed in the 'broader philosophical tradition'. It's only been able to begin to extricate itself from that muck and mire now that the Pythagorean inheritance is drawing its (hopefully) dying breaths. A cause for celebration.
How splendidly Peircean!
You make an unwarranted assumption: that mathematics, science and philosophy could not have evolved in any other way than they have.
Also, it is not that certain generalities cannot be known by reflecting on our experience; rather, the greatest mistake has been to misconstrue that process of reflection as "pure reason".
The very idea of pure reason is itself a grotesque reification; "a fallacy of misplaced concreteness", to quote Whitehead.
Seems to me, if the result of that disaster is mathematics, science and technology, we could use more such disasters.
But the assumption now stands in the way of just those things.
Rosen’s point about incommensurability in all those fields still stands. But it was also overcoming the issue in a pragmatic fashion that has brought home the underlying trick involved.
Non-linear maths demonstrated that measurement error is not necessarily linear. Indeed, as Rosen says, it is almost generically the case that it ain't.
But rather than mathematical modelling just curling up and dying, it seems rather invigorated at finding ways to handle non-linear measurement error.
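A toy illustration of that non-linear error growth (my own sketch, not Rosen's): in the logistic map, a standard chaotic system, an initial measurement error of one part in ten billion is amplified to an order-one disagreement within a few dozen iterations.

```python
# Two trajectories of the logistic map x -> 4x(1-x), starting a
# hair's breadth (1e-10) apart. The separation grows roughly
# exponentially with each step, so a tiny measurement error in the
# initial condition does not stay tiny in the prediction.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10
for _ in range(60):
    x, y = logistic(x), logistic(y)
print(abs(x - y))  # no longer anywhere near 1e-10
```

Both trajectories stay bounded in [0, 1], yet after sixty steps they bear essentially no relation to one another: the measurement error has been amplified, non-linearly, to the size of the system itself.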
What is the disaster?
Given the existence of irrationals, isn't the point made here already accepted? The existence of irrationals has been known since ancient times, as you say.
How does the Pythagorean doctrine of commensurability lead to Zeno's paradoxes?
What is this 'commensurability'? The way you explained it, it means finding a common factor that divides evenly into two measurements.
If my understanding is correct then the problem is with whole numbers and their inability to correctly mirror reality, which has a bunch of irrationals in it, e.g. pi and e, and some physical constants like Planck's, etc.
Science, if I'm correct, does more accepting of truths which can be incommensurable than coercing mathematical models onto reality.
Perhaps there's something deeper I don't understand.
AFAIK the number line is complete (wholes, integers, rationals, and irrationals). We even have the real-imaginary (complex) number space.
I'm not saying that the above number space is complete. Perhaps we need a new set of numbers which are neither real nor imaginary. I don't know.
That's because, as I explained to you in Streetlight's other thread, the incommensurability lies in the relation of one spatial dimension to another. The modeling of space as dimensions, though very pragmatic, is fundamentally wrong. This incorrectness is demonstrated by that incommensurability.
The whole problem is precisely over the question of 'completion'. The assumption of commensurability turns on the idea that, once we 'complete math', we could then use mathematical tools to create a one-to-one model of all reality (Rosen: "The idea behind seeking such a formalized universe is that, if it is big enough, then everything originally outside will, in some sense, have an exact image inside"). But if commensurability does not hold - if not everything in the universe is in principle able to be subject to a single measure - then no such 'largest model' can exist.
Importantly, this does not mean that modelling is a lost cause; instead, it means that modelling must be specific to the phenomenon so modelled: beyond certain bounds and threshold values, modelling simply ends up producing artefacts (at the limit, you get Godel's paradoxes!). You can have models of this and models of that but never THE model.
I think Rosen is oversimplifying mathematics and science. Commensurability isn't a mathematical or scientific principle, unless the scientific quest for The Theory of Everything could be called that.
Does current science have commensurability as a principle? My guess is it does not.
Quoting StreetlightX
Theory of Everything?
I don't know that one can speak of 'current science' as a reified whole. Just (individual) scientists and their views, organizations and their views, institutions and their views and so on. In any case, it seems obvious that the quest for a TOE - on the premise of commensurability - still seems to many, if not most, a legitimate and desirable endeavour.
An interesting thing to note is that as we climb up the knowledge hierarchy there's always something that can't be explained by the underlying assumptions. For instance, the physics and chemistry underlying biology can't explain consciousness.
I see a pattern here like a series of buckets (branches of knowledge) each bigger than the previous. When water (knowledge) from the first smallest bucket is poured into the succeeding bucket, the receiving bucket has some space left over (unexplained stuff).
The above problem, if we could call it that, is an extension of the incommensurability problem, I think.
Quoting gurugeorge
Quoting TheMadFool
Just wanna come back and address these together as they all hit on similar points that I think deserve to be expanded upon. The idea as I understand it is this - there is in fact one way to 'save' the assumption of commensurability after the introduction of the irrationals, and it is this: to treat irrationals as the limit of a convergent series of rational numbers. In this way, we don't actually have to deal with incommensurate values per se, only rationals (Rosen: "At each finite step, only rationalities would be involved, and only at the end, in the limit, would we actually meet our new irrational. This was the method of exhaustion...")
In the nineteenth century, this was formalized with the procedure of 'Dedekind cuts', which enable the construction of the real numbers (irrational + rational numbers) from the rational numbers alone. One 'side-effect' of constructing the irrationals in this way was to definitively construe the number line as continuous (a 'gapless' number line). The idea is basically that simply by initiating a procedure of step-by-step counting, one can eventually arrive at an irrational at the limit of that process.
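The 'method of exhaustion' idea can be sketched directly (Python, exact rational arithmetic; the Newton iteration is just my choice of a convenient convergent sequence): every step of the process is itself a rational number, and the irrational √2 only appears at the limit.

```python
from fractions import Fraction

# Newton's iteration x -> (x + 2/x) / 2 converges to sqrt(2), yet
# every iterate is a rational number: the irrational "exists" only
# at the limit of the process, never at any finite step.
x = Fraction(1)
for _ in range(5):
    x = (x + 2 / x) / 2
    print(x)
# 3/2, 17/12, 577/408, ... -- each step rational, with the square of
# the iterate creeping ever closer to 2
```

At no finite stage does anything non-rational turn up, which is exactly the sense in which the construction lets one avoid "dealing with" incommensurate values directly.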
However - and here we get to Zeno - the attempt to 'save' commensurability in this way simply pushes the problem back a step, rather than properly solving it. For what Zeno points out is that even if you add up the points on a number line to arrive at an irrational, no single point itself has any length, and adding a bunch of lengthless points cannot itself yield any length (which in turn allows one to make wild (paradoxical) conclusions like √2 = 0).
So what the Zeno paradoxes essentially mark is the irreducibility of incommensurability. Making the irrationals the limit of a converging series of rationals in order to save commensurability is a bit like trying to suppress a half-inflated balloon: short of breaking the balloon, all one can ever do is shift the air around. One of the take-aways from this is that the very idea of the (continuous) number-line is a kind of fiction, an attempt to glue together geometry and arithmetic in a way that isn't actually possible (every attempt to 'glue' them together produces artefacts or problems, either in the form of irrationals, or later, in the form of Zeno's paradoxes - and, even further down the line, Godel's paradox).
[Incidentally this is something that Wittgenstein was well aware of: "The misleading thing about Dedekind’s conception is the idea that the real numbers are there spread out in the number line. They may be known or not; that does not matter. And in this way all that one has to do is to cut or divide into classes, and one has dealt with them all. ... [But] the idea of a ‘cut’ is one such dangerous illustration. ... The geometrical illustration of Analysis is indeed inessential; not, however, the geometrical application. Originally the geometrical illustrations were applications of Analysis. Where they cease to be this they can be wholly misleading." (Wittgenstein, Lectures on the Foundations of Mathematics)
Compare Rosen: "The entire Pythagorean program to maintain the primacy of arithmetic over geometry (i.e., the identification of effectiveness with computation) and the identification of measurement with construction is inherently flawed and must be abandoned. That is, there are procedures that are perfectly effective but that cannot be assigned a computational counterpart. In effect, [Zeno] argued that what we today call Church’s Thesis must be abandoned, and, accordingly, that the concepts of measurement and construction with which we began were inherently far too narrow and must be extended beyond any form of arithmetic or counting."]
@fdrake I wonder if this story sounds right to you. I've struggled somewhat to put it together, from a few different sources.
"In the history of number, we see that every systematic type is constructed on the basis of an essential inequality [read: incommensurability - SX], and retains that inequality in relation to the next-lowest type: thus, fractions involve the impossibility of reducing the relation between two quantities to a whole number; irrational numbers in turn express the impossibility of determining a common aliquot part for two quantities, and thus the impossibility of reducing their relation to even a fractional number, and so on. It is true that a given type of number does not retain an inequality in its essence without banishing or cancelling it within the new order that it installs. Thus, fractional numbers compensate for their characteristic inequality by the equality of an aliquot part; irrational numbers subordinate their inequality to an equality of purely geometric relations - or, better still, arithmetically speaking, to a limit-equality indicated by a convergent series of rational numbers". (Difference and Repetition).
I'd draw a distinction between paradoxes of intuition and paradoxes of formalism, not that they're mutually exclusive. Paradoxes of intuition are what occur when a mathematical idea is a very strange one to imagine, paradoxes of formalism are what occur when a mathematical construct has strange provable properties. Paradoxes of intuition can be posited as resolved for the further development of mathematics related to the paradox, paradoxes of formalism act as a roadblock to further development.
Zeno's paradoxes are paradoxes of intuition. This is because it's quite easy to circumvent Zeno's paradoxes with sufficiently precise definitions of what limits and continuity are; the celebrated epsilon-delta and epsilon-N constructions of Weierstrass. You can go on as if the paradoxes are resolved because pure mathematical inquiry is largely a conditional enterprise: given these assumptions (which characterise a structure), what can be shown and what does it do? You can posit yourself past the paradoxes if you so wish, as is usually done.
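For what it's worth, the epsilon-N idea is simple enough to demonstrate on Zeno's own halved distances (a toy sketch of my own): for any tolerance eps, there is a finite N beyond which every partial sum of 1/2 + 1/4 + 1/8 + ... stays within eps of 1, and producing that N is all the formal definition of a limit asks for.

```python
from fractions import Fraction

def N_for(eps: Fraction) -> int:
    """Smallest N such that the partial sum 1/2 + 1/4 + ... + 1/2**N
    lies within eps of the limit 1 -- and stays there ever after,
    since the partial sums only increase."""
    total, n = Fraction(0), 0
    while 1 - total >= eps:
        n += 1
        total += Fraction(1, 2**n)
    return n

print(N_for(Fraction(1, 1000)))  # 10, since the gap after n terms is 1/2**n and 1/1024 < 1/1000
```

No appeal to "completing infinitely many steps" is needed: the definition only ever demands a finite N for each finite tolerance.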
Real numbers in either case aren't paradoxes of intuition any more - they are widely accepted. The historical/cumulative nature of mathematical development brushes issues of intuition and imagination (insight about what mathematics should do) aside. So if you ask someone with mathematical training to picture a number system, they'll give something like the integers and something like the reals as their paradigm cases. The reals are treated differently than the whole numbers because they're a natural starting point for the field of mathematical analysis, which, with a coarse brush, is a study of continuity and limits. The historicality of mathematics gives playing about with axiomatic systems a retroactive effect on the very intuitions that mathematicians have, and thus on what seems like a fruitful avenue for further study.
The 'cut' issue Wittgenstein is highlighting is a feature of the Dedekind cut construction: with that formalism it isn't immediately clear that the irrational numbers are a dense set in the number line (which means there's an irrational arbitrarily close to every other number), whereas the sequential construction presents the density of the irrationals in the reals in a natural way; it piggybacks on the density of the rationals in the real numbers, which is clear from how the decimal representation of numbers works.
I also wouldn't emphasise a contemporary disjunct between arithmetic and geometry - when you teach them to kids they're presented in an essentially unified context of Cartesian algebra (the stuff everyone is exposed to with functions and quadratic equations etc); where arithmetical operations have geometric analogues and vice versa. This is after giving them the prerequisite basic arithmetic and basic geometry.
Cartesian algebra is posited as 'after' the resolution of Zeno's paradoxes, as while it works it sweeps those issues under the rug. The same goes for calculus; which, again with a coarse brush, can be taken as the relationship between the arithmetic of arbitrarily small and arbitrarily large quantities through limiting processes.
In contrast, Godel's incompleteness theorems, treated as a paradox, are a paradox of formalism. They mathematically show that there are limits to using formal systems to establish properties of formal systems. You're either inconsistent or incomplete, and if you can prove your own consistency, you're inconsistent. Any formal system that would circumvent Godel's theorems has to do so by being outwith the class of systems to which Godel's theorems apply. Contrast Zeno's paradoxes, which stymie some intuitions about any treatment of infinitesimals, but formally prohibit no current formalisations of them.
Quoting StreetlightX
Something under-appreciated about the mathematics of limits, which shows itself in the enduring confusion that 0.999... isn't equal to 1, is that when you're evaluating the limit of something, all the steps to reach that limit have already happened. So 'every finite step' misses the mark, as the limit is characterised as after all steps have been done. This means that when you characterise irrationals as convergent sequences of rationals, the infinity of sequence iterates has been 'done in advance'. If you truncate the series at some point you obtain an approximation to the limit, but really the limit has 'already been evaluated' as soon as you write it down. Similarly, you can conjure up the reals by augmenting the rationals with all such sequences, since the sequences have already terminated in advance.
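To make the 0.999... point concrete (a small sketch in exact arithmetic): every finite truncation falls short of 1 by exactly 1/10^n, but that shortfall belongs to the finite steps, not to the limit, which is exactly 1.

```python
from fractions import Fraction

def truncated(n: int) -> Fraction:
    """0.999... cut off after n nines, as an exact rational."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

print(truncated(3))      # 999/1000
print(1 - truncated(3))  # 1/1000: the gap at any finite truncation
# The limit itself is exactly 1 -- the gap 1/10**n belongs only to
# the finite truncations, all of which are "already done" in the limit.
```

Confusing the limit with its truncations is precisely the mistake the 0.999... puzzle trades on.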
"In the context I have developed here, the transition from rationals to reals gets around the original Zeno paradoxes, mainly because the reals are uncountable, shocking as this word might be to Pythagoras. In particular, a line segment is no longer a countable union of its constituent points, and hence, as long as we demand only countable additivity of our measure [lambda], we seem to be safe. We are certainly so as long as we do not stray too far from line segments (or intervals); i.e., we limit ourselves to a universe populated only by things built from intervals by at most countable applications of set-theoretic operations (unions, intersections, complementations). This is the universe of the Borel sets.
But if R is now the real numbers ... we are now playing with 2^R. Thus we would recapture the Pythagorean ideals, and fully embody the primacy of arithmetic... if it were the case that everything in 2^R were a Borel set. We could then say that we had enlarged arithmetic “enough” to be able to stop. Unfortunately, that is not the way it is. Not only is there a set in 2^R that is not a Borel set, but it turns out that there are many of them; it turns out further that there are so many that a Borel set is of the greatest rarity among them. It is in fact nongeneric for something in 2^R to be Borel". I'll send you the chapter, and, perhaps, if you have the time you might see where this goes (basically you get a progression from irrationals -> Zeno -> Borel sets as nongeneric -> ... -> Godel; Rosen also mentions Banach-Tarski as another instance of an artefact produced by the assumption of commensurability).
So I guess what I wonder - genuine question - is whether the paradoxes of intuition are only seen to be paradoxes of intuition in the light of now-established mathematical development, so that what once seemed to be a paradox of formalism becomes a paradox of intuition after we come up with a way to develop the math a little (usually by 'enlarging the arithmetic'). I'm wondering because if it's possible to view the development of math (so far) as the development of ever more ingenious ways of burying incommensurability through higher-order abstractions for the sake of upholding commensurability, then at the limit, would it be possible to say that the distinction between paradoxes of intuition and paradoxes of formalism is just the recto and verso of the same (ultimately) unsuppressable undertow of incommensurability, just seen from different temporal angles? (as in - PoI 'after' mathematical development, PoF 'before' development). Hope this question makes sense.
Not sure that does answer my point, which is about infinite divisibility. It's the divisibility of the line that creates the numbers - IOW it's the notionally zero-width cut that separates the continuum into parts that creates the numbers, it's not that you're building up a bunch of nothings into a something, as with Zeno.
And so long as you can do that, you can find a common measure.
Quoting fdrake
Nice discussion. The core problem is that this is a tension that always exists because it speaks to an underlying metaphysical-strength dichotomy, and thus it raises the issue of what it would mean to resolve the tension without also dissolving the useful division.
So the mathematical debate seems to hinge on whether "the real" is discrete or continuous. The intuition being applied gets hung up on that. And clearly - Rosen's point - maths depends on the trick of atomistic constructability. Arithmetic or algebra are seen as fundamental because a continuity of form can be built up step by step from a succession of parts or acts.
But then continuity - the grounding wholeness that geometry seems to speak just as directly to - also seems to exist just as much, according to intuition. The geometer can see how the operation of division is a cuckoo in arithmetic's nest. Zeno's paradox shows that. There is more to the story than just the algebraic acts of construction - addition, subtraction and multiplication.
Then continuity shows its face in other ways. Non-linear systems contain the possibility of divergences at every point in their space. As Rosen argues, systems that are safely enough linear are in fact rare in nature. Linearity is non-generic. Perfect constructability must fail.
So the problem is that the tension is real. Construction seems to work. Used with care, maths can formally model the world in ways that are powerfully useful. The world can come to seem exactly like a machine. And yet also, as any biologist or quantum physicist will know, the perfectly mechanistic is so non-generic that ultimately a machine model describes nothing in the real world at all.
It is the pragmatics of modelling that really bring this tension into the limelight. Maths can simply ignore the issue. It can keep deferring the problems of constructability by pushing them ever further away as the limit, just as @fdrake describes. It is a respectable working practice. Maths has benefited by taking this metaphysical licence. But scientists modelling the world with maths have to deal with the ill-fit of a purely mechanistic description. Continuity always lurks and waits to bite. It needs to be included in the modelling game somehow - even if it is just like Rosen's essays, the planting of a bunch of "here be dragons" signs at the edge of the intellectual map.
But the way out for me is the usual one of Peircean semiotics. Where you have a dichotomy, you actually have a pair of complementary limits. The discrete and the continuous would both be a matter of "taking the limit". And this is in turn founded on a logic of vagueness. You can have the discrete and the continuous as both merely the emergent limits on the real if they have some common ground of indistinction that they are together - measurably - serving to divide.
So now you don't have to worry if reality is fundamentally discrete or fundamentally continuous. It is neither - always being vaguer - but also it is forever moving towards those crisp limits in terms of its actions. If it is developing, it is the discrete vs the continuous dichotomy that is becoming ever more strongly manifest. It is approaching both limits at once.
At this point, we might need to move on from the overly spatial dichotomy of the discrete~continuous - the idea of a 0D location and its 1D line, the simplest possible space that would be formed via a translational symmetry and the definite possibility of its being broken. The real world needs to incorporate space, time and energy as its triad of irreducibly fundamental components. A maths suited to actually modelling nature would need to align itself with that somehow.
Or indeed, being biologists, concerned with the study of organisms, we might leap all the way to a focus on agency and autonomy - the modelling relation, or semiosis pure.
Now we can reply to the issue of atomistic constructability in terms of the dichotomy it forms with the notion of holistic constraints. The real world - sans modelling - just is a product of constraints on freedoms. But modelling itself has a pragmatic goal regulating it. The goal of being a modeller - the reason organismic agency and autonomy would evolve within an agent-less cosmos - would be to gain machine-like control over nature. A model is a way to construct constraints so as to achieve purposes. And hence mathematics reflects that modelling imperative.
Maths gets it "wrong" by pushing constructability to an unreasonable seeming metaphysical limit. It makes the "mistake" of treating reality as if it were a pure machine. And yet that is also what is right and correct. Maths is useful to the degree it can construct a world constrained enough by our machinery that it achieves our goals reliably enough.
Biology itself is already a mechanisation of physics. It is the imposition of a system of molecular motors on nanoscale material chaos. So scientific modelling is simply an intellectual continuation of that organismic trick.
Rosen is a bit conflicted in that he complains about the flaws in the tools we use, and yet those flaws are only apparent in the grandest totalising metaphysical perspective. The largest model.
So what he gets right is that the mathematical approach, based on mechanical constructability, can only constrain uncertainty, never arrive at certainty. That is all maths ever does - move towards the limits of being, imagined particularly in the form of the dichotomy of the continuous and the discrete, the geometric and the algebraic, the structures and their morphic acts.
Maths can point to the limits where uncertainty of either kind - either pole of the dichotomy - would finally be eliminated. But the uncertainty must always remain. Which is why maths also keeps advancing: every step towards those limits springs a leak that is then worth our while trying to fix, so setting up the need for a further step to repair the still smaller leak that will now be exposed.
So it is an interesting game. Nature is the product of the symmetry-breaking tango between global constraints and local spontaneity. Uncertainty is basic. Yet also it becomes highly regulated or lawful.
Then organisms arise by being able to seize control of the regulatory possibilities of a modelling relation. If you can construct constraints - impose mechanistic structure on that natural world - then you can become a world within the world. You can use your ideas to make the world behave locally in ways that suit your interests.
Eventually humans gained such mathematical mastery over their realities that they could afford to get upset about the way even the best-constructed models were still full of leaks.
But apophatically, flip that around, and you now have your best metaphysical model of reality as a system of constraints. Uncertainty - as vagueness - becomes the new anti-foundationalist foundation. Atomistic construction comes into focus as the emergently individuated - the machinery being imposed on nature so as to limit that vagueness, to corral the spontaneity or fluctuation that spoils the way it is "meant to look".
One of the great insights of measure theory is that measures are rarely unique. For any given collection of objects there will usually be many different ways to measure them. For instance in integration, which can be thought of as a measure of areas or volumes, we have Riemann integration (which people tend to learn in high school) and Lebesgue integration (which tends to be introduced in university pure maths courses). In many cases they give the same answer but there are important cases where they don't. There are other forms of integration as well that differ from these two.
The case in the OP seems to suggest that there is some unique, true, primordial measure of the items A and B, and that we just need to find it. Measure theory reveals that we can define many different measures for the objects. A simple one is that the measure of A is 1 and that of B is 0. In the simple universe that contains only A and B, that measure obeys all the axioms required of a measure, as does the measure that says A's measure is 0 and B's is 1, or that they are both 5.1.
When we introduce a third object C we might try to define a measure in terms of that, as 'the size of an object is the number of complete copies of C we can line up against it before we reach the end'. Then when Rosen says that A and B are incommensurable in terms of C, the Measure Theoretic way of saying that is that the C-based size function we just defined is not a measure, because it does not satisfy the axioms required for a measure, in a universe that contains A and B as well as C. There's a bit of mathematical work required to show that it fails the axioms, but it's not difficult. It involves taking the left-over bit in A or B after measuring via C and replicating it until it becomes bigger than C.
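The failure of additivity can be sketched quickly in code (my own illustration, not part of andrewk's post; the lengths A = B = 0.5 and C = 0.3 are arbitrary choices):

```python
import math

def c_size(length, c):
    # 'the number of complete copies of C we can line up against it
    #  before we reach the end'
    return math.floor(length / c)

C = 0.3
A = B = 0.5
# a measure must be additive: size(A) + size(B) should equal the size
# of A and B laid end to end, but the floor-based count is not additive
print(c_size(A, C) + c_size(B, C))  # 1 + 1 = 2
print(c_size(A + B, C))             # floor(1.0 / 0.3) = 3, not 2
```

The left-over bits after each floor, once joined, fit a whole extra copy of C, which is exactly the replication argument described above.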
I don't know the detailed history of Measure Theory. It is possible that its invention was inspired by problems like that of Pythagoras. If so then, far from being a disaster, it has led to some of the most useful and productive parts of modern mathematics.
I don't know much at all about mathematics, but I was thinking earlier when reading this thread about the Pythagorean 'incommensurability problem' that arises when the sides of a triangle are, for example, 1 unit and the hypotenuse is thus 2 units.
I thought, doesn't the problem go away if we call the sides 2 units instead, which it would seem we could consistently do, since units of measurement are arbitrary? This would make the problem seem to be more to do with the number system itself rather than the way mathematics maps onto 'reality'. But then, as I said, I don't know much about math...
If the sides are 2, the hypotenuse is 2 times radical 2, and so still irrational.
What you're asking for is some measure that can be applied to the sides, which is itself rational, and the square root of the doubling of whose square is also rational. It seems like this shouldn't be possible, but I'm not sure.
Of course, you're right; my ability to do simple addition failed me there!
So, yes it does seem that the square root of the sum of the squares of two equal numbers will always be irrational.
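For what it's worth, that observation can be made precise with a line of algebra. For any nonzero rational side a:

```latex
\sqrt{a^2 + a^2} = \sqrt{2a^2} = a\sqrt{2}
```

and a√2 must be irrational, since if a√2 = p/q for integers p and q, then √2 = p/(qa) would itself be rational, contradicting the classical proof that √2 is irrational. So rescaling the units can never make the sides and hypotenuse commensurable.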
Yes I see that now, and would have before if I didn't suffer a momentary lapse of reason. :yikes:
One of the things I like about Wittgenstein's approach is that he 'accepts' the number line on the condition that it isn't leveraged to ground a 'general theory of real numbers' (W: "The picture of the number line is an absolutely natural one up to a certain point; that is to say so long as it is not used for a general theory of real numbers"). I think one of the things he's getting at is the conditional nature of the (dense) number line: if you want to construct irrationals from the rationals, then and only then will your number line be dense. What he worries over is the 'transcendental illusion' of confusing effect for cause: of saying that it is because the number line is dense that one can make cuts and thus construct the Reals (and with them, the irrationals).
This is why he makes a distinction between 'geometric illustration' and 'geometric application': the number line is a geometric illustration that 'falls out' of an arithmetic procedure (the Dedekind cut), but it is not the case that the number line 'really is there' in any Platonic-realist sense which is then 'discovered' in the course of employing the cuts (Witty again for reference: "The geometrical illustration of Analysis is indeed inessential; not, however, the geometrical application. Originally the geometrical illustrations were applications of Analysis. Where they cease to be this they can be wholly misleading"). Instead, the cuts retroactively 'make' the Real number line the dense continuum that it seems to be.
(I'm again at my limit of mathematical understanding here, but I wonder if there is a similar issue when one moves 'one level up' from the Reals to the Complex numbers: is there an arithmetic procedure analogous to Dedekind cuts that can generate Complex numbers that, as it were, retroactively makes the Complex plane (?) ... dense(?) (again, I think my vocabulary is probably wrong and that we don't speak of the plane being 'dense', but y'know... filled with 'arbitrarily close' numbers... whatever it's called on the Complex plane). And then maybe reiterated for the quaternions and octonions and so on and so on? - idk if it works like that?).
Anyway, the point is that yeah, I totally get that it's not quite fair to speak of 'every finite step' because the evaluation of the limit posits the steps as 'always-already' completed, but I guess what I'm trying to say is that this is precisely the (possible) problem: it's insofar as limit procedures already 'presuppose', in advance, the number line, that one can misleadingly take it that all the numbers are just 'there' in some Platonic fashion, just waiting to be discovered (whereas Witty emphasises that it is only the application of the math which determines, retroactively, as it were, what the math itself looks like). Again, sorry if I'm getting my terms mixed up, but I hope the sense of my comments comes through.
Quoting gurugeorge
The above can serve as something of a reply to this as well, I hope. Again, it's not that there is a continuum 'already there', waiting to be cut: rather, the cut and the continuum are co-extensive insofar as the one presupposes the other; the cut extends math itself and does not just apply to 'existing' math. This is what I was trying to get at in my initial post to you when I said that the number line is something of a fiction.
I'm not entirely sure what you mean by 'involve any incommensurability'. The idea is that if one assumes commensurability, then the Zeno paradoxes are one of the results of that assumption when one tries to construct a continuum from discontinuities.
(1) If A = mC and B = nC, then we can cancel out C such that:
(2) A/B = m/n
(3) B = A(n/m)
So that now, length B is measured in terms of (rational) units of A. This is where the assumption of commensurability becomes properly insidious, because now there is no 'external referent' which acts as a 'real world' mediator between values. Once you define a length in terms of units of another length, what you end up with is a 'closed system' which becomes impossible to get out of (Rosen: "Once inside such a universe, however, we cannot get out again, because all the original external referents have presumably been pulled inside with us. The thesis [of commensurability] in effect assures us that we never need to get outside again, that all referents have indeed been internalized in a purely syntactic form").
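To make the arithmetic concrete, here is a toy sketch (my own, using the lengths from the diagram in the OP, where A = 2C and B = 6C):

```python
from fractions import Fraction

# lengths from the OP's diagram: A = mC and B = nC with m = 2, n = 6
m, n = 2, 6
C = 1
A, B = m * C, n * C

ratio = Fraction(n, m)   # from A/B = m/n it follows that B = A * (n/m)
assert B == A * ratio
print(ratio)             # 3 - B measures three As, and C has dropped out
```

Note that C cancels entirely: the final statement B = 3A makes no reference to the common measure at all, which is just the internalisation of referents being described.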
And in fact, this is the real upshot of commensurability: not simply the idea that there is a common measure between things, but that everything inside a formal system can be measured in terms of other things within that system without remainder. And this is what the 'disaster' is: the expulsion, if you will, of any external referent which would 'ground' the seeming self-sufficiency of such a self-enclosed system. On the flip side then, what incommensurability is, ultimately, is the non-identity (or the non-coincidence) of what can be measured with what can be constructed (the irrationals again are exemplary: the discovery of the irrational forced us to expand our universe of number so as to make measurement and construction commensurate again, which, as I've tried to point out, simply caused problems further down the line).
So this is all not strictly about measurement per se, and I have no beef whatsoever with the awesome innovations of measurement theory. Instead, it's about the relation between measurement and the things so measured, and an attempt to delineate the proper bounds of that relation; a spin, if you will, on the idea of a Kantian-inspired 'Critique of Pure Math', in which, if measurement is left to its own devices to see the world only in its image, you end up with all sorts of transcendental illusions like Zeno's paradox and so on.
I think there's a way of believing in such a regulative ideal (a Pythagorean-ish one) without hypostatising it into a nascent theory of everything. I don't know exactly what it would entail, but I'd like to have the ability to distinguish good from bad models based on general but problem-specific criteria. The faith in the regulative ideal wouldn't then become a nascent global system, but it would let you espouse sensible regulative ideas - like double blinding and experimental design that accounts for confounding - for subjects of study. Aiming your inquiry towards the phenomena (or imaginative background of a formalism) is kind of an empty regulative idea in that when it is generalised over topics of inquiry it espouses nothing, but when you particularise it there are things which block or impede inquiry (like measurement error) and things which make it grow (like new types of model that work). I think Heidegger's idea of truth is pretty prescient here.
And I would resist having a global system of systems on empirical grounds too; when you look at different topics of inquiry and their methodologies, it's pretty clear that there's as much mismatch as there is commensurability between them. So if you're an anthropologist doing participant observation, that generally won't have much relevance to experimental design or modelling fluxes with differential equations. They speak differently because the phenomena demand it.
A good example of the transcendental illusion brought about by scientific posits is homo economicus; the rationally self interested utility maximiser in a condition of good scarcity falls out of the equations because that's precisely what's assumed to get all this utility maximisation formalism going. Whenever the models are seen to work, it projects truth down to their assumptions (per Lakatos).
Pure mathematics also has a good example of why a complete system is impossible: the very idea of a final theorem which allows everything to be derived from it is silly. The final theorems resemble axiom systems in terms of their function rather than results derived from them. That inquiry into mathematical structures could ever terminate after proving all that is provable is not just curtailed by Godel's theorem and the halting problem - which give a formalistic picture of the bottomless depth of mathematical structures - but also by the fact that different applications of mathematics greatly differ in conceptual structure, even when formally equivalent (like cuts and Cauchy sequences in characterising the reals).
I don't think pure mathematicians in general think that they're aiming for an architectonic system of all mathematics, rather they're working in specific domains of objects which are interesting for some reason - which includes relations between structures. But I do think that the structures in pure mathematics behave quite similarly to natural phenomena for the purposes of research; you can be guided by mathematical phenomena in much the same way as you'd be guided by nature or the real. I don't mean this in the Platonic sense that there's an abstract population of regulative mathematical ideals with an illusory substance, but it's certainly true that mathematical objects are suggestive in a similar way to real processes are. The example I used above with cuts and sequences is one I've used a lot to make this kind of point - the cuts emphasise the irrationals as holes in the rational number line, the sequences emphasise the irrationals as a dense set in the number line. So with the cuts formalism it's easier to think about the rationals as an incomplete space (you formalise the holes), with the sequences formalism it's easier to think of the reals as a complete one (you formalise the filling). That holes and fillings are equivalent from the perspective of the real number line is something which can be proved, and it's pretty nice, regardless of the different imaginative backgrounds for each formalism.
The danger is in 'transcendental illusions' fostered by successful modelling so that one begins to believe that the phenomenon at hand is essentially (in its 'being') modelable, and that the universe as such is calculable qua encompassing cosmos (rather than Deleuze and Guattari's/Joyce's 'Chaosmos' - perched half-way between chaos and cosmos).
— StreetlightX
Even that mathematical objects are computable in principle is an illusion. Real numbers which can be approximated with arbitrary precision by an algorithm - computable numbers - take up 0 volume in the real number line.
It is true, however, as with Rosen's point that Borel sets are nongeneric in the power set of the reals, that the objects which mathematicians study are rarely generic mathematical objects. It's just true that the interesting ones are sufficiently well behaved; that's what makes them interesting. Of course, some people like to catalogue counterexamples of structures - there's a book called Counterexamples in Topology which is a catalogue of thwarted intuitions between mathematical formalisms of 'nearness' of point-like objects in a space. The ones that are studied are non-generic precisely because the generic ones thwart useful analogies and relations of properties.
Though I must admit that I don't see Zeno's paradoxes as problems, or a disaster, either – I always took them as didactic exercises. They're so obviously fallacious that it always struck me as odd that anyone could be bothered by them: you give them to a student and they tell you why they're confused. I wouldn't try to redesign mathematics to 'avoid' them, because this falsely presupposes that they are problems.
I'd rather not put it like that, as it seems to imply that we need to 'go to infinity' in order to make sense of the limit. Then before we know it, people like the apologist William Craig are butting in making ignorant statements about the possibility of 'going to infinity', as if that actually meant something.
Yet limits can be, and usually are in formal maths texts, defined using purely finite concepts.
Consider the function f(x) that is L + (x-a) sin 1/(x-a) for x<>a and gives L at x=a. Unlike the function g(x) = (x-a) + L we can't look at an intuitive straight line that heads straight for the point (a,L), because the curve of f keeps wiggling, and the speed of wiggling increases without limit as x approaches a. But when we say that the limit of f(x) as x->a is L, all we are saying is that, for any small number e>0, we can find another number d>0 such that whenever x differs from a by less than d, f(x) will differ from L by less than e.
That definition uses only finite concepts, and proving that it holds in this case also requires only finite methods.
This is a demonstration of the little-recognised fact that most limits, and indeed most of calculus, can be done without using any notions of infinity. Even notions such as lim(x-> infinity) f(x) or (using an extended definition of limits) lim (x->a) f(x) = infinity don't actually need to use a notion of infinity. The infinity symbol in those expressions can be interpreted using only finite methods like what was done above.
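The finite character of that definition can even be checked numerically. Here is a toy sketch (my own, with the arbitrary choices a = 0 and L = 1); since |sin| ≤ 1, we have |f(x) - L| ≤ |x - a|, so the witness d = e works for every e > 0:

```python
import math
import random

def f(x, a=0.0, L=1.0):
    # the wiggly function above: L + (x-a) sin(1/(x-a)), with f(a) = L
    if x == a:
        return L
    return L + (x - a) * math.sin(1.0 / (x - a))

# epsilon-delta check at sampled points: whenever |x - a| < d = e,
# the bound |f(x) - L| <= |x - a| guarantees |f(x) - L| <= e
a, L = 0.0, 1.0
for e in (0.1, 0.01, 0.001):
    d = e
    for _ in range(1000):
        x = a + random.uniform(-d, d)
        assert abs(f(x, a, L) - L) <= e
print("epsilon-delta bound holds at all sampled points")
```

Nothing in the check, or in the proof it illustrates, invokes an infinite object: just pairs of finite numbers e and d.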
Infinity is great fun to explore, especially in set theory and topology. I love Cantor's work as much as anyone does. But we don't need it to do calculus. We need unconstrained divisibility, but not a notion of infinity.
I've been thinking about this in terms of various universes that are limited as to what's in them. If we have only two objects, A and B, and they don't have regular markings like on a ruler, then I think the only measurement we can do is to say for each of them, how many complete copies of the other can be lined up alongside it without exceeding the length of the first. We do this in the same way we use an unmarked 30cm ruler to measure the length of a wall - by rolling the ruler end over end until we reach the end of the wall, and counting the number of rolls.
In this way, we'll end up with the measure of the shorter object in terms of the longer being 0, and the measure of the longer object in terms of the shorter being the ratio of the two 'true lengths' rounded down to the nearest integer.
Both of these measures satisfy the axioms required to be a true measure.
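The end-over-end counting above amounts to a floor function; a minimal sketch (my own illustration, with the unit right triangle's leg and hypotenuse as the two example objects):

```python
import math

def rolled_count(target, unit):
    # complete copies of `unit` that fit along `target` without overshooting,
    # like rolling an unmarked ruler end over end along a wall
    return math.floor(target / unit)

A, B = 1.0, math.sqrt(2)   # e.g. leg and hypotenuse of a unit right triangle
print(rolled_count(A, B))  # shorter measured by the longer: 0
print(rolled_count(B, A))  # longer measured by the shorter: 1
```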
If we want to allow rational measures but have only two objects, we need at least one of them to have regular markings on it like a ruler. So if one object is a 30cm long straight stick with marks every 1cm, we can measure the length of the other object to a precision of 1cm, rather than 30cm.
I have a slight doubt about whether an object with 29 regular markings is one object or thirty, but I think I'll dismiss that doubt as too metaphysical for the time being.
If both objects have cm markings on them, each can be measured as a number of whole cm, so we can say the length of A is m cm and that of B is n cm, but we know the lengths are not exact.
I think I need to stop here and ask for input, because it's starting to look like this avenue of investigation doesn't involve irrational numbers or Pythagoras. Rather it is just saying that any physical object can only have regular markings on it with a finite level of granularity. So the problem here is the limitation of accuracy that gives us, and it arises for objects whose length ratio is purely rational, just as much as for pairs where the ratio is irrational. If the smallest measure gradation marked on either object is micrometres (10^-6 m) then, even if both objects have lengths that are whole numbers of nanometres (10^-9 m), the measure is in some sense 'inadequate' if one or both of those whole numbers is not a multiple of 1000.
A right-angled triangle of sticks, whose lengths are not those of a Pythagorean Triple, guarantees a measurement problem because one of the sticks will have a length that is an irrational multiple of the lengths of the others, but so does just having two sticks, one of whose lengths is not a whole-number multiple of the spacing of the marks on the other.
I imagine that threads like this are a safe space where we don't need to worry about giving William Lane Craig more misguided ideas about infinity.
Quoting andrewk
That's the ingenuity in the definitions, really. You can deal with limiting processes via the nice substitution of arbitrarily small difference and becoming arbitrarily large (unbounded increase). So yes, what you're saying is exactly correct about the formalism of limits. It neatly avoids infinity as a concept by replacing it with functionally equivalent substitutes, so that when people think about limits it doesn't matter if they think of 1/infinity = 0 or something similar, because you can just say 'I really mean this formalism' rather than the imaginative background that surrounds it.
The formalism also gives you means of defining derivatives and integrals in a manner similar to the original ideas but with the problems regarding infinity and 0 removed. It also, with some approximation theory added, lets you evaluate indeterminate forms through limiting arguments.
Quoting andrewk
We do need the ability to iterate a process as long as we need to though, even when producing the limit would need infinitely many steps. If you were asked to terminate a sequence tending to 0, you don't get 0, you get something close to it. Similarly, if you were asked to terminate a divergent sequence, you get something arbitrarily far away from infinity rather than something arbitrarily large. That the whole process - the infinitude of steps - has already terminated is dealt with by the 'for all epsilon' quantifier. Just as we shouldn't ask for the 'last term' of a convergent infinite series, we should treat the infinity of steps in the progression of the series as equivalent to its culmination.
I see it as: the utility of dealing with infinitely small increments in pre-rigorous calculus created the need for a formalism to make sense of them. The formalisms would be wrong if they didn't reproduce the pre-established sensible ideas of derivatives and integrals.
But it is true that Zeno's paradoxes don't require the real line; infinite divisibility occurs in the rationals first. I was going to bring that up too, but I thought it wasn't very relevant to the theme. The reason the real line is the family home of analysis is that it's a complete space (every nonempty set with an upper bound has a least upper bound). You don't get that with the rationals (think of a sequence of rationals that would converge to Pi or e in the reals, then demand that all numbers be rationals - boom, no least upper bound).
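A quick sketch of that failure (my own illustration, using √2 rather than Pi for simplicity): the decimal truncations of √2 form an increasing, bounded sequence of rationals with no least upper bound inside the rationals.

```python
import math
from fractions import Fraction

# decimal truncations of sqrt(2): 1, 1.4, 1.41, 1.414, ...
approx = [Fraction(int(math.sqrt(2) * 10**n), 10**n) for n in range(8)]

for q in approx:
    assert q * q < 2   # every term stays strictly below sqrt(2)...
# ...yet any rational upper bound r has r*r > 2, and a slightly smaller
# rational still above all the terms can always be found, so no least
# upper bound exists among the rationals
print([float(q) for q in approx[:4]])
```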
I think that point just displaces the debate to another level, we need a number line with no suspiciously convergent looking sequences failing to converge. It's another conceptual repair which seems inevitable or even a priori when considered ex post facto. Which is part of the thrust of the OP - we tinker with mathematical ideas while working with a mathematical canon which is composed by past ingenious tinkering, which is forgotten as math appears as a giant book of pre-established theorems and rules God wrote - as a self sufficient and eternally true realm apart from human activity. Arguably anyway.
While it's true that the formalisms for limits and continuity are stated without reference to the infinite, I think you have to remember the context in which they were made. The epsilon-N criterion for convergence applies to infinite series, the definitions for continuity can be restated in terms of preserving the convergence of infinite sequences. So, it's an excellent bit of tinkering that tames infinity in terms of arbitrary closeness and unboundedness, but the problems it addresses are those which were engendered by an encounter with infinite mathematical objects.
So, Platonism? There is a realm of commensurate entities to be encountered?
Nah. Encounter in the POMO sense you hate.
How is that defined then? Genuinely curious.
Long story short, a thing happens which resists interpretation (or how you use 'habit') so much that it makes a load of difficult problems for everyone involved. Usually these problems are circumvented or ignored and become either an irrelevance after the fact (like canonising) or a suppressed undercurrent (like the long lasting impacts of slavery and racism). Occasionally they're addressed powerfully and when this happens it transforms what people do in its wake; a lived 'solution' to the 'problem' of the encounter.
In academic discourse it works like a paradigm shift brought on by a problem, and the encounter as a stand-alone concept is a lot like a generalisation of a paradigm shift to other strata of phenomena. Like (inter)personal encounters of transformative love, politics or art.
So when it comes to the issue of an incommensurate world - as the thing-in-itself never fully grasped - we do have to approach it via commensurable acts of measurement. The world might be analog or continuous (as our best characterisation, as the noumenal escapes complete description), but our embodied modelling relation with it demands that we measure it in a manner that is digital or discrete.
That is, semiotically, we must form a mediating sign that allows us to most effectively relate to the thing - the world - we have every good reason to believe to be out there, existing in recalcitrant fashion as that which waits to be encountered.
So the world is an unbroken wholeness. And modelling relies on fragmenting that in a useful way so that its wholeness becomes a tractable construction of parts. Paradigms are where we have achieved an acceptable level of correspondence for the purpose in mind. A lot of important problems are being handled adequately. A lot of unimportant ones make no important difference, so can be swept under the carpet in respectable fashion.
There is no particular "disaster" about this. It is business as usual. That is how a modelling relation has to work for good reason.
So infinity, as a concept, stands for the principle of the open or the unlimited, the unbounded, the endless. And that contrasts with what the modelling mind seeks - closure. A digitised tale composed of signs, or acts of measurement, which has the formal property of being closed for efficient cause.
The thing-in-itself is still beyond any such actual disclosure to the modeller. But the modelling itself sets up this closed vs open ontology. It produces the concept of the infinite, the perfectly open, that must logically be a corollary of its own hunt for this perfect closure.
Thus I would agree the infinite doesn't exist in some Platonia of abstract objects. It very much arises as an opposition within modelling itself. We encounter it as we probe the nature of what we like to call the finite. The usual dialectical deal.
And here is where I would point out the advantage of going a step further - like Peirce.
The infinite and the finite, the open and the closed, are very crisp or determinate conceptions. And recognising that characteristic should draw attention to their own corollary - the vagueness that would need to stand opposed to this canonised crispness.
Vagueness becomes another useful mathematical/logical resource that can be brought into play with our encounters with number lines and other habitual signs of the infinite.
That's probably the core issue, the relationship between the continuous and discontinuous.
Quoting apokrisis
But the problem is that they are both real, just like the rational and irrational numbers are both real. Hence the reality of incommensurability. The difficulty is in determining which aspects of reality are continuous and which are discrete, because to treat one as if it were the other is to err.
The pragmatic modelling relation approach says this is so because we accept eventually that differences cease to make a difference. The way we have everything set up means that we will reach a point where there is just no conceivable reason to care. The differences that speak to an incommensurability will cease to be measurable. They will be infinitesimal - the formal reciprocal of our notion of the infinite. And so they will themselves have become a symmetry, a blur of continuity, and no longer discrete countable entities. Modelling arrives at conceptions that are self-truncating - truncated by a formalised principle of indifference that can achieve epistemic closure for a modeller.
Yeah. But I am arguing that both are practical conceptions. When we speak of them, we are only pointing to the fact that reality must exist between these two reciprocally-defined extremes. Both represent the measurable limits to existence. And so existence itself has to be the bit that stands in-between.
That is why every actual thing we encounter in the real world is never quite perfect like the model would suggest. The continuous things are still always a little bit discrete. And the discrete things are always a little bit continuous. And indeed most things will be far more obviously a mixture of the two possibilities. They will not be clearly divided in either direction.
This is easy to see if we look at any actual natural feature - the outcome of a dissipative process - like rivers, mountain ranges, coastlines, clouds. They express a fractal balance that puts them somewhere exactly between the discrete and continuous - in a way we can now also measure in terms of fractal dimension, or the notion of scale symmetry.
So you are taking the view that the world actually exists as either continuous or discrete in some black and white, LEM-obeying, PNC-supporting, fashion.
I am saying, sure, that is a useful basic epistemic model to apply when measuring the world. Acts of measurement depend on having that commensurate yardstick. And the way we achieve formal closure to construct a "world of measurement" is by applying that dichotomising logic. We speak of the two extremes which mutually, reciprocally, ground each other as conceptions.
But then the idea of the discrete~continuous remains just a pragmatic conception - an idea robust enough to launch useful acts of measurement. And as our modelling of reality has progressed, we have arrived at "surprises" like fractal dimensionality and other non-linear maths. The discrete and the continuous can have some exact balance which itself becomes a useful metric. We can apply them to systems that energetically grow in that kind of endless budding fashion of natural dissipative systems.
Clouds look pretty vague. Where do they really stop or start? But fractal descriptions can employ the discrete and the continuous as themselves a ratio - a log/log formula of endless, but also recursive, growth. The open and the closed in the one trajectory.
So modelling can play any game it can invent. And some of those games are surprisingly effective - as if we are actually encountering reality in a totalising fashion at last.
This is all great (sorry for the late response - been busy!). Actually a lot of it reminds me of - and helps me put into starker relief than I was previously able - one of Rosen's other papers (found here [PDF]) on how it's often forgotten that a great deal of math is actually modelling of other bits of math itself ("a great deal of what passes for pure mathematics is really applied mathematics; it is essentially modelling in the above sense, except that the external referents assigned to a particular formalism are themselves mathematical in character").
And this allows me to maybe start exploring one of the things I've been getting out of my recent (and very preliminary) engagements with math, which is that while there does seem to be something to the idea that the world exhibits a certain 'mathematicality', it seems far more accurate to say instead that mathematics exhibits a certain worldliness. That is, that there is an immanent 'logic' that math exhibits that is exactly parallel with the logic of, well, anything else. So it's not that 'everything is number' - as per the Pythagoreans - but that number 'partakes' (to use an uncomfortable Platonic trope) of the same logic that everything 'non-numerical' does (a 'flat' ontology which does not privilege number but places it on the 'same plane' as everything else).
Deleuze in fact develops something like this in D&R, where, after taking the calculus as a model for his understanding of what he calls 'Ideas', he tries to address the seeming privilege he accords to math and insists that it's not that he's 'applying' math to other domains, but rather that each domain (he lists: 'physical, biological, psychical or sociological') has 'a calculus' of its own. Borrowing from the vocabulary and ideas of the mathematician Albert Lautmann and referring to 'a dialectic' in place of what I called a 'logic' above, he writes:
"It must be said that there are mathematical, physical, biological, psychical and sociological problems, even though every problem is dialectical by nature and there are no non-dialectical problems. ... the differential calculus belongs entirely to mathematics, even at the very moment when it finds its sense in the revelation of a dialectic which points beyond mathematics. ... It is not mathematics which is applied to other domains but the dialectic which establishes for its problems, by virtue of their order and their conditions, the direct differential calculus corresponding or appropriate to the domain under consideration. In this sense there is a mathesis universalis corresponding to the universality of the dialectic."
So I wanna say that there's something right about the Pythagorean intuition that mathematics seems to structure the world, but to reply that it's not that that structure is mathematical, but that mathematics expresses, in its own way, that structure (or 'dialectic' or 'Logos' - 'wild Logos' as Merleau-Ponty once said).
So you are arguing that neither the continuous nor the discrete is real? They are ideals, and reality stands in between.
But then what is reality if it is neither discrete nor continuous, but something in between? What kind of consistency would be neither continuous nor discrete, but something in between? How would you describe this reality which is neither discrete nor continuous? Doesn't it make more sense to assume that reality is a mixture of discrete aspects and continuous aspects, as I suggested, rather than that neither of these is real, but only ideal? Furthermore, how would you account for the existence of these ideals? Are they not in some way real? But you deny that the two can co-exist (which is what they do as ideals, defining each other) by stating that all existence is in between these two.
Quoting apokrisis
Aren't you just describing a mixture here? The real "things" consist of both elements, continuous and discrete. We model as one or the other, so the model doesn't quite capture the reality of the thing. It's illogical to say that the thing is neither continuous nor discrete, but in between, denying the law of excluded middle, but it does make logical sense to say that the thing consists of a mixture of both elements, and the model hasn't properly isolated the two.
Quoting apokrisis
No, I don't see this at all. The natural feature is not somewhere between continuous and discrete, it is a combination of both. I think that your interpretation of non-linear systems is deceiving you. The mathematics unites distinct variables as if they are one thing. But this is just the model which represents the distinct things as one unified thing, it is not the reality of the thing. That's why such models are extremely flexible and highly unreliable, they do not even attempt to separate the distinct elements, treating the combination of elements as one thing. This unity is completely artificial though, created by the model.
Quoting apokrisis
No again, you have misinterpreted me. I did not say that the world is either continuous or discrete, I said that it is a combination of the two. And I also said that the difficulty in modeling is to distinguish which elements of reality are of each nature. I think that there is a trend in modern scientific analysis to ignore this differential, but this renders the analysis incomplete.
Consider the concept of "space-time" for example. The concepts of space and time have been united in synthesis to create one concept. Many human beings will insist that this concept cannot be divided in analysis, that space-time is one indivisible thing. But this is to completely ignore the possibility that in reality, one of these may be discrete while the other is continuous.
Quoting apokrisis
Yes, that's exactly the problem, modelling can play whatever game it wants, in ignorance of reality. But it is not the case that some non-linear models are surprisingly effective. Some are effective in particular situations. But all are surprisingly ineffective in some situations, and that betrays the failings of such an approach to modelling.
Well remember that here I’m using the conventional categories of Being rather than Becoming. So the discrete vs the continuous is talk about that which exists in static eternal fashion. This then creates the tension that bothers you - how can limits be part of what they bound if they are in fact the precise place where that internal bit ends and the external begins?
In my own preferred process metaphysics, the discrete and the continuous become a dialectics of developmental actions. So now you have the rather Hegelian opposition of differentiation and integration. Or individuation and generalisation.
And that active view, one that sees reality as fundamentally a flux with emergent regulation, would avoid the kind of hard edge paradox that your own non-process metaphysics tends to encounter at every turn.
Heh, heh. After the post-structuralist revolt comes ontic structuralism again.
What bothers me, is that through your process philosophy, you have assigned to the limits (discrete and continuous) the status of not real, non-existent. But then you go ahead and talk about these limits as if they are somehow part of reality. You describe reality as being somehow forced to exist within these limits, yet the limits are said to be non-existent, not real.
If these limits are not real, then there are no such limits to reality, and the entire existing universe is not limited by any such ideal extremes like discrete and continuous, whatsoever. To talk about these limits within reality is just fiction, these ideals are simply within the mind, pure imagination. Though we might use them in application, in modelling, they represent nothing in reality. They cannot if you uphold the status you assign as not real.
However, you talk about these limits as if they are real, as if they are "part of what they bound". Which do you think is true? Are they "part of what they bound", therefore having real existence, such that we have to allow for the two distinct aspects of reality, the discrete and the continuous, co-existing in a mixture as I described, or are they not real, non-existent fictions of the imagination, leaving the world with no real limits in that way?
Quoting apokrisis
The paradoxes are encountered in the deficiencies of metaphysical principles such as your own. You readily avoid the paradoxes by simply ignoring them.
But in the process view, how would the contents be more real than their container?
So you are trying to impose your own non-process view on an understanding of process philosophy. And yes I agree, it doesn’t work. But that is now your problem.
Quoting Metaphysician Undercover
The paradoxes are a product of your metaphysics. So I can simply ignore them.
I'm not saying "more real", I'm saying "both real". The problem is that you talk about the contents and the container as if they are separate things, but then you reduce them in principle to one and the same thing. If the two separate things are in reality incommensurable, then the result is paradox.
Quoting apokrisis
No, I'm not trying to impose any particular view, I'm just trying to understand your view. When you mention two distinct things "contents" and "container", then talk about them as if they are really one and the same thing, I want to see the principle whereby you unite them as one and the same. Then I can judge this principle. If you have no such principle then you are just talking contradictory nonsense.
This is what I have gleaned so far, tell me if I have anything wrong. You have first stated that the "container", being the limits such as the discrete and the continuous, is not real. These limits are just ideals we have, by which we model things. So when I asked you about "real limits", the "real container", you implied that the contents are somehow also the container. Is this what you are saying, that the contents are self-contained?
If this is what you are claiming, then the contents must have inherent within them, each of the two limits, the discrete and the continuous. And, each of these two limits must be equally real, but fundamentally different in order that there may be real separation between them for the contents to possess real activity. Now we need to account for the real existence of the discrete and the continuous (the container), and the separation between them, within the contents. Do you agree?
You have inverted the perspective such that the container is inherent within the contents. But this does not negate the need to determine the two real, and distinct, limits, the discrete and the continuous, and the real separation between them, which exists within the contents.
Nope. That is how you are talking about them. The way I would talk about them is relative to each other. So there would be a contents to the degree there is a container, and vice versa.
OK, so I had it all wrong, let me try again then. We were talking about the discrete and the continuous. These two make up the limits, so this is what comprises your proposed "container", the discrete and the continuous. Moving along, you assume "contents" as well as the container, something which is contained by the discrete and the continuous.
Now, the real existence of contents and container are spoken of in terms of "degree". The container (the discrete and the continuous) is real to the "degree" that the contents are real. Can you help me to understand this concept of existence, or reality, by degree? Let's assume that there is something with real existence. Could you say that this thing is 50% contents, and 50% container (discrete and continuous), making it 100% real or existent? Could a thing have 80% real existence, being 40% contents and 40% container?
Are you saying that the content is always equivalent to container? There is always the same amount of contents as there is container? But if the container is both discrete and continuous, as the limits have two extremes, does this mean that the thing is 50% contents, 25% discrete, and 25% continuous. Is it possible that a thing could be 40% contents, 20% discrete, and 20% continuous, making it only 80% existent?
Could you explain to me exactly what you mean by "there would be a contents to the degree there is a container, and vice versa"? As you can see, I'm not quite making sense of this.
I chose to talk about the same general distinction in another way so as to broaden the view you were taking. So try to understand it that way rather than setting things up for further confusion.
To talk of contents and container is to talk about the systems view of physicalism. Everything that exists is the product of the process that is the formation of a global structure of constraints, a state of systemhood, which results - matchingly - in some locally emergent degrees of freedom.
So the container is emergent - some set of global boundary conditions or "habits" which stand for the system's defining final/formal cause. And the contents are emergent too - as the now definite degrees of freedom that stand for the system's material/effective cause.
Unpredictability regarding the contents has been stripped away by the nature of the container, resulting in a set of contents that is now definite to the degree that its material possibilities have been sharply restricted.
The discrete and the continuous do map to this view. Continuity becomes the global container - the constraints. And discreteness describes the now locally countable, because crisply individuated, degrees of freedom that are being "held" within the container.
Quoting Metaphysician Undercover
Yep. Degree of development. In the beginning, when everything is just vague, containers and contents would be hard to tell apart. A clear difference is what then emerges.
Quoting Metaphysician Undercover
Again, I probably made a mistake in giving you such a concrete image to latch onto. I aimed to give you a stepping stone out of your worldview. You are now using it to avoid actually having to step out of that worldview.
Think of a cloud. As an object, it only has a vague boundary and so only vague contents. It is kind of contained, and kind of substantial. Fly into one and it goes all misty, damp, cold. But neither its form nor its material is particularly definite - certainly relative to our usual notion of a substantial object.
Or to give another example where the active nature of containment might be clearer, think of a tornado. It is a vortex that entrains all its contents with a direction. Stuff gets sucked into its shape. It becomes composed of a spinning air mass, plus anything else light enough to be swept along.
But it is hard to put a finger on a sharp boundary to that vortex. It is a container, a constrainer, with a vague outline. And its contents are also in a vague state. There is a general sort of directionality to all the parts, but also a lot of individual chaos still.
OK, but our subject is the question of the existence, or non existence of the discrete and the continuous. Unless we relate this to the talk of container/contents, it is not an analogy (talking about the same thing in a different way), it is changing the subject. That is why I am confused, because I do not see this relationship and it appears like you are changing the subject.
Quoting apokrisis
But that is completely different from what you said concerning the discrete and the continuous already:
Quoting apokrisis
Clearly you are saying here, that "discrete" and "continuous" refer to two "reciprocally defined extremes", and that they are "limits to existence". But now, when you apply the container/contents analogy, continuity is represented by "the container" and is called "the constraints", which represents the limits, and "contents" represents the discrete.
So I assume that you have negated the need for "reciprocally-defined extremes". In the first post, these two opposing extremes appeared to form the limits, the constraints. But now the "limits" or "constraints" do not consist of both these two, the constraints are only one of these, "continuity".
The "discrete" now, under this metaphysics which you are proposing is not reciprocally defined by "continuous". It is not defined as the opposite of continuous. The discrete is the contents, whatever it is which is limited by the container, while the container is the continuous.
Since "the discrete" and "the continuous" are no longer represented as reciprocally defined extremes, I have two questions concerning the nature of these two.
First, how is it possible for "the continuous" to limit or constrain anything? If the limits, or constraints, do not consist of opposing extremes, but continuity instead, how is it possible for constraint to actually occur? Let's say, for instance, that continuity is like the infinite. How can the infinite actually constrain anything? "Infinite" means the exact opposite: unconstrained. Are you saying that the continuous, which is unconstrained, without boundaries, can actually act as a constraint?
The second question is what type of existence does the discrete have now? Let's assume that the nature of the discrete is to be bounded, constrained. So we assume a boundary, and this boundary is as you say continuous, so it must be like a circle, to provide that continuity, and also be a boundary. What lies within that boundary is the question. We cannot refer to the circle itself as the discrete unit, because you have negated "reciprocally-defined". You have posited something within the boundary, something which is constrained within which is distinct from the boundary. What do you think is the nature of this constrained thing? "Locally countable", "crisply individuated", and such terms, refer to what the container does, individuating the contents, by bounding it. But we have to assume that there is something distinct from the boundary, which is bounded or else there is no difference between discrete and continuous.
That might be your subject. And the only way you understand any subject.
Quoting Metaphysician Undercover
What's so difficult? Being reciprocal is why the discrete and the continuous would map naturally to a hierarchical story of the smallest vs the largest. That is the nature of the relation being described. The bigger one gets, the smaller the other gets.
A point can't contain the line, but it can compose the line in being its contents. And likewise, a line can't be the contents of a point, but it can certainly contain points.
Quoting Metaphysician Undercover
I thought it meant the space within which every possible number exists in bounded fashion.
Quoting Metaphysician Undercover
It is a limit on any continuity - the least amount of continuity imaginable. Just as continuity is whatever is the least unbroken state of affairs that you can imagine.
So to the degree you can define the one, you can define the other.
You are simply showing that the two can't in fact be disentangled with arbitrary completeness. Just as my developmental approach concludes.
As I say, your non-process view of metaphysics keeps crashing into paradoxes because it believes in ontological absolutes rather than a logic of relations. You keep demanding to be shown something fixed and concrete that answers to your mechanistic conviction that reality has to begin in counterfactual definiteness, rather than definiteness being a relative outcome.
Hey, you brought it up, not I. It is your subject, look:
Quoting apokrisis
Remember that post? It's not my fault that when I tried to engage you on this subject, you simply tried to change the subject. You brought it up, and made many very bold sounding assertions, but instead of backing up these assertions, you changed the subject, and claim that it's my subject.
Quoting apokrisis
But when things are related, and one is designated as the largest, and another is designated as the smallest, it is not the case that the largest contains the smallest. They are considered, and compared as separate entities, or else this relation could not be established. So if you begin by comparing the continuous to the discrete, asking which one is real, as you did, then you switch to saying that one contains the other, then you have changed the subject. You do not mean the same thing by "continuous" and "discrete" that you meant in the first place. Do you recognize that this is equivocation?
Quoting apokrisis
This is nonsense, you cannot conceive of a number existing in space. This would require either a very odd definition of number, or a very odd definition of space, or both. The nearest thing would be to draw a number line, but that would be a representation, just like a numeral is a representation. You might conceive of a number line in the way you describe, but there is no "space" in this conception because it doesn't have the required dimensionality to qualify as "space". If you remove all requirements for dimensionality in a concept of "space", without replacing those requirements with other requirements, then "space" could refer to absolutely anything. And so, what you have said here is nonsense.
Quoting apokrisis
Wait a minute. You said that the continuous is like the container, it constrains, or restricts, limits the discrete. Now you are saying that the discrete limits the continuous. So now the discrete is the container, and the continuous is the contents. It appears like in reality you really have no way to differentiate the contents from the container. They both coexist and there is no way of saying that one is the contents and the other the container because each, the continuous and the discrete, seem to have features of container as well as features of contents.
This is exactly the point I was criticizing you on, which I was hoping that you could demonstrate a way of avoiding. I was saying that it appears like your metaphysics claims that the two, the discrete and the continuous, are inherently combined such that there is no possible way to separate them, and all of reality is just an indiscernible mixture of discrete and continuous, or container and contents.
Can you conceive of a way in which the discrete and continuous, as in the analogy of the container and the contents, can be differentiated from each other? If not, then there really is no container/content, or discrete/continuous, and all this talk is meaningless at best, or even deceptive or misleading.
Quoting apokrisis
A logic of relations requires that there are things which are being related. If there is nothing which is being related, then any described relations are meaningless. "Smaller than" has no real meaning without something to refer to: "smaller than X". Described relations cannot on their own produce, or lead to, definiteness as an outcome, because it is necessary that there is something substantial, definite, to begin with in order to produce definiteness at the conclusion. That's simply the way logic works: the conclusion cannot contain more "definiteness" than the premises.
Err, reality as a process.
Quoting Metaphysician Undercover
I just tried to prevent you going down your same old rabbit hole of non-process assumptions.
Quoting Metaphysician Undercover
Huh. The relationship is precisely what is established by the discrete being part to the whole that is continuity. The relationship is that of the downward acting constraints to the upward constructing elements or individuated degrees of freedom.
Quoting Metaphysician Undercover
You mean like a representation of a .... continuous, just waiting to be broken, space?
Quoting Metaphysician Undercover
You are getting it ... by trying so hard to get it wrong! Spectacular. My job is done.
So I was right then. You talk about the discrete and the continuous as if there is some real difference between them. But when you describe the way that existence really is, you claim that there is no way of distinguishing between them within real existing things. Everything is a mixture of the two, and with respect to which features are discrete or continuous, which are the contents or the container, you cannot produce any principles for identification, because the two are fundamentally inseparable, and therefore cannot be identified individually. Sounds like the recipe for Zeno's paradoxes.
Correct.
Quoting Metaphysician Undercover
What are you talking about? This is modelling. So to the extent that we know the thing-in-itself, the dichotomy of the discrete and the continuous is the conceptual division that would describe a separation of the real - whatever that is noumenally speaking - towards its "real" phenomenological limits.
Thus if we are talking about our ontic commitments, then containers and contents are both equally "real" in that modelling sense. Likewise our notions of the continuous and discrete as the limits on possible existence.
This stands in contrast to more reductionist or monistic schemes that would want to make one or the other the "real". Or indeed, dualistic schemes that take a substantial rather than a process view of dichotomies.
So your problem is that you conflate the phenomenal and the noumenal in this discussion. It is one of the ways you keep tangling your feet.
Quoting Metaphysician Undercover
Back to front. The two are fundamentally separable because they can be individuated in terms of a reciprocal relation to each other. And by that same token, the two are fundamentally connected by being the two poles of that reciprocal relation.
OK, let's start from the beginning again, and see if I can understand what you're laying down. You have given me two distinct dichotomies, the dichotomy of discrete and continuous, and the dichotomy of the model and the reality which is modelled. The question concerns the reality of the discrete and the continuous. How do the two dichotomies relate to each other?
So here, we're back to saying that the discrete and continuous are only part of the model. However, you are saying that this model, which utilizes "discrete" and "continuous", describes a real separation within the real thing which is modeled. Am I correct in this description? If we use the discrete/continuous model, our description of reality implies a real separation between the distinct parts of reality.
Quoting apokrisis
This is where you lose me. Within the model, the discrete and continuous are completely distinct. They are defined as incompatible, one cannot partake of the other. They are defined in such a way that the one excludes the other in opposition. So if we switch now to a container/contents model, we must either maintain this principle within the container/contents model, or else we are switching to a model which is different from the discrete/continuous model.
Now, the discrete/continuous model employs two distinct elements which are mutually exclusive, by definition, therefore there must be real separation between these two elements within the reality modeled, for the model to be accurate. The question is: is reality like this, consisting of two distinct elements which are necessarily separate by way of opposition, or is the model flawed? If I understand you correctly, you are saying that this model is flawed: reality should not be modelled by two opposing terms each of which excludes the other, by definition, such that there is a real separation between these two elements. You are saying that reality ought to be modelled more like container/contents, where there is no real separation between the two defining elements of reality.
So here is the problem I have, which I've been trying to relate to you. If we model reality in this way that you are proposing, how would we distinguish between, and identify, the two defining elements, the container and the contents, within the thing which is being modelled? If we cannot have principles of identity whereby the identification of one would exclude the possibility of the other, such as in the discrete/continuous model, then how would we ever know whether we've identified constituents of the contents, or constituents of the container? Do you see what I mean? Why would we model reality as container/contents if we cannot produce principles whereby the container can be isolated from the contents? If we produce such principles of exclusion, we just go back to the two distinct, mutually exclusive parts of reality, like the discrete and the continuous. If your argument is that reality just isn't like this, there is no separation of parts in this way, then why even apply a dichotomy within the model at all? If there is no such separation in reality, then to model it with any type of separation, like container/contents, presents us with a faulty model. Either the dichotomy is real, and we provide for real distinct parts in the model, or we completely dispense with the dichotomy in the model.
Yes. Let's see if you can just remember the definition of a dichotomy as that which is "mutually exclusive and jointly exhaustive". So there is a process of separation towards reciprocally-matched limits. Two contrasting limits on "the real" emerge into view according to the distance each can put between itself and its "other".
Quoting Metaphysician Undercover
Don't forget that they are also jointly exhaustive. So you have to have these two (the assertion about the mutuality of a pairing). And also only these two (the assertion about the exhaustion of any further possibilities).
You make the right noises about dichotomies only then to collapse everything back to your happy simplicities of pairs of terms that are then neither mutual nor exhaustive anymore so far as you are concerned.
I know it is a little bit complicated. But it ain't that complicated.
Quoting Metaphysician Undercover
This is some other confection of misunderstanding you are attempting to concoct as a distraction.
Keep starting from the beginning until you accept how a dichotomy actually works - mutually exclusive and jointly exhaustive. Dwell on that truth deeply. Really soak up the meaning in a way you can't forget or deny. Then maybe you will have the logical wherewithal to take a next step.
Do you see that this process which you describe is not suited to a dichotomy as you define it? The process might be a movement toward a dichotomy, or away from a dichotomy, as defined, but if there is such a dichotomy there is no room for process without breaking the terms of "dichotomy". Also, the dichotomy cannot describe the limits on the real because this would not be jointly exhaustive. Anything within the limits, being not the limits themselves, which is the entirety of "the real", would be excluded from the dichotomy under the designation of "jointly exhaustive". That's what I tried to explain earlier. If you place limits on the real, then the limits are not part of the real. And according to your description, it is impossible that "the real" partakes in any dichotomy, limits or otherwise, because everything which is real is not mutually exclusive of everything else that is real, each being real. For these reasons there is no room for a dichotomy in your process philosophy. The dichotomy cannot be real, and must be disposed of, dismissed as a false premise.
Quoting apokrisis
That's because I normally use "dichotomy" in the more general and common way. In the most general sense it is a simple division into two, a separation. In a more strict sense, it is necessarily mutually exclusive, nothing can cross the line. This leads to the idea that things which are opposed to each other, like hot and cold, form a dichotomy. But notice how all things which are warm are excluded from that dichotomy. And in your definition, which is an even more strict, mathematical sense, "jointly exhaustive" is included. Do you see, as I explained above, that if we adhere to this very strict sense of "dichotomy", there is no room for any dichotomies in the reality described by your process philosophy? Dichotomies are fictions which ought to be dismissed as false premises.
Quoting apokrisis
Actually, the question is do you have the balls to take the next step which is dictated by the logical wherewithal. Dichotomies are incompatible with your process philosophy. One or the other has to go if you want metaphysics with consistency. Which will it be?
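The definition being argued over here - "mutually exclusive and jointly exhaustive" - is just the definition of a two-part partition of a set, and the hot/cold/warm worry can be checked mechanically (a minimal sketch of my own; the function name and toy temperature example are not from the thread):

```python
# A dichotomy of a set, in the strict sense debated above, is a pair of
# parts that are mutually exclusive AND jointly exhaustive of that set.
def is_dichotomy(universe, part_a, part_b):
    a, b = set(part_a), set(part_b)
    return a.isdisjoint(b) and (a | b) == set(universe)

temps = {"hot", "warm", "cold"}
print(is_dichotomy(temps, {"hot"}, {"cold"}))          # False: "warm" is left out
print(is_dichotomy(temps, {"hot"}, {"warm", "cold"}))  # True: hot vs not-hot
```

On this reading, hot/cold fails as a strict dichotomy over all temperatures, while hot/not-hot succeeds, since the "not" pole absorbs everything else on the spectrum.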
Together they exhaust other possible limitations to that aspect of reality.
And don't forget that what follows after a dichotomous separation or symmetry breaking is the arrival at the stable equilibrium of a triadic hierarchical state of order. You get an ending to the breaking when the two limits are in equilibrium with the contents they thus now contain.
Again, because you can't be bothered to study how all this works, you keep falling woefully short of any understanding. I have to keep explaining basic stuff again and again.
Quoting Metaphysician Undercover
Exactly. You think it is a simple division. And the process view says it is irreducibly complex. Things only reach stability once the separating into polar opposites has arrived at a hierarchical balance where there is also now a connecting spectrum of concrete possibility.
Quoting Metaphysician Undercover
Hardly. All things warm are now specified in concrete fashion because they are related to the extremes of a dichotomy. There is the hot in one direction, the cold in the other. So now the warm has its own definite and measurable location somewhere on the spectrum of possibility just established.
Quoting Metaphysician Undercover
What do you understand about process philosophy? A big fat zero so far.
But my point is that after any "symmetry breaking", there is no such thing as a dichotomy. "A dichotomy" may refer to a symmetry, but the symmetry has been broken. So if a dichotomy is a symmetry, then once the symmetry is broken, the dichotomy no longer exists. A "symmetry breaking" is not a "dichotomous separation"; that is a false representation. It might be possible to describe the symmetry which is broken as a dichotomy of possibilities, but the breaking of the symmetry negates this dichotomy.
If I am woefully short of understanding here, then maybe you can explain how I misunderstand this.
Quoting apokrisis
Let's maintain the model/real division. Remember, the separating of things into polar opposites is the model. Symmetry breaking is the real. A symmetry breaking is not a separation of things into polar opposites. A symmetry breaking cannot even be represented as a separating of things into polar opposites. These are completely different things, and it appears to me that you are somehow claiming "separating into polar opposites" (dichotomizing) is a model of symmetry breaking. But this appears to me to be completely false.
You might insist that you've explained this "basic stuff again and again", but all I've seen is that you use these terms in this way again and again, as if symmetry breaking were a form of dichotomizing, when in reality the two are completely incompatible.
Quoting apokrisis
The "extremes of a dichotomy" are the principles of the model. The warm things are real. You are speaking as if the model is right there within the real warm thing. In relation to the warm thing, hot is in one direction, and cold in the other direction. But these relations are the model, they are not a part of the warm thing.
Quoting apokrisis
That is a statement which reflects your capacity to explain. When you use terms in an idiosyncratic way, as you clearly do, then you must explain yourself, rather than repeating the same idiosyncratic phrases over and over again, if you want someone to understand what you are trying to say.