When Alan Turing and Ludwig Wittgenstein Discussed the Liar Paradox
Here.
So, could the liar paradox cause a bridge to collapse?
It seems not.
It is a mistake to think that because a line can be drawn between two points, the line is there even if undrawn. It is a mistake to think Chess was discovered.
And it seems Sokal needed to pretend that "metatheorems" are not part of the game of mathematics to protect mathematics proper from what he thought of as monsters. Seems overkill.
Playing with contradictions leads to the new game of paraconsistent logic.
Maths is made up.
Made up gives me the distinct feeling that we're in subjectivity territory, and when was the last time any two people agreed on that score? Math is, whatever it is, at least not all up here, in our heads.
This is an interesting article. I also followed some of the links. To be honest, it starts about level with the bottom of my nose and quickly goes over my head. It has always confused me when people talk about paradoxes as if they undermine the validity of mathematics. In particular, I've always found the reactions to Russell's paradox and Godel's theorem hard to understand. Godel's proof of his theorem has always seemed goofy to me. I don't understand how the claim that one odd, trivial contradiction proves that math is incoherent in any meaningful way makes sense.
I've been meaning to bring this up as a subject, but I'm not a good enough logician to even figure out a way to formulate the question. This article has been helpful, even only by showing me that I'm not alone in my skepticism.
Thanks for the link.
There's that subjective/objective confusion again. it's not all either subjective or objective, and never the twain. Is Chess "in your head"?
What means this :point: Quoting Banno? Try to think in terms of collective subjectivity, perhaps inter-subjectivity or mass hallucination if you like.
Is the mathematical universe like religion's heaven?
Quoting TheMadFool
Quoting TheMadFool
But if it is subjective, it's private, and hence not part of our conversation.
Further, what sense is one to make of inter-subjective? Something is both private and public? Both subjective and yet objective? Looks like a splint - an attempt to fix something that is broken.
No, if it's part of the conversation then it is not subjective.
Unless what is meant is something like that my preference for vanilla is subjective. In which case is the argument that maths is mere preference?
That is precisely what made up means.
Quoting Banno
I just had ice cream yesterday and it was delicious.
Quoting Banno
:grin:
Really? I don't think so.
I think you misread that. Sokal is only saying, what I thought was widely known, that the overwhelming majority of working mathematicians have nothing to do with foundations at all. It is functionally a sub-field of the discipline, just as much as complex analysis or differential topology.
I'll make a statement and then ask a question. J. R. R. Tolkien's universe is made up. You claim math too is made up. Is there a difference between Tolkien's world and the math world? The floor is yours.
On balance, I think the answer might be yes.
And it's yes in part because of Turing. Nowadays engineers will to some degree rely on software to design bridges. It is a fact that software complexity has created enormous challenges, and that it is not nearly so simple to verify correctness as one might wish. (In some fields like aircraft design there are strict, explicit standards for the provable correctness of programs, and still ... 737.)
I don't know enough about this stuff to point to examples, but Turing's general point that allowing contradictions can be dangerous is almost certainly correct, precisely because of the emergence of computers.
1. If you arrive at a contradiction the result would be inaction
2. If a bridge collapsed, an engineer is not wondering if the foundations of math are problematic, but whether the calculation was wrong, the bridge was not put together as per plan, or the materials used were inferior
As for the Liar paradox, “I am a Liar” has a clear use in ordinary circumstances of life. Take it out of that and put it in the philosophical world, and one gets deep into confusion.
I think it pretty clear that equating made up and subjective is a long stretch.
But moreover, it is this sort of contortion that leads me to ignore the subjective/objective distinction - it causes far more issues than it solves.
...seems to imply more; a difference in logical status.
I'm somewhat ill-disposed to Sokal, finding his criticism of philosophy a bit too clever. But that would be an interesting topic in itself.
What do you mean by "made up" then? As is obvious, our conversation has stalled, fallen at the first hurdle as it were.
Denying mathematics any form of realism implies that it's got a subjective side to it, a collective kind, i.e. it's a world that can be shared between individuals just like a work of fiction but lacking any form of real-world relevance. Is this true? I think not. You've flown, me too.
W is missing the point. A line is a distance. Two points apart entails a distance, therefore a line.
Yep.
Is the line something distinct from the two points? I think it must be, since it includes all the points in between. If it is, is it there even when undrawn?
And Chess was there to be discovered?
No.
Correct.
Drawing the line is just a physical representation of the line. And saying it this way is even incorrect. But you get my point.
Too bad. I was just getting into the groove but it looks like I misread the situation. Typical MadFoolery. Carry on.
A line is not usually defined as a distance, if it is defined at all: in some systems it is a primitive element, which is not defined, but merely constrained by the axioms of that system.
I think what you are getting at is that for us to be able to define or describe a line, the world must already be such as to allow for such an object. And if that is so, then all the elements and constraints that are needed to make up a line (points, distances, etc.) are already in place. So what is there left to be invented?
I think a Wittgensteinian answer would be to say that the world (or rather, the "world" of our thoughts and conversations), its objects, and the way we put them together to construct other objects are all part of a language game.
Pffft. I'd venture to say that most people believe that "made up" and "subjective" are one and the same thing.* There surely is some reason why people believe that. How can you so readily dismiss it with an idle hand gesture?
*Ever had a health problem and went to see a doctor? You list your symptoms, but the doctor doesn't believe you. He believes only those that he can see or otherwise assess by himself, with whatever resources he has available or willing to use. You could be having a bad headache for weeks, but if the doctor isn't willing to do any tests, isn't willing to send you to an MRI test -- then, for him, you don't have a headache. And since he's the one calling the shots, there's nothing you can do.
So he should believe the symptoms you make up.
Fine.
They don't.
When you are eventually rushed into the ER with meningoencephalitis, with fever and vomiting and so on, they ask you, "Why did you wait for so long? Why didn't you come in earlier?!"
The point being that in the real world the subjective and the invented are often equated. We cannot just dismiss this, thinking that a fancy philosophical explanation will save our day.
Yes. Hard to understand why anyone would dispute Turing's point. If one allows contradictions within mathematics, they will spread everywhere, in all mathematics. The idea that engineering calculations would somehow remain unaffected is like saying: the logical foundations of mathematics are purely decorative, pure aesthetics, they do not actually matter at all when doing actual mathematics. They can be self-contradictory all you like, just like a poem can.
As usual, Witty was only trying to sound witty, and as usual he tried a bit too hard.
There is middle ground here though. Foundations of mathematics is nearly a separate field of study, and unnecessary for the doing of mathematics. You can teach high school kids (and engineers) calculus without teaching them about Dedekind cuts and making a deep dive into the nature of continuity. (And it works the other way too: you might be thoroughly conversant with the independence proofs but pretty bad at solving systems of linear equations.) As @unenlightened says, theory follows practice, and in some ways this is true of mathematics as well. It's a bit confusing here because mathematics is also a theoretical subject, and when theorizing the practice already in place, mathematicians inevitably see opportunities to fiddle with things: if we need these nine axioms to ground what we've been doing, what happens if we drop number 3 and number 8? that sort of thing. The result of that sort of thing doesn't touch existing practice but does generate new, additional mathematics.
But it's worth remembering that engineers are not waiting to find out if the continuum hypothesis is true, and the vast majority of mathematicians aren't either. A lot of the basics of probability were well understood centuries before we got Kolmogorov's axioms.
Good foundations are not absolutely necessary to build a good house, but they make it easier.
When I was in first grade, my teacher was an old woman who all her life had taught arithmetic the traditional way, i.e. training kids to learn by heart and apply mechanically certain procedures to numbers, called addition, multiplication, etc. And then two years before retirement there was a pedagogic reform, and she was asked to teach "modern mathematics". What was meant by that, was mathematics based on clear axioms, derived from set theory. Us kids were supposed to learn the foundations of math (developed during the 20th century) first, in first and second grade, and only then derive applications such as addition or multiplication. This would give us a stronger background and better mathematical abilities. And it worked, at least for me. I learnt the conceptual basis for set theory and numeration, i.e. how to count in base 10 but also in base 2 or in any other base, when I was 7 years old. And I remained a math prodigy for all my school years.
Unlike Wittgenstein, Turing knew what he was talking about. He knew that founding math on sound, non-contradictory axiomatics has been the mathematical project of the century, a project on which thousands of mathematicians worldwide have worked very very hard. It was done out of a belief that such foundations were useful and important if mathematics were to be more than just a bag of tricks.
It's certainly common these days to treat set theory as fundamental, and for kids to learn naïve set theory, and I agree that's useful. But you didn't learn ZFC in elementary school and weren't taught anything about alternative axiomatizations or independence.
What were you taught then? A lot of the mathematics people learn in classrooms is definitions and techniques. This is what we mean when we say .., this is how it works, this is what you do. Questions about whether those definitions, which support those techniques, are "good" just don't arise. And that continues to be true for much of mathematics.
It's one of the curiosities of set theory that now and then people do worry about whether the axioms are "good", not just in having the usual mathematical virtues of being powerful enough to do the job but not more powerful than needed, but in the sense of "natural". The axioms are supposed to be like Euclid's old axioms, just spelling out our intuitions clearly. Of course there was a massive failure there relatively early on with the axiom of comprehension and Russell's paradox. We teach kids you can make a subset of "all the blue ones", but we don't tell them there are rough waters ahead if they think they can always do that sort of thing.
Maybe this is what I'm trying to say: children are not actually being taught foundations, and not even really being taught set theory as they are taught other mathematics --- definitions and techniques. What they're being taught is an application of something they already know, that things can be grouped together, and they can be grouped together according to rules. In order to apply this basic intuition, it gets tidied up and even formalized a bit (though not much at this stage). But the idea is that sets are not introduced the way, say, tangents are later: here's the definition, it's just a thing, and we promise it'll turn out to be interesting. They're expected to nearly understand sets already, but not to realize just how much they can do with them.
That last part -- what you can do with sets -- might turn out to be all of mathematics, but not in practice, not by a long shot. No one proves theorems starting from ZFC, and certainly no one does calculations that way. There's a sense in which the difference between calculus before the development of set theory and after is just a change in notation.
I guess the question that's left is something like this: does our ability to express all of mathematics in the notation of set theory mean that set theory is the foundation of mathematics? Both answers to that are tempting, but perhaps that's because it's a bad question. There is no single thing that is set theory, in that sense; there are various competing ways of axiomatizing our intuitions, all of which are adequate to doing mathematics (and you actually need less than ZFC I believe to do most math).
One last example: having later learned about cartesian coordinates, you can readily think of a line as a set of points defined by a linear equation, an infinite set. But that's not how you were taught what a line is; you were taught that it's "straight". When you learn that y = mx + b produces a line, that feels like a result, not a definition, because you already know what a line is, just as you already knew what sets are.
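The shift described here, from "straight" to "the set of points satisfying a linear equation", can be made literal in a few lines. A minimal sketch (the particular m and b are chosen arbitrarily, for illustration):

```python
# A "line" as the set of points satisfying y = m*x + b,
# sampled at integer x for illustration (m, b chosen arbitrarily).
m, b = 2, 1
line = {(x, m * x + b) for x in range(-3, 4)}

# "Straightness" falls out as a derived fact: each adjacent pair of
# sampled points has the same slope m.
pts = sorted(line)
slopes = {(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(pts, pts[1:])}
print(slopes)  # {2.0}
```

The point of the sketch is the direction of definition: the set is constructed from the equation, and straightness shows up as a theorem about it, not as the thing you started from.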
On the one hand, I think I agree with Turing about contradictions mattering, but on the other hand it does seem clear to me that practice and intuition is the foundation of theory not the other way around, and you don't really need the theory, even when it comes to mathematics, insofar as foundations counts as the theory, to practice. Which is not to say that it can't be helpful. Maybe it's just that mathematics makes it clear there are at least two approaches to theorizing: one to justify what you're already doing, but one that is expected to feed back into practice. A whole lot of mathematicians do the latter without ever bothering about the former, starting from when, as tots, they learn about sets for the latter reason much more than the former.
[quote=Remarkl]I think the liar paradox meets mathematics at division by zero. If "I am lying" creates a paradox, then what about (a^2-b^2)/(a-b)=a+b? That "sentence" is true except when a=b, in which case we are purporting to divide by zero, which we cannot do, because no such operation is defined in mathematics. Thus, where a=b, the purported division does not "fail" or "create a paradox." It is gibberish.[/quote]
"Do not attempt what cannot be done" is the civil engineer's mantra. Bridges that relied on division by zero might well fail.
Pythagoras (or Euclid?) 'made up' an explanation for why the Egyptians used a 3,4,5 triangle as a set square. But if geometry (the clue's in the name) hadn't already been part of creation, it wouldn't have worked and he wouldn't have had anything to invent.
I would suggest that mathematics is the study of possible worlds, and paradox is the study of impossible worlds such as those depicted by Escher. Beautifully precise drawings, and fascinating, but an engineer's joke. In this respect I am with W.; there is little danger of an engineer trying to build an Escher building, or dividing by zero.
{Possible worlds are possible structures, arrangements, orderings and disorderings processes, etc. Thus mathematics abstracts the structure from the substance of the world. Invented in the sense that there cannot be a structure of nothing; real in the sense that substance always has a structure. }
Nevertheless, allowing contradictions at the level of foundations would result in contradictions permeating the whole body of mathematics, and in the end, some calculation about some bridge may very well prove self-contradictory. So Turing was correct.
Here's another way of looking at it: we have always intended to do mathematics consistently, since long before the modern study of foundations. That's our practice. A theory of that practice is not supposed to disturb it by introducing inconsistency. But it does happen -- and mathematics is a prime example, but I think also music -- that the theory you come up with is somewhat more powerful than you need, so it supports some existing practices but also others. Now suppose some of the others it supports are not consistent with existing practices. (Not everyone wants to hear 12-tone compositions.) Are you forced to engage in these new practices because the theory authorizes them, or do you carry on doing what you were doing?
Both Turing and Wittgenstein understood that sufficiently complex formal systems (e.g. Peano arithmetic) cannot be known to be consistent a priori due to the halting problem, and that inconsistency in practice must be patched as and when problems arise in application, similar to legal precedent or a sport.
Wittgenstein argues, using the example of 20th century applications of logic and mathematics, that if logical paradoxes and incompleteness results of higher-order logic have no practical implications, then why should philosophers worry?
Turing's point should be understood in relation to artificial intelligence and automation; whilst it is true that logical paradoxes and inconsistencies aren't relevant to manual applications of mathematical modelling in bridge design, they are potentially relevant with regards to the automation of bridge design in which artificial intelligence reasons in higher-order mathematics.
So they should be regarded as being on the same side, considering the fact that both had no time for platonic superstition, and that both were making different points.
“Happily, in those days before tape recorders, some of Wittgenstein's disciples took verbatim notes, so we can catch a rare glimpse of two great minds addressing a central problem from opposite points of view: the problem of contradiction in a formal system. For Turing, the problem is a practical one: if you design a bridge using a system that contains a contradiction, "the bridge may fall down." For Wittgenstein, the problem was about the social context in which human beings can be said to "follow the rules" of a mathematical system. What Turing saw, and Wittgenstein did not, was the importance of the fact that a computer doesn't need to understand rules to follow them. Who "won"? Turing comes off as somewhat flatfooted and naive, but he left us the computer, while Wittgenstein left us...Wittgenstein.” From 1999 Time Magazine
Instantaneous velocity means what, precisely?
Do I have to be able to answer that question to build bridges?
Well, it is just amusing you picked calculus as the place for no contradictions. Perhaps I misread you. I was just teasingly (and ignorantly) pointing out why calculus was such a big sea change over what came prior and how refusing to allow "contradictions" probably delayed its arrival by a loooong time.
Random article, because why not?
Vickers, Peter (2007) Was the Early Calculus an Inconsistent Theory? [Preprint]
But then Dennett sees Turing as the practical one here. (And I think that's right. It reminds me of Anscombe's thing about Wittgenstein being a philosopher's philosopher, not an ordinary man's philosopher, or however she put it.)
Does any of this really address @Banno's bumper sticker claim that "maths is made up"?
Quoting Ennui Elucidator
A little? Maybe? Most people who use calculus everyday couldn't prove the mean value theorem from scratch. (I could have done it several decades ago on demand, but no longer.) You just don't need to understand the theoretical foundations of calculus to use it consistently. I expect we all agree on that. If you want to claim that calculus has no foundation, that it is contradictory, help yourself. I'm not (any longer) competent to rebut you, but I don't recall any of my professors saying, "By the way, this doesn't make any sense."
If you want to allow contradictions in mathematics, you need to include in your axiomatics the possibility that two mathematical statements contradicting one another can both be true nevertheless. This means inter alia that you cannot use reductio ad absurdum proofs anymore. Nor can you limit this to certain parts of math and not others. Every theorem would be both true and false. It would be the end of math.
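The "every theorem would be both true and false" worry is classical explosion: in two-valued logic a contradiction entails anything whatsoever, vacuously, because no valuation satisfies p ∧ ¬p. A brute-force check in Python (names are mine, for illustration):

```python
from itertools import product

def entails(premise, conclusion, n_vars=2):
    # Classical entailment by brute force over all truth valuations:
    # every valuation satisfying the premise must satisfy the conclusion.
    return all(conclusion(*v)
               for v in product([False, True], repeat=n_vars)
               if premise(*v))

contradiction = lambda p, q: p and not p   # p ∧ ¬p: satisfied by no valuation
arbitrary     = lambda p, q: q             # q: any claim at all

# Explosion: the contradiction entails q (vacuously), and ¬q as well.
print(entails(contradiction, arbitrary))           # True
print(entails(contradiction, lambda p, q: not q))  # True
```

The `all(...)` over an empty set of satisfying valuations is exactly the "vacuous" part: nothing satisfies the premise, so the entailment holds trivially, for every conclusion at once.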
I read your post yesterday and have been thinking about it since then. I am far from an expert on computer programming or mathematics, but it seems to me the kinds of contradictions described by Russell, Godel, and Turing don't have anything to do with the real world, computer generated or not. I guess that's what Wittgenstein was saying.
In my school days, I took a couple of courses in computer programming and did a little programming for an engineering project. That was in Fortran, which I guess tells you how long ago it was. Even with the simple programs I worked on, it was difficult keeping track of references and connections within and between algorithms. I find it hard to imagine how they do it with the incredibly complex programs that run the world now. There is so much complexity I find it hard to believe that a little meaningless self-reference of the kinds we are talking about will gum up the cogs in the machinery.
The more I think about it, the more I believe that the kinds of paradoxes we're talking about have no connection to anything outside our minds. It's another example of people mistaking words for reality, the map for the territory.
Mathematics is in our mind, and science and technology too.
I'm not sure what you're saying in relation to my post. Are you disagreeing?
Recall that Peano arithmetic might turn out to be inconsistent. In which case, an application of a deductive system based on such arithmetic might result in physically untrue predictions via explosion.
Wittgenstein made the point in Philosophical Remarks (IIRC), that whilst such inconsistencies would lead to physically untrue predictions if applied blindly, there is no reason why the occurrence of such events would discredit uses of the system for which inconsistency plays no role. And since it is impossible to predict the existence of mathematical inconsistency before it arises (due to the second incompleteness theorem), there is no reason to fret about the possibility a priori. We only need to patch our systems as we go.
Wittgenstein's remarks weren't targeted towards scientists or engineers, but towards philosophers who sought to establish epistemological foundations of mathematics.
Turing on the other hand was worried, due to his interests in artificial intelligence, where such systems might be applied blindly.
From my limited perspective, it seems like the kinds of inconsistencies we are talking about are trivial and, really, meaningless. As I noted previously, when I read the proof of Godel's theorem, I couldn't understand why mathematicians and logicians thought it was important. As far as I can tell, it may tell us something about the foundation of mathematics defined in a very rigid way, but it says nothing about anything that might apply in the world.
Please, convince me I'm wrong. I find it hard to believe that my thoughts would overturn the concerns of the greatest philosophers and mathematicians.
To me, that's like saying the sentence "This sentence is not true" may slip its leash, escape, and undermine the usefulness of the English language.
I know this isn't the main point of Turing and Wittgenstein's dialogue, but:
It seems strange to say that we made up numbers like e or π. We don't know what the 10000000000000 trillionth digit of e is, yet if we invented e shouldn't we know that?
How could we not know something about that which we made up? If I made up the tenets of a religion, I should know everything about those tenets I made up, right?
The epistemic closure of mathematics, or its inability to be used in practice, doesn't prohibit a computer from modelling a bridge, nor does it prohibit an engineer from taking redundancy measures to keep a boat afloat after hitting an iceberg, by compartmentalizing the ship.
And what Wittgenstein saw, and Turing and Dennett did not, was that the computer’s actions mean nothing without an interpreter.
Except that Wittgenstein rejected the idea that words represent reality and maps represent territories.
e and the like are less numbers than they are recursive processes. We made up the process. To be more precise, all our mathematics is parasitic on our notion of the object, which is why modern mathematics emerged in tandem with the modern scientific notion of the empirical object. Empirical objects (not just perfect circles) are subjective constructions, abstractions, idealizations. Such idealizations made mathematical calculation possible.
I think what Wittgenstein was saying is that the trivial inconsistencies associated with the paradoxes don't matter. Are meaningless. There's a good chance I'm wrong about that, but that's how I read the article @Banno linked to.
I don't see what this has to do with the law of the excluded middle.
Yes. Wittgenstein and I agree. Wittgenstein and I both think that mathematical inconsistencies are meaningless. I think. Maybe. I think that's what the article said.
If that's what LW was saying, i.e. that we can in practice tolerate a few inconsistencies here or there in math as long as we know how to deal with them in practice, without otherwise departing from 'either p or non p', then I can agree. That looks like a reasonable position to me. But it seems he was arguing for treating contradictions not as a problem but as some sort of creative source of inspiration. That's often a good idea in real life, in literature, in proverbs and in art, all contexts where the law of the excluded middle is at best an ideal, at worst a distraction. But I'm not sure it's a good idea in mathematics.
I don't read this in the only (?) direct quote provided in that article, which reads as follows:
(emphasis mine)
Also from the article (though not a direct quote):
The liar's paradox, like all logical paradoxes, has a simple non-paradoxical solution. It's only an apparent paradox. So of course it can't break bridges or lead to poorly conceived ones. But happily or even casually allowing contradictions in math is equivalent to dropping the law of the excluded middle from mathematical logic, with far reaching consequences.
Luckily it will never happen. Math is too serious a matter to be left to philosophers.
Isn't it? It's just the best way for humanity to create proofs. The best way to "verify" its reality. Well, not to say the only one which can be so accurate.
But other than that, I see no reason at all for maths to play any significant role in the universe itself.
Aliens might have created their own way ("Maths") which fits their senses better and can describe their reality better.
Maths is just an excellent necessary human invention.
Quoting Olivier5
I don’t think the solution you have in mind has anything to do with what Wittgenstein was trying to illustrate here. Quoting Olivier5
We don’t have to drop the law of the excluded middle, it deconstructs itself.
“In the decimal expansion of π either the group "7777" occurs, or it does not—there is no third possibility." That is to say: "God sees—but we don't know." But what does that mean?—We use a picture; the picture of a visible series which one person sees the whole of and another not. The law of excluded middle says here: It must either look like this, or like that. So it really—and this is a truism—says nothing at all, but gives us a picture. And the problem ought now to be: does reality accord with the picture or not? And this picture seems to determine what we have to do, what to look for, and how—but it does not do so, just because we do not know how it is to be applied. Here saying "There is no third possibility" or "But there can't be a third possibility!"—expresses our inability to turn our eyes away from this picture: a picture which looks as if it must already contain both the problem and its solution, while all the time we feel that it is not so.”
(Philosophical Investigations 352)
How is the halting problem relevant to the number of spoons on the table?
As if the fact that any consistent formal system within which a certain amount of elementary arithmetic can be carried out is incomplete implied that the number of spoons on the table is indeterminate.
Hence,
Quoting T Clark
and almost
Quoting Olivier5
but not
Quoting Olivier5
Ridiculous.
This from the article's author:
All the above means that if mathematics is a human invention, then any contradictions and paradoxes there are (within mathematics) must be down to… us. And if they’re down to us, then they aren’t telling us anything about the physical world (which includes Turing’s bridge — see later) or even about a platonic world of numbers — because such a thing doesn’t even exist.
And this:
Wittgenstein’s argument (at least as it can be seen) was that the Liar paradox does indeed lead to this bizarre conclusion because — in a strong sense — it was designed to do so. That is, it is part of a language-game which was specifically created to bring about a paradox. And because it’s a self-enclosed and artificial language-game, then Wittgenstein also asked “where will the harm come” from allowing such a contradiction or paradox?
And this:
Indeed many (pure) mathematicians have often noted the complete irrelevance of much of this paradoxical and foundational stuff to what they do. Thus if it’s irrelevant to many mathematicians, then surely it would be even more irrelevant to the designers who use mathematics in the design of their bridges.
Whether or not Wittgenstein means what I said he means, I think this shows that the author of the article thinks Wittgenstein means what I said he means.
This is a really interesting discussion.
Quoting Olivier5
Did you read the Wittgenstein quote? Do you understand what he’s trying to say?
I baulked at this the first time I read it, but on re-reading I gather the emphasis is on representing; as in words do not represent reality, but are the reality. The world is what is the case.
No models, nothing between "the cat is on the mat" being true and the cat's being on the mat.
Is that right?
He is saying that the LEM is not always useful. So? We should give it up just like that?
When you find out that your cellphone cannot mow the lawn, does your cellphone deconstruct itself?
But then we have to wade into the messiness of ‘use’. ‘The cat is on the mat’ is no less complicated than the sense of any particular word. How is it being used in a particular context? We’d have run through a potential infinity of such uses before we came upon that use in which the concept of ‘truth’ becomes relevant. But having done so, what can we conclude about the status of ‘truth’? Can we save some sense of it that doesn’t get sucked down into the relativity of use? Is ‘true’ just another thing we say in certain contexts for certain purposes?
Wittgenstein is literally asking why one should be afraid of contradictions in mathematics. What you or the author are saying, I don't know. I would answer that mathematics as we know it is built on the LEM, so the reason we should be afraid of contradictions in mathematics is to keep that body of work alive and well. Now if anyone wants to build a parallel form of mathematics where the LEM does not apply, be my guest.
Just don't build any bridge with it.
No, but my sense of its usefulness changes. Logical propositions have to do with things being the case or not. So they presuppose that the things we are passing judgement on just sit there being what they are independently of our judgements about them. This way of looking at logical propositions doesn’t recognize that before something can be the case or not, there has to be agreement on the sense of what it is to be a ‘case’. That is to say, words have an infinity of potential senses, and which sense is being generated is a function of the context of use. Logic can pretend that such constant subtle shifts in sense do not exist because they don’t often amount to enough of a disagreement to become noticeable. For most intents and purposes, we can assume that we are all on the same page when inquiring whether something is the case or not. But this is because the generality of logic was designed to mask these usually subtle interpersonal (and infra-personal) differences. The law of excluded middle is thus a kind of useful fiction.
So one might think, but because maths is made up we can develop little intricacies by thinking otherwise. Inconsistent mathematical systems are a thing. They are based on rethinking logical explosion rather than excluded middle.
Here's some inconsistent geometry:
I'm ok with that. As I said before, this is not my area of expertise. It feels good that Wittgenstein agrees with me, even if Turing and you do not.
contradiction in Wittgenstein’s sense after all.
But what happens when you take the images seriously? see Chris Mortensen.
eg: Inconsistent Mathematics, Reutersvärd, And Buddhism: An Interview With Chris Mortensen
or
Review of 'Inconsistent Geometry', by Chris Mortensen
More Australians mucking stuff up.
Ah, thanks. :up:
I'm interested in seeing more in detail what Turing said, not so much Wittgenstein himself. Mathematics, much less foundations of mathematics, is beyond me.
But some paradoxes are interesting. It may be due to mathematical considerations, or linguistic ambiguity or lack of comprehension, so looking at Turing's reply might be instructive.
So the point in what you quote, as I take it, would be to avoid building bridges that fall. A system with a paradox is a kind of faulty logic system.
But there are many reasons why they could fall, not limited to paradox. I mean, why do paradoxes arise at all? :chin:
Moreover, whether or not any given result in foundations has immediate practical consequences, the concepts and methods of foundations provide the context for the study of the branches of mathematics that do have great practical consequences. The field of computability is built from symbolic logic and mathematical logic. The invention of modern computing itself was born in this context.
Moreover, one may wish to evaluate for oneself whether certain results in undecidability bear upon certain matters in combinatorics and cryptography. And, without doubt, one of the most pressing practical questions in mathematics is the P vs NP question, of which a resolution would have enormous practical import. There is currently a million dollar prize for a solution, which might be the largest offered prize now in mathematics. A solution is sought not only to satisfy theoretical curiosity but also for the great economic impact of the question. Also, for example, Turing's theorem on the unsolvability of the halting problem is often said to entail that there can be no universal program debugger, so researchers don't have to waste efforts looking for one, but instead various other concepts and approaches are devised.
/
It was posted: "[...] people talk about paradoxes as if they undermine the validity of mathematics."
Certain paradoxes, if allowed as formal contradictions, vitiate the systems in which the contradictions are derivable. The common response in foundations is to provide systems in which the contradictions are not derivable (or at least are not shown to be derivable).
It was posted: "Godel's proof of his theorem has always seemed goofy to me. I don't understand how the claim that one odd, trivial contradiction proves that math is incoherent in any meaningful way makes sense."
I don't know what is meant there by "goofy". And Godel's proofs are not proofs of contradictions. And I don't know any informed writer on mathematics who has claimed that Godel's work makes mathematics incoherent.
It was posted: "[...] allowing contradictions in math is equivalent to dropping the law of the excluded middle"
No it is not.
Retracting excluded middle wouldn't allow contradictions. The logic is monotonic: we don't get additional theorems from subsets of a consistent axiom set. To have contradictions, we have to add axioms that prove contradictions. (To prevent contradictions from entailing all statements, normally a paraconsistent system is used that retracts the law of explosion.)
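The monotonicity point can be made concrete with a toy sketch (my own illustration, not from the thread): a hypothetical mini-system of atomic sentences and implication rules, closed under modus ponens by a fixpoint computation. Dropping an axiom from the set can only shrink the set of derivable theorems, never add new ones.

```python
# Toy illustration of monotonicity: the theorems derivable from a
# subset of the axioms are a subset of the theorems derivable from
# the full axiom set. (Hypothetical mini-system: atoms plus
# implication rules, closed under modus ponens.)

def closure(axioms, rules):
    """Deductive closure of `axioms` under rules of form (premise, conclusion)."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in theorems and conclusion not in theorems:
                theorems.add(conclusion)
                changed = True
    return theorems

rules = [("P", "Q"), ("Q", "R"), ("R", "S")]
full = closure({"P", "LEM"}, rules)   # with an extra axiom, here dubbed "LEM"
reduced = closure({"P"}, rules)       # same system minus that axiom

# Removing the axiom lost nothing except what depended on it:
assert reduced <= full
```

The names "P", "Q", "LEM" are placeholders; the point is only that the fixpoint over fewer axioms is contained in the fixpoint over more, which is the monotonicity property appealed to above.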
This whole discussion started from the question of whether the liar's paradox has any implications for the design of bridges, i.e. whether the paradox undermines the basic aspects of using math to solve problems. Thoughts?
And
“But I will talk about the word “foundation” in the phrase “the foundation of mathematics”. This is a most important word and will be one of the chief words we will deal with.” From Wittgenstein’s Lectures on the Foundations of Mathematics (LFM)
Keep in mind that this is what Wittgenstein is trying to show - that philosophers of mathematics are creating bewilderment because these words are being pulled from their typical surroundings.
W: “By “seeing the contradiction” do you mean “seeing that the two ways of multiplying lead to different results”?”
T: “Yes”
W: “The trouble with this example is that there is no contradiction in it at all. If you have two different ways of multiplying, why call them both multiplying? Why not call one multiplying and the other dividing, or one multiplying A and the other multiplying B, or any damn thing? It is simply that you have two different kinds of calculation and you have not noticed that they give different results.” LW (LFM)
I'm not sure, but Kurt Gödel's incompleteness theorems (which utilize a variation of the liar paradox) imply that we can't prove, from within the system, that math is consistent. And while Ralph Waldo Emerson claimed "a foolish consistency is the hobgoblin of little minds...", I'm certain that a mathematical inconsistency could cause more than just bridges to collapse.
Funny you say this. I won't preface a statement about math objects with "usually". They just are. Also, interesting that you mentioned being constrained by the axioms of the system. Don't you want to direct that statement toward Banno's question regarding chess?
As far as I have seen, which, admittedly, isn't far, the inconsistencies in math are analogous to "This sentence is not true." The proof of Godel's first incompleteness theorem uses similar sleight of hand to show that, as Wikipedia says:
...no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers.
From what I've read, the foofaraw about these ideas comes from the fact that they crush logicians' and mathematicians' dreams of a perfect formal logical system, not from any impact on any mathematical system that could affect the real world.
Am I sure about this? No way, but it seems like that's what Wittgenstein was saying in the linked article that @Banno provided. Is it possible I have misunderstood? You betcha.
Well, let's use the sour grapes technique on math - if we can't discover every truth in math because of Gödel's incompleteness theorems, maybe we can prove that all such mathematical truths are merely trivial, truisms like 1 = 1, not worth knowing at all or, more accurately, obviously true if there can be such a thing in math. I dunno.
A digression no doubt but something worth looking into, no?
Quoting T Clark
Is the liar statement (this sentence is false) more about language than about logic?
The Russell paradox is basically the same thing using sets instead of sentences.
/
The true arithmetical statements that are unprovable include ones that are not trivial, and especially are not of the form of logical truths (since, of course, all logical truths are provable). One can easily look up the subject of substantive mathematical statements (ones worth knowing whether they are theorems) that are undecidable in the pertinent systems.
/
It was said, "[...] inconsistencies in math are analogous to "This sentence is not true."
A system is inconsistent if and only if it has a theorem of the form P & ~P. Whatever is meant by "analogous", we know that there are inconsistent systems having nothing to do with the liar paradox.
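Why a single theorem of the form P & ~P vitiates a classical system is the law of explosion mentioned earlier: (P & ~P) -> Q is a classical tautology, so an inconsistent system proves every Q. A minimal sketch of my own (not from the thread) checks this by brute-force truth table:

```python
# Truth-table check of "explosion": (P & ~P) -> Q holds under every
# classical valuation of P and Q, reading the material conditional
# A -> B as (not A) or B.
from itertools import product

explosive = all(
    (not (P and not P)) or Q          # (P & ~P) -> Q
    for P, Q in product([False, True], repeat=2)
)
assert explosive
```

Since the antecedent P & ~P is false in every row, the conditional is vacuously true in every row; that is exactly the step paraconsistent logics block by retracting explosion rather than the LEM.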
/
Russell's paradox is couched in terms of sets when discussing set theory, but the basic paradox does not require any notion of sets whatsoever. That is illustrated by using "shaves" rather than "member of". The result is that for any 2-place relation whatsoever, call it 'R', it is not the case that there is an x such that for all y, we have y bears R to x if and only if y does not bear R to y. Symbolically:
~ExAy(Ryx <-> ~Ryy)
There is no mention of the notion of 'set' there.
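The schema ~ExAy(Ryx <-> ~Ryy) can even be confirmed mechanically over a small finite domain, which makes the set-free character of the paradox vivid. The sketch below (my own illustration; the variable names are arbitrary) enumerates every binary relation R on a three-element domain and looks for a "barber" x:

```python
# Finite sanity check of ~ExAy(Ryx <-> ~Ryy): over a small domain,
# no binary relation R has an element x such that for every y,
# Ryx holds iff Ryy fails.
from itertools import product

domain = [0, 1, 2]
pairs = [(a, b) for a in domain for b in domain]

found_barber = False
for bits in product([False, True], repeat=len(pairs)):
    R = dict(zip(pairs, bits))          # one of the 2^9 relations on the domain
    for x in domain:
        if all(R[(y, x)] == (not R[(y, y)]) for y in domain):
            found_barber = True

assert not found_barber  # taking y = x forces Rxx <-> ~Rxx, a contradiction
```

Of course the general theorem needs no search at all: instantiating y to x already yields Rxx <-> ~Rxx, whatever R is.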
/
Among the salient uses of the liar paradox for mathematical logic is Tarski's theorem. That theorem is that systems of a certain kind that also can form their own truth predicate can thereby form the liar paradox so that such systems are inconsistent.
Quoting Wayfarer [from the thread: An Analysis Of The Shadows]
What's the deal with self-reference (self-reflection)?
Fitch's Paradox Of Knowability
Assumption: Everything is knowable.
Conclusion: Everything is known.
:chin:
Why then, i guess your answer to Wittgenstein ought to be: don't dispose of the LEM, it's useful...
Wonderful. On my side it feels very good to agree with Turing -- enigma buster & war hero, inventor of the computer -- and to disagree with Wittgenstein, whom I consider a fake philosopher.
Fair enough. As I said, if anyone wants to build a parallel form of mathematics where the LEM does not apply, I see no objection whatsoever. As long as they don't build any bridge with it...
Within the context of a given mathematical system, yes. But there is more than one system, and hence more than one way to define/describe a line. For example, in analytical geometry a line is a collection of points, because that's just how analytical geometry is built up. Roughly speaking, you start with numbers, from numbers you build points and spaces, and from that you build all the geometrical objects, including lines. That is not how lines are introduced in Euclidean geometry though. Euclid himself doesn't really define a line - he just gives an intuitive picture of what he is going to talk about. The real "definition" of a line comes in the form of axioms that constrain its properties.
Quoting Caldwell
Funny you should mention chess, because chess pieces are a good example of use-definition. A formal description of a chess game would not have a formal definition of a chess piece - it's just an abstract object to which we give a name. Its meaning is given by the use to which it is put in the game: the rules of how different pieces move, etc.
What was @Banno's question?
The point is that allowing contradictions as fine and mellow in mathematics would contradict the LEM.
As already explained, this is not really the question at hand, rather it is a bit of a caricature of the more general question at hand, which was: How should we treat logical contradictions in mathematics? Should we reject or minimize them, as if they were a problem, or should we rather welcome them and treat them as a source of creativity?
I think he asked: Did we invent or did we discover chess? This I read as a parallel to the question of whether math are invented or discovered.
Yes, the liar's paradox statement is shorthand for the overall argument.
This question was aimed at @TonesInDeepFreeze, who seems to understand this better than the rest of us.
I think it's always a good idea to ask precise questions; that was the sense of my input.
Anyway, I don't know what was intended by mentioning Fitch's paradox in connection with my remark about the second incompleteness theorem.
I asked Banno what is the payoff for this belief. The payoff is, it deflates the argument that number is real but not material. Numbers exist in minds, minds depend on brains, without brains there can be no minds and therefore no numbers, goes the reasoning. Because if number is real but not material, then you have something real but not material, meaning materialism is false. And that is a no-go in secular scientific culture. Ought not to over-complicate it.
So you infer that there is a ghost in the machine in order that there can be five spoons on the table even when they go uncounted. Presumably the spirit counts the spoons when no one is around in the quad. I hadn't fully understood that you were so close to Berkeley.
But you have misunderstood my position. I have repeatedly gone through the process of disavowing materialism. Here's a list of threads I have created that touch on the topic:
Philosophical Plumbing — Mary Midgley
Causality, Determination and such stuff.
Nothing to do with Dennett's "Quining Qualia"
Midgley vs Dawkins, Nietzsche, Hobbes, Mackie, Rand, Singer...
Anscombe's "Modern Moral Philosophy"
Subject and object
You participated in most of them. Indeed, let's not over-complicate it, but when you are less distracted we might engage a bit of nuance.
I wouldn't call it pointless to point at one consequence among many. Your choice of word. But we agree on the rest.
You're a math anti-realist. *shrug*
Well, yes, but see the intro to my realism thread:Quoting Banno
It's more a question of which, when than of either/or.
Are there five spoons on the table when we haven't yet counted them? That seems to be what we do. We might equally say that we don't know how many there are until we count them, but that is about what we believe, not what is true.
Of course, it doesn't matter because both materialism and idealism are metaphysical positions and, therefore, are neither true nor false, but, rather, useful or not. I read that mathematicians tend to be idealists and scientists tend to be materialists, which, given that, makes sense.
For the sake of simplicity, let's call them active and passive intentions.
With aesthetics, it's both. The beauty of the golden ratio and the perfect fifth were passively discovered, not created. But having discovered them, I may actively reproduce them. It's the same with ethics.
Though Pythagoras and his followers were literally violently opposed to irrational numbers, their appearance all around us couldn't ultimately be suppressed. So if you see the active in math, the passive also appears to be there.
As for the role consciousness plays in the production of the world we know, it's hard to deny some place for it. It's just a matter of how much, I suppose. Again, it's probably both.
They're already called world-to-word and word-to-world. Active and passive would confuse the issue.
But yes, that's the hypothesis of the other thread. Your post might be better there.
It's one consequence not just among many but among all. The supposed connection with the LEM does not hold.
Thanks, @Olivier5
Quoting SophistiCat
Let's have consistency at least. Okay, more than one way to describe a line you say. Yet, you dismiss your own statement of "it's just an abstract object to which we give a name" regarding chess. So which is it? Chess exists in a vacuum. A line does not.
First, it only takes two points to make a line. Of course you can put however many points you want -- a collection of points. But we imply distance here, which doesn't change its meaning.
In short, I don't get your point.
The math involved in structural engineering has changed over time. If in one of these changes the engineers had postulated that anything mathematical is both true and false at the same time, as Wittgenstein was effectively (though unwittingly) suggesting, they might have ended up building quite a few failed bridges.
Quoting Caldwell
No, chess does not exist in a vacuum, any more than a line. I think when people talk about Wittgenstein's "language games," and how math is "made up" because it is just a game we play (@Banno), they may be led astray by an association of the word "game" with something arbitrary and frivolous. But that's not at all true about literal games, such as chess, is it? If you make up an arbitrary game, it's going to be shit and no one will want to play it. And yet chess has been played for many centuries (and has evolved quite a bit over time). That doesn't just happen arbitrarily.
And the same is true about math, of course.
I’m not saying that you yourself advocate philosophical materialism but that the general motivation for anti-realism in mathematics is that sans some notion of an incorporeal intelligence - which in pre-modern philosophy was assumed to be 'the divine intelligence' - the idea that numbers can be real outside of the human mind doesn’t make any sense. There’s no conceptual space for it. If number arises from counting, and if counting is something done by humans, then indeed maths is invented not discovered and it must be understood accordingly - which in practice means understanding how such an ability might have evolved.
Whereas the realist view is that integers (at least) are 'real in all possible worlds', and that humans simply evolved to the point of being able to recognise them. But that is not at all in keeping with the mainstream attitude, which is generally nominalist and empiricist.
I've quoted a few times from a current article in Smithsonian magazine about 'What is Math' the following passage:
I think that really encapsulates the philosophical situation. It makes empiricists nervous, and empiricists hold sway.
(I've found that that very scholar, James Robert Brown, has written what seems a pretty authoritative (and extremely expensive!) textbook on this subject, Platonism, Naturalism, and Mathematical Knowledge.)
I consider myself an empiricist, and yet I accept the existence of concepts. Am I doing something wrong?
I'm not accusing anyone of anything. The question is, what kind of existence conceptual information has. That is what is at issue. I think that brief passage I quoted expresses a pretty widely-held view. Am I believing something wrong?
No offence, but I think you pay too much attention to naïve materialists. You know, the kind of people who think their selves (i.e. themselves) don't exist because there can be no ghost in no machine... It seems to me that you resent their academic influence and credibility, but the world in which they are credible is a very small one and most probably not the world you live in. They only "hold sway" in a few academic circles that are irrelevant to anything.
I bet most people around you actually think of themselves as more than just meat puppets. And even the most naïve materialist out there will usually behave as a normal person, not as a meat puppet. They would for instance expect some respect from other human beings and would also extend some respect to other human beings, quite unlike the way they would treat actual puppets, and unlike the way they treat animals.
There's no reason to feel angry about academic fads. Academics only have the prestige that you give them.
I'm informing you that it is a basic misconception to think that the LEM is a consideration in the way you have claimed. That is not ill will.
I must congratulate you for learning, at long last, to use the quote feature. Well done!
Constructively, the implications of the incompleteness theorems are stronger than that. The consistency of certain systems (PA and the like) cannot be constructively proved by any means. All that can potentially exist in the constructive sense with regard to such systems are (i) potential constructive proofs of absolute inconsistency via brute-force evaluation of the underlying sequent calculus until the inconsistency is unearthed, and (ii) conditional proofs of relative inconsistency, where the inconsistency of one system implies the inconsistency of another. E.g. Gentzen proved that the inconsistency of PA implies the inconsistency of PRA + transfinite induction up to epsilon zero.
Personally, I am tempted to interpret Gentzen's result as denying the meaningfulness of epsilon zero (i.e. the least ordinal e such that w^e = e) being considered a "well-founded" ordinal. For while it might be shown one day that PA is inconsistent, it can never be shown to be consistent unless one begs the question. By semi-decidability it is meaningful to ask whether epsilon zero isn't well-founded, but it isn't meaningful to assume or hypothesise that it is well-founded.
As an aside, I've read of some scientific papers recently that indicate that children have at least a preliminary understanding of number from a very young age. This leads to the hypothesis that a sense of number is inborn, instinctual, just as our ability to learn and use language is.
I am well versed in the quote feature, as seen in my many posts in other threads. But I had been experimenting with not using it lately in order to avoid too much of a personal tit-for-tat kind of conversation. That is, to make my remarks general toward the points no matter who might have asserted them, though I recognize the disadvantages of not using the quote feature. Then, at a point, it became too cumbersome to reply without quoting, and then my experiment manifestly failed when you resorted to the personal claim that I lack "good will".
It's not that I bar myself from making personal remarks - indeed, I may liberally say quite what's on my mind about a poster's qualities - it's just that lately I was in the mood for experimenting with ways that might avoid my receiving them and then replying with them.
So, congratulations are in order now also for your utterly petty, sophomorically sarcastic, and incorrect snipe. Well done.
As far as I can think it through, your first paragraph seems reasonable and good added information to my own remark..(Though when I said 'by certain means', that of course encompasses whatever means do fail, such as the constructive ones you mention.)
As for the second paragraph, I don't know what you mean by an ordinal that is not well-founded. Any ordinal is well-founded by the membership relation, of course.
The thing about notions like ‘inborn’ and ‘instinctual’ is that they don’t differentiate between whole-hog pre-formed contents and a capacity to learn to construct in stages a complex activity. Language and number I think are good examples of phenomena that can be understood in either way. Chomsky and Fodor belong to the ‘whole-hog innate content’ group, believing in inborn semantic as well as syntactic contents.
The thing about number and calculation is that they are not one simple thing, but mean different things in different cultural eras. Even when we begin from an agreed-upon definition of number, observing the performance of very young children doesn’t tell us how much perceptual constructive activity was necessary for that child to get to the point where concepts like object and multiplicity made sense for them.
Of course, you persist petulantly. No, you were not correct. You gave the impression that I had just learned the quote function, which is not correct.
And you were incorrect about the LEM, as I amply explained. And it's not contrarianism simply to explain that the connection you claimed between the LEM and paradox is incorrect and is a basic misunderstanding of logic.
And, yes, as I mentioned, I knew that not using the quote function had disadvantages, including the one you just mentioned.
This incident with you, in line with ones with other posters with misconceptions about logic and mathematics (not in philosophy, but in the mere rote, technical facts) confirms my thought that on forums such as this, it is virtually impossible to post corrections and explanations without there being posters who will bristle personally about that.
My level of understanding comes from reading summaries of a few scientific papers and then "The Language Instinct." My intent was not to make a strong case for any differing views, but just to point it out as an interesting sidelight.
What's the meaning of the LEM, according to you?
I don't bristle against being corrected on matters of logic.
I don't know what scope you have in mind by 'meaning' but I take the LEM in its utterly ordinary sense:
Syntactically: P v ~P
Semantically: Every sentence in the language is either true in the model or it is false in the model (where 'or' is the inclusive or; while the 'but not both' clause for exclusive or is demanded by the law of non-contradiction: ~(P & ~P)).
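For a two-valued semantics those two laws can be checked exhaustively, since there are only two valuations of P. A minimal sketch of my own (not from the thread), reading v as inclusive or and ~ as negation:

```python
# Truth-table verification of the laws as stated above:
#   LEM:  P v ~P   holds under every classical valuation
#   LNC:  ~(P & ~P) holds under every classical valuation
lem_holds = all(P or not P for P in (True, False))
lnc_holds = all(not (P and not P) for P in (True, False))
assert lem_holds and lnc_holds
```

An intuitionist, by contrast, would not accept this style of argument for the LEM at all, since it presupposes exactly the two-valued model theory in question; the table establishes the classical laws only relative to classical semantics.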
I think the rejection of 'innate ideas' and mathematical intuitions is characteristic of a much wider circle than lumpen materialism. The 'other scholars' that the quote refers to are not necessarily eliminative materialists. In the IEP article on the 'indispensability' argument for mathematics - ' [Rationalist philosophers] claim that we have a special, non-sensory capacity for understanding mathematical truths, a rational insight arising from pure thought. But, the rationalist’s claims appear incompatible with an understanding of human beings as physical creatures whose capacities for learning are exhausted by our physical bodies.' None of those referred to in that article are eliminative materialists.
Quoting T Clark
Which empiricism generally resists, on the grounds that humans are born 'tabula rasa', a blank slate, on which ideas are inscribed by experience.
Well, there’s metaphysical, or ‘naive’, realism, which tends to be linked with Enlightenment associationism, and then there’s representational realism, which is often associated with neo-Kantianism. The former was consistent with behavioristic approaches in psychology, while the latter is compatible with cognitive science. Embodied versions of cog sci reject tabula rasa in favor of a cognizing subject already pre-situated by virtue of both learned schemata and inborn predispositions. And yet it considers itself a fully naturalistic empiricism.
Alright. And how would you write down Wittgenstein's proposal that we should happily welcome contradictions in mathematics, syntactically and semantically? What sort of axiom would that translate into, in your opinion?
I have nothing to say about that.
Though, while probably not specifically apropos of Wittgenstein himself, one can look up the subject of paraconsistent logic.
I see Heidegger’s approach here as overlapping Wittgenstein’s. Heidegger explains that in taking something to be the case in a propositional judgement (for instance, S is P) , we are taking something as something within a wider context of pragmatic relevance.
“The most immediate state of affairs is, in fact, that we simply see and take things as they are: board, bench, house, policeman. Yes, of course. However, this taking is always a taking within the context of dealing-with something, and therefore is always a taking-as, but in such a way that the as-character does not become explicit in the act.”
“What is to be got at phenomenally with the formal structures of "binding" and "separating," more precisely, with the unity of the two, is the phenomenon of "something as something." In accordance with this structure, something is understood with regard to something else, it is taken together with it, so that this confrontation that understands, interprets, and articulates, at the same time takes apart what has been put together. If the phenomenon of the "as" is covered over and above all veiled in its existential origin from the hermeneutical "as," Aristotle's phenomenological point of departure disintegrates to the analysis of logos in an external "theory of judgment," according to which judgment is a binding or separating of representations and concepts. Thus binding and separating can be further formalized to mean a "relating." Logistically, the judgment is dissolved into a system of "coordinations," it becomes the object of "calculation," but not a theme of ontological interpretation."
What Heidegger is saying is that the sense of S is P is always framed and situated within a wider context. Things are the case or not the case within this wider sense-making space, which is context-sensitive. The bottom line is that the meaningful sense of S is P is a moving target, and what the LEM does is delimit how much the sense of the meaning of the proposition can vary before it becomes incoherent. At that point we blame each other for misunderstanding the definitions.
What kind of existence does a material object have? A material existence. What kind of existence does conceptual information have? A conceptual existence. This is all just a matter of words as I see it.
The more I think about it, the less sense that kind of question seems to have.
So, if you want to deny that having a conceptual existence is a material function, then what kind of existence do you imagine conceptual information as having?
Many working mathematicians have been mathematical realists; they see themselves as discoverers of naturally occurring objects. Examples include Paul Erdős and Kurt Gödel. Gödel believed in an objective mathematical reality that could be perceived in a manner analogous to sense perception. Certain principles (e.g., for any two objects, there is a collection of objects consisting of precisely those two objects) could be directly seen to be true, but the continuum hypothesis conjecture might prove undecidable just on the basis of such principles. Gödel suggested that quasi-empirical methodology could be used to provide sufficient evidence to be able to reasonably assume such a conjecture.
Within realism, there are distinctions depending on what sort of existence one takes mathematical entities to have, and how we know about them. Major forms of mathematical realism include Platonism and Aristotelianism.
Mathematical anti-realism generally holds that mathematical statements have truth-values, but that they do not do so by corresponding to a special realm of immaterial or non-empirical entities. Major forms of mathematical anti-realism include formalism and fictionalism.' ~ Wiki
The question being, if mathematical primitives are real, in what sense are they real? A common expression is 'out there somewhere', but they don't exist in the same sense that material objects do. It's the 'in the same sense' that is problematical.
I would think this part is unwarranted, in the sense that life is already semantic. Our physical bodies are semantic. The really hard problem, to me, is not consciousness but life itself. Given life, it was only a question of time before some critter started recording and decoding information in real time, so to speak, as it arose around the organism, information acquired through the senses and analysed by some (originally small) brain. These abilities (primitive eyes etc.) appear very, very early in evolution.
From a critter that can look at something, to another critter that can reflexively look at itself looking at something, all you really need is some redundancy in brain power.
At some point in our evolution, there were about a dozen (known) species of Australopithecus, Paranthropus, Kenyanthropus and the like. One of these most probably gave rise to the first Homo species (habilis). All these species coexisted in Africa, sometimes in the same place but eating different things. The striking fact is that all of them shared, as a sort of "special weapon", an inordinately large brain as compared to their body size.
If some platonic truth is out there, evolution can be seen as a very slow process of coming out of the cave.
The interesting thing is that materiality is already ‘conceptual’ through and through, in that the very notion of an empirical object is a complex perceptual construction, an idealization. Furthermore, it is this idealizing abstraction at the heart of our ideas of the spatial object that makes the mathematical possible. They are parasitic on and presuppose each other.
I have explained more than once already why it is not the case that an otherwise consistent system can be made inconsistent by retracting the LEM. It is not required that I propose couching Wittgenstein into formal syntax and semantics to make that point.
I don't agree with this; I think 'materiality' as a concept is obviously (by definition) conceptual. But material things are sensed, even animals find themselves in an environment comprised of material entities which, judging from their behavior, they must see much as we do (although obviously they don't conceive of them as material entities).
There is none of that kind of postulation in calculus, even though from the start the modeling of movement in calculus is fictive in the sense that it is not really movement, but a static equation. All the math involved in structural engineering requires is that it works; that it can effectively model things like tensile and compressive forces and the hardness, strength and flexibility of construction materials.
Yes to this, a universal rule. Statements and propositions are always made as part of a context, in a very real i.e. local, human sense of who says it, but also theoretically speaking as any statement is to be understood as part of a broader conceptual framework. There's no text in a vacuum. And yes, the truth value of a proposition (often even its meaning) always depends on this context.
Quoting Joshs
This is where you lose me. I do not understand what you mean. That the LEM is not universally applicable, that it has a clear domain of applicability here but not there? If yes then ok, it's consistent with the above about propositions being true in certain contexts and not others.
What animals (and humans) ‘sense’, once we have removed all the higher level constructions that make phenomena appear for us as self-persisting things in a geometric space-time, is a constantly changing, chaotic flux of impressions. Out of this streaming flux we discern regularities and correlations, not just in the changes happening in our environment, but in the relation between these changes and the movements of our body. An ‘object’ is the product of all these correlations and regularities. Most of what we see at any moment in a spatial object is provided by our own expectations based on previous experience with something similar. We mostly construct the object from memory and anticipation. So the idea of spatial objects is an idealization based on actual experience which is contingent and relative.
It is not a fact that objects persist in time; it is a presupposition, and one which is necessary in order for there to be naturalistic empirical science and mathematical calculation.
It seems obvious there are temporally persistent stable objects both for us and animals. My dog sees his food bowl where I see it. I see him going to it. I see no reason to doubt that animals see the same things in the same locations as we do. The dogs use the steps just as I do. The cat climbs the tree. The bird sits on the branch. You don't observe animals trying to climb steps or trees or perch on branches that don't appear to be there to us. When I throw the ball for my dog he can obviously see the ball, he tracks it as it flies through the air and usually manages to be within a meter or two of it when it hits the ground.
Those of us who have been around young children know that they come out of their mother's womb anything but blank. They are who they are and always will be the minute they are born. Probably before.
You made a general statement about it. You made your own claim about mathematics and mathematical logic. And your claim is incorrect. My point about that pertains irrespective of Wittgenstein. Whether people are talking about Wittgenstein in particular, or inclusive of other tangents, it is worthwhile to point out that certain specifics mentioned in context or standalone are in error.
Yes yes yes, but all this assumes that if the tensile strength of this material is X, it is X. It is not something else than X. There's only one correct value or range (+ or - whatever residual error).
Yes, that’s pretty much what I mean. I think the issue here is that the later Wittgenstein seems to be wanting to radicalize the notion of context such that it no longer allows for categories of use. That is to say, he seems to want to make every situation, for every individual, its own context. In doing so, he rejects the coherence of rules, grammars, criteria as categories with any existence outside of particular situations and seen from particular individual perspectives. So, in sum, if one understands context in a formal categorical sense, then the LEM is applicable in some contexts and not in others. But if one equates context with absolute situational and perspectival contingency, then the LEM can no longer find the minimal categorical identity over time in the idea of context necessary for it to contribute anything useful.
What claim are you talking about?
Most of what you say is based on what seems obvious to you.
The one we talked about:
Quoting Olivier5
And I mentioned it most presently only about an hour ago and as you responded to me right after.
How the fuck would we all function in the world if there were no persistent objects?
I haven’t read up on this, but if your dog watches you attach something to the side of an object he can’t directly see, will he know that there is another side of the object and look for it there? Will he discern the general direction and distance of the ball from watching the arc and force of your throw? Research says yes, dogs are capable of object permanence, and cats may also have this ability, but not generally lower animals. Whether he has these skills or not, my point is that it is these and many many other kinds of correlations that determine the very meaning of ‘object’ for an animal. Of course, in many ways a dog is quite close to a human in intelligence, but some differences are obvious. Most human facial expressions are meaningless to a dog, as is a pointing gesture. Now, these are admittedly meanings that are much more sophisticated than those involved in recognizing and tracking objects. But I think you will find all kinds of subtle and not so subtle differences between rabbits and birds and snakes and fish in terms of how ‘objectness’ functions for them.
Also it's not true that dogs can't recognize the act of pointing; in fact they are, if I remember correctly, the only animals that can. They may not read facial expressions, but they certainly respond differently to different vocal tones.
It beats me why people want to deny that there is a physical environment that we share with others, as well as with other kinds of physical entities, who perceive pretty much the same persistent features of the environment as we do.
Yes, counting is something we do, not something we discover.
It's a way of talking about stuff. The way of talking is made up. The stuff isn't.
There's no need for ghosts. If mathematicians are discovering anything, it's the consequences of a game that started with triangles and numbers.
My suspicion is that the subjective/objective bugaboo is lurking here. That mathematics is made up does not mean it is subjective, private, only in one's head. It's a shared game.
The point I’m labouring is that there are real intelligible objects. They comprise an enormous range - not of phenomena, because phenomena are ‘what appears’. They inhere in the domain of logic, geometry, laws and conventions - and so on. They’re real, in that they’re the same for all who think, but they’re only graspable by a rational intelligence. Hence they’re ‘intelligible objects’, real to all who are capable of grasping them, but not the product of your or my mind so, not ‘made up’.
One can agree that in very general terms higher animals perceive pretty much the same persistent features of the environment as we do without having to then conclude that there is such a thing as a ‘physical’, organism-independent basis for this commonality. Analytic philosophers found it necessary to jettison the ‘myth of the given’, the idea that we directly perceive the stuff of the world unmediated by our own schemes. Phenomenology didn’t deny that we perceive an ‘out there’. It only denied that the ‘out there’ comes packaged as physical stuff. Enactivists say that each organism co-creates its environment in relation to its needs, goals and aims as an ongoing environmental process. So each species’ world is in some sense unique to its own functional goals.
But there must first be a peculiar way of thinking about stuff before it makes sense to calculate and measure. We must assume stuff is objectively self-persistent in some fashion. It would make no sense to ‘count’ some variable aspect unless that aspect belonged to something that did not vary during the counting. So even the ‘stuff’ rendered as identically self-persisting is made up (an idealization).
If what you are saying is that there must be something to count before one counts, then... well, sure, but I don't see the relevance.
Are we talking about spoons here, or that there are five spoons?
I don't see what work "real" is doing in your post.
If there were no "physical organism-independent basis for this commonality" then what would explain the commonality? A universal mind? The rejection of the myth of the given is not a rejection of perception independent invariances, but a rejection of the idea that the way different percipients apprehend those invariances is completely independent of their various cognitive faculties.
[i]"What is the Myth of the Given? Wilfrid Sellars, who is responsible for the label, notoriously neglects to explain in general terms what he means by it. As he remarks, the idea of givenness for knowledge, givenness to a knowing subject, can be innocuous. So how does it become pernicious? Here is a suggestion: Givenness in the sense of the Myth would be an availability for cognition to subjects whose getting what is supposedly Given to them does not draw on capacities required for the sort of cognition in question. If that is what Givenness would be, it is straightforward that it must be mythical. Having something Given to one would be being given something for knowledge without needing to have capacities that would be necessary for one to be able to get to know it. And that is incoherent."[/i]
From John McDowell's paper "Avoiding the Myth of the Given".
1. Babies show recognition of object constancy at 8 months.
2. They may or may not be able to speak at that age, but their rationality is definitely limited.
3. So maybe what we're seeing in their behavior isn't ideas, but instinct.
4. The problem is that the concept of instinct requires a backdrop of object constancy.
5. That means I'm actually asserting that my own assertions arise from instinct rather than knowledge.
C: There's an impending collapse of sense here.
Quoting sime
The consistency of, for example, PA cannot be proven by finitistic means, but I don't know whether it can't be proven by constructive though non-finitistic means. First, to pin the question down exactly, we would need an exact mathematical definition of 'constructive'. Second, even if we say, for sake of argument, we mean a particular constructive set theory or other constructive theory, then we would have to prove that such theories do not prove the consistency of classical PA.
As to Gentzen's proof, if I am not incorrect, we can describe the situation in two ways:
Let 'T' stand for transfinite induction on e_0.
(1) PRA+T |- Con(PA)
In other words, by finitistic means plus transfinite induction on e_0 we prove that PA is consistent.
vs
(2) PRA |- Con(PRA+T) -> Con(PA)
In other words, by finitistic means we prove that Con(PRA+T) implies Con(PA).
It is not, at least to me, apparent that those are not constructive (I don't know whether they are or are not constructive). However, I did find a paper that mentions that T is constructively challenged by some writers (actually, it's not T that's discussed but an alternative assumption used in place of T).
But you also say:
Quoting sime
That is of the form ~Con(PA) -> ~Con(PRA+T). But, at least prima facie, that does not intuitionistically imply Con(PRA+T) -> Con(PA). Yet, the latter version is the one we more often see. So I don't know why you stated Gentzen as ~Con(PA) -> ~Con(PRA+T).
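The asymmetry being pointed out here is the standard intuitionistic one; as a sketch of the familiar fact (not part of the original exchange):

```latex
% Intuitionistically, contraposition holds in one direction only:
%   \vdash_{int} (A \to B) \to (\lnot B \to \lnot A)
% but in general
%   \nvdash_{int} (\lnot B \to \lnot A) \to (A \to B).
% So from
%   \mathrm{Con}(\mathrm{PRA}+T) \to \mathrm{Con}(\mathrm{PA})
% one obtains intuitionistically
%   \lnot\mathrm{Con}(\mathrm{PA}) \to \lnot\mathrm{Con}(\mathrm{PRA}+T),
% but the reverse inference is not available without classical logic.
```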
Five of anything is 'five'. We're talking about (among other things) numbers - whether they have any reality outside individual minds, outside of the individual act of counting. That's what 'real' means. Anti-realists will say no. Realists will say yes.
Quoting Banno
It's a discussion about mathematical realism, isn't it?
It's that formulation that is at issue, that very division that presupposes things and minds, the notion that there is an inside and an outside.
I'd like to do an analysis of "realism" to see when it came into use. I suspect it is a reaction to idealism; that realism wasn't needed until the nonsense of idealism came about.
https://www.etymonline.com/search?q=realism
https://www.wolframalpha.com/input/?i=realism
https://www.inphoproject.org/idea/480.html
And yet I can't see that there could be, on account of there being some inconsistencies or paradoxes in certain areas of math, any reason for them to say such kinds of things.
That is not a rigorous mathematical proof. However, there is a rigorous mathematical proof that .999... = 1.
'.999..' is an informal way of describing a certain infinite summation. And infinite summation is defined by convergence. And we prove that the sequence converges to 1.
I've had many a discussion with contributors here who are convinced that ideas are essentially 'brain structures', and that the brain is shaped by evolutionary adaptation according to Darwinian principles to operate in a way that is advantageous for survival. Do you think that is a view held by a minority of academics?
Your proof from convergence seems no different though, since 0.999999 can converge on 1 without ever reaching it, just as 0.33333 converges on one third, which seems to be exactly the point against the argument I deleted.
I'll leave my post as it is so that your following self-correction makes sense in context.
From your #2
Notice the difference between 3 and 5. 3 is scientific realism, today's realism, what people mean by realism in ordinary speech. 5 is scholastic realism, realism concerning universals. Mathematical realism is nearer to (5) than to (3) and tends to contradict (3), because it is assumed by (3) that numbers can't exist outside of minds, and because another axiom of (3) is that reality is what exists independently of any mind.
That's not the point.
'.999...'
is informal for
SUM[n = 1 to inf] 9/10^n
And SUM[n = 1 to inf] 9/10^n is the limit of a sequence. And that limit is 1.
/
By the way, when you wrote, "0.333333333 is no more equal to one third than 0.9999999 is equal to 1", I glanced over it too quickly and took you to mean that we can't assume .333... = 1/3 any more than we can assume .999... = 1". That is correct. But we can go on to prove both that .999... = 1 and that .333... = 1/3. We don't assume them, but we do prove them.
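For what it's worth, the convergence argument mentioned here can be written out in a few lines (an editorial sketch, not from the thread):

```latex
0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^n}
\;=\; \lim_{N\to\infty} \sum_{n=1}^{N} \frac{9}{10^n}
\;=\; \lim_{N\to\infty}\Bigl(1 - \frac{1}{10^{N}}\Bigr)
\;=\; 1,
```

since the partial sums are \(S_N = 1 - 10^{-N}\) and \(10^{-N} \to 0\). The same computation with \(3/10^n\) in place of \(9/10^n\) gives \(0.333\ldots = 1/3\).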
So, if the proof is a proof, then it seems that we do have an inconsistency, and my argument that it has no bearing on structural engineering would stand.
(Note that when I say we have an inconsistency I just mean that the intuition that .9999 does not equal 1 is inconsistent with any proof that the two are equal).
The context in which .999... = 1 is classical mathematics as introduced ordinarily in Calculus 1 then more rigorously explicated in Real Analysis, especially made rigorously axiomatic by the set theoretic development of the real numbers. The infinite sum as the point of convergence of a sequence is what mathematicians MEAN by .999... Saying, in face of that context, that .999... does not equal 1 is not informed intuition but rather ignorance of the actual mathematics. Claiming, without mathematical basis, that .999... does not equal 1 is claptrap that needs to be remedied by study of the basics of the subject. (I am addressing here only what is ordinarily meant in mathematics, as I recognize that there are other approaches including finitistic views and computationalist views).
That's not what mathematicians, including Turing, mean by 'inconsistency'. That's not (as far as I can tell) a sense of inconsistency at issue with the "bridge collapses" issue mentioned in connection with Turing, which is the sense of a system proving a formula and its negation, thus, by the principle of explosion, the system proving every formula whatsoever in the language of the theory.
One must make certain assumptions about this something: namely that it stays put as what it is, that it is res extensa, that it has duration. But nothing in the world stays put as what it is. Moment to moment it transforms itself ever so subtly. Self-identicality is an illusion of sorts. This is my point, as stated differently by Husserl and Heidegger:
For Husserl, extension, duration and magnitude are all implied by the idealizing thinking of self-identical objects. The ideal geometry of a line made possible the empirical intuitions pertaining to various characteristics of number.
“A true object in the sense of logic is an object which is absolutely identical "with itself," that is, which is, absolutely identically, what it is; or, to express it in another way: an object is through its determinations, its quiddities, its predicates, and it is identical if these quiddities are identical as belonging to it or when their belonging absolutely excludes their not belonging. Purely mathematical thinking is related to possible objects which are thought determinately through ideal-"exact" mathematical (limit-) concepts, e.g., spatial shapes of natural objects which, as experienced, stand in a vague way under shape-concepts and [thus] have their shape-determinations; but it is of the nature of these experiential data that one can and by rights must posit, beneath the identical object which exhibits itself in harmonious experience as existing, an ideally identical object which is ideal in all its determinations; all [its] determinations are exact—that is, whatever [instances] fall under their generality are equal—and this equality excludes inequality; or, what is the same thing, an exact determination, in belonging to an object, excludes the possibility that this determination not belong to the same object.”
“ In this sphere of magnitudes, and initially of spatial magnitudes—first of all in classes of privileged cases (straight lines, limited plane figures, and the corresponding cases of spatial magnitudes), first of all in the empirical intuition that magnitudes divide into equal parts and are composed again of equal parts—or of aggregates of like elements which decompose into
partial aggregates and can be expanded into new aggregates through the addition of elements or
of aggregates of such elements—in this sphere, there arose the "exact" comparisons of magnitudes which led back to the comparison of numbers. Upon the vague "greater," "smaller," "more," 'less," and the vague "equal" one could determinately superimpose the exact "so much" greater or less, or "how many times" greater or less, and the exact "equal."
Every such exact consideration presupposed the possibility of stipulating an equality which excluded the greater and the smaller and of stipulating units of magnitude which were strictly substitutable for one another, were identical as magnitudes, i.e., which stood under an identical concept or essence of magnitude.”
“Thus it was possible to conceive of processes converging idealiter through which an absolute
equal could be constructed ideally as the limit of the constant approach to equality, provided that one member [of the system] was thought of as absolutely fixed, as absolutely identical with itself in magnitude. In this exact thinking with ideas one operated with ideal concepts of the unchanging, of rest, of lack of qualitative change, with ideal concepts of equality and of the
general (magnitude, shape) that gives absolute equalities in any number of ideally unchanged and thus qualitatively identical instances; every change was constructed out of phases which were looked upon as momentary, exact, and unchanging, having exact magnitudes, etc.”
Husserl and Heidegger share a focus on Galileo as originator of modern mathematical science based on an idealization of geometric spatio-temporality as objective bodies in causal interaction. Heidegger traces the origin of empirical science to the concept of enduring substance.
“Mathematical knowledge is regarded as the one way of apprehending beings which can always be certain of the secure possession of the being of the beings which it apprehends. Whatever has the kind of being adequate to the being accessible in mathematical knowledge is in the true sense. This being is what always is what it is. Thus what can be shown to have the character of constantly remaining, as remanens capax mutationem, constitutes the true being of beings which can be experienced in the world. What enduringly remains truly is. This is the sort of thing that mathematics knows. What mathematics makes accessible in beings constitutes their being. Thus the being of the "world" is, so to speak, dictated to it in terms of a definite idea of being which is embedded in the concept of substantiality and in terms of an idea of knowledge which cognizes beings in this way. Descartes does not allow the kind of being of innerworldly beings to present itself, but rather prescribes to the world, so to speak, its "true" being on the basis of an idea of being (being = constant objective presence) the source of which has not been revealed and the justification of which has not been demonstrated. Thus it is not primarily his dependence upon a science, mathematics, which just happens to be especially esteemed, that determines his
ontology of the world, rather his ontology is determined by a basic ontological orientation toward being as constant objective presence, which mathematical knowledge is exceptionally well suited to grasp.”
The basis for this commonality is a reciprocal causality operating between organism and environment. Since other organisms belong to each organism’s environment, there is a complex web of interaction taking place at every level: between mind and body, between body and environment, and between organisms in a wider environment. All of these levels are nested within one another. The result is that each individual has their own perspective on a world that they share with others. The difference between this enactivist model and physicalism is that the latter creates commonality by correspondence with a presumed already existent reality. The former, however, sees the real world not as pre-existing and independent of sense-making organisms, but as a co-produced continual development pragmatically inseparable from an organism’s goals and needs. To ask if a thing exists in the world is to ask how it is pragmatically relevant to my ongoing direction of functioning, how it is useful to me. These are not separate questions but the same question.
I need a translation.
It wasn't "decided"; it's what we do. And we are able to do so because of how things are.
Quoting Joshs
Just as objects are 'fictions'/inventions (eddies in the stream), so perhaps are meanings?
Quoting Joshs
That sounds correct, but would you agree that the conceptual is social and 'material'? So that the game of reducing one to the other is perhaps futile? Or that the game is at least fundamentally impractical inasmuch as the distinction between conceptual and the material is not likely to lose its intense utility?
Quoting Olivier5
:up:
But what we make of that math does seem to be a matter for philosophers (and everyone else, really.)
That seems right on some level, but this game of pretend, presumably evolved, is not so easily shaken off. What's an event? Does it involve objects? 'Transformation' implies some thing that is transformed, maintaining its identity in some sense. As another poster has mentioned, this kind of point threatens to 'deconstruct itself,' which is not necessarily a bad thing.
I get that the world of objects perceived in common is a relational, interactive world. But I don't think the commonality of the different organisms' cognitive 'machinery' is enough to explain the fact that we all, animals and people, see the same objects in the same locations. Beyond all the perceptions of the world I think we have every reason to believe there is a world that is perceived; that is what it is regardless of how it is perceived and would be there just as it is (as it is, not as it is perceived, mind) in the absence of any percipients. We have every reason to believe that because it is the best explanation for a shared world; in fact it is the only explanation apart from some form of idealism; some notion that all minds are somehow conjoined or that there is a universal mind we all partake in.
I think someone wrote that Wittgenstein basically didn't understand Turing's example (the Turing Machine) and Turing failed to describe it for him. The anticipated meeting of the two didn't create great advances. But it's telling that it took even Gödel a long time to understand that Turing's findings were actually similar to his, so it's no wonder if Turing and Wittgenstein didn't understand each other.
I'm not sure, when reading Murphy's article, that he understands this either. Using negative self-reference as Turing or Gödel did isn't the same as the Liar paradox. Close, but their constructions don't result in a paradox. That perhaps is the crucial point to understand.
I agree it will never happen. It was just another absurd idea from Wittgenstein.
I think such a position is easy to rebut, in that it is a self-referencing negative statement structurally identical to the liar's paradox, or to "this sentence is false". The idea that ideas are essentially 'brain structures', if true, is itself a mere brain structure.
It's a point I used to make quite often here, at the start, so I agree that this place (TPF) has more than its fair share of this kind of kindergarten version of materialism.
Why yes, we are all amateur philosophers anyway, even those of us who are professional mathematicians. And there is no subject matter that philosophy cannot legitimately discuss. Mine was a joke.
Quoting TonesInDeepFreeze
Wonderful. So now, how would you write down the proposal that we allow contradictions in mathematics, syntactically and semantically? What sort of axiom would that translate into, in your opinion?
And then the interesting question (for me at least) becomes: in which types of contexts does it apply, and in which types of contexts does it not apply.
My system crashed again. In other words, please?
That is not a good question, because is has a trivial answer: Add any contradiction as an axiom.
A better question is: What systems preserve important parts of classical logic but not EFQ (explosion)?
https://plato.stanford.edu/entries/logic-paraconsistent
Also, contradictions, in the formal sense such as with inconsistent systems, are syntactic, not semantic.
You mean, one by one?
You only need one. When you have one, you get them all. I have mentioned several times already the principle of explosion. Indeed, it is at the very heart of the Turing role in the matter of this thread, as mentioned in the originally linked article.
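Explosion is easy to verify semantically. A minimal Python truth-table check (the helper names are mine, for illustration) showing that (P ∧ ¬P) → Q comes out true under every valuation, so a contradiction classically entails anything:

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

# Principle of explosion, semantically: (P and not P) -> Q
# holds under every assignment of truth values to P and Q,
# because P and not-P is never jointly true.
explosion_holds = all(
    implies(p and (not p), q)
    for p, q in product([True, False], repeat=2)
)
print(explosion_holds)  # True
```

Paraconsistent logics block explosion not by changing this classical table but by rejecting one of the inference steps (typically disjunctive syllogism) or by using a non-classical semantics.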
Correct.
As I've said, it is inconsistent with everything.
And again, the point I made to you about LEM is not about what contradicts it, but rather that merely taking LEM out of a system does not make the system inconsistent.
It seems therefore that, as I was taught it some decades ago in a country other than yours, the LEM included the LNC by way of an exclusive or.
In any case, and this clarification being made, I agree with you that the 'inclusive or' version of the LEM is not the relevant logical law in relation to Wittgenstein's proposal to allow contradictions in mathematics. The LNC is.
Classically, LEM and LNC are equivalent, since all logical truths are equivalent. That is, for any logical truths P and Q, we have P <-> Q as a theorem.
But LEM and LNC are different in these senses: (1) They are literally different formulas and (2) If we remove certain classical assumptions, then they are not equivalent. Most saliently, in intuitionistic logic, we cannot derive LEM from LNC.
/
No, the literal formulation of LEM in ordinary symbolic is NOT exclusive or. Literally, LEM is:
P v ~P, where 'v' stands for inclusive disjunction.
Quoting Olivier5
Yes, inclusive 'either or'.
/
Also, Wikipedia is sometimes okay to get a quick and rough summary of a topic, but one must be on guard against misinformation and poorly formulated expositions there. However, in this instance it is correct. I took the French article and translated it to English (as I don't know French) and do not find a claim that the 'or' in LEM is exclusive. Indeed, the article says, contrary to you, exactly what I have been saying: It is from LNC that we get that it is not the case that both P and ~P are true, while it is from LEM that we get that it is not the case that neither P nor ~P are true.
/
If you want to understand such topics in logic, I strongly suggest studying an introductory textbook in a systematic and thorough way rather than relying on a cobbled-together misunderstanding from bits and pieces mis-gleaned here and there.
It matters because the 'v' ('or') connective should never have been conflated with exclusive-or.
Also, your notion that exclusive-or has an advantage of elegance is ill-conceived, as I could explain also.
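The classical points about LEM and LNC in this exchange can be checked mechanically. A small Python truth-table sketch (helper names are mine) showing that P ∨ ¬P, ¬(P ∧ ¬P), and even the exclusive-or version are all classical tautologies, and hence classically "equivalent" in the sense that all logical truths are:

```python
def tautology(f):
    # A one-variable formula is a tautology if it is true
    # under both truth values of its variable.
    return all(f(p) for p in (True, False))

lem = lambda p: p or (not p)          # LEM: P v ~P, inclusive or
lnc = lambda p: not (p and (not p))   # LNC: ~(P & ~P)
xor = lambda p: p != (not p)          # exclusive-or variant of LEM

print(tautology(lem), tautology(lnc), tautology(xor))  # True True True
```

Note that these are distinct formulas that merely share a truth table classically; no such table can show the intuitionistic situation, where LEM is not derivable from LNC.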
I think the key issue here isn’t materiality or objectness but relative stability. The question is, how does a position like Heidegger’s, which claims to deconstruct the self-identical object, achieve stability of meaning? The answer is that an event doesn’t occur into a vacuum, but into an exquisitely organized referential totality. That is precisely what an event is, a way that this totality of relevance changes itself moment to moment. So there is a tremendously intricate and intimate overall coherence from one event to the next. Each event is a subtle variation on an ongoing theme, and its very appearance shifts the sense of this theme without rending its pragmatic consistency. This relation between a referential background and new events allows one to say that the world can continue to be the same differently, as an ongoing style.
This is the paradox of this kind of model. On the one hand, it is more radically and immediately transformational than models positing empirical objects. On the other hand it avoids the arbitrariness of objective causality. There is a radical belongingness of one moment to the next that is missing from causal approaches. We tend to assume that it is only by nailing down objects as self-identical that we achieve the possibility of order and stability in our models of the world. And this was true in comparison with pre-modern thinking. But it isn’t self-identicality that gives us the order we crave, it’s the extent to which such assumptions facilitate our ability to discern real-time relations among such entities. The assumption of self-identicality actually limits the possibilities of relationality that we can find in the world.
My posts aren't models of eloquence, but they are, for the most part, articulate and more precise about technical matters than normally found in a casual context such as this forum.
After a series of clearly correct explanations by me, and over the course of your stubbornness, those explanations finally provided you with an understanding of why you were incorrect to begin with, and also gave you context for other notions that are important to the topic. But the very fact that I provided you with ample explanations leads you to gripe that they were not eloquent enough for you and too ample.
Quoting Olivier5
Yes, an explanation of why your notion of an advantage of a certain elegance is wrongheaded would be longer than a couple of lines, because your notion is wrongheaded in so many different ways.
Here are alternatives from those who reject idealism. Maybe you can agree with the gist of their arguments.
From Evan Thompson:
“Many philosophers have argued that there seems to be a gap between the objective, naturalistic facts of the world and the subjective facts of conscious experience. The hard problem is the conceptual and metaphysical problem of how to bridge this apparent gap. There are many critical things that can be said about the hard problem (see Thompson & Varela, forthcoming), but what I wish to point out here is that it depends for its very formulation on the premise that the embodied mind as a natural entity exists ‘out there' independently of how we configure or constitute it as an object of knowledge through our reciprocal empathic understanding of one another as experiencing subjects. One way of formulating the hard problem is to ask: if we had a complete, canonical, objective, physicalist account of the natural world, including all the physical facts of the brain and the organism, would it conceptually or logically entail the subjective facts of consciousness? If this account would not entail these facts, then consciousness must be an additional, non-natural property of the world.
One problem with this whole way of setting up the issue, however, is that it presupposes we can make sense of the very notion of a single, canonical, physicalist description of the world, which is highly doubtful, and that in arriving (or at any rate approaching) such a description, we are attaining a viewpoint that does not in any way presuppose our own cognition and lived experience. In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism. From the phenomenological perspective explored here, however — but also from the perspective of pragmatism à la Charles Sanders Peirce, William James, and John Dewey, as well as its contemporary inheritors such as Hilary Putnam (1999) — this transcendental or metaphysical realist position is the paradigm of a nonsensical or incoherent metaphysical viewpoint, for (among other problems) it fails to acknowledge its own reflexive dependence on the intersubjectivity and reciprocal empathy of the human life-world.
Another way to make this point, one which is phenomenological, but also resonates with William James's thought (see Taylor, 1996), is to assert the primacy of the personalistic perspective over the naturalistic perspective. By this I mean that our relating to the world, including when we do science, always takes place within a matrix whose fundamental structure is I-You-It (this is reflected in linguistic communication: I am speaking to You about It) (Patocka, 1998, pp. 9–10). The hard problem gives epistemological and ontological precedence to the impersonal, seeing it as the foundation, but this puts an excessive emphasis on the third-person in the primordial structure of I–You–It in human understanding. What this extreme emphasis fails to take into account is that the mind as a scientific object has to be constituted as such from the personalistic perspective in the empathic co-determination of self and other. The upshot of this line of thought with respect to the hard problem is that this problem should not be made the foundational problem for consciousness studies. The problem cannot be ‘How do we go from mind-independent nature to subjectivity and consciousness?' because, to use the language of yet another philosophical tradition, that of Madhyamika Buddhism (Wallace, this volume), natural objects and properties are not intrinsically identifiable (svalaksana); they are identifiable only in relation to the ‘conceptual imputations' of intersubjective experience.” (Empathy and Consciousness)
From Dan Zahavi and Hilary Putnam:
Knowledge is taken to consist in a faithful mirroring of a mind-independent reality. It is taken to be of a reality which exists independently of that knowledge, and indeed independently of any thought and experience (Williams 2005, 48). If we want to know true reality, we should aim at describing the way the world is, not just independently of its being believed to be that way, but independently of all the ways in which it happens to present itself to us human beings. An absolute conception would be a dehumanized conception, a conception from which all traces of ourselves had been removed. Nothing would remain that would indicate whose conception it is, how those who form or possess that conception experience the world, and when or where they find themselves in it. It would be as impersonal, impartial, and objective a picture of the world as we could possibly achieve (Stroud 2000, 30). How are we supposed to reach this conception?
Metaphysical realism assumes that everyday experience combines subjective and objective features and that we can reach an objective picture of what the world is really like by stripping away the subjective. It consequently argues that there is a clear distinction to be drawn between the properties things have “in themselves” and the properties which are “projected by us”. Whereas the world of appearance, the world as it is for us in daily life, combines subjective and objective features, science captures the objective world, the world as it is in itself. But to think that science can provide us with an absolute description of reality, that is, a description from a view from nowhere; to think that science is the only road to metaphysical truth, and that science simply mirrors the way in which Nature classifies itself, is – according to Putnam – illusory. It is an illusion to think that the notions of “object” or “reality” or “world” have any sense outside of and independently of our conceptual schemes (Putnam 1992, 120). Putnam is not denying that there are “external facts”; he even thinks that we can say what they are; but as he writes, “what we cannot say – because it makes no sense – is what the facts are independent of all conceptual choices” (Putnam 1987, 33).
We cannot hold all our current beliefs about the world up against the world and somehow measure the degree of correspondence between the two. It is, in other words, nonsensical to suggest that we should try to peel our perceptions and beliefs off the world, as it were, in order to compare them in some direct way with what they are about (Stroud 2000, 27). This is not to say that our conceptual schemes create the world, but as Putnam writes, they don't just mirror it either (Putnam 1978, 1). Ultimately, what we call “reality” is so deeply suffused with mind- and language-dependent structures that it is altogether impossible to make a neat distinction between those parts of our beliefs that reflect the world “in itself” and those parts of our beliefs that simply express “our conceptual contribution.” The very idea that our cognition should be nothing but a re-presentation of something mind-independent consequently has to be abandoned (Putnam 1990, 28, 1981, 54, 1987, 77)
Quoting TonesInDeepFreeze
Too slow, rather. You could have said a long time ago: "you must mean the LNC, because the LEM does not actually rule out contradiction."
I don't know what you mean by 'a long time ago' relative to the duration of our exchanges. But many posts ago, I wrote:
Quoting TonesInDeepFreeze
You're blaming me for the fact that you didn't bother to read what I posted.
Since P and ~P are mutually exclusive, what difference does it make whether the disjunction is inclusive or exclusive?
P and ~P are mutually exclusive in classical logic, but not necessarily in other logics, especially paraconsistent logics.
To answer your question:
(1) It is important to not be confused as to which connective is actually used.
(2) With inclusive-or, we can take out LEM to get intuitionistic logic, and also, by not subsuming LEM within an LEM/LNC combo, we can take out LNC to get paraconsistent logics.
(3) The notions of LNC and LEM go back to antiquity, and have been critical in the discussion of logic through the centuries, so to suddenly say that LEM now means something different would be quite confusing.
[Edit: I should not have written, "We can take out LNC to get paraconsistent logics."]
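To make the point about not conflating the connectives concrete, here is a minimal sketch (my own illustration, not anything claimed in the thread): in classical two-valued logic, LEM with inclusive-or, LNC, and even the exclusive-or variant all come out as tautologies, which is exactly why the conflation is tempting; yet they are distinct schemas, and the first two come apart in intuitionistic and paraconsistent logics respectively.

```python
# Classical two-valued check of three distinct schemas that all
# happen to be tautologies. The names are mine, for illustration only.

def lem(p):
    # Law of excluded middle with INCLUSIVE or: P v ~P
    return p or (not p)

def lnc(p):
    # Law of non-contradiction: ~(P & ~P)
    return not (p and (not p))

def xor_version(p):
    # The exclusive-or reading: exactly one of P, ~P holds
    return p != (not p)

# All three hold for every classical truth value of P...
for p in (True, False):
    assert lem(p) and lnc(p) and xor_version(p)

# ...but they are different formulas: intuitionistic logic keeps lnc
# while rejecting lem, which is only expressible if the two are not
# fused into a single exclusive-or principle.
print("All three are classical tautologies; only as separate schemas can they come apart.")
```

The design point matches (2) and (3) above: keeping inclusive-or lets us remove LEM alone (intuitionism) or weaken LNC-related machinery alone, which a fused exclusive-or principle would not permit.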
We started this discussion two or three days ago though.
Oh, no, I didn't deliver your desired angle on the subject soon enough, even though what I did say was correct at every point while you persisted otherwise!
I gather that you have no idea what a nudnick you are being.
I adduced, entirely gratis, the point about LNC and LEM, because it was at that stage that I sensed it might bear upon your confusion. It's not my job to immediately anticipate what is confusing you about the subject and then to immediately warn you about those confusions. I gave you clear corrections, which is itself gratis, and then at other stages added more explanation also gratis. For that matter, there is even more about the subject I could add now, but, again, it's not my job to try to figure out why you are mixed up so that I choose just the right points customized for you.
I'm not saying it is. In fact I am saying it isn't. If it was, you wouldn't be too good at it. Still grateful for the clue though.
I certainly don't claim to be especially skilled in seeing into the minds of people who are ill-informed about the subject to know how they came to their misconceptions. Sometimes, though, I do sense at which point they got off track. And I did so soon enough with you too.
Quoting Olivier5
It was not just a clue. It was a clear, concise, and precise statement.
I'm curious to see though whether you're going to continue with snide, petty, ill-premised misgivings against the person who gave you correct information.
I suppose this is the main advantage of dissociating the two.
Coming back to the topic at hand, I do have a question after all: has anyone tried to build a bridge based on paraconsistent logic then, and if yes, what does the bridge look like?
What do you mean by a "bridge"?
Also, to be able to get intuitionistic logic.
Quoting TonesInDeepFreeze
What's intuitionistic about the 'inclusive or' in LEM? Can a door be both open and not open at the same time? Can a man be alive and not be alive at the same time?
I need to correct that. Usually, paraconsistent logic is attained by not having EFQ. Taking out LNC would be something different. Nevertheless, I still think it is the case that having LNC and LEM as different principles bears upon paraconsistent logic, though in a more involved way than I incorrectly stated.
I mistakenly thought you meant 'bridge' in the sense of a connection between the two logic principles.
I have no idea about paraconsistent logic used in engineering. However, the Stanford article I suggested does talk about paraconsistent logic used in computing and especially for databases.
I don't think it's particularly intuitionistic. Rather, it's that LEM (which is with inclusive-or) is eschewed in intuitionistic logic, while intuitionistic logic does accept LNC, so having LEM and LNC as different principles makes it convenient to describe intuitionistic logic relative to classical logic.
Yes, I corrected myself. It's more complicated than I suggested. Indeed, you can see for yourself at the Stanford article. But it's still the case that paraconsistent logics do allow theorems of the form P & ~P.
Thanks for providing these quotes Josh. For the sake of simplicity I'll just address the parts that seem salient to me one at a time.
If we had "a complete, canonical, objective, physicalist account of the natural world" would that not be a subjective fact of consciousness? Or would you say it is an objective fact of consciousness? I see a problem with the 'subjective' or the 'objective' there; why not just ask whether it would entail the facts of consciousness? If it were complete then I would say it would; it would have to, in order to count as complete.
It's not clear what Thompson has in mind with "the subjective facts of consciousness", and it's also not clear why they should be referred to as subjective rather than objective.
I believe math is, inter alia, a language.
Going by what Israeli-born historian Yuval Noah Harari says in his book Sapiens, the written word was created to handle mathematical information (accounting, inventorying) - partial scripts they're called. Only later did people expand written language to full scripts, ones capable of recording any and all conversations (poetry to prose).
Good, but not sure of the inter alia...
What else is it?
My inclination is to treat maths as a grammar; misusing "grammar" in a familiar way.
That is, maths is a way of talking, a set of language rules. Hence the spoons example that I have repeated several times but which does not seem to have drawn comment.
That there are five spoons on the table is a way of talking about the stuff on the table.
Any mooted inconsistency in mathematics will not change the number of spoons on the table.
The history of division, especially in Egypt, seems to fit the historical model you suggest, where an enacted solution, a procedure, became encapsulated into a formal calculation. One onion each until there weren't enough to go around, then a half onion each, then a quarter, and so on until there was not enough left to argue about. This became the basis for the quite clever method of doubling for multiplication and division used in Egypt.
One objection might come from those who think 0.99...<>1; dare I mention it? There is a point at which some folk fail to see the process 9/10+9/100+9/1000... become the very same as 1; this lack of insight is the very source of many an internet troll-fest. It's an adult version of subitising, of looking at a group of objects and realising how many there are. Someone might characterise this as a move from a process to a thing; but I think that argument weak.
Doubtless this is because you find it so often at odds with your own views.
One wonders why.
What an absurd proposition. :eyes:
Paraconsistent logic drops explosion, or ex contradictione quodlibet, by introducing a third truth value. A consequence of that is that (A & ~A) is allowed, but it is assigned neither T nor F, but N.
Paraconsistent logic does not allow contradictions; it does not allow (A & ~A) to be true.
Dialetheism assigns truth to (A & ~A). That's a different fish.
:chin: You have a point. Mathematics, I wanted to say, is abstraction, but so are the words in natural language. Yet is it not very thought-provoking that mathematics seems to be universal; that is to say, it doesn't seem to be culturally/geographically constrained? Everyone everywhere, it appears, hit upon the idea of numbers & shapes - there are numerous mutually unintelligible natural languages, but math stands out as one language that everybody understands (trade, money, engineering). This to the extent that scientists, mainly astronomers, are under the impression that the safest bet in re communicating with aliens is to use math. Is there something we're missing here?
Quoting Banno
[math]\frac{9}{10}+\frac{9}{100}+\frac{9}{1000}+...= 1[/math]
So folk become puzzled as to why it should turn out that 2 is so useful for Fijians as well as for Europeans. All languages use nouns, too, but this does not lead to puzzlement. Some ways of talking are better than others.
Adopting an argument from Davidson, what would a community look like in which 2+2=3? What utterances or behaviours of theirs would convince us that they thought this? How could they be seen to bring two groups of two together and get 3? How could they behave as if that were what happened? Perhaps they pretend that the fourth item has disappeared; but what would that look like to us - a ritual?
Indeed, it puzzles me. A certain aspect of reality (quantity) is discovered all over the world almost simultaneously, accepted, fast & furious, by everyone. There are of course non-mathematical notions that enjoy a similar status, e.g. morality; variations don't detract from the overall universality of the notion of right and wrong. That's intriguing as hell.
Quoting Banno
Couldn't parse that to respond sensibly.
Yeah, it approaches 1, but it never quite does become 1, though, does it?
So let's not go down that burrow.
Are you going to make a cogent argument as to how a series that is approaching 1 and could do so forever without actually reaching it is the "very same as 1", or are you going to throw around snide comments instead? (Note: I'm not denying that for mathematical purposes it may be good enough to count that series as equivalent to 1, but the logic says that no matter how many fractions sequentially decreased tenfold you add, you will never actually get there.)
Fair enough. It needs expansion. But I don't have the inclination to write an essay now.
Quoting TheMadFool
I think that's not what happens. Rather, a certain way of talking about the world is found in many places. There are, after all, languages without much by way of number. Would you say that the folk who speak them have failed to notice an aspect of reality, or would you say that they have no use for a particular process, a certain way of speaking?
No, partly because I don't wish to derail my own thread, and partly because there are plenty of places you can find these using Google.
I'm not being snide. It is a genuine issue amongst mathematics teachers. See the Wiki article on 0.99... It's on a par with kids who are not able to see three dots as three.
Folk who think 0.99...<>1 have missed a vital aspect of mathematics.
Quoting Banno
I didn't say anything about truth values (semantics) for paraconsistent logics.
I am not expert, but, if I am not mistaken, the main point about a paraconsistent system is that it does not have EFQ. There is nothing stopping us from having premises that are a contradiction, and using a paraconsistent system to derive those premises trivially by the rule of placing a premise on a line. And those premises might be contentual axioms. So we would have theorems in contradiction with one another. And if the system has the rule of adjunction, then we can have the conjunction of the two contradicting premises. The important point though is that we can't use EFQ. But, of course, there are widely different kinds of paraconsistent systems, so I don't intend a complete generalization.
That's syntax (proof system). As to semantics, if I am not mistaken, not all paraconsistent systems accommodate dialetheism, but some do. Indeed the SEP article states that every dialetheistic approach must have a paraconsistent syntax. So, since the set of dialetheistic semantics is not empty, there must be paraconsistent systems (syntax) that accommodate dialetheism (semantics).
In a chart:
Exist. Paraconsistent syntax with dialetheistic semantics.
Exist. Paraconsistent syntax with non-dialetheistic semantics.
Exist. Dialetheism (which is semantics) with paraconsistent syntax.
Not Exist. Dialetheism (which is semantics) with non-paraconsistent syntax.
Quoting Banno
If I'm not mistaken, that is incorrect as a generalization over all paraconsistent systems, as I mentioned above. As a rough generalization, paraconsistency does not "frown" on deriving a contradiction, and some paraconsistent approaches do not frown on having true contradictions. Rather, all paraconsistent systems don't have EFQ. (By saying that they don't have EFQ, I mean that they don't have "For all sentences P and Q, {P, ~P} |- Q".)
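For a concrete picture of how EFQ can fail, here is a small sketch of one standard paraconsistent semantics, Priest's three-valued LP (Logic of Paradox), as described in the SEP article. This is my own toy encoding, not anyone's claim in the thread: truth values are 1 (true), 0.5 (both true and false), 0 (false), and valid inference preserves the designated values 1 and 0.5.

```python
# Toy model of LP semantics: negation flips the value, conjunction
# takes the minimum, and the designated values are 1 and 0.5.
T, N, F = 1.0, 0.5, 0.0
DESIGNATED = {T, N}

def neg(a):
    return 1.0 - a

def conj(a, b):
    return min(a, b)

# Give P the glut value "both". Then P and ~P are each designated...
P = N
assert P in DESIGNATED and neg(P) in DESIGNATED

# ...and so is their conjunction: the contradiction P & ~P is not
# thrown out, it just takes the value "both".
assert conj(P, neg(P)) in DESIGNATED

# Yet an arbitrary Q valued false is NOT designated. So the inference
# {P, ~P} |- Q has designated premises and an undesignated conclusion:
# EFQ (explosion) fails in LP.
Q = F
assert Q not in DESIGNATED
print("Counterexample to EFQ in LP: contradiction admitted, explosion blocked.")
```

This also illustrates the distinction drawn above: the syntax (no EFQ rule) and the semantics (a glut value) are separable choices; some paraconsistent systems take only the first.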
I still think Bertrand Russell nailed it in a nutshell. 'Physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover.' You can trace that all the way back to the Greeks. They discerned that numerical qualities were persistent, stable, and directly knowable by reason, unlike sense-able things which are always mutating and never stable. From there is also derived Galileo's 'book of nature is written in mathematics'. That's why I could make sense of @Joshs passage about Husserl and Heidegger - they both analyse the 'history of ideas' through that lens - even though I can understand your perplexity.
Of course, I also think modern physics has gone way overboard with their reduction to mathematical qualities, but that is tangential to this thread. However the point about the implications of mathematical realism is still a good one.
Hence an appropriate response to @Olivier5's question, I thought! (I had Godel Escher Bach on my bookshelf for decades but must admit never more than skimmed it.)
The point of the post was to point out that paraconsistent systems avoid inconsistency by redefining it as other than (A & ~A), usually by adding a third truth value.
Nail a nutshell and it will crack.
According to the principle of explosion, if you accept one contradiction in a logical system, you accept them all.
I can meet you halfway and say that I can see that for formal purposes, the conceptualized infinite series of fractions we are discussing equals 1. But since it is impossible to instantiate an infinite series, what we are really talking about is a rounding off, a "for all intents and purposes". I don't understand why you related this to the 'three dots' example; who would not be able to see that three dots are three? Someone who actually couldn't count, maybe?
Yeah, you really got that flippant dismissive thing down.
Anyway, better that life is too short than it be too long.
Quoting Olivier5
I have not found problems (though, of course, no source is perfect). You asked about paraconsistency in context of engineering. I told you of a place where there is a real nice writeup (much more eloquent than, by your account, my own postings) about use in data systems; one could see how that could be adapted to an engineering context too.
So, my explanations are not eloquent enough for you, but you still are interested, but not interested enough to take a few moments from your too short life to read a professional account much better than I would write. Such a case you are.
Again, that reflects a fundamental misunderstanding of what '.999...' stands for.
What's that, exactly?
Here's an instantiated infinite series for you:
1,2,3...
Quoting Janus
A three-year-old. They have to count. They have not moved from the process of counting to recognising the number when they see it. Just like those who cannot move from seeing the process of adding 9/10, 9/100, and so on, to seeing 1.
It's an issue of pedagogy, not maths.
Again, it's not about a sequence "reaching" anything. '.999...' stands for the limit of a certain sequence. There is no "becoming" or "reaching". Simply, the limit is 1.
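To spell out the finite proof behind that (standard textbook material, nothing original here): the n-th partial sum is

[math]s_n = \sum_{k=1}^{n}\frac{9}{10^k} = 1 - \frac{1}{10^n}[/math]

so for any [math]\epsilon > 0[/math] there is an n with [math]\frac{1}{10^n} < \epsilon[/math], hence [math]|1 - s_n| < \epsilon[/math]. By the definition of a limit, the limit of the sequence is exactly 1, and '.999...' names that limit; no infinite process is carried out.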
I'm here to please.
That is not my understanding, which is: Certain paraconsistent systems do not avoid inconsistency; rather they avoid explosion. But, yes, in the semantics, three truth values is one way. Also, in syntax one way is to use three values in truth tables and take derivation rules based on those truth tables. But, if I'm not mistaken, there can be dialetheistic semantics for paraconsistent systems.
A set S of formulas is inconsistent iff there is a formula P such that both P and ~P are members of S.
As far as I know, that is the presumed mathematical definition.
Then hurry up and take our lunch orders.
We instantiate them all the time. I instantiated the series in question earlier in this thread.
Shh... not in front of the children...
yes, in paraconsistent logic (SEP)
Of course, I don't begrudge adding a rubric 'absolute consistency' that way, though I like the term 'non-trivial' better.
A set S of formulas is non-trivial iff there is a formula P that is not a member of S.
With that definition, with a paraconsistent logic, a set of sentences can be closed under deduction while being inconsistent and yet non-trivial. Pretty much another way of saying that EFQ does not obtain.
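A tiny illustration of those two definitions side by side (a toy of my own, using a pretend four-formula language): a set can contain a formula and its negation, and so be inconsistent, while still leaving some formula out, and so be non-trivial.

```python
# 'Inconsistent': contains some P together with ~P.
# 'Non-trivial': some formula of the language is not a member.
S = {"P", "~P", "Q"}
LANGUAGE = {"P", "~P", "Q", "R"}  # toy finite language for illustration

inconsistent = any(f in S and "~" + f in S for f in ("P", "Q", "R"))
non_trivial = any(f not in S for f in LANGUAGE)

# S is both: it has P and ~P, yet R is missing.
assert inconsistent and non_trivial
print("S is inconsistent but non-trivial.")
```

In a logic with EFQ, deductive closure would pump every formula into such a set, collapsing "inconsistent but non-trivial" into triviality; blocking EFQ is what keeps the two notions apart.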
By "instantiate an infinite series" I mean write it down as a full series, not in a shorthand form. I'm no mathematician, but that logical distinction is clear.
Quoting Banno
LOL, no that's just a finite set of marks on the screen. I get that it stands for an infinite series but it is not itself actually an infinite series. Are you getting it yet?
You misread it.
As I said, it's an issue of pedagogy. You need learnin'.
(the bit where I said I wasn't going to do this...!)
You just don't want to admit the distinction I pointed to, so you default to patronising mode instead. I understand the concept of "1,2,3,..." denoting an infinite series, but it is not itself an infinite series. As I said, I'm not a mathematician, but that logical distinction is simple enough; there can be no actual infinite quantities of anything.
End of the test. You get 0/10.
And we don't do that.
Quoting Janus
The word 'instantiate' has a certain meaning in mathematics. What you mean though - to type out in finite time and space individually all the members of an infinite set - is of course impossible. But that doesn't entail that the limit of an infinite sequence is not rigorously defined.
OK, thanks, I'm not familiar with the mathematics specific use of 'instantiate'. I haven't claimed that the limit of an infinite series is not rigorously defined, and you agree that an infinite series cannot be instantiated in the sense I meant it; so it seems we are not disagreeing about anything.
Of course, if one doesn't countenance infinite sets, then one might not countenance the classical notion of convergence to a limit. But that doesn't change that what ordinary mathematics means by '.999...' is not some kind of "reaching" but rather convergence to a limit.
To be in agreement, you'd have to agree that the limit of the sequence is rigorously defined.
I am not opposed to the idea of infinite sets. I can even accept that some infinite sets are larger than others. Logically I see convergence to a limit as analogous to an asymptotic approach, but I can also accept that the mathematical concept may be different, even though I don't have the background to properly grasp the difference.
My original point was only that what we understand intuitively can be understood in ways inconsistent with that in mathematics.
No problemo.
Quoting Banno
Well, it wouldn't be wrong to say that necessity is the mother of invention: if a people had no use for math, they would never have adapted language to make it math-apt.
Your point then is...I draw a blank at this point.
I agree to take your word for it, because I imagine that if I were more mathematically literate I would agree with you. In any case that was never something I was arguing against.
Informally inconsistent, yes.
Anyway, when we move beyond child-level thinking that there must be involved a "reaching" and instead we study rigorous mathematics, then we understand that .999... = 1.
Actually, this is implied after an infinite number of operations on a series. Like a Riemann sum.
There is an infinite sequence and there is the limit of that infinite sequence. There is no supertask - no performing an infinite number of operations - used.
But that's how it's usually phrased by others when thinking that you have to add an infinite number of terms in a series, yes?
I've not seen it phrased that way, especially in a rigorous exposition in classical mathematics. Not even in freshman calculus. I don't know what writings in classical mathematics you have in mind.
There is an infinite sequence, with a finite description. And then there is a finite proof that the limit of that sequence is 1. No supertask.
It's just a Riemann sum, Tones, lol!
What exactly do you disagree with in what I just posted?
With the way you were disregarding instantiating infinity in a series sum that apparently converges to 1 for 0.9...(9).
What I mean is the typical notation for a Riemann sum as n goes to infinity. So the implicit assumption is that you do reach a number, but as per calculus that number of steps is reached once you instantiate an infinitesimal...
Specifically, we don't do that even in simple freshman calculus. I have never read an author say there is an "implicit reaching" (whatever that would actually mean, as you have not given a mathematical definition of 'series reaches').
And basic calculus does not use infinitesimals. It is in the context of ordinary classical mathematics that
SUM[n = 1 to inf] 9/10^n
is defined as
the limit of the sequence {<1 9/10> <2 99/100> <3 999/1000>...}.
And then with a finite proof we show that
the limit of the sequence {<1 9/10> <2 99/100> <3 999/1000>...} = 1.
No infinitesimals adduced and no undefined "reaching".
[Edit: I fixed the notation of the sequence.]
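The shortfall of each partial sum can be exhibited exactly, which is a small numeric illustration of the finite proof just described (my own sketch, not from the thread; exact rationals are used so no floating-point rounding intrudes):

```python
# The n-th partial sum of 9/10 + 9/100 + ... falls short of 1 by
# exactly 10^(-n). No supertask is performed: each check is finite,
# and the algebraic pattern is what the finite limit proof uses.
from fractions import Fraction

def partial_sum(n):
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    assert Fraction(1) - partial_sum(n) == Fraction(1, 10**n)
print("1 - s_n == 10^(-n) for each checked n; the limit of s_n is 1.")
```

Since 10^(-n) can be made smaller than any positive epsilon by choosing n large enough, the limit, and hence the number '.999...' denotes, is 1.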
Well think about whether 1/x converges for the set of all irrationals, as x goes to infinity. It doesn't. So, you have to consider whether some value is reached or is the limit, no?
Of course, if the sequence does not converge then it's another ballgame. But we easily prove that the sequence we've been talking about does converge.
You seem to be offering up diversions - infinitesimals, nonstandard analysis, sequences that don't converge - when the point is actually as simple as I've shown.
I seem to think that the point is to demonstrate the issue pictorially with a converging series. So, something is "reached", no?
What? .999... is not a sequence. It's a number. That number is 1.
The sequence is {<1 9/10> <2 99/100> <3 999/1000>...}
and its limit is 1.
Once more:
SUM[n = 1 to inf] 9/10^n = lim of {<1 9/10> <2 99/100> <3 999/1000>...} = 1.
EDIT: Corrected formulation of the sequence.
How about:
[(10^n)-1]/(10^n) as n goes from 1 to infinity?
See https://en.wikipedia.org/wiki/Talk:0.999.../Arguments and the eleven archives attached.
It's not an argument worth entering in to; a troll's haven.
Not at all. If people struggle with understanding that 0.999... is 1, then 0.999... exists inside the sequence [(10^n)-1]/(10^n) when starting from 1 and letting n approach infinity.
So I will ask for Moderator support to discontinue this discussion on this thread.
By all means, folks, set up a thread of your own, or go back to the innumerable previous threads on the topic. Just go away.
What's happening?
I answered this before, and my answer has been deleted. So here it is again:
It is a poor analogy because a series of 7 can be instantiated (in the sense of the word as I mean it): 1,2,3,4,5,6,7, whereas an infinite series cannot. I was arguing in good faith.
:up:
Nice quote, Josh.
Quoting Joshs
This too.
:up:
There are five spoons on the table. How could any paradox, any alternate paraconsistent mathematics, make it the case that there are not five spoons on the table?
In Wittgensteinian terms, aren't these different language games?
And if that works for spoons, why not for bridges? If the mathematics led to the bridge collapsing, wouldn't we say that we made use of the wrong maths?
You asked me the question about there being five spoons a few pages back. You said
Quoting Banno
The point that I'm trying to make is that the table that ostensibly exists in the absence of any observers, is still an object of thought. Your imagining that collection in the absence of any observers, is still an imaginative construction on your part. I'm not saying that, therefore, the table or the spoons cease to exist when not observed, but that whatever you say, believe, or think about what exists or doesn't exist, depends on a framework of judgements which is a product of the mind.
Even if you imagine an empty universe before there were any subjects to observe it, that empty universe, even if characterised by scientific and empirical rigour, is still a mental construction, to the extent that you reference it or contemplate it. And if you don't reference it or contemplate it, then there's no subject of discussion. And to that extent, I am in agreement with Berkeley. Where I'm not in agreement with Berkeley, is that he doesn't recognise the role of universals in the ordering of reason. 'The fact is,' says a Thomist, 'that the human intellect grasps, first in a most indeterminate manner, then more and more distinctly, certain sets of intelligible features, which exist in the real as identical with individuals, with Peter or John for instance, but which are universal in the mind and presented to it as universal objects, positively one (within the mind) and common to an infinity of singular things (in the real).'
Number is a class of universal idea. The mind more or less effortlessly calls on it to organise its cognitions due to its innate rational ability. That is the ability that is denied by empiricist philosophy, with the consequence that maths is then treated as a 'useful fiction'.
That's all for now.
We'd probably say the math they used was wrong.
Of course your imagination of an empty universe is a mental construction, that is a tautological truism; but the empty universe itself (if it exists) is not. It puzzles me that you apparently cannot grasp that basic distinction.
It is 2 that I am calling into question. It puzzles me that you apparently cannot grasp that basic distinction.
More to the point, with respect to mathematical realism - I'm arguing that mathematical objects (such as numbers) are real, in that they're the same for all who can count, but that they're not empirical, because they're perceived by reason, not by sensory perception. Scientific realism, on the other hand, argues that sensory objects are real irrespective of any perspective, that is, 'mind-independently'. It seems to me that most of the argument against mathematical realism comes down to this conflict - because mathematical objects are not sensible, they can't be regarded as real according to scientific realism, and so must be treated as conventions or useful fictions (by, for example, 'fictionalism'). And I think this says something really important.
We don't know for sure whether the entities that science describes and every day perception encounters are mind-independently real. And it also depends on what you mean by "entity". Are you saying that it is impossible that these entities are mind-independently existent?
So, I can see the basic distinction between the possibility that those entities are mind-independently real and the possibility that they are not. They are (the) two (logical) possibilities, granted. So what distinction is it here you think I am not grasping?
You seemed to be claiming that an empty universe is (or definitely would be, not merely may be) nothing but a mental construction. And I responded saying that obviously our imagination of the empty universe is a mental construction, but that it does not follow that an empty universe must be a mental construction. There might have been an empty (I took you to mean empty of life or consciousness, not totally empty) universe prior to the advent of life. In fact it seems that all the evidence points to the conclusion that there was. And if the universe existed prior to the advent of life and consciousness then it could not have been a mental construction, obviously, unless you were to posit a universal mind in which it existed. Personally I think the idea that it simply existed is the more plausible view, but I grant you that could be thought to be a matter of taste.
Quoting Janus
It is empirically obvious and furthermore, true, that these are persistent objects. As I've said, I'm not claiming that, outside your or my perception, objects cease to exist. That is their 'imagined non-existence', or you imagining that they go out of existence. Like G E Moore asking if the carriage wheels disappear when all the passengers are boarded.
As I said, we can imagine the universe with no humans in it - which was empirically the case until a couple of hundred thousand years ago. But that doesn't take into consideration the sense in which even such an 'empty universe' is also an intellectual construct. The mind provides the framework, on the deepest level, within which any such idea is comprehensible. But then 'scientific realism' doesn't see the role that the mind plays. It believes the universe just is as it is, and would be this way even if nobody was in it. It 'brackets out' the subject, not seeing that the subject is still intrinsic but by that act, forgotten. (Schopenhauer makes a remark about that exact point*.)
There's methodological naturalism, which is to act as if an object of analysis appears just as it is, without the slightest hint of subjectivity. Which is all well and good and perfectly proper. But when that becomes a metaphysical postulate - that the Universe really is just as it is, without there being any observers in it - then it oversteps, by taking a methodological step as a metaphysical axiom, which it is not.
That is one of the implications of that essay I often refer to, The Blind Spot of Science. BUT, this is a severe digression within Banno's thread, so I will cease and desist.
----------------
[quote=Schopenhauer, World as Idea 35]All that is objective, extended, active— that is to say, all that is material — is regarded by materialism as affording so solid a basis for its explanation, that a reduction of everything to this can leave nothing to be desired (especially if in ultimate analysis this reduction should resolve itself into action and reaction). But we have shown that all this is given indirectly and in the highest degree determined, and is therefore merely a relatively present object, for it has passed through the machinery and manufacture of the brain, and has thus come under the forms of space, time and causality, by means of which it is first presented to us as extended in space and active in time. From such an indirectly given object, materialism seeks to explain what is immediately given, the idea (in which alone the object that materialism starts with exists), and finally even the will from which all those fundamental forces, that manifest themselves, under the guidance of causes, and therefore according to law, are in truth to be explained. [/quote]
I'm not perplexed; I just outlined the possibilities.
Quoting Wayfarer
It is you who seem to be perplexed on this point. So you're claiming the universe did not exist prior to consciousness, or you are claiming that it could not have existed? And I'm asking about it actually having existed then, not about our imagining now it having existed then.
Quoting Wayfarer
I wouldn't worry about that, Banno has himself also derailed this thread. But if you are worried about it, then answer me in the 'realism' thread instead.
Let’s talk about what we actually experience as we make our way toward that door that we remember being there. As we approach it there is absolutely nothing about the visual scene that reproduces itself exactly from moment to moment: the lighting; our angle, speed and style of approach and the accompanying perspectival view; how our eyes and neck and body turn in relation to the door; and how we need to move our body to open it and get through it. The classic empirical argument is that it is just the appearances that change, not the object itself. The phenomenological argument, however, is that we make the mistake of grounding the appearances on a notion of the identically self-persisting object which is itself constructed by us out of changing appearances.
So is there really a door there, or is it just a subjective and intersubjective construction? The answer is both. We don't make up or imagine the door.
We perceptually construct a reliably consistent unity by cobbling together memory, a continuing flow of new sensation, and, based on this, expectations of what is to come next. This perceptual cobbling is what we see as this door. As we approach the door, we arrive with a rich web of perceptual expectations of what we are about to encounter. As we begin to see it, what we see forces our perceptual system to rapidly adjust these expectations to the novelties of the current perspective. We don't generally notice this adjustment taking place, and instead simply say we are seeing the 'same' door. We are indeed seeing something similar, and we only notice the discrepancies if they are pronounced (under certain lighting conditions it may no longer look like a door) or with certain brain injuries that interfere with our ability to make perceptual adjustments between expectation and reality. What we can never have is evidence of the temporally self-identical persistence of objects that doesn't presuppose what it claims to prove.
You could legitimately argue that, for all intents and purposes, making the classical empirical claim that objects persist over time as self-identical vs making the phenomenological argument that we construe self-identicality out of self-similarity leads to the same experience of the world and of science. I think this is true at the perceptual level, and with regard to the natural sciences, but the classical view becomes limiting when we continue to apply it in the social sciences, particularly to psychological phenomena like empathy, affectivity and psychopathology.
“ For Husserl, physical nature makes itself known in what appears perceptually. The very idea of defining the really real reality as the unknown cause of our experience, and to suggest that the investigated object is a mere sign of a distinct hidden object whose real nature must remain unknown and which can never be apprehended according to its own determinations, is for Husserl nothing but a piece of mythologizing (Husserl 1982: 122).
Rather than defining objective reality as what is there in itself, rather than distinguishing how things are for us from how they are simpliciter in order then to insist that the investigation of the latter is the truly important one, Husserl urges us to face up to the fact that our access to as well as the very nature of objectivity necessarily involves both subjectivity and intersubjectivity. Indeed, rather than being the antipode of objectivity, rather than constituting an obstacle and hindrance to scientific knowledge, (inter)subjectivity is for Husserl a necessary enabling condition.” (Dan Zahavi)
Banno has spoons on the table. I prefer beans if there are five, or ducks if they are aligned, but spoons will do. A teaspoon, a dessertspoon, a soupspoon and a couple of love-spoons. Five spoons. So I pick them up one by one and say to you 'this is a spoon, it is not a number' each time until I have all the spoons back in the drawer, and then I say: 'there is no five on the table, and I have not picked it up. Therefore there is no five and never was. It was empty talk.'
I doubt you are convinced. I hope you are not. It's the old stuff-and-arrangements switcheroo. There is stuff, and stuff is always in some kind of arrangement - the cat is on or off the mat, the spoons are on the table or in the drawer, the ducks are in a row or not, the beans form a hill or do not. This is the reality we like to talk about and share tales of: cats and ducks and beans and spoons, and arrangements and quantities and relations of them. Things arranged - cat on mat - each word refers to reality - things in relation. The relation is as real as the things. But it is not the stuff. This is conventionally physicalist materialist, but materialists do not deny the reality of structure, and mathematics is the study of structure. One counts things, and one does not count nothing. One gets things in a row, or one on another, but not nothings. As they say in schoolboy physics, "state your units". There is no 'on' unless something is on another thing, and there is no 'five' unless there are five spoons, or beans, or schoolboys.
Quoting Wayfarer
I might read this in either of two ways. Perhaps as the tautology that if we do not talk about it, then it is not the subject of discussion. With that I agree. Alternately, that if we do not talk about it, then it isn't there. With that, I disagree. And indeed you seem to be saying the former, since you also say Quoting Wayfarer
As for the universals and so on, one's mind only gets to organise the spoons into fives if there are indeed five. That seems to be 's point. So it isn't just mind that counts.
But here:
At about age four or five a child stops having to count the dots and sees four.
That is, what exactly is added by the phenomenological analysis? Why wouldn't it 'drop out', like a boxed beetle?
Because it matters greatly when we get to the higher level of political, ethical, scientific and personal conceptualization. At these levels, ‘dropping out’ the acknowledgement that such frameworks of understanding, like simple perceptions, are not inner representations of pre-existing data but constructions which generate the criteria for the evidence that appears within them leads to political and ethical violence, interpersonal conflict and conformity rather than innovation. The important doorways in our lives are not geometric but metaphorical shapes. These are constantly shape-shifting, and we end up being barred from many opportunities to pass through new portals of understanding because of our assumption that the empirically true is what persists in itself, independent of our conceptions, goals and aims.
Only what is derivative can drop out. Phenomenology is the condition of possibility for such notions as identical self-persistence, so one can bracket off what is empirically real as objectively present and not lose any of what is essential to experience.
Boxed beetles drop out precisely because they become meaningless without their connection to contextual use. So what should drop out isn't the phenomenological analysis but the classical empirical assumption of self-identical persistence.
Heidegger makes such a point when he says that pointing to a door as simply an objectively present object, rather than as part of a contextually relevant activity for us, is a failure to understand it any more.
“ When we just stare at something, our just-having-it-before-us lies before us as a failure to understand it any more.”(Being and Time)
But that is due to the innate ability which is unique to human children. Some animals can recognise up to about 2-3, but I think the point stands. In any case it's a very simple illustration, humans can recognise all manner of complex symbolic relationships, something which to some extent is learned by experience, but unless the innate capability existed, then they would have no chance of learning it. No amount of effort has ever imparted significant language skills to non-human primates.
Quoting Banno
As I said to Janus, the principle I'm calling into question is this one:
'The entities described by the scientific theory exist objectively and mind-independently. This is the metaphysical commitment of scientific realism.'
I am questioning that they exist 'objectively and mind-independently'; in other words, I'm questioning the metaphysical commitment to scientific realism.
One of our learned contributors made the point so well that I can do no better than reproduce it:
Quoting sime
The key point is the very last one. Indeed this is why I keep arguing, to much general annoyance, that the net effect of early 20th C physics has been to undermine scientific realism (and thereby classical materialism). The role of the observer is inextricably entwined with any statements, even those that are 'true for all observers'. For practical purposes they can be 'bracketed out', but it's when this 'bracketing out' is taken to be a philosophical principle, rather than a methodological step, that 'scientism' enters the picture.
Recognizing 4 dots as the number 4 is just pattern recognition, not a mathematical ability. This is no different than recognizing a bunch of lines as a house. The pattern of dots on the dice begins to look like a picture of something, in this case a particular number, when the concept 4 is paired with the pattern of dots enough times. This is not unlike a dog associating the sight of a food dish and the sound of a crinkling package with the image of food.
What is innate is simply synthesizing new levels of sense on the basis of what appears similar to something else. The similar becomes fused in our mind with what preceded it, and we then have a new unity. Animals do this too, but their synthesizing abilities fall far short of ours, which is why their language capabilities are so rudimentary.
I like the rest of your post.
Maybe we can agree that what is ‘temporally persistent’ doesn’t have to mean temporally self-identical in order to give rise to the appearance of enduring objects. But keep in mind that whatever it is that appears immediately before us and other animals in perception is just a small part of what we actually experience as actually present. The rest comes from memory and is fused with that small bit of stimulus that comes to us from outside.
We can see how important this synthesizing filling-in becomes when we think about how much of our human environment consists not of simple physical entities but of cultural value objects. In our homes, for instance, chairs, appliances, cupboards, couches and computers are all objects for us based on how we use them. A chair is for sitting, a cupboard is for storage, etc. How we look at such things, how we interact with them, even our ability to see them as single, unified objects, is dependent on our understanding of what they are for.
Does a dog see a computer as a single entity? How could it? A desktop computer is a mouse, maybe a tower, a monitor, maybe a printer. But to a dog they are only objects to the extent it can grab them in its teeth and move them around. The same is true of a three-dimensional carving of a Chinese word symbol. To the dog, and to the human who doesn’t recognize it, it is a random pattern of lines and curves. Are they seeing the same thing as the person who can read it? No, that person has synthesized something more complex. Is the symbol less real than what the animal sees? Not if we propose that an ‘object’ is a way in which an organism interacts with an aspect of its world. Almost all of the cultural objects in the human world are objects that don’t exist for other animals, because their interaction with their world is so much simpler.
Piaget never renounced the notion of the real, but said that human individual and cultural development was a process of embedding the real within more and more differentiated schemes of relation. He said we were always on the way to the object, that objectivity was an asymptotic limit towards which human knowledge progressed. What he meant was that the reliability you associate with persistent objects like rocks and doors is only really attainable as we create more and more complex schemes of reciprocal relation, allowing us to predict and anticipate the changes in our world in more and more adequate ways.
Persistence and self-identicality don’t add up to meaningfulness, reliability and usefulness if they are meant to pertain to what something supposedly is in itself, outside of its role in an organism’s functioning.
If that were so then the position of the dots would make a difference.
It doesn't.
Subitising is more than just pattern recognition, although that is part of it.
We can also learn to immediately recognize a pattern of 5 or 6 dots as its corresponding number. The more dots that are involved, the more important the placement of the dots is. My own experience with dice is certainly that way. I instantly know what ‘5’ looks like because I’m familiar with its placement on the dice. Change that pattern and I guarantee you there will be a slight lag before I process it as 5.
“In playing dominoes, for example, we grasp groups of ten to twelve dots with one glance. Indeed, we even assess their number with total immediacy. It must be observed, however, that in such cases we can speak neither of an actual colligating nor of an actual enumerating. The number name is here directly associated with the characteristic sensuous appearance, and is then recalled on each occasion by means of that appearance without any conceptual mediation. With groups that large, as everyone can test, a direct and authentic collection and enumeration is an impossibility.” (Husserl, Philosophy of Arithmetic)
Same holds for most of what philosophers like to call innate abilities.
Quoting Banno
Here’s some more armchair musings from Husserl. You’re going to have to refute them with your own arguments. Let the show begin.
In Chapter XI we treated in detail the problem of how immediate appraisal of groups comes about without the actual carrying out of the relevant psychical activities - those of individual apprehension and collection. A unitary intuition is given to us, and in one glance we judge: a group of balls, coins, and so on. To explain this peculiar fact we referred to the figural Moments of the unitary group intuitions which enter into an association with the name and the symbolic concept of the multiplicity - mediating the reproduction of the latter, and thereby making possible the immediate appraisal of the phenomenon as a group. Immediate number estimation presents a quite similar problem and the means referred to completely suffice for its solution.
The matter stands forth most clearly in examples, as is abundantly illustrated in play at dice, dominos and cards. Each surface of a die possesses a characteristic fixed configuration of dots which enters into an association with the number name (or with the symbolic concept of a certain number named by it). If several dice are thrown simultaneously, then either there occurs a rapid quasi-summation utilizing the tables of addition - in which, of course, the mere number words intervene - or else, given long practice, the number word corresponding to the sum of dots is reproduced immediately by means of the figural character of the total complex phenomenon.
The number of configurations to be impressed upon us for this is in fact only a limited one. The same holds true for play with dominos, and it is well known what a knack experienced players have for instantaneous estimation of numbers. They often can count up to forty dots in one glance. In the examples considered up to now the configurations were of a fixed type, or even more so, were closely related in type. In order to explain the latter case it should be pointed out that a die surface, for example, in each change of position through rotation, receives another figural character, and that it therefore must basically be the corresponding generic character that establishes the association [with the number]. This observation makes it clear that the difference between the cases considered and others where wholly arbitrary distributions of objects are estimated as to number is not so great as it might at first appear. However three cleanly separated objects may be distributed in the field of vision, they together form a characteristic configuration - presupposing that they can in general fuse into an intuitively unitary appearance of a group.
The various three-point configurations which arise, depending upon the varying relative positions of the objects, are indeed well-distinguished in intuition. But they possess so much striking analogy that the character common to them all can mediate with certainty the reproduction of the number three (or, more precisely, of the name "three," along with the symbolic concept of a specific number named by it). A somewhat more essential difference is exhibited by the figural character only in cases where the three objects come to lie in a straight or approximately straight line: a boundary case whose quite noticeable special character makes possible the association of the number. It is similar with groups of four objects. Here the configuration exhibits either the familiar quadrilateral type, or else other characteristic types show up - as when all four or any three of the objects lie in a row, or when one object falls within a triangular figure formed by the three remaining ones. And so on. The more objects the group includes, the greater is the respective number of intuitively distinct figural types, and thus it becomes understandable why in reliable number estimation we usually do not get past groups of five members - unless by means of constant, methodical practice. Preyer, who did experiments on this, is of the opinion that in the latter case the attainable limits may lie, on the average, at twenty. Nevertheless, the famous calculator Dahse could instantaneously estimate some thirty arbitrarily distributed objects.”
“…after repeated enumeration of many types of object distributions, the number names enter into fixed associations with their typical figural characters. Moreover, I would hold it to be quite well possible that even someone completely ignorant of enumerating could bring the number names into association with those figural characters, and develop into a skillful domino player, for example.”
https://www.mathematicalbrain.com/pdf/SUBIT.PDF
Notice that pattern recognition played the same role in subitizing and in counting.
Quoting Joshs
No, I don't. The science will settle the issue. Subitizing is not just pattern recognition, but involves counting.
Edit: Just to make my point clear, your claim was that subitising is just pattern recognition. In this study it was shown that pattern recognition showed up in groups of four or less, and also in groups of more than four. That is, pattern recognition was found in both subitising and counting. It has a part to play, but is not the whole of the story.
@Isaac?
Yes, well, sometimes their absence also makes itself reasonably clear.
There's a true saying, 'you can't fake talent'.
I agree that pattern recognition and counting are different abilities. I thnk too much importance is accorded to the former by a lot of people.
:up: Point well made.
https://core.ac.uk/reader/224990548
https://www.jstor.org/stable/1422405
But it is not the whole story.
Notice that the study I cited used both canonical and non-canonical spatial dispositions.
Thank you for citing something with a bit of empirical content... :wink:
The debate is still going on, I'm afraid. I've not kept up with this thread, but as I understand it, the debate is about the automaticity of number recognition, yes? The conflict between various Stroop-like experiments manipulating visuo-spatial indicators of magnitude with actual numerical representations (a big number '7' vs a tiny number '9', that sort of thing). Cohen Kadosh pretty much put that to bed in 2008, so papers from before then would have to be viewed in the light of a then-open debate which is now considered less so. As any follower of developments in neuroscience will have come to expect, the matter turns out to be much more complex. Elements of visuo-spatial signals (size, pattern) are taken into account alongside prior expectations from things like ordinal and magnitude judgements in context. In fact Cohen Kadosh found that the specific instructions given by the experimenters led to different patterns of activation. Ultimately there's a significant degree of non-abstract numerical representation: numbers are represented in the brain not as the concept 'three' but as a combination of patterns, language, magnitude, numerosity, and even more synaesthetic relations. We know from something as simple as digit recognition that the inferotemporal layer can make 'best guesses' with ambiguous signals from the V4 regions (higher-order perceptual features like overall shape, texture etc.). It's likely that the final behaviour (assigning a number word, performing a calculation, weighing magnitudes...) is both dependent on, and influences (by backward-acting suppression), the balance of collected 'evidence' in the V4 region.
Basically, a range of evidence is gathered, and which evidence takes priority is dependent on the task at hand. (Don't know why I didn't just say that at the beginning; still... it's written now.)
What I would add, though, is that the often-touted studies on infant number recognition are being misused to support mathematical realism. Infant studies done thus far just about all support the prevailing magnitude-estimation hypothesis; they are not about infants recognising 'the eternal number three' or any such. Magnitude estimation and numerosity are two different processing streams and shouldn't be confused. One can estimate the relative magnitude of two pages full of dots without having to, or even being able to, count them. Fine-scale magnitude recognition is both granular (i.e. based on individuating objects) and scalar. It's this granular magnitude recognition that's often misquoted as support for innate number recognition, but experiments such as this one https://pubmed.ncbi.nlm.nih.gov/11814309/ (and others subsequently) demonstrate that it is granular magnitude recognition that underlies these infant abilities.
Quoting Isaac
Complexity increasing with yet another loop.
Quoting Isaac
Which in my ignorant head harmonises with the Wittgensteinian rejection of concepts as pieces of furniture in minds. If we are to look to use instead of meaning, we would not expect to find a "spot" in the brain for each number, but instead to see something reflecting the range of stuff we can do with numbers.
Quoting Isaac
Interesting then that subitising seems, according to the study I mentioned, to use the same networks as counting - is that the same as numerosity?
https://octolab.tv/can-an-octopus-count-viewer-request/
Yes, that's it. The prescience of some of Wittgenstein's ideas still surprises me.
Quoting Banno
Not quite. As the authors say "we show that left-lateralized parietal activation is modulated by numerosity and is not involved in subitizing 1– 4 dots". The controversy of the paper is over the involvement of the right-lateralized parietal area, which is involved in this granular magnitude estimation I referred to earlier, but had not been postulated to be active in previous studies like this. I should be clear though, that this area is involved in a lot of fine-grained attentional shifting activities, of which counting is only one.
The study seems really interesting and raises some serious questions about previous models. My gut feeling is that we're seeing the involvement of multiple related processes because of the experimental set-up, a set of three dots 'means' 3, which, when observed is going to have triggering effects in the parietal areas anyway, and possibly some backward acting suppression on whatever pattern recognition was being employed - imagine someone shows you symbolic picture of a train, you might trigger linguistic areas ("train") and little else, but a train enthusiast might engage regions involved in more detailed edge-discernment, simply because they're expecting to see some feature you and I wouldn't even know ought to be there. In this sense, we're all number-enthusiasts.
It makes it difficult to study. We want to see what parts of the brain recognise three dots, and our brains, like enthusiastic five-year-olds, are excitedly telling us everything they know about 'three'. I'd love to see such a study done on people who are innumerate, like very young children, who would be less likely to have enumeration priors for such an experimental setup.
Well, the relating is as real as the things; so is the pointing of a relation word at them. There just aren't any relations being pointed at, like there are things being pointed at. Unless you're Plato.
You can throw together brain imaging studies of cognitive tasks, add reaction time measurements, recordings from groups of neurons and other such quantitative readings associated with particular behaviors, and get consensus from a large group of psychologists over the accuracy of these results. But raw empirical data and interpretation are two different things. Mirror neuron studies and theories of empathy are a good example. Almost all psychologists involved in this area agree on the specific neurological findings, but there are at least four distinct theoretical camps explaining how mirror neurons contribute to human and animal empathy. These camps differentiate themselves along philosophical lines.
So the raw data will give us guidelines concerning the meaning of, and difference between, enumeration and subitizing, but the science won't settle the issue of what counting, subitizing and their distinction are without you picking philosophical sides first. I will say this: there seems to be more and more convergence these days among Wittgensteinian, phenomenological and neuropsychological approaches to cognition in general.
Which leads me to wonder what we're debating here, if anything. I read Isaac's posts and found them very helpful. I don't see that anything in them contradicts Husserl's analysis. Of course, his is conducted at a different level from the neuroscientific studies. But I think there is agreement that effortful enumeration is one category (or maybe a series of related categories) of mnemonic process, and subitizing can reasonably be linked to a different class of processing that bypasses the intense demands on working memory by drawing from learned patterns in long-term memory and using them to fill in (Isaac would say pattern match).
Let me throw in a little more detail for the heck of it. Described at a phenomenological level, effortful enumeration involves separating out, identifying and counting each individual dot in a pattern of dots. Each isolated individual must then be assigned a numeric value. To do this one relies upon a pre-learned mnemonic sequence that one must keep in mind during every stage of the counting. This mnemonic allows us to quickly recall any number name by its learned association with a lower number: the name 'two' cues 'three', 'three' cues 'four', and so on. This is similar to the way we remember the sequence of letters of the alphabet, or song lyrics or melodies. The effort comes in when we have to make sure that we are not recounting previously counted dots.
This means we have to use a strategy (finger pointing, blocking off segments of the pattern) to remember what we've already counted as we go along.
Subitizing frees us of this short-term memory effort by bringing up from long-term memory an almost identical version of the shape of the whole pattern we are trying to count. We may still have to begin by enumerating a small number of dots before we can access the pattern as a whole. This may be analogous to reading words: our prior expectations allow us to rapidly fill in rather than having to process each letter individually. But we likely begin with a rudimentary analysis of lines and curves before letters pop into view. I wouldn't be surprised if this preliminary sequential processing of simple, isolated lines made use of the same area of the brain as enumeration.
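The two processes described above can be caricatured in a few lines of code. This is purely a toy sketch of my own, not a cognitive model, and all the names in it are made up: enumeration walks the dots one by one while keeping track of what has already been counted, whereas subitizing just matches the whole configuration against a store of already-learned patterns.

```python
# Toy contrast (hypothetical illustration only): effortful enumeration
# vs. subitizing as whole-pattern lookup.

def enumerate_dots(dots):
    """Count dots one at a time, marking each so it isn't recounted."""
    counted = set()          # the 'strategy' for remembering what's done
    count = 0
    for dot in dots:
        if dot not in counted:
            counted.add(dot)
            count += 1       # step to the next name in the number sequence
    return count

# Subitizing: the whole configuration is matched against patterns
# already stored in long-term memory; no per-item counting happens.
KNOWN_PATTERNS = {
    frozenset({(0, 0)}): 1,
    frozenset({(0, 0), (1, 1)}): 2,
    frozenset({(0, 0), (1, 1), (2, 0)}): 3,   # a familiar triangle 'means' 3
}

def subitize(dots):
    """Return the count at a glance if the pattern is familiar, else None."""
    return KNOWN_PATTERNS.get(frozenset(dots))
```

An unfamiliar, random arrangement makes `subitize` return `None`, and we fall back to the slow, marked walk of `enumerate_dots`, which is roughly the division of labour being discussed.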
I'm reminded of George Miller's famous 1956 paper, 'The Magical Number Seven, Plus or Minus Two'. He wrote that we can only keep around seven unrelated items in short-term memory (which is why such things as phone numbers are about that length). In order to recall any larger number of items we have to 'chunk' them: find a way to link them together as a unity that can be recalled all at once. The key is that they have to be associated meaningfully. There are all kinds of well-known mnemonic techniques that achieve this, such as the pegword method. Take a list of unrelated words, like a grocery list, and associate each one with some object on a well-trod path of yours, such as walking through the rooms of your house: imagine each word in some humorous, shocking or ridiculous concrete way with the refrigerator, the front door, etc.
Notice that our number system is constructed for easier recall. Take Roman numerals. Rather than just an increasing series of vertical lines, the sequence of numbers is 'chunked' at regular intervals: after III comes IV instead of IIII. Similarly, 0 through 9 becomes chunked at 10, and then regularly thereafter. So we don't actually do a lot of counting. We mostly use shortcuts to avoid having to count.
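The Roman-numeral chunking is easy to make concrete. A throwaway sketch of mine (nothing from the thread): the notation amounts to a greedy walk down a table of pre-chunked values, which is exactly why IIII never appears and no run of strokes ever has to be counted.

```python
# Sketch of the 'chunking' in Roman numerals: instead of piling up
# strokes (IIII, IIIIIIII, ...), the notation regroups at 4, 5, 9, etc.

ROMAN_CHUNKS = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n):
    """Convert a positive integer to Roman numerals, chunk by chunk."""
    out = []
    for value, symbol in ROMAN_CHUNKS:
        while n >= value:      # greedily take the largest remaining chunk
            out.append(symbol)
            n -= value
    return "".join(out)
```

So `to_roman(4)` gives "IV" rather than "IIII": the reader never parses more than three identical strokes in a row, which keeps every group comfortably inside the subitizing range.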
Do you have any objections to the assertion that both effortful enumeration and subitizing are constructive cognitive activities that are common to a range of phenomena (assigning letters of the alphabet to individual dots, say), of which number is just one example?
You seem to be comfortable, as I am, in determining number in Wittgensteinian terms as a wide variety of use-dependent senses. Furthermore, there is nothing 'pure' about number, either from a realist or a platonic perspective.
I must be Plato, then. But no: when the cat is on the mat, the real cat is really on the real mat. But there is no eternal realm containing the form of 'on', nor the forms of cat and mat. Relations are real. This is really a response to your post, and not really unrelated to it. But mathematics is the study of relations in the abstract, and that is why counting works for beans and spoons and sheep. To put it another way, mathematics is the study of possible relations, and of course not everything possible is actual. But when relations are actual, one can indeed point to them; and it is an ambiguity of pointing that Mr W. alludes to somewhere, that one cannot exactly tell whether one is pointing at the left pant leg, the right pant leg or the whole pair, or at its lurid pink colour, or at something else, because pointing fails to specify its units.
I don't think we are.
In a previous life I taught teachers and parents the importance of subitising. The thinking at the time was based on studies in which it was found that an inability to subitise was an early sign of dyscalculia, and hence there was a need to ensure kids could subitise. One of the tasks then was to convince folk that subitising was more than recognising the patterns on dice. Indeed, it was going beyond the recognition of the iconic patterns that was thought important.
My own experience is that if groups of more than about four objects are arranged "geometrically" I can recognize the number of objects immediately. If they are placed randomly I may have to count them; that is, the number of the group is not instantly recognizable. Others' experience may be different, obviously.
I tend to think the recognition of numerosity and the beginnings of numeracy do begin with pattern recognition. If counting begins with noticing multiples of similar objects, then apprehending similarity of conformation (which is a kind of pattern recognition) would be important.
Here's the thing: our not talking about something does not make it disappear.
When you put the spoons back in the drawer, they do not cease to exist. You do not go to the shop to buy new spoons because you surmise that, since you can't see the ones in the drawer, they no longer exist.
That is, what you have said here seems to be in error. The subject is still there even when we do not talk about it.
The usual idealist reply is to ask for proof that the spoons in the drawer still exist, as if this were an empirical claim. Of course it is not. But you and I will both go to the drawer for the spoons, and not order new ones online. That is, the idealist indulges in the philosophical pretence that we cannot know if the spoons are really in the drawer, while for all other intents and purposes behaving as if they are.
Quoting Wayfarer
I'll join you in questioning whether absent things exist "objectively", since as I've argued the term is next to useless. The 'mind-independent' bit is where we differ. It seems you think that a mind is needed in order that what you think of as the unknowable 'whatever' of the noumenon can be imbued with "spoonness", or something along those lines. What I've pointed out is that the assumed unknowable whatever is, by that assumption, unknowable, and so cannot form part of our analysis - we can't say anything about it. This is to say that the world is indeed interpreted, put into the context of our thoughts and actions, and to take seriously the point that, nevertheless, there is a world that is so interpreted. That we can't say anything about the supposed noumenon renders it outside of our considerations, but importantly this does not mean we cannot talk about stuff.
The only alternative would be to suppose that since we cannot say anything about the noumenon, we cannot say anything about spoons, tables and drawers. This is of course Stove's Gem, and there are folk hereabouts who take this seriously. Don't ask them for a cup of tea.
I've been rabbiting on about direction of fit. It fits here, too. What I'm advocating is that treating the world as "mind-independent" is what we in fact do. The arguments tend to be oddly passive, as if all we ever do is absorb phenomena. But of course we pick spoons out of the drawer, we scoop sugar, we stir the tea; we interact, moving from world-to-word to word-to-world intentions freely, indeed blithely.
And that's what realism is. Not the conclusion of an argument but the very presumption on which what we do sits.
It takes a philosopher to decide otherwise.
You seem to think your view gains purchase from considerations of relativity and quantum magic. That would require some extensive and clear argument.
Relativity suggests that the laws of physics are true for any observer. This is a clearer way to think about these laws than the supposed "god's eye view", that the laws are true for every observer.
And Quantum mechanics' supporting your contention of the centrality of the observer depends on what counts as an observer: a mind or a measurement. This is an issue of contention in physics.
Perhaps, but on the other hand if numeracy can be explained by recognizing similarity, difference, repetition and pattern in real objects, then there would be no need to appeal to the reality of universals, a kind of realism which, ironically, is really idealism.
Not only recognizing similarity, difference, repetition and pattern, but use.
hence,
Quoting Banno
Kind of like the beetle in the box? We can of course say the spoons still exist outside of our interaction with them, but saying so is meaningless outside of some use such an utterance serves relative to our pragmatic purposes. Notice that claiming use-independent existence is meaningless is quite different from claiming that the thing-in-itself is unknowable. Kantian idealism isn't denying the existence of a world of use-independent givens. Wittgenstein and phenomenology, however, are arguing that such assumptions are pointless and don't do anything for us, except to the extent that they emerge as relevant out of some ongoing project. It's interesting to note that a spoon can be meant as a cultural object as well as a physical object. When I'm hungry and look for my spoon I'm searching for an implement to feed myself. When I look to see if there is object permanence to a physical object called a spoon, my aim is different. Or I could be looking for the last entry in the book that I am writing. When I find it I can say that it existed when I wasn't there. But does it exist for others, too? Something certainly exists for them if they locate my words on a laptop. But I know they don't understand the words the way I intend them. So what they find is not the same for them as what I find.
And when I find my previous words, I notice that they seem a little different than I remember them when I typed them. Just the act of going back to them changed their sense in some subtle way. Is this experience so different from that of locating physical objects that I have put away? Is there no interpretive change in the sense of what the object is for me as I return to use it day after day? So yes, I could say that the spoon continues to exist without me, but now I'm realizing its existence WITH me is one of a continual contextual shift in sense over time. So if that's what underlies the so-called self-identical persistence of the object when I'm using it, then it seems beside the point to posit persisting self-identically for the objects that are not currently being used by me.
Yes! Our utterances and acts are what constitute meaning, in so far as meaning is anything at all. And so of course that meaning is in a state of flux induced by our changing acts and wants.
Quoting Joshs
And yet it is what you and I do; here, you in looking for the reply you now read; me in writing it with an expectation that it reach you. Beside the point? What could be more salient here, now than your reading this?
Yeah. Some of my wife's early work was with the development of children's writing recognition. A similar (I think) thing is found there. The recognition that this 'G' is a letter G no matter what font it is written in, is a skill that is a layer above the basic pattern recognition and into the more Bayesian modelling (though we didn't think of it in those terms then). It's as much about suppressing extraneous data as it is about processing relevant data. I think people (universalists?) too often think of this process as leaving behind some kind of platonic 'essence' of 'G', but it's not, the data discarded in some contexts overlaps with the data included in other contexts. It's about applying fast heuristic hypothesis testing, leaping forward in the word, bringing in context, sentence meaning, location in the word... My guess is that subitising is the same.
She had children who had to learn to read one font at a time...
But, you're right, we're very far off topic.
All those have to be in place before there can be any use. But sure, use is obviously important; we, like all other species, are basically pragmatists.
Gs written in most fonts are still recognizable as a particular configuration or pattern.
Thanks for the considered reply, although I think it's still rather 'argumentum ad lapidem'. But I will leave off for now and think about it some more.
Yes, our anticipating into the next moment is absolutely central to what salience, mattering and pragmatic use are all about. Maybe I've been reading too much phenomenology, but when I think about the spoons being where I look for them, I don't have in mind the persistence of a thing, but an expected new variation in an ongoing performance. The performance is the enacting of a body-environment interactive cycle. The spoon isn't an independent element that just happens to participate in the performance; it 'drops out of' the performance as a derivative byproduct. If a 'spoon' is only a slot in an ongoing narrative and body-world performance, then the looking for and finding of a spoon just demonstrates that, as self and world feed back into and modify each other moment to moment, there is a referential continuity to this creative becoming that gives our experience a thematic consistency and predictive utility. One could say the objects of our experience self-persist by returning to themselves and to us differently but recognizably, in relation to what we want to do with them. The world talks back to us, but only in response to our formulations. Its feedback changes those formulations, which then trigger a newly modified talking back from the world. The aim of all this back and forth between formulations and the changing feedback it triggers from the world is to coordinate the interaction in more and more intimate and intricate ways, choreographing the dance between changing self and changing world in the direction of seamless movement through new events.
I think so. It's not good for your mental health, you know. You will find yourself writing long, convoluted sentences that say very simple things. Yep, the way the spoon looks changes over time.
What changes?
The way the spoon looks.
Therefore there is a spoon.
Of course there is a spoon. It's the implications of there being a spoon that phenomenologists and metaphysicians are interested in. You don't have to be interested, though, if the basic spoon is good enough for you. It's perfectly adequate for baby food. :wink:
Yeah, them. :wink:
Seems some folk think that phenomenology has something to say in regard to the OP - that maths is not made up. The issue here is conceptual clarification: what is it that phenomenology tells us about maths, and if nothing, why are they on this thread?
Quoting Janus
They came here; I'm not chasing them, just asking them to explain themselves.
The spectres of predictive text...
I see you've progressed from cups. Impressive.
What I meant was more along the lines of if you were interested in phenomenology then you might find they do have something interesting to contribute. It is a given that if you are not interested in phenomenology, then anything anyone says along those lines will likely leave you cold.
[math]A = \{\text{Alan Turing}\}[/math]
[math]W = \{\text{Ludwig Wittgenstein}\}[/math]
[math]L = \{\text{This Sentence Is False}\}[/math]
[math]A \cap W \cap L = \text{Bootstrapping???}[/math]
:chin: