If everything is based on axioms then why bother with philosophy?
https://ideasinhat.com/2018/11/16/what-is-the-munchhausen-trilemma/#:~:text=The%20Münchhausen%20trilemma%20is%20a,that%20all%20beliefs%20are%20unjustified.&text=Their%20theories%20of%20knowledge%20are%20objectively%20the%20case
The link is messy, but the trilemma got me thinking: why do we bother if everything is ultimately based on three unsatisfactory endings? How can we call anyone right or wrong when our justifications reach a dead end?
I got into an argument with the guy about how morality is incompatible with solipsism, because it requires a social setting. He pointed me to the trilemma.
Math is based on axioms, and math is a branch of philosophy. But not the entire body of philosophy is math.
I have to apologize, but I did not read the link. I fear there may be something untoward there. I am very careful not to get a bug. A bit of a mask on my browser.
A trilemma is a kind of argument. The top Google result for the word is about an economic argument, but OP is talking about the Munchhausen trilemma, aka Agrippa's trilemma, an epistemological argument, showing that beliefs are justified either by chains of argument that terminate eventually in unquestioned assumptions or axioms (foundationalism), circular arguments (coherentism), or chains of argument that go on forever never reaching any bottom (infinitism).
The OP's link seems to take it that foundationalism is the only viable one of those three alternatives, and therefore that all beliefs are based on axioms.
@Darkneos, this trilemma is a good reason to reject justificationism entirely in favor of critical rationalism. The short version is: instead of saying that people should reject every belief until it can be justified from the ground up -- which as this trilemma shows either results in infinite regress, circularity, or appeal to something entirely unjustified being taken as unquestionable -- we should merely permit tentative belief in anything that has thus far survived falsification. So in a disagreement, neither side is wrong by default until they can prove themselves right. Either side is possibly-right, until the other side can show some reason why they must be wrong.
So instead of starting with a blank slate and trying to find some base certainty to build up from -- since we can't ever do that, without just assuming something to be certain by fiat, as an axiom -- we start with an infinite space of contrary possibilities, and slowly weed out the ones we find to be impossible, forever narrowing down the range of remaining possibilities but never pinning down exactly one conclusive one.
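To make that picture concrete, here's a toy sketch of the eliminative approach. The hypotheses, their consistency checks, and the observations are all invented for illustration; the point is only the shape of the procedure: nothing is ever proven right, candidates are only ruled out.

```python
# Toy sketch of eliminative inference: start with several live hypotheses,
# rule out any that a given observation contradicts, and never "prove" one.

hypotheses = {
    "all swans are white": lambda obs: obs != "black swan",
    "some swans are black": lambda obs: True,   # no single sighting can refute this
    "no swans exist": lambda obs: "swan" not in obs,
}

observations = ["white swan", "black swan"]

# A hypothesis survives only if it is consistent with every observation so far.
surviving = {
    name for name, consistent in hypotheses.items()
    if all(consistent(obs) for obs in observations)
}

print(surviving)  # only "some swans are black" survives falsification
```

Survivors are merely "not yet refuted": a further observation could still shrink the set, which is the sense in which belief here is tentative rather than justified.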
You could try and explain why you think such a thing, which here you've patently failed to do, other than pointing to some offsite link.
Obviously, one does not simply carry on with life when someone is wrong.
Here we go again.
If the trilemma shows something, it justifies something. Therefore the trilemma is a justification for believing that there are no justifications for beliefs.
I still don't understand these philosophers that just don't get the contradiction they make in asserting that knowledge is inherently flawed.
Quoting Pfhorrest
If one is possibly right, they are possibly wrong at the same time. To be right, one must make all possible wrongs and learn from them.
Ah, the direct realist specialty. Things are as we perceive them because we say they are.
Sounds a lot like the falsificationism Pfhorrest advocates: dismiss beliefs as and when they become untenable.
I think the OP has value and my response is more about pragmatism. In mathematics we have axioms, in philosophy a priori knowledge and assumptions, in science we have laws (which are empirical rather than ab initio).
The progress of mathematics and science has been to take those fundamentals and look for new fields in which they can be derived. An axiom in algebra might be a conclusion in set theory; a law of chemistry is a prediction in QED.
The effects are that a given axiom is buttressed on both sides and that the most fundamental axioms tend to become fewer in number and simpler (not always, but usually). Ever more fundamental beliefs tend to refine higher level beliefs, which is to say they give the opportunity to falsify those beliefs.
It is really a mixture of justificationism (the belief has explanatory power), coherentism (the belief can in turn be explained, or is fundamental), and falsificationism (the belief has not been ruled out) that whittles down the number of viable beliefs.
Worst case scenario is that we don't have a viable, explanatory, coherent narrative at all. Next worst is that we have too many, i.e. we have multiple competing theories with incompatible axioms and each are coherent and viable, which only helps in avoiding untrue, incoherent and meaningless beliefs. Third worst is we come up with exactly one and it's wrong but we never find out.
It's not a matter of right or wrong, because this is intuitive, but it should give pause to consider what is knowable. That's the real question here.
That being said, I'll repeat: how do we move on to knowledge? Axioms sound like a pretty terrible way to go. "This is true because we say it is"? If we go the route of falsification then we are really screwed, because an external world or other minds can't be falsified or proven, so how can we verify anything at all? Ethics would be an issue too, as that is more or less just personal opinion and not really facts.
I just feel like the trilemma shows how useless philosophy ends up being, and why the Pyrrhonists preached what they did.
Nobody here is saying that "knowledge is inherently flawed", we're saying that knowledge doesn't operate the way justificationists say it does, because if it did then the Munchhausen trilemma would in turn show that knowledge is impossible, which is exactly the kind of contradiction you're talking about. That contradiction is thus reason to reject the possibility of justificationism.
Quoting Harry Hindu
Yes, that's exactly what I'm saying. By default everything and its negation might be right and might be wrong; then knowledge comes from determining which things are definitely wrong, and thus narrowing the range of things that might still be right.
In contrast, the justificationist assumption underlying the Munchhausen trilemma is that by default everything is wrong, until knowledge is built by showing something to be definitely right, and then building up from there. But the Munchhausen trilemma, running from that assumption, thereby shows that knowledge thus-understood is impossible, a contradiction (you can't know that you can't know anything), and thus a reason to reject understanding knowledge in that way, i.e. to reject justificationism.
But how do you do that without justification? The point about the trilemma is that everything is ultimately based on three unsatisfying assumptions. Usually to determine what is wrong we have to justify it, we can't just say someone is wrong. But if all justifications fall back to those three points then one could argue there is no right or wrong because everything has arbitrary beginnings.
This trilemma is precisely such a reductio for justificationism itself: if you assumed justificationism was true, the trilemma that follows from it would prove that it’s impossible to ever justify anything, a contradiction with your initial assumption; so you must reject justificationism.
If we want empirical evidence to be able to falsify things too, we just need to show that anti-empiricism leads to a similar absurdity, which I think can be done. Disregarding empirical evidence is then thereby ruled out, so aside from internal self-consistency a belief has to be consistent with empirical observation as well or else your belief system as a whole will have an inconsistency.
From there science proceeds as normal, and philosophy can sit back and just watch.
I don't think it has a refutation. But it does not need a refutation for humans to continue operating with reason.
Human experience is based on belief. Socrates first pointed out the difference between knowledge and make-believe; and he said there is knowledge, in the form of Ideals, but humans haven't yet found the way to uncover them.
The trilemma is a way of showing how humans can't reach rational knowledge.
But we do NOT need rational knowledge. As long as we have assumptions that we take as given (in other words, things we accept as true, whether they are or not), we have a way to operate and to apply our reason.
So what if we are wrong. We are most likely wrong in our knowledge. There is no way to check that. But does false knowledge bother us any? No, instead, it eggs us on to gain more false knowledge. And the conglomeration of false knowledge gives us a world view that works for us, and we can even make predictions based on our false world views.
In this sense, the falsity is not a problem; the problem arises only if we know it is falsity. In and by itself, falsity never bothered anyone any. In the Middle Ages they believed a set of superstitions; in ancient times humans believed a yet different set of superstitions; in the times before that, there was yet another world of superstitions that formed humans' world view.
I am quite sure we are living in yet another age of superstition, but just like the persons in the middle ages, in ancient times, and before, we are not made aware of it. We are not aware of our mistakes, we can only say that we are probably wrong, and most likely wrong in our claims of how this bloody thing, the universe, works.
I love this trilemma. It solves nothing, but it points at how we should get comfortable in our ignorance and set of false beliefs.
In the first half of this proposition you are explaining how we show something is 'wrong', but what you actually demonstrate is something which seems to you to be 'absurd' and which provides, for you, 'reasons' to reject it.
Are you claiming that what is 'wrong' is synonymous with what you personally find absurd or objectionable?
No, and you should know that already, because we've been around this merry-go-round many times before and if it didn't sink in the first million times I'm not wasting my time going over it with you again.
Let me Google that for you: reductio ad absurdum.
Ah. Are you referring to the discussion in which literally everyone involved was pointing out how you were wrong but you insisted you were right regardless?
So how come it's the case that it's me who doesn't 'get' your argument, and not you who doesn't 'get' everyone else's counter-arguments? Tell me, what's more likely: that you're a unique genius who nobody understands, or that you've made a mistake which you don't understand?
Quoting god must be atheist
:up:
I'm referring to many previous discussions in which you repeatedly, and I think willfully, misinterpret "reductio ad absurdum" as "reductio ad something-I-subjectively-don't-like", rather than the technical meaning in which "absurd" means "self-contradictory".
But you seem to be referring to one specific discussion in which everyone kept bringing up things I didn't disagree with and then acting like that somehow proved something against my position that already included within it the things that they were saying.
Quoting Isaac
Because I already agreed with what "everyone else" was saying, so it can't be that I was somehow failing to be persuaded by their arguments, since I wasn't disputing the conclusions.
It reminds me of arguing for libertarian socialism only to be met with right-wingers presenting arguments against the state as reasons why not to adopt socialism. Yeah, I already agree with those arguments against the state... that's why I'm not a state socialist, but a libertarian one. No amount of arguments against the state could change my mind about the state, because I already agree with the conclusion of them, and am already anti-state. If you think that those are arguments against my position, it's you who fails to understand what my position even is.
Quoting Isaac
I never claimed to be a unique genius. Almost all of my positions are ones that much better-credentialed people than me also support. In this case, aside from the obvious philosophers like Karl Popper, Ernest Gellner, and Hans Albert, you've also got legal scholars like Reinhold Zippelius, physicists like David Deutsch, biologists like Hans Krebs, and the one I expect you'll like most, neurophysiologists like John Eccles.
But yeah, when it comes to discussing a topic in which I majored summa cum laude with easy straight-As, putting me in the top twentieth of people who have BAs on the topic, on an anonymous internet forum where over two thirds of people don't even have a BA in it at all, yeah I'm leaning statistically toward it being other people not understanding me rather than vice versa.
Then why did you direct me to a Wiki definition in which the first paragraph states reductio ad absurdum to be "the form of argument that attempts to establish a claim by showing that the opposite scenario would lead to absurdity or contradiction"? I've bolded the 'or'. One or the other, not that the two are being treated as technically the same thing.
If you want to use that BA of yours to teach me something about the technical meaning of philosophical terms, then it would help if you directed me toward definitions which actually support the claim you're making.
Quoting Pfhorrest
This ^ characterisation of the discussion is the thing I'm talking about. Everyone else was saying that this wasn't what was happening and that your posts had meaningful problems of the sort we described, you were saying that this was exactly what was happening and we were all wrong in thinking otherwise. It is the characterisation of our objections as being "the things you were saying and already agreed with" that was your error in that thread. They were not the things you were already saying, you simply didn't understand the difference.
Quoting Pfhorrest
No. You think your positions are ones that much better-credentialed people than you also support. It is possible for you to be wrong about that (which is something people better qualified than me have tried to demonstrate also). That you think your arguments are supported by these writers is not de facto evidence that they in fact are. But you seem (in common with a worrying number of people here) to have some trouble distinguishing between things seeming to you to be the case and things actually being the case.
Notwithstanding that, all of those writers are themselves critiqued and opposed by a slew of similarly well-credentialed people, so their support alone doesn't lend authority to your arguments; it just helps us understand where they're coming from. As far as an indication of who has misunderstood whom, they're useless.
Quoting Pfhorrest
Really? Then I suggest you take a serious look at the stratification of your samples. Within a couple of paragraphs I can tell quite easily if I'm talking to someone who has some knowledge of the topic or not. The idea that ten pages into a discussion you're still assuming your interlocutors are drawn from the full population who responded to that survey is rather worrying.
I directed you to a Google search, as a rhetorical device indicating that you should understand these things already if you're going to take the high horse that you always do.
But to follow up on the very first reference in that Wiki article that is the top result:
[quote=The Definitive Glossary of Higher Mathematical Jargon;https://mathvault.ca/math-glossary/#contradiction]Proof by Contradiction
An indirect method of proof that attempts to prove a claim by proving that the opposite will lead to a contradiction. For that reason, the method is also known as “reductio ad absurdum” — or “reduction to absurdity” in Latin.[/quote]
Yeah, it is sometimes used more loosely than that (as the second reference in the Wiki article states), but it should be clear from context to anyone fluent in English who isn't looking to maliciously misinterpret me that I'm meaning the sense equivalent with proof by contradiction, because I'm explicitly talking about contradictions.
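To spell out the strict sense I mean, the schoolbook instance of proof by contradiction is the irrationality of √2: assume the negation of what you want to show, derive a contradiction, and conclude the assumption was false.

```latex
% Classic reductio ad absurdum: assume the opposite, derive a contradiction.
\begin{proof}
Suppose, for contradiction, that $\sqrt{2} = p/q$ with $p, q$ integers
sharing no common factor. Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$
is even; write $p = 2k$. Then $4k^2 = 2q^2$, so $q^2 = 2k^2$ is even,
hence $q$ is even. But then $p$ and $q$ share the factor $2$,
contradicting the assumption that they share no common factor.
Therefore $\sqrt{2}$ is irrational.
\end{proof}
```

The "absurdity" reached is a flat self-contradiction, not anything anyone merely finds distasteful.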
See above, but also: I'm not really trying to teach you anything here, I'm trying to disengage from conversation with you, because you've long since demonstrated that you're not interested in an honest and charitable conversation but in scoring some kind of imaginary internet debate points.
But I can't resist one last snip:
Quoting Isaac
One of the aforementioned people, Hans Albert, is the originator of the trilemma that is the topic of this thread, and he introduced it specifically as an argument for critical rationalism. In arguing for critical rationalism here, I'm pretty much just explaining what the point of the argument the OP is talking about is.
Where in that article does it say that the 'absurd' definition is 'more loose'? It just seems to reiterate exactly the conclusion I reached from the wiki. As does every other reference I followed, apart from the one single reference you cherry-picked (from mathematics, not philosophy) to try and prop up your untenable argument.
Quoting Pfhorrest
I took both definitions as being possible. I don't see how that's malicious or uncharitable.
I said...
Quoting Isaac
...which you've neatly avoided having to answer by side-tracking into this attempt to recast your inability to raise a counter-argument as some kind of stance against my cantankerousness.
Both absurdity and contradiction are senses which you personally might have of two propositions and which others might disagree with. So I'll ask again. Are you suggesting that what you personally find absurd or contradictory is the measure of what is actually wrong?
Are you suggesting that logical contradiction or consistency is only a matter of subjective opinion? Or merely that people can sometimes wrongly assess whether or not something is contradictory? Many of your responses across this forum seem to rest on implicitly conflating those two kinds of things: "You might be wrong, therefore there are no correct answers at all".
Yes, people can be wrong. (And yes, I am a person... complete the syllogism in your head, we all get your obvious point.) People can even add up numbers incorrectly. That doesn't mean that there is no correct answer to the question of what is the sum of two numbers, or that everything else that depends upon the sum of two numbers is completely subjective too.
Things either are contradictory or they're not. People can assess whether they are or not incorrectly, but "you might be doing it wrong" is the most inane argument against anything that I can imagine. Get back when you can point out a specific thing someone's doing wrong. Meanwhile, the mere possibility of doing it wrong doesn't make the entire endeavor pointless or futile.
see "The meaning of life & greek mythology".
I would say that's the case, yes. I agree with Ramsey that logic is simply a mode of thought, not an objective fact about the world, and as such is prone to some subjective variation. But that's not my argument here, rather it is...
Quoting Pfhorrest
...even under the view where contradiction is an objective fact, it is still possible for epistemic peers to disagree about what is and what is not contradictory.
Quoting Pfhorrest
This would all be relevant if the 'incorrect', or 'wrong' we were talking about were, like your example with the sum, the goal. In that example, there is a 'right' answer regardless of the propensity for some to miscalculate.
But that's not what your claim is here. It's not simply that some things are right and others wrong and that we should strive to reject the wrong, leaving the viable options for what is right. I agree entirely with that claim.
What you do here is additionally claim an objective method for doing so. That the 'wrong' can be identified by determining that it is contradictory, and that such an identification carries normative weight. That's not the same as the maths sum example at all. Instead of talking about whether the answer could possibly be right (regardless of the potential for miscalculation), you're talking about how we know whether the answer is right. A completely different proposition to the mere declaration that there is a right answer.
Say you have a proposition, and you 'feel' it's wrong. Later you compare it to another (necessary) proposition and you 'feel' it leads to a contradiction. How is your first 'feeling' made objective by your second? You could be wrong in either case, in either case we might agree that there is a 'right' answer out there somewhere...
What is it about the status of feeling there's a contradiction that gives it this authority over any of your other feelings about the proposition in question?
Isn't that also a conclusion arrived at by using logic? I always get confused by these kinds of arguments.
Quoting Isaac
But if you agree with that claim, then you also agree that there is a way to figure out what is wrong, don't you?
I am not sure I buy the distinction you make between claims about the truth value and claims about the method. Why can I make one claim, but not the other? I can say that the flat earth theory is wrong because it's refuted by observation, but I can not say the zetetic method (something some flat earthers champion) is wrong because it arbitrarily singles out some observations as more relevant?
Both of these claims seem equally based on logic, it's just that the factual claim is mediated by the scientific method.
Well, it depends on how circumscribed your definition of 'logic' is. Ramsey likens logic to aesthetics, or ethics. A mode of thinking we find to be pragmatic. So, by that measure (a mode of thinking, among others) the observation that logic is such a thing is just empirical, and the resolution of empirical data need not be subsumed within the definition of 'logic'. I think the merit of this approach is that it avoids the potential circularity of defining logic by 'whatever mode distinguishes right from wrong answers', and then that any answer delivered by flawless logic is right on that basis.
Quoting Echarmion
Not as I see it, no. That there are states of affairs which are objectively the case does not necessarily imply that there are means of determining them. I infer that there are states of affairs which are objectively the case because it seems to be a good explanation for the success of scientific prediction. Not being privy to the exact thought processes of those making these predictions, however, I'm less confident about assuming some homogeneous method accounts for their apparent success.
In fact, the access I do have to their thought processes through cognitive sciences seems to me to show quite the opposite. A heterogeneity of method.
Quoting Echarmion
I think you're right, you can make both claims. In a sense, that's my point when I said...
Quoting Isaac
Basically, I'm disputing @Pfhorrest's notion that there is one method (feeling that there is a contradiction), rather than a suite of methods. I'm saying you can make both claims, but that it is not necessarily required that your first claim is justified by your second. There's no special status given to the feeling that two propositions are contradictory above the simple feeling that one proposition is wrong. 'Wrong' and 'contradictory' are just two attitudes we might have toward propositions.
My point was that you were using justifications, or reasons, to show that justifications and reasons are not valid qualifiers for knowledge. If so, then your assertions are not necessarily knowledge. If they aren't knowledge, then they are either wrong or just scribbles on a screen. What you seem to be saying is that reasoning does not necessarily lead to knowledge. If not, then how do you know that you know anything?
Beliefs require justification to qualify as knowledge. How much justification some belief needs to qualify as knowledge can vary depending on the state of affairs being talked about, which includes the origin or causes of said state of affairs. States of affairs created by humans (like Trump being the 45th president of the United States) seem to be easier to justify than facts not created by humans (the solar system was formed 4.5 billion years ago from a massive cloud of hydrogen gas). Presidents are arbitrary creations of our own minds and don't need justification beyond most people agreeing on and using the words in that way. The latter doesn't depend on popularity, as that would be a logical fallacy; it depends on the actual state of affairs being the case or not. Presidents are created by humans, therefore knowing what presidents are is simply an act of defining what they are at any moment. The solar system's formation depends on facts not created by humans, facts that existed before humans and their knowledge of them. So there are some facts that we can know merely because we created those facts.
Knowing that you believe something requires no more justification than you believing it.
Well they are scribbles on a screen regardless but where did beliefs go? Knowledge or nonsense? Doesn't sound right.
I can see what you mean here. And that seems like a reasonable position to take with regard to specific, formalised logic systems, like predicate logic, modal logical or mathematics. But at the same time, it seems to me that there must be some basic wiring in the human brain (and, being basic, it would have to be universal to the species) which provides a basic problem-solving framework. Even if all "logic" in the strict sense is empirical, there must be some way to process empirical data in the first place.
Quoting Isaac
Aren't you pre-supposing a correspondence theory of truth here? You start your deliberation at states of affairs, when we could start it at mental phenomena instead. The idea that there might be something "objective" that our mental phenomena might correspond to, and that the success of prediction is a measure of "objectiveness" must be coming from somewhere. There is already some kind of method at work here.
Quoting Isaac
The cognitive science you refer to sounds interesting. Can you expand on it with reasonable effort?
Apart from that, isn't "success" the homogeneous method we're looking for? It doesn't particularly seem to matter whether all the methods are heterogeneous if we can then judge the results by a homogeneous standard: their predictive success.
Quoting Isaac
Well it does feel to me that they're different. That saying something wrong is different from saying something incoherent. I can imagine wrong states of affairs - counterfactuals. But I cannot imagine contradictory ones. By the same token, I can organise a society according to wrong goals, and have those goals nevertheless be reached. That's not the case if the goals are contradictory.
Of course, this may just be because I have been taught to distinguish between two kinds of "wrongness".
There are two kinds of truths: a priori and a posteriori. The first kind is true at any time, in any part of the world, because it does not depend on empirical observation. The second kind is the truth we find in such things that can be demonstrated to be false by experiment, by observation (if any).
Reason can't defeat a truth if it's an a priori truth. And reason is part of the a priori truth.
Reason can't defend the truth of an a posteriori truth. Only observation can defeat it, and nothing can defend it in an absolute sense.
================
There is a lot of hullabaloo in this thread because people are too lazy to observe the nature of these kinds of truths, or too lazy to state which of the two they are talking about.
I am not an exception from making this fault in my discourses.
Quoting Aristotle, 350 BCE
Just another sad case of analytic philosophers reinventing Greek Philosophy.
Yes, it seems that's basically true, although the further back in developmental terms researchers are able to go, the fewer assumptions we seem to have. So far the idea of cause and effect seems fixed, and rules around containers and contained objects seem hard-wired and are probably essential to our visual system. Rules around object permanence, surprisingly, seem to be learned rather than innate, so I think the law of identity is of questionable origin, instinct-wise. Laws like non-contradiction are almost certainly learnt. Very young babies show no surprise at all at obvious contrary scenarios co-existing, whereas they do with something like a larger object seeming to fit in a smaller container.
I generally tend to think that some ways of processing sense data are hard-wired, but they're often not the ones we'd expect, and I don't think they cover all that much of even what we might call 'rational thinking'. The expectation of consistency in the world is the one 'rule' of rational thought I'd be tempted to say was hard-wired, but if a new piece of research later proved it not to be, I wouldn't be that surprised.
Quoting Echarmion
Not intentionally, no, and it's not clear from the rest of the paragraph what has led you there; could you perhaps explain a bit further the link you're seeing here?
Quoting Echarmion
Sure. A fair amount of work has been done trying to see what goes on when people are resolving the truth value of syllogisms ("Socrates is mortal...", etc.). The findings are broadly that the regions of the brain involved in the resolution vary quite a bit. Not hugely, but enough to be interesting. Subject matter changes what regions are employed, for example. Syllogisms regarding unfamiliar objects are more likely to utilise the left cerebral cortex alone, whereas those involving familiar objects might be solved by referring to the perirhinal cortex, which deals with memories of the properties of objects.
http://www.yorku.ca/vgoel/reprints/Goel_cambridge2a.pdf is a really approachable read on all this, with a great discussion on the contribution of neuroscience to psychology at the beginning.
The point here is that if the method for solving a simple syllogism is this heavily context dependent, it seems vanishingly unlikely that we're all assessing theories and beliefs in anything like a consistent manner.
Quoting Echarmion
Yeah, I can go along with that. It's very much the view of the Cambridge version of pragmatism at least (I don't know much about American pragmatism). That which works when we act as if it were the case is less wrong than anything which doesn't. But calling that a 'method' I think could only be justified with something like trial and error. With something like statistical analysis of empirical data, we might judge the outcome by its success, but the method by which we derived the theory we're testing was not itself to 'test its success'. It was to compare the statistical significance of a correlation against the probability of its occurring by chance. It's a specific heuristic which we've found delivers useful results in the past, so we reuse it.
Quoting Echarmion
I certainly would agree they're different. But does that difference lead to one being superior to the other in establishing which theories are wrong?
Well, saying something is wrong because it is inconsistent gets you in a lot less trouble than saying that something is wrong because it "feels wrong". So it's superior in that sense. And I don't think much trumps saying that things are wrong because they are inconsistent in terms of not getting you into trouble. So I would guess that's why pfhorrest uses it as the arbiter.
Yes, I agree. We find it a more satisfying argument, for sure. But consider the case of something like "I can fit this car into this matchbox". You just can't. The plain and simple fact that you can't is actually more compelling than an argument that it is inconsistent with the laws of physics because one might not fully understand the laws of physics and so leave room for doubt. One can more clearly 'see' a car can't fit in a matchbox, than one can 'see' the contradiction with the laws of physics.
Quoting khaled
I would have less issue with it if he did, but it comes along with a long line of previous argument about the means by which 'wrong' ideas can be objectively identified as such, and it had little to do with what people find satisfyingly convincing. Were that the justification, I'd probably agree, with a few caveats.
Why do a priori truths not need justification (observation), but a posteriori truths do? It seems to me that there is still an observation taking place, or else how do you distinguish a priori from a posteriori truths? How do you know the difference between them well enough to make an objective assertion about what a priori and a posteriori truths are, not just for yourself, but for others too? What are we to look for in distinguishing a priori from a posteriori truths?
Words are just scribbles and sounds, so a priori truths take the form of scribbles and sounds, which are empirical forms.
To know that you know anything requires some sort of empirical justification, which can include use of sounds and scribbles.
If you want to rule out justification then you rule out everything else with it. There would be no reason to claim anything or use anything for support, because you could never justify its use.
I think that philosophy tends to run away with language in that people say stuff that they believe is interesting and profound, but when you parse their statements it makes no sense whatsoever.
Think of it as similar to the coherentist branch of the trilemma. What you're basically looking for is a complete set of beliefs that doesn't contain contradictions within itself. But rather than saying "this is a coherent set of beliefs, therefore it is true", as the justificationist coherentist does, a critical rationalist only says "this is a coherent set of beliefs, therefore it remains possible". Both agree that you rule out possibilities by showing them incoherent, but the important difference is the differentiation between justified as in "epistemically permissible" and justified as in "epistemically obligatory".
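The check described here — a belief set is ruled out only when it contains a contradiction, and otherwise merely "remains possible" — can be sketched as a brute-force satisfiability test over truth assignments. The belief names and atoms below are invented for illustration:

```python
from itertools import product

# Each belief is a function from a truth assignment (dict) to bool.
beliefs = {
    "rain -> wet": lambda v: (not v["rain"]) or v["wet"],
    "rain":        lambda v: v["rain"],
    "wet":         lambda v: v["wet"],
}

def coherent(beliefs, atoms):
    """A belief set is coherent iff some assignment satisfies all beliefs."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(b(v) for b in beliefs.values()):
            return True
    return False

atoms = ["rain", "wet"]
print(coherent(beliefs, atoms))  # True: possible, not thereby proven

# Adding "not wet" contradicts "rain" plus "rain -> wet":
beliefs["not wet"] = lambda v: not v["wet"]
print(coherent(beliefs, atoms))  # False: incoherent, so ruled out
```

The critical-rationalist reading of the first result is exactly the weak one: coherence licenses tentative belief ("epistemically permissible"), it does not compel it ("epistemically obligatory").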
A priori proofs or truths don't need justification by observation, but they do need a sentient being with powers of reason, which in our experience comes about with learning about the sensed world.
1. Peter is both Peter and not Peter.
2. Peter married Mary.
1 is obviously false. In any set of circumstances. No matter in what physical realms we place this sentence, it's always false. It cannot be but false. Granted, people need to know that Peter is a proper noun, and what the rules and syntax of grammar are, and how their semantics make up a meaning. That part depends on sentient learning, but once it's integrated into a sentient mind, the rest follows.
2 can only be known to be true if you actually gain factual knowledge about this. It is not true in all possible worlds. In communities where no man Peter alive ever married a female Mary it's not true.
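The difference between the two sentences can be made mechanical: sentence 1 has the logical form (p and not p), which is false under every assignment, while sentence 2 is a bare atom whose truth varies from world to world. A purely illustrative Python check:

```python
# Sentence 1, "Peter is both Peter and not Peter", has the form (p and not p):
# false under every truth assignment, so its falsity needs no observation.
for p in (True, False):
    print(p, "->", p and not p)  # False in both cases

# Sentence 2, "Peter married Mary", has the form of a bare atom q:
# its truth varies with the assignment, so it must be settled by the facts.
print(any(q for q in (True, False)))  # True: q holds in some worlds
print(all(q for q in (True, False)))  # False: but not in all of them
```

Enumerating assignments like this is just a finite stand-in for "true in all possible worlds"; the point is that no observation of Peter is needed to settle sentence 1, but one is needed for sentence 2.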
You totally missed my point.
If a priori truths don't need justification, then what were you trying to show with visual scribbles on the screen?
This misrepresents what a priori and a posteriori mean. I don't need to look in my wallet to know that if I have £10 then I have at least £5, but I do need to look in my wallet to know if I have £10. "All husbands are married" is true by definition but "all men are married" isn't, and requires us to check each person to see if they are a man and if they are married to determine its truth.
That reading sentences requires observation is irrelevant to the distinction. You can just think the propositions if you like.
Edit
Actually, thinking about it again I can understand your point. Knowing that all husbands are married is knowing that "husband" means "married man", and knowing that "husband" means "married man" isn't a priori knowledge, and so therefore knowing that all husbands are married isn't a priori?
Perhaps the distinction is that a priori truths are truths that derive from the meaning of the words and a posteriori truths are truths that don't. After learning a language I can know that all husbands are married but I can't know that all men are married, and that is how the distinction is made.
It's worse than that even. Since there's no objective set of rules as to what words in a language 'really' mean, nor boundaries where one language ends and another starts (pidgin English, for example), you don't even know that all husbands are married a priori after you've learnt a language. You know it in no less a way than you know the earth is round. All the while you continue to successfully use the terms synonymously, it's true. At any point in the future, or within any given sub-set of language speakers, or within any new language game, it may cease to be the case.
Not only that, but it requires the existence of marriage and men - both of which are visual concepts. The statement is about men and marriage, without which the statement makes no sense. We are talking about things that we can observe and whose existence is the justification for such statements.
Quoting Michael
It's more like just how we think, or the process of thinking, or categorizing. It seems that a priori truths are being conflated with the process of thinking and reasoning. Thinking is always about things. The process by which we categorize observable things is still dependent upon observed things. The same process can be applied to other things. Categorization isn't a truth. It's a way of processing information.
It would always remain the case, even if humans became extinct and language disappeared from the universe, that husbands were once defined as married men by a particular human society. Or you could at least say that a particular society of humans at one time organized scribbles in this way: "Husbands are married men".
you are absolutely right. I don't see any point in your objection. If you insist on equating conceptual thought to dots and scribbles, and you deny that meaning transcends physical signs that convey it, then I especially see no point in your objection.
How else do you justify that you engage in conceptual thought if not by using scribbles and sounds? Michael was on the right track by equating it to language (symbol-use). The scribbles point to observable things and events (men and weddings). Categorization is a type of information processing and is based upon goal-oriented behavior. To say that the category is true is to conflate truth with an arbitrary rule/goal by which information is processed.
That's just it, HarryHindu. You equate the two: conceptual thought and dots and scribbles. That is, if I may make this observation, the mistake in your reasoning. Here you clearly stated "... by using scribbles and sounds".
I don't suppose you see my point, or that you ever will. Using something: do I make that something into the thing that I am using it to create?
A few examples: pyramids, highways, railroads and buildings. People were used to build them. Are railroads (the actual rails) people, money, design or execution? No, they are railroads. Yet according to you, given how you use dots and scribbles, the dots and scribbles are the concepts themselves. Well, no. You are making a huge mistake by being unable to separate the two.
I am getting angry. This is in no way meant to offend you, as I believe and hope that I have kept my tone civil. But I can't hold back much longer. Please forgive me, but I must terminate my debate with you, on extended doctor's orders. This is no reflection on you, only on my condition. Please forgive me, but this is it for this topic. I ran out of patience.
I don't understand the turn in your emotional state regarding this topic. There really is nothing to get emotional about. You've actually moved the ball forward with your examples. Thanks.
To use your example of, say, the Pyramids: yes, it took people and their tools and the actual materials to make the Pyramids. This is equivalent to you using pen and paper, or a computer with a keyboard, to create scribbles on the paper or the computer screen. The scribbles are the finished product, or result, of people and their tools. The scribbles are a representation of the conceptual thought, just as the Pyramids are a representation of the concept that created them (a visual of large pyramids).
My point is that while the physical pyramids are not the same as the conceptual pyramids, they both appear in the same way: as a visual of a pyramid shape constructed from stones, used to house the corpses of pharaohs. What I'm saying is that you use the same visual of scribbles in your head that you then use to create those on paper with a pen. When thinking "All husbands are married men", all you are thinking of are sounds and scribbles in your head, just as when thinking about pyramids before constructing them, you are thinking of the visual of a pyramid. To say that one is thinking of a pyramid, one could be thinking of the Pyramids in Egypt, or some other pyramid, even one that only exists in one's head, but they all have the shape of a pyramid in mind, because that is what thinking of a pyramid entails.
I asked how you know the difference between a priori and a posteriori knowledge. The same can be asked about how you know whether you're thinking of a pyramid or a cube. They appear different visually, no matter whether the pyramid and cube are in your head or out in the world. The same can be said about a priori and a posteriori knowledge. They appear differently in the mind, or on a computer screen, as a pattern of scribbles/sounds. Given that most of us talk to ourselves in our minds rather than write to ourselves, a priori knowledge often takes the form of sounds in your head or sounds spoken from your mouth.
If you want to say that those scribbles and sounds point to real things in the world, then that is simply more of what I've been saying - that what those scribbles point to is the justification of that knowledge.
What is necessary for all husbands to be married men? Men and marriage? Language? What? Wouldn't those things be the justification for "All husbands are married men"?
That is about like saying we cannot catch a fish because it does not stay in the same place.
May I offer the concept of democracy? It is rule by reason, and we come to that reason by arguing until we have the best reasoning. This process does not stop but is ongoing. At any time, anyone can argue that the established reasoning is not the best reasoning and then try to persuade everyone of better reasoning. That is why we have a governing body all the time, instead of establishing what will be and then simply enforcing the status quo.
I will argue that, at a given moment in time, agreement on the best reasoning is justified. It just isn't unchanging like a holy book.