The definition of knowledge under critical rationalism
The traditional philosophical definition of knowledge, dating back at least to Plato, is that knowledge is justified true belief. That is to say, it is not enough merely to believe something to be the case, and it is not even enough for that belief to turn out to be true; for someone to know something they must also have a justification for their belief, a reason to believe it. It would not constitute knowledge to simply guess at an answer to a question (or otherwise come to believe it for insufficient reason) and just by luck turn out to be right.
Edmund Gettier has since argued that even justified true belief is not enough to constitute knowledge, because reasons to believe something can sometimes be imperfect: they can suggest beliefs that turn out to be false, yet we still want to say that someone can be justified in believing something for such reasons. If justification can be imperfect, someone could be justified in believing something that, despite that justification, nevertheless turns out to be false; and we would not want to say that it counts as knowledge to be misled by imperfect justifications into believing something that could have been false but, by an unrelated coincidence, happens to be true, just not for the reasons justifying the belief.
This problem could trivially be remedied by insisting that only perfect justification, the kind that guarantees the truth of something, is good enough to turn true belief into knowledge; but that would imply that knowledge of almost any substantial topic, where such certainty cannot be obtained, is thereby impossible.
My response to this problem is similar to that of Robert Nozick: I say that knowledge is believing something because it is true, such that not only does one believe it, and it is true, but if it weren't true one wouldn't believe it.
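As an aside, this Nozick-style "tracking" condition can be sketched in code. The representation below, with worlds as (truth, belief) pairs and a crude list of "nearby" counterfactual worlds, is a toy simplification of my own, not Nozick's formal subjunctive account:

```python
# Toy sketch of Nozick-style "tracking": knowledge as belief that
# co-varies with truth across nearby possible worlds. Representing a
# world as a (truth, belief) pair is an illustrative assumption.

def tracks_truth(actual, nearby):
    """S knows p iff p is true and believed in the actual world,
    and in every nearby world S believes p exactly when p is true."""
    p_true, believes = actual
    if not (p_true and believes):
        return False
    return all(belief == truth for truth, belief in nearby)

# A lucky guess: S would go on believing p even in a world where p is false.
lucky = tracks_truth((True, True), [(False, True)])
# A tracking belief: S's belief disappears in the world where p is false.
tracking = tracks_truth((True, True), [(False, False)])
print(lucky, tracking)  # → False True
```

The lucky guesser and the tracker hold the same true belief in the actual world; only the counterfactual condition distinguishes them, which is the whole point of the "if it weren't true one wouldn't believe it" clause.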
This last condition can, I think, be considered a different sense of "justification" from the usual one, and so salvage the traditional definition of knowledge, albeit only by turning the concept of justification on its head, which I argue needs to be done anyway to have a workably rational method of deciding what to believe.
Namely, rather than the usual justificationist sense of rationalism, whereby no belief is justified until it can be supported from the ground up somehow, instead any belief is justified (including contrary ones) until there is support to the contrary, i.e. reason to rule that belief out -- an epistemological position called critical rationalism, supported by philosophers like Kant and Popper.
IOW, under a critical rationalist conception of knowledge, there is no Gettier problem at all, because justification in such a paradigm doesn't mean what Gettier assumes it does.
Comments (162)
How is the existence of a reason to rule it out not also a belief? All you have here is competing beliefs - the belief in A and the belief in a reason to rule out A.
(It could still be the case that B, or C, you just know that you have to reject at least one of them, so it can't be the case that A. In general, ruling things out never narrows down to one specific remaining possibility, only a narrower range of possibilities).
Trying to ground a positive proof of one specific thing, on the other hand, inevitably leads down an infinite regress.
All of which requires a belief in the truth-preserving nature of logical functions, a belief that 'A = B and C', and a belief that 'B and C are contrary to each other'.
If you also had a belief that A, how does your system tell you anything about your belief that A? It remains the case that you are either mistaken about A or mistaken about the relationship between B and C, or mistaken about the relationship A = B and C (or you're mistaken about logic itself).
As you have no means of determining which of those mistakes are the case, you are not discarding a belief because it can be ruled out. You are discarding a belief because you wish to replace it with another.
There are two kinds of arguments viz. 1. deductive and 2. inductive. We needn't worry about deductive arguments as they're foolproof justifications in that if the argument is sound it's impossible that the conclusion is false. There is no room for error with deductive arguments is what I mean.
Coming to the other strain of arguments viz. inductive arguments, it is already known, in fact it's contained in the definition of such arguments, that the inference is probabilistic in the sense that there's a gap between inductive justifications and their conclusions which requires, as some would describe it, a leap of faith.
Gettier problems, in my humble opinion, are essentially about this justificatory gap in inductive arguments. In other words, we were already aware of the problem even before Gettier formulated his now famous argument regarding justified true belief. :chin:
Let us imagine there is a certain proposed belief/idea.
Who, in your opinion, has the last word on whether it is true or not, and/or whether it is useful or not?
Thank you.
As I said in my first comment, this is of no use with knowledge claims because we have no means of distinguishing premises from conclusions. We cannot say that our belief in A is justified by the deductive argument 'If B then A, B therefore A' because our belief that B might be what is at fault, or our belief that 'if B then A'.
The whole approach rests on the flawed assumption that we build up our beliefs one block at a time from some first principle like an inverted pyramid. There's scant evidence that we actually do this and abundant evidence that we don't.
Are we talking about my own personal belief or something that is believed by everyone because all humans by necessity believe or perhaps because science believes ? Inductive belief can be either one and personal justification is quite different from what is innate or a law of physics.
"Because"??!!
Let's go over why this doesn't work.
1. People believe in things that aren't true all the time, but believe they are true.
2. People believe in things that aren't true, and ALSO encounter no contradictions to their conclusion, owing to a lack of evidence or critical thought.
3. The only way to know that what they believe isn't true, is to have an "outside observer" who knows the actual truth.
4. But if 1 and 2 are the case, then how can we trust the outside observer? Couldn't they also have the issue of 2? Meaning we haven't really discovered knowledge, because our current proposal has a giant hole it cannot fix.
The issue is trying to state that knowledge is a 100% grasp of the truth. Knowledge is a tool of rationality. It is our best approach to claiming the truth, much better than mere inductive beliefs. Basically, knowledge is a rational conclusion from the limited evidence and thought capability that we have. But it must always have the caveat that it is a conclusion that might lack the whole picture. While the only thing we can rationally conclude within the picture is likely the best course of action, we must always be open to the fact that we do not have the entire picture.
This requires knowledge of whether or not the thing is true, the knowledge that is under question. If one already knows it is true, belief is irrelevant. In JTBs, the truth is verified after the belief. I believe the ball is red. I look at the ball. It is red. The above has knowledge of the truth as a prerequisite for the belief which is in turn a prerequisite for the knowledge.
Quoting Pfhorrest
I've seen this, but never seen a sensible example wherein the belief is both false and justified, or true and not justified either. Examples include 'X told Y that Z was true', as if believing X is a given, which bypasses the justification aspect.
I'm not sure how better to explain it than I just did. In order to support the notion that a belief that A somehow changes property as a result of some logical deduction, becomes knowledge that A, the facts which prime the deduction have to have some property different from the belief that A, but in no case do they. The 'fact' that B is just another belief.
The deduction hasn't had any impact at all on the belief that A; all it's shown is that it is logically consistent (or not) with a belief that B. It remains a belief that A.
Sure, but you’re still mistaken about at least one of those things, so you know it can’t be the case that all of them are true at once, and the range of possibilities is thus narrowed.
Quoting Isaac
You get that opposing that assumption (or rather, the assumption that that is the correct way to form beliefs) is what critical rationalism (as opposed to traditional justificationist rationalism) is all about, no? I’m not making that assumption, I’m explicitly opposing it.
Quoting KerimF
Nobody, but that’s more a question about institutional knowledge and epistemic authority than the topic of this thread which is individual knowledge.
Quoting magritte
See just above, although I do think that the methods of individual knowledge formation form the basis of the institutions of knowledge as well.
Quoting Philosophim
I agree completely and didn’t mean to say anything to the contrary. It’s rather quite the whole point of critical rationalism that you never narrow down to any single certainty, only rule out some possibilities leaving fewer (but always multiple) remaining options.
It sounds like maybe you’re misreading the same thing Kenosha is below...
Quoting Kenosha Kid
No more so than traditional JTB. Really the whole “truth” component of both traditional and my modified JTB is a historical vestige that’s rather redundant. Knowledge is merely justified belief, where justification itself implies a reason to think it is true; we only bother saying “justified TRUE belief” because before the justification criterion was added, the standard was simply “true belief”. It would have been better if the “true” had simply been replaced by “justified”.
Quoting Pfhorrest
Ok, I just wanted to make sure this was what you really believed, as on its face, it seemed contradictory.
I'm surprised you've never visited my thread here https://thephilosophyforum.com/discussion/9015/a-methodology-of-knowledge/p1
I essentially start with the same Popper premise, but flesh out what justification entails. You might like it. And if you don't read it, that's fine too, just an invitation.
No. You always were mistaken about one of these things; they merely exhaust the set. If you've narrowed it, what was the possibility you've eliminated?
Quoting Pfhorrest
No, I don't get it. It doesn't sound like that at all. You suggested a belief that A can be held as knowledge (ie changes status by some significant property) as long as there is no... Quoting Pfhorrest
If you agree that there being a lack of "reason to rule that belief out" is just another belief (let's call it a belief that B), then I'm struggling to see how its mere existence changes the functional status of the belief that A.
It doesn't make A more likely (there's no reason at all to think my belief that B is any more likely to be the case than my belief that A, so believing both cannot make each more likely to be the case).
It doesn't make A more robust (if anything it makes it more intractable)
So what does a belief that B (the belief that there's a lack of reason to rule A out) do to the belief that A to change its status at all?
Perhaps an example might help rather than all these As and Bs.
Say I believe that there are unicorns in my back garden. How does my additional belief that there's no reason not to believe there are unicorns in my back garden change its status in any way? It seems to me to just remain a belief. It's just now accompanied by a related belief.
Thank you for clarifying.
So, it has nothing to do with my life :)
Wish you the best.
I don’t see the apparent contradiction. Can you elaborate?
I’m basically applying the same standard of justification to belief as we usually do to action, at least in the modern free world: any action is by default justified, until it can be shown somehow wrong. We’re not obligated to do nothing at all except those things that we can prove from the ground up that we must do. We would normally consider that an absurd standard for justifying our actions, but it’s all too common to apply that standard to justifying our beliefs.
Quoting Isaac
You’ve narrowed the possibilities you’re aware of being possible by realizing that certain combinations of things are not possible. They were always not possible, sure, but we’re talking about your awareness of the possibilities.
Quoting Isaac
See the above analogy to justification of actions, I think that will clear it up.
But you haven't 'realised' certain combinations of things are not possible, you just believe it to be the case. I'm not seeing how that makes any difference to the status of any one of those beliefs.
Quoting Pfhorrest
That basically opposes the idea that beliefs need to be justified from the ground up. Fine, i'm on board with that. That's only half the claim you've made. The other half is that a mere belief becomes something more than that when we also believe there is a lack of "reason to rule that belief out". It's this second half I don't get. Why does this second belief make any difference to the status of the first.
I think maybe this is the point of confusion. I’m not talking about transforming beliefs into anything else, but just when a belief is or isn’t justified, or warranted.
That is something else, though. It acquires this status 'justified' simply by virtue of there being another belief which references it. How does there being another belief about our first one change the functional status of the first belief in any way at all?
Take my example of believing there are unicorns in my back garden. That's just a belief. Then I also believe there's no good reason not to believe there's unicorns in my back garden. My first belief is now a 'justified' belief. What's changed about it to warrant the new name?
Not on my account. On my account it's justified by default. Justification is the initial status of a belief, and it can only get worse from there, not better. That's the whole point of critical rationalism. The way you're interpreting it is equivalent to the usual justificationism, and the problems you're pointing out are exactly the problems I have with justificationism.
Quoting Isaac
It's not a matter of whether you believe there are no good reasons not to believe there's unicorns in your back garden, it's a matter of whether or not there are good reasons, and whether your belief would be changed by the presence of them.
This does raise a sort of meta question, of whether we can ever know whether we actually know anything. But even if we can't ever be sure that we do know things, it's still nevertheless possible that we do know things. This thread is about my account of in which circumstances someone does actually know something, whether or not they or we can know for sure that they know it. If they believe it, and they would disbelieve it were there actually reasons to do so (whether or not we or they can be sure there are not such reasons), then they know it.
But once you've got away from actually knowing and into ways the world actually is, you've removed the need for justification altogether. Why use (or even introduce) the actual state of the world {there being no reason not to believe there are unicorns in my back garden}? Why not just the state of the world {there being unicorns in my back garden}?
"It is justified to believe there are unicorns in my back garden if the state of the world is such that there's no reason in it not to believe that" seems absolutely no improvement on saying "It is justified to believe there are unicorns in my back garden if the state of the world is such that there are unicorns in my back garden".
I don't get why you'd choose the former over the latter. If you're prepared to talk about actual states of the world, why not just talk about the state of the world which the belief is about, rather than the state of the world wherein it is absent of reasons not to believe the belief in question?
So...
Quoting Pfhorrest
...they know A if A is in fact the state of the world, they don't know it if it isn't. What's wrong with that?
(Which is really the problem at the root of Gettier problems, too).
Phrasing it in terms of the counterfactual clarifies that the belief has to be responsive to reality to remain justified. If you believe unicorns are in your back yard and would continue believing that despite evidence to the contrary, but as it so happens there are unicorns in your back yard, you didn’t really know that. If you would be responsive to evidence to the contrary, and there just isn’t evidence to the contrary because your belief is correct, then you know something.
1. Why not? And 2. Who on earth believes something despite also believing there's evidence to the contrary (sufficient to counter that belief)? A few forms of severe mental illness might do this but I can't think of any real cases off the top of my head.
Quoting Pfhorrest
This just describes everyone. You're not distinguishing one group of beliefs from another here. People do not just spawn random beliefs out of thin air; they believe them because they have some reason to. If they persist despite evidence to the contrary, it's because they find that evidence insufficiently compelling on the basis of some other belief.
What it sounds like you're describing is people who persist in their beliefs despite evidence which you find sufficiently compelling, and it sounds a lot like you just want to find ground on which to diminish those beliefs in the light of your judgement of the evidence.
It was more that a justified belief is knowledge if the belief is determined to be true e.g. by observation, deduction, etc. The part I quoted made knowledge of truth a prerequisite of a justified belief instead. It seems circular to define knowledge in terms of a belief in something because it is true. If I already know it is true, belief is irrelevant.
Because guessing (or otherwise non-responsive-to-reality belief formation) and just happening to be right doesn’t rise to the standards that we hold knowledge to. Knowledge is supposed to be some kind of important relationship between belief and reality, not just coincidence. This is, again, the root of the whole Gettier problem.
Quoting Isaac
Once again we need to distinguish between believing there is good evidence to the contrary and there actually being good evidence to the contrary. If we don’t, then we have to concede that every belief anyone ever has is equally justified, i.e. there is no such thing as epistemic error, because as you say, everyone THINKS they have good reason to believe as they do, but often they don’t.
Quoting Kenosha Kid
That just is justification.
Anyway, see above with Isaac wrt the difference between thinking something is true (or reasons are good) and it actually being so.
I wasn't asking about the motivation. I'm sure you're desperate to distinguish these things (I'm less bothered myself). But motives are beside the point. If you can tell what the state of the world actually is (that it actually contains or doesn't contain evidence for a belief), then justifying the belief by this method becomes completely unnecessary. Why would we be at all concerned whether our beliefs were justified or not? We could just directly check whether they're actually the case or not. Justification ceases to be of any importance.
The whole point of justified beliefs is that they're more likely to be true than less justified ones. The use of justification is premised on the fact that we don't have direct access to the way the world is and so have to deal with more or less likely beliefs about its state. You can't then propose a method of justification which relies on us knowing directly the way the world actually is (whether it does or doesn't contain evidence for our belief). We need a method of justification which concedes the same premise as motivated us to want one in the first place.
Eliezer Yudkowsky has said something similar, defining knowledge as "the ability to be more confused by fiction than by reality". If you can equally explain every outcome, you know nothing. This links knowledge with the concepts of information and entropy.
It also, however, limits knowledge to the physical. Knowledge can then only be gained about things that are falsifiable, i.e. subject to a prediction. You couldn't know anything about morality, for example.
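That entropy link can be made concrete with standard Shannon entropy. The example distributions below are illustrative assumptions of my own: a "theory" that spreads its predictions equally over every outcome has maximal entropy and so carries no information, while a committed, falsifiable theory has low entropy:

```python
import math

# Shannon entropy of a predictive distribution (in bits). A "theory"
# that explains every outcome equally well has maximal entropy and
# carries no information; a committed, falsifiable one has low entropy.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

explains_everything = [0.25, 0.25, 0.25, 0.25]  # equally confused by every outcome
sticks_its_neck_out = [0.97, 0.01, 0.01, 0.01]  # commits to one outcome

print(entropy(explains_everything))  # → 2.0
print(entropy(sticks_its_neck_out))  # ≈ 0.24 bits
```

The low-entropy theory is the one that can be "confused by fiction": most outcomes would surprise it, so observing the unsurprising one actually tells you something.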
Quoting Pfhorrest
It would then presumably follow that only those beliefs can be considered knowledge that have no justified contrary beliefs, i.e. all contrary beliefs are ruled out. But, if we insist on some objective notion of truth, a belief can be true before we are able to find significant arguments to rule out contrary beliefs. We'd then have to conclude we have knowledge of something even though we are simultaneously believing contrary things about it. That doesn't sound very useful.
Yeah, that's my understanding of it too. I wasn't questioning your understanding of deduction, I was questioning its application to knowledge about the state of the world. This bit...
Quoting TheMadFool
They're not foolproof justifications; they don't help at all. Let's take an unknown state A and an inference about another state B, which I'll call 'b. If we have the logical relationship 'if B then A' about the state of the world, we can deductively conclude A because B. But by definition we do not know B either, we only know 'b. If we could directly know B we wouldn't be trying to deduce A; we'd just directly experience it.
Now imagine I have a belief ~A, I also have a belief 'if B then A', and a belief B. I carry out the deduction 'if B then A, B, therefore A'. But this clashes with my belief ~A. So I carry out the deduction 'if B then A, ~A, therefore ~B'. But this clashes with my belief B, so I carry out the deduction... and so on. We've no way of knowing whether my belief ~A counts as a theoretical conclusion (and so should be discarded by the first deduction), or as a premise (and so B should be discarded by the second deduction).
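This symmetry between modus ponens and modus tollens can be checked mechanically. The brute-force consistency check below, over just the two atoms A and B, is a toy sketch of my own: it shows that the set {B → A, B, ¬A} is jointly inconsistent, and that dropping any one member restores consistency, so the logic alone never picks which belief to abandon:

```python
from itertools import combinations

# Toy two-atom consistency check. `beliefs` maps each belief to its
# truth-function over assignments to the atoms A and B.
beliefs = {
    "B -> A": lambda A, B: (not B) or A,
    "B":      lambda A, B: B,
    "not A":  lambda A, B: not A,
}

def consistent(subset):
    """True if some assignment to A and B satisfies every belief in subset."""
    return any(all(beliefs[name](A, B) for name in subset)
               for A in (True, False) for B in (True, False))

print(consistent(beliefs))  # → False: the three beliefs jointly clash
for pair in combinations(beliefs, 2):
    print(pair, consistent(pair))  # every two-member subset is consistent
```

Deduction identifies the clash but is silent on which of the three retractions to make; that choice has to come from somewhere outside the deduction.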
I don't. The justification lies in someone's responsiveness to however the world is, not on knowing how in particular the world is.
Quoting Echarmion
I'm planning a thread on something very much like that soon. :-)
Quoting Echarmion
You can if you assign a meaning to moral claims the way I did in the earlier thread on metaethics and the philosophy of language.
Quoting Echarmion
The terminology of "knowledge" is frustratingly black-and-white; it's easier to state in terms of "justification of belief". If we have not yet checked for any possible contrary evidence, then our justification is very weak. If we have thoroughly checked for possible contrary evidence, then our justification is very strong. It is indeed possible to have some degree of justification for contrary beliefs -- the initial state of all belief is that everything and its negation is very weakly justified -- but the stronger the degree of justification for one, the weaker the justification for contrary ones must be.
Preemptively tying into the information and entropy thing to come soon, this is why more specific beliefs that are still unfalsified are better-justified: by their very specificity, they have exposed themselves to more chances of falsification, and yet survived.
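To illustrate, justification under this picture might be modeled as the number of potential falsifiers a belief has survived, with a single failed test defeating it entirely. The scoring scheme below is a toy assumption of mine, not any standard measure of corroboration:

```python
# Toy model of degrees of justification under falsificationism: score a
# belief by how many potential falsifiers it has survived, where any
# single failed test defeats it outright.

def justification(tests):
    """`tests` is a list of booleans: True means the belief survived
    that check, False means the check falsified it."""
    if not all(tests):
        return 0              # falsified: the belief is ruled out
    return len(tests)         # unfalsified: more exposure, more support

print(justification([]))             # → 0 (untested: only the weak default)
print(justification([True] * 1000))  # → 1000 (thoroughly checked)
print(justification([True, False]))  # → 0 (one failure defeats it)
```

On this scoring, a more specific belief that exposes itself to more checks and survives them ends up better justified than a vaguer one that risks less.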
How do you measure responsiveness to how the world is without knowing in advance how the world is? How could you possibly know what anyone is responding to?
Edit - who are these people unresponsive to the way the world is?
Take a synaesthete. When they see the number 4, they hear a high pitched ringing. They test this a million times looking at a million number 4s. Is their belief that number 4s make ringing noises more justified than if they'd done only a thousand such tests?
Sure, you seemed to imply that any belief was justified, but then came and marked out "justified belief". Your explanation of Popper falsification cleared up the issue.
Quoting Pfhorrest
Hm. I think you've just put forward beliefs which haven't been disproven yet. Let's say I believe my friend Joe dated a woman yesterday. I ask him, "How did your date go last night?" supremely confident that he would date someone, just because I believe he would. No evidence, nothing. Joe replies, "It went great!"
Are we to say that I knew Joe dated a woman last night before I confirmed it? Popperian justification requires that we apply our beliefs, that they must be able to be falsified, and we must try to do so. Otherwise you're saying even induction is knowledge. Perhaps I'm misunderstanding your intent at this point, so feel free to correct me if I am.
Everyone has direct access to a small part of the world -- that's what sensation is. (This hinges on the direct realism covered in the previous thread on the web of reality). Ignoring empirical evidence from your senses is being unresponsive to the state of the world.
Quoting Isaac
I'm not going to reproduce a list of all fallacies and everyone who's ever committed one for you.
Quoting Isaac
Inasmuch as "number 4s make ringing noises" is understood to be a statement -- like all statements should be understood to be -- about a relation between the observer and their environment, then yes. If you mean they think that number 4s produce vibrations in the air that cause other people to hear high pitched ringing, then no, because that's not an experience that could falsify that kind of belief. But if they just think that number 4s cause them to hear high pitched ringing, then yes.
Quoting Philosophim
I'm intending to agree with Popper.
On my account, you were weakly justified to believe your friend went on a date at first, and then when observation that could have falsified that didn't, your justification increased. It's difficult to state that in the terminology of "knowledge", because it's odd to say something like you "knew a little" at first and then "knew a lot" later.
Nobody ignores empirical evidence from their senses in the construction of their beliefs. It's what our brain does; we can no more stop it from doing so than we can stop our heart from beating. If you can show me a non-clinical case of someone ignoring evidence from their senses in forming a belief, I'd love to do a study on them.
Plenty of people ignore claims that there is empirical evidence to the contrary of their beliefs, rather than actually check if those claims pan out. That's being unresponsive to reality.
What claims? Scientific ones? If so then no-one except the scientists involved check them out against their senses. I can't think of any beliefs that people hold which can be demonstrated to their own senses to be wrong and they refuse such demonstrations.
Hm, I understand that is a consequence of your proposal, but does that make sense? I had zero justification for believing my friend dated a woman the night before. It was just a random belief. And I think that's the problem with your proposal. If you are to state knowledge is something we simply haven't had refuted yet, you allow beliefs without justification to be declared as knowledge.
And at that point, you allow all untested beliefs as knowledge. A "belief" then can only exist if one is shown their belief is contradicted. Because if all beliefs that are not contradicted are knowledge, they're not really just beliefs anymore right? This also results in all beliefs being against knowledge. Considering you agree with Popper, I don't think that is the conclusion you are intending to draw.
Can you really think of a scenario where you'd have zero justification though? How would such a belief even get formed neurologically? Wouldn't it make more sense to say that the justifications were of one sort at t1 and of a qualitatively different sort at t2?
Following that, 'knowledge' would be beliefs whose justification was of a certain qualitative sort, which I think is more in keeping with how we actually use the word.
Yes. I've formed beliefs that just sprang up out of nowhere. Usually I evaluate them afterwards. Sometimes though I have blurted out beliefs without thinking about them. They were not justified, more like emotional expressions. I believe justification requires some type of evaluation of your belief. It is very easy to commit to beliefs without any evaluation of them.
The question was how do you imagine that these beliefs formed. A belief, neurologically, is a very complex set of neural connections which lead to a tendency to act a certain way. Normally these get set by repeated triggering from sensory inputs and responses. What I'm struggling to see on your account, is how you see such a complex array of network connections as a 'belief' just randomly occurring without being prompted to do so by stimulus from sensory inputs?
I think you're over complicating the issue. The abstract point I'm making is we can have beliefs that are examined, and beliefs that are unexamined. Is an unexamined belief knowledge?
Further, the belief I mentioned did not tie to anything that would indicate Joe dated a woman last night. So my belief being correct was an accident, not tied to any rational justification.
In your thought experiment, maybe not, but what I'm saying is that such a situation is neurologically impossible. No matter how much you insist you did, there is no known (or even plausible) mechanism by which a belief can be formed without the sensory or interoceptive inputs to form it.
Ok? Are you purposefully being obtuse and avoiding the point?
Quoting Philosophim
This is what my point is. You can always take a thought experiment and find a way to take it out of context. That is being dishonest to the conversation and the intent of the people involved. The thought experiment is to help you understand the abstract context of the above. If I have an unexamined belief (which has nothing to do with the technical neurological process of how that belief was formed) and it just so happens to be right, was my unexamined belief knowledge?
If counterfactually you would not have held that belief if the world were different such that that belief would have been false, then yes. On my account, at least.
ETA: I guess technically on my account there is no such thing as an unexamined "belief", because that would just be a "perception": beliefs are what you get when you examine your perceptions and either affirm or deny their accuracy.
Ok, I think I see what you mean. However, that still leaves the problem that both inductive and deductive beliefs are counted as knowledge. So if you still hold to something even when it has been disproven, it then becomes something separate from knowledge, and becomes a belief.
At that point, we believe things that we know aren't true, and we know things that we can't believe are true. I think allowing inductive beliefs to be counted as knowledge is where the sticking point is. What if you held deductive beliefs that have not been disproven yet as knowledge, and inductive beliefs as mere beliefs?
I don't see why that's a problem. Induction doesn't give you certainty like deduction does, but noticing patterns (which is all induction really amounts to) is still a way to form beliefs, and so long as you would not hold those beliefs if they were not true (i.e. you have made observations that would have falsified them, if they were false), then you know the things you believe.
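This "would not hold it if it were not true" condition can be sketched as a toy possible-worlds check. This is entirely my own illustration of the sensitivity idea being discussed, and the world names and belief assignments are hypothetical:

```python
# Toy model of the sensitivity condition: a subject knows p iff p is true,
# the subject believes p, and in nearby worlds where p is false the subject
# does not believe p. Worlds are just labels here.

def knows(actual, nearby, true_in, believed_in):
    if actual not in true_in:          # p must be true
        return False
    if actual not in believed_in:      # p must be believed
        return False
    # Sensitivity: no nearby world where p is false yet still believed.
    return not any(w not in true_in and w in believed_in for w in nearby)

# Gettier-style case: the belief is true in the actual world w0, but the
# believer would have held it anyway in the nearby false-world w1.
assert not knows("w0", {"w0", "w1"}, true_in={"w0"}, believed_in={"w0", "w1"})

# Tracking case: in w1, where p is false, the belief is dropped.
assert knows("w0", {"w0", "w1"}, true_in={"w0"}, believed_in={"w0"})
```

The point the sketch makes is that the difference between the two cases lies entirely in what the believer would have done in the false world, not in anything about the actual world.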
Quoting Philosophim
I don't see how this relates to the previous sentence, and also, knowledge is a species of belief, so something doesn't become a belief after previously having been known, or anything like that. I think maybe you're not following the relationship between perception and belief on my account: to perceive something is for it to seem true to you, just intuitively. Your friend seems like the kind of guy who probably had a date last night.
It's not until you wonder to yourself "is that really right though?" and then either agree or disagree with your perception that you form a belief, either a belief in the thing you perceived (if you agree), or a belief to the contrary (if you disagree; say you're aware of something adversely affecting your perception, so you don't trust it).
You may not necessarily have thoroughly vetted the idea yet, so that belief may not count as knowledge. If you have thoroughly vetted it, such that you would have already found that it was false if it were false, then you know it.
Quoting Philosophim
I also don't see how this follows from before, and even by itself it doesn't make sense to my understanding of these words. To believe something just is to think that it is true, and knowledge is a species of belief, so to know something is in part to believe it which is to think that it's true. So you can't believe things you know aren't true, nor can you know things that you don't believe are true, because knowing them implies believing they're true.
Quoting Philosophim
Beliefs themselves aren't inherently deductive or inductive; those are just means of arguing for a belief. If you have a deductive argument, that is a disproof of the contrary, and so is certain knowledge. Inductive beliefs cannot be certain, sure, but that's beside any point I'm making here.
You're mixing up two different things and I'm getting confused. There's this issue of an 'unexamined' belief as in
Quoting Philosophim
Then there's this issue of a belief without any basis, as in
Quoting Philosophim
The point I was making as a counter to @Pfhorrest's argument here is that it is impossible to generate a belief which is not based on some interpretation of the evidence (input from the outside world). It just neurologically can't be done. So everyone already has a justification for all of their beliefs: the justification is whatever external inputs caused them.
The issue of whether a belief is 'examined' is another matter - the effort one puts in to gather even more external data relevant to the belief. Here the issue is scalar and the answer can be none, but in fifteen years of working on beliefs I've yet to see any evidence of a single belief which is 'unexamined' in this sense. People are quite keen on having their beliefs provide them with useful predictions, and to do so there cannot be significant clashes with whatever objective states of the world there are.
I'm not being 'purposefully obtuse' here. It's the crux of the problem with these kinds of approaches. They disguise ideology as method. To create this categorisation of predictive thought as 'beliefs/knowledge' based on an ideologically defined method of checking them is fine when it's upfront about it (I personally use that method of checking my beliefs; I think it works best), but I oppose the suggestion that it is ontologically relevant, that it somehow describes an objective difference between two types of predictive thought. It doesn't. There are only beliefs, and there are various methods for checking those beliefs. We can say whether a belief has been more or less well-checked, but only after specifying the method of checking we're measuring that against, and the choice of method is underdetermined.
That's not contrary to my views at all. (I've just been saying in the past few posts that a "belief" as I mean it is formed from a "perception", which in turn is exactly some interpretation of evidence). I think maybe you're reading in more than I intend to say.
Ah, then I've gotten confused. You said earlier that
Quoting Pfhorrest
It's this that's caused the confusion. How can it be that people are "uninterested in checking your beliefs against the senses" and yet at the same time you acknowledge that all beliefs are the result of interpretation of input from the senses?
Knowledge is a fuzzy set, in constant revision and revolution.
It’s the “checking” part that makes the difference: between acting like the observations you happen to have made so far are plenty (and possibly even being averse to making further observations that might compel you to change your mind), and actively seeking out more observations to make sure that they continue the pattern.
Also, one can form beliefs in a top-down way as well, hearing the beliefs of others expressed first, being told that something is so, and then perceiving nothing to the contrary (or else doubting the reliability of those perceptions) and so affirming the belief, without having yet observed anything that would have organically compelled one to believe.
My apologies then. I see where you drew this conclusion from my point. Of course there is a reason for every belief at some level. I mean, Joe is male, and I've seen him date women before. There's something. What I mean by "examined belief" is a consciously examined and processed belief. It's like entering a room and feeling dread. You might instantly form a belief that the place is dangerous from that. But do you know it is dangerous? This is where a person has to actively and consciously examine their beliefs. What is evoking dread? What is actually unsafe in this room?
I think most of us intuitively feel that a "gut reaction" is not necessarily knowledge, but can be a guide that we examine to gain knowledge. The point at which instinct crosses into knowledge is the question of epistemology. I think the OP is trying to do away with that, because it can be a tricky thing to answer. I feel that it more avoids the question of epistemology, though, than solves it.
Induction is not the recognition of patterns. Induction is drawing a conclusion that does not necessarily follow from the premises or evidence involved. Deduction is drawing a conclusion that necessarily must follow from the premises. With those definitions, what do you think about my earlier statement?
Quoting Pfhorrest
Quoting Pfhorrest
What counts as thoroughly vetting then? So it seems we can have beliefs that have not been examined that are not knowledge. Once we start examining them, when do we stop? At what point do we say that's been enough? Who determines the criteria for what is false? What if I believe Bigfoot exists? This is where induction versus deduction comes in handy. Deduction can give a clear, reproducible answer. If we use induction to believe Bigfoot, necessarily two people could come to two contrary conclusions. I've got to run though, sorry I can't flesh this out better. Just think about induction and deduction as defined here and how that would fit in with your knowledge theory.
No one can be compelled to faith: hence (each generation of historians have experienced this) the passionate character, the bitterness, the infinitude of the discussions triggered by such hypercritical assumptions: we can not 'get through', and no argument can prevail.
Henri Irénée MARROU, De la connaissance historique, Éd. du Seuil, coll. Points Histoire, 1975.
You and I went around and around about this once before, but I think I have a better sense of your overall approach now and mine has shifted toward yours. Still, I'm not quite ready to treat "cause" and "reason" as equivalent. This is right next-door to @Philosophim's question:
Quoting Philosophim
This is right around the usual dual-process story: it might be simplest to call what System 1 gets up to "caused" but a lot of those habitual responses are caused in a way that will pass muster and reflect repeated earlier effortful examination of things by System 2, so we might as well call those causes "reasons" in the non-causal sense too, the sense in which "justification" is not a synonym for "rationalization".
If challenged, a person might engage System 2 and begin a process of assembling the evidence they are comfortable claiming underlies a belief; we can call that "rationalization" in a wide sense, allowing that we might approve or disapprove of their claims for evidentiary import, etc. (Sellars seems more or less to claim that saying "I know" just signals we are now playing a language-game that requires everyone to put on their System 2 hats -- it puts a claim "in the space of reasons".) Or they might refuse to engage System 2 (wait -- is that even possible??) with a bare "I just know" and philosophers tend to frown upon that.
I think there's another sense in which the example @Philosophim gives is the sort of thing philosophers hope to be able to rule out: we are used to thinking of a belief as being partially caused by the world out there and partially caused by us, by our other beliefs, emotional responses, and so on, and we imagine sort of measuring and comparing the contribution each makes. We want to say that your feeling of dread (a) tracks reality -- something in this room is odd and you picked up on it, or (b) is just you. There's reason to suspect no such general program is possible (if Quine was right about the analytic/synthetic distinction) but there is something to this, and it's related to @Pfhorrest's thing about being "responsive" to evidence.
So there are two kinds of criticisms that can sometimes and sometimes not be made about the same beliefs:
Philosophers don't like either of these but will let the first slide so long as you are open to revision; the second is more or less sinful. Are there good general-purpose ways of talking about these things?
Quoting Philosophim
This all makes sense as far as identifying a process (and a worthy one at that) is concerned, but there's still this odd ontological leap, like there are two kinds of thought, 'knowledge' and 'belief'. I don't see how a scalar process can result in a binomial distinction. Are you (either of you) perhaps suggesting that the moment a sub-conscious belief is consciously checked (and passes) it becomes knowledge? Or is some degree, and method, of checking required?
Either way, how do you avoid the problem I mentioned at the beginning that one cannot distinguish the presence/absence of evidence from unchecked belief?
Quoting Isaac
Basically the presence and decisiveness of evidence (the consequence of this 'checking' procedure) is itself a belief.
So you have a belief that A. You check it and find what seems to you like evidence to the contrary (call that a belief that B). The belief that A and the belief that B are just two beliefs. One must obviously decide which counters which, but the ontological status of A hasn't changed, it's still just a belief, despite now having a contrary belief associated with it.
It seems as if you want to set up a hierarchy where initial beliefs are somehow considered suspicious, but subsequent beliefs about evidence for/against them can be treated as practically objective fact, and I don't see any reason why you'd want to do that.
This is very close to the way I think about knowledge (even 'truth', much to @banno's chagrin). As a signifying word which tells us we're playing a different game. In my experience, the game thus signified is more a social one than the private change of systems you suggest here (though I do like the fact that your model gives us the binomial distinction we're looking for).
What seems, again in my experience, to be signified by a shift to the term 'knowledge' is that there will be agreement among others in one's social group. I 'know' the earth is round (anyone in my social group will agree). I 'believe' it's raining (someone who's actually outside might disagree). I know at first read most will baulk at this with "I don't care what other people think", but the key is understanding that we have imaginary social groups to which we wish to belong, as well as actual groups. None of this is to undermine the processes we use to make this assessment.
The evidence for all this (as you so rightly were about to ask) is weak. It has a rich heritage though (Asch conformity), and a modern update (engagement of social-status brain regions during both knowledge and evidence assessment), together with a host of studies in between, but you'll find plenty to the contrary as well. (Hey, my belief counts as knowledge, I've read most of the studies to the contrary!)
Quoting Srap Tasmaner
I think this matters a lot, but again, as I mentioned earlier, I think it's important to distinguish simply being open to revision (our beliefs are in a constant state of being revised) from being open to revision via an approved method, much like the engagement of System 2 you mentioned. How much this relates to the ontological status of 'knowledge', as opposed to just what constitutes good habits (in the Ramseyan sense), I'm not sure.
((Lately I've been reading Dewey and Herb Simon, both people with one foot in philosophy and one in psychology. I may even come to see this as a strength in Hume, rather than trouble.))
Quoting Isaac
I'm also strongly inclined, as I think you know, toward "community first" approaches for a bunch of reasons.
Quoting Isaac
It's getting harder and harder for me to care about the ontological part. (I also can't help but see the dual-process story as validating the reliance of Hume and Ramsey on "habit", though it feels a little tendentious.)
Philosophers tend to want to focus on the status of claims (is it a belief? is it knowledge?) and on the status an individual is imagined as assigning to their beliefs. But it might be possible to quit doing that. In the usual case of belief revision -- I thought there were two packs of poptarts but when I look there's only one -- does it matter that my belief was marked as revisable or defeasible? I do revise with minimal hesitation, if any. The "hunh" I grunt is, by introspection, mild curiosity about how there came to be only one or why I thought there were two, but there's very minimal tension associated with the belief revision itself.
There are all sorts of stories about people resisting revising their beliefs (although one of the biggies, "backfire", turned out later probably to be wrong), but I wonder if there is really an issue there well described in terms of a belief's status at all, or if it's just more about reasoning processes, specifics of the evidence, etc.
Quine and Duhem point out that falsifying an hypothesis by experiment is not at all as straightforward as it might seem. @Isaac seems to have something of this sort in mind in his first post, but obscured by his psychological predilections.
Knowledge as justified true belief was stillborn in the Theaetetus. Philosophers, to their great detriment, focus on knowing that 1+1 is 2 and forget about knowing how to ride a bike. The best way to deal with the OP might well be to chase @Pfhorrest off with a poker.
That's not induction specifically, that's just any invalid inference. Induction is not technically valid, because validity only applies to deduction to begin with. Induction is all about patterns: you see things that fit a pattern and take those things as evidence that the pattern holds, even though that's not deductively valid because the pattern could always break at the next observation.
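A standard worked example of exactly that, a pattern breaking at the next observation, is Euler's prime-generating polynomial (a classic illustration, not something from this thread):

```python
# Euler's polynomial n^2 + n + 41 is prime for every n from 0 to 39,
# inviting the inductive conclusion that it is always prime.

def is_prime(k):
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

# The pattern holds for the first forty observations...
assert all(is_prime(n * n + n + 41) for n in range(40))

# ...and breaks on the very next one: 40^2 + 40 + 41 = 1681 = 41 * 41.
assert not is_prime(40 * 40 + 40 + 41)
```

Forty confirming instances make the generalization look compelling, yet nothing about them deductively guarantees the forty-first.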
Quoting Srap Tasmaner
:up:
It is certain that the bishop must stay on squares of the same colour; until it becomes time to pack away the pieces.
Yes. As Quine put it,
The all-too-neat OP seeks to advance falsification as a way of defining knowledge by showing that it avoids Gettier, while ignoring Quine.
Good; it's always better not to let new information undermine your pre-existing view...
And the knight must land on a different coloured square, but of course neither of these points is in the rules; they are both inferences from the rules plus our custom of playing on a checkerboard-patterned surface, and of course you needn't.
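Both inferences can in fact be checked exhaustively. A small sketch, with a coordinate encoding of my own choosing: colour a square by the parity of file plus rank, then confirm that every bishop move preserves that parity and every knight move flips it.

```python
# Square colour as coordinate parity: (file + rank) % 2 on an 8x8 board.
def colour(square):
    f, r = square
    return (f + r) % 2

def bishop_moves(square):
    f, r = square
    return [(f + i * df, r + i * dr)
            for df, dr in ((1, 1), (1, -1), (-1, 1), (-1, -1))
            for i in range(1, 8)
            if 0 <= f + i * df < 8 and 0 <= r + i * dr < 8]

def knight_moves(square):
    f, r = square
    jumps = ((1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2))
    return [(f + df, r + dr) for df, dr in jumps
            if 0 <= f + df < 8 and 0 <= r + dr < 8]

# Exhaustive check over all 64 squares: bishops keep their colour,
# knights always change it.
for sq in ((f, r) for f in range(8) for r in range(8)):
    assert all(colour(m) == colour(sq) for m in bishop_moves(sq))
    assert all(colour(m) != colour(sq) for m in knight_moves(sq))
```

Which nicely illustrates the point: the colour facts follow from the movement rules plus the board convention, not from any rule stating them.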
But this is another example of "knowing how" rather than "knowing that". Are you suggesting we should analyze knowledge in terms of "knowing how to form beliefs"? Wouldn't we have to tack on "well" or "reliably" or something else?
No. I'm suggesting philosophers might better analyse knowing that... in terms of knowing how...
The flatus at the end of the Theaetetus comes from an attempt to make explicit what can only be shown. Do you know how to add only after you understand the justification for addition in Peano Arithmetic? Or do you know how to add when you know how to proceed with an addition problem you have not seen before?
Quoting Pfhorrest
https://psychology.wikia.org/wiki/Induction_(philosophy)#:~:text=Induction%20or%20inductive%20reasoning%2C%20sometimes,but%20do%20not%20ensure%20it.
"Induction or inductive reasoning, sometimes called inductive logic, is the process of reasoning in which the premises of an argument support the conclusion, but do not ensure it."
I leave you to consider the statements of my last post with the definition of induction clearly defined.
You must demonstrate that the first premise in the chain is incontrovertible. I do that in my theory here: https://thephilosophyforum.com/discussion/9015/a-methodology-of-knowledge
I do not want to distract from the OP's point here however. If you are interested in exploring how I solve this problem, feel free to visit.
That's actually all I was trying to ask; I just wasn't sure what else to say knowing-that would be knowing-how to do. But maybe there's no general form.
Did you have something in mind?
Consider your knowledge that the paper shop is at the end of the street. Does that consist in the belief that is justified by the map on your iPhone, or by your capacity to wander down the street to buy a paper?
The holism that Quine (and less so, Duhem) pointed to is just a consequence of treating knowhow as no more than knowing that. It's arse about; rather, knowing that is a type of knowhow.
Techne.
Was just reading Dewey yesterday making a related point about Plato and artisan knowledge (which he casts as proto-scientific).
It does seem like there are facts we just store in memory though, so I'm not sure it's worth universalizing.
Ah, yep. Thanks.
Quoting Srap Tasmaner
For example? Could one have stored in memory a fact that was utterly irrelevant to any action one might undertake? And here we might include saying "I remember that..."
It would be tantamount to having a private memory, with the same consequences as a private language.
Were I doing a PhD, this would be one possible topic; it seems to bring together divers aspects of epistemology.
Could you have memories that you cannot in fact access or verbalize? The answer to that is clearly yes. In principle? Dodgy.
I thought of "giving the answer expected" as what you know how to do with a fact, schoolboy style, but it feels thin. Seriously I'm sure there's research that shows facts and skills are stored differently. It is certainly true for language that word roots are in one place and rules another.
...then by what right would you call them memories? Is it so clear?
So what. Moving your foot and moving your hand are associated with different parts of the brain, but are both movements. As if talking and writing were not actions...
The fad of reducing everything to interacting neurones becomes tedious.
People tend to have memories they can't access at will, but something like a smell can trigger them. Early childhood memories are like that. (Complete recall is a known thing. Marilu Henner is a case.) Besides that there's repressed memories of trauma. Besides that there's everyday forgetting and then remembering later.
If you tell me the sky isn't always blue, sometimes it's orange or even grey, that's not something surprising I need to revise my beliefs about; that's something obvious that I always took for granted.
Quoting Banno
I also agree with this, as a pragmatist.
We already distinguish between knowing-how and knowing-that in everyday language. Some languages even use unrelated words there. And I'm betting the neuroscience supports the distinction we already make. You're proposing something you'd need to argue for against both sorts of evidence.
Then they're memories. Do you remember what point you're trying to make?
Do we? Then why the need for a hyphen?
Look at the example I already gave. Knowing that the paper shop is at the end of the street does not consist exclusively in a memory that "The shop is at the end of the street"; it also includes the capacity to go to the shop as needed; to describe how to get there to someone else; to distinguish a paper shop from a bottle shop; and so on, indefinitely. It is not a discrete atom that could be tied to one memory or to one neural chain. Supposing otherwise is making a category error.
I never said I disagreed with it, I said it’s not contrary to my views, i.e. I agree with it.
So much depends on the details though. Of course memories are connected to each other, and of course we draw on factual memories operationally. I suspect that the way factual memories are activated and used in planning and so forth is different than the way skills are, that's all. Motivations are in there too, and they're also different, aren't they?
I'm pretty sure Bismarck is the capital of North Dakota, and I could probably manage to pick out the state on a map, but I've got pretty much nothing else going on there. If I didn't know that, my life wouldn't be much different.
But of course pragmatically you’ll make the minimal necessary revision to your belief system, which will usually be just that one small belief, unless you happen to be on the threshold where your whole system of belief already needs so many exceptions that with this latest one it’s worth it to switch to something altogether more parsimonious.
But that’s going to be the topic of my next thread (parsimony and scientific revolutions), and I’m just waiting for this one to die down before posting it.
So its use consists almost solely in your being able to answer a quiz question.
That does not show that knowhow and knowing are incommensurable.
...was misleading in that you don't actually think knowledge is best defined in terms of critical rationalism.
My purpose might be to answer such a question, but that's not why I learned it, so far as I know, and it might come in handy doing a crossword too. What is the point of this?
I'm not sure why we're having this argument.
Is there some reason to avoid a commitment to people having either factual knowledge or (I looked it up now that I'm home from work) declarative memory, like Bismarck being the capital of North Dakota?
I do, but that is not in conflict with anything Quine has said, or that you or Isaac have said along the same lines as Quine.
I can see your point here. I was going to add to my explication of socially defined knowledge that individuals generally don't have a distinction between belief and knowledge for their own predictive thoughts. They're not treated differently, they don't appear differently neurologically, they're basically indistinguishable until they set them in a social context.
Quoting Srap Tasmaner
Yeah, that would be my take too. Although, I think people are more similar in respect of belief formation than people who worry about 'critical thinking' tend to believe. This is the point I'm trying to drive home here. The target of the OP is non-existent. No-one thinks any other way than the way described. We do not form beliefs contrary to the evidence of our senses, and when beliefs need updating owing to new sensory information, we just do so (in a suitably conservative manner; we can't afford to be changing our beliefs every few seconds).
The target of the OP, I think, is the religious, the flat-earthers, the creationists, the anti-vaxxers, the climate change deniers etc. But most people form beliefs of that more complex sort on the basis of reports from members of trusted groups. I've not conducted any neuroanatomy; I believe all the things I believe about neuroanatomy because I think it's implausible that all the people involved and working with it are doing so deceptively. It has nothing at all to do with my senses (apart from my initial judgement of their trustworthiness). I am, for example, much more suspicious about the effectiveness of medicines, despite them also being investigated and tested by similarly qualified scientists; I can see much more of a plausible way in which their results might be skewed, and so I've less inclination to trust them. At the end of the day, both are social judgements. The amount of scientific knowledge I personally have verified is tiny; the rest comes down to trust. The more people involved in the potential deception, the less likely I deem it to be, because I basically trust people not to be deceptive without due cause.
I will perhaps have a read if I've time. I can't keep more than a couple of threads in mind at the moment. Perhaps in lieu of my doing so you could answer a few brief questions here?
How do you decide which is the first premise? Is it just the one you first thought of (temporally arranged)? In my example - A belief that A and a belief that evidence exists contrary to A (which we're calling a belief that B) - which is the 'first' premise and why?
Quoting Pfhorrest
What you're not grasping (or ignoring, not sure which) is that its being "not yet ruled out" is itself another belief. The point of talking about Quinean webs of belief is not just to aggregate beliefs into little sub-nets to be treated in exactly the falsificationist manner you propose. It's to undermine the idea that the belief in question and the cause to rule it out (or not) are different in kind. You cannot, therefore, make a distinction between those of the first kind on the basis of a lack (or presence) of those of the other.
Except it's not, at least not on the same order as the beliefs under discussion. If anything, it's a meta-belief, an axiom of the method of belief revision. Being not ruled out is the default state of any belief under critical rationalism; it's not something that calls for justification. Critical rationalism turns the entire idea of justification on its head: rather than needing to have some belief to justify having some belief to justify having some belief, you may have whatever beliefs you like, until you encounter a reason why you can't.
That "reason why you can't" is not itself some other belief external to the rest of your preexisting beliefs, it's some inconsistency within your complete network of beliefs. That inconsistency, that ruling-out, does not compel any specific alternative belief, only that you revise something or other in your belief system to avoid that inconsistency.
And sure, when it comes to synthetic a posteriori beliefs, one usual element of that complete network of beliefs is that one's senses reflect something about the actual world, which is equivalent to saying: a rejection of the notion that there is some "actual world" aside from the world accessible to the senses. One could always reconcile any observation with any beliefs if one just disbelieved that the observation conveyed any truth. But I think there are, and have stated extensively elsewhere, reasons to reject that kind of disbelief in observation; which is to say, inconsistencies that would inevitably arise from accepting that kind of rejection of observation.
And yes, the methodology of critical rationalism itself, which is equivalent to saying the rejection of justificationism, could also be considered a belief on the same order as that reliance on observation, a kind of meta-belief if you will. But there are, again, reasons to reject justificationism, which is to say, inconsistencies that arise from the acceptance of justificationism, which I have also already detailed elsewhere. And since even justificationism itself accepts proof by contradiction as justification within its paradigm, that should be sufficient to disprove justificationism from within justificationism, and so compel critical rationalism.
But this thread in particular is not supposed to be an argument for critical rationalism in general. It's just supposed to show that the Gettier problem does not apply within a critical rationalist paradigm, because it hinges on justificationist assumptions. If you already reject those assumptions, because you already reject justificationism, because you're a critical rationalist, then the Gettier problem means nothing. And existing responses to Gettier, like Nozick's, already hint at that, without (so far as I know) coming out and saying it outright.
That it is inconsistent is a belief. You can't escape this, it still forms the same structure - my belief that A, and my belief that B, and my belief that A is inconsistent with B - all beliefs, all of equal status, none prior, or beneath, or more fundamental or any of these distinguishing features you'd like to attach. They're not there.
The problem here is not critical rationalism, or justificationism, or fideism as methods of distinguishing knowledge from belief - it is the assumption that there exist methods of distinguishing knowledge from belief at all at the ontological level you want them to exist.
That you think I'm trying to distinguish knowledge from belief or that this has anything to do with ontology shows a complete misunderstanding of what I'm talking about at all. On my account, knowledge is a kind of belief, not something separate from it, and what we're discussing in epistemology generally is how to (practically) revise beliefs, in a way that avoids various problems that might otherwise arise in that activity. Epistemology is about identifying what problems might arise in that activity of belief-revision, and seeking out ways around them. Knowledge is just the subset of belief that can make it through such a process.
None of that is surprising to me and yet I still don't feel you've answered my charge, so the problem can't be in that distinction alone.
You say "Knowledge is just the subset of belief that can make it through such a process." I'm fine with that, so let's say we agree there. The issue is then the nature of the process, with which there can be two issues:
1) whether it does indeed define any subset at all - i.e. if all beliefs pass this process then it doesn't define a subset, it just defines the set.
2) whether, assuming it does define a subset, that subset is correctly called 'knowledge' given that 'knowledge' is already a term in common use.
I disagree with your proposed method on both counts: on the former, because no beliefs are ruled out by your method, so it delineates all beliefs, not a subset of them; and on the latter, because we simply don't use 'knowledge' that way, as the term cannot refer to private justification and still retain a public meaning.
...then definitely use divers with one e.
Yep. @Pfhorrest seems to have missed the force of Quine's critique.
Quoting Pfhorrest
A "reason why you can't" will still be underdetermined...
I addressed underdetermination specifically in the very post you quoted:
Quoting Pfhorrest
Critical rationalism does not find underdetermination a problem; you're never trying to justify one specific possibility, only narrowing the range of possibilities, so there is always a range of possibilities remaining.
How? So far as I can see, you have not shown how.
Can you perhaps give an example? "...and you can show that B and C are contrary to each other"; the point of underdetermination is that you can't show this. There are innumerable reasons why A and B might appear to be contradictory, and yet not be.
1. Have a belief A
2. Demonstrate that it is impossible for A to be contradicted through deduction.
3. A can become a prime premise for B, etc.
Again, I spend a few paragraphs on it, with lead up there. If you want to know how I do that, or if you think the above does not satisfy what you are looking for, it is best we take the conversation there, and not distract from the thread here.
That's the low-hanging fruit, but I think the real motivator, in terms of cultural history, is Nietzsche, Freud, Marx, and Darwin. It's Freud in particular: the revelation that we have something like "unconscious thoughts" and, more importantly, unconscious motivations, and unconscious commitments is troubling to people careful about how they think. In the modern context, it's the widespread awareness of unconscious bias. Nietzsche (and then Heidegger, Derrida, Sartre) has all sorts of things to say about failings of the intellectual conscience, of bad faith, of having some inauthentic weltanschauung, of all sorts of self-deception. Marx offers an explanation of the source of some of those, Darwin too (Hoffman with his "desktop" thing). Whether you buy the grand narratives, nowadays you can't get around knowing that your reasoning might be motivated in a way We Don't Approve Of: on top of the "fallacies" so beloved on this forum, which are easily sussed out, there is the fact of racial bias, recency bias, availability bias, and all the rest. You might even think you're flailing away in the prison of a Kuhnian paradigm, desperately fighting off an alternative to your position just because it's not your position. You might just be a captive of your Whorfian language, thinking the thoughts you happen to have words for and no others. I haven't even mentioned feminism, which says something about me!
The variations on this worry go on and on. We are obsessed with the possibility of self-deception.
Bonus reference
Should have mentioned Wittgenstein too (and Sellars). How do I know my argument is what I think it is? Am I actually relying on a simplistic picture I have of how this works? Am I taking words that make sense in one context and smuggling them into another context as if they still have that meaning?
This still doesn't address Quine.
1. Have a belief A
1a. Also have a belief C
2. Demonstrate that it is impossible for A to be contradicted through deduction.
2a. Also demonstrate that it is impossible for C to be contradicted through deduction.
3. A can become a prime premise for B, yet C can become a premise for ~B.
It's that which Quine shows. Our theories (beliefs that B, in your example) are underdetermined. A large number of otherwise consistent beliefs can be marshalled to support or diminish B. We have no mechanism for choosing which.
I've read the first and second of your essays, but neither addresses the degree to which deductive beliefs form networks. You take a single example and extrapolate, merely assuming that the added complexity of vast numbers will have no effect. Even if it were theoretically possible to thus ground beliefs (I maintain it isn't, but this is a lower-hanging fruit), it would be pragmatically impossible due only to the absolutely vast number of beliefs involved, each of which would have to be independently disentangled from the others. With only 10 beliefs you have 10! = 3,628,800 orderings to go through.
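The 3,628,800 figure is 10!, the number of distinct orders in which 10 interdependent beliefs could each be disentangled from the others, one at a time. A quick sketch of how fast that blows up (my illustration, not from the thread):

```python
import math

# n! permutations: the number of orders in which n interdependent
# beliefs could be taken up and disentangled one at a time.
for n in (5, 10, 20):
    print(n, math.factorial(n))
# 10 beliefs already give 3,628,800 orderings;
# 20 give 2,432,902,008,176,640,000 (about 2.4 * 10^18).
```

Even modest belief networks make exhaustive disentangling intractable, which is the pragmatic point being made above.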
Oh fantastic! Let's take it there then. The first two build up knowledge from the self-subjective viewpoint. Part 3 goes into how knowledge works within society. I would love your comments on it there. Part 3 should answer how we resolve the point from Quine.
That's an interesting way to look at it. There's certainly a common thread, a discomfort with psychology and neuroscience, a sense that we're intruding where we're not welcome (we've faced some pretty nasty backlash even within these sanctified walls); I think this sense might be behind some of that. There's a certain tranche of philosophers for whom the potential inability of the brain to act as a measure of the truth of its own conclusions would undermine their whole project.
Quoting Srap Tasmaner
This too, especially the last (I think I should read Sellars). We had a massive thread a while back on "What it's like", as if the fact that I can ask what that lemon cake was like makes asking what being a human is like make any sense.
One of the things that interests me about how people write here is that philosophical methods of defending beliefs differ from those employed in more general senses. Less social perhaps, more precious about the integrity of the 'method' than content. Hinting that such a method may be nothing but post hoc rationalisation and underdetermining anyway seems like a fraught undertaking.
It would be interesting, as a matter of history, to see how this plays out in Plato, where the standards of justification are in play as various justifications are examined. It's a very old thing for philosophy, to find common sense wanting -- too casual, too unsystematic, too inconsistent. Even in Plato you see accusations of bias (of course you think X because you're A). I've sketched some of the ways I think this downright fear of ordinary thinking has been ratcheted up, because now we know it can be invisible! (And even when it's visible, and right in front of you, you might not recognize just what you're relying on. That's the net result of "ordinary language philosophy".)
At any rate, the message has gotten through that to do philosophy right you have to rely on special methods of justification, and probably specialized vocabulary invented just for the purpose, and so on. That this can look, if you squint, a bit like how science deviates from common sense is all the more encouraging.
Coming back to the matter at hand, philosophical problems like the subject of this thread just look different if you start from a modern science-aware world-view. Here's how Dewey begins Democracy and Education:
That was 1916. There's really nothing much here that couldn't have been written long before, no mention of evolution at all, but it's a starting point suffused with the impact of Darwin. This passage doesn't even nod at human beings per se, but you know this is exactly what he has in mind: a human being is an organism living in an environment, start from there. Dewey really thought Darwin would completely reshape the entire landscape of philosophy, but a hundred years later it feels like that change has barely begun. We're still writing footnotes to Plato.
I have a rather low opinion of thinking book learning equals having knowledge. I will go with Thomas Nagel and the notion that there is an explanatory gap between understanding the physical world and understanding conscious experience. I will go further and say without experience one does not have an adequate knowledge of the physical world. Young people accumulate facts but it is not until our later years that those facts begin to have a sense of meaning. That is the big problem with getting people to cooperate with wearing masks and distancing. Until they experience the reality of Covid by becoming ill or losing a loved one, the talk of wearing masks and distancing, the warnings not to have gatherings, the number of people who died, etc. is just meaningless words. Words they don't want to hear. Blah, blah, blah. But when the truth comes home those words have meaning and the behavior is changed.
For all the rest, I will be pragmatic. We should never believe we know absolute truth for the reasons you explained. On the other hand, we need a starting point so I am in favor of treating our agreed truths as knowledge. It is our knowledge until better reasoning changes it.
I will even go out on a limb. I will say what we think we know is knowledge, even if no one agrees with us. An example is knowing bacteria lead to infections and disease, and therefore, sanitation is essential. That is a fact of life even if no one agrees. It took scientists a hundred years to convince doctors sanitation is essential. And when we have knowledge that others do not have, it is our duty to continue the effort of persuading others to accept that truth. That is essential to democracy. Education for "group think" has been detrimental to our democracy. Independent thinking may mean standing alone with a truth for a long time before winning the argument. We may even die before the truth is accepted.
This seems to be suggesting that all beliefs, until and unless they are ruled out, enjoy equal status. What happened to the reasons we hold beliefs in the first place? Are you not leaving perceived degrees of plausibility (themselves perhaps determined by social, cultural and psychological influences) out of the picture? What about beliefs that cannot be definitively ruled out; for example beliefs in compassion, love, sacred beauty, or ethical virtues?
Or do you see yourself as dealing just with empirically testable beliefs? Even there how would you know that "a reason why you can't" is definitive or compelling?
Can you give an example to the contrary?
I'm talking about scenarios where you have a belief system like "A" plus "A implies B", and then a new belief that "not-B". That is straightforwardly just a logical contradiction, and you have to change something about it on pain of inconsistency. At this point we're not even talking about observational evidence, just pure abstract logic. Whatever your reasons for believing that A, that A implies B, and that not-B, something somewhere in some of those reasons must be wrong, because you just can't have all of those at once.
What's underdetermined is what exactly went wrong where:
You could reject not-B, on the grounds that A and that A implies B, and then make all of the subsequent changes to the rest of your beliefs that are required to not demand you accept not-B.
Or you could reject that A implies B, on the grounds that A and not-B, and then make all the subsequent changes to the rest of your beliefs that are required to not demand you accept that A implies B.
Or you could reject A on the grounds that A implies B and that not-B, and then make all the subsequent changes to the rest of your beliefs that are required to not demand that you accept A.
As I understand Quine, his point was that things aren't as simple as just one of those scenarios. It's not like you just observe that not-B and boom, there you go, we know for sure that not-A. The implication from A to B is also up for revision, as is the observational implication that not-B in the first place. You can change any of those to salvage the consistency of your belief system.
And I have no problem with that, that's a big "no duh" to me.
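The contradiction and the three escape routes described above can be sketched as a toy consistency check (a hypothetical illustration of mine, not anyone's actual method): enumerate the possible truth assignments and see which subsets of the three commitments can hold together.

```python
from itertools import product

# Hypothetical model: each "belief" is a constraint on the truth
# values of the propositions A and B.
beliefs = {
    "A":     lambda a, b: a,
    "A->B":  lambda a, b: (not a) or b,
    "not-B": lambda a, b: not b,
}

def consistent(names):
    # A set of beliefs is consistent if some truth assignment
    # for (A, B) satisfies all of them simultaneously.
    return any(all(beliefs[n](a, b) for n in names)
               for a, b in product([True, False], repeat=2))

print(consistent(["A", "A->B", "not-B"]))  # False: all three can't hold
print(consistent(["A", "A->B"]))           # True: rejecting not-B works
print(consistent(["A", "not-B"]))          # True: rejecting A->B works
print(consistent(["A->B", "not-B"]))       # True: rejecting A works
```

Dropping any one of the three commitments restores consistency, but nothing in the logic itself picks which one to drop: that is the underdetermination both sides here agree on.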
Yes. Nobody's beliefs have a particular burden of proof over anyone else's. If you want to push your beliefs over someone else's, you've got to show that theirs are wrong, and more so that all the other alternatives besides yours are wrong; and saying "you can't prove they're right" is not showing they're wrong.
Quoting Janus
Everyone thinks they have good enough reasons to hold their beliefs. Those degrees of plausibility can factor in to your own evaluation of what seems most likely to you, but if someone else disagrees with the plausibility of a certain belief, if it just doesn't seem the same to them as it seems to you, then you have to agree to disagree until one of you can definitively show that the other is actually wrong.
Quoting Janus
I'm not sure what you even mean by "belief in" compassion, love, or sacred beauty. That sounds like a different sense of the word "belief" that means "support". I don't think there's any practical controversy over the fact that people sometimes are compassionate, or loving, etc. But if there were any question about those things, it would be an empirical question.
Ethics is a good question though. I do think that ethical views can be definitively ruled out, but that the method for doing so is not quite "empirical" in the usual sense, though it is analogous. This is a huge can of worms I don't want to open with Isaac in the room though. In any case, critical rationalism in general is not specifically about empirical beliefs; the specific subset of critical rationalism about empirical beliefs is falsificationism. The critical rationalist methodology can be used on any kind of belief though, not just empirical ones.
When I mentioned perceived degrees of plausibility and that they may be more or less determined by social and cultural influences, you responded with "everybody thinks they have good reasons for holding their beliefs", which seems to ignore the possible incommensurability of different people's basic assessments of what constitutes plausible reasons. So, a great glaring example of this kind of thing is the never ending theism/atheism debate. Of course, there are many much more subtle cases, too. How can you ever falsify either position, unless the starting assumptions about what constitutes a plausible reason for belief are shared?
Quoting Pfhorrest
I meant believing in the importance of those things for human life. Some do and some don't, or different people accord different degrees of importance to those and other qualities. I don't think such beliefs can be assessed strictly empirically, but if they were taken to be subject to empirical investigation, then that belief itself would be the result of adopting certain starting assumptions.
You have to work your way down through the networks of supporting beliefs until you find something in common to work back up from. This may take you a long way from where you started into far more abstract territory, and odds are they won’t want to go on that long journey with you, but that’s how you’d have to do it.
For example my whole belief network is grounded in my philosophical principles which are grounded in pragmatic reasons that anyone worth talking to would share: basically, “You care about something right? There’s something or another you’re trying to do in your life that you would rather succeed at than fail at, no? Well, here’s why being able to discern truth from falsehood is important for succeeding at all other things, and here’s why these principles are important for succeeding at the task of discerning truth from falsehood, and here’s some broad implications of those principles, and here are some specific big beliefs that run counter to those implications, so if you care about anything then if you’re really consistent you’ll reject those particular beliefs and then make whatever adjustments necessary to the rest of your beliefs to accommodate that.”
Yes, but what happens if you apply this, for example, to theism? There are many scientists who are also theists, and this apparently doesn't hamper their ability to do science just as well as an atheist scientist. Also there are studies that purport to show that theists live longer, are healthier, and so on than atheists. Assuming for the sake of the argument that these claims are correct, then how would you go about disabusing the theist of his or her "false" beliefs (assuming that you believe they are false, of course).
Quoting Pfhorrest
What if you can't find any? Say, for example, that someone believes that scripture is the ultimate evidence that outweighs all other sources? How are you going to find something in common to work back up from with such a person?
I was thinking of theism as I wrote that, as one of those "specific big beliefs".
Quoting Janus
I think they're compartmentalizing their work from the rest of their thought, and so being inconsistent.
If they don't care that they're inconsistent, then there's nothing to do about it.
Quoting Janus
That has no bearing on the truth or falsity of (a)theism, and in principle it should be possible to suss out exactly what it is about a certain kind of belief (or what correlates with that kind of belief), regardless of its truth or falsity, that contributes to health and long life, and replicate those effects for atheists too. (My first hypothesis would be that theism is a common but not necessary consequence of excessive optimism, broadly speaking, in one's thought patterns, and that optimism generally has health benefits.)
Quoting Janus
If they really believe scripture above absolutely everything else, and there is nothing that they care about more than conforming their beliefs to scripture, then they're a lost cause. Like the scientist above who doesn't care if they're inconsistent, if they really just don't care, then I don't care to struggle pointlessly trying to convince them otherwise.
But I think that's probably pretty unlikely, for most people. They probably have reasons for believing scripture, that stem from deeper concerns, which may stem from deeper concerns, and so on. Probably most people care more about their life and well-being than they do about just agreeing with some book just because, and the reason why they agree so vehemently with the book could easily be that they think it will contribute to their eternal life and well-being. That's an avenue to start looking for some common ground, because I also care about their life and well-being, even though I don't believe in their scriptures, and if they had to choose between abandoning the scripture or abandoning their life and well-being (and were convinced that that actually was a choice between two different things, not tantamount to the same thing), I'd suspect (and hope) that they'd pick their life and well-being over the scriptures.
But they're not necessarily being inconsistent on their own terms, even if they are on yours.
Quoting Pfhorrest
I think it is more likely that theism is the source, not the result, of excessive optimism. To gain that optimism, in the face of the pessimism that might attend the realization of inevitable suffering, loss, injustice and annihilation, some people are drawn to the idea of a transcendent reality. I believe the same impulse is there in the case of Hinduism and Buddhism and most other religions too.
Quoting Pfhorrest
Yes, but you're assuming that what you count as evidence is more compelling than what they count as evidence. In the general context of modern philosophy of course you are assumed to be right and they wrong. I don't think faith is inconsistent with modern philosophy, though. There are modern philosophers I admire who are also theists. I don't see them as being inconsistent, but as adopting different starting assumptions than I do; having different reasons for their beliefs. Wherever faith is in play I think it ought to be acknowledged as such, of course; but that goes for both sides of the argument.
Quoting Pfhorrest
The people you are talking about probably care about their eternal life and well-being. You don't care about that because you don't believe in it. That belief need not impact whatsoever on their material life and well-being; how would you ever, why would you ever try to, convince them that it could, when your belief that it could is, according to all the evidence, false?
That's basically what I meant. It's wishful thinking. It would be terrible if X therefore it's not X. How could it be not X? Come up with something, then believe that, because it would be too terrible if that weren't the case and so X could be the case instead.
That's basically straight from the mouth of my devout mother when pressed on the issue. God must exist because it would be just awful if he didn't.
Quoting Janus
I do care about that, I just don't think that they are successfully optimizing their (albeit slim) chances of attaining it, but are instead believing something that tells them it's much easier and more likely for them to attain it because believing that makes them feel good.
If they really don't care whether or not it's actually true, they just want to feel good right now, that's fine with me. I'm not actually very concerned at all with whether people are theists or atheists. Things like that are, to me, merely a sign of a deeper "disease", an indication of probable flaws in reasoning that can have much worse effects (like the mismanagement of COVID-19, climate change, general political and economic injustice, etc) than just allowing someone to reassure themselves in the face of their fears, which is harmless.
So long as those other worse effects aren't manifesting, then I don't especially care about people having their flawed reasoning privately either. But if they care about being technically correct in their thinking -- i.e. if they're interested in philosophy -- then I have some opinions on that topic and arguments to support them. And as a way of forestalling the other worse effects of such flawed thinking, I generally try to encourage people to care about that, i.e. to think philosophically.
...only there's more going on here than "pure, abstract logic". The new belief that ~B itself requires justification - that is, underdetermination suggests that there is never sufficient reason to accept that ~B. Or, in your terms, there is never sufficient reason to reject the belief that B. There are always alternative explanations.
The point, made by numerous authors, is that falsification gets no further than induction; falsification at first looked promising, but...
Going back to the problem of induction, no list of single observations is ever sufficient to determine a general theorem: "This is a black cat" and "This is a black cat" and so on never determines that "All cats are black"... it is underdetermined. Your variation claims that "All cats are black" holds until we find a non-black cat; but there are always ways to reject "This is a non-black cat" - it's not a cat, it only appears to be non-black in a certain light, it is a fake, it is a conspiracy. No list of observations is ever sufficient to determine a falsification. Falsification is also underdetermined.
Falsification offers only a pretence to grounding science in "pure, abstract logic". In the end, and this is closer to @Isaac's point, belief is not determined by observation.
I covered that already:
Quoting Pfhorrest
Rejecting the new observation is always an option. But then there are other things you would have to reject in order to be consistent about rejecting the new observation. One way or another, you end up having to modify something about your belief system. It's underdetermined what you have to modify but NO DUH it is, and I never said otherwise.
So why bother with this thread?
Why didn't you do Hempel? Every non-black non-raven counts as "evidence" that ravens are black.
Logic is swell but it's not the swiss-army-chainsaw it's been taken to be.
Only if you ever thought falsification was supposed to prove one particular belief (or set of beliefs) as the sole unique correct one. That was never its point though. It only narrows down the possible sets of beliefs that are still viable. And even if you reject some apparent new observation instead of using it to rule out previously held beliefs, you still have to change other beliefs to accommodate the rejection of that new observation, so you’ve still narrowed down the possibilities, which again is all that was ever supposed to happen.
I’m quite aware of Hempel and the good points he makes against confirmationism, which I am also already against.
I think avoiding Hempel is supposed to be one of the strengths of falsificationism, since you claim not to be interested in supporting evidence at all.
In the real world, supporting evidence does matter and people are more or less Bayesian about it.
Well, no, it doesn't. Not on the basis of "pure, abstract logic" alone.
There's no algorithm for deciding what to believe. If you agree with that, in the face of what you have said here, then we have no disagreement.
Contradiction is not argument.
You keep claiming things I already agree with somehow refute my views and it’s getting tiresome. All it shows is that you’re arguing against the strawman of what you think I think, not against me.
Quoting Banno
My understanding of falsificationism is that it is founded on rejecting the very concept of conclusively proving any one particular belief, in favor of only narrowing down the range of possible beliefs, which always remains a range, no matter how narrow you make it. Instead of starting with a blank slate of no possibilities and trying to build something up from that tabula rasa, you start out with every possibility live, and then for every argument or bit of evidence you encounter, every relationship between certain ideas you find, you whittle down some possibilities: your complete belief set can't include this or that kind of feature (e.g. you can't have A and not-B), but there are always still infinitely many ways to avoid that kind of feature (you could reject A to allow not-B, or affirm B to allow A, and in either case rearrange all the rest of your beliefs however is necessary to accommodate rejecting A or affirming B, any way that will enable that, of which there will always be infinitely many).
Saying in response to that "but you never end up forced to accept any particular belief that way" is not a rebuttal of that, it's the whole point of that.
It's like setting upper and lower bounds on some value. That's actually a particular case of this process, but also serves as an analogy for the whole process. You never pin down one actual value, but you can narrow down the range that the actual value might fall within. And that's a kind of knowledge-that. Knowing what combinations of things cannot be so is still knowledge compared to thinking absolutely anything goes because you have as yet no basis to tell what won't work out.
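The bounds analogy can be made concrete (my sketch, with made-up numbers): each new constraint tightens an interval around an unknown value without ever pinning the value down to a single number.

```python
# Start with every value possible; each constraint narrows the range.
lo, hi = float("-inf"), float("inf")

# Hypothetical constraints accumulated from arguments/evidence:
# the value is > 3, < 10, > 5, < 8.
constraints = [(">", 3), ("<", 10), (">", 5), ("<", 8)]
for op, v in constraints:
    if op == ">":
        lo = max(lo, v)  # raise the lower bound
    else:
        hi = min(hi, v)  # lower the upper bound

print((lo, hi))  # (5, 8): still a range, but a narrower one
```

No constraint ever singles out the true value, yet knowing the value can't lie outside (5, 8) is genuine progress over "anything goes", which is the kind of knowledge-that the passage above describes.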
But it doesn't do this, either; as pointed out.
Bayesian analysis works better.
But it does, as I pointed out without refutation in turn.
Quoting Banno
That is compatible with a falsificationist approach, as I'm going to elaborate in another thread soon, as soon as this one dies. But here's a preview of that part:
Beliefs not yet shown false can still be more or less probable than others, as calculated by methods such as Bayes' theorem. Falsification itself can be considered just an extreme case of showing a belief to have zero probability: if you are frequently observing phenomena that your belief says should be improbable, then that suggests your belief is epistemically improbable (i.e. likely false), and if you ever observe something that your belief says should be impossible, then your belief is epistemically impossible (i.e. certainly false).
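Here's a minimal sketch of that limiting case (my own toy example with made-up numbers, not from the thread): under Bayes' theorem, an observation that a hypothesis assigns likelihood zero drives that hypothesis's posterior to exactly zero, falsification as the extreme end of mere improbability.

```python
def update(priors, likelihoods):
    """One Bayesian update: posterior proportional to prior * likelihood."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

priors = {"all cats are black": 0.5, "some cats are not black": 0.5}

# Observation: a non-black cat. Impossible (likelihood 0) under the
# first hypothesis; merely somewhat expected under the second.
likelihoods = {"all cats are black": 0.0, "some cats are not black": 0.3}

posterior = update(priors, likelihoods)
print(posterior["all cats are black"])       # 0.0 -- falsified
print(posterior["some cats are not black"])  # 1.0
```

A hypothesis that merely makes the observation improbable would instead have its posterior shrink without vanishing, matching the "epistemically improbable vs. epistemically impossible" distinction above.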
Maybe I missed it, but it's still not clear how this whittling down is done. I get the impulse: if any one of {A,B,C,D,E} explains {x,y,z} and we can rule out B, we've made progress without settling on which of {A,C,D,E} is the best theory much less The Truth. But you need to be able to rule out B, and I'm not sure you've actually shown that you can, given underdetermination.
Can you give an example, real or imagined, but not schematic? For instance, you made it clear to @Janus that you reject theism. Do you consider it falsified? Or just unlikely?
Quoting Banno
Quoting Isaac
This thread went nowhere.
Say you think that doing a certain dance (A) causes it (if A then B) to rain (B). You do that dance, or at least you try to do it right, but it doesn’t seem to rain, at least not when and where you expected the dance would cause it to.
You must either conclude that it did in fact rain in a way consistent with your rain dance theory even though it does not seem like it did to you, and rearrange whatever beliefs are necessary to accommodate that conclusion;
or else conclude that dancing does not cause it to rain, and rearrange whatever beliefs are necessary to accommodate that conclusion;
or else conclude that you did not do the correct dance to cause it to rain, and rearrange whatever beliefs are necessary to accommodate that conclusion.
Quoting Banno
I don’t know if what I’m describing is “algorithmic” in the sense you mean or not.
Okay, I think I see what you have in mind. But I still don't see how it works.
The new observation entitles you to eliminate something, thus whittling down the number of possible consistent sets of beliefs. Sure, it doesn't tell you which one to eliminate, because of underdetermination, but what matters is not knowing for sure which belief set to eliminate but eliminating one or more. Whittling down will have been achieved.
So when does the actual whittling down happen? As far as I can tell, knowing that you're entitled to eliminate something or many somethings from an effectively unbounded set but not knowing which something -- that might be necessary but it's not the same as actually whittling down.
Stage 1. Your dance->rain hypothesis.
Stage 2. Dance & no rain.
Stage 3. ???
What happens in Stage 3? Anything? Do you just move on to Stage 4, knowing that whenever you like you have several options for filling in Stage 3? Maybe in fact it makes sense to wait, see what else turns up. Maybe Stage 4 will give you a way of picking which Stage 3 whittling-down option (and there are many) is the best. But it'll be just like Stage 3, including the option to disregard the even newer observation entirely.
You do of course have the option at any point of using some completely unrelated method for choosing which whittling-down option to follow. But that hardly seems in the spirit of the thing.
The Quine-Duhem thesis is that it is never a single prediction that is exposed to disconfirmation but the entire theory, the entire framework, right? And then you need further mechanisms to make defensible decisions about what to count as disconfirmed. I have no memory of what Quine says about this, but I suspect it convinced pretty much no one.
Falsification is already in there, isn't it? I know very close to zero about Popper, but I thought his program was to tie the fate of a given theory to specific predictions and expose them to experimental rebuttal one at a time. Fail any single test and the "whole theory" is toast. I assume that the "whole theory" is a structure defined entirely in terms of entailment, and that just looks like a fairy tale. At any rate, this is nothing like underdetermination, is it?
I agree, but I don't say it's necessarily nothing more than wishful thinking even if it is also that. I acknowledge the power of religious and peak experiences to lead people to adopt conceptual frameworks within which they can make sense of such experiences. But I do say that such adoptions are not supported by empirical evidence, logic or mathematics; it's more like the kinds of ideas one might entertain in the fields of the arts, music and poetry, ideas which stimulate the imagination, bring insight, maybe help with the discipline, but should not be understood as propositions that represent any metaphysical knowledge of reality.
It’s not so much entitled as it is obliged, on pain of inconsistency. Like a car coming at you, you’ve just got to get out of the way somehow; it doesn’t matter which way. Whichever changes seem best to you, go ahead and make those.
As you say, later observations will require further revisions anyway, so if it turns out you should have revised differently before, you’ll find out eventually.
Feyerabend: Anything goes.
Okay, yes, and that's satisficing, which means you have a clear goal, a way of deciding whether it's been met, very often a scheme for reducing candidates, and usually a deadline or a plan that definitely produces a decision ("the first thing I find that actually works" is such a plan).
Quoting Pfhorrest
But that's not.
Either you don't really mean "best", and satisficing is fine although we don't know how you're doing it, or you do mean it and you'll have to explain what it is you're supposed to be optimizing and how you'll do that.
To recap: your theory isn't falsification a la Popper but Quine's web of beliefs, and the way you select what to disconfirm when your web becomes inconsistent [hide="*"](Surely somewhere there's an Escher drawing of an "impossible spiderweb".)[/hide] is -- as yet unclear.
Yes. “Seems best” was speaking loosely. Jump out of the way of that car, in any direction you want, unless you’d be jumping into the way of something else instead. Just get somewhere clear.
Quoting Srap Tasmaner
Can you quote me somewhere that Popper says anything contrary to this, because I read Popper first, came away with the impression that he supported what I’m arguing here, then read Quine later and thought “well duh, this is already obvious from a falsificationist point of view, but yeah good points against confirmationism/justificationism anyway.”
ETA: Some quick Googling suggests that later writers like Lakatos have commented on the supposed conflict between Popper and Quine, and Popper himself may have as well (it's not clear from what I'm finding if they're quoting Popper or writing something original), saying that the "falsificationism" that is supposedly destroyed by Duhem-Quine is "naive falsificationism" or "dogmatic falsificationism", and that those are not the falsificationism of Popper himself. So it seems that like I thought, this Quinean attack on falsificationism is an attack on a strawman.
Quoting Srap Tasmaner
And not that important on my account. Just move your position to somewhere not in the way of any incoming problems, where exactly doesn’t matter, just so long as you keep doing that and so keep moving into more and more secure positions.
ETA: Of course, you could always try using falsification itself as a method for deciding. You've got several options, test them out, see if any of them have problems you can find, maybe at least narrow down the options you have to choose between via some other means.
So all three beliefs remain in play. I'm not seeing how you've narrowed the field. You can't have all three together? Is that what you're getting at, that we must choose one and so we've narrowed it from three to one because all three together were contradictory? I'm no Popper expert, but I really don't think this is falsificationism at all.
Regardless of the correct term, it's still very unclear what your target is here. Since an example has proven enlightening, perhaps you could furnish us with another. Who doesn't think like this already? Or are you simply describing normal mental activity?
Can you give an example of some belief which might be held by an actual person where they simultaneously believe that A, and that B, and that A directly entails ~B, as Srap put it?
Quoting Srap Tasmaner
I'm pretty sure you'll find that even a cursory glance at how beliefs are formed and held in the brain will show you that such a state is nigh on impossible to generate, and for good reason. Nature's already got this one covered. As I said in an earlier post, to assess even as few as ten beliefs in this eliminative fashion would require you to consider 3,628,800 arrangements. Why would you want to even attempt that manually when you have the most complex computing system known to man doing exactly that at 100 hertz?
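(As an aside, the figure above checks out: 3,628,800 is 10 factorial, the number of distinct orders in which ten beliefs could be taken one at a time. A quick sanity check, using Python purely as a calculator:)

```python
import math

# 10! = number of distinct orderings of ten beliefs,
# i.e. 10 * 9 * 8 * ... * 1
print(math.factorial(10))  # prints 3628800
```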
Yes, that is the kind of narrowing I'm talking about.
Quoting Isaac
As I said above, there are two parts to this view as I construct it, a "critical" part whereby we can somehow or another find limits to possibilities and separate things that are possible from things that are not, and a "liberal" part which says that you're free to hold beliefs without yet justifying them from the ground up.
The "critical" part is just rationalism generally, and I think that that is mostly a normal and uncontroversial thing, which I'm only talking about because it seemed like you and Banno were questioning that, implying that there is no way of sorting beliefs at all, them all just being held non-rationally and so immune to any rational process of comparison.
The "liberal" part is the thing it seems many people, especially many philosophers and generally self-identified "rational" people, get wrong, and so is the main thrust of the "critical rationalist" / "falsificationist" viewpoint under discussion here. (The point of this thread was not to discuss that viewpoint generally, but just about defining knowledge, especially in light of Gettier problems, in the context of that viewpoint. That's why I started another thread just before you responded, to talk about that topic more generally, for the sake of people who don't care about Gettier etc but might care about this).
The traditional, justificationist form of rationalism treats lack of proof as itself a disproof, which critical rationalism like mine rejects. Lack of proof is just nothing, the starting point, and in absence of proof one way or another, any view is tentatively acceptable, under critical rationalism. Justificationism, by contrast, would (à la Descartes) demand that you reject anything that might possibly not be true, find something at the bottom that is definitely true, and build everything from there; things like Agrippa's/Münchhausen's Trilemma show that to be impossible, which would leave you rejecting everything out of the gate and then having no ground to build up from, forever.
While you might not be a fan of Feyerabend, you seem to be an - unwilling - fellow traveler. Feyerabend presents the consequences of Popper's line of reasoning, unpalatable as they might be for some.
Well now I've no idea where to put this response....
Yes, that's entirely what we're saying. Your process isn't 'sorting beliefs', is it? We've just established that. It's pointing out that you ought to do some choosing among those that are contradictory. That isn't actually doing any sorting at all. Falsification does not provide the rational method of comparison, so I don't see how Banno and I arguing against it amounts to us saying that beliefs are "immune to any rational process of comparison".
Quoting Pfhorrest
Repeating it doesn't make the counter-arguments go away. Lack of proof is not the starting point. It is neurologically impossible to derive a belief without proof and extremely difficult (read: impossible for all but the severely mentally ill) to maintain one contrary to all proof.
The "liberal" part seems to summon a fideism you would never escape from. In light of any critical philosophy you would not be entitled to hold any unjustified beliefs whatsoever. Perhaps it's the wording; perhaps you mean something like 'free to entertain possibilities, without justifying them from the ground up'. This is precisely the sort of thing Popper says about the value of metaphysical speculation; we can never predict what critical knowledge will emerge when we have tested our groundless speculations.
Yes, I think perhaps you're attaching too much significance to the word "believe". To me, to believe something just means to think it's true, not any kind of special faithful commitment to it. You're free to think whatever seems true to you is true, for no more reason than that it just seems true to you, even if different things equally seem true to others on account of the same information -- that's underdetermination there -- until such time as some limits on what could possibly be true are found.
To think something is true is to have a faithful commitment to it, as I see it. To entertain something as a possibility is not to believe or disbelieve it. Popper, as I remember it, cites the example of the belief that the Earth is the center of the Cosmos. It was testing that belief that led to the heliocentric model, and later to the realization that the Sun is one among countless stars.
The belief was generally held, to be sure, but those who were minded to test it would have counted it as one possibility that needs to be tested, in light of the observations that had been made that did not fit the model.
I'd just like to add that fideism is oddly binary and extreme. IOW it prioritizes faith, at least when it is centered on religion (and perhaps philosophy) and denigrates reason.
I think that's problematic when taken as one of the two main choices and when applied to beliefs in general.
There is no reason I can see not to have a mixed epistemology. I think we HAVE to have one to manage to live, with beliefs being arrived at in a variety of ways. And one need not denigrate the various methods and choose just one.
I pretty much agree with this.
Quoting Coben
I don’t see fideism as one of the two main choices, but as one of the two main types of error, and the view I advocate is explicitly meant to be a mixed epistemology, taking things that each of those erroneous approaches get right (the criticism of cynicism and the liberalism of fideism), while rejecting both of their problematic extremes.
I’m not sure I know what you’re asking, but maybe a picture will clarify my answer anyway:
I’m advocating for the middle area, and against either of the two extreme ends.
Quoting Coben
Me too.
Quoting Pfhorrest
The picture made it more clear, and I generally understood.
Where would you see beliefs based on intuition?
Fideism demands faith, rather than merely accepting it. Or, better put, it denigrates other methodologies. But there are various non-rational processes that lead to choices, actions and beliefs. Liberalism would seem to allow for these. Critical liberalism would, it seems, be critical of them, but allow them until, if I have interpreted you correctly, such time as they fail repeatedly or are disproved. Is that a fair take?
Sounds right to me! :smile:
It's the 'as if' people live up to mono-epistemologies that I think creates a great deal more strife. And ironically, it creates a kind of holier-than-thou immaculateness in people who are presenting themselves as disliking such metaphors.