Critical liberal epistemology
I started another thread recently specifically about the definition of knowledge, particularly Gettier problems, and the implications of critical rationalism on them. That thread has begun growing into a general discussion of critical rationalism, while I have been waiting for it to die before posting another thread on that broader subject. So I've decided to just go ahead and start that other thread here.
My general position on the methods of knowledge is what I call critical liberalism. That is to say, I hold that rather than by default rejecting all beliefs until reasons can be found to justify them, all beliefs should be considered justified enough by default to be tentatively held (the liberal part) until reasons can be found to reject them (the critical part). It is only when one wishes to assert one belief over another that reasons need to be presented to show the other belief to be in some way wrong, and that alone does not in turn show that the proposed alternative is the one unique correct alternative, only that some alternative is needed, with the one put forth being merely one possibility.
In this manner, knowledge-building is not, on my account, about starting from nothing and building up to grander and grander certainties piece by piece, but rather about starting with limitless possibilities, yet no certainty as to which of them is correct, and then embarking on a never-ending process of narrowing down the range of possibilities by eliminating those that can be shown to be incorrect.
This epistemological view is more generally known as criticism, critical rationalism, or as applied to a narrower set of beliefs about empirical phenomena, as falsificationism; and it has been promoted by philosophers such as Immanuel Kant and Karl Popper.
An immediate consequence of this view is the rejection of a view called confirmationism, which is the common view that if a belief has implications about what else one should expect to find true, and those expectations are later borne out, then that confirms the original belief, or in other words gives further reason to continue holding that belief. Falsificationists and critical rationalists more generally, including myself, hold this to be straightforwardly a case of a logical fallacy called affirming the consequent: given a conditional statement of the form "if P then Q", it does not then follow that "if Q then P", so even if it's true that if P then Q, and you find that the consequent Q is indeed the case, that does not thereby imply that the antecedent P must be the case; it might be, but it just as easily might not be, and to suggest that it must be just because the consequent Q was found to be true is to commit the fallacy of affirming the consequent.
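The fallacy can be demonstrated by brute force over truth assignments; a minimal sketch in Python (the variable names are purely illustrative):

```python
from itertools import product

# "If P then Q" holds in a row unless P is true and Q is false.
def implies(p, q):
    return (not p) or q

# Enumerate all truth assignments for P and Q.
rows = list(product([False, True], repeat=2))

# Look for a counterexample to "affirming the consequent":
# a row where (P -> Q) holds and Q holds, yet P is false.
counterexamples = [(p, q) for p, q in rows if implies(p, q) and q and not p]
print(counterexamples)  # [(False, True)]: Q can be true while P is false
```

The single row P=False, Q=True is enough to show the inference invalid: the conditional and its consequent both hold there, yet the antecedent does not.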
The classic example of this is that if it were true that all swans were white, then any particular swan encountered would be found to be white; but encountering a particular white swan, or even many particular white swans, does not thereby prove that all swans are white, because it might still turn out to be the case (as in fact it is) that some swans are black, no matter how many white swans you've seen. (Indeed, as Carl Hempel points out, if that form of inference were valid, then because "all swans are white" and "all non-white things are non-swans" are logical equivalents, called contrapositives, the observation of any non-white non-swan, such as a green leaf or a red rock, would also count as evidence that all swans were white, which is intuitively absurd.)
Thus one can never in any way positively confirm any beliefs to be true, just by finding that everything else so far seems to accord with those beliefs, because any new piece of evidence might always be the one to show those beliefs false. Beliefs — whole systems of belief, as no individual pieces of them can be tested in isolation — can only be shown false, or not yet shown false; never positively shown true.
But this does not imply that all beliefs not yet shown false are equal. Beliefs not yet shown false can still be more or less probable than others, as calculated by methods such as Bayes' theorem. Falsification itself can be considered just an extreme case of showing a belief to have zero probability: if you are frequently observing phenomena that your belief says should be improbable, then that suggests your belief is epistemically improbable (i.e. likely false), and if you ever observe something that your belief says should be impossible, then your belief is epistemically impossible (i.e. certainly false).
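As an illustration of that probabilistic grading (the numbers here are invented for the example), a single Bayes' theorem update in Python:

```python
# Prior credence in a hypothesis H, and how likely some evidence E is
# under H versus under not-H. All numbers are made up for illustration.
prior_h = 0.5
p_e_given_h = 0.1      # H says E should be improbable...
p_e_given_not_h = 0.9  # ...but E is common if H is false.

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

# Observing the "improbable" E drops credence in H from 0.5 to 0.1.
# If p_e_given_h were 0 (H says E is impossible), the posterior would
# be exactly 0: falsification as the limiting case of this update.
print(round(posterior_h, 2))
```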
When it comes to practical decision-making, it is often most reasonable to act on the beliefs that have such a greater probability, to ensure a greater chance of success. But it is not epistemically wrong to believe something that is unlikely but not actually shown false yet, and as falsificationists like Popper have argued, it is in some ways even better to do so. That is because beliefs that are more specific and detailed, having higher information content, are inherently less likely to be true — or conversely put, a belief that is so broad and general that it could not possibly be false accomplishes that by claiming nothing of substance at all, leaving no claims open to falsify — so such unlikely, high-information beliefs that, nevertheless, still have not been falsified, have withstood much more testing than those that put forward nothing to test. And it is only by taking such risks, sticking our necks out and risking being wrong, that we can hope to find out more about what is wrong, and so narrow in further still on what in turn might still be right.
In general, I hold, we should tentatively adopt more specific and so risky beliefs when we can afford to risk being wrong, but when we cannot afford that risk, we should act in accordance with those beliefs that have the greatest probability of being true.
Comments (246)
You're incorrigible, we're discussing this exact thing in the other thread and instead of answering the questions there, you just reiterate the same theory in another thread? @StreetlightX, @fdrake could we please just move this to the thread that's already about this so that we can keep all the responses together? It's an interesting topic to participate in, but not split between two almost identical threads.
The point of that thread was not to discuss critical rationalism generally, but just about defining knowledge, especially in light of Gettier problems, in the context of that viewpoint. I have a bunch of things about epistemology -- my kind of critical rationalist epistemology -- that I want to talk about, but rather than one enormous post with all of my thoughts about everything to do with epistemology all at once, I'm trying to break it down into bite-sized pieces. I've had the OP of this thread written for a while and have just been waiting for that other thread to die out to move on so that I don't have more than one active thread on the front page at a time.
@StreetlightX, @fdrake, I started this in lieu of asking you to split the tangential discussion out of there for me, so as not to create more work for you, but if you really want to do something about it, feel free to split the argument about critical rationalism more generally out of that thread and move it here. Or not, if you don't care to bother, it's fine by me.
If a religious person started a thread about a particular narrow philosophy of religion topic, and a proposed solution to that narrow problem specifically from a theist viewpoint, and that thread devolved into a general "existence of God" argument rather than the narrow topic it was supposed to be about, would that not call for splitting off into a new thread?
The trouble is these threads have a trend: you start a thread seemingly about A and B in which you make the argument B follows from A. People chime in with arguments against A or B, and you reply "no, this thread is just about the fact that B follows from A, not about either B or A" - then you start a thread about how C follows from A and the same thing happens. The problem is that usually you're not far off right. B following from A is perfectly reasonable, and trivially so. What's of interest is A, not its relation with B. So if we followed your restrictions, you'd just end up with a series of threads where you propose strict logical relations between terms and the only answers you'd get are "Yes - that's a reasonable thing to think". Restating the principles of rational thought doesn't make for a very interesting thread. Discussing different beliefs which can nonetheless arise from the application of such principles does.
I think, if you start a thread in which A is already quite a specific proposition, and its relation to B is trivially shown, we needn't start a new thread to discuss A; we'd end up with a massive number of very short threads that way, and a great difficulty choosing where best one should respond.
In this case, the foundational arguments underlying this point of view that is the topic of this thread are to be found in this thread, but I've stopped linking back to that because you derailed the entire thing into an attack on one tiny facet of the general view espoused there, so that thread didn't end up actually being about the general principles I'd like to refer back to, but just about your objections to moral objectivism, which was not the point.
Again, as much as I hate comparing myself to religious people: if someone wanted to discuss a theistic view on ontology, a theistic view on epistemology, a theistic view of mind, a theistic view of ethics, a theistic view of will, a theistic view of politics, etc... and every single time they tried to discuss some specific view on a specific topic that presumes theism, the thread became just a big general free-for-all about the existence of God in general... that's kind of derailing the thread.
And now you're derailing this thread, like you have many before, by complaining about its very existence, since you've already been (analogously) "arguing about the existence of God" in another thread where that wasn't the topic, and now you're offended that I appeal to that same thing as a foundational belief in another thread about another topic.
It's the same topic you derailed the other thread into, it's not the same topic as the OP of the other thread.
ETA: Actually it's not even the same topic you derailed the other thread into, because you derailed the other thread into an attack on rationalism generally, not even critical rationalism specifically.
A, ~A, B, ~B, C, ~C...
Then,
Quoting Pfhorrest
So we now want to assert A over ~A; it seems we take some other belief, say D, such that D implies that ~A is false, and hence D implies A.
But why do we believe D? By the starting hypothesis, we have not decided that D rather than ~D is the case...
Do we refer to some further belief, E? Then we simply iterate the problem.
I can't see how this gets epistemology anywhere.
The trouble is, if you are going to show that the grand conjecture is false, you are going to need something that is both incompatible with that conjecture, and true...
Hence, you will need to already have other true propositions, besides the grand conjecture, in order to try to disprove the grand conjecture...
Confirmation holism, again...
Except it's still not clear that this is what your method does. Where's the "false" in your "falsifying"?
Quoting Pfhorrest
Yes, that's entirely what we're saying (and @Srap Tasmaner, above, too, I think). Your process isn't 'sorting beliefs'. It's pointing out that you ought to do some choosing between those that are contradictory. That isn't actually doing any sorting at all. Falsification does not provide the rational method of comparison, so I don't see how Banno and I arguing against it amounts to us saying that beliefs are "immune to any rational process of comparison".
Quoting Pfhorrest
Repeating it doesn't just make the counter-arguments go away. Lack of proof is not the starting point. It is neurologically impossible to derive a belief without proof and extremely difficult (read impossible for all but the severely mentally ill) to maintain one contrary to all proof.
I don't think that's shown, or right. If I believe Jon has blonde hair, I can positively affirm this. Rather, it is general laws, such as are sought in science, that cannot be confirmed, because exceptions to those laws are always, in principle, discoverable.
It's possible that a given belief is not falsifiable at all, which should be a sign that it is extremely unlikely rather than robust. For this reason, I prefer a default position of scepticism for want of a good cause to entertain the idea.
The other benefit of scepticism is that it saves one from believing in two contradictory but as yet unfalsified theories: pending good cause to believe in one over the other, I'll usually credit neither.
Yes, but an important thing is that some beliefs are about the relations between other beliefs. If C = "A implies B", then you can rule out the possibility of belief D = "A and ~B and C". You still don't know whether C, and if C, whether A or ~B, but you know for sure that ~D.
Quoting Banno
Yes, that's the idea.
Quoting Banno
No, you only need to show that the grand conjecture contains inconsistencies.
I gave this example in the other thread but I'll repeat it here. Say it seems to you that doing a certain dance causes it to rain. Why it seems like that to you isn't important, that's just your "grand conjecture", it's the possibility out of the infinite possibilities that you've initially picked for whatever non-rational reason. Then one day you think you did the dance, at least you sure tried to, but then it doesn't seem to rain, at least not like you expected it to. So now it seems to you that A implies B, but also that A and ~B, which aren't possible together. So you have to revise your beliefs somehow or another.
You could just reject that A implies B: give up the theory that dancing causes it to rain. (But you don't have to, and actual sophisticated Popperian falsification never said that you do have to; you're arguing against the strawman of dogmatic falsificationism).
You could instead reject that ~B: insist that it did rain in a way consistent with the rain dance theory, but for some reason it just didn't seem to rain to you.
Or you could instead reject that A: figure that you must have done the dance wrong somehow.
In any of those cases, you're also going to have to rearrange the rest of your beliefs somehow or another to accommodate whichever of those you chose to revise. There's going to be many, many ways you could revise the rest of your beliefs to accommodate any of those. But somehow or another, you've got to change something, on pain of inconsistency, since you can't consistently believe that dancing makes it rain, you danced, and it didn't rain.
Quoting Isaac
I think this highlights a possible source of confusion between us here. When I'm speaking of sorting beliefs, I'm speaking of sorting between entire systems of belief, not merely between atomic propositions. You can always save some atomic proposition by sacrificing others instead, but every time something seems to happen contrary to what your complete system of belief says should happen, you've got to make some change or another to your complete system of belief, and the repetition of that gradually sorts out subsets of the set of possible systems of belief.
Quoting Isaac
I think you're confusing me with @Philosophim here, and also conflating "proof" with "suggestion".
It was Philosophim who was saying that you could come to some belief completely at random, and I agree with your criticism of that (as did he, in the end). I'm not saying everyone starts out with a blank mind. We start out with some ideas or others about how things are, but those ideas aren't proven yet (and that's fine), they're just our intuitive impressions of things, and different people may have different intuitive impressions of the same things (and that's fine).
What I'm saying is that by default, none of those differing intuitions has the burden of proof against the other; they're all equally fine interpretations of the limited available information, so long as they have all accounted for the same available information, and it's not until new information can be found to rule one or another out that there's any epistemological difference between them.
Quoting Kenosha Kid
Finally, someone comments on what I expected to be the controversial aspect of this (the "liberal" part), and not the boring generally uncontroversial aspect of it (the "critical" part).
Quoting Kenosha Kid
Can you really though? I mean, pragmatically speaking, in an ordinary sense, sure you can: you can look at Jon and see his hair is blonde. But in a technical sense, in the way Banno and Isaac are on about, it's always possible to instead revise a bunch of other beliefs to account for why it seems to you like Jon has blonde hair but somehow he really doesn't.
Quoting Kenosha Kid
There are two kinds of skepticism to distinguish here. I am completely supportive of one of them, the kind I call "criticism" (whereby it is possible to show reason to reject a belief), which Banno and Isaac are arguing against; but the other, which I call "cynicism" (whereby it is necessary to reject any belief until reason is shown to accept it), is what I'm arguing against, and which you seem to be arguing for here.
The most archetypical kind of cynicism, in this sense, is justificationism, of which most theories of knowledge are a form, though usually only tacitly, without their proponents realizing it. Justificationism is just the position that rationality means only holding opinions when you have reason to hold them.
But a famous trilemma, known by various names such as Agrippa's Trilemma or the Münchhausen Trilemma, illustrates how this principle leads directly to cynicism in the sense I mean here, or else to something tantamount to fideism (the rejection of what I call "criticism") instead. For any reason put forth in support of some opinion is itself another opinion, for which the justificationist must then, if consistent with this principle, demand yet another reason. But that in turn will be some other opinion, for which the same demand for justification must be made. And so forth ad infinitum. This can only lead to one of three outcomes:
- The most typical one is foundationalism. This abandons the principle of justification at some point by declaring some step of the regress of demands for justification to be self-evident, beyond question, without need of further support. That is transparently tantamount to fideism. Nevertheless, as I will soon explain, I have sympathy for the need to hold some opinions without them being rigorously supported from the ground up. I simply reject holding them to thus be unquestionable.
- Another possible outcome is coherentism. This appeals at some point to an earlier step in that regress as support for a later one, establishing a circular chain of reasons that together can then support other reasons. I am sympathetic to the coherency criterion employed here, as surely all of one's opinions must be consistent with each other, and finding inconsistencies is a good reason to rule out some opinions. But while that is a necessary feature, I think it is not a sufficient one: mere consistency is not enough to justify opinions in the sense demanded by justificationism, without again falling to fideism. For as that whole circular chain of reasons is then collectively unsupported and held as needing no further support besides itself, it is then, as a whole, tantamount to one big foundational, and therefore fideist, opinion.
- The last possible outcome, and the most honest application of justificationism (in that it never breaks from the demand for reasons, to hide instead in fideism), is infinitism. This accepts the infinite regress of demands for justification, leaving the initial opinion (and any and every opinion looking to be supported) forever insufficiently supported. That leaves one unwarranted in holding any opinion, and so is transparently tantamount to nihilism. Self-avowed infinitists do at least nominally hold that knowledge is still possible, and therefore conclude that it must somehow be possible to have an infinite chain of justification, even while acknowledging that it would be impossible for anyone to ever complete one in practice. While I am again sympathetic to this unending search for deeper and deeper principles to underlie our opinions, as I will soon elaborate, this infinitist position seems to me simply incoherent when framed as a form of justificationism: if you cannot ever complete the chain of justification, and you must have justification to have knowledge, then you cannot ever have knowledge.
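The three outcomes above can be caricatured as the three ways a chain of "why?" lookups can terminate; a toy sketch in Python (the belief names and the justification mappings are invented):

```python
# Each belief maps to the belief offered as its justification (or None).
# Following the chain from any starting belief ends in one of three ways.
def trace(justifies, start, limit=100):
    seen, current = set(), start
    for _ in range(limit):
        if current is None:
            return "foundationalism"   # chain stops at an unjustified axiom
        if current in seen:
            return "coherentism"       # chain loops back on itself
        seen.add(current)
        current = justifies.get(current)
    return "infinitism"                # chain never terminates (within limit)

foundational = {"A": "B", "B": None}       # B is taken as self-evident
circular = {"A": "B", "B": "C", "C": "A"}  # reasons form a closed loop

class EndlessReasons(dict):
    def get(self, key):
        return key + "'"  # every reason demands yet another, fresh reason

print(trace(foundational, "A"), trace(circular, "A"), trace(EndlessReasons(), "A"))
```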
Most theories of knowledge are either foundationalist or coherentist, and most of those who reject both of those conclude that therefore knowledge is impossible, seeing infinitism to be as incoherent as I do.
But a few philosophers, including Immanuel Kant and Karl Popper, have instead rejected the justificationist principle tacitly underlying all of those positions, and instead say, as do I, that it is not necessary to reject every opinion until you can find reasons to justify it; it is only necessary to reject an opinion if you find reasons to reject it, and it is acceptable to hold any opinion, for no reason at all, until such reasons to reject it are found.
Like with coherentism, contradictions between different opinions are good reasons to reject some or all of them; and like with infinitism, this process of whittling away incorrect opinions is unending. But because both coherentism and infinitism tacitly accept the justificationist principle, neither of them quite adequately escapes the dilemma of either following it into nihilism, or else abandoning it for fideism.
When considering reasons to intend something rather than reasons to believe something, this anti-justificationism seems largely uncontroversial. Most people will accept that it is acceptable to do something simply because you want to do it, for no particular reason, so long as there is not a good reason not to do it. We don't demand that everybody stop doing anything at all until they can show that what they want to do is justified by the need to do something that is justified by the need to do something that is justified by the need to do something... ad infinitum. We instead just accept that they're free to do whatever there's no reason not to do.
My rejection of justificationism includes that kind of freedom of intention, and to deny such freedom of intention, as in to insist that nobody does anything until it can be shown that there is a good reason to do so, would also qualify as a form of cynicism in the sense that I am against here. But my rejection of cynicism also extends equally to a freedom of belief like that put forth by philosophers such as Kant and Popper. I say that it is not irrational to hold a belief or an intention simply because you are inclined to do so, for no reason; it is only irrational to continue to hold it in the face of reasons to the contrary.
But in rejecting justificationism, I am not at all rejecting rationality, or the importance of reasons. I am still against fideism, against irrationally holding opinions in the face of all reasons to the contrary of them, or asserting them to others with no reasons to back them. I only hold, for the reasons I have shown, that such an anti-justificationist position is the only practicable form of rationality, the only one that leaves us with reasons from which to reason.
Justificationism, if true, would make it impossible to ever rationally hold an opinion, instead insisting either that we hold no opinions, or else hold some core opinions to be, quite irrationally, beyond question.
In rejecting justificationism, we make room to hold some opinions, still open to question, that can nevertheless serve as reasons to hold or reject other opinions. We do lose any hope of ever having absolute certainty in any of those opinions, as they all remain constantly open to question and revision, but justificationism never offered any hope of rational certainty anyway, only the irrational false certainty of fideism (or else none at all), and with justificationism out of the way we can at least begin to compare our tentatively held opinions against each other and progress towards gradually better sets of opinions.
No, you don't, on your very own account.
Anything goes. It didn't rain because the third violinist hit a bum note, or because the chorus girl's mum was a non-believer.
Nor have you found reason for rejecting justification. Anything goes.
You got that D = (A and ~B and C) where C = ~(A and ~B), so D = (A and ~B and ~(A and ~B)) and so is flatly self-contradictory, right? That's why we can know that ~D.
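That ~D is knowable a priori can be checked mechanically; a quick sketch in Python:

```python
from itertools import product

# D = A and not-B and C, where C = not (A and not-B).
def d(a, b):
    c = not (a and not b)
    return a and (not b) and c

# D is false under every truth assignment, so ~D is a tautology.
assert all(not d(a, b) for a, b in product([False, True], repeat=2))
print("~D holds in all cases")
```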
Quoting Banno
In which case the simple belief that dancing makes it rain is false, and needs to be modified with something else that takes into account the violinist's performance or people's beliefs too.
Quoting Banno
Then "not just anything goes" goes too.
Why are you even arguing if you think there's no such thing as any opinion being wrong? It's not like you think I'm wrong or something then, is it?
But now you're talking about ceteris paribus clauses and that's a whole 'nother minefield, as Nelson Goodman showed.
Best thing you've written in a while.
Ceteris paribus clauses are exactly the thing at issue here. Falsification in the Popper sense, but not the strawman sense, doesn't assume there are any ceteris paribus clauses. If you think that dancing causes it to rain, without any ceteris paribus clauses, that means you think that if you dance then it will rain, period. So if you do actually dance (you didn't dance wrong or something) and it doesn't actually rain (it didn't just seem to not rain somehow), then your belief that dancing makes it rain must be false as stated. But maybe some modified version of it, with ceteris paribus clauses, could still be true.
Adding them doesn't help because they can't be properly specified. Roughly I took @Banno's point to be that, since you are putting an essentially undefined set of beliefs on the table, you have far too many options for disconfirmation. It's the same as what goes wrong with c.p. clauses.
Firstly, that's not falsification. Falsification is about scientific theories, not logic. All you've done there is assert the laws of logic. If what you want to say is that one can hold any belief as justified so long as it doesn't contradict the laws of logic, then I don't think you'll find a single opponent; as I said above, it's trivially true. The point about the laws of logic is that we didn't create them out of thin air and then teach them to everyone. They're like the law of gravity: things always used to fall toward the earth; we didn't force them to by passing the new law. People think in ways broadly construed by the laws of logic, which we then codified to help us avoid a few complex mistakes.
Secondly - mistakes. As I mentioned before (which you ignored) comparing the logical coherence in this way between even as few as ten networked beliefs will require you to carry out 3,628,800 tests. Given that the types of belief you're talking about are very fine-grained here, we'd be needing to compare several thousand interconnected ones. We all know that we shouldn't hold two contradictory beliefs. The challenge is finding a pragmatic way to avoid it. Testing each one for logical coherence with each other one, and discarding only the belief that two logically incoherent beliefs can be held simultaneously, is an impractically obtuse way of doing it. Or as Srap put it
Quoting Srap Tasmaner
Quoting Pfhorrest
Nobody believes that anyway. There's been no result gained from your experience of dancing. It was already the case that you couldn't believe these three things simultaneously. They merely exhaust the set of all possibilities; we can see that without dancing. You're implying the processes of falsification (testing one's beliefs) but reverting to simple logical laws when that process fails to yield anything useful. The actual testing of the theory "doing a certain dance causes it to rain" has no effect whatsoever on the conclusion you claim that test yields: "you can't consistently believe that dancing makes it rain, you danced, and it didn't rain". We knew that by the laws of logic before we did the test.
Quoting Pfhorrest
No, you absolutely don't; I've just demonstrated that with your own example. The required change to your system of beliefs is not impacted one iota by the "something happening contrary to that system". All you're saying in your own example above is that you cannot have a logically inconsistent belief system. We knew that before anything happened.
The meat of the process happens when "you're also going to have to rearrange the rest of your beliefs somehow or another to accommodate whichever of those you chose to revise". If you think you did the right dance, and dancing supposedly makes it rain, but it doesn't seem to rain, and then you decide to resolve that conflict by rejecting the apparent fact that it didn't rain, then you're going to have to revise a whole lot of something or other to explain in what way it "actually did rain" even though it doesn't appear in any way to have rained to you. Invent a deceiving demon, or invisible rain, or something to excuse the appearance of no rain despite the "reality" you want to claim that it did rain.
And likewise with any of the other decisions you could make to resolve the conflict: reject that you did the dance right, or reject that dancing causes it to rain, and you have to revise whatever beliefs led you previously to believe that you were doing the dance right, or that that would make it rain.
You have some complete network of beliefs according to which you ought to conclude from your experiences (i.e. you have all these theories that laden your observations such that you interpret them to mean) that you did the dance, that that should cause it to rain, but that it didn't rain. Yet you can't conclude all of those things at once. So you have to change something about that complete network of beliefs (the theories ladening your observations) to allow you to interpret your experiences in a way that doesn't imply that contradiction.
Yep. A fact which...
A) is not what falsification is about
B) we knew was true no less prior to the 'testing' than we did after it. This fact was completely unaffected by the actual testing of the theory
C) helps us not one iota to sort our beliefs because the only one we must reject is that we can believe contradictory things concurrently, a belief which we never had in the first place.
See earlier about strawmanning falsification. Actual falsification that Popper et al supported is not the dogmatic falsificationism that Quine et al opposed.
Quoting Isaac
We knew prior to the testing that we could not hold beliefs that would result in a contradiction. We did not know prior to the testing that our beliefs would result in a contradiction.
According to the beliefs we held before, what we seem to have observed should not have been logically possible, and therefore should not have been observed. Yet we seem to have observed it anyway. Therefore we must revise the theories ladening those observations, so that what we observed is not interpreted as being that logical impossibility.
I’d also like to repeat to you the same question I posed to Banno. If you think it is not possible to show any opinions to be incorrect, what exactly are you trying to do by arguing against mine? If you’re right to think that nothing can be wrong then I’m consequently not wrong to think that some things can be wrong.
And again, I really really didn’t expect the “sometimes beliefs can be shown to be incorrect” part of this to be the hot-button issue, but rather the “you’re not required to show that your beliefs are correct, you can just have them as you like (until they’re shown incorrect)” part, which I’d think you would like (besides the part in parentheses).
I think the idea of falsification as a way of narrowing the range of what could be true is really appealing, it's just not the whole story. One of the things that's wrong here, I think, is that the set of beliefs under consideration is treated as if it's frozen; it's entirely retrospective. There's some sense to this for the start of diagnosis -- I'm in an epistemic pickle, how did I get here? -- but there doesn't seem to be a way out if all you do is re-evaluate and re-arrange and re-classify that frozen set of beliefs. There are always ways to do that.
As Dewey would insist, we live constantly projecting into the future. The resolution to this kind of problem is going out and getting more data, which is what we naturally do anyway. We are never properly represented as having a fixed set of beliefs to play with; the contents of that set -- insofar as there even is such a thing -- are constantly shifting, in large part because we make it so. But that means the freedom you think you have to re-arrange your web of beliefs however you like is probably illusory, because taking action and gathering more and new types of data actually works, and there's no reason it should, if it's just a matter of choosing, even arbitrarily, what to keep and what to jettison.
I'll give an example, one that I always thought kind of illustrates the implicit existence of the web of beliefs, but will make Dewey's point as well: the discovery of cosmic microwave background radiation. I always loved this story. Penzias and Wilson weren't even looking for it, but they had a nice radio telescope on the roof and had done a good job isolating what it should pick up. But there's a hum. At the very beginning, you count this as completely unexplained. What's helpful about the web of beliefs thing is that you can take a step back -- what other assumptions are we making? Top of the list is that the equipment is working properly. In the movies, at this point someone (or the machine itself) will "run a full diagnostic". So they did that. Maybe there's a problem on the roof -- in other words, something we can't even see standing here in the lab. They climb up on the roof and find their beautiful dish full of pigeon nests and pigeon poop. Chase 'em off and scrub the thing clean, then check again. It's still there.
Sussing out your assumptions is helpful, because one of those assumptions (that the machines are working, that there is no obstruction in the dish) could be wrong. But then you take action. You check. And once you've "run a full diagnostic" or maybe two, you count the equipment as working. You're done there. It doesn't stay forever in epistemic no-man's land as maybe still not working. Same for the roof. Once you've climbed up there and taken the action to nail down this assumption -- that the dish is in working order -- you're done. Now you have actually ruled out the hum as being an artifact of your equipment in some way, and you conclude that it is real and worth thinking about.
But in none of this are we just playing a formal game with a frozen set of beliefs and making choices about which to keep and which to discard.
I'm not sure exactly what point you're arguing for here.
Previously I thought you were arguing along the same lines as Isaac and Banno that you can't ever conclusively falsify any particular belief because you can always revise a bunch of other beliefs instead to excuse retaining that particular belief in the face of evidence that would otherwise seem contrary.
But now you seem to be saying that you can, with enough effort and checking, conclusively rule out some of the possible alternative explanations (the equipment isn't working, the dish is dirty, etc) and so be compelled to accept some particular conclusion (there actually is microwave-frequency radiation coming from every direction in the sky).
I actually agree with both of those things, in different ways. Technically you can always make something up to excuse any observation without it compelling you to reject some particular belief you want to retain. But practically there comes a point when what you have to make up to excuse the observation is so far-fetched, meaning that it requires you to change so much else about your belief system, that it doesn't make pragmatic sense to go that route rather than the far more parsimonious route.
E.g. with my previous example scenario about the rain dance theory, you could excuse the apparent lack of rain, without rejecting that you did the right dance and that that dance causes it to rain, by saying that the rain is invisible, intangible, etc. But then you have to rearrange all the rest of your beliefs to accommodate such a thing as undetectable rain. Probably far easier to just reject the rain dance theory. Actually, it's probably easier still to just reject the belief that you did the dance right. But then after trying all the different variations on dances, checking all the other confounding factors -- just like running a diagnostic on the equipment, making sure the dish is clean, etc -- at some point it becomes more plausible, requires fewer stretches of the imagination (modifications of other beliefs), to just reject the rain dance theory entirely.
1. Falsification of p is necessarily confirmation of not-p, however general not-p might be. It follows that the logic of falsification is no different than the logic of verification.
2. No one believes anything just because they feel like it, so the benefit flowing from that purported freedom is an illusion.
3. No system allows us to sidestep fideism, because given the scope of human knowledge, any individual will necessarily take the majority of his or her beliefs on faith.
The logical forms of falsificationism and confirmationism/verificationism are completely opposite: one is the valid deduction of modus tollens, the other is the fallacy of affirming the consequent. One is "if P then Q, not Q, therefore not P" (valid), the other is "if P then Q, Q, therefore P" (fallacious).
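This asymmetry can be confirmed exhaustively with a two-variable truth table; a minimal Python sketch (the `valid` helper is my own illustration, not anything from the thread):

```python
from itertools import product

def valid(argument):
    """True iff no truth assignment makes all premises true and the conclusion false."""
    return all(concl or not all(prems)
               for P, Q in product([True, False], repeat=2)
               for prems, concl in [argument(P, Q)])

# Modus tollens: if P then Q; not Q; therefore not P.
modus_tollens = lambda P, Q: (((not P or Q), (not Q)), (not P))
# Affirming the consequent: if P then Q; Q; therefore P.
affirm_consequent = lambda P, Q: (((not P or Q), Q), P)

print(valid(modus_tollens))      # True: valid in every assignment
print(valid(affirm_consequent))  # False: countermodel at P false, Q true
```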
Then there's the discursive difference, in terms of "epistemic rights", of critical rationalism (of which falsificationism is just the empirical species) vs justificationism: under critical rationalism you are permitted any belief by default until reason to prohibit it is found, while under justificationism you are prohibited any belief by default until reason to permit it is found.
Quoting Janus
"Taking things on faith" in the sense you seem to mean there is the exact same thing as I mean by "believing because you feel like it": something just seems true to you, you can't conclusively prove that it is, but you believe it anyway because you pragmatically have to believe something or other.
"Faith" in the sense of the fideism I'm opposed to is not just that. That is just the principle of "liberalism" I mentioned in the OP. Fideism is the opposite of the principle of "criticism" I mentioned in the OP. In terms of epistemic rights as above, fideism is not merely taking beliefs as permitted by default (which I'm for), but as obliged by default, as epistemically necessary, and so immune to questioning (which I'm against).
The opposite of the "liberalism" I mentioned in the OP, what I called "cynicism", either requires you resort to fideism in the above sense to find some ground to build up from, or else resign yourself to nihilism, having no ground to build from. "Liberalism", the opposite of that "cynicism", frees from needing a ground to stand on, letting you just float (so long as you avoid things that would drag you down), and so saves you from nihilism without resorting to fideism.
As I see it falsification has nothing at all to do with deductive logic. It has to do with the inductive and abductive logic of our dealings with the empirical world. Falsifying, for example, the assertion "Fire is phlogiston" is exactly equivalent to verifying "Fire is not phlogiston"; the logic is the same. If you think there is a difference there, then I'd be keen to hear your explanation.
Now say instead you think fire is not phlogiston. On account of that you expect to see certain things. You go make observations and you don't see those things. On that basis, you hold your theory that fire is not phlogiston to be falsified. That is a case of modus tollens: you think that not-A implies not-B, you see B, and you take that as evidence that A. That's a valid inference.
Well, that's a debated issue, but in neither Quine's view nor Lakatos's is it described as merely eliminating the logically inconsistent. It is about the role of observation, in whatever interpretation.
Quoting Pfhorrest
You're ignoring the argument. It's pointless just repeating the same assertion without addressing the issue. You said...
Quoting Pfhorrest
So the only thing you are able to rule out here is belief D that "A and ~B, where A implies B". No observation rules this belief out. It is ruled out by logic; it doesn't require any observation at all. So "we did not know prior to the testing that our beliefs would result in a contradiction" is false in respect of the beliefs you're claiming to be able to rule out: "beliefs about beliefs".
Quoting Pfhorrest
Show your opinion to be less pragmatic, less honest, less useful... Show other people that there are reasonable alternatives... Undermine the rhetorical power such views have... There's all sorts of reasons to argue against an opinion other than the expectation that it can be proven wrong by some mathematical algorithm.
But other (background) beliefs would compel us to interpret our observations (i.e. laden them with the theory contained in those beliefs) as demonstrating that self-contradictory belief to be so. Since a self-contradictory belief cannot be so — which we already knew, yes — we cannot without contradiction maintain those beliefs according to which our observations demonstrate the contradictory belief to be true.
You keep saying I’m just repeating claims, but I’m repeating parts you apparently didn’t read, since they’re already addressing the criticisms you yourself are repeating.
If we ignored the potential fallibility of our background beliefs, an observation that is contrary (as interpreted by those background beliefs) to the particular belief we’re aiming to test would straightforwardly falsify that belief. That’s dogmatic falsificationism. Confirmation holism rightly points out in response to that that those background beliefs through which we’re interpreting our observations are themselves as subject to revision as the belief we’re aiming to test.
But that still leaves you with an observation that your background beliefs plus the theory under consideration together say should be impossible — logically impossible, in conjunction with all those beliefs. All those beliefs and the thing they would have you say you observed cannot all be true at once. So, given that you did certainly observe something or other, you’ve either got to straightforwardly follow the implications of that observation (as interpreted through your background beliefs) on the falsity of the theory under consideration, or change something else about those background beliefs to allow you to reinterpret your observation as something else that doesn’t falsify the theory in question.
Either way, the observation of something your beliefs say should be logically impossible compels the revision of some beliefs or others to avoid having to conclude that you observed something logically impossible.
But they don't "demonstrate the contradictory belief to be true". Observations don't demonstrate any belief to be either true or false, they underdetermine.
Try it in these terms...
Belief A is a belief about logic {X and ~Y, where X implies Y, is inconsistent}.
Belief B is an inductive belief {that X implies Y}
Belief C is a belief in an observation {that I just observed X and also ~Y}.
Resulting from C one could believe ~C, ~B, or ~A. One cannot believe A, B, and C. But we knew this all along. Prior to C, one could believe A and B. After C, one could believe A and B. C hasn't changed anything.
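The triad can be checked mechanically, which illustrates the claim that the conflict is settled by logic alone; a hypothetical Python encoding (my own sketch, using the post's X and Y):

```python
from itertools import product

# Belief A (the logical law) is what the enumeration itself enforces.
B = lambda X, Y: (not X) or Y   # inductive belief: X implies Y
C = lambda X, Y: X and not Y    # observational belief: X and not-Y

assignments = list(product([True, False], repeat=2))
# B and C are each satisfiable on their own...
assert any(B(X, Y) for X, Y in assignments)
assert any(C(X, Y) for X, Y in assignments)
# ...but no assignment satisfies both, so their joint inconsistency
# is knowable prior to any observation.
assert not any(B(X, Y) and C(X, Y) for X, Y in assignments)
```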
Quoting Pfhorrest
Not with sufficient additional beliefs about C, and additional beliefs don't falsify anything.
Going from believing A+B to believing A+B+C+D hasn't falsified A, B, C, or D has it?
They underdetermine precisely because they are theory-laden and those theories with which they are laden can always be changed to change what the observation is taken to mean. Changing those theories to reinterpret the observation in a way that doesn’t falsify the theory you’re trying to test is still changing what theories you believe in response to observation. That you have a choice of which theories to change doesn’t undermine that.
Quoting Isaac
Belief C hinges on the theories with which the observation is laden. If you reject C, then you have to change those beliefs that would otherwise lead you to conclude that C. If you accept C, then you have to reject B (or A, but if we’re getting into the realm of possibly rejecting logical entailments then we’re free to be wildly inconsistent and not reject anything; all this is premised on caring about logical consistency).
In any case, you have to reject some beliefs you already had: either throw out B (the obvious first choice), throw out some part of the background beliefs that lead you to believe C (probably a much taller order), or throw out A (if you just want to give up on logical consistency entirely).
Quoting Isaac
If you have to add additional beliefs to hang on to your belief system — your belief system cannot retain consistency with your experience without adding those other beliefs — then you have falsified the negations of those beliefs. You still had to change something about your belief system, because the belief system exactly as you had it before proved irreconcilable with observation, i.e. it was falsified.
Here's an example that might make it clearer to you: say the hypothesis is that evolution has occurred; if that were true we might expect to see fossil remains that show structural similarities that we could take to support that conjecture. If we don't see fossil remains that show such a development that does not falsify the hypothesis of evolution definitively, because the fossils may have been destroyed by processes of which we are not aware.
To put it another way, 'if evolution then fossils with commonalities of structure in different species' can just as well be reversed: 'if fossils with commonalities of structure in different species then evolution'. This reversal would be a fallacy in deductive reasoning but not so in inductive reasoning. Of course there is no logical entailment in either of these inductive conjectures.
The whole point is that inductive reasoning is not valid. No number of observations of the consequent of an implication can tell you that the antecedent of it is true. It's invalid for one case, and it's still invalid after a million cases.
If all swans are white, then this swan will be white... this swan is white, therefore all swans are white? Of course not, but that swan is also white, and that one too, I've seen a million white swans, so clearly all swans must be white, no? Oh look, here's that new bird from Australia... (Also: here's a green leaf... and a million other green leaves. "All swans are white" = "All non-white things are non-swans", and a green leaf is a non-white non-swan, so a million green leaves proves that all swans are white... no?)
OTOH even a single observation of the negation of the consequent validly tells you the antecedent is false.
The thing is, the antecedent is more complicated than you'd initially assume, a la:
Quoting Janus
This is the underdetermination and theory-laden-ness that Isaac and Banno keep harping on, which is completely correct, but is not a point against falsificationism. It's not so simple that "if evolution then fossils"; it's actually "if evolution and a bunch of other assumptions then fossils". Not seeing fossils doesn't necessarily falsify evolution, but it falsifies something; if you don't reject evolution, then you have to reject some of those assumptions that would lead you to expect to never have evolution with no fossils, and in either case you've rejected something you believed on account of evidence that was contrary to the conjunction of all your beliefs.
Formally, you've got "if (A and B and C and ...) then Z", and "not-Z", therefore "not (A and B and C and ...)" or equivalently "not-A or not-B or not-C or ...".
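The last step of that schema is De Morgan's law, which a quick exhaustive check confirms (a minimal sketch; Z drops out once modus tollens has been applied):

```python
from itertools import product

# From "if (A and B and C) then Z" plus "not-Z", modus tollens yields
# "not (A and B and C)"; De Morgan's law turns that conjunction's negation
# into the disjunction "not-A or not-B or not-C". Check every assignment:
for A, B, C in product([True, False], repeat=3):
    assert (not (A and B and C)) == ((not A) or (not B) or (not C))
print("not(A and B and C) matches not-A or not-B or not-C in all 8 cases")
```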
When it comes to science, inductive and abductive reasoning is all we have. My point has been that there is no real difference in inductive and abductive reasoning between verification and falsification, insofar as neither verification nor falsification is deductively certain, i.e. logically entailed. And that is why I say it is oversimplifying and misleading to present them as being logically the same as modus ponens and modus tollens respectively.
Also, the point about the whole network of knowledge being in play is well taken against oversimplifying the issue I think.
On a justificationist account, sure.
The critical rationalist / falsificationist account is aiming precisely to rectify that problem.
Induction is fine as a way of generating hypotheses, by identifying patterns in observations. But then when it comes to seeing if those hypotheses really work out, finding out if they're really true, seeing that the pattern you suspect always holds has continued to hold so far doesn't really tell you much of use.
That last sentence reminds me of a great little video that turns out to be about exactly this subject:
[video]https://www.youtube.com/watch?v=vKA4w2O61Xo[/video]
I don't see how it can do that when it necessarily relies on the inductive assumptions that have been codified as the so-called laws of nature. If nature were not assumed to be invariant, then all of science would be utterly at sea; and I can't see how falsification could help with that, because the compass is always abductive reasoning based on induction (the expectation that the law-like behavior of things will remain as it has been found to be). Falsification can never be definitive, any more than verification can. How much less definitive would it be if there were no inductive assumption that the laws of nature cannot change?
When it comes to simple observation claims I agree that verification and falsification do appear different. But again they can be reversed to make verification and falsification logically equivalent. So, I could assert "Not all swans are white", and the observation of a black swan would verify this claim just as it falsifies the obverse claim "No swans are white".
PS. Cute video! I got the rule before the participants, but it took a little while.
Assuming that there are some invariant laws of nature or another is not itself induction. Only seeing a lot of examples of a pattern and taking that as evidence that that pattern is an invariant law of nature is. Falsificationist methods still look for invariant laws of nature, but rather than inductively inferring them, they try to find bounds on what they could be. That's what the video demonstrates: finding examples of your hypothesis being right doesn't tell you anything. Finding out what hypotheses are wrong tells you something useful.
Quoting Janus
The observation of a black swan would falsify that all swans are white and whatever background assumptions, which does indeed then entail that not all swans are white or some of those background assumptions are false; but that's just because "there exists a non-white swan" is logically equivalent to "not all swans are white", so if there seems to be a non-white swan then either there is indeed a non-white swan or some of your assumptions through which you're interpreting the apparent observation of a non-white swan are false.
The observation of a black swan would tell you nothing at all about whether "no swans are white". Just flip it around for the obvious counterexample: medieval Europeans would have claimed that no swans were black, and looking around to see lots of white swans... would not confirm that, because they always might, and eventually would, come across some black swans anyway. There being no white swans doesn't even demand that there be any black swans... there could be no swans. It's true that there are no pink unicorns, not because there are lots of blue unicorns, but because there are no unicorns at all.
We assume that the laws of nature are invariant because all our observations to date verify that they have been. Or obversely, you can say that none of our observations have falsified that they have been. The assumption that they will be invariant in the future is simply an expectation that we share with the animals; the habit of thinking that things will be as they have been. The point here is that verification is nothing more than lack of falsification, and vice versa; they are two sides of the one inductive coin; and neither of them definitive.
Quoting Pfhorrest
Sorry, that wasn't as I intended ; it should have been "So, I could assert "Not all swans are white", and the observation of a black swan would verify this claim just as it falsifies the obverse claim "All swans are white".
It is. Your claim is the ability to falsify them, not change them around.
Quoting Pfhorrest
No, as I said below, usually all that's needed is additional beliefs - no changes required.
Quoting Pfhorrest
Great, glad we agree there, if we can drop ~A as an option, it makes discussing the example simpler.
Quoting Pfhorrest
a) Why is throwing out C a much taller order? In the vast majority of complex cases, the idea that our observations are incorrect is the go-to answer to any inconsistencies. We can't even trust our observations with the simplest of matters (see optical illusions), so when it comes to interpreting complex scientific experiments, rejecting C is the number one choice.
b) More importantly, you're doing your old trick of completely ignoring the bits of the counter argument you don't like. I've just explained how there's no need to reject C; one usually adds beliefs to C, such as the example Srap gave (we are seeing these signals on the device, but additionally we believe there's some dirt on the dish, which would explain them). They weren't entertaining the idea that the readings they saw were illusory, only that the world contained a reason for them that they did not currently know - an additional belief.
Quoting Pfhorrest
Which is, I believe, what @Janus is trying to explain. You can't just claim falsification by recasting verification of a belief as falsification of its negation. That's argument by re-definition of the terms. How is falsifying ~D different from verifying D? D and ~D exhaust the set. By your own examples of the difference it's evidently not true that you have falsified ~D by requiring D to shore up your belief system. E might have done the same job, or F, or G. The fact that D happens to work as an additional belief to make observation C fit with belief B does not in any logical sense imply we have falsified ~D. Having claimed to understand underdetermination, you then proceed to presume it doesn't exist at every new turn of your argument. The additional belief which allows C+B is underdetermined by the range of available additional beliefs which would do that job.
Being epistemically compelled to change beliefs just is falsifying them. It feels like you're being intentionally obtuse here. If you can't (without inconsistency) keep believing things exactly like you believed them before, but have to make some change to what you believe, then the conjunction of the things you believed before has been falsified. To insist that it has to be one particular belief that you were specifically setting out to test that is falsified in order for it to count as falsification is to argue against the strawman of dogmatic falsification.
It really seems like we're getting into a stupid argument over what does or doesn't count as "falsification", when that's not even the main label I apply to my views; I only mention it as another name more commonly given to them, even narrower than "critical rationalism", which is what I started out talking about in the other thread, so as not to bring my specific epistemological framework into it too much. The unique thing about critical rationalism / falsificationism that my view shares is the principle I call "liberalism", which says to go ahead and hold beliefs even if you can't prove them conclusively.
But the aspect of my view you seem to be arguing against is the one I call "criticism", which is a disambiguated subtype of the broader category commonly called "skepticism" (as @Kenosha Kid called it earlier) or "rationalism" generally. It's just the negation of fideism, where that in turn is the claim that some beliefs are beyond question, beyond refutation, unable to possibly be shown false or incorrect or wrong. You seem to be taking the even further stance that all beliefs are like that, which really surprises me coming from you, because I had you figured for the hard-science irreligious type, but now you seem to be saying "everyone can just believe whatever they like and there's no figuring out who's right or wrong". (Including, because you've jumped on this strawman before too, any less boolean degrees of rightness or wrongness, not just some black-and-white absolutism).
Quoting Isaac
Adding a belief to your set of beliefs is changing that set of beliefs, and as above, if you're epistemically compelled to make that change, that's the same thing as that set of beliefs being falsified.
Quoting Isaac
This is a good point. I was imagining the rain dance example scenario, where "throwing out C" means positing invisible rain, not the radio telescope example scenario, where "throwing out C" means positing dirt on the dish. I did only say that throwing out C was "probably", not "necessarily", a taller order, but yes, which revision to your beliefs is a bigger, less pragmatic hassle will vary from scenario to scenario.
Quoting Isaac
You mean the parts I've already given counter-counter-arguments to? Yeah, I don't proceed on the assumption that something I just showed false was actually true.
Quoting Isaac
And I've just explained how adding to C is changing C, and changing away from something just is rejecting it.
In Srap's example, "C" is the set of all of the background assumptions made when first making the observation, which include that the dish is clear of debris. Upon seeing an unexpected signal, a possible revision to the beliefs to account for that is "maybe there is dirt on the dish". Because "there is no dirt on the dish" was one of the beliefs within C, positing that maybe there is dirt on the dish is a change to C, a change away from the old C to some new set of background assumptions very much like C but different in whether there is thought to be dirt on the dish. That constitutes a rejection of C.
(Of course, in the actual case of Srap's example, that replacement for C was in turn quickly falsified itself, as the observations expected from the hypothesis that there is dirt on the dish soon failed to materialize: they didn't see any dirt on the dish. Sure, they could have still hypothesized invisible dirt instead of abandoning that hypothesis, but supposing there's a CMB was less of a huge change to the accepted view than everything that would be required to suppose there's invisible dirt on the dish).
Quoting Isaac
Yet nevertheless additional beliefs are required, which constitutes a change away from the previous set of beliefs, which constitutes a rejection of that set of beliefs as it was. That the new set of beliefs is very similar to the old one doesn't make a difference.
Yet...
Quoting Pfhorrest
So, what is it you imagine the fideist thinks in our scenario that is different from the critical rationalism you espouse? They believe A. They believe B, they believe C (which logically contradicts B without some additional beliefs - ie contradicts A). What then? Using a real example, what does this mythical fideist then think?
They're not allowed to explain C away using any revisions or further beliefs, because that's just 'critical rationalism' apparently. So they must literally believe two contradictory things with no reason as to why, despite also believing it is impossible for two contradictory beliefs to coexist.
Who are these people? Examples please.
A fideist thinks at least one of their beliefs is not subject to question. In any of the A-B-C scenarios we've been discussing, they hopefully will admit to A (not be explicitly logically inconsistent), and probably have some B that they hold immune to question, and so will resort to revising C.
Take for concrete examples any tortured argument that some evidence that seems to disprove a religious belief isn't really the evidence it seems to be because [convoluted excuses]. Or substitute "religious belief" with "conspiracy theory"; the pattern is the same. Anyone who clings to some particular belief with unreasonable tenacity and will jump through whatever mental hoops necessary to excuse or dismiss any evidence that would otherwise apparently disprove it.
Quoting Isaac
No, they are allowed to explain away the observations by revising C. That's something that critical rationalism has in common with fideism, and different from justificationism, not something unique to critical rationalism. I've only been arguing against your apparent claim that critical rationalism cannot do that and still be critical rationalism, not that that is something only critical rationalism can do.
Fideism shares the "liberalism" part of my "critical liberalism"*, but it negates the "criticism" part of it: the fideist agrees with me that you are free to hold beliefs without proving them first, but also thinks that you may hold beliefs as utterly beyond disproof.
What I term "cynicism" (comprised mostly of justificationism, minus the parts that jump ship to fideism) instead shares the "criticism" part with me, but negates the "liberalism" part: the "cynic" agrees with me that everything is subject to questioning and might be disproven, but also thinks that you have to conclusively prove anything before it's okay to believe it.
*(The term "critical rationalism" is not "criticism + rationalism" in the same way that my term "critical liberalism" is "criticism + liberalism", even though they mean the same thing. Instead, it's "rationalism inasmuch as it is critical, but not inasmuch as it is 'cynical'", where "not 'cynical'" is precisely the thing that I mean by "liberal").
So how do you, in practice, distinguish such thinking from the critical rationalist scientist who, we've just established, is more than likely to pick C to revise also? Is there some psychological test of one's intention behind choosing C to revise?
Quoting Pfhorrest
How are you determining "unreasonable tenacity"? We've just established revising C is a perfectly reasonable option, so you can't judge it simply on a preference for revising C.
Again, you've failed to provide the asked-for examples. What we need to go through is a concrete example of a fideist refusing to revise B in the light of C where it is 'unreasonable' of them to do so, as compared to a critical rationalist deciding not to revise B, but rather revise C, and it being 'reasonable' of them to do so. Otherwise you're at risk of arguing against a straw man version of fideism.
Underlying your model here, just as in your meta-ethic, is an unwritten clause that you personally get to be the final arbiter. Here, we can't distinguish between the fideist rejection of C and the rationalist rejection of C without drawing on your personal subjective judgement of what is 'reasonable tenacity' and what isn't.
That doesn’t mean we can’t discuss the merits of using different methods ourselves in the first person. Which is all I do in my arguments for my methodology: illustrate why doing things otherwise is more likely to lead you into or keep you in error than this way, so it’s in your interest, if you care about figuring out the truth, to do it this way. It’s not all about judging other people.
Fideism proper really only applies to beliefs that are unverifiable. Such beliefs are obviously also unfalsifiable, since verification and falsification are just two sides of the one (non-definitive) coin. Belief in God, Karma or rebirth are examples.
In practice we are all fideistic, insofar as we all believe things we have no hope of personally confirming or dis-confirming. The vast bulk of what every person takes for granted falls into this category. The only difference between this practical everyday fideism and fideism proper is that the things believed in the former case may be confirmed or disconfirmed in principle, if not in practice.
Yes, I call that “transcendentalism” and reject it precisely because it demands fideism. I considered mentioning that in my response to Isaac just above, because that is a circumstance where you can be sure someone is using fideism since there is no alternative then, but I decided not to complicate things.
I see logical/mathematical propositions as being essentially different than philosophical and moral claims inasmuch as their rule-governed truth or falsity is intuitively self-evident to the unbiased observer (given they have acquired the requisite expertise). In other words when it comes to logical and mathematical propositions there are definitive rule-based correct and incorrect answers.
It seems obvious that philosophy is not like this, and the history of philosophical ideas confirms that; there is widespread disagreement among the experts. There may be near universal agreement about moral principles concerning certain extreme acts like murder, rape, child abuse and so on; but I think this is likely underpinned by what might be called "normal" human faculties for compassion and empathy, as well as the inevitable social conditioning that comes with the pragmatic need to proscribe anti-social acts, particularly the more egregious ones.
This might be the right point to confront something @Isaac is always reminding us about: the stories we tell about our beliefs are post-hoc. They are rationalizations. That needn't mean they are bad or untrustworthy or invalid or indefensible, but it's worth bearing in mind.
What is the situation when our boys "switch on" the radio telescope? What "set of beliefs" do they hold? There's no reason to think they believe there are no pigeons nesting in the antenna; I believe they discovered them when they checked the antenna, and they thought this explained their results. Do they hold some more general belief that the antenna is unobstructed? I don't know, and I doubt you do either. So far as I can tell, they would have no reason to hold a belief either way about it being obstructed. They probably observed its construction or installation, and would have memories of seeing at that time that it was unobstructed; does that mean they held a continuing belief that it continued to be unobstructed? I doubt it, but we'll come back to this in a minute. (Btw, pictures show the radio telescope not to be on the roof and not exactly a dish either, both mistakes of mine.)
Similar remarks about the equipment in the lab: did they hold a belief that it was all in working order? More likely, but again there's a temporal issue: did they believe it was a-ok as they got the readings that puzzled them? Surely, else they wouldn't have been taking readings. Maybe in preparation for taking first readings, they did some tests. What if they didn't? If you grab a jug of milk out of the fridge, do you hold a belief that it won't split open? What about a belief that a hole won't spontaneously appear in the bottom?
We're accustomed sometimes when doing philosophy to talk about "belief" this way, as a sort of abstract mental correlate of the actions we take. (I have defended talking this way on this very forum.) Sitting "implies", in some sense, a belief that the chair will hold our weight, that it's real, not an illusion, that it won't turn out to be made of some other material than it appears to be, that it won't spontaneously move or even disappear, and so on.
One reason this attribution of belief feels okay is our experience of finding that an assumption we've made was incorrect. But what does that mean exactly? What is an assumption like? An awful lot of assumptions, including the ones that turn out to be incorrect, are not held explicitly; does it help to describe them as being held implicitly? Some we might be inclined to attribute to people in order to make sense of their behavior; if you fish a coin out of your pocket and put it into a vending machine, you must be assuming the coin is legal tender that the machine won't reject. You're not holding such a belief explicitly, but you're assuming it's the case, and even that only implicitly.
How does that actually help us? Suppose the coin is accepted; does that justify our assumption that it was legal tender? There's no logical reason not to say that, I don't think, but it's not the first thing I'd reach for in describing the situation. What if it's rejected? We try again, and it's rejected again -- sometimes they just don't quite catch right. What would you do next? You'd have a close look at the coin. Is it damaged? No. Maybe it's fake, doesn't have the right weight.
What's going on here? Have you found out you must reject your belief that the coin was genuine? Maybe, kinda. But when did that happen? And how? You expected the coin to just work, that much is clear; when it didn't, you could shrug it off and try another coin (vending machines are a little unpredictable) and never think about it, or you could look for an explanation.
I suspect cases where the natural next step to take is the logical analysis of the set of beliefs you held right at the moment when things started going wrong are pretty rare overall. The natural step is often going to be investigating, at least a little, looking at stuff. And some theorizing, or hypothesizing. I think this is the moment where you might identify an assumption that the coin is genuine, but only because it is now suddenly in question whether that's true. In other words, it might occur to you (or not) that the coin being fake would cause the machine to reject it. "The coin is not genuine" would appear in your world not as the negation of some belief you actually held, implicitly, but as an hypothesis that could explain why it was rejected. Implicit assumptions seem generally to show up this way -- not in themselves, and not in the form we are claimed to have held them, but negated, when the converse might be the explanation we need.
So in Holmdel, New Jersey, did Penzias and Wilson assume the equipment was still working having checked it out at some earlier time? Why not just say that it occurred to them that a malfunction might cause the readings they were getting. Did they assume nothing was obstructing the antenna? In particular that there were no pigeon nests in it? Of course not. But it might occur to them that some kind of obstruction might cause the results they got.
You can patch these things together after the fact into a logical structure -- we're really, really good at rationalizing, but so what? I hope it's clear, I'm not trying to reform how we talk about assumptions and so on, but I do think trying to formalize this way of speaking into a logical system that allegedly explains how people come to believe what they do or how they change what they believe -- it's a mistake. I think its mistakenness shows up in part in its inability even to do what it claims -- eliminate false beliefs. It also fails to account for the fact that investigating actually works -- it shouldn't, because you can always just reject the new observation, or you can find some way to take it on-board without falsifying anything, always.
That's my sense of things. I think the whole approach (and it used to be mine too) is a mistake, just the wrong way to think about beliefs.
Quoting Srap Tasmaner
I think the reason for that is that the adjustments that must be made to consistently accommodate the new observation without falsifying the most obvious thing it would falsify (the thing you nevertheless want to continue believing) quickly become more and more unwieldy.
"Strange radio signal? Maybe it's an equipment error. No. Maybe it's something on the dish. Pigeons were on the dish, now they're not, so that fixed it? No. Maybe something else is still on the dish. There was dirt on the dish, now there's not, so that fixed it? No. Maybe there's... invisible dirt on the dish? No, that's too far-fetched (would require modifying too many other assumptions). So maybe there is a real radio signal. Is it coming from other terrestrial sources? No. Other astronomical sources? No. Could there be... invisible sources? That's almost as far-fetched as invisible dirt. (But let's check anyway... nope). So maybe there really is a microwave-frequency signal coming from all directions? Well it looks like it's either that or something like invisible dirt, and as weird as some cosmic microwave background radiation is, that's less weird (requires modifying fewer other assumptions) than invisible dirt."
On what grounds do we judge things to be "far-fetched" if not on the basis of inductively formed beliefs or attitudes which are adopted implicitly or explicitly, or else simply believing some "official" story which is itself in the final analysis purportedly based on inductive confirmation? You won't be able to eliminate inductive thinking as easily as you apparently think you will.
Quoting Pfhorrest
But more than the amount of modifications, it's about the amount of exceptions to an otherwise more parsimonious system of beliefs. If you have the choice between a parsimonious system plus a huge mountain of exceptions, or a slightly less parsimonious system that's still more parsimonious than the other plus its mountain of exceptions, it's pragmatically more useful to go with the latter.
If we didn't care about parsimony at all, we could always just hold a belief system that consists of an unorganized list of all of the uninterpreted particulars of every experience we've ever had, but that wouldn't mean anything to us, it wouldn't show us any connections between things or highlight any patterns in any way that allows us to usefully interact with the source of those experiences. The whole reason to form theories at all instead of just keeping unorganized lists of experiential minutia is to have that easier-to-use, more-parsimonious abstraction to work with, so it's counterproductive to pick a less-parsimonious explanation when a more-parsimonious one that equally fits the experiences is available.
My only point about induction is that it doesn't prove anything. If you induce from a pattern of white swans that all swans are white, and someone else disagrees, pointing to more white swans doesn't rationally settle that argument, i.e. it doesn't show that you're right and they're wrong, or even that you're more likely to be right and they're more likely to be wrong. (Consider the possibility that they're from Australia and know firsthand that there are black swans; no number of white swans you show them will matter at all to them.)
In order to advocate it you must at least have judged that there exist people who do not follow this method, otherwise it's like advocating breathing. So it is implicitly very much about judging other people, especially as you've advocated this method on a board dedicated to the discussion of philosophy, not a class of primary-school children whom you might prima facie suspect of benefiting from guidance.
No. It's very much about judging other people.
The point here is that you cannot get out of your 'algorithmic' method without resorting to subjective judgements of
Quoting Pfhorrest
Quoting Pfhorrest
Quoting Pfhorrest
Quoting Pfhorrest
So you cannot dismiss anyone's thought process without that dismissal simply being grounded on the fact that you personally find their revision of C (rather than revision of B) to be 'unreasonable' in the circumstances - yet you've given no account at all of how you justify that assessment.
We can of course know when they tell us, but that's not the scenario you asked about; you asked how we can tell. There are plenty of people who tell us that they use (and advocate the use of) fideistic methodologies; basically all of "Reformed epistemology" is about that. See for contemporary examples William Lane Craig or Alvin Plantinga.
Quoting Isaac
You always seem to forget that I consider all of the philosophy I'm advocating to be a shoring-up of common sense against badly done philosophy. I'm not trying to say that ordinary people all do things wrong and here's the secret way to do it right. I'm trying to explicate what is right about the way most people usually do things, and identify the kinds of deviations from that that can lead to ludicrous philosophical nonsense.
Quoting Isaac
See the several preceding posts where I discuss parsimony as the rationale behind things like "unwieldy". The rest of those quote snips are either explicitly describing someone else's subjective judgement, or speaking loosely in conversation (assuming that we have some common ground in our casual, on-the-ground opinions, that I can refer to, despite our disagreement on technical philosophical things) and not as part of explicitly defining my philosophical position.
The more you write the more convinced I become that you're not arguing in good faith, but either have some kind of vendetta against me in particular (for reasons I can't even guess) or else just always argue to "win" rather than have an honest cooperative investigation of ideas.
But is this what we do? Is it even what we should do?
Penzias and Wilson switch on the machine, expecting not to be receiving a signal. But they are. That expectation has clearly not been met.
For you, what's been "falsified" is a two- or three-layer cake: background assumptions, working theory, specific prediction. You think the next step taken is logical analysis, even if only implicitly: some member of the conjunction of the members of the set of beliefs held at the moment is false, making the conjunction false, preserving the truth of the conditional with a false consequent. Any falsehood will do for this to work, and in a sense this saves you from having to, per impossibile, enumerate the background assumptions, because you can just examine them as they come up: if this one's still true, fish the next one out of the bag and check it.
While this is more or less fine from a logical point of view, it leaves out a lot of what we know about how people actually go about this, and how they can do so successfully, in a way that is worth the rest of us considering as a model of rationality. You'll tend to shrug off some of this as if it's okay to have a general theory and a practical way of applying it -- but that's not okay in this particular domain, as ought to be obvious.
For instance, how are the background assumptions and theoretical commitments in your big conjunction ordered? Order doesn't matter for conjunctions. In what order are they examined? Is there a method, or is it more like the random 'fishing a belief out of a bag' I had above?
And what does it mean to examine a background assumption and see if it holds? Is that a logical process, or is it investigative, gathering more information? For instance, you could take an assumption, once somehow identified, and ask, could this be true and my original expectation fail? Swell, but the list of assumptions that will pass that test is uncomfortably large, and most of them aren't helpful for what it sounds like they're trying to help with: not explaining the failure of the big conjunction, but the fact of the new observation, which happens to differ from what we expected but is a positive fact in its own right.
In real life, we don't churn through the big conjunction; instead we hypothesize explanations for the phenomenon it turns out we are observing, though we didn't expect to be. Penzias and Wilson look at the readout and are surprised. The question they will now try to answer is obviously, what caused this? Candidates include a fault or even a design flaw in the equipment, or maybe something obstructing the antenna. They're looking for a particular sort of thing that would cause a constant signal to be reported.
As you would have it, they consider statements like this: "If assumption A is false, that's consistent with prediction P failing." But in real life, people consider candidates like "If A2 is the case, that would cause P2" -- where A2 is one of the ways A could be false, and P2 is the observed way that P is false. There's an asymmetry here that cripples the formalist approach: "2 + 2 = ___" has one way to be true but a literally infinite number of ways of being false. That applies both to the prediction and to the so-called assumptions. We don't need a way of corralling those infinities because they're not real for people dealing with real problems.
You can try to layer on more formalism to bring your theorized process of belief revision closer to what we know people do and to what we know works -- rather in the style of talking about measuring the distance between possible worlds -- or you can just accept that the model you started with is actually getting in the way of understanding what really goes on and what is known to work.
OK, I thought you were wanting to say more than that. Sure, induction is not deduction: Hume made that point more than 200 years ago.
It remains the case that inductive thinking is indispensable to our endeavors to discover comprehensive models of natural processes which are consistent and coherently incorporated into a 'master model'.
We might be free to choose arbitrary hypotheses which are not derived from inductive thinking, but although such hypotheses, as Popper points out, may inadvertently lead to inductively based discoveries and theories; they remain peripheral, if not completely dispensable, to the process, it would seem.
The point that you seem to be glossing over is that it is on the basis of inductive thinking that we decide which of the range of possibilities we can imagine are "far-fetched".
I don't see what's obvious, or what particular domain you're referring to.
A large part of all critical rationalism, including mine, is that there's a lot of freedom that's rationally permitted in the epistemological process, something I'd think Isaac and Banno would like. Using the epistemological-deontological analogy again, we normally recognize it as a crazy extreme to either say that every action is either mandatory or forbidden, nothing merely permitted but omissible; or conversely to say that absolutely anything goes and there's nothing at all that is mandatory nor forbidden, everything is permitted and omissible. I'm applying that same standard to epistemology as we ordinarily do to deontology, saying that there's a lot of the process where you don't have to do it one way or another, you can do it however you like, and so long as you stay within the wide bounds of the few things where it does matter that you do it one way instead of another, you'll be fine.
Point being, the fact that I haven't specified exactly how to do every step of the epistemological process is a feature, not a bug. I'm not trying to give mandatory procedure for every "what do I do now?" question that comes up in every investigation. For a lot of those questions, the answer is just "try something, anything", and then if that doesn't work out, the parts of the procedure that are specified will eventually tell us that.
Quoting Srap Tasmaner
It's like "fishing out of a bag", but there's a rough natural order to even the process of fishing something out of a bag. You reach in blind, not knowing what's in the bag or aiming to grab any one thing in particular, but you're more likely to seize onto one of the largest things in the bag first, and only after all the bigger things have been pulled out will you end up grabbing the tiny pebbles and grains of sand in the bottom of the bag.
Quoting Srap Tasmaner
A little bit of both. Reach in the bag of assumptions and grab the first thing you find -- probably some big obvious thing. Ask yourself, "without this, would I have expected these results?" (e.g. "if there were dirt on the antenna, contra my implicit assumptions, would I expect to see this signal?") If no, put it back and fish something else out until you get a "yes".
If yes, look for something that would test the new set of assumptions. (e.g. "do I see dirt on the antenna, as I would expect to if I thought there was dirt on the antenna?"). If the test is successful (i.e. you don't see anything unexpected anymore), then you're good to go for now.
If the test fails (e.g. you don't see dirt on the antenna, as you would expect to if you thought there was dirt on the antenna), you could start fishing around the bag of assumptions for something that would explain that observation, or you could fish out a different assumption to explain the first unexpected observation instead.
Which path you go depends on what the next biggest thing you grab onto when you reach into that bag of assumptions is. (E.g. if "there's not radiation coming from every direction in space" comes up first, i.e. with less digging for deeper, harder-to-find assumptions, before "dirt can't be invisible", then you try eliminating the first one before you resort to the second).
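The "fishing out of a bag" procedure above can be caricatured in code, just to make its shape explicit. Everything here (the assumption strings, the salience ordering, the two predicates) is a hypothetical illustration of the loop described in the posts above, not a claim about how belief revision is actually implemented in anyone's head:

```python
# A toy model of the "bag of assumptions" procedure described above.
# All names and data are illustrative, loosely based on the
# radio-telescope story; nothing here is a real cognitive model.

def fish_revision(assumptions, explains, survives_test):
    """Walk the assumptions in rough salience order (biggest first).

    explains(a)      -- would negating assumption `a` account for the
                        surprising observation?
    survives_test(a) -- does the follow-up check of the revised belief
                        set come out as expected?
    Returns the first revision that both explains the observation and
    survives its test, or None if the bag is exhausted.
    """
    for a in assumptions:            # salience order: obvious things first
        if not explains(a):
            continue                 # put it back, fish out the next one
        if survives_test(a):
            return a                 # good to go for now
        # test failed: keep fishing for a different revision
    return None

# Illustrative run: negating the third assumption would explain the
# readings, and (in the story) that revision survived further checks.
assumptions = [
    "equipment is working",
    "dish is clear of debris",
    "no signal comes from every direction",
]
explains = lambda a: a != "equipment is working"
survives = lambda a: a == "no signal comes from every direction"

print(fish_revision(assumptions, explains, survives))
# -> no signal comes from every direction
```

The salience ordering is doing the real work here, which is the point of the bag metaphor: the loop itself is trivial, and which revision you end up with depends entirely on what you happen to grab first.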
I have already described what I mean by far-fetched in a way that has nothing to do with induction, and everything to do with parsimony.
This post in response to you yesterday.
Not on my reading of it, it isn't. Reformed epistemology is just saying that belief in God might be a basic immutable belief (like we've already agreed logic is) in a world where God exists. So if there's a God, then a belief in God gained from introspection (the 'feeling' that there's a God) would be expected, and therefore it's a reasonable conclusion. They're suggesting that the 'feeling' that there's a God can be given equal footing with the 'feeling' that there's a table in front of me. Neither can be proven, and in that sense their position is, as you've said, akin to yours (and mine, incidentally) against foundationalism.
Quoting Pfhorrest
Hang on - are you describing the way our minds actually do work, or advocating a way they should work? You seem to flick between the two. If the latter, then you are definitely criticising the entire group not currently using that technique, which must be at least a majority, otherwise your model would be primarily descriptive. If the former, then you're just plain wrong. This is not how people think. I suggest getting out of the armchair and doing some research. The third option - a 'common sense' model of how people think they think - is next to useless. Why would we want that?
Perhaps we could have an example of this "badly done philosophy" that we can work with. Something which seriously advocates the permanent ignorance of all contrary evidence - so we can see what you're up against.
Quoting Pfhorrest
Ah - so we can add 'parsimony' to the list of your personal subjective judgements used to decide whose beliefs are justified and whose aren't - or are you claiming there's some objective algorithmic method of determining parsimony? Do we parcel up our beliefs into atomic packages and enumerate those required to shore up some theory or other and decide based on the final tally which to believe?
Quoting Pfhorrest
Well, if I've taken them out of context, then let's clarify. The context I'm now specifying is in the matter of judging whether someone's revision of a C-type belief (observation/interoception) is motivated by fideism about the B-type belief that a first reading of C would contradict, or a genuine rational assessment that their C-type belief is simply the better one to ditch on this occasion. How do we judge? - without using the subjective measurements (that were apparently not applicable to this context) -
Quoting Isaac
and we can add 'parsimonious' to that list too.
That is exactly what I mean by fideism. If you think any beliefs are basic and immutable, not subject to question, then that's fideistic.
Quoting Isaac
Foundationalism likewise is all about basic beliefs.
You seem to have missed a long post early in this thread where I went through all three branches of Agrippa's/Münchhausen's trilemma, concluding that both foundationalism and coherentism are fideistic, while infinitism is nihilistic, and on those grounds reject justificationism in its entirety in favor of critical rationalism instead.
Quoting Isaac
Primarily the latter, but I don't think that that's generally in conflict with the former. I think most people generally act like they agree with the broad strokes of this methodology, they're just not consistent about it -- when threatened with the frightening prospect that maybe they were in error and someone else is about to "win" against them, they look for a way out, and cheat the system they otherwise seemed to agree with until then.
People tend to argue about things as though some of their opinions are right and others are wrong, or at least some are more right or wrong than others; and as though they can sort out which of their opinions is which, or at least which lies more or less in one direction or the other. Otherwise, they wouldn't be arguing in the first place.
I think that those basic implicit premises of every argument should be treated as correct, because either “I’m just right and you’re just wrong” (supposing that some answers are unquestionable) or “there’s not really any such thing as right or wrong” (supposing that some questions are unanswerable) are lazy ways to dodge the argument, avoiding the potential of having to change one’s opinions, and so cutting one off from all potential to learn, to improve one’s opinions.
All the rest of my philosophy stems from rejecting those two cop-outs and running with whatever's left.
Quoting Isaac
Yes, and that is the topic of the next thread I have written up already.
Quoting Isaac
This is every bit as subjective a judgement as my use of "obvious" for the same purpose earlier. I think you and I, who seem to have similar on-the-ground beliefs despite our philosophical differences, would likely see the same reading as "obvious" / "first", but if "obvious" is too subjective then so is "first reading".
Quoting Isaac
In the third person, we can't, at least not conclusively. We'd have to rely on their self-reports of whether there is anything that could possibly change their mind about B or if that's a "basic immutable belief" to them.
We can take an educated guess at whether they're holding B like that or not, though, based on how un-parsimonious a system of beliefs they're willing to construct to excuse the preservation of B. If they're doing all kinds of twisty mental gymnastics full of exceptions upon exceptions to preserve B when it would be much easier to just reject B and leave everything else simple and elegant, that suggests -- though doesn't prove conclusively -- that they're likely unwilling to question B.
Like the belief in the truth-preserving property of logic?
Quoting Pfhorrest
Right, so that at least entails a judgement, as I said earlier, that a significant number of people don't think this way (otherwise it would be like writing a lengthy treatise on how we ought really to breathe). And yet you claim to have no power other than your own ad hoc reckoning to justify an assertion that anyone does, in fact, think in a fashion other than this method you're expending so much effort advising us all of.
Quoting Pfhorrest
If it did, we'd have little argument. As it is, the rest of your philosophy seems to either be trivially true to the point of uselessness, or to be based on assumptions about the methods by which you distinguish those parameters. What frustrates me about this approach (my 'personal vendetta', as you put it) is that you keep trying to muffle these subjective judgements, to hide them behind some wall of logic when in fact they are the only serious and interesting point of discussion. I'm really not trying to 'win' some argument with you, I'm trying to open up a discussion about the important and difficult matters that are raised by your posts. You seem to just want to drag them back to the trivial ground on which you are right, but uninterestingly and uncontroversially so.
Quoting Pfhorrest
Oh good God, no!
Quoting Pfhorrest
I only meant their 'first reading'. The first reading of C they're aware of is not particularly subjective. Arguable, maybe, but not subjective like a third-party judgement of what is 'obvious' and what is not.
Quoting Pfhorrest
No doubt we're about to be told how the judgement of parsimony is also carried out by some logical algorithm?
Quoting Pfhorrest
I see. Now we can add 'simple' and 'elegant' to our list of subjective judgements about other people's belief systems - as if we could somehow 'see' the structure! Nonsense on stilts!
I'm with @Isaac here, @Pfhorrest, for the most part. This is what I was trying to get at: how you sort of oscillate between "hard" and "soft". There's a methodology for belief revision that looks to be rule-governed or algorithmic. How do we form beliefs in the first place? "Do something reasonable." How do we apply the rules of the method? "Do something reasonable." How do we decide what belief to drop? "Do something reasonable." How do we gather and weigh new evidence? "Do something reasonable."
You have described this as a feature rather than a bug, but it repeatedly appears that your theory has no theory in it.
There is of course a fundamental problem to face up to: is reason computational? On the one hand, modeling reason in the obvious way with primitive formalizations of reasoning like classical logic leaves out about as much as your account; on the other hand, we need whatever model we come up with to be instantiated in a human being, and it's no good just retreating to some vague, pre-Darwin, gentleman's club sense of "reasonableness", a characteristic that cannot be described operationally. We know that it must be describable in operational terms *and* classical logic is not that description.
So there's real work to do. Your approach seems to want to give both of the failed approaches a seat at the table and hope that works, when we really need to try new things.
No, that’s something completely different. Basic beliefs are the kinds of things one would use as premises in an argument. The validity of logical inference itself is not something you ever need to put in a premise of an argument, because if you did you would just get an infinite regress: “if P then Q, and P, therefore Q” would have to become “if P then Q, P, and if ‘if P then Q’ and P then Q, therefore Q”, ad infinitum.
Quoting Isaac
And other people’s explicit advocacy of methods to the contrary, as I already said.
Oh, and I forgot in my answer to last post, something someone else (Janus?) already brought up in this thread earlier: some kinds of beliefs can only be held on fideistic grounds, like if you believe in the kind of God that cannot possibly be detected observationally. So if someone believes in that kind of thing, you know they’re believing it fideistically.
Why would anyone ever believe in something that no observations could possibly have led them to think was real? You’re the one saying everyone always has experiential reasons for their beliefs. My hypothesis is that they arrive at these kinds of unassailable but useless beliefs after they’re challenged in arguments and modify their old beliefs however necessary to avoid “losing”, even if it requires methodologically “cheating”.
Quoting Isaac
To someone who agrees with it, I would hope it would. Premises are supposed to seem trivially true in any argument, since starting off with controversial premises just begs the question. To someone who can easily see the implications of those premises, the conclusions should seem equally trivial. It’s only people who already agree with the premises but didn’t realize their implications who are surprised to learn something from an argument — any argument, not just mine. It’s the people who didn’t think through all the implications of these trivial premises I’d hope everyone would agree are obvious that I’m hoping to reach with my arguments.
Quoting Isaac
See, this is the kind of thing that makes me think you just want me to stop talking.
The parsimony thread I have queued up doesn’t hinge on critical rationalism, it touches on things that could apply in a justificationist epistemology too.
Like I said at the start of this thread when you were upset that I dared to post this, I’m trying to start separate discussions on each little piece of each topic that I have some original thought on, rather than just one huge 80,000 word “here is everything I have to say about philosophy” post. And I’m spacing them out so I don’t flood the front page with dozens of threads all at once. Of course a lot of them are going to connect to each other, because everything in philosophy connects to everything else.
Besides just not posting, or that one huge 80k-word post, or maybe quarantining all my posts in one General Forrest Thread (should all users be quarantined to one thread like that? Lots of people start lots of threads wherein they repeatedly touch on the same theme; anything by schopenhauer1 is probably anti-natalist for instance), I just don’t know what you want from me.
Quoting Isaac
Obvious synonym for “parsimonious” is obvious.
Quoting Srap Tasmaner
I don’t say “do something reasonable”, but just “do something” — presumably you’ll do what seems most reasonable to you, but whether that’s actually reasonable and what “actually reasonable” means in that context is not something my model cares about.
Go ahead and believe something, for any reason or no reason, it doesn’t matter. (This is the “liberal” plank of my system, contra “cynicism”).
When you experience something contrary to what you believed you would experience, change your beliefs, exactly how and why doesn’t matter. (This is the “critical” plank of my system, contra “fideism”).
Repeat forever and you’ll get less and less wrong over time. Just keep trying on beliefs (never give up and say you’re never believing anything again until it’s first proven for certain), and changing them when they fail you (never give up and say some belief you hold just has to be right because it just does), and you’ll continually improve.
How you pick your initial beliefs and how you change them doesn’t matter, so long as you keep up the process. Some methods could certainly be more efficient than others, and that’s an interesting question itself, but that’s a different question from whether they are epistemically valid or not. On my account, epistemic validity just requires that you believe something or other regardless of how little you have to go on, and that you remain willing to change anything you believe when you encounter evidence to the contrary.
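To make the process concrete, here is a toy sketch of the two planks (everything in it is invented for illustration: a hidden number stands in for how the world actually is, and "experience" is just a check against it; this is not a claim about how minds actually work):

```python
import random

random.seed(1)

hidden_truth = 37                  # stands in for how the world actually is
candidates = set(range(100))       # limitless possibilities, no certainty
belief = random.choice(tuple(candidates))  # liberal plank: just believe something

while belief != hidden_truth:
    # Critical plank: experience contradicted the belief, so discard it...
    candidates.discard(belief)
    # ...and tentatively adopt any not-yet-refuted alternative.
    # How we pick doesn't matter for epistemic validity, only that we keep going.
    belief = random.choice(tuple(candidates))

print(belief)  # always ends at 37, whatever the starting guess or picking rule
```

The point of the sketch is that the loop converges no matter how arbitrarily the initial belief and each replacement are chosen, so long as refuted beliefs are actually discarded.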
All this stuff sounds so good in theory.
We like falsification because we can imagine science as one Michelson-Morley experiment after another. It's not, of course, but it somehow works anyway.
We like holism because we're familiar with finding out our assumptions and presuppositions were wrong.
We like the Asymptote of Truth because of the succession of theories and because probability.
In a sense all you're doing is reinventing the dual process model. System 1, that "machine for jumping to conclusions" as Kahneman calls it, can be counted on to continually produce new beliefs, and when there's trouble system 2 attempts some logical process of evaluating and revising. How that's done is apparently, in some sense, within our control. That we do it is more or less a fact.
For all that, when I want to know if I should stop for milk on the way home, I look in the fridge, and I don't need Popper or Quine or Peirce to get a definitive answer.
Well, you needn't worry about infinite regress, as logic is a formal language and so self-reference can be dealt with using Tarskian meta-languages, but that's not the point. The point here really is that infinite regress doesn't define what beliefs are. Beliefs are just tendencies to act as if.... We tend to act as if logical functions preserve truth, so it is a belief. As @Srap Tasmaner is pointing out, you've got it back to front here. We're biological creatures first and foremost. There are things we tend to act as if were the case; these are fundamental beliefs and we don't question them. Something like Reformed epistemology is just positing that the belief in God might be one of these, and in a world where a God existed, it's not a bad assumption.
Quoting Pfhorrest
Yes, but we're still discussing whether you have actually shown this. This is another issue we're having here, we're in the middle of disagreeing over whether some issue has been shown and yet you still later refer back to it as if it had. We've yet to be shown an example of some philosophy advocating the complete ignorance of all evidence that contradicts their belief - the example you gave is one I disputed and you've not yet settled that dispute.
Quoting Pfhorrest
That's not a belief. A belief is a disposition to act as if... Anything less is a meaningless statement and it's pointless to create a model of it, you might as well build castles in the air.
Quoting Pfhorrest
Well good. So what tests have you carried out to check that hypothesis? What papers have you read that support it? There's been a great volume of study done on religious belief, even on this very topic. A hypothesis is useless unless you're going to test it. You seem to be ignoring your own advice here - there's a whole bookshelf full of papers studying the causes and maintenance factors for religious belief and you've not cited a single one in support of your hypothesis - they should be ready to hand, surely?
Quoting Pfhorrest
"We shouldn't just believe stuff without being willing to revise that belief (except stuff like logic, which we have to believe on pain of chaos). We shouldn't believe nothing; that would get us nowhere, because we have to at least act on some basis and underdetermination means we can't 'prove' each step. We shouldn't believe all things because a) that's impossible in one person, and b) we're not going to get any better in our beliefs if we don't at least try to get them more right"
There you go, 100 words or so. See if anyone (serious) disagrees, if they don't, let's get on to the interesting stuff.
Quoting Pfhorrest
How? I don't see how this process will lead to you being less wrong. It could just as easily lead to you constantly shifting beliefs to favour one experience only to find they now contradict an experience previously modelled well by your theory. The net result of such a change will be no movement in the direction of being less wrong. There's nothing in your model to prevent this from being a permanent state of affairs.
Quoting Pfhorrest
A good neat summary. So the entire matter rests on a judgement (both third-party and introspective) of 'willingness'. Something which is a) entirely subjective, b) scalar, and c) has no provable zero point, as it anticipates future events. These are the 'interesting' questions, and without answering them you have no theory because, as written above, you have no useable definition of epistemic validity without a method of judging willingness. If you're happy to let 'willingness' remain something naturalistically obvious to any rational person, I'd have no objection to that, but you have to then concede you have a naturalistic argument, not a logical one. Implicit in this concession is the requirement to absorb what the proper sciences are showing to be the origins of such natural thought.
I don't know how one knows one is willing to revise a belief. Or, perhaps better put, I think people's self-evaluations on such an issue are radically biased.
Yep. That's exactly the point I'm making to @Pfhorrest.
This may be a root of our disagreement. I do agree that well-formed beliefs are coextensive with "tendencies to act as if...", but there is a broader sense of "belief" that I am also concerned with here, a sense something like "propositions one would assent to".
Quoting Isaac
We tend to resist questioning them, sure, but rationally speaking we need to always be open to questioning them if pressed. Look at how many widespread intuitive assumptions about the nature of the world have been overturned in modern theories of physics, for example. If we hadn't been willing to question those things, we wouldn't be where we are now in our understanding of the universe. Our intuitions are frequently wrong, sometimes even our deepest and most securely-held (and widely-shared) intuitions.
A basic belief, in the sense of foundationalism, of which Reformed epistemology is a species, is something held to be beyond such questioning, and it's my position that nothing is to be held as beyond questioning.
Quoting Isaac
As consistent with critical rationalism, there is no burden of proof on either of us to convince the other before we're allowed to continue believing as we did before. Claiming that I'm wrong and demanding proof doesn't require I give up my beliefs until I can do so. I think something is the case, you think it's not, and if you make some assertion on the grounds that it's not, I'm free to point back at my position that it is; that you've not conclusively established that it's not, so your assertion doesn't rest on solid ground since that's still in dispute. And you of course find my assertions to the contrary not to rest on solid ground either, since that's the same ground that's still in dispute.
My point being this isn't a one-sided thing; until the ground is settled, we both think the other is making an unfounded assertion by appealing to that ground, and neither of us is more right or wrong in thinking so, until the ground is settled. IOW I see you as doing the same thing you see me as doing.
Quoting Isaac
Well you'll find plenty of people right here on this very forum claiming that God as they conceive of him is not empirically testable. I agree that this is a poor kind of belief, and ultimately claims of that sort are meaningless, but nevertheless people assent to the truth of such meaningless propositions. Showing why that's a useless or erroneous way of thinking is part of the aim of my philosophy.
It seems like you really want to restrict the topic of discussion to the subset of discourse where people are already being fairly reasonable, when all I'm trying to do is show why discourse beyond that subset is useless or erroneous. All the possibilities within the domain you're concerned about discriminating within are already A-OK by me; I'm only concerned with those who wander far outside that domain.
Quoting Isaac
That hypothesis is not central to my project, so it's not something I've researched in any depth, and if the hypothesis turns out false it has no bearing on any of my main points, which are all about why it's counterproductive to do certain things, not what inclines people to do them. I'm just venturing a guess, informed mostly by my own interactions over many years with people who do those things (including their responses when I inquire as to why they do them), as to why they do them.
Quoting Isaac
As I said in my last post, I don't expect anyone to disagree with those premises. It's the implications that they have on other, common philosophical positions that would be contentious. Rejecting justificationism, the default form of rationalism most philosophers tend to assume, because it inevitably leads to either fideism or nihilism, for example. I expect most rationalists (e.g. most philosophers) to agree that fideism and nihilism are wrong (but not all of them, of course), yet not to have realized how all three justificationist possibilities (from Agrippa's/Münchhausen's Trilemma) inevitably lead to one or the other.
And that is what I find to be the interesting stuff. You seem to find the interesting stuff to be the things that I say are work beyond philosophy and more the domain of more specialized sciences. Which makes sense, since you're a... neuroscientist? Psychologist? I forget what you do exactly but you study brains in some capacity, no? So it makes sense that you're more concerned with the nitty gritty details of how human brains in particular work. I don't think that's the domain of philosophy -- it's still important work, but not philosophical work -- and I'm focused on the broader philosophical stuff within which that kind of work is conducted.
Quoting Isaac
Only if you discard your previous experiences that were modeled well by the old theory, which I assumed was obviously not implied. As you accumulate more and more experiences, the range of possible sets of belief that could still be consistent with all of them narrows.
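The kind of narrowing I mean can be shown with a toy example (the rules and observations are entirely made up for illustration): accumulated experiences are all kept, and only those candidate beliefs consistent with every one of them survive.

```python
# Candidate beliefs about a hidden rule (names invented for illustration).
candidates = {
    "double":   lambda x: 2 * x,
    "square":   lambda x: x * x,
    "plus_two": lambda x: x + 2,
}

# Accumulated experiences: (input, observed output). Note the first
# observation is consistent with all three candidates - "confirming" them
# proves nothing - while the second eliminates two of them.
observations = [(2, 4), (3, 6)]

for seen_in, seen_out in observations:
    # Keep only candidates consistent with every experience so far.
    candidates = {name: rule for name, rule in candidates.items()
                  if rule(seen_in) == seen_out}

print(sorted(candidates))  # ['double'] - not proven true, just not yet refuted
```

The survivor is never confirmed, only left standing; a further observation could still eliminate it too.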
Quoting Isaac
The third-party assessment isn't important at all for strict epistemological purposes; at most it's useful for deciding whether you think it's worth your time engaging in a discussion with someone who doesn't seem open to changing their mind. But you can never be sure that they're not, and if time and effort were no consideration and all we cared about was arguing until we settled on the truth, then guessing whether the other person is a fideist or not would be irrelevant; we would have to assume they were not.
And it’s only in the third person that subjectivity is a problem: in the first person, you just decide whether you’re willing to change your beliefs or not.
It’s only in the first person that that matters, as one needs to remind themselves to consider all possibilities, even the possibility that one of their most cherished beliefs is false, if they really do care about figuring out what’s true. That’s not a scalar quality, that’s a boolean choice: “am I open to reconsidering this belief or not?” The only thing that makes it seem scalar is how integral to the rest of one’s belief system that belief is: one can be in principle willing to reconsider any belief, but if some beliefs would require that the whole rest of one’s belief system be made much more convoluted to accommodate their removal, then one is pragmatically right to consider other alternatives first.
Quoting Isaac
A naturalistic account of epistemology cannot help but be circular, because to do the natural sciences soundly you need some epistemological account of what soundly done science is, and if that account in turn depends on the results of the natural sciences that in turn depend on the epistemological account for their soundness... well there’s your circle.
Funnily enough, I spent a good 15 years of my academic career studying the differences between "tendencies to act as if..." and "propositions one would assent to", but the former are an indicator of at least some mental connection between the state of affairs believed in and the action about to be taken reliant on that state of affairs. The latter is a completely different indicator, of the statements which constitute the membership criteria for social groups to which one aspires to belong. I'm not sure what you're going to get out of mixing the two other than a mess. What people actually believe and what they publicly assent to (or even self-deceptively assent to in internal verbalisation) are two completely different things with completely different origins and processes; they involve different parts of the brain; they're about as disconnected as it's possible for two mental activities to be.
Quoting Pfhorrest
This depends on how you verbalise the belief. Prior to Einstein, humans didn't have a fundamental 'belief' in classical gravity; they had a fundamental belief that when you throw things up in the air they come down in a predictable way, and they still do after Einstein. What changed there was the beliefs about the deeper scientific model of why, and those weren't fundamental at all. That being said, I agree with you that in principle any belief could be wrong, no matter how fundamental (people have to behave in unintuitive ways in space, for example; they're simply not expecting there to be no up and down), but I don't think there's any evidence that people don't or wouldn't do this, so I think expending effort on explaining why they should is pointless.
Quoting Pfhorrest
Yep. The difference being I haven't written a long series of posts on a public forum under the assumption that other people could benefit from my insight on the matter. That sets the threshold of justification higher for you than for me. You asked me for my opinion (implicitly, by posting on a forum); I never asked you for yours - you decided it was important enough for other people to hear. We are not doing the same thing here.
Quoting Pfhorrest
Quoting Pfhorrest
But it does if your hypothesis is wrong to the extent that there is no proper target for your normative assertion anymore. That's the point. You need a good prior hypothesis that people do genuinely believe there's a rational argument for them to deny all contrary evidence for a belief they hold (or, against nihilism, that people genuinely believe no beliefs are more reflective of what is the case than others). Absent this hypothesis, you're presenting a normative theory to an audience who, to a man, already act that way.
Quoting Pfhorrest
Well then a good place to start would be some quotes or texts in which these philosophers make the case that you're claiming they're mistaken in, or reach conclusions that you're claiming have missed a crucial step. Otherwise it's very difficult to see what you're arguing against.
Quoting Pfhorrest
I'm a psychologist (academic, not clinical). But I disagree that scientists work 'within' broader philosophical stuff of the nature you're investigating here. Your arguments are littered with assumptions about how brains and minds work which can be (and should be) tested and modelled by scientific investigation. If you can conduct the philosophical investigation you're interested in without making a single assumption about how people's minds actually work, then you're welcome to it, but I contend that you cannot, and here that has certainly not been the case. Philosophy, when it's done well, works with the information the sciences provide, not outside of it.
Quoting Pfhorrest
Only if you attend to every single experience you've ever had simultaneously. Which is flat out impossible. Otherwise any 'new' or 'revised' belief could well be inconsistent with some previous experience and will continue to be so until you happen to attend to it. Given the sheer number of beliefs (tens of thousands at least), the number of potential divisions of experience, the relatively short time we have here, the limited bandwidth of working memory, and the limited neural firing speed, there are practical parameters set by basic natural conditions which limit the possible solutions to the problem of maintaining a set of right beliefs.
Quoting Pfhorrest
Again, this comes back to the fact that you're presenting this normative theory; the very act of doing so assumes there is a target audience who do not behave this way already, which is itself a third-party judgement.
Quoting Pfhorrest
Only if you've started from a premise of denying naturalism about truth already. If you haven't, then you do not need to take that step. "Here is a hand" does not start with an assessment of how we know what it is that's 'here'; it starts with "Here is a hand".
Okay, well part of my position could be phrased in your terms here as "don't assent to things you don't actually believe", though I would phrase that instead as "don't believe things that have no bearing on your experience of the world".
Quoting Isaac
I was thinking more of things like the relativity of simultaneity, which is far more counterintuitive than just a different explanation for why things fall down.
Quoting Isaac
It's sounding more and more like you think my positions are generally correct, and only object that they are trivially so. If you find them trivial, that's fine with me. I don't especially care to convince anyone who thinks these things are trivially true that they ought to find them more significant than they do. I'm only really concerned about reaching people who either don't think these things are true, or who don't think they're trivial.
If that's not you, that's fine. I don't really get why you even bother responding in that case. If I see someone post something that's just obviously correct to me, I either don't respond or just post a thumbs-up emoji or something. Seems pointless to belabor how obvious (to me) the thing they're saying is. That you do that towards me just comes off as you being somehow offended that I dare say something so obviously true. It kind of reminds me of complaints about "mansplaining", like you feel like I'm condescending to you personally by saying something you already well know. If you already well know it and think it's uninteresting and obvious, that's fine. If everyone else agrees, I'll get no responses, and the post will fall off the front page quickly. That'd be disappointing, but better than the pointless tediousness that our conversations always turn into.
Quoting Isaac
The entirety of Descartes' Meditations is basically an exercise in this, starting off with a cynical justificationism rejecting everything that can't be positively proven from the ground up, then claiming some beliefs are basic and unquestionable (not just the cogito, which is much more subtle in its flaws, but he basically grounds everything besides his own existence on "God exists and wouldn't let me be deceived"). It's classic foundationalism.
Even further back, Aristotle (in Posterior Analytics) explicitly explored the three branches of the trilemma and decided that since the only alternatives were circular reasoning or infinite regress, some beliefs had to be regarded as basic and unquestionable.
Quoting Isaac
Sure, but that just means humans are incapable of perfectly conducting the epistemic process, which is uncontroversially true. Humans are limited and fallible. Saying what they should aim to do doesn’t require that they be capable of doing it perfectly. Just that they should do it as well as they can manage, and if they fail at it in some way, that’s an error they should aim to correct. (E.g. if you come up with a new theory that disregards some old observations, that’s a mistake, and hopefully peer review will catch it and help keep it from spreading).
Quoting Isaac
Or if that’s in question, and you’re not asserting it as an unquestionable foundational belief. You say “here’s a hand” and Descartes asks “Is there really though? I mean it sure looks like one, but my senses have deceived me before...”. I agree in the end that this is a dumb line of questioning he’s starting, but the goal of my project is to make explicit WHY that (and other things philosophers say and do) is not as wise as he thinks it sounds, but rather goes against not only what everyone commonly thinks, but what they’re right to think: yeah, here IS a hand, for reals (unless maybe [unlikely alternatives]).
If everyone already agrees that there’s a hand, great, we can move on from there and do real science. But philosophy is about things like what to do when that kind of thing is questioned, and whether we can find our way back to that common sense or are compelled to believe some strange nonsense instead. I think we can find our way back to common sense, but it’s worthwhile addressing the people espousing nonsense and showing them that; and preemptively inoculating others against such nonsense, too.
Why not? It's really super useful to assent to things you don't believe. It greases the wheels of social interaction, it bonds social groups, it might even create useful beliefs in the long-term. I'm not sure why you'd want to rule it out, except for some Kantian obsession with radical honesty.
Quoting Pfhorrest
Same thing would apply. The matter that people would have fundamental beliefs about would not be the matter that Einstein theorised about. People do not have fundamental beliefs about models of physics; people have fundamental beliefs about what will happen in their day-to-day lives - how objects respond to manipulation, move through 3-dimensional space, interact. None of these things are affected by belief in the models which explain them.
Quoting Pfhorrest
No. I don't know how to make this any more clear. I object to the implication (resulting from such a long exposition) that there exist people who seriously disagree with you but who do so only because they haven't seen the strength of your argument. If you think these people are irrelevant then it seems petty to disabuse them of their cruxes. If rather, like me, you think these people's purported beliefs can be quite importantly damaging, then it seems crucial to find out exactly why they have them (or pretend to), not just guess at it from your armchair.
Quoting Pfhorrest
Yeah. And do you think it's a coincidence that a deeply religious person in a deeply religious society concluded from his 'radical doubt' that there must be a God? Of course it's not foundationalism, it never was; it was never doubt either. It was a convoluted post hoc rationalisation for a belief which he already held for much the same reasons as you're here advocating (in his case it would go something like "I believe there's a God because I've been told there is, so I'll hold that for now" (liberal part) - "Literally everyone I speak to who I consider an expert in the matter says there is a God, and I've personally experienced no contrary evidence, so I think I'll keep that belief" (critical part)). The Meditations was just Descartes re-arranging his beliefs (exactly as you advise) to accommodate some inconsistencies in observation (the lack of clear connection between the outside world and the mental picture he had of it).
People will say any old thing to make a narrative out of their beliefs; you really shouldn't take too much notice of it.
Quoting Pfhorrest
I strongly disagree. If you're going to advocate a task, it's absolutely imperative that the task either be achievable or, failing that, that the partially achieved task be worth the effort that must be put into it relative to other methods. We can't fly either, despite the fact that it would be great if we could (save a lot on fuel). Do you think on those grounds alone it would be sensible to advise that we 'keep trying' to fly, just do our best, keep flapping those arms and jumping even if we only get a little bit off the ground, because flying would be so great if we achieved it? No. If it is abundantly clear that a method cannot succeed, then we need to consider the next best alternative, not just assume that a partially achieved version of the first idea will automatically be the next best thing.
So you admit that such people do exist. Why then were you pressing me for proof of them? This is the kind of thing that makes me suspicious that you're not arguing in good faith, when you radically doubt things that I would really expect you to already agree with... and it turns out later, you do.
Anyway, I never said those people are irrelevant, I said that it's irrelevant for the purposes of philosophy to know why they believe those things (or pretend to, if you like). For philosophical purposes, all that matters is whether those (purported) beliefs are true, or at least justifiable, and the causes of people believing falsehoods don't tell us whether or not they're false or unjustified.
For broader social purposes, it's good to know why people believe falsehoods. But to know that they believe falsehoods, you first need to know whether or not the things they believe are false.
It would not be very epistemologically sound, or discursively fair, to approach someone espousing something you think is false and reply to that only with an analysis of the conditions that have caused them to come to believe that, as if presuming that they are a crazy person who can't think rationally, just because they've reached a different conclusion than you. If what they believe is actually true, then you'd be dodging the issue they're trying to talk about entirely.
When people do irrationally believe falsehoods (or meaningless nonsense), it is good to figure out what's causing them to do that, but first we need to assess whether what they believe is false, and whether they believe it on rational grounds. To do that, we need to determine what the rational grounds for believing things are... and that brings us back to epistemology again.
(And if they are believing falsehoods on rational grounds, then doing philosophy with them, i.e. having a rational argument, is the most epistemologically sound and discursively fair way of changing their mind anyway. Only once that fails, and we conclude that they are not thinking reasonably, should we begin concerning ourselves with the irrational causes of their nominal belief).
Quoting Isaac
Unless you think fideism or nihilism will get us anywhere (which it seems you don't), then whatever other method could possibly get us anywhere will be some subset of my method, because it's just the negation of those two things.
Your flight analogy is actually quite similar to an analogy to this general balance of neither-fideism-nor-nihilism that I thought up a while back. That balance recurs throughout my philosophy, and I thought this analogy up originally in terms of existential nihilism etc, but it works just as well for other cases like this:
We're on the surface of an infinitely deep ocean, with the infinite sky above us. Therefore we cannot stand on the bottom, because there is no bottom. And we cannot grab for the sky, because the sky isn't some solid ceiling above us; nor can we just stick our arms up and hope our imaginary friend Superjesus will save us from drowning or anything like that. If we try to do either of those things, stand on the bottom or hang on to something above us, we will surely drown. Therefore we have to do something other than those things: neither try to stand on the bottom nor hang on to anything above us.
In other words, we have to swim. I'm not specifying how to swim, nor saying that the specifics of how to swim are unimportant. I'm just pointing out that there is no bottom to stand on and hanging from the sky isn't an option either, so we've got to do something else directly involving the water we're immediately surrounded by instead.
What exactly to do is beyond the scope of philosophy, and the realm of more specialized sciences.
That's actually nice, but it only works if you believe there's nothing to stand on and nothing to hold onto, and if you believe you don't have to demonstrate your faith by allowing yourself to slip into the water before His Hand reaches down to save you (think: the binding of Isaac). But as a description of philosophy "starting in the middle", I wholeheartedly approve. You just seem to think this is of some use in dealing with people who don't already agree we have to start in the middle, and I don't see how it possibly could be.
(By the way, this is exactly how Quine defends the naturalization of epistemology against charges of circularity: if science is the source of the doubts about the results of science, then we may legitimately use science in the defense of those results.)
Something else I've had on my mind. It is sometimes said that epistemology is a search for a method that, if followed, would produce two results: (1) believing things that are true; (2) not believing things that are false. Critical rationalism is a claim that we get (1) for free so long as we do (2), at least in the very long run. But that only makes sense for finite sets of beliefs, hence you're inclined to model a person's web of beliefs as a snapshot that is at least arguably finite, on the grounds that it's hard to see how a person could hold an infinite number of beliefs.
But that model could itself be wrong, if you include within our beliefs not only closed propositions about particulars but also material inference rules that are open-ended. That is, if we actually had belief generators that ought to be dealt with. (Can you throw a car over your house? What about that car? What about that car? What about that car? ...)
And, as I think I've tried to say before but maybe didn't, it's entirely retrospective and we actually live with a stream of incoming new beliefs, so even if the model is okay we never quite have the opportunity to hit the pause button and use it.
No, I started off with arguments for why we must start in the middle. I’m not repeating those arguments in full in every thread, but exploring the implications of that conclusion on each sub-field of philosophy, thread by thread.
Quoting Srap Tasmaner
As I frame it, believing anything at all always has some odds of believing things that are true, so if we go about eliminating beliefs in things that are false, we increase the odds that our remaining beliefs are true. And this works whether we have finite or infinite beliefs. Take an infinite continuous plane, and repeatedly draw lines across it marking the boundaries of possibility, and you will end up enclosing a smaller and smaller area between those lines, even if there are still infinitely many points in that area.
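The boundary-drawing picture can be put into a toy numerical sketch (my illustration only, not anything proposed in the thread, and the halving rule is an arbitrary assumption): model the space of possibilities as the interval [0, 1], and let each successful criticism rule out half of whatever region survives. The measure of the surviving region shrinks toward zero, yet at every stage it still contains infinitely many points.

```python
# Toy model of "narrowing down possibilities": the hypothesis space is
# the interval [0, 1]; each criticism eliminates half the surviving region.
# (Purely illustrative; the halving rule is an assumption of this sketch.)
lo, hi = 0.0, 1.0
for _ in range(20):          # twenty successful criticisms
    lo = (lo + hi) / 2       # suppose each time the lower half is ruled out
remaining = hi - lo
print(remaining)             # 2**-20: tiny, yet still an uncountable set of points
```

The point of the sketch is only that elimination makes progress in measure even though it never reaches a single point, which is the sense in which one can "never pin down one exact comprehensive truth" while still narrowing the field.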
That depends obviously on the lines, so unless you're really working up some math here, this analogy is not so good.
I understand the impulse to talk in terms of weeding out and pruning and increasing the odds; there's reason to think this is a sound procedure in some situations (my "Poe-Doyle rule" is an example). I remain skeptical that it can be generalized so easily, and think it more likely that pruning only works on already bounded solution spaces. If you want to generalize, you want an account of how we narrow the range of options to a manageable set we can successfully prune. But you're going to refuse to do that, because you refuse to acknowledge the conditions that make pruning an option. You want a single universal method that can get you from anything at all to probably true; I am doubtful there is any single such skeleton key.
A belated response; I've been a little busy on the farm.
To my way of thinking achieving parsimony is done by weeding out propositions or assumptions that are not grounded in anything other than imagination, association or feeling. Earlier, unless I am mistaken, you said somewhere that you wanted to dispense with induction; that critical rationalism does not depend on inductive thought. I remain unconvinced on this point, because it is precisely those propositions that spring from imagination, association or feeling that are not, properly at least, based on inductive thinking as we moderns now conceive it.
I gave earlier the example of phlogiston. There never had been any observations or experiments which supported the idea of phlogiston. It was displaced by the idea of oxidation, and the inductively derived proposed commonality of burning, rusting and respiration.
Of course induction proves nothing; but if we had been clear all along about the difference between logical entailment and expectation based on accumulated cross-referenced experience and the inter-cohering conceptual systems (chemistry, physics, biology and so on) that have developed based on the inductively derived notion of law-like behavior coupled with empirical observation, we would not have needed Hume to point out the difference.
So, we don't, as you point out, hold belief systems that consist of "unorganized list(s) of all of the uninterpreted particulars of every experience we've ever had". Such lists would not be systems at all but merely arbitrary associations. Inductive thinking just is systematic thinking, and this goes for pre-scientific, non-empirical thought as well, because such thinking is always based on the inductive notion of cause, even in the absence of any observable mechanisms or measurable forces to be causally theorized.
Think about the idea of chi in Chinese thought for example. Chi is conceived as a unifying force that produces law-like outcomes. So the expectation of predicted phenomena is operative in that system, as much as it is in modern science. Modern science rejects chi because there is no observable structure or mechanism that could confirm or dis-confirm its actuality. So the idea is not actually induced by systematic observation, but by associative imagination. And although it is quasi-inductive in the sense of being induced by associative imagination, it is not properly inductive insofar as it is not induced by examining observable phenomena.
I’ll save further discussion on parsimony for the thread on that, but for now I’ll just say that your characterization of it does not bear any resemblance to mine.
Quoting Janus
I said earlier that induction is perfectly fine (and indeed very useful, though not exclusively so) as a way to come up with beliefs in the first place, hypotheses to test. It’s just not a proper part of the process of testing which beliefs are correct when there are multiple competing hypotheses.
Two hypotheses reached by induction can’t be judged for their relative merits just by finding more things that fit either pattern. You need to instead find something that doesn’t fit at least one of the patterns — which is then into falsification, and so critical rationalism.
I did mention this earlier, but I think that hypotheses are the result of abductive thinking. We imagine hypothetical causal systems of interacting forces based on what has previously been observed and confirmed by repeated observation.
Then we run experiments and make observations to discover whether the predicted phenomena do indeed occur. If what we have predicted is observed then our hypothesis is provisionally verified and if not it is provisionally falsified. Neither verification nor falsification follow logically.
So, you say "You need to instead find something that doesn’t fit at least one of the patterns — which is then into falsification, and so critical rationalism.". Where I am disagreeing is that I would say, as the other side of the coin you seem to be wanting to dispense with 'You need to find something that does fit at least one of the patterns — and it is on the basis of that that one pattern will be preferred over another — and this is how science is generally done; a critical process jointly empirical and rational'.
And when trying to choose between them, finding more results that follow the prediction of one of them tell us nothing, unless those results also go against the predictions of the other one, in which case you’ve falsified the other one. But you’ve still not verified the alternative, because it’s impossible to ever verify anything, since the verification (or confirmation) process is logically invalid, just affirming the consequent.
The main point that you still seem to be resisting seeing is that just as verification or confirmation is not deductively certain, insofar as verification does not strictly logically follow from observing what is predicted by an hypothesis, exactly the same pertains to falsification; and falsification of A is logically equivalent to verification of not-A, in any case.
On the one hand you want to agree to the distinction between deductive and inductive thinking, and on the other you want to use the principles of deductive validity, where they don't belong; in the inductive domain.
Even in the deductive domain falsification of A is verification of not-A. I gave the example earlier, with a mistake in the way I set it down, and you addressed that, unfortunately irrelevantly because of the mistake. But you didn't attempt to address it after I had corrected it.
Here it is again: falsification of "all swans are white" is logically equivalent to verification of "not all swans are white".
You can say “if not P then Q, not Q, therefore P” and that works just fine. You can also say “if P then Q, not Q, therefore not P” without problem either. It doesn’t matter whether the antecedent (the beliefs you’re testing) is some proposition or the negation of some proposition. What matters is that you can’t say “if P then Q, Q, therefore P”, or “if not P then Q, Q, therefore not P”. Both of those are invalid inferences, whether the antecedent is a negation or not. The point is that you can only validly deny the consequent from an observation contrary to the consequent; you can’t confirm the antecedent from an observation affirming the consequent.
Likewise, you can falsify a belief that P and so prove that not P, by observing something contrary to the implications of P; but you can’t just observe something implied by not P and take that is proof that not P.
Sure, you can prove anything by observing something contrary to the implications of its negation, whether that thing is itself a negation of something else or not; but you can’t prove anything just by observing something consistent with its implications.
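For concreteness, the four argument forms just listed can be checked mechanically with a brute-force truth table (a throwaway Python sketch of my own, not anything from the thread): an argument form is valid iff no assignment of truth values makes every premise true while the conclusion is false.

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is true and b false.
    return (not a) or b

def valid(premises, conclusion):
    # Valid iff no truth-value assignment makes every premise true
    # while the conclusion is false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# "if not P then Q, not Q, therefore P" -- valid (denying the consequent)
v1 = valid([lambda p, q: implies(not p, q), lambda p, q: not q], lambda p, q: p)
# "if P then Q, not Q, therefore not P" -- valid (modus tollens)
v2 = valid([lambda p, q: implies(p, q), lambda p, q: not q], lambda p, q: not p)
# "if P then Q, Q, therefore P" -- invalid (affirming the consequent)
v3 = valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p)
# "if not P then Q, Q, therefore not P" -- invalid (affirming the consequent)
v4 = valid([lambda p, q: implies(not p, q), lambda p, q: q], lambda p, q: not p)

print(v1, v2, v3, v4)  # True True False False
```

The check confirms the asymmetry being argued for: denying the consequent goes through regardless of whether the antecedent is a negation, and affirming the consequent fails regardless.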
Because it's important that you specify. Imagine we were arguing about the nature of 'true believers', back and forth for pages, only to find out that 'true believers' for you were the Catholics, whereas I presumed you meant the Protestants. The specific people whom you consider as being either fideistic or nihilistic matters. These people aren't one or the other solely on your say so, nor even on theirs. If there's a fact of the matter about what their thinking methods are then it is independent of either your beliefs or their beliefs about that matter.
Quoting Pfhorrest
All of this is begging the question. It assumes that there is some method of doing this which is the very matter about which there is disagreement. You'd have to first resolve Van Inwagen's argument from epistemic peers (broadly, if you don't already know it - if one of your epistemic peers disagrees with you about a matter, then that proves it is possible for someone with your knowledge and skills to be wrong despite thorough application to the issue - if that's the case then how will you ever know it's not you who are wrong?)
Quoting Pfhorrest
That you think it's just the negation of those two things is one of the matters under contention. No-one is contesting that your position at least vaguely holds together as a system; you've built a perfectly adequate castle. We're contesting its lack of foundations, not the integrity of its later structure. Your claim that it is the best (or even the only) epistemological method for establishing which beliefs are justified is not supported by pointing to its internal consistency. It's like trying to win an argument about which is the best car by pointing to the fact that yours has its wheels firmly attached. Well, they all do; that's not what's at issue.
Quoting Pfhorrest
I can do no better than @Srap Tasmaner in responding to this, so I won't bother repeating it, but rather jump straight to the issue.
Quoting Pfhorrest
This essentially repeats the same foundationalist error you're trying to eliminate. Your previous arguments are not somehow 'foundational' to these, knowledge is not built up like an inverted pyramid, there are no 'implications of that conclusion' that are not also implications on that conclusion.
It's an interesting exercise nonetheless. Watching you oppose foundationalism by claiming you're building up an argument from foundational principles, watching you oppose dogmatic fideism by dogmatically sticking to your unaltered beliefs despite nearly all of your epistemic peers disagreeing with you...
Doing what? Differentiating more correct beliefs from less correct beliefs?
I’ve already given an argument for why we could only ever assume one way or another about that, and why we pragmatically ought to assume there is, instead of just assuming there’s not.
Also, per critical rationalism generally, saying “you can’t prove that” is not an argument against anything ever. You need an actual disproof, not just pointing at how someone is assuming something, because we’re all always assuming tons of things and that’s not wrong; only persisting contra counter-evidence is.
Quoting Isaac
The resolution to this is built right in to the system I’m advocating. There are always a range of possibilities that someone with the same knowledge and skills can rationally disagree within, on my account, because nobody can ever pin down one exact comprehensive truth. Within that range you can’t know you’re not wrong; “you might always be wrong” is a core upshot of my view. You can only ever know what possibilities are for sure wrong, never which are for sure not wrong.
Quoting Isaac
It’s literally just defined as such. Anything at all that is neither fideistic nor nihilistic is okay on my account. I think you think I’m advocating something much narrower or more specific than I am.
Quoting Isaac
Foundationalism starts with basic beliefs that are taken to be self-evident or indubitable. I don’t do that. I start with reductio arguments against certain broad classes of view — fideism and nihilism — showing how assuming that those are true leads to problems, and then just taking whatever else is left over, which is a really broad class of things. Exploring the implications of that on other things just means seeing what possibilities on those other subjects fall outside those bounds, into the already ruled-out realms of fideism or nihilism, and what remains still in the acceptable domain between them.
Quoting Isaac
It really doesn’t seem so much like disagreement as it does a strangely aggressive agreement aggravated by some sort of miscommunication.
It’s like I’m saying something is a quadrilateral, and others are vehemently opposed to that because it’s obviously a trapezoid, or a parallelogram, or a rectangle, or a rhombus, or a square... and I’m saying yeah, it’s all of those things, all of those things are quadrilaterals and also it’s possible to be all of those kinds of quadrilateral at the same time. Why do you think we’re disagreeing? My only point is that it can’t have fewer than 4 sides, and it can’t have more than 4 sides, so it must be a quadrilateral.
When it comes to scientific theories, the situation is not really like this at all; it is never so clear cut. When the logical positivists spoke about verification, they were not stupid enough to be imagining that observing the predictions of an hypothesis logically proves it to be correct. They saw verification as provisional. Likewise, when predictions fail to be observed this does not logically disprove an hypothesis; there may always be other unknown factors in play. Falsification in science is as provisional as verification is. When predicted results are observed, all that is verified is that we might be onto something; when predicted results are not observed, all that is verified is that we might not be onto something.
Quoting Pfhorrest
We all start with basic assumptions taken to be self-evident. Your argument against fideism and nihilism; your assumption that they must lead to problems, is just such an assumption. If, as I suspect, your problem with those is that they either believe without evidence (fideism) or deny all evidence (nihilism) then you are assuming the provenance of inductive thinking; which is fine since all scientists do that. All scientists conduct their enquiries as if the laws of nature have been proven; the basic assumption of science is that nature is law-like and that we have discovered some of her laws and that our scientific knowledge is a great coherent system based on those natural laws.
I’m not talking about categorical or not, nor about particular atomic statements at all. I’m talking about how experiences in agreement with your (entire set of) beliefs don’t give justification to your beliefs over others; they don’t tell you anything about the relative merits of your beliefs compared to the alternatives, unlike how experiences counter to your beliefs give you justification for discarding your beliefs in favor of others.
But now that you mention non-categorical beliefs, there is an asymmetry there: you can never finish falsifying (at least empirically) a non-categorical belief, because the negation of “there are some black swans” is “there are no black swans”, and you can never be sure that you just haven’t seen whatever black swans there are YET.
Quoting Janus
But there is no rational reason to think that the truth of the consequent of an implication gives even weak support to the antecedent. It’s not just less than certainty, it’s nothing at all.
Quoting Janus
This is only if you’re talking about one atomic statement, which, as we already went over extensively in this thread, is not what we’re talking about here. Falsification is falsification of the entire set of beliefs, and in that sense it is certain disproof, because you still have to change something or other about your beliefs.
Quoting Janus
But we already knew we might have been onto something, so nothing new is learned.
Quoting Janus
No, rather, then we are definitely off track, somewhere or another, even if it's not perfectly clear where.
Quoting Janus
That's just asserting foundationalism, and so begging the question.
Quoting Janus
It is not an assumption, it's an inference. You're familiar with reductio ad absurdum, no? You start with the hypothetical assumption of some premises, derive an absurdity from them, and conclude the actual rejection of those premises. That's what I do to both fideism and nihilism, and then proceed from their negations.
Quoting Janus
That's not fideism, that's merely what I call "liberalism". Fideism is believing against evidence, flatly rejecting the possibility that some particular part of your beliefs could be wrong, and so rearranging the rest of your beliefs however possible to excuse away evidence against those protected parts.
Quoting Janus
That's not nihilism. I'm not sure exactly what you mean there. The counterpart to fideism in epistemology is what I call "cynicism", which is basically an overzealous skepticism: saying "prove it or lose it" to every belief, including those offered in proof of those ones, and those offered in proof of those in turn, ad infinitum, resulting in the demand that you reject all beliefs, forever, i.e. nihilism. I reject cynicism because it leads to nihilism (which is an ontological position, with a counterpart I call "transcendentalism", which in turn leads to fideism).
I'll do my best.
1. Beliefs are not propositions. Beliefs are states of mind equivalent to a tendency to act as if... As such it is a) not possible to have a belief which is contrary to the evidence of your senses (beliefs are formed by a neurological process of response to stimuli), and b) people's stated propositions are not necessarily reflective of their beliefs and it is a category error to develop an understanding of one based on experience of the other (just because people say their 'belief' is based on foundations, doesn't mean it is; just because people say they doubt everything, doesn't mean they do)
-- this leads to the more general criticism that there is no target of the normative claim, it's like telling people that they ought to breathe.
2. If you look at the graph where 'critical liberalism' is defined (on the other thread) you see it is based on avoiding extremes of two axes. One is 'willingness to change one's belief in the light of evidence to the contrary', the other is 'extent to which beliefs are accepted/justified without foundation or evidence proving their necessity'. Going not far enough in the first is 'fideism', going too far in the second is 'nihilism'. But both of these scales contain subjective judgements and are superlative. As such they are useless normatively, which is the intended realm of the original proposition. Given my definition in (1), above, I contend that no-one would hold their beliefs were impossible to change even in the light of overwhelming evidence to the contrary, and that no-one would hold themselves to have no beliefs at all because they can't be proven. Rather what we find are degrees of faith and degrees of doubt about some belief(s). The former is that the belief should be maintained so long as it is even remotely possible to do so. The latter is the complete absence of any preference. The reasons for that choice, the extent to which they're maintained and the methods by which they are, are all subjective.
-- this leads to the general criticism that there is no conclusion to the normative claim because all the important elements required to use it are missing from the claim. Like telling people they ought to consult a certain etiquette pamphlet at all times but neglecting to give them the pamphlet.
The argument against 1 seems to be that people do claim beliefs to be immutable, such as Reformed epistemology, but I find this uncompelling because in such systems reasons are given for why the belief should be held.
The argument against 2 is that this is just a preliminary stage and that objective measures of methodological questions will follow. I find this argument uncompelling (notwithstanding the fact that I anticipate such methods will prove just as subjective) because I find it to be foundationalist. It appears to cement each 'foundation' and then move on assuming the only direction of play is from these foundations forward to their consequences, whereas properly their consequences should no less be considered reasons to reject/alter the prior conclusions.
I hope that's served to clarify things.
More foundationalism. If your conclusion is in any way problematic, that is just as much cause to question your premises as your premises are cause to reach your conclusion. You can't keep referring back to things you've 'shown' before as if those matters were unaffected by the issues here whilst simultaneously maintaining an opposition to foundationalism (or at least maintaining an understanding of Quine).
Quoting Pfhorrest
Right. So how do you resolve Van Inwagen's position about possibilities which are 'for sure wrong'?
Quoting Pfhorrest
Yes, but your definitions are subjective (see my reply to Coben if you want a summary of why), so this amounts to nothing more than "anything which I find to be the right balance is OK", which is virtually tautologous.
Quoting Pfhorrest
Are you suggesting that 'absurd' is some kind of objective measurement? Otherwise, how is your belief that your reductio arguments show what you claim they show not then "basic beliefs that are taken to be self-evident or indubitable"? You literally claim (by introducing a reductio) that it is self-evident that fideism and nihilism lead to absurd or repugnant consequences. Furthermore, by limiting discussion to the consequences of this conclusion, you're holding those beliefs to be immutable.
This ought to be a clue that you've chosen the wrong way of formalizing the process, because confirmatory evidence just obviously does matter. If you've seen thousands of swans in your lifetime and they were all white, there's nothing at all irrational about believing that swans are probably all white, or believing defeasibly that they are all white.
Given your general approach, I'm just not at all clear why you're so attached to this mid-century Quine-Popper thing instead of going in for something more like formal epistemology. Have you considered following the Quantitative Way? (LessWrong, SlateStarCodex, Overcoming Bias, et al.) I have some reservations, but it's a much more defensible model of rationality than yours, and it seems like it would be right up your alley.
You seem to be doubling down, so I'm only going to address this one point which is really the crux of where I think you are going astray. In empirical matters confirmation is not an abstract logical entailment, but we are induced to think that belief in the invariances that we unfailingly observe is justified by lack of any observed counterexamples. To labour the point: it does not follow that those invariances are logically proven; science and everyday inference is not strictly logical like that.
Serendipitously, this morning I was reading a passage in Carl Sagan's The Demon Haunted World speaking about the popular belief that arose in the eighties or nineties that there is a giant sculpted human face on Mars:
"Even if those claims are extremely improbable-as I think they are- they are worth examining. Unlike the UFO phenomena, we have here the opportunity for a definitive experiment. This kind of hypothesis is falsifiable, a property that brings it well into the scientific arena. [.......] Even if it becomes plain to everyone that these Martian features are geological and not artificial, monumental faces in space (and allied wonders) will not go away."
Sagan writes "falsifiable", but then what he writes in the next quoted passage amounts to saying that the falsification of the belief that the features are artificial just is the confirmation that they are geological. How would we know that they are geological? On the basis of past experience and the knowledge accumulated therefrom, of course; in other words from inductive investigation and analysis. This is the way (or one of the main ways) that science works, and no amount of armchair philosophizing will change that.
This is a purely semantic disagreement that has no bearing on the substance of anything.
(I'm curious, since you're a working psychologist, do you differentiate between sensations, perceptions, and beliefs? I learned of the sensation-perception distinction that I employ in my philosophy from psychology classes, and it sounds like the sense of "belief" you're using is more or less what I would term a "perception" instead, so I wonder if and how you differentiate "belief" from "perception", presuming you also differentiate "perception" from "sensation" as my old psych textbooks said to do).
Quoting Isaac
It's like arguing against arguments that you ought not breathe, or claims by people that they don't breathe, on the grounds of the clear trouble you'd get into if you didn't, and that no matter how much you may try not to, you're going to end up doing it anyway, or else dying, so stop saying that you don't or telling other people not to. Instead, embrace the fact that you can't help but breathe (or else die), and focus on doing it as well as possible.
Quoting Isaac
Repeatedly asserting that doesn't make it so.
Quoting Isaac
Foundationalism isn't just any deduction from prior premises; it's holding some premises to be immune to questioning. But I start off by questioning certain possible premises, finding them to inevitably lead to problems, and then further exploring the area of possibilities remaining once those are excluded. Possibilities that fall into the area already excluded are not live possibilities anymore, but the simple fact that that area has already been excluded doesn't mean that the exclusion itself is not open to question.
(ETA: I just wrote something relevant to this in a different thread:
[hide="Reveal ETA"]Quoting Pfhorrest[/hide]
...end ETA.)
Quoting Isaac
Later consequences can be considered reasons to modify prior assumptions made within the realm of remaining possibilities, but they cannot be reasons to say that previous reasons to exclude certain possibilities are not good reasons anymore. The old reasons that led to the exclusion of those possibilities and the new reasons from the newly found problems have to be considered in tandem to narrow down the range of remaining possibilities; neither old nor new reasons can justify breaking back out into a range previously excluded by the other.
Quoting Isaac
You'll have to clarify this, because I think I've already answered this question, and if you don't think I have I don't know what you're still asking.
Quoting Isaac
In the general use of the word, no, but in the technical sense used in a reductio argument, yes.
Quoting Srap Tasmaner
"I'm just obviously right" is not an argument. I don't doubt that many people act like it matters. But there's good reason to think that it doesn't, once you actually think about why it would or wouldn't.
Quoting Srap Tasmaner
From my exposure to them, they're all about Bayes Theorem, which I mention in the OP as being continuous with my own approach:
Quoting Pfhorrest
It kinda seems like a lot of the nominal disagreement with me maybe stems from missing things like this that were right there in the OP, to instead attack some preexisting straw concept of falsificationism that I don't adhere to.
Quoting Janus
That sounds like you're saying we in fact tend to think that way, which I'm not disputing. I'm disputing that there is any reason why we ought to think that way rather than otherwise; anything that shows that way to be the right way to think.
Also, again, it doesn't settle anything between people who both think that way but come to different conclusions that way. You see one pattern, someone else sees a different pattern in the same data, and then you see something new that fits your pattern... but if it also fits their pattern, we've learned nothing new. You need to see something that doesn't fit at least one of the patterns to judge between them.
Quoting Janus
Since he seems to be using "geological" just to mean "not artificial", then sure. But that conclusion was not reached by the confirmationist process: it was not "if these were geological we would see X, we see X, therefore they are geological", which would be wholly fallacious. It was "if these were non-geological (artificial), we would see Y, we don't see Y but instead X, therefore these are not non-geological, or in other words they are geological (non-artificial)". It's the difference between those two processes that's the point here.
Quoting Janus
Nobody here is disputing that past experiences matter or that we accumulate knowledge from them, or advocating "armchair philosophizing" in place of empiricism. The issue at question is the process by which experiences are applied to our beliefs to develop knowledge. My position is that experiences that agree with your beliefs do not elevate them over any alternatives, unless those experiences are also counter to the alternatives, because it's only experiences that go counter to some beliefs that elevate the alternatives over them.
The vast bulk of science based on thinking that way is (for all intents and purposes, although not absolutely of course) settled though, so I can't see the point of your objection.
Quoting Pfhorrest
There is no difference between relying on the observation of features to determine that something is geological or that it is artificial, or obversely, to determine that it is not geological or not artificial. The whole process relies on our ability to distinguish between geological and artificial structures. Such a determination is not absolutely certain in the deductive sense; it is always logically possible, however unlikely it might be, that we could be wrong.
Geological means not merely "not artificial". The point to what Sagan is saying is that we are able to distinguish between something that has evolved geologically and something which has been created deliberately. In the inductive terms of science "if these were geological we would see X, we see X, therefore they are geological" is not fallacious at all (although note that "X" is not always or even often some single feature, but a suite of features).
It would only be fallacious if you thought it to be deductively certain, but I doubt many intelligent scientists would think that. You seem to keep falling back into the same conflation between deduction and induction. Based on that conflation you might say we have no justification to think that way, but the point is that scientists routinely do think that way, and the justification is that it works, has worked, to produce the comprehensive and (mostly) coherent body of knowledge we call 'science'. I don't know how many times I (and others) will have to try to make this clear before you finally get it.
It has been settled by falsificationist methods: the alternatives have been shown false, or at least much less likely or less parsimonious. Things are settled by showing the faults of their alternatives. That is "the scientific method" inasmuch as there is such a thing. You keep claiming otherwise, and I'm hesitant to engage with that because what people actually do is beside the philosophical point (they could still be wrong even if it's the popular way), but as a matter of fact science is not done confirmationally and hasn't been for a long time, since the problems with that were first pointed out.
Quoting Janus
If they were geological we would see X.
If they were artificial we would see X.
We see X.
Therefore... nothing. We’ve learned nothing.
To learn anything it needs to be:
If they were geological we would see X.
If they were artificial we would see not-X.
We see X.
Therefore they are not artificial.
Therefore if the only alternative to artificial is geological (which you’ve just denied) then we can conclude they are geological;
Else, we only know they’re not artificial somehow or another, not necessarily geological.
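The asymmetry between those two schemas can be put in code. A minimal sketch (the hypothesis names are from the example above; the representation of predictions as booleans is my own illustrative choice, not anyone's actual method):

```python
# A toy model of the schemas above: each hypothesis predicts whether
# the observation X will be seen (True) or not (False).
predicts_X = {"geological": True, "artificial": False}

def surviving(hypotheses, X_seen):
    """Modus tollens: eliminate every hypothesis whose prediction
    about X contradicts what was actually observed."""
    return {h for h in hypotheses if predicts_X[h] == X_seen}

live = {"geological", "artificial"}

# Second schema: the hypotheses disagree about X, so seeing X
# falsifies "artificial" and only "geological" survives.
print(sorted(surviving(live, X_seen=True)))  # ['geological']

# First schema: if both hypotheses predicted X, then seeing X
# eliminates nothing, and we have learned nothing.
predicts_X = {"geological": True, "artificial": True}
print(sorted(surviving(live, X_seen=True)))  # ['artificial', 'geological']
```

The only work the observation does is eliminative: a hypothesis either survives it or is thrown out, and when every live hypothesis predicts the same thing, nothing gets thrown out.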
Quoting Janus
Nobody here is denying science. I am (and many others, in actual publications, not here on this forum, are) denying that science works the way you say it works. I don’t know how many time I will have to try to make this clear before you finally get it.
Firstly I did not say that we would see the same X in case they were geological and in case they were artificial, so your first part here is irrelevant.
Your second part is somewhat badly worded. It should be 'if they were artificial we would see X, and if they were geological we would not see X'. If they were artificial we would see tool or machine marks; if they were geological we would not. I haven't said that there are other alternatives in this case, so I think you need to read more carefully. The point is that tool marks confirm artificial and the absence of tool marks confirms geological, leaving aside any other unimagined possibilities (are there any other empirical possibilities you can imagine?). Obviously there would be many other criteria to confirm geological origin, since geology is a well-developed science with a good body of experientially derived and inductively justified knowledge.
Quoting Pfhorrest
I have not been talking about "the way science works"; I have said that scientists generally think inductively, and that this way of thinking and the hypotheses it generates have worked to develop the body of knowledge we call science. For example, in relation to what I said above about geology being a well-developed science with a good body of experientially derived and inductively justified knowledge, do you not see that geologists' understanding of, and faith in, the geological processes that they identify by associating them with observed geological structures relies upon the wholly inductive assumption that the laws of nature don't change?
Anyway it has become obvious to me that you are heavily invested in your own ideas, regardless of the fact that I and others have shown them to be either trivially true (in the deductive context) or mistakenly applied (in the inductive context) so if you don't produce any new arguments I am going to leave you to it.
I'm not saying you said that, I'm pointing that out as the problematic scenario that demonstrates why confirmationism doesn't work. If the predictions do not falsify one of the possibilities, then seeing those predictions pan out tells us nothing, it doesn't distinguish between alternative possibilities. It doesn't matter if you see what your theory predicted, unless other theories predicted otherwise; it's the falsification of them that tells us something. Seeing something your theory predicted can't distinguish between that theory and any other theory that would also predict the same thing, and so tells us nothing.
Quoting Janus
You said 'Geological means not merely "not artificial".' That implies that you think there could be something not geological, without being artificial; or something not artificial, without being geological; i.e. there's (at least) a third option.
Quoting Janus
And I've said that induction is perfectly fine as a way of generating hypotheses, but it doesn't help us pick between competing hypotheses. The latter is where science differs from guessing and intuition.
Quoting Janus
Fine with me, I'm tired of repeatedly trying to get through to people who are saying things I already agree with as though they contradict me and then ignoring the actual arguments to the contrary of their other assumptions. I'm eager to let this thread die and move on to something different.
I agree with that, and that's perfectly fine in principle, but how rare are such cases, where two scientific theories predict exactly the same things? Can you think of a single example?
Quoting Pfhorrest
All I meant by that is that the science of geology is not generally concerned with artificiality, and that the understanding of geological processes is not generally measured against the understanding of artificial processes. It is true, though, that there is a general distinction between man-made and naturally evolved phenomena in science. No one is much interested today in the other possibility: that God did it.
Quoting Pfhorrest
And I've said it is the comprehensive and cohesive knowledge that is based on inductive thinking, assumptions, investigations and analyses that enables a choice between competing hypotheses. You haven't produced any plausible alternative to that.
They never predict all of the exact same things (else they would be exactly equivalent, different formulations of the same thing), but there is usually significant overlap. GR, QM, and Newtonian physics all agree about their predictions on the medium scale we humans are accustomed to. So pointing at a ballistic missile flying as Newton would predict isn't evidence in favor of Newton vs GR, because GR also makes that same prediction. To decide between them, you have to pick a prediction that they disagree on; and then you've ruled out whichever one loses, but not supported the remaining one in any way against any other theories that also make that same winning prediction.
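The same point can be put in the Bayesian terms mentioned in the OP: when two theories assign an observation the same likelihood, the posterior odds between them are unchanged; only an observation they disagree about shifts the odds. A minimal sketch (the likelihood numbers here are made up purely for illustration, with theory A standing in for Newtonian mechanics and theory B for GR):

```python
def posterior_odds(prior_odds, likelihood_A, likelihood_B):
    """Bayes' theorem in odds form:
    posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (likelihood_A / likelihood_B)

# Start with even odds between Newton (A) and GR (B).
odds = 1.0

# A medium-scale observation (e.g. a ballistic trajectory) that both
# theories predict with the same likelihood: the odds don't move.
odds = posterior_odds(odds, likelihood_A=0.99, likelihood_B=0.99)
print(odds)  # 1.0

# An observation the theories disagree about (e.g. light bending near
# the sun; likelihoods invented): the odds shift decisively toward GR.
odds = posterior_odds(odds, likelihood_A=0.01, likelihood_B=0.95)
print(round(odds, 4))  # 0.0105
```

The first update is a "confirmation" for both theories, yet it does nothing to discriminate between them; only the prediction they disagree on carries any evidential weight between the two.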
Quoting Janus
It's the investigations and analyses that do the heavy lifting there. Inductive thinking and assumptions give you your competing hypotheses. Analysis of those gives you the expected observations. Investigation, i.e. empirical observation, compared against those expectations tells us which hypotheses we can keep and which we have to throw away. But it's the "throwing away" part that makes progress: we cannot tell between any of the "keepers" based on the investigations that let us keep them; all we can tell is whether they're okay to keep or whether they must be thrown away.
I appreciate the effort.
Quoting Isaac
Would this mean then that animals have beliefs?
Quoting Isaac
Does this mean that one cannot come to believe things that are counterintuitive: relativity, for example, or that the earth actually revolves around the sun? In the latter case, while we can find empirical evidence that it is so, very few people actually do that. Or that colors exist outside us.
Quoting Isaac
I agree with this. I do think that people can be mistaken about their beliefs, though I think that their other beliefs are propositional, just dissonant with what they want to believe, or they have contradictory beliefs (just as one can have contradictory tendencies to act as if).
Quoting Isaac
What was his normative claim?
Quoting Isaac
If you believe in the Christian model of faith, you might well do that. At least one is encouraged to by some versions of that faith. Though one might be better off putting at that end of the spectrum 'beliefs that are not supported and do not seem to fit current models in science, say, or perhaps in general'. It could be without the latter part of that. IOW, one could try to believe only those things that you can demonstrate, or that have been demonstrated by experts, to be justified; OR you could accept things without justification (at least without conscious justification one has access to) and ignore counterarguments. I would say most people do this about something.
I'll start there.
No one would dispute that if theory A predicts X and theory B predicts ~X, and then we observe X, that we can claim progress by eliminating theory B. In practice it's messier than that, but in principle falsifying some theories is progress.
But what about theory A? Part of Popper's program, as I understand it, was focused not on direct falsification as the way to choose between hypotheses, but on distinguishing science from non-science by requiring falsifiability of theories. Theory A has been submitted to being falsified and it wasn't. I think most scientists would claim that their confidence in A was increased by this result. For instance, when GR predicted that light waves would bend passing near the sun, but Newtonian mechanics did not, and then we observed that, you want to say we can now discard Newton, but our confidence in Einstein should not increase.
But there are two points about this observation that I think merit attention: (1) it was a prediction, not a retrodiction; (2) what was predicted was unlikely to us, a surprise. Philosophers have definitions of "surprise", but in this case we could just say, observation for which no explanation is ready to hand. Einstein of course offers both prediction and explanation, a prediction no competing theory was offering, and reasons for that prediction. I think that's why everyone's confidence in GR was bolstered by the result. If we had made the observation by chance, we would be baffled.
Of course, our confidence increases but not to the point that we think GR is the absolute truth; we only conclude that it's closer to the truth than other theories, in the specific sense that new theories will have to retrodict what GR predicted, so we know future theories have to look a little like GR.
My question then is this: why shouldn't we count as progress a theory submitting itself to being falsified and passing? We already have the theory, so we don't learn anything new, agreed; but we do learn that the theory isn't crazy, and that it might, like QM, like evolution, survive many many more rounds of potentially being falsified and not be. Also: each successful non-falsification adds another requirement to future theories, another datum they must retrodict, so there is a similar pruning function, only it's for possible rather than actual theories. Surely that ought to count for something.
It’s something like the relativity of wrong, if you’re familiar with that. Newtonian physics is wrong, but it’s not AS wrong as Aristotelian physics. I picture concentric rings, or like topographic lines, centered on whatever the complete truth is, where we draw a new innermost ring with every observation, and the further out in the rings a theory is, the “more wrong” it is. As we keep drawing more and more rings, and a theory continues to remain within the innermost ring, it is “elevated” RELATIVE to the others in the outer rings, even those we never enumerated in the now-outer rings.
But the thrust of falsificationism in this metaphor is that that relative elevation is in fact due to each new ring lowering the possibilities outside of it: the things still remaining within the inside ring merely retain their initial elevation. But that doesn’t mean no progress has been made, because the progress is in the drawing of the rings (and so weeding out the possibilities outside of them), not in actually elevating the things in the middle.
You're being cute -- "guess" is wildly inappropriate and if your approach steadfastly refuses to see the difference between a guess and a real theory, what's the point of any of this?
Every observation we make will constrain our future theorizing, whether the observation was predicted or not, whether it falsified any existing theory or not. If progress is only a matter of constraining the range of theories that might be true, all we really need are the observations and no theories at all. But no one does research that way and it is doubtful anyone could. Why is that? One reason to bother with theory is to know what kinds of observations there are, which should count as the same kind of thing -- so not really adding constraints, or not much -- and which are genuinely different, and especially which would be surprising.
Sure, I agree with that, and I’m not at all advocating that we go without theory in any way.
So assigning a theory the status of "falsified" or "not-yet-falsified" is not the only way to make progress.
So to you the only value of GR was in making a prediction that, if observed, would allow us to rule out Newtonian mechanics, and that observation did nothing to confirm GR, or nothing we should care about, nothing we should allow to increase our confidence in GR. In particular, the explanatory framework that comes along with GR, responsible for the prediction, that's of no interest.
There are two things that seem to be getting conflated here: one is what makes a theory useful, and the other is what makes observations useful. Theories are useful for the reasons stated above. Observations are useful for telling us which theories are wrong and which are maybe not wrong. In adopting theories instead of just keeping unorganized lists of observations, we save effort, but at the cost of sticking our necks out and making assumptions without adequate justification, which could therefore be wrong. In making observations targeted at checking where they might be wrong, we make sure that our theories are not wrong in at least that respect, and so are safe to use at least in that domain.
The above referring to NM, GR and QM: you don't need to decide between them. They work in different contexts. NM as methodology is not falsified by GR. The greater accuracy of GR at finer scales might lead you to have less confidence in the metaphysics associated with NM, but that would only be because we now have greater confidence in the GR metaphysic.
Quoting Pfhorrest
Hypotheses and theories are neglected, when other ones that prove to be more accurate and thus give us more reason to adopt them are found. They are not "thrown away". NM is still perfectly workable for many precise operations. It is simply not as precise as GR. Here's a challenging question for your position: what exactly do you think it was about NM that was falsified by GR?
Also something you said I meant to address earlier was that faith is belief in the face of contrary evidence. I think this is completely wrong; I think people cannot believe something contrary to evidence that they accept as such. One person's evidence, except in the most mundane empirical matters, is not another's.
Take, for example, faith in Christianity, in God; there is no evidence that God doesn't exist, and that Christ is not God incarnate, and that his message is not a revelation. There is no evidence that God does exist, that Christ is God incarnate and that his message is a revelation, either. So, faith is belief in the absence of evidence, not belief against evidence.
When it comes to evolution, the fossil record and the inductively derived theories concerning genetic mutation and natural selection give us positive reason to believe that species have evolved, and that confidence gives us reason to think that the world could not have been created 6,000 years ago. On that basis creationism is ruled out, and most intelligent believers don't reject the evidence for evolution that the fossil record, genetic theory and the theory of natural selection represent. Those that do continue to believe in creationism simply reject the evidence for evolution, which allows them to continue to believe in creationism.
The problem with your view is that on the basis of its logic there can be no evidence for anything, only evidence against. What you fail to see is that if there cannot be evidence for anything, then there cannot be evidence against anything either; they are two sides of the one inductive coin.
The whole picture of NM. Which is not to say every piece of the picture, but the conjunction of all of the pieces together. (The negation of a conjunction is not the negation of every conjunct, only of some of the conjuncts).
That NM is still good enough for some purposes is beside the point. We know for sure that NM is not completely correct.
(We also already know for sure that GR and QM are not completely correct, but they are still less wrong than NM, and we don't yet know what in turn is less wrong than them).
Quoting Janus
The bold part is important. Finding reasons to reject that some observation is evidence to the contrary of a belief is exactly the behavior that someone clinging to that belief in the face of evidence would do. (Yes, it's also something that's sometimes done by people who aren't doing that, and we can't know for certain based just on that behavior whether they are doing it or not; only the person doing it themselves can know, if even they do. As already gone over extensively with Isaac in this thread).
Quoting Janus
Depending on what you mean by "God", either there is such evidence, or else it's not the kind of thing held susceptible to evidence at all, and so must be rejected on those grounds.
Quoting Janus
This is a semantics game again. I specify two different terms, "liberalism" and "fideism", exactly to avoid playing this semantics game. I'm not against believing in the absence of evidence, I am against believing against evidence. Call those whatever you like, names don't matter except for convenience.
Quoting Janus
Repeating this over and over again despite my repeated refutations isn't going to make it true.
It's possible to prove something true by falsifying the predictions of its negation. It's not possible to prove something true by confirming its own predictions. Different kinds of proof, not different kinds of statements. That is the important distinction. And neither of those is "induction"; that's something else entirely, which comes well before the stage of testing like that.
Hmm. But for you, beliefs can't have and don't need justification. You describe things here as if relying on a theory incurs a risk because we overstep what we actually know, we project beyond what we have adequate justification for.
But isn't that all belief? Isn't all our knowledge only probable? Or are there beliefs you will countenance treating as certain? Is relying on theory truly different? Or is it the same because we are always relying on some theory without adequate justification?
Yeah, that's what I meant. ("Theory" and "belief" are being used as rough synonyms here, along with "model" or "hypothesis"). Only a (wholly impracticable) completely non-theoretical approach relying on nothing but the aggregate of our particular experiences, without extrapolating at all, would not be sticking our necks out like that, and nobody does or in practice can do that, and it would be horrendously inefficient to do so even if we could. It's entirely pragmatic to trade the risk of error for the ease of theory.
What does it mean to say it is not correct, though? What specific part of it is not correct, as opposed to merely not accurate enough?
Quoting Pfhorrest
There is no certain criterion as to what counts as evidence, though, and this is all the more so given your position.
Quoting Pfhorrest
Give me an example of the evidence you have in mind.
Quoting Pfhorrest
Dismissing an argument as being merely semantics seems like a cop-out. If you want to give new or eccentric meanings to terms you should be able to support their use. Faith as it is generally understood is believing in the absence of empirical evidence. The faithful will not see themselves as believing in the absence of evidence tout court, but they will count different things as evidence than the empiricist will. So it is only from the perspective of the empiricist that the faithful believe in the absence of evidence. But to say they believe against the evidence is a step too far, given that what is believed is not subject to empirical verification or falsification.
Quoting Pfhorrest
This is as clear as mud. Please give an example.
To say it’s not correct is to say that some observations one would expect on account of it are contrary to the observations that are actually had.
For example, Mercury does not move in the way one would expect from NM.
Quoting Janus
If what you mean by “God” involves being all knowing, all powerful, and all good, then the occurrence of evil is evidence against the existence of such a God.
Quoting Janus
Give me a better word to use for the refusal to question an opinion, then; something that contrasts it with being open to an opinion not yet proven. I think those are both different senses of “faith”, and I could think of an alternative name for the former but not the latter. I did also consider “dogmatism” over “fideism” once, but the principle I’m naming is applicable not only to beliefs but also to intentions, and “dogma” etymologically refers specifically to beliefs.
I’m always open to new words, and frustrated with some of the word choices I’ve had to make already, so better alternatives are welcome.
Quoting Janus
If what they say is not subject to empirical testing, then that is in itself a reason to reject the belief, because it makes the belief unquestionable in principle, and all beliefs must be questionable in principle.
But also, lots of people believe in a God that is subject to empirical tests, since their concept of God is supposed to actually have some noticeable impact on the world.
Quoting Janus
We went over many examples before.
If the Face On Mars was artificial, we would expect it to be made of baryons.
It is made of baryons.
Therefore it’s artificial?
Or:
If it was natural, we would expect it to be made of baryons.
It is made of baryons.
Therefore it’s natural?
No, because it would be made of baryons whether it was natural or artificial. That’s not a prediction that rules any possibility out, and it can’t confirm every contrary thing that it doesn’t rule out, so it confirms nothing.
But if it was artificial, we would expect to find tool marks.
We do not find tool marks.
Therefore it is not artificial.
“Not artificial” = “natural”.
Therefore it is natural.
Yes, but one would expect that only if one believed the system to be perfectly accurate at all scales. People, assuming a certain metaphysics, did believe that; now they no longer do. GR probably has its limitations too. They are both models, correct within their limits. GR is "finer grained" than NM, but doesn't falsify it, because within its limits of scale NM is perfectly accurate. So GR does not "falsify" NM, but demonstrates its limitations. You have yet to show what about NM is falsified by GR. You might want to say that the belief in its perfect accuracy is falsified, but that is not an inherent part of NM.
Quoting Pfhorrest
It is only evidence to those who believe that human notions of omnipotence, omniscience and omnibenevolence are sufficient to understand God.
And even if that were accepted, a transcendent God who did not possess such attributes could be believed in without fear of encountering any empirical evidence for or against its existence.
Quoting Pfhorrest
The contrast, as I already pointed out, is really only between people who accept different kinds of things as evidence. It is your prejudice against the idea that there could be any other kind of evidence than the empirical or the logical which leads you to characterize the faithful as refusing to question their beliefs. (Note: I think it is certainly true that there can be no inter-subjectively corroborable evidence other than the empirical or the logical, but when it comes to what experiences or scriptures or whatever one counts as sufficient evidence for one's own beliefs, the inter-subjective corroborability of such "evidence" may, with perfect consistency, be seen as irrelevant.)
Quoting Pfhorrest
Lack of ability to empirically test a belief is not reason to reject the belief, unless you count empirical evidence as the only kind of evidence for a belief. The belief that your wife loves you cannot be definitively empirically tested, because however she treats you, you can never be certain about her motivations or psychological pathologies. Most philosophical ideas cannot be empirically tested. How would you test whether there is a Platonic realm of Forms, for example?
Quoting Pfhorrest
This is a poor example, because it is not realistic. What could the presence or absence of baryons have to do with the natural or artificial origin of the Face on Mars? (Again the fact that baryons have nothing to do with the question shows the role of inductive thinking; no plausible mechanism by which baryons could have anything to do with the origin of the Face on Mars can be given. Such plausible mechanisms are made possible by our inductive understanding of the world and its invariant law-like processes).
Please supply a real world example.
This fundamentally misunderstands falsification. One theory does not falsify another. Observations falsify theories. And showing an inaccuracy of one is the same thing as falsification.
Quoting Janus
Like I said, it depends on what you mean by "God".
Quoting Janus
Yep. As established in my earlier thread on empirical realism, I think the whole of a thing’s reality is its empirical properties.
Quoting Janus
Her behavior is evidence of her mental state; all empirical properties of everything are behaviors of some sort or another.
Even then, her brain state is in principle empirically observable, even if that’s impractical with today’s technology, leaving only gross motor behavior to go on.
Quoting Janus
Most properly philosophical ideas (in today’s narrower sense excluding “natural philosophy”) are not beliefs about the facts of the world, but ideas about the framework through which to investigate things like (but not exclusively) the facts of the world. Since they’re not making claims about reality, empiricism is not relevant to them; which is good, because whether or not to use empiricism is one of those topics, and if it were to be settled empirically that would be circular.
Quoting Janus
There won’t be any reasonable examples of real scientists doing things so obviously wrongly as to make clear to you what the problem is, because real scientists aren’t that stupid.
The point is that accounting for what real scientists do with confirmationism would suggest that absurd cases like this were perfectly fine, because they follow the same confirmationist form.
Cases where it looks like confirmationism is working, like you keep giving, are cases where it’s falsification doing all the heavy lifting. That nobody would even try a case that isn’t like that suggests that falsification more accurately models our intuitions, even though we intuitively say otherwise.
To do what though?
Again, what about quantum mechanics and evolution? Neither body of theory is entirely satisfactory to much of anyone, but the fundamentals are the most confirmed scientific theories we have ever had, and that seems to matter to scientists an awful lot.
Should it?
To differentiate the merits of different theories.
The cases where confirmation seems to work, the ones Janus has been giving at least, are cases where observations can show one of several competing theories false. The counter-cases I've provided where confirmationism is useless in comparing competing theories are all absurd because nobody would bother doing an observation that can't distinguish between them, but confirmationism as an account of science implies that those cases should nevertheless give support to the theories not being differentiated from each other.
Quoting Srap Tasmaner
They are the least wrong theories in their respective fields we have. That they have known faults just means that there are some still-less-wrong theories we need to find. Until we find those alternatives, these theories, plus ad hoc exceptions as necessary to limit their application to domains outside those in which we know they fail, are the best we have to work with.
Nothing in any of that is against anything in my view.
Showing an inaccuracy does not falsify a theory. We need to know why that inaccuracy is appearing. In the case we are discussing the inaccuracy was thought to be caused by some hidden planet or asteroids, but is now, according to GR, thought to be caused by the greater warping of space in close proximity to the sun. That was only known due to GR, so if NM was falsified it was indeed GR that falsified it. I don't agree that it was falsified, though; it was merely shown to be limited insofar as it is unable to account for the warping of space.
Quoting Pfhorrest
Nonsense, she might act as though she loves you and yet not; or conversely act as though she doesn't love you even though she loves you.
Quoting Pfhorrest
No, it also depends on what you count as evidence and how you define omnipotence, omniscience and omnibenevolence, and how you think an infinite eternal being would manifest those qualities (whether you think human understanding of those qualities is adequate).
Even if we accept for the sake of argument that it merely depends on what you mean by "God", the question remains as to whether the existence of a transcendent God, however that is otherwise specified, can be confirmed or disconfirmed.
Quoting Pfhorrest
Right, so if we believe one philosophical idea rather than another, then it is merely a matter of faith, because it is believing without inter-subjectively corroborable evidence.
Quoting Pfhorrest
It appears as though you are going to continue to simply assert this without giving any good examples of how it supposedly works.
Suppose Steve and I are watching a high-stakes poker tournament, and Steve tells me that one of the players has a tell, but it takes a stopwatch to "see" it: whenever they raise after thinking less time than the last player to bet, they're bluffing. I'm doubtful, so we test his theory as we watch, and it works just the way he said: always bluffing when raising and quicker, never bluffing when not raising or slower.
That's certainly a falsifiable theory. Shouldn't I have greater confidence in it now? Steve thinks so: "We watched twelve hands and I was right every single time." What theory was Steve's competing with? There were no other theories. The null hypothesis? That's just another way of saying that observations can be confirmatory.
It absolutely does. That's what falseness is: not being an accurate model of reality.
Quoting Janus
If NM was correct, we would expect there would be some previously unaccounted-for mass causing the unexpected motion of Mercury. So either there is such a mass, or NM is incorrect. There is not such a mass, so NM is incorrect.
Quoting Janus
In a colloquial sense, sure, people can pretend things. You're not arguing in good faith here anymore if you think I wasn't conceding that.
Notably, you ignored the bit about her brain state being empirically observable in principle.
Quoting Janus
That's all a part of that definition of God.
If you define God differently, sure, you can come up with something that's not falsified yet.
Or something that's not testable at all. Better be consistent with that definition though and not act as though anything is evidence of God acting on the world or something.
Quoting Janus
Any transcendent anything cannot be tested. It also has no effect whatsoever on anything in the world, because that is exactly what makes it transcendent and impossible to test. And that is all the reason to reject belief in any such things.
Quoting Janus
Math is not subject to empiricism either, but that doesn't mean all mathematical theorems are just taken on faith.
Not everything is a claim about reality, so not everything is tested against empirical experience, but every claim of any sort has some sort of truth-maker against which it is to be tested, philosophical and mathematical and ethical claims included.
Quoting Janus
It appears as though you are going to continue ignoring the plentiful examples I have given over and over again of the absurd implications that would follow if confirmationism were a sound method of inference.
Didn't you ragequit this thread already, yesterday?
Before you were unaware of a pattern in your observations. Steve pointed out a pattern. So far it holds up. You now have a belief where you had none before. Continuing to see the pattern doesn't give more reason to hold that belief than you already had just from noticing the pattern he pointed out in the first place.
None of our models can ever be definitively shown to be accurate models of reality, or for that matter be shown not to be.
So,
Quoting Pfhorrest
So now you claim that it has been verified that there is not such a mass. That would mean, could only mean, that the GR idea of space being warped has been verified to be correct.
Quoting Pfhorrest
You haven't explicitly conceded that, so how would I know? In any case the fact that people can pretend things doesn't alter the fact that whether your wife loves you or not cannot be empirically demonstrated. The bit about her brain state being "observable in principle" is irrelevant, because that would require that a certain pattern of neural activity could be reliably verified to be equivalent to being in love. But you say no verification is possible.
Quoting Pfhorrest
I've already said that I think there cannot be any empirical evidence either confirming or dis-confirming the existence of God. It would help if you read more carefully.
Quoting Pfhorrest
Exactly what I've been saying all along; that some beliefs are faith-based insofar as there cannot be any inter-subjectively corroborable evidence to confirm or dis-confirm them. Your preference for rejecting such beliefs is just that, and nothing more; your preference. (Of course I agree that such beliefs cannot be argued for or against because that would require inter-subjective corroboration of some sort; either empirical or logical; we probably agree about that much. But I also think that no one has the right to determine what should or should not reasonably motivate privately held beliefs, because you have no way of knowing what another has experienced).
Quoting Pfhorrest
Not a good example; in math there are determinably correct and incorrect answers.
Quoting Pfhorrest
I can't ignore what hasn't been presented. I never "ragequit" the thread; I said that if you continued to say the same baseless things I would not continue. Since then the thread has either taken a slightly more interesting turn, or else I have managed to somehow magically increase my level of interest. Who knows; but in any case there was never any rage.
You got the first half right, but the second half wrong. It's trivially easy to show a model to be inaccurate.
Quoting Janus
I'm sidestepping all the complex Quinean stuff about background theories making our observations theory-laden, because you're having trouble understanding just the simple straightforward problems with confirmationism that don't depend on any of that.
Either there is not such a mass, or some much deeper and more subtle assumptions according to which we interpret our experiences are wrong. We've searched thoroughly for such a mass and found nothing, so either we're doing something subtly and fundamentally wrong with how we search for things in space, or there is no such mass and so NM is false.
Quoting Janus
Something more like GR than NM has to be true, yes. Whatever the accurate model is, it has to agree with GR's predictions within that domain. That doesn't mean that GR specifically is completely accurate, and we actually know that it's not. But it's more accurate than NM in every domain, and so less wrong than NM; and if we hadn't already found problems with it, it could still have been possibly right, while NM is in any case definitely wrong.
Quoting Janus
If her love has any effect on the world at all, and isn't just some kind of epiphenomenon, then it will make an observable difference of some sort, from which we can in principle tell whether or not she's in love.
The lack of any causal effect on the world, and so the unobservableness of mental states in principle, is a primary argument against epiphenomenalism and the like.
You don't have to verify the correlation between being in love and observable neural states, you just need to be able to falsify the alternative. What would be the observable consequences of her not being in love? If those are not observed, then she is in love.
This is just a repeat of your same misunderstanding of what falsification is about. It's not at all about whether the thing being tested is phrased as a negation of something else or not. You can always rephrase something as just a different term that doesn't involve negation: "natural" and "artificial" can be taken as negations of each other, and either tested for without saying "not-" the other.
Whether there's a "not" in the proposition being tested is completely irrelevant. It's about whether you're deriving support for it from observing its expected consequences, or from observing things contrary to the expected consequences of its negation. Those aren't the same thing, in the same way that f(x) is not equivalent to ~f(~x), for arbitrary f() and x. In this case, f() is "the consequences of" and x is some theory. Observing the negation of the consequences of the negation of some theory is not just the same thing as observing the consequences of the theory.
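The difference between the two inference forms at issue here can be checked mechanically with a brute-force truth table. This is just an illustrative sketch of the point (the `implies` and `valid` helpers are my own, not anything from the thread): the confirmationist form "if P then Q; Q; therefore P" has a counterexample, while the form "if not-P then not-Q; Q; therefore P" does not.

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b'."""
    return (not a) or b

def valid(premises, conclusion):
    """An argument form is valid iff the conclusion holds in every
    truth assignment that makes all the premises true."""
    return all(
        conclusion(P, Q)
        for P, Q in product([True, False], repeat=2)
        if all(prem(P, Q) for prem in premises)
    )

# Confirmationist form: "if P then Q; Q; therefore P".
confirm = valid([lambda P, Q: implies(P, Q), lambda P, Q: Q],
                lambda P, Q: P)

# Falsificationist form: "if not-P then not-Q; Q; therefore P".
falsify = valid([lambda P, Q: implies(not P, not Q), lambda P, Q: Q],
                lambda P, Q: P)

print(confirm)  # False: P false, Q true satisfies the premises but not P
print(falsify)  # True: no assignment makes the premises true and P false
```

The single counterexample row (P false, Q true) is exactly the fallacy of affirming the consequent.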
Quoting Janus
Yes, and I'm just conceding that you can define what you mean by "God" that way, but lots of people define what they mean in different ways, and too often people are not consistent with which definition they use. They'll retreat to the "untestable" definition to protect themselves from having to change their beliefs, and then proceed to make decisions on the assumption of a God that actually intervenes in the world and so would be testable.
The latter is the only kind of God anyone would have any reason to care about the existence of anyway, since the former kind by definition would make no noticeable difference on the world whether he existed or not -- since if he did make a difference, that would be a way to test for his existence.
Quoting Janus
It is not just a preference, I have given an argument for it. Beliefs about transcendental things cannot be questioned, so if we must subject all beliefs to questioning then we cannot hold beliefs about those things. We must subject all beliefs to questioning if we care at all about the truth, because not questioning beliefs is a surefire way to avoid making any progress toward the truth. We should care about the truth if we care about anything at all, because all progress in every domain hinges on having correct beliefs and correct intentions, as all actions are guided by the difference between our beliefs and intentions. If you don't care about anything... well, I don't believe that you don't, but if you truly didn't, then I wouldn't care about your opinions, and there'd be no point continuing discussion.
Quoting Janus
If they have experienced something, then that is empirical evidence... for something. FWIW, I have frequently experienced the "religious experiences" some point to as evidence of God, and I remain an atheist, because supposing the existence of God is far from the best explanation for those experiences, and raises far more problems than it would solve even if it were.
Quoting Janus
You presume there are not in philosophy, ethics, etc? I have arguments why you should presume to the contrary, but I've already given them in previous threads and don't want to rehash that here. We've been over objectivism before just like we've been over empiricism before.
Not having convinced people that something has been determined to be correct is not the same thing as it not actually being correct. Look at all the people right here on this forum who disputed the determinably correct mathematical fact that 0.99... = 1.
God never makes any difference in the world, but only in the individual who believes.
Exactly, individuals are not the world, but only a part. So then God can only indirectly make a difference in the world, by mediating through individuals. God has no direct relation to the world, except perhaps for pagans.
Inaccuracy is not a black and white thing; there are obviously degrees.
Quoting Pfhorrest
Although we cannot presently know it, it is always possible there is a mass we cannot detect. In any case NM is highly accurate in most cases, so the most we can say is that there are situations it apparently cannot deal with, not that it is false.
Quoting Pfhorrest
That just isn't necessarily so. Her love may have a profound effect on her, but for her own reasons she keeps it entirely hidden.
Quoting Pfhorrest
No, again you are making unwarranted assumptions; there may be no observable consequences of her not being in love, just as there may be no observable consequences of her being in love.
If natural and artificial can be tested for, that means there are observable marks of each that confirm one or the other.
Quoting Pfhorrest
Again, you misunderstand the nature of faith. People care about the existence of God and believe or disbelieve in it because of the effect belief or disbelief has on them.
Quoting Pfhorrest
Sure, and all that says more about you than anything else.
Quoting Pfhorrest
The answers in math are provably right or wrong. There are no provably right or wrong answers in ethics, aesthetics, epistemology, or metaphysics, etc. It's simply not a good analogy.
Things can be more inaccurate or less inaccurate, but they are either accurate (completely) or inaccurate (to some degree). Being inaccurate is the same thing as being false (for a descriptive proposition at least). Things can be more or less false, but they have to be completely non-false to be true.
Quoting Janus
That is basically what I just said. This tangent is beside the point though. Either there is some mass there despite our best efforts to find one failing, or NM is false. That it's close enough to true in some contexts is irrelevant.
Quoting Janus
If there are literally zero observable consequences to her state of love, then her being in love or not makes no difference whatsoever -- because if it makes any difference, we could in principle tell whether she was in love or not based on those differences.
You're conflating a trivial colloquial sense of hiding the evidence of something with there being literally no empirical evidence of it. Burying some treasure somewhere obscure and so hiding it from the world isn't the same thing as making that treasure have no empirical properties and making it in principle impossible to tell whether the treasure exists or not. Your wife can hide her feelings, obfuscate them, trick you into thinking she's feeling something she's not, but if her feelings make any difference at all, then by that difference there is in principle some way to tell what they are. (Or at least, back to the point, what they aren't).
Quoting Janus
No, that falsifies one or the other. It's about the form of the inference, again. This is the whole point that you just keep missing.
You can show that something is non-natural if it's missing things it would have if it were natural, and you can show that something is non-artificial if it's missing things it would have if it were artificial, and if natural just equals non-artificial and vice versa, then you can conclude that it's whichever one you didn't just disprove.
But you can't show that something is artificial just because it has things it would have if it were artificial, unless those are also things it wouldn't have if it were non-artificial. We would expect the Face on Mars to be made of rock whether it was natural or artificial. So reasoning "if it's artificial it'll be made of rock, it's made of rock, therefore it's artificial" is fallacious (and obviously so, which is why nobody actually does that); it would be made of rock even if it were natural.
The consequent needs to be something a non-artificial scenario wouldn't have -- like tool marks, say -- and in that case, observing that falsifies the claim that it's non-artificial; it doesn't confirm the claim that it is artificial.
If it did confirm it, then the same form of reasoning could also confirm both that it's natural and that it's artificial from the fact that it's made of rock. That'd clearly be fallacious (which is why nobody actually does that), which shows that form of reasoning, confirmationism, to be fallacious.
Do you get that? This is the important point I keep repeating and you keep ignoring.
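As a toy sketch of that point (the hypothesis sets and the `discriminates` helper are purely illustrative, not anything anyone in the thread proposed): if we model each hypothesis as the set of observations it predicts, an observation only tells us anything when the hypotheses disagree about it.

```python
# Toy model of the Face-on-Mars example: each hypothesis is modeled
# as the set of observations it predicts.
natural = {"made of rock"}
artificial = {"made of rock", "tool marks"}

def discriminates(observation, h1, h2):
    """An observation helps only if the two hypotheses disagree about
    whether it should be seen."""
    return (observation in h1) != (observation in h2)

# Both hypotheses predict rock, so observing rock is uninformative.
print(discriminates("made of rock", natural, artificial))  # False

# Only 'artificial' predicts tool marks, so observing them falsifies
# 'natural' -- that shared-vs-unique consequence is doing the work.
print(discriminates("tool marks", natural, artificial))    # True
```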
Quoting Janus
No, by my responses to Merk, if God makes a difference in believers then he makes a difference in the world. Stating (or implying) a conditional is not affirming its antecedent.
Quoting Janus
If the antecedent above were true, it would. If God existed and did something to believers that wasn't consistent with the non-existence of God, observing the believers having that done to them would show us that God existed.
Quoting Janus
So you're saying people don't actually care whether or not God really exists, they only care about the impact that believing he exists has on them, whether or not he really does? In that case I don't care what they believe, since they're not concerned with the truth, they're not trying to figure out what's real or not, they're just trying to make themselves feel good. Good for them feeling good, but I don't want to argue about the truth with someone not interested in it.
Quoting Janus
You seem to be confusing the effects of God existing with the effects of believing that God exists. Believing that God exists clearly has an observable effect on people, and consequently on the world. But that doesn't prove anything about whether those beliefs are correct or not. To know whether they're correct or not, we have to know what the implications on the observable world would be if he didn't exist compared to if he did. Then we could try to find observations contrary to those implications, and so falsify the claim that he doesn't exist.
If there are no implications one way or the other, then the belief in him is beyond questioning, and since we must question all beliefs if we care at all about figuring out what's true, we must reject all beliefs about such things that are not amenable to questioning.
Quoting Janus
It says that so-called "religious experiences" are not evidence of God's existence.
Quoting Janus
Then what are you here arguing about? If nobody's right or wrong, what do you care whether people say things you think are wrong? What's the point in convincing anyone otherwise?
In any case, it's just your opinion that there are no right or wrong answers in those fields, it's not even a broadly accepted opinion, and I think it's a provably wrong opinion.
Is that a universal fact? Does making a difference to a part always make a difference to the whole? Say I change a tire on my automobile, does it become a different automobile? Or, does it become the same automobile with a different part?
It’s not a completely different automobile, but your automobile is now different than it was before.
There are signs that rock formations are natural, and there are signs that they have been modified. If rock formations display tool marks then we know they have been modified.
I'm not going to bother responding to anything else because it's just you making the same (what I consider either baseless or trivial) assertions over and over. I have tried to show why I think they are baseless, but it repeatedly falls on deaf ears, so I have reached the limit of my patience.
I'll just leave you to think about a few points against your position that I consider germane and important:
Most scientists accept theories on the basis that all predicted outcomes have been consistently observed. Of course this is the same as to say that contradictory results have not been observed; but the point is that we do have reason to believe theories which are consistent with all known observations, and not merely because they are yet to be falsified, but also because they are consistent with our entire knowledge of what we think of as the laws of nature (which themselves could only be falsified if the nature of nature suddenly changed, something which is by no means logically ruled out, but which seems to be vanishingly unlikely to thinking people, and lack of falsification alone gives no reason to have any such confidence).
The fact that it seems vanishingly unlikely cannot be based on the lack of falsification alone, because that alone tells us nothing about how many confirmational observations have been made. From the point of view of falsification considered on its own absent confirmation something has either been falsified or not, and this is simplistic black and white thinking.
I have no doubt you will respond to this; probably with some more of your assertions that I don't understand falsification etc., but I won't be responding further. I've given you my honest assessment of what you have been claiming and have said all I have to say on this subject.
Sure, because natural rocks would not have tool marks, so we can falsify that they are natural, leaving non-natural, i.e. modified, artificial, whatever you want to call it, as the only alternative.
You can tell some theories are better than others based on differences in what they predict, and observations that go against the predictions of some but not others, thus falsifying some, and keeping the others.
If you only look at observations that don't go against any of their predictions -- like looking to see whether the rock formations are made of rock, which every theory predicts -- then your observations will tell you nothing.
Can you not see that it's falsification doing the heavy lifting here?
I keep repeating this because you keep failing to acknowledge that this is the point, and not any of the non-sequiturs you keep bringing up instead.
But if you're tired of this, feel free not to respond. I'm tired of this thread too, and only respond because I can't help myself.
BTW, something useful to me came out of this discussion after all. Because of my mention to you earlier about how I had once used "dogmatism" in place of "fideism", but discarded it because "dogma" had etymological roots that suggested a narrower application than I wanted for it, I looked up the etymology of it again and found that I had been wrong about that.
So now I'm switching to using "dogmatism" in place of where I've been using "fideism", and instead adopting "fideism" as an umbrella term encompassing both "dogmatism" and "liberalism": two different kinds of "faith".
Spurred by that, I also remembered a similarly annoying conversation with Isaac (I think, and perhaps others) where he(/they) took different implications from my term "objectivism" than I meant, thinking it meant more like what I call "transcendentalism", when I really meant only something that might otherwise be called "universalism".
So, I decided to also change to saying "universalism" where I've been using "objectivism", and instead adopting "objectivism" as an umbrella term encompassing both "universalism" and "transcendentalism".
This nicely mirrors the pre-existing way that I had "criticism" and "cynicism" as two kinds of "skepticism", one that I support and one that I oppose.
And I realized that I could also stick "phenomenalism" and "nihilism" under the umbrella of "subjectivism", each contra one of the subtypes of "objectivism".
So now:
No, I'll prove myself a liar and just say this one last thing: falsification and verification are two sides of the one coin as I see it. On this I'm satisfied to agree to disagree with you, since this issue itself cannot be definitively resolved by confirmation or disconfirmation issuing from any empirical observation, or mathematical or logical deduction.
Because you're not seeing the different issue at hand, and only focusing on the one that you think I'm talking about, but I'm not.
Quoting Janus
I don't actually disagree with what you're saying (falsifying P proves that not-P, falsifying not-P proves that P; knowing that something is false tells you its negation is true).
It's just a non-sequitur that's not contrary to what I'm saying ("if P then Q, Q, therefore P" is not the same kind of inference as "if not-P then not-Q, Q, therefore P"; the latter is valid, the former is not, even though they both prove the same theory P via the same observation Q).
I can't help myself either because this response shows again that you have not been listening to what I've been saying. I have been pointing out that the deductive inferential fallacy: "if P then Q, Q, therefore P" is not relevant to the empirical domain.
"If artificial, then tool marks, tool marks therefore artificial" is indeed not deductively valid, as I've acknowledged. It is however inductively useful, and is just the kind of inductive inference commonly employed in science, because it gives us good reason to think that the phenomenon is artificial.
"If not-artificial, then not tool-marks, tool marks therefore not-not-artificial" is indeed deductively valid, but it relies on the (unproven) premise that tool-marks could not exist on a natural structure. The proper conversely expressed verificationist counterpart to "if not-P then not-Q, Q, therefore P"; would not be "if P then Q, Q, therefore P", but "if Q then P" (if tool-marks then artificial), which is precisely the unproven premise in "if not-P then not-Q, Q, therefore P".
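The equivalence being leaned on here -- that "if not-P then not-Q" is just the contrapositive of "if Q then P" -- can itself be verified mechanically. A minimal check (the `implies` helper is my own shorthand for the material conditional):

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b'."""
    return (not a) or b

# "if not-P then not-Q" and "if Q then P" agree under every
# truth assignment, i.e. they are logically equivalent.
equivalent = all(
    implies(not P, not Q) == implies(Q, P)
    for P, Q in product([True, False], repeat=2)
)
print(equivalent)  # True
```

So whatever epistemic status "if Q then P" has, its contrapositive has exactly the same status; the dispute is over which direction of inference the observation is supposed to support.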
Quoting Janus
I want to return to this to clear up what may have been a misunderstanding. When I say that if rock formations display tool marks then we know they have been modified, I don't mean that we know that with deductive certainty, because we don't. We know it in the same sense that we can be said to have any scientific knowledge, none of which is infallible. So "know" in this context means something more like "have very good reason to think" or "have no reason to doubt".
But "if it's artificial it'll probably be made of rock, it seems to be made of rock, therefore it seems it's probably artificial" is just as invalid as if we were speaking of certainties. It's not about the certainty, it's about the form through which support is supposedly lent. Even if we're only talking probabilities, the form still matters.
Quoting Janus
If you read this carefully you'll see that the falsificationist argument "if not-P then not-Q, Q, therefore P" relies on the premise "if Q then P". No one expects "if Q then P" to be proven; nothing in science is proven. The conjecture is "if there are tool marks, then we can reasonably believe that the structure is artificial". This conjecture is reasonably based on countless instances of experience.
That's inevitably true of falsification as well.
Falsification is important because you cannot affirm general laws empirically. You can never show that all swans are white; you can only show that this swan is white. The falsification that all swans are white is the confirmation that this swan is not white. Most knowledge isn't about general rules (I think?).
Quoting Pfhorrest
Effectively yes, although you refer to it as rejection -- forming a belief 'X is not shown, therefore not true' -- whereas equally if not more important is suspending judgment: 'X and ~X are not shown, therefore I do not believe X or ~X'. This is contrary to the idea that X ought to be tentatively accepted until falsified, and avoids the problem of tentatively accepting both X and ~X.
Quoting Pfhorrest
I don't see a problem with this. Pragmatically, it's not ad infinitum but to some degree of consistency. One explores an idea in the context of other ideas one finds uncontroversial. Sometimes one or more of those ideas become overthrown (one has to examine the context, i.e. examine new ideas that are consequences of the idea under consideration), sometimes everything fits nicely, sometimes it just doesn't fit at all.
Sorry this is late, I apparently failed to hit send.
Seeing something to the contrary of what the negation of your theory predicts does show your theory true, though. And if the negation of your theory only has something predicted to be unlikely, then seeing that something only makes your theory likely true. I even mentioned this in the OP, and have quoted it again later in the thread since:
Quoting Pfhorrest
Your "if Q then P" is, as you said, equivalent to "if not-P then not-Q". If that is (probably) true, and Q is true, then P is (probably) true. But that's the falsificationist method, not the confirmationist method.
The confirmationist method would say that if "if P then Q" is (probably) true, and Q is true, then P is (probably) true, and that's simply not a valid way of reasoning, even with the "(probably)"s in there.
Quoting Kenosha Kid
I never said it ought to be, only that it may be. Pending evidence either way, both X and ~X are permissible beliefs. To say that pending evidence either way, both X and ~X are impermissible beliefs (what I mean by "cynicism") would make it impossible to ever have evidence either way (because you would need some beliefs to be the evidence, but you couldn't hold those without others that you also aren't permitted to hold yet, ad infinitum), and so impossible for any belief in anything to ever be permissible.
(It might be worth reiterating here that by "belief" I don't mean anything anywhere near dogmatic, merely "thinking something is true". There are all kinds of things we think are true, but are perfectly open to evidence that they're not).
Quoting Kenosha Kid
Is it okay to believe in those things you think are uncontroversial without first proving that they're correct from the ground up? My "liberalism" says yes, and its negation I call "cynicism" says no.
This is not correct; you keep presenting it backwards, which amounts to refuting a strawman. Sticking with the present example, the confirmationist method says precisely that tool marks would reasonably be thought, based on an enormous body of experience, to be the main sign of artificiality. The presence of tool marks, for all intents and purposes, confirms artificiality, but can never prove it.
The problem with simply saying that falsification "does the heavy lifting" is that it ignores the fact that nothing can be falsified without other things being confirmed; or, to put it in other words, anything is falsified only to the degree that other things are confirmed.
Quoting Pfhorrest
Quoting Pfhorrest
So it is the liberal part I'm referring to: that any given belief should be considered justified enough to be tentatively accepted. This would include any absurd yet so far untested belief that I might make up: that spiders are telepathic, or that The Great Geoff lives on an asteroid orbiting the black hole at the centre of our galaxy, or that the CIA are controlled by a secretive Inuit conglomerate.
The things I believe are not quite this random I hope. I believe the things I do not just because they have not yet been falsified, but also because there is some reason to do so. Popper's criterion is not just that an idea isn't falsified, but that it is nonetheless falsifiable, i.e. we can test it, and see if the world fails to work as if the idea is false. It is not proof because another test might falsify the idea yet, but it is a stronger grounds to believe than 'hasn't been falsified'.
Quoting Pfhorrest
But this is not what I suggested. I said that I can suspend judgment on either given no facts to support either. We aren't obliged to take a firm position on everything. Do I believe Jesus lived or not? Neither. I don't know, and I don't really care. It's a matter of supreme indifference to me.
Quoting Pfhorrest
Yes, but then I don't subscribe to the view that ideas either need to be proven or distrusted any more than I subscribe to the view that ideas must be falsified or held tentatively as true. They are false dichotomies in my view.
Pragmatically, beliefs are tools for predicting the world, the best founded beliefs being those best aligned with experience, the worst founded being those conflicting with evidence (falsification). Those exactly in the middle which are neither falsified nor supported (as opposed to proven) are likely useless, probably meaningless too. The more evidence for an unfalsified idea, the stronger the basis for belief.
However strongly justified a belief, a reasonable person must reject it the moment they see it falsified. In the meantime, so long as the belief is both falsifiable and consistent with the world, the believer is perfectly justified in holding it to be as if it were true, i.e. to have assumptions about the world.
I’m refuting the thing that’s called “confirmationism” in distinction from “falsificationism”. If that’s not the thing that you support, then that’s great, but that is the thing that’s called by that name in philosophy of science. I’ve been saying since the beginning that what you’re saying isn’t counter to what I’m saying, because what I’m saying is precisely an argument against that form of inference. It’s not a straw man just because you don’t support it; I didn’t start out arguing against you, claiming that this is what you believe. You just came in arguing nominally against what I was saying, with things that weren’t actually against it.
I'm doubtful that verificationists (for example Ayer or the Logical Positivists) were stupid enough to believe that argument is valid, so I think they must have been talking about something else.
The something else I think they were talking about is that repeated observations coupled with an enormous accumulated body of theory based on those observations does give us good reason to think in many contexts that when we observe what is predicted we are warranted to hold (always provisionally) many things to be confirmed.
In short I don't think there was ever any coherent ""confirmationism" in distinction from "falsificationism"": or vice versa. If you still disagree that's fine; we'll just have to agree to disagree.
Just confirmationism, not verificationism as in the Positivists. Confirmationism is something broader (as in less specific, less comprehensive) than verificationism; non-verificationists can still be confirmationists.
And as I've just said a bunch of times, it's not just about deductive validity. It's that that form of argument doesn't even give probabilistic support to P. If you reverse (or equivalently negate) the antecedent and consequent of the first premise, then it does, yes, but that's precisely the switch from confirmation to falsification.
Quoting Janus
That's precisely the same thing as "it's probable that if P [the body of theory] then Q [the predicted observations], Q, therefore probably P". Which there's no good reason to think, and plenty of good reason to think otherwise. (I'm avoiding the word "valid" here because you'll think I'm talking about deduction again).
If you reverse the P and the Q, or equivalently negate them, then there is good reason to think that way, yes. But that's precisely the switch from confirmation to falsification.
Quoting Kenosha Kid
It's the "enough" part that matters there. It's not that you should tentatively accept it, but that you should not demand proof from others if they want to tentatively accept it. (Nor, if you yourself feel inclined to accept it, demand proof from yourself or else reject it; if it seems true to you, go ahead and believe it).
Quoting Kenosha Kid
Yes, if you're inclined to believe those things, then go right ahead, and if someone else is, let them, unless you have reason to suggest you or they should not. (NB that this doesn't mean that you have to accept whatever nonsense someone else is inclined to believe, just that if either of you wants to change the other's mind, you need to show that they're wrong, not just point out that they can't show that they're right).
Quoting Kenosha Kid
Then that is not the thing I call "cynicism", and I think that's perfectly fine.
Quoting Kenosha Kid
That's not my view. My view is that non-falsified beliefs are permissible beliefs (that you can hold them as true without committing an epistemic error), not that they are obligatory beliefs (that you must hold them as true or else you're committing an epistemic error).
Quoting Kenosha Kid
The inability to ever have evidence for something, rather than merely against the alternatives, is the whole point of falsificationism.
Quoting Kenosha Kid
:up: That's exactly my position as well.
Something 'seeming true' is a reason to believe it, not the believing of it. Some of those reasons are rational, such as empiricism; some are not (e.g. "I cannot stand the idea that...", "I don't want to live in a world that..."). What I'm getting at is that these are not equal. I actually wouldn't advise that someone go ahead and believe something that is neither falsified nor evidenced, especially if its opposite or negation is evidenced albeit unproven; but even that aside, simply due to the lack of good reason to believe.
Quoting Pfhorrest
Not at all, you can always have evidence for something. A witness testimony is evidence that the accused was at the scene of the crime, for instance. It just isn't proof. Evidence for something is always incomplete; evidence against it is always terminal. That is the whole point of falsification as I understand it.
We're not a million miles apart but the above distinction is the difference. Evidence is not all or nothing. There are many degrees between a completely arbitrary unfalsified belief and a well-founded unfalsified belief.
To be "consistent with the world" just is to be confirmed by observation, which is of course the same as to not have been (yet) falsified. The belief that tool marks are the main sign of artificial rock formations does not derive from falsifying anything but from countless confirmations that those rock formations displaying tool marks are indeed artificial. So, that is at odds with your "liberal" notion that we do and should, "believe whatever we like as long as it has not been falsified". This point has been the crux of all my responses to you in this thread.
Yes, what you say here is essentially what I (and others) have also been arguing against the OP.
I consider that a good (but not complete) reason to disbelieve something. You probably shouldn’t believe things that are probably false. I’ve said above (in the OP, and requoted twice since then) that I’m not advocating a black-and-white view: things that are not completely falsified can still be epistemically unlikely, and you’re not completely wrong to believe those, but you’re taking a big risk. (I actually think this is perfectly analogous to risky but not impermissible behavior, as well: it’s not wrong per se, but you probably shouldn’t do it).
Quoting Kenosha Kid
I would say instead that there are many degrees between a completely falsified belief and a mostly-unfalsified one. See the discussion above with Janus about the different kinds of probabilistic inferences. If P probably implies Q and Q seems to be the case, that doesn’t even give you the tiniest additional support for P; but if Q implies P (or equivalently not-P implies not-Q) and Q seems to be the case, then that does give incomplete support to P, but only because it’s equivalent to a probabilistic falsification: something that’s likely to be false if P were false seems to be true, so P is probably true, precisely because not-P is probably falsified.
Quoting Janus
It’s to be not falsified, but that’s not the same thing as being confirmed.
Quoting Janus
Where our beliefs originate from is not the issue at hand here; how to decide between conflicting beliefs is. You’re also changing which belief you’re talking about in the example. First you were talking about the belief that the face on Mars is artificial. Tool marks were the implication of that belief that we would check the belief against. Now you’re talking about the belief that that implication itself is true. To test the belief in that first implication, we would need some different implication derived from it, and that testing would likewise have to be done falsificationistically.
Of course it is; beliefs that derive from well-examined repeated experience should inspire more confidence than those which do not. I'm not changing the example at all. The belief that the face on Mars is artificial can be checked by examining whether or not tool marks are evident. If they were evident, we would have good reason to believe that the face is artificial; if they were not evident, we would have no reason to believe the face is artificial.
Which is to say, beliefs that have survived many potential falsifications.
Quoting Janus
Let P = "the face on Mars is artificial"
Let Q = "there are tool marks on the face of Mars"
We've previously been discussing how we would test P. My entire point on that subject has always been that P implying Q, plus Q being true, does not give support to P; only not-P implying not-Q (or equivalently, Q implying P), plus Q being true, would give any support to P.
In your previous post before this one, you mentioned "The belief that tool marks are the main sign of artificial rock formations". That's not our "P". That's "if Q then P" (if I'm generous in interpreting you there, and you didn't mean "if P then Q" instead).
Let R = "if Q then P".
We believe R based on repeated observations wherein it is never the case that Q and not-P, which suggests to us that not(Q and not-P), which is equivalent to Q implying P. That's what it seems like you're saying, and that's a perfectly fine reason to come to believe that.
But now say we want to test that belief against other possibilities, maybe because someone else doesn't think the evidence suggests that belief, or just because we're undecided between multiple interpretations of the evidence ourselves.
To test R, we need to derive an implication from not-R. Call that implication not-S. I don't know what that implication would be off the top of my head, but maybe you can come up with something that would have to be false if "Q implies P" was false, and we can call that thing "S".
If R merely implied S, and S were true... that wouldn't give us any support for R. We'd still be free to believe R, just because it seems true to us, but if it seemed false to someone else, or we were undecided on it ourselves, we wouldn't have anything to decide that disagreement or indecision with... unless the non-R possibility also implied a non-S observation, in which case we could rule out that possibility, but still not all possible alternatives to R.
Only if not-R (any and all alternatives to R, any scenario where R was not the case) implied not-S, and S were true, would that give us support for R.
Both can be true.
The characteristic decay chains of the Higgs boson in the LHC data are evidence, not proof, that the Higgs exists, sufficiently so to earn Peter Higgs a Nobel prize. It is not evidence against ~Higgs, since there are potential theories that could explain the same data with more than one particle. But the more signals corresponding to expected decay chains we see (more have been discovered very recently), the better founded the belief that the Higgs mechanism is a good model of reality. It is not proven, but nor does it have the status of 'merely unfalsified' which might apply to something that has not been tested at all.
By "mostly unfalsified", I assume you mean falsified with less than 100% certainty. An example might be Trump's claim of election fraud, insofar as the few concrete claims have been mostly thrown out or pulled as they don't agree with fact. Nonetheless there is no obvious feasible means of completely killing off the broader claim.
Third, where evidence against not P is evidence for P. Is the ball under the left cup or the right? Assuming the ball is under one of the two cups, falsifying the theory it is under the left cup is identically evidence for it being under the right cup. There's no distinction between falsifying ~P and verifying P. (Of course, by sleight of hand or a tricksy table, it might be under neither, but the *belief* it is under the right cup is affirmed, though not confirmed.)
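The cup example can be sketched as a two-hypothesis Bayesian update. This is a minimal sketch with an invented prior and a hypothetical "lift the left cup, see no ball" observation, assuming an honest table: once the hypotheses exhaust the space, falsifying one just is evidence for the other.

```python
# Two exhaustive hypotheses: the ball is under the left cup or the right cup.
# All numbers are invented for illustration.

def normalize(dist):
    """Rescale a dict of unnormalized weights into probabilities."""
    total = sum(dist.values())
    return {h: w / total for h, w in dist.items()}

# Prior: no idea which cup.
prior = {"left": 0.5, "right": 0.5}

# Likelihood of the observation "left cup lifted, no ball there"
# under each hypothesis (no sleight of hand, no tricksy table).
likelihood = {"left": 0.0, "right": 1.0}

posterior = normalize({h: prior[h] * likelihood[h] for h in prior})
print(posterior)  # falsifying "left" leaves all the probability on "right"
```

The point the code makes: there is no separate "verification" step for the right cup; ruling out the left one does the whole job, exactly because the two hypotheses were assumed exhaustive.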
LOL, I've already explicitly stated that if P is the proposition that the face is artificial, and Q is the presence of tool marks, and if we understand tool-marks, based on extensive confirmatory experience, to be the main sign of artificiality, then the proper equation is "if Q, then P"; it was I who pointed that out to you much earlier on. I also pointed out that it is the hidden confirmatory premise in your falsificationist equation: "if P then Q, not-Q, therefore not-P".
Yet you continue to try to tendentiously distort everything to pass through your preferred lens of falsification. As I've said nothing you've presented has given me any reason to change my view that verification and falsification are the two faces of the one coin. I think to get clear on this you will continually need to remind yourself that there is no deductive certainty, on account of there always being unproven premises, in either verification or falsification.
Why do those observations not equally lend support to the other theories that are just as consistent with them? I suspect you can actually provide an answer, because I trust that working physicists actually are smart enough to be using sound methods.
So I’m expecting that there is something about each of those alternative theories that is less consistent with all of the observations than the Higgs is; that each theory concords with fewer observations, or expects those observations with lower probability, or some combination thereof. In other words, the observations weigh against the other theories more than they do against Higgs. That is just a probabilistic version of falsification, which I mentioned in the OP and have requoted three times since then.
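The "observations weigh against the other theories more than against Higgs" idea can be sketched as a toy Bayesian model comparison. The theory names and per-observation likelihoods below are invented for illustration, not real LHC figures:

```python
# Repeated updates concentrate posterior mass on whichever theory
# discounts the observed data least. All numbers are invented.

def update(posterior, likelihoods):
    """One Bayesian update step over competing theories."""
    unnorm = {t: posterior[t] * likelihoods[t] for t in posterior}
    total = sum(unnorm.values())
    return {t: p / total for t, p in unnorm.items()}

theories = {"higgs": 1 / 3, "alt1": 1 / 3, "alt2": 1 / 3}  # flat prior
# Invented likelihoods: the alternatives make the data somewhat
# less likely than the Higgs model does.
likelihoods = {"higgs": 0.9, "alt1": 0.6, "alt2": 0.3}

for _ in range(10):  # ten batches of similar observations
    theories = update(theories, likelihoods)

print(theories)  # "higgs" now dominates, without ever being proven
```

Each round the alternatives are partially ("probabilistically") falsified, and the surviving theory's posterior rises only because the others' fall.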
Quoting Kenosha Kid
I mean a theory that at worst says that the observations we’ve made are a little unlikely. Complete falsification is when a theory says that the observations that we see are not possible, which thus renders that theory (epistemically) impossible (i.e. certainly false). A theory saying the observations we see are merely improbable in turn makes the theory (epistemically) improbable, which is a lessened version of falsification, or a partial falsification if you will.
Quoting Kenosha Kid
Like I’ve been saying to Janus over and over, that’s beside the point. Sure, if you can show that not-P implies not-Q, and that Q, then you can show that P, via falsifying not-P. But that’s not what falsification was ever against.
What it’s against is saying that if you can show that P implies Q, and that Q, then you can show that P. That’s what confirmationism says, and what falsificationism denies.
Either of those can be rendered probabilistic instead of absolute as you like, and the same difference applies.
Quoting Janus
You know, I’m getting kind of tired of being condescended to by someone with such obvious reading comprehension difficulties.
I know you were talking about the same P and Q. But as I just said in my last post, we were talking before about how to test whether P or not using Q, and then you suddenly switched to talking about why we think that Q implies P.
I also know you were talking about Q implying P already (and how that’s equivalent to not-P implying not-Q), but that reversal from P implying Q just is switching from confirmationism to falsification.
"If P then Q, Q, therefore P" is confirmationism.
"If Q then P, Q, therefore P" is falsificationism, because it's equivalent to
"If not-P then not-Q, Q, therefore P".
You’re not showing that confirmation is hidden within falsification, you’re showing that falsification is the thing we need to do instead of confirmation, which is my whole point.
But I’ve explained all this many times before and you didn’t get it then so I don’t know why I expect you to get it now either.
That was precisely my point. Other than the absence of a historical competitor theory, the initial evidence is no more for one theory than another which yields those particular outcomes. As one increases the number of successful predictions, one ought to eliminate possible competitors, else the two theories are empirically indistinguishable. That process is ongoing, but there's no point at which LHC data will suddenly rule out an alternative model; in fact, I always assume that, in future, observation will lay waste to most of our models. The alternative is that I live in a privileged era.
Point being that we increase our faith in the model the more it fails to be falsified, without it ever being proven true. And this is not because we have falsified particular known competitors, and certainly not because we've falsified ~Higgs, but because we have narrowed down what a competitor theory can predict that is different to the Higgs model. One doesn't actually have to formulate the competitor theory to falsify it: it is sufficient to know that, as each data point is collected that is consistent with Higgs theory, so long as no data point is collected that rules it out, whatever potential competitor theories might be formulated are either falsified or equally consistent with the data so far, i.e. are *like* the Higgs theory to an increasing extent.
This gives us some confidence that, while Higgs might yet be ruled out (and probably will be), it is encouragingly close to reality. And it is this increasing confidence that increases our belief in Higgs. We did not start out with that level of confidence, or that strength of belief.
Quoting Pfhorrest
I included this partly for completeness and partly because you seem to interpret evidence for P and against ~P as purely falsificationist, i.e. only ~~P. But this is P so it's an erroneous distinction imo. In this class, there is no distinction between falsifying ~P and verifying P.
I already tried exactly this line of argument (beginning here and here). It won't work.
As far as I can tell, nothing will budge @Pfhorrest from his position. Nor should anything, in a sense, since the principles in play are not themselves falsifiable. That's irony or necessity, as you like, I suppose.
Personally, I think classical logic is just too primitive a tool here: material implication is not a good model for causation or for the relation between a theory and a prediction of that theory. Conditional probability is a better fit for both, and if you take a Bayesian approach you still get falsification as a special case, while picking up a reasonable treatment of confirmation.
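That Bayesian point can be made concrete with a toy calculation (all numbers invented): when H entails E, observing not-E drives the posterior for H to zero, recovering falsification as a limiting case, while observing E merely raises the posterior, giving a graded treatment of confirmation.

```python
# Toy conditional-probability treatment. H entails E here: P(E|H) = 1.
# All numbers are invented for illustration.

p_h = 0.2            # prior P(H)
p_e_given_h = 1.0    # H entails E
p_e_given_not_h = 0.5

# Total probability of the evidence.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # 0.6

# Observing E: confirmation -- the posterior rises, but falls short of 1.
p_h_given_e = p_e_given_h * p_h / p_e

# Observing not-E: falsification -- the posterior collapses to zero.
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

print(p_h_given_e, p_h_given_not_e)
```

So the asymmetry the thread keeps circling survives in the Bayesian picture, but as a matter of degree: refutation is terminal, confirmation is incremental.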
*
But, hey, you do you, @Pfhorrest. It looks more and more like you're not getting the kind of feedback you wanted, and endlessly defending your fundamental principles has become tiresome for you, which is certainly understandable. (Some people are here precisely in order to have the same argument over and over again and prefer saying exactly the same thing over and over again, so we had no way of knowing you weren't one of those.) Is there an earlier post that makes it clearer what sort of feedback would be more useful to you?
Quoting Srap Tasmaner
Not empirically falsifiable of course because these are not empirical issues, but open to logical falsification, like reductio ad absurdum or something, sure. I am not clinging to these principles against arguments to the contrary, because nobody has actually said anything to the contrary of my principles. They’ve only been attacking strawmen and saying things I already agree with as though that refutes the things I think, or conflating multiple things together so as to sneak in something unsupported along with something I already agree with.
Quoting Srap Tasmaner
I said exactly that in the OP, and have referred back to or requoted it many times since.
Quoting Srap Tasmaner
No, I’m not looking for anything in particular. I just wish that clarifying that I am not actually against the things you think you’re refuting me with would settle these nominal disagreements. The kinds of responses were fine at first (besides Isaac objecting to having the discussion at all), I just wish it wouldn’t go around in circles so much.
You really didn't, and if you had said it you'd be wrong. Conditional probability is a whole different animal from material implication, and no adding of "probably" changes that, as David Lewis showed, like, forty years ago.
That's why I keep saying you have to choose between the logico-deductive model and the Bayesian model. You don't think you have to choose, but you're wrong.
If I said the thing that you just said, I'd be wrong?
What I said was:
Quoting Pfhorrest
And you said:
Quoting Srap Tasmaner
Sounds like the same thing to me.
Quoting Srap Tasmaner
Can you please elaborate on this? My adding of "probably" to the conditionals under discussion was not meant to be a formal thing at all, but a loose way of phrasing the idea that, as I said in the OP:
Quoting Pfhorrest
Which as I understand it is what Bayes is all about.
Yes, we're not a million miles apart. Subtract your tentative holding true of untested beliefs, which the above does not require, and we're more or less on the same page.
I also don't require it, I only permit it, so it sounds like we agree.
If your use of "probable" isn't formal, you're not going to be "calculating" anything.
Bayes' rule allows for your confidence, or your subjective degree of belief, to increase given new evidence. You allow change only in the sense that some alternatives are no longer viable, but the surviving theory still has exactly the same status it had before. If that's a Bayesian view of evidence, it's one I'm not familiar with.
Anyway, look into or don't. It's your project, not mine.
Let x be a real number between 0 and 1
(1) P( x is rational ) = 0
(2) x is not rational.
Does (1) materially imply (2)? It does not: x = 0.5 is a countermodel. It can be true that x is rational even when P( x is rational ) = 0.
(1) clearly does support (2) in some way. But it's not a material conditional (1) -> (2) as there's a counter model. If (1)->(2) is false, then the material contrapositive not(2)->not(1) is false too as they're logically equivalent. Clearly observing not(2) is amazing evidence that not(1) ("They said it could never happen but it did!"), but it's not a raw modus tollens refutation - it's some different form of inference.
To put a super fine point on it: from not(2) it should be inferred somehow that not(1), but not(2) does not materially imply not(1).
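For what it's worth, the measure-theoretic version of the point, for x drawn uniformly from [0,1] (so that probability is Lebesgue measure λ):

```latex
P\bigl(x \in \mathbb{Q} \cap [0,1]\bigr)
  = \lambda\bigl(\mathbb{Q} \cap [0,1]\bigr)
  = \sum_{q \,\in\, \mathbb{Q} \cap [0,1]} \lambda(\{q\})
  = \sum 0
  = 0
```

since ℚ ∩ [0,1] is countable and each singleton has measure zero. Yet every rational in [0,1] remains a possible value of x, which is exactly why (1) cannot materially imply (2).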
Quoting Pfhorrest
No, this is precisely the point your position relies on, and it is incorrect. "If P then Q, Q, therefore P" is simply an invalid deduction. Confirmationism is an inductive, not an invalid deductive, thought process. As I have pointed out already many times, but which you, due to your inability to countenance anything counter to what you have stipulated, continue to ignore: the correct formulation for confirmationism is "if Q then P", where Q is believed to be, not a logically necessary sign of P, but a very strong inductive support for it.
So, in the example of the "Face", tools marks are thought to be a sign strongly suggestive of artificial structures, and that conjecture is confirmed, although not logically proven, by countless examples drawn from experience. If tool marks are observed then the proposition that the structure is artificial would be confirmed, which simply means that we have good reason, according to our experience, to think that the structure is artificial.
This has nothing to do with falsification other than that, if tool marks were not observed, then the proposition that the structure is artificial would be falsified, not proven false, mind, which means that we would not have good reason to believe it is artificial. Verifying and falsifying are thus two sides of the one coin; and you have provided no arguments, but instead merely the same stipulations over and over, to support your mere assertion that they are not.
It would make it seem much more like you are arguing in good faith, rather than doubling down on your position, if you actually responded to what I am arguing here (and what others have argued), rather than playing a 'tit for tat' game of accusing me of poor reading comprehension simply because I won't acquiesce to your stipulations.
I accuse you of poor reading comprehension because I have been responding to your arguments, and you never seem to understand the responses. You are mostly putting forth things that I don't disagree with, as though they disprove the things that I am saying. So I'm not going to put forth counter-arguments to show that the things you're putting forth are wrong, because they're mostly not. They're just beside the point of anything that I was saying in this thread, not against it, and I'm trying to show you why that is.
For example:
Quoting Janus
This is what I mean about you changing the focus of the discussion. First we were talking about how we could test whether or not the Face was artificial. We could test that using the implication of tool marks to artificiality, and we were discussing the right way to use that implication to test for artificiality. But now you're talking about how we could know whether tool marks imply artificiality. That's a different thing. In a complete investigation that is a further question that we could step back and ask too, but it's not the same question we were asking about at the start.
Anyway, on to the meat of things.
Quoting Janus
I have repeated over and over again that I'm not simply talking about deductive vs inductive implications. I'm talking about the direction of implication. Inferring from "if P then probably Q", and "probably Q", to "probably P", all merely probabilistically, is still not a good inference.
(I'm saying "good" instead of "valid" here so you don't think I mean non-probabilistic deduction; I've done that before as well. Also, probabilistic inference is not the same thing as induction, but let's leave that aside for now).
Meanwhile, inferring from "if Q then probably P", and "probably Q", to "probably P", even if it's all merely probabilistic, is a good inference.
And that is the whole point of falsificationism, because "if Q then P" is logically equivalent to "if not P then not Q", as you already know.
(This is another place where you've been just talking past me. You speak as though you pointed out the equivalence of those to me, and that that defeated my point, when I already knew they were equivalent, and I in turn pointed out that switching from "if P then Q" to "if Q then P" makes all the difference; I never objected to "if Q then P", only "if P then Q", because the former is equivalent to falsification, and the latter is the only kind of confirmationism I'm against here).
So if you're affirming that the good direction of inference is from "if Q then (probably) P", and "(probably) Q", to "(probably) P", and you're not saying that the inference from "if P then (probably) Q", and "(probably) Q", to "(probably) P", is a good inference, then you're completely agreeing with me, even if you think you're not.
Quoting Janus
I just went to look up a quote about confirmationism in a reliable source, the Stanford Encyclopedia of Philosophy, to settle this merely nominal dispute once and for all, and I found something interesting: there are two mutually contradictory things both called "confirmationism".
I learned confirmationism in school as synonymous with the hypothetico-deductive method, which to quote SEP means:
Quoting SEP
Where e is some evidence, h is some hypothesis, k is some set of background assumptions, and ⊨ is the symbol for entailment, i.e. necessary implication.
So on that hypothetico-deductivist confirmationist account, if the hypothesis (and background assumptions) entail the evidence (but the background assumptions alone don't), then seeing that evidence confirms the hypothesis. That is exactly what I have been saying confirmation claims, except using "P" for the hypothesis+assumptions together, and "Q" for the evidence: that if P implies Q, and Q is the case, then P is the case.
But, it seems, Hempel's model, which I learned in school as something against confirmationism and more a step toward falsificationism -- because it says exactly the opposite of that hypothetico-deductivism above, just like falsification does -- is apparently also called "confirmationism", and I'm guessing that that's where you're coming from, Janus. Hempel's account says:
Quoting SEP
Which is pretty much the same thing falsification says, except it doesn't call that "confirming" h, because falsificationism uses the term "confirmation" to refer to hypothetico-deductive confirmation. It just says that that's falsifying ¬h, which of course entails that h as well.
I get the feeling that maybe your point is that zero probability is not quite identical to impossibility, just very close. But even given that that's the case, it's not counter to my main thrust here, since as explained to Janus above, I'm not depending on these implications I'm loosely using here being completely strict absolute deductions; the same points I'm making apply even if they're taken only to be probabilistic relationships, as I'll explain more to Srap below right now.
Bayes theorem is:
P(H | E) = P(E | H) * P(H) / P(E)
Some simple algebra can rearrange that to:
P(H | E) * P(E) / P(E | H) = P(H)
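That rearrangement is easy to sanity-check numerically. A minimal sketch, using a made-up joint distribution over H and E (all four numbers are invented purely for illustration):

```python
# A made-up joint distribution over a hypothesis H and evidence E; the four
# joint probabilities are arbitrary illustrative numbers summing to 1.
p_h_and_e = 0.20         # P(H and E)
p_h_and_not_e = 0.10     # P(H and not-E)
p_not_h_and_e = 0.30     # P(not-H and E)
p_not_h_and_not_e = 0.40 # P(not-H and not-E)

p_h = p_h_and_e + p_h_and_not_e       # P(H)   = 0.3
p_e = p_h_and_e + p_not_h_and_e       # P(E)   = 0.5
p_h_given_e = p_h_and_e / p_e         # P(H|E) = 0.4
p_e_given_h = p_h_and_e / p_h         # P(E|H) ≈ 0.667

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
assert abs(p_h_given_e - p_e_given_h * p_h / p_e) < 1e-12

# The rearranged form above: P(H|E) * P(E) / P(E|H) = P(H)
assert abs(p_h_given_e * p_e / p_e_given_h - p_h) < 1e-12
```

Any joint distribution with non-zero marginals passes the same checks, since the rearrangement is just algebra on the definition of conditional probability.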
Things like “P(X | Y)” are often phrased as “the probability of X given Y”, but that means the same thing as “the probability that X is true if Y is true” or “the probability that if Y is true then X is true” or “the probability that Y implies X”. “X given Y” = “X if Y” = “if Y then X” = “Y implies X”. It’s all the same thing. Just wrap a “the probability that” around any of those and you have what “P(X | Y)” means.
So we can rephrase the above formula as:
P(E implies H) * P(E) / P(H implies E) = P(H)
Meanwhile, the standard form of a falsificationist inference is:
~H implies ~E, and E, therefore H.
or equivalently:
E implies H, and E, therefore H.
If we want to make that probabilistic instead, using formal notation instead of just sticking “probably”s in there as I’ve been doing because I assumed we were all smart people who can read between the lines, we’d then have:
P(E implies H) * P(E) = P(H)
or if you like:
P(H | E) * P(E) = P(H)
The only difference between that and Bayes formula is that Bayes also has P(E | H) in the denominator on the left (of the rearranged presentation), i.e. the probability of the hypothesis is also inversely proportional to the probability that the hypothesis implies the evidence, i.e. the probability of the evidence given the hypothesis.
In other words, not only does Bayes theorem say that the more probable it is that the evidence implies the hypothesis, the more probable it is that the hypothesis is true — which is exactly what falsification says, since “E implies H” is equivalent to “~H implies ~E” — it additionally says that the more probable it is that the hypothesis predicts the evidence, the less probable it is that the hypothesis is true — the exact opposite of what confirmationism would have you think.
That is to say, the standard form of a confirmationist inference:
H implies E, and E, therefore (probably) H.
rendered into proper probabilistic notation would be:
P(H implies E) * P(E) = P(H)
or if you like:
P(E | H) * P(E) = P(H)
where the first term there is exactly what Bayes theorem has the inverse of, in addition to also having the converse (“P(E implies H)” or "P(H | E)").
Let me repeat all that. Falsification rendered probabilistically:
P(H | E) * P(E) = P(H)
Bayes theorem, algebraically rearranged:
P(H | E) * P(E) / P(E | H) = P(H)
Versus confirmationism rendered probabilistically:
P(E | H) * P(E) = P(H)
So if anything, Bayes theorem is doubling down on falsificationism vs confirmationism, besides just rendering the whole thing into a probabilistic form.
Quoting David Lewis
Quoting Richard Jeffrey
Maybe you're right, @Pfhorrest; but maybe Lewis and Jeffrey are right. Don't care.
Which seems to be the same thing as I'm saying. I don't yet know how he reconciles that with the bit you've quoted. I suspect that perhaps this will turn out to be a roundabout way of saying that natural language conditionals don't mean exactly what material implications mean, i.e. a natural language assertion of "if P then Q" doesn't just mean "not(P and not Q)", but instead it means something more like the "(Q|P)" in probabilistic notation. If so then that's fine with me, as I'm using conditionals in a natural language way here, not especially tied to them meaning precisely what material implication means.
ETA: Yep that looks like it, as just below your quote from Jeffrey he writes of Lewis' work:
Which equivalence hinges on the conditionals being taken as material implications.
If you're willing to "pass up" the conceptual hierarchy to more everyday language use of the concepts, I think that's fine. So long as you're aware you're talking about different (but related in some way) concepts than material implication and probability.
I was busy for a few days and this thread seems to be going nowhere, so I'm sorry if this reply is no longer relevant, but I didn't want to just leave it hanging.
Quoting Coben
I'm not a purist about language except within context. 'Beliefs' can obviously mean a range of things depending on how we're using the word, so I want to be clear that when I say beliefs are 'tendencies to act as if...' I mean that in the context of psychology. As such, intelligent animals can have beliefs, but machines can't, simply by definition (beliefs are something which minds have). In a functional sense, I'm quite happy to see a belief as no different to the tendency of a thermostat to turn the radiator down when the room is too hot, but that wouldn't be a belief - despite its functional similarity - because it's not a state of a mind.
Quoting Coben
No, because there's no obligation on our senses to deliver us a coherent set of data. We can observe the sun rise and set, we can observe pictures of the earth from space, we can listen to scientists whom we trust talk about orbits and construct a mental image of such - all these may well range from slightly inconsistent to completely incompatible. We can believe one at some time and another at another, we can believe one in one context and another in another. Nothing enforces coherence.
Quoting Coben
The actual normative claim seems to be that we should reject a system of beliefs which, in its entirety, is inconsistent, in favour of one which is consistent, but that we should not prefer one consistent set of beliefs over another. So long as they are consistent, they're OK.
Since I doubt anyone would agree that we should maintain an inconsistent belief system, or that we can dismiss the beliefs of others on grounds other than consistency, I don't see how there's any proper target here.
The problem is, @Pfhorrest only resorts to this wider sense of the claim when pushed; as soon as that pressure is released we go back to the much more narrow sense - that some people hold beliefs which (my personal take on) empirical evidence shows to be wrong, and they shouldn't.
Like, I'm going to go out on a limb and say that with the vast majority of these sorts of philosophies, they're looking for a stick with which to beat their moral or ideological opponents, and no stick is bigger or heavier than "the world says you're wrong", or "logic says you're wrong". Unfortunately the world turns out to be fiendishly complicated, and if it says anything at all it's in virtually indecipherable code, so these projects always fail.
OK, so you agree that confirmationism and falsificationism are, as I have been arguing, two sides of the same coin, with both being in play in science and everyday empirical matters? I have to say it hasn't seemed like you have agreed to that. Also, I never accepted that the invalid syllogism "If P, then Q; Q; therefore P" represents verificationist logic. I doubt the members of the Vienna School would have, either. And yet despite my objection to that you have kept trotting it out to use it as a strawman to support your contention that it's all about falsification. And you accuse me of poor reading simply because I don't agree with you.
Then when I told you the correct confirmationist formulation is "If Q, then P", you tried to claim that this is essentially falsificationist, which it isn't at all. Its equivalence with "If P then Q; not-Q; therefore not-P" shows that both confirmation and disconfirmation are in play in empirical investigations, and yet for some reason you don't want to admit that. You seem like a dog with a bone that it won't let go of. If you don't agree, then tell me what exactly it is that you've been claiming that you think I have been disagreeing with all along.
I'll just repost the most important bit that I think cleared everything up.
--------
Quoting Janus
I just went to look up a quote about confirmationism in a reliable source, the Stanford Encyclopedia of Philosophy, to settle this merely nominal dispute once and for all, and I found something interesting: there are two mutually contradictory things both called "confirmationism".
I learned confirmationism in school as synonymous with the hypothetico-deductive method, which to quote SEP means:
Quoting SEP
Where e is some evidence, h is some hypothesis, k is some set of background assumptions, and ⊨ is a symbol for entailment, i.e. necessary implication.
So on that hypothetico-deductivist confirmationist account, if the hypothesis (and background assumptions) entail the evidence (but the background assumptions alone don't), then seeing that evidence confirms the hypothesis. That is exactly what I have been saying confirmation claims, except using "P" for the hypothesis+assumptions together, and "Q" for the evidence: that if P implies Q, and Q is the case, then P is the case.
But, it seems, Hempel's model, which I learned in school as something against confirmationism and more a step toward falsificationism -- because it says exactly the opposite of that hypothetico-deductivism above, just like falsification does -- is apparently also called "confirmationism", and I'm guessing that that's where you're coming from, Janus. Hempel's account says:
Quoting SEP
Which is pretty much the same thing falsification says, except it doesn't call that "confirming" h, because falsificationism uses the term "confirmation" to refer to hypothetico-deductive confirmation. It just says that that's falsifying ¬h, which of course entails that h as well.
Because it's not incorrect, it's just a different sense of the word "confirmationism" than you're using. Hypothetico-deductivist confirmationism is just like I've been saying confirmationism is, because that's the thing I've been arguing against since the very beginning.
Which, as I've said many times before, is not me strawmanning your position: you're identifying yourself with the position I'm arguing against, and then saying that that position isn't actually like that but is instead like something I never disagreed with.
Apparently, since there's a whole name for that methodology and it was taught as one of the several views discussed in my university philosophy of science class.
I already clarified that verificationism and confirmationism don’t mean the same thing here.
Etc.
[i]Although Karl Popper's falsificationism has been widely criticized by philosophers,[19] Popper has been the only philosopher of science often praised by many scientists.[12] Verificationists, in contrast, have been likened to economists of the 19th century who took circuitous, protracted measures to refuse refutation of their preconceived principles.[20] Still, logical positivists practiced Popper's principles—conjecturing and refuting—until they ran their course, catapulting Popper, initially a contentious misfit, to carry the richest philosophy out of interwar Vienna.[11] And his falsificationism, as did verificationism, poses a criterion, falsifiability, to ensure that empiricism anchors scientific theory.[2]
In a 1979 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s, was asked what he saw as its main defects, and answered that "nearly all of it was false".[18] However, he soon admitted to still holding "the same general approach".[18] The "general approach" of empiricism and reductionism—whereby mental phenomena resolve to the material or physical, and philosophical questions largely resolve to ones of language and meaning—has run through Western philosophy since the 17th century and lived beyond logical positivism's fall.[18]
In 1977, Ayer had noted, "The verification principle is seldom mentioned and when it is mentioned it is usually scorned; it continues, however, to be put to work. The attitude of many philosophers reminds me of the relationship between Pip and Magwitch in Dickens's Great Expectations. They have lived on the money, but are ashamed to acknowledge its source".[2] In the late 20th and early 21st centuries, the general concept of verification criteria—in forms that differed from those of the logical positivists—was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.[2][/i]
It seems that the nub of the problem is that universal statements cannot be definitively verified for obvious reasons; we can never observe every case, or even if we have observed every case, know that we have. So universal statements can only be falsified. Apparently some of the logical positivists were not happy with this because according to their own criterion universal statements would have to be thought to be meaningless. It's surprising that something so obviously wrong would be clung to by highly intelligent thinkers.
Interestingly, the situation is reversed when it comes to statements that are not in the universal form; they cannot be falsified but only verified. So, to return to a previous example "All swans are white" is a universal statement that can never be definitively verified, but can be definitively falsified. Conversely "Some swans are purple" can never be definitively falsified, but could be definitively verified.
Quoting Janus
Only given certain background assumptions which render that observation theory-laden, which assumptions may themselves be false (and so that conclusion as well). One can be certain of having had some experience that seems to them to have been of a purple swan, but one can never be definitively certain that “there exists at least one purple swan” is in fact the correct interpretation of that experience; e.g. maybe you have been somehow deceived and in fact there are no purple swans despite this convincing appearance of one.
Of course that also applies to falsifying particular universal hypotheses — maybe your falsifying observation isn’t genuine somehow — but in that case you’ve still falsified the conjunction of that particular hypothesis and the rest of the background theory in light of which the hypothesis seems falsified.
The problem is only with the definition of the set. If you say all Xs just happen to have property Y, you're expressing a probability function between two independent variables, this can be dealt with using Bayes. If, on the other hand, you're proposing some relation between X and Y, then the strength of your claim depends on the mechanism describing the function of Y given X.
In neither case do we need to observe all Xs to verify the universal statement. The only situation in which we would need to is a frequentist probability function of independent variables, but there's no reason why we'd need such a thing.
So "all swans are white" does not need observation of all swans. If 'swanness' and 'whiteness' are independent, then verification is just P(white|swan) approaching 1, but if we know 'swanness' and 'whiteness' are independent we must already have prior beliefs about both. In the more normal case 'swanness' and 'whiteness' are not independent, which means that "all swans are white" can be verified by the definition of 'swanness' - say, for example, some gene which codes both for 'whiteness' and some other defining characteristics, such that nothing not white could possibly be a swan by definition.
Also, if we have reason to believe that P(A|G)=1, where G is some set of variables (x,y,z...), then in the case that P(y), P(z), etc. are non-zero (i.e., not yet falsified), an increase in P(x) does indeed lead to an increase in P(A). So where z implies A, z does increase the likelihood of A given a set of prior beliefs about the set of variables conditional for A, of which z is part.
In short, your instinct was right, they weren't that stupid. The confusion comes from a naïve treatment of our beliefs as if they were a half-dozen independent logical correlations rather than a complex network of hundreds of thousands of interconnected implications.
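The narrower point above — that where z implies A, observing z increases the likelihood of A — can be illustrated with a toy sample space of equally likely worlds (the worlds and event labels are invented for illustration):

```python
from fractions import Fraction

# A toy sample space of equally likely "worlds"; each world is labelled with
# the events that hold in it. The worlds are invented, with the one structural
# constraint that z implies A (every z-world is also an A-world).
worlds = [
    {"z", "A"},
    {"z", "A"},
    {"A"},      # A can hold without z
    set(),
    set(),
]

def p(event, given=None):
    """Probability that `event` holds, optionally conditioned on `given`."""
    pool = worlds if given is None else [w for w in worlds if given in w]
    return Fraction(sum(1 for w in pool if event in w), len(pool))

assert p("A") == Fraction(3, 5)    # prior probability of A
assert p("A", given="z") == 1      # z entails A, so conditioning on z gives certainty
assert p("A", given="z") > p("A")  # observing z increases the probability of A
```

The same inequality holds for any sample space in which z entails A and A is not already certain, which is the structural point at issue.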
I'm not familiar, other than by hearsay, with Bayesian thought, but what you are saying sounds like what I have been groping towards. My main point has been that in order for one thing to be falsified, a whole interconnected range of other things must be counted as being confirmed. The example I gave of the proposition "some swans are purple" would not be taken seriously unless there were some good reasons, within the coherent network of our beliefs, to think that a swan could be naturally purple. So, I don't see how inductive and confirmationist thought can be dispensed with, or that falsification, to quote Pfhorrest, "does all the heavy lifting".
Where falsification does all the heavy lifting is in deciding between competing beliefs that both fit some pattern that induces us to believe them. In that case, observing something that is predicted by one of them doesn't help at all (contra what hypothetico-deductive confirmationism would have us think), unless it's also against the predictions of the other (i.e. falsifying it).
The point about the interconnected beliefs was discussed at length earlier with regard to confirmation holism and theory-laden observation etc. If you observe something that agrees with all of your beliefs, you've learned nothing, as described above. If you observe something that's contrary to any of your beliefs, then you've learned that that combination of beliefs is not possible. It may not be that you have to reject the one belief you thought you were testing -- you could reject some other beliefs instead -- but still you've learned that you have to change something about your beliefs.
This was what I was bringing up with the purple swan earlier. You can't be sure that you've observed a purple swan, because the observation of what seems like a purple swan could always be interpreted as not really a purple swan if you change some of your background beliefs instead. So in the strictest sense, you haven't confirmed that there exists at least that one purple swan, even though in a colloquial sense we can often be sure enough for practical purposes.
That likewise means that you can't be sure you've falsified that all swans are white. But you can be sure that you've falsified something, because you can't both keep all of your background beliefs that lead you to interpret that observation as a purple swan, and also keep your belief that all swans are white. If you see something that seems to be a purple swan according to your background beliefs, you've got to reject either some of those background beliefs, or the belief that all swans are white. In either case, the full set of beliefs you had before is now known for sure to be false, even if you don't yet know exactly what change to your beliefs is the best one to make.
This still accords with Bayesian reasoning, because you could reason along the same lines, but probabilistically instead of in those absolute statements. If the thing you're observing is very likely to be a real purple swan given your background beliefs, and yet it's very likely that all swans are white given what you believe about swans, then what you're observing must be very improbable. Contrapositively, some or other of the beliefs that lead you to believe you're observing that must be very improbable: either your background beliefs, or your beliefs about swans. But it's very unlikely that you're both probably right about all swans being white, and probably really seeing a purple swan, so you're probably wrong about at least one of those things.
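That probabilistic line of reasoning can be sketched as a toy Bayesian update. All the numbers, and the simplification down to just two beliefs W ("all swans are white") and R ("my perception here is reliable"), are invented for illustration:

```python
from itertools import product

# Two beliefs, each initially held with high probability (invented numbers):
p_w = 0.95   # W: "all swans are white"
p_r = 0.90   # R: "my perception here is reliable"

# Likelihood of the observation O = "I seem to see a purple swan" under each
# combination of W and R. The key structural assumption is that O is
# impossible when both W and R hold:
likelihood = {
    (True, True): 0.0,    # only white swans + reliable perception: no purple appearance
    (True, False): 0.05,  # unreliable perception can fake the appearance
    (False, True): 0.02,  # a real purple swan can produce the appearance
    (False, False): 0.05,
}

# Prior joint over (W, R), assuming independence for simplicity:
prior = {(w, r): (p_w if w else 1 - p_w) * (p_r if r else 1 - p_r)
         for w, r in product([True, False], repeat=2)}

# A priori, the observation is very improbable:
p_o = sum(prior[s] * likelihood[s] for s in prior)

# Posterior over (W, R) after actually observing O:
posterior = {s: prior[s] * likelihood[s] / p_o for s in prior}

# The conjunction of W and R is now ruled out entirely, even though the
# update doesn't say which conjunct failed:
assert posterior[(True, True)] == 0.0
assert abs(sum(posterior.values()) - 1.0) < 1e-9
```

The structural point survives any choice of numbers: once the observation occurs, whatever prior mass sat on the conjunction of beliefs that jointly forbade it goes to zero, without the update singling out which belief to give up.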
I had thought that you said early on in this thread and repeatedly thereafter that it doesn't matter what we believe (that is falsifiable) as long as it hasn't been falsified.
Quoting Pfhorrest
Yes, but that "strictest sense", as I already said, applies to everything; it is merely the fact that there is no absolute certainty, no deductive-strength proof of anything, when it comes to empirical matters. But that is really a useless wasteland of weeds we don't need to get into.
You could say the same about black swans; that it could never be absolutely proven that they are in fact swans.
Quoting Pfhorrest
Yes, inconsistencies do spell trouble for belief systems to be sure.
Yes, which means that induction is a perfectly fine way of coming up with beliefs. There's nothing wrong with using induction to get to something or another to believe. It just can't tell you that your beliefs are more right than some other beliefs.
But it's also possible that different processes for coming up with beliefs will be more or less productive in coming up with beliefs that are unlikely to be falsified. Believing that patterns you've observed are likely to continue (i.e. induction) could very well be one of those safer methods (and I'm intuitively inclined to say it probably is, but I don't have any arguments to that effect).
Quoting Janus
Yes, that's correct. But it is that "strictest sense" that we're talking about here. It's in that sense that although we can never be sure any particular set of beliefs is the correct one, we can be sure that some particular set of beliefs is an incorrect one, if e.g. that particular set of beliefs says both that all swans are white and that the thing we're seeing right now is a purple swan. We can't be sure which of those (or which part of which of them) is incorrect, but we can be sure that the world definitely isn't exactly like that; it's different than we thought in some way or another.
So what belief has been falsified here? Not the belief that there are only white swans. Not the belief that I haven't seen any purple swans (I'm presuming I was deceived). Not the belief that my observations are always accurate and unambiguous (I never believed that). Not the belief that I haven't ever seen anything which even looks like a purple swan (That belief was true at the time - all time-dependent beliefs change as time changes - I've never seen 7:45 on the 20th November 2020...until now).
So exactly what belief has now been falsified which anyone ever actually had prior to this observation?
Other way around: you change the beliefs which initially led you to construe your experience as genuinely seeing a real purple swan, if you instead conclude that you must have been deceived.
If you already believed that you were being deceived into seeing a purple swan, then none of your beliefs would change, but then your experience wouldn't be contrary to your prior beliefs either (you would be expecting to see what appears to be a purple swan), so there would be no prompt to change any beliefs, and so no falsification.
Which beliefs? I never believed that all my observations are accurate and unambiguous. What I believed prior to seeing the purple swan was that some observations turn out to be true and others don't.
...which equates to your error above (not that this hasn't already been pointed out). P(A|B) is not the same as P(A and B). You're misunderstanding Bayesian probability - which is fine if what you're doing is inquiring, but when you're declaring some theory to be consistent with it you really ought to do more than just glance at a Wiki page on the topic.
So you were not surprised by the apparently purple swan, and it was consistent with your prior beliefs? Then you have no contradictory observations to falsify anything. You just saw something consistent with your expectations.
Quoting Isaac
I never said it was. I say that the probabilistic equivalent of a conditional statement is a conditional probability: the probabilistic equivalent of "B if A" is "P(B|A)". (I did misleadingly say that "P(B|A)" was equivalent to "P(B if A)", but that was for a natural-language reading of "B if A", and that equivalency is only problematic when "B if A" is taken as a strict material implication).
Surprise has nothing to do with it. I might be surprised by a purple swan because I wasn't expecting one. This is the part about time-dependent beliefs I mentioned earlier. I don't believe I can predict the future, yet things still surprise me about it. I have expectations about future events despite not believing that I can predict them accurately. I believe the set of observations which seem to me to be accurate up to this very moment in time, but I cannot now believe those that I will see in future.
This is where probabilistic belief becomes crucially important to our understanding of belief. I believe almost nothing 100%; I believe different things with different strengths. The strength with which I believe future events is inevitably altered as the future becomes the past; again, no-one doesn't already believe this.
Neither confirmationists, nor fideists, nor nihilists, nor any of your stated targets believe they can predict the future with 100% accuracy. So none of them are disproven by your statement that observing something surprising contradicts a belief (an expectation) that we wouldn't.
Quoting Pfhorrest
Your claim above is Quoting Pfhorrest
So let A be "the thing you're observing is very likely to be a real purple swan given your background beliefs" and B be "all swans are white given what you believe about swans". You're saying that the probability of A and B ("then what you're observing must be very improbable") is P(A and B), but it is not - under Bayes - it's P(A|B) which is a different calculation.
You’re the only one bringing up any belief about being able to predict the future perfectly. That’s not anything I’m talking about at all.
I’m saying that if you see something and think “whoa a black swan, I didn’t think those were possible...” and then either “I guess they are possible after all” or “it must be fake somehow”, you’ve revised the beliefs you initially had. (Switching to black for this example to illustrate how either is a plausible option).
If instead you see the same thing and think “oh look, somebody painted that swan black...” then you don’t have to revise any beliefs, because a fake black swan is what your background beliefs initially led you to perceive, and that doesn’t contradict any other beliefs such as that all swans are white.
Quoting Isaac
No, I’m saying that if P(A) is small and P(B) is small then P(A)*P(B) is small. “P(A)*P(B)” is the probabilistic equivalent of “A and B” in the same way that “P(A|B)” is the probabilistic equivalent of “A if B”.
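For reference on the identities in dispute here: in standard probability theory, P(A and B) = P(A|B) · P(B) holds for any two events, and this reduces to P(A) · P(B) exactly when A and B are independent. A quick numerical check on a made-up (deliberately non-independent) joint distribution:

```python
# A made-up joint distribution over two events A and B (deliberately
# NOT independent); the numbers are purely illustrative.
p_joint = {(True, True): 0.3, (True, False): 0.2,
           (False, True): 0.1, (False, False): 0.4}

p_a = p_joint[(True, True)] + p_joint[(True, False)]  # P(A)   = 0.5
p_b = p_joint[(True, True)] + p_joint[(False, True)]  # P(B)   = 0.4
p_a_given_b = p_joint[(True, True)] / p_b             # P(A|B) = 0.75

# The chain rule holds for any two events, independent or not:
assert abs(p_joint[(True, True)] - p_a_given_b * p_b) < 1e-12

# But P(A)*P(B) equals P(A and B) only under independence, which fails here:
assert abs(p_a * p_b - p_joint[(True, True)]) > 0.05  # 0.2 vs 0.3
```

So whether "P(A)*P(B)" or "P(A|B)*P(B)" is the right rendering of "A and B" turns entirely on whether the two events are assumed independent, which is the crux of the disagreement above.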
For me the fundamental presupposition of an invariant nature, without which neither confirmation nor falsification could gain any traction, or even have any meaning, is indispensable to all our investigations and conjectures. That is the basic issue I have with the idea that we can believe whatever (in principle falsifiable) things we like as long as they haven't been falsified. Quite apart from whether we ought to, I don't think we actually do believe, or even entertain, any such things in any case; usually our hypotheses are extrapolations from, and both coherent and consistent with, the vast store of what we take to be observationally and experimentally confirmed scientific knowledge.
Quoting Pfhorrest
So, here's a great example: you cite just two possibilities; the impossibility that the laws of nature could have suddenly changed, such that things might just morph at random from white to black to purple or whatever, is implicit in the way you arrive at what you consider possible. This is entirely on account of inductive expectation; it's not logically prescribed that the laws of nature cannot change.
If the laws of nature had just changed such that swans could now change color willy-nilly, or even if they had always been such that that was possible, then that would make “all swans are white” no longer (or maybe never have been) true, so changing your belief that all swans are white would cover that.
More generally though, on this topic of the laws of nature not changing: that is not something we believe because of induction, but something we must believe to do induction. If we don’t assume that that’s the case, then there is no reason to expect patterns to continue as we have seen them do thus far.
On my account, that is one of the background assumptions not about the content of reality per se but which we use to structure our experience of reality whatever its contents should be. It’s up there with assuming there are physical substances that bind together all of the attributes of things and not just improbably coincidental constant conjunctions of the same attributes moving through space in unison.
These are things like the assumption of objectivity about which we could not possibly know one way or the other whether they are true, but which we cannot help but assume one way or the other through our actions, and without the assumption of which we could not possibly hope to ever know anything, thus pragmatically requiring us to always act as though they are true or else give up all hope of knowledge.
I disagree: It seems obvious we believe in an invariant law-like nature because that is all we have ever experienced; that's induction in a nutshell. That's not even an arguable point as far as I am concerned. If there were not universal invariant patterns in nature science would be impossible; human life, any life, any stability at all, would be impossible. If you disagree with that, then I can only wonder what planet you've been living on.
(If it's not clear, I'm pointing out that that's circular reasoning, which is the root of the problem of induction, and the post you're responding to is my solution to that problem).
Meh. You should keep in mind that circular arguments are perfectly valid.
Quoting Pfhorrest
So, this is where your thinking is going astray as I see it. Of course we don't know, in the sense of being deductively certain, that the laws as we have discovered them will not change, but, based on all our experience, we have confidence that they will not change.
Denying induction is also irrational.
From the SEP article I just linked:
Quoting SEP
If you mean that there can be no reason to think one way or the other about induction, then I pretty much said exactly that at the end of the previous page:
Quoting Pfhorrest
No, I mean that induction as such works. You and I use it to make tea.
And that's the problem with this thread, and the OP. Explaining things that do not need explanation.
To people who already agree, sure. But what of those who disagree, like Hume does on induction? Just let their arguments go unaddressed, instead of refuting them?
I'd really like to take "this is obviously true" as a compliment, but people like you seem to want to make it an insult.
Well, first off, Hume uses induction extensively in his writing. Funny, that, if he thought as you do.
And then...
Every time I put the kettle on, the water boils. What more could you ask for, by way of 'addressing the argument'? What place has doubt here?
Your program wants to reduce induction to deduction. Why bother?
It does not, and I have repeatedly said as much.
It just doesn't rely on induction for the task of differentiating between competing beliefs.
Induction is a fine way of coming up with beliefs. But induction on the same set of observations may come up with different beliefs. (And non-inductive processes may result in still other beliefs too). How then to choose between them? That's the issue at hand here.
Did you see this video I linked earlier?
[video]https://www.youtube.com/watch?v=vKA4w2O61Xo[/video]
Everyone there is using induction to come up with their hypotheses. All of their hypotheses are consistent with the data. But they're all different hypotheses. And they're all wrong, until someone finally catches on to try seeing what doesn't fit the data, and figures out the correct solution.
Also, you get that I'm not arguing against induction at all? The bit of my past post I just quoted ended, in case you didn't read to the end:
Quoting Pfhorrest
Sure. There's a whole literature around rule following that derives from PI, and then again from Kripkenstein, that makes a related point.
Notice, though, that in your example there is a single correct answer - that chosen by your pundit. That's an unusual case. More often there is more than one answer that gets us to where we want to be, and the choice is rather arbitrary.
Sure looking for contradictions is a neat way to refine your beliefs.
But it ain't the whole story.
Come back to the basic point; Polly put the kettle on in order to make tea. She did not put it on in order to test an hypothesis. There was no doubt that putting the kettle on the heat would boil the water. Nor should there have been.
I addressed exactly that difference in the last paragraph of the OP:
Quoting Pfhorrest
So sure if you're just trying to make tea, not test a hypothesis, then roll with the belief that's more probably correct, so you don't mess up your tea, even though you might possibly learn something by messing up your tea.
Yeah, no, that does not address the issue. Polly's belief was not at all tentative.
So she is not at all open to the possibility that it could be wrong, if she should see something that would show it wrong? She believes that dogmatically? Because the contrary to that is precisely what "tentative" means in that context: not dogmatic.
In any case, the "tentative" there wasn't even in reference to the "safe" beliefs like Polly's, but to the "risky" beliefs of someone who was testing a hypothesis rather than making tea. Can you not read more than half a sentence at once?
Did she contemplate the kettle sitting on the heat and yet not boiling? What do you think?
Quoting Pfhorrest
Well, yes; and indeed, she was correct to do so, as evidenced by the need for Sukey to take the kettle off again.
Again, it would have been irrational, unreasonable, improper, even crazy, for her to doubt that the kettle would boil.
Yet you would make such doubt the cornerstone of epistemology.
I wish I could say it was amusing to watch someone selectively read and willfully misinterpret so as to score cheap internet points instead of having an honest conversation, but in reality it’s just petty and irritating.
For instance:
Quoting Banno
If you actually read anything in this thread at all, you would see that fully half of my point is against the same greedy skepticism you’re arguing against here.
I didn’t ask if she doubted it, and I didn’t say she should. I asked if she would have been open to revising her belief had there counterfactually turned out to be evidence against it. Because that’s all that “tentative” means, and “dogmatic” is the opposite of that.
You are conflating dogmatism with what I call “liberalism”, and also conflating “critical” skepticism with “cynical” skepticism, exactly as I said opponents of the OP would mistakenly do.
No pleasing some folk.
And that strategy of claiming that you have already addressed the issues raised might work were you a philosopher of note. But you ain't.
You put up a variation on Popper. I put up the very arguments that undermined his philosophy: Feyerabend, Wittgenstein, Strawson. You haven't addressed them.
I had thought you might be interesting. But no. OK, I'll go back to ignoring your posts.
I have no problem with actual criticism of my position, if you've actually got something new to add that I haven't already accounted for. And I have absolutely no problem with people being unpersuaded by my arguments; I generally don't expect anyone to ever be persuaded by anyone.
The only thing I’m getting tired of is people attacking positions I don’t hold as though they were disagreeing with me, often by using points I already agree with myself. And when I point out that that’s not my position they’re attacking, and that I already agree with the points they have against that strawman, I’m accused of “digging in my heels”, “reiterating the same claims with no further argument”, “not addressing counter arguments”, etc. I generally feel like I'm faced with people seemingly offended that I’m not persuaded by their arguments to abandon the position that I never held in the first place, in favor of their position that I already agreed with.
TL;DR: I'm not bothered that anyone disagrees with me. I'm bothered that people seem bothered that (they think) I disagree with them. Like the only thing that would make them happy is if I just said "ok you're right" and shut up, no responses allowed.
Except that...
Quoting Pfhorrest
is exactly that, a prediction about the future - that there would be no observations of black swans in it.
Quoting Pfhorrest
Again, you're ignoring the effect of states of uncertainty. I know you keep saying that you've included uncertainty by attaching the word 'probably' to your original theses, but tacking on the word 'probably' doesn't even begin to address the complexities of adding probability and uncertainty (or its opposite) to the understanding of beliefs. What I think just about every other person involved in this thread is trying to tell you (in one way or another) is that situations where we are in no doubt (such as @Banno is pointing out), and how we resolve uncertainty when we are in doubt (such as @Janus and @Srap Tasmaner are discussing), are key to understanding beliefs.
You treat them as if we can enumerate and resolve them one at a time (again, despite your protestations to the contrary, you keep coming back to simple examples as if they encapsulated a principle which applied more widely). In the situation regarding the observation of a black swan - if you isolate the observer from all social connections, all linguistics, all embodiment and all cognitive context - it may just be the case that he would simply choose between the two options you describe. But there are no such people, and in reality the situation is vastly more complex, to a point where this simple algorithm is next to useless.
...Not that it matters now, but
Quoting Pfhorrest
...is exactly what I said you were doing (incorrectly), so that paragraph shouldn't start with "No", it should start with "Yes" followed by an explanation of why you're doing that when A and B are not independent variables and you were supposed to be explaining how such matters were resolved using Bayes.
So you think that nobody holds any categorical beliefs like that, even defeasibly, such that they could find out that such beliefs were wrong when they observed something that was unexpected according to such a belief? Nobody can ever be wrong, because nobody ever has any expectations of how the future will turn out?
Quoting Isaac
I never said that just tacking on the word “probably” solved everything; it’s just the natural way of casually conveying in normal language the gist of the actual statistical math that needs to be done on the ground. What else besides “If A is true then B is probably true” would be a natural-language way of conveying the meaning of something like “P(B|A) is close to 1”?
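Incidentally, that distinction between P(B|A) and P(A|B) is exactly the probabilistic face of affirming the consequent. Here's a toy numerical sketch (my own illustration, with made-up numbers, not anything from the thread): even when P(B|A) is close to 1, observing B can leave A nearly as improbable as it was before.

```python
# Toy example: two binary propositions A (a hypothesis) and B (a prediction).
# "If A then probably B" holds, but B is also common when A is false.
p_A = 0.01             # prior probability of the hypothesis A
p_B_given_A = 0.99     # "if A then probably B": P(B|A) close to 1
p_B_given_notA = 0.50  # B also happens often without A

# Law of total probability: overall chance of observing B.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: probability of A after observing B.
p_A_given_B = p_B_given_A * p_A / p_B

print(f"P(B|A) = {p_B_given_A:.2f}")
print(f"P(A|B) = {p_A_given_B:.3f}")  # still small: seeing B barely supports A
```

So "if P then Q, and Q, therefore P" fails even in its softened probabilistic form: a consequent that was likely anyway confirms next to nothing.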
Quoting Isaac
You’re ignoring that one of those two options is the extremely broad “or something else that I’m assuming, which leads me to believe this is a black swan I’m seeing, is false”. There are lots of (perhaps infinitely many) possibilities embedded in there. All the complexity you say I’m ignoring is in there.
The point is simply that if all your beliefs tell you that you’re seeing something that’s impossible (or improbable), then that combination of beliefs is impossible (or improbable), so you should (probably) change those beliefs, somehow. I’m not saying here how exactly you should, just that you should.
Yes, but it does use an extensive interrelated network of inductively derived beliefs, without which it would be operating in a vacuum, and be unable to confirm or dis-confirm any hypothesis. So, given that, it doesn't make sense to minimize the role of induction, and claim that it is really just falsification doing all the work.
Quoting Pfhorrest
This response shows again that you are trying to apply the criteria for valid deduction to inductive reasoning. It puzzles me that you apparently can't see that.
Quoting Pfhorrest
This again shows that you are not acknowledging the role of induction. The general regularities of nature that we all observe, to which we never consistently observe any counterexamples, and which we are thus induced (induction) to believe in, must persist; otherwise there would be no science, no human life, no life at all, and thus certainly no falsification.
It operates ON that network of beliefs, or any other network of beliefs formed in any other way; it doesn’t at all depend on the network of beliefs being formed by induction.
I was just thinking earlier today that even as much as I dislike Feyerabend overall, my principle of “liberalism” is actually most of the way to his “epistemological anarchism”, in that it says that any method for coming up with beliefs is fine; there is no prescribed method that you have to follow in order to be initially permitted to hold a belief. But on my account there is still the requirement that you be open to revising any beliefs, however you formed them, and not hold any of them beyond all question.
I’m not super well versed on Feyerabend, so I might be wrong about this, but I suspect he would agree with that last caveat as obvious and not at all against his point, which makes me now wonder if his whole point wasn’t just basically the same as my “liberalism”, and maybe I should give him another more charitable reading.
Quoting Janus
I’m not using deduction there at all. Induction has seemed to work many times in the past, so inductively that should give us some (but not deductively certain) reason to think that induction will always work. Is that not your argument? Does not that argument rely on already accepting that inductive reasoning gives some reason to believe something, in order to show that inductive reasoning gives some reason to believe something? Is that not circular, even though no deduction is involved?
Quoting Janus
I agree that we have to assume the universe obeys regular laws in order to get anything done, but I think that that practical reason is enough to justify assuming that, even if we hadn’t noticed any particular lawlike patterns yet. That assumption is in turn necessary to do induction — which is the whole Humean problem of induction, because he shows that it can’t be inductively proven without circularity and it can’t be deductively proven. I think the solution to that problem is that it can be pragmatically “proven”. You on the other hand just run around one circle of Hume’s fork.
Without that network of beliefs there would be nothing to operate upon. Even when a creative leap of the imagination is involved in coming up with novel hypotheses, that is only possible on the background of the general inter-subjective inductively derived network of knowledge that we accept as established.
Quoting Pfhorrest
Sure, it looks circular, but it's not really, because the premises (what you invariably observe) don't assume the conclusions (what you generalize from those observations). You don't have to believe that the laws of nature will not suddenly change, but you do have good reason not to worry about such a seemingly small (given the totality of human experience) possibility. The problem with circular deduction is that it tells you nothing. Induction, however, tells you everything you know (or believe, if you prefer) about the world.
My point is that we do, and must, accept inductive reasons to believe things. You said earlier that Hume has refuted induction, but as @Banno noted he uses inductive reasoning in his writings; cf. A Treatise of Hum(e)an Nature :wink: ; just think about what that title implies.
Quoting Pfhorrest
Right it can't be deductively proven, but circularity is not a problem for inductive reasoning, because it is based on observation and experience which is not merely a matter of the premises assuming the conclusion. We accept experience and observation because we must; there simply is no other way to gain the material from which we can extrapolate our hypotheses about the way the world is and works.
It makes no sense to say that anything can be pragmatically proven, not if you are thinking of "proof" in the deductive sense at least, because none of nature's invariances are logically entailed by anything. Although having said that, and as I said earlier, without the regularities of nature there would be no network of beliefs or hypotheses, no human life, no life at all, and that we are here living, believing, hypothesizing does entail those regularities, but that entailment is material, not logical.
Sure, I never disputed that.
Quoting Janus
IF induction works, which you would have us believe on the grounds that “it always has worked”, which would only be reason to believe induction worked if you already believed induction worked.
I am not rejecting induction here, I have my own solution to the problem of induction I’ve already given, I’m just pointing out that your solution doesn’t work as an argument. If you were to argue to someone who doesn’t believe in induction that it always has worked so we should expect it to keep working, they would only be persuaded by that if they ALREADY believed in induction, which they don’t. That’s the heart of the circularity: it’s not convincing to someone who doesn’t already agree.
Quoting Janus
No I didn’t, I said that he presented the Problem of Induction, which needs to be addressed. I think it can be, and I think I know how. It seems like Hume must have figured there must be some way, because he like everyone acts like induction works, but didn’t know how exactly to give reason to think it would.
Quoting Janus
This is a pragmatic argument much like my own, not a circularly inductive one.
Quoting Janus
That’s why I put “proven” in scare quotes. Hume talks about “demonstrative” (deductive) and “probable” (inductive) arguments, and shows how neither kind can give reason to believe in induction (e.g. to convince someone who doesn’t already believe in it). I instead give, and above you seem to give, a pragmatic argument, a reason why we must act on the assumption, even though it’s not possible to prove it either way, because to do otherwise would simply be to give up.
But none of that has anything to do with anything I’m talking about in the OP.
Induction obviously does work; it gives us all our beliefs and understandings of the world. So, there is no need to provide an argument for that. Are you denying that induction has worked, or what? The problem is that it is not at all clear just what you want to argue. Every time someone critiques what they think you are claiming, you say 'no, it's not that', and yet you seem to be incapable of explaining what else it is you are trying to convey.
Quoting Pfhorrest
Yes, and the problem of induction as I understand it, according to Hume, is that it cannot deliver deductive certainties. If we don't erroneously expect it to do that, the problem dissolves. So you are falling into the trap of thinking that induction is some kind of problem, presumably because it cannot deliver deductive certainty. If you have some other reason for thinking it a problem, you have yet to present it.
"It's obvious" is not an argument, and you do need an argument if you want to convince anyone who doesn't agree with you to change their minds.
Quoting Janus
No, I've been saying all along that induction is just fine, it simply has nothing to do with the point of this thread. I was never arguing against induction as a means of coming to our beliefs, only that by itself it doesn't give us a way of choosing between competing beliefs, and that it's not necessarily the only way of coming to believe things either. (And for my purposes it doesn't matter whether it is or isn't the only way, because my methodology is okay with any ways of coming to believe things, so long as you leave them open to falsification).
Quoting Janus
I think it's because I'm really saying very little at all here, and everyone seems to think I'm saying much more than I am.
I'm saying first of all that within some very broad limits, anything is okay. Those limits are:
- don't demand absolute proof of anything before allowing yourself (or others) to believe it, go ahead and believe things for whatever reason you're inclined to (induction or whatever else);
- so long as everything you believe, you believe tentatively, fallibly, non-dogmatically, in a way such that you would discard that belief if you came across reason to do so.
(I expected that on a philosophy forum, philosophy being traditionally a reason-centric enterprise, everyone would take the latter as a given, except maybe some religious folks; and the former would be the point of contention, from self-identified "rational skeptics" who don't realize that that methodology applied consistently would leave nobody able to ever learn anything at all).
And then I'm saying as a consequence of that broad approach, hypothetico-deductive confirmationism has no legs to stand on, because:
- just seeing an expected consequence of your belief can't give you any more justification than you already had to continue believing it (because you were already perfectly well-justified in believing it, for whatever reasons you already did);
- and it doesn't necessarily rule out any of the alternatives (and in the cases where it does, that's falsification right there, so the only cases where such confirmationism works are the cases where it's indistinguishable from falsificationism).
Quoting Janus
As I understand it, the problem is not only that it can't deliver certainty, but that there's no good reason to think it would deliver any support at all, even merely probabilistic support.
We all naturally act like it does, Hume included, but Hume noted that if we really question whether we should or shouldn't act like that, there's no way of presenting an argument that we should, because that argument would either have to rely on us already thinking that we should (in relying on an inductive argument to argue for relying on inductive arguments), or else require that its negation be self-contradictory (which it's just not). That doesn't (even supposedly) prove that induction doesn't work, only (supposedly) that there can be no good explanation as to why we should expect it to work.
You, like I, have been arguing that we pragmatically have to expect it to work (or rather, in my case, why we should expect the universe to behave in a lawlike way, which is a prerequisite for induction working), or else we can't get any kind of science done. Elsewhere I've fleshed out a more rigorous version of that kind of argument, and I think that that's a fine reason to expect the universe to behave in a lawlike way. I don't know why you think a circular argument for induction that relies on induction is even necessary, given such practical lines of argument, never mind why you don't see that such a circular argument doesn't work.
But it is obvious that induction has worked; there is no more need of argument for that than there is to justify saying the sun has always risen on earthly life.
Quoting Pfhorrest
Can you give an account of any other means of arriving at confirmable/dis-confirmable beliefs than induction? You haven't so far. Can you give an example of two competing beliefs, and explain how induction would not be at all involved in deciding between them? Just one example will suffice.
Quoting Pfhorrest
Yes, of course we shouldn't demand absolute proof of anything before believing it; that is the very nature of induction, and just how it differs from deduction. There cannot be any absolute proof of any belief about empirical matters.
It is only in relation to empirically induced beliefs that any inter-subjectively corroborable reasons to discard the belief can be encountered. Other kinds of belief (aesthetic, ethical or metaphysical) are discarded, if they are, for personal reasons; there can be no inter-subjectively definitive reasons in those cases. By that I mean there can be no reasons that a suitably educated unbiased observer would be bound to accept.
Quoting Pfhorrest
That's where we definitely do disagree. I think that the constantly observed regularities of nature and the complete lack of observed counterexamples, give us every reason to think that belief in those regularities and their highly likely continuance is justified; in fact there simply couldn't ever be any better reason than that.
To those who find it is obvious, sure. To those who don't, what then?
Quoting Janus
There is if you want to change the mind of someone who doesn't already believe that.
NB that I'm not saying "you need to show me an argument if you want to believe this". That's half the point of my view in the OP: nobody needs to show an argument to anyone else just to be allowed to believe something. But if you want to tell someone else that they're wrong (e.g. if someone doesn't believe in induction, and you want to tell them they're wrong not to), then you absolutely do need an argument.
Quoting Janus
Someone tells you something, and you don't know any better to the contrary, so you believe them.
Or:
It'd be really nice if something were the case, and you're not certain that it's not, so you tell yourself that it is.
Just for two examples.
Quoting Janus
Some phenomenon has occurred in the past in this pattern: twice one day, four times the next day, eight times the next day.
One person believes on the basis of induction that that phenomenon always occurs in pairs, and always increases, though in no particular pattern.
Another person believes that it always increases exponentially, but not necessarily in pairs or even power-of-two exponents, this one just happens to be doubling.
They look to see how the pattern continues: the next day, the phenomenon occurs 16 times.
That fits both of their induced hypotheses, and so doesn't tell us which of them is more or less correct.
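That toy example can be sketched in code (my own formalization of the two hypotheses; the function names and exact predicates are assumptions made for illustration). Both induced hypotheses pass on the new observation of 16, so the confirming datum cannot discriminate between them, while a different observation could have falsified one and not the other.

```python
def pairs_and_increasing(history):
    """Hypothesis 1: the phenomenon always occurs in pairs and always increases."""
    even = all(n % 2 == 0 for n in history)
    increasing = all(a < b for a, b in zip(history, history[1:]))
    return even and increasing

def exponential_growth(history):
    """Hypothesis 2: counts grow by some constant factor greater than 1."""
    ratios = [b / a for a, b in zip(history, history[1:])]
    return all(r > 1 for r in ratios) and len(set(ratios)) == 1

observations = [2, 4, 8]  # twice one day, four times the next, eight the next

# The next day's observation, 16, is consistent with BOTH induced hypotheses,
# so this "confirming" datum cannot tell us which is more or less correct.
print(pairs_and_increasing(observations + [16]),
      exponential_growth(observations + [16]))   # True True

# A different observation, say 24, would have falsified hypothesis 2 only.
print(pairs_and_increasing(observations + [24]),
      exponential_growth(observations + [24]))   # True False
```

The point of the sketch is just that only the falsifying case does any discriminating work; the confirming case leaves both hypotheses standing.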
Quoting Janus
Yes, which is why induction is a fine reason to believe something on my account.
NB though that on my account, if someone else sees a pattern in the same data that you don't see, and so believes something that you see no reason to believe, you can't demand that they either convince you of that pattern, justifying their belief via induction, or else discard it. They don't need to show you a reason to believe it in order to be allowed to believe it. You need to show them a reason not to believe it, if you want them to change their mind.
Quoting Janus
I disagree, but that's beyond the scope of this thread.
Quoting Janus
You misunderstand me, I wasn't saying what I think there, but what my understanding of what Hume says is. As I understand Hume, he is pointing out that there can't be given any convincing reason to think (i.e. any way to change the mind of someone who doesn't already think) that induction lends any support to anything, because its negation is not a straightforward contradiction (which would be needed for an a priori deductive argument), and the only other kind of argument (so he thinks) is itself an inductive one, which of course won't convince someone who's not already convinced of induction.
We can only conclude that they are blind, ignorant or in denial.
Quoting Pfhorrest
Well, I guess what he is pointing out is open to interpretation. As I said earlier I think he is just pointing out that there is no deductive certainty, no logical necessity, that nature's observed regularities will remain in place in the future.
That means, not that there is no reason to believe that they will abide into the future, but that there is no purely logical reason to believe that they will.
Basically, he's pointing out that induction is not deduction in other words.
Quoting Pfhorrest
Induction gives us a rational reason to believe things. Those two examples are of things we have no rational reason to believe unless, in the first example, we have inductive reason to believe that the person is generally a reliable source, and in the second case there can be no rational reason to believe unless there are inductive reasons. Of course there can always be emotional reasons, but that is another story.
Quoting Pfhorrest
So, yes, but it is not merely a "fine reason"; it provides the only rational reasons to believe some claim about the way the world is.
He explicitly considers the possibility of inductive arguments in favor of that, though, and then rejects them as being circular, which suggests that he would have been satisfied that there was reason to believe it if an inductive argument to that effect could work, no deduction required -- but because an inductive argument would have to rely on exactly that thing he's looking to argue for, it can't work, because of circularity.
There is no argument for it other than constant experience of it, and total lack of counterexamples to it. You keep falling into the trap of thinking there must be a purely logical argument. We're just going in circles now; time to quit I think.
...which is only an argument at all if you think induction works already, which is why that makes the argument circular.
Quoting Janus
I think a pragmatic argument works fine, and since you also seem to give pragmatic arguments, I don't see why you think there also has to be anything more. Hume is the one who doesn't seem to consider pragmatic arguments as a possible avenue, and who thus sees a problem.
Quoting Janus
Just like Hume said you would! ;)
Quoting Janus
I've been trying to quit this thread for weeks[hide="(?)"](?) [time has no meaning anymore][/hide] now.
I just can't imagine how anyone with eyes open could fail to see that induction works, and thus fail to think that it does; and it seems to me that anyone who doesn't think that must be so stupid, close-minded or in denial for emotional reasons as to be insusceptible to counterargument in any case. Why would I worry about convincing such people?
The point of Hume's problem is not to question whether induction works, but whether we have any purely logical reason to believe it will continue to do so. We know it always has worked and that it works now, we don't know with absolute certainty that it will work in the future, but given that it always has worked and that it works now, we have no reason to think that it won't in the future and certainly far less reason to think that it won't than that it will.