Evidence and confirmation bias
I am a scientist.
Some time ago I was talking to a colleague about whether or not to accept a research paper for publication.
It had methodological weaknesses but he said he could accept these because the study findings were “right” – i.e. he agreed with the results even though the methods used were vulnerable to bias.
I said we shouldn’t lower the bar just because the results fit with our preconceived notion of what is “true”. He replied:
“If someone tells me there is a horse in the field behind their house, I won’t need any more evidence to believe them than their word… but, if they tell me there is a unicorn, I wouldn’t believe it even if they showed me photographs”.
This has troubled me ever since as it is not a question that I have been able to resolve.
Should we accept less evidence (e.g. weaker research studies) if they support what we already believe? Or is this just giving in to confirmation bias?
Can philosophy help?
Comments (8)
In science, however, the tried and tested methods are well established. If a research paper follows rigorous methods, then it is science. If its conclusions are banal (i.e. already known), it can be passed over, but if its conclusions are new or controversial, it should be published. If it doesn't follow rigorous methods, it should be consigned to the flames, irrespective of its conclusions.
Belief is not algorithmic. It doesn't follow from a set of rules.
(There's a pretty straightforward argument for this: in order to believe in the rules, one would have to apply them; but to apply them, one would have to already believe them.)
So at some stage you are just going to have to make a decision as to whether the methodological weakness was enough to invalidate the findings.
Hello.
Good question!
It looks like you've shifted from the trust put on the person claiming the existence of the 'horse in the field behind their house' to the existence of an object -- the unicorn.
Ask yourself whether your colleague trusted the methodology, the person making the claim, or the claim itself.
This is an example of "extraordinary claims require extraordinary evidence", which you may have heard. The claim that unicorns (mythological creatures) exist is extraordinary for obvious reasons. Extraordinary claims tend not to be true because they usually turn out to be mistaken ordinary claims (e.g., UFO claims). So if a claim is extraordinary, it can usually be safely dismissed unless the person making the claim provides really compelling evidence.

Sometimes, though, it's wrong to dismiss extraordinary claims. "The Earth is round" was probably an extraordinary claim for much of our history; it certainly doesn't look round. The same goes for the Earth going around the sun. Quantum mechanics makes a lot of extraordinary claims, and quantum physicists still can't agree on what's going on in experiments that have been around for decades (e.g., the Many-Worlds Interpretation vs. the Copenhagen Interpretation).
Another thing that's helpful to do is assign degrees of belief to claims. The claim "there's a unicorn in the backyard" would be assigned a very low degree of belief (percentage). The claim "the sun will rise tomorrow" would be assigned a very high percentage. Claims with a high degree of belief are "safe" claims to believe in: you probably won't be wrong. Claims with low degrees of belief require evidence to "safely" believe in.
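The degrees-of-belief idea can be made precise with Bayes' rule, and it actually vindicates the horse/unicorn asymmetry. Here is a minimal sketch in Python; all of the numbers (priors and likelihoods) are invented purely for illustration, not taken from the discussion above:

```python
# Bayes' rule: why a claim with a low prior needs much stronger evidence.
# All probabilities below are made-up illustrative values.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(claim | evidence) computed via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# "There's a horse in the field": a high prior, so bare testimony is enough.
horse = posterior(prior=0.5, p_evidence_if_true=0.9, p_evidence_if_false=0.1)

# "There's a unicorn": a tiny prior, and photographs can be faked (so the
# evidence has non-negligible probability even if the claim is false).
unicorn = posterior(prior=1e-6, p_evidence_if_true=0.9, p_evidence_if_false=0.01)

print(f"horse posterior:   {horse:.3f}")    # high: testimony settles it
print(f"unicorn posterior: {unicorn:.6f}")  # still very low, despite the photo
```

On these toy numbers, the same quality of evidence leaves the horse claim highly credible and the unicorn claim still almost certainly false, which is just the "extraordinary claims" slogan in arithmetic form.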
However, take the reverse side of this coin of rationality: intuition. Great mathematicians like Gauss have claimed that intuition, that vague, nebulous, almost magical way of gaining deep understanding of an issue, beats methodical stepwise logical thinking any day. It's possible that this intuition can translate into what at first glance appears to be simple wishful thinking but may actually indicate a realization of a fundamental aspect of an issue.
Just saying...
I had to read this a couple of times: at first I thought that your colleague was responding to a different question. I can kind of see some connection to what you were asking, but only by a loose association.
If the work was poorly done, what does it matter whether what it sought to establish was plausible or not? If it doesn't meet the standards for publication, it shouldn't be published. If you think it's a borderline case, and perhaps there were objective limitations that accounted for its methodological weakness, then look at other factors.
Ordinary claims are ordinary because they fit in with things that are already backed by strong evidence. If the study confirms something that is already well-established (as your colleague seemed to imply), then what's the point of adding a poor-quality study to the mix? (Unless perhaps it develops an original, independent line of evidence.) But if it is something rare or unexpected, meaning that there probably hasn't been any better evidence, that might actually make even a weak study more valuable, especially if the conclusion would be significant, if true.