What is "proof"?
Different categories of science have different procedures and protocols and requirements to say that something is proved to be so. Technically they have not proven that smoking causes cancer because you can't ethically take nonsmokers with no tendency towards cancer and have them start smoking. We just have too much evidence that the result would likely be cancer.
In general what is the value behind saying that something was proved?
I think it is not; that is to say, we will never be able to prove anything is 100% True whilst we are using 'relatively blunt' tools like 'eyes', 'mathematics' and 'reason'.
To me, the logical conclusion of the empirical scientific method is that if no experiment or observation can be conducted or recorded which disproves the leading theory, then the leading theory is generally considered "knowledge" or "proven" within that scientific field. This may not apply to new and emerging fields of study, where new ground is still being covered. I am referring here to the classical sciences – biology, physics, astronomy, etc.
I think of Newton's conception of physics and gravity, and how, for hundreds of years, the brightest minds and sharpest thinkers the world could produce were not able to refute or disprove his theories. They were taught in schools; they were considered knowledge. Even when the theories began to fail at accurately predicting more technologically complex experiments as time moved forward, no one could conceive a theory as complete as Newton's – until Einstein. With Einstein, we saw a "change of knowledge"; in essence, it was also a "change of something which was once considered proven".
And now, scientists seem to be demonstrably reaching the limits of Einstein's equations, seeing the breakdown of his theories at the quantum scale and in black holes. Here is a quote from Andrea Ghez, an astrophysicist at UCLA working on black hole research:
“Newton had a great time for a long time with his description [of gravity], and then at some point it was clear that that description was fraying at the edges, and then Einstein offered a more complete version. And so today, we're at that point again where we understand there has to be something that is more comprehensive that allows us to describe gravity in the context of black holes.”
The more advanced we become technologically, the finer the observations we can take, the further into the cosmos we can look, the more data we can map and model, and the smaller the objects and forces we can detect. We will always be changing our theories. We will always be changing our knowledge, and I don't believe we are equipped ever to arrive at the underlying Truth of anything, really.
However, it does favor us in many circumstances to treat these un-true theories as irrefutable facts of the universe. E.g., to get to the moon, we don't need a perfectly True theory of gravity; we just need one that is good enough to get us to the moon. So it helps those with the goal of getting to the moon to accept Einstein's theory of gravity as proven fact, and to use it to calculate their trajectories.
To answer your question: I reckon the value of scientific fields having threshold criteria for a "proven fact" is so that progress can be made. If we were never able to pin something down as a "fact" and teach it in schools because in 200 years it might be proven wrong by a smarter primate, we wouldn't have any progress. I see it as a "near enough is good enough" approach to our collective human knowledge. How near and how good is up to the scientists, I suppose.
Technically, they have, according to the very criterion of proof that you give in the first sentence - that is to say, the epistemic standards of proof have been met to the satisfaction of most practitioners in the field (of epidemiology).
In one of its forms, to prove something is to show that the conclusions follow from axioms that are already accepted by the audience, which is a fairly reliable concept.
However, if the 'practitioners in the field' hold false premises, or are working from incomplete knowledge, it is possible to prove something that is not true.
I know it is a kind of slippery slope...
One sense of the verb "to prove" is "to probe, investigate, analyze". So it doesn't necessarily imply that absolute Truth has been revealed. In Science, a "proven theory" is one that has produced useful pragmatic results, but may still have room for more "proof" (evidence). For example, Darwinian Evolution was a good theory for its time, but it has been modified as more relevant evidence has been literally dug-up. The "value" of such imperfect "proof" is practical applications, as opposed to theoretical speculations. :smile:
Prove : to subject to a test, experiment, comparison, analysis, or the like, to determine quality, amount, acceptability, characteristics, etc.
Are black holes proven? I think Stephen Hawking concluded that there is no reason why they need to exist.
But otherwise, a strong justification of something, perhaps meeting some epistemic standard.
A proof to the contrary, i.e. disproof, could be a counter-example.
So, proofs are usually supposed to give knowledge in some context.
Eh. Reliable (enough) verification typically by means of evidence. Right?
The first reply to the thread is what I'd go with. And so, the more things that are proven as an (exclusive) result of a given proof, the more likely it is to hold.
In the sciences, something is in general "proven" if you used generally accepted methodologies in your field and your results are generally accepted within that field – until someone disproves it. But I can't see any proofs as eternal, because there is always a chance of fallibility.
That it means more than saying that something was merely asserted. Evidence matters; if you don't believe that, avoid going to a doctor when you break a bone or severely wound yourself, and just let people pray for you instead. Of course, you will not do this, and neither will any religious person. Medicine has to conform to some kind of empirical standard of proof. I'm not saying it's perfect, but it sure as hell beats the witch doctor.
That said, in science, hypotheses and theories are only ever confirmed, which is a far cry from proof; but falsifying a hypothesis or theory does count as proof, because falsification is a deductive argument.
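The asymmetry being pointed at here can be made explicit in standard propositional logic (a sketch, with H for the hypothesis and O for the predicted observation):

```latex
% Falsification is modus tollens, a valid inference:
%   if H predicts O, and O fails, then H is false.
\[
\frac{H \rightarrow O \qquad \neg O}{\neg H}
\quad \text{(valid: modus tollens / falsification)}
\]

% Confirmation has the form of affirming the consequent,
% which is invalid: a true prediction does not prove H.
\[
\frac{H \rightarrow O \qquad O}{H}
\quad \text{(invalid: affirming the consequent / confirmation)}
\]
```

This is why a failed prediction can conclusively refute a hypothesis while any number of successful predictions only lend it support.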
Recommended: The Half-life of Facts, by Samuel Arbesman. Less philosophically sophisticated than I was hoping but interesting material.
Addresses exactly the issue raised, and attempts to do so scientifically!
Proof can be positive or negative. In science, as @aporiap already said above, no positive proof is possible but it can be proven that something is logically false or shown that something does not work.
In the synthetic sciences, proof is treated with more rigor, which is justified if you consider that they require greater abstraction, and that if practitioners accepted weak ideas without justification there would be debilitating consequences. Proving something in a court of law, on the other hand, will be less rigorous due to limiting circumstances, and its arguments will rely to a greater extent on formal inferences between content as a means toward its finished concepts.
That being said, proof is a valuable concept not just for its results, but for making clearer the proper way of reasoning, one that both sustains and justifies the rational and logical unity between social and cultural institutions.
Lab experiments are certainly not fit for this kind of study -- it would be unethical, if not criminal. Natural observations of organisms/animals/people are used instead: letting them live their lives and then watching what happens. If we have a problem with this kind of experiment, let's ask ourselves -- why?
What is lacking or unsatisfying about them, such that we are skeptical of their findings? Why do we need directional arrows to point us clearly toward results about which we could truly declare: this is the culprit!
Some people want directional arrows and connecting dots showing the way to results, if scientific findings are to be believed!
An anti-scientific stance often tries to find faults in the methodology, with what-if scenarios detailing a thousand ways a particular scientific finding could go wrong, then moves on to introduce an "equally legitimate" way of finding out the "facts". Note I say "legitimate" -- as it is often a plea to be allowed to live alongside scientific endeavors. Neighbors, if you must.
The same applies to scientific hypotheses. Before the discovery of seafloor spreading at the Mid-Atlantic Ridge in the 1960s, most geologists thought Wegener's hypothesis of continental drift, for which he presented overwhelming evidence in his book, was "obviously" absurd. (What force could move something as big as a continent?) Why were they not open to the evidence? Because they had been taught differently, were committed to what they had been taught, and had staked their reputations on it. So, to accept the new theory would be to admit that they had erred -- that they were not infallible and that accepted science is not always right. There are many other examples, among them Copernicus' heliocentric theory and Pasteur's crazy idea that germs cause anthrax.
Evidence is only useful if your audience has an open mind -- which means having the humility to admit that you may have been wrong.
Exactly. The hypothetico-deductive method cannot prove anything to be true, although falsification can prove a hypothesis false. What it can do is show that our hypothesis is adequate to the facts we know (confirmation) and so a rational basis for moving forward. Often, that is all we need. If you are building a bridge, then the fact that Newtonian physics is adequate to our needs suffices. If we are constructing a "theory of everything" (or at least 6% of everything), it is not. So, what we rationally accept as true depends on the issues we are dealing with.
We can prove things by abstracting from, rather than generalizing upon, experiential data. In the Hume-Mill model of induction, if all we see is black crows we are justified in saying "all crows are black." Of course this is not strictly proven. What Hume-Mill induction does is add to the data the assumption that other cases will be like those we have experienced. In the Aristotelian-Thomist model of induction, we abstract from what we know -- leaving behind notes of comprehension we are not concerned about. So, in learning arithmetic a child learns that 2 pennies plus 3 pennies is 5 pennies, and 2 oranges plus 3 oranges is 5 oranges, and then comes to see that the conclusion is implicit in counting, and so, no matter what is counted (abstracting from what is counted), 2+3=5.
So, as long as we base our conclusions on premises justified by abstraction rather than generalization, we can prove things. There are two difficulties here. First, natural science seeks conclusions which can't be justified by abstraction. Newton's great insight, that the same laws that apply on earth apply throughout the cosmos, is an essential generalization, and it can't be justified by abstraction. Second, once we start abstracting, we tend to forget that we are dealing with abstractions, not with reality in its full contextual complexity. In Science and the Modern World, Whitehead called this "the fallacy of misplaced concreteness."
We can illustrate this fallacy by considering an electron. If we consider the abstraction of an electron in isolation, there is no way we can know that it will repel other negatively charged particles. We can only discover this by observing its interactions with such particles -- by considering it, not in abstraction, but in context.
The same reasoning applies to the thesis that biology can be reduced to physics. It cannot be, for the simple reason that in doing physics we abstract away all the contextual data that biologists study. In physics, we do not care in what context an electron occurs -- e.g. whether it is in a eukaryote or a prokaryote. So, physics lacks data relevant to this essential biological distinction -- and, if it has no data on some biological fact, it surely cannot be an adequate basis for deducing that fact. So, reductionists commit the fallacy of misplaced concreteness as part of their stock-in-trade.