Precision & Science
[quote=Lord Kelvin (26 June 1824 – 17 December 1907)]There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.[/quote]
In short, precision is a very significant aspect of scientific theories. An example: Newton's theory of gravity was imprecise when it came to predicting the planet Mercury's behavior. Enter Albert Einstein's theory of relativity, and the problem was solved - Mercury's orbit could now be predicted precisely.
What is precision?
As I see it, the more decimal places there are in a measurement, the more precise it is. For instance, 2.09165 meters is more precise than 2 meters.
What I can't wrap my head around is why increasing precision (a teensy-weensy change) requires entirely new scientific hypotheses/theories (a huge change). It almost seems chaotic. The example of how Einstein's theory supplanted Newton's is a case in point.
A difference in degrees (precision) requires a difference in kind (a radically different theory/hypothesis)!
It's like saying the more precise I want to be about what good is in (say) utilitarianism, the more likely it is that I'll have to abandon utilitarianism and develop a totally novel theory that doesn't look anything like utilitarianism.
Comments (40)
There are various definitions of fascism, for instance; of democracy, of capitalism, of imperialism, of all sorts of things. What may be a good definition of fascism in 1925 (near the time the term was coined) may be less apt in 2021; same for democracy. Democracy in 1776 and 1976 may be dissimilar. Democracy in England may be quite unlike democracy in India. Greater precision doesn't seem to be the critical factor (though precision may be helpful).
If one is a molecular biologist wielding CRISPR, more precision is definitely a good thing.
Quoting Bitter Crank
I'm, as mathematicians say, merely extrapolating the results.
I don't know how familiar you are with math, but increasing precision in a physical law, e.g. Newton's F = ma, can be achieved, at least I think it can, by making more precise measurements.
Say, m = 2 kg, a = 3 [math]m/s^2[/math]
F = ma = 2 × 3 = 6 Newtons of force.
Now, if I measure the mass more precisely, e.g. m = 2.014 kg, and do the same for the acceleration, a = 3.009 [math]m/s^2[/math], what I get is
F = 2.014 × 3.009 = 6.060126 Newtons
In other words, precision is a matter of inputting finer measurements into Newton's formula.
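A quick Python sketch of the arithmetic above (my own illustration; the function and variable names are made up):

```python
# Feeding finer measurements into F = m * a. The formula itself never
# changes; only the inputs gain decimal places.
def force(m_kg, a_ms2):
    """Newton's second law, F = m * a, in newtons."""
    return m_kg * a_ms2

coarse = force(2.0, 3.0)     # rough measurements
fine = force(2.014, 3.009)   # finer measurements of the same quantities

print(coarse)  # 6.0
print(fine)    # ~6.060126
```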
However, that's not how it actually happens in science. If I'm correct (don't bet on it), we need an entirely new formula as part of a completely novel theory/hypothesis to achieve greater precision.
Note: I'm not a scientist or a mathematician, so take this cum grano salis.
Newton's gravity was even more precise than GR. It made a very precise prediction about Mercury. But Mercury did not reply precisely as predicted.
And indeed My Lord Kelvin was mistaken. Not only were there many, many new things to be discovered (radioactivity, relativity, a slew of new elements and subatomic particles, quarks and their properties, superconductivity, semiconductors, etc.), but one of the discoveries (quantum mechanics) was that nature itself is imprecise.
You have made a step into the next level of philosophical thinking. Yes, the reality of many "ideologies" is that they are imprecise but easy-to-digest ideas to be used as guidelines. They work for general use, but begin to fail when you want clarifications on specifics. Newton's theory of gravity is a fantastic example. Newton's gravity works for almost all of our daily experiences on Earth with bodies at our scale. It begins to break down when bodies become incredibly large, like solar systems, or incredibly small, like the subatomic level.
Philosophy is the same. Utilitarianism is fine as a general ideology for perhaps your day to day thinking and living. But when greater precision is needed, when the scale changes, more questions than answers begin to form.
A belief in ideologies is for beginners in philosophy. It is for the casual thinker who needs a rationale or inspiration to live or change the way they live. Just as the true physicist understands that the layman's concept of physics is not functional for in-depth discoveries, and that it is merely an attempt to explain what is not fully translatable into English, so does the true philosopher understand that ideologies are digests, and ultimately worthless labels when you are ready to dive into the deep logic underpinning their conclusions.
I dislike saying "[insert theory here] is just a theory" because that saying is often used to dismiss science. However, it seems intuitive to me that saying "gravity is only a theory" is indeed correct. Any theory of gravity is just that: a theory. It's just a framework for describing and predicting how nature behaves at a certain abstract physical level. A very successful framework, of course, yet it's still manufactured by physicists. Who knows, scientists might even come up with a new theory that rivals Einstein's. If that happens, the same things we say about Newtonian laws being inaccurate or "imprecise" will equally be said about Einstein's general theory of relativity.
No. The relevance of precision in this case is that precise measurement of Mercury's orbit showed that Newton's theory was not imprecise but wrong.
Interestingly enough, Newton wasn't wrong. It was simply not precise enough for large bodies. You can take the theory of relativity and reduce it down to Newton's equation for regular sized bodies. It is evidence that certain equations are useful for particular scales, but breakdown in others.
This is precisely wrong for reasons that I just explained. Newton's theory doesn't break down at large or small scales. Nothing special happens at those scales - it continues to give precise predictions. It becomes less accurate at high energy scales (a fact that we were only able to discover thanks to its great precision!) The theory breaks down at singularities, which it does not rule out in its minimal formulation - but that is true of Relativity as well.
The distinction between precision and accuracy is an important one, because both are important, but in a sense they are pulling in opposite directions. A theory can be made more accurate at the expense of precision, and conversely the more precise a theory is, the riskier its predictions are (to use Popper's language) in terms of accuracy. Vague astrological predictions can be quite accurate, but quite useless at the same time.
A quibble.
:lol: Good one!
Quoting unenlightened
:up: What do you mean by "nature itself is imprecise" vis-à-vis QM? There's wiggle room at the bottom but not so much up here at human scales? Do you have or is there an explanation for this seemingly odd fact?
Are you perchance referring to Heisenberg's uncertainty principle, thoroughly confirmed by experimental findings, which states that the more precise the measurement of either the position or the momentum of a particle, the more imprecise the value obtained for the other?
Interestingly, if we ignore received wisdom on the matter, which claims there really is no workaround for Heisenberg's uncertainty principle, wouldn't what I said in the OP entail that a better, brand-new theory could somehow solve the problem?
Shooting in the dark here, kindly excuse my ignorance on questions as profound as this.
Quoting SophistiCat
Either you're mistaken or Lord Kelvin is talking out of his hat.
By the way what's the difference between precision and accuracy?
I remember a darts analogy in a biochemistry book I read long ago.
Accuracy: How close your darts are to the bullseye? How close to the true value your calculations/measurements are?
Precision: How close your darts are to each other? I suppose clustering of the darts would mean high precision. Basically, your calculations spit out numbers/values that, well, huddle together, are in a bunch.
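A toy numeric version of the darts analogy (my own illustration; the lists and names are invented): accuracy is how close the average reading is to the true value, precision is how tightly the readings cluster.

```python
import statistics

true_value = 10.0

precise_but_inaccurate = [12.01, 12.02, 11.99, 12.00]  # tight cluster, far off
accurate_but_imprecise = [8.0, 12.0, 9.0, 11.0]        # scattered, centred right

def accuracy_error(readings):
    """Distance of the mean from the true value (smaller = more accurate)."""
    return abs(statistics.mean(readings) - true_value)

def spread(readings):
    """Standard deviation of the readings (smaller = more precise)."""
    return statistics.stdev(readings)

print(accuracy_error(precise_but_inaccurate))  # ~2.0: inaccurate
print(spread(precise_but_inaccurate))          # ~0.013: very precise
print(accuracy_error(accurate_but_imprecise))  # 0.0: accurate
print(spread(accurate_but_imprecise))          # ~1.8: imprecise
```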
So, your stance is that we revise theories in order to attain greater accuracy but not precision?
Put simply, Newton's classical formula for gravity is less accurate than Einstein's relativistic formula for the same. How come, though, that Newton's formula and Einstein's formula differ only in the number of decimal places (precision) at nonrelativistic speeds? They both do hit the bullseye (equally accurate).
Well said! The complexity we're faced with boggles the mind. Reminds me of Kurt Gödel's incompleteness theorems. As we make our concepts more and more exact, we need more and more sophisticated theories and, just guessing here, this process may go on ad infinitum.
:ok:
Quoting T Clark
I believe such black-and-white, binary thinking, although apt on certain occasions, more obfuscates than clarifies. I remember reading that Newton's formulas are precise enough for space exploration. That's a big nod of approval - a lot is at stake and even one tiny error could jeopardize entire missions.
In addition, one possibility that bothers me and can't be ruled out is another planet with an orbit within that of Mercury's that could explain why Newton's theory can't account for Mercury's behavior - something similar happened with Uranus and Neptune (got that from astrophysicist Neil deGrasse Tyson). I'm, of course, ignoring the other experiments that confirm Einstein's theory of relativity.
Quoting TheMadFool
I'd normally not comment on this, outside of grading homework, but since precision is what this thread is about: Your last line is slightly problematic. A better version looks like this:
F = 2.014 kg × 3.009 m/s² = 6.060 N
I re-added the units, but never mind that. The relevant point is that the output is never going to be more precise than the inputs. Here, both of the inputs are precise to 4 "sigfigs" ("significant figures", which is similar to but more inclusive than the "decimal places" you touched on in the OP), so the output will be precise to 4 sigfigs at most. The additional numerals "126" are arithmetic artifacts, and contain no physically meaningful information.
The reason including them is potentially harmful, and not merely pointless, is that a number like "6.060" contains an additional piece of information in this context. Namely, it implicitly tells you the precision of the value, by how many sigfigs it gives. An explicit equivalent for "6.06" is "6.06 +/- 0.005". For "6.060", it's "6.060 +/- 0.0005". And for "6.060126", it's "6.060126 +/- 0.0000005". And that claim clearly can't hold here.
Unsurprisingly, what I just said is itself imprecise. Properly, combining the uncertainties in the input values into an uncertainty in the output value takes statistical methodology. And when it matters, that's what the professionals do, too. And then you get results along the lines of "6.0601 (-0.0007)(+0.0008) N", where the numbers in the parentheses specify the interval within which the true value is expected to fall with a given confidence, like 50% or 90%.
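For instance, a quick Monte-Carlo sketch of that combination of uncertainties (my own illustration, not from the thread; treating the measurement errors in m and a as independent Gaussians is an assumption made for the sketch):

```python
import random
import statistics

random.seed(0)

# Assumption (mine, for illustration): errors are independent Gaussians
# with a standard deviation of half the last reported digit.
M, M_ERR = 2.014, 0.0005   # kg
A, A_ERR = 3.009, 0.0005   # m/s^2

samples = [
    random.gauss(M, M_ERR) * random.gauss(A, A_ERR)
    for _ in range(100_000)
]

f_mean = statistics.mean(samples)
f_err = statistics.stdev(samples)
print(f"F = {f_mean:.4f} +/- {f_err:.4f} N")
# The spread comes out around 0.002 N, i.e. the "126" tail really is noise.
```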
End of tedious aside. :)
Why?
Quoting onomatomanic
Give me a crash course on significant figures.
Quoting T Clark
Depends on who you ask.
In the context of modern physics, it's pretty much the heart of the matter. Newtonian mechanics isn't false, and Relativity isn't true. Both are simply models, and it's not even as simple as that Einstein's model is unequivocally better than Newton's.
Models approximate reality. Newton's model doesn't approximate it as well as Einstein's, so it's worse in that sense. But it's also considerably lower-effort, which is a point in its favour. Choosing a model to apply is like choosing a tool to use: The optimal choice depends on the job at hand.
By that standard, Ptolemaic astronomy isn't wrong, it's just less precise than Kepler. Which is ok with me. I understand what you're trying to say.
Quoting onomatomanic
The general proof again needs statistical methods, no doubt. For the specific case of a multiplication like F = ma, though, just think of the inputs as the length and width of a rectangle, and the output as its area. If the length is known perfectly, and the width has an uncertainty of 10%, say, then the area will have an uncertainty of 10% as well. Vice versa, if the length has the 10% uncertainty, and the width is known perfectly, same result. So when both the length and the width have a 10% uncertainty, it should be clear that the area now has an uncertainty of more than 10%. Is that good enough? :)
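A numeric check of the rectangle argument (my own illustration): perturb length and width by 10% each and see how far the area can stray.

```python
# Worst cases occur when both inputs err by 10% in the same direction.
length, width = 10.0, 10.0
area = length * width  # 100.0

area_high = (length * 1.10) * (width * 1.10)  # ~121: +21%
area_low = (length * 0.90) * (width * 0.90)   # ~81:  -19%

print((area_high - area) / area)  # ~0.21, i.e. more than 10%
print((area - area_low) / area)   # ~0.19, i.e. more than 10%
```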
Quoting TheMadFool
Let's write the earlier result like this, for the sake of illustration:
000 006.060 126 000 +/- 0.000 5
The leading zeros are insignificant, in that dropping them doesn't affect the value. Ditto for the trailing zeros. And the "126" portion is also insignificant, in that it's below the "certainty threshold" we're specifying. The remaining figures are the significant ones, and counting how many of them there are is a useful shorthand for the value's precision. "6.06" has 3 sigfigs, "6.060" has 4, which is why they don't mean quite the same thing (in this context, this is a convention that need not apply in others).
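The counting rule can be sketched in code (a rough helper of my own; it follows the convention above, where trailing zeros count only when a decimal point makes the precision explicit):

```python
def sigfigs(number_text):
    """Count significant figures in a number written as a string."""
    digits = number_text.replace("-", "").replace(".", "")
    stripped = digits.lstrip("0")          # leading zeros never count
    if "." not in number_text:
        stripped = stripped.rstrip("0")    # trailing zeros are ambiguous here
    return len(stripped)

print(sigfigs("6.06"))     # 3
print(sigfigs("6.060"))    # 4
print(sigfigs("0.00606"))  # 3
print(sigfigs("600"))      # 1 (trailing zeros treated as placeholders)
```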
Quite. Unfortunately, it's less precise while also being more effort. So as a model, it's objectively worse, and there is no situation in which it would be preferable to use it. But I take your point. The standard is the one that modern physics applies to itself, primarily, and applying it outside of that domain can be a bit absurd.
As I said, I understand the point you are trying to make.
Oh! I see. Is the following correct then?
For F = ma (Newton's force formula)
A) If m = 2 and a = 3, F = 2 × 3 = 6
B) If m = 2.1 and a = 3.1, F = 2.1 × 3.1 = 6.5 [ I dropped the 1 after 5]
My precision in B is greater than my precision in A.
If so, my question is: do Newton's and Einstein's theories differ in this respect? Put differently, is Newton's theory less precise than Einstein's?
I think the answer to the above question is "yes". If Newton had very precise measurements of mass and distance, he would've realized, given his genius, immediately that his formula [math]F = G\frac{m_1m_2}{r^2}[/math] was wrong, way off the mark as it were as demonstrated by Einstein. In short, Newton was working with poor quality measurements with fewer significant digits.
True value of m: 2.0165394830013 kg
Instrument X
m = 2.017 kg
Scientific theory T
Instrument Y
m = 2.01654 kg
Scientific theory U
Instrument Z
m = 2.0165395 kg
Scientific theory V
Is it that T = U = V?
OR
Is it that T [math]\neq[/math] U [math]\neq[/math] V?
Quoting TheMadFool
Yes. It gets a bit trickier when the inputs aren't of the order of magnitude of 1, which is to say, aren't between 1 and 10:
C) If m = 20.1 and a = 30.1, F = 605
3 sigfigs in the inputs, so 3 sigfigs in the output. That the figures are in different places (hundreds, tens, and ones; instead of tens, ones, and tenths) doesn't matter. This is one of the reasons why people like to use scientific notation:
C') If m = 2.01*10^1 and a = 3.01*10^1, F = 6.05*10^2
Back to not tricky at all. :)
Quoting TheMadFool
I don't quite know how to answer that - and as you've seen, others have responded in quite different ways - which shows that it's quite a good question. It seems to me that it depends more on how the theories are interpreted than on the theories themselves, ultimately.
Put simply and imprecisely: Newtonian mechanics fails for Mercury because it uses Euclidean geometry; General Relativity holds for Mercury because it uses non-Euclidean geometry, aka "the curvature of space(-time)".
The traditional interpretation of this discrepancy would be that each theory makes that assumption about the actual nature of actual space. In this interpretation, the fact that precise measurements of Mercury disagree with the Newtonian prediction tell us that its assumption was wrong, and therefore that the theory as a whole was fundamentally wrong. The imprecision is small, so the prediction is quantitatively quite good. But while convenient, that's not really the point - the way it describes the situation qualitatively is no good. So its being imprecise for once means that it was wrong all along.
On the other hand, the fact that the measurements agree with the Relativistic prediction confirm its assumption. Which does not, of course, rule out that other measurements won't say otherwise. For the present, the theory remains "unfalsified", and its assumption about the actual nature of actual space remains in the running for being actually true.
This is probably how Newton would have thought about it, and possibly how Einstein would have thought about it at least some of the time.
The modern interpretation differs, unsurprisingly. One way to put it might be to say that it treats both [s]theories[/s] models (the new label is somewhat tied to the new interpretation) as applying to distinct and equally hypothetical worlds, in which their respective assumptions hold by definition. What the measurements taken in the real world tell us is that Einstein's hypothetical world is a better approximation of ours than Newton's. Nevertheless, in the vast majority of practical situations, the disagreement between the two approximations is negligible. The fact that Newton's approximation is discovered to be non-negligibly imprecise under certain circumstances simply tells us not to rely on it in those sorts of circumstances. And the fact that Einstein's approximation holds up doesn't mean that it ceases to be an approximation, just that we've not yet achieved the precision or encountered the circumstances under which it, too, buckles. So both models are considered, a priori, to be precise within their hypothetical worlds and imprecise in the real world. Newton's model is lower-precision than Einstein's, but also lower-effort. Pick whichever fits a given situation, and don't worry about that elusive concept called "truth".
I don't think so. In calculating complicated three- or four-body problems in classical mechanics, a huge effort can be invested. GR is not even able to approach this problem. There is more precision in the Newtonian approach than in the GR approach.
Do you mean that our mathematical methods and computing resources are insufficient to apply GR to certain classes of problems, or that the model itself is less powerful than Newtonian mechanics? If what you mean is that for a given investment of effort, Newtonian methods will more often than not yield better results than Relativistic methods, then we're saying the same thing in different ways.
One more reason for failing to limit global warming (regardless of what the reps at the COP26 say) is inaccuracy and imprecision in measurement. The result is a kind of climate-fraud, where officials claim accomplishments which simply do not exist. A report in the Washington Post noted that carbon from SE Asia palm oil production is underreported, thanks to both imprecision and willful errors. In the US, the Post reported that 25% of the gas in retail cooling systems is lost every year. Is that because of neglect, indifference, imprecision, inaccuracy, or what?
We will not be able to save ourselves if we continue sloppy manufacturing and agricultural operations. Without precise data we are wandering around in the hot dark.
That's what I mean indeed. Sometimes the universe is classical absolute Newtonian, in other situations it's classical relativistic Einsteinian.
I see. So one way polluters (governments, big oil, etc.) can wiggle their way out of a tight spot is to fudge the numbers - lower the resolution of relevant figures (make them imprecise) and suddenly we lose the clarity necessary to hold entities to account. Nice!
[quote=Mark Twain/Benjamin Disraeli]Lies, damned lies, and statistics.[/quote]
[quote=Luis Alberto Urrea]Numbers never lie, after all: they simply tell different stories depending on the math of the tellers.[/quote]
1. Am I correct about what I said about Newton? Had his measurements for mass and distance been more precise (had more decimal places) than what was available to him, he would've realized that the formula [math]F = G\frac{m_1m_2}{r^2}[/math] was wrong.
2. Why can't the output of a formula be more precise than the inputs?
What is of concern to me is why an entirely new model needs to be built from scratch simply to explain a more precise measurement if that is what's actually going on? Something doesn't add up. It's like saying that measurement data gathered using a high school student's ruler/scale requires a different explanatory model than measurement data acquired with a physicist's vernier calipers. I think I'm getting mixed up between accuracy and precision here but somehow I don't think it's my fault (see Lord Kelvin's quote in the OP).
Quoting TheMadFool
Unlikely, I'd say.
What one learns in school about the Scientific Method is that when a new practical result turns out to contradict the old theoretical system, what scientists do is throw away the old system and replace it with a new one.
What happens in the real world is a lot messier, because there are always a bunch of possible reasons for such discrepancies. Maybe the result was a fluke. Maybe there was a systematic error in how it was obtained. Maybe it doesn't show us a single effect, but how various effects interact, and the old theory works fine for the primary one but doesn't apply to each of the secondary ones, or one of the theories that do apply to the secondary ones is the one that's dodgy, or some of those other theories don't even exist yet because this is the first time this effect has shown up. Or, or, or.
For an illustration, imagine aliens living on our Moon using a high-precision optical telescope to observe a cannon firing on Earth, and noticing that the cannonball's trajectory doesn't quite match Newtonian predictions. Do they need to invent Relativity? A far likelier explanation is that they've not properly accounted for atmospheric effects like drag, given that their Lunar environment doesn't have much of an atmosphere.
For an example, have a look at Pioneer anomaly @ wikipedia.
So that's one good reason not to give up on a theory at the first sign of trouble. Another one is that until there's a new theory, you use the old one, whether or not you know it to be flawed. In the traditional interpretation, in which theories can be true or false, that's a bit distasteful - but in the modern interpretation, in which models can only be better or worse approximations, there's nothing wrong with it.
With all that in mind, what would Newton have done with those high-precision measurements? It's not like he was in a position to go ahead and come up with Relativity himself: None of the theoretical groundwork that Einstein built on was in place at the time, not least because the bulk of it was ultimately built on Newtonian foundations in turn. Reasonably, it would have made little difference, other than to make him suspect that some other effect, like the atmospheric drag in my illustration or the thermal recoil in the Pioneer example, comes into play at some point.
Quoting TheMadFool
Did you not like my earlier explanation?
Quoting onomatomanic
Quoting TheMadFool
Part of the problem may be that you're thinking in terms of individual measurements. Think in terms of datasets instead:
The upper dataset is low-precision, and can be "explained" as the blue line, which is straight. The lower dataset is high-precision, and must be explained as the green line, which is curved. The old model was quite good, in the sense that it predicts parameters (offset and slope) for the straight line that put it in the right place. But straight lines is all it can do, so it's not good enough for the higher-precision data. The new model is better, in the sense that it can do what the old model can do, plus predicting curvature parameters. Still, the old model remains better in the sense that it's less cumbersome to work with, so it makes sense to keep using it whenever either the line doesn't curve or the needed precision isn't high. (Hm, that actually worked out even nicer than I anticipated!)
Thanks. Reality is hardly ever cooperative enough to fit neatly into our equations. There are always some wrinkles that we just have to ignore. Nevertheless, an approximation - something - is better than nothing.
I'd like you to go over the following:
Take the Parker Solar Probe. Its speed = 111 km/s
1. Newtonian velocity addition: u = u' + v
If two Parker Solar probes were travelling towards each other, their relative velocity, R1 = 111 + 111 = 222 km/s
2. Relativistic velocity addition: [math]u = \frac{u' + v}{1 + \frac{u'v}{c^2}}[/math]
Plugging in the numbers, their relative velocity, R2 = 221.9999696082 km/s
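The two sums can be reproduced directly (my own sketch; note that the figure 221.9999696082 quoted above follows from taking c as roughly 300,000 km/s, while the defined value of c gives approximately 221.99996957 km/s, the same story either way):

```python
C = 299_792.458  # defined speed of light, km/s

def newtonian_sum(u, v):
    return u + v

def relativistic_sum(u, v):
    return (u + v) / (1 + u * v / C**2)

r1 = newtonian_sum(111.0, 111.0)
r2 = relativistic_sum(111.0, 111.0)

print(r1)            # 222.0 km/s
print(f"{r2:.7f}")   # ~221.9999696 km/s
print(f"difference ~ {(r1 - r2) * 1e6:.0f} mm/s")  # roughly 30 mm/s
```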
Salient points
(i) The relative velocity calculated in a Newtonian way and that calculated in a relativistic way differ but we could and do say that the ever so minute difference is negligible. That's the reason why Newton is still in the game in this scientific epoch of Einsteinian relativity. I'm sure you'll agree.
(ii) If significant digits matter, as you say they do, R2 should be rounded to 222 km/s (dropping the "false" precision of 0.9999696082). If we do that, relativistic velocity addition becomes, in a certain sense, meaningless. That, to me, doesn't add up. After all, Einstein's theory completely rests on that additional precision represented by 0.9999696082.
Conclusion
Your claim that the output of a physics formula can't be more precise than the inputs doesn't seem to hold water. As seen above, the precision in the output, higher though it may be compared to the inputs, makes a huge difference, requiring an entirely different model/theory.
Okay, I think I see now what you're grappling with. The point is this one:
A) Low-precision version of the experiment
Data
Theory
The measurement tools used in this version are precise to a few m/s, which shows up as noise at the level of the 6th sigfig. Using more sigfigs in the computations would be pointless and misleading. The measured values and those derived from the old and new models are all close enough to each other to be considered identical. We've simply confirmed both models, lacking the power to discriminate between them.
B) High-precision version of the experiment
Data
Theory
Now we're using tools precise to a few mm/s, and so increase our working precision to 9 sigfigs. This extra precision is what allows us to say that there is a non-negligible difference (~30 mm/s) between the predictions made by the old and new models, and to meaningfully compare the experimental data with either one. The data disagrees with the old and agrees with the new model, which is strong confirmation of the latter.
If you're still not quite comfortable with sigfigs, remember that they're merely a shorthand for how much error there is in a value. Maybe the readout of the low-precision tool uses 9 figures, and gave us v1 as "111,109.876 m/s". There's nothing wrong with reporting that as "111.109876 km/s, with a margin of error of 3 m/s", say. It's just more verbose and "not the done thing" in this context.
Happy? :)
I wonder what Newton and Einstein have to do with happiness, my happiness to be precise. Curious but definitely worth exploring. Thanks.
Quoting TheMadFool
Agreed, but with reservations. We can "parametrise" the speed summation equation like this in general:
v = gamma * (v1+v2)
According to Newton, gamma = 1. According to Einstein, gamma = 1 / (1 + v1v2/c^2). It's instructive to consider how Einstein's expression behaves as v1 and v2 approach 0 on the one hand - approaches the Newtonian limit - and the speed of light on the other hand - approaches 1/2, which then keeps v from ever exceeding c.
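Numerically, those two limits check out (a quick sketch of my own, using the defined value of c):

```python
C = 299_792.458  # km/s

def gamma(v1, v2):
    """Einstein's correction factor in v = gamma * (v1 + v2)."""
    return 1 / (1 + (v1 / C) * (v2 / C))

# Low-speed limit: gamma ~ 1, recovering the Newtonian sum.
print(gamma(111.0, 111.0))    # ~0.99999986

# Light-speed limit: gamma -> 1/2, so v = (1/2)(c + c) = c, never more.
print(gamma(C, C))            # 0.5
print(gamma(C, C) * (C + C))  # equals C
```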
And if one thinks of the Newtonian, constant value as an approximation, either of the Relativistic expression or of reality, then this introduces an imprecision into the output of the equation that is disconnected from the imprecision of the inputs of the equation.
This, I believe, is not how physicists typically do think about it though. The reason being that plenty of physical models are explicitly constructed like that, whereas in this case it would be more of a retcon. More importantly, to be considered sound, those models must themselves supply a means of estimating the magnitude of the imprecision they contain. For Newton, you have to step outside the model to come up with such an estimate.
Quoting TheMadFool
Precisely. In F = m*a, the imprecision in F is the combined imprecision in m and a, both of which need to be measured. In v = gamma * (v1+v2), the imprecision in v is the combined imprecision from taking gamma to be a constant and from the straight summation of v1 and v2, which again need to be measured. The only way not to "miss it completely" is for the parametric contribution to be the dominant one, which in practice means either Relativistically high speeds, or high precision in measuring those speeds, or ideally both.
Re-reading the recent posts, I think any remaining confusion comes down to theory versus application, more than anything else. The concept of "precision" comes into it on both those levels, and it means fundamentally the same thing on both of them - but what it means specifically depends on the specific context.
To illustrate, let's consider everyone's favourite thought experiment, flipping a coin.
Theory: The simplest model, let's label it "Alpha", says that there are only two outcomes, heads and tails, and that they have the same probability, Ph = Pt = 50%. Well, actually, there is a third outcome, in which the coin balances on its rim. So in model "Bravo", we treat the coin as a cylinder with radius R and thickness T, and say that the probability for that third outcome depends on those new inputs, Pr = f(R, T), and that the two original outcomes remain equally likely, Ph = Pt = (100% - Pr)/2. But actually, a cylinder has at least two further equilibrium positions, in which it balances on a point along one of the lines at which the rim and the faces meet. So in model "Charlie"...
Application: Flip a coin, repeat N times, count how often each outcome occurs. The ratio Nh/N measures the probability Ph for heads, et cetera.
Now, which model is more precise, Alpha or Bravo? A case can be made either way. Alpha predicts Ph to be 50%, which is perfectly precise in the sense that no source of imprecision is included in this model. It's not 0.5 precise to 1 sigfig, or 0.500 precise to 3 sigfigs, but 1/2, the ratio of two integers.
Bravo, by contrast, expresses the probabilities in terms of physical properties that have to be measured. Those measurements are necessarily imprecise, and because imprecise inputs yield imprecise outputs, this model's numerical predictions cannot be perfectly precise. Bravo is a less precise model than Alpha, in this sense.
However, treating the coin as a three-dimensional cylinder with thickness T is closer to reality than treating it as a two-dimensional disk with thickness zero. So Bravo can be thought of as approximating reality, and Alpha can be thought of as approximating Bravo, for a typical coin. Being only approximations, neither prediction should be considered precise, but it's reasonable to expect Bravo to be less imprecise than Alpha, in that sense.
On the applied side, how precise are those measured probabilities? For one thing, a ratio like Nh/N isn't quite the same as that 1/2 above, because the numerator and denominator aren't integers in quite the same sense. As N gets large, miscounting gets inevitable, so a result like 12345/23456 shouldn't be thought of as perfectly precise any longer. If we estimate the uncertainty to be on the order of 100, say, we can employ scientific notation to write that as (1.23*10^4)/(2.35*10^4) to make that point.
For another thing, by design, this is about chance, and so there's always a chance the measured probabilities won't agree with the theoretical predictions regardless of whether the model is good or bad. For N=2, there're four simple outcomes - heads then heads again, heads then tails, ... - and half of them are best explained by a model that says "the coin keeps doing the same thing". Fortunately, such flukes get less likely as N gets large - unfortunately, that means that measurements can't avoid both types of imprecision at once.
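A quick simulation of the "Application" side (my own illustration, working in model Alpha's world, where the rim outcome has probability zero by definition): the measured Ph fluctuates around the predicted 1/2, with flukes shrinking as N grows.

```python
import random

random.seed(42)

for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"N = {n:>6}: measured Ph = {heads / n:.4f}")

# Small N can easily produce a "fluke" like 0.7; large N hugs 0.5.
```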
TLDR, lots of stuff may be thought of as imprecision, and doing so may provide little insight.
I'm unable to tell whether the extra digits in relativistic velocity addition, compared to Newtonian velocity addition, are a question of accuracy or precision.
The calculated velocity has to be measured for confirmation of either theory (Newton's & Einstein's). In other words, the deciding factor is a speedometer's precision and accuracy.
Suppose the actual velocity is 2.0189 m/s
The speedometer is both accurate and precise.
It measures 2.0189 m/s
Newtonian velocity addition says the velocity should be ?
Relativistic velocity addition says the velocity should be ?
In a thought experiment, you can have such a thing as a perfect speedometer, and use it to perfectly determine relative speeds, and use those to test models against each other, as long as their predictions differ at all.
In the real world, a speedometer can't be perfect, only better or worse than another speedometer. To be able to test models against each other, their predictions need to differ by enough to overcome those imperfections.
Quoting TheMadFool
In the real world, there's no point in supposing such a thing, because the only way we can find out is to measure it. In a thought experiment, there may be a point - but thought experiments can't confirm theories, only falsify [s]theories[/s] hypotheses that are internally inconsistent.