You are viewing the historical archive of The Philosophy Forum.

A question about time measurement

TheMadFool November 05, 2017 at 06:56 16325 views 165 comments
When we measure length we specify a unit (cm, inch) and then use this unit to measure the lengths of objects. Then we say an object is x cm or inches. Not much of a problem in this, but what is essential in ''all'' measurements is regularity. What I mean is that once we define a cm or an inch, the specific length that is a cm or an inch must NOT change. Otherwise measurement would be meaningless.

[i]Regularity[/i] - the unit of measurement should be constant.

Now let's see how we measure time. Time is measured in seconds, its multiples or subdivisions. The second, today, is defined in terms of how long it takes for a specific atom to vibrate some number of times.

This seems problematic (for me) because how do we know the vibrations of the atom used to define a second are regular? To me the only way we can decide this is by using another process or phenomenon we know to be regular, but then how do we know that particular process or phenomenon is regular? And so on...

A classic case of the chicken-egg problem.

Your thoughts...thanks.

Comments (165)

t0m November 05, 2017 at 09:17 #121571
Quoting TheMadFool
This seems problematic (for me) because how do we know the vibrations of the atom used to define a second are regular? To me the only way we can decide this is by using another process or phenomenon we know to be regular, but then how do we know that particular process or phenomenon is regular? And so on...


That's a great point. It looks like we have to trust in some kind of uniformity. But there's also Hume's problem of induction. So the trust in uniformity is also there in the laws themselves that involve time. Science looks to be founded on some basic sense of the order in the world, a sense that it also encourages.
noAxioms November 05, 2017 at 13:12 #121656
Quoting TheMadFool
This seems problematic (for me) because how do we know the vibrations of the atom used to define a second are regular?
Because you get the exact same result from countless repetitions of the experiment.
I get different results from the time measurement of my grass to grow 5cm, so using grass growth as a clock works, but not very accurately. Good clocks use very consistent processes.
Streetlight November 05, 2017 at 13:18 #121657
Er, all you need are two measures you think are regular in relation to each other. If one or both are not in fact regular, at some point they will go out of sync. If they don't, you're good. And even if they are out of sync, if the divergence isn't too bad, all you have to do is recalibrate every once in a while - like we do with leap years.
sime November 05, 2017 at 13:41 #121661
Yes, and the definition of the standard metre is no longer the length of a particular platinum bar contained in a Paris vault, but the distance light travels in a vacuum in a precise time interval ideally measured by etc etc

In short our metric units are essentially imprecise and are in practice unrelated to a particular standard.

The deeper problem relates to Wittgenstein's observations concerning our notion of sameness. For our notion of two things being "identical" *isn't* a matter of their being equivalent in a precisely empirical sense, but of their being substitutable for one another in the practical sense of a language game.

Metaphysician Undercover November 05, 2017 at 14:45 #121671
Quoting TheMadFool
Now let's see how we measure time. Time is measured in seconds, its multiples or subdivisions. The second, today, is defined in terms of how long it takes for a specific atom to vibrate some number of times.


The second is derived from the minute, which comes from the hour, and the day. The day refers to the planet's motion. So one rotation of the planet (which is relatively constant) can be divided into seconds. The number of times a specific atom vibrates in one second can be counted, and observed to be consistent as well. So this is deemed a constant as well. However, the so-called "constants" are not absolutely constant, so this produces the need to make slight adjustments now and then.

Quoting TheMadFool
This seems problematic (for me) because how do we know the vibrations of the atom used to define a second are regular? To me the only way we can decide this is by using another process or phenomenon we know to be regular, but then how do we know that particular process or phenomenon is regular? And so on...


So we always compare different activities which appear to be temporal constants: the earth's revolution around the sun, the phases of the moon, the rotation of the earth, vibrations of atoms, the movement of light, etc. From these comparisons we can determine which constants are more reliable than others. If one constant proves to have a slight variance in relation to numerous others, we can make adjustments accordingly, model that variance, and look for its cause. By modeling the variances in planetary motions, the heliocentric theory of the solar system was developed and proven. Kepler determined the orbits of the planets to be elliptical rather than circular (as Copernicus had postulated, maintaining consistency with earlier heliocentric proposals and Aristotle's metaphysics). This was established through analysis of those variances, which were the stumbling block that had to be overcome to prove the heliocentric system.

TheMadFool November 05, 2017 at 16:13 #121676
Quoting noAxioms
Because you get the exact same result from countless repetitions of the experiment.


How do you know you get the exact same result? By applying a specific standard to both to confirm repeatability of measurement. The next question, of course, is how do you know the standard you chose is regular? Chicken and egg?

Quoting StreetlightX
Er, all you need are two measures you think are regular in relation to each other.


Well, that doesn't solve the problem does it? The phenomenon itself must be regular. Taking two faulty rulers to countercheck each other doesn't solve the problem of whether we have the right measurement.

Reply to sime I see your point. Makes sense.

Reply to t0m (Y)

Quoting Metaphysician Undercover
However, the so-called "constants" are not absolutely constant, so this produces the need to make slight adjustments now and then.


How do we know that? By using a supposedly accurate timepiece. And how do we know that that's accurate?

Streetlight November 05, 2017 at 16:22 #121677
Quoting TheMadFool
Well, that doesn't solve the problem does it? The phenomenon itself must be regular.


What 'phenomenon'? All you want is regularity. If two measures are in sync, they're regular. You really need to stop with the pseudo-problem threads.
Metaphysician Undercover November 05, 2017 at 16:40 #121678
Quoting TheMadFool
How do we know that? By using a supposedly accurate timepiece. And how do we know that that's accurate?


An accurate time piece just utilizes one of the supposed constants. We know that the supposed constants are not absolute, by comparing one to the other, and determining the variances. If we compare numerous constants we can determine which variances are proper to which constants.
TheMadFool November 05, 2017 at 16:53 #121680
Quoting StreetlightX
What 'phenomenon'? All you want is regularity. If two measures are in sync, they're regular.


Can you flesh out this idea of ''two measures in sync''? Can you give me a concrete instance of this ''solution''?

I'll give it a try...

Take two pendulums swinging. By ''in sync'' you mean they move to-and-fro at regular intervals. But that's exactly the issue here. How do you know they move, as you say, ''in sync''?
Streetlight November 05, 2017 at 17:25 #121681
Quoting TheMadFool
By ''in sync'' you mean they move to-and-fro at regular intervals.



If the frequency of the pendulums is regular and one is, for example, phase shifted by 90 degrees, they should stay 90 degrees phase shifted (the distance between the waves should not change). If the distance remains the same, the frequency of the pendulums is regular; if it doesn't, the frequency of at least one of the waves isn't regular.

It's like asking how one can know if two lines are parallel. If you're in Euclidean space, place them side by side perpendicular to a horizon, and no matter how far you draw your horizon, the lines should never meet. Then you know your lines are straight. No need to compare an infinite array of lines.
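The phase-shift test described above can be sketched numerically. This is a toy illustration, not from the thread: the pendulums are idealized as pure sinusoids, and the `phase` and `drifting_phase` helpers, the periods, and the drift rate are all invented for the example.

```python
import math

def phase(t, period, phase0=0.0):
    # Instantaneous phase (radians) of a perfectly regular pendulum.
    return (2 * math.pi * t / period + phase0) % (2 * math.pi)

period = 2.0          # both pendulums swing with a 2 s period
shift = math.pi / 2   # pendulum B starts 90 degrees ahead of pendulum A

# Regular pendulums: the phase gap stays exactly 90 degrees at every instant.
for t in [0.0, 1.3, 10.0, 12345.6]:
    gap = (phase(t, period, shift) - phase(t, period)) % (2 * math.pi)
    assert abs(gap - math.pi / 2) < 1e-6

def drifting_phase(t, period, rate=1e-4):
    # A faulty pendulum whose effective period slowly lengthens over time.
    return (2 * math.pi * t / (period * (1 + rate * t))) % (2 * math.pi)

# Compare the faulty pendulum against regular A: the gap no longer holds
# still, which is exactly the out-of-sync signature described above.
early = (phase(10.0, period, shift) - drifting_phase(10.0, period)) % (2 * math.pi)
late = (phase(1000.0, period, shift) - drifting_phase(1000.0, period)) % (2 * math.pi)
print(abs(early - late) > 0.1)  # True: the gap drifted, so one clock is irregular
```

The point the sketch makes is the one in the post: regularity shows up as a constant difference, not as any particular frequency.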
t0m November 05, 2017 at 21:22 #121730
Quoting TheMadFool
Taking two faulty rulers to countercheck each other doesn't solve the problem of whether we have the right measurement.


Right. I think you've opened a nice philosophical can of worms, kind of like Hume's problem of induction. It may not be a practical question, but I do not think it's only a matter of playing with words. I disagree that it's a pseudo-question.

[quote=wiki]
Since 1967, the International System of Units (SI) has defined the second as the duration of 9192631770 cycles of radiation corresponding to the transition between two energy levels of the caesium-133 atom. In 1997, the International Committee for Weights and Measures (CIPM) added that the preceding definition refers to a caesium atom at rest at a temperature of 0 K.[15]
[/quote]

How would we know 'officially' if the transitions were slowing down or speeding up? They are themselves the 'official' measure. Practically we would see things go out of sync (other periodic processes would fall out of sync with the cesium). We'd be thrown into a scientific crisis/opportunity.
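The numbers in the quoted SI definition can be unpacked with a line of arithmetic (a sketch; the variable names are mine):

```python
# The SI second quoted above: 9,192,631,770 cycles of caesium-133 radiation.
cycles_per_second = 9_192_631_770
frequency_ghz = cycles_per_second / 1e9
print(frequency_ghz)  # about 9.193 GHz microwave radiation

# Duration of a single cycle - the "tick" underneath the second:
one_cycle_s = 1 / cycles_per_second
print(one_cycle_s)  # about 1.088e-10 seconds per cycle
```

So the 'official' measure is a count of roughly nine billion sub-nanosecond ticks, which is what makes any drift in the ticks themselves officially invisible.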
apokrisis November 05, 2017 at 21:37 #121734
Quoting t0m
How would we know 'officially' if the transitions were slowing down or speeding up?


We know radioactive decay makes a good clock because we also know the physics that could change its rate.

So we have relativity theory which says everything is fine as long as we share the clock's inertial reference frame. If the clock were to get accelerated, then it would read off time differently.

And we have quantum theory to tell us that radioactive decay is an intrinsically independent process. Its statistics are "internal" - ruled by a constant of nature. Although again, we could affect that by "observing the decay continuously", preventing its spontaneous probabilistic decay - something called the quantum Zeno effect.

So in principle, the "clock of the universe" could speed up or slow down and we couldn't notice it. But it is the fact itself that we couldn't notice a difference that then means there ain't anything to worry about - except people's metaphysical hankering for externalist accounts of reality.

We have rules - relativity and quantum theory - to handle the way time can be stretched or broken in ways we can notice. We can understand how clocks can tell a different time because of physical differences. And so, to the degree we remove those difference-making conditions, our clocks will "run true".

This may only be an internalist truth. But in the end, only internalism makes sense as epistemology.


t0m November 05, 2017 at 21:48 #121743
Quoting apokrisis
And we have quantum theory to tell us that radioactive decay is an intrinsically independent process.


What do you make of Hume's problem of induction? I have no real doubt about the uniformity of nature, but it seems to me that quantum theory is founded on our trust in this uniformity. In theory, all the order we have come to trust in could go to hell. Admittedly this would probably wipe out our ability to notice this going-to-hell. We'd die instantly. But isn't it logically possible? I don't expect to be suddenly wiped out by a change in the 'laws' of nature, but I have yet to see a way around Hume's 'problem.'

Quoting apokrisis
So in principle, the "clock of the universe" could speed up or slow down and we couldn't notice it. But it is the fact itself that we couldn't notice a difference that then means there ain't anything to worry about - except people's metaphysical hankering for externalist accounts of reality.


I think you're assuming a 'nice' version of the scenario. If things that 'should' be in sync go out of sync, then we'd be thrown into the crisis of deciding which 'law' had been violated. We would of course try to include this violation in a still more general law. We can of course postulate the law of the change of the laws. But I don't see how we aren't always relying on an intuitive faith in the uniformity of nature.

Note that I can only 'doubt' this faith theoretically, so there's no question of disregarding science here. It is almost sanity itself to project uniformity on nature.


apokrisis November 05, 2017 at 22:03 #121749
Quoting t0m
I don't expect to be suddenly wiped out by a change in the 'laws' of nature, but I have yet to see a way around Hume's 'problem.'


Where there is belief, doubt is also possible by definition. Saying A always logically permits not-A if A is in fact a meaningful thing to assert.

So the problem of induction isn't really a problem. If we couldn't doubt, how could we say we believed?

And then uniformity is just a reasonable assumption - the rational bottom-line. How can we measure a difference except against a baseline of indifference? We can't even properly, logically, conceive of a universe in which time ran faster or slower unless we first conceive of it running with some rate that would be, by contrast, constant - the rate without any difference.


t0m November 05, 2017 at 22:15 #121753
Quoting apokrisis
So the problem of induction isn't really a problem. If we couldn't doubt, how could we say we believed?


I take your point, but Hume could squeeze out enough theoretical doubt to make the issue conspicuous. I think the OP does the same thing. Maybe it's not something we can take 'seriously' away from the intellectual pleasure involved in this making conspicuous. But that's true of lots of metaphysics. A thoroughly practical mind might grudgingly/generously call it poetry or conceptual art as opposed to nonsense.

I'm not accusing you of this, but it's easy to imagine a 'smug quietism' misreading genuine logical tensions as language on holiday, complacently waiting for the acknowledgement of such tensions to become conventional, respectable. Why is the 'meaning of being' not a pseudo-question while the OP is? Is this completely divorced from the public dominance of this or that thinker? Are any of us immune to the pressure to be intellectually respectable?
apokrisis November 05, 2017 at 22:27 #121758
Quoting t0m
it's easy to imagine a 'smug quietism' misreading genuine logical tensions as language on holiday, complacently waiting for the acknowledgement of such tensions to become conventional, respectable.


But this particular issue has had really heavyweight analysis within the metaphysics of physics.

https://en.wikipedia.org/wiki/Hole_argument
https://en.wikipedia.org/wiki/G%C3%B6del_metric
https://en.wikipedia.org/wiki/Bucket_argument

Einstein, Gödel and Mach are some pretty impressive thinkers. If physics doesn't seem to worry too much about "the speed of time", it is because analysis says "everything is relative".



t0m November 05, 2017 at 23:02 #121778
Reply to apokrisis
Is this relativity itself relative? Or understood as an absolute? And was it not established on an assumption of the uniformity of nature? How could any theory be confirmed or survive attempts to falsify it apart from the assumption of the uniformity of nature? If you know of a potent retort to Hume's problem, I'll check it out.
fishfry November 05, 2017 at 23:13 #121780
All measurement is approximate anyway, and any drift in the vibrational frequency of cesium atoms is probably orders of magnitude smaller than the measurement error.

Measurement works as long as it's useful. The technology of measurement moves forward together with the progress of all other technology and science. In the end there really is no such thing as a second. There's no law that there are 60 of them in a minute; that comes from the Babylonians, who liked base 60. The units of time are whatever humans say they are. They're not part of nature. You can say that time is part of nature. And the cesium atom, that's a part of nature. But the definition of a second, that's not a part of nature. That's something humans did.

By the way I looked up the actual definition.

The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom.

https://physics.nist.gov/cuu/Units/second.html
apokrisis November 05, 2017 at 23:14 #121781
Quoting t0m
Is this relativity itself relative? Or understood as an absolute?


Of course it is itself relative. It's a scientific theory and so accepts an internalist inductive argument - the triadic arc of abductive hypothesis, deductive theory, and inductive confirmatory test.

If it claimed anything absolute, that would be externalism.

Quoting t0m
And was it not established on an assumption of the uniformity of nature?


Again, what else could count as a reasonable hypothesis? To explain differences that make a difference, you have to presume some baseline where any differences don't. That is what makes measurement even a possibility.

And if science didn't work, we would have given it up long ago (and never in fact arrived at where we are now.)


apokrisis November 05, 2017 at 23:18 #121782
Quoting fishfry
But the definition of a second, that's not a part of nature. That's something humans did.


So the Planck constant is a social construction and not a part of nature? -https://en.wikipedia.org/wiki/Planck_time

t0m November 05, 2017 at 23:35 #121787
Reply to apokrisis

To be clear, science is great. We have no choice. Doubts of the uniformity of nature are theoretical. They are 'silly.' But they are fascinating. To me the problem of induction is beautiful like a chess problem. How strange it is. What initially delighted me about it was that it was revealed purely through logic. Thinking opens a strangeness that was invisible. If this isn't practical, who said thinking should always be practical? For me the practical is valuable to some degree because it opens up the free time to enjoy reality aesthetically.
fishfry November 05, 2017 at 23:45 #121790
Quoting apokrisis
So the Planck constant is a social construction and not a part of nature? -https://en.wikipedia.org/wiki/Planck_time


As I understand it, the Planck constant is defined in terms of meters^2 times kilograms per second. The physical phenomenon is part of nature, but the units are the work of man. If we defined a kilogram differently we'd get a different number for the Planck constant. It would still be the same value in nature, but it would be described by a different real number in a different system of units.

The underlying physical phenomenon that causes there to be a Planck constant is a part of nature. How we define it is the work of man. In that sense, what we call a second is the work of man. If we had picked say, one and a half of our seconds and called that a second, all the constants in physics would change but no fundamental laws would change at all. We'd just rescale everything.
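The rescaling argument above can be made with one line of arithmetic. A hedged sketch (the 1.5 factor and the variable names are invented for illustration): if the "second" had been defined as 1.5 of ours, every constant with time in its units would pick up a factor of 1.5, while the physics stayed put.

```python
# Hypothetical rescaling: suppose the "second" had been defined as 1.5 of ours.
c_si = 299_792_458        # speed of light, metres per current second
new_second_in_old = 1.5   # invented redefinition, for illustration only

# In one new second light covers 1.5 times the distance, so the numeric
# value of c changes even though the physical speed does not:
c_new = c_si * new_second_in_old
print(c_new)  # 449688687.0 metres per new second

# Planck's constant (units kg * m^2 / s) rescales by the same factor:
h_si = 6.62607015e-34
h_new = h_si * new_second_in_old
assert abs(h_new / h_si - new_second_in_old) < 1e-12
```

Only the real numbers attached to the constants change; the underlying values in nature are untouched, which is the post's point.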
apokrisis November 05, 2017 at 23:51 #121791
Quoting t0m
Doubts of the uniformity of nature are theoretical. They are 'silly.' But they are fascinating.


How so? It is a metaphysically logical position.

Nothing can be definitely talked about except in terms of being measurably "other". If you want to talk about non-uniformity - as in a rate of time that could vary - then there is no choice but to talk about that in contrast to a rate that is uniform.

It isn't some unjustified whim. It is the way metaphysical-strength logic works.

apokrisis November 06, 2017 at 00:09 #121794
Quoting fishfry
As I understand it, the Planck constant is defined in terms of meters^2 times kilograms per second.


If you check it out, you will see that "the Planck time comes from dimensional analysis".

So you may as well set every value to 1. The dimensioned units fall out of it. We are talking only of the reciprocal relations that connect the three corners of a triad - c, G and h. We are talking about a relation with irreducible internal complexity.

So we peered into the heart of nature and found - this bare triadic relation. We have G to scale any departure from continuous flatness, h to scale any departure from discrete curvature, and c to scale the "rate" at which the G and h can "communicate".

So what we call "time" is an aspect of a triadic knot, a fundamental hierarchical relation.

In simplistic terms, the Planck time is the view of the knot that gets emphasised when you draw out its knotted relation in this fashion - hG/c. (The fuller equation is (hG/c^5)^(1/2).)

So the dimensioned units do drop out of the story. You can set all the units involved to 1. And then you are left with this triadic knot that ties together three things in irreducible, reciprocal fashion - h, G and c.

So time doesn't "exist" itself in some fundamental Newtonian way. It is emergent from this triadic relation. The equation for the Planck time is the formula for making that particular kink in the knot "flat enough" to act as a measurement baseline.
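The Planck-time formula quoted above can be evaluated directly. A sketch using CODATA SI values and the standard form with the reduced constant hbar = h/2π (the variable names are mine):

```python
import math

# Planck time from dimensional analysis: t_P = sqrt(hbar * G / c**5).
hbar = 1.054571817e-34   # reduced Planck constant, J * s
G = 6.67430e-11          # gravitational constant, m^3 / (kg * s^2)
c = 299_792_458          # speed of light, m / s

t_planck = math.sqrt(hbar * G / c**5)
print(t_planck)  # roughly 5.39e-44 seconds

# In natural units where hbar = G = c = 1 the same knot gives t_planck = 1:
# the dimensioned numbers "drop out", leaving only the triadic relation.
```

Note the dimensional point: seconds appear on both sides only because we chose to express h, G and c in seconds in the first place.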

TheMadFool November 06, 2017 at 02:31 #121825
Reply to StreetlightX I see. So we can simply take two objects that are ''in sync'' and take that as a measure of regularity? How do we check for synchronization? I think it'll be imprecise.
apokrisis November 06, 2017 at 02:51 #121831
Quoting TheMadFool
So we can simply take two objects that are ''in sync'' and take that as a measure of regularity? How do we check for synchronization? I think it'll be imprecise.


SX is wrong if he suggests the clocks could be used to check each other in this fashion in any absolute sense. But in a relative fashion, that's fine.

You can definitely tell if two clocks start to beat a different time. And from there, you can make an inductive case for why that might be. One clock might be faulty. Or in fact someone might be secretly accelerating it. And one of those explanations might have the grounds to be the more likely.

The same is the case if the two clocks keep perfect synchrony. It could be that one is being accelerated, and yet also it is faulty to exactly the degree needed to compensate. We are now talking of something that both remains a possibility yet is completely unlikely. And so two clocks are better than one as a constraint on such uncertainties.

Streetlight November 06, 2017 at 02:59 #121834
Reply to TheMadFool You can use your eyes, for one. Or more precise apparatuses if available or necessary. But this is a question of experimental design, not principle. And the principle is what matters.
Metaphysician Undercover November 06, 2017 at 03:14 #121841
Quoting apokrisis
The same is the case if the two clocks keep perfect synchrony. It could be that one is being accelerated, and yet also it is faulty to exactly the degree needed to compensate.


That's why we need numerous different types of clocks. Each has its own peculiarities.
apokrisis November 06, 2017 at 03:23 #121844
Quoting Metaphysician Undercover
That's why we need numerous different types of clocks. Each has its own peculiarities.


Nope. We need the single best process that could be used at any time and any place. Radioactive decay would be that. Or some similar "free" quantum process.
TheMadFool November 06, 2017 at 08:12 #121878
Quoting StreetlightX
And the principle is what matters.


Well, regularity is absolutely necessary and you agree on that.

I'm still not convinced about this so please be patient.

I'll stick to pendulums for now. Let's take two, A and B. We see them ''in sync'', i.e. they move in sync from one side to the other. You're right in that IF they're not in sync they'll move out of phase after ''some time''. This seems quite obvious if the periods (time taken to make one swing) are whole numbers. For instance, if A's is 2 seconds and B's is 3 seconds it becomes quite obvious that they're out of sync.

However, what if A's is 2.000000009 seconds and B's is 2.0000000000000009 seconds? This imperceptible difference will compound over time and, after maybe millions of years, A and B will be out of sync.

Another thing is the assumption (is there a physics law for this?) that the pendulum swing will remain constant. We can't be sure of that without using another timepiece, and we're back to the chicken-egg problem.
Streetlight November 06, 2017 at 08:28 #121880
Quoting TheMadFool
However, what if A's is 2.000000009 seconds and B's is 2.0000000000000009 seconds? This imperceptible difference will compound over time and, after maybe millions of years, A and B will be out of sync.


You misunderstand: it's the difference between periods which must be constant to show that both pendulums swing at a regular interval. The actual frequencies of the pendulum swings - which do not at all have to match - are irrelevant.

Quoting TheMadFool
Another thing is the assumption (is there a physics law for this?) that the pendulum swing will remain constant


What are you talking about? The comparison of the pendulums is made in order to assess that constancy. There's no 'assumption' made. Pause and think before you type, please.
TheMadFool November 06, 2017 at 08:46 #121882
Quoting StreetlightX
You misunderstand: it's the difference between periods which must be constant to show that both pendulums swing at a regular interval.


Why complicate the issue by going a step further than you need to? If I can see the pendulums swinging in sync, why calculate the difference?

Quoting StreetlightX
What are you talking about?


The assumption is that the pendulum takes the same time for every swing (regularity). How can we confirm this? I can think of two ways of doing this:

1. Depend on some physical law that proves it

2. Using another time piece to confirm it

Both 1 and 2 lead to the chicken and egg problem.
Streetlight November 06, 2017 at 08:51 #121883
Quoting TheMadFool
If I can see the pendulums swinging in sync, why calculate the difference?


Because difference is all that matters when trying to determine regularity. I should not have spoken of synchronicity, which seems to have misled you. Although if difference = 0 and stays as 0 (your pendulums are in sync), then your pendulums have regular swings.
Metaphysician Undercover November 06, 2017 at 11:13 #121931
Quoting apokrisis
Nope. We need the single best process that could be used at any time and any place. Radioactive decay would be that. Or some similar "free" quantum process.


So how would you decisively determine that radioactive decay is "the single best process" without comparing it to a number of different clocks? Or is this just a bias that you hold?
apokrisis November 06, 2017 at 11:16 #121932
Reply to Metaphysician Undercover What alternative did you have in mind? Chinese water clock?
Metaphysician Undercover November 06, 2017 at 11:29 #121939
Reply to apokrisis
I never heard of the Chinese water clock, perhaps it's one possibility.
apokrisis November 06, 2017 at 11:40 #121946
Reply to Metaphysician Undercover Maybe a sundial then. That would obviously work any time, any place.
Metaphysician Undercover November 06, 2017 at 11:43 #121949
Reply to apokrisis
Another possibility, the more the better. The ancient people made a clock of the moon, the sun, every planet, and the "fixed" constellations. That's what was required to determine the nature of the solar system.
noAxioms November 06, 2017 at 14:11 #122006
Quoting TheMadFool
Another thing is the assumption (is there a physics law for this?) that the pendulum swing will remain constant.
It is not constant. Ever notice all the complexity of the pendulum on a grandfather clock, with all those bars made of different metals? It's not just decorative. It is an attempt to cancel out the normal variations in the period of that pendulum which would significantly reduce the accuracy of the clock.

The standard of time was the average length of a day, with a second defined as an 86,400th of that. I say average length because the day is about a minute longer in December than it is in June.
It would have been more accurate to slice up the time of one rotation (about 1436 minutes) since that doesn't vary significantly over the year, but nobody had a use for hours defined that way.

Anyway, we know that standard is reasonably stable since it would require incredible force to alter that rotation rate. OK, said force does exist, and we have leap-seconds to compensate. Eventually the day will be long enough that we need more than one leap second each day. The scientific definition of a second will diverge more significantly from a clock second, the former corresponding to the day length back when it was first accurately known, and the latter being a function of whatever the current average day length is.
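The day-length figures in this post check out with plain arithmetic (a sketch; the 23 h 56 min 4 s sidereal value is the standard approximation, not a quote from the post):

```python
# The second as an 86,400th of the mean solar day:
mean_solar_day_s = 24 * 60 * 60   # 86400 seconds in a 24-hour day
assert mean_solar_day_s == 86_400

# One rotation relative to the stars (a sidereal day) is about 23 h 56 min 4 s,
# which matches the "about 1436 minutes" figure in the post:
sidereal_day_s = 23 * 3600 + 56 * 60 + 4
print(sidereal_day_s / 60)  # about 1436.07 minutes
```

The roughly four-minute gap between the two is why a clock sliced from one rotation would drift a full day per year against the solar calendar, which is why nobody defined hours that way.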

Quoting apokrisis
We need the single best process that could be used at any time and any place. Radioactive decay would be that.
Multiple posts claim that radioactive decay makes a good clock. It is unpredictable and uncaused, and makes a crappy clock. Radioactive dating is accurate to no better than several percent. It serves where no other methods are available, but accuracy is hardly its forte.



TheMadFool November 06, 2017 at 16:03 #122029
Quoting noAxioms
Anyway, we know that standard is reasonably stable since it would require incredible force to alter that rotation rate. OK, said force does exist, and we have leap-seconds to compensate.


But to know this we would have to rely on another clock, say A, and to check A we need another clock B...ad infinitum.
noAxioms November 06, 2017 at 16:44 #122041
Quoting TheMadFool
Anyway, we know that standard is reasonably stable since it would require incredible force to alter that rotation rate. — noAxioms

But to know this we would have to rely on another clock, say A, and to check A we need another clock B...ad infinitum.
No clock was used to verify this. Clocks were made to sync to this. The day verifies the clock, not the other way around.
For the length of the day to be significantly variable would require a complete rewrite of the most basic physics. The Earth's rotation is regular because of the complete lack of significant force to alter it.

That is what I was saying in my post.

TheMadFool November 06, 2017 at 17:25 #122054
Quoting noAxioms
The day verifies the clock, not the other way around.


Yes but what verifies the day?
apokrisis November 06, 2017 at 18:26 #122072
Reply to noAxioms Hah. Yes you are right. Complete brain fart to call it radioactive decay. I was meaning the radiative decay of electron transitions in atomic clocks.
noAxioms November 06, 2017 at 20:11 #122106
Quoting TheMadFool
Yes but what verifies the day?
Quoting TheMadFool
But to know this we would have to rely on another clock, say A, and to check A we need another clock B...ad infinitum.
No. No clock is needed to know this.
The average length of the day is the arbitrary standard. There is nothing against which it needs to be verified.
fdrake November 06, 2017 at 22:10 #122142
The kind of time we're talking about isn't some phenomenological or lived time, it's temporal duration. So:

Ways of measuring duration - decided by convention. Using convenient periodic phenomena in nature and engineering (days, moons, clocks, pendulums, oscillations of a hydrogen atom).

Units of measuring duration - again decided by convention. Can be made to equate a previously conventional measure of time (quantities proportional to seconds with the same dimension) and a physical phenomenon (oscillations of a hydrogen atom).

Duration - something real that is measured. Time constrained to a start and finish.

Time - the indefinite continued progress of existence and events in the past, present, and future regarded as a whole (thanks Google).

The central concept here is periodicity, or the propensity for something to repeat with high regularity. Regularity of measurements - oscillations in phase, periodic phenomena. Corrections can be made to account for small irregularities in the oscillations OR in terms of conventional measurements of duration (years -> leap years, errors in atomic clocks).

There is absolutely nothing mysterious here. It isn't philosophy, it's well established engineering and mathematics.
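fdrake's point about correcting conventional units against periodic phenomena (years -> leap years) can be illustrated with the familiar Gregorian leap-year rule. A minimal Python sketch (the function name is ours):

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: every 4th year, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The rule exists to keep the conventional calendar calibrated to the
# tropical year (~365.2425 days, which is not a whole number of days).
print([y for y in (1900, 2000, 2023, 2024) if is_leap_year(y)])  # [2000, 2024]
```

The correction is purely conventional: the unit (the calendar year) is periodically adjusted so it stays in step with the periodic phenomenon it tracks.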


apokrisis November 07, 2017 at 00:40 #122181
Quoting fdrake
There is absolutely nothing mysterious here. It isn't philosophy, it's well established engineering and mathematics.


Well hardly. Time remains physics' biggest problem, really.

Quoting fdrake
The central concept here is periodicity, or the propensity for something to repeat with high regularity. Regularity of measurements - oscillations in phase, periodic phenomena.


Note how these are all spatialised concepts of time. Whether it is the rotation of a clock hand or the rotation that is a periodic sine wave, it is about repeating a round trip locally. Time is measured by how long it takes to complete a repetitive motion. Going around in a little circle zeros the clock to make a cycle. The hand travels forever and winds back up crossing the same spot.

A problem with spatialised time is that it inherits the symmetry of spatial dimensionality. It makes no difference whether the clock hand rotates clockwise or counterclockwise. And yet time has an arrow that points in a direction. Spatialised clocks can’t measure that essential quality of actual temporal duration - the fact that the symmetry is broken.

But there is the other angle we could employ to measure time. And that would be in terms of energy, or entropy. A thermometer could measure time as falling temperature.

And indeed that is how we now measure the age of the universe. We read it off in terms of the average temperature of the cosmic background radiation.

The cosmic time is currently 2.725 kelvin.

Metaphysician Undercover November 07, 2017 at 02:49 #122205
Quoting noAxioms
The standard of time was the average length of a day, with a second being defined as a 86400th of that. I say average length because the day is about a minute longer in December than it is in June.


OK, if I suppose that the standard is the day, I need to define the day empirically. I can't say that it is the time until the sun appears at the same place on the horizon again, because each day the sun is in a slightly different position. I believe this is why TheMadFool says we have to refer to another clock. I think that clock would be the year.

Quoting TheMadFool
But to know this we would have to rely on another clock, say A, and to check A we need another clock B...ad infinitum.


I don't think we need to keep going to more clocks ad infinitum, because we can synchronize a number of clocks, and make the necessary adjustments. After a full year, we can follow the sun's positioning on the horizon, and determine what NoAxioms calls "the average length of the day". Then the day is no longer the real standard, the year is, because the average length of the day is determined in relation to the year. Of course there is something called "the precession of the equinoxes", which may incline one to look for an even longer period of time to determine the average length of a year. But there is no need to consider an infinite regress, as the time period of each of these standards gets longer and longer, until there is no need to go any further.

Quoting noAxioms
The average length of the day is the arbitrary standard. There is nothing against which it needs to be verified.


So the day gets verified by the year. It is the only way that we could produce an "average" length of day. We could go on to produce an average length of the year, but this would mean that we would need to place the year within an even longer cosmological time period. Right now, we just adjust with leap years as determined necessary.

Likewise, if we look to a shorter and shorter time period there would be a similar problem in inverse. The problem of the short time period cannot be so easily resolved though. The shorter the time period, the more difficult it is to find an activity to measure that period, and in theory we could assume a time period shorter than any activity. The problem of the short time period manifests in the uncertainty principle of the Fourier transform. You cannot claim to have certainty about the activity because the time period is too short, and you cannot claim to have certainty about the time period because the activity is too short. It's a conundrum.
TheMadFool November 07, 2017 at 04:53 #122222
Quoting noAxioms
No. No clock is needed to know this.
The average length of the day is the arbitrary standard. There is nothing against which it needs to be verified.


Why? How do we know that the length of the day is going to be constant, as is required? Is there a physical law that proves that the day length is constant? And how do we know that?
TheMadFool November 07, 2017 at 04:59 #122223
Quoting Metaphysician Undercover
I don't think we need to keep going to more clocks ad infinitum, because we can synchronize a number of clocks, and make the necessary adjustments.


We need to. For example we need to check all rulers/scales to the standard definition of a meter or a foot. In the case of length we don't have to worry because we can ensure regularity (each 1 foot = next 1 foot) satisfactorily.

However, when it comes to time, this can't be done without using another time piece to check the standard being used. In fact I think we do this. All time on a computer is checked against a clock in a server somewhere.
Metaphysician Undercover November 07, 2017 at 11:51 #122320
Quoting TheMadFool
We need to. For example we need to check all rulers/scales to the standard definition of a meter or a foot. In the case of length we don't have to worry because we can ensure regularity (each 1 foot = next 1 foot) satisfactorily.


This doesn't imply infinite regress though. What it implies is that we can never be absolutely certain about the length of any time period. This is because at the time when we start to measure a time period all previous time periods have gone past, so we cannot directly compare one time period to another, like we can compare the length of two physical objects. We can place one ruler beside another to see if they are the same.

So with time we always have a medium between the two time periods which are being compared, and this medium is a physical activity. When a physical activity proves itself to be very regular compared to other physical activities, we use it as that medium, through which we compare one time period to another.

Quoting TheMadFool
However, when it comes to time, this can't be done without using another time piece to check the standard being used. In fact I think we do this. All time on a computer is checked against a clock in a server somewhere.


The special theory of relativity describes the difficulties involved with comparing one physical activity to another. It proposes a resolution which assumes that each moving thing has a passing of time which is proper to it, and different from other moving things. Instead of assuming an independent, and absolute passing of time, the passing of time is dependent on the activity of the object. Each object, depending on its motion has a passing of time inherent to itself. I believe that GPS systems operate on relativity theory so they always need to re-synchronize their clocks, due to our inability to reconcile motions in an absolute way.

Some physicists, like Lee Smolin for example, propose an independent passing of time. This means that the passing of time is something itself real, and independent from the movement of objects. Then he can question whether the passing of time itself is something which remains consistent over a long period of time. But I think that to get any productive results in this line of inquiry, we need an explanation, or a description of what the passing of time is. So this is where speculation is needed.
noAxioms November 07, 2017 at 12:47 #122339
Quoting TheMadFool
Is there a physical law that proves that the day length is constant? And how do we know that?
It is reasonably constant, and Newton's laws of motion (the first two mostly) say this. This is not proof, just a very successful set of laws that make good predictions. Come up with different laws that do as well but make the day length much more variable, and then you can introduce doubt.

I say 'reasonably' constant. When precision was needed, the second was eventually redefined against something even more regular (the caesium vibrations). Each day is longer than the same day last year, a trend that will continue (assuming other variables stay nearly the same) until the day and month are the same length. Over long times, the day length is anything but stable, ranging from around 10 to 1500 hours. But it has been consistently 24 hours for the very short duration of humans measuring it, and that consistency is what made it our arbitrary standard of time.
TheMadFool November 07, 2017 at 13:28 #122342
Quoting noAxioms
It is reasonably constant, and the Newton's laws of motion (the first two mostly) say this. This is not proof, just a very successful set of laws that make good predictions. Come up with different laws that do as well but make the day length much more variable, and then you can introduce doubt.


Thanks. I was thinking too that there's some physics law that proves some physical durations are fixed and constant. I remember in high school I read something about the pendulum's period depending on g (acceleration due to gravity) and L (the length of the pendulum). However, I don't think this really solves the problem because quantification comes first in physics and time is a quantity. In other words, we need to possess accurate instruments before we can discover the quantitative laws of nature. Anyway, what's amazing is how, even with inaccurate clocks, science has ''discovered'' so many physical laws.

Now, here's something that I just thought of...

If you'll agree with me that time measurement isn't as accurate as we think then could it be that all the laws of nature we've discovered so far are wrong? They're just approximations at best and completely bogus at worst. What if there are no laws of nature and all the patterns we see in nature (at least those dependent on time) are simply illusions created by our failure to measure time accurately?

Quoting Metaphysician Undercover
What it implies is that we can never be absolutely certain about the length of any time period.


Yes. Please read above. Sorry can't reply to you separately.
noAxioms November 07, 2017 at 14:15 #122356
Quoting TheMadFool
I remember in high school I read something about the pendulum's period depending on g (acceleration due to gravity) and L (the length of the pendulum).
This is true of weight pendulums like the one in a grandfather clock. Such clocks run slow on the moon, for instance. There is a mass-pendulum in my watch, and in a typical 400-day clock. Those stay pretty accurate on the moon. Similarly your weight is dependent on g, but your mass is not.
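The g-dependence noAxioms describes follows from the small-angle pendulum formula T = 2π√(L/g). A minimal sketch (the length and the g values are assumed textbook figures):

```python
import math

def pendulum_period(length_m: float, g: float) -> float:
    """Small-angle period of a simple gravity pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A "seconds pendulum" (half-period of 1 s) is roughly 0.994 m long on Earth.
L = 0.994
earth = pendulum_period(L, 9.81)  # surface gravity on Earth, m/s^2
moon = pendulum_period(L, 1.62)   # surface gravity on the Moon, m/s^2

print(earth)  # ~2.0 s per full swing
print(moon)   # ~4.9 s: the same pendulum clock runs slow on the Moon
```

A mass-spring oscillator, by contrast, has a period set by mass and spring stiffness, neither of which changes with local gravity.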
However, I don't think this really solves the problem because quantification comes first in physics and time is a quantity. In other words, we need to possess accurate instruments before we can discover the quantiative laws of nature.
Right. So they know the length of the day was stable (plus/minus 30 seconds), so eventually they needed to build an instrument that said the same value day after day. The hourglass was not accurate enough. Oddly, it was the train and boat people, not the scientists, that drove the technology for the first accurate clocks. Train folks needed it to prevent crashes, and the boat people needed it for navigation. Science had little use for that sort of accuracy back in those days. They worked out F=MA without need of it.

Now, here's something that I just thought of...

If you'll agree with me that time measurement isn't as accurate as we think then could it be that all the laws of nature we've discovered so far are wrong?
The laws we know result in models that give relatively accurate predictions, and are not something that is wrong or right. If you want to posit different laws, you are welcome to do so, but if they make worse predictions, they're less useful laws.
They're just approximations at best and completely bogus at worst. What if there are no laws of nature and all the patterns we see in nature (at least those dependent on time) are simply illusions created by our failure to measure time accurately?
If there are no laws, then there is no time to measure inaccurately. The statement is thus incoherent. You're asking whether, if there is no map, the territory is an illusion. What if I have a completely bogus map that has no correspondence to the territory, and yet the nonsense map gets me where I want to go every single time? How bogus is the map then? That seems to be what you're asking.

fdrake November 07, 2017 at 15:16 #122363
Reply to apokrisis

Time is mysterious. Duration in every day contexts is not. It isn't as if the mysteries of time impede interpretation and calibration of watches. That's the point I was making.
TheMadFool November 08, 2017 at 03:12 #122548
Quoting noAxioms
Science had little use for that sort of accuracy back in those days.


I guess we have acceptable limits of accuracy.

Quoting noAxioms
They worked out F=MA without need of it.


Really? I thought time was part of A (acceleration)? Were Newton's laws theoretically derived?

Quoting noAxioms
The laws we know result in models that give relatively accurate predictions, and are not something that is wrong or right. If you want to posit different laws, you are welcome to do so, but if they make worse predictions, they're less useful laws.


Let me illustrate what I mean.

Imagine a world with a radioactive element x that decays at the rate of 1 atom every true second.

Let's suppose we have a clock that is irregular too: one tick is supposed to be 1 second but actually tick 1 = 1 second, tick 2 = 2 seconds, tick 3 = 1 second, tick 4 = 2 seconds and so on.

If we study the element x for 4 ticks (4 seconds by the defective clock) of the clock
6 atoms decayed because 6 true seconds have passed (1, 2, 1, 2)
Time passed by the clock = 4 seconds
Rate of decay = 6/4 = 1.5 atoms/second

But...

The actual time passed = 6 seconds ( 1, 2, 1, 2)
True rate of decay = 6/6 = 1 atom/second

If the defective clock is used universally then we will never notice the error.

What do you think? Thank you for your replies. I've learned a lot.
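TheMadFool's arithmetic can be checked directly. A minimal sketch of the defective-clock example, using the same numbers as the post above:

```python
# True durations of the defective clock's ticks, in seconds: 1, 2, 1, 2
tick_durations = [1, 2, 1, 2]

true_seconds = sum(tick_durations)   # 6 true seconds actually elapsed
clock_seconds = len(tick_durations)  # the clock claims 1 second per tick -> 4
atoms_decayed = true_seconds * 1     # 1 atom decays per true second -> 6 atoms

measured_rate = atoms_decayed / clock_seconds  # what the defective clock reports
true_rate = atoms_decayed / true_seconds       # the actual rate

print(measured_rate, true_rate)  # 1.5 1.0
```

If every observer uses the same defective clock, the 1.5 atoms/second figure is internally consistent and the discrepancy never shows up, which is exactly the point of the example.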
Metaphysician Undercover November 08, 2017 at 11:42 #122671
Reply to TheMadFool
That's why we need to compare numerous physical activities to produce an accurate clock.
noAxioms November 08, 2017 at 16:55 #122714
Quoting TheMadFool
Science had little use for that sort of accuracy back in those days.
They worked out F=MA without need of it.
— noAxioms

Really? I thought time was part of A (acceleration)? Were Newton's laws theoretically derived?
Without the precision required to navigate a boat. I didn't say it was done without time measurement.
Massive precision is needed only for more recent physics like the relativity experiments done a century ago.

Imagine a world with a radioactive element x that decays at the rate of 1 atom every true second.

Let's suppose we have a clock that is irregular too: one tick is supposed to be 1 second but actually tick 1 = 1 second, tick 2 = 2 seconds, tick 3 = 1 second, tick 4 = 2 seconds and so on.

If we study the element x for 4 ticks (4 seconds by the defective clock) of the clock
6 atoms decayed because 6 true seconds have passed (1, 2, 1, 2)
Time passed by the clock = 4 seconds
Rate of decay = 6/4 = 1.5 atoms/second

But...

The actual time passed = 6 seconds ( 1, 2, 1, 2)
True rate of decay = 6/6 = 1 atom/second

If the defective clock is used universally then we will never notice the error.

What do you think? Thank you for your replies. I've learned a lot.
Sounds like you have the beginning of a competing set of laws in which time is defined alternatively. But it fails the falsification test.

I have two such samples. One of them does 6 ticks, and the other does 2. Next iteration, the former does 3 and the latter does 4. Clearly the radioactive samples are not measuring actual time since they're not matched.
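noAxioms' two-sample argument can be simulated: two samples with the same mean decay rate still give unmatched counts interval by interval, because the events are random. A sketch using exponential waiting times (the seeds, rate, and interval length are arbitrary):

```python
import random

def decays_in_interval(rate: float, duration: float, rng: random.Random) -> int:
    """Count decay events in one interval by summing exponential waiting times."""
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate)  # waiting time to the next random decay
        if t > duration:
            return count
        count += 1

rng_a = random.Random(1)
rng_b = random.Random(2)

# Two samples of the same element, same mean rate of 1 decay per true second.
counts_a = [decays_in_interval(1.0, 10.0, rng_a) for _ in range(100)]
counts_b = [decays_in_interval(1.0, 10.0, rng_b) for _ in range(100)]

print(sum(counts_a) / 100, sum(counts_b) / 100)  # long-run averages both near 10
print(counts_a[:5], counts_b[:5])                # individual intervals disagree
```

The long-run averages agree, but the tick-by-tick records do not, so neither sample can serve as a regular clock.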
TheMadFool November 08, 2017 at 17:03 #122716
Reply to noAxioms My example used whole numbers and the error reveals itself quite easily but what if the time irregularity is in the nanoseconds or femtoseconds? Errors at such scales can be detected only over millions of years, right?

Look at the history of time measurement. Started with the sun, moon and earth - wasn't accurate enough. Then we moved to pendulums - wasn't accurate enough. Now we have atomic clocks - aren't perfect. Isn't this the infinite regress I'm suggesting here?
noAxioms November 08, 2017 at 18:13 #122722
Quoting TheMadFool
My example used whole numbers and the error reveals itself quite easily but what if the time irregularity is in the nanoseconds or femtoseconds? Errors at such scales can be detected only over millions of years, right?
My counter example works fine with nanoseconds. The radioactive samples might tick every nanosecond and the example still holds. The two samples would never be in sync, and thus are not representative of actual time. The decays are random events, in a way that Earth rotations are not.

Look at the history of time measurement. Started with the sun, moon and earth - wasn't accurate enough. Then we moved to pendulums - wasn't accurate enough. Now we have atomic clocks - aren't perfect. Isn't this the infinite regress I'm suggesting here?
Sun movement is way more accurate than pendulums, but inaccurate in the long run. The day used to be a lot shorter.

I see no infinite regress, or even finite. Yes, some things are more regular than others, radioactive decay being probably at the low end of the scale. Such accuracy is not needed except to verify very fine differences. You apparently don't accept that. You seem to assert that time cannot be known without some insanely accurate device. But somebody said that a day is defined as the time from noon to noon on some arbitrary day in say 1900, and that's the standard, period, even if we don't know how to translate that value into Caesium vibrations (something even more stable than Earth) to twelve places until decades later.

noAxioms November 08, 2017 at 18:16 #122723
Quoting TheMadFool
but what if the time irregularity is in the nanoseconds or femtoseconds?
I think you're asking what would happen if the radioactive sample ticked regularly. Then the decays would not be random events, but regular ones. All similar-rate samples would tick in sync. They don't. There is no way at all to predict when the next tick will come or which sample will yield the next tick.
vesko November 17, 2017 at 20:58 #125079
Reply to TheMadFool
I think that time is measured with the rotation of the earth but I don't know why the unit hour is 1/24 of one turn of the earth.
vesko November 17, 2017 at 21:02 #125081
So we do not have time; we have only the rotation of the earth, which we consider as time.
TheMadFool November 19, 2017 at 11:24 #125626
Quoting fdrake
There is absolutely nothing mysterious here. It isn't philosophy, it's well established engineering and mathematics.


How do you check the accuracy of your watch? You must compare it to some standard clock, say A. The same question applies to A too and so on...ad infinitum. We can never be sure of the accuracy of a clock.


Quoting noAxioms
My counter example works fine with nanoseconds.


Imagine a clock, A, that's supposed to mark off seconds (1 tick = 1 second) but it actually marks off 0.9 seconds (1 tick = 0.9 seconds). How long would it take for clock A's error to be noticed if our discerning power is 1 second? Consider ticks as x. We have the following inequality:

x - 0.9x > 1 ... so only after more than 10 ticks will we be able to notice the error.

Imagine now that for A 1 tick = 0.9999999999 seconds. Plug that in and:

x - 0.9999999999x > 1

Doing the math we need at least 317 years before we can find the error in clock A's time.

So the smaller the difference between true time and the time of a clock, the longer it will take to detect the error.
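The inequality above can be wrapped in a small function. A sketch (the helper name is ours; it just solves x - s·x > r for the number of ticks x):

```python
def ticks_to_detect(seconds_per_tick: float, resolution_s: float = 1.0) -> float:
    """Ticks x needed before the accumulated error x - seconds_per_tick * x
    exceeds our discerning power (resolution_s)."""
    return resolution_s / (1.0 - seconds_per_tick)

print(ticks_to_detect(0.9))            # about 10 ticks, as in the post
years = ticks_to_detect(0.9999999999) / (60 * 60 * 24 * 365.25)
print(years)                           # ~317 years of ticking
```

This reproduces both of TheMadFool's figures: a 0.1 s per-tick error shows up after about 10 ticks, while a 10⁻¹⁰ s per-tick error needs roughly 10¹⁰ ticks, on the order of 317 years.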
fdrake November 19, 2017 at 12:57 #125635
@TheMadFool

How do you check the accuracy of your watch? You must compare it to some standard clock, say A. The same question applies to A too and so on...ad infinitum. We can never be sure of the accuracy of a clock.


Except no, because this isn't an infinite regress. It stops at whatever measurement of time is conventionally accepted as the definition. The duration of a second now means:

"the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom."

And so comparisons ultimately derive from this one.
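fdrake's conventional definition can be turned into a conversion directly: a duration in seconds is just a period count divided by the defining constant. A minimal sketch (the function name is ours):

```python
CS133_PERIODS_PER_SECOND = 9_192_631_770  # SI definition of the second

def duration_seconds(periods_counted: int) -> float:
    """Convert a count of caesium-133 hyperfine radiation periods to seconds."""
    return periods_counted / CS133_PERIODS_PER_SECOND

print(duration_seconds(9_192_631_770))       # 1.0
print(duration_seconds(9_192_631_770 * 60))  # 60.0
```

The point is that the regress terminates by fiat: the count itself is the standard, and everything else is calibrated against it.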
Metaphysician Undercover November 19, 2017 at 13:42 #125637
Quoting fdrake
The duration of a second now means:

"the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom."

And so comparisons ultimately derive from this one.


The problem though, is that this defined "second" is also related to the length of a "day" which is defined by the rotation of the earth. So there is a specific number of those seconds in every day (one rotation of the earth). When, after a long period of counting that specific number of those seconds, without adding any "leap seconds", the point of change over from one day to the next begins to be out of sync, such that it moves from midnight toward morning or something like that, we have to ask, what is the true constant, the caesium-133 atom, or the rotation of the earth.
fdrake November 19, 2017 at 13:58 #125639
Reply to Metaphysician Undercover

The entire point of calibrating measurements of time is that there is a privileged time-measurer and other measurements of time are calibrated through their relationship to the privileged one. This is then what it means for two time-measurers to be in accord. If they are out of accord, they can be corrected. If the privileged one behaves in an unexpected way, it will be changed.

This is because the conventional definition of time with respect to the rotation of the Earth around the Sun is slightly different from the conventional definition of time with respect to the oscillations of a Caesium atom. And thus the introduction of the leap second is precisely an attempt to calibrate the atomic clock second with proportion of a year second. This is so that we can keep the conventional organisation of time in terms of hours, days, months, years and not reinvent the wheel purposelessly.

If you like you could become an advocate of year definitions without leap seconds.
Metaphysician Undercover November 19, 2017 at 14:16 #125641
Quoting fdrake
The entire point of calibrating measurements of time is that there is a privileged time-measurer and other measurements of time are calibrated through their relationship to the privileged one. This is then what it means for two time-measurers to be in accord. If they are out of accord, they can be corrected.


What gives "privilege" to one time-measurer over another? Why would the caesium-133 atom be more privileged than the rotation of the earth?

Quoting fdrake
If the privileged one behaves in an unexpected way, it will be changed.


In other words, it may turn out in the future, that we find out that we were wrong in assigning privilege to the one time measurer over the other.

Quoting fdrake
This is because the conventional definition of time with respect to the rotation of the Earth around the Sun is slightly different from the conventional definition of time with respect to the oscillations of a Caesium atom. And thus the introduction of the leap second is precisely an attempt to calibrate the atomic clock second with proportion of a year second. This is so that we can keep the conventional organisation of time in terms of hours, days, months, years and not reinvent the wheel purposelessly.


Now this just validates The Mad Fool's point. Instead of handing privilege to one clock over another, we introduce leap seconds and live with the inconsistency. One person can argue that the caesium clock gives the more accurate measure of time, and another can argue that the earth's rotation gives a more accurate measure of time. The leap second doesn't resolve anything; it just negates the inconsistency without determining which is more accurate. To determine which is more precise, we turn to a third time-measurer, which is the revolution of the earth around the sun. But now we still have inconsistencies and we still have not adequately determined which is more accurate, so we could compare another time-measurer, and on and on, as The Mad Fool says, ad infinitum.
fdrake November 19, 2017 at 14:21 #125642
@Metaphysician Undercover

Convention privileges a measurer of time as a definer of the second. Then other ways of measuring time are calibrated to it.

What's the time where you are MU?
TheMadFool November 19, 2017 at 16:18 #125650
Reply to fdrake How do we know that the cesium atom's radiation is regular - that one instance of 9,192,631,770 periods (let's call this a cycle) is the same as the next 9,192,631,770 periods?

If one cycle of cesium atom A differs from its next cycle we have no way of detecting the error that'll creep into cesium atom A's clock. We'll need another more accurate clock to detect the error and what if that clock is also irregular?
Metaphysician Undercover November 19, 2017 at 16:34 #125653
Quoting fdrake
Convention privileges a measurer of time as a definer of the second. Then other ways of measuring time are calibrated to it.


That's what I mean, it's just a convention, it's not necessarily an accurate way of measuring time. So the conventions change from time to time, and we still haven't found a measurer which has proven to be accurate.
vesko November 19, 2017 at 17:07 #125655
OK. If I am a person from the planet Mars, for instance, what can be my measure for time: the rotation of Mars around its axis, or around the sun? Obviously it will be DIFFERENT from our own here on our planet. So the time in the universe will depend on the place (planet) where a civilisation measures it. So a second here can be a year on planet X?
fdrake November 19, 2017 at 17:08 #125656
Every clock has a measurement error associated with its time. This is literally a quantification of how accurate the clock is. For the caesium-133 clock, this is an error of 1 second in 100 million years. The reason the atomic clock was adopted over the mean-solar-day definition was that it was more accurate; it had less measurement error and variability.

The cycles of caesium atoms don't differ in any meaningful way. That's kinda the point. They're regular enough to make a measurement of time to the tune of 1 second of error in 100 million years.

Accuracy = precision of measurement. Precision of measurement = small measurement error. The absence of measurement error is impossible, all that matters is whether it is low enough to make good measurements. If a new time measuring device is more accurate like this one which won't get an error until the universe doubles in age from now, then definitions can be made with respect to the more accurate clock.

This is why the second standard based on the Earth's rotation round the sun was rejected; it was demonstrably less precise. But - but - we keep leap-seconds, leap-days etc so that we stay calibrated with the Earth's rotation around the sun, since we don't want to reject the solar year and its monthly/daily/hourly divisions and come up with a new manner of organising time...

This is also why the number of oscillations of the caesium atoms was chosen, since it was incredibly close to the current definition of the second but measured far more precisely.
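As a rough check on the figure fdrake cites, "1 second of error in 100 million years" corresponds to a fractional stability of about 3×10⁻¹⁶. A back-of-envelope sketch:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

# "1 second of error in 100 million years" expressed as a fractional error:
fractional_error = 1.0 / (100e6 * SECONDS_PER_YEAR)
print(fractional_error)  # ~3.2e-16
```

This dimensionless number is how clock stabilities are usually compared, since it doesn't depend on how long you run the clock.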
Metaphysician Undercover November 19, 2017 at 19:28 #125671
Quoting fdrake
For the caesium-133 clock, this is an error of 1 second in 100 million years.


You don't seem to be getting The Mad Fool's point. By what principle do you derive that margin of error? You could only determine the clock's accuracy by comparing it to another clock. So why would you conclude that the caesium clock is more accurate than the other clock? Have you recorded it for a hundred million years? What makes you think that the caesium clock is so incredibly accurate, other than your assumption that another clock which it was compared to is less accurate?

Quoting fdrake
But - but - we keep leap-seconds, leap-days etc so that we stay calibrated with the Earth's rotation around the sun since we don't want to reject the solar year and its monthly/daily/hourly divisions and come up with a new manner of organising time...


So, there is a need for leap-seconds. Why do you assume that this need is produced by the earth's rotation being less accurate than the caesium clock, instead of assuming that the caesium clock is less accurate than the earth's rotation?

Quoting fdrake
This is also why the number of oscillations of the caesium atoms was chosen, since it was incredibly close to the current definition of the second but measured far more precisely.


See, you keep making assertions like this, without explaining what you mean by "far more precisely".
fdrake November 19, 2017 at 19:48 #125674
I did some googling for you. @Metaphysician Undercover

Here's a paper that does measurement error analysis for a type of atomic clock.

Here's one that does measurement error analysis for a modern optical lattice clock.

Here's the wikipedia page on the adoption of the atomic clock standard.

Measurement error estimates in general are obtained from making repeated measurements. When there are multiple components to the measurement error like in the error analysis for atomic clocks, individual component error can be obtained by varying one component independently of the others. The errors are then usually combined through the square root of the sum of their squares, or the square root of the sum of squared %errors.
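The quadrature rule fdrake describes can be sketched in a few lines (the component values below are made up purely for illustration):

```python
import math

def combine_in_quadrature(errors: list[float]) -> float:
    """Combine independent error components: sqrt(e1^2 + e2^2 + ...)."""
    return math.sqrt(sum(e * e for e in errors))

# Two hypothetical independent error components, in some common unit:
print(combine_in_quadrature([3.0, 4.0]))  # 5.0
```

Because independent errors add in quadrature rather than linearly, the combined error is dominated by the largest component, which is why effort goes into shrinking the worst one.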



fdrake November 19, 2017 at 19:52 #125676
A cliffnotes version of the conclusion: errors in measuring the number of oscillations of atoms or lattices between different quantum states within a given duration are then translated into errors in time measurement.
TheMadFool November 20, 2017 at 04:50 #125802
Quoting fdrake
The cycles of caesium atoms don't differ in any meaningful way. That's kinda the point. They're regular enough to make a measurement of time to the tune of 1 second of error in 100 million years.


How do we know that? My watch's error can be detected by an atomic clock. How do we detect the error of an atomic clock? How do we know the ''1 second of error in 100 million years''?
noAxioms November 20, 2017 at 12:34 #125864
Quoting TheMadFool
How do we know that? My watch's error can be detected by an atomic clock. How do we detect the error of an atomic clock?
Read the links fdrake posted. They answer exactly this question. At the sort of accuracy they're talking, two clocks would need to be in exactly the same environment. Put them in adjacent parking spaces and the difference in latitude will get them out of sync.
vesko November 20, 2017 at 13:55 #125880
If you are in a space ship somewhere in the universe and you have no clock, how can you measure time with some approximation?
vesko November 20, 2017 at 14:42 #125884
The answer is as follows:
A simple way would be to measure our pulse, a God-given interval we can use to measure time. Of course the pulse is not constant, but we can take an average of 60 beats per minute (the minute being related to our measures on Earth, which we know a priori).
So time is nothing else but a counting of repeated events done by humans.
noAxioms November 20, 2017 at 15:19 #125885
Quoting vesko
if you are in a space ship somewhere in the universe and you have no clock, how can you measure the time with some approximation?

Quoting vesko
the answer is as follows :
simple way can be the measuring of our pulse which is a given by God interval we can use to measure the time .
If you have a pulse, you have a clock. Lousy precision, but a clock nevertheless. You can time the boiling of your egg by counting heartbeats.
Quoting vesko
So the time is nothing else but a counting of repeated events done by humans.
The counting can be (doesn't need to be) done by humans. The counting is not what time is. It is simply a human taking a measure of what time is. Plenty of non-human things utilize time measurement.

tom November 20, 2017 at 15:31 #125886
Reply to vesko By taking the temperature of the Cosmic Microwave Background.
tom November 20, 2017 at 15:36 #125887
Reply to TheMadFool

Quoting TheMadFool
How do we know that? My watch's error can be detected by an atomic clock. How do we detect the error of an atomic clock? How do we know the ''1 second of error in 100 million years''?


A physical process provides the definition of the second, the accuracy relates to the technology we have with which to measure that physical process.
vesko November 20, 2017 at 20:09 #125939
Reply to noAxioms I think that time is only a non-constant reflection in our human minds, and we humans can measure it, which animals etc. cannot.
Metaphysician Undercover November 21, 2017 at 00:16 #126002
Reply to fdrake
The first article you referred to states that the measured frequency was found to remain stable for a month. How do you make a claim about the clock's accuracy for 100 million years from this?
fdrake November 21, 2017 at 00:22 #126004
Reply to Metaphysician Undercover

Honestly I don't understand literally everything in the paper. I trust their error analysis. If you really want me to translate the error analysis in the paper to a more convenient form I could try, but not now.
Metaphysician Undercover November 21, 2017 at 00:32 #126006
Quoting fdrake
Honestly I don't understand literally everything in the paper. I trust their error analysis. If you really want me to translate the error analysis in the paper to a more convenient form I could try, but not now.


No need to do that. I just don't believe that it's possible to make a statement concerning the accuracy of a clock over a 100 million year time frame, when the activity which is used as that time-measurer has only been proven to be stable for one month.
fdrake November 21, 2017 at 00:38 #126007
Reply to Metaphysician Undercover

I think you're mistaking one measurement for another. Can you cite the passage?
Metaphysician Undercover November 21, 2017 at 01:01 #126010
Reply to fdrake
One measurement for another? What do you mean by that?
This is from the first article you referred to. The introduction, I believe.

"Furthermore, two independent Sr clocks are compared
and they agree within their combined total uncertainty over a period of one month.

...

When the SrI and SrII clocks were compared over a period of one month, we found their frequency difference to be nSrII – nSrI = -2.8(2)×10-17, well within their combined uncertainty of 5.4×10-17."

fdrake November 21, 2017 at 01:07 #126012
They ran for a period of a month, and their fractional frequency difference was 2.8 × 10^-17, well within the combined uncertainty. That doesn't mean it's only proven to be stable for a month. Quite the contrary: the error accrued in a month is so low that it's negligible.
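For a sense of where figures like "1 second of error in 100 million years" come from, here is a rough, hypothetical back-of-envelope conversion from a fractional frequency uncertainty to accumulated drift (the 3×10^-16 figure is an assumed caesium-standard-like value for illustration, not taken from the papers):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # roughly 3.156e7

def years_to_drift_one_second(fractional_uncertainty):
    """Years needed for a clock with the given fractional frequency
    uncertainty to accumulate one second of error."""
    return 1.0 / (fractional_uncertainty * SECONDS_PER_YEAR)

# An assumed caesium-like fractional uncertainty of 3e-16 gives
# roughly 100 million years per second of drift.
years = years_to_drift_one_second(3e-16)
```

The point of the conversion is that the quoted "1 second in 100 million years" is not a separate measurement over 100 million years; it is the monthly fractional error re-expressed on a human-friendly scale.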
TheMadFool November 21, 2017 at 20:04 #126165
Quoting noAxioms
Read the links fdrake posted. They answer exactly this question. At the sort of accuracy they're talking, two clocks would need to be in exactly the same environment. Put them in adjacent parking spaces and the difference in latitude will get them out of sync.


There doesn't seem to be a law that clearly demonstrates true regularity of any physical process. Every clock is imperfect. All we've done is postponed the event when our clocks will accumulate enough error to be noticeable. While this may be acceptable in everyday life, measured in seconds, minutes, hours, days, months, or years, we can't ignore it in doing science, where accuracy is vital.

Quoting tom
A physical process provides the definition of the second, the accuracy relates to the technology we have with which to measure that physical process.


The physical process has to be regular. In my OP I mentioned how this is ''less'' of a problem with other quantities like length, mass, and volume, because we have a standard whose state has been specified. With time it's different, because we can never be sure of the regularity of a timepiece. We can't be 100% certain that one period of a cesium atom takes the same time as the next.

fdrake November 21, 2017 at 21:11 #126174
Reply to TheMadFool

There doesn't seem to be a law that clearly demonstrates true regularity of any physical process. Every clock is imperfect. All we've done is postponed the event when our clocks will accumulate enough error to be noticeable. While this may be acceptable in everyday life, measured in seconds, minutes, hours, days, months, or years, we can't ignore it in doing science, where accuracy is vital.


You're always going to have measurement error in experiments. An error of 5.4×10^-17 in a second is ridiculously precise. As the optical lattice clock paper noted at the end, this level of precision allows all kinds of new experiments. Demanding zero measurement error before an experiment can demonstrate anything has the opposite effect of 'accuracy is vital for the progress of science', since it completely undermines every single experiment ever done.
tom November 21, 2017 at 22:32 #126178
Quoting TheMadFool
The physical process has to be regular. In my OP I mentioned how this is ''less'' of a problem with other quantities like length, mass, and volume, because we have a standard whose state has been specified. With time it's different, because we can never be sure of the regularity of a timepiece. We can't be 100% certain that one period of a cesium atom takes the same time as the next.


In the case of the atomic clock, there is no regular process. The clock is tuned to a constant of a physical system, and the second is thus DEFINED. The only "regularity" is that all the atoms have the same physical property, which, since we know they are indistinguishable, is a non-issue.

Metaphysician Undercover November 22, 2017 at 01:37 #126191
Quoting fdrake
They ran for a period of a month, and they got out of phase by 2.8 x 10^-17 seconds. That doesn't mean it's only proven to be stable for a month. Quite the contrary, the error is so low in a month that it's negligible.


Right, for a period of one month, the error was negligible. This means that the physical activity remained very stable for that one month period. It has been proven to be constant for a month. How do we know that in 10, 100, or 1000 years, that activity will not change?
TheMadFool November 22, 2017 at 03:55 #126211
Quoting tom
In the case of the atomic clock, there is no regular process. The clock is tuned to a constant of a physical system, and the second is thus DEFINED. The only "regularity" is that all the atoms have the same physical property, which, since we know they are indistinguishable, is a non-issue.


Regularity is critical in all measurements. Consider a student's ruler. If the centimeter markings are spaced differently (they should be spaced exactly the same length apart) then the ruler is "broken". Similarly, if the seconds ticked off by a clock are of different "lengths", then time measurement would be pointless.

Quoting fdrake
You're always going to have measurement error in experiments. An error of 5.4×10^-17 in a second is ridiculously precise. As the optical lattice clock paper noted at the end, this level of precision allows all kinds of new experiments. Demanding zero measurement error before an experiment can demonstrate anything has the opposite effect of 'accuracy is vital for the progress of science', since it completely undermines every single experiment ever done.


Yes, perfection in measurement is impossible. Length units like the cm and m are also imperfect, but the issue with time is slightly different.

With length, we can take an object, specify the temperature, its composition and other parameters and then define the unit of length (as was done before the metric system assumed its present form). We can then easily check all rulers to this particular length.

Time, on the other hand, is slightly different. A unit like the second can only be defined by a periodic change that has to be regular, just as for length. But without a timepiece that is already regular, we can't determine whether the periodic phenomenon we're using to measure time is regular or not. See...?
fdrake November 22, 2017 at 05:11 #126224
Reply to Metaphysician Undercover

Don't you see the distinction between:

The clock only ran for a month, so we only know its error over a month.
The clock ran for a month and had an error of 5.4×10^-17 in that period.

What allows the extrapolation of the error - and thus statements like '1 second in 100 million years' - is that the clock had a certain error which accrued over a month. The measurement error precisely gives 'how much it changes over time'. If the clock degraded, the error would change. The degradation would occur in the instruments of measurement - what tracks which quantum state the clock is in - not in the oscillations between two quantum states; the latter's variation is understood anyway (like variations due to gravity).

Time, on the other hand, is slightly different. A unit like the second can only be defined by a periodic change that has to be regular, just as for length. But without a timepiece that is already regular, we can't determine whether the periodic phenomenon we're using to measure time is regular or not. See...?


Ok, how do you account for the ability to assign measurement errors to clocks? What does your skepticism actually do?
BlueBanana November 22, 2017 at 06:46 #126232
We can't know the length of an object is regular either.
tom November 22, 2017 at 10:28 #126277
Quoting TheMadFool
Regularity is critical in all measurements. Consider a student's ruler. If the centimeter markings are spaced differently (they should be spaced exactly the same length apart) then the ruler is "broken". Similarly, if the seconds ticked off by a clock are of different "lengths", then time measurement would be pointless.


The SI unit of length is defined in terms of time.

The SI unit of time is defined in terms of a property of matter.
Metaphysician Undercover November 22, 2017 at 21:17 #126398
Quoting fdrake
What allows the extrapolation of the error - and thus statements like '1 second in 100 million years' - is that the clock had a certain error which accrued over a month. The measurement error precisely gives 'how much it changes over time'.


The measurement error does not give "how much it changes over time". It gives how much it changed over one specific month of time. To conclude that it tells us how much the change will be a million years from now is faulty logic, because not all the possible variables are known. It's like saying that if something doesn't change in a day, then it will be the same for a hundred years. But we do not know what could happen in the future to change the rate of the activity which is being measured today.

The conclusion requires the premise that the activity which is being measured for that month will continue to be, as it was for that month, for a hundred million years. And this is an unproven premise, therefore unsound.

noAxioms November 22, 2017 at 22:24 #126410
Quoting TheMadFool
There doesn't seem to be a law that clearly demonstrates true regularity of any physical process.
Yes there is such a law, and it is used in the articles linked. They've demonstrated the accuracy of some clock to X digits, and not by using a more accurate one.
You seem to just want to deny any answer to your query. So what's your purpose in asking then?
fdrake November 22, 2017 at 22:42 #126412
Reply to Metaphysician Undercover

The laws of physics have been shown to operate over all observed parts of the universe - and thus further back in time than that. It isn't a stretch to assume that, if no one destroys the clock or the measuring mechanism, or turns it off, the process operating within it that measures time will keep that error rate.
Metaphysician Undercover November 23, 2017 at 03:03 #126460
Quoting fdrake
The laws of physics have been shown to operate over all observed parts of the universe - and thus back in time more than that.


You should read some material by the physicist Lee Smolin, specifically "Time Reborn". The laws of physics have been proven to be reliable only in the human environment, under very specific controlled experimental conditions. And this represents a very minuscule part of the overall spatial and temporal expanse of the universe. There is no reason why we should believe that our laws are applicable in these distant regions.

Quoting fdrake
It isn't a stretch to assume if no one destroys the clock or the measuring mechanism, or turns it off, that the process operating within it that measures time will have that error rate.


Yes it is a stretch. From one month to 100 million years is a huge stretch, 1200 million times. By that same stretch the height of my body could circle the earth about fifty times.
TheMadFool November 23, 2017 at 03:25 #126463
Quoting noAxioms
Yes there is such a law, and it is used in the articles linked. They've demonstrated the accuracy of some clock to X digits, and not by using a more accurate one.
You seem to just want to deny any answer to your query. So what's your purpose in asking then?


My problem with the existence of a law that demonstrates the regularity of a periodic process is that time, space, and other physical quantities are more fundamental than any law of nature. First comes measurement, whether of time, mass, volume, distance, etc., and only then can relationships between these quantities be seen.

My physics isn't that good, but look at the Wikipedia article on the pendulum. The period, supposed to be regular, is T = 2π√(L/g), where L = length of the pendulum and g is the acceleration due to gravity. As you can see, before we can derive this law we need to know T, L and g. In other words, we already need a clock to measure time (T). How do we know that that clock is keeping accurate time?
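The small-angle pendulum relation quoted above can be sketched numerically (a minimal illustration; the 9.81 m/s^2 value for g is the usual sea-level approximation, not from the post):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A one-metre pendulum has a period of about 2 seconds, which is why
# the historical "seconds pendulum" is roughly a metre long.
t = pendulum_period(1.0)
```

Note how the formula supports the point being made: T appears only as an output, so using the pendulum as a clock already presupposes an independent way of timing T to verify the relation.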

Quoting tom
The SI unit of time is defined in terms of a property of matter.


Can you have a look at my reply to noAxioms.
fdrake November 23, 2017 at 07:00 #126484
Reply to Metaphysician Undercover

How does the argument go then?
tom November 23, 2017 at 10:22 #126536
Reply to TheMadFool

Quoting TheMadFool
My problem with the existence of a law that demonstrates the regularity of a periodic process is that time, space, and other physical quantities are more fundamental than any law of nature. First comes measurement, whether of time, mass, volume, distance, etc., and only then can relationships between these quantities be seen.


That is quite a startling claim given that the relationships between mass and energy, energy and wavelength, mass and velocity, length and velocity, time and velocity, ... ... (I could go on and on) were all discovered in Theory before any measurement or reason for measurement could be conceived.

So no, it is always theory first.

Quoting TheMadFool
My physics isn't that good, but look at the Wikipedia article on the pendulum. The period, supposed to be regular, is T = 2π√(L/g), where L = length of the pendulum and g is the acceleration due to gravity. As you can see, before we can derive this law we need to know T, L and g. In other words, we already need a clock to measure time (T). How do we know that that clock is keeping accurate time?


You do not need to know what T, L and g are. L and g can take any values, and we can still calculate T. If such a physical system existed, it would provide a perfect clock. All we would need to do is measure g and use that to DEFINE L and T. But of course, no such physical system exists.

But atomic transitions do exist, and the energy of transition can be measured. Because theory tells us the relationship between energy and frequency, and that transitions are induced in atoms when subjected to EM radiation of that frequency, we may DEFINE the second via that frequency.
fdrake November 23, 2017 at 17:31 #126578
Reply to Metaphysician Undercover

I did a bit of background reading on Smolin. From what I understand he advocates the view that the laws of physics change over time. I'm sympathetic with this view, since the different regimes of energy distribution at different stages of the universe's development give rise to markedly different topologies of physical law. By this I mean, at the start of the universe there is theorised to be a unification of the electroweak, gravitational and strong forces, and spatio-temporal variations in the ambient levels of energy unfold a universe with distinct forces and distinct length-scales for their activity.

However, the universe will still be in the same regime of energy distribution for billions of years, and there is no good reason to believe that the laws will change in this time. The mere possibility of the laws changing evidently does not impede scientific discovery and theory-forming over the different stages of the universe's chronology. I had a similar discussion with Rich once; the accuracy of measuring red-shift in photons coming to Earth gives excellent evidence of the constancy of the universe over very large time scales.

Indeed, approximately 10^13 years need to pass (the universe is currently 13.8 × 10^9 years old) in order for the regime of distribution of energy to change in any meaningful way. That's about 1000 times longer than the current age of the universe...
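The timescale comparison above checks out as a rough ratio (figures taken from the thread itself, order-of-magnitude only):

```python
AGE_OF_UNIVERSE_YEARS = 13.8e9   # current age cited in the thread
REGIME_CHANGE_YEARS = 1e13       # rough figure quoted for a meaningful change

# The ratio is on the order of 10^3, i.e. roughly 1000 times the
# current age of the universe.
ratio = REGIME_CHANGE_YEARS / AGE_OF_UNIVERSE_YEARS
```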

Any account of the study of physical phenomena must allow for the ability of physics to probe the beginning and the end of the universe, the near instantaneous (10^-18 of a second) to the universal (10^100 years) time scales with reasoned argument and mathematical precision. It must also allow the ability to assign errors and upper/lower bounds to these predictions and measurements.

If your metaphysical speculation is inconsistent with the sheer scope of our ability to study the universe, so much the worse for your metaphysical speculation.

Edit: another way of thinking about it - the contingency of physical law doesn't do anything to the laws revealed, other than requiring accounts for their formation and end. (and thus replacement by other laws)

Edit 2: and why would the process of measurement embodied by the optical lattice or caesium clock change anyway?
Metaphysician Undercover November 23, 2017 at 21:47 #126617
Quoting fdrake
How does the argument go then?


Which argument? The argument is yours. You are claiming that by watching something for a month, you can say something about it which will be true in 100 million years from now. If you think that this is really the case, then show me your argument.

Quoting fdrake
However, the universe will still be in the same regime of energy distribution for billions of years, and there is no good reason to believe that the laws will change in this time.


OK, I see you've made an argument now. However, I do not believe that cosmologists currently have an adequate understanding of the "regime of energy distribution" of the universe. That's why they posit "dark energy". And your argument is based on this unsound premise. Therefore there really is good reason to believe that the laws will change in this time. We will change our descriptive laws as we come to know the universe better, especially regarding things like spatial expansion. We will have to change the laws in order to account for this new understanding, and it is likely that we will come to realize that what we say about an activity for a month right now will not be the same concerning that activity in 100 million years.

fdrake November 23, 2017 at 22:36 #126622
Reply to Metaphysician Undercover

What scientists believe about dark energy has absolutely no bearing on whether the laws of the universe will change in a given time period. Coming to know more about the laws of the universe may reveal the reason for all the 'missing matter', but this novel disclosure has no bearing on whether the laws will change - only what the laws are believed/known to be. With that in mind:

Can you make a positive argument that the laws of the universe will change within 100 million years? Can you establish that the measurement process going on inside an atomic clock or an optical lattice clock will degrade? When will it degrade? How will it degrade?

Also, it just doesn't follow that, because an estimate of something was derived from a month's worth of science, any error measurement derived from it is curtailed to a month. Such a temporal localisation of knowledge removes the validity of all measurements, not just temporal ones. You read a thermometer - the thermometer's measurement is only observed now - therefore we don't know what temperature it is when we look away.
Metaphysician Undercover November 24, 2017 at 03:09 #126676
Quoting fdrake
What scientists believe about dark energy has absolutely no bearing on whether the laws of the universe will change in a given time period. Coming to know more about the laws of the universe may reveal the reason for all the 'missing matter', but this novel disclosure has no bearing on whether the laws will change - only what the laws are believed/known to be. With that in mind:


This is contrary to your stated argument though. You stated that the laws of physics will not change because the universe will be "in the same regime of energy distribution for billions of years". The existence of dark energy is very clear evidence that such a claim cannot be justified. And if it cannot be justified that the universe will be in the same regime of energy distribution for an extended period of time, then, that the laws of physics will not change, likewise is not justified.

Quoting fdrake
Can you make a positive argument that the laws of the universe will change within 100 million years? Can you establish that the measurement process going on inside an atomic clock or an optical lattice clock will degrade? When will it degrade? How will it degrade?


You seem to misunderstand my argument. I am not arguing that a change will happen, I am arguing that it is possible, because that is all that is necessary to discredit your insistence that these activities will stay the same for 100 million years. Your claim is that such a change is impossible, because if you recognized it as possible you would not insist that these activities would necessarily stay the same for 100 million years.
fdrake November 24, 2017 at 18:53 #126755
Reply to Metaphysician Undercover

I understood you to be making the claim that the laws of the universe can possibly change. I'm not making the claim that it's impossible for them to change. I'm making the claim that they won't change in any meaningful way for 1000 times longer than the current age of the universe.

Why would you think that because it's possible for the universe's laws to change, that they will?
Metaphysician Undercover November 24, 2017 at 21:41 #126840
Quoting fdrake
I understood you to be making the claim that the laws of the universe can possibly change. I'm not making the claim that it's impossible for them to change. I'm making the claim that they won't change in any meaningful way for 1000 times longer than the current age of the universe.

Why would you think that because it's possible for the universe's laws to change, that they will?


Do you know what the "laws" of the universe are? These laws are the descriptions which human beings have made in their attempts to understand the universe. The laws are made by human beings, through inductive reasoning; they are generalities produced from observing regularities in nature. We describe the various activities which we recognize and understand in terms of laws. In the last three thousand years, the laws have changed dramatically, because our understanding of the universe has changed dramatically. Between the time of Isaac Newton and now, the laws have changed substantially. Why would you ever claim that the laws will not change in a period of time 1000 times longer than the current age of the universe?

Quoting fdrake
Why would you think that because it's possible for the universe's laws to change, that they will?


As I said, there is very much concerning the universe which we do not understand: spatial expansion, dark energy, and dark matter, for example. When we start to get a better understanding of these aspects, we will have to change the laws again.
fdrake November 24, 2017 at 22:08 #126862
Reply to Metaphysician Undercover

Before Newton thought of ma=mg implies a=g, objects with little difference in air resistance fell in the same way. Before Schrodinger's equation, atoms were already probability clouds. Before the understanding of planetary accretion, the Earth formed. Reality behaves in a manner accordant with discovered physical laws because those laws describe what happens, and if they have errors - their description contradicts observation - they are expanded, discarded or re-interpreted. The representation of a pattern in nature in scientific terms has a certain correspondence with what happens, because that's precisely what it means for something to be a physical law. They are discovered through human activity; that doesn't mean they are constrained to human activity.

The claim that the laws won't change in that time is based on 1) that the current understanding of things is basically correct and 2) that this current understanding entails that the universe will be much the same for that time period.

Even if science is wrong, that doesn't mean nature will change. Nature does not change to accommodate the beliefs of scientists. The scientific description of patterns in nature may change when previous descriptions are found incorrect or novel phenomena are studied.

You need to establish not only that the laws of nature can change - in the strong sense, that nature itself will change -, but that it will change in a manner that makes the error estimates for the optical clocks incorrect. As yet you've not. So tell me, why will the measurement process inside of the clocks change?
Metaphysician Undercover November 24, 2017 at 22:37 #126879
Quoting fdrake
The claim that the laws won't change in that time is based on 1) that the current understanding of things is basically correct and 2) that this current understanding entails that the universe will be much the same for that time period.


OK, then how do you make 1) consistent with dark energy and dark matter? These are enormous features of the universe which cosmologists admit that they do not understand. How can you say that the current understanding is correct, when the consequence of that understanding is the need to posit all of this mysterious substance?

Quoting fdrake
Even if science is wrong, that doesn't mean nature will change. Nature does not change to accommodate the beliefs of scientists. The scientific description of patterns in nature may change when previous descriptions are found incorrect or novel phenomena are studied.


If your claim is that the universe is the way that it is, regardless of how we understand it, then how is this relevant? What we are discussing is our capacity to measure the universe, specifically to measure time. So the fact that time is how time is, is irrelevant to our discussion of our efforts to measure time.
fdrake November 24, 2017 at 22:56 #126894
Reply to Metaphysician Undercover

OK, then how do you make 1) consistent with dark energy and dark matter? These are enormous features of the universe which cosmologists admit that they do not understand. How can you say that the current understanding is correct, when the consequence of that understanding is the need to posit all of this mysterious substance?


Basically correct. If you want to talk about dark energy, you have to be able to accept solutions to Einstein's field equations as correct and the web of theory and experiment around them. Dark energy only makes sense as a concept on the background of the acceleration of the expansion of the universe; and is contained in a few explanations of it.

If your claim is that the universe is the way that it is, regardless of how we understand it, then how is this relevant? What we are discussing is our capacity to measure the universe, specifically to measure time. So the fact that time is how time is, is irrelevant to our discussion of our efforts to measure time.


Your argument so far has been based on an equivocation between the beliefs of scientists and the practice which generates them (usually called science), and the phenomena they study (usually called nature). If a pattern is observed in nature, and it becomes sufficiently theorised and experimentally corroborated, it will be a scientific law. Note that nature behaved that way first; the scientists adjusted their beliefs and inquiries to track the phenomenon.

You want to have it so that the changes in the beliefs of scientists over the ages imply that nature itself has changed over that time. This is a simple category error. You keep attempting to justify the idea that assigning a small measurement error to an optical lattice clock is unjustified because the laws of nature possibly will change. Besides being an invalid argument - the laws of nature would have to change, not just possibly change, in order to invalidate the current error analysis of the clock - you're using the above equivocation to justify it.

You thus have to show that the laws of nature (read - how nature behaves) will change in a way that invalidates the error analysis of the clock within 100 million years.

It's very suspicious to me that this is something you could have understood by reading the papers thoroughly and researching the things you didn't know to a sufficient standard to interpret the results; but instead you're attempting to invalidate a particular error analysis of a clock either by the cosmological claim that the way nature operates will change in some time period, or by undermining the understanding that scientists have of reality in general. Engage the papers on their own terms, show that the laws of the universe will change (not will possibly change), or stop seeing nails because you have a hammer!
Metaphysician Undercover November 25, 2017 at 13:58 #127032
Quoting fdrake
Basically correct. If you want to talk about dark energy, you have to be able to accept solutions to Einstein's field equations as correct and the web of theory and experiment around them. Dark energy only makes sense as a concept on the background of the acceleration of the expansion of the universe; and is contained in a few explanations of it.


Your claim of "basically correct" is nothing more than an assertion. So you support your assertion that the measured activity of the caesium clock will be precisely the same in 100 million years as it is today, with the assertion that the laws of the universe, as stated by human beings today, are "basically correct". What does "basically" mean here? Does it mean that some parts of our understanding may be incorrect, but the part which relates the activities of the caesium atom to a hundred million years of time passage is precisely correct?

Quoting fdrake
Your argument so far has been based on an equivocation of the following: the beliefs of scientists and the practice which generates them; usually called science, and the phenomena they study; usually called nature. If a pattern is observed in nature, and it becomes sufficiently theorised and experimentally corroborated, it will be a scientific law. Note that nature behaved that way first, the scientists adjusted their beliefs and inquiries to track the phenomenon.


I don't see any support for your claim that I have equivocated. I see, however, that you have fallen victim to two distinct unsound premises. First, you assume that if an activity of nature has been observed to follow a pattern, then the description of this activity is necessarily correct. Second, you believe that one can proceed without that firm (correct) understanding of the activity, with just an understanding of the pattern, to claim that the pattern will remain the same for an extended period of time.

Quoting fdrake
You want to have it so that the changes in the beliefs of scientists over the ages imply that nature itself has changed over that time.


This is a misunderstanding. What I am saying is that changes in the beliefs of scientists, over time, indicate that what scientists believe at any particular time is not the "correct" understanding. You claim that what scientists believe now is all of a sudden "basically correct", despite the fact that this has never been the case in the past. On what principles do you claim such a change? This is totally inconsistent with what we have observed: the beliefs of the scientific community change. How do you support this claim, that scientists now have a correct understanding?

Quoting fdrake
You keep attempting to justify the idea that assigning a small measurement error to an optical lattice clock is unjustified because the laws of nature possibly will change. Besides being an invalid argument - the laws of nature would have to change, not just possibly change, in order to invalidate the current error analysis of the clock - you're using the above equivocation to justify it.


Now you are equivocating. What do you mean by "laws of nature" here? As I said already, "laws" refer to descriptions produced by human beings. We know the laws will change because we know that human beings do not have a complete and precise understanding of the universe. So it is not the case that the laws of nature will possibly change, it is the case that they will change.

You seem to be equivocating from the human descriptions to the activities described. But we are talking about the human descriptions here, the laws, and their accuracy. You are assuming that the description is necessarily correct. That's your first unsound premise.

Quoting fdrake
You thus have to show that the laws of nature (read - how nature behaves) will change in a way that invalidates the error analysis of the clock within 100 million years.


This is untrue. To make my argument I don't have to show anything about "how nature behaves". What I need to show is how human beings behave. We are discussing the human activity of measurement, not "how nature behaves". This is where you are losing track of the argument: you assume the argument is about how nature behaves, not about how human beings behave. Human beings have demonstrated over and over again throughout history that at one time they believed they had a correct and accurate measuring device, only to find out later that it wasn't as accurate as they thought. This is especially true in the case of measuring time.

Quoting fdrake
It's very suspicious to me that this is something you could have understood by reading the papers thoroughly and researching the things you didn't know to a standard sufficient to interpret the results, but instead you're attempting to invalidate a particular error analysis of a clock either by the cosmological claim that the way nature operates will change in some time period, or by undermining the understanding that scientists have of reality in general. Engage the papers on their own terms, show that the laws of the universe will change (not just that they possibly will), or stop seeing nails because you have a hammer!


As I said, the laws of the universe will change. Human knowledge of the universe is incomplete; things such as spatial expansion, dark energy, and dark matter provide indisputable evidence of this fact. If you are not ready to face this fact, then that is not my problem, it's yours.





fdrake November 26, 2017 at 14:14 #127433
Reply to Metaphysician Undercover

Help me out a bit.

(1) Beliefs about nature and methods for deriving them are fallible.
(2) The laws of nature are non-necessary.
(3) The laws of nature can change.
(4) The laws of nature will change.
(5) The laws of nature only describe properties of thought.
(6) Measurement error is a property of human engagement with a phenomenon.
(7) Historically, ways of measuring time have been less accurate than described.
(8) Scientific beliefs can change.
(9) An observed pattern of nature cannot be assumed to arise or perpetuate in the controlled conditions which generate it.
(10) Scientific knowledge is incomplete.

Are all elements of your post.

Can you tell me how their combination entails:

(11) The measurement error analysis of the caesium-133 clock and the optical lattice clock are wrong.

?

Edit: if I've missed a vital element, please add it to the list!
Metaphysician Undercover November 26, 2017 at 15:07 #127447
Quoting fdrake
Can you tell me how their combination entails:

(11) The measurement error analysis of the caesium-133 clock and the optical lattice clock are wrong.


I didn't say that the measurement error analysis of the caesium-133 clock is wrong. I said your extrapolation is wrong. The measurement error is for a one month period. You extrapolate this to a 100 million year period. You base this extrapolation on the following principle: "the universe will still be in the same regime of energy distribution for billions of years". My argument is that the uncertainties involved with the concept of dark energy indicate that such a claim is unsound, and therefore your extrapolation is unprincipled.
fdrake November 26, 2017 at 15:09 #127448
Reply to Metaphysician Undercover

Ok. If the measurement error analysis in the paper isn't wrong, that means the 1 second in 100 million years isn't wrong, since it corresponds to an error rate of about 3 * 10^-16, which was derived within the month. The unit of the error rate is seconds per second... take the reciprocal, voila!
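For anyone who wants to check the arithmetic: the "take the reciprocal" step is a unit rescaling, sketched below. The 3 * 10^-16 figure is the approximate error rate quoted in this thread, not a value taken from the paper itself.

```python
# Sanity check of the reciprocal step, using the thread's approximate
# error rate of ~3e-16 seconds of error per second of operation.
error_rate = 3e-16                       # seconds of error per second run
seconds_per_year = 365.25 * 24 * 3600    # Julian year, ~3.156e7 seconds

# Reciprocal: how many seconds the clock runs before 1 s of error accrues.
seconds_per_second_of_error = 1 / error_rate                     # ~3.33e15 s
years_per_second_of_error = seconds_per_second_of_error / seconds_per_year

print(round(years_per_second_of_error / 1e6), "million years")   # ~106
```

So a 3e-16 seconds-per-second rate works out to roughly one second of accumulated error per hundred million years, which is where the headline figure comes from.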
tom November 26, 2017 at 15:30 #127450
Quoting fdrake
Ok. If the measurement error analysis in the paper isn't wrong, that means the 1 second in 100 million years isn't wrong. Since that corresponds to an error rate of about 3 * 10 ^ -16, which was derived within the month,


If, for the sake of argument, we accept your rash extrapolation into the future, then the implication is that the clock is accurate to 1 year in every 3.154e+15 years. I'm pretty sure there will be some changes in that time, and who wants a clock that's wrong by a whole year?
fdrake November 26, 2017 at 15:38 #127451
Reply to tom

Would you be happy with 6*10^-16 seconds per 2 seconds? How about 9*10^-16 per 3? You can scale the error like that all you like, it still represents the same error rate. If you ran the clock for 3*10^15 years, of course you're going to get an error on it: what matters is that it's the same error as predicted by the analysis of the process.

Also, since you know the error rate, you know when it's going to have accumulated a second of error, so it can be re-calibrated. (subtract a second from the display, or a year from the display...)
Metaphysician Undercover November 26, 2017 at 15:47 #127452
Quoting fdrake
If the measurement error analysis in the paper isn't wrong, that means the 1 second in 100 million years isn't wrong. Since that corresponds to an error rate of about 3 * 10 ^ -16, which was derived within the month. The unit of the error rate is in seconds per second... Take the reciprocal, voila!


You don't seem to understand the issue fdrake. You haven't disclosed your principle of extrapolation, and this is what validates your extrapolation. Suppose I can eat one hot dog in thirty seconds. This is a rate of one hot dog per thirty seconds. You extrapolate and claim "therefore I can eat 120 hot dogs in one hour". Do you see the problem with this type of extrapolation? You need a principle whereby you can assert that the rate which was obtained in the short term will continue over the long term. Without that principle your extrapolation is just a baseless assertion.
tom November 26, 2017 at 16:01 #127456
Quoting fdrake
Would you be happy with 6*10^-16 seconds per 2 seconds? How about 9*10^-16 per 3? You can scale the error like that all you like, it still represents the same error rate


It seems to me that what you are implying is that expressing the extraordinary accuracy of the atomic clock in terms of time scales that a non-technical audience might better understand is not the same as claiming the clock will still exist in 100,000,000 years?
fdrake November 26, 2017 at 16:24 #127459
Reply to Metaphysician Undercover

It isn't an extrapolation, it's a rounding of the error rate translated to a timescale that denotes the sheer precision of the measurement to a lay audience. See tom's post.

Reply to tom
It seems to me that what you are implying is that expressing the extraordinary accuracy of the atomic clock in terms of time scales that a non-technical audience might better understand is not the same as claiming the clock will still exist in 100,000,000 years?


Precisely. There's one necessary and sufficient condition for the clock not to work in accordance with that error rate. That's for the process in the clock that measures the oscillations to change. Not the possibility of its change or the necessity of its change - that it will change.

Edit: or alternatively that the error analysis in the paper isn't accurate!

Edit 2: making this explicit, if the clock stopped working entirely, of course it wouldn't provide a precise measurement of the second. If it stopped working in a more subtle way, say a variation in the laws of physics relevant to the functioning of the clock, then it may stop working entirely or degrade in performance. Otherwise, so long as it functions in accordance with the set up in the paper, it will have that error rate.

Metaphysician Undercover November 26, 2017 at 16:32 #127464
Quoting fdrake
It isn't an extrapolation, it's a rounding of the error rate translated to a timescale that denotes the sheer precision of the measurement to a lay audience. See tom's post.


It is an extrapolation. Your changing of the scale, "translating" the "timescale" is by definition an extrapolation. You proceed from known values to estimate values which lie outside the range of the known. That is extrapolation.
fdrake November 26, 2017 at 16:35 #127466
Reply to Metaphysician Undercover

Noo.... An extrapolation is an extension of an analysis outside the data range for which it was estimated. Say the error rate is K seconds per second, then you can scale K by a constant to obtain an error rate in terms of years, trillions of years, a googol of years. This is estimating a parameter then expressing the value of that parameter on a different numerical scale.

You may as well say that it's an extrapolation to go from 1 femtogram to 2.20462e-18 pounds!
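The femtogram example can be checked the same way. The conversion below uses the standard definition of the pound (453.59237 grams exactly) and is only meant to illustrate the point that re-expressing a quantity on another scale is arithmetic, not extrapolation.

```python
# 1 femtogram re-expressed in pounds: same quantity, different scale.
GRAMS_PER_POUND = 453.59237   # exact, by definition of the avoirdupois pound

femtogram_in_grams = 1e-15
femtogram_in_pounds = femtogram_in_grams / GRAMS_PER_POUND

print(f"{femtogram_in_pounds:.5e} lb")   # ~2.20462e-18 lb
```

Nothing about the mass itself is being predicted or extended; only the numerals change.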
Metaphysician Undercover November 26, 2017 at 16:41 #127467
Quoting fdrake
An extrapolation is an extension of an analysis outside the data range for which it was estimated.


Check your facts. Any extension outside the range of known facts is an extrapolation. Your "estimate" is already an extrapolation.

Besides, if you admit that your claim is based in estimation, you've forfeited any claim to necessity in your conclusion.
fdrake November 26, 2017 at 16:56 #127471
Reply to Metaphysician Undercover

It's known that the error rate for that clock is about 3*10^-16 seconds per second. This implies the error rate is 3*k * 10^-16 seconds per k*second.

I never made any claim of the necessity of any physical law, in fact if you read through my posts you'll see that I said I was sympathetic to the view that they can change. However, that they can change doesn't entail they will change in a way that destroys the accuracy of the clock. So, tell me when they will change, and how they will change so that the accuracy of the clock is destroyed.

I also said the following to you and @tom

Edit 2: making this explicit, if the clock stopped working entirely, of course it wouldn't provide a precise measurement of the second. If it stopped working in a more subtle way, say a variation in the laws of physics relevant to the functioning of the clock, then it may stop working entirely or degrade in performance. Otherwise, so long as it functions in accordance with the set up in the paper, it will have that error rate.


The clock doesn't work with metaphysical necessity. That it works isn't conditional on the necessity of physical laws. The calculation of the error rate depends solely on the physical process that constitutes the clock and the measurements it generates. So, if the physical process were to stop - if someone took a sledgehammer to the experimental apparatus - the clock would stop. If all protons had already decayed, there couldn't be a clock. What is required to invalidate the error analysis of the clock is to show that the physical process in it will change in a manner that affects the clock, or alternatively to find an error in the paper's error calculation.

The error rate in terms of 'how many years would it take for a single second of error to accrue' is equivalent to the original 3*10^-16 ish seconds per second error rate. It is not an extrapolation. Let's look at the google definition:

extrapolation: the action of estimating or concluding something by assuming that existing trends will continue or a current method will remain applicable.

So yes, the error rate of the clock remaining the same with changing background conditions requires that the physical process that constitutes it doesn't change in a way which renders the analysis inapplicable. It isn't an extrapolation to say if nature keeps working as it does then the clock will.

Nor is it an extrapolation to translate the error rate to a different numerical scale. Saying that the clock will be there in 100 million years? That might be an extrapolation.

You want to make it an extrapolation, so tell me how and when the physical process constituting the clock will change, in a manner that makes the error analysis inadequate.
Metaphysician Undercover November 26, 2017 at 19:11 #127492
Quoting fdrake
However, that they can change doesn't entail they will change in a way that destroys the accuracy of the clock. So, tell me when they will change, and how they will change so that the accuracy of the clock is destroyed.


If you allow the possibility of change, then you cannot derive the conclusion that the error rate will stay the same for that extended period of time. The possibility of change, and the claim that the error rate will stay the same are incompatible, they are contradictory. To insist that the error rate will stay the same is to say that change is impossible.

My argument therefore, does not require proof that the rate will change. This is the point which you do not seem to grasp. You keep insisting that I need to demonstrate that the rate will change, to make my argument, but that is not the case. Because your assertion requires of necessity, that the rate will stay the same, all I need to do is to demonstrate that change is possible to refute your claim. Therefore you cannot assert that the rate will stay the same if you also state that change is possible. This is contradiction. You might want to adjust your assertion to reflect this, by saying that it is possible that the rate will stay the same. And with some evidence you might claim that it is probable that the rate will stay the same.

Quoting fdrake
Nor is it an extrapolation to translate the error rate to a different numerical scale.


You have translated the error rate to a different temporal scale, not a different numerical scale. You have gone from an error rate derived from one month of application to an extrapolated error rate of 100 millions years of application.

Quoting fdrake
It isn't an extrapolation to say if nature keeps working as it does then the clock will.


Yes, that's exactly what an extrapolation is. So this is your principle of extrapolation then: "things will continue for 100 million years, in the same way that they have done for the last month". This premise, or assumption, makes the extrapolation valid. The question is whether or not this premise is sound. My claim is that the concept of dark energy is evidence that things will probably not continue for 100 million years in the same way which they have for the last month, so the assumption is not sound. Notice that I have produced evidence, the concept of dark energy, and this evidence puts probability on my side. If you want to bring probability to your side, you need to produce some evidence yourself, and refute my claim of evidence.

fdrake November 27, 2017 at 16:19 #127851
Reply to Metaphysician Undercover

(1) The error analysis is correct.
(2) The derived error rate is approximately 3*10^-16 seconds per second.

Do these require metaphysical necessity and unchanging physical laws?
Metaphysician Undercover November 28, 2017 at 01:01 #127980
Reply to fdrake
Yes, (2) requires an unchanging physical law. The error analysis is performed over a one month period. The "error rate" is derived from that one month period. In (2) the error rate is stated as "per second", instead of as "per second of that particular one month period". So there is a generalization, that whatever occurred "per second" in that one month period will occur "per second" in every second, and this is indicated by the generality of the statement "x per second". Therefore the derived error rate of "per second" is derived from the inductive conclusion (unchanging physical law) that what occurred "per second" in this one month period will continue to be the same throughout the passing of time. Simply put, you have taken what is true for one month, "x is the case in that month", and stated it as an unchanging physical law, "x is the case", such that it is now a law for all time instead of just a description of what occurred in that month.
fdrake November 28, 2017 at 11:39 #128107
Reply to Metaphysician Undercover

Say that the radiocarbon dating of a dinosaur fossil took a month, is it then illegitimate to claim that it's more than a month old?
Metaphysician Undercover November 28, 2017 at 12:06 #128118
Radiocarbon dating suffers from the very same issue. We understand the rate of decay of C14 from our observations over a relatively short period of time. Then, we extrapolate to a much longer period of time, assuming that the rate of decay has maintained consistency over that period. There are other assumptions involved as well, such as the amount of carbon in the atmosphere. But the point being discussed here is the assumption that the rate of decay, which is observed over a short period, remains exactly the same over a long period.
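The extrapolation being described can be made explicit. Below is a minimal sketch of the standard radiocarbon age formula, using the conventional C-14 half-life of 5,730 years for illustration; the formula is valid only under the very assumption at issue here, that the decay rate measured over short periods has held constant over the whole interval.

```python
import math

C14_HALF_LIFE = 5730.0  # years, conventional value

def radiocarbon_age(fraction_remaining):
    """Age implied by the surviving fraction of C-14, assuming the
    decay rate has been constant over the entire elapsed time."""
    return -C14_HALF_LIFE * math.log2(fraction_remaining)

print(radiocarbon_age(0.5))    # 5730.0  (one half-life)
print(radiocarbon_age(0.25))   # 11460.0 (two half-lives)
```

A sample measured over weeks thus yields an age claim over millennia; the constancy assumption is what licenses the jump.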
Myttenar November 28, 2017 at 12:30 #128125
The waters muddy further with consideration of time differentials, where what we quantify as seconds appears under certain circumstances while time in another place moves at its usual pace. I did just post a thread with a postulation for a categorization of time as energy: something we know and understand, for which we have a basis for comparative study and principles of behaviour that we can apply to studying time and the behaviours of time. Besides that, it is fun to think about, even if there is a proof to refute the idea, which I am curious for; I assume someone must have had the thought before me.
fdrake November 28, 2017 at 14:49 #128157
Reply to Metaphysician Undercover

A baby is born at 10pm in New York. Someone looks at their watch. Since the measurement process took a second, we can't justifiably say the baby's been born at 10pm. When you look away from a thermometer after checking the temperature, you can't justifiably say what temperature it is. You can't justifiably say the dinosaurs were around millions of years ago. You can't date trees based off their rings. All of geological history may as well be a myth, all of evolutionary theory has to be thrown away, every single measurement or calculation ever that was done must be discarded because it can't be justified since it's an extrapolation. Measurement error analysis is impossible, every psychological experiment ever done is bunk, every piece of anecdotal evidence is in even worse standing. The fabric of our social life disappears - we can no longer learn and generalise based on our experiences.

You don't live in this world. No one does.
Myttenar November 28, 2017 at 14:56 #128160
Reply to fdrake

There was no measurement but an observation. Your argument is invalid. There is no spoon.
fdrake November 28, 2017 at 15:05 #128162
Reply to Myttenar

Observing a thermometer is observing a measurement. Observing a watch is observing a measurement. Observing a radiocarbon dating procedure is observing a measurement. Observing the number of rings on a tree and dividing it by a rate is observing a measurement. Looking back through geological time based on the stratification of soil and rock deposits is a measurement. Every psychology experiment which elicits variables from subjects is a collection of measurements. Every sequencing of genes and study of their change or population genetic calculation based on real data is a measurement.

The world is so much more realistic when you restrict the knowledge of it to anecdotal evidence, which you can't form anyway since anecdotal evidence consists in records of experience or generalisations thereof that are not confined to the same time period as their generation.

You be trolling.
Myttenar November 28, 2017 at 15:27 #128167
Good response though :)

And I think you spelled refuting wrong. No troll in it and I kinda thought the first one I replied to was trolling...but yea

I can't refute this argument so I will leave you with this. Those who want to know the meaning of life do not know they have posed the answer as a question.
fdrake November 28, 2017 at 15:33 #128170
Reply to Myttenar

The word 'refuting' doesn't appear in any of my recent posts in this thread. My last response to @Metaphysician Undercover is an attempt to detail how his position undermines our ability to know pretty much anything.

Also, stop with the chicken-caesar word salad.
Myttenar November 28, 2017 at 15:35 #128171
Yeah. Was a joke about the trolling thing nevermind. Oh and you refuted your own argument and so also, invalid.

And no chicken.
fdrake November 28, 2017 at 15:36 #128172
Reply to Myttenar

Just word salad then.
Myttenar November 28, 2017 at 15:43 #128174
Indeed someone is eating their words ;)

You do realize we can't know everything I hope..

On a side note. Time is energy.
Metaphysician Undercover November 28, 2017 at 22:02 #128251
Quoting fdrake
A baby is born at 10pm in New York. Someone looks at their watch. Since the measurement process took a second, we can't justifiably say the baby's been born at 10pm. When you look away from a thermometer after checking the temperature, you can't justifiably say what temperature it is. You can't justifiably say the dinosaurs were around millions of years ago. You can't date trees based off their rings. All of geological history may as well be a myth, all of evolutionary theory has to be thrown away, every single measurement or calculation ever that was done must be discarded because it can't be justified since it's an extrapolation. Measurement error analysis is impossible, every psychological experiment ever done is bunk, every piece of anecdotal evidence is in even worse standing. The fabric of our social life disappears - we can no longer learn and generalise based on our experiences.


We're talking about precision in measurement, not whether or not we should discard extrapolations which may have some inaccuracy in precision. I am not arguing that we ought to throw away the measurements of the atomic clock, just because they may not be as precise as you think they are. I am arguing that the clock may not be as precise as you claim it to be. Do you realize that if one very small factor is overlooked, then that factor is multiplied over and over again in extrapolation?

But here again, you extrapolate using a principle which may or may not be correct. Your principle here appears to be that if it is possible that a measurement may not be as precise as some believe it is, it ought to be discarded. So you extrapolate and say that all of our knowledge ought to be discarded because it may not be as precise as some people think it is. It's not the measurement that ought to be discarded, it's the belief that the measurement is more precise than the degree of precision which is justified, that ought to be discarded.

fdrake November 28, 2017 at 22:23 #128257
Reply to Metaphysician Undercover

What reduces the accuracy of the measurement from its purported value?
Metaphysician Undercover November 28, 2017 at 22:31 #128262
Reply to fdrake
How many times do I have to repeat myself? The inaccuracy is not in the rate derived from a month of observation, the average of x amount per second, for one month. The inaccuracy is in the claim that what was for one month will continue to be for 100 million years.
fdrake November 28, 2017 at 22:35 #128264
Reply to Metaphysician Undercover

Would you agree that while the clock is going, its error rate will be as stated?
Metaphysician Undercover November 29, 2017 at 01:00 #128296
Reply to fdrake
No, I wouldn't agree to that. Try this explanation as to why I don't agree. The caesium clock uses a frequency of 9,192,631,770 Hz. This means 9,192,631,770 times per second. "Second" here is derived from the earth's orbit around the sun. So this number represents a relationship between the radiation of a caesium atom and the earth's orbit around the sun. Until we understand why that relationship is exactly as stated, we cannot validly claim to know that the relationship will continue to be as stated.
fdrake November 29, 2017 at 12:42 #128525
Reply to Metaphysician Undercover

How much less precise is the error than stated?
Metaphysician Undercover November 29, 2017 at 13:30 #128560
Reply to fdrake
How would I know? If I knew, then those measuring would know, and you wouldn't be making the claims that you do. Remember, my claim is that error in the extrapolation is possible, and therefore the claim you make, that the extrapolation will be the case, is not justified until you provide proper principles to back up this claim. Further, I argue that such claims about accurate measurement of temporality in the past, have proven to be wrong. And, the concept of dark energy indicates that we do not have a complete understanding of temporality. This evidence supports my claim that not only is the extrapolation possibly wrong, it is probably wrong. I do not claim to know anything about what the error actually is.
tom November 29, 2017 at 15:09 #128617
fdrake November 29, 2017 at 17:04 #128632
Reply to Metaphysician Undercover

This evidence supports my claim that not only is the extrapolation possibly wrong, it is probably wrong. I do not claim to know anything about what the error actually is.


:o

I think this has gone on long enough.
Metaphysician Undercover November 29, 2017 at 18:04 #128639
Quoting tom
You need to watch this.


Interesting, but the point is this. The reason why the frequency is precisely 9,192,631,770 times per second, rather than 5 billion, 10 billion, or some other arbitrary number, is that the second was already defined in relation to the year. So if they chose one of those other numbers, 5 billion times per second, for example, there would not be the right number of seconds in a day, and in a year. So what this statement ("9,192,631,770 times per second") represents is a relationship between the activity of those caesium atoms and the motion of the earth in relation to the sun. If that relationship is not absolutely stable, then that number cannot be represented as a stable number.
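The chain of units under discussion can be spelled out numerically. The figures below are the standard calendar conventions (86,400 seconds per day, 365.25 days per Julian year), used only to illustrate how the caesium definition of the second ties to longer astronomical units.

```python
# How the caesium definition of the second chains up to the year.
CS_CYCLES_PER_SECOND = 9_192_631_770   # SI definition of the second
SECONDS_PER_DAY = 24 * 60 * 60         # 86,400
DAYS_PER_JULIAN_YEAR = 365.25

seconds_per_year = SECONDS_PER_DAY * DAYS_PER_JULIAN_YEAR   # 31,557,600
cycles_per_year = CS_CYCLES_PER_SECOND * seconds_per_year

print(f"{cycles_per_year:.4e} caesium cycles per Julian year")  # ~2.9010e+17
```

Whether that chain of conversion factors stays fixed over deep time is, of course, exactly the question being debated in this thread.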

tom November 29, 2017 at 19:06 #128651
Quoting Metaphysician Undercover
Interesting, but the point is this. The reason why the frequency is precisely 9,192,631,770 times per second, rather than 5 billion, 10 billion, or some other arbitrary number, is that the second was already defined in relation to the year. So if they chose one of those other numbers, 5 billion times per second, for example, there would not be the right number of seconds in a day, and in a year. So what this statement ("9,192,631,770 times per second") represents is a relationship between the activity of those caesium atoms and the motion of the earth in relation to the sun. If that relationship is not absolutely stable, then that number cannot be represented as a stable number.


No it doesn't. The second is DEFINED with respect to a material property of Caesium. The new definition would have been chosen to be close to a previous definition which it superseded, for convenience, but needn't be the same. I presume you are familiar with leap seconds (and leap years)?

As a matter of interest, all Imperial units also changed slightly when they became defined in terms of S.I. units.
Metaphysician Undercover November 29, 2017 at 21:43 #128708
Quoting tom
No it doesn't. The second is DEFINED with respect to a material property of Caesium. The new definition would have been chosen to be close to a previous definition which it superseded, for convenience, but needn't be the same. I presume you are familiar with leap seconds (and leap years)?


Yes, the second is defined that way, I am fully aware of this. However, the year is defined by the earth's orbit. For fdrake's claim that the caesium clock will continue to be as accurate as it is now for 100 million years to be true, the relationship between the earth's orbit, and the caesium frequency, must remain the same for 100 million years. The use of leap seconds demonstrates that this is highly unlikely.
tom November 29, 2017 at 23:09 #128730
Quoting Metaphysician Undercover
Yes, the second is defined that way, I am fully aware of this. However, the year is defined by the earth's orbit. For fdrake's claim that the caesium clock will continue to be as accurate as it is now for 100 million years to be true, the relationship between the earth's orbit, and the caesium frequency, must remain the same for 100 million years. The use of leap seconds demonstrates that this is highly unlikely.


Right, so the second is defined by a physical constant, but the year is defined by a varying quantity. Certain mechanisms are employed to keep the invariant and varying quantity in good agreement. These include leap years and leap seconds.

The clock will not be as accurate as it is now in 100,000,000 years. No one is claiming that. The clock will certainly not exist then. However in 100,000,000 years, clocks may be 100% accurate.
Metaphysician Undercover November 30, 2017 at 03:59 #128788
Quoting tom
Right, so the second is defined by a physical constant, but the year is defined by a varying quantity.


That's an arbitrary assumption, that the second is constant, and the year is variant. Because of this arbitrary assumption, any, and all discrepancy in measurement is assigned to a variance in the year, and no variance is assigned to the second, despite the fact that some discrepancy might actually be due to a variance in the second.

Consider this example. Assume that the length of the day is constant, and that the length of the year is also constant. However, they are not completely compatible, so leap years are necessary. This does not indicate that one of them, the year or the day, is constant and the other variable; it simply indicates that the two are incommensurable. Likewise, in the comparison of the second and the year, the need for leap seconds does not indicate that one is constant and the other is variable; it indicates that the two are incommensurable.

Quoting tom
The clock will not be as accurate as it is now in 100,000,000 years. No one is claiming that.
Actually, that seems to be exactly what fdrake was claiming.
tom November 30, 2017 at 11:49 #128888
Quoting Metaphysician Undercover
That's an arbitrary assumption, that the second is constant, and the year is variant.


We know the Earth is moving away from the Sun and that the year is getting longer. It's been measured.

We can measure and calculate the energy of transition between hyperfine ground states of the caesium atom.
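
The energy-frequency relationship in question is E = hν. A minimal sketch using the caesium hyperfine frequency that defines the SI second (the variable names are mine):

```python
# Energy of the caesium-133 hyperfine transition, via E = h * nu.
PLANCK_H = 6.62607015e-34   # Planck constant in J*s
CS_FREQ = 9_192_631_770     # Cs-133 hyperfine frequency in Hz (defines the SI second)

energy_joules = PLANCK_H * CS_FREQ
print(f"{energy_joules:.4e} J")  # a few 1e-24 joules
```

The transition sits in the microwave regime, around 9.19 GHz, which is why it can be driven by EM radiation and counted electronically.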

For the energy of transition of caesium atoms to change - a change affecting all caesium atoms everywhere simultaneously, I presume - what laws of physics do you propose to change?



fdrake November 30, 2017 at 12:03 #128894
@Metaphysician Undercover

Actually, that seems to be exactly what fdrake was claiming.


Well, we had an argument over whether metaphysical necessity of physical law was required for the measurement to be accurate at that point. I tried to argue that that was a category error; you tried to argue that I required it. Whether in 100 million years the clock has the same error rate depends on whether the physical laws would change. One way of preventing the change conceptually would be the application of necessity to physical law. I tried to argue that that would be sufficient but not necessary: what matters is whether the laws would in fact change, not whether they could or must - a contingent fact, rather than the possibility of its negation or its elevation to necessity.

The quantification of the error in terms of 1 sec/100 mil years and its equivalence to the stated error rate in the paper is a separate issue. If you want to treat it as a separate issue now, that's fine with me - to me that looks like progress. Since you were arguing as if the metaphysical necessity of physical law was required for scaling the error to an equivalent rate, I argued that it wasn't.

So we had this super-discussion of the necessity of physical law - neither of us believed that it was necessary. But yeah, if you want to talk about the scaling of the error rate without, in my view, muddying the waters with all this talk of the metaphysical necessity of physical law, I'd be interested in chatting about it again.
Metaphysician Undercover November 30, 2017 at 14:32 #128913
Quoting tom
We know the Earth is moving away from the Sun and that the year is getting longer. It's been measured.


OK, that's an example of how something which is assumed to be constant from observation on the short term may prove to be less constant on the long term.

Quoting tom
We can measure and calculate the energy of transition between hyperfine ground states of the caesium atom.


So, according to the paper that fdrake referred to, this has been proven to be constant for a period of one month. On what basis does one claim that it will remain constant for 100 million years?

Quoting tom
For the energy of transition of caesium atoms to change - a change affecting all caesium atoms everywhere simultaneously I presume - what laws of physics do you propose to change?


You are falling into the same pattern of argumentation as fdrake did, asking me to prove that things will change. Fdrake insisted that this particular activity will remain the same for that extrapolated time period, so the onus is on fdrake to demonstrate that it will. From my perspective, I just need to demonstrate that change is possible, to refute fdrake's claim that this activity will necessarily stay the same.

For example, suppose that before scientists knew the year was getting longer, some people thought the year would remain constant for billions of years, and someone like me argued that this was a faulty extrapolation. How would that person be expected to know just exactly what was changing? It is not necessary to know what is changing in order to make this argument. All that is necessary to prove wrong the claim that things will remain the same is to demonstrate the possibility of change. If change is possible, then the claim that things will stay the same is unsound.

My argument is that the extrapolation is faulty because there are too many unknowns which could influence things. So if you want to defend the extrapolation, you should demonstrate that there are no such unknowns; do not ask me what the unknowns are and how they will affect the proposed constant activity, because they are unknowns. However, I did indicate one such unknown factor, and that is what is called "dark energy".

Quoting fdrake
Well, we had an argument over whether metaphysical necessity of physical law was required for the measurement to be accurate at that point.


What do you mean by metaphysical necessity of physical law?

Quoting fdrake
Whether in 100 million years the clock has the same error rate depends on whether the physical laws would change.


Remember, we went through this: physical laws are descriptions produced by human beings. Let's see if we can maintain a distinction between "physical laws" and "the way things are". That the caesium clock has x number of cycles per second is a physical law. The evidence of experimentation gives reason to believe that this is the way things were for a period of one month. In other words, the physical law which states x cycles per second of the caesium atom has been demonstrated to be accurate for a month of time.

Quoting fdrake
The quantification of the error in terms of 1 sec/100 mil years and its equivalence to the stated error rate in the paper is a separate issue.


I don't see how this is a separate issue, it is the issue. The question is whether such an extrapolation is valid.

Quoting fdrake
So we had this super-discussion of the necessity of physical law - neither of us believed that it was necessary. But yeah, if you want to talk about the scaling of the error rate without, in my view, muddying the waters with all this talk of the metaphysical necessity of physical law, I'd be interested in chatting about it again.


Perhaps I misunderstand what you mean by metaphysical necessity of physical law, but I do believe that if you want to extrapolate the way that you do, you need some principles whereby you can argue that what was observed to be the case for one month will continue to be the case for 100 million years.

Take tom's example, that it has now been proven that the earth is getting further from the sun, and the year is getting longer. That difference is so slight that people in the past would never have noticed it. They would do projections into the future, extrapolations as you do, without realizing that every year the length of the error grows by the tiniest amount. After a very long time, this tiniest amount multiplies into a larger amount. What if something similar is the case with the caesium frequency? This is just one example, of one possibility, but have you considered this possibility, that the error is cumulative?
fdrake November 30, 2017 at 15:15 #128915
Reply to Metaphysician Undercover

Take tom's example, that it has now been proven that the earth is getting further from the sun, and the year is getting longer. That difference is so slight that people in the past would never have noticed it. They would do projections into the future, extrapolations as you do, without realizing that every year the length of the error grows by the tiniest amount. After a very long time, this tiniest amount multiplies into a larger amount. What if something similar is the case with the caesium frequency? This is just one example, of one possibility, but have you considered this possibility, that the error is cumulative?


The possibility of error in the measurement of the year induced by the Earth getting further away from the sun, based upon the assumption that the Earth has a constant elliptic orbit, isn't the reason why that measurement was flawed. The reason why the measurement was flawed was that there was an actual error in the measurement of the year induced by the Earth getting further away from the sun. The possibility of error does not invalidate a measurement; the actuality of error does. And 'the actuality of error' consists in the claim that 'the actual quantity ascribed in the measurement error analysis is wrong'. Not that it's possibly wrong. Of course it's possibly wrong; scientific knowledge is fallible. Just because it's possibly wrong gives no reason to reject it.

Perhaps I misunderstand what you mean by metaphysical necessity of physical law, but I do believe that if you want to extrapolate the way that you do, you need some principles whereby you can argue that what was observed to be the case for one month will continue to be the case for 100 million years.


I actually did this. I made a case that the error rate would be the same for the same measurement process in 100 million years. There are things that would make atoms behave in different ways, like if all their protons decay (which is possible). If there were no protons, there'd be no caesium or strontium atoms, and no optical lattices, so no caesium clocks. If something like that were predicted to happen within 100 million years, the claim that 'the measurement error of the clock would be the same in 100 million years' would have some evidence against it. So I quoted you some stuff about the chronology of the universe - the stelliferous era, the one which we are in now, is predicted to have the same atomic physics through its duration. The end of the stelliferous era will be in about 1000 more universe lifetimes, much much longer than 100 million years. This is a matter of being consistent or inconsistent with physical theories, not one of their possibility of error. There's just no good reason to believe that atomic physics will change in a meaningful way in 100 million years. It's a tiny amount of time on the scale of the universe's chronology - 100 million years is less than 10^-3% of the lifetime of the stelliferous era, which we are in and will still be in.

Instead of focussing on what we can believe evidentially about the actuality of the laws of nature changing, you instead internalised the laws of nature to scientific consensus - claiming that the laws of nature change because of changes in science. In some trivial sense this is true; laws are descriptions of patterns in nature, and if our descriptions change, the linguistic formulation of patterns changes or new patterns are given descriptions. General changes in scientific consensus imply nothing in particular about the measurement error analysis of that clock. Changes in the operation of nature might, if they influence the patterns atomic physics is concerned with in a meaningful way. Notice might, not will, since to establish that changes in the operation of nature will invalidate the error analysis, a flaw has to be found in the error analysis. Not the possibility of a flaw - that is a triviality, since scientific thinking is fallible - but the establishment of a particular flaw in the error analysis.

And in this, you provide the claim that the behaviour of oscillations between hyperfine states has been observed for one month, therefore measurement error analysis based on that month's observations cannot be used to calculate an error rate which extends beyond the month. Maybe not beyond the month - you've been admittedly imprecise on exactly how 'the data was gathered in a month' actually changes the error analysis. You say you have no idea of how, yet claim that 'it was gathered in a month' invalidates the quantification of error in the measurements.

In general, this argumentative pattern is invalid. I have generalised here because you have not provided and cannot provide a way in which the duration of the data gathering for the paper influences the derived error rates. So, if the principle is 'we cannot say that the error is less than the data gathering duration because of a possible multiplicative effect on the error due to changes in physical law', which is still imprecise as it provides no translation of uncertainties in quantities of different dimensions (like temperature and time), we end up in a situation I detailed a bit earlier, but will provide more detail on now.

(1) You read the temperature from the thermometer at time t. Say that the duration of your observation was 1 second.
(2) There is a possible error associated with the thermometer and its error analysis which can multiply the error in an unbounded fashion.
(3) After 1 second, you do not know the temperature in the room since the error is possibly so large.

Try as you might, there isn't going to be any way you can establish the constancy of the laws of nature within a second through an a priori argument. All we have are perceptions of regularity, and that stuff seems to work in the same way through terrestrial timescales in the real world. If this were something that could be reconciled a priori, Hume's arguments against it, the Wittgensteinian-Kripkean analogues in philosophy of language, and the whole problem with grue and blue wouldn't be there. It's always going to be possible that there's a huge unaccounted-for error in the thermometer; therefore we don't know the temperature in the room on the thermometer's basis.

I would like to think you would also believe that this argument form is invalid, since it leads to the complete absurdity that it's impossible to form opinions based on measurements. Just substitute in 'measuring process' for thermometer and index a quantity instead of 'temperature', the argument works just the same.

And again, this is an issue independent of whether it's appropriate to ask the question 'how many seconds are required to make the caesium clock produce an error of 1 second' - that already assumes the clock is functioning, or would be functioning, in the manner it did in the experiment for that time period. Counterfactually: if same process, same measurements, same errors. You can answer that question with a simple algebraic operation - taking the reciprocal. If my pulse has an error of 0.1 seconds per second, then it takes 10 seconds for my pulse to accumulate 1 second of error.

At this point, you said taking the reciprocal and saying the clock has amassed that error assumes the clock is working for that long. In a trivial sense it does - since if the clock didn't function for that long it would have a different amassed error but not a different error rate. Unless, for some reason, you undermine the measurement process of the clock by saying it requires the constancy of the laws of nature...

In that case, we end up in the absurd position that a*10^x per k error rate isn't the same as (b*a)*10^x per b*k - which is an error in basic arithmetic.
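
The two operations at issue here - reciprocating an error rate and rescaling it - are plain arithmetic, and can be checked directly (the numbers are illustrative, not taken from the clock paper):

```python
# Reciprocating an error rate: how long until `rate` seconds of error
# per second amasses one full second of error.
def seconds_until_one_second_of_error(rate):
    return 1.0 / rate

# The pulse example: 0.1 s of error per second -> 10 s to amass 1 s.
print(seconds_until_one_second_of_error(0.1))

# Rescaling: a rate of a*10^x per k is the same rate as (b*a)*10^x per (b*k).
a, x, k, b = 3.0, -16, 1.0, 1e8
rate = (a * 10**x) / k
scaled = (b * a * 10**x) / (b * k)
print(abs(rate - scaled))  # agrees to floating-point precision
```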

Edit: when I say there's no good reason to believe atomic physics will change in 100 million years, I mean that there's no good reason to believe that operation of nature relevant to atomic physics will change, not that the scientific understanding of atoms won't change in that time period. It will, it will get more expansive and more precise. If we're still even alive as a species by that point, ho hum.
fdrake November 30, 2017 at 15:47 #128917
@Metaphysician Undercover

By metaphysical necessity, I mean the metaphysical necessity of a proposition. By the metaphysical necessity of a proposition, I mean that it's something true which is not contingent. Something that must be the case of necessity, and cannot change. I'm sure you can see that 'the physical laws will not change' is implied by 'the physical laws cannot change' - and in the latter statement is the expression of what I mean by metaphysical necessity of physical law. I don't think it holds. I don't think it's necessary for the clock to function as it does, and I don't think it's required for reciprocating the error rate in terms of seconds/seconds to get how many seconds are required for amassing a single second of error.
Metaphysician Undercover November 30, 2017 at 23:49 #128992
Quoting fdrake
The possibility of error does not invalidate a measurement, the actuality of error does.


I don't claim that the possibility of error invalidates the measurement. I assume that the measurement is accurate. I claim that the extrapolation is invalid due to the likelihood of unknown factors in relating the micro time scale to the macro time scale.

You keep on assuming that the extrapolation is the actual measurement. It is not. The measurement was for a one month period. The extrapolation is for 100 million years.

Quoting fdrake
So I quoted you some stuff about the chronology of the universe - the stelliferous era, the one which we are in now, is predicted to have the same atomic physics through its duration.


You still have not accounted for dark energy yet. If I understand correctly, the so-called expansion of the universe indicates that frequencies such as that of the caesium atom are changing. I assume that all your statements concerning the stability of the stelliferous era are unjustified until dark energy is properly accounted for.

Quoting fdrake
Instead of focussing on what we can believe evidentially about the actuality of the laws of nature changing, you instead internalised the laws of nature to scientific consensus - claiming that the laws of nature change because of changes in science. In some trivial sense this is true; laws are descriptions of patterns in nature, if our descriptions change the linguistic formulation of patterns changes or new patterns are given descriptions.


Yes, the laws of physics, which are the human descriptions of nature, change. But this is not trivial, as you claim. They change because human beings really have a very limited understanding of the vast universe, and they are always learning new things which make them reassess their old principles. You seem to think that our knowledge concerning the universe is already conclusive, and there is nothing which is unknown. Therefore you claim that our descriptions and principles of measurement will remain the same. I think this is naïve. And, my example of dark energy indicates that a huge part of the universe, that which falls into the concept of spatial expansion, remains essentially unknown.

Quoting fdrake
And in this, you provide the claim that the behaviour of oscillations between hyperfine states has been observed for one month, therefore measurement error analysis based on that month's observations cannot be used to calculate an error rate which is beyond the month. Maybe not beyond the month, you've been admittedly imprecise on exactly how 'the data was gathered in a month' actually changes the error analysis. Saying you have no idea of how 'it was gathered in a month' invalidates the quantification of error in the measurements.


As I said, I don't say that there are errors in measurement, just in the extrapolation. Do you understand the difference between measuring something and producing an extrapolation from that measurement?

Quoting fdrake
(1) You read the temperature from the thermometer at time t. Say that the duration of your observation was 1 second.
(2) There is a possible error associated with the thermometer and its error analysis which can multiply the error in an unbounded fashion.
(3) After 1 second, you do not know the temperature in the room since the error is possibly so large.

Try as you might, there isn't going to be any way you can establish the constancy of the laws of nature within a second through an a priori argument. All we have are perceptions of regularity and that stuff seems to work in the same way through terrestrial timescales in the real world. If this were something that could be reconciled a-priori Hume's arguments against it and Wittgensteinian-Kripkian analogues in philosophy of language and the whole problem with grue and blue wouldn't be there. It's always going to be possible that there's a huge unaccounted for error in the thermometer, therefore we don't know the temperature in the room on the thermometer's basis.


We are not talking about measuring something, then turning away for a second, and asking whether the measurement is still valid, we are talking about measuring something then turning away for 100 million years, and asking whether the measurement is still valid. So your analogy is really rather ridiculous.

Quoting fdrake
I would like to think you would also believe that this argument form is invalid, since it leads to the complete absurdity that it's impossible to form opinions based on measurements.


Again, as I've stated over and over, the issue is not the measurement, it is the extrapolation. For some reason you seem to still be in denial that there is an extrapolation involved here.

Quoting fdrake
At this point, you said taking the reciprocal and saying the clock has amassed that error assumes the clock is working for that long. In a trivial sense it does - since if the clock didn't function for that long it would have a different amassed error but not a different error rate. Unless, for some reason, you undermine the measurement process of the clock by saying it requires the constancy of the laws of nature...


If the frequency of the caesium atom is actually changing over time, as in the example of the earth's orbit changing over time, then the error rate will change over time, unless the frequency rate is adjusted to account for that change.

Quoting fdrake
Edit: when I say there's no good reason to believe atomic physics will change in 100 million years, I mean that there's no good reason to believe that operation of nature relevant to atomic physics will change, not that the scientific understanding of atoms won't change in that time period. It will, it will get more expansive and more precise. If we're still even alive as a species by that point, ho hum.


The point is how well the laws of atomic physics represent what is really the case with the activities of the atoms. Hundreds of years ago, people would have said that there is no good reason to believe that the length of a year would change in millions of years. Now they've been proven wrong. Do you not think that the atomic physicists of today will be proven wrong in the future?

Quoting fdrake
By metaphysical necessity, I mean the metaphysical necessity of a proposition. By the metaphysical necessity of a proposition, I mean that it's something true which is not contingent. Something that must be the case of necessity, and cannot change. I'm sure you can see that 'the physical laws will not change' is implied by 'the physical laws cannot change' - and in the latter statement is the expression of what I mean by metaphysical necessity of physical law. I don't think it holds. I don't think it's necessary for the clock to function as it does, and I don't think it's required for reciprocating the error rate in terms of seconds/seconds to get how many seconds are required for amassing a single second of error.


I can't grasp your point here at all. If you take a measurement of one month, and extrapolate that measurement for 100 million years, then in order for your extrapolation to be correct, the physical law produced by your measurement "cannot change". Therefore any possibility of change negates the validity of your extrapolation.
TheMadFool December 05, 2017 at 05:12 #130368
Quoting tom
That is quite a startling claim given that the relationships between mass and energy, energy and wavelength, mass and velocity, length and velocity, time and velocity, ... ... (I could go on and on) were all discovered in Theory before any measurement or reason for measurement could be conceived.


Empiricism!?

Quoting tom
All we would need to do is measure g and use that to DEFINE L and T. But of course, no such physical system exists.


The unit of g is m/s^2 - so time has to be measured accurately first.

Quoting tom
But atomic transitions do exist, and the energy of transition can be measured. Because theory tells us the relationship between energy and frequency, and that transitions are induced in atoms when subjected to EM radiation of that frequency, we may DEFINE the second via that frequency.


Science is empirical. Measurement - time, length, mass, etc. - comes first.
tom December 05, 2017 at 10:25 #130469
Quoting TheMadFool
Empiricism!?


One of my favourite fallacies!

Quoting TheMadFool
The unit of g is m/s^2 - so time has to be measured accurately first.


With some rudimentary algebra, s = sqrt(m/g). So we really can derive the second from other units if we wish, which is essentially what the SI standard does when it defines the second: it defines a frequency in terms of a measurable energy.
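
tom's algebra here is dimensional: since [g] = m/s^2, the quantity sqrt(m/g) carries units of seconds. A toy dimension tracker (my own construction, not a units library) makes the check mechanical:

```python
from fractions import Fraction

class Dim:
    """Track SI dimension exponents (length, time) through unit algebra."""
    def __init__(self, length=0, time=0):
        self.length, self.time = Fraction(length), Fraction(time)
    def __truediv__(self, other):
        return Dim(self.length - other.length, self.time - other.time)
    def sqrt(self):
        return Dim(self.length / 2, self.time / 2)
    def __repr__(self):
        return f"L^{self.length} T^{self.time}"

metre = Dim(length=1)
g_dim = Dim(length=1, time=-2)     # units of g: m/s^2
print((metre / g_dim).sqrt())      # prints L^0 T^1, i.e. pure seconds
```

The same trick illustrates the sense in which the choice of fundamental units is arbitrary: any dimensionally consistent combination could serve as the defined unit.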

Quoting TheMadFool
Science is empirical. Measurement - time, length, mass, etc. - comes first.


Comes before what?





TheMadFool December 06, 2017 at 10:18 #130794
Quoting tom
One of my favourite fallacies!


Really? Empiricism is the working principle of science. Why is it that scientists perform experiments if empiricism is a fallacy?

Quoting tom
With some rudimentary algebra, s = sqrt(m/g)


I'm not saying g = m/s^2. The unit of g is m/s^2.
TheMadFool December 06, 2017 at 10:18 #130795
Quoting tom
Comes before what?


Before we discover relationships (laws).
tom December 06, 2017 at 15:15 #130907
Quoting TheMadFool
Really? Empricism is the working principle of science. Why is it that scientists perform experiments if empiricism is a fallacy?


To test their theories.

Quoting TheMadFool
I'm not saying g = m/s^2. The unit of g is m/s^2.


I was doing some dimensional analysis for you. The fundamental units are arbitrary.

Quoting TheMadFool
Before we discover relationships (laws).


Then why were gravitational waves known about 100 years before we could detect them?