
I looked at a few of the other posts regarding the accuracy of atomic clocks, but I was not able to derive the answer to my question myself.

I've seen it stated that atomic clocks are accurate on the order of $10^{-16}$ seconds per second. However, if there is no absolute reference frame with which to measure "real time", what is the reference clock relative to which the pace of an atomic clock can be measured?

Is the accuracy of an atomic clock even meaningful? Can't we just say the atomic clocks are perfectly accurate and use them as the reference for everything else?

zrbecker

4 Answers


This is a good and somewhat tricky question for a number of reasons. I will try to simplify things down.

SI Second

First, let's look at the modern definition of the SI second.

The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency $\Delta\nu_{\text{Cs}}$, the *unperturbed* ground-state hyperfine transition frequency of the caesium 133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s$^{-1}$.

Emphasis mine

The key word here is unperturbed. This means, among other things, that the Cs atom should have no motion and there should be no external fields. We'll come back to why these systematic effects are very important shortly.

How an Atomic Clock Works

How do we build a clock based on this definition of the second? We do it as follows. The Cs transition frequency is about 9.19 GHz, which is a microwave signal. Using analog electronics, engineers are able to generate very precise electrical signals at these frequencies, and these signals can be tuned to address the Cs atomic transition. The basic idea is to bathe the Cs atoms in microwave radiation in the vicinity of 9.192631770 GHz. If you are on resonance the atoms will be excited to the excited state; if not, they will stay in the ground state. Thus, by measuring whether the atoms are in the ground or excited state you can determine whether your microwave signal is on or off resonance.

What we actually end up using as the clock (the thing which ticks off periodic events that we can count) is the 9.19 GHz microwave signal generated by some electronics box*. Once we see 9192631770 oscillations of this microwave signal (counted by detecting zero crossings of the signal with electronics) we say that one second has passed. The purpose of the atoms is to check that the microwave frequency is just right. This is similar to how you might occasionally reset your microwave or oven clock to match your phone. We calibrate, or discipline, one clock to another.
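
To make the counting concrete, here is a tiny illustrative sketch in Python (the function names are mine; a real clock of course does this in dedicated counting electronics, not software):

```python
# Purely illustrative: one SI second is declared after counting
# 9_192_631_770 cycles of the microwave signal that has been
# disciplined to the Cs hyperfine transition.

F_CS = 9_192_631_770  # Hz, the defined Cs hyperfine frequency

def seconds_elapsed(cycles_counted: int) -> float:
    """Convert a count of microwave cycles into elapsed SI seconds."""
    return cycles_counted / F_CS

def cycles_needed(seconds: float) -> int:
    """How many cycles the counter must reach before `seconds` have passed."""
    return round(seconds * F_CS)

print(cycles_needed(1.0))              # 9192631770 cycles -> 1 s
print(seconds_elapsed(9_192_631_770))  # 1.0 s
```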

So an atomic clock works by disciplining a microwave signal to an atomic transition frequency. Now, suppose you build a clock based on this principle and I also build one and we start our clocks at the same time (turn on our microwave oscillators and start comparing to the atoms occasionally). There are two possibilities. The first is that our two clocks always tick at the exact same time. The second is that there is noise or fluctuations somewhere in the system that cause us to get ticks at slightly different moments in time. Which do you think happens? We should be guided by the principle that nothing in experimental physics is ever exact. There is always noise. Atomic clock physics is all about learning about and understanding noise.

Clock Accuracy

This is the main topic of the OP's question. It is also where the key word unperturbed comes back into play. The Zeeman effect says that if the atom is in a magnetic field its transition frequency will shift slightly. This means a magnetic field constitutes a perturbation. This is one reason why your clock and my clock might tick at different moments in time: our atoms may experience slightly different magnetic fields. Now, for this reason you and I will try really hard to ensure there is absolutely no magnetic field present in our atomic clocks. However, this is difficult because there are magnetic materials that we need to use to build our clocks, and there are magnetic fields due to the Earth, screwdrivers in the lab, and all sorts of things. We can do our best to eliminate the magnetic field, but we will never be able to remove it entirely. One thing we can do is try to measure how large the magnetic field is and take this into account when determining our clock frequency. Suppose that the atoms experience a linear Zeeman shift of $\gamma = 1 \text{ MHz/Gauss}$**. That is

$$ \Delta f = \gamma B $$

Now, if I go into my atomic clock I can do my best to measure the magnetic field at the location of the atoms. Suppose I measure a magnetic field of 1 mG. This means that I have a known shift of my Cs transition frequency of $\Delta f = 1 \text{ MHz/Gauss} \times 1 \text{ mG} = 1 \text{ kHz}$. This means that, in the absence of other perturbations to my atoms, I would expect my atoms to have a transition frequency of 9.192632770 GHz instead of 9.192631770 GHz.

Ok, so if you and I both measure the magnetic fields in our clocks and compensate for this linear Zeeman shift, we now get our clocks ticking at the same frequency, right? Wrong. The problem is that however we measure the magnetic field, that measurement itself will have some uncertainty. So I might actually measure the magnetic field in my clock to be

$$ B = 1.000 \pm 0.002\text{ mG} $$

This corresponds to an uncertainty in my atomic transition frequency of

$$ \delta f = 2 \text{ Hz} $$

So that means because of uncertainty about my systematic shifts I don't exactly know the transition frequency for my atoms. That is, I don't have unperturbed ground state Cs atoms so my experiment doesn't exactly implement the SI definition of the second. It is just my best guess.
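
As a concrete check on these numbers, here is a minimal sketch in Python (the 1 MHz/Gauss coefficient is the made-up one from the second footnote; everything here is illustrative):

```python
# Sketch of the worked example above. The linear Zeeman coefficient is the
# made-up 1 MHz/Gauss figure from the footnote; the real Cs clock transition
# has only a quadratic Zeeman shift.

GAMMA = 1e6     # Hz per Gauss (hypothetical linear Zeeman coefficient)
B = 1e-3        # Gauss: the measured field, 1 mG
DELTA_B = 2e-6  # Gauss: the measurement uncertainty, 0.002 mG

known_shift = GAMMA * B     # correctable shift of the transition frequency
residual = GAMMA * DELTA_B  # uncertainty left over after correcting it

print(f"Known Zeeman shift:   {known_shift:.0f} Hz")  # 1000 Hz = 1 kHz
print(f"Residual uncertainty: {residual:.0f} Hz")     # 2 Hz
```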

But, we do have some information. What if we could compare my atoms to perfect unperturbed Cs atoms? How much might my clock differ from that ideal clock? Suppose I decrease the frequency of my clock by 1 kHz to account for the magnetic field shift so that my clock runs at

$$ f_{real} = (9192631770 \pm 2) \text{ Hz} $$

While the ideal Cs clock runs (by definition of the SI second) at exactly

$$ f_{ideal} = 9192631770 \text{ Hz} $$

Let’s run both of these for $T= 1 \text{ s}$. The ideal clock will obviously tick off $$ N_{ideal} = f_{ideal} T = 9192631770 $$ oscillations since that is the definition of a second. How many times will my clock tick? Let's assume the worst case scenario that my clock is slow by 2 Hz. Then it will tick

$$ N_{real} = f_{real} \, T = 91926317\mathbf{68} $$

It was two ticks short after one second. Turning this around, we can ask: if we used my clock to measure a second (that is, if we let it tick $N_{real} = 9192631770$ times under the assumption, our best guess, that the real clock's frequency is indeed 9.192631770 GHz), how long would it really take?

$$ T_{real} = 9192631770/f_{real} \approx 1.00000000022 \text{ s} $$

We see that my clock is slow by about 200 ps after 1 s. Pretty good. If you run my clock for $5 \times 10^9 \text{ s} \approx 158.4 \text{ years}$ then it will be off by one second. This corresponds to a fractional uncertainty of about

$$ \frac{1 \text{ s}}{5 \times 10^9 \text{ s}} \approx \frac{2 \text{ Hz}}{9192631770 \text{ Hz}} \approx 2\times 10^{-10} = 0.2 \text{ ppb} $$
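
These numbers are straightforward to reproduce; here is a quick check in Python (note that the prose rounds the fractional uncertainty of about $2.2\times10^{-10}$ to $2\times10^{-10}$, which is where the $5\times10^9$ s figure comes from):

```python
F_IDEAL = 9_192_631_770   # Hz, unperturbed Cs frequency
DELTA_F = 2               # Hz, worst-case frequency error from above

f_real = F_IDEAL - DELTA_F            # assume my clock runs slow by 2 Hz

n_real = f_real * 1                   # ticks during one true second: 9192631768
t_real = F_IDEAL / f_real             # true time for my clock to count out "1 s"
fractional = DELTA_F / F_IDEAL        # ~2.2e-10
seconds_to_lose_one = 1 / fractional  # ~4.6e9 s (rounded to 5e9 s in the text)

print(n_real, t_real, fractional, seconds_to_lose_one / 3.156e7)  # ~146 years
```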

Frequency Uncertainty to Seconds Lost

Here I want to do some more mathematical manipulations to show the relationship between the fractional frequency uncertainty for a clock and the commonly referred to "number of seconds needed before the clock loses a second" metric.

Suppose we have two clocks, an ideal clock which has unperturbed atoms which runs at frequency $f_0$ and a real clock which we've calibrated so our best guess is that it runs at $f_0$, but there is an uncertainty $\delta f$, so it really runs at $f_0 - \delta f$. We are now going to run these two clocks for time $T$ and see how long we have to run it until they are off by $\Delta T = 1 \text{ s}$.

As time progresses, each clock will tick a certain number of times. The $I$ subscript is for the ideal clock and $R$ is for real.

\begin{align} N_I =& f_0T\\ N_R =& (f_0 - \delta f)T \end{align}

This relates the number of ticks to the amount of time that elapsed. However, we actually measure time by counting ticks! So we can write down what times $T_I$ and $T_R$ we would infer from each of the two clocks (by dividing the observed number of oscillations by the presumed oscillation frequency $f_0$).

\begin{align} T_I =& N_I/f_0 = T\\ T_R =& N_R/f_0 = \left(\frac{f_0 - \delta f}{f_0}\right) T_I = \left(1 - \frac{\delta f}{f_0}\right)T_I \end{align}

These are the key equations. Note that in the first equation we see that the time inferred from the ideal clock, $T_I$, is equal to $T$, which of course had to be the case because time is actually defined by $T_I$. Now, for the real clock we estimated its time reading by dividing its number of ticks, $N_R$ (which is unambiguous), by $f_0$. Why didn't I divide by $f_0 - \delta f$? Remember that our best guess is that the real clock ticks at $f_0$; $\delta f$ is an uncertainty, so we don't actually know whether the clock is ticking fast or slow by the amount $\delta f$, we just know that it wouldn't be statistically improbable for us to be off by this amount. It is this uncertainty that leads to the discrepancy in the time reading between the real and ideal clocks.

We now calculate

\begin{align} \Delta T = T_I - T_R = \frac{\delta f}{f_0} T_I \end{align}

So we see

\begin{align} \frac{\Delta T}{T_I} = \frac{\delta f}{f_0} \end{align}

That is, the ratio of the time difference $\Delta T$ to the elapsed time $T_I$ is given exactly by the ratio of the frequency uncertainty $\delta f$ to the clock frequency $f_0$.
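
This relation makes it easy to convert between a fractional frequency uncertainty and the familiar "time before the clock loses a second" figure; a small sketch (the function name and numbers are mine):

```python
def seconds_until_off_by(delta_t: float, delta_f: float, f0: float) -> float:
    """Elapsed time after which the accumulated error may reach delta_t,
    from Delta T / T = delta f / f0."""
    return delta_t * f0 / delta_f

# The 2 Hz example clock from above:
print(seconds_until_off_by(1.0, 2, 9_192_631_770))  # ~4.6e9 s

# A clock with 1e-16 fractional uncertainty, as quoted in the question:
print(1.0 / 1e-16 / 3.156e7)  # ~3.2e8 years before it may be off by 1 s
```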

Summary

To answer the OP's question, there isn't any perfect clock against which we can compare the world's best atomic clocks. In fact, the world's most accurate atomic clocks (optical clocks based on atoms such as Al, Sr, or Yb) are actually orders of magnitude more accurate than the clocks which are actually used to define the second (microwave Cs clocks).

However, by measuring systematic effects we can estimate how far a given real clock is from an ideal clock. In the example I gave above, if we know the magnetic field to within 0.002 mG then we know that the clock is within 2 Hz of the ideal clock frequency. In practice, every clock has a whole zoo of systematic effects that must be measured and constrained to quantify the clock accuracy.

And one final note. Another important clock metric which we haven't touched on here is clock stability. Clock stability is related to the fact that the measurement we use to determine whether there is a frequency detuning between the microwave oscillator and the atomic transition will always have some statistical uncertainty (different from the systematic shift I described above), meaning we can't tell from just one measurement exactly what the relative frequency between the two is. In the absence of drifts we can reduce this statistical uncertainty by taking more measurements, but this takes time. A discussion of clock stability is outside the scope of this question and would require a separate question.

Reference Frames

Here is a brief note about reference frames because they're mentioned in the question. Special and general relativity stipulate that time is not absolute. Changing reference frames changes the flow of time and even sometimes the perceived order of events. How do we make sense of the operation of clocks, especially precision atomic clocks, in light of these facts? Two steps.

First, see this answer, which argues that we can treat the gravitational equipotential surface at sea level as an inertial frame. So if all of our clocks are on this surface there will not be any relativistic frequency shifts between those clocks. To first order, this is the assumption we can make about atomic clocks: as long as they are all within this same reference frame, we don't need to worry about it.

Second, however, what if our clocks are at different elevations? The atomic clocks in Boulder, CO are over 1500 m above sea level. This means that they have gravitational shifts relative to clocks at sea level. In fact, just like the magnetic field, these shifts constitute systematic shifts to clock frequencies which must be estimated and accounted for. That is, if your clock is sensitive (or stable) enough to measure relativistic frequency shifts then part of the job of running the clock is to estimate the elevation of the clock relative to the Earth's sea-level equipotential surface. Clocks are now so stable that we can measure two clocks running at different frequencies if we lift one clock up just a few centimetres relative to another one in the same building or room. See this popular news article.
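
For a rough sense of scale, the fractional gravitational shift between two clocks separated in height by $\Delta h$ near the Earth's surface is approximately $g \Delta h / c^2$; a quick estimate (my own illustrative heights, matching the ones mentioned above):

```python
G_ACC = 9.81   # m/s^2, local gravitational acceleration
C = 2.998e8    # m/s, speed of light

def fractional_gravitational_shift(delta_h_m: float) -> float:
    """Approximate df/f between clocks separated vertically by delta_h_m."""
    return G_ACC * delta_h_m / C**2

print(fractional_gravitational_shift(1500))  # ~1.6e-13, Boulder vs. sea level
print(fractional_gravitational_shift(0.03))  # ~3e-18, a few-cm height change
```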

So the answer to any question about reference frames and atomic clocks is as follows. When specifying where "time" is defined we have to indicate the gravitational equipotential surface or inertial frame that we take as our reference. Conventionally this is the surface of the Earth. For any clocks outside of this reference (remember that the GPS system uses atomic clocks on satellites) we must measure the position and velocity of those clocks relative to the Earth reference frame so that we can estimate and correct for the relativistic shifts they experience. These measurements will of course come with some uncertainty, which results in additional clock inaccuracies as per the rest of my answer.

Footnotes

*You might wonder: why do we need an atomic clock then? Can't we just take our microwave function generator, set it to 9.192631770 GHz, and use that as our clock? Well sure, you can dial in those numbers on your function generator, but what's really going to bake your noodle is: how do we know the function generator is outputting the right frequency? The answer is that we can't truly know unless we compare it to whatever the modern definition of the second is. The microwave signal is probably generated by multiplying and dividing the frequency of a mechanical oscillator, such as a quartz oscillator, which has some nominal oscillation frequency, but again, we can't truly know what the frequency of that thing is unless we compare it to the definition of the second, an atom.

**I made this number up. The Cs transition used for Cs atomic clocks actually doesn't have a linear Zeeman shift, only a quadratic Zeeman shift, but that doesn't matter for the purposes of this calculation.

Jagerber48
  • Of course, the modern caesium atomic fountain is a big improvement on the early caesium clocks that were state of the art when the SI caesium based definition of the second was introduced. It uses ultra-cold atoms in freefall. – PM 2Ring Aug 22 '20 at 11:52
  • I'm guessing that the “original” clock was the rotation of the earth, probably standardized to the sidereal day on a particular date in history. – R.W. Bird Aug 22 '20 at 15:14
  • @R.W.Bird Time has always been related to the rotation of the Earth. A brief history of distributed timekeeping: beginning in the 17th century, on the tails of Galileo's and Huygens's work on pendulum clocks, the Greenwich observatory housed one of the most stable clocks of the time as well as astronomical instruments, so that this pendulum clock could be calibrated to the apparent motion of celestial bodies. This clock was used to standardize sea and rail transport. – Jagerber48 Aug 22 '20 at 17:07
  • In the early 20th century quartz clocks outpaced pendulum clocks in terms of stability and accuracy and these were used as the time standard for some time. In 1967 we transitioned to atomic time based on the Cs transition. In the future we may transition the definition to a transition of an atom which is accessible by modern optical clocks which are more stable and accurate than the Cs clocks we currently use. – Jagerber48 Aug 22 '20 at 17:09
  • The question talks about reference frames, but this answer does not touch upon relativity. I recommend Appendix 2 of the BIPM's SI Brochure as a starting point for that. – JdeBP Aug 22 '20 at 18:31
  • This is simply too much for an in-principle simple answer! -1 – Deschele Schilder Aug 23 '20 at 13:47
  • @descheleschilder: Not only too complex, but ultimately misses the point--it fails to so much as give passing mention to what is actually used as the ultimate time reference. – Jerry Coffin Aug 23 '20 at 17:18
  • @JerryCoffin Sure it does. It answers that there is no "ultimate reference clock," but that the question of how accurate an atomic clock is is still meaningful even in the absence of an ultimate reference. – Chris Aug 24 '20 at 08:13
  • @JerryCoffin The ultimate reference would qualify as a clock. But all clocks have errors. We assess those errors by comparing clocks to each other. By doing so, we can weed out the clocks that show the greatest disagreement with the consensus of other clocks. By this means, we improve the accuracy of clock technology. – John Doty Aug 24 '20 at 14:41
  • Does the atomic clock continuously test its frequency generator against caesium atoms, or only when initially tuned? How does it deal with frequency drift, and how is it synchronized with other clocks so that seconds begin at the same time? – Fax Aug 24 '20 at 15:43
  • @Chris: I guess I can see how it can be read as implying that. I can also see how it can be read as implying at least as strongly that there isn't even an attempt at a reference to which other clocks are synchronized (which is, of course, dead wrong). – Jerry Coffin Aug 24 '20 at 15:54
  • @Fax, Good question. The brief answer is that yes, the frequency generator is periodically compared to the Cs atoms during its operation time. What varies from system to system is how often these comparisons are made. Just a note, if you perform one experiment to compare the frequency it will have a high statistical uncertainty because of electronic or quantum noise. Because of this to get a good comparison you need to take repeated measurements. A "calibration" then consists of a string of $M$ atom spectroscopy measurements where one measurement takes time $\tau$. – Jagerber48 Aug 24 '20 at 18:17
  • Then there is deadtime $T$ from one "calibration" to the next. $T$ may be made as small as possible for some systems, but for other systems $T$ may be something like days or weeks or months. All of this has to do with how much the clocks may drift over time in between calibrations. This gets to what @JerryCoffin has brought up regarding the clock fleet which is part of the official time scales. Basically this fleet of clocks is extremely stable, which means that they can all run referencing each other, only rarely being compared to the primary Cs reference. – Jagerber48 Aug 24 '20 at 18:19
  • All of that said, in the end the clock fleet must at least occasionally be compared to the primary Cs references since those are able to provide our most *accurate* estimations of the SI second. – Jagerber48 Aug 24 '20 at 18:20

BIPM and TAI

The International Bureau of Weights and Measures (BIPM) in France computes a weighted average of the master clocks from 50 countries. That weighted average then gives International Atomic Time (TAI), which forms the basis of the other international times (e.g., UTC, which differs from TAI by the number of leap seconds that have been inserted, currently 37).

There isn't, however, a single source that gives TAI in real time. Rather, BIPM basically collects statistics from each national lab, computes a worldwide average, and publishes a monthly circular showing how each differed from the average over the course of the previous month. The national labs then use this data to adjust their clocks so they all stay in tight synchronization.
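
Here is a minimal sketch of the "paper timescale" idea, with entirely hypothetical lab names, readings, and weights; the real BIPM procedure is far more elaborate (it weights clocks by their long-term stability and steers the result toward the primary frequency standards):

```python
# Hypothetical clock readings (seconds, at one common epoch) and weights.
readings = {
    "LabA": 1000.0000000012,
    "LabB":  999.9999999995,
    "LabC": 1000.0000000003,
}
weights = {"LabA": 0.2, "LabB": 0.5, "LabC": 0.3}

# The weighted mean plays the role of the ensemble ("paper") timescale.
mean = sum(weights[k] * readings[k] for k in readings) / sum(weights.values())

# Each lab is then told how far its clock was from the ensemble average,
# and uses that offset to steer its own master clock.
offsets = {k: readings[k] - mean for k in readings}
print(mean)
print(offsets)
```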

Most of the statistics are collected by using GPS for dissemination. That is, a laboratory will periodically compare their local time to the time they receive via GPS, and send the difference they observed to BIPM. A few links (8, as of the current circular) use two-way transmission of their current time and frequency instead.

BIPM also publishes a weekly "rapid UTC" report with similar information to give national labs slightly more up to date information to help stay in sync better.

To assist the GPS based comparisons, BIPM periodically (most recently in late 2018) does trips around the world to the various national labs with a couple of GPS receivers that are used to calibrate the receivers at each lab.

Individual Labs

The master clocks from those countries are themselves an average of a number of atomic clocks, all stored in vaults to keep them in the most constant environment possible.

These are not, however, all identically constructed. Let me give the US Naval Observatory's master clock as one example:

The atomic clock timescale of the Observatory is based on an ensemble of cesium-beam frequency standards, hydrogen masers, and rubidium fountains. Frequency data from this ensemble are used to steer the frequency of another such maser, forming our designated Master Clock (MC), until its time equals the average of the ensemble, thereby providing the physical realization of this "paper timescale."

Specifically, the frequency of a device called an Auxiliary Output Generator is periodically adjusted so as to keep the time of this maser synchronized as closely as possible with that of the computed mean USNO timescale UTC (USNO), which in turn is adjusted to be close to the predicted UTC. The unsteered internal reference timescale is designated as A.1, while the reference of the actual Master Clock is called UTC (USNO).

UTC (USNO) is usually kept within 10 nanoseconds of UTC. An estimate of the slowly changing difference UTC - UTC (USNO) is computed daily.

GPS

The most easily available reference clock for many people is a GPS signal, so it's probably worth mentioning a bit about it. Each GPS satellite has at least one atomic clock on board (and most have two). These are (occasionally) adjusted by a ground station (Schriever Air Force Base, Colorado), ultimately based on the master clock from the US Naval Observatory.

Also note, however, that most typical GPS receivers will use time from other satellite systems (e.g., GLONASS) interchangeably with actual GPS satellites. In fact, at any given time it's pretty routine that you're using signals from some satellites from each system. From the user's viewpoint, the two are identical, but GLONASS is a Russian system so (unsurprisingly) it's controlled from a Russian base station and they use their own master clock as the basis for its time, though the US and Russia both contribute to TAI, so the clocks remain tightly synchronized.

Another mildly interesting point: the clocks on GPS satellites have to be adjusted due to relativistic effects--both special and general relativity affect the time (i.e., they're affected both by the fact that they're moving fast, and the fact that they're at high enough altitude that they're much less affected by the earth's gravity than ground-based clocks).
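
A back-of-the-envelope estimate of those two effects, using standard textbook orbital parameters that I am assuming here (they are not given in the answer):

```python
import math

GM = 3.986004e14   # m^3/s^2, Earth's gravitational parameter
C = 2.998e8        # m/s, speed of light
R_EARTH = 6.371e6  # m, mean Earth radius
R_GPS = 2.656e7    # m, GPS orbital radius (~20,200 km altitude)
DAY = 86400        # s

v = math.sqrt(GM / R_GPS)                          # orbital speed, ~3.9 km/s
special = -v**2 / (2 * C**2) * DAY                 # time dilation: ~ -7 us/day
general = GM * (1/R_EARTH - 1/R_GPS) / C**2 * DAY  # gravitational: ~ +46 us/day

print(special * 1e6, general * 1e6, (special + general) * 1e6)  # net ~ +38 us/day
```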

As noted in the section on BIPM and TAI, the various laboratories themselves also use GPS (and GLONASS) for their internal comparisons to help them stay in sync with each other.

Summary

The international standard is based on a weighted average of the standards from 50 different countries, each of which is (in turn) based on a weighted average of a number of separate clocks. The individual clocks are of at least three distinct types (cesium, hydrogen and rubidium).

At least for the US Naval Observatory, the official final output is actually via a hydrogen maser, which is occasionally adjusted to synchronize its current time/frequency with that of the rest of the ensemble.

The unofficial final output used by most people is GPS (or equivalently, GLONASS, etc.) These also include their own atomic clocks, but those are adjusted to maintain synchronization with the ground-based reference clocks.

TAI approximates the SI second about as closely as current technology supports (and will probably be updated when technology improves substantially--though such a substantial change may easily lead to a change in the SI definition of the second as well). Although it's based on measurements, TAI is never really current--it's based on collecting data, averaging it, and then (after the fact) publishing information about how each laboratory's master clock differed from the weighted average of all the clocks.

References

BIPM

USNO Master Clock

USNO Time Scale

2018 group 1 calibration trip

Explanatory Supplement to BIPM Circular T

  • Thank you, I think this is a much better answer than the accepted one. Two questions: (1) how do the 50 countries' laboratories transmit their atomic clock time data to BIPM in France for BIPM to average? It seems to me that correctly accounting for the various signal delays with the required precision would itself be incredibly challenging. – tparker Aug 24 '20 at 01:40
  • (2) Is TAI defined to be "whatever number BIPM releases", and BIPM calculates that number by averaging over the 50 clocks? Or is TAI defined to be "the theoretical instantaneous average of the 50 clocks", and in practice BIPM actually calculates the value, but in principle some other organization could calculate TAI with higher precision than BIPM? – tparker Aug 24 '20 at 01:41
  • See https://www.nist.gov/pml/time-and-frequency-division/nist-time-frequently-asked-questions-faq#tai. "While the stability of TAI is achieved by this weighted average, the accuracy of TAI is derived from data from primary frequency standards". In the end the accuracy of the time scale comes from the Cs primary standards. Also, I wouldn't say TAI is considered an "absolute" standard. TAI provides our most stable and accurate realization of the SI second but it is known to not be absolutely correct according to the definition of the SI second; this would be impossible. – Jagerber48 Aug 24 '20 at 06:14
  • @tparker: I've added some more information about TAI, BIPM, and what they really do. – Jerry Coffin Aug 24 '20 at 06:27
  • @tparker For astronomy, the theoretical absolute time is known as TT. It is, by definition, impossible to know perfectly. TAI is the basis for a realization of TT https://en.wikipedia.org/wiki/Terrestrial_Time#Realization. – John Doty Aug 24 '20 at 16:17
  • You might want to compare against the mise en pratique to check that you haven't missed anything. – JdeBP Aug 24 '20 at 17:53
  • To expand on John Doty's answer to my second question: Geocentric Coordinate Time and Terrestrial Time (which is defined to have a linear dependence on GCT) are idealized theoretical standards defined independently of any human apparatus or organization, which can never be measured exactly. The best current practical realization of TT is TAI, which is defined to be the value published in BIPM's monthly Circular T. – tparker Aug 25 '20 at 01:52
  • FWIW, astronomer Steve Allen has a large collection of articles on time, with a brief history of time scales here. – PM 2Ring Aug 25 '20 at 06:52
  • To the extent that you've claimed the other answer does not state what is the "ultimate" reference (which it does), this post provides even less of an answer. It's great that you talk about ensembles of hydrogen masers and rubidium fountains and so on, but given that you don't even attempt to detail how those clocks are calibrated (hint: as per jberger's answer), this isn't really an answer to the question as posed. – Emilio Pisanty Aug 26 '20 at 14:49

However, if there is no absolute reference frame to measure "real time" for, what is the reference clock that an atomic clock can be measured against?

They are measured against an ensemble of other identically constructed atomic clocks (all at rest with respect to each other and under identical operating conditions). The $10^{-16}$ means that two such clocks will on average drift apart from each other at a rate on the order of a picosecond every few hours.
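
A quick arithmetic check of that statement:

```python
# A fractional frequency difference of 1e-16, accumulated over a few hours,
# amounts to roughly a picosecond of relative offset between the two clocks.
fractional_rate = 1e-16
elapsed = 3 * 3600                 # a few hours, in seconds
print(fractional_rate * elapsed)   # ~1.1e-12 s, i.e. about a picosecond
```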

Dale

what is the reference [...] to measure "real time" [ duration ]

A widely studied general reference for comparing durations is provided within (or: by) the theory of relativity: in terms of (ratios of) arc lengths of path segments of clocks, as each proceeds through a sequence of events "on its timelike path through spacetime".

The duration $\tau[ \, \mathcal A_J, \mathcal A_Q \, ]$ of a material point (participant) $A$, from its indication $A_J$ of having taken part in event $\varepsilon_{AJ}$ (i.e. in coincidence with some other suitable participant $J$), until its indication $A_Q$ of having taken part in event $\varepsilon_{AQ}$ (i.e. in coincidence with some other suitable participant $Q$), is accordingly defined as

$$\tau[ \, \mathcal A_J, \mathcal A_Q \, ] := \text{Infimum} \! \left[ \, \left\{ \, \left( \sum_{k = 0}^{n - 1} \ell[ \, \mathcal A_{(k)}, \mathcal A_{(k + 1)} \, ] \right) \text{with } n \in \mathbb N, \, \mathcal A_{(0)} \equiv \mathcal A_J, \, \mathcal A_{(n)} \equiv \mathcal A_Q \, \right\} \, \right]$$

where $A$'s indications $\mathcal A_{(k)}$ are of its participation in events of its path segment between event $\varepsilon_{AJ}$ (at the beginning) and event $\varepsilon_{AQ}$ (at the conclusion), and the $\ell$ terms represent values of the so-called Lorentzian distance between the respective pairs of events in which $A$ took part.

Note that the infimum is to be taken over all such sums (as opposed to evaluating the supremum when determining arc lengths of spatial path segments) because Lorentzian distances are superadditive by definition.

Those required values $\ell$, or more correctly: at least ratios of those values, can in turn be measured (definitively) by suitably chosen ideal clocks, such as the geometrodynamic clocks proposed by Marzke and Wheeler.
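
As a concrete (if highly simplified) illustration of this definition, here is a sketch in flat spacetime with one spatial dimension and $c = 1$: the Lorentzian intervals of the segments of a fine partition of a timelike worldline are summed, approximating the infimum (the proper time) described above. The function name and example worldline are mine.

```python
import math

def duration_along(worldline, t_start, t_end, n=100_000):
    """Approximate the duration (proper time) along x = worldline(t) by
    summing Lorentzian intervals over a fine partition, with c = 1."""
    dt = (t_end - t_start) / n
    tau = 0.0
    for k in range(n):
        t0 = t_start + k * dt
        dx = worldline(t0 + dt) - worldline(t0)
        tau += math.sqrt(dt**2 - dx**2)  # interval of one short segment
    return tau

# A clock moving at constant speed v = 0.6 accumulates only 0.8 s of
# duration per 1 s of coordinate time (the familiar 1/gamma factor):
print(duration_along(lambda t: 0.6 * t, 0.0, 1.0))  # ~0.8
```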

Is the accuracy of an atomic clock even meaningful?

With the described reference it could be determined (at least in principle)

  • whether a given clock (and especially any given ticking clock, such as an atomic clock) has a constant (tick) rate, or how "its rate" (compared with respect to different pairs of tick indications) varied in suitably extended trials, and

  • whether the separately constant (tick) rates of any two given clocks were equal, or by how much they differed from each other.

But: Is this reference actually used in practice? ...
Apparently not -- clearly it would be awfully cumbersome, laborious, costly, time-consuming and utterly impractical.

However, without using such a rigorous reference, it seems indeed questionable whether we could strictly speak of the accuracy of generic clocks at all, especially considering the possibility of perturbations from unknown sources or reasons, which moreover might not diminish the (mutual) precision of an actually given set of clocks.

user12262