We know that 1 meter is the distance travelled by light in vacuum within a time interval of 1/299,792,458 second. My question is: why didn't we take a simpler number like 1/300,000,000, or why not just 1?

Freddy
  • The metre already existed before people were even thinking about the speed of light; it just had a different definition (see Wikipedia). – Hunter May 10 '14 at 13:46
  • You never, ever want a redefinition to invalidate existing measurements and standards: doing so is simply wrong. The result is that your new definition has to agree with the old one to the best current precision. – dmckee --- ex-moderator kitten May 10 '14 at 22:18
  • @dmckee: How would you then explain the fact that the candela was (shamelessly) redefined to a value close to, but ultimately distinct from, its initial one? – Lucian Mar 23 '18 at 09:38

3 Answers

Because it would have been incredibly expensive.

The current definition of the meter, based on a fixed value of the speed of light, was adopted in 1983, and it replaced the 1960 definition, which was based on the wavelength of a krypton emission line. In essence, light-based precision length metrology had become so accurate that the main source of uncertainty in length measurements was the uncertainty in the speed of light. That is, the speed of light was already determined to be $299\,792\,458\:\rm m/s$, with an uncertainty in the $\pm 1\:\rm m/s$ range and with a large established body of measurements that used it to that precision.

Now, if you're switching to a fixed speed of light, it is indeed tempting to change the number from $299\,792\,458$ to a nice round $300\,000\,000$, since after all they're very close: the ratio $$ \frac{300\,000\,000}{299\,792\,458} \approx 1.00069 $$ is pretty close to unity. So, why didn't we? In short, because the ratio $$ \frac{300\,000\,000}{299\,792\,458} \approx 1.00069 $$ is not close to unity at all. The two definitions differ by $7$ parts in $10^{4}$ (just short of $0.1\%$), which means that every length measurement involving more than three significant figures would have had to be re-calibrated, in pure science as well as in industry. This would have required an enormous effort to re-write a huge fraction of the scientific and engineering literature (including technical manuals and software implementations), as well as actual physical changes to hardware to return measurements to round numbers (e.g. if you manufactured $5\:\rm mm$-long bolts to a $10^{-3}$ relative tolerance, you would need to change your standards or maintain off-by-0.07% non-round-number lengths for your parts).
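
To see just how big that is in practice, here is a minimal Python sketch (my own illustration, not from the original answer) of the ratio and the bolt example:

```python
# Hypothetical scenario: the meter is redefined by fixing c = 300 000 000 m/s.
C_OLD = 299_792_458    # m/s, exact under the actual 1983 definition
C_ROUND = 300_000_000  # m/s, the tempting "round" alternative

ratio = C_ROUND / C_OLD
print(f"ratio = {ratio:.7f}")  # ~1.000692, i.e. ~7 parts in 10^4

# A 5 mm bolt made to a 1e-3 relative tolerance (+/- 5 micrometres):
nominal_old = 5.0                  # mm, in old units
nominal_new = nominal_old * ratio  # the same physical bolt, in new units
shift_um = (nominal_new - nominal_old) * 1000
print(f"nominal length becomes {nominal_new:.5f} mm (shift ~{shift_um:.1f} um)")
# The ~3.5 um shift eats most of the 5 um tolerance band, and the
# nominal length is no longer a round number.
```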

The change would also have similarly affected all measurements of quantities (like force, energy, pressure, and all of electrical metrology) with a nonzero length dimension that involve more than three significant figures. All told, you're talking about a significant fraction (more than half?) of all measurements.
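
As a rough sketch of that point (again my own illustration), the numerical value of a quantity rescales by the ratio raised to the power of length in its dimensions:

```python
ratio = 300_000_000 / 299_792_458  # hypothetical old-to-new conversion factor

# Power of length in each quantity's SI dimensions:
quantities = {
    "length (m)":            1,
    "area (m^2)":            2,
    "force (kg m / s^2)":    1,
    "energy (kg m^2 / s^2)": 2,
    "pressure (kg / m s^2)": -1,
}
for name, power in quantities.items():
    print(f"{name:24s} numerical values rescale by {ratio**power:.7f}")
```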

The role of metrology is to provide a common currency in measurements that can be used for science, engineering, industry and commerce, and to be as transparent as possible. Changing standards is extremely expensive (just ask the countries that switched from imperial to metric, or the ones that haven't because it's too onerous) and the gains need to be clearly worthwhile. Rounding out the speed of light does not come anywhere close to meeting that standard.

Emilio Pisanty

Because $299\,792\,458\ \mathrm{m/s}$ is the speed of light. By using $300\times10^6\ \mathrm{m/s}$ we wouldn't get one meter after $1/(300\times10^6)\ \mathrm{s}$.

We could make the speed of light exactly $300\times10^6\ \mathrm{m/s}$ by changing either the definition of the second or the previous definition of the meter. However, that would be much harder to do, because changing the definition of the second (or the old definition of the meter) would change other units as well.

In short, people had defined the meter as the distance between two marks on a metal rod. Then they decided to redefine it in terms of light in such a way that we get the same meter as before. After all, the speed of light had already been measured with the old definition of the meter.
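
A small sketch (my illustration) of this point: only when the fixed value equals the previously measured speed of light does the light-based definition reproduce the old meter.

```python
c_measured = 299_792_458  # m/s, as measured against the pre-1983 meter

# New meter = distance light travels in 1/c_fixed seconds, in old meters:
for c_fixed in (299_792_458, 300_000_000):
    new_meter = c_measured / c_fixed
    print(f"c fixed at {c_fixed:>11,} m/s -> new meter = {new_meter:.9f} old meters")
# Only c_fixed = 299,792,458 gives exactly 1.000000000 old meters.
```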

MOON

My question is why we didn't take a simpler number, like $1/300\,000\,000$?

Well, the scientists were probably tempted, and one can only speculate as to the number of sleepless nights they spent agonizing over this very issue, but I think it ultimately boiled down to one simple reason, masterfully captured by Segal's insightful witticism:

A man with a watch always knows what time it is. A man with two watches is never sure.

Without denying that $3\cdot10^8$ is indeed a rather round number, please remember that the initial definition of the metre was the ten-millionth part (another round number) of the Earth's quadrant, or half-meridian. Using the data available to us through the 1984 World Geodetic System (WGS 84), on which the current Global Positioning System (GPS) is based, a more appropriate value for the speed of light would have been $$\tilde c = 299\,733\,538\tfrac12\ \mathrm{m\cdot s^{-1}},$$ which guarantees the length of a terrestrial meridian to be as close as humanly possible to its historically intended value of $40\,000$ kilometres. (Currently, it is closer to $40\,008$, and increasing $c$ would only serve to make this difference bigger.)
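
That value can be reproduced in a few lines of Python (a sketch; the meridian length below is a commonly quoted WGS 84 figure and is my assumed input, not a number taken from this answer):

```python
c = 299_792_458              # m/s, exact current value
meridian_km = 40_007.862917  # approx. WGS 84 meridian circumference (assumption)

# Rescale the metre so that the meridian comes out to exactly 40,000 km:
c_tilde = c * 40_000 / meridian_km
print(f"c_tilde = {c_tilde:,.1f} m/s")  # ~299,733,538 m/s, essentially as quoted above
```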

Both (re)definitions, however, suffer from one fatal flaw: the rather unpleasant definition of a second as representing

the duration of $9\,192\,631\,770$ periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the $^{133}\mathrm{Cs}$ atom.

So why not just wipe the whole slate clean, and massively simplify things by redefining the metre as measuring

the distance traveled by light during $30\tfrac{2}{3}$ hyperfine ground-state transition periods of the $^{133}\mathrm{Cs}$ atom?
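
To see what this proposal would imply numerically (my own sketch, using only the exact defining constants quoted above):

```python
from fractions import Fraction

periods_per_second = 9_192_631_770   # Cs-133 hyperfine periods in one second (exact)
periods_per_metre = Fraction(92, 3)  # the proposed 30 2/3 periods per metre

c_implied = Fraction(periods_per_second) / periods_per_metre  # metres per second
print(float(c_implied))                  # ~299,759,731.6 m/s
print(periods_per_second / 299_792_458)  # current metre: ~30.6633 periods
# The round count of 30 2/3 would shift the metre by about 1 part in 10^4,
# precisely the kind of break with existing measurements the top answer rules out.
```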

So, torn between these three equally attractive choices, what were the poor scientists to do? On one hand, all three are enticing; on the other, each also presents a certain flaw. So why even bother modifying the current definition in the first place? Sure, it has its flaws, but, then again, so do all its rival alternatives. And, while they might each have their strengths, the current version also has its advantages, chief among them being its consistency, namely the fact that (almost) no previous measurement has to be changed or amended in any way, as has already been mentioned twice in this very thread.

Lucian