109

A friend and I recently discussed the idea that radioactive decay rates are constant over geological time, an assumption upon which radiometric dating methods are based.

A large number of experiments seem to have shown that decay rate is largely uninfluenced by the environment (temperature, solar activity, etc.). But how do we know that decay rates are constant over billions of years? What if some property of the universe has remained the same over the one hundred years since radioactivity was discovered and measured, but was different one billion years ago?

An unsourced statement on the Wikipedia page on radioactive decay reads:

[A]strophysical observations of the luminosity decays of distant supernovae (which occurred far away so the light has taken a great deal of time to reach us) strongly indicate that unperturbed decay rates have been constant.

Is this true?

I'm interested in verifying the constancy of decay rates over very long periods of time (millions and billions of years). Specifically, I'm not interested in radiocarbon dating or other methods for dating things in the thousands-of-years range. Radiocarbon dates, used for dating organic material younger than 50,000 years, are calibrated and cross-checked against non-radioactive data such as the tree rings of long-lived trees and similarly countable yearly deposits in marine varves, a method of verification that I find convincing and that I am not challenging here.

SRS
  • 26,333
Pertinax
  • 997
  • 39
    Isn't this question in the same vein as questions about whether the fine-structure constant, the cosmological constant, the speed of light, etc. have remained constant over billions of years? With the apparent lack of any strong theoretical argument for why these parameters should be expected to have changed over the past few billion years, and the absence of any experiments or astronomical observations suggesting that these parameters are changing, I suppose that most people just take the Occam's razor approach and assume that these parameters are constant until evidence appears suggesting otherwise. – Samuel Weir May 23 '17 at 19:31
  • 30
    @Samuel I've got nothing against assumptions, but I like to know where they are made. I come from a discipline where people are already regularly telescoping six or seven assumptions without even realising it, justifying each one of them with Occam's razor, and arriving at a conclusion they call the "most likely" that to me sounds little better than "least unlikely". This assumption does seem very likely true, but so much in archaeology rests upon it that I would be happy if it could be grounded on more than parsimony and be observationally confirmed. – Pertinax May 23 '17 at 19:57
  • 4
    Related: https://physics.stackexchange.com/q/48543/50583, https://physics.stackexchange.com/q/7008/50583 (on variability of half-life and non-exponential decay), https://physics.stackexchange.com/q/78684/50583 (on the meaningfulness of the "change" of a dimensionful constant over time) – ACuriousMind May 23 '17 at 21:01
  • 13
    It's a good question! I don't think any of the linked questions really cover it. Decay rates can be derived in principle from the Standard Model coupling constants, and I doubt that they can be changed much without changing basically everything else (e.g. making nuclear fusion go too fast or slow, changing stellar spectra), but I don't know enough to pin it down. – knzhou May 23 '17 at 21:10
  • @TheThunderChimp See for example http://xxx.lanl.gov/abs/astro-ph/9912131 and http://xxx.lanl.gov/abs/astro-ph/9901373 – hdhondt May 24 '17 at 23:24
  • You need to define first how you would measure stability of decay rates over time. You need some standard of time. A clock. What kind of clock would you use? The most precise and stable clocks out there at the moment are atomic ones. But they would keep the rate of decay constant over any period of time almost by definition. What else could you use? A pendulum-based clock? How would you know that the gravitational constant (or indeed the mass) is really invariant over time? You need some kind of clock to check that... – n. m. could be an AI May 29 '17 at 08:08

5 Answers

73

Not an answer to your exact question, but still so closely related that I think it deserves to be mentioned: the Oklo natural nuclear reactor, discovered in 1972 in Gabon (Central Africa). Self-sustaining nuclear fission reactions took place there 1.8 billion years ago. Physicists quickly understood how they could use this as a very precise probe into neutron capture cross sections that far back. A re-analysis of the data [1] was published in 2006, featuring one of the authors of the original papers from the 1970s. The idea is that neutron capture is greatly enhanced when the neutron energy gets close to a resonance of the capturing nucleus. Thus even a slight shift of those resonance energies would have resulted in a dramatically different outcome (a different mix of isotopes left in the reactor). The conclusion of the paper is that those resonances did not change by more than 0.1 eV.
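
To get a feel for why a shift of a fraction of an eV matters so much, here is a minimal sketch (my own illustration, not the paper's calculation): capture near an isolated low-lying resonance roughly follows a Breit-Wigner profile, so the capture rate at thermal neutron energies changes steeply when the resonance moves. The numbers below are rough stand-ins of the order of the relevant $^{149}$Sm resonance, not the actual inputs of [1].

```python
# Illustrative sketch: sensitivity of resonant neutron capture to a small
# shift in the resonance energy. Numbers are rough stand-ins (order of the
# low-lying 149Sm resonance), not the inputs used in the Oklo analysis.
import math

def breit_wigner(E, E_r, gamma):
    """Relative capture cross section near an isolated resonance
    (simple Breit-Wigner shape, overall normalisation dropped)."""
    return (gamma / 2) ** 2 / ((E - E_r) ** 2 + (gamma / 2) ** 2)

E_thermal = 0.025   # eV, typical thermal neutron energy
E_res     = 0.10    # eV, assumed resonance energy
width     = 0.06    # eV, assumed resonance width

for shift in (0.0, 0.05, 0.10):   # hypothetical shifts of the resonance, in eV
    shifted   = breit_wigner(E_thermal, E_res + shift, width)
    unshifted = breit_wigner(E_thermal, E_res, width)
    print(f"shift = {shift:+.2f} eV -> thermal capture changes by factor {shifted / unshifted:.2f}")
```

Even with these crude numbers, a tenth-of-an-eV shift changes the thermal capture rate by a factor of several, which would have left a visibly different isotopic record in the reactor zones.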

It should be noted that the most interesting outcome from the point of view of theoretical physics is that this potential shift can be related to a potential change of the fine-structure constant $\alpha$. The paper concludes that

$$-5.6 \times 10^{-8} < \frac{\delta\alpha}{\alpha} < 6.6 \times 10^{-8}$$

[1] Yu. V. Petrov, A. I. Nazarov, M. S. Onegin, V. Yu. Petrov, and E. G. Sakhnovsky, Natural nuclear reactor at Oklo and variation of fundamental constants: Computation of neutronics of a fresh core, Phys. Rev. C 74, 064610 (2006). https://journals.aps.org/prc/abstract/10.1103/PhysRevC.74.064610

48

The comment Samuel Weir makes on the fine-structure constant is pretty close to an answer. Electromagnetic transition energies, in nuclei as well as in atoms, would change if the fine-structure constant changed over time. Yet spectral data from distant sources indicate no such change: atomic transitions would have shifted energies, and we would observe photons from distant galaxies with different spectral lines.
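
To put a rough number on the spectral-line argument: for hydrogen-like atoms the gross-structure level energies scale as $\alpha^2$, so a fractional change $\delta\alpha/\alpha$ produces a fractional line shift of about $2\,\delta\alpha/\alpha$ (fine-structure splittings are even more sensitive). A minimal sketch, with a purely illustrative value of $\delta\alpha/\alpha$:

```python
# Sketch: how a hypothetical change in the fine-structure constant would
# shift a hydrogen line. Gross-structure energies scale as alpha^2, so the
# fractional shift is ~ 2*(d_alpha/alpha). The chosen value is illustrative.
lyman_alpha_nm = 121.567            # rest wavelength of hydrogen Lyman-alpha
d_alpha_over_alpha = 1e-5           # hypothetical fractional change in alpha

fractional_shift = 2 * d_alpha_over_alpha
shift_pm = lyman_alpha_nm * fractional_shift * 1e3   # nm -> pm
print(f"fractional line shift ~ {fractional_shift:.1e}")
print(f"Lyman-alpha would move by ~ {shift_pm:.2f} pm")
```

Quasar absorption-line studies look for exactly this kind of differential shift between transitions, and they constrain $|\delta\alpha/\alpha|$ at roughly the $10^{-5}$ level or better over billions of years of look-back time.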

For the weak and strong nuclear interactions, the answer is more nuanced. For the strong interaction we have more of an anchor: if its coupling constant changed, stellar astrophysics would be affected, and stars in the distant universe would be considerably different from stars today. Again, observations of distant stars indicate no such drastic change. For the weak interaction, things are more difficult.

A lot of nuclear decay proceeds by the weak interaction, with the production of $\beta$ radiation as electrons and positrons. Young-Earth creationists might argue that the rate of weak interactions was considerably larger in the recent past, giving the appearance of more daughter products than today's rates would produce, and hence the appearance of a great age that is not there. The problem is that if the carbon-dating decay process $$ {}^{14}_{6}\mathrm{C}~\rightarrow~{}^{14}_{7}\mathrm{N}~+~e^-~+~\bar\nu_e $$ had changed over the last $6000$ years, a favorite timescale for creationists, there would be deviations between carbon dating and the historical record.
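
One can put a number on the size of that deviation: the inferred radiocarbon age depends directly on the assumed decay constant, so a past change in the weak decay rate would show up as a systematic offset against tree-ring or historically dated samples. A minimal sketch, with a purely illustrative 10% change in the decay constant:

```python
# Sketch: how a changed 14C decay constant would distort inferred ages.
# A sample of true age t retains N/N0 = exp(-lambda_past * t); dating it
# with today's lambda gives a different apparent age. The 10% change and
# the 3000-year sample age are illustrative.
import math

T_HALF_C14 = 5730.0                       # years, present-day 14C half-life
lam_today  = math.log(2) / T_HALF_C14
lam_past   = 1.10 * lam_today             # hypothetical: 10% faster decay in the past

true_age      = 3000.0                    # years, e.g. a historically dated sample
fraction_left = math.exp(-lam_past * true_age)        # 14C actually remaining
apparent_age  = -math.log(fraction_left) / lam_today  # age inferred with today's lambda

print(f"true age     : {true_age:.0f} yr")
print(f"apparent age : {apparent_age:.0f} yr (offset {apparent_age - true_age:+.0f} yr)")
```

A systematic offset of a few hundred years at an age of a few thousand years would stand out immediately against tree-ring calibration, which is good to decades or better over that range.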

None of this is proof really, but it does fall in line with Bertrand Russell's idea of a teapot orbiting Jupiter.

SRS
  • 26,333
  • 3
    The "Teapot orbiting Jupiter" seems a very weak response to this. That is a response for proposals that are (currently) completely unobservable, hence both unverifiable and unfalsifiable. Having provided hints about how we actually can observe indirect effects of radioactive decay rates elsewhere (and elsewhen), don't undermine that limited observability by likening it to Russell's proposition which, by design, is thoroughly undecidable. – Steve Jessop May 25 '17 at 11:04
  • 2
    Of course ignoring the hypothetical possibility of changes from a misapplication of Occam is even worse. We know that many kinds of particle behaviour at very high energies are markedly different from low energies, and hence different at very early epochs of the universe. Physicists should and do seek evidence one way or the other for whether things change, and if so what, how, why. There's a difference between looking and not finding, vs. not looking, and the situation here is the former. "Nothing to see here, move along" only needs to be deployed when you're actually hiding something ;-) – Steve Jessop May 25 '17 at 11:15
  • Comments are not for extended discussion; this conversation has been moved to chat. – ACuriousMind May 25 '17 at 14:15
  • 1
    You may wish to qualify "creationists" as "young Earth creationists." – jpmc26 May 25 '17 at 23:25
  • Having once argued the position, I can say this makes absolutely no attempt to answer the Young Earth Creationist claim. That claim's nature is a sudden change of rate around either the flood or the event around the time of Peleg. – Joshua May 26 '17 at 21:34
34

There are various questions that one would have to answer if one wished to claim that there had been large changes in decay rates over geological time. Here is what I think might be the best experiment to test this claim.

Without using radiological evidence, one can deduce that the Earth is at least a billion years old by counting annual sedimentation layers and measuring thicknesses of rock strata, cross-correlating between them by the presence of identical or near-identical fossil species. This is what Victorian geologists did, leading to the only case I know of where geology beat physics at deducing the truth. The physicists asserted that the world could not be much older than 50 million years, because no known chemical process could keep the Sun hot for longer than that. The geologists insisted on at least a billion years, and that if it wasn't chemistry, something else must be powering the Sun. They were right: the Sun shines by then-unknown nuclear fusion, not chemistry. BTW, it's "at least" because it is hard to find sedimentary rocks more than a billion years old, and such rocks do not contain helpful fossils. Tectonic activity has erased most evidence of Precambrian ages ... except for zircons, but I'm jumping ahead.

Now, jump forward to today, when we can do isotopic microanalysis of uranium and lead inside zircon (zirconium silicate) crystals. (Skip to the next paragraph if you know about radio-dating zircons.) Zircon has several unique properties. An extremely high melting point. Extreme hardness, greater than quartz. High density. Omnipresence (zirconium in melted rock always crystallizes into zircons as the melt cools, before any other minerals crystallize at all). And most importantly, a very tight crystal structure, which cannot accommodate most other elements as impurities at formation. The main exception is uranium. The only way that lead can get into a zircon crystal is if it started as uranium, which decays into lead after the crystal has solidified from a melt. That uranium comes in two isotopes with different decay times, and each decay chain ends with a different lead isotope. By measuring the relative concentrations of two lead and two uranium isotopes in a zircon, you can deduce the time since it formed using two different "clocks". These zircons are typically the size of grains of sand, so a rock sample will contain millions of independent "clocks", which allows for good statistical analysis.
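
For concreteness, here is how the two clocks are read off in the simplest case (assuming no initial lead and a closed system, and with made-up isotope ratios): each uranium isotope decays as $N(t)~=~N_0 e^{-\lambda t}$, so the measured daughter-to-parent ratio gives the age directly.

```python
# Sketch: reading the two independent U-Pb "clocks" in a zircon, assuming
# no initial lead and a closed system. The measured ratios are made up.
import math

T_HALF_U238 = 4.468e9   # yr, half-life of 238U (decay chain ends at 206Pb)
T_HALF_U235 = 7.04e8    # yr, half-life of 235U (decay chain ends at 207Pb)

def age_from_ratio(daughter_over_parent, t_half):
    """Closed-system age: D/P = exp(lambda*t) - 1  =>  t = ln(1 + D/P) / lambda."""
    lam = math.log(2) / t_half
    return math.log(1.0 + daughter_over_parent) / lam

pb206_u238 = 0.30       # hypothetical measured 206Pb/238U ratio
pb207_u235 = 4.33       # hypothetical measured 207Pb/235U ratio

t1 = age_from_ratio(pb206_u238, T_HALF_U238)
t2 = age_from_ratio(pb207_u235, T_HALF_U235)
print(f"238U -> 206Pb clock: {t1/1e9:.2f} Gyr")
print(f"235U -> 207Pb clock: {t2/1e9:.2f} Gyr")
# Concordant zircons give the same age from both clocks; a change in the
# decay physics would generally have pushed the two clocks apart.
```

With these (made-up but roughly concordant) ratios both clocks read about 1.7 Gyr; the real analysis also has to handle lead loss and inherited cores, which is what concordia diagrams are for.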

So, let's find some zircons in an igneous intrusion into a sedimentary rock whose age we know, roughly, from Victorian geology. It's best if the igneous rock is one which formed at great depth, where all pre-existing zircons would have dissolved back into the melt. The presence of high-pressure metastable minerals such as diamond or olivine would allow us to deduce this, and the fact that all the zircons have the same uranium-to-lead ratios would confirm the deduction; otherwise one would expect to find a mix of young and older zircons. Choose the youngest, which would have crystallized at the time of the intrusion, rather than having been recycled by tectonic activity from an older time (which in many cases is the primaeval solidification of the Earth's crust, and the best estimate of the age of our planet, but that's not relevant here).

Now, compare the age deduced from radioactive decay to the less accurate age from Victorian geology. If the rate of radioactive decay has changed greatly over geological deep time, there will be a disagreement between these two estimated ages. Furthermore, the disagreement will be different for intrusions of different ages (as judged by Victorian geology), but consistent for intrusions of similar age in different locations.

Look for locations where there is a sedimentary rock with intrusion, covered by a younger sedimentary rock without intrusion, meaning that the age of the intrusion can be deduced to be between that of the two sedimentary strata. The closer the age of the two sedimentary strata, the better.

I do not know if this has been done (I'd certainly hope so). Any serious proponent of time-varying radioactive decay needs to research this. If nobody has looked, get out in the field, find those discrepancies, and publish. It might lead to a Nobel prize if he is right. The onus is certainly on him to do this, because otherwise Occam's razor applies to this theory.

Back to the physics. If this observation fails to uncover strong evidence that radioactive decay rates do vary with time, I'd ask another question: how come the $^{238}$U and $^{235}$U "clocks" in zircons always agree? Radioactive decay is basically quantum tunnelling across a potential barrier, and the half-life depends exponentially on the height of the barrier. Any proposed time variation would mean that the height of this barrier varied in deep time in such a way that the relative rate of $^{235}$U and $^{238}$U decay did not change. Which is a big ask of any such theory, given the exponential sensitivity to changes.
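
To see how punishing that exponential sensitivity is, model each half-life as $t_{1/2}~\sim~t_0 e^{G}$, with $t_0$ a nuclear timescale and $G$ the tunnelling (Gamow) exponent, and ask what a modest uniform change of the exponent would do. A back-of-the-envelope sketch, with $t_0~\sim~10^{-21}$ s as an assumed pre-exponential timescale:

```python
# Sketch: exponential sensitivity of alpha-decay half-lives to the barrier.
# Model t_half ~ t0 * exp(G); t0 ~ 1e-21 s is an assumed nuclear timescale,
# so G is fixed here by today's measured half-lives.
import math

SEC_PER_YR = 3.156e7
t0 = 1e-21                                   # s, assumed pre-exponential timescale

t238 = 4.468e9 * SEC_PER_YR                  # present 238U half-life, seconds
t235 = 7.04e8  * SEC_PER_YR                  # present 235U half-life, seconds

G238 = math.log(t238 / t0)                   # inferred tunnelling exponents
G235 = math.log(t235 / t0)

for change in (0.01, 0.02):                  # uniform fractional change of the exponents
    f238 = math.exp(change * G238)           # factor by which each half-life moves
    f235 = math.exp(change * G235)
    print(f"{change:.0%} change: t(238U) x {f238:.1f}, t(235U) x {f235:.1f}, "
          f"clock ratio shifts by x {f238 / f235:.2f}")
```

Even a 1% tweak of the exponent rescales both half-lives by a factor of a few (which other chronometers would notice) and, because the two exponents are not equal, also shifts the $^{235}$U/$^{238}$U clock ratio at the percent level, which careful U-Pb work should be able to detect.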

nigel222
  • 690
  • 1
    Great answer, I very much appreciate the "how to test" approach, and the idea of counting sedimentary layers to cross-check the radio-dates seems like a good one, especially since this dating method was used as long ago as Victorian times (I find this of historical interest; any nineteenth-century sources on this? Did anyone actually manually count to one billion?). @DavidHammen suggests that some cross-checking has already been done; do you (or he) have any sources on this? – Pertinax May 24 '17 at 16:36
  • RE U235-U238: Would a change of the, for instance, weak interaction be expected to change the relative rate? – Pertinax May 24 '17 at 16:45
  • @TheThunderChimp you can download Sir Charles Lyell's "Principles of Geology" for free from Amazon Kindle or public-domain sources. It's a seriously weighty tome and he lacked Darwin's gift for the English language. But it's interesting to dip into, to find the state of Victorian geology. – nigel222 May 24 '17 at 17:45
  • 2
    Re relative decay rates: it might be possible to formulate a theory which kept the relative decay rates of U235 and U238 the same while varying both. My instincts tell me that this would be hard (especially when other longlived isotopes are also checked). – nigel222 May 24 '17 at 17:51
  • There's also a lot of good evidence from the Oklo natural nuclear reactor, cited in Luc J Bourhis's answer. – nigel222 May 24 '17 at 17:54
  • 3
    The last paragraph, if I understand it, is actually an excellent point all on its own, because it means that changes to fundamental constants would not produce proportional changes in decay rates. That alone should provide all the basis needed to refute any significantly shorter-timeline hypothesis. – RBarryYoung May 24 '17 at 20:20
31

The basic point here is that we don't "know" anything about "the real world". All we have is a model of the world, and some measure of how well the model matches what we observe.

Of course, you can construct an entirely consistent model which says "an invisible, unobservable entity created everything I have ever observed one second before I was born, and made it appear to be much older for reasons that cannot be understood by humans". But as Newton wrote in the General Scholium of the Principia, hypotheses non fingo - don't invent theories just for the sake of inventing them.

Actually, one of the examples Newton gave to illustrate that point was spectacularly wrong - he used his general principle to conclude that the Sun gives off light and heat by the same chemical reactions as a coal fire on Earth - but that's not the point: given the limited experimental knowledge that he had, he didn't need a different hypothesis about the Sun to explain what was known about it.

So, the situation between you and your friend is actually the other way round. You (and all conventional physicists) have a model of the universe which assumes these constants don't change over time, and it fits very well with experimental observations. If your friend wants to claim they do change, the onus is on him/her to find some observable fact(s) which can't be explained in any other way - and also to show that his/her new hypothesis doesn't mess up the explanations of anything else.

As some of the comments have stated, if you start tinkering with the values of the fundamental constants in the Standard Model of particle physics, you are likely to create an alternative model of the universe which doesn't match up with observations on a very large scale - not just over the dating of a few terrestrial fossils.

The "big picture" approach is critically important here. You can certainly make the argument that finding a fossil fish on the top of a high mountain means there must have been a global flood at some point in history - but once you have a global model of plate tectonics, you don't need to consider that fossilized fish as a special case any more!

alephzero
  • 10,129
  • 13
    I don't think this gets to the heart of the question: what exactly would go wrong if a coupling constant changed? This isn't a crazy idea, as many of them did change in the early universe. We don't "need" to prove this, but we should easily be able to. – knzhou May 23 '17 at 22:04
  • 9
    I think this is ultimately not the right answer. Physicists' belief that the fundamental constants involved haven't changed is not an a priori deduction from Ockham's razor but an a posteriori hypothesis resulting from many independent lines of evidence, including measurements and modelling, as the other answers detail. – N. Virgo May 24 '17 at 06:01
3

I thought I would include something on how coupling constants and masses vary. This might be a bit off topic, and I thought about asking a question that I would answer myself. Anyway here goes.

We have a number of quantities in the universe that are related to each other by fundamental constants. The first two of these are time and space, which are related to each other by the speed of light, $x~=~ct$. The speed of light is something I will consider to be absolutely fundamental: in the right units it really is one light-second per second, i.e. one. The speed of light defines light cones that are projective subspaces of Minkowski spacetime. Minkowski spacetime can then be thought of as a fibration over the projective space given by the light cone. The other fundamental quantity that relates physical properties is the Planck constant $h$, or $\hbar~=~h/2\pi$. This is seen in $\vec p~=~\hbar\vec k$ with $\vec k~=~2\pi\hat k/\lambda$, which relates momentum and wavelength, and is also seen in the uncertainty principle $\Delta p\Delta x~\ge~\hbar/2$. The uncertainty principle can be stated according to the Fubini-Study metric, which is a fibration from Hilbert space to projective Hilbert space. These two systems share remarkably similar structure when seen this way. I will then take as a postulate that $c$ and $\hbar$ are absolutely constant, and since momentum is reciprocal length, in natural units the Planck constant is length per length and is unitless.

There are other constants in nature, such as the electric charge. The important constant most often cited is the fine-structure constant $$ \alpha~=~\frac{e^2}{4\pi\epsilon\hbar c}~\simeq~1/137. $$ This constant is absolutely unitless; in any system of units it has no units. In any system of units $e^2/4\pi\epsilon$ has the units of $\hbar c$, which in MKS units is joule-metres (J·m). However, we know from renormalization that $e~\rightarrow~e_0~+~\delta e$ is a correction with $\delta e~\sim~1/\delta^2$, for $\delta~=~1/\Lambda$ the cutoff in spatial scale for a propagator or the evaluation of a Feynman diagram. This means the fine-structure constant can change with scattering energy, and at the TeV energies of the LHC $\alpha'~\sim~1/127$. We have of course the strong and weak interactions, and we can well enough say there are coupling constants $e_s$ and $e_w$ and analogues of the dielectric constant, $\epsilon_s$ and $\epsilon_w$, so there are the fine-structure constants $$ \alpha_s~=~\frac{e_s^2}{4\pi\epsilon_s\hbar c}~\simeq~1,\qquad \alpha_w~=~\frac{e_w^2}{4\pi\epsilon_w\hbar c}~\simeq~10^{-5}. $$ Most often these coupling constants are written $g_s$ and $g_w$. These two have renormalizations $g_s~=~g^0_s~+~\delta g_s$ and $g_w~=~g^0_w~+~\delta g_w$; this runs into the hierarchy problem and the question of how coupling constants vary.
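
Since the running of $\alpha$ with energy is the key point here, a minimal sketch may help. This is the leading-log, one-loop QED running from the electron vacuum-polarisation loop alone; the full Standard Model running includes every charged fermion and is what brings $1/\alpha$ down toward the $\sim 127$ quoted above at collider energies.

```python
# Sketch: one-loop, leading-log QED running of alpha from the electron
# loop only. The full Standard Model result sums all charged fermions
# and runs faster (1/alpha ~ 128 near the Z pole).
import math

ALPHA_0 = 1 / 137.036          # fine-structure constant at low momentum transfer
M_E     = 0.000511             # electron mass in GeV

def alpha_running(Q_GeV):
    """alpha(Q^2) = alpha0 / (1 - (alpha0/3pi) * ln(Q^2/m_e^2)), valid for Q >> m_e."""
    log_term = math.log(Q_GeV**2 / M_E**2)
    return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi)) * log_term)

for Q in (1.0, 91.19, 1000.0):     # GeV: ~1 GeV, the Z pole, ~1 TeV
    print(f"Q = {Q:7.2f} GeV  ->  1/alpha ~ {1 / alpha_running(Q):.1f}")
```

The point of the sketch is the one made in the next paragraph: the coupling runs with the momentum scale of the process, not with cosmic time.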

What is clear is that gauge coupling constants vary with momentum. They do not vary with time; by $x~=~ct$, or more generally by Lorentz boosts, a variation of the gauge fields with time would also imply a variation with spatial distance. So far there is no observation or data showing such variation in the radiation emitted from the very distant universe.

What about gravitation and mass? We do have mass renormalization, $m~\rightarrow~m~+~\delta m$. This means the mass of a particle can be renormalized at higher energy, and more: the terms due to vacuum-energy contributions that renormalize a bare particle mass must add up and cancel to give the mass we observe. Again this happens with momentum. For the Higgs field the self-interaction is due to the $\lambda\phi^4$ term. Technically this means there is a mass renormalization term $\sim~\lambda/\delta^2~=~\lambda\Lambda^2$, for $\delta$ a small region around the point of the $4$-point interaction, where we have smeared it out into some small ball or disk of radius $\delta$; $\Lambda$ is the corresponding momentum cutoff. We have similar physics for other fields, though fermions have subtle sign issues.

I used the Higgs field because I think there is a deep relationship between gravitation and the Higgs field. From this I am going to compute what I think is the appropriate $\alpha_{grav}$. We can compute the ratio of the Compton wavelength $\lambda~=~h/M_Hc$ and the gravitational radius $r~=~2GM_H/c^2$ of a Higgs particle, with mass $M_H~=~125~GeV~=~2.2\times 10^{-25}~kg$. This means $$ \alpha_g~=~\frac{4\pi GM_H^2}{\hbar c}~=~4\pi\left(\frac{M_H}{M_p}\right)^2~=~1.3\times 10^{-33}, $$ where $M_p$ is the Planck mass. This constant is then connected to the mass of all elementary particles. The renormalization of the Higgs mass determines the mass of all other particles.
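
As a quick numerical cross-check of that value (a sketch only; tying $\alpha_{grav}$ to the Higgs mass is this answer's own choice of convention):

```python
# Quick check of alpha_grav = 4*pi*G*M_H^2/(hbar*c) for a ~125 GeV Higgs mass,
# and of the equivalent form 4*pi*(M_H/M_Planck)^2.
import math

G    = 6.674e-11      # m^3 kg^-1 s^-2
HBAR = 1.0546e-34     # J s
C    = 2.998e8        # m / s
M_H  = 2.23e-25       # kg, roughly 125 GeV/c^2

alpha_grav = 4 * math.pi * G * M_H**2 / (HBAR * C)
m_planck   = math.sqrt(HBAR * C / G)                  # ~2.18e-8 kg

print(f"alpha_grav            ~ {alpha_grav:.2e}")                        # ~1.3e-33
print(f"4*pi*(M_H/M_Planck)^2 ~ {4 * math.pi * (M_H / m_planck)**2:.2e}")
```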

There is then no indication of any variation of particle masses or coupling constants with time. They all depend on momenta, and the large number of Feynman-diagram terms at various orders add and cancel to give the observed masses. With supersymmetry this would be made somewhat simpler by the cancellation of many diagrams.