3

I read on Wikipedia:

Quantum mechanics predicts that certain physical phenomena, such as the nuclear decay of atoms, are fundamentally random and cannot, in principle, be predicted.

What does that mean exactly? I thought nothing could be predicted with arbitrary precision anyway. Yet we still often model physical phenomena as following some statistical distribution.

Does the above perhaps imply that nuclear decay is (more) uniformly random than other physical phenomena?

Or perhaps that it is statistically more independent, in terms of its Markov blanket, than other physical phenomena? I.e., less predictable than other physical phenomena, given other knowledge?

glS
  • 14,271
Josh
  • 397
  • 1
    flipping a coin is deterministic randomness; it looks random because we cannot predict all the details of the dynamics, which are chaotic. In quantum mechanics this is not the case (according to most interpretations): the randomness is inherent, not due to a lack of knowledge. There is no underlying reason why a particle decays now and not later. –  Jul 18 '20 at 15:45
  • 2
    A very recent answer of mine addresses a similar question: https://physics.stackexchange.com/q/566360/ –  Jul 18 '20 at 16:03

3 Answers

7

When people talk about "fundamental" or "inherent" randomness in the context of quantum mechanics, the technical meaning behind this is Bell's theorem, which tells us that there are no local hidden variable theories explaining the results of quantum mechanics.

A "local hidden variable" theory is basically the classical idea of how the world works - everything has a list of well-defined properties, like position or momentum, and there is a "true" precise value for each of these at each time, and the laws of physics in principle determine the precise value at each other time from those at one instant. "Randomness" in this classical world is incidental, arising from incomplete knowledge, imperfect measurement devices, etc. When you flip a classical coin in the exact same way, it will always yield the same result. The "randomness" is just because humans are extremely bad at the level of consistency required to flip it "in the same way" again. The belief that there is a definite value for each property at all times is also called realism.

Bell's theorem says that quantum mechanics is incompatible with local hidden variable theories. No such theory can ever predict the results that we do, in fact, observe. (Hunting for and closing "loopholes" in our experiments that might make it possible to argue we don't actually observe the violations of the Bell inequalities that rule out local hidden variable theories is a somewhat active niche I won't get into here.)

So "fundamental randomness" really is supposed to mean "no hidden variables" - before you measured a particle's momentum, it didn't have a definite one. The quantum state is not a list of numbers with definite values for properties we can measure, it is merely a list of probabilities. To say this is "fundamental" is to say that it is impossible to explain these probabilities as just arising from our lack of knowledge of some underlying definite variables, i.e. it is the content of Bell's theorem. The claim is that the uncertainties and probabilities of quantum mechanics are really features of the world, not features of our inability to comprehend it.

For completeness, let me mention that Bell's theorem gives you a way to preserve belief in hidden variables - instead of abandoning realism you can choose to give up locality, roughly speaking the notion that things cannot instantaneously affect the state of other things separated from them in space. This is what Bohmian mechanics does, but it is far from being the dominant viewpoint among physicists. Although there is a plethora of different quantum interpretations, which are effectively ontological frameworks trying to explain how to think about a world that is not classical and mechanistic, most of them choose locality and abandon realism - which is why you'll often hear that "quantum mechanics says the world is fundamentally random".

ACuriousMind
  • 124,833
  • Thanks - Although I'm not sure I follow: "Randomness" in this classical world is incidental, arising from incomplete knowledge, imperfect measurement devices, etc. Sure, but one could argue that we don't need to know where uncertainty comes from (hidden physical quantities that we can measure or not) to still establish a belief (in a Bayesian sense) and observe, measure our accuracy, and assess prediction hardness, right? In other words, accuracy is all there is, even if we accept that there are no definite unobserved/underlying quantities at play, no? – Josh Jul 18 '20 at 20:47
  • 1
    @Josh I'm saying that the "fundamental randomness" of quantum mechanics has nothing to do with a difference in the degree of accuracy! In fact, what "accuracy" even means is tightly linked to whether or not you believe the world is realist! In a realist world, there is a true value and "accuracy" is the deviation from that value. In the probabilistic quantum world, there is a "true" list of probabilities and accuracy is how closely our prediction of these probabilities matches the true probabilities. QM is not "less accurate" than classical mechanics. – ACuriousMind Jul 18 '20 at 20:50
  • +1 Thanks - I should probably dive deeper into what you wrote - I appreciate it. The statement "QM is not 'less accurate' than classical mechanics" is quite eye-opening for me (even if using a basic notion of accuracy in terms of how frequently a theory correctly predicts the quantities in consideration). – Josh Jul 18 '20 at 20:54
  • I think that this answer could be a little more careful about conflating “hidden variables” with “hidden variables as sufficient to explain observations.” If I’m reading correctly, Bell’s theorem doesn’t say there aren’t hidden variables, just that no number of hidden variables is going to explain observed phenomena so they can’t (alone) be our answer here (the way quantum mechanics can). Because surely, there are hidden variables; even if the “value” of those hidden variables are a probability distribution, it’s not as though we necessarily know that probability distribution. – KRyan Dec 17 '21 at 20:53
  • The opening paragraph is very good at being clear about this, but some of the later lines start to conflate the two, e.g. “So ‘fundamental randomness’ really is supposed to mean ‘no hidden variables’” and “a way to preserve belief in hidden variables.” Fundamentally (again, unless I misunderstand), it doesn’t mean “no hidden variables,” it means “hidden variables are not a complete explanation,” and likewise, our “belief in hidden variables” isn’t lacking in preservation; what some might seek to preserve is rather a “belief that hidden variables can fully explain observations.” – KRyan Dec 17 '21 at 20:56
1

In your case of radioactive decay, it means that the decay times in a sample of radioactive material occur completely randomly. The sample will contain a great many radioactive nuclei. When a single nucleus decays is random: decay may occur early or late, and there is no way to predict which. After measuring for some interval of x seconds, you'll find that some decays from the sample were early and some were late. The decay history of the sample will then have been determined. After the fact, that is, after the random decays have been measured, we can calculate properties like the half-life and lifetime. While a second measurement will have the same statistical properties, the actual decay times cannot be predicted because they are random.
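
As a rough illustration (a minimal sketch, not part of the answer above; the decay constant and sample size are made-up assumptions), one can simulate such a sample and see that the individual decay times scatter unpredictably while the half-life extracted from the whole sample is reproducible from run to run:

```python
# Minimal sketch: exponential decay times for a sample of identical nuclei.
# The decay constant `lam` and the sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
lam = 0.1           # assumed decay constant in 1/s (hypothetical value)
n_nuclei = 100_000  # number of nuclei in the sample

for run in (1, 2):
    # each nucleus decays at a random time drawn from an exponential distribution
    decay_times = rng.exponential(scale=1.0 / lam, size=n_nuclei)
    # the time by which half of the sample has decayed estimates the half-life
    half_life_est = np.median(decay_times)
    print(f"run {run}: first decay times {np.round(decay_times[:5], 1)} s, "
          f"half-life estimate {half_life_est:.2f} s "
          f"(theory ln(2)/lam = {np.log(2) / lam:.2f} s)")
```

The first few decay times look completely different from run to run, but the half-life estimate comes out essentially the same - which is the sense in which the ensemble properties are predictable while the individual times are not.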

Natsfan
  • 2,652
  • Thanks @jmh not to sound pedantic here, but when you say "decay may occur early or late, there is no way to predict which", shouldn't we rephrase that as "we don't know how to predict which, and are thus not able to make accurate statistical predictions about them"? If so, isn't it fair to say that mathematically, we have more unexplained random variation in these than in other observed quantities? (i.e. in a Bayesian sense, "more random" simply means more entropy in our best predictive distribution over their values, but nothing is "fundamentally" random). – Josh Jul 18 '20 at 16:06
  • Or are you saying that we already know with full certainty that there is no information that we could ever gather to make a better prediction beyond a fully random guess? – Josh Jul 18 '20 at 16:09
  • when I say early or late I'm referring to a grand scale from zero to infinity, bounded by the length of your measurement interval. After an initial measurement we can predict things like the half-life, but not the time of decay. That is random. No matter how well you know the properties of the particular isotope, you can never predict the time of decay. You may be able to predict whether a decay occurs in some interval, but not the time of the decay. Does this address your question, or have I misunderstood it? – Natsfan Jul 18 '20 at 16:22
  • Thanks, I think it does. It sounds like "fundamental" randomness simply refers to our inability to make relatively certain predictions about these quantities, but this designation seems vague and somewhat arbitrary. If not, how would you define fundamental randomness? – Josh Jul 18 '20 at 16:30
0

In classical mechanics, one can theoretically predict a trajectory for everything, and only measurement errors enter in practical measurements. When the numbers become very large, as in a gas, the assumption in classical physics is still that if one had the ability to gather that much data, everything could be calculated predictably.

In quantum mechanics, due to the probabilistic wave function postulate (second page), it is inherently impossible to predict a single event's (x, y, z, t). Only the accumulation of measurements can be predicted. This is evident in the double-slit experiments done one electron at a time; see this.

Nuclear decay lifetimes are predicted by quantum mechanics, i.e. from an accumulation of similar events. Individual events are random, with the probability weighted by the wave function that describes the event.
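
Schematically (these are the standard textbook relations, added here only for illustration), the ensemble-level prediction is the exponential survival law, while a single nucleus only ever has a decay probability:

```latex
N(t) = N_0\, e^{-\lambda t}, \qquad
p(t)\,dt = \lambda e^{-\lambda t}\,dt, \qquad
t_{1/2} = \frac{\ln 2}{\lambda}
```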

anna v
  • 233,453
  • Thx Anna, but from a statistical perspective, they are both random (unknown, not deterministic). It just happens that we are able to predict the latter more accurately and w/ greater certainty than the former, given other information, correct? If so, this designation is artificial (nothing is technically "fundamentally" random, it's just degrees of certainty, and how much and what type of additional information we could gather to increase our certainty). Whether or not a particular theory of physics is deterministic or provides statistical guarantees is orthogonal to this Q, isn't it? – Josh Jul 18 '20 at 16:14
  • @Josh, I do not think it is orthogonal. If you ask me to predict the trajectory of a rocket, given initial conditions, I can do it, and the errors can be predicted too. If you ask me to predict which nucleus is going to decay (or if a given nucleus will decay), it is not possible to give an answer. – anna v Jul 18 '20 at 16:18
  • Thanks, the way I see it is that we have naturally developed more (or fully) deterministic theories for quantities that we are able to observe with very high precision, but "the cosmos" obviously doesn't know anything about classical vs quantum mechanics. So yes, our scientific theories naturally try to match the epistemological challenges we have faced for different types of physical phenomena and our ability to make accurate predictions about diff quantities. Or are you saying that the universe truly exposes and manifests two types of physical phenomena? (one fundamentally random and one not?) – Josh Jul 18 '20 at 16:24
  • @Josh At the moment the microcosm is fundamentally random, but for dimensions larger than h_bar determinism emerges, with Newton's etc. equations. – anna v Jul 18 '20 at 16:58