
I was reading about superdeterminism and found it a bit counter-intuitive. The idea of having a hidden variable in the measurement device seems very reasonable to me. For example, if we shine light on a bound electron, as in a hydrogen atom, only photons of certain frequencies and polarizations can interact with it. Likewise, when we have a spin pair under some constraint, e.g.:

$$ \hat S_A + \hat S_B = 0 $$

Not all photons can interact with, say, Bob's particle. That is what I thought was the reason we need a non-uniform field in Stern-Gerlach-like experiments.

I was expecting the polarization of the interacting photon, in the plane perpendicular to the Stern-Gerlach axis, to be the hidden variable, but the superdeterminism page said nothing about this. It was more about philosophical concepts like free will, and about the measurement angle itself as a hidden variable.

Is there any classification of superdeterministic theories? Or is only global superdeterminism possible, and is my intuition about a local hidden variable in the measurement device, caused by non-uniform fields, wrong? Are there any Bell tests that take this into account?

I am not a native English speaker nor a physicist, so please edit my question to make it more accurate.

Qmechanic
  • Super-determinism tries to "solve" philosophical problems. It doesn't solve a single physics problem, nor does it make new predictions that would differ from standard QM. It's not even science because it is not testable. You can disregard it as a physical hypothesis. Hidden variables don't solve "measurement", which is an irreversible process that destroys the quantum system. After a measurement the QS has more or less energy than before. That energy went to/came from the measurement system. That is why it is impossible to fold the measurement into the free system dynamics. – FlatterMann Apr 19 '23 at 18:46
  • Superdeterminism has no scientific interest whatsoever. It only appears as a loophole because Bell is being excessively rigorous in his analysis (where it is nothing more than a reductio ad absurdum argument, and definitely not a theoretical perspective of any kind). – Stéphane Rollandin Apr 19 '23 at 19:27
  • Do you know Bell's theorem? You may need to check its derivation and see where superdeterminism can enter the game. Note that you can also have hidden variables without superdeterminism (nonlocal ones). Anyway, superdeterminism seems unfalsifiable; it does not impose any conditions or useful ideas on how physics works and treats all correlations as red herrings. – Mauricio Apr 19 '23 at 19:32
  • Bell is a problem in its own right. There is a rule in mathematics that one has to prove the existence of the elements of a set, first, before one can make statements about the set because the expression (A and not A) is always true for the empty set for an arbitrary property A. Bell makes a statement about hidden variable theories... but he doesn't prove that hidden variable theories actually exist. To the best of my knowledge nobody has ever been able to demonstrate that the set of hidden variable theories is not empty. To a mathematician that's basically the end for Bell's theorem. – FlatterMann Apr 19 '23 at 21:21
  • related: https://physics.stackexchange.com/q/651029/58382 – glS Apr 21 '23 at 16:05
  • @FlatterMann Mathematically eq (2) of article "On the Einstein-Podolsky-Rosen paradox" is already not rigorous: $$P(\vec{a},\vec{b})=\int d\lambda A(\vec{a},\lambda)B(\vec{b},\lambda)\rho(\lambda)$$ : it is a primitive, hence the lhs should read $$P(\vec{a},\vec{b},\lambda)$$ – QuantumPotatoïd Dec 26 '23 at 07:12
  • Thanks. I will have to reread the article, I am afraid. I appreciate the technical hint, though. – FlatterMann Jan 14 '24 at 13:10

4 Answers


As the other answers here seem content with saying "Superdeterminism is stupid, don't try to understand it", let me try to address your questions and clarify some things:

When Bell derived his inequality, he considered two entangled particles $A$ and $B$ and assumed that they carry "hidden variables" $\rho_A$ and $\rho_B$ with information about their state. For example, if photon $A$ is spin-up, then $\rho_A = +1$; if it is spin-down, then $\rho_A = -1$.

The measurement devices, on the other hand, do not contain hidden variables. Alice and Bob know their states precisely, since they set the devices up themselves. So the devices are in known states (let us call these states $a$ and $b$). For polarizers, this would be the angle of the polarization axis, e.g. $a = 45^\circ$, $b = 60^\circ$ if Alice's polarizer is set at 45 degrees and Bob's at 60 degrees.

Now, Bell assumes some additional properties for the hidden variables:

  1. Reality: This is the existence of $\rho_A$ and $\rho_B$ from the start.
  2. Locality: This is the independence of $\rho_A$ and $\rho_B$. Otherwise, we would have to consider $\rho_{AB}$, a single variable for both photons.
  3. Statistical independence: This is the independence of $\rho_A$ and $\rho_B$ from the states of the measurement devices $a,b$.

Since Quantum Mechanics violates Bell's inequality, one of these assumptions must be wrong. In Superdeterminism, assumption 3 is dropped.

So it is not that the settings are hidden variables: Superdeterminism assumes that the photon states themselves may depend on the settings, i.e. $\rho_A(a,b)$, $\rho_B(a,b)$.
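
To make the role of assumption 3 explicit, here is a short restatement of my own in Bell's usual notation (not part of the original argument above): write $\lambda$ for whatever hidden variables the pair carries (the $\rho_A, \rho_B$ above), $p(\lambda)$ for their distribution, and $A(a,\lambda), B(b,\lambda) = \pm 1$ for the local outcome functions. The locally causal prediction for the correlation is

$$ E(a,b) = \int d\lambda\, p(\lambda)\, A(a,\lambda)\, B(b,\lambda), $$

and it is precisely because $p(\lambda)$ carries no dependence on the settings $(a,b)$ that Bell's inequality can be derived. A superdeterministic model keeps the same local outcome functions but replaces $p(\lambda)$ by a setting-dependent distribution $p(\lambda \mid a,b)$, after which the derivation no longer goes through.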

Is there any classification of superdeterministic theories? Or is only global superdeterminism possible, and is my intuition about a local hidden variable in the measurement device, caused by non-uniform fields, wrong?

I don't really know, but I have never heard of "global" Superdeterminism.

Are there any Bell tests that take this into account?

The hope of people working on this is that, while our current experiments are well explained by Quantum Mechanics, more sensitive experiments might reveal a dependence of $\rho_A,\rho_B$ on $a$ and $b$ if one looks for it.

Whatever you think about Sabine Hossenfelder, she has a decent summary of this topic if you are interested in learning more (pdf). See in particular sections 3 (What) and 6 (Experimental Tests).

Cream

Imagine you perform a simple experiment: dropping balls from different heights and measuring their speed and time to impact.

With all that data, you find that all of those experiments follow a precise, exact mathematical equation, which we then call a law (the law of gravity). We use that law to predict future experiments, and when we do them, the predictions turn out to be correct!

That's standard science.

But there's another, valid, interpretation.

Maybe it was all just a coincidence. Maybe the balls could move at any speed in any direction, or even turn into elephants mid-fall. But, by pure luck, they all fell in a way compatible with the law of gravity.

This is ridiculously improbable, but it's not impossible: we can't prove it's false, and we can't prove it's true either. So it's "unfalsifiable", and therefore it's not science.

It's not wrong, it's just not science.

That's superdeterminism. Not that the world is "random"; rather, that everything behaves in a way that cannot be predicted by doing small, contained experiments (like dropping balls) and extrapolating them to other situations. In other words, not by science.

It's not surprising that we can invoke superdeterminism to explain away anything we don't like in a theory (like Bell's theorem in QM), but by doing so we're pretty much abandoning the ability to make predictions.

Juan Perez
  • I agree, but you seem to be using the terminology backwards. Superdeterminism will say there is no luck: the ball always falls under the law of gravity and rolls downhill. A better example might be that if you ever find some unusual behavior, e.g. a die that always lands on 6, probability will say that it is luck, or that you have to look inside the die and see what it is made of. Superdeterminism, on the other hand, contents itself with saying that there is no luck and that every fluctuation in the air and floor is conspiring to always make it land on 6 (everywhere since the Big Bang). – Mauricio Apr 19 '23 at 21:49
  • My understanding of superdeterminism is that it will say that there is no law of gravity. Those balls fell because the "grand scheme" involves them falling, but not because of any specific interaction like gravity. They just happened to fall. And they might not fall tomorrow, because it's not an interaction that pulls them. There's no cause-and-effect, just a fixed motion pre-ordained at the beginning of the universe. Where every apparent interaction is just a coincidence (or as you call it, a conspiracy). – Juan Perez Apr 20 '23 at 13:15
  • I agree with you, and maybe this is just nitpicking. My way of viewing it is that superdeterminism claims that everything is predetermined to do so, as you say. In that way it is not telling us anything about deterministic problems like motion under gravity, but it is making claims about statistical significance and probabilistic problems. So under superdeterminism any unusual thing that might pop up in an experiment is not a statistical fluctuation but an actual mechanism that the universe has in order to force quantum effects (or, if you wish, to force any effect that we cannot explain). – Mauricio Apr 20 '23 at 13:29

Superdeterminism is a response to Bell's Theorem. It is one of two ways that a certain assumption required to prove Bell's Theorem might fail.

The assumption in question is most commonly called "Statistical Independence". More accurately, it would be called "Statistical Independence between the past hidden variables $\lambda$ and the future settings $(a,b)$", but that's a bit of a mouthful. In mathematical terms, this assumption would look like $$Prob(\lambda)=Prob(\lambda|a,b).$$

The idea here is that one can try to model entanglement by assigning a probability distribution $Prob(\lambda)$ to the shared hidden variables of the two particles, back when they are first entangled. The above equation is the assumption that any reasonable model must assign those probabilities for $\lambda$ independently of the eventual measurement settings $(a,b)$ for that run of the experiment. It seems like a reasonable assumption, but if we break it, the central argument of Bell's Theorem doesn't go through. (If this assumption failed, then one really could explain entanglement experiments in terms of localized hidden variables.)
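
As a concrete illustration of why this assumption carries so much weight, here is a toy Monte-Carlo sketch of my own (the particular response functions and the setting-dependent distribution are illustrative assumptions, not a model from the literature). It compares an ordinary local hidden-variable model, in which $\lambda$ is drawn independently of the settings, with a contrived "superdeterministic" model in which the distribution of $\lambda$ depends on $(a,b)$. Both use only local outcome rules, but only the second violates the CHSH bound $|S|\le 2$, purely because Statistical Independence is broken.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # Monte-Carlo runs per setting pair

def E_independent(a, b):
    """Ordinary local hidden-variable model: lambda is a uniform angle
    drawn independently of the settings (Statistical Independence holds)."""
    lam = rng.uniform(0.0, 2.0 * np.pi, N)
    A = np.sign(np.cos(a - lam))       # Alice's outcome, +/-1
    B = -np.sign(np.cos(b - lam))      # Bob's outcome, +/-1
    return np.mean(A * B)

def E_superdet(a, b):
    """Toy 'superdeterministic' model: the hidden variable simply encodes
    the two predetermined outcomes, drawn with setting-dependent
    probabilities chosen to reproduce the singlet correlation -cos(a - b)."""
    p_same = np.sin((a - b) / 2.0) ** 2     # P(A = B) for the singlet state
    same = rng.uniform(size=N) < p_same
    A = rng.choice([-1, 1], size=N)
    B = np.where(same, A, -A)
    return np.mean(A * B)

def chsh(E):
    """CHSH combination S at angles that maximize the quantum violation."""
    a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    return E(a0, b0) + E(a1, b0) + E(a1, b1) - E(a0, b1)

print("S, setting-independent lambda:", chsh(E_independent))  # about -2
print("S, setting-dependent lambda:  ", chsh(E_superdet))     # about -2.83
```

Running it prints $S \approx -2$ for the setting-independent model and $S \approx -2.83 \approx -2\sqrt{2}$ for the setting-dependent one, which reproduces the singlet correlation $-\cos(a-b)$ by construction.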

Superdeterminism is the idea that one can violate the above equation, explaining correlations between $\lambda$ and $(a,b)$ in terms of past common causes. Specifically, there could be some distant past set of hidden variables $\Lambda$ which would serve to correlate $(\lambda,a,b)$. That argument makes sense to the extent that you consider $a,b$ to be some microscopic details in the measurement device, which is perhaps why you're asking about hidden variables in the measurement devices themselves. Certainly it would be unreasonable to insist that a model had no correlations between those details.

But $(a,b)$ aren't microscopic details. They are macroscopic settings, chosen in some manner. They're the values written down in the lab book when calculating the entanglement correlations. As Bell himself put it, they could be chosen by the Swiss Lottery Machine. So any superdeterministic account can't merely correlate hidden details. They have to correlate the output of the Lottery Machine in Alice's lab, with the output of the Lottery Machine in Bob's lab, and both of those in turn need to be reproducibly correlated with the original hidden variables back where the entangled particles were generated. If you can't find an account in the literature explaining what those hidden variables might be, it's probably because there's no conceivable set of hidden variables which could account for every way that the settings $(a,b)$ might be chosen.
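
In causal-model terms (this gloss is mine, not part of the original argument), a superdeterministic common-cause explanation would posit a distribution in which $\Lambda$ screens off everything downstream of it,

$$ Prob(\lambda, a, b) = \int d\Lambda\, Prob(\Lambda)\, Prob(\lambda|\Lambda)\, Prob(a|\Lambda)\, Prob(b|\Lambda), $$

which generically makes $Prob(\lambda|a,b) \neq Prob(\lambda)$ and so violates Statistical Independence. The difficulty described above is that this single $\Lambda$ would have to do that job no matter how $(a,b)$ are chosen, whether by a lottery machine or anything else.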

The other way to break Statistical Independence is having a model which is "Future-Input Dependent", or "Retrocausal", at a hidden level. Instead of a common-cause explanation of the correlations, now the explanation is a direct cause, from $a$ to $\lambda$ and also from $b$ to $\lambda$. (This assumes one is using Pearl-style interventionist causation, where the external intervention/setting is always the "cause" by definition. If you don't take this view of causation, such models are hard to wrap your head around, but can still be analyzed in terms of the input-output structure of the underlying model as described here.)

Some papers (and also the initial definition on Wikipedia) blur the distinction between retrocausal and superdeterministic models, calling them both "superdeterministic", but this seems misguided to me. Clearly there's an enormous conceptual difference between direct retrocausal influences from $(a,b)$ to $\lambda$ and a common-cause explanation of $(a,b,\lambda)$.

Ken Wharton

There are actually two different versions of Superdeterminism. There is the original version that Sabine Hossenfelder works on, namely that hidden causal events going back to the initial correlations of the universe can account for the correlations observed in quantum entanglement. And then there is Dr. Johan Hansson's version, which does not necessarily rely on hidden variables, but rather posits the violation of a little-known fourth assumption of Bell's Theorem, namely the assumption of continuous causation in physics. Dr. Hansson published a proof in 2020 that the universe is a predetermined static block universe without continuous causation in physics. Unfortunately, his proof, though brilliant, is relatively obscure. It shows why superdeterminism should be taken very seriously and is anything but stupid. You can read it in Physics Essays Vol. 33, No. 2 (2020), or at this link: https://www.diva-portal.org/smash/get/diva2:1432225/FULLTEXT01.pdf?fbclid=IwAR1LumqekGHmXOzJXOdpNOgFyRkya5CeKefZpQeGWEIQxrr9yyUd7NnZY5o