4

(Sorry if I'm spamming your board with Popular Science speculations, but I just thought this could be an interesting thought experiment).

There are quite a few videos on YouTube about Bell's Experiment. This experiment supposedly disproves the hypothesis of local hidden variables. We split entangled photons, measure them separately, and while our results look individually random, they somehow align if we compare them afterwards. This is supposed to prove that the photon "decided" its orientation at the time of measurement.

This heavily relies on our ability to "surprise" the photon with a random choice of measurement direction. But what if that photon knew all along in which direction it was going to be measured? Since photons travel at the speed of light, they don't experience time, hence they should know the future, and there shouldn't be a way for us to "surprise" them at all.

glS
  • 14,271
avloss
  • 151
  • 2
    Bell tests don’t have to use photons. – Joe Sep 30 '21 at 12:16
  • You're right, I'm sure. I've excused myself as being a non-expert. For whatever reason I've decided that it does, since https://www.youtube.com/watch?v=zcqZHYo7ONs this video talks about both photons and Bell's Theorem. Thank you for your correction. – avloss Sep 30 '21 at 13:21
  • 1
    I was with you up to the last sentence. If information from the future can reach the photon that's currently bouncing off your head, then information from the future can reach your head. – WillO Sep 30 '21 at 13:57

2 Answers

9
  1. Bell's inequalities hold more generally. You can verify the existence of nonclassical correlations in a variety of platforms, including those that have nothing to do with photons or light in general.

  2. "This heavily relies on our ability to "surprise" the photon with a random direction we're going to measure it. But what if that photon knew all along in which direction it was going to be measured":

    I don't think this means anything. Bell's inequalities are a way to certify the existence of a specific type of correlations between measurement outcomes. It's not about the photon being "surprised" or not. It's about the way you interact with the photon during measurement. In other words, it's about the types of "questions" you "ask" the two photons.

    Any kind of classical correlation between the two parties isn't sufficient to violate Bell's inequalities. If you assume that the systems "know the direction along which they will be measured", then yes, you can violate the inequalities. This is often referred to as the superdeterminism loophole. However, it's worth remarking that any probability distribution can be obtained with such an assumption. In other words, you get a model which is not falsifiable, as it can "explain" any observation. Furthermore, you'd have to come up with a physical mechanism to explain how the information about the measurement choices somehow leaks to the detectors. In many situations, such a mechanism would be really weird and far-fetched.
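The point that classical correlations cannot violate Bell's inequalities can be made concrete with a small sketch (my own illustration, not part of the original answer). It enumerates every deterministic local hidden-variable strategy for the CHSH game and compares the best classical value against the quantum prediction for polarization-entangled photons, using the standard CHSH angle choices:

```python
import itertools
import math

# Deterministic local-hidden-variable strategy: the hidden variable fixes
# each side's outcome (+1 or -1) for both of its possible settings.
def chsh(Aa, Aa2, Bb, Bb2):
    return Aa * Bb - Aa * Bb2 + Aa2 * Bb + Aa2 * Bb2

# Exhaust all 16 deterministic strategies: the best any of them can do is 2.
classical_max = max(abs(chsh(*s)) for s in itertools.product([1, -1], repeat=4))
print("classical bound:", classical_max)  # 2

# Quantum prediction for polarization-entangled photons: E(a, b) = cos(2(a - b)),
# evaluated at the standard CHSH settings a = 0, a' = 45°, b = 22.5°, b' = 67.5°.
a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
E = lambda x, y: math.cos(2 * (x - y))
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("quantum value:", S)  # 2*sqrt(2) ~ 2.828
```

Since no mixture of deterministic strategies can exceed the best deterministic one, shared classical randomness doesn't help: the quantum value 2√2 is genuinely out of reach for any local hidden-variable model.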

glS
  • 14,271
  • Thank you, makes sense. I'm only learning about "superdeterminism" here after asking this question. "physical mechanism to explain how the information" - I guess that was my whole question. I'm speculating (perhaps very naively) that the physical mechanism is "time travel" - the photon (or any other elementary particle travelling at the speed of light) knows the future. By "surprised" I meant exactly that: the photon already "knows" what questions it will be asked. It's like going to a test after hacking into the teacher's computer and learning all the questions in advance. – avloss Sep 30 '21 at 13:29
  • 1
    @avloss yes, but the sentence "the photon knows the future" doesn't really mean anything. The whole point of a physical description of a phenomenon is to describe how things now "explain" how things will be later. A description of the current state based on the future state would be useless; how could you ever use it? Also, the "photons don't experience time" thing is a common misconception. It simply isn't meaningful to consider the reference frame of a photon in special relativity. – glS Sep 30 '21 at 14:02
  • You say that "[A]ny probability distribution can be obtained with [superdeterminism]". I don't see that. How would I get a multivariate Gaussian over all system variables with covariance matrix equal to the identity matrix from the assumption superdeterminism? It would seem that such a distribution would contradict superdeterminism rather than be obtained from it. Please clarify. – Galen Mar 23 '22 at 04:36
  • And to clarify my concern about the contradiction, for Gaussian variables their zero covariance implies mutual/statistical/measure independence. This is not true of all distributions, hence I chose this one. Superdeterminism implies the existence of such a dependence between certain kinds of variables (Hossenfelder and Palmer 2019), so such a Gaussian distribution would contradict superdeterminism. – Galen Mar 23 '22 at 04:41
  • This phrase "[A]ny probability distribution can be obtained with [superdeterminism]" seems to be reworded to be "[models that assume superdeterminism are] unfalsifiable". Is that correct? If so, that is a really different claim from the former point about probability distributions. – Galen Mar 23 '22 at 04:59
  • @DifferentialCovariance superdeterminism means measurement choices can be correlated through the hidden variable. This means that rather than discussing a behaviour of the form $p(ab|xy)$, you are dealing with a probability distribution where $x,y$ are not a choice but rather a part of the distribution, so some $p(a,b,x,y)$. But given an arbitrary prob distro, you can always write it as "deterministically caused" by some hidden variable, essentially by the same argument as https://physics.stackexchange.com/a/421622/58382. Unless you understand "superdeterminism" as defined in some other way? – glS Mar 23 '22 at 09:49
  • regarding your last comment, I'm not sure I understand your point. I'm saying that an arbitrary set of observed correlations can be explained via some superdeterministic model, which I would consider equivalent to saying that the assumption of superdeterminism cannot be falsified. One can of course falsify specific superdeterministic models though. Is that what you are saying? – glS Mar 23 '22 at 09:56
  • @glS On the point about the Gaussian distribution, I was saying that if all variables were uncorrelated then there would be no superdeterminism. By "all system variables" I was including both hidden variables and otherwise. I suspect now that you are referring to a data distribution, which is not hidden by definition. Is that correct? – Galen Mar 23 '22 at 14:33
  • @glS We agree on the definition of superdeterminism as a violation of statistical independence of detector settings and hidden variables. My first reading of it comes from Hossenfelder and Palmer 2019 and Hossenfelder. – Galen Mar 23 '22 at 14:36
  • @glS I suspect that any given data distribution will be consistent with some determinism process, as you said, although I must admit I have not checked that claim mathematically. It might warrant some future reading. – Galen Mar 23 '22 at 14:40
  • @glS My comment about the rephrasing doesn't apply as I thought now that I believe you are talking about data distributions rather than distributions over all system variables. – Galen Mar 23 '22 at 14:44
  • I meant "deterministic process" rather than "determinism process". – Galen Mar 23 '22 at 14:49
  • @glS I think that we agree that some superdeterministic models should be testable in the sense that we can compute the predictions of the model and compare to data. Often this is more of an 'implausification' rather than falsification of a model due to the quantitative way we evaluate models. Worse, there is a Pareto front of models once multiple objectives are chosen. But briefly, I think models that assume superdeterminism or imply superdeterminism from other assumptions may or may not be testable. – Galen Mar 23 '22 at 14:55
  • @DifferentialCovariance I think what you are saying goes in a bit of a different direction. My point is that sure, you can always find a superdeterministic model explaining data. But that means finding a superdeterministic model won't in general be of any interest. Of course, if you find a superdeterministic model that is "natural", as in, it comes from a "nice enough" physical theory, then that would be great (and falsifiable, etc.). But simply finding a superdeterministic explanation for observed correlations in a given apparatus is not per se interesting, and can always be done – glS Mar 23 '22 at 17:37
  • @glS Thank you for clarifying. That is all agreeable to me. – Galen Mar 23 '22 at 17:39
0

I do not think your argument "they don't experience time, hence they should know the future" has a very strong foundation...

But you can make a particle "know" the future pretty naturally if

  1. Physical laws are fully deterministic so that every experiment we make is predetermined by initial data
  2. The relevant particles have access to relevant initial data to make their predictions

In fact, both points are satisfied in a universe governed by Newtonian physics, so they are not something too crazy to contemplate. It seems to me that the overoptimistic claim that Bell's theorem disputes local hidden variable theories uses its own conclusion as an assumption. If we can model the whole universe by a deterministic local hidden variable theory, then Bell's theorem is not applicable.

See https://en.wikipedia.org/wiki/Superdeterminism
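As a toy illustration of how this loophole works (my own sketch, not part of the answer): if the hidden variable carried by the particles already "contains" the measurement settings, the source can preassign outcomes drawn from any target distribution, including the quantum one, and each detector then just reads off its preassigned value locally and deterministically. The resulting statistics violate the CHSH bound of 2 even though nothing nonlocal happens at measurement time:

```python
import math
import random

random.seed(0)

# Standard CHSH measurement angles for the two sides.
ANGLES = ([0.0, math.pi / 4], [math.pi / 8, 3 * math.pi / 8])

def sample_outcomes(x, y):
    """Draw (a, b) in {+1,-1} with correlation <ab> = cos(2(a_x - b_y))."""
    E = math.cos(2 * (ANGLES[0][x] - ANGLES[1][y]))
    a = random.choice([1, -1])
    # P(b = a) = (1 + E)/2 reproduces <ab> = E with uniform marginals.
    b = a if random.random() < (1 + E) / 2 else -a
    return a, b

def trial():
    # The "free" setting choices...
    x, y = random.randrange(2), random.randrange(2)
    # ...are known to the hidden variable, which preassigns the outcomes.
    lam = (x, y, *sample_outcomes(x, y))
    a, b = lam[2], lam[3]  # each detector outputs locally, deterministically
    return x, y, a, b

N = 200_000
sums = [[0.0, 0.0], [0.0, 0.0]]
counts = [[0, 0], [0, 0]]
for _ in range(N):
    x, y, a, b = trial()
    sums[x][y] += a * b
    counts[x][y] += 1

E = [[sums[x][y] / counts[x][y] for y in range(2)] for x in range(2)]
S = E[0][0] - E[0][1] + E[1][0] + E[1][1]
print(f"CHSH S = {S:.3f}")  # close to 2*sqrt(2), beyond the classical bound 2
```

Of course, this toy "model" explains nothing; it merely restates the observed statistics, which is exactly the unfalsifiability worry raised in the other answer's discussion.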

Umaxo
  • 5,818