4

Superdeterminism is the idea that the apparent freedom in the choice of experimental apparatuses and their settings is nothing but an illusion. Contextuality is the dependence of the properties of a system upon the choice of measuring device. But what if there is no "free" choice of measurement settings, i.e. the measurement settings are not a freely variable, independent degree of freedom? Bell inequalities require counterfactual polarization settings, but what if the polarization setting could only have been one way and not any other? As for the ability of Shor's algorithm to factorize large integers and compute discrete logarithms, maybe the answers obtained at the end retrocausally determine the numbers fed into the quantum computer in the first place, so that it couldn't have been any other number?

Are there any absolutely fatal objections?

Qmechanic
  • 201,751
  • Maybe this question should be protected, given all these "answers" by new users asking new questions. – Abhimanyu Pallavi Sudhir Sep 04 '13 at 14:04
  • Superdeterminism requires either the ability to invert one-way functions, retrocausality, or a hidden local duplication of distant computations in quantum systems. What if the choice of measurement setting is determined by a deterministic one-way function? –  Aug 13 '12 at 11:49

3 Answers

6

What would you recognize as an absolutely fatal objection?

Let us set aside the fact that superdeterminism more or less amounts to the assumption that experimental physics is doomed to fail to accurately reveal important features of the world, and that science as an enterprise is futile. These consequences make this question a better fit for Philosophy.SE than here; but never mind: let's tackle the philosophical question, and accept that it is entirely conceivable (if not very productive to put into practise) that the enterprise of modern physics is futile.

What we are basically discussing here is whether conscious human choices are so constrained by local hidden variables, in a deterministic universe, that they tend to give rise to choices of measurements which reveal correlations that seem impossible for local hidden variables; or whether conscious human choices about which numbers to factor are constrained so that we only choose to factor the numbers which the hidden variables of the quantum system happen to be useful to factor. That is: our brains, among the most complicated uncontrolled systems that we know of, either harbour a vast supply of secret correlations with the world around us, or are so thoroughly prone to ingress of such secret information, that they serve to constantly thwart any attempt to discover that the world around us is neither random nor host to any genuinely non-local effects.

This is Cartesian paranoïa at its finest. In order for a deterministic physics to give rise to conscious agents such as ourselves, and at the same time spoof us with outcomes that seem routinely probabilistic, with reasonable convergence of frequencies of events to reliable averages, by directing us to think that we are choosing to make measurements whose outcomes seem random but correlated in a way that defies explanation by local hidden variables: this seems to me to require not just a fine-tuned universe hospitable to life à la the anthropic principle, but a cosmological coincidence so enormous as to require an intelligent agency whose purpose is to deceive; to design ants just so that it could poke at anthills. While this position presents a solution to such classic theological conundrums as the Problem of Evil, does it really seem like a simpler hypothesis than the idea that nature is stochastic, and just a little more complicated than we had ever had reason to believe before 1900 CE?

Consider this, as well: if we manage to build a large quantum computer, there is nothing to prevent us from setting up a computer so that it will multiply each pair of odd primes over 10^6 in some fixed order, and then re-factor each such product using Shor's algorithm. The only "choices" involved in the running of the computer, then, are the details of the design of the computer, the precise moment when we decide to turn it on, and the mathematical theory (running all the way down to number theory!) which underlies its operation. Barring mysterious failures of the computer, a cosmic conspiracy which tried to fool us into believing quantum mechanics would have to be responsible for the precise moment we turn the computer on, and for the entire course of the development of mathematics and engineering leading to Shor's algorithm and the building of the quantum computer. Furthermore, the forces which determine our behaviour so precisely must somehow manage to avoid detection, either by making us constitutionally unable to realise that they are at work, or by giving rise to these developments in technology by a sudden cascade of events at the last second, even as our behaviour is being guided in other ways by the same hidden variables.
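The multiply-then-refactor pipeline described above can be sketched classically. In this illustrative Python sketch, a classical trial-division factorizer stands in for Shor's algorithm (which would of course run on the quantum computer); the names and the cutoff are illustrative, and the only point is that every input to the loop is fixed mechanically in advance, with no free choice anywhere:

```python
# Illustrative sketch: enumerate primes just above 10**6 in a fixed order,
# multiply each consecutive pair, and re-factor the product. A classical
# trial-division factorizer stands in for Shor's algorithm.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def factor(n):
    """Return the two prime factors of a semiprime n, smallest first."""
    i = 2
    while n % i:
        i += 1
    return i, n // i

# Fixed enumeration of odd primes just above 10**6: no choice involved.
primes = [p for p in range(10**6 + 1, 10**6 + 100, 2) if is_prime(p)]

for a, b in zip(primes, primes[1:]):
    n = a * b              # multiply each consecutive pair of primes...
    p, q = factor(n)       # ...then recover the factors from the product
    assert {p, q} == {a, b}
```

Every run of this loop produces the same products and the same factorizations; the conspiracy would have to account for the whole deterministic chain, not just a single "free" input.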

There is simply no reason, at present, to consider this a reasonable hypothesis. It is delicate in the extreme, and I cannot even begin to fathom how precisely the initial conditions would have to be tuned in order to make it true, especially in view of the fact that complex systems exhibit such instability that the "cosmic lie" would presumably be quickly revealed unless the initial conditions were set just so. Only someone who was absolutely committed to determinism and local realism, in preference to any sane and pragmatic approach to epistemology or ontology, would ever entertain it. But then, if the world is actually as they describe, we shouldn't blame them for it; after all, they have no more choice to believe it than I have to mention quite tangentially that 11×13 = 143.

  • 1
    This is a great answer, but let me add one observation: it seems to me that superdeterminism fails miserably even at its own goal, of explaining away quantum nonlocality. For in order for the particles that make up our brains, measuring devices, EPR pairs, etc. to be in on this conspiracy since the beginning of time, you'd evidently need a much more radical sort of nonlocality than the kind you were trying to explain! So it's hard for me to understand how anyone could fail to find this "cure" infinitely worse than the disease. – Scott Aaronson Aug 24 '12 at 00:56
  • @ScottAaronson: I always assumed that, as with all classical conspiracies of distant parties which act only locally, that it is supposed to be all a part of the extremely intricate plan with a localized starting point: in this case the Big Bang, where everything started off at the same place so that the conspiracy could be possible under the constraints of the hypothesis. Are there "superdeterminists" who don't believe that the matter-energy in the visible universe was ever localized? – Niel de Beaudrap Sep 05 '12 at 18:09
  • Quoting the first answer, "But then, if the world is actually as they describe, we shouldn't blame them for it; after all, they have no more choice to believe it than I have to mention quite tangentially that 11×13 = 143." My question to you is this: can you trace the roots of your tangential exclamation that 11×13 = 143 logically through the application of causality? Of course you can, and if you cannot fathom this then you have a long way to go, my friend. Knowing causality to be a fact, how could you have possibly chosen a statement any different from 11×13 = 143? –  Sep 04 '13 at 09:19
2

I am not a scholar of the concept of "superdeterminism" so I may get some historical nuances of the idea wrong. But superdeterminism is just determinism. More precisely, it is a fallacious attempt to find a loophole in Bell's theorem by noticing that determinism, if true, would apply to the experimenter as well as to the experiment.

Bell inequalities typically pertain to an experiment in which there is some choice about what you measure, e.g. the axis along which a spin is measured, or the angle of a polarization filter. Then you look at the statistics predicted by quantum theory, and notice that a local deterministic theory would be unable to reproduce them unless the particles had advance knowledge of the measurement settings. There's no reasonable way for them to have that knowledge anyway, but just to make the point, expositions of Bell's theorem may emphasize the ability of the experimenter to choose the measurement settings after the particles (e.g. in an EPR pair) have set out towards the measuring devices.
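The statistics in question can be made concrete with the CHSH form of the inequality. Assuming the standard singlet-state correlation E(a, b) = -cos(a - b) for analyzer angles a and b, the quantum prediction at the usual angles reaches |S| = 2√2 ≈ 2.83, beyond the bound of 2 that any local deterministic theory must obey:

```python
import math

# Quantum singlet correlation for analyzer angles a, b: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angles (radians) that maximize the quantum violation.
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # ≈ 2.828..., i.e. 2*sqrt(2), exceeding the local bound of 2
```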

At this point, the "superdeterminist" says "Aha! But if determinism is true, then the experimenter's choice was predetermined, all the way back to the Big Bang, so in principle the particles had access to information implying what the choice would be." Which is true, but ridiculous. We could set up these experiments so that the settings are controlled by any absurd mechanism you imagine; for example, the total amount of droppings from a mouse in a cage: if it's more than X, we measure spin up; if it's less than X, we measure spin down.

It is absurd to suppose that the microscopic causality of the universe is such that it would always set the hidden variables inside the particle in a way which matches the amount of droppings that the mouse produces; or equivalently, that the microscopic causality of the universe is always going to finetune the metabolism of the mouse in order to match the hidden variables on board the approaching particle. Yet this really is the sort of thing that the superdeterminist loophole requires! (If anyone has a non-absurd formulation of superdeterminism, I'd like to know about it.)
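One way to see the force of this point is to enumerate every local deterministic strategy directly: whatever mechanism (mouse droppings included) selects the settings, each side's outcome can only depend on its own setting and the shared past. A short sketch checking all 16 possible pre-assignments of outcomes:

```python
from itertools import product

# Each local deterministic strategy pre-assigns an outcome (+1 or -1) to each
# of Alice's two settings (A0, A1) and each of Bob's two settings (B0, B1),
# independently of the other side. Enumerate all 16 strategies and record the
# largest CHSH value any of them achieves.
best = 0
for A0, A1, B0, B1 in product([+1, -1], repeat=4):
    S = A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1
    best = max(best, abs(S))

print(best)  # prints 2: no local deterministic assignment beats the CHSH bound
```

However the settings are chosen, mixing these strategies can only average their CHSH values, so no local deterministic model reaches the quantum value of 2√2; only fine-tuned correlations between the settings mechanism and the hidden variables could fake it.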

Retrocausality, which was also mentioned in the question, is a completely different concept. A retrocausal theory - perhaps we should say bicausal, since it would be "procausal" as well as retrocausal - has two arrows of time in it, going in opposite directions. So it can be locally procausal and locally retrocausal, and still have spacelike correlation and loops in time. But no-one has a working model of such a theory.

  • 1
    "Superdeterminism" becomes a lot less ridiculous if you simply state the following: In a Bell-like experiment, neither Alice nor Bob can "change their minds" as to what to measure, without having one or more changes of the settings of the experiment in the distant past. If you do change any of these settings, not only the number of mouse droppings might or might not change, but the entire experiment is reset. This already removes Bell's contradiction. No non-local physical law is needed. – G. 't Hooft Aug 20 '12 at 22:01
  • The immediate cause of the experimental settings is the number of mouse droppings. So in this case, the chain of cause and effect from the distant past to the choice of experiment passes through the mouse's gut. The chain of cause and effect from the distant past to, e.g., the photons in an EPR pair does not. So it is still ridiculous, because it asserts that rodent digestion is a high-fidelity medium for communicating which subquantum vacuum we're in. – Mitchell Porter Aug 20 '12 at 22:33
  • How do you imagine a change in the distant past that affects the gut of a mouse but not the EPR photons? Even if the EPR photons are outside the lightcone, I can simply go further to the past ... all the way to the inflationary early universe, if needed. – G. 't Hooft Aug 22 '12 at 09:52
  • The distant past is a causal antecedent of both the experimental settings and the EPR pair. But for the experimental settings, the local chain of cause and effect "passes through" the mouse, whereas it does not do so for the EPR pair. The scenario is that the selection of experimental setting is determined by what happens inside the mouse, a complicated macroscopic process... – Mitchell Porter Aug 23 '12 at 04:16
  • so you can't get the right statistics just by relying on the subquantum vacuum being the same everywhere. – Mitchell Porter Aug 23 '12 at 04:18
  • @ Mitchell: even if the logical chain does not pass through the mouse's gut, it does reset the data for the EPR experiment. So the photons are affected by the past data. – G. 't Hooft Aug 24 '12 at 10:21
  • The sad thing is that I can't get it into people's heads that counterfactual measurements are not meaningful, even in CA theories. Operators that are not diagonal in the CA basis cannot be measured. The point that seems to escape all the time is that a CA can be so complex that no counterfactual measurements are needed to describe the outcome of a macroscopic observation. This is why non-diagonal operators may nevertheless be extremely useful to describe intermediate results of a quantum calculation. – G. 't Hooft Aug 24 '12 at 10:21
  • 1
    You don't have to think in terms of counterfactuals. You can have many instances of the same experiment (but with varied settings) taking place in the same space-time. The implication of Bell's theorem is that you can't get the right frequencies for the outcomes from a local deterministic physics, unless you artificially tune the microscopic causes at each separate location in order to give the desired frequencies. – Mitchell Porter Aug 24 '12 at 19:58
  • 1
    I never said the photons are unaffected by the past. The problem is that the experimental settings can be controlled by arbitrarily complicated macroscopic processes. When this happens at two or more locations which are spacelike separated, there is no way for a shared subquantum vacuum to be the common cause behind Bell correlations... – Mitchell Porter Aug 24 '12 at 20:15
  • 1
    Each location only has a local copy of the vacuum data, but to produce the correlations, the apparatus at each location would also need to "know about" the settings of the apparatus at the other location, which aren't determined by the vacuum data, they're determined by the macroscopic control processes. – Mitchell Porter Aug 24 '12 at 20:15
  • @ Mitchell: Do you agree with the following? Your "arbitrarily complicated macroscopic processes" do have a past; to change the setting of polarization filter $a$, you have to change that past. By resetting the past, the "local vacuum data", both at $a$ and at $b$, will have changed completely. My point is that, whether a photon at $a$ or at $b$ gets through, is difficult or impossible to compute in the deterministic theory. Only the chances can be calculated by using QM as a tool. Acceptable or not? – G. 't Hooft Aug 26 '12 at 12:40
  • You have to show how the altered past could possibly produce the right chances without miraculous finetuning, when the right chances depend on the experimental settings, and the experimental settings depend on weight of mouse droppings, or number of lightning strikes modulo 2, or the nth digit of pi, or any other input that could be used to govern a control process... – Mitchell Porter Aug 27 '12 at 05:44
  • We can make the choice of settings depend on a deterministically chaotic process. This chaotic intermediate process necessarily randomizes any "signal" that existed in the local microscopic physical state. So the only way to get the right statistics in the experimental outcomes will be to have exponential finetuning of the initial conditions. – Mitchell Porter Aug 27 '12 at 05:46
  • @ Mitchell: The CA is a universal computer, and as such non-integrable. I think my CA - quantum mapping just proves that the CA generates "miraculous finetuning". It does not exactly randomise. It computes more accurately than a mouse, so, yes, it controls the mouse's gut. Now remember that the QFT generated by the CA still obeys quantum causality (i.e. commutators vanish outside the light cone), so the finetuning is still causal, which makes it a bit less "miraculous". We can never short-circuit this universal computer, but we can compute its correlations, using the mapping on QM. – G. 't Hooft Aug 27 '12 at 07:57
  • That these correlations are "absurd" or "ridiculous" is not a good enough argument to me. The mouse gut, the brain, a flipper machine, they all obey conservation laws such as energy and angular momentum, and the laws of thermodynamics. So they also obey unitarity. There's nothing absurd or ridiculous about that. Just do the CA - quantum mapping. – G. 't Hooft Aug 27 '12 at 08:09
  • Regarding your constructions: my expectation is that Bell's theorem is irrelevant for free bosonic fields because the physics isn't rich enough to make a measuring device. I'm not sure about the string, but if you can realize a Bell scenario there, I think it would have to be in a space obtained by a nonlocal transformation (holographic, twistorial, ...?) of the target space. It shouldn't be possible to produce Bell violations directly in the target space, for the usual reasons. – Mitchell Porter Aug 27 '12 at 08:49
  • I'd be happy to continue in "chat" but I am not going to waste my time trying to figure out how to do this. – G. 't Hooft Aug 28 '12 at 08:13
  • When a discussion gets as long as this, this website starts automatically prompting you to "move to chat". I clicked on "yes", and that last message was automatically generated... The link leads to a chat area created for continuation of this discussion. To post there, you have to have an account on "stackexchange.com", which is distinct from "physics.stackexchange.com", though you may be able to import your account details from this site. – Mitchell Porter Aug 28 '12 at 08:28
  • It's essential to add interactions, making the CA non-trivial. Only then do my arguments make sense. Without that, they also work, but it's just formalities and semantics. Interactions turn your CA into a universal computer. You can still "solve" the CA equations, but you can't speed them up using, say, renormalization group (RG) techniques, to go to larger time and distance scales. Therefore you would need a computer with Planckian dimensions. That does not exist today. – G. 't Hooft Aug 28 '12 at 08:32
  • @G.'tHooft It seems to me that your superdeterminism is just a necessitarianism, as when you said "How do you imagine a change in the distant past that affects the gut of a mouse but not the EPR photons? Even if the EPR photons are outside the lightcone, I can simply go further to the past". The causal chain constituting the world can't have been different; there is just one way for the world to be. – user12103 Oct 13 '12 at 20:14
  • @G.'tHooft and in http://physics.stackexchange.com/questions/32203/discreteness-and-determinism-in-superstrings?lq=1 "if you want to change your mind about what to measure, because you have "free will", then this change of mind always has its roots in the past, all the way to time -> minus infinity, whether you like it or not" – user12103 Oct 13 '12 at 20:15
  • @MitchellPorter It appears that your idea has now evolved into the "mouse dropping function". See p. 141 (and elsewhere) of http://arxiv.org/abs/1405.1548v1. :) – Řídící May 08 '14 at 16:50
  • The arguments that superdeterminism is ridiculous are looking suspiciously coupled to human emotion and judgement, which nature couldn't care less about. Regardless, really admire @G.'tHooft for defending his position and even including this conversation in a paper afterwards. Why do I feel like once we have a complete description of the universe's main loop, superdeterminism will suddenly move from looking ridiculous to being taught as obvious to freshmen physics students? – MaiaVictor Feb 17 '21 at 14:17
-2

Superdeterminism is simply the theory that determinism is predetermined. For that to be true, we are saying that something can come only from nothing, thereby establishing true cause and effect. Anything that is 'truly' causal cannot exist prior to its own existence. There is only one thing that can satisfy such requirements, and that is a true causal dichotomy, which would be non-local to its effectual state, i.e., spin, which we observe as local. Follow me so far?

If so, then perhaps you will find my findings on this topic of importance:

http://fqxi.org/community/forum/topic/1809

... John Bell was right, "There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will."