
Possible Duplicates:
Why quantum entanglement is considered to be active link between particles?
Why can't the outcome of a QM measurement be calculated a-priori?

Why do some (the majority of?) physicists conclude non-determinism from quantum uncertainty?

If we can't measure something, it seems to me like it's just a reflection of our ignorance.

Yet, from what I've read and seen, physicists actually interpret that as a reflection of the underlying non-determinism.

I can't understand why this is. It seems to violate a fundamental axiom of science: causality.

According to Wikipedia's page on Uncertainty Principle:

(..) Certain pairs of physical properties, such as position and momentum, cannot be simultaneously known to arbitrarily high precision. (...) The principle implies that it is impossible to determine simultaneously both the position and the momentum of an electron or any other particle with any great degree of accuracy or certainty.

This seems like a statement about a limit of our instruments and knowledge.

I see no reason to conclude "the underlying nature of reality is inherently nondeterministic" from "we cannot make precise measurements".

It seems like Einstein was pretty much the only prominent physicist who rejected it.

Again, to quote Wikipedia:

Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality.

I find myself naturally agreeing with Einstein.

I see no logical pathway from "we can't measure nature" to "nature is random".

So why do physicists adopt this view?

Please provide an answer that is clear and simple, and not smothered by formality.

Please note: this question is not for debating. I'm genuinely asking why is this the prevalent point of view? I honestly see no logical pathway to that conclusion at all.

For example, consider the question "Is it day or night in Cairo right now?", and assume we don't know what causes day and night. If we try to find out the answer to this question at any point in time, there's 50% chance that it's day and 50% chance that it's night. This doesn't mean that there's no answer to the question until we check, it simply means we don't really know what causes day and night to occur in Cairo at any given point in time.

So why, oh why, would anyone conclude from this thought experiment, that Cairo has the fundamental property that day and night are non-deterministic in it?

It seems patently clear that there's an awful lot about sub-atomic particles that we don't know. If we knew more, perhaps we could come up with better ways to measure things.

EDIT:

To clarify what I mean by non-determinism:

If a particle has 30% chance of being "here", and 70% chance of being "there", then I would assume that there's some underlying reason that determines where the particle is. But the prevalent view (as I understand it) is that there's no underlying reason, the particle just happens to sort of "choose" to be "here" 30% of the time and "there" 70% of the time with no particular reason. (I find this view absurd)

Qmechanic
hasen
    As Roy correctly says, there is a lot more known about this question than just uncertainty principle by now. You might want to check out Bell's theorem: http://en.wikipedia.org/wiki/Bell%27s_theorem Also search this site -- similar topics have been covered many times. Perhaps some of that will be useful to you. – Marek Mar 21 '11 at 14:52
  • Hasn't this question been asked on this site before? Stackexchange software is set up so we don't have to keep answering the same questions. – Peter Shor Mar 21 '11 at 18:10
  • @Peter Shor, if you think it has been asked and answered before, give a link. I did search before posting. – hasen Mar 21 '11 at 20:25
  • Aren't the following essentially the same question? http://physics.stackexchange.com/questions/317/why-cant-the-outcome-of-a-qm-measurement-be-calculated-a-priori and http://physics.stackexchange.com/questions/3158/why-quantum-entanglement-is-considered-to-be-active-link-between-particles – Peter Shor Mar 21 '11 at 21:00
  • @Hasen hi. I'm curious whether you have anything to criticize in my Answer. You've had such robust responses to all the others. I'm feeling sooo left out. Having said that, I've just noticed your EDIT, which I may respond to. – Peter Morgan Mar 22 '11 at 02:23
  • @Peter Shor For myself I prefer to think about the details of each question. I'm sure there are other Questions that are similar to this one, but small differences can lead to rather different Answers, or at least to me giving Answers that are different enough that I learn something. I hope and aim to hit the sweet spot where both the Questioner and I learn something, and perhaps even others, but it's good enough if one of us does. We individually have the option of not Answering if we don't like or don't know enough to address a Question. The flow of Questions is fascinating, to me. – Peter Morgan Mar 22 '11 at 02:37
  • @hasen j Your question is a legitimate one and has been answered satisfactorily here and in other places on this website. But I would highly recommend you remove the mention of "causality" - it is not at all clear what connection you are making between causality and determinism. You'll have my vote then :P – dbrane Mar 22 '11 at 12:22
  • Unlike the other stackexchanges I've seen, physics doesn't seem to be in the habit of closing questions because they are duplicates. This may be something to bring up on meta. – Peter Shor Mar 22 '11 at 12:36
  • @Peter: we do close duplicate questions, it's just that we don't get all that many of them. If you're talking about this one specifically, I didn't have time to check the links you pointed out until now. (For a long question like this, it takes a while to figure out whether it's really a duplicate or not) If you've noticed several questions which you think should have been closed as duplicates but weren't, you could certainly bring that up on meta, though. – David Z Mar 23 '11 at 00:06
  • @David: I'm not really sure it counts as a duplicate, but I think it's certainly answered by the answers in the other questions. – Peter Shor Mar 23 '11 at 01:21

3 Answers


A short answer (which has just become somewhat longer) is that there is more than just the Uncertainty Principle thought experiment involved in the non-determinism deduction. If it were only that, then physicists might just conclude that some classical wave phenomenon (which it is, in a way) gave rise to the HUP. The key other factor at work is the mysterious object called $\Psi$, which obeys an equation and is said to describe all of (quantum) physics. That is, all the atomic experiments (spectroscopy etc.) can be calculated from it and its equation (the Schrödinger equation).

The factor then is that $\Psi$ is not like other objects in traditional physics - those give a specific answer to a specific question. Thus a clock measures the time, and a calculation based on that can determine whether it is or is not night time in Cairo (the date and other physics input might be needed for this in general). But for physics involving $\Psi$, all that results is a probability! So there is maybe an 80% probability of result X and a 20% probability of not-X. EDIT: I shall have more to say on Einstein's position on this topic after some references at the end of the answer.

Clearly physicists want to know whether this is some kind of approximation (as Einstein believed) or whether it is a final result: whether, in some way, the maximum possible answer really is just a probability. You will find many Stack questions which take this to the next level, involving subtle tests of this probability (like Bell's Theorem). Many things are known from these theorems and the corresponding experiments, and $\Psi$ clearly gives non-deterministic results.
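To make the "all that results is a probability" point concrete, here is a toy sketch of the Born rule (my own illustration, not from the references below): a state assigns complex amplitudes to outcomes, and the measurement probabilities are the squared magnitudes of those amplitudes. The particular amplitudes are arbitrary choices.

```python
import numpy as np

# Hypothetical two-outcome state: amplitudes chosen so that
# outcome 0 has probability 0.8 and outcome 1 has probability 0.2.
psi = np.array([np.sqrt(0.8), 1j * np.sqrt(0.2)])

# Born rule: probabilities are squared magnitudes of the amplitudes.
probs = np.abs(psi) ** 2
print(probs.round(3))   # -> [0.8 0.2]
print(probs.sum())      # totals 1 (the state is normalized), up to floating point
```

This is all the theory gives: the 80/20 split, with nothing further in $\Psi$ that picks one outcome over the other on a given run.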

So the HUP is simply one consequence of the basic properties of $\Psi$, which gives something of an explanation of that result. If you want to study this further, note that words like "determinism" and "non-determinism" need to be analysed a little carefully.

EDIT: Some references:

The primary topic to study further is Einstein's thought experiment to advance his perspective. This is known as the Einstein-Podolsky-Rosen (EPR) experiment. It has become famous in exactly this area: http://en.wikipedia.org/wiki/EPR_paradox

Derived from this are further theorems, primarily Bell's Theorem: http://en.wikipedia.org/wiki/Bell_theorem This theorem is subject to "debate" from several directions. It rules out what are called "hidden variables", subject to certain conditions.

This is an old stack question related to yours: Will Determinism be ever possible?

A more recent one, related to the Bell Theorem topics, is (which I haven't studied yet):

Is contextuality required in quantum mechanics?

Another theorem related to this question is the Kochen-Specker (Conway) theorem, also discussed here: What does John Conway and Simon Kochen's "Free Will" Theorem mean?

EDIT (after some clarification on the Question):

So there are two parts to this: (a) Einstein's reaction to the points mentioned above and (b) why "most physicists" don't accept that reaction.

Einstein's reaction was to develop some thought experiments, essentially the EPR thought experiment, which has become famous. This makes the claim that even though QM is accurate, and even complete in its own terms, it is incomplete as a physical theory. The general idea was to show that uncertainty in a value (or even the impossibility of obtaining a value) was somehow just a consequence of Quantum Mechanics as it then stood (and still stands). The EPR thought-experiment was intended to show that there was other "physical information" out there which was not being captured by $\Psi$.

Initially, I think, physicists just ignored this paper, as no experiments were done and no further development was provided. There was also a theorem by John von Neumann which supported the view that there could be no "hidden variables" - which is what the Einstein position seemed to imply. (A hidden variable might be if $\Psi(x) = \Psi(x,h)$ where somehow we only measure $\Psi(x)$ probabilistically, but $\Psi(x,h)$ is more deterministic - and we never see or determine h, maybe. A few options here, as one sees.) This wasn't quite what EPR claimed, but Bohm did develop a version of QM which had a hidden field (not quite a variable) which stochastically determined things. This was a counterexample, of a sort, to the von Neumann theorem. It is now called the "Bohmian Interpretation" of QM and showed that "Interpretations" might exist which were different from the ones known in the earlier days of QM.

Bell examined this argument in the 1960s with a theorem that updated von Neumann's and clarified what was excludable by the maths of QM itself. The Kochen-Specker theorem derived similar results in a special finite case. When the Bell inequality relations were tested experimentally, the results violated the inequalities, matching the quantum-mechanical predictions. This is why most physicists consider this topic closed.
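To give a feel for what those experiments check, here is a hedged numerical sketch (my own illustration, with standard textbook angle choices; nothing here is from Bell's paper itself). The CHSH form of Bell's theorem says any local hidden-variable model must satisfy |S| <= 2, while QM predicts |S| = 2*sqrt(2) for the singlet state, whose correlation is E(a,b) = -cos(a-b). The hidden-variable model below (a hidden angle on each pair, each side outputting the sign of a cosine) is one simple example; it sits at the classical bound.

```python
import numpy as np

def S(E, a, a2, b, b2):
    """CHSH combination of four correlations."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard angle choices that maximize the quantum violation.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

# Quantum prediction for the singlet state.
def E_qm(x, y):
    return -np.cos(x - y)

print(S(E_qm, a, a2, b, b2))   # -> -2.828..., i.e. |S| = 2*sqrt(2) > 2

# A simple local hidden-variable model, sampled by Monte Carlo:
# each pair carries a hidden angle lam; each side outputs +/-1
# according to the sign of cos(setting - lam).
rng = np.random.default_rng(0)
lam = rng.uniform(0, 2 * np.pi, 200_000)

def E_lhv(x, y):
    return np.mean(np.sign(np.cos(x - lam)) * np.sign(np.cos(y - lam)))

print(abs(S(E_lhv, a, a2, b, b2)))  # ~ 2, at the classical Bell bound
```

The experiments observe correlations near the quantum value, not within the classical bound, which is what rules out this whole class of models.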

Some philosopher-physicists and some mathematical physicists like to study the remaining loopholes in these theorems and their experiments, and so the topic is under subtle discussion from that perspective. Finally, it is known that Quantum Mechanics is not Quantum Gravity, so it is possible that some modification of QM might be required before it becomes QG. Maybe something in that modification will also add some subtlety to the discussion.

Roy Simpson
  • Pretty good but you might consider linking to some of those questions (or some other references), I guess. – Marek Mar 21 '11 at 14:47
  • @Marek, yes there are so many I will take a moment to find the best few and add some links as an Edit. – Roy Simpson Mar 21 '11 at 14:48
  • "all that happens is a probability" what makes anyone say for sure there's nothing else? It seems patently clear that there's an awful lot we don't know about sub-atomic particles, and future discoveries could bridge this gap. – hasen Mar 21 '11 at 14:48
  • @hasen j, that latter is a good question, so I am working on some Stack links for you to study. This question was about the HUP, these other topics are related. – Roy Simpson Mar 21 '11 at 14:51
  • Your summary about Bell's Theorem is the closest thing to an answer. Maybe you could rewrite your answer and center it around that point? – hasen Mar 21 '11 at 15:41
  • @hasen j, I have provided an updated and extended answer. – Roy Simpson Mar 21 '11 at 16:59
  • It's gotten too long now :) The only thing I really understood is that this "Psi" thing is a probability function and it works, and there's a theorem (that hasn't been proven) which asserts no hidden variables (no yet-to-be-discovered reasons). But I already hinted at this in my question and so you're not really providing an answer yet. I'm (somewhat) aware of the EPR experiment, and according to Wikipedia, Einstein didn't directly participate in writing it and he felt the outcome was too formal and the main point got lost in the formalism. – hasen Mar 21 '11 at 20:40
  • @hasen j Not sure what you mean by "hasn't been proven" but all the theorems mentioned (esp. Bell's) have mathematical proofs. Bell's inequalities have also been tested experimentally extensively (beginning with the famous Aspect experiments - wonder why no one mentioned that) and the results have always consistently favoured the traditional probabilistic interpretation of QM, ruling out most hidden variable theories. – dbrane Mar 22 '11 at 12:14

Hasen j hacker-not-engineer. I suspect the reason is that causality in the classical sense hasn't been found to be useful enough for Physics and Engineering to justify the extra baggage. It's enough for practical purposes to construct good models for the statistics of experimental data, and the most effective mathematics for doing this is the mathematics of Hilbert spaces.

Interpretations of quantum theory exist —such as the de Broglie-Bohm and the Nelson stochastic interpretation, but there are various others— that will more-or-less fulfil what looks rather like a wish for classical causality in your Question, but they add a relatively awkward layer of mathematics that doesn't, for most Physicists, add enough insight or ability to do better Physics, Mathematics, or Engineering. There are other detailed reasons why not many physicists use these interpretations, particularly concerning Special Relativity.

We can model an experiment in a way that is as close to classical as we feel like having, while staying in the quantum theory fold, by introducing increasingly detailed models of every part of an experimental apparatus, instead of just modeling small numbers of electrons or photons, etc. This is, very loosely, called "moving the Heisenberg cut". This is quantum theory's version of "hidden variables"; model refinement happens, it's just not done by introducing classical hidden variables, it's done by introducing quantum mechanical variables that weren't in the less refined models. Consequently, there's no strong need to introduce classical hidden variables, even though it's not impossible to do so. This is loosely tied to a currently popular interpretation of what happens in quantum mechanical measurements, decoherence, which says that classical properties emerge because of the huge numbers of degrees of freedom that surround quantum systems.

You should understand that I've insinuated some unconventional views in this presentation of the conventional view. Also, there are more people in the Physics community than you might think who have committed their whole working lives to trying to find good reasons why the Physics community should do Physics more in a way that, crudely, Einstein would approve of, so far without much success. New approaches and arguments emerge fairly regularly, are considered seriously by people who would jump if they could see practical advantage in it, and are found not good enough. The de Broglie-Bohm approach is essentially a product of the 50s, the Nelson approach is essentially a product of the 60s, and from the 80s we have the GRW approach (apologies to anyone whose favorites I've missed out); it's more difficult to name an approach from the 90s and 00s that people might cite in 20 years' time, but there are many candidates, none of them obviously more useful than the others.

EDIT: Try looking at all the interpretations listed on the Wikipedia page https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics, and think whether there's a way in which any of them helps to make the construction of models more practical. The Wikipedia presentations are not necessarily the best available, but the best presentations are probably not in a different ballpark as far as the level of complexity involved is concerned. It's OK to be annoyed that there's not anything better available, but constructing something better is real hard. Not a lot different from the situation for large-scale software, perhaps, where the balance of features and simplicity is also hard to master. Wanna displace Microsoft? We have to go ahead and try.

EDIT(2): In response to your EDIT,

If a particle has 30% chance of being "here", and 70% chance of being "there", then I would assume that there's some underlying reason that determines where the particle is. But the prevalent view (as I understand it) is that there's no underlying reason, the particle just happens to sort of "choose" to be "here" 30% of the time and "there" 70% of the time with no particular reason.

As I see it, this introduces complications, because what the conventional view is depends on whether you put a question in terms of particles or in terms of quantum fields. Consider this restatement, which changes the single reason that you invoke to cause events, a particle, into a confluence of multiple or a continuum of reasons, which is much more appropriate to a field,

If there is a 30% chance of observing an event "here", and a 70% chance of observing an event "there", then I would assume that there are underlying reasons that determine whether we observe an event "here" or "there".

There is rather too little interpretation of quantum field theory, but it's to QFT that Physicists retreat when people press for details. The state of the quantum field describes where we can expect to see events (OK, to stay in the conventional I should say particles, so you can stop reading now, but it's fairly widely understood that the concept of a particle in QFT is theoretically problematic, whereas statistics of recorded events are measured) when we use a particular experimental apparatus. Now, if you want a reason why events happen "here" or "there", you can, if you're careful to understand that a quantum field is an operator-valued distribution, not a classical field, say that the statistics we observe are because of the quantum field, which is everywhere between the preparation apparatus and the events in the measurement apparatus. Art Hobson is the only person I know who has published on this, although some of my papers falteringly hint towards this kind of approach. Art's papers are not perfect, but they're available on his web-site, try "Teaching quantum physics without paradoxes" and try others if you like that. Brigitte Falkenburg's book, Particle Metaphysics may be too much Philosophy for most people, but I find it a very good counterpoint to Art Hobson.

This says only that the statistics of the events are caused by the quantum field. I'm ambivalent about this, I'd prefer to say only that the quantum field describes or models the statistics of the events, but you do what you like. If you want to know what causes individual events, then I'm going to leave you on your own. I think something more satisfactory than deBB-type trajectories for fields can be done in the field context, but I haven't done it. Actually, I've been trolling around here at PhysicsSE for a week or so while I recoup my spirits for a new thrash at issues that halted me a few weeks ago (I've been 20 years at it, so don't hold your breath). My approach is certainly not conventional, and you can find so many other approaches out there that there is no reason at all to read what I have to say about it. If we stay with conventional Physics, however, as you see in some of the other Answers, we can hardly engage with your Question at all.

Urb
Peter Morgan
  • I can see that I shall have to study this Nelson approach after all the marketing it has received recently! – Roy Simpson Mar 21 '11 at 16:41
  • Roy, haha! It was just at the front of my mind, I'm not marketing it. It's something that is not well-known but I think to be "academically well-rounded" (if one wants to be!) one should understand that it's possible. I think it's fair, however, that it's not that well-known. – Peter Morgan Mar 21 '11 at 16:57
  • Peter, since you are looking for a criticism (or at least comment) on your answer: basically you are advocating a statistical interpretation of QM and leaving open the interpretation of what happens to individual particles. The OP really wanted to know (I think) whether "all physicists" now believed in a non-deterministic interpretation of QM as the fundamental one, and if so why. So your answer slightly dodges this. Incidentally, haven't some Answers been deleted from here since we wrote ours? – Roy Simpson Mar 22 '11 at 14:04
  • @Roy, Yep, a couple have gone. Fine if Isaac doesn't want to go at it, but I'm curious. I'm not (specially) advocating a statistical interpretation, but the OP effectively asks why Physicists don't, for the most part, go for causal, non-statistical interpretations, and I tried to rock and roll with that. Usefulness for engineering, in some broad sense, is key, IMO, although Decoherence is rather causal, and the persistence of the causal phrase "Particle Physics" seems telling. Isaac's EDIT shifts the question somewhat, so I let myself ride that wave. Your Answer is useful, and I like it. – Peter Morgan Mar 22 '11 at 14:34
  • I would have thought that Engineers would be quite happy with a fully deterministic explanation of the underlying physics were such available. I think that you are really saying that the statistical QM maths is good enough for getting on with the engineering, which seems to be true enough; although another Stack Q on difficulties in Quantum Computing is making me rethink that point. – Roy Simpson Mar 22 '11 at 14:41
  • @Roy, An Engineer's (or Physicist's) job is to be ingenious. A specialist will use anything that lets them do or make something better than the next Engineer or Physicist can. The good Engineer will use methods that aren't in the book, including multiple interpretations, anything that helps their creativity. What do we teach Engineers who are not specialists in QM, or who don't handle multiple (incompatible!) points of view so well? Good teachers struggle with that. Given the mess out there, what should be pointed out as more useful than other stuff? That's a sideways take, I'm afraid. – Peter Morgan Mar 22 '11 at 14:56

In order to measure the position and speed of a particle, you need to hit it with a photon. In doing so, you change where the particle is, where it's going, and how fast. Only by not measuring it does the particle maintain its true characteristics. So, in this way, it is not merely the inaccuracy of the detector which causes the uncertainty, but the act of measuring in itself.
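Whatever one makes of the disturbance picture, the uncertainty product itself can be checked as a mathematical property of the wavefunction. Here is an illustrative numerical sketch (units with $\hbar = 1$; the grid size and width are arbitrary choices of mine): a Gaussian packet saturates $\Delta x \, \Delta p = 1/2$, with the momentum spread computed from the Fourier transform of the position-space state.

```python
import numpy as np

# Discretize a Gaussian wave packet on a grid (hbar = 1 units).
N, box = 4096, 40.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3                                    # arbitrary packet width
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize

def spread(grid, density, step):
    """Standard deviation of a probability density sampled on a grid."""
    mean = np.sum(grid * density) * step
    return np.sqrt(np.sum((grid - mean)**2 * density) * step)

dx_unc = spread(x, np.abs(psi)**2, dx)         # position uncertainty (= sigma)

# Momentum-space wavefunction via FFT (Parseval keeps it normalized).
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
order = np.argsort(p)
p_sorted, dens_p = p[order], np.abs(phi[order])**2
dp_unc = spread(p_sorted, dens_p, p_sorted[1] - p_sorted[0])

print(dx_unc * dp_unc)   # ~ 0.5, i.e. hbar/2: the HUP bound, saturated
```

The point of the sketch is that the product is fixed by the Fourier relation between position and momentum representations, independently of any particular measuring device.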

  • That doesn't answer the question at all; it doesn't even remotely address it. Surely the particle has a certain speed and a certain momentum, regardless of whether or not you can measure it. – hasen Mar 21 '11 at 18:58
  • Ahhhh I see where the problem is now. You think (if I've read you right) that quantum mechanics or the uncertainty principle says that the particle has no velocity or location or momentum or whatever. I get the confusion now. But you see, the HUP is all about the measuring of (sub-atomic) particles. It's not saying that there's no velocity or position, but that when one measures it, that act changes the condition of it. Also, remember that this only applies to the smallest of particles, not every-day phenomena like "is it daylight in Cairo". –  Mar 21 '11 at 19:22
  • I don't have a problem with that. My problem is the way people interpret that. It seems the most prevalent interpretation is that particles really act according to a probability function, not anything else. In that sense, their behavior is random. It could be here or there, but there's no reason why. I think it's not random, I think it's determined by factors we don't know. But the prevalent view among physicists seems to be that: no, it's just non-deterministic. (see my "cairo day/night" example in the question). – hasen Mar 21 '11 at 20:24
  • @hasenj - probabilistic functions do not imply randomness. And why do you place more reliance on you "thinking it's determined by factors" than on many scientists and mathematicians working on the theory to try and understand the issue? – Rory Alsop Mar 22 '11 at 15:28