9

Suppose I have the following double-slit experiment set up:

  • a monochromatic electron source of low intensity, which we can model as emitting a single electron at a time with energy $T$;
  • a diffraction screen at a distance $A$, with two macroscopic slits of width $w$, equidistant from the source and each at a distance $\frac{d}{2}$ from the center of the screen;
  • a detector screen sitting parallel to the diffraction screen at a distance $B$, consisting essentially of a lattice of ammeters, each connected to a square surface of side $r$.

[Figure: sketch of the double-slit experiment setup]

Classical quantum mechanics, and experiments, tell us that the following behaviour is observed:

  • Only one ammeter, chosen at random, is disturbed each time an electron is fired.
  • The probability distribution of which ammeter fires is the one given by QM (a standard far-field form is quoted below for reference).
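
That far-field (Fraunhofer) form, with $x$ the transverse position on the detector, $m_e$ the electron mass, and $\lambda$ the non-relativistic de Broglie wavelength obtained from $T$, is $$P(x) \;\propto\; \mathrm{sinc}^2\!\left(\frac{\pi w x}{\lambda B}\right)\cos^2\!\left(\frac{\pi d x}{\lambda B}\right), \qquad \lambda = \frac{h}{\sqrt{2 m_e T}},$$ where $\mathrm{sinc}(u)=\sin(u)/u$. The exact distribution depends on the full geometry, so this is only the standard textbook approximation.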

The interaction of the electron with the detector is not modeled: the electron is essentially modeled as a wave with interference, and the question of which ammeter fires, essentially an effect of particle behaviour, cannot be discussed within this picture.

So the measurement, or decoherence, of the electron wave function corresponds to this switch between wave-like and particle-like models of the electron. This is intellectually unsatisfying, because QM is vague about it: it merely associates mathematical operations with the procedure, without giving any particular physical or mathematical justification for them.

The questions:

Do further refinements of QM (QED or QFT or ST) give any better explanation of what justifies the measurement "recipe" of classical QM by correctly modelling the electron/detector interaction?

How, in layman's terms, is the change of picture from wave to particle modeled (if it is at all)?

Sklivvz
  • Have an answer for you here http://physics.stackexchange.com/questions/234527/quantum-field-theorys-interpretation-of-double-slit-experiment/280264#280264 – HolgerFiedler Sep 15 '16 at 18:15

6 Answers

5

An important contemporary approach to the two-slit situation you ask about is decoherence. Many contributors to this forum are fans of decoherence; I am not, but it is very important and worth attention. I couldn't help noticing you still had queries about it even after the explanations of these «fans».

Since one should always be able to include more of the outside world in the box of quantum analysis, always be able to push the boundary out, you are basically asking whether any progress has been made by including the slits and detectors in the unitary quantum picture, or whether instead any progress has been made by changing the unitary picture even a little. The decoherence approach does not change the unitarity of the evolution, and neither will QFT. (Some have wondered whether gravity or other non-linearities will, but we will not go into that right now.)

Brief sketch of decoherence

For simplicity, assume there are only two ammeters behind the slits. As always with unitary evolutions, the electron, after being diffracted by the slits, and when it reaches the plane of the ammeters, is in a superposition of two states belonging to the locations of the two ammeters on that plane (normally it will be in a superposition of many states, but we can simplify here, too): $c_1\psi_1 + c_2\psi_2$. (We will simply neglect the occurrences where neither ammeter fires.) The state space is effectively C$^2$.

Now, the ammeters are being modelled quantum mechanically, too, so the set of two ammeters has a Hamiltonian and a Hilbert space $H_{amm}$ for the state vectors (wave functions) of the system of two ammeters. The state space for the combined system of electron-being-measured and measurement apparatus (two ammeters) is then C$^2\otimes H_{amm}$. The decoherence approach makes more or less the same key assumption that Mott, London, Wigner, and many others make down to today (but which I query elsewhere; see my other posts and links): whatever dictionary or correspondence there is between the macroscopic idea of «ammeter 1 fires» or, in parallel, «ammeter 2 fires», is to be modelled by a collection of quantum states in $H_{amm}$ (this is what I usually complain about, but not here). Since we can pass to their closed span, we simplify further and assume there is one such state per outcome; label them $\phi_1$ and $\phi_2$. (A key point in this regard will be expanded upon in a minute...)

This means that $\psi_1 \otimes \phi_0$ evolves unitarily to $\psi_1 \otimes \phi_1$ (here $\phi_0$ is the initial state of the ammeter-lattice while it waits to discharge or fire), while $\psi_2 \otimes \phi_0$ evolves to $\psi_2 \otimes \phi_2$. By linearity (no progress has been made, IMHO, by tinkering with the assumption of linearity), what actually happens is that our electron and ammeters as a combined system evolve to $$c_1 \psi_1 \otimes \phi_1 + c_2 \psi_2 \otimes \phi_2,$$ which is an entangled state: neither the ammeter nor the electron can be considered a separate system anymore.
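
As a minimal numerical illustration of this linear-algebra step (an editorial sketch in the notation above, with the ammeter record reduced to a toy two-state system):

    import numpy as np

    # Electron basis states (C^2) and a toy two-state ammeter record.
    psi1, psi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    phi1, phi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    c1 = c2 = 1 / np.sqrt(2)                 # illustrative amplitudes

    # The entangled state c1 psi1(x)phi1 + c2 psi2(x)phi2 after the evolution.
    state = c1 * np.kron(psi1, phi1) + c2 * np.kron(psi2, phi2)

    # Reduced density matrix of the electron alone: trace out the ammeter.
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    rho_electron = np.einsum('iaja->ij', rho)
    print(rho_electron)  # off-diagonals scale with <phi1|phi2>, here 0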

At this point, decoherence, depending on which flavour is administered, points out something undeniable: in fact, there are many more degrees of freedom within the ammeters; our simplification (which is the same as that of Wigner, EPR, and many others) overlooks something important. It is, according to this theory, important that the ammeters are at least weakly coupled to the environment: the combined system is not quite closed, so the analysis above is only approximate.

All decoherence approaches (as far as I know) use density matrices. (I would be interested in a current reference to one that works only with pure states and not density matrices.) It can be shown rigorously that this coupling with the environment leads to a further, thermodynamic-like evolution to a density matrix which is very nearly diagonal and so can be regarded as a classical (or Bayesian) probability distribution on the two states, each of which is obviously separable: $$\psi_1 \otimes \phi_1$$ and $$\psi_2 \otimes \phi_2.$$
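
A toy way to see why those off-diagonal terms become negligible (an editorial sketch, not part of the original answer, under the crude assumption that the environment consists of $N$ independent degrees of freedom, each only slightly disturbed by the measurement):

    import numpy as np

    # Each environment mode is pushed into slightly different states e1, e2
    # depending on which ammeter fired; the per-mode overlap <e1|e2> is
    # close to 1, but the electron's off-diagonal (coherence) terms get
    # multiplied by the product of all N overlaps.
    theta = 0.05                       # assumed small per-mode disturbance
    overlap_per_mode = np.cos(theta)   # <e1|e2> for a single mode

    for N in (10, 100, 1000, 10000):
        print(N, overlap_per_mode**N)  # ~0.99, 0.88, 0.29, 3.7e-06

The suppression is exponential in $N$, which is why macroscopic apparatus makes the superposition look, for all practical purposes, like a classical mixture.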

The Coleman--Hepp model and Bell's response

In my opinion, the grand-daddy of all decoherence theories is the so-called Coleman--Hepp model. I learned it from Bell's famous paper, a freely available copy of which is here: http://www.mast.queensu.ca/~jjohnson/Bellagainst.html. In it he translates the model from the language of QFT in C*-algebras to the Schroedinger and Heisenberg pictures. It is not on the Los Alamos archive. Coleman and Hepp are, of course, two most distinguished physicists. Briefly, the criticism, which I agree with, is that a density matrix is not a state, so this is really no better from a logical or foundational point of view than the open-system approach, and it suffers from the same question-begging. (That criticism is more than thirty years old now, so this is not progress.)

The Physics of these models

These models all use a kind of thermodynamics, and this is surely right: as Peter Morgan (hi Peter) said in a previous post, the possible detections are thermodynamic events, and the way to make that precise is to take some sort of limit as the number of degrees of freedom goes to infinity. But none of these models reflects the fact that measurement is a kind of amplification; none of the thermodynamic limits involved uses negative temperature, and this is surely wrong. Feynman's opinion was that this was decisive; see Feynman and Hibbs, Quantum Mechanics and Path Integrals, New York, 1965, p. 22. So these models do not incorporate Feynman's insight. The models of Balian et al. (cited elsewhere in this thread) do incorporate it, and study phase transitions induced by tiny disturbances from an unstable equilibrium, which is indeed the physics of bubble chambers and photographic emulsions. Bohr thought that the apparatus had to be classical, and decoherence models do not use a limiting procedure that introduces a classical approximation. So they do not incorporate Bohr's insight either.

There is experimental evidence that interaction with the environment induces decoherence, but not yet in a way relevant to measurement. There is also experimental evidence, the so-called spin echoes, that mesoscopic systems can recover their coherence after losing it, which is what Wigner always assumed.

  • This came up recently on phys.org http://phys.org/news/2013-07-physicists-publish-solution-quantum-problem.html about the Balian et al. approach – joseph f. johnson Aug 29 '13 at 13:54
4

Dear sklivvz, the very same question was asked a few days ago. Quantum field theory, string theory, or any other viable theory that may supersede quantum mechanics directly reduces to non-relativistic quantum mechanics in the non-relativistic limit and changes nothing about the basic postulates of quantum mechanics.

It means that you may find the corresponding low-energy, low-speed (multi-)particle states in the Hilbert space of QFT or string theory or anything else - essentially creation operators acting on the relevant vacuum - and you may prove that the QFT or stringy Hamiltonian acts on these states exactly as the non-relativistic Hamiltonian does, plus corrections that go like positive powers of $1/c$ and may be neglected in the non-relativistic limit.
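
Schematically, the reduction described here is just the expansion of the relativistic dispersion relation (a standard identity, added for illustration): $$E = \sqrt{p^2c^2 + m^2c^4} \;=\; mc^2 + \frac{p^2}{2m} - \frac{p^4}{8m^3c^2} + \mathcal{O}\!\left(c^{-4}\right),$$ where the rest energy $mc^2$ is a constant shift, $\frac{p^2}{2m}$ is the non-relativistic kinetic term, and the remaining terms are the corrections suppressed by positive powers of $1/c$ mentioned above.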

So nothing changes about quantum mechanics and the double-slit experiment in QFT or string theory and it's likely that those things will never change. Cheers, LM

Luboš Motl
  • Hi Lubos, this does not address the question - it's a given that more advanced theories must reduce to QM, as QM models successful experiments. This is just like saying that any gravitational theory must reduce to Newton's. However, for example, QED lets us model the detectors, so, in theory, one should be able to model the measurement in a way that reduces to classical QM. What is this model? How does it reduce to classical QM? – Sklivvz Jan 30 '11 at 12:15
  • Dear Sklivvz, if you're asking how a silicon plate or iron or glass plate is represented in QED, then you're not asking a question about the double slit experiment; instead, you're asking a question about condensed matter physics. The precise description of a material depends on the material. At any rate, QED or SM or string theory won't say anything substantially different than non-relativistic QM. If you're assuming that relativistic QFT or string theory change anything about the interpretation of QM or the measurement, then your assumption is incorrect. They don't change anything. – Luboš Motl Jan 30 '11 at 16:46
  • Dear Lubos, I am asking whether modelling the detectors gives us a better understanding of why we observe a particle-like effect (one ammeter spikes per electron), instead of a wave-like effect (all ammeters have spikes distributed in interference patterns). Classical QM doesn't explain this at all (or, for that matter, why we should use a position operator when we are measuring current). – Sklivvz Jan 30 '11 at 19:32
3

The answer is "yes and no". Important research is being done on modelling, with a Hamiltonian, the joint interaction between a microscopic particle being measured and the macroscopic measurement apparatus doing the measuring. Allahverdyan, Balian, et al. have done the best, latest work. H.S. Green did a very stylised model long ago. Hannabuss has been doing important work on this.
Bibliographic information for their papers, and for others by Collett, Milburn, et al. in quantum optics (there is also C. Gardiner and P. Zoller, Quantum Noise), can be found in the bibliography of my Thermodynamic Limits, Non-commutative Probability, and Quantum Entanglement, published, and available for free at http://arxiv.org/abs/quant-ph/0507017, and in my longer paper at http://arxiv.org/abs/0705.2554.

That was the "yes" part. The cited authors are mainstream important researchers.

But. It doesn't do much to change the big picture.

In particular, it does not resolve the controversy about decoherence. The work of these authors is valid whether or not there is decoherence, and they do not address the question of what would produce it or when.
Instead, they address directly the measurement process as a unitary, quantum mechanical process. The electron is still entangled with the slits and ammeter and everything, but they can say something about the precise way in which that entanglement is "negligible for all practical purposes". So it is important work.

More precisely, to answer the last parts of your question: it is not even a good idea to think of the "particle" as sometimes being examined in a wave picture and sometimes in a particle picture. That is just sloppy undergraduate thinking; you won't find it in the axioms of QM, and you won't find it in these papers. There is no "switch between wave-like and particle-like models of the electron" in their careful analyses of the measurement process, nor in the axioms of QM. In the axioms there is a switch from using the unitary evolution axioms for the wave function to the axioms for a measurement process, but this has nothing to do with modelling the electron.

These cited papers succeed in modelling the measurement process as a unitary, deterministic evolution of a combined quantum system. The resulting entangled superposition of states is not the same thing as what the measurement axioms predict, because the entangled state is a superposition of quantum states, while the "result of a measurement process" is supposed to be a probability distribution on separable states. But they can show that the difference between these two things is practically negligible, because the "coherence" in the superposition is very, very low. If at this step you, dear reader, take the point of view of the "decoherence" crowd, apply it, and go a little further than these results do, you can then say that since the coherence is negligible, this superposition of entangled states will appear the same as a mixed state, and a mixed state is, «as we all know», a probability distribution on its different components.

So yes, progress has been made, but J.S. Bell would never have accepted going from a quantum superposition of states to even a diagonal density matrix since logically these are distinct conceptions, so the controversy continues.

0

I agree with the answer by @motl: the more elaborate theories do not modify the non-relativistic conclusions, since QFT and more involved theories have the quantum mechanical postulates built in. Since this came up again, here are my two cents of the euro:

Do further refinements of QM (QED or QFT or ST) give any better explanation of what justifies the measurement "recipe" of classical QM by correctly modelling the electron/detector interaction?

For a non-relativistic quantum mechanics model to rigorously model a problem, one has to define (a) the potentials and (b) the boundary conditions of the solutions.

In this case one would need a geometrical definition of the potentials (with delta functions): an infinite barrier outside the slits and total transmission within them. The solution of the Schrödinger equation will then give the observed probability distribution, depending on the slit width and the distance between the slits. There is nothing more esoteric than this basic quantum mechanical picture for QFT etc. to elaborate on.
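
For illustration, here is a minimal numerical sketch of that picture (an editorial addition with made-up units: a wave masked by two hard-edged slits and then propagated freely to the detector plane in the paraxial approximation):

    import numpy as np

    hbar, m = 1.0, 1.0                      # illustrative units
    x = np.linspace(-50, 50, 4096)          # transverse coordinate
    dx = x[1] - x[0]
    d, w = 10.0, 2.0                        # slit separation and slit width

    # Transmission mask: 1 inside the two slits, 0 elsewhere (hard screen).
    mask = ((np.abs(x - d/2) < w/2) | (np.abs(x + d/2) < w/2)).astype(float)
    psi0 = mask / np.sqrt(np.sum(mask) * dx)   # wave just after the screen

    # Free Schrödinger evolution to the detector plane via momentum space.
    k = 2*np.pi * np.fft.fftfreq(x.size, dx)
    t = 40.0                                # flight time, screen -> detector
    psiB = np.fft.ifft(np.exp(-1j*hbar*k**2*t/(2*m)) * np.fft.fft(psi0))
    prob = np.abs(psiB)**2                  # interference pattern at detector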

As with solving mathematical models of physical situations, it is not necessary to go into the details of "what the materials are composed of" to get a first order solution.

How, in layman's terms, is the change of picture from wave to particle modeled (if it is at all)?

In all quantum mechanical theories, relativistic or not, what comes out are probability distributions for interactions in space and time. Each detection finds the whole "particle" (the quantum-mechanical entity) within a small $\delta(x)$, $\delta(y)$ region on the screen, an area smaller than a few square microns; one says that the quantum mechanical entity shows its particle nature. This description comes from classical mechanics problems. When the interference pattern is built up by an accumulation of such points, the probability function is displayed, showing the wave nature of the probability.
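
Continuing the sketch above (again an editorial illustration): drawing single detection events from that distribution shows point-like hits, while their accumulation reproduces the wave-like fringes.

    # Reuses x and prob from the previous sketch.
    rng = np.random.default_rng(0)
    p = prob / prob.sum()                  # normalized detection probabilities
    hits = rng.choice(x, size=5000, p=p)   # each draw = one "ammeter firing"

    # A few events look random and point-like; a histogram of many events
    # traces out the interference fringes of |psi|^2.
    counts, edges = np.histogram(hits, bins=200)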

anna v
  • Thanks, but this is not what I asked. I was already familiar with how to model this experiment with classical QM, as I stated in the question. – Sklivvz Jan 13 '16 at 07:42
0

So the measurement, or decoherence, of the electron wave function corresponds to this switch between wave-like and particle-like models of the electron. This is intellectually unsatisfying, because QM is vague about it: it merely associates mathematical operations with the procedure, without giving any particular physical or mathematical justification for them.

Quantum mechanics doesn't have two models of the electron. For intuitive purposes it can be useful to visualize very broad wave packets (that are almost a momentum eigenstate) as "waves" and very localized wave packets (that are almost a position eigenstate) as "particles", but the QM description is unique.

The Copenhagen interpretation assumes that a measurement of position "collapses" the (position) wavefunction to a very localized wave packet with a width on the order of the measurement uncertainty. Decoherence explains the apparent collapse of the wavefunction by an entanglement between the system and environmental degrees of freedom due to the measurement process.
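
Schematically (an editorial illustration of the "collapse" recipe, not mmc's notation): if the apparatus resolves position to within $\sigma$ and records the outcome $x_0$, the Copenhagen prescription replaces the pre-measurement wavefunction by a localized packet such as $$\psi(x) \;\longrightarrow\; \frac{1}{(2\pi\sigma^2)^{1/4}}\, e^{-(x-x_0)^2/4\sigma^2},$$ whereas decoherence aims to recover the appearance of this replacement from unitary system-plus-environment dynamics.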

mmc
  • "Decoherence explains the apparent collapse of the wavefunction by an entanglement between the system and environmental degrees of freedom due to the measurement process.": Ok can you be more clear about this by avoiding jargon? I don't really understand what you are saying. – Sklivvz Jan 30 '11 at 13:48
  • If you are measuring a system, you can't avoid entangling the system with macroscopic objects (measurement device, observer, ...) because the state of these objects must be modified by the measurement. So you won't observe a superposition of measurements, because your state will be entangled with the measurement (in the MWI the different measurement outcomes are in "different branches"). – mmc Jan 30 '11 at 14:15
  • All "mainstream" interpretations of quantum mechanics assume that the results of measurements will be non-deterministic. In the Copenhagen Interpretation it's just a postulate, while in the Many Wolrds Interpretation it comes from you being unable to know the "branch" where you will end. I don't know enough about the Consistent Histories Interpretation to explain it in detail, but I know it doesn't deterministically select an outcome (I think that it just discards unobservable entanglements to get a "classical superposition".) – mmc Jan 30 '11 at 14:34
  • My question is not about which interpretation of QM is correct, but about why QM is correct as-is. – Sklivvz Jan 30 '11 at 19:38
  • @Skli Because it agrees with experiments – TROLLHUNTER Jan 30 '11 at 20:41
  • @Sklivvz Those are the same question. The "Why is QM correct?" question is the whole interpretations debate. – spencer nelson Jan 30 '11 at 22:17
  • QM is a non-deterministic scientific theory with enormous experimental support, regardless of the adopted interpretation. It's always possible that something we believe random is just pseudorandom in reality. But there is no evidence pointing in that direction, AFAIK. – mmc Jan 30 '11 at 22:24
  • @kake why are you being pointlessly polemic? I know it agrees with experiments, but it does so by postulating a bunch of stuff, which is never good for a theory. @spencer I haven't said QM is incorrect. I have simply asked if more complete theories give us better insights. – Sklivvz Jan 30 '11 at 22:44
  • @mmc QM is a deterministic theory, which deals with incomplete/fuzzy information. The DSE will always exhibit the same interference pattern... There is no "random" or "pseudorandom" in QM (at least as far as I know). There is predicted and unpredictable. Also, the statistical/probability laws used by QM are not, as far as I know, compatible with the classical statistical/probability laws. – Sklivvz Jan 30 '11 at 22:51
  • In QM you must separate the "classical" randomness from quantum superpositions. A special object, called a density matrix, encodes both kinds of "uncertainty". It's true that QM is, in a certain sense, purely deterministic. But not in the sense that it allows in principle deterministic predictions of specific measured values (the usual sense of determinism). – mmc Jan 30 '11 at 23:13
-2

After reading Heinz Pagels's "Cosmic Code" and its clear elaborations of quantum field theory, it seems to me that a QFT analysis of the double-slit experiment would flow along these lines (which will also be seen to offer the clearest conceptual understanding of the delayed-choice double-slit experiment):

Although light appears to us as traveling at 299792458 meters per second, from the perspective of a moving photon it is traveling in no time, i.e. instantaneously (see: http://www.universetoday.com/111603/does-light-experience-time/). From this perspective, the arrival of the photon is the same instant as its departure. This concept is designated by the term "null line". One can imagine that this "line" is like a pole placed between departure point and arrival point, such that both ends of the "null pole" (as it were) contact the starting and ending points simultaneously (i.e. departure and arrival are simultaneous). Yet "null line" is an inadequate term, since this linear concept could only represent the path of a photon. Moreover, even in basic (non-field) QT, light doesn't naturally travel as a particle but as a wave (only measurement collapses this more natural state), and the null line is a term of relativity theory, not of QFT. To describe the situation in terms of QFT, one should say that, at the instant light is released, it forms a non-moving wave pattern that is instantaneously imprinted on a quantum field extending from departure point to arrival point (i.e. from the point of the light's release to its display on the "back wall", as in the most basic DS experimental set-up). This wave-imprinted field can be imagined (for convenience's sake only) as the basic picture of the double-slit wave pattern that often accompanies descriptions of the experiment in books.

So now let's describe the implications of this reconceptualization: at the moment the light is released, a wave pattern representing the double-slit experiment is instantaneously imprinted on the quantum field. Yes, there is motion, but it is not the light that is moving (it was already "there" when it left). It is we who are bound within spacetime (not light!), and within spacetime's "speed limit": we cannot experience anything, including an experiment, as going faster than the speed of light. Thus, from the experimenter's frame of reference, he sees the illusion of light moving at 299792458 meters per second, when it is he who is moving through the time dimension at 299792458 meters per second (a dimension we cannot perceive as spatial, like the other three, due to length contraction caused by our motion through time). Our motion removes the 4th dimension from our perception (see p. 690, "The Myth", and p. 691, "In the Beginning": https://books.google.com/books?id=aBUDS_G-SU8C&pg=PA690&lpg=PA690&dq=%22a+myth+for+special+relativity%22&source=bl&ots=QlEUXl41lJ&sig=zZq2LF86ilvvkELZx0-NyobiLvk&hl=en&sa=X&ved=0ahUKEwikvaHguaXKAhXJmR4KHZ70DngQ6AEIHDAA#v=onepage&q=%22a%20myth%20for%20special%20relativity%22&f=false).

So while we're always moving through time at the speed of light, we are unaware of this motion except as the vague sense of the "motion through time" we experience as gradual aging.

To quickly pull this all together: as the experimenter watches the experiment, he is only catching up to the light that has already arrived; he is merely experiencing his motion through the wave pattern imprinted on the field. He experiences this journey (which only he is taking) as a moving wave of light. Now we can see that any change he makes during his journey to "catch" the light "null pattern" (appearing to him as a moving wave or, alternatively, as a moving particle) simply results in the entire wave pattern instantaneously changing back and forth from a particle trajectory to a wave trajectory as many times as it is viewed or not viewed before reaching the two slits.

Now for the beauty of this field interpretation when it comes to the "delayed choice" version: even after the illusory moving wave/particle has passed the two slits, any attempt to change the state by viewing or not viewing the light simply results in the entire wave pattern on the field instantaneously changing (the light is not bound by time, and its changes must be instantaneous, as is every quantum "collapse"). It thus doesn't matter whether you check the light in front of the slits or behind the slits; the wave pattern imprinted on the quantum field instantly changes. You can't "beat the light" to its destination, because it's always at its destination from the beginning, while we are moving along the wave-pattern path playing a "fool's game", trying to trick the light, and we are inevitably surprised that it seems to have known what we were doing all along (rather, it is instantly modifying its pattern as we play this dance, always manifesting the correct pattern by the time we finally reach the end of the experiment). At that point our time is up; we register, from our perspective, that the light "has arrived"; and we are amazed by the seemingly prescient nature of the (illusory) "moving light" that seems to "know" where we were and what we were doing all along (which it did, only metaphorically, of course). And even more amazingly, in the case of the delayed-choice experiment, the light appears to have even "gone back in time!"

Anyways, I think I got Pagels right. Like I said, his book is the only book that made me finally "get" this. Although, perhaps, everything I have said is wrong. But now I've said it, and I'll leave others to judge if this, in any way, helps deal with the profound mystery of the DS experiment.

  • From the perspective of the photon... no such perspective exists, there is no frame with $v=c$. – Kyle Kanos Jan 12 '16 at 11:00
  • Yes, things can always be looked at through different frames of reference. I just did a quick check on the net, and there are many other such articles that consider the frame of reference of a moving photon: http://www.universetoday.com/111603/does-light-experience-time/ – Paleo Daleo Jan 13 '16 at 00:24
  • That is a lay-science article; it is not rigorous. What that author is doing is considering time dilation, $\tau=\gamma(v)t$, as $v\to c$, and supposing that $1/0=\infty$, which is known to be improper. There is no such $v=c$ frame. – Kyle Kanos Jan 13 '16 at 00:35