
It is said that decoherence does not solve the measurement problem and/or explain the emergence of classicality. Can somebody explain this with simple analogies, or in a manner accessible to a non-professional?

Concepts like coarse graining, diagonalization of the density matrix, discontinuous jumps and so on are way beyond me, as are comments that there are no collapse models.

user12103

4 Answers


You ask a lot, because these questions are very deep and require a very good understanding of the mathematical concepts and their physical meaning.

That said, decoherence is an important aspect of the quantum-to-classical transition, because it shows how the environment can destroy the precisely defined phase relationship between different components of a superposition. Without a well-defined phase relationship the components cannot interfere, but instead add up classically (in the sense of classical probability distributions).
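
To see the "cannot interfere" point in formulas (a minimal two-level sketch of my own, not part of the original answer): take a superposition $|\psi\rangle = a|0\rangle + b|1\rangle$ and ask for the probability of finding it in $|+\rangle = (|0\rangle+|1\rangle)/\sqrt{2}$,

$$P(+) = |\langle +|\psi\rangle|^2 = \tfrac{1}{2}\left(|a|^2 + |b|^2\right) + |a||b|\cos\varphi ,$$

where $\varphi$ is the relative phase between $a$ and $b$. If the environment scrambles $\varphi$, averaging over it kills the last (interference) term and only the classical sum of probabilities survives.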

It is often claimed that decoherence also describes the collapse of the wavefunction and the emergence of classical probabilities for the measurement outcomes. This is not true, as a decohered state still describes a single state of the global system and not an ensemble of measurement outcomes. Some then argue that it does not have to describe an ensemble, as long as it is dynamically indistinguishable from one. This argument only applies if you accept the measurement postulates, which allow one to describe an ensemble with the same mathematical description as a decohered state. But if you argue like that, then it is the measurement postulate that creates the measurement outcome, not decoherence. And if you had applied the measurement postulate before the state decohered, the measurement outcomes would have been the same, so this argument is meaningless.
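
To make the "single state of the global system" point concrete (a standard two-level sketch, not taken from the answer above): after the interaction with the environment the global state is still one pure state,

$$|\Psi\rangle = a\,|0\rangle_S|E_0\rangle + b\,|1\rangle_S|E_1\rangle ,$$

and only the reduced state of the system,

$$\rho_S = \operatorname{Tr}_E |\Psi\rangle\langle\Psi| \approx |a|^2|0\rangle\langle 0| + |b|^2|1\rangle\langle 1| \qquad (\langle E_0|E_1\rangle \approx 0),$$

looks like an ensemble. Nothing in this formalism has selected one particular outcome.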

Another common argument is that decoherence explains the (relative) state collapse together with the state branching of the many-worlds interpretation. While this allows the actual construction of an ensemble of worlds from a single state, and therefore supports the above arguments, it fails to deliver the right probabilities for explaining a state collapse in the sense of the measurement postulate. There are also further issues with determining what the proper branches are in the first place.

The bottom line is that decoherence is surely an important aspect of the quantum-to-classical transition, but it is not the final answer to the measurement problem.

A.O.Tell
  • I have a question: you say that a decohered state still describes a single state of the global system, but to what extent does this decohered state include the observer? This is how I think about it; does it make any sense? It seems to me that if the state has decohered to the point that the observer sees an ensemble of measurement outcomes, then the observer is part of the newly decohered state. If not, then the observer will still see a single state of the global system (albeit a bigger, more complex one). – lukewm Feb 10 '13 at 15:58

Take everyday probability estimates.

What is the probability that one will be killed crossing this road?

One can have a compilation of all deaths at crossings, maybe even fitted to a functional form, and see that the probability is 1/10,000 that one will be dead after crossing the road.

This does not mean that one is 1/10,000 dead and 9,999/10,000 alive. The probability function is not describing any particular person. When a death occurs, one does not say: the probability function collapsed for this person. In a similar way, the probabilities given by solutions of the quantum mechanical equations are just that: probabilities. When a measurement happens, the quantum mechanical functions that describe the future probability behavior of the particle change. That is the only meaning of collapse, nothing esoteric. It is similar to some polls: once you have taken them, the sample you belong to changes, because of the knowledge you acquired from the polling questions, and you no longer belong to the original polling sample.
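
If you want the analogy in symbols (my own sketch, not part of the original answer): learning that event $B$ occurred updates a classical distribution by conditioning, and the state update after a quantum measurement plays the same bookkeeping role,

$$P(A) \;\longrightarrow\; P(A\mid B) = \frac{P(A\cap B)}{P(B)}, \qquad |\psi\rangle \;\longrightarrow\; \frac{P_k|\psi\rangle}{\lVert P_k|\psi\rangle\rVert},$$

where $P_k$ projects onto the observed outcome. In both cases the "collapse" is an update of the function used to predict future probabilities.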

So far so good.

Now let's tackle decoherence.

Take a large crystal. You would think it is a classical object since you are holding it in your hand, but you would be wrong. A crystal is one of the clearest manifestations of the underlying quantum mechanical layer of nature. One can measure its symmetries in various ways non-destructively, with X-rays for example. Those symmetries are the collective manifestation of the phases that are characteristic of the underlying quantum mechanical nature, each small crystal unit building coherently on the others to grow into a large observable crystal that retains the phases between the molecules and atoms.

The coherence can easily be lost. Take a large hammer and beat the crystal to dust (do not do it to a diamond; salt is fine). What is the difference between the crystal state and the dust state? The mass and the atoms are the same, but the phases were lost, decohered.

In a similar way, all matter starts from atoms and molecules with quite definite phases relative to each other, but generally these phases are lost very fast. Decoherence means that even though in principle we might know how one water molecule relates quantum mechanically to a water molecule one centimeter away, the complexity of the problem is such that it becomes statistical: quantum statistical mechanics to start with, and at large scales classical statistical mechanics, with its classical probability distributions, answering questions about matter. It is only in special cases, as in crystals, superconductivity and superfluidity, that the underlying quantum phases are built up instead of turning into an incoherent mass.

Now measurement:

Whatever we measure arrives in our comprehension through a large number of proxies. By proxy I mean an intermediary, a mathematical function (necessary for physics observations) that deconvolutes the basic interaction (which is quantum mechanical) up to the level our brains can apprehend. Example: a basic antiproton–proton annihilation seen in a bubble chamber. Ignoring that our eyes see it, the mathematical path is as follows: measure the curvature of the tracks, deconvolute to momenta and masses, fit the hypothesis to energy and momentum conservation, find total energy = mass of two protons, decide: proton–antiproton annihilation. All these processes are classical, but we arrive at a quantum mechanical measurement. In my opinion all measurements are classical, but when deconvoluted to the quantum level, measurement = registered interaction. What is the problem?
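
As a toy version of that reconstruction chain (my own sketch in Python with made-up track numbers, assuming singly charged tracks in a uniform magnetic field and that all tracks are pions; it is only meant to show the classical arithmetic, not real bubble-chamber analysis):

```python
import math

def momentum_from_curvature(radius_m, b_tesla):
    """Transverse momentum in GeV/c of a unit-charge track: p = 0.3 * B * r."""
    return 0.3 * b_tesla * radius_m

def invariant_mass(four_vectors):
    """Invariant mass in GeV/c^2 of a list of (E, px, py, pz) four-vectors."""
    E  = sum(p[0] for p in four_vectors)
    px = sum(p[1] for p in four_vectors)
    py = sum(p[2] for p in four_vectors)
    pz = sum(p[3] for p in four_vectors)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

M_PI = 0.1396   # charged pion mass, GeV/c^2
B    = 1.0      # magnetic field in tesla (assumed)

# Hypothetical measured tracks: (curvature radius in m, unit direction vector)
tracks = [
    (1.67, ( 0.6,  0.8,  0.0)),
    (1.67, (-0.6, -0.8,  0.0)),
    (1.32, ( 0.0,  0.6,  0.8)),
    (1.32, ( 0.0, -0.6, -0.8)),
]

four_vectors = []
for radius, (nx, ny, nz) in tracks:
    p = momentum_from_curvature(radius, B)   # curvature -> momentum
    E = math.sqrt(p**2 + M_PI**2)            # pion hypothesis -> energy
    four_vectors.append((E, p * nx, p * ny, p * nz))

m = invariant_mass(four_vectors)             # energy-momentum bookkeeping
print(f"reconstructed invariant mass: {m:.3f} GeV/c^2")
print(f"two proton masses:            {2 * 0.9383:.3f} GeV/c^2")
```

The reconstructed mass coming out close to two proton masses is what lets one decide "antiproton–proton annihilation at rest"; every step in the chain is classical arithmetic on classical track data.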


anna v

Decoherence gives you a mixed state, one in which you can read off the probabilities. But in each single case you still measure just one particular outcome, not the probability distribution. Decoherence can't tell you, therefore, why you always measure one particular eigenstate; it just reproduces Born's rule, telling you why you measure eigenstates with certain probabilities. In this sense it doesn't solve the measurement problem, because it leaves you hanging with a statistical mixture.

WIMP

Take a system initially uncorrelated with the environment, with both in pure states. It interacts with the environment, forming an entangled state. Taking a partial trace over the environment leads to a mixed-state density matrix for the system with nearly zero off-diagonal entries.

The problem is that we can reverse both the environment and the system so that the entanglement disappears, leading to recoherence. That's the key point: decoherence in the present might possibly be undone by recoherence in the future. Of course, if you subscribe to the modal interpretation, that's no big deal, but Copenhagenists will have to tell you: no, you have to be the One and Only observer, you must first wait for an eternity outside the universe, and you also have to be infinite.
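
Here is that construction, and its reversal, in a two-qubit toy model (my own numerical sketch; real environments have enormously many degrees of freedom, which is exactly why the reversal is hopeless in practice):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System qubit in an equal superposition, environment qubit in |0>
psi_S = (ket0 + ket1) / np.sqrt(2)
psi_E = ket0
psi = np.kron(psi_S, psi_E)          # joint pure product state

# Entangling interaction: CNOT with the system as control, environment as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def reduced_system_state(psi_joint):
    """Partial trace over the environment (second tensor factor)."""
    rho = np.outer(psi_joint, psi_joint.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

psi_dec = CNOT @ psi                 # decoherence: system and environment entangle
psi_rec = CNOT.T @ psi_dec           # recoherence: the interaction is reversed

print("rho_S after decoherence:\n", reduced_system_state(psi_dec).round(3))
print("rho_S after recoherence:\n", reduced_system_state(psi_rec).round(3))
```

After the CNOT the reduced density matrix is diagonal (the off-diagonal coherences are gone); after undoing the interaction the off-diagonal entries come back.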

Suppose the system has N effective degrees of freedom. Then the decohered density matrix might have a rank which is exponential in N, and the spacing between its eigenvalues might be exponentially small. Now, exponentially small is, for all practical purposes (FAPP), zero. So, FAPP, we have eigenspaces of huge dimensionality. There is no unique basis decomposition of these eigenspaces, so the choice of basis compatible with decoherence is far from unique. This is the preferred basis problem. Some of these decoherent bases are far from quasiclassical, and mere decoherence is insufficient to pick out a quasiclassical one.
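
A textbook two-dimensional illustration of that non-uniqueness (my addition, not from the answer above): the maximally mixed qubit state has a degenerate spectrum, so its decomposition into orthogonal states is not unique,

$$\rho = \tfrac{1}{2}\big(|0\rangle\langle 0| + |1\rangle\langle 1|\big) = \tfrac{1}{2}\big(|+\rangle\langle +| + |-\rangle\langle -|\big), \qquad |\pm\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle \pm |1\rangle\big),$$

and FAPP-degenerate eigenspaces of a decohered density matrix leave the same freedom of basis choice, which is the preferred basis problem described above.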