
Recently there have been some interesting questions on standard QM, especially on the uncertainty principle, and I enjoyed reviewing these basic concepts. In the process I came to realize I have an interesting question of my own. I guess the answer should be known, but I wasn't able to resolve the problem myself, so I hope it's not entirely trivial.

So, what do we know about the error of simultaneous measurement under time evolution? More precisely, is it always true that for $t \geq 0$ $$\left<x(t)^2\right>\left<p(t)^2\right> \geq \left<x(0)^2\right>\left<p(0)^2\right>?$$ (Here the argument $(t)$ denotes the expectation value in the evolved state $\psi(t)$, or equivalently for the operator in the Heisenberg picture.)

I tried to get general bounds from the Schrödinger equation, decomposition into energy eigenstates, etc., but I don't see any way of proving this. I know this statement is true for a free Gaussian wave packet. In this case we in fact obtain equality (because the packet stays Gaussian and because it minimizes the HUP). I believe this is, in fact, the best we can get, and for other distributions we would obtain strict inequality.

So, to summarize the questions:

  1. Is the statement true?
  2. If so, how does one prove it? And is there an intuitive way to see it is true?
SRS
Marek
  • Why do you think it would apply? You can't really make a measurement that way (either you measure at $t=0$ or at $t=T$, but never both), so you basically have two different $\psi$ solutions. Both will obey the principle independently. Am I misunderstanding your question? – Sklivvz Mar 19 '11 at 16:06
  • If your wavepacket, to begin with, saturates the uncertainty bound (i.e. is a coherent state) then this is trivially true - coherent states stay coherent under time-evolution. If your initial state is not a coherent state then the evolution is clearly more involved, but in that case you could expand your arbitrary initial state in the coherent state basis - so that this inequality (as established for coherent states) could still be used, component by component to show that it remains true for the arbitrary state. Or perhaps not. Chug and plug, baby, chug and plug. –  Mar 19 '11 at 16:08
  • @Sklivvz: there's no problem with that. The particle still needs to satisfy HUP at every moment, even if you don't measure it; I just want to make this statement quantitative. If it helps, think about this as a purely mathematical problem. – Marek Mar 19 '11 at 16:09
  • @Deepak: good idea. I know coherent states are useful for harmonic oscillator and I suppose perturbations thereof too. But what about a general system? Are there always coherent states present? – Marek Mar 19 '11 at 16:12
  • @Marek, I understand, however your statement is stronger than HUP, is it not? – Sklivvz Mar 19 '11 at 16:25
  • @Sklivvz: hm, not really stronger. It's independent but (if true) gives further information on the behavior of uncertainty. – Marek Mar 19 '11 at 16:31
  • @Marek, Coherent states don't need to be "present" to use them in a basis-centric calculation. Remember that CS form an overcomplete basis for the Hilbert space. – Roy Simpson Mar 19 '11 at 16:32
  • I don't think the statement is true. Put the minimum-uncertainty wave packet at t=0. What was the uncertainty before, at t<0? It was larger, so it had been decreasing before t=0. More generally, you cannot derive time-asymmetric statements from time-symmetric laws. –  Mar 19 '11 at 16:39
  • @Roy: correct me if I am wrong, but I assumed that coherent states are special states that satisfy a certain condition (namely minimization of the HUP) at all times. It's obvious that this condition depends on the precise Hamiltonian, and it's not obvious to me that such states can always be found. Are you perhaps talking about CS of the harmonic oscillator? If so, how do these help me? They are surely not preserved by evolution under an arbitrary Hamiltonian. – Marek Mar 19 '11 at 16:41
  • @Moshe: there are loopholes in your argument: there might be no minimum for a given system (just an infimum), and if there is a minimum, it might be preserved under evolution (as for the free Gaussian). Still, nice idea and I'll try to use it to find a counterexample in some simple system. As for the second statement: right, so I am sure you'll tell me that we can't obtain the second law either... just kiddin', I don't want to get into the discussion that made Boltzmann commit suicide :) – Marek Mar 19 '11 at 16:47
  • @Marek, in any example where you can solve the Schrodinger equation, you'll find that the quantity you are interested in grows away from t=0, both towards the past and towards the future; this is guaranteed by symmetry. As for the general statement, it is also true for the second law. You cannot derive time-asymmetric conclusions from time-symmetric laws without extra input; this is just basic logic, nothing to do with physics. The whole discussion is what that extra input is and where it comes in. –  Mar 19 '11 at 16:57
  • @Marek, I haven't yet tried the calculation suggested by Deepak (too many Stack Q's to review), but a basis is just that. The x basis vectors are the delta functions of position, p the delta basis on momentum (with a given proportion of each value for $\Psi$), likewise the CS basis. Problem is it is overcomplete, so that might cause problems, i.e. $\Psi(t) = a_1(t)\,\mathrm{CS}_1 + a_2(t)\,\mathrm{CS}_2 + \dots$. – Roy Simpson Mar 19 '11 at 17:03
  • @Moshe: good points, thanks. @Roy: I understand it is some set of vectors spanning the Hilbert space, but the question is, how are they defined for a general system? Always the same way (i.e. as "eigenstates" of annihilation operators) or depending on the Hamiltonian? I've never encountered them outside the standard QM class when talking about the harmonic oscillator, so I have no idea about the general situation. – Marek Mar 19 '11 at 17:57
  • @Marek: There are many ways of generalizing coherent states to non-harmonic systems. 1) Annihilation-operator coherent states. 2) Dynamical-symmetry-group coherent states (coset G/H, where H is the stability group of the fiducial vector). 3) Minimal (and equal) uncertainty states. 4) Klauder's generalized coherent states.... All definitions coincide for the harmonic oscillator and may be extended to get generalized squeezed states. Defn 3) may not be time-stable, depending on what observables you're minimizing. See also nLab – Simon Mar 20 '11 at 22:15
  • @Simon: very interesting, I had no idea there were so many definitions. Could you also mention some useful applications (if you happen to know some)? – Marek Mar 20 '11 at 22:35
  • @Marek: I'm not really one to ask about applications... Klauder (and others) uses them for his alternate approach to path integrals and quantization. The last third of Perelomov's book is devoted to physical applications. You can look at the Contents to see what. – Simon Mar 21 '11 at 00:51
  • @Marek: I looked at this stuff ages ago as a 4th year project - and the only application I really looked at was approximation of classical solutions. Now, looking back I understand a lot more, and I see that there's some stuff related to my current work that I should really look at closer... – Simon Mar 21 '11 at 00:52
  • I personally think that the book by Ali, Antoine and Gazeau is a great reference for coherent states. The first few pages alone are sufficient to give most people all they need to deal with coherent states. As for applications, squeezed states (generalizations of coherent states) are used frequently in cosmology in particular to address questions about the emergence of classicality after inflation. –  Mar 21 '11 at 06:20

6 Answers


The question asks about the time dependence of the function

$$f(t) := \langle\psi(t)|(\Delta \hat{x})^2|\psi(t)\rangle \langle\psi(t)|(\Delta \hat{p})^2|\psi(t)\rangle,$$

where

$$\Delta \hat{x} := \hat{x} - \langle\psi(t)|\hat{x}|\psi(t)\rangle, \qquad \Delta \hat{p} := \hat{p} - \langle\psi(t)|\hat{p}|\psi(t)\rangle, \qquad \langle\psi(t)|\psi(t)\rangle=1.$$

We will here use the Schrödinger picture where operators are constant in time, while the kets and bras are evolving.

Edit: Spurred by remarks of Moshe R. and Ted Bunn, let us add that (under assumption (1) below) the Schrödinger equation itself is invariant under the time reversal operator $\hat{T}$, which is an antilinear (conjugate-linear) operator, so that

$$\hat{T} t = - t \hat{T}, \qquad \hat{T}\hat{x} = \hat{x}\hat{T}, \qquad \hat{T}\hat{p} = -\hat{p}\hat{T}, \qquad \hat{T}^2=1.$$

Here we are restricting ourselves to Hamiltonians $\hat{H}$ so that

$$[\hat{T},\hat{H}]=0.\qquad (1)$$

Moreover, if

$$|\psi(t)\rangle = \sum_n\psi_n(t) |n\rangle$$

is a solution to the Schrödinger equation in a certain basis $|n\rangle$, then

$$\hat{T}|\psi(t)\rangle := \sum_n\psi^{*}_n(-t) |n\rangle$$

will also be a solution to the Schrödinger equation, with the corresponding time-reflected function $f(-t)$.

Thus if $f(t)$ is non-constant in time, then we may assume (possibly after a time reversal operation) that there exist two times $t_1<t_2$ with $f(t_1)>f(t_2)$. This would contradict the statement in the original question. To finish the argument, we provide below an example of a non-constant function $f(t)$.

Consider a simple harmonic oscillator Hamiltonian with the zero-point energy $\frac{1}{2}\hbar\omega$ subtracted for later convenience:

$$\hat{H}:=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^{2}\hat{x}^2 -\frac{1}{2}\hbar\omega=\hbar\omega\hat{N},$$

where $\hat{N}:=\hat{a}^{\dagger}\hat{a}$ is the number operator.

Let us set $m=\hbar=\omega=1$ for simplicity. Then the annihilation and creation operators are

$$\hat{a}=\frac{1}{\sqrt{2}}(\hat{x} + i \hat{p}), \qquad \hat{a}^{\dagger}=\frac{1}{\sqrt{2}}(\hat{x} - i \hat{p}), \qquad [\hat{a},\hat{a}^{\dagger}]=1,$$

or conversely,

$$\hat{x}=\frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+\hat{a}), \qquad \hat{p}=\frac{i}{\sqrt{2}}(\hat{a}^{\dagger}-\hat{a}), \qquad [\hat{x},\hat{p}]=i,$$

$$\hat{x}^2=\hat{N}+\frac{1}{2}\left(1+\hat{a}^2+(\hat{a}^{\dagger})^2\right), \qquad \hat{p}^2=\hat{N}+\frac{1}{2}\left(1-\hat{a}^2-(\hat{a}^{\dagger})^2\right).$$

Consider the Fock basis $|n\rangle := \frac{1}{\sqrt{n!}}(\hat{a}^{\dagger})^n |0\rangle$, where $\hat{a}|0\rangle = 0$, and the initial state

$$|\psi(0)\rangle := \frac{1}{\sqrt{2}}\left(|0\rangle+|2\rangle\right), \qquad \langle \psi(0)| = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|\right).$$

Then

$$|\psi(t)\rangle = e^{-i\hat{H}t}|\psi(0)\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle+e^{-2it}|2\rangle\right),$$

$$\langle \psi(t)| = \langle\psi(0)|e^{i\hat{H}t} = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|e^{2it}\right),$$

$$\langle\psi(t)|\hat{x}|\psi(t)\rangle=0, \qquad \langle\psi(t)|\hat{p}|\psi(t)\rangle=0.$$

Moreover,

$$\langle\psi(t)|\hat{x}^2|\psi(t)\rangle=\frac{3}{2}+\frac{1}{\sqrt{2}}\cos(2t), \qquad \langle\psi(t)|\hat{p}^2|\psi(t)\rangle=\frac{3}{2}-\frac{1}{\sqrt{2}}\cos(2t),$$

because $\hat{a}^2|2\rangle=\sqrt{2}|0\rangle$. Therefore,

$$f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t),$$

which is non-constant in time, and we are done. Alternatively, we can complete the counterexample without the above time-reversal argument by simply performing an appropriate time translation $t\to t-t_0$.
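As a sanity check, the closed-form result can be verified numerically; the minimal sketch below assumes the conventions $m=\hbar=\omega=1$ used above, and the Fock-space truncation is harmless here because the evolved state only ever occupies $|0\rangle$, $|2\rangle$ (and $|4\rangle$ under $\hat{x}^2$, $\hat{p}^2$):

```python
import numpy as np

N = 20                                      # Fock-space truncation
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)            # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.conj().T                             # creation operator
x = (ad + a) / np.sqrt(2)
p = 1j * (ad - a) / np.sqrt(2)

psi0 = np.zeros(N, dtype=complex)
psi0[0] = psi0[2] = 1 / np.sqrt(2)          # |psi(0)> = (|0> + |2>)/sqrt(2)

def f(t):
    """Uncertainty product <(dx)^2><(dp)^2> in the evolved state."""
    psi = np.exp(-1j * n * t) * psi0        # H = N is diagonal in the Fock basis
    def var(A):
        mean = (psi.conj() @ A @ psi).real
        return (psi.conj() @ A @ A @ psi).real - mean**2
    return var(x) * var(p)

# Agrees with f(t) = 9/4 - cos^2(2t)/2, which is non-constant in t
for t in [0.0, 0.3, 1.0, np.pi / 4]:
    assert abs(f(t) - (9/4 - 0.5 * np.cos(2*t)**2)) < 1e-12
```

In particular $f(0) = 7/4$ and $f(\pi/4) = 9/4$, confirming the oscillation.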

Qmechanic
  • I was thinking of trying to work out some harmonic oscillator example myself (because I have few further questions and it seems like simplest system where something nontrivial is happening) but you've beat me to it. Thanks! – Marek Mar 20 '11 at 18:57
  • Although there is one thing that bugs me. I believe the calculation is essentially right, however we have $f(0) = 1/4$ which means it minimizes HUP (unless I am misunderstanding your conventions) and therefore $\psi(0)$ would have to be Gaussian -- a contradiction with your initial state. Is there a little mistake in calculation somewhere or do I have a flaw in my argument? – Marek Mar 20 '11 at 19:02
  • Okay, I fixed it (I hope) :) – Marek Mar 20 '11 at 19:20
  • Dear @Marek: I agree, there were powers of $2$ missing in three formulas. – Qmechanic Mar 20 '11 at 19:32
  • +1 Nice clear, simple example that gets the point across. – Simon Mar 20 '11 at 22:09
  • One thing that's worth noting: you say that the Schrodinger equation is not invariant under time reversal. It's true that simply substituting $t\to -t$ is not invariant, but simultaneously changing $t\to -t$ and complex conjugating $\psi\to\psi^*$ does leave the equation invariant. That means that, for every solution $\psi(t)$, there is a corresponding solution $\psi^*(-t)$ that "looks like" the same state going backwards in time (and in particular has the same expectation values for all operators). That's what people mean when they say that the Schrodinger equation has time-reversal symmetry. – Ted Bunn Mar 21 '11 at 13:02
  • @TedBunn: "Looks like" or "is"? Actually, it should be "is", according to your argument. What is your position on this matter? – Jinxed Nov 12 '16 at 23:27

The Schrodinger equation is time-symmetric. The answer is therefore no.

knzhou
Ted Bunn
  • I'm with you, but it is probably useful for Marek to see for himself how this works in the simple example to be convinced of the general statement. –  Mar 19 '11 at 17:19
  • Yes, this seems like a good argument to settle the original question. But it brings up further questions :) In particular, Moshe's solution (minimum growing towards both future and past) is a kind of bounce. But on both sides of that bounce I suppose the inequality would be satisfied. In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0"? Or to put it more clearly: I should've asked the more general question of what the uncertainty as a function of time looks like... We now know it need not be monotone, but perhaps it has other nice properties. – Marek Mar 19 '11 at 18:07
  • I can't make heads or tails of this sentence: In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". I don't know if anything interesting in general can be said about the time evolution of $\Delta x,\Delta p$, other than of course that it's bounded below. – Ted Bunn Mar 19 '11 at 18:09
  • @Ted: ah, that was indeed not very clear. The best rephrasing is probably this: whether there exists time $t_0$ such that the inequality holds for all times $t \geq t_0$. But it is a different question. – Marek Mar 19 '11 at 20:15
  • Ah, I understand what you mean. That's a very plausible conjecture. I bet it's true for a free particle, but I don't know. – Ted Bunn Mar 19 '11 at 21:10
  • Yes, I think it's true for a free particle. Working in the Heisenberg picture, you can show that (unless I've screwed up the commutators, which is always possible) $d^2\langle x^2\rangle/dt^2=2\langle p^2\rangle/m^2$. Work in a reference frame in which $\langle p\rangle=0$, so that $\langle x\rangle$ is constant, and may as well also be taken to be zero. Then the uncertainty in $x$ is just $\Delta x^2=\langle x^2\rangle$, which has a positive second derivative $2\langle p^2\rangle/m^2$. So it eventually starts to increase. In this situation $\Delta p$ is constant, so $\Delta x\,\Delta p$ eventually increases. – Ted Bunn Mar 19 '11 at 21:53
  • @Ted Bunn; My intuition said the same thing, but how do you know that $\Delta p$ is constant? Maybe it's decreasing. I would have to make the calculation. But it makes sense that if the wave function spreads out, it will have a smaller average gradient, and so have a smaller $\Delta p$. – Carl Brannen Mar 20 '11 at 05:06
  • @Ted: @Carl is right. As I've already written in my question, the free Gaussian packet minimizes the HUP for all times. But we also know that the uncertainty in its position increases. Therefore the uncertainty in its momentum must decrease. I suppose this is a general feature of every evolution (but again I have no proof and it might well be that it fails for trivial reasons). – Marek Mar 20 '11 at 10:07
  • @Ted, @Carl: okay, now I am completely lost. For one thing, I don't see any problem with my argument about the HUP. For another, I understand Ted's comment about $\Delta p$ being constant, because momentum is an integral of motion for a free particle and therefore the packet's momentum distribution cannot change in time. Obviously I must be overlooking something trivial but can't figure out what it is :( – Marek Mar 20 '11 at 11:45
  • Okay, I got it. The evolved function obviously isn't Gaussian anymore. Only its probability density is but not its amplitude. Therefore there is no reason to suppose that it still minimizes HUP. – Marek Mar 20 '11 at 12:07
  • I think that @Marek and I are in complete agreement. Just to be explicit, let me answer @Carl's question about how we know $\Delta p$ is constant. Marek is right: For a free particle, $p^n$ commutes with the Hamiltonian, so all expectation values $\langle p^n\rangle$ are constant. So $\Delta p^2=\langle p^2\rangle-\langle p\rangle^2$ is constant. (Indeed, the entire probability distribution for $p$ is constant in time.) As a result, a Gaussian wave packet for a free particle does not remain minimum-uncertainty for all time. It spreads in real space while remaining the same in momentum space. – Ted Bunn Mar 20 '11 at 14:05
  • @Ted Bunn; Actually, the reference I gave: http://demonstrations.wolfram.com/EvolutionOfAGaussianWavePacket/ does show that the Gaussian has minimum $\hbar/2$ at t=0. Accordingly, I'll delete the comment I made saying the opposite. – Carl Brannen Mar 21 '11 at 04:17
  • I'm having such a hard time with this concept. Your statement appears wrong to me, and instead of arguing, let me present an equation to make my argument. Isn't it true that $\left<x(-t)^2\right>\left<p(-t)^2\right> \geq \left<x(0)^2\right>\left<p(0)^2\right>$ as well? The most accurate known $x$ and $p$ are at $t=0$, so with only that information, the uncertainty grows in both time directions. Yes, the equation is time symmetric, but that is consistent with your answer being wrong. – Alan Rominger Jul 31 '11 at 19:12
  • Well, this statement can't be true for all solutions to the Schrodinger equation, since there's nothing special about $t=0$ in the equation! I think I'm missing your point. – Ted Bunn Jul 31 '11 at 21:08
  • Let me be slightly more specific about the point of my previous comment. Suppose you have a solution in which the uncertainty grows as you move away from $t=0$ in both directions. Then by shifting your (arbitrary) choice of $t=0$ forward a bit, you'd have a solution in which uncertainty was growing at $t=0$, and by shifting $t=0$ backwards a bit, you'd have a solution in which uncertainty was dropping at $t=0$. – Ted Bunn Aug 16 '11 at 14:45
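The free-particle behavior discussed in these comments can be checked on a grid. A minimal numerical sketch (assuming $\hbar = m = 1$ and an initial Gaussian of width $\sigma_x = 1$ with $\langle p\rangle = 0$; the grid sizes are illustrative choices) evolves the packet exactly in momentum space via FFT and confirms that $\Delta p$ stays constant while $\Delta x^2$ grows as $1 + t^2/4$:

```python
import numpy as np

# Spatial grid and a unit-width Gaussian packet (hbar = m = 1, <p> = 0)
N, L = 4096, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi0 = np.exp(-x**2 / 4) / (2 * np.pi)**0.25   # sigma_x(0) = 1, sigma_p = 1/2

def widths(t):
    """(sigma_x, sigma_p) of the freely evolved packet at time t."""
    # Free evolution is exact in momentum space: multiply by exp(-i k^2 t / 2)
    psi = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))
    prob_x = np.abs(psi)**2 * dx
    var_x = (prob_x * x**2).sum() - (prob_x * x).sum()**2
    prob_k = np.abs(np.fft.fft(psi))**2
    prob_k /= prob_k.sum()
    var_k = (prob_k * k**2).sum() - (prob_k * k).sum()**2
    return np.sqrt(var_x), np.sqrt(var_k)

for t in [0.0, 1.0, 2.0]:
    sx, sp = widths(t)
    assert abs(sx**2 - (1 + t**2 / 4)) < 1e-6   # position spread grows
    assert abs(sp - 0.5) < 1e-6                 # momentum width never changes
```

So the packet is minimum-uncertainty only at $t=0$: $\Delta x\,\Delta p = \tfrac{1}{2}\sqrt{1+t^2/4}$ grows away from that instant in both time directions, exactly as the time-symmetry argument demands.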

No. Here's a simple example where it shrinks:

You have a particle that has a 50% chance of being on the left and moving right, and a 50% chance of being on the right and moving left. This has a macroscopic error in both position and momentum. If you wait until it passes the halfway point, it has a 100% chance of being in the middle, which is a microscopic error in position. There will also be only a microscopic change in momentum. (I'm not entirely sure of this as the possibilities hit each other, but if you just look right before that, or make them miss a little, it still works.)

As such, the error in position decreased significantly, but the error in momentum stayed about the same.
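The scenario above can be sketched numerically as a superposition of two free Gaussian packets launched toward each other (a minimal model assuming $\hbar = m = 1$; the positions $\pm x_0$ and momenta $\mp p_0$ are illustrative choices, not from the answer itself):

```python
import numpy as np

N, L = 8192, 400.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Two unit-width packets at +-x0, each moving toward the origin with speed p0
x0, p0 = 15.0, 5.0
psi0 = (np.exp(-(x - x0)**2 / 4 - 1j * p0 * x)
        + np.exp(-(x + x0)**2 / 4 + 1j * p0 * x))
psi0 /= np.sqrt((np.abs(psi0)**2).sum() * dx)

def delta_x(t):
    """Position uncertainty of the freely evolved superposition at time t."""
    psi = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))
    prob = np.abs(psi)**2 * dx
    return np.sqrt((prob * x**2).sum() - (prob * x).sum()**2)

t_meet = x0 / p0                            # the two possibilities overlap at the origin
assert delta_x(t_meet) < delta_x(0.0) / 4   # position error shrinks dramatically
```

Here $\Delta x$ drops from roughly $x_0 \approx 15$ at $t=0$ to the width of a single spread packet ($\approx 1.8$) at the meeting time, while the free-particle momentum distribution, and hence $\Delta p$, stays fixed.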

user2898

Marek,

Think in terms of Harmonic Functions and their Maximum Principle (or Mean Value Theorem).

For simplicity (and, in fact, without loss of generality), let's just think in terms of a free particle, i.e., $V(x,y,z) = 0$. When the potential vanishes, the Schrödinger equation is nothing but a Laplace equation (or a Poisson equation, if you want to put in a source term). And, in this case, you can apply the Mean Value Theorem (or the Maximum Principle) and get a result pertaining to your question: in this situation you saturate the equality.

Now, if you have a potential, you can think in terms of a Laplace-Beltrami operator: all you need to do is 'absorb' the potential into the kinetic term via a Jacobi metric: $\tilde{\mathrm{g}} = 2\, (E - V)\, \mathrm{g}$. (Note this is just a conformal transformation of the original metric in your problem.) And, once this is done, you can just turn the same crank as above, i.e., we have reduced the problem to the previous one. ;-)

I hope this helps a bit.

Daniel
  • I am sorry but I don't see how this is related to uncertainty and time evolution. Could you explain that? – Marek Mar 19 '11 at 20:51
  • @Marek: the point was made explicit by Qmechanic, in his answer above. If you apply what i said in the Schrödinger picture, you get evolving states whose magnitude is always bound by the Mean Value Theorem. (If we were talking about bounded operators, this could be made rigorous with a bit of Functional Analysis.) – Daniel Mar 20 '11 at 19:32

A physical way of seeing this is that the phase-space volume of a system is preserved. Hamiltonian mechanics preserves the volume of a system on its energy surface $H = E$, which in quantum mechanics corresponds to the Schrödinger equation. The phase-space volume on the energy surface is composed of units of volume $\hbar^{2n}$ for the momentum and position variables, plus the $\hbar$ of the energy in $i\hbar\,\partial\psi/\partial t = H\psi$. This is then preserved. Any growth in the uncertainty product $\Delta p\,\Delta q \geq \hbar/2$ would then imply growth in the phase-space volume of the system. This would mean there is some dissipative process, or that the quantum dynamics is replaced by some master equation with a thermal or environmental loss of some form. For a pure unitary evolution, however, the phase-space volume of the system, or equivalently $\mathrm{Tr}\,\rho$ and $\mathrm{Tr}\,\rho^2$, is constant. This means the uncertainty relationship is a Fourier transform between complementary observables which preserves an area $\propto \hbar$.
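The statement that $\mathrm{Tr}\,\rho$ and $\mathrm{Tr}\,\rho^2$ are constant under unitary evolution is easy to check numerically; a minimal sketch with a random Hermitian Hamiltonian and a random mixed state (both illustrative, not tied to any particular system):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

# Random Hermitian Hamiltonian
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2

# Random (mixed) density matrix: positive semidefinite with unit trace
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = B @ B.conj().T
rho /= np.trace(rho).real

# Unitary evolution U = exp(-iHt) via the eigendecomposition of H
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * 0.7)) @ V.conj().T
rho_t = U @ rho @ U.conj().T

# Both the trace (total probability) and the purity are conserved
assert abs(np.trace(rho_t).real - 1.0) < 1e-10
assert abs(np.trace(rho_t @ rho_t).real - np.trace(rho @ rho).real) < 1e-10
```

Note, as the comments below point out, that this conservation by itself does not constrain the individual uncertainties $\Delta x$, $\Delta p$.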

  • -1, this is completely irrelevant to my question. I am interested just in pure states and for those phase volume is always zero and so trivially conserved. But this doesn't give any information on the behavior of uncertainty. – Marek Mar 21 '11 at 13:20
  • The volume a system occupies in phase space defines entropy as $S = k\log(\Omega)$ for phase-space volume $\Omega$. The von Neumann entropy is $$ S = -k\,\mathrm{Tr}\,\rho\log(\rho). $$ A maximally mixed state has each diagonal element of $\rho$ equal to $1/n$, and the entropy is $-\sum(1/n)\log(1/n) = \log(n)$ (in units of $k$). A pure state then occupies a phase-space region that is normalized to unit volume --- not zero. – Lawrence B. Crowell Mar 21 '11 at 14:45

The Heisenberg Uncertainty Principle revolves around the Compton effect, which states that wavelength (w) is inversely proportional to E, p, and f. However, if one gathers up several quantum physics formulas, one can create these four formulas and use a form of geometry to address the Uncertainty Principle by finding the electron's (e-) location and momentum at the same time, while being able to approximate where its presence was, is, and will be.

[Formulas are of my creation]

  1. ((h(c/wi) - h(c/wf))/c^2) + m(e-) = m(e- post photon collision)
  2. h(c/wi) - h(c/wf) = E(nergy expelled from photon when it collided with the electron)
  3. ((h(c/wi) - h(c/wf))/c) = p
  4. (h(c/wi) - h(c/wf)) + m(e-)c^2 = total e- E

Using a 60/30 rule, one can find the angle that the electron will be launched at, and where it was before the photon collided with it, by using the original trajectory and the current one to find where they intersect. This will allow you to find its location.

If you disagree with this, please tell why so that I can improve it.

The momentum formula: [image]

The energy expelled from the photon upon collision: [image]

LKING