4

Question and Background

So I came across a question on conditional probability in quantum mechanics: How is conditional probability handled in quantum mechanics? There is an interesting comment there explaining why this does not work in "the non-commutative case".

I was wondering, however, since there is more to quantum mechanics than just operators, whether one could ask about their relation. For example, there is time, which is a parameter rather than an operator. It seems straightforward to compute the conditional probability of an outcome given that the time is, say, $t_1$, by (for example):

$$ P(A|T_1) = |\langle x_A, t_1 | \psi, t_1 \rangle|^2 $$

where $A$ denotes the event of, say, measuring the position at $x = x_A$, $T_1$ represents the time being $t_1$, and $\psi$ is the pre-measurement state. But what if one swaps things around:

$$ P(T_1|A) = ? $$

This would ask: what is the probability of the time being $t_1$, given that we have measured the position to be $x_A$? Is there a nice relation between $P(T_1|A)$ and $P(A|T_1)$?
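For concreteness, here is a minimal numerical sketch of how I would evaluate $P(A|T_1)$, interpreting $|\psi, t_1\rangle$ as the initial state evolved to time $t_1$. The two-level Hamiltonian, state, time, and outcome index below are arbitrary illustrative choices, not part of the question itself:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # toy Hamiltonian (hbar = 1), arbitrary choice
psi = np.array([1.0, 0.0])                 # pre-measurement state |psi> at t = 0
t1 = 0.7                                   # the conditioning time T_1 (arbitrary)
x_A = 1                                    # basis index standing in for the outcome x_A

U = expm(-1j * H * t1)                     # evolution operator U_{t_1} = exp(-i H t_1)
P_A_given_T1 = abs(U[x_A, :] @ psi) ** 2   # |<x_A| U_{t_1} |psi>|^2
print(P_A_given_T1)
```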

  • You should check a very interesting paper of Aharonov, Bergmann and Lebowitz ('64), where they introduced a time-symmetric formulation of quantum measurement. – MST Nov 26 '19 at 01:00
  • @MaxStammer This is the only thing I could find not behind a paywall :/ https://arxiv.org/pdf/quant-ph/9703001.pdf (I'm not currently affiliated with a uni at the moment) – More Anonymous Nov 26 '19 at 02:49
  • I will write a brief summary about the paper as soon as possible! – MST Nov 26 '19 at 09:46
  • @MaxStammer thank you so much! – More Anonymous Nov 29 '19 at 13:42

1 Answer

3

Let $|\psi\rangle$ be the initial state, and let $U_t=e^{-i Ht}$ be the evolution operator, assuming a time-independent Hamiltonian. I will also assume for simplicity that we are working in a discrete basis. If you want to work with continuous variables, you can replace sums with integrals and you should mostly be fine.

Suppose we start at $t=0$, and measure the state at times $\{t_k\}_{k=1}^N$, letting it evolve freely in the intermediate times.

Measuring at $t=t_1$ gives the outcome $x$ with probability $p(x,t_1)=|\langle x|U_{t_1}|\psi\rangle|^2$, and a post-measurement state $|x\rangle$. Write the coefficients of $|\psi\rangle$ in the basis of $|x\rangle$ as $|\psi\rangle=\sum_x c_x |x\rangle$, and define the kernel of the evolution as $K(x,y;\delta t)\equiv\langle x|U_{\delta t}|y\rangle$. Finally, let us define $\Delta_k\equiv t_k- t_{k-1}$. We can then write $p(x,t_1)$ (assuming a discrete set of possible outcomes) as $$p(x,t_1)=\left|\sum_y K(x,y;\Delta_1)c_y\right|^2.$$
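As a sanity check, here is a minimal numerical sketch of this first step. The basis size, Hamiltonian, initial coefficients, and time interval are arbitrary toy choices, not fixed by anything above:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 4                                    # size of the discrete basis (toy choice)
H = rng.normal(size=(d, d))
H = (H + H.T) / 2                        # toy Hermitian Hamiltonian (hbar = 1)

c = rng.normal(size=d) + 1j * rng.normal(size=d)
c /= np.linalg.norm(c)                   # coefficients c_y of |psi> in the |x> basis

Delta1 = 0.5                             # Delta_1 = t_1 - t_0 (toy value)
K = expm(-1j * H * Delta1)               # K(x, y; Delta_1) = <x| U_{Delta_1} |y>

p_t1 = np.abs(K @ c) ** 2                # p(x, t_1) = |sum_y K(x, y; Delta_1) c_y|^2
print(p_t1, p_t1.sum())                  # the probabilities sum to 1
```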

Because we don't know the post-measurement state after the first measurement, we now need to switch to a density matrix formalism to take this classical uncertainty into account. We therefore write the post-measurement state as $$\rho_1=\sum_x p(x,t_1) \mathbb P_x, \text{ where } \mathbb P_x\equiv |x\rangle\!\langle x|.$$ At time $t_2$, just before the second measurement, the state is given by $$\tilde\rho_2=\sum_x p(x,t_1)\, U_{\Delta_2}\mathbb P_x U_{\Delta_2}^\dagger,$$ which then results in an outcome $x$ with probability $p(x,t_2)=\sum_y |K(x,y;\Delta_2)|^2 p(y,t_1)$, and a post-measurement state $$\rho_2=\sum_{x}p(x,t_2) \,\mathbb P_x = \sum_{x,y} |K(x,y;\Delta_2)|^2 \Big|\sum_z K(y,z; \Delta_1)c_z\Big|^2 \, \mathbb P_x.$$

You can keep going and compute the state at each successive measurement time $t_k$. If this reminds you of Feynman's path integral formulation, it's because it kind of is. The difference is that here you break the interference at every measurement time, and so the final state is determined by a mixture of quantum interference and classical probabilities.
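Here is a small self-contained sketch of this propagation step, taking the first-measurement distribution $p(x,t_1)$ as given. The Hamiltonian, the assumed $p(x,t_1)$, and the time intervals are all toy values:

```python
import numpy as np
from scipy.linalg import expm

d = 4                                    # size of the discrete basis (toy choice)
H = np.diag(np.arange(d, dtype=float))   # toy Hamiltonian ...
H[0, 1] = H[1, 0] = 0.5                  # ... with a small off-diagonal coupling

p = np.array([0.4, 0.3, 0.2, 0.1])       # assumed p(x, t_1) from the first measurement

for dt in [0.3, 0.8]:                    # Delta_2, Delta_3 (toy time intervals)
    T = np.abs(expm(-1j * H * dt)) ** 2  # stochastic matrix |K(x, y; Delta_k)|^2
    p = T @ p                            # p(x, t_k) = sum_y |K(x, y; Delta_k)|^2 p(y, t_{k-1})

rho = np.diag(p)                         # post-measurement state, diagonal in the |x> basis
print(p, p.sum())                        # normalisation is preserved
```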

Now define, for ease of notation, $q_k\equiv p(x,t_k)$. What is the probability of finding a specific $x$ for the first time at the $k$-th measurement? This equals the probability of not finding it in the previous measurements and finding it at the $k$-th, that is, $$(1-q_1)(1-q_2)\cdots (1-q_{k-1})q_k.$$

Note that with this formalism you can also answer other questions about the probability of finding a given result one or more times at specific combinations of measurement times. For example, the probability of measuring $x$ at least once will be given by $$1-\prod_{k=1}^N (1-q_k).$$
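A quick numerical illustration of these last two formulas, with made-up values for the $q_k$ (summing the first-detection probabilities over $k$ reproduces $1-\prod_k(1-q_k)$, as it should):

```python
import numpy as np

q = np.array([0.12, 0.05, 0.30, 0.22])   # toy values of q_k = p(x, t_k), k = 1..N

# (1 - q_1)(1 - q_2)...(1 - q_{k-1}) q_k for k = 1..N (stored 0-based)
first_detection = np.array([np.prod(1 - q[:k]) * q[k] for k in range(len(q))])

# probability that x is found at least once over the N measurements
at_least_once = 1 - np.prod(1 - q)

print(first_detection)
print(at_least_once, first_detection.sum())  # these two numbers coincide
```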

I don't know if there is a nice way to write these expressions in general. Maybe there is, if you write the probabilities back in terms of the kernels, but I haven't tried, and the post has already gotten a bit too long.

glS
  • One question: since we are using density matrix formalism did thermodynamics enter the picture? If so, I feel you did a magic trick and it seems like magic cause I missed something :P – More Anonymous Oct 28 '19 at 12:45
  • @MoreAnonymous I'm not sure what you mean. Thermodynamics enters the picture if you study thermodynamics quantities, which I am not doing here (as far as I know). I don't know much about quantum thermodynamics so I can't think of anything specific right now – glS Oct 28 '19 at 12:47
  • Oh, I'm under the impression that the pure state has $0$ entropy whereas the mixed state has non-$0$. So you somehow start with explaining in terms of a pure state and talk about a mixed state (or use it's notation) ... Which is interesting ... – More Anonymous Oct 28 '19 at 12:50