
Imagine an ensemble of $N$ identical and identically prepared quantum systems, all in the state $\psi(x,t)$ at time $t$. Given the state (which could be a Gaussian in position), the postulates of quantum mechanics tell us, for example, what the result of position measurements on this ensemble at time $t$ will be, i.e. which position eigenvalue will be obtained with what probability. They allow us to calculate $\Delta x$ theoretically from $\psi(x,t)$ alone. Given $\psi(x,t)$, the calculation yields a definite value for $\Delta x$ (say, $\Delta x=0.05\,$mm). This value, obtained solely from $\psi(x,t)$, seems to be blind to how the process of measurement is (or will be) carried out.

  • For a given ensemble with fixed $N$ and given $\psi(x,t)$, is it not true that $\Delta x$ will depend on how precise an apparatus is used to make the measurements?

However, I don't think there is any serious problem here. If, for example, $x\in[-5,+5]$ in some units, and the measuring apparatus has a least count of $1$ in the same units, the only values that can arise in a measurement are $\{-5,-4,-3,\ldots,+3,+4,+5\}$ (something like $1.3$ or $3.7$ is not measurable). The theoretical value of $\Delta x$ should therefore also be calculated by discretizing the integrals over $x$. If, on the other hand, the least count of the apparatus were $0.5$ in the same units, there would be more allowed values of $x$ than in the previous case, and the theoretical $\Delta x$ should be recalculated accordingly. So it seems that the theoretical $\Delta x$ does have a direct bearing on how the measurement is carried out.
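The discretization idea above can be sketched numerically. This is a minimal illustration, not part of the original question: it assumes a Gaussian $|\psi|^2$ with $\sigma = 2$ and models the apparatus by rounding each position to the nearest allowed reading (the function name `delta_x` and all parameters are hypothetical choices for the example).

```python
# Sketch (hypothetical setup): effect of an apparatus "least count" on the
# computed spread of a Gaussian probability density, modeling the apparatus
# as rounding each position to the nearest allowed reading.
import numpy as np

def delta_x(sigma=2.0, least_count=None, n_grid=200001, half_width=50.0):
    """Standard deviation of a Gaussian |psi|^2 on a fine grid,
    optionally discretized to the apparatus least count."""
    x = np.linspace(-half_width, half_width, n_grid)
    prob = np.exp(-x**2 / (2 * sigma**2))
    prob /= prob.sum()                               # normalize on the grid
    if least_count is not None:
        x = np.round(x / least_count) * least_count  # allowed readings only
    mean = np.sum(prob * x)
    return np.sqrt(np.sum(prob * x**2) - mean**2)

print(delta_x())                  # fine-grid ("continuum") value, close to sigma
print(delta_x(least_count=1.0))   # coarse apparatus: slightly larger spread
print(delta_x(least_count=0.5))   # finer apparatus: closer to the continuum value
```

In this model the discretized spread does shift with the least count (roughly by Sheppard's correction, $h^2/12$ added to the variance for bin width $h$), which matches the intuition in the paragraph above, though the effect is small when the least count is much finer than the intrinsic spread.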

  • However, experimentally, is it also not true that $\Delta x$ will be different for an ensemble with $N=1000$ and another with $N=10000$, both ensembles being specified by the same state $\psi(x,t)$? How do we resolve this?

2 Answers

  • The actual measurements, of course, depend on the accuracy and precision of the measuring apparatus. You are correct that we do not take this into account when we calculate $\Delta x$. However, there is something more pervasive in the case of the position operator: the framework of quantum mechanics itself tells us that the position operator is not really observable, in the sense that eigenstates of the position operator are not in the Hilbert space (because they are not normalizable).
  • Of course, experimental verification of any probability distribution intrinsically refers to ratios of the frequency of an outcome to the total number of trials as the total number of trials $\to \infty$. The way we derive $\Delta x$ is simply by evaluating the expectation values of $x$ and $x^2$, and these expectation values have baked into them this reference to the total number of trials $\to \infty$. So, in terms of your formulation of the question, the $\Delta x$ that we calculate is for $N\to\infty$; the larger the $N$ you take, the better you approximate what you are actually calculating.
  • @mithusengupta123 I have swapped the points accordingly, I will try to give a closer look at your new remarks in a while and edit my response if I have something to say about it. Thanks for the update :) –  Mar 14 '21 at 14:25
  • Thank you for the answer. Can you please comment on my remark/understanding about the question in the first bullet point? – Solidification Mar 14 '21 at 14:25
  • " However, there is something more pervasive in the case of the position operator...(because they are not normalizable)." If we ignore this for a moment, does my remark following the first bullet make sense? – Solidification Mar 14 '21 at 14:41

Quantum mechanical uncertainty - that which we denote by $\Delta x$ - has nothing to do with measurement, see for example this question and its linked questions. The $\Delta x$ we compute in quantum mechanics is the standard deviation of $x$ assuming a perfect measurement apparatus. It is an abstract statistical quantity derived from the probability distribution for the position variable that is encoded in the quantum state ("wavefunction") and has no direct relation with any actual measurements being performed.

Think about flipping a fair coin, i.e. a coin which you believe has 50% probability to show heads and 50% to show tails. If we assign heads the value -1 and tails the value 1, then the expected value is 0, with a standard deviation of 1. The expected value of the sum of $n$ coin tosses is still 0, with a standard deviation of $\sqrt{n}$. If you actually go and flip $n$ coins, you can try to estimate the standard deviation of the underlying distribution with one of the common expressions for standard deviations of samples. This might come out to be close to $\sqrt{n}$, it might not - the only thing that is guaranteed is that the estimate converges to the theoretical value as $n\to\infty$. Note that in this case, the measurement apparatus is perfect - we can tell whether a coin shows heads or tails without any room for error.
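The coin analogy above can be checked directly. This is a minimal sketch with arbitrary sample sizes, not part of the original answer: heads = +1, tails = -1, and the "apparatus" reads each coin perfectly.

```python
# Sketch of the coin analogy: the theoretical standard deviation of the sum
# of n fair-coin flips (+1/-1) is sqrt(n); a finite set of repeated trials
# only yields an estimate of it, even with a perfect "apparatus".
import numpy as np

rng = np.random.default_rng(42)
n = 100                                   # flips summed per trial
trials = 20000                            # repeated experiments
sums = rng.choice([-1, 1], size=(trials, n)).sum(axis=1)
print(np.std(sums, ddof=1), np.sqrt(n))   # sample estimate vs theoretical sqrt(n)
```

With 20000 trials the estimate lands near $\sqrt{100}=10$ but not exactly on it; only in the limit of infinitely many trials would the two coincide.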

That is, the "$\Delta x$" you compute from a sample is not actually the same quantity as the $\Delta x$ we compute from the theory for a quantum state - the former is merely an estimate of the latter, even if we have a perfect measurement apparatus.

ACuriousMind
  • Are you saying that $(\Delta x)_{\rm experimental}\neq (\Delta x)_{\rm theoretical}$ because the r.h.s. is calculated for a perfect measuring apparatus? – Solidification Mar 14 '21 at 18:22
  • @mithusengupta123 No, that is exactly not what I'm saying - I'm saying the two would be different even if you had a perfect apparatus because the "experimental" quantity is just an estimation of the "true" standard deviation from a finite sample size. – ACuriousMind Mar 14 '21 at 19:35
  • So you're saying that even if we did the actual experiment with a perfect apparatus, the experimental standard deviation would converge to the theoretical value ONLY in the limit of infinite sample size, and otherwise need not. Please let me know if I got it. Moreover, since real measurements will always involve an imperfect apparatus and other experimental uncertainties, even in the infinite-sample-size limit $(\Delta x)_{\rm expt}$ will be greater than $(\Delta x)_{\rm theo}$, because errors will add up on top of the intrinsic quantum spread $(\Delta x)$ computed theoretically. – Solidification Mar 15 '21 at 03:20
  • @mithusengupta123 Yes, exactly. – ACuriousMind Mar 15 '21 at 08:45
  • Sorry to come back to this, @ACuriousMind. Can you suggest some trustworthy references that have the correct interpretation of the uncertainty principle? – Solidification Mar 27 '21 at 11:34