
Background I:

Consider the commonly used non-commuting operators $\hat p$ and $\hat x$.

The uncertainty principle tells us that $\sigma_p\sigma_x\geq \frac{\hbar}{2}$.

In standard quantum mechanics classes, the most common explanation is something like: "...this tells us that, if we know about $x$, ..., then we won't know anything, or will be less certain, about $p$, ..."

However, I've always wondered about the exact nature of this constraint on the information one could ever know about a system. Further, one should notice that $\sigma_x\sigma_p$ is built from the variances of statistical distributions, and is therefore one particular kind of inference about the system.

Background II:

I've been looking into estimation theory these days, with related concepts such as Fisher information and loss functions, where the "goodness" of an estimator is evaluated by its risk (e.g. the normalized quadratic risk). (Concepts such as bias are the standard ones from inferential statistics.)
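For concreteness, and so we're on the same page about terminology, by bias and quadratic risk I mean the standard textbook quantities (the notation here is mine):

$$\mathrm{Bias}_\theta(\hat\theta) = \mathbb{E}_\theta[\hat\theta] - \theta, \qquad R(\theta,\hat\theta) = \mathbb{E}_\theta\big[(\hat\theta-\theta)^2\big] = \mathrm{Var}_\theta(\hat\theta) + \mathrm{Bias}_\theta(\hat\theta)^2 .$$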

Now here's the "mind-blowing" part of the theory: for a nonlinear loss function (such as a quadratic loss), it's possible to construct a biased estimator that is locally (i.e. in a neighborhood of parameter values) more efficient than the standard estimator such as the sample mean! (See the sketch below.)
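A minimal example of what I mean (standard shrinkage-estimator material; the specific numbers below are just for illustration): for $X_1,\dots,X_n \sim \mathcal N(\theta,\sigma^2)$, the biased estimator $\hat\theta_c = c\bar X$ with $0<c<1$ has risk $R(\theta,\hat\theta_c) = c^2\sigma^2/n + (1-c)^2\theta^2$, which is smaller than the risk $\sigma^2/n$ of the sample mean whenever $\theta^2 < \frac{\sigma^2}{n}\,\frac{1+c}{1-c}$, i.e. only in a neighborhood of $\theta = 0$. A quick Monte Carlo check of this (Python, with made-up parameter values) looks like:

```python
import numpy as np

# Illustrative parameters (arbitrary choices, only for demonstration)
theta_true = 0.1      # true mean, close to the shrinkage point 0
sigma = 1.0           # known standard deviation
n = 10                # sample size
c = 0.8               # shrinkage factor, 0 < c < 1
trials = 200_000      # Monte Carlo repetitions

rng = np.random.default_rng(0)
samples = rng.normal(theta_true, sigma, size=(trials, n))
xbar = samples.mean(axis=1)          # unbiased sample mean
shrunk = c * xbar                    # biased shrinkage estimator c * xbar

mse_xbar = np.mean((xbar - theta_true) ** 2)      # ~ sigma^2 / n
mse_shrunk = np.mean((shrunk - theta_true) ** 2)  # ~ c^2 sigma^2/n + (1-c)^2 theta^2

print(f"MSE of sample mean      : {mse_xbar:.4f}")
print(f"MSE of shrinkage c*xbar : {mse_shrunk:.4f}")
# Near theta = 0 the biased estimator has the lower quadratic risk;
# for |theta| large enough the inequality flips, so the advantage is only local.
```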

Coming back to physics, this seems to directly imply that one could use such a constructed estimator to "go under" $\frac{\hbar}{2}$ in a small region of the system's parameter space.

Important notes to avoid confusion:

  1. Notice that the estimator is biased, and the higher efficiency is only valid for a small part of the system, so the uncertainty principle definitely still holds. I'm just saying that sometimes we can obtain a more accurate estimate using a method different from the one responsible for the uncertainty principle.

  2. Further, the estimation is usually discussed in terms of many observations, while the uncertainty principle is usually introduced in terms of the observation of a single object.

Despite those complicated and strict constraints, one could still use this result in some simple cases. One particular case seems to be the simple harmonic oscillator.

My questions are thus:

  1. Does the uncertainty principle truly represent the "lower bound" on the information we can obtain about a pair of non-commuting operators?

  2. Is it possible to mathematically breach the uncertainty principle, even at the cost of accepting a bias?

  • Related: https://physics.stackexchange.com/q/24068/50583 and its linked questions – ACuriousMind Mar 26 '19 at 00:25
  • @ACuriousMind It's about the construction of an estimator... I'm not doubting $\sigma_x\sigma_p\geq \hbar/2$ – ShoutOutAndCalculate Mar 26 '19 at 00:28
  • "a biased estimator" would refer to hidden variable theories. imo within the quantum mechanics postulates the principle holds. It might be broken for hidden variable theories that are accessible to experimental verification. – anna v Mar 26 '19 at 04:43
  • @annav How does a biased estimator imply hidden variables? (I thought the fascinating part of estimation theory was that we can infer the measurement in a particular way. No soft measurement (experimental) or adjustment to the theory is needed. It's the same equations and theorems, just so that we can gain an advantage in estimating an expectation value under particular conditions, and therefore change how we understand the results. The postulates, assumptions, equations, ..., are unchanged.) – ShoutOutAndCalculate Mar 26 '19 at 06:33
  • Can you give a link? To construct a "biased estimator" – the word "construct" alone means that one affects specific variables. (The bias of an estimator, which I find by googling, is a different story. If you can construct a bias, it means you know more variables than the simple ones entering the average.) – anna v Mar 26 '19 at 06:38
  • @annav A classical example may be this one: https://www.stat.washington.edu/people/pdhoff/courses/581/LectureNotes/bayes.pdf from Bayes estimators. – ShoutOutAndCalculate Mar 26 '19 at 07:01

0 Answers