Background I:
Consider the familiar non-commuting operators $\hat p$ and $\hat x$.
The uncertainty principle tells us that $\sigma_p\sigma_x\geq \frac{\hbar}{2}$.
In standard quantum mechanics classes, the most common explanation is something like: "...this tells us that the more we know about $x$, the less certain we can be about $p$..."
However, I've always wondered about the exact nature of this constraint on the information one could ever obtain about a system. Further, one should notice that $\sigma_x\sigma_p$ is built from the variances of statistical distributions, and is therefore tied to one particular kind of inference about the system.
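To make concrete what "$\sigma_x\sigma_p$ comes from the variances of statistical distributions" means, here is a toy numerical sketch of my own (the grid size, width parameter, and units with $\hbar=1$ are arbitrary choices, not from any reference): it computes $\sigma_x$ and $\sigma_p$ for a Gaussian wave packet directly from the position- and momentum-space probability distributions, and the product lands on the Heisenberg bound $\hbar/2$, since a Gaussian is a minimum-uncertainty state.

```python
import numpy as np

hbar = 1.0
N = 2048
L = 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

a = 1.3  # Gaussian width parameter (arbitrary choice)
psi = np.exp(-x**2 / (2 * a**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize on the grid

# Position-space statistics: sigma_x is just the standard deviation of |psi|^2.
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum-space statistics via FFT; dp is the momentum-grid spacing.
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
dp = 2 * np.pi * hbar / (N * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)
prob_p = np.abs(phi)**2
mean_p = np.sum(p * prob_p) * dp
sigma_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(sigma_x * sigma_p)  # sits at the bound hbar/2 for a Gaussian
```

So the quantity bounded by $\hbar/2$ is literally a product of two standard deviations of probability distributions, which is what invites the comparison with estimation theory below.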
Background II:
I've been looking into approximation theory these days. It involves related concepts such as Fisher information and loss functions, where the "goodness" of an estimator is evaluated by its risk or normalized quadratic risk. (Concepts such as bias are standard in inferential statistics.)
Now here's the "mind-blowing" part of the theory. For a nonlinear loss function (such as quadratic loss), it's possible to construct a biased estimator that is locally (i.e., in a neighborhood of the true parameter) more efficient than a standard estimator such as the sample mean!
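A minimal Monte Carlo sketch of this claim (my own illustration; the shrinkage factor $c$ and sample sizes are arbitrary choices): shrinking the sample mean toward $0$, $\delta(X) = c\bar X$ with $c<1$, has risk $c^2/n + (1-c)^2\theta^2$ under quadratic loss, which beats the sample mean's risk $1/n$ whenever $\theta$ is close enough to $0$, at the price of bias and of doing worse far from $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10          # observations per experiment
reps = 200_000  # Monte Carlo repetitions
c = 0.5         # shrinkage factor toward 0 (arbitrary choice)

def mse(theta):
    """Quadratic risk of the sample mean vs. the shrunken (biased) estimator."""
    data = rng.normal(theta, 1.0, size=(reps, n))
    xbar = data.mean(axis=1)
    mse_mean = np.mean((xbar - theta)**2)        # ~ 1/n, independent of theta
    mse_shrunk = np.mean((c * xbar - theta)**2)  # ~ c^2/n + (1-c)^2 * theta^2
    return mse_mean, mse_shrunk

# Near theta = 0 the biased estimator wins; far from 0 it loses.
for theta in (0.0, 0.2, 2.0):
    m, s = mse(theta)
    print(f"theta={theta}: MSE(sample mean)={m:.4f}  MSE(shrunken)={s:.4f}")
```

This is the precise sense in which the improvement is only "local": the biased estimator dominates only in a neighborhood of the point it shrinks toward.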
Coming back to physics, this seems to imply that one could use such a newly constructed estimator to directly "go under" $\frac{\hbar}{2}$ in a small region of parameter space.
Important notice to avoid confusion:
Notice that the estimator is biased, and the improved efficiency is only valid in a small region, so the uncertainty principle definitely still holds. I'm only saying that sometimes we can obtain a more accurate estimate using a method different from the one underlying the uncertainty principle.
Further, approximation theory is usually discussed in terms of many observations, while the uncertainty principle is usually introduced for observations of a single system.
Despite these complicated and strict constraints, one could still apply the result in some simple cases. One particular case seems to be the simple harmonic oscillator.
My questions are thus:
Does the uncertainty principle truly represent the lower bound on the information we can obtain about a pair of non-commuting operators?
Is it possible to mathematically breach the uncertainty principle, even at the cost of accepting some bias?