I'm working in a lab, and the terminology used in error analysis is confusing me. Let's say I have a theory claiming the fine-structure constant is exactly $1/137$. My current reference tells me that the best experimental measurements yield $0.0072973525693$.
What, then, is the minimum precision I need for this experiment? My guess is that since the two quantities start to differ at the $10^{-6}$ level, that must be the precision I should strive for (quick arithmetic sketch below).
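For concreteness, this is the arithmetic behind that guess (just comparing the two numbers in Python, nothing more):

```python
alpha_theory = 1 / 137            # value claimed by the theory
alpha_measured = 0.0072973525693  # best experimental value from my reference

diff = alpha_theory - alpha_measured
print(f"theory:     {alpha_theory:.13f}")
print(f"measured:   {alpha_measured:.13f}")
print(f"difference: {diff:.2e}")  # ~1.9e-06, i.e. they diverge at the 1e-6 level
```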
Many reports I'm reading state a '$3\sigma$' bound (in context, something like "upper bound on the uncertainty allowable to distinguish by at least $3\sigma$"). I'm confused about what this means and how it would apply to the example above. If I recall my stats correctly, I would need to consider some normal distribution $N(\mu,\sigma^2)$ and see how the probability varies for a value to lie within $n\sigma$ of the mean.
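To spell out what I have in mind for that last point (assuming a standard normal and using `scipy` only for the CDF), is it something like this?

```python
from scipy.stats import norm

# Probability that a normally distributed value lies within n sigma of the mean
for n in (1, 2, 3):
    p = norm.cdf(n) - norm.cdf(-n)
    print(f"within {n} sigma: {p:.5f}")  # ~0.68269, 0.95450, 0.99730
```

If that recollection is right, how do I connect the $\sim 99.7\%$ figure for $3\sigma$ to the required measurement precision in my fine-structure-constant example?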