
I'm working in a lab, and the terminology in error analysis is confusing me. Let's say I have a theory that claims the fine structure constant is exactly $1/137$. My current reference tells me that the best experimental measurements yield $0.0072973525693$.

Then what is the minimum precision I need for this experiment? My guess is that since the two quantities start to differ at the $10^{-6}$ order, this must be the precision I should strive for.
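
Writing the numbers out, the difference I'm talking about is
$$\Delta = \left|\frac{1}{137} - 0.0072973525693\right| \approx \left|0.0072992701 - 0.0072973526\right| \approx 1.9\times 10^{-6}.$$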

Many reports I'm reading state a '$3\sigma$' bound (in context, I'm seeing "upper bound on the uncertainty allowable to distinguish by at least $3\sigma$"). I'm confused about what this means and how it would apply to the example I provided. If I recall my stats correctly, I think I would need to consider some normal distribution $N(\mu,\sigma^2)$ and see how the probability varies for a value to lie within $n\sigma$ of the mean.


2 Answers


Normally you would perform multiple measurements of the quantity of interest. Suppose you do $N$ measurements with the results $x_1, x_2, \ldots, x_N$; then, assuming that the distribution is normal, you can estimate its mean and standard deviation as $$m =\frac{1}{N}\sum_{k=1}^N x_k, \qquad s^2 =\frac{1}{N-1}\sum_{k=1}^N(x_k-m)^2.$$ I write $m, s$ instead of $\mu, \sigma$ in order to distinguish the estimated quantities from the actual (unknown) ones.
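
As a concrete illustration, here is a minimal Python sketch of these two estimators (the measurement values are made up for the example):

```python
import numpy as np

# Hypothetical repeated measurements of the same quantity (made-up values)
x = np.array([0.00729735, 0.00729741, 0.00729729, 0.00729738, 0.00729733])

m = x.mean()       # sample mean, the estimate m above
s = x.std(ddof=1)  # ddof=1 gives the 1/(N-1) normalization of s^2 above
print(f"m = {m:.11f}, s = {s:.2e}")
```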

A few comments:

  • There are statistical tests to check normality. It is usually a good assumption, but not always (a minimal check is sketched after this list).
  • You may want to refresh/read up on confidence intervals, the chi-squared test, hypothesis testing, etc., since you will probably run into these quickly.
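
For instance, a quick normality check might look like this (a sketch using SciPy's Shapiro-Wilk test on simulated data; all numbers are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated measurements: normal around a made-up central value
x = rng.normal(loc=0.00729735, scale=3e-7, size=100)

# Shapiro-Wilk test: a small p-value is evidence against normality
stat, p = stats.shapiro(x)
print(f"Shapiro-Wilk statistic = {stat:.3f}, p-value = {p:.3f}")
```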

There is a well-known chapter by the Particle Data Group that reviews the essential statistics concepts: J. Beringer et al. (Particle Data Group), Phys. Rev. D 86, 010001 (2012), http://pdg.lbl.gov.

Roger V.

An inherent property of a measurement is that it scatters. Hence, if we take a sample containing $N=100$ data points, each measuring the same thing (your fine structure constant), we will probably obtain 100 different values. Thus, the natural question to ask is: which one is the correct measurement, or is there a better way to obtain the "true" value? Generally speaking, the average value is a "good" estimator of the true value.

Skipping the details, we are (most often) allowed to say that the average value is normally distributed. If the standard deviation of a single measurement is $\sigma_\epsilon$ and we take $N$ independent measurements, then the "uncertainty" (standard deviation) of the mean value is $\sigma_\epsilon/\sqrt{N}$. Hence, by taking more and more measurements, we can reduce the uncertainty -- assuming that we do not include a systematic error in our data.
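
A small simulation (with a made-up $\sigma_\epsilon$) shows the $1/\sqrt{N}$ shrinkage of the uncertainty of the mean:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_eps = 3e-7  # assumed single-measurement standard deviation (made up)

for N in (10, 100, 1000):
    x = rng.normal(0.00729735, sigma_eps, size=N)
    sem = x.std(ddof=1) / np.sqrt(N)  # estimated uncertainty of the mean
    print(f"N = {N:5d}: sigma_eps/sqrt(N) = {sigma_eps/np.sqrt(N):.2e}, "
          f"estimated = {sem:.2e}")
```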

Finally, your question is about the confidence in your measured result. Suppose your measured average value is $\bar{\alpha}$ and you would like to know how confident you should be that the literature value $\alpha_{0}$ is incorrect. Well, you can ask the following question: suppose the literature value is correct, but my experimental uncertainty (standard deviation) is $\sigma_\epsilon$; what is the probability that I measure a value which is at least $k$ standard deviations smaller than the literature value? The answer is given by the so-called cumulative probability (the cumulative distribution function of the normal distribution), where the $x$ axis is measured in units of $\sigma = \sigma_\epsilon / \sqrt{N}$.
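
In code, this cumulative probability is just the normal CDF. Applied to the numbers in the question, a sketch could read (assuming SciPy is available):

```python
from scipy.stats import norm

# Probability of landing at least k standard deviations below the
# literature value, assuming the literature value is correct
for k in (1, 2, 3):
    print(f"k = {k}: P(x <= mu - k*sigma) = {norm.cdf(-k):.2e}")

# For the 1/137 hypothesis vs. the quoted measurement: to separate the
# two values by at least 3 sigma, the uncertainty of the mean must obey
# sigma <= Delta / 3
delta = abs(1 / 137 - 0.0072973525693)
print(f"Delta = {delta:.2e}, required sigma <= {delta / 3:.1e}")
```

In the numbers of the question, this means an uncertainty of the mean below roughly $6\times 10^{-7}$ is needed for a $3\sigma$ separation.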

Semoi