I remember hearing of cases where some initial measurement of a constant gives a value that is wildly inaccurate (either too high or too low). Subsequent experimental measurements, instead of disregarding the first and giving something close to the correct answer, tend to be within the margin of error of the first experiment but (somewhat) closer to the correct result. This process repeats itself until finally after many years the correct result is obtained.

The problem is I can't find any examples of this actually occurring. Is this one of those things that people repeat as lore but that isn't actually true?

I thought I had heard of it happening with measurements of the speed of light, but looking back at the historical data I can find, it doesn't appear to have happened that way.

  • See also https://physics.stackexchange.com/questions/92695/the-famous-drop-of-c and the links therein. – Nov 26 '18 at 01:00

1 Answer

Several examples are given in Jeng's "A selected history of expectation bias in physics". The example you may have heard about, which became particularly well known after Richard Feynman remarked upon it, is Millikan's measurement of the elementary charge.