Now that I have finished my undergraduate laboratory courses, I've been thinking a lot about the error theory we used for our measurements. I tried to find a mathematically rigorous book on error analysis, but without success. I was wondering if any of you knew of literature on this subject. The topics I am looking for go along the lines of:
- considering the result of a measurement as a probability measure;
- generalizing the notion of "best value", i.e. the most probable value, to measures that do not have a density (Radon–Nikodym derivative) associated with them, such as the Dirac measure;
- a concrete definition of uncertainty that coincides with the standard deviation in the case of a Gaussian distribution but isn't as cumbersome as a 68% confidence interval;
- given a set of measurement results $\mu_1,\dots,\mu_n$ and a function $f$ of those measurements, how to find the result of the measurement $f$ (note that solving this generalizes the problem of error propagation; I spell out what I mean right after this list).
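To make the last point concrete (this is my own guess, not something taken from a book): the rule we used in the lab, for independent measurements with uncertainties $\sigma_1,\dots,\sigma_n$, was the first-order propagation formula
$$\sigma_f^2 \approx \sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^{2}\sigma_i^2 .$$
If each result is instead a probability measure $\mu_i$ and the measurements are independent, I would expect the result of $f$ to be the pushforward of the product measure,
$$\nu := f_*(\mu_1\otimes\cdots\otimes\mu_n),\qquad \nu(A)=(\mu_1\otimes\cdots\otimes\mu_n)\bigl(f^{-1}(A)\bigr)\ \text{for every measurable set }A,$$
but I don't know a reference that develops error analysis from this point of view.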
If any of you can also give a quick explanation of any of these topics, it would be greatly appreciated.