Dear StackExchange community,
I hope this is the right place for my question. It is motivated by a photographic topic, but it is really about a purely mathematical issue.
In astrophotography, a common problem is the "trailing" of stars in pictures taken with exposure times that are too long: point-shaped stars are rendered as short curved lines. This, of course, is caused by the earth's rotation about its axis.
As a guideline for the longest possible exposure time (in the following: t), there is the "300 rule":
t = 300 / f
with the focal length f.
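Just to make the rule concrete, here is a minimal sketch of how I read it, assuming the usual convention (as far as I know) that f is given in millimetres and t comes out in seconds; the two focal lengths are only hypothetical example values:

```python
# "300 rule": longest exposure time t (in seconds) for a given focal length f (in mm).
def max_exposure_300(focal_length_mm):
    return 300.0 / focal_length_mm

# Hypothetical examples: a 20 mm wide-angle lens and a 300 mm telephoto lens.
print(max_exposure_300(20))   # 15.0 s
print(max_exposure_300(300))  # 1.0 s
```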
As far as I can tell, this rule is purely empirical. I wonder how a rule like this could be derived in a mathematically rigorous way. My first approach was the following:
The tangential speed of a point on the earth's surface depends on the angular velocity ω and the radius r, which are
ω = 2π / (86400 s)
r = r_earth * cos(latitude)
with the mean earth radius r_earth.
The critical distance that leads to visible trailing depends on the camera sensor's dimensions and resolution. If, for example, a maximum trail of one pixel horizontally is allowed (i.e. one star affects two neighbouring pixels), this critical distance is
s = w / n
with the sensor's width w and the number of pixels per line n.
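To get a feeling for the magnitudes involved, here is a short sketch that evaluates ω, r and s numerically. The latitude of 50° and the full-frame sensor (w = 36 mm, n = 6000 pixels per line) are only assumed example values, not my actual data:

```python
import math

omega = 2 * math.pi / 86400       # angular velocity of the earth in rad/s
r_earth = 6.371e6                 # mean earth radius in m
latitude = math.radians(50)       # assumed example latitude
r = r_earth * math.cos(latitude)  # radius of the circle traced at that latitude, in m

w = 36e-3                         # assumed sensor width in m (full frame, 36 mm)
n = 6000                          # assumed number of pixels per line
s = w / n                         # critical distance: one pixel pitch, in m

print(omega)      # ~7.27e-05 rad/s
print(omega * r)  # tangential speed at that latitude, ~300 m/s
print(s)          # ~6e-06 m, i.e. a pixel pitch of 6 µm
```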
All in all, this leads to the equation
s = ω * r * t ⇔ t = s / (ω * r)
Of course, this is a gross oversimplification, since I completely ignore how the lens, with its particular focal length, effectively manipulates r. Because of that, I expected this approach to yield exposure times that are far too long. Instead, with my sensor's and location's data I get t = 1.7 * 10⁻⁸ s. As you can see from the equation above, even if I accepted a disastrous trail of 1000 neighbouring pixels affected by the same one-pixel star, I would still not get a realistic exposure time.
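Putting the pieces together reproduces this absurdly small result. With the same assumed example values as above (latitude 50°, 36 mm sensor width, 6000 pixels per line) the computation looks like this, and it lands in the same 10⁻⁸ s range as my own numbers:

```python
import math

omega = 2 * math.pi / 86400               # rad/s
r = 6.371e6 * math.cos(math.radians(50))  # m, assumed example latitude of 50°
s = 36e-3 / 6000                          # m, assumed full-frame sensor, 6000 px per line

t = s / (omega * r)  # from s = omega * r * t
print(t)             # ~2e-08 s, the same absurd order of magnitude as my 1.7e-08 s
```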
So, where is the error? And how would I correctly integrate the lens's focal length into my approach?
Thank you for your suggestions!