So I’ve been reading Richard Feynman’s book, QED, and in it he simplifies the idea of how physicists calculate the probability of a photon reaching a certain detector. He lets the magnitude of a vector (an “arrow”) represent the square root of the probability of the event occurring, and its direction is given by an imaginary stopwatch whose hand spins while the photon travels along a given path; when the photon arrives, the hand stops and that sets the arrow’s direction. I just can’t fully wrap my head around why we use this “imaginary stopwatch” to determine the direction of our vectors. I understand that the cyclical nature of a clock captures the periodic behavior light exhibits (for example, changing the thickness of a thin reflecting layer of glass makes the reflection probability cycle between 0 and 16%). I also see that paths with similar travel times contribute far more to the final vector (their arrows point in roughly the same direction) than paths whose times vary a lot, which represents the idea that light effectively “travels” the path of least time.
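To check my own understanding, here is a minimal numerical sketch of how I picture the arrow-summing for that partial-reflection example. The 0.2 amplitude per surface and the half-turn flip of the front-surface arrow are how I read the book’s description; the specific wavelength, refractive index, and thickness values are just my own choices for illustration:

```python
import numpy as np

# Sketch of the "stopwatch" arrows for partial reflection off a thin
# layer of glass, as I understand it from QED. Assumptions: each surface
# reflects with amplitude 0.2 (4% probability), the front-surface arrow
# is flipped by half a turn, and the back-surface arrow is rotated by
# the extra travel through the glass.

wavelength = 500e-9   # metres (green light, arbitrary choice)
n_glass = 1.5         # refractive index of the layer (assumed)

for thickness in np.linspace(0, 400e-9, 9):
    # extra optical path for the ray reflecting off the back surface
    extra_path = 2 * n_glass * thickness
    # the stopwatch hand turns once per wavelength of travel
    phase = 2 * np.pi * extra_path / wavelength

    front_arrow = 0.2 * np.exp(1j * np.pi)   # half-turn flip at the front surface
    back_arrow = 0.2 * np.exp(1j * phase)    # rotated by the extra travel time

    amplitude = front_arrow + back_arrow     # add the arrows head-to-tail
    probability = abs(amplitude) ** 2        # square of the final arrow's length

    print(f"thickness {thickness*1e9:6.1f} nm -> reflection {probability:.3f}")
```

Running this, the reflection probability does cycle between 0 and about 0.16 as the thickness changes, which is the behavior I described above.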
So I get that this math works, but not exactly why it works. Can someone give a more rigorous/complete explanation of why we use this imaginary clock? In addition, when we sum vectors, why should the difference in the “clocks” (the difference in their directions, i.e. their arguments) influence the magnitude of the final vector? In other words, if two vectors each have a magnitude of 0.10 and you add them, the final vector’s length can be anywhere from 0 to 0.20 depending on the difference in their directions, so the resulting probability ranges from 0 to 0.04 rather than simply being 0.01 + 0.01 = 0.02, which is counterintuitive.
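To make the arithmetic of my example explicit (using the book’s rule that the probability is the square of the final arrow’s length, and writing the two arrows with some angles $\theta_1$ and $\theta_2$ of my own choosing):

$$\left|0.10\,e^{i\theta_1} + 0.10\,e^{i\theta_2}\right|^2 = 0.01 + 0.01 + 2(0.10)(0.10)\cos(\theta_1 - \theta_2) = 0.02 + 0.02\cos(\theta_1 - \theta_2),$$

which runs from 0 when the arrows point in opposite directions, through 0.02 when they are at right angles (the “classical” 1% + 1%), up to 0.04 when they are aligned. It is this dependence of the probability on the angle difference that I would like to understand more deeply.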