
There are a number of questions on the internet and on this site asking how the LIGO interferometer measurement works given that a gravitational wave stretches both the length of the interferometer arms and the wavelength of light within the arms. If the gravitational wave changes the length of the interferometer arm, but the ticks on the ruler (the spatial periods of the light) also expand, then you wouldn't notice a difference in length due to the gravitational wave. This is a well-explored paradox which often comes up in discussions of gravitational wave measurement.

This question is about me not understanding the apparent resolution to this paradox. Let me lay the groundwork, and someone can let me know where I go astray.

The resolution of the apparent paradox is said to be that the length of the arms is not what is being measured; rather, the time the light spends in each arm is being measured.

Imagine a Michelson interferometer with two arms of length $L_1=L_2=L_0$. A short pulse of light split into the two arms at $t_0$ will take time $T_{1,2} = \frac{2L_{1,2}}{c}$ to traverse each arm.

If $L_1 = L_2$ then $T_1 = T_2$ and the two pulses of light will arrive at the detector at the same time.

If a gravitational wave comes by then we have

\begin{align} L_1' &= (1+h)L_1\\ L_2' &= (1-h)L_2\\ \end{align}

In this case

\begin{align} T_1' &= \frac{2L_0}{c}(1+h)\\ T_2' &= \frac{2L_0}{c}(1-h)\\ \end{align}

So we see that $\Delta T = T_1' - T_2' = \frac{4L_0 h}{c}$. There will be a time delay between the two pulses.

Thus it is clearly possible to measure the presence of a gravitational wave using pulses of light sent down the two arms.
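To put numbers on the pulse-timing picture, here is a minimal sketch assuming representative values (4 km arms, a strain of order $10^{-21}$); it just evaluates $\Delta T = 4L_0 h/c$:

```python
# Pulse-timing picture: differential round-trip delay between the two arms.
c = 3.0e8    # speed of light, m/s (rounded)
L0 = 4.0e3   # arm length, m (LIGO-scale)
h = 1.0e-21  # representative strain amplitude

# The direct difference T1 - T2 underflows in float64 (1 + 1e-21 == 1.0),
# so evaluate the analytic expression dT = 4*L0*h/c instead.
dT = 4 * L0 * h / c
print(f"time delay: {dT:.2e} s")  # ~5.3e-26 s
```

The delay is absurdly small, which is why the phase of a continuous beam, not pulse arrival times, is what LIGO actually reads out.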

What I do not understand is how this picture still works when we change from pulses of light to continuous beams of light. The argument is roughly that the time spent in a given interferometer arm translates into a phase collected in a given interferometer arm. Since the time spent in each arm is slightly different the phase collected in each arm is different, this phase difference is then measured at the detector.

I understand that but here is my hangup. I would think we have something like

$$ \phi_{1,2} = \omega_{1,2} T_{1,2} $$

That is, the phase collected in a particular arm is the frequency of light in that arm multiplied by the time spent in that arm. If $\omega_1 = \omega_2 = \omega_0$ then it is clear that because the time spent in each arm is different that a measurable relative phase can appear.

However, I have somehow convinced myself that the frequencies of light in each arm change in such a way as to cancel the effect. In some resolutions to the original paradox it is stated that the wavelength of light is indeed stretched by the same factor as the total arm length. That is:

$$\lambda_1' = \lambda_1(1+h)$$

We know the speed of light is constant so

$$ \omega_{1,2} = 2\pi \frac{c}{\lambda_{1,2}}\\ \omega'_{1,2} = 2\pi \frac{c}{\lambda_0} \frac{1}{1\pm h} $$

Since

$$T'_{1,2} = \frac{2L_0}{c} (1\pm h)$$

We have

$$ \phi'_{1,2} = \omega'_{1,2} T'_{1,2} = \frac{2L_0}{c}(1\pm h)\,\frac{2\pi c}{\lambda_0}\frac{1}{1\pm h} = 2\times 2\pi \frac{L_0}{\lambda_0} $$

That is, there is no differential phase between the two paths and no effect is detected. Basically the original paradox asks how the length change is measured by light if the length ruler (spatial period of light) changes in the same way. The apparent resolution is that length is not measured, but rather time. But it appears to me that the time ruler (temporal periods of light) changes in the exact compensating way for the effect to also vanish in the time domain.
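The cancellation I am describing can be checked numerically. A minimal sketch (with a wildly exaggerated strain so the $(1\pm h)$ factors survive floating point) shows that if both the wavelength and the round-trip time pick up the same factor while $c$ is held fixed, the accumulated phases are identical:

```python
import math

c = 3.0e8
L0 = 4.0e3
lam0 = 1.064e-6  # laser wavelength, m
h = 1.0e-3       # exaggerated strain for numerical visibility

# Naive picture: wavelength AND round-trip time both scale with (1 +/- h),
# while the speed of light stays c in both arms.
def phase(sign):
    lam = lam0 * (1 + sign * h)      # stretched/squeezed wavelength
    omega = 2 * math.pi * c / lam    # frequency if c is unchanged
    T = 2 * L0 * (1 + sign * h) / c  # stretched/squeezed round trip
    return omega * T

phi1, phi2 = phase(+1), phase(-1)
print(phi1 - phi2)  # ~0: the factors cancel to machine precision
```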

Where am I going wrong?

edit: Response to possible duplicate identification: While the question and answers in LIGO flawed by the identical expansion of laser wavelength and arms in presence of a gravitational wave? are very related to my question they do not answer the question I am asking here. The answer by Kyle Kanos indicates that $\phi \propto \omega_0 (L_1-L_2) h$ but there is no discussion of the possibility that the frequency of light is different in the two different arms. It is assumed to be the same.

More thoughts I've had: I've been looking at the Kip Thorne lecture notes: http://www.pmaweb.caltech.edu/Courses/ph136/yr2012/1227.1.K.pdf

I believe my confusion is related to the transverse traceless (TT) gauge vs local Lorentz (LL) gauge discussion. It seems that in the TT gauge the wavelength of light is changed as discussed above but the frequency is not. To my extreme surprise, then, it looks like the speed of light is actually different in the two arms. If what I am saying here is correct, the answer to my question would be that I've gone wrong in assuming the speed of light is the same in both arms. In the LL gauge it looks like the easiest way to think about things is that the lengths of the two arms change but neither the wavelength nor the frequency of light changes. This explanation makes the most sense to me and is often how I hear LIGO described. It seems nice to not have to worry about the light itself being affected by the gravitational wave...

edit2: See this answer for more on the TT vs LL gauge. This may be the answer to my question if my description of the two gauges above is correct: in TT the speed of light and wavelength (but not frequency) in the two arms change while the length of the two arms is fixed, whereas in LL the speed of light, wavelength, and frequency are all constant while the lengths of the two arms vary.

Jagerber48

3 Answers


The frequency of the light is (almost) unchanged in each arm.

The thought experiment is to imagine a gravitational wave as a step function that abruptly changes the length of the arms (or more precisely, the distance between the inertial test masses) and the wavelength of light that is already in the instrument.

However, light that enters the instrument after the step change will have an unchanged frequency and wavelength, since the gravitational wave has no effect on processes occurring at the atomic level in the Nd:YAG laser.

So long as the length of time light spends in the arms is less than the time it takes the arms to change their length significantly, then assuming a fixed frequency is correct.

For a simple interferometer this means that $$\frac{2L}{c} \ll \frac{\lambda_{GW}}{c}$$ i.e. that the gravitational wave wavelength, $\lambda_{GW}$ is much greater than the path length travelled in the interferometer.
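As a quick sanity check on this condition with assumed LIGO-scale numbers: a single 4 km round trip takes tens of microseconds, while a gravitational wave at 100 Hz has a 10 ms period, so the inequality is comfortably satisfied:

```python
c = 3.0e8     # speed of light, m/s
L = 4.0e3     # arm length, m
f_gw = 100.0  # assumed GW frequency, Hz

t_round_trip = 2 * L / c  # single round trip, ~2.7e-5 s
t_gw = 1.0 / f_gw         # GW period, 0.01 s

print(t_round_trip / t_gw)  # ~2.7e-3: the round trip is fast compared to the GW
```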

This must be modified of course if a Fabry-Perot resonator is used, which effectively means the light travels many times back and forth (several hundred times for aLIGO) resulting in a total pathlength of around 1500 km. This means that the sensitivity of the interferometer does begin to decline at gravitational wave frequencies above about 200 Hz.
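The rolloff scale quoted above follows from the light storage time. A rough estimate, using the ~1500 km effective path mentioned in this answer:

```python
c = 3.0e8     # speed of light, m/s
L_eff = 1.5e6 # effective path with the Fabry-Perot cavities, m (~1500 km)

t_storage = L_eff / c        # light storage time, ~5 ms
f_rolloff = 1.0 / t_storage  # ~200 Hz: above this the response declines
print(f_rolloff)
```

This is only an order-of-magnitude estimate; the actual aLIGO response function depends on the detailed cavity parameters.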

ProfRob
  • Let me confirm. Suppose the step is at $t=0$ and call the round trip time $T = \frac{2L}{c}$. It sounds like you are saying for $0<t\ll T$ we have that $\lambda$, $\omega$ are both changed in the two arms. You also have that $L$ is different for each arm and that $c$ is the usual $c$. This means for this time there should be no observable phase shift. Is that correct? Then, for $t\gg T$ we have that $\lambda$ and $\omega$ go back to their usual values so that $c$ is still the same usual value in both arms but since $L$ is different for the two arms there is an observable phase shift. correct? – Jagerber48 Aug 22 '19 at 15:52
  • I am a bit confused about why light that is already in the instrument is stretched while light that is not in the instrument is not stretched. I'm imagining the GW to be a huge plane wave (bigger than Earth) so that both arms are entirely in the GW, as well as all of the instrumentation (including the laser). If light in the interferometer is stretched to $\lambda'$ then I would think new light coming out of the laser would also be stretched... apparently not? – Jagerber48 Aug 22 '19 at 15:54
  • Also, by "in the instrument" certainly you can't mean in the interferometer arms. The input light to the beamsplitter is collinear with the light in one of the arms, so if the light in that arm is stretched, the input light must be stretched as well. Is this correct? So it sounds like you are saying all light "out of the laser" is stretched to $\lambda'$ and $\omega'$ but "new light" that comes out of the laser comes out at $\lambda$ and $\omega$. Is this correct? – Jagerber48 Aug 22 '19 at 15:55
  • and to be entirely sure are you saying that YES, for a period of time the frequency of light in the two interferometer arms does change and YES the speed of light is the same in the two arms? – Jagerber48 Aug 22 '19 at 15:59
  • A gravitational wave does not "stretch" atoms in the laser! Light that is not in the instrument does not yet exist. When it is emitted it has the usual lab frame frequency and wavelength. The distance between laser and beam splitter is tiny compared with the path length in the interferometer arms. – ProfRob Aug 22 '19 at 19:55
  • yes that roughly makes sense to me. Can you confirm that for your model 1) that $c$ is constant in both arms for all time 2) that for $t>0$ that $L_1$ and $L_2$ are unequal for all time 3) for $0<t<T$ that $\lambda_1 \neq \lambda_2$ and $\omega_1 \neq \omega_2$ and there is no measurable phase shift and 4) for $t>T$ that $\lambda_1=\lambda_2$ and $\omega_1=\omega_2$ and there is a measurable phase shift. – Jagerber48 Aug 23 '19 at 00:15
  • @jgerber Almost all of that seems correct in the case of a step function GW and in the "lab frame". Except (3), the phase shift would build from zero at $t=0$ to a maximum at $t=T$. – ProfRob Aug 23 '19 at 05:38
  • I would think there is no observable phase shift until $t=T$. The reason is that all of the "original light" that was in the interferometer at $t=0$, which got stretched to $\lambda'_{1,2}$ and $\omega'_{1,2}$, cannot give a phase shift by arguments given in the question. Only when the "new light" gets to the detector can a phase shift be observed. Frankly I find this behavior strange and I'm not sure if it is correct. If $T\ll T_{GW}$ where $T_{GW}$ is the period of the gravitational wave then this initial transient time $T$ can be ignored. – Jagerber48 Aug 23 '19 at 16:29

For a continuous wave of laser light, a falling edge of the sine wave (of the electric field amplitude) from the laser is split into two falling edges (at the splitter mirror) which then go down the two arms. The falling edge of the sine wave behaves like the leading edge of your pulse for the time delay argument.

The second part of your question in which a constant c and oppositely strained $\lambda_1$, $\lambda_2$ yield $\omega_1 \neq \omega_2$ has also puzzled me, but for a different reason. Whereas you conclude it eliminates any GW phase shift between the two paths, I can not understand how the passive splitter mirror in the presence of a constant GW strain turns one input frequency of laser light into two different output frequencies! I asked this in a Physics Stack question: How Does LIGO’s Splitter Mirror Cause Two Different Frequencies When a GW is Present?, but have received no answers.

Because of the splitter mirror, I conclude that in the presence of the GW, $\omega_1 = \omega_2$. Then either:

1) c is the same along both arms and therefore $\lambda_1=\lambda_2$. This contradicts the standard argument on the LIGO website and in LIGO talks which say $\lambda_1 \ne\lambda_2$ because $\lambda$ strains like the arm lengths (I guess by common sense since I don't know a GR argument for it). However, even if $\lambda_1=\lambda_2$, the laser light leading edge will take different times along the two arms, and the interference pattern will change … therefore LIGO can detect GWs.

OR

2) $\lambda_1 \ne\lambda_2$ and therefore $c_1 \ne c_2$. It is strange having two speeds of light since we are used to saying "the speed of light is the same in all reference frames … that are related by a Lorentz transformation that leaves the Minkowski metric unchanged". The strains caused by GWs and Schwarzschild masses change the Minkowski metric (and therefore c) as evidenced by the Shapiro delay, where a far away observer sees light slow down as it passes near the Sun. If $c_1 \ne c_2$, then by an argument slightly similar to yours, one can show that light goes faster along the lengthened arm and slower along the shortened arm such that the laser light leading edge will take the same time along both arms, and the interference pattern will not change … therefore LIGO can not detect GWs.
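The cancellation asserted in option (2) can be sketched numerically: if one assumes the speed of light in each arm scales by the same $(1\pm h)$ factor as that arm's length (again with an exaggerated strain so the factors survive floating point), the round-trip times come out equal:

```python
c = 3.0e8
L0 = 4.0e3
h = 1.0e-3  # exaggerated strain for numerical visibility

# Assumed scaling: arm length and light speed both pick up (1 +/- h).
T1 = 2 * L0 * (1 + h) / (c * (1 + h))  # lengthened arm, faster light
T2 = 2 * L0 * (1 - h) / (c * (1 - h))  # shortened arm, slower light
print(T1 - T2)  # ~0 up to rounding: no differential delay
```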

Since LIGO appears to have detected GWs, argument (1) seems to be correct, and $\lambda$ does not get strained by a GW. The GW leaves $c$, $\omega$, and $\lambda$ the same along both arms, and just the lengths of the arms change. Because the lengths of the arms have changed, I have been thinking in the local Lorentz gauge.

Gary Godfrey
  • yes, this is very much along the lines I am thinking. Regarding 1) this makes the most sense to me. We say that $\lambda_1 = \lambda_2$, $\omega_1=\omega_2$, $c_1=c_2$ and $L_1 \neq L_2$ so we get a phase shift. Most LIGO presentations I have seen seem to take this viewpoint I think so it's not necessarily contradictory with everything LIGO say. – Jagerber48 Aug 22 '19 at 15:59
  • Regarding 2) this sounds like the TT gauge in the Thorne document. In the reasoning where $\lambda_1 \neq \lambda_2$, $\omega_1 = \omega_2$ we get that $c_1 \neq c_2$. But in the TT gauge apparently $L_1 = L_2$ so an effect is still observable. I've also wondered about how the beamsplitter could allow $\omega_1 \neq \omega_2$. It seems strange, but the spacetime metric is different in $x$ and $y$ so I don't know. I also prefer explanation 1) as it seems straightforward and intuitive. No worries about light being stretched, just the space between mirrors. – Jagerber48 Aug 22 '19 at 16:01

Whereas astrophysical electromagnetic waves are typically much smaller than their sources, ranging from a few kilometres down to sub-nuclear wavelengths, gravitational waves are larger than their sources, with wavelengths starting at a few kilometres and ranging up to the size of the Universe.

This is to make clear that there is a large difference in wavelength between electromagnetic and gravitational waves.

The LIGO lasers

The laser beam that enters LIGO's interferometers begins inside a laser diode, which uses electricity to generate a 4 watt (W) 808 nm beam of near-infrared laser light

By the time the gravitational wave has made one cycle, of kilometer wavelength (the lowest estimate above) for example, the laser has emitted zillions of photons that make up a train of at least a million peaks and troughs. As far as an individual wave peak-to-trough goes, gravity does not change for two pulses arriving at the same time, so the frequency of the laser does not change during the time measurements. As far as the photons $h\nu$ go, they see a steady gravitational field to within at least $10^{-6}$, the exponent much more negative if the size of the sources is taken into account.
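The "zillions of photons" claim is easy to quantify. A sketch assuming a 1064 nm laser (the Nd:YAG output wavelength; the 808 nm figure quoted above is the pump diode) and a 100 Hz gravitational wave:

```python
c = 3.0e8       # speed of light, m/s
lam = 1.064e-6  # assumed laser wavelength, m (Nd:YAG output)
f_gw = 100.0    # assumed GW frequency, Hz

t_optical = lam / c  # one optical cycle, ~3.5e-15 s
cycles_per_gw_period = (1.0 / f_gw) / t_optical
print(f"{cycles_per_gw_period:.1e}")  # ~2.8e12 optical cycles per GW cycle
```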

At least that is how I understand it intuitively, and trust the calculations made by the LIGO team for the exact representation.

anna v
  • Yes, you are certainly correct that the gravitational strain is essentially constant between successive (or nearby peaks of laser light). Therefore these peaks arrive with the same freq by any argument. But the freq change in the Op's question is between the zero of the GW sine wave and its peak .01 sec later when the strain is maximal (and effectively constant) on the flat top of the GW strain sin wave. Hence, the discussion is about whether the GW might cause different freq laser light in the two interferometer arms while on the GW flat top while no difference when GW strain=0. – Gary Godfrey Aug 23 '19 at 19:12
  • @GaryGodfrey my argument is that 0.01 seconds later is aeons for the laser pulses, but the gravity wave amplitude has changed the gravity field that the laser pulse sees only infinitesimally, within measurement errors, not measurable – anna v Aug 24 '19 at 03:01