I was wondering if it would be possible to shorten the distance between detectors when measuring the speed of neutrinos to, say, 7 m rather than the current ~700 km. That way the distance traveled would be known directly, something similar to the coincidence measurements we now do when studying positronium. Is there a limit to the technology used for timing of events, or is it only a matter of technical development, with room for achieving greater accuracy than is currently possible?

- Notice that the OPERA number is a fractional difference of a few times $10^{-5}$, so you are asking about differences on the order of $10^{-5} \times (7\text{ m}/3 \times 10^8\text{ m/s})\approx 2 \times 10^{-13}\text{ s}$. – dmckee --- ex-moderator kitten Sep 28 '11 at 17:02
- I know, but do you think that at present it's only a technological difficulty to reach that accuracy, or is its unachievability intrinsic to the phenomenon? – ganzewoort Sep 28 '11 at 17:24
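For concreteness, here is a minimal sketch of that scaling. The fractional offset of $2.5\times10^{-5}$ and the 730 km OPERA-like baseline are assumed, illustration-only numbers:

```python
# Minimal sketch: scale a claimed fractional speed offset to different baselines.
# The 2.5e-5 offset and 730 km baseline are assumed purely for illustration.
c = 3.0e8        # speed of light, m/s
frac = 2.5e-5    # "a few times 1e-5" fractional difference (assumed)

for baseline in (730e3, 7.0):        # OPERA-like baseline vs. the proposed 7 m
    tof = baseline / c               # time of flight at ~c
    dt = frac * tof                  # timing difference implied by the offset
    print(f"baseline {baseline:>9.1f} m: time of flight {tof:.2e} s, delta-t {dt:.1e} s")
```

Over 7 m the implied difference is a few $\times 10^{-13}$ s, far below the ns-scale timing discussed in the answers below.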
- What's the fastest oscilloscope on the market? – JEB Jun 09 '18 at 23:07
2 Answers
This is not really an answer to the question in the title, but a description of why the proposed short-baseline neutrino speed measurement is exceedingly difficult. It relates to the question in the sense that it explains the limits of the precision with which $\delta t$ can be extracted in a neutrino experiment, without even touching on the kind of ultra-high-precision timing work that NIST and related bodies like to do.
Getting very high timing precision is possible in many instances, but neutrinos pose a few special challenges.
Even at accelerator beam energies (multiple GeV, as in the OPERA beam) the cross-section for neutrino interactions is tiny. So to get any kind of rate at all you do two things:
- Make the detector big. Tens of thousands of tons for some distant detectors, and a few tons (or at least hundreds of kilograms) for near detectors. A massive detector has non-trivial size, so you have to correct for the time over which signals develop, are detected, and get converted to latch-able electronic signals. You'll note that in the case of the OPERA paper these corrections were of order a few to tens of ns each. Each of these corrections carries with it a systematic error.
- The beams have to be very intense. Ideally you would generate a single bunch of progenitor particles (protons in the case of OPERA), bang them on target over a time-scale less than your anticipated $\delta t$, and then wait for a time much larger than $\delta t$ before the next bunch arrived. But due to the limits of accelerator technology and the tiny neutrino cross-section this is a losing game. In the case of OPERA they pour protons onto the target in small bunches for 10 microseconds at a time. There is no unique way to identify the time of origin associated with each neutrino event in the far detector, hence the statistical method they employed originally (this is one of my favorite places to suspect the OPERA procedure, though they made a real try at handling it); they have now used a lower-statistics, short-bunch approach which largely removes this as a possible source of error.
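For illustration only, here is a toy version of that statistical extraction. It is not the actual OPERA likelihood; the spill shape, event count, and jitter are invented numbers. The point is that no single event can be time-tagged, but the common offset between the proton waveform and the ensemble of arrival times can still be fit:

```python
# Toy sketch of extracting a common time offset from a ~10 microsecond spill.
# Not the actual OPERA analysis; spill shape, statistics, and jitter are invented.
import numpy as np

rng = np.random.default_rng(1)

spill = 10e-6           # spill duration, s
true_offset = 60e-9     # hypothetical offset to recover, s
n_events = 20000        # accumulated far-detector events (assumed)

def waveform(t):
    """Unnormalized proton intensity vs. time within the spill (toy structure)."""
    inside = (t >= 0) & (t <= spill)
    return np.where(inside, 1.0 + 0.5 * np.sin(2 * np.pi * t / 1e-6), 0.0)

# Draw neutrino production times from the waveform by rejection sampling.
prod = np.empty(0)
while prod.size < n_events:
    t = rng.uniform(0, spill, n_events)
    keep = rng.uniform(0, 1.5, n_events) < waveform(t)
    prod = np.concatenate([prod, t[keep]])
prod = prod[:n_events]

# Arrival time = production time + common offset + per-event detector jitter.
arrival = prod + true_offset + rng.normal(0, 5e-9, n_events)

def neg_log_like(offset):
    w = waveform(arrival - offset)
    return -np.sum(np.log(np.where(w > 0, w, 1e-12)))

offsets = np.linspace(0, 120e-9, 1201)
best = offsets[np.argmin([neg_log_like(o) for o in offsets])]
print(f"fitted offset ~ {best*1e9:.1f} ns (true value {true_offset*1e9:.0f} ns)")
```

With these made-up numbers the offset comes back to within a few ns, which is roughly the scale at which the statistical method stops helping and systematics take over.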
Neutrino beams are not well focused.
You could be thinking that with a nearby detector you could beat both of these problems at once by building a very small detector. You run into two problems:
- By that point the beam is already meters across, so a really small detector exacerbates the small cross-section problem.
- You have to be far enough away to lose the muons, as a non-trivial number of these are generated, and even though you can probably ID them and veto around their arrival times (and it has to be a moderately long veto because of the risk of spallation products), you have to go far enough away that the deadtime doesn't kill you. You could use a big sweep magnet after the decay-line beam stop. That sounds promising, but then you lose your best tool for determining when you might have spallation products (which you have to veto or subtract), so you need to go far enough downstream to ditch most of them.
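As a rough feel for the deadtime issue, a back-of-the-envelope sketch (the muon rates and veto window are invented numbers, not anything measured):

```python
# Back-of-the-envelope deadtime from vetoing a window around each muon arrival.
# Rates and window length are invented purely for illustration.
import math

veto_window = 100e-6   # veto per muon, s (long, to cover possible spallation products)

for muon_rate in (1e2, 1e3, 1e4):   # muons per second reaching the detector (assumed)
    # For Poisson-distributed arrivals the live fraction is exp(-rate * window).
    dead_fraction = 1.0 - math.exp(-muon_rate * veto_window)
    print(f"muon rate {muon_rate:>7.0f}/s -> dead fraction {dead_fraction:.1%}")
```

Once the product of muon rate and veto window approaches one, the detector is mostly dead, which is why you are pushed downstream even before timing enters the picture.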
The start point is not well defined on short distance scales.
Neutrino beams are generated by the decay of high-energy particles in flight. Because the timing of that decay is random on an exponential distribution, you don't know exactly where the neutrinos started. You'll have to measure from some well-known place and correct for the time of flight of these heavier particles in the horn. Now, while we're pretty confident of being able to do this at the few-ns scale, it is not going to be possible to do enormously better than that.
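A toy sketch of that start-point ambiguity, with an assumed parent energy and decay-region length (illustrative values, not the actual CNGS parameters):

```python
# Toy sketch of the start-point ambiguity: parent pions decay exponentially along
# the decay region, so the neutrino "birth point" is spread over hundreds of metres.
# Parent energy and decay-region length are assumed values for illustration only.
import numpy as np

rng = np.random.default_rng(2)

c      = 2.998e8    # m/s
m_pi   = 0.1396     # GeV, charged-pion mass
c_tau  = 7.80       # m, pion c*tau
E_pi   = 40.0       # GeV, assumed parent energy
L_pipe = 1000.0     # m, assumed decay-region length

gamma = E_pi / m_pi
beta  = np.sqrt(1.0 - 1.0 / gamma**2)
lam   = gamma * beta * c_tau               # lab-frame decay length, m

# Exponential decay points, keeping only parents that decay inside the pipe.
z_all = rng.exponential(lam, 200_000)
z = z_all[z_all < L_pipe]

# Extra flight time relative to pretending the neutrino was born at the target:
# the parent covers the first z metres at beta*c rather than c.
dt = z / (beta * c) - z / c

print(f"lab decay length ~ {lam:.0f} m; fraction decaying in pipe ~ {z.size/z_all.size:.2f}")
print(f"birth point: mean {z.mean():.0f} m, spread (std) {z.std():.0f} m")
print(f"parent time-of-flight correction: mean {dt.mean()*1e12:.1f} ps, spread {dt.std()*1e12:.1f} ps")
```

The birth point is smeared over hundreds of metres, which already dwarfs a 7 m baseline even before any timing electronics are considered.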
By the way, if you are thinking that OPERA seems under-optimized for this measurement, that's because it is. This is a parasitic measurement that simply takes advantage of a machine designed to measure neutrino mixing parameters in the $\nu_\mu \to \nu_\tau$ appearance channel, and the need to unambiguously identify $\nu_\tau$ charged-current events (by unambiguously observing the $\tau$-lepton) drives the design of the detector.
To answer the question in the title, $10^{-15}$ seconds can be measured routinely with optical combs (see here for a review). According to Wikipedia, processes on the scale of tenths of a femtosecond can also be measured.
EDIT: As Georg pointed out, a frequency comb would not be useful for measuring time-of-flight of particles between two distant locations (and possibly not even for short distances? I don't know).

- As ultrafast lasers are extending into the extreme-UV and x-ray, single-cycle attosecond pulses are being produced in many labs. Also, combs can be used to lock/know the carrier-envelope phase, which could dramatically increase the coherence length of a laser. Short times are measured by field-field correlation in an interferometer with one arm part of a delay line. A picosecond corresponds to 300 microns, which is enormous. The accuracy and resolution of modern stages is pretty mind-boggling. – Aug 10 '13 at 04:00
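A quick sanity check on those numbers, converting delay-line path differences to time (the path values are arbitrary examples):

```python
# Sanity check: convert delay-line path-length differences to optical delay.
# Path values are arbitrary examples; a retro-reflecting stage would give twice
# the path change per stage move, and that geometry factor is ignored here.
c = 2.998e8   # m/s

for path in (300e-6, 1e-6, 100e-9):    # path difference in metres
    delay = path / c
    print(f"path difference {path*1e6:>8.3f} um -> delay {delay*1e15:8.1f} fs")
```

A 300 micron path difference is indeed about a picosecond, and a 100 nm stage step corresponds to sub-femtosecond delay resolution.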