6

The maximum obtainable angular resolution of an optical system with a given aperture is well known, but it seems to me that this isn't a real theoretical limit. The assumption is that you are going to take a picture using the system and that no further processing will take place. However, given the known point spread function of the optical system, you could perform a deconvolution calculation. The final resolution of the processed image should then be limited only by the noise. So what seems to matter is the observation time (the longer you integrate the signal, the better the signal-to-noise ratio will be).

So, what is the correct theoretical limit on resolution, in terms of the brightness of the two sources to be resolved, the aperture, and the observation time (assuming the only noise comes from fluctuations in the finite number of photons arriving from the sources)?
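To make the shot-noise argument concrete, here is a minimal Python sketch (the photon rate and observation times are made-up numbers): for Poisson-distributed photon counts the signal-to-noise ratio grows as $\sqrt{t}$, so integrating 100 times longer buys a factor of 10 in SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

rate = 1000.0                      # photons per second from the source (made-up number)
for t in (1, 100, 10_000):         # observation times in seconds
    counts = rng.poisson(rate * t, size=100_000)   # repeated exposures of length t
    snr = counts.mean() / counts.std()
    print(f"t = {t:>6d} s:  measured SNR = {snr:7.1f},  sqrt(N) = {np.sqrt(rate * t):7.1f}")
```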

Count Iblis
  • 10,114

2 Answers

5

If you assume a perfect, circular aperture, there are some spatial frequencies (corresponding to nulls in the aperture response function) that cannot be recovered by deconvolution: if you think of deconvolution as division by the Fourier transform of the aperture response, you encounter division by zero at those frequencies. This is a fundamental limit first described by Abbe, and for a long time it was considered an unbreakable "rule".
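Here is a toy one-dimensional Python illustration of that divide-by-zero problem (a 16-sample boxcar stands in for the aperture response, and all sizes and noise levels are arbitrary): the transfer function has exact nulls, and naive inverse filtering turns even a tiny amount of detection noise into garbage.

```python
import numpy as np

n = 1024
rng = np.random.default_rng(0)

# 1-D stand-in for the point spread function: a 16-sample boxcar
psf = np.zeros(n)
psf[:16] = 1.0 / 16.0

H = np.fft.rfft(psf)                        # transfer function of the blur
print("smallest |H|:", np.abs(H).min())     # effectively zero at the sinc nulls

# Blur a random "scene" and add a tiny amount of detection noise
scene = rng.random(n)
image = np.fft.irfft(np.fft.rfft(scene) * H, n) + 1e-9 * rng.standard_normal(n)

# Naive inverse filter: divide by H; the nulls blow the noise up
recovered = np.fft.irfft(np.fft.rfft(image) / H, n)
print("worst-case recovery error:", np.abs(recovered - scene).max())
```

A regularized deconvolution (a Wiener filter, say) tames the blow-up, but only by giving up on the information at and near the nulls.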

In reality, resolution is often limited by effects much worse than the diffraction limit: atmospheric distortion (a big one, but one for which adaptive optics provides some very cool tricks), figure errors in the lens or mirror, vibrations, and so on.

There is also ongoing work on imaging "beyond the diffraction limit". See for example http://arxiv.org/abs/1406.2168 for a very recent example involving radio waves, or an entire special issue of Nature Photonics devoted to these technologies: http://www.nature.com/nphoton/journal/v3/n7/full/nphoton.2009.100.html . Quoting from the editorial:

It now appears that there is no fundamental limit in achieving spatial resolution; using visible light, it is possible to resolve up to a few nanometres with these approaches.

Food for thought. For details, see the link (and its links).

Afterword (responding to Carl Witthoft's comment): the diffraction limit is usually given by Abbe's formula:

$$d=\frac{\lambda}{2 n \sin\theta}$$

where $n \sin\theta$ is known as the numerical aperture of the system, a measure of its light-gathering characteristics. Note that the refractive index $n=1$ for air, but it can be higher for other media. This is a reason for using oil coupling in microscopes at the highest magnification settings: the oil shortens the effective wavelength of the light in the part of the imaging chain where it matters most for the spatial resolution of the system, and a shorter wavelength is one way to achieve better resolution.
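As a quick numerical check of Abbe's formula, here is a small Python sketch (the wavelength and $\sin\theta$ values are just typical textbook numbers) comparing a dry objective with an oil-immersion one:

```python
import numpy as np

lam = 550e-9                                # green light (an assumed wavelength)

# d = lambda / (2 n sin(theta)); typical textbook values for n and sin(theta)
for label, n, sin_theta in [("dry objective, air (n = 1.00)", 1.00, 0.95),
                            ("oil immersion     (n = 1.52)", 1.52, 0.95)]:
    na = n * sin_theta                      # numerical aperture
    d = lam / (2 * na)
    print(f"{label}:  NA = {na:.2f},  d = {d * 1e9:.0f} nm")
```

With the same $\sin\theta$, the oil raises the numerical aperture from 0.95 to about 1.44 and shrinks $d$ from roughly 290 nm to 190 nm.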

Floris
  • 118,905
  • Ya beat me to it :-). Might be worth another paragraph discussing the sin limit in microscope objectives and why oil-coupling helps. – Carl Witthoft Jun 11 '14 at 19:50
  • Thanks! I would still need to sit down and look carefully at why deconvolution won't help here. Of course, dividing by zero is a problem but I need to see that you cannot gain any extra information... – Count Iblis Jun 12 '14 at 18:20
  • Oh - deconvolution helps. A lot. But there is some information that is "destroyed" - think of the convolution of a sine wave with a rect function. When the rect's width is an integer multiple of the wavelength, the integral (convolution) is zero and you don't know whether the sine wave was there, or how big it was. For all other frequencies, the deconvolution can help (at the expense of amplifying noise for frequencies close to the nulls). If your detection adds white noise (all frequencies), then there are places where you amplify noise when no signal is available... – Floris Jun 12 '14 at 18:23
1

I'm going to focus on the information contained in the light field itself. This excludes from the discussion many, if not all, "superresolution" techniques, which directly or indirectly make use of information beyond that contained in the imaging light field [footnote 1].

It is true that you can do a deconvolution to get somewhat below the traditional diffraction "limit" if the signal-to-noise ratio is very high. But there is a fundamental limit even to this, and it leads to a tradeoff between distance from the object and resolution. Near-field microscopy (e.g. with a scanning near-field optical microscope, SNOM) can image arbitrarily small features with light, but herein lies the catch: you need to be near the object, and, for a given noise level, no matter how small, the signal carrying information about features of a given subwavelength size dies off exponentially with distance from the object. What leads to the notion of a hard limit is the following:

Only nonevanescent waves (corresponding to truly free photons) can convey Fourier component information to an imaging system that is arbitrarily far from the object.

The phenomenon is indeed best understood through evanescent waves. If you want to encode a Fourier component of a transverse feature into the light field, and that component's spatial angular frequency $k_f>k$ (here $k$ is the light's wavenumber), then as the plane wave encoding this component propagates away from the object (call this the $z$ direction), its amplitude varies as $\exp(-\sqrt{k_f^2-k^2}\,z)$; that is, the $z$ component of the wavevector becomes imaginary and the amplitude swiftly drops off with distance. As $z\to\infty$, only the nonevanescent waves are left, so the system transfer function looks more and more like a hard-limiting lowpass filter with cutoff spatial frequency $k$ as $z$ increases. If you want to image features of characteristic length $d<\lambda$, then the loss in signal-to-noise ratio is:

$$\begin{array}{lcl}SNR &=& SNR_0-40\,\pi\,z\,\sqrt{\frac{1}{d^2}-\frac{1}{\lambda^2}}\,\log_{10}e\quad\text{(decibel)}\\&\approx& SNR_0-40\,\pi\,\frac{z}{d}\,\log_{10}e\quad (d\ll\lambda)\end{array}$$

where $SNR_0$ is the signal-to-noise ratio you would get if you held the SNOM tip right up against the imaged object and $z$ is the distance of the SNOM tip from the object. This is a horrifically fast dropoff. If your probe scans $1\,{\rm \mu m}$ from the imaged object and we wish to see $50\,{\rm nm}$ sized features, the signal-to-noise lost to the mere $1\,{\rm \mu m}$ standoff is over 1000 decibels (a power factor of more than $10^{100}$!). Practically speaking, your probe must be within a distance $d$ or less of the imaged object, where $d$ is the subwavelength feature length you wish to see; the above formula then gives an SNR dropoff of about $54\,{\rm dB}$ when $z=d$.
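For concreteness, here is a small Python sketch evaluating the exact formula above (the $500\,{\rm nm}$ wavelength is an assumed value); it reproduces the two numbers quoted in the previous paragraph:

```python
import numpy as np

def snr_loss_db(z, d, lam):
    """SNR lost to evanescent decay, in decibels of power (the exact formula above)."""
    return 40 * np.pi * z * np.sqrt(1 / d**2 - 1 / lam**2) * np.log10(np.e)

lam = 500e-9        # illumination wavelength (an assumed value)
d = 50e-9           # subwavelength feature size we want to resolve

print(f"z = 1 um: {snr_loss_db(1e-6, d, lam):7.0f} dB")   # ~1100 dB: utterly hopeless
print(f"z = d   : {snr_loss_db(d, d, lam):7.1f} dB")      # ~54 dB: the practical standoff
```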

Footnotes

[1]. For example, STED depletes fluorophores away from the focus before taking the final light reading, thereby preventing anything more than a few tens of nanometres from the focus from registering.