
I've been fascinated with light field cameras for quite a while.

They are devices that can capture information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the direction that the light rays are traveling in space. This contrasts with a conventional camera, which records only light intensity.

The basic construction looks like a matrix of small lenses: [image: light field camera sensor]
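To make this concrete, here is a minimal sketch of the standard shift-and-add refocusing such cameras enable (Python, with illustrative names like `refocus` and random stand-in data rather than a real light field): each microlens records the scene from a slightly different direction, and shifting and averaging those sub-aperture views refocuses the image after capture.

    import numpy as np

    def refocus(lf, shift):
        """Synthetic refocusing by shift-and-add.

        lf    : 4D light field of shape (U, V, H, W) -- one H x W
                sub-aperture image per (u, v) lens position.
        shift : parallax (in pixels) to compensate per unit lens offset;
                different values bring different depths into focus.
        """
        U, V, H, W = lf.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view to cancel its parallax, then average:
                # points at the chosen depth align, everything else blurs.
                du = int(round(shift * (u - U // 2)))
                dv = int(round(shift * (v - V // 2)))
                out += np.roll(np.roll(lf[u, v], du, axis=0), dv, axis=1)
        return out / (U * V)

    # Toy example: a 5 x 5 grid of 64 x 64 views (random stand-in data).
    lf = np.random.rand(5, 5, 64, 64)
    image = refocus(lf, shift=1.0)

Varying `shift` selects which depth plane comes out sharp; note that nothing here creates information the photons did not already carry into the lens.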

Is it possible to create an array of sensors (something like a point in that 3D scene) that can capture the entire light field of the 3D scene (think of your dormitory), and then, using the information about each photon's direction vector, recreate the scene in a simulation in which you can move around and see behind objects?

That is, can you reconstruct the occluded objects' surfaces given the information encoded by the photons travelling inside that scene that hit the array of sensors?

garyp
  • Light doesn't actually travel between two points along the shortest path; that behaviour is only apparent, because of the path integral over all possible paths. Feynman discusses this in his book QED: The Strange Theory of Light and Matter. EDIT: Here's a summary: "In elementary school, you learn that the angle of incidence equals the angle of reflection. But actually, saith Feynman, each part of the mirror reflects at all angles." http://lesswrong.com/lw/pk/feynman_paths – StefanS Apr 06 '16 at 07:11
  • http://physics.stackexchange.com/q/83105/ – StefanS Apr 06 '16 at 07:11
  • I think the answer is no, at several levels:
    • Capturing all light: the no-cloning theorem prohibits determination of a quantum state.
    • However, knowledge of the quantum state may not be necessary - maybe computation based on a large sensor input would be sufficient. Unfortunately, to move around and see behind objects one would have to capture photons emitted by the back surface of an object and reflected off other objects. If the wall were a mirror this could be possible, but walls are diffusive surfaces that reflect rays in all directions. I believe the reconstruction is hampered by a "butterfly effect".
    –  Apr 07 '16 at 20:05
  • Take the following point of view: if you put a newspaper close to a wall and look from behind the newspaper at the wall, the wall changes color to a "grayish" tone. What is written does not matter; what matters is the percentage of black ink on the front page. To capture the image from the wall and even roughly reconstruct the newspaper's look based on that seems impossible (the sensor data is noisy, and the diffusive reflection is essentially an extremely strong "Gaussian blur" filter applied to the input data - see in Photoshop: an unsharp mask cannot reverse a large-radius Gaussian blur filter); a small numerical sketch of this follows below. –  Apr 07 '16 at 20:08
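A minimal numerical illustration of that last comment, with made-up numbers (the 256-sample signal, the sigma = 20 blur, and the 1e-3 noise floor are all assumptions): a large-radius Gaussian blur pushes nearly all spatial frequencies below any realistic noise floor, and those components cannot be recovered by any sharpening filter.

    import numpy as np

    n = 256
    x = np.arange(n) - n // 2
    sigma = 20.0                          # large blur radius, as with a diffuse wall
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()

    # Transfer function of the blur: how much of each spatial frequency survives.
    transfer = np.abs(np.fft.fft(np.fft.ifftshift(kernel)))

    # Frequencies crushed below a typical relative noise floor are lost for good.
    lost = np.mean(transfer < 1e-3)
    print(f"{lost:.0%} of spatial frequencies fall below the noise floor")

With these numbers over 90% of the frequency content is gone, which is why the newspaper's text cannot be read back off the wall.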

4 Answers


From the viewpoint of information loss, we can think about how many degrees of freedom there are. The detector will in general have far fewer degrees of freedom than the source system you are trying to measure, and that holds even if its precision is perfect!

Now let me try to specifically address the point in your question of whether we could

see behind objects?

given the

information encoded by the photons.

What you would try to use is either diffraction (i.e. so that the photons get to the detector around corners) or reflection (photons bouncing off a wall).

Reflection is easily discarded as a process for seeing objects around the corner: take a perfect mirror, and your detector sees the photons coming from it. How is it going to know whether something bounced off at that point or simply came from further behind? (Of course, if you knew a priori that it is a mirror you could maybe infer that, but that is really not the point of this question.)

Diffraction is a bit harder to see. All the diffractive systems I have encountered have a finite number of modes (i.e. when coupling to any detector, not just practical ones) that pass through the system with significant efficiency/throughput. To name an example that I simulated: for a grating spectrometer I got about 20... and there is no way you can reconstruct an arbitrary object from 20 numbers.
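Here is a sketch of the kind of mode counting meant here, under assumed parameters (a 200-point system, a Gaussian spread of width 15, and a 1e-3 throughput cutoff, none of which come from the actual spectrometer simulation): the singular values of the propagation operator decay so quickly that only a handful of modes carry usable information.

    import numpy as np

    # Model propagation through a diffractive/diffusive system as a linear
    # map T: each source point spreads over many detector pixels.
    n = 200
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    width = 15.0                              # spread of a single source point
    T = np.exp(-((i - j) ** 2) / (2 * width**2))

    s = np.linalg.svd(T, compute_uv=False)    # singular values, descending
    modes = np.sum(s > 1e-3 * s[0])           # modes with significant throughput
    print(f"{modes} of {n} modes carry significant throughput")

With these illustrative numbers only a dozen or two of the 200 modes survive the cutoff -- the same order of magnitude as the spectrometer example above.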

To summarize: what I have argued here is that this is not possible from a purely theoretical viewpoint, due to degeneracy in the optical propagation and the resulting loss of information about the source. So I would say we can most certainly take the answer to be: no, this is not possible in general; you can't fully reconstruct an object around the corner! The "in general" of course means, on the other hand, that you might be able to do it for specific sources/environments. And the "fully" means you might be able to partially reconstruct it. But I think what I can say is: you cannot see around corners in general, no matter how good you make your detector!

Wolpertinger

Well, if you are willing to replace "light field" with wave front, and are happy to recreate the original wave front, then what you are looking for is a way to record a hologram.

By necessity, holographic film intended to reconstruct objects for the human eye needs a high resolution, in the 1000 line pairs/mm range. What you actually record is an interference pattern with a reference wave.

If you are willing to reduce the requirements for visibility under large (>10 degree) angles and accept a noisy image, you can certainly record holograms with a cheap CCD.

When I was a physics student, we didn't even record the interference patterns; we computed them (with a 2D FFT) and then wrote the data to a b/w ("phase") LCD instead of film. The result was not observable by the human eye directly, but it could recreate a wave front that would display the original image we computed the FFT from.
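A minimal sketch of that compute-the-hologram workflow, simplified to a phase-only Fourier hologram (the target pattern and array sizes are made up): the phase of the 2D FFT plays the role of the pattern written to the phase LCD, and the inverse FFT models the illuminated LCD recreating the wave front.

    import numpy as np

    # Target image the reconstructed wave front should display.
    target = np.zeros((256, 256))
    target[96:160, 96:160] = 1.0              # a bright square

    # "Record" the hologram numerically: only the FFT's phase is kept,
    # since a phase-only LCD cannot modulate amplitude.
    hologram_phase = np.angle(np.fft.fft2(target))

    # Illuminating the LCD with a plane wave and propagating to the focal
    # plane is modelled by an inverse FFT of the phase-only field.
    field = np.fft.ifft2(np.exp(1j * hologram_phase))
    reconstruction = np.abs(field) ** 2       # intensity on a screen

As in the real setup, the reconstruction is recognizable but noisy: discarding the amplitude leaves a rough, edge-enhanced version of the target.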

Jens

No.

Light field cameras are indeed fascinating, and they can be used to perform impressive feats such as refocusing after the image data has been recorded. They are not, however, magical, so they are restricted to providing information which was already encoded in the photons incident upon their objective lens(es), as you noted at the end of your question statement.

You asked whether one could deduce the appearance of an occluded object from the data recorded by a light field camera. As we have both observed, this is (at best) equivalent to asking whether one could deduce the appearance of an occluded object from the information encoded in the photons incident on the objective lens(es).

By "occluded object", I presume you mean an object which is not visible from the vantage point of the camera. Or in other words, an object which has had no interaction with any of the photons incident upon the camera. Or in other words, an object about which there can be no information encoded in the photons incident upon the objective lens(es). I'm sure you see where this is going...

A good exercise to understand exactly what a light field camera can and cannot do (in theory---in practice they can't do much yet) is to sit on a chair in the dormitory you mentioned and spend 5 minutes continuously looking around, waggling your head from side to side (no more than a couple inches--the camera isn't very big), letting your eyes go in and out of focus, until you've covered every angle at every focus range. You've now seen all of the intelligible scenes which could be constructed from any camera.


You can get some parallax from such systems, but the spatial extent of the cameras would have to be large. Full 3D, no; only some angular view.

The light field is great as a concept, but it is not quite practical to store so much information, and the camera array would need to be huge. Each pixel is made up of several hundred sub-pixels, which gives poor resolution overall for many scenes. You can also increase the resolution somewhat numerically.
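A back-of-the-envelope illustration of that trade-off (the 40 MP sensor and the 14 x 14 angular sampling are made-up numbers, not a specific camera):

    # Each microlens spends sensor pixels on ray direction instead of position.
    sensor_pixels = 40_000_000        # raw sensor pixel count (assumed)
    angular_samples = 14 * 14         # directions sampled under each microlens (assumed)

    spatial_pixels = sensor_pixels // angular_samples
    print(f"{spatial_pixels:,} effective spatial pixels")   # ~200,000, i.e. ~0.2 MP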

The interest is mostly in 'seeing through' occlusions, the ability to refocus numerically, high-speed cameras, and illumination at only particular light angles (fake '3D monitors'). You can also 'see' behind walls in artificially designed scenes (this is some 3D information).

Anonymous