"Omni-focus" camera can focus on near and far objects simultaneously


There’s a trick here, I just can’t quite figure it out. Ordinary cameras, ever since the very first experiments in optics centuries ago, have relied on arranging lenses in sequence to recreate an image. Even our eyes work on this principle. Moving the lenses around zooms the image, changes the focus, and lets in more or less light. The device described by Professor Keigo Iizuka at the University of Toronto breaks with that tradition. As you can see in the images, objects only centimeters from the front of the device are as sharp as objects several meters away. How is this possible?

A traditional camera could get close to this with an extremely small aperture. At f/22, a common minimum aperture, you’re essentially shooting a pinhole image: the tiny opening keeps the blur from out-of-focus objects so small that nearly everything, near and far, looks acceptably sharp. I doubt this new device is simply a pinhole camera, though.
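A quick back-of-the-envelope sketch makes the point. Here’s a few lines of Python (the 35mm focal length and circle-of-confusion value are made-up, generic-DSLR numbers, nothing to do with the Divcam) computing the hyperfocal distance, the point past which everything out to infinity looks acceptably sharp:

```python
# Hyperfocal distance H = f^2 / (N * c) + f
# f = focal length, N = f-number, c = acceptable circle of confusion.
# The numbers below are illustrative only, not Divcam specs.
def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    """Distance beyond which everything to infinity is acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

for n in (2.8, 8, 22):
    h_m = hyperfocal_mm(focal_mm=35, f_number=n, coc_mm=0.02) / 1000
    print(f"f/{n}: hyperfocal distance ≈ {h_m:.1f} m")
```

With those example numbers, stopping down from f/2.8 to f/22 pulls the hyperfocal distance from roughly 22 meters down to under 3, which is why pinhole-style apertures seem to put everything in focus at once.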

They call it a “Divergence-ratio Axi-vision Camera,” or Divcam for short. Not a lot of clues there, except perhaps for an optics expert. My guess, though, is that a complete, flat image is formed by a polarized, flat “lens” (for lack of a better term), and the light travels in parallel back to a high-sensitivity sensor. Notice that the images have an extremely narrow field of view, which supports my theory: a rounded lens would produce both a wider field of view and divergent light rays inside the device, and that would make the images we see impossible. The full-frame crop here also suggests a large, low-resolution sensor and parallel rays:

See how the edge of the finger is sort of all-or-nothing pixelation? There’s absolutely no overlap between the light coming from the doll and the light coming from the finger, which suggests the camera/sensor only accepts light coming straight at it. I’m not sure I’m explaining it correctly, but it makes sense to me.
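To show what I mean by “all-or-nothing,” here’s a toy sketch (entirely my own illustration, with fabricated numbers, not anything from the Divcam’s pipeline): a hard cut assigns each pixel completely to the finger or completely to the doll, where an ordinary optical blur would leave a gradient of in-between values along the boundary.

```python
import numpy as np

# Fake depth values across the finger/doll boundary -- illustrative only.
depth = np.linspace(0.2, 0.8, 7)

# "All-or-nothing" edge: every pixel belongs entirely to one object.
hard_edge = (depth < 0.55).astype(float)
print(hard_edge)              # [1. 1. 1. 1. 0. 0. 0.] -- no in-between values

# What an ordinary optical blur would leave: a smooth ramp at the edge.
soft_edge = np.clip((0.65 - depth) / 0.3, 0.0, 1.0)
print(soft_edge.round(2))     # tapers through intermediate values
```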

Whatever the case, they think they can apply it to the video world in general, and I hope they do. It looks interesting. In the meantime, though, I think a lot of consumers are only just starting to discover depth of field in their video as they begin shooting with cameras like the T2i. We’ll keep you posted on this new technology.

Update: Nope, I’m totally wrong. I thought it was a pretty good guess, though. I should have known: that pixel occlusion pattern is totally the result of a software “magic wand” selection. Plus, the ability to determine distance implies some form of stereoscopy.
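For what it’s worth, the stereo math I’m alluding to is simple: with two viewpoints, depth falls out of how far a feature shifts between them (the disparity). Here’s a minimal sketch of that relation; the baseline, focal length, and disparity values are invented examples, not anything published about the Divcam:

```python
# Classic pinhole-stereo relation: Z = f * B / d
# f = focal length (pixels), B = baseline between the two views (meters),
# d = disparity (pixels). All values below are made-up examples.
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Depth of a point from how far it shifts between two views."""
    return focal_px * baseline_m / disparity_px

for d_px in (200, 20, 2):  # nearby objects shift a lot, distant ones barely move
    z = depth_from_disparity(baseline_m=0.06, focal_px=800, disparity_px=d_px)
    print(f"disparity {d_px:3d} px -> depth ≈ {z:.2f} m")
```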