This prototype camera has a big array of miniature sensors, and the stereo differences between the slightly offset images they each capture are used to build a depth map of the scene. The idea is that once you take the picture, the depth map is packed in as metadata and you can mess with it in post, changing what’s in focus and what’s not.
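For the curious, the stereo-to-depth step is just triangulation: for two lenses a known baseline apart, a feature that shifts by some disparity between their images sits at depth proportional to 1/disparity. Here's a minimal sketch; the focal length and baseline numbers are illustrative stand-ins, not specs from the Stanford prototype.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px=1000.0, baseline_m=0.001):
    """Classic stereo relation: Z = f * B / d.

    disparity: per-pixel shift (in pixels) between two sub-images.
    focal_px / baseline_m: made-up example values, not the real camera's.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full(disparity.shape, np.inf)  # zero disparity = "at infinity"
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# A 4-pixel disparity at these example values puts the point 0.25 m away.
print(depth_from_disparity([[4.0]]))
```

With thousands of lens pairs instead of one, you'd get many such estimates per point, which is presumably how the array gets a dense, reliable map.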
My question is: what use is the depth map when the actual photo has a fixed depth of field? Depth of field is created by the camera’s aperture, and it’s only taking one photo, in which what’s in focus is set by the objective lens. So while a depth map certainly has applications, I don’t think what they’re implying is possible, this rolling of focus from near to far, because the source photograph is already focused. I could be wrong, though. Either way, it’s a cool piece of technology. What do you think?
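One way the trick could work, if the tiny per-lens apertures make each sub-image effectively all-in-focus: fake a shallow depth of field afterwards by blurring each pixel in proportion to how far its depth is from a chosen focal plane. A toy sketch of that idea follows; this is my guess at how such post-processing might look, not the Stanford team's actual algorithm, and `strength` is a made-up tuning knob.

```python
import numpy as np

def refocus(image, depth, focal_depth, strength=2.0):
    """Depth-dependent box blur on a grayscale image.

    Pixels at focal_depth stay sharp; blur radius grows with the
    pixel's distance from the chosen focal plane.
    """
    image = np.asarray(image, dtype=float)
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            # blur radius grows with distance from the focal plane
            r = int(round(strength * abs(depth[y, x] - focal_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()  # simple box blur
    return out
```

Sweeping `focal_depth` from small to large values would give exactly that rolling of focus from near to far, at least on a source image that started out sharp everywhere.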
Stanford researchers developing 3-D camera with 12,616 lenses [Stanford News Service, via Slashdot]