I’ve heard of this kind of technology before, but this quick little video demonstrates just how cool it really is. Basically, instead of a normal lens, you use a whole bunch of tiny lenses, and with the right algorithms you can combine the many resulting microimages into a single photo — and what’s more, you can change the focal point after the shot. It’s pretty trippy to see in action.
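For the curious, the standard way this kind of refocusing works is "shift-and-add": each tiny lens sees the scene from a slightly different position, so shifting each view in proportion to its offset and averaging them brings a chosen depth plane into focus. Here's a minimal sketch of that idea — the array shapes and the `alpha` parameter are my own illustration, not anything from Georgiev's talk:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W), one sub-aperture image
    per (u, v) lens position in the microlens grid.
    alpha: refocus parameter; 0 keeps the focal plane as captured,
    other values shift the synthetic focal plane nearer or farther.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each view in proportion to its offset from the center lens
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

This also suggests an answer to the blur question: points off the chosen focal plane don't line up when the views are shifted, so averaging smears them out — the blur falls out of the math rather than being painted on.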
The presentation (by Todor Georgiev, the lead researcher) is actually quite dry, but it’s only a couple minutes, so stick with it. I’m not sure I understand how they get the blur effect, though. The microimages appear to all be in focus. I wonder if it’s just added in?
The downside is vastly reduced resolution, since so much of the sensor is dedicated to largely redundant data. He says it’s “on the order of” 25-50 individual cameras, so I’m guessing you’d have to divide the pixel count of the source image by that to get the size of the “final” image.
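To put rough numbers on that guess (the 16-megapixel sensor is my assumption, not a figure from the talk):

```python
sensor_pixels = 16_000_000   # hypothetical 16-megapixel sensor (my assumption)
microimages = 25             # low end of Georgiev's "25-50 cameras" figure
final_pixels = sensor_pixels / microimages
print(final_pixels)          # 640000.0 -- roughly a 0.6-megapixel final image
```

So even at the favorable end of his range, a fairly beefy sensor would net you something closer to webcam resolution.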