Fujifilm's new CCD may be just crazy enough to work


Fujifilm has shown (will show, rather, if this technique works) that there’s still a lot of play at the lowest levels of sensor design. With all the new features cameras enjoy, such as face detection and touchscreen interfaces, it’s easy to forget that the underlying technology, in many cases a CCD, hasn’t peaked. The way photodiode arrays are laid out seems like something that should stay out of the limelight, but as we’ve seen with NVIDIA’s choice of solder and its consequences, changes on the ground floor tend to be felt higher up. Read on for news on how Fujifilm might be shaking the foundations today.

Fujifilm’s new CCD changes the layout of the photodiodes. The original pattern was lines stacked like cake layers: one line alternating between red- and blue-sensitive diodes, the next line all green-sensitive. The human eye is most sensitive to green, so this disproportion is intentional. The new design is, as you see above, organized diagonally, with the red and blue pixels doubled up. Why is this important?
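If it helps to picture the older arrangement, here’s a quick Python sketch (my own illustration, nothing official from Fujifilm) that lays out the row-based pattern as described above. Note how half of all diodes come out green:

```python
import numpy as np

def classic_layout(rows, cols):
    """Build the older, row-based pattern described above: one row
    alternating red/blue diodes, the next row all green-sensitive."""
    cfa = np.empty((rows, cols), dtype="<U1")
    cfa[0::2, 0::2] = "R"  # even rows alternate red...
    cfa[0::2, 1::2] = "B"  # ...and blue
    cfa[1::2, :] = "G"     # odd rows are entirely green
    return cfa

print(classic_layout(4, 6))  # half the diodes are 'G'
```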

Firstly, it means that the red and blue detail will be slightly coarser and the green detail slightly finer, due to increased and decreased distance, respectively, between diodes. So far, so neutral. The bigger thing this enables is better pixel binning. Pixel binning essentially allows adjacent diodes to “team up” and become a sort of “superpixel”: at the cost of fine detail, light sensitivity is increased. In the original layout, the distance between similarly-sensitive pixels was far greater than it is in the new one. Now, pairs of pixels can team up without “assuming” similar data for the space between them, or creating unnatural combinations if, say, red and green diodes accidentally share information. Smaller and more accurate bins mean the same high sensitivity with reduced noise and fewer color issues.
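To make binning concrete, here’s a toy example (again, my sketch, not Fujifilm’s actual pipeline) that averages 2×2 blocks of a single color channel, assuming same-color diodes already sit adjacent, as in the new layout. The name bin_pixels and the 2×2 factor are made up for illustration:

```python
import numpy as np

def bin_pixels(channel, factor=2):
    """Average each factor x factor block of same-color diodes into one
    'superpixel': more light per output pixel, less noise, less detail."""
    h, w = channel.shape
    h, w = h - h % factor, w - w % factor  # trim to a clean multiple
    blocks = channel[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Example: a noisy 4x4 green channel collapses to a cleaner 2x2 one
raw = np.random.poisson(lam=20, size=(4, 4)).astype(float)
print(bin_pixels(raw))
```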



A secondary gain of the pattern, and perhaps an even more interesting one, is the ability to take two exposures from the sensor at the same time, with nearly identical color data. Half of each pixel pair is used for one, say a normal exposure, and the other half for the other, say an overexposure that pulls detail out of the shadows. The two images are then combined in-camera, producing an image with much greater dynamic range. It’s similar to exposure bracketing, but more self-contained and possibly easier to use. Once again, since only half the pixels are used for each exposure, the image will lose some detail, but if the engineers are careful, that won’t hurt image quality too much.
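Here’s a back-of-the-envelope sketch of that in-camera combination (the blending rule and the 4× exposure ratio are my assumptions, not Fujifilm’s): scale the overexposed frame back down, use it wherever it isn’t blown out, and fall back to the normal frame in the highlights:

```python
import numpy as np

def merge_exposures(normal, over, ratio=4.0, clip=0.95):
    """Blend a normal exposure with one taken 'ratio' times brighter.
    Where the bright frame isn't clipped, its rescaled, cleaner values
    are used; where it clips, the normal frame fills in the highlights."""
    rescaled = over / ratio   # bring the overexposure back to the same scale
    usable = over < clip      # mask out blown-out highlights
    return np.where(usable, rescaled, normal)

# Hypothetical half-sensor frames, values normalized to [0, 1]
normal = np.array([[0.02, 0.50], [0.90, 0.10]])
over = np.array([[0.08, 1.00], [1.00, 0.40]])
print(merge_exposures(normal, over))
```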

Sorry this has been such a lengthy post about such a dry topic, but I find the ground-level stuff very interesting, and I think the better informed we all are about it, the more insight we’ll have into everyday questions like “which camera is best in low light?” and “which processor encodes video fast enough?” If I’m honest, I doubt this sensor will really “shake the foundations” of the digital imaging world, but it’s interesting, marketable, and indicative of liveliness in the basic sectors, where one might assume the creators of today’s tech are resting on their laurels. It also means they’re not giving up in the face of Kodak dominance. Nice work, Fujifilm!

[images credit: Fujifilm]