Google Research's AI image noise reduction is out of this world

If you have great lighting, a good photographer can take decent photos even with the crappiest camera imaginable. In low light, though, all bets are off. Sure, some cameras can shoot haunting video lit only by the light of the moon, but for stills — and especially stills shot on a smartphone — digital noise continues to be a scourge. We may be getting close to the limit of what hardware alone can achieve; heat and physics make it hard to build meaningfully better camera sensors. But then Google Research came along, releasing an open source project it calls MultiNeRF, and I get the sense that we're at the precipice of everything changing.

I could write a million words about how awesome this is, but I can do better: here's a 1-minute, 51-second video which, at 30 frames per second and "a picture is worth a thousand words," adds up to at least 1.5 million words' worth of magic:

Video Credits: DIYPhotography

The algorithms run on raw image data and add AI magic to figure out what the footage "should have" looked like without the distinctive noise generated by imaging sensors.
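The core intuition (my illustration here, not Google's code) is that each raw capture is the true scene plus roughly independent sensor noise, so combining many observations — across frames, or across viewpoints in a NeRF — averages the noise away. A tiny NumPy sketch of that effect:

```python
import numpy as np

# Illustrative only: simulate how combining many noisy raw captures of the
# same scene suppresses sensor noise (the intuition behind multi-frame and
# multi-view denoising; this is NOT the MultiNeRF code).

rng = np.random.default_rng(0)

true_scene = rng.uniform(0.0, 1.0, size=(64, 64))  # "clean" linear raw signal
noise_std = 0.2                                     # per-capture sensor noise

for n_frames in (1, 4, 16, 64):
    # Each capture = true scene + independent Gaussian read noise
    captures = true_scene + rng.normal(0.0, noise_std, size=(n_frames, 64, 64))
    estimate = captures.mean(axis=0)                # combine observations
    rmse = np.sqrt(np.mean((estimate - true_scene) ** 2))
    print(f"{n_frames:3d} captures -> RMSE {rmse:.3f} "
          f"(theory ~{noise_std / np.sqrt(n_frames):.3f})")
```

A NeRF-style approach goes further: because it reconstructs a 3D representation of the scene rather than just stacking frames, the result can be re-rendered from new viewpoints or, as the video shows, with different focus and exposure.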

At the moment this is research rather than a commercially available product, but as a photography and AI nerd, I'm wildly excited by these developments; the lines are blurring between photography and computer graphics, and I'm here for it. Computational photography is already present in all modern smartphones to some degree, and it's only a matter of time before algorithms like this are fully integrated as well.