Given good lighting, a good photographer can take great pictures even with a crappy camera. In low light, however, all bets are off. Sure, some cameras can shoot haunting video lit only by moonlight, but for stills, and especially stills shot on a smartphone, digital noise remains a curse. We are approaching the limits of what hardware can do; heat and physics work against ever-better camera sensors. But then Google Research came along and released an open-source project it calls MultiNeRF, and it seems to me we are on the verge of it changing everything.
I could write a million words about how wonderful this is, but I can do better than that. Here's a 1-minute, 51-second video that clocks in at 30 frames per second; if a picture is worth a thousand words, that's over three million words' worth of magic:
The algorithms run on raw image data and use AI to reconstruct what the footage should have looked like without the noise the imaging sensor typically generates.
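MultiNeRF itself goes far beyond this, training neural scene representations on noisy raw captures, but the statistical intuition behind any such approach is simple: many noisy observations of the same scene can be combined into a cleaner one. Here is a toy sketch (not the project's algorithm, just an illustration with simulated data): averaging N independent noisy raw frames shrinks the noise standard deviation by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" scene (a flat gray patch) plus independent
# per-frame sensor noise. Purely illustrative values, not real raw data.
true_scene = np.full((64, 64), 0.5)
num_frames = 16
noisy_frames = [
    true_scene + rng.normal(0.0, 0.1, true_scene.shape)
    for _ in range(num_frames)
]

# Averaging 16 independent noisy frames should cut the noise
# standard deviation by about sqrt(16) = 4x.
denoised = np.mean(noisy_frames, axis=0)

noise_single = np.std(noisy_frames[0] - true_scene)
noise_merged = np.std(denoised - true_scene)
print(f"single frame noise: {noise_single:.3f}, merged: {noise_merged:.3f}")
```

Real computational-photography pipelines are much more sophisticated (they must align frames, handle motion, and cope with non-Gaussian noise), but this is the basic reason that pooling information across raw captures beats any single exposure.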
At the moment this is research rather than a commercially available product, but as a photography and AI nerd I am wildly excited by these developments. The lines between photography and computer graphics are blurring, and I'm here for it. Computational photography is already present to some degree in every modern smartphone, and it's only a matter of time before algorithms like these are fully integrated as well.