Google's new AI image noise reduction tool could be a game changer

Why it matters: Google Research is working on new AI image noise reduction technology that could drastically change low-light photography. The tool is capable of reconstructing a dark scene with powerful denoising and minimal artifacts, comfortably outperforming existing denoise tools.

Computational photography has come a long way, and nowadays it is prevalent in smartphones and post-processing software. Noise reduction is arguably one of the most valued tools, as even the best camera sensors are not immune to image noise, especially in darker environments. Google Research has unveiled an exciting new technology that uses artificial intelligence to eliminate image noise from darker scenes, effectively allowing photographers to "see in the dark."

Google is calling the new tool RawNeRF, and it forms part of its open source project known as MultiNeRF. RawNeRF is aimed specifically at helping photographers capture darker scenes. It makes use of AI with unprecedented denoising power, and what's really impressive is that the denoising seems to happen with minimal loss in quality and far fewer artifacts than comparable tools.

NeRF (Neural Radiance Fields) is a view synthesizer that can take a collection of images and reconstruct an accurate 3D render of a scene. Ben Mildenhall, a Google researcher, explains that RawNeRF "combines images taken from many different camera viewpoints to jointly denoise and reconstruct a scene." So it's not just a denoiser: it can also be used to vary the camera position and view the scene from different angles. Scenes are reconstructed in a linear HDR color space, which means details such as exposure, tone mapping, and focus can all be adjusted after the fact.
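For anyone curious what reconstructing in a "linear HDR color space" actually buys you, here is a minimal sketch of the idea in plain NumPy. Nothing below is Google's code; the toy scene array and the Reinhard-style tone curve are illustrative assumptions. The point is that once a scene is stored as linear light values, re-exposing it is a simple multiplication, and the display tone curve can be applied afterwards:

import numpy as np

# Toy stand-in for a linear HDR reconstruction: float radiance values
# proportional to light, not clipped to a 0-255 display range.
rng = np.random.default_rng(0)
scene_linear = rng.uniform(0.0, 4.0, size=(4, 4, 3))

def expose(linear, stops):
    # In linear space, changing exposure is just scaling by 2^stops.
    return linear * (2.0 ** stops)

def tone_map(linear, gamma=2.2):
    # Simple global tone map: compress highlights, then gamma-encode for display.
    compressed = linear / (1.0 + linear)      # Reinhard-style curve (an assumption)
    return np.clip(compressed, 0.0, 1.0) ** (1.0 / gamma)

# Because the reconstruction is linear, exposure can be chosen after the fact,
# and only then is the display tone curve baked in.
brightened = tone_map(expose(scene_linear, stops=+2))   # two stops brighter
darkened = tone_map(expose(scene_linear, stops=-1))     # one stop darker

Once an image has been tone mapped down to 8-bit values, that flexibility is largely gone, which is why doing the reconstruction before tone mapping matters.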

As shown in the video above, Mildenhall uses a smartphone photo of a candlelit table to demonstrate the power of RawNeRF. He applies minimal post-processing and brightening, and while the resulting picture is more detailed, it contains a significant amount of sensor noise. Running the picture through a cutting-edge deep denoiser leaves unsightly artifacts, but with RawNeRF the results are truly staggering, especially in terms of image quality and the lack of artifacts. The reason it performs so well is that the AI is trained on raw image data rather than on post-processed JPEGs.
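A rough intuition for why raw data helps, sketched in NumPy below. This is my own toy model, not Google's training pipeline: noisy linear measurements of the same very dark point average out towards the true value, while averaging after a clip-and-gamma "JPEG-like" curve is biased, because the nonlinearity does not commute with the mean.

import numpy as np

rng = np.random.default_rng(1)

true_signal = 0.02          # a very dark pixel, in linear raw units
n_views = 2000              # pretend we have many noisy observations of this point

# Toy noise model: each view sees the true value plus zero-mean sensor noise.
raw_samples = rng.normal(loc=true_signal, scale=0.02, size=n_views)

def to_jpeg_like(x, gamma=1.0 / 2.2):
    # Crude stand-in for camera post-processing: clip, then gamma-encode.
    return np.clip(x, 0.0, 1.0) ** gamma

# Averaging in linear raw space converges on the true signal...
avg_raw = raw_samples.mean()

# ...but averaging the tone-mapped values is biased away from the tone-mapped truth.
avg_jpeg = to_jpeg_like(raw_samples).mean()
jpeg_truth = to_jpeg_like(true_signal)

print(f"raw average:         {avg_raw:.4f} (true value {true_signal:.4f})")
print(f"tone-mapped average: {avg_jpeg:.4f} (tone-mapped truth {jpeg_truth:.4f})")

Merging many viewpoints is, loosely speaking, a very clever form of averaging, so doing it before the nonlinear processing is where the headroom comes from.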

It's exciting to think that we could see this technology integrated into our cameras and smartphones very soon, and it'll likely become a game changer for professional photographers and hobbyists. There is no existing noise reduction tool that even comes close to matching these results.


 
This Google tool may well be much better, but right now Topaz AI DeNoise does an amazing job on many pictures.
 

All the new AI NR tools, be it Topaz, ON1 or DxO, do an amazing job. Still, I'd like to see a direct comparison to NeRF. Maybe Google should offer this as a Photoshop plugin and prove they have confidence in the product.

I hope the security industry takes note, because most security cameras have garbage sensors that are hopeless in the low light they are supposed to operate in. What we need is this built into the device so the images displayed are already NeRF'd in real time.
 
Instead of all of these "tricks", how about putting LARGER sensors in phones? And by larger I don't mean more megapixels. I get sick & tired of hearing "smartphone photos are as good as a DSLR's".
50 megapixels, 100 megapixels, 200 megapixels. If "more" were better, how come most DSLRs, even the $6,000-and-up professional ones, max out at 40-50 megapixels?
It's not the number of pixels, it's the SIZE of each pixel.
To take a photo in low light, they crank up the gain, and the photo ends up with noise. To compensate, the "AI" tries to squish it, which can result in a "flat" image.
Not to mention this is a Google idea, but I'm sure they won't be "keeping" the data. ;)
 

Well, you can only increase the sensor so much unless you want giant phones that are 20 mm thick and cost $2K. The sensors in high-end phones are already quite large, getting up towards 1". Canon and Sony have specially developed sensors for the security industry that can basically see in the dark and make a mockery of the current garbage.
 
Unfortunately, so true!

FYI, my HTC One M8 back in 2014 had a 4-megapixel sensor, but it had superior depth because of its 2 µm pixel size. And yes, the photos were small compared to today's, but the richness of colors and gradients was amazing. Incomparable!
 
All the new AI NR tools, be it Topaz, ON1 or DxO, do an amazing job. Still, I'd like to see a direct comparison to NeRF. Maybe Google should offer this as a Photoshop plugin and prove they have confidence in the product.

I hope the security industry takes note, because most security cameras have garbage sensors that are hopeless in the low light they are supposed to operate in. What we need is this built into the device so the images displayed are already NeRF'd in real time.

This isn't comparable as it's not designed to work on just one image. You need a collection of images - up to 200 in one of the examples. They are two different classes of products doing different tasks.
 
Moving the camera around and combining lots of low-quality captures is based on good statistics - e.g. if your phone's GPS were only accurate to 10 m and that error were evenly spread, then taking lots of readings from different spots and over time must increase accuracy (the sketch further below shows the effect).
Something like this with a huge amount of memory would be able to clean up old movies - especially if someone's face etc. appears in multiple scenes - but that's still a few years away.

AI just does all the hard work
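The averaging intuition above is easy to check numerically. A minimal sketch, assuming the commenter's 10 m figure and a simple Gaussian error model (neither comes from the article):

import numpy as np

rng = np.random.default_rng(42)

true_position = 0.0    # metres, along one axis for simplicity
sigma = 10.0           # each individual reading is off by about 10 m (assumed Gaussian)

for n_readings in (1, 10, 100, 1000):
    # Repeat the experiment many times to estimate the typical error of the average.
    trials = rng.normal(true_position, sigma, size=(10_000, n_readings))
    error_of_mean = trials.mean(axis=1).std()
    print(f"{n_readings:4d} readings -> typical error of the average ~ {error_of_mean:5.2f} m "
          f"(theory: {sigma / np.sqrt(n_readings):5.2f} m)")

The error of the average shrinks like 1/sqrt(N), which is the same statistical effect multi-frame and multi-view techniques lean on, on top of whatever the learned model adds.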
 
This technology is incorrectly labeled as "de-noising". It is "image enhancement". The article says as much itself. Removing noise (pixels not representative of the actual photons received by the sensor) without a loss in fidelity means that every pixel removed is substituted by a pixel that is much more representative, based on an analysis of how alike its surrounding pixels are (through interpolation and extrapolation techniques). De-noising is the removal of out-of-place pixels. Since pixels are only being removed in order to add new ones, the only correct description is enhancement.

Put another way: the choice of the definition of "noise" is irrelevant if the image is being enhanced. So whether I define "not representative" as the eyes being closed, or the face frowning, or whatever, is up to me. Noise is in the eye of the beholder, just like beauty. This is digital image enhancement. Which is great. I have digitized all of my old analogue photos and videos to enhance them and provide a clearer picture of the past.
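For contrast, the classical "replace a pixel using its neighbours" idea the comment describes looks roughly like the sketch below: a plain median filter in NumPy (my own illustration, not any particular product's algorithm), where an out-of-place value is substituted with one more representative of its surroundings.

import numpy as np

def median_denoise(img, radius=1):
    # Replace each pixel with the median of its local neighbourhood.
    padded = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    height, width = img.shape
    for y in range(height):
        for x in range(width):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = np.median(window)
    return out

# Toy example: a flat grey patch with one hot pixel acting as "noise".
patch = np.full((5, 5), 0.5)
patch[2, 2] = 1.0                       # the out-of-place value
print(median_denoise(patch)[2, 2])      # prints 0.5: the hot pixel is replaced

Whether you call that removal, substitution, or enhancement, the mechanics are the same: the new value is inferred from the pixels around it rather than measured.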
 
All the new AI NR tools, be it Topaz, ON1 or DxO, do an amazing job. Still, I'd like to see a direct comparison to NeRF. Maybe Google should offer this as a Photoshop plugin and prove they have confidence in the product.

I hope the security industry takes note, because most security cameras have garbage sensors that are hopeless in the low light they are supposed to operate in. What we need is this built into the device so the images displayed are already NeRF'd in real time.

+1 on the comparison, especially with DeepPRIME which in my view produces similar denoising results.

I don't entirely agree on security cameras: the resulting video would not be usable in court, I would bet, as it is generated and not the actual image captured by the sensor. All these beautiful AI tools are creating a fascinating legal challenge, actually =)

Bit like JBIG2 compression rewriting characters: https://arstechnica.com/information...copiers-randomly-rewriting-scanned-documents/
 