VidCNN - Learning Blind Video Denoising


Abstract

We propose a novel Convolutional Neural Network (CNN) for video denoising called VidCNN, which can denoise videos without prior knowledge of the noise distribution (blind denoising). VidCNN is a flexible model that tackles multiple noise types, both artificial and real. The architecture combines spatial and temporal filtering: it first learns to denoise each frame spatially, then learns to combine temporal information across frames, handling camera and object motion, brightness changes, low-light conditions, and temporal inconsistencies at the same time. We demonstrate the importance of the data used to train CNNs, creating for this purpose a dedicated dataset for low-light conditions. We test VidCNN on videos commonly used for benchmarking and on self-collected data, achieving results comparable with the state of the art in video denoising. Our model can be easily adapted to different noise models while keeping the same temporal denoising network, maintaining performance in terms of both accuracy and speed.
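The two-stage structure described above (spatial denoising of each frame, followed by temporal fusion across frames) can be sketched as a minimal pipeline. This is an illustrative stand-in only, not the actual VidCNN architecture: the function names are hypothetical, and the learned CNN stages are replaced here by a simple box filter and a frame average to show the data flow.

```python
import numpy as np

def spatial_denoise(frame):
    # Hypothetical stand-in for the spatial denoising stage:
    # a 3x3 box filter instead of a learned CNN, for illustration only.
    h, w = frame.shape
    pad = np.pad(frame, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += pad[dy:dy + h, dx:dx + w]
    return out / 9.0

def temporal_fuse(frames):
    # Stand-in for the temporal network: average the spatially
    # denoised frames (a real model would handle motion explicitly).
    return np.mean(frames, axis=0)

def denoise_window(window):
    # Two-stage pipeline: per-frame spatial denoising, then temporal fusion.
    return temporal_fuse([spatial_denoise(f) for f in window])
```

The point of the sketch is the ordering: each frame is cleaned independently first, so the temporal stage fuses already-denoised frames rather than raw noisy ones.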