Signal and Image Processing
Video denoising by combining patch search and CNNs
Published in - Journal of Mathematical Imaging and Vision
Non-local patch-based methods were until recently the state of the art for image denoising, but they are now outperformed by CNNs. In video denoising, however, they remain competitive with CNNs because they can effectively exploit the temporal redundancy of the video, a key factor for attaining high denoising performance. The problem is that standard CNN architectures do not accommodate a search for self-similarities. In this work we propose a simple yet efficient way to feed video self-similarities to a CNN. The non-locality is incorporated into the network via a first, non-trainable layer that finds, for each patch of the input image, its most similar patches within a search region. The central values of these patches are gathered into a feature vector assigned to each image pixel. This information is presented to a CNN trained to predict the clean image. We apply the proposed method to image and video denoising; in the video case, the patches are searched for in a 3D spatio-temporal volume. The proposed method achieves state-of-the-art results.

Keywords: Denoising • Video denoising • Non-local • Patch-based methods • CNN
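The sketch below illustrates, under stated assumptions, what the non-trainable first layer described in the abstract could look like: for each pixel, the patch centred on it is compared against patches in a local search window, and the centre values of the k most similar patches are collected into a per-pixel feature vector that can be concatenated with the noisy input and fed to the CNN. Function names, parameter values, and the brute-force search loop are illustrative, not the authors' implementation.

```python
import numpy as np

def nonlocal_features(image, patch_size=7, search_radius=10, k=8):
    """Return an (H, W, k) array holding, for each pixel, the centre values
    of the k patches most similar to the patch centred on that pixel.
    Hypothetical sketch; parameters are illustrative, not from the paper."""
    image = np.asarray(image, dtype=np.float64)
    H, W = image.shape
    half = patch_size // 2
    padded = np.pad(image, half, mode="reflect")
    features = np.zeros((H, W, k))
    for y in range(H):
        for x in range(W):
            # Reference patch centred on (y, x) in the padded image.
            ref = padded[y:y + patch_size, x:x + patch_size]
            candidates = []
            # Brute-force scan of the search window around (y, x).
            for dy in range(-search_radius, search_radius + 1):
                for dx in range(-search_radius, search_radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        cand = padded[yy:yy + patch_size, xx:xx + patch_size]
                        dist = np.sum((ref - cand) ** 2)
                        candidates.append((dist, image[yy, xx]))
            # Keep the centre values of the k closest patches.
            candidates.sort(key=lambda t: t[0])
            features[y, x] = [v for _, v in candidates[:k]]
    return features
```

For video, the same idea would extend the search window to a 3D spatio-temporal volume spanning neighbouring frames, so that temporally redundant patches also contribute to each pixel's feature vector.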