RDASNet: Image Denoising via a Residual Dense Attention Similarity Network.

Abstract

In recent years, convolutional neural networks (CNNs) have been widely used in image denoising because of their strong performance. However, most CNN-based denoising models cannot fully exploit the redundancy in image data, which limits their expressiveness. We propose a new image-denoising model that extracts local features with a CNN and captures global information, in particular globally similar details, with an attention similarity module (ASM). Furthermore, dilated convolution is used to enlarge the receptive field so that the network can better attend to global features. Moreover, average pooling is applied inside the ASM to smooth feature maps and suppress noise, further improving performance. In addition, global residual learning carries information from shallow to deep layers, strengthening the overall result. Extensive experiments show that the proposed model delivers better denoising results, both quantitatively and visually, and is better suited to complex blind noise and real-world images.
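
To make the described components concrete, the following is a minimal PyTorch-style sketch, not the authors' released code: it illustrates a dilated convolution that enlarges the receptive field, an attention map smoothed by average pooling, and a global residual connection. The class names, channel counts, and the exact form of the attention are assumptions for illustration only.

import torch
import torch.nn as nn

class AttentionSimilarityModule(nn.Module):
    """Toy attention block: dilated-conv features gated by a pooled attention map."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Dilated convolution enlarges the receptive field at the same spatial size.
        self.dilated_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                      padding=2, dilation=2)
        # 1x1 convolution followed by a sigmoid produces a per-pixel attention map.
        self.attn_conv = nn.Conv2d(channels, channels, kernel_size=1)
        # Average pooling smooths the attention map and suppresses noisy responses.
        self.smooth = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.act(self.dilated_conv(x))
        attn = torch.sigmoid(self.smooth(self.attn_conv(feats)))
        return x + feats * attn  # local residual inside the block


class TinyDenoiser(nn.Module):
    """Shallow CNN plus ASM with a global residual connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.asm = AttentionSimilarityModule(channels)
        self.tail = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        feats = self.asm(torch.relu(self.head(noisy)))
        residual = self.tail(feats)
        # Global residual learning: the network estimates the noise and
        # subtracts it from the input, rather than predicting the clean image directly.
        return noisy - residual


if __name__ == "__main__":
    model = TinyDenoiser()
    out = model(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])

In this sketch, predicting the noise residual rather than the clean image is one common way to realize global residual learning; the paper's actual residual dense structure is deeper and denser than this toy example.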
