
Self-attention learning network for face super-resolution.

Abstract

Existing face super-resolution methods depend on deep convolutional networks (DCN) to recover high-quality reconstructed images. They either acquire information in a single space by designing complex models for direct reconstruction, or employ additional networks to extract multiple kinds of prior information that enhance the feature representation. However, it remains difficult for existing methods to perform well because they cannot learn complete and uniform representations. To this end, we propose a self-attention learning network (SLNet), a three-stage face super-resolution method that fully explores the interdependence of the low- and high-level spaces to compensate for the information used in reconstruction. First, SLNet uses a hierarchical feature learning framework to obtain shallow information in the low-level space. Then, the shallow information, which accumulates errors through the DCN, is improved under high-resolution (HR) supervision, yielding an intermediate reconstruction result that serves as a strong intermediate benchmark. Finally, the improved feature representation is further enhanced in the high-level space by a multi-scale context-aware encoder-decoder for facial reconstruction. Features in both spaces are explored progressively, from coarse to fine reconstruction information. The experimental results show that SLNet achieves competitive performance compared with state-of-the-art methods.

Copyright © 2023 Elsevier Ltd. All rights reserved.
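For concreteness, below is a minimal PyTorch sketch of the three-stage pipeline outlined in the abstract: shallow hierarchical feature extraction in the low-level space, an intermediate reconstruction supervised by the HR image, and a multi-scale encoder-decoder with self-attention that refines the result. All module names, layer widths, and the exact attention placement are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the three-stage SLNet pipeline described in the abstract.
# Module names, channel widths, and attention placement are illustrative assumptions.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Simple spatial self-attention over feature maps (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (b, hw, c//8)
        k = self.key(x).flatten(2)                        # (b, c//8, hw)
        v = self.value(x).flatten(2)                      # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)               # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                       # residual connection


class SLNetSketch(nn.Module):
    """Three stages: shallow features -> intermediate SR -> refined SR."""
    def __init__(self, scale=4, channels=64):
        super().__init__()
        # Stage 1: hierarchical shallow feature learning in the low-level space.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Stage 2: upsample and produce an intermediate reconstruction,
        # which is trained against the HR image (intermediate supervision).
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.intermediate_head = nn.Conv2d(channels, 3, 3, padding=1)
        # Stage 3: multi-scale encoder-decoder with self-attention to refine
        # the intermediate result in a higher-level feature space.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = SelfAttention2d(channels)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, lr):
        feats = self.shallow(lr)                          # stage 1
        intermediate = self.intermediate_head(self.upsample(feats))   # stage 2
        refined = intermediate + self.decoder(self.attention(self.encoder(intermediate)))
        return intermediate, refined                      # both outputs can carry a loss


if __name__ == "__main__":
    lr = torch.randn(1, 3, 32, 32)                        # e.g. a 32x32 LR face crop
    intermediate, refined = SLNetSketch(scale=4)(lr)
    print(intermediate.shape, refined.shape)              # both (1, 3, 128, 128)
```

Returning both the intermediate and the refined images allows a training loss on each output, which is one way to realize the HR supervision of the intermediate result described in the abstract.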
