Open Access
Parallel Residual Attention Network for Image Super-Resolution
Author(s) -
Yongsheng Duan,
Su Yang,
Jiahao Xu,
Wei Wu
Publication year - 2020
Publication title -
IOP Conference Series: Materials Science and Engineering
Language(s) - English
Resource type - Journals
eISSN - 1757-899X
pISSN - 1757-8981
DOI - 10.1088/1757-899x/782/5/052032
Subject(s) - computer science, artificial intelligence, computer vision, pattern recognition (psychology), convolutional neural network, artificial neural network, convolution (computer science), residual, block (permutation group theory), feature (linguistics), image (mathematics), low resolution, high resolution, resolution (logic), benchmark (surveying), process (computing), algorithm, mathematics
The application of convolutional neural networks (CNNs) to image super-resolution has achieved excellent results. Inspired by these results, we observe that most models only increase network depth in order to obtain deeper features. However, these models often fail to take full advantage of the original feature information from the low-resolution (LR) images. Moreover, image information is mainly divided into high-frequency and low-frequency components, and existing super-resolution networks are mostly single-branch models, which limits their ability to extract the various aspects of image information. To address these problems, we propose a parallel network model built from two components: the LR residual block (LRB) and the attention block (AB). The LRB continually replenishes the original LR features during deep feature extraction, so it can effectively extract more information from the LR image. The AB contains an attention mechanism that adjusts feature weights according to their importance, extracting more effective features between low-resolution (LR) and high-resolution (HR) images. Experiments on benchmark datasets show that our model performs better than state-of-the-art methods, achieving higher accuracy and visual improvements.
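To make the two building blocks more concrete, the following is a minimal PyTorch sketch of how an LR residual block that keeps re-injecting the original LR features, a channel-attention block, and their parallel arrangement might look. All module names, channel counts, block counts, and the fusion and upsampling details are illustrative assumptions based only on the abstract, not the authors' implementation.

# Illustrative sketch only (not the paper's code): hypothetical PyTorch modules
# for an "LR residual block" (LRB), an "attention block" (AB), and two parallel
# branches whose outputs are fused and upsampled.
import torch
import torch.nn as nn


class LRResidualBlock(nn.Module):
    """Hypothetical LRB: a residual unit that re-injects the original LR
    feature map at every block, so deep layers keep access to LR information."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor, lr_feat: torch.Tensor) -> torch.Tensor:
        # Local residual plus the original LR features replenished each block.
        return self.body(x) + x + lr_feat


class AttentionBlock(nn.Module):
    """Hypothetical AB: channel attention that re-weights features according
    to their importance (squeeze-and-excitation style)."""

    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel weights in [0, 1], broadcast over spatial dimensions.
        return x * self.fc(self.pool(x))


class ParallelBranchesSketch(nn.Module):
    """Two parallel branches (an LRB chain and an attention chain) whose
    outputs are fused and upsampled; the structure is an assumption."""

    def __init__(self, channels: int = 64, n_blocks: int = 4, scale: int = 2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.lr_blocks = nn.ModuleList(
            [LRResidualBlock(channels) for _ in range(n_blocks)]
        )
        self.att_branch = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                AttentionBlock(channels),
            )
            for _ in range(n_blocks)
        ])
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_image: torch.Tensor) -> torch.Tensor:
        lr_feat = self.head(lr_image)
        x = lr_feat
        for block in self.lr_blocks:
            x = block(x, lr_feat)          # branch 1: LRB chain
        y = self.att_branch(lr_feat)       # branch 2: attention chain
        return self.tail(self.fuse(torch.cat([x, y], dim=1)))


if __name__ == "__main__":
    sr = ParallelBranchesSketch()(torch.randn(1, 3, 32, 32))
    print(sr.shape)  # torch.Size([1, 3, 64, 64])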
