
Pre‐training of gated convolution neural network for remote sensing image super‐resolution
Author(s) - Peng Yali, Wang Xuning, Zhang Junwei, Liu Shigang
Publication year - 2021
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12096
Subject(s) - computer science , residual , convolution (computer science) , block (permutation group theory) , remote sensing , convolutional neural network , artificial intelligence , artificial neural network , image (mathematics) , noise (video) , image resolution , computer vision , pattern recognition (psychology) , algorithm , geography , mathematics , geometry
Many very deep neural networks have been proposed to achieve accurate super‐resolution (SR) reconstruction of remote sensing images. However, the deeper an SR network is, the more difficult it is to train. Moreover, the low‐resolution inputs and features contain abundant low‐frequency information and noise, which are propagated through the network and treated the same as the high‐frequency information. To solve these problems, a novel single‐image super‐resolution algorithm named pre‐training of gated convolution neural network (PGCNN) is proposed for remote sensing images. The proposed PGCNN consists of several residual blocks with long skip connections. Each residual block contains an additional well‐designed gated convolution unit, which assigns different weights to high‐frequency and low‐frequency information to control the transmission of information, making the main network focus on learning high‐frequency information. Experimental results on the remote sensing datasets SIRI‐WHU, NWPU‐RESISC45, RSSCN7 and UC‐Merced‐Land‐Use show that, compared with several state‐of‐the‐art methods, the proposed PGCNN achieves improvements in both accuracy and visual quality.
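
The abstract does not specify the exact layer configuration of the gated convolution unit or the residual blocks. The following PyTorch code is only a minimal sketch of one plausible reading: a sigmoid gate re‐weights features inside each residual block, several blocks are stacked, and a long skip connection feeds shallow features to an upsampling tail. All class names, channel counts, the number of blocks, and the pixel‐shuffle head are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class GatedConvUnit(nn.Module):
    # Hypothetical gated convolution: a sigmoid-activated gate weights the
    # convolved features, so informative (high-frequency) responses can pass
    # with larger weights than low-frequency or noisy ones.
    def __init__(self, channels):
        super().__init__()
        self.feature = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return self.feature(x) * torch.sigmoid(self.gate(x))

class GatedResidualBlock(nn.Module):
    # Residual block with an additional gated convolution unit on its branch.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.gate_unit = GatedConvUnit(channels)

    def forward(self, x):
        return x + self.gate_unit(self.body(x))

class PGCNNSketch(nn.Module):
    # Stack of gated residual blocks with a long skip connection, followed by
    # a pixel-shuffle upsampler (assumed head/tail layers; not given in the abstract).
    def __init__(self, channels=64, num_blocks=8, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.Sequential(
            *[GatedResidualBlock(channels) for _ in range(num_blocks)]
        )
        self.tail = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        shallow = self.head(x)
        deep = self.blocks(shallow)
        return self.tail(shallow + deep)  # long skip connection

# Example usage: a 2x upscaling pass on a low-resolution RGB patch.
if __name__ == "__main__":
    model = PGCNNSketch(channels=64, num_blocks=8, scale=2)
    lr = torch.randn(1, 3, 48, 48)
    sr = model(lr)
    print(sr.shape)  # torch.Size([1, 3, 96, 96])

The gating here is one common way to realise "different weights for high‐ and low‐frequency information"; the paper's actual unit and its pre‐training procedure may differ.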