
Deep Residual Fusion Network for Single Image Super-Resolution
Author(s) -
Jia Wang,
Chuwen Lan,
Zehua Gao
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1693/1/012164
Subject(s) - residual , discriminative model , computer science , fuse (electrical) , artificial intelligence , convolutional neural network , pattern recognition (psychology) , feature (linguistics) , field (mathematics) , fusion , feature extraction , data mining , algorithm , mathematics , engineering , linguistics , philosophy , pure mathematics , electrical engineering
Convolutional neural networks have been applied to single-image super-resolution (SISR) and have achieved a series of outstanding results. However, most SISR research still pursues wider and deeper network structures without paying enough attention to the correlations between different features. To address this problem, a deep residual fusion network (DRFN) is proposed for more powerful feature expression and feature learning. Specifically, we propose a feature fusion group (FFG) structure, which effectively uses the relevant features extracted from the residual attention groups (RAGs) and fuses them into more discriminative representations. Each residual attention group (RAG) includes a channel attention module (CAM) and a spatial attention module (SAM), which use attention mechanisms to refine features. DRFN also makes full use of nested residual connections, which let redundant low-frequency information bypass the main computation and thereby focus it on the more important high-frequency components. Extensive experimental results demonstrate the effectiveness of our model, which achieves excellent performance in terms of both quantitative metrics and visual quality.
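To make the described building blocks concrete, the sketch below shows one plausible PyTorch realization of a residual attention group (channel attention plus spatial attention with a skip connection) and a feature fusion group that concatenates several RAG outputs and fuses them with a 1x1 convolution. The layer sizes, reduction ratio, number of RAGs, and fusion strategy are illustrative assumptions based only on the abstract, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweight channels using globally pooled statistics (CAM-style)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    """Reweight spatial positions using pooled channel maps (SAM-style)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask


class ResidualAttentionGroup(nn.Module):
    """Conv block refined by channel and spatial attention, with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),
            SpatialAttention(),
        )

    def forward(self, x):
        # Residual connection lets low-frequency content bypass the block.
        return x + self.body(x)


class FeatureFusionGroup(nn.Module):
    """Run several RAGs, concatenate their outputs, and fuse with a 1x1 conv."""
    def __init__(self, channels, n_rags=3):
        super().__init__()
        self.rags = nn.ModuleList(
            ResidualAttentionGroup(channels) for _ in range(n_rags)
        )
        self.fuse = nn.Conv2d(channels * n_rags, channels, 1)

    def forward(self, x):
        feats, out = [], x
        for rag in self.rags:
            out = rag(out)
            feats.append(out)
        # Nested residual connection over the whole group.
        return x + self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # Example: a 64-channel feature map passes through one fusion group.
    x = torch.randn(1, 64, 48, 48)
    print(FeatureFusionGroup(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```

The 1x1 fusion convolution stands in for the abstract's claim that FFG fuses correlated features from multiple RAGs into a more discriminative representation; the outer skip connection mirrors the nested residual design that routes low-frequency information around the group.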