Open Access
Efficient Channel Attention Feature Fusion for Lightweight Single Image Super Resolution
Author(s) -
Lingxiu Jiang,
Yue Zhou
Publication year - 2021
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1828/1/012020
Subject(s) - computer science , artificial intelligence , pattern recognition , convolutional neural network , channel attention , feature extraction , residual block , multi-scale feature fusion , super resolution
Recent advances in deep learning and convolutional neural networks have greatly improved the reconstruction performance of single image super-resolution (SISR) compared with traditional methods. However, complicated models with huge numbers of parameters limit the application of those methods in real-world scenes. In this paper, we propose an efficient channel attention feature fusion method for a lightweight super-resolution network (ELSRN) for SISR. We reduce the network's parameter count through several modules, including binary cascading feature fusion. In addition, we build an efficient inverted residual block (EIRB) and stack several EIRBs to capture effective feature information at different scales. Finally, we fuse the multi-scale features pairwise, step by step, and refine the resulting representation using features from all scales. Experiments demonstrate that the EIRB module and the binary cascading method are effective, and that the network achieves a good trade-off between reconstruction performance and model size.
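The abstract does not spell out the attention mechanism, but "efficient channel attention" commonly refers to ECA-style attention: global average pooling produces one descriptor per channel, a small 1-D convolution across neighbouring channels (with no dimensionality reduction) produces per-channel weights, and the feature map is rescaled by those weights. The following NumPy sketch illustrates that idea under those assumptions; the kernel weights here are placeholders, not the paper's learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eca_attention(feat, kernel_size=3):
    """ECA-style channel attention on a (C, H, W) feature map (sketch).

    Global average pooling yields one descriptor per channel; a 1-D
    convolution over adjacent channels produces attention weights,
    which then rescale the corresponding channels.
    """
    c, h, w = feat.shape
    # Channel descriptors via global average pooling -> shape (C,)
    desc = feat.mean(axis=(1, 2))
    # Shared 1-D kernel across channels (placeholder weights; in a real
    # network these would be learned)
    kernel = np.full(kernel_size, 1.0 / kernel_size)
    pad = kernel_size // 2
    padded = np.pad(desc, pad, mode="edge")
    conv = np.array([np.dot(padded[i:i + kernel_size], kernel)
                     for i in range(c)])
    weights = sigmoid(conv)
    # Rescale each channel by its attention weight
    return feat * weights[:, None, None]
```

Because the 1-D convolution only mixes a few neighbouring channel descriptors, the module adds only `kernel_size` parameters, which is why this style of attention suits lightweight SISR networks.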
