Open Access
Multi-Branch Neural Architecture Search for Lightweight Image Super-Resolution
Author(s) - Joon Young Ahn, Nam Ik Cho
Publication year - 2021
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2021.3127437
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Abstract - Deep convolutional neural networks (CNNs) are widely used to improve the performance of image restoration tasks, including single-image super-resolution (SISR). Generally, researchers manually design deeper and more complex CNNs to further improve performance on a given problem. Instead of this hand-crafted architecture design, neural architecture search (NAS) methods have been developed to automatically find an optimal architecture for a given task. For example, NAS-based SR methods find optimized network connections and operations through reinforcement learning (RL) or evolutionary algorithms (EA). These methods can discover an optimal network automatically, but most of them require a very long search time. In this paper, we propose a new search method for SISR that significantly reduces the overall design time by applying a weight-sharing scheme. We also employ a multi-branch structure to enlarge the search space for capturing multi-scale features, which yields better reconstruction in textured regions. Experiments show that the proposed method finds an optimal SISR network about twenty times faster than existing methods while achieving comparable performance in terms of PSNR versus the number of parameters. Visual comparisons confirm that the obtained SISR network reconstructs texture areas better than previous methods, owing to the enlarged search space for multi-scale features.
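
The sketch below is a minimal illustration, not the authors' actual search space or code: it shows how a multi-branch block with parallel convolutions of different kernel sizes can capture multi-scale features, and how a weight-sharing search can evaluate different sub-architectures by toggling branches while reusing the same convolution weights. All class and argument names here (e.g. `MultiBranchBlock`, `active`) are hypothetical.

```python
# Hypothetical multi-branch block for a weight-sharing NAS supernet.
# Each branch is a convolution with a different receptive field; an
# architecture choice decides which branches contribute, while every
# sampled sub-network reuses the same shared weights.
import torch
import torch.nn as nn


class MultiBranchBlock(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        # One candidate branch per kernel size; weights are shared across
        # all architectures sampled during the search.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, active=(True, True, True)):
        # `active` plays the role of an architecture decision: the search
        # only selects which branches to sum, so no retraining is needed
        # when a different sub-network is evaluated.
        out = sum(b(x) for b, a in zip(self.branches, active) if a)
        return self.act(out) + x  # residual connection


if __name__ == "__main__":
    block = MultiBranchBlock(channels=64)
    x = torch.randn(1, 64, 48, 48)
    # Two different "architectures" evaluated with the same shared weights.
    y_full = block(x, active=(True, True, True))
    y_sub = block(x, active=(False, True, False))
    print(y_full.shape, y_sub.shape)
```

In this toy setting, the branches with larger kernels model coarser-scale context, which is the intuition behind using a multi-branch search space to improve reconstruction of textured regions.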
