
Image quality enhancement using hybrid attention networks
Author(s) - Wang Jiachen, Yang Yingyun, Hua Yan
Publication year - 2022
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12368
Subject(s) - computer science , artificial intelligence , convolutional neural network , image quality , image resolution , feature (machine learning) , pattern recognition , computer vision , mathematics
Image quality enhancement aims to recover rich details from degraded images and is applied in many fields, such as medical imaging, film production and autonomous driving. Deep convolutional neural networks (CNNs) have enabled rapid progress in image quality enhancement. However, most existing CNN-based methods lack versatility, as their network designs target individual subtasks. Moreover, they often fail to balance precise spatial representations against the necessary contextual information. To address these problems, this paper proposes a novel unified framework for low-light image enhancement, image denoising and image super-resolution. The core of the architecture is a residual hybrid attention block (RHAB), which consists of several dynamic down-sampling modules (DDMs) and hybrid attention up-sampling modules (HAUMs). Specifically, multi-scale feature maps fully interact with each other through nested subnetworks, so that both high-resolution spatial details and high-level contextual information can be combined to improve the representation ability of the network. On this basis, a hybrid attention network (HAN) is proposed, and evaluations on the three separate subtasks demonstrate its good performance. Extensive experiments on the authors' synthetic dataset, a more complex benchmark, show that the method achieves better quantitative and visual results than other state-of-the-art methods.
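The abstract does not specify how the hybrid attention gates inside the RHAB are computed. As a minimal illustrative sketch, one common ingredient of such attention blocks is a channel-attention gate in the squeeze-and-excitation style: each feature channel is re-weighted by a gate derived from its global average. The code below is a plain-Python assumption for exposition, not the authors' implementation; `channel_attention` and its signature are hypothetical names.

```python
import math

def sigmoid(x):
    """Logistic function, used here as the gating nonlinearity."""
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """Hypothetical channel-attention gate (squeeze-and-excitation style).

    feature_maps: list of 2-D channels, each a list of rows of floats.
    Each channel is globally average-pooled to one scalar ("squeeze"),
    passed through a sigmoid gate, and the gate rescales the whole
    channel ("excitation").
    """
    gated = []
    for channel in feature_maps:
        values = [v for row in channel for v in row]
        mean = sum(values) / len(values)      # global average pool
        gate = sigmoid(mean)                  # per-channel weight in (0, 1)
        gated.append([[v * gate for v in row] for row in channel])
    return gated
```

A full hybrid attention block would typically combine such a channel gate with a spatial gate and a residual connection, but the paper's exact formulation is not given in the abstract.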