
Iterative attribute augmentation network for face image super resolution
Author(s) -
Teng Zi,
Yu Xiaosheng,
Wu Chengdong
Publication year - 2021
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12285
Subject(s) - artificial intelligence , computer science , computer vision , image (mathematics) , convolutional neural network , pixel , pattern recognition (psychology) , facial recognition system , mathematics
Although a series of deep-learning-based methods has recently improved the performance of face super resolution (FSR), most of these methods cannot recover essential face attributes accurately, especially when super-resolving a very low-resolution (LR) face image (16 × 16 pixels) to its 8× high-resolution (HR) version. To address this issue, a novel alternating optimisation algorithm that estimates facial attributes and restores facial images within a single network is presented. Specifically, two convolutional neural modules (denoted Restorer and Corrector) are constructed, and these two modules are alternated repeatedly to form an end-to-end trainable network. The Restorer module reconstructs face images based on the estimated facial attributes, while the Corrector module corrects the estimated attributes with the help of the restored face image. Since the Corrector can exploit information from both the previously estimated attributes and the FSR image, the estimated attributes are iteratively corrected and gradually approach the ground truth. Moreover, a new attribute transformation scheme is designed to introduce attribute information into the Restorer, in which facial attribute vectors act as control conditions that explicitly guide the face image restoration. Extensive experiments on the well-known CelebA dataset demonstrate that the proposed method provides superior FSR performance in both quantitative and qualitative measurements.
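The alternating Restorer/Corrector loop described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's CNN architecture: the linear maps `W_img`, `W_attr`, and `W_corr`, the dimensions, and the additive attribute fusion are all illustrative stand-ins for the learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
LR_DIM, HR_DIM, ATTR_DIM = 16, 64, 8

# Random stand-ins for trained module weights
W_img = rng.normal(scale=0.1, size=(HR_DIM, LR_DIM))    # Restorer: LR features -> HR estimate
W_attr = rng.normal(scale=0.1, size=(HR_DIM, ATTR_DIM)) # attribute-transformation branch
W_corr = rng.normal(scale=0.1, size=(ATTR_DIM, HR_DIM)) # Corrector: HR estimate -> attribute update

def restorer(lr_feat, attrs):
    """Reconstruct an HR estimate conditioned on the current attribute vector.

    The attribute vector acts as an explicit control condition, here
    fused by simple addition in feature space (a stand-in for the
    paper's attribute transformation scheme).
    """
    return np.tanh(W_img @ lr_feat + W_attr @ attrs)

def corrector(hr_est, attrs):
    """Correct the attribute estimate using the freshly restored image.

    Uses both the previous attribute estimate and the restored image,
    so attributes can be refined iteratively.
    """
    return np.clip(attrs + 0.1 * np.tanh(W_corr @ hr_est), -1.0, 1.0)

def iterative_fsr(lr_feat, attrs, n_iters=3):
    """Alternate Restorer and Corrector for a fixed number of steps."""
    hr_est = None
    for _ in range(n_iters):
        hr_est = restorer(lr_feat, attrs)  # restore image given attributes
        attrs = corrector(hr_est, attrs)   # refine attributes given image
    return hr_est, attrs

lr_feat = rng.normal(size=LR_DIM)   # toy LR feature vector
attrs0 = np.zeros(ATTR_DIM)         # initial (unknown) attribute estimate
hr_est, attrs = iterative_fsr(lr_feat, attrs0)
```

In the actual network both modules are convolutional and trained end to end, so each unrolled iteration shares the goal of pushing the attribute estimate toward the ground truth while improving the restored image.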