Open Access
Generative adversarial image super‐resolution network for multiple degradations
Author(s) -
Lin Hong,
Fan Jing,
Zhang Yangyi,
Peng Dewei
Publication year - 2020
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2020.1176
Subject(s) - discriminator , computer science , artificial intelligence , preprocessor , generator (circuit theory) , image (mathematics) , kernel (algebra) , generative adversarial network , consistency (knowledge bases) , pattern recognition (psychology) , image restoration , noise (video) , computer vision , image processing , mathematics , detector , telecommunications , power (physics) , physics , quantum mechanics , combinatorics
Existing single-image super-resolution methods based on deep learning cannot handle multiple degradations well, and the generated images tend to be blurred and over-smoothed owing to poor generalisation ability. In this study, the authors propose a method based on a generative adversarial network (GAN) to deal with multiple degradations. In the generator network, the blur kernel and noise level are taken as additional inputs through a dimensionality-stretching preprocessing strategy, making full use of prior knowledge. In addition, three discriminators at different scales are used in the discriminator network, so that the reconstruction of image details receives attention while the global consistency of the image is preserved. To address the vanishing-gradient and mode-collapse problems of GAN-based methods, a gradient penalty term is added to the loss function. Extensive experiments demonstrate that the proposed method not only handles multiple degradations with state-of-the-art performance but also delivers visually credible results in real scenes.
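The dimensionality-stretching strategy mentioned in the abstract is not detailed here; a common form of it (used by prior multiple-degradation methods such as SRMD, which this description resembles) projects the blur kernel onto a low-dimensional PCA basis, appends the noise level, and stretches the resulting vector into per-pixel maps concatenated with the LR image. The sketch below illustrates that general idea; the function name, PCA basis, and kernel size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stretch_degradation_maps(kernel, noise_level, pca_basis, h, w):
    """Illustrative dimensionality stretching (assumed SRMD-style):
    project the flattened blur kernel onto a PCA basis, append the
    noise level, then stretch each coefficient into an h x w map so
    the maps can be concatenated with the LR image channels."""
    k = kernel.reshape(-1)               # flatten the k x k blur kernel
    code = pca_basis @ k                 # t-dimensional PCA projection
    code = np.append(code, noise_level)  # (t + 1)-dim degradation vector
    # Stretch each coefficient into a constant h x w feature map
    maps = np.broadcast_to(code[:, None, None], (code.size, h, w))
    return maps

# Hypothetical usage: 15x15 mean kernel, random t=15 PCA basis
kernel = np.ones((15, 15)) / 225.0
pca_basis = np.random.randn(15, 225)
maps = stretch_degradation_maps(kernel, 0.05, pca_basis, 32, 32)
# maps has shape (16, 32, 32), ready to concatenate with the LR input
```

Each output channel is constant over the spatial dimensions, so the generator sees the degradation parameters at every pixel alongside the image content.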
