DHGAN: Generative adversarial network with dark channel prior for single‐image dehazing
Author(s) - Wu Wenxia, Zhu Jinxiu, Su Xin, Zhang Xuewu
Publication year - 2020
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.5263
Subject(s) - computer science , artificial intelligence , image (mathematics) , channel (broadcasting) , computer vision , generator (circuit theory) , benchmark (surveying) , image restoration , feature (linguistics) , perception , generative adversarial network , image processing , physics , computer network , power (physics) , linguistics , philosophy , geodesy , quantum mechanics , neuroscience , biology , geography
Summary Image dehazing technology has attracted much interest in the field of image processing. Most existing dehazing methods based on neural networks are inflexible and do not consider the loss in haze‐related feature space, sacrificing texture details and perceptual characteristics in the restored images. To overcome these weaknesses, we propose an image‐to‐image dehazing model based on generative adversarial networks (DHGAN) with dark channel prior. The DHGAN takes a hazy image as input and directly outputs a haze‐free image by applying a U‐net‐based generator. In addition to pixelwise loss and perceptual loss, we introduce a dark‐channel‐minimizing loss that constrains the generated images to the manifold of natural images, leading to better texture details and perceptual properties. Comparative experiments on benchmark images with several state‐of‐the‐art dehazing methods demonstrate the effectiveness of the proposed DHGAN.
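The abstract combines a pixelwise loss with a dark‐channel‐minimizing term, motivated by the dark channel prior: in natural haze‐free images, the per‐pixel minimum over color channels within a local patch tends toward zero. The sketch below illustrates how such a term could be implemented in PyTorch. The patch size, loss weight, choice of L1 for the pixelwise term, and function names are illustrative assumptions, not details taken from the paper; the perceptual and adversarial terms described in the abstract are omitted for brevity.

```python
# Minimal sketch (PyTorch) of a dark-channel-minimizing loss combined with a
# pixelwise term, assuming images are (N, 3, H, W) tensors in [0, 1].
import torch
import torch.nn.functional as F


def dark_channel(img: torch.Tensor, patch_size: int = 15) -> torch.Tensor:
    """Dark channel prior: per-pixel minimum over RGB, then a local minimum
    over a patch (min-pooling implemented via negated max-pooling)."""
    per_pixel_min = img.min(dim=1, keepdim=True).values  # (N, 1, H, W)
    pad = patch_size // 2
    return -F.max_pool2d(-per_pixel_min, patch_size, stride=1, padding=pad)


def dehazing_loss(generated: torch.Tensor,
                  target: torch.Tensor,
                  lambda_dc: float = 0.1) -> torch.Tensor:
    """Pixelwise L1 term plus a term that pushes the dark channel of the
    generated image toward zero, as expected of haze-free natural images.
    lambda_dc is a hypothetical weighting, not a value from the paper."""
    pixel_loss = F.l1_loss(generated, target)
    dc_loss = dark_channel(generated).mean()
    return pixel_loss + lambda_dc * dc_loss
```

In a training loop, this loss would be evaluated on the U‐net generator's output and added to the adversarial and perceptual objectives before backpropagation.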
