Open Access
The Synthesis of Unpaired Underwater Images Using a Multistyle Generative Adversarial Network
Author(s) - Na Li, Ziqiang Zheng, Shaoyong Zhang, Zhibin Yu, Haiyong Zheng, Bing Zheng
Publication year - 2018
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2018.2870854
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Underwater image datasets are crucial in underwater vision research. Because of the strong absorption and scattering effects that occur underwater, ground truth such as depth maps, which can be easily collected in air, is very difficult to obtain in underwater environments. To address the lack of underwater ground truth, we propose a trainable end-to-end underwater multistyle generative adversarial network (UMGAN) that combines the strengths of the cycle-consistent adversarial network (CycleGAN) and conditional generative adversarial networks. The system generates multiple realistic underwater images from in-air images using a hybrid adversarial scheme and unpaired training. Moreover, through a style classifier and a conditional vector, our model can translate in-air images into underwater images of a specified turbidity or water style while retaining the main content and structural information of the in-air images. Furthermore, we define a color loss and include a structural similarity index measure (SSIM) loss so that the system preserves the content and structure of the original in-air images while transferring their backgrounds from air to water. Using UMGAN, we can take advantage of in-air ground truth and convert the corresponding in-air images into an underwater dataset with multiple water color styles. Our experiments demonstrate that our synthesized underwater images score higher on image-quality assessments than those produced by CycleGAN, WaterGAN, StarGAN, AdaIN, and other state-of-the-art methods. We also show that our synthesized underwater images, paired with in-air depths, can be applied to depth map estimation for real underwater images.
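The abstract mentions a color loss and an SSIM loss that together preserve in-air content while matching an underwater color style. The paper's exact formulations are not given here, so the following is only a minimal NumPy sketch under stated assumptions: a simplified global (non-windowed) SSIM term for structure preservation, and a hypothetical color loss defined as the L1 distance between per-channel means of the generated image and a reference image in the target water style.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified global SSIM computed over the whole image (no sliding
    # window, unlike the standard windowed SSIM). Inputs are float arrays
    # scaled to [0, 1]; returns a value in (-1, 1], with 1 for identical images.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def color_loss(generated, style_ref):
    # Hypothetical color loss (an assumption, not the paper's definition):
    # L1 distance between per-channel mean colors, pushing the generated
    # image's overall tint toward the target water style.
    g = generated.reshape(-1, generated.shape[-1]).mean(axis=0)
    s = style_ref.reshape(-1, style_ref.shape[-1]).mean(axis=0)
    return float(np.abs(g - s).mean())

def content_style_loss(in_air, generated, style_ref, w_ssim=1.0, w_color=1.0):
    # Structure is preserved w.r.t. the original in-air image (SSIM term),
    # while the color term matches the chosen water style; the weights
    # w_ssim and w_color are illustrative placeholders.
    return w_ssim * (1.0 - ssim_global(in_air, generated)) + \
           w_color * color_loss(generated, style_ref)
```

In a training loop these terms would be added to the adversarial and cycle-consistency losses; since global SSIM is bounded above by 1 and the color term is an absolute distance, the combined loss is non-negative and is zero only when structure and tint both match.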
