Open Access
Informative Multimodal Unsupervised Image-to-Image Translation
Author(s) -
Tien Tai Doan,
Guillaume Ghyselinck,
Blaise Hanczar
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5121/csit.2021.110503
Subject(s) - computer science , artificial intelligence , image translation , image (mathematics) , automatic image annotation , computer vision , image quality , image retrieval , annotation , pattern recognition , mathematics
We propose a new method of multimodal image translation, called InfoMUNIT, which is an extension of the state-of-the-art method MUNIT. Our method allows controlling the style of the generated images and improves their quality and diversity. It learns to maximize the mutual information between a subset of the style code and the distribution of the output images. Experiments show that our model can not only translate one image from the source domain to multiple images in the target domain but also explore and manipulate features of the outputs without annotation. Furthermore, it achieves superior diversity and competitive image quality compared to state-of-the-art methods in multiple image translation tasks.
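To make the key idea in the abstract concrete, the sketch below illustrates one common way such a mutual-information term is implemented (in the style of InfoGAN): an auxiliary network tries to recover the "informative" subset of the style code from the generated image, and its reconstruction error serves as a lower-bound surrogate that is minimized alongside the usual translation losses. This is a minimal illustrative sketch, not the authors' implementation; the generator signature, network sizes, and names such as QNetwork and mutual_info_loss are assumptions.

```python
# Minimal sketch of an InfoGAN-style mutual-information term for a MUNIT-like
# translator. Assumes a generator G(content, style) -> image; all module sizes
# and names here are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Predicts the informative style dimensions back from a generated image."""
    def __init__(self, info_dims=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, info_dims)

    def forward(self, img):
        h = self.features(img).flatten(1)  # (B, 64)
        return self.head(h)                # (B, info_dims)

def mutual_info_loss(generator, q_net, content, style_dim=8, info_dims=4):
    """Surrogate for maximizing I(c_info; G(content, style)).

    The style code is split into an 'informative' part c_info and a free
    remainder; q_net regresses c_info from the translated image, and the
    reconstruction error is minimized jointly with the usual GAN losses.
    """
    style = torch.randn(content.size(0), style_dim, device=content.device)
    c_info = style[:, :info_dims]                  # controlled subset of the style code
    fake = generator(content, style)               # translated image
    c_pred = q_net(fake)
    return nn.functional.mse_loss(c_pred, c_info)  # lower reconstruction error ~ higher MI
```

In this formulation, varying the controlled dimensions of the style code at test time produces predictable changes in the output, which is the kind of unsupervised feature manipulation the abstract describes.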
