
Unsupervised many‐to‐many image‐to‐image translation across multiple domains
Author(s) - Lin Ye, Fu Keren, Ling Shenggui, Cheng Peng
Publication year - 2021
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12227
Subject(s) - image translation , computer science , image (mathematics) , domain (mathematical analysis) , artificial intelligence , image quality , encoder , computer vision , pattern recognition (psychology)
Unsupervised multi‐domain image‐to‐image translation aims to synthesize images across multiple domains without labelled data, a task that is more general and more complicated than one‐to‐one image mapping. However, existing methods mainly focus on reducing the large cost of modelling and do not pay enough attention to the quality of the generated images: in some target domains, their translation results may be unsatisfactory or may even cause model collapse. To improve image quality, an effective many‐to‐many mapping framework for unsupervised multi‐domain image‐to‐image translation is proposed. The proposed method has two key aspects. The first is a many‐to‐many architecture with a single domain‐shared encoder and several domain‐specialized decoders, which translates images across multiple domains effectively and simultaneously. The second is two proposed constraints, extended from one‐to‐one mappings, that further improve generation. All the evaluations demonstrate that the proposed framework is superior to existing methods and provides an effective solution for multi‐domain image‐to‐image translation.
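The shared-encoder/multi-decoder idea described in the abstract can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the dimensions, linear layers, and function names are all hypothetical placeholders, and the real model would use learned convolutional networks trained with adversarial and consistency losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration (not from the paper).
IMG_DIM, LATENT_DIM, N_DOMAINS = 64, 16, 3

# One domain-shared encoder: here a single weight matrix mapping an
# image from ANY source domain into a common latent space.
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.1

# Several domain-specialized decoders: one weight matrix per target domain,
# each reconstructing an image in that domain's style from the shared code.
W_dec = {d: rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.1
         for d in range(N_DOMAINS)}

def encode(x):
    """Map a flattened image to the domain-shared latent code."""
    return np.tanh(W_enc @ x)

def translate(x, target_domain):
    """Encode once with the shared encoder, then decode with the
    target domain's specialized decoder."""
    z = encode(x)
    return W_dec[target_domain] @ z

# One source image can be translated into all domains simultaneously,
# because the encoder is computed once and shared across decoders.
x = rng.standard_normal(IMG_DIM)
outputs = {d: translate(x, d) for d in range(N_DOMAINS)}
```

The design choice this sketch highlights is the cost saving the abstract alludes to: with N domains, a naive one-to-one scheme needs on the order of N(N-1) translators, whereas a shared encoder with N decoders scales linearly in N.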