
GD-StarGAN: Multi-domain image-to-image translation in garment design
Author(s) -
Yangyun Shen,
Runnan Huang,
Wenmei Huang
Publication year - 2020
Publication title -
PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0231719
Subject(s) - image translation , computer science , image (mathematics) , artificial intelligence , computer vision , image texture , image quality , image processing
In the field of fashion design, generating a garment image from a given texture essentially amounts to reshaping the texture image, and image-to-image translation based on Generative Adversarial Networks (GANs) can do this well, saving fashion designers a great deal of time and effort. GAN-based image-to-image translation has made great progress in recent years. One such model, StarGAN, performs multi-domain image-to-image translation using only a single generator and a single discriminator. This paper details how StarGAN can be applied to garment design: users only need to input an image and a label for the garment type to generate garment images with the texture of the input image. However, the quality of the generated images was found to be unsatisfactory. This paper therefore improves the structure of the StarGAN generator and the StarGAN loss function, obtaining a model better suited to garment design, called GD-StarGAN. Using a garment dataset with seven categories, this paper demonstrates that GD-StarGAN performs much better than StarGAN on garment design, especially with respect to texture.
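The abstract's key mechanism is a single generator conditioned on a target garment-type label. The sketch below illustrates that idea in PyTorch: the one-hot label is broadcast to spatial feature maps and concatenated with the input texture image before being fed to the generator. This is a minimal illustration assuming a PyTorch implementation; the layer configuration and filter counts are illustrative assumptions, not the GD-StarGAN architecture, and only the seven-category label size is taken from the paper's dataset description.

```python
# Minimal sketch (not the authors' code) of a StarGAN-style label-conditioned generator.
# The target-domain label is expanded to spatial maps and concatenated with the image.
import torch
import torch.nn as nn


class ConditionalGenerator(nn.Module):
    def __init__(self, img_channels=3, num_domains=7, base_filters=64):
        super().__init__()
        # Input channels = image channels + one channel per domain label.
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, base_filters, 7, 1, 3),
            nn.InstanceNorm2d(base_filters, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_filters, base_filters, 3, 1, 1),
            nn.InstanceNorm2d(base_filters, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_filters, img_channels, 7, 1, 3),
            nn.Tanh(),  # output image scaled to [-1, 1]
        )

    def forward(self, x, label):
        # label: (batch, num_domains) one-hot vector for the target garment type.
        # Broadcast it to (batch, num_domains, H, W) and concatenate with the image.
        b, _, h, w = x.size()
        label_map = label.view(b, -1, 1, 1).expand(b, label.size(1), h, w)
        return self.net(torch.cat([x, label_map], dim=1))


if __name__ == "__main__":
    g = ConditionalGenerator()
    texture = torch.randn(1, 3, 128, 128)            # input texture image
    target = torch.zeros(1, 7); target[0, 2] = 1.0   # target garment category (one-hot)
    out = g(texture, target)
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```

In the full StarGAN setup, the same generator is reused for every domain, and a single discriminator both judges realism and classifies the domain of its input; GD-StarGAN additionally modifies the generator structure and loss terms, which are not reproduced here.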