
Creative and diverse artwork generation using adversarial networks
Author(s) - Chen Haibo, Zhao Lei, Qiu Lihong, Wang Zhizhong, Zhang Huiming, Xing Wei, Lu Dongming
Publication year - 2020
Publication title - IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2020.0014
Subject(s) - discriminator, computer science, flexibility (engineering), adversarial system, artificial intelligence, feature (linguistics), generative grammar, image (mathematics), pixel, creativity, style (visual arts), computer vision, mathematics, art, visual arts, linguistics, telecommunications, statistics, philosophy, detector, political science, law
Existing style transfer methods have achieved great success in artwork generation by transferring artistic styles onto everyday photographs while keeping their content unchanged. Despite this success, these methods have one inherent limitation: they cannot produce newly created image content, and thus lack creativity and flexibility. Generative adversarial networks (GANs), on the other hand, can synthesise images with new content but cannot specify the artistic style of those images. The authors consider combining style transfer with convolutional GANs to generate more creative and diverse artworks. Instead of simply concatenating the two networks (the first synthesising new content, the second transferring artistic style), which is inefficient and inconvenient, they design an end-to-end network called ArtistGAN that performs both operations at the same time and achieves visually better results. Moreover, to generate images of higher quality, they propose a bi-discriminator GAN containing a pixel discriminator and a feature discriminator, which constrain the generated image at the pixel level and the feature level, respectively. They conduct extensive experiments and comparisons to evaluate their methods quantitatively and qualitatively. The experimental results verify the effectiveness of their methods.
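The abstract does not give implementation details, but the bi-discriminator idea can be illustrated abstractly: the generator is penalised by two adversarial losses, one from a discriminator that sees raw pixels and one from a discriminator that sees extracted features. The following is a minimal NumPy sketch under assumed simplifications; the toy discriminators, the channel-mean "feature extractor", and the weighting factor `lam` are illustrative stand-ins, not the paper's actual networks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(pred, target):
    # Binary cross-entropy on sigmoid outputs.
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def pixel_discriminator(img, w):
    # Toy stand-in: scores the raw pixel map with a single linear unit.
    return sigmoid(float((img * w).sum()))

def feature_extractor(img):
    # Toy stand-in for a feature network: channel-wise means as "features".
    return img.mean(axis=(1, 2))

def feature_discriminator(feat, w):
    # Toy stand-in: scores the feature vector with a single linear unit.
    return sigmoid(float((feat * w).sum()))

def generator_loss(fake_img, w_pix, w_feat, lam=1.0):
    # The generator tries to make BOTH discriminators output "real" (label 1):
    # a pixel-level term plus a weighted feature-level term.
    l_pix = bce(np.array([pixel_discriminator(fake_img, w_pix)]), np.ones(1))
    l_feat = bce(np.array([feature_discriminator(feature_extractor(fake_img),
                                                 w_feat)]), np.ones(1))
    return l_pix + lam * l_feat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((3, 8, 8))        # a fake "generated" image
    w_pix = 0.01 * rng.standard_normal((3, 8, 8))
    w_feat = rng.standard_normal(3)
    print(generator_loss(img, w_pix, w_feat))
```

In a real training loop both discriminators would also be trained to separate real from generated images, and the two losses would backpropagate jointly through the generator, which is what lets the two constraints act "at the same time" rather than in two concatenated stages.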