A novel hybrid augmented loss discriminator for text‐to‐image synthesis
Author(s) -
Gan Yan,
Ye Mao,
Liu Dan,
Yang Shangming,
Xiang Tao
Publication year - 2021
Publication title -
International Journal of Intelligent Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.291
H-Index - 87
eISSN - 1098-111X
pISSN - 0884-8173
DOI - 10.1002/int.22333
Subject(s) - discriminator , computer science , generator (circuit theory) , image (mathematics) , sample (material) , artificial intelligence , process (computing) , pattern recognition (psychology) , detector , telecommunications , power (physics) , physics , quantum mechanics , operating system , thermodynamics
In the text-to-image synthesis task, the discriminators in most existing generative adversarial network based methods tend to fall into a locally suboptimal state too early in training, resulting in poor-quality generated images. To address this problem, a hybrid augmented loss discriminator is designed. In this discriminator, to reduce the sensitivity of the discriminator's classification decision and make it attend to semantic and structural changes, the loss value of the fake sample is added to the loss value of the real sample. Moreover, to indirectly guide the generator, the loss value of the real sample is added to that of the fake sample. The loss value mixed from real and fake samples effectively augments signal transmission: it perturbs the discriminator's parameter updates during optimization and prevents the discriminator from falling into a locally suboptimal state prematurely. We then apply the proposed discriminator to two kinds of text-to-image synthesis tasks. Experimental results show that the proposed method helps the baseline models improve performance.
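The abstract does not give the exact formulation of the hybrid augmented loss, so the following is only a minimal sketch of one plausible reading in PyTorch: a standard binary cross-entropy discriminator loss whose real-sample and fake-sample terms are cross-augmented with each other. The mixing weight `alpha` and the function name are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_augmented_d_loss(d_real_logits, d_fake_logits, alpha=0.5):
    """Illustrative sketch of a hybrid augmented discriminator loss.

    The real-sample loss is augmented with the fake-sample loss and vice
    versa, so each branch of the discriminator update also carries a
    signal from the other branch.  `alpha` is a hypothetical mixing
    weight; the paper's exact formulation may differ.
    """
    # Standard BCE terms: real samples labeled 1, generated samples labeled 0.
    loss_real = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.ones_like(d_real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits))

    # Cross-augmentation: mix each loss term with a weighted copy of the other.
    augmented_real = loss_real + alpha * loss_fake
    augmented_fake = loss_fake + alpha * loss_real

    return augmented_real + augmented_fake
```

In this reading, the cross terms act as an extra signal passed between the real and fake branches of the discriminator objective; how that signal is weighted, detached, or routed to the generator is a design choice the abstract leaves unspecified.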