SCGN: novel generative model using the convergence of latent space by training
Author(s) -
Kim H.,
Jung S.H.
Publication year - 2020
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2020.1333
Subject(s) - generative model , computer science , artificial intelligence , pattern recognition , convergence , Kullback–Leibler divergence
Generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) have recently been applied to various fields. However, the VAE and GAN models suffer from blur and mode collapse problems, respectively. Here, the authors propose a novel generative model, the self‐converging generative network (SCGN), to address these issues. Self‐converging means the convergence of latent vectors into themselves through being trained in pairs with the training data, by which the SCGN can reconstruct all training data. In the authors' model, the latent vectors and the weights of the generator are trained alternately. Specifically, the latent vectors are trained to follow a normal distribution, using a loss function derived from the Kullback–Leibler divergence together with a pixel‐wise loss. The weights of the generator are adjusted so that the generator reproduces the training data by means of a pixel‐wise loss. As a result, their SCGN did not fall into mode collapse, which occurs in GANs, and produced clearer images than VAEs because it does not rely on sampling. Moreover, the SCGN successfully learned the manifold of the dataset in extensive experiments with CelebA.
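
The abstract describes an alternating optimization: each training image is paired with its own latent vector, and the latent vectors and the generator weights are updated in turn, with a KL-derived term pushing the latents toward a normal distribution and a pixel-wise loss driving reconstruction. Below is a minimal PyTorch sketch of that training loop; the generator architecture, hyper-parameters, mini-batch handling, and the exact form of the KL term are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the alternating update described in the abstract.
# All architectural and numerical choices here are assumptions for illustration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator: maps a latent vector z to a flat image vector."""
    def __init__(self, latent_dim=64, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

def kl_to_standard_normal(z):
    # Penalty pushing the batch of latent vectors toward N(0, I), using the
    # batch mean/variance as a diagonal Gaussian; one plausible reading of the
    # paper's KL-derived loss, not necessarily the authors' exact formulation.
    mu = z.mean(dim=0)
    var = z.var(dim=0) + 1e-8
    return 0.5 * torch.sum(var + mu ** 2 - 1.0 - torch.log(var))

latent_dim, img_dim, n = 64, 784, 1000
x_train = torch.rand(n, img_dim)                  # stand-in training images
z = nn.Parameter(torch.randn(n, latent_dim))      # one latent vector per image
G = Generator(latent_dim, img_dim)

opt_z = torch.optim.SGD([z], lr=1e-2)             # updates latent vectors only
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3) # updates generator weights only
pixel_loss = nn.MSELoss()

for epoch in range(10):
    for idx in torch.randperm(n).split(100):      # mini-batches of paired (z, x)
        xb = x_train[idx]

        # Step 1: update the latent vectors (generator weights are not stepped)
        # with the pixel-wise reconstruction loss plus the KL-derived prior term.
        loss_z = pixel_loss(G(z[idx]), xb) + 1e-3 * kl_to_standard_normal(z[idx])
        opt_z.zero_grad(); loss_z.backward(); opt_z.step()

        # Step 2: update the generator weights (latents detached) so that it
        # reconstructs the paired training images via the pixel-wise loss.
        loss_g = pixel_loss(G(z[idx].detach()), xb)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In this reading, the per-image latent vectors play the role of an encoder-free embedding of the training set, so generation amounts to sampling a new vector from the normal distribution the latents were driven toward and passing it through the trained generator.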
