Reconstruction of Generative Adversarial Networks in Cross Modal Image Generation with Canonical Polyadic Decomposition
Author(s) -
Ruixin Ma,
Junying Lou,
Peng Li,
Jing Gao
Publication year - 2021
Publication title -
Wireless Communications and Mobile Computing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.42
H-Index - 64
eISSN - 1530-8677
pISSN - 1530-8669
DOI - 10.1155/2021/8868781
Subject(s) - computer science , autoencoder , modal , convolution (computer science) , task (project management) , algorithm , decomposition , image (mathematics) , deep learning , truncation (statistics) , artificial intelligence , machine learning , artificial neural network
Generating pictures from text is an interesting, classic, and challenging task. Benefiting from the development of generative adversarial networks (GANs), the generation quality on this task has improved greatly, and many strong cross-modal GAN models have been proposed. These models add extensive layers and constraints to produce impressive generated images. However, the complexity and computational cost of existing cross-modal GANs are too high for deployment on mobile terminals. To solve this problem, this paper designs a compact cross-modal GAN based on canonical polyadic (CP) decomposition. We replace an original convolution layer with three small convolution layers and use an autoencoder to stabilize and speed up training. Experimental results show that our model compresses both parameters and FLOPs to about 20% of the original without loss of quality in the generated images.
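The abstract's central idea, replacing one convolution with three smaller layers via a CP-style factorization, can be illustrated with a back-of-the-envelope parameter count. The sketch below is a hypothetical illustration, not the paper's implementation: the layer shapes (`c_in`, `c_out`, `k`) and the rank are assumed values, and the three-layer split (1×1 pointwise, k×k depthwise on the rank channels, 1×1 pointwise) is one common CP factorization of a convolution.

```python
# Hedged sketch: parameter savings from a CP-style three-layer
# factorization of a single k x k convolution. All numbers here
# (channel counts, kernel size, rank) are illustrative assumptions.

def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def cp_factorized_params(c_in, c_out, k, rank):
    """Three small layers replacing one k x k convolution:
    a 1x1 conv (c_in -> rank), a k x k depthwise conv on the rank
    channels, and a 1x1 conv (rank -> c_out)."""
    return c_in * rank + rank * k * k + rank * c_out

# Example layer; rank 224 is chosen so the factorized footprint lands
# near the ~20% figure reported in the abstract.
c_in, c_out, k, rank = 256, 256, 3, 224
orig = conv_params(c_in, c_out, k)
comp = cp_factorized_params(c_in, c_out, k, rank)
print(f"original: {orig}  factorized: {comp}  ratio: {comp / orig:.2%}")
```

For these assumed shapes the factorized layers hold roughly a fifth of the original parameters; lower ranks compress further at the cost of approximation error, which is presumably where the paper's autoencoder-assisted training helps.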