Open Access
Multi-View Transformation via Mutual-Encoding InfoGenerative Adversarial Networks
Author(s) -
Liang Sun,
Wenjing Kang,
Yuxuan Han,
Hongwei Ge
Publication year - 2018
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/ACCESS.2018.2845696
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
The problem of multi-view transformation is that of transforming the available source views of a given object into unknown target views. To solve this problem, an algorithm based on Mutual-Encoding InfoGenerative Adversarial Networks (MEIGANs) is proposed in this paper. A mutual-encoding representation learning network is proposed to obtain multi-view representations: its encoders guarantee that different views of the same object are mapped to a common representation, which carries sufficient information about the object itself. An InfoGenerative Adversarial Networks-based transformation network is proposed to transform the views of the given object; it injects the representation information into both the generative and discriminative models, guaranteeing that the synthetic transformed view matches the source view. The advantages of MEIGAN are that it bypasses direct mappings among different views, and that it can handle both missing views in the training data and the mapping between transformed views and source views. Finally, experiments on incomplete-to-complete data restoration tasks on MNIST and CelebA, and on multi-view angle transformation tasks on 3-D rendered chairs and multi-view clothing, show that the proposed algorithm yields satisfactory transformation results.
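The abstract's data flow — per-view encoders into a shared latent space, an InfoGAN-style generator conditioned on a target-view code, and a discriminator with an auxiliary head that recovers that code — can be sketched as below. This is a minimal NumPy illustration of the tensor shapes and wiring only, not the authors' implementation; all layer sizes, view names, and the use of untrained random weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random weights stand in for trained parameters in this sketch.
    return rng.standard_normal((in_dim, out_dim)) * 0.01

VIEW_DIM, Z_DIM, CODE_DIM = 784, 64, 8  # e.g. flattened 28x28 views (assumed sizes)

# One encoder per available view; both map into the SAME latent space,
# so different views of one object can share a common representation.
W_enc_front = linear(VIEW_DIM, Z_DIM)
W_enc_side = linear(VIEW_DIM, Z_DIM)

# InfoGAN-style generator: shared latent z plus a target-view code c -> view.
W_gen = linear(Z_DIM + CODE_DIM, VIEW_DIM)

# Discriminator heads: a real/fake score and a reconstruction of c
# (the mutual-information term that ties the output back to the source view).
W_disc = linear(VIEW_DIM, 1)
W_q = linear(VIEW_DIM, CODE_DIM)

def encode(view, W):
    return np.tanh(view @ W)

def generate(z, code):
    return np.tanh(np.concatenate([z, code], axis=-1) @ W_gen)

def discriminate(view):
    score = 1.0 / (1.0 + np.exp(-(view @ W_disc)))  # real/fake probability
    code_hat = view @ W_q                            # recovered view code
    return score, code_hat

# Forward pass: transform a front view into a synthetic side view.
front = rng.standard_normal((1, VIEW_DIM))
z = encode(front, W_enc_front)
side_code = np.eye(CODE_DIM)[[1]]                    # one-hot target-view code
fake_side = generate(z, side_code)
score, code_hat = discriminate(fake_side)
print(fake_side.shape, score.shape, code_hat.shape)
```

In training, the encoders would be fit so that `encode(front, W_enc_front)` and `encode(side, W_enc_side)` agree for the same object, which is what lets the method bypass direct view-to-view mappings and tolerate views missing from the training data.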
