Open Access
Face Attribute Manipulation Based on Self-Perception GAN
Author(s) -
Xiaoguang Tu,
Yan Luo,
H S Zhang,
Wenjie Ai,
Zheng Ma,
Mei Xie
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1518/1/012017
Subject(s) - computer science, generator (circuit theory), face (sociological concept), artificial intelligence, image (mathematics), computer vision, perception, generative grammar, pattern recognition (psychology), inverse, quality (philosophy), mathematics, psychology, social science, power (physics), philosophy, physics, geometry, epistemology, quantum mechanics, neuroscience, sociology
Manipulating human facial images between two domains is an important and interesting problem in computer vision. Most existing methods address this issue by applying two generators, or one generator with extra conditional inputs, to generate face images with manipulated attributes. In this paper, we propose a novel self-perception method based on Generative Adversarial Networks (GANs) for automatic face attribute reversal: given a face image with an arbitrary facial attribute, the model generates a new face image with the reversed attribute. The proposed method takes face images as inputs and employs a single generator without conditioning on other inputs. Benefiting from a multi-loss strategy and a modified U-Net structure, our model is stable in training and capable of preserving the finer details of the original face images. Extensive experimental results demonstrate the effectiveness of our method in generating high-quality, realistic attribute-reversed face images.
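The abstract outlines the core design: a single, unconditioned U-Net-style generator trained with several losses combined. The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation; the layer configuration, the choice of adversarial plus L1 reconstruction losses, and the loss weight are illustrative assumptions, since the abstract does not specify them.

import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """U-Net-style generator: encoder-decoder with a skip connection,
    so fine details of the input face carry through to the output."""
    def __init__(self, ch=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1),
                                  nn.InstanceNorm2d(ch * 2), nn.LeakyReLU(0.2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1),
                                  nn.InstanceNorm2d(ch), nn.ReLU())
        # decoder sees enc1 features concatenated with dec2 output (skip connection)
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(ch * 2, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return self.dec1(torch.cat([d2, e1], dim=1))

class Discriminator(nn.Module):
    """PatchGAN-style discriminator scoring local realism of a face image."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 4, 1, 1))

    def forward(self, x):
        return self.net(x)

# One illustrative training step combining two losses (a stand-in for the
# paper's multi-loss strategy): an adversarial term for realism and an L1
# term that keeps the output close to the input face.
G, D = UNetGenerator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, rec_loss = nn.MSELoss(), nn.L1Loss()
lambda_rec = 10.0  # assumed weight; the paper's exact weights are not given

x = torch.randn(4, 3, 128, 128)   # batch of input face images (placeholder data)
fake = G(x)                       # attribute-reversed candidates

# Discriminator update: real faces scored toward 1, generated faces toward 0.
d_real, d_fake = D(x), D(fake.detach())
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
         adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: fool the discriminator while preserving image content.
d_out = D(fake)
loss_g = adv_loss(d_out, torch.ones_like(d_out)) + lambda_rec * rec_loss(fake, x)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()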
