
ApprGAN: appearance‐based GAN for facial expression synthesis
Author(s) -
Yao Peng,
Hujun Yin
Publication year - 2019
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2018.6576
Subject(s) - image synthesis , computer science , animation , artificial intelligence , facial expression , consistency (knowledge bases) , texture synthesis , computer vision , computer graphics , view synthesis , image (mathematics) , computer graphics (images) , image processing , image texture , rendering (computer graphics)
Facial expression synthesis has drawn increasing attention in computer vision, graphics and animation. Recently, generative adversarial networks (GANs) have offered a new perspective on face synthesis and have achieved remarkable success in generating photorealistic images and in image‐to‐image translation. In this study, the authors present an appearance‐based facial expression synthesis framework, ApprGAN, which combines shape and texture and introduces cycle consistency and identity mapping into the adversarial learning. Specifically, given an input face image, a pair of shape and texture generators are trained for synthetic shape deformation and expression detail generation, respectively. Extensive experiments on expression synthesis and cross‐database synthesis were conducted, together with comparisons with existing methods. Results of expression synthesis and quantitative verification on various databases show the effectiveness of ApprGAN in synthesising photorealistic and identity‐preserving expressions and its marked improvement over existing methods.
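The abstract names two constraints folded into the adversarial training: cycle consistency and identity mapping. A minimal sketch of what such loss terms typically look like for a pair of generators G: X→Y and F: Y→X is given below; this is not the authors' code, and the L1 formulation is an assumption borrowed from CycleGAN-style training, standing in for how ApprGAN's shape and texture generators might be regularised.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 :
    mapping a sample to the other domain and back should recover it."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

def identity_mapping_loss(x, y, G, F):
    """L_idt = ||G(y) - y||_1 + ||F(x) - x||_1 :
    a generator fed an image already in its target domain
    should leave it unchanged (helps preserve identity/colour)."""
    return np.abs(G(y) - y).mean() + np.abs(F(x) - x).mean()

# Toy check with identity generators: both terms vanish.
x = np.random.rand(8, 8)
y = np.random.rand(8, 8)
ident = lambda im: im
print(cycle_consistency_loss(x, y, ident, ident))  # 0.0
print(identity_mapping_loss(x, y, ident, ident))   # 0.0
```

In practice these terms are weighted and added to the standard adversarial losses of the two generator–discriminator pairs; the weights and network architectures here are unspecified, as the abstract does not give them.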