Facial expression animation through action units transfer in latent space
Author(s) - Fan Yachun, Tian Feng, Tan Xiaohui, Cheng Housen
Publication year - 2020
Publication title - Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.1946
Subject(s) - computer science , expression (computer science) , animation , facial expression , artificial intelligence , computer animation , computer facial animation , interpolation (computer graphics) , computer vision , pattern recognition (psychology) , computer graphics (images)
Automatic facial expression animation synthesis has attracted much attention from the community. Because most existing methods handle only a small number of discrete expressions rather than continuous ones, the integrity and realism of the synthesized facial expressions are often compromised. In addition, easy manipulation with simple inputs and unsupervised processing, although important to automatic facial expression animation applications, has received relatively little attention. To address these issues, we propose an unsupervised approach to continuous automatic facial expression animation through action unit (AU) transfer in the latent space of generative adversarial networks. The expression descriptor, represented as an AU vector, is transferred to the input image without labeled image pairs, without expression labels, and without further network training. We also propose a new approach to quickly generate the input image's latent code and to estimate the boundaries between different AU attributes from their latent codes. Two latent-code operators, vector addition and continuous interpolation, are applied along these boundaries in the latent space to simulate facial expression animation. Experiments show that the proposed approach is effective for facial expression translation and animation synthesis.
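The two latent-code operators named in the abstract lend themselves to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes latent codes have already been obtained for a set of images (e.g., by the fast inversion step the paper proposes), fits a linear boundary for one AU attribute (a linear SVM is one common choice; the paper's exact boundary-fitting method may differ), and applies vector addition and continuous interpolation. The function names (`fit_au_boundary`, `transfer_au`, `animate`) are hypothetical.

```python
# Sketch of AU transfer via latent-code operators. Assumptions: latent
# codes already exist for a set of images, and a pretrained GAN
# generator (not shown) maps a code z back to an image.
import numpy as np
from sklearn.svm import LinearSVC

def fit_au_boundary(latent_codes, au_labels):
    """Fit a separating hyperplane for one AU attribute in latent space
    (a linear SVM is one common choice) and return its unit normal."""
    svm = LinearSVC(C=1.0).fit(latent_codes, au_labels)
    normal = svm.coef_[0]
    return normal / np.linalg.norm(normal)

def transfer_au(z, boundary_normal, alpha):
    """Vector addition: move the latent code along the boundary normal.
    alpha > 0 strengthens the AU attribute, alpha < 0 suppresses it."""
    return z + alpha * boundary_normal

def animate(z_src, z_dst, num_frames):
    """Continuous interpolation between two latent codes, yielding a
    smooth sequence of in-between codes for an expression animation."""
    for t in np.linspace(0.0, 1.0, num_frames):
        yield (1.0 - t) * z_src + t * z_dst
```

Decoding each output code with the pretrained generator produces the edited image or the animation frames; the generator itself and the latent-code inversion step are left abstract here.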