Emotional facial expression transfer from a single image via generative adversarial nets
Author(s) - Qiao Fengchun, Yao Naiming, Jiao Zirui, Li Zhihao, Chen Hui, Wang Hongan
Publication year - 2018
Publication title - Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.1819
Subject(s) - computer science, facial expression, artificial intelligence, expression (computer science), generative grammar, face (sociological concept), computer graphics, computer vision, image (mathematics), generative model, sequence (biology), pattern recognition (psychology), social science, sociology, biology, genetics, programming language
Facial expression transfer from a single image is a challenging task that has drawn sustained attention in computer vision and computer graphics. Recently, generative adversarial nets (GANs) have provided a new approach to transferring a single face image toward target facial expressions. However, it remains difficult to obtain a sequence of smoothly changing facial expressions. We present a novel GAN‐based method for generating emotional facial expression animations from a single image and several facial landmarks for the in‐between stages. In particular, landmarks of other subjects are incorporated into a GAN model to control the generated facial expression from a latent space. With the trained model, high‐quality face images and smoothly changing facial expression sequences can be obtained effectively, as shown qualitatively and quantitatively in our experiments on the Multi‐PIE and CK+ data sets.
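To make the landmark-conditioned generation concrete, here is a minimal sketch of the idea in PyTorch: an encoder maps a source face plus a target-landmark heatmap to a latent code, a decoder renders the face with the target expression, and interpolating the landmarks yields the in-between frames of the animation. All names, layer sizes, and the heatmap encoding are illustrative assumptions, not the authors' actual network.

```python
import torch
import torch.nn as nn

class LandmarkConditionedGenerator(nn.Module):
    """Hypothetical sketch of a GAN generator conditioned on facial
    landmarks: the source image and a target-landmark heatmap are
    encoded into a latent code, then decoded into a face image with
    the target expression. Channel widths and depths are assumptions."""
    def __init__(self, img_channels=3, lm_channels=1, latent_dim=128):
        super().__init__()
        # Encoder: (image + landmark heatmap) -> latent code
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels + lm_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, latent_dim, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Decoder: latent code -> face image with the target expression
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, src_img, target_heatmap):
        x = torch.cat([src_img, target_heatmap], dim=1)
        return self.decoder(self.encoder(x))

# Animation sketch: linearly interpolate landmark heatmaps between a
# neutral and a peak expression, generating one frame per step.
gen = LandmarkConditionedGenerator()
src = torch.randn(1, 3, 64, 64)          # source face (toy data)
lm_neutral = torch.randn(1, 1, 64, 64)   # neutral-expression heatmap
lm_peak = torch.randn(1, 1, 64, 64)      # peak-expression heatmap
frames = [gen(src, (1 - t) * lm_neutral + t * lm_peak)
          for t in torch.linspace(0, 1, steps=8)]
```

Interpolating in landmark space rather than pixel space is what makes the in-between frames change smoothly: each intermediate heatmap is still a plausible facial configuration, so the generator produces a coherent face at every step.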
