Open Access
Lip-synced character speech animation with dominated animeme models
Author(s) -
Shuen-Huei Guan,
Yu-Mei Chen,
Fu-Chung Huang,
Bing-Yu Chen
Publication year - 2012
Publication title -
CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/2407746.2407772
Subject(s) - animation, computer facial animation, computer science, character animation, computer animation, character (mathematics), computer graphics, skeletal animation, computer graphics (images), graphics, motion capture, motion (physics), human body, virtual actor, human–computer interaction, multimedia, artificial intelligence, virtual reality, geometry, mathematics
One of the holy grails of computer graphics is the generation of photorealistic images with motion data. Re-creating convincing human animation may not be the most challenging task in the field, but it is certainly one of its ultimate goals. Among full-body human animations, facial animation is especially difficult because of its subtlety and because of how familiar human faces are to us. In this paper, we present our work on lip-sync animation, one part of facial animation: a framework for synthesizing lip-synced character speech animation in real time from a given speech sequence and its corresponding text.
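
To give a feel for this kind of pipeline, here is a minimal Python sketch of dominance-blended lip-sync from a phoneme timeline. It is an illustration only, not the paper's dominated animeme model: the per-phoneme mouth targets, the bell-shaped dominance function, and the (label, start, end) timeline format are all assumed for the example; in practice the timeline would come from forced alignment of the speech audio against its transcript.

import math

# Hypothetical per-phoneme mouth-open targets (assumed values, not from the paper).
ANIMEME_TARGETS = {
    "AA": 0.9, "IY": 0.3, "UW": 0.4, "M": 0.0, "S": 0.2, "sil": 0.0,
}

def dominance(t, center, width):
    """Bell-shaped dominance weight of one phoneme at time t (seconds)."""
    return math.exp(-((t - center) / width) ** 2)

def mouth_open_at(t, phonemes):
    """Blend phoneme targets by normalized dominance at time t.

    phonemes: list of (label, start, end) triples, e.g. from a forced
    alignment of the speech against its transcript (assumed input format).
    """
    num = den = 0.0
    for label, start, end in phonemes:
        center = 0.5 * (start + end)
        width = max(0.5 * (end - start), 1e-3)
        w = dominance(t, center, width)
        num += w * ANIMEME_TARGETS.get(label, 0.0)
        den += w
    return num / den if den > 0.0 else 0.0

if __name__ == "__main__":
    # Toy timeline for the word "mass": M-AA-S (timings are made up).
    timeline = [("M", 0.00, 0.08), ("AA", 0.08, 0.25), ("S", 0.25, 0.40)]
    for frame in range(13):          # sample the curve at 30 fps
        t = frame / 30.0
        print(f"t={t:.3f}s  mouth_open={mouth_open_at(t, timeline):.2f}")

Because each frame only evaluates a handful of smooth weight functions, this style of blending is cheap enough to run in real time; the overlapping dominance curves also give a crude stand-in for coarticulation, since neighboring phonemes influence each other's mouth shapes.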
