Statistics‐based Motion Synthesis for Social Conversations
Author(s) -
Yanzhe Yang,
Jimei Yang,
Jessica Hodgins
Publication year - 2020
Publication title -
Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.14114
Subject(s) - computer science , gesture , motion (physics) , animation , graph , synchronization , human–computer interaction , artificial intelligence , speech recognition , theoretical computer science , computer graphics (images)
Plausible conversations among characters are required to generate the ambiance of social settings such as a restaurant, hotel lobby, or cocktail party. In this paper, we propose a motion synthesis technique that can rapidly generate animated motion for characters engaged in two‐party conversations. Our system synthesizes gestures and other body motions for dyadic conversations that synchronize with novel input audio clips. Human conversations feature many different forms of coordination and synchronization. For example, speakers use hand gestures to emphasize important points, and listeners often nod in agreement or acknowledgment. To achieve the desired degree of realism, our method first constructs a motion graph that preserves the statistics of a database of recorded conversations performed by a pair of actors. This graph is then used to search for a motion sequence that respects three forms of audio‐motion coordination in human conversations: coordination to phonemic clause, listener response, and partner's hesitation pause. We assess the quality of the generated animations through a user study that compares them to the originally recorded motion and evaluate the effects of each type of audio‐motion coordination via ablation studies.
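The abstract's core idea — searching a motion graph for a clip sequence whose gestures line up with audio events such as phonemic-clause boundaries — can be illustrated with a toy sketch. This is not the paper's implementation: the clips, transition edges, and cost function below are hypothetical stand-ins for the statistics-preserving graph and coordination constraints the authors describe.

```python
# Illustrative sketch only: a toy motion-graph search that picks a clip
# sequence whose gesture strokes fall near phonemic-clause boundaries in
# the audio. All clip data and edges are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    name: str
    duration: float   # seconds
    has_stroke: bool  # does the clip contain a gesture stroke?

# Hypothetical motion database (nodes of the graph).
CLIPS = [
    Clip("idle", 1.0, False),
    Clip("beat", 0.5, True),
    Clip("nod", 0.8, True),
]

# Hypothetical graph edges: which clip may follow which.
EDGES = {
    "idle": {"idle", "beat", "nod"},
    "beat": {"idle", "beat"},
    "nod": {"idle"},
}

def alignment_cost(path, clause_boundaries):
    """Sum, over clause boundaries, of the distance to the nearest
    stroke-bearing clip start in the candidate path."""
    starts, t = [], 0.0
    for clip in path:
        if clip.has_stroke:
            starts.append(t)
        t += clip.duration
    if not starts:
        return float("inf")
    return sum(min(abs(b - s) for s in starts) for b in clause_boundaries)

def search(clause_boundaries, total_time, depth=6):
    """Brute-force the graph for the lowest-cost path covering the audio."""
    best, best_cost = None, float("inf")

    def expand(path, t):
        nonlocal best, best_cost
        if t >= total_time:               # path covers the audio clip
            c = alignment_cost(path, clause_boundaries)
            if c < best_cost:
                best, best_cost = path, c
            return
        if len(path) >= depth:            # bound the search
            return
        for clip in CLIPS:
            if not path or clip.name in EDGES[path[-1].name]:
                expand(path + [clip], t + clip.duration)

    expand([], 0.0)
    return best, best_cost
```

A real system would replace the brute-force expansion with an efficient graph search and add further cost terms for listener responses and the partner's hesitation pauses, as the paper outlines.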