Open Access
Learning a model of speaker head nods using gesture corpora
Author(s) - Jina Lee, Stacy Marsella
Publication year - 2009
Language(s) - English
DOI - 10.1145/1558013.1558053
During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions. Our goal is to develop a model of the speaker's head movements that can be used to generate head movements for virtual agents based on gesture-annotated corpora. In this paper, we focus on the first step of the head movement generation process: predicting when the speaker should use head nods. We describe our machine-learning approach, which creates a head nod model from annotated corpora of face-to-face human interaction, relying on the linguistic features of the surface text. We also describe the feature selection process, the training process, and the evaluation of the learned model on test data in detail. The results show that the model is able to predict head nods with high precision and recall.
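To make the prediction task concrete, here is a toy sketch of the pipeline the abstract describes: learn from nod-annotated words, predict nods from surface-text features, and score with precision and recall. The word-count model, the toy corpus, and all function names below are illustrative assumptions, not the paper's actual features or learner (the paper's model is trained on real gesture-annotated corpora with a proper feature selection step).

```python
# Illustrative sketch only: a word-frequency nod predictor, not the
# model described in the paper. Labels mark whether the speaker nodded
# while uttering the word.
from collections import Counter

# Hypothetical annotated corpus of (word, nod_label) pairs; real data
# would come from gesture-annotated face-to-face interaction.
corpus = [
    ("yes", 1), ("right", 1), ("okay", 1), ("yes", 1), ("exactly", 1),
    ("the", 0), ("weather", 0), ("is", 0), ("nice", 0), ("today", 0),
    ("okay", 1), ("the", 0), ("right", 1), ("is", 0),
]

def train(pairs):
    """Count how often each word co-occurs with a nod vs. no nod."""
    nod = Counter(w for w, y in pairs if y == 1)
    no_nod = Counter(w for w, y in pairs if y == 0)
    return nod, no_nod

def predict(model, word):
    """Predict a nod if the word was seen with nods more often than without."""
    nod, no_nod = model
    return 1 if nod[word] > no_nod[word] else 0

def precision_recall(model, pairs):
    """Standard precision/recall of nod predictions against annotations."""
    tp = sum(1 for w, y in pairs if y == 1 and predict(model, w) == 1)
    fp = sum(1 for w, y in pairs if y == 0 and predict(model, w) == 1)
    fn = sum(1 for w, y in pairs if y == 1 and predict(model, w) == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

model = train(corpus)
p, r = precision_recall(model, corpus)  # scored on training data here
```

In practice the model would be trained and evaluated on separate splits, and the features would be richer linguistic cues from the surface text rather than raw word identity; this sketch only shows the shape of the predict-then-score loop.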
