Responsive listening behavior
Author(s) -
Gillies M.,
Pan X.,
Slater M.,
Shawe-Taylor J.
Publication year - 2008
Publication title -
Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.267
Subject(s) - active listening , conversation , computer science , animation , expression (computer science) , motion (physics) , human–computer interaction , cognitive psychology , artificial intelligence , communication , psychology , computer graphics (images) , programming language
Humans use their bodies in a highly expressive way during conversation, and animated characters that lack this form of non‐verbal expression can seem stiff and unemotional. An important aspect of non‐verbal expression is that people respond to each other's behavior and are highly attuned to picking up this type of response. This is particularly important for the feedback given while listening to someone speak. However, automatically generating this type of behavior is difficult, as it is highly complex and subtle. This paper takes a data-driven approach to generating interactive social behavior. Listening behavior is motion captured, together with the audio being listened to. These data are used to learn an animation model of the responses of one person to the other. This allows us to create characters that respond in real‐time during a conversation with a real human. Copyright © 2008 John Wiley & Sons, Ltd.
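The core idea in the abstract — pairing captured listener motion with the audio being listened to, then using that corpus to drive a character's responses — can be illustrated with a minimal sketch. This is not the paper's actual learned model; it is a hypothetical nearest-neighbour lookup over synchronized (audio feature, motion frame) pairs, with made-up feature dimensions and motion labels, shown only to make the data-driven pipeline concrete.

```python
import math

# Illustrative sketch (NOT the authors' method): a data-driven listener
# that maps incoming per-frame audio features to captured listener motion
# by nearest-neighbour lookup in a training corpus of synchronized pairs.

def build_corpus(audio_features, motion_frames):
    """Pair each audio-feature vector with the motion frame captured
    at the same time, forming the training corpus."""
    assert len(audio_features) == len(motion_frames)
    return list(zip(audio_features, motion_frames))

def respond(corpus, audio_frame):
    """Return the motion frame whose paired audio features are closest
    (Euclidean distance) to the incoming audio frame."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, motion = min(corpus, key=lambda pair: dist(pair[0], audio_frame))
    return motion

# Toy "captured" data: 2-D audio features (e.g. energy, pitch, both
# hypothetical here) paired with labelled listener motions.
corpus = build_corpus(
    [(0.1, 0.2), (0.8, 0.9), (0.5, 0.5)],
    ["idle", "nod_strong", "nod_light"],
)
print(respond(corpus, (0.75, 0.85)))  # closest to (0.8, 0.9) -> nod_strong
```

In a real-time setting, `respond` would be called once per audio frame so the character's feedback tracks the speaker continuously; the paper's learned animation model plays the role that the lookup table plays in this toy version.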
