Open Access
Dynamic information for the recognition of conversational expressions
Author(s) - Douglas W. Cunningham, Christian Wallraven
Publication year - 2009
Publication title - Journal of Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.126
H-Index - 113
ISSN - 1534-7362
DOI - 10.1167/9.13.7
Subject(s) - conversation, computer science, facial expression, variety (cybernetics), motion (physics), head (geology), speech recognition, everyday life, expression (computer science), biological motion, communication, human–computer interaction, cognitive psychology, artificial intelligence, psychology, geomorphology, political science, law, programming language, geology
Communication is critical for normal, everyday life. During a conversation, information is conveyed in a number of ways, including through changes of the body, head, and face. While much research has examined these forms of communication, the majority of it has focused on static representations of a few, supposedly universal facial expressions. Normal conversations, however, contain a very wide variety of expressions and are rarely, if ever, static. Here, we report several experiments showing that expressions that use head, eye, and internal facial motion are recognized more easily and accurately than static versions of those expressions. Moreover, we demonstrate conclusively that this dynamic advantage is due to information that is only available over time, and that the temporal integration window for this information is at least 100 ms long.
