Open Access
Synthesizing cooperative conversation
Author(s) -
Catherine Pélachaud,
Justine Cassell,
Norman I. Badler,
Mark Steedman,
Scott Prevost,
Matthew Stone
Publication year - 1998
Publication title -
lecture notes in computer science
Language(s) - English
Resource type - Book series
SCImago Journal Rank - 0.249
H-Index - 400
eISSN - 1611-3349
pISSN - 0302-9743
DOI - 10.1007/bfb0052313
Subject(s) - intonation (linguistics) , computer science , gesture , conversation , gaze , facial expression , speech recognition , motion (physics) , head (geology) , movement (music) , planner , artificial intelligence , natural language processing , communication , linguistics , psychology , acoustics , philosophy , physics , geomorphology , geology
We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expression, lip motion, eye gaze, head motion, and arm gesture generators.
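The abstract describes a pipeline architecture: a dialogue planner produces utterances (text plus intonation), and those outputs, together with the speaker/listener relationship, drive separate generators for facial expression, lip motion, gaze, head motion, and gesture. The Python sketch below is a minimal, hypothetical illustration of that kind of pipeline; none of the class or function names come from the paper, and the behavior rules shown are invented placeholders rather than the authors' actual rules.

# Hypothetical sketch of the pipeline described in the abstract.
# All names and rules here are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass, field


@dataclass
class Utterance:
    speaker: str
    listener: str
    text: str
    intonation: list[str]  # e.g. pitch-accent / boundary-tone marks, one per word


@dataclass
class AnimationFrame:
    utterance: Utterance
    facial_expressions: list[str] = field(default_factory=list)
    lip_motions: list[str] = field(default_factory=list)
    eye_gaze: list[str] = field(default_factory=list)
    head_motion: list[str] = field(default_factory=list)
    arm_gestures: list[str] = field(default_factory=list)


def plan_dialogue(goal: str) -> list[Utterance]:
    """Stand-in for the dialogue planner: yields text and intonation for each turn."""
    # Invented sample content; a real planner would derive this from the goal.
    return [
        Utterance("A", "B", "Do you have a pen?", ["H*", "L-L%"]),
        Utterance("B", "A", "Yes, here you are.", ["H*", "L-L%"]),
    ]


def generate_behaviors(utt: Utterance) -> AnimationFrame:
    """Each generator is driven by the speaker/listener roles, the text, and the intonation."""
    frame = AnimationFrame(utt)
    frame.lip_motions = [f"viseme({word})" for word in utt.text.split()]
    frame.eye_gaze = [f"{utt.speaker} gazes at {utt.listener} at end of turn"]
    frame.head_motion = [f"nod on accented tone {tone}" for tone in utt.intonation if tone.endswith("*")]
    frame.facial_expressions = ["raise eyebrows on pitch accent"]
    frame.arm_gestures = ["beat gesture aligned with accented syllable"]
    return frame


if __name__ == "__main__":
    for utterance in plan_dialogue("find out whether B has a pen"):
        print(generate_behaviors(utterance))

The point of the sketch is the data flow, not the rules: the planner's output (text, intonation, participant roles) is the sole input to each downstream generator, which is what allows the resulting speech and nonverbal behaviors to stay synchronized.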
