Open Access
Emotion Recognition and Synthesis Based on MPEG‐4 FAPs
Author(s) -
Nicolas Tsapatsoulis,
Amaryllis Raouzaiou,
Stefanos Kollias,
Roddy Cowie,
Ellen Douglas-Cowie
Publication year - 2002
Publication title -
Ktisis at Cyprus University of Technology (Cyprus University of Technology)
Language(s) - English
Resource type - Book series
DOI - 10.1002/0470854626.ch9
Subject(s) - computer science, MPEG-4, emotion recognition, artificial intelligence, engineering, coding
The framework of MPEG-4 hybrid coding of natural and synthetic data streams encompasses teleconferencing and telepresence applications in which a synthetic proxy or virtual agent can stand in for the actual user. Such agents can interact with each other, analyzing textual input entered by the user as well as multisensory data, including human emotions, facial expressions and nonverbal speech. This not only enhances interactivity, by replacing single-media representations with dynamic multimedia renderings, but also improves human–computer interaction, allowing the system to adapt to the current needs and feelings of the user. Practical applications of this technology [1] are expected in educational environments, 3-D videoconferencing and collaborative workplaces, online shopping and gaming, virtual communities and interactive entertainment. Facial expression synthesis and animation have gained much interest within the MPEG-4 framework; explicit facial animation parameters (FAPs) are dedicated to this purpose. However, FAP implementation remains an open research area [2]. In this chapter we describe a method for generating emotionally enriched human–computer interaction, focusing on the analysis and synthesis of primary [3] and intermediate facial expressions [4]. To achieve this goal we utilize both MPEG-4 facial definition parameters (FDPs) and FAPs. The contribution of the work is twofold: it proposes a way of modeling primary expressions using FAPs, and it describes a rule-based technique for analyzing both archetypal and intermediate expressions; for the latter we propose an innovative model-generation framework. In particular, a relation is established between FAPs and the activation parameter proposed in classical psychological studies, extending the archetypal-expression studies on which the research community has concentrated. The overall scheme leads to a parameterized approach to facial expression synthesis that is compatible with the MPEG-4 standard and can be used for emotion understanding.
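To make the parameterized idea concrete, the following is a minimal sketch, assuming an archetypal expression is stored as a set of FAP amplitudes and an intermediate expression is obtained by scaling those amplitudes with an activation level in [0, 1]. The FAP indices, names and amplitude values are illustrative placeholders, not the profiles or rules derived in the chapter.

# Minimal sketch (not the chapter's actual model): an intermediate expression
# is approximated by scaling an archetypal expression's FAP profile with an
# activation level. All FAP ids and amplitudes below are illustrative.

from dataclasses import dataclass

@dataclass
class FAPProfile:
    """A facial expression described as FAP amplitudes (FAP id -> value)."""
    name: str
    faps: dict

# Hypothetical archetypal profile: a handful of FAPs with made-up amplitudes.
JOY = FAPProfile(
    name="joy",
    faps={
        3: 120.0,   # open_jaw (illustrative value)
        12: 180.0,  # stretch_l_cornerlip
        13: 180.0,  # stretch_r_cornerlip
        33: 90.0,   # raise_l_m_eyebrow
        34: 90.0,   # raise_r_m_eyebrow
    },
)

def intermediate_expression(archetype, activation):
    """Scale archetypal FAP amplitudes by an activation level in [0, 1].

    activation = 1.0 reproduces the fully activated archetypal expression;
    lower values yield milder, intermediate versions of the same expression.
    """
    activation = max(0.0, min(1.0, activation))
    scaled = {fap_id: value * activation for fap_id, value in archetype.faps.items()}
    return FAPProfile(name=f"{archetype.name}@{activation:.2f}", faps=scaled)

if __name__ == "__main__":
    mild_joy = intermediate_expression(JOY, activation=0.4)
    print(mild_joy.name, mild_joy.faps)

In this simplified view, the activation parameter plays the role of the single control knob relating archetypal and intermediate expressions; the chapter's rule-based analysis maps observed FAP values back to expression categories along the same dimension.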
