Open Access
Recognizing Emotion from Speech Based on Age and Gender Using Hierarchical Models
Author(s) -
Ftoon Abu Shaqra,
Rehab Duwairi,
Mahmoud AlAyyoub
Publication year - 2019
Publication title -
Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2019.04.009
Subject(s) - emotion recognition, computer science, classifier (UML), speech recognition, affect (linguistics), emotion classification, task (project management), artificial intelligence, psychology, management, communication, economics
Age and gender are two factors that affect the physiological and acoustic features of the human voice. In fact, most speech emotion recognition applications use these voice features as the foundation for the classification task. Significant improvements have been made in voice emotion recognition, and several studies have addressed identifying age and gender from speech. We studied the effect of age and gender on emotion recognition applications. In our work, we built hierarchical classification models to investigate the importance of identifying age and gender before identifying the emotional label. We compared the performance of four different models and examined the relationship between age/gender and emotion recognition accuracy. Our results showed that using a separate emotion model for each gender and age category gives higher accuracy than using one classifier for all the data.
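The hierarchical scheme the abstract describes can be sketched as follows: a top-level classifier first predicts a demographic group (gender is used here for brevity), and a separate emotion classifier trained only on that group's data then labels the utterance. The toy nearest-centroid classifiers and two-dimensional "acoustic" features below are illustrative stand-ins, not the authors' actual models or features.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class NearestCentroid:
    """Toy classifier: assign the label of the closest class centroid."""
    def fit(self, X, y):
        self.centroids = {
            label: centroid([x for x, l in zip(X, y) if l == label])
            for label in set(y)
        }
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda l: math.dist(x, self.centroids[l]))

class HierarchicalEmotionClassifier:
    """Gender gate first, then a per-gender emotion model (hypothetical sketch)."""
    def __init__(self):
        self.gender_clf = NearestCentroid()
        self.emotion_clfs = {}

    def fit(self, X, genders, emotions):
        # Stage 1: train the demographic classifier on all data.
        self.gender_clf.fit(X, genders)
        # Stage 2: train one emotion classifier per demographic group.
        for g in set(genders):
            Xg = [x for x, gg in zip(X, genders) if gg == g]
            yg = [e for e, gg in zip(emotions, genders) if gg == g]
            self.emotion_clfs[g] = NearestCentroid().fit(Xg, yg)
        return self

    def predict(self, x):
        # Route the sample through its predicted group's emotion model.
        g = self.gender_clf.predict(x)
        return g, self.emotion_clfs[g].predict(x)

# Tiny synthetic features (think pitch in Hz, normalized energy); purely illustrative.
X = [[220, 0.9], [210, 0.2], [120, 0.8], [110, 0.1]]
genders = ["female", "female", "male", "male"]
emotions = ["angry", "sad", "angry", "sad"]

model = HierarchicalEmotionClassifier().fit(X, genders, emotions)
print(model.predict([215, 0.85]))  # -> ('female', 'angry')
```

The design point is that each per-group emotion model only has to separate emotions within one demographic, where the acoustic cues are more homogeneous, which is the intuition behind the accuracy gain the paper reports.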
