Deep learning and SVM‐based emotion recognition from Chinese speech for smart affective services
Author(s) -
Zhang Weishan,
Zhao Dehai,
Chai Zhi,
Yang Laurence T.,
Liu Xin,
Gong Faming,
Yang Su
Publication year - 2017
Publication title -
Software: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.437
H-Index - 70
eISSN - 1097-024X
pISSN - 0038-0644
DOI - 10.1002/spe.2487
Subject(s) - support vector machine, deep belief network, mel frequency cepstrum, sadness, artificial intelligence, computer science, anger, surprise, feature (linguistics), speech recognition, emotion recognition, emotion classification, formant, cepstrum, pattern recognition (psychology), machine learning, deep learning, feature extraction, psychology, social psychology, linguistics, philosophy, vowel
Summary - Emotion recognition is challenging but valuable for understanding people and enhancing human–computer interaction, which in turn supports the smooth running of smart health care and other smart services. In this paper, several kinds of speech features, such as Mel‐frequency cepstrum coefficients (MFCC), pitch, and formants, were extracted and combined in different ways to examine the relationship between feature fusion and emotion recognition performance. In addition, two classification methods were explored, namely, support vector machines (SVMs) and deep belief networks (DBNs), to classify six emotional states: anger, fear, joy, neutral, sadness, and surprise. In the SVM‐based method, a multi‐class SVM algorithm was used to optimize the penalty factor and kernel function parameters. With the DBN, different parameters were adjusted to achieve the best performance on different emotions. Both gender‐dependent and gender‐independent experiments were conducted on the Chinese Academy of Sciences emotional speech database. The mean accuracy of the SVM is 84.54%, and the mean accuracy of the DBN is 94.6%. The experiments show that the DBN‐based approach has good potential for practical use, and that suitable feature fusion further improves speech emotion recognition performance. Copyright © 2017 John Wiley & Sons, Ltd.
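The SVM step described in the abstract, optimizing the penalty factor and kernel function parameters for six-way classification, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid values, feature dimensionality, and synthetic data standing in for fused MFCC/pitch/formant vectors are all assumptions.

```python
# Hedged sketch: grid search over the SVM penalty factor C and RBF kernel
# width gamma for six-way emotion classification. Synthetic Gaussian
# features stand in for the paper's fused MFCC/pitch/formant vectors.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

EMOTIONS = ["anger", "fear", "joy", "neutral", "sadness", "surprise"]

rng = np.random.default_rng(0)
# 60 synthetic "utterances" per emotion, 40-dimensional feature vectors
# (dimensionality is an illustrative assumption).
X = np.vstack([rng.normal(loc=i, scale=1.5, size=(60, 40)) for i in range(6)])
y = np.repeat(np.arange(6), 60)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# SVC handles multi-class via pairwise (one-vs-one) decomposition;
# the grid covers the penalty factor C and the kernel parameter gamma.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.1]},
    cv=3,
)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)
print(f"best params: {grid.best_params_}, test accuracy: {acc:.3f}")
```

The cross-validated grid search mirrors the abstract's parameter optimization; on real data the features would come from an audio front end rather than a random generator.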
