Open Access
Speech Emotion Based Sentiment Recognition using Deep Neural Networks
Author(s) -
Ravi Raj Choudhary,
Gaurav Meena,
Krishna Kumar Mohbey
Publication year - 2022
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/2236/1/012003
Subject(s) - computer science , mel frequency cepstrum , convolutional neural network , conversation , reading (process) , task (project management) , speech recognition , categorization , emotion recognition , artificial intelligence , feeling , deep learning , mood , natural language processing , feature extraction , psychology , linguistics , communication , social psychology , philosophy , management , psychiatry , economics
The capacity to comprehend and communicate with others via language is one of the most valuable human abilities. Through experience we become well trained at reading the emotions of others, since emotions play a vital part in communication. For computers and robots, by contrast, emotion recognition is a challenging task due to the subjective nature of human mood. This research proposes a framework for recognizing the emotional content of speech, independent of its semantic content. To categorize the emotional content of audio files, this article employs deep learning techniques, namely convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. Mel-frequency cepstral coefficients (MFCCs) were extracted to represent the audio in a form as useful as possible for the models. The approach was tested on the RAVDESS and TESS datasets, where the CNN achieved a 97.1% accuracy rate.
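The abstract names MFCCs as the feature representation but the paper's extraction code is not given here; the following is a minimal NumPy/SciPy sketch of the standard MFCC computation (framing, windowing, power spectrum, mel filterbank, log, DCT). All parameter values (sample rate, FFT size, filter counts) are illustrative defaults, not the authors' settings.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_coeffs=13):
    # Slice the signal into overlapping frames and apply a Hamming window
    frames = np.array([signal[s:s + n_fft] * np.hamming(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank: filters spaced evenly on the mel scale
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    # Log mel energies, then a DCT to decorrelate -> cepstral coefficients
    mel_energies = np.maximum(power @ fbank.T, 1e-10)
    return dct(np.log(mel_energies), type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

The resulting matrix (one row of `n_coeffs` coefficients per frame) is the kind of 2-D time-frequency input a CNN or an LSTM can consume directly.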
