Quantifying Emotional Similarity in Speech
Author(s) -
John Harvill,
Seong-Gyun Leem,
Mohammed AbdelWahab,
Reza Lotfian,
Carlos Busso
Publication year - 2021
Publication title -
IEEE Transactions on Affective Computing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.309
H-Index - 67
ISSN - 1949-3045
DOI - 10.1109/taffc.2021.3127390
Subject(s) - computing and processing , robotics and control systems , signal processing and analysis
This study proposes a novel formulation for measuring emotional similarity between speech recordings. The formulation explores the ordinal nature of emotions by comparing emotional similarities instead of predicting an emotional attribute or recognizing an emotional category. The proposed task determines which of two alternative samples has emotional content most similar to that of a given anchor. This task raises some interesting questions. Which emotional descriptor provides the most suitable space for assessing emotional similarities? Can deep neural networks (DNNs) learn representations that robustly quantify emotional similarities? We address these questions by exploring alternative emotional spaces created with attribute-based descriptors and categorical emotions. We create the representation using a DNN trained with the triplet loss function, which relies on triplets formed by an anchor, a positive example, and a negative example. We select a positive sample whose emotional content is similar to the anchor's, and a negative sample whose emotional content is dissimilar to the anchor's. The task of the DNN is to identify the positive sample. The experimental evaluations demonstrate that we can learn a meaningful embedding to assess emotional similarities, achieving higher performance than human evaluators asked to complete the same task.
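The triplet-loss setup described in the abstract can be illustrated with a minimal sketch. The network architecture, feature dimensionality, margin, and synthetic batch below are hypothetical placeholders for illustration only; they are not the authors' model or corpus. The sketch uses PyTorch's standard TripletMarginLoss, which penalizes the model unless the anchor embedding is closer to the positive example than to the negative example by at least the margin.

```python
# Minimal sketch of triplet-loss training for an emotional-similarity embedding.
# EmotionEmbedder, the 384-dim input features, and the random batch are
# hypothetical stand-ins, not the architecture or data used in the paper.
import torch
import torch.nn as nn

class EmotionEmbedder(nn.Module):
    """Maps an utterance-level acoustic feature vector to an embedding
    in which distance is intended to reflect emotional similarity."""
    def __init__(self, in_dim=384, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        # L2-normalize so distances are comparable across utterances
        return nn.functional.normalize(self.net(x), dim=-1)

model = EmotionEmbedder()
criterion = nn.TripletMarginLoss(margin=0.2, p=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: anchor, positive (similar emotion), negative (dissimilar emotion)
anchor_feats = torch.randn(32, 384)
positive_feats = torch.randn(32, 384)
negative_feats = torch.randn(32, 384)

loss = criterion(model(anchor_feats), model(positive_feats), model(negative_feats))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At test time, the candidate whose embedding lies closer to the anchor's
# embedding would be judged the more emotionally similar sample.
```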
