Open Access
Weighting of Prosodic and Lexical-Semantic Cues for Emotion Identification in Spectrally Degraded Speech and With Cochlear Implants
Author(s) - Margaret E. Richter, Monita Chatterjee
Publication year - 2021
Publication title - Ear and Hearing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.577
H-Index - 109
eISSN - 1538-4667
pISSN - 0196-0202
DOI - 10.1097/aud.0000000000001057
Subject(s) - weighting, identification (biology), speech recognition, computer science, natural language processing, psychology, audiology, linguistics, acoustics, medicine, biology, physics, philosophy, botany
Normal-hearing (NH) listeners rely more on prosodic cues than on lexical-semantic cues for emotion perception in speech. In everyday spoken communication, the ability to resolve conflicting information between prosodic and lexical-semantic cues to emotion can be important: for example, in identifying sarcasm or irony. The spectral degradation of speech delivered through cochlear implants (CIs) can be sufficiently overcome to identify lexical-semantic cues, but the distortion of voice pitch cues makes it particularly challenging to hear prosody with CIs. The purpose of this study was to examine changes in the relative reliance on prosodic and lexical-semantic cues in NH adults listening to spectrally degraded speech and in adult CI users. We hypothesized that, compared with their NH counterparts, CI users would show increased reliance on lexical-semantic cues and reduced reliance on prosodic cues for emotion perception. We predicted that NH listeners would show a similar pattern when listening to CI-simulated versions of emotional speech.
