Open Access
Quantifying Cochlear Implant Users’ Ability for Speaker Identification Using CI Auditory Stimuli
Author(s) -
Nursadul Mamun,
Ria Ghosh,
John H. L. Hansen
Publication year - 2019
Publication title -
Interspeech 2019
Language(s) - English
Resource type - Conference proceedings
SCImago Journal Rank - 0.689
H-Index - 100
pISSN - 2308-457X
DOI - 10.21437/interspeech.2019-1852
Subject(s) - speech recognition , quiet , computer science , cochlear implant , speaker recognition , formant , speech processing , identification (biology) , basilar membrane , signal (programming language) , signal processing , audiology , vowel , cochlea , telecommunications , medicine , radar , physics , botany , quantum mechanics , biology , programming language
Speaker recognition is a biometric modality that uses underlying speech information to determine the identity of a speaker. Speaker identification (SID) under noisy conditions is one of the challenging topics in speech processing, particularly for individuals with cochlear implants (CIs). This study analyzes and quantifies the ability of CI users to perform speaker identification based on direct electric auditory stimuli. CI processors employ a limited number of frequency bands (8–22) and use electrodes to directly stimulate the basilar membrane/cochlea in order to convey the speech signal. The sparsity of electric stimulation across the CI frequency range is a prime reason for degraded human speech recognition as well as SID performance. It is therefore hypothesized that CI users may be unable to recognize and distinguish speakers, since speaker-dependent information such as formant frequencies and pitch is lost through unstimulated electrodes. To test this hypothesis, the input speech signal is processed with the CI Advanced Combination Encoder (ACE) signal processing strategy to construct the CI auditory electrodogram. The study uses 50 speakers from each of three different databases, training the system with two different classifiers under quiet conditions and testing under both quiet and noisy conditions. Objective results show that CI users can effectively identify a limited number of speakers; however, performance decreases as more speakers are added to the system and when noisy conditions are introduced. This information could therefore be used to improve CI signal processing techniques and, in turn, human SID performance.
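The pipeline described above hinges on the electrodogram: a channel-by-time matrix of stimulation levels produced by an n-of-m strategy such as ACE, where only the strongest few of the available frequency channels are stimulated in each frame. The following is a minimal sketch of that idea, not the actual ACE implementation used in the paper; the FFT-based filterbank, linear band spacing, and all parameter defaults (`n_channels=22`, `n_selected=8`, frame and hop sizes) are illustrative assumptions.

```python
import numpy as np

def electrodogram(signal, n_channels=22, n_selected=8,
                  frame_len=128, hop=64):
    """Sketch of an ACE-style n-of-m electrodogram.

    Frames the signal, estimates per-channel envelopes by grouping
    FFT magnitude bins into n_channels bands, and keeps only the
    n_selected largest channels per frame (the rest are zeroed),
    mimicking ACE's maxima selection.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    spec_bins = frame_len // 2
    # Linear bin-to-channel map here; real CI processors use a
    # quasi-logarithmic frequency allocation (an assumption).
    edges = np.linspace(0, spec_bins, n_channels + 1).astype(int)
    gram = np.zeros((n_channels, n_frames))
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))[:spec_bins]
        env = np.array([mag[edges[c]:edges[c + 1]].mean()
                        for c in range(n_channels)])
        keep = np.argsort(env)[-n_selected:]   # n-of-m maxima selection
        out = np.zeros(n_channels)
        out[keep] = env[keep]
        gram[:, t] = out
    return gram
```

The resulting sparse matrix (at most `n_selected` active channels per frame) is the kind of representation from which SID features would be extracted, which illustrates why fine spectral cues such as formants and pitch are degraded for CI listeners.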
