Open Access
Creation of a Nigerian Voice Corpus for Indigenous Speaker Recognition
Author(s) -
Adekunle Akinrinmade,
Emmanuel Adetiba,
Joke A. Badejo,
Aderemi A. Atayero
Publication year - 2019
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1378/3/032011
Subject(s) - indigenous , nigerians , computer science , biometrics , population , identification (biology) , speaker recognition , speech recognition , linguistics , artificial intelligence , sociology , ecology , philosophy , botany , demography , biology
One of the goals of the World Bank’s Identification for Development (ID4D) initiative is the realization of robust digital identification systems as a sustainable development priority. ID4D’s most recent report shows that about 1.1 billion people worldwide still lack an official means of identification. Africa accounts for about half of that number, and Nigeria for about a quarter of Africa’s share. Biometrics is the state-of-the-art approach to identification using digitally measurable human behavioral and/or physiological traits, one of which is the voice. The backbone of biometric research is the database employed in the design of biometric systems. Although many voice databases are publicly available, such as THCHS-30 for Chinese and the Microsoft Speech Corpus for Indian languages, none is currently publicly available or free for Nigerians. The creation of such an indigenous database (or corpus) can open doors to Nigerian automatic speaker recognition as well as indigenous language, ethnicity, gender, age-group and emotion classification, amongst others. This work is a first step towards creating a Nigerian Voice Corpus (NVC) to aid indigenous voice biometric research. A voice corpus of popular Nigerians was created by curating audio samples of 14 women and 23 men from YouTube. The corpus contains 10 different samples of 5 seconds duration for each individual, giving a total of 370 samples. The created corpus was used to carry out a speaker recognition experiment by dividing the audio samples into 25 ms non-overlapping frames. Silent frames were excluded using a short-term spectral energy threshold for Voice Activity Detection (VAD). This was followed by extraction of Mel Frequency Cepstral Coefficients (MFCC) as descriptors to discriminate between speakers using a Support Vector Machine (SVM) with a medium Gaussian kernel. An overall recognition accuracy of 93.24% was achieved, demonstrating the feasibility of and research potential in this direction.
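The abstract outlines a concrete pipeline: 25 ms non-overlapping frames, an energy-threshold VAD, MFCC features, and a Gaussian-kernel SVM. The sketch below re-creates that pipeline in Python with librosa and scikit-learn; it is not the authors' implementation. The file names, sampling rate, number of MFCC coefficients, energy quantile, per-frame classification, and train/test split are illustrative assumptions, and the reported "medium Gaussian" SVM is approximated here with scikit-learn's RBF kernel.

```python
# Illustrative sketch of the speaker-recognition pipeline described in the
# abstract. Paths, sample rate, MFCC count and thresholds are assumptions.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

SR = 16000                      # assumed sampling rate
FRAME_LEN = int(0.025 * SR)     # 25 ms non-overlapping frames, as in the paper
N_MFCC = 13                     # assumed number of MFCC coefficients

def frame_features(path, energy_quantile=0.5):
    """Split a clip into 25 ms frames, drop low-energy (silent) frames as a
    simple VAD, and return one MFCC vector per voiced frame."""
    y, _ = librosa.load(path, sr=SR)
    frames = librosa.util.frame(y, frame_length=FRAME_LEN,
                                hop_length=FRAME_LEN)        # non-overlapping
    energy = (frames ** 2).sum(axis=0)                       # per-frame energy
    voiced = frames[:, energy > np.quantile(energy, energy_quantile)]
    # Re-concatenate voiced frames and compute one MFCC vector per frame.
    mfcc = librosa.feature.mfcc(y=voiced.T.ravel(), sr=SR, n_mfcc=N_MFCC,
                                n_fft=FRAME_LEN, hop_length=FRAME_LEN,
                                center=False)
    return mfcc.T                                            # (n_frames, N_MFCC)

def speaker_dataset(clips):
    """clips: list of (wav_path, speaker_label). Returns per-frame feature
    vectors and labels so the SVM classifies individual frames."""
    X, y = [], []
    for path, label in clips:
        feats = frame_features(path)
        X.append(feats)
        y.extend([label] * len(feats))
    return np.vstack(X), np.array(y)

# Hypothetical file list standing in for the 370 curated YouTube samples.
clips = [("nvc/speaker01_clip01.wav", "speaker01"),
         ("nvc/speaker01_clip02.wav", "speaker01"),
         ("nvc/speaker02_clip01.wav", "speaker02")]

X, y = speaker_dataset(clips)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale")   # Gaussian (RBF) kernel SVM
clf.fit(X_tr, y_tr)
print("frame-level accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Frame-level classification is one reasonable reading of the described experiment; a clip-level decision could be obtained by majority voting over the frames of each 5-second sample.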
