Open Access
Dysarthric speaker identification with different degrees of dysarthria severity using deep belief networks
Author(s) - Farhadipour Aref, Veisi Hadi, Asgari Mohammad, Keyvanrad Mohammad Ali
Publication year - 2018
Publication title - ETRI Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.295
H-Index - 46
eISSN - 2233-7326
pISSN - 1225-6463
DOI - 10.4218/etrij.2017-0260
Subject(s) - dysarthria, speech recognition, task (project management), computer science, articulation (sociology), perceptron, artificial intelligence, mel frequency cepstrum, feature (linguistics), multilayer perceptron, frame (networking), artificial neural network, identification (biology), pattern recognition (psychology), feature extraction, psychology, engineering, philosophy, systems engineering, psychiatry, politics, law, political science, telecommunications, botany, biology, linguistics
Dysarthria is a degenerative disorder of the central nervous system that affects the control of articulation and pitch, and therefore the uniqueness of the sound produced by the speaker; dysarthric speaker recognition is consequently a challenging task. In this paper, a feature-extraction method based on deep belief networks is presented for identifying a speaker suffering from dysarthria. The effectiveness of the proposed method is demonstrated and compared with well-known Mel-frequency cepstral coefficient (MFCC) features. For classification, a multilayer perceptron neural network with two structures is proposed. Evaluations on the Universal Access (UA-Speech) database produced promising results that outperformed the baseline methods. In addition, speaker identification under both text-dependent and text-independent conditions is explored. The highest accuracy achieved with the proposed system is 97.3%.
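The abstract outlines a frame-level pipeline: baseline MFCC features, a deep belief network used as a feature extractor, and a multilayer perceptron for speaker classification. The sketch below illustrates that structure rather than the authors' exact setup: librosa and scikit-learn are assumptions (the paper does not name its toolkits), the file names are hypothetical, and a stack of two BernoulliRBMs stands in for full DBN pretraining.

```python
# Minimal sketch of the pipeline the abstract outlines. Assumptions: librosa
# for MFCC extraction, scikit-learn's BernoulliRBM stack as a stand-in for
# DBN pretraining, and hypothetical file names for the training utterances.
import numpy as np
import librosa
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def mfcc_frames(wav_path, n_mfcc=13):
    """Per-frame MFCC vectors for one utterance (the baseline features)."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, coeffs)

# Hypothetical training material: each frame is labeled with its speaker's id.
utterances = {"speaker00_word.wav": 0, "speaker01_word.wav": 1}
feats, labels = [], []
for path, spk in utterances.items():
    f = mfcc_frames(path)
    feats.append(f)
    labels.append(np.full(len(f), spk))
X, y = np.vstack(feats), np.concatenate(labels)

# Two stacked RBMs play the role of the DBN feature extractor: their hidden
# activations become the learned representation, which the MLP classifies.
model = Pipeline([
    ("scale", MinMaxScaler()),  # RBM inputs are expected to lie in [0, 1]
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)),
])
model.fit(X, y)

# Utterance-level identification: classify every frame of a test utterance,
# then take a majority vote over the per-frame predictions.
test_frames = mfcc_frames("unknown_utterance.wav")
speaker_id = np.bincount(model.predict(test_frames)).argmax()
print("predicted speaker id:", speaker_id)
```

The majority vote over frames is one common way to turn per-frame classifications into a single utterance-level speaker identity; the paper's exact decision rule may differ.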
