Convert Arabic Letters Voice into Gesture
Author(s) - Shaker K. Ali, Sabreen K. Saud
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1591/1/012018
Subject(s) - c4.5 algorithm, mel frequency cepstrum, arabic, speech recognition, computer science, support vector machine, gesture, artificial intelligence, cepstrum, pattern recognition (psychology), feature extraction, linguistics, naive bayes classifier, philosophy
This paper suggests an approach to support social communication between blind and mute people by converting the voices of the 28 Arabic letters (from أ to ي) into gestures (images). Features are extracted using Mel-frequency Cepstral Coefficients (MFCC), and the letters are classified with three algorithms: J48, KNN, and Naive Bayes (NB). Several features are extracted from the speech recordings of the Arabic letters. The dataset was collected by recording the voices of twenty different people; each person recorded ten utterances of each of the twenty-eight letters, giving a total of 5600 recordings (200 recordings per letter). MFCC features are extracted from the 5600 letter recordings, converting each voice into a signal and producing a feature vector that is later classified with the J48, KNN, and NB algorithms; the recorded signals may vary in duration or speaking speed. The experimental results show that the best speech-recognition accuracy is obtained with the J48 algorithm, with a performance ratio of 100%, while KNN achieves 94.023% and Naive Bayes 20.012%.
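As a rough illustration of the pipeline the abstract describes (MFCC feature extraction followed by classification), the following Python sketch uses librosa for MFCC and scikit-learn classifiers as stand-ins for the Weka implementations (J48 is Weka's name for the C4.5 decision tree). The directory layout, MFCC parameters, time-averaging of frames, and train/test split are assumptions for illustration only, not the authors' actual settings.

```python
# Hypothetical sketch of the described pipeline: MFCC features from recorded
# Arabic-letter voices, then classification with a decision tree, KNN, and NB.
import numpy as np
import librosa
from pathlib import Path
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier      # stand-in for Weka's J48 (C4.5)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

def mfcc_vector(wav_path, n_mfcc=13):
    """Load one recording and reduce its MFCC frames to a fixed-length vector
    by averaging over time (one common way to handle varying signal length)."""
    signal, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return mfcc.mean(axis=1)

# Assumed layout: arabic_letters/<letter_index>/<take>.wav (28 folders, 200 files each)
DATA_DIR = Path("arabic_letters")
recordings, labels = [], []
for wav in sorted(DATA_DIR.glob("*/*.wav")):
    recordings.append(wav)
    labels.append(wav.parent.name)

X = np.vstack([mfcc_vector(p) for p in recordings])
y = np.asarray(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)

for name, clf in [("J48-like decision tree", DecisionTreeClassifier()),
                  ("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: {acc:.3f}")
```

The reported accuracies (100% for J48, 94.023% for KNN, 20.012% for NB) come from the paper's own experiments; this sketch only shows the general shape of such a setup, not a reproduction of those results.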
