Open Access
A Deep Convolutional Neural Network Approach to Sign Alphabet Recognition
Author(s) - Uday Kumar Adusumilli, M. Sanjana, Surya Teja, K M Yashawanth, R. Raghavendra, B. Udayabalan
Publication year - 2021
Publication title - International Journal of Scientific Research in Science, Engineering and Technology
Language(s) - English
Resource type - Journals
eISSN - 2395-1990
pISSN - 2394-4099
DOI - 10.32628/ijsrset219430
Subject(s) - computer science, convolutional neural network, artificial intelligence, sign language, pixel, thresholding, classifier (uml), deep learning, color space, pattern recognition (psychology), focus (optics), alphabet, sign (mathematics), computer vision, image (mathematics), mathematics, mathematical analysis, philosophy, linguistics, physics, optics
In this paper, we present an application developed as a tool to help beginners learn sign language, with hand detection as part of the pipeline. Hands are located using a skin-color modelling technique: explicit thresholding in a skin-color space, where a predetermined range of skin colors separates hand pixels from background pixels. The segmented images were then fed to a convolutional neural network (CNN) to build the classifier, with training done in Keras. Under a uniform background and proper lighting conditions, the system achieved an overall test accuracy of 93.67%: 90.04% on ASL alphabet recognition, 93.44% on number recognition, and 97.52% on static word recognition, surpassing other studies of this type. The approach also lends itself to fast computation and real-time processing.

Deaf and mute people face social challenges because the communication barrier prevents them from accessing basic and essential services of life to which they are entitled on equal terms with the hearing community. Despite the many innovations incorporated into automatic sign language recognition, an adequate solution has yet to be reached because of several remaining challenges. To the best of our knowledge, the vast majority of existing work focuses on vision-based recognizers that derive complex feature descriptors from captured gesture images and apply classical pattern-analysis techniques. Although such methods can be effective for small sign vocabularies captured against complex, uncontrolled backgrounds, they are very limited when dealing with large vocabularies. This paper proposes a method for analyzing and representing hand gestures, the core component of sign language vocabulary, using a deep convolutional neural network (CNN) architecture. On two publicly available datasets (the NUS hand posture dataset and the American fingerspelling A dataset), the method recognized hand postures more accurately than prior approaches.
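The segmentation step described above lends itself to a short sketch. The following is a minimal illustration of explicit skin-color thresholding with OpenCV; the YCrCb bounds, the morphological cleanup, and the function name segment_hand are assumptions chosen for illustration, since the abstract does not publish the exact thresholds used.

```python
import cv2
import numpy as np

# Illustrative skin-color bounds in the YCrCb space (assumed values,
# not the thresholds reported in the paper).
SKIN_LOWER = np.array([0, 133, 77], dtype=np.uint8)
SKIN_UPPER = np.array([255, 173, 127], dtype=np.uint8)

def segment_hand(bgr_frame):
    """Return the input frame with non-skin (background) pixels zeroed out."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    # Pixels inside the predetermined skin-color range form the hand mask.
    mask = cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)
    # Morphological opening removes small background speckles from the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.bitwise_and(bgr_frame, bgr_frame, mask=mask)
```

Explicit per-channel thresholding like this is cheap enough to run per frame, which is consistent with the real-time processing claim above.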
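The classification stage can likewise be sketched with a small Keras model. The layer sizes, the input resolution, and the 36-class output head (26 letters plus 10 digits) are assumptions for illustration; the paper does not list its exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(num_classes=36, input_shape=(64, 64, 3)):
    """A small CNN over segmented gesture images (illustrative layout)."""
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# A call such as model.fit(images, labels, epochs=10) would then train
# the classifier on the segmented gesture images.
```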
