Open Access
Sign Language Recognition
Author(s) - Pariksheet Shende
Publication year - 2022
Publication title - Indian Scientific Journal of Research in Engineering and Management
Language(s) - English
Resource type - Journals
ISSN - 2582-3930
DOI - 10.55041/ijsrem11773
Subject(s) - computer science, sign language, artificial intelligence, gesture, RGB color model, autoencoder, sign (mathematics), segmentation, set (abstract data type), data set, frame (networking), test data, pattern recognition (psychology), gesture recognition, computer vision, natural language processing, deep learning, mathematics, mathematical analysis, telecommunications, philosophy, linguistics, programming language
This paper focuses on experimenting with different segmentation approaches and unsupervised learning algorithms to create an accurate sign language recognition model. To keep the problem tractable and obtain reasonable results, we limited our self-made dataset to up to 10 classes (letters) rather than all 26 letters. We collected 12,000 RGB images and their corresponding depth data using a Microsoft Kinect. Up to half of the data was fed into the autoencoder to extract features, while the other half was used for testing. Our trained model achieved a classification accuracy of 98% on a randomly selected set of test data. In addition to our work on static images, we also created a live demo version of the project, which runs at a little under 2 seconds per frame and classifies signed hand gestures from any person.
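As a rough illustration of the pipeline the abstract outlines — unsupervised feature learning with an autoencoder on one half of the data, then classification of the held-out half — here is a minimal sketch in PyTorch. The architecture, input resolution, latent size, and training loop are all illustrative assumptions; the paper does not publish its code or specify these details.

```python
# Hypothetical sketch of an autoencoder feature extractor for RGB sign
# images, in the spirit of the abstract. Input size (3x64x64), layer
# widths, and latent dimension are assumptions, not the authors' values.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: 3x64x64 RGB image -> latent feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Decoder: latent vector -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # z is the learned feature vector
        return self.decoder(z), z

def train_autoencoder(model, loader, epochs=10, lr=1e-3):
    """Unsupervised training: minimize reconstruction error; labels unused."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, _ in loader:     # labels ignored during feature learning
            recon, _ = model(images)
            loss = loss_fn(recon, images)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

The learned encodings `z` could then be fed to any downstream classifier (for example, k-nearest neighbours or a small MLP over the 10 letter classes) to produce the per-letter predictions the abstract reports; the paper does not say which classifier was used.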
