
Semantic Deep Learning to Translate Dynamic Sign Language
Author(s) -
Eman K. Elsayed,
Doaa Fathy
Publication year - 2021
Publication title -
International Journal of Intelligent Engineering and Systems
Language(s) - English
Resource type - Journals
eISSN - 2185-310X
pISSN - 1882-708X
DOI - 10.22266/ijies2021.0228.30
Subject(s) - computer science , gesture recognition , sign language , convolutional neural network , artificial intelligence , natural language processing , semantics (computer science)
Dynamic Sign Language Recognition aims to recognize the hand gestures of any person. Dynamic Sign Language Recognition systems face challenges in recognizing the semantics of hand gestures, which arise from personal differences in hand signs from one person to another. Real-life gesture video frames cannot be treated at the frame level the way static signs can. This research proposes a semantic translation system for dynamic hand gestures using deep learning and ontology. We used the proposed MSLO (Multi Sign Language Ontology) in the semantic translation step. Also, any user can retrain the system to make it a personal one. We used three-dimensional Convolutional Neural Networks followed by Convolutional Long Short-Term Memory to improve recognition accuracy in dynamic sign language recognition. We applied the proposed system to three dynamic gesture datasets of color videos. The average recognition accuracy was 97.4%. We performed all training and testing on a Graphics Processing Unit with the support of Google Colab. Using Google Colab in the training process decreased the average run time by about 87.9%. In addition to adding semantics to dynamic sign language translation, the proposed system achieves good results compared with some existing dynamic sign language recognition systems.
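The 3D-CNN-followed-by-ConvLSTM pipeline described above can be sketched as a small Keras model. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, kernel shapes, frame count, input resolution, and class count are all assumed for demonstration. Conv3D layers extract short-range spatio-temporal features from the video clip, and a ConvLSTM2D layer then models the longer-range temporal dynamics of the gesture before classification.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def build_gesture_model(num_classes=10, frames=16, size=64, channels=3):
    """Sketch of a 3D-CNN + ConvLSTM classifier for dynamic gestures.

    All hyperparameters here are assumptions for illustration,
    not values taken from the paper.
    """
    # Input: a clip of `frames` RGB frames of shape (size, size).
    inp = layers.Input(shape=(frames, size, size, channels))

    # 3D convolutions capture local spatio-temporal patterns.
    x = layers.Conv3D(32, (3, 3, 3), padding="same", activation="relu")(inp)
    x = layers.MaxPooling3D(pool_size=(1, 2, 2))(x)   # downsample space only
    x = layers.Conv3D(64, (3, 3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling3D(pool_size=(2, 2, 2))(x)   # downsample space + time

    # ConvLSTM models gesture dynamics across the remaining time steps,
    # returning only the final hidden state (a 2D feature map).
    x = layers.ConvLSTM2D(64, (3, 3), padding="same", return_sequences=False)(x)

    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)


model = build_gesture_model()
# A dummy clip: batch of 1, 16 frames, 64x64 RGB.
clip = np.zeros((1, 16, 64, 64, 3), dtype="float32")
probs = model.predict(clip, verbose=0)
print(probs.shape)
```

In this sketch, `return_sequences=False` collapses the clip into a single per-gesture representation; a per-class probability vector comes out of the softmax head. Retraining this head on a user's own recordings would correspond to the paper's idea of personalizing the system.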