Open Access
Enhanced Sign Language Transcription System via Hand Tracking and Pose Estimation
Author(s) - Jung-Ho Kim, Najoung Kim, Hancheol Park, Jong Cheol Park
Publication year - 2016
Publication title - Journal of Computing Science and Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.172
H-Index - 16
eISSN - 2093-8020
pISSN - 1976-4677
DOI - 10.5626/jcse.2016.10.3.95
Subject(s) - computer science, sign language, artificial intelligence, pose, lexicon, RGB color model, transcription (linguistics), scalability, natural language processing, American Sign Language, sign (mathematics), speech recognition, database, philosophy, linguistics, mathematical analysis, mathematics
In this study, we propose a new system for constructing parallel corpora for sign languages, which are generally under-resourced in comparison to spoken languages. To achieve scalability and accessibility in data collection and corpus construction, our system employs deep learning-based techniques and predicts depth information in order to perform pose estimation on hand regions extracted from video recorded with a single RGB camera. The estimated poses are then transcribed into expressions in SignWriting. We quantitatively evaluate the hand tracking and hand pose estimation modules of our system using the American Sign Language Image Dataset and the American Sign Language Lexicon Video Dataset. The evaluation results show that our transcription system has high potential for constructing a sizable sign language corpus from various types of video resources.
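The pipeline the abstract describes (track hands in RGB frames, lift the 2D keypoints to 3D by predicting depth, then map poses to SignWriting symbols) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the 21-keypoint layout, the constant-depth predictor, and the spread-based handshape rule are all hypothetical stand-ins for the paper's learned models.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

@dataclass
class HandPose:
    """21 hand keypoints as (x, y, z) in normalized image coordinates."""
    keypoints: List[Point3D]

def track_hand(frame) -> List[Point2D]:
    """Stage 1 (stand-in): locate the hand in an RGB frame and return
    21 2D keypoints. A real tracker would run a detection network."""
    return [(0.5, 0.5)] * 21  # dummy: all keypoints at the image center

def predict_depth(keypoints_2d: List[Point2D]) -> HandPose:
    """Stage 2 (stand-in): lift 2D keypoints to 3D by predicting depth,
    since a single RGB camera provides no depth channel. The paper uses
    a learned predictor; here every point gets a fixed z for illustration."""
    return HandPose([(x, y, 0.5) for (x, y) in keypoints_2d])

def transcribe(pose: HandPose) -> str:
    """Stage 3 (stand-in): map a 3D hand pose to a SignWriting-style
    handshape label using a trivial finger-spread heuristic."""
    xs = [p[0] for p in pose.keypoints]
    spread = max(xs) - min(xs)
    return "open-hand" if spread > 0.2 else "fist"

def transcribe_video(frames) -> List[str]:
    """Full pipeline: one handshape label per input frame."""
    return [transcribe(predict_depth(track_hand(f))) for f in frames]
```

The key design point the sketch mirrors is that depth is an intermediate *prediction* between 2D tracking and transcription, which is what lets the system operate on ordinary single-camera video rather than requiring RGB-D hardware.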
