Supporting Arabic Sign Language Recognition with Facial Expressions
Author(s) -
Ghada Dahy Fathy,
E. Emary,
Hesham N. Elmahdy
Publication year - 2015
Language(s) - English
Resource type - Conference proceedings
DOI - 10.15849/icit.2015.0024
Subject(s) - gesture, computer science, facial expression, sign language, preprocessor, artificial intelligence, classifier (UML), feature extraction, speech recognition, gesture recognition, Arabic, pattern recognition (psychology), natural language processing, Arabic numerals, feature (linguistics), computer vision, linguistics, philosophy
This paper presents an automatic translation model for the combination of the user's facial expressions and the manual-alphabet gestures of Arabic sign language. The facial expression part depends on the locations of the user's mouth, nose, and eyes. The manual-alphabet gesture part does not rely on gloves or visual markers to accomplish the recognition task; instead, it works directly on images of the signer's hands. Together, the two parts enable the user to interact with the environment in a natural way. The first part of the model deals with signs and consists of three phases: preprocessing, skin detection, and feature extraction. The second part deals with facial expressions and consists of two phases: face detection and facial expression tracking. The proposed model achieves 90% accuracy for facial expressions, using a minimum distance classifier (MDC) and an absolute difference classifier, and 99% for the signer's hand gestures.
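The abstract names a minimum distance classifier (MDC) and an absolute difference classifier as the decision stage but gives no implementation details. The following is a minimal sketch of a generic MDC, assuming Euclidean distance to per-class mean feature vectors; the class name, the feature-vector shapes, and the L1 ("cityblock") variant standing in for the paper's absolute difference classifier are assumptions for illustration, not the authors' code.

    import numpy as np

    class MinimumDistanceClassifier:
        """Generic MDC sketch: label a sample with the class whose
        mean feature vector (prototype) is nearest."""

        def __init__(self, metric="euclidean"):
            # "cityblock" (sum of absolute differences) is an assumed
            # stand-in for the paper's absolute difference classifier.
            self.metric = metric

        def fit(self, X, y):
            X, y = np.asarray(X, dtype=float), np.asarray(y)
            self.classes_ = np.unique(y)
            # One prototype per class: the mean of its training vectors.
            self.means_ = np.stack(
                [X[y == c].mean(axis=0) for c in self.classes_]
            )
            return self

        def predict(self, X):
            X = np.asarray(X, dtype=float)
            # Shape (n_samples, n_classes, n_features): sample-to-prototype gaps.
            diff = X[:, None, :] - self.means_[None, :, :]
            if self.metric == "euclidean":
                dists = np.linalg.norm(diff, axis=2)
            else:  # "cityblock": L1 / absolute-difference distance
                dists = np.abs(diff).sum(axis=2)
            # The nearest prototype determines the predicted class.
            return self.classes_[np.argmin(dists, axis=1)]

In this reading, hand-shape feature vectors produced by the feature extraction phase would be passed to fit, and new frames labeled with predict; constructing the classifier with metric="cityblock" gives the absolute-difference behavior under the same interface.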