Open Access
Human-Computer Chinese Sign Language Interaction System
Author(s) -
Xu Lin,
Gao Wen
Publication year - 2000
Publication title -
international journal of virtual reality
Language(s) - English
Resource type - Journals
eISSN - 2727-9979
pISSN - 1081-1451
DOI - 10.20870/ijvr.2000.4.3.2651
Subject(s) - sign language, computer science, gesture, gesture recognition, facial expression, computer graphics, body language, visual language, hearing impaired, speech recognition, human–computer interaction, artificial intelligence, communication, linguistics, psychology, audiology
The generation and recognition of body language are key technologies of VR. Sign language is a visual-gestural language used mainly by hearing-impaired people. In this paper, gesture and facial expression models are created using computer graphics and used to synthesize Chinese Sign Language (CSL), and from these a human-computer CSL interaction system is implemented. The system combines CSL synthesis and CSL recognition subsystems: hearing-impaired people wearing data-gloves can perform CSL, which is displayed on the computer screen in real time and translated into Chinese text. Hearing people can also use the system by entering Chinese text, which is translated into CSL and displayed on the computer screen. In this way, hearing-impaired people and hearing people can communicate with each other conveniently.
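The bidirectional pipeline described in the abstract can be illustrated with a minimal sketch. All names here (`GESTURE_TO_TEXT`, `recognize`, `synthesize`) are hypothetical placeholders, not the authors' actual implementation; the paper's real system classifies continuous data-glove input and renders animated gesture and facial-expression models rather than using a fixed lookup table.

```python
# Hypothetical mapping from recognized data-glove gesture IDs to Chinese text.
# In the actual system this would be the output of a gesture classifier.
GESTURE_TO_TEXT = {
    "gesture_hello": "你好",
    "gesture_thanks": "谢谢",
}

# Inverse mapping used by the synthesis direction (Chinese text -> CSL gesture).
TEXT_TO_GESTURE = {text: g for g, text in GESTURE_TO_TEXT.items()}


def recognize(gesture_id: str) -> str:
    """Recognition subsystem: translate a classified glove gesture into Chinese text."""
    return GESTURE_TO_TEXT.get(gesture_id, "<unknown>")


def synthesize(chinese_text: str) -> str:
    """Synthesis subsystem: map Chinese text to a CSL gesture to be animated on screen."""
    return TEXT_TO_GESTURE.get(chinese_text, "<no sign>")
```

A hearing-impaired user's gesture flows through `recognize` to produce text; a hearing user's typed text flows through `synthesize` to drive the on-screen CSL animation.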
