Open Access
Enhanced Sign Language Translation between American Sign Language and Indian Sign Language Using LLMs
Author(s) -
Malay Kumar,
S. Sarvajit Visagan,
Tanish Mahajan,
Anisha Natarajan,
P S Sreeja
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3595943
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
This research introduces a comprehensive framework designed to bridge the communication gap between American Sign Language (ASL) and Indian Sign Language (ISL). The proposed system employs a hybrid deep learning model for ASL gesture recognition, integrating a random forest classifier (RFC) and a convolutional neural network (CNN) to enhance accuracy. Recognized gestures are converted into text, which is refined by a fine-tuned Large Language Model (LLM) for contextual and grammatical accuracy. The corrected text is then synthesized into ISL gestures using RIFE-Net, a real-time intermediate flow estimation network, to generate smooth, natural gesture videos. The framework addresses key challenges such as gesture variability and the linguistic differences between ASL and ISL, achieving 93.0% accuracy for gesture recognition and 94.2% for text correction. Initial experimental results demonstrate real-time processing, averaging one gesture per second, with video output at 60 FPS. The system not only facilitates seamless communication between ASL and ISL users but also lays the groundwork for scaling to other sign language pairs. The results highlight its potential to improve accessibility and inclusion for the global Deaf community, paving the way for future research in multimodal sign language translation systems.
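The abstract's pipeline — fusing CNN and RFC predictions, then passing the recognized gloss through an LLM-style correction stage — can be sketched as below. This is a minimal illustration, not the authors' published implementation: the weighted-average late fusion, the fusion weight, the gesture label set, and the `correct_text` placeholder (standing in for the fine-tuned LLM) are all assumptions.

```python
# Hedged sketch of the recognition -> fusion -> correction stages.
# Fusion scheme, weights, and labels are illustrative assumptions.

GESTURES = ["hello", "thank", "you"]  # hypothetical label set


def fuse_predictions(cnn_probs, rfc_probs, w_cnn=0.6):
    """Late fusion of CNN and random-forest class probabilities
    via a weighted average (one plausible hybrid scheme)."""
    w_rfc = 1.0 - w_cnn
    return [w_cnn * c + w_rfc * r for c, r in zip(cnn_probs, rfc_probs)]


def recognize(cnn_probs, rfc_probs):
    """Return the gesture label with the highest fused score."""
    fused = fuse_predictions(cnn_probs, rfc_probs)
    return GESTURES[max(range(len(fused)), key=fused.__getitem__)]


def correct_text(tokens):
    """Placeholder for the fine-tuned LLM correction step; here it
    only capitalizes the gloss and adds terminal punctuation."""
    sentence = " ".join(tokens)
    return sentence[0].upper() + sentence[1:] + "."


if __name__ == "__main__":
    cnn = [0.7, 0.2, 0.1]  # per-class scores from the CNN
    rfc = [0.5, 0.3, 0.2]  # per-class scores from the RFC
    print(recognize(cnn, rfc))
    print(correct_text(["hello", "thank", "you"]))
```

In a full system, `correct_text` would be replaced by an actual LLM call, and the corrected sentence would drive RIFE-Net-based video synthesis of ISL gestures.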
