Open Access
Head & Hands Tunneling Pipeline for Enhancing Sign Language Recognition
Author(s) -
Ganzorig Batnasan,
Munkh-Erdene Otgonbold,
Qurban Ali Memon,
Timothy K. Shih,
Munkhjargal Gochoo
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3591123
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Sign Language Recognition (SLR) presents a significant challenge as a fine-grained, scene- and subject-invariant video classification task, primarily relying on hand gestures and facial expressions to convey meaning. Vision foundation models, such as Vision Transformers (ViTs), trained on general human action recognition datasets, often struggle to capture the nuanced features of signs. We highlight two main challenges: a) the loss of critical spatial features in the head and hand regions due to video downscaling during preprocessing, and b) the lack of sufficient domain-specific knowledge of sign gestures in ViTs. To tackle these, we propose a pipeline comprising our Head & Hands Tunneling (H&HT) preprocessor and a domain-specifically pre-trained 32-frame ViT classifier. The H&HT preprocessor, incorporating the MediaPipe pose predictor, maximizes the preservation of critical spatial details from the signer’s head and hands in raw sign language videos. When the ViT model is pre-trained on a domain-specific, large-scale SLR dataset, the two parts complement each other. As a result, the 32-frame H&HT pipeline achieves a Top-1 accuracy of 62.82% on the WLASL2000 benchmark, surpassing the performance of the 32-frame models and ranking second among the 64-frame models. We also provide benchmarking results on the ASL-Citizen dataset and two revised versions of the WLASL2000 dataset. All weights and code are available at this link.
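The abstract describes the H&HT preprocessor only at a high level, so the snippet below is a minimal sketch of how a MediaPipe-pose-based head-and-hands crop followed by 32-frame sampling might look, not the authors' implementation. The anchor landmarks (nose and wrists), the relative `margin` padding, the uniform temporal sampling, and the full-frame fallback are all assumptions introduced here for illustration.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose


def crop_head_and_hands(frame_bgr, pose, margin=0.15):
    """Crop a frame to a box around the signer's head and hands.

    Illustrative sketch only: the anchor landmarks (nose, wrists) and the
    relative `margin` are assumptions, not the paper's actual H&HT settings.
    """
    h, w = frame_bgr.shape[:2]
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return frame_bgr  # no person detected: fall back to the full frame

    lm = results.pose_landmarks.landmark
    anchors = [mp_pose.PoseLandmark.NOSE,
               mp_pose.PoseLandmark.LEFT_WRIST,
               mp_pose.PoseLandmark.RIGHT_WRIST]
    xs = [lm[a].x for a in anchors]
    ys = [lm[a].y for a in anchors]

    # Landmarks are normalized to [0, 1]; pad the bounding box and clamp to the frame.
    x0 = max(0, int((min(xs) - margin) * w))
    x1 = min(w, int((max(xs) + margin) * w))
    y0 = max(0, int((min(ys) - margin) * h))
    y1 = min(h, int((max(ys) + margin) * h))
    return frame_bgr[y0:y1, x0:x1]


def preprocess_video(path, num_frames=32):
    """Uniformly sample `num_frames` frames and apply the head-and-hands crop."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    # Uniform temporal sampling to the classifier's fixed clip length.
    step = max(1, len(frames) // num_frames)
    sampled = frames[::step][:num_frames]

    with mp_pose.Pose(static_image_mode=False) as pose:
        return [crop_head_and_hands(f, pose) for f in sampled]
```

In this sketch, the cropped frames would then be resized and stacked into the 32-frame clip consumed by the ViT classifier; the exact cropping geometry, temporal smoothing, and padding strategy used by H&HT are those described in the paper and released code, not the placeholders above.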
