Open Access
Comprehensive Model and Image-Based Recognition of Hand Gestures for Interaction in 3D Environments
Author(s) -
J. Bernardes,
Ricardo Nakamura,
Romero Tori
Publication year - 2011
Publication title -
International Journal of Virtual Reality
Language(s) - English
Resource type - Journals
eISSN - 2727-9979
pISSN - 1081-1451
DOI - 10.20870/ijvr.2011.10.4.2825
Subject(s) - gesture, gesture recognition, computer science, set (abstract data type), segmentation, artificial intelligence, computer vision, interaction technique, human–computer interaction, programming language
Interest in gesture-based interaction has been growing considerably, but most systems still limit the recognition of hand gestures to a small set of signs. We present a model for hand gestures that allows the definition of thousands of distinct signals based on the combination of a much smaller number of gesture components. This model encompasses several different kinds of gestures, both static and dynamic, performed with either one hand or both. The choice of gesture types and individual components is based not only on a review of the relevant literature but also on preliminary user studies, specifically for interaction in virtual and augmented environments and in entertainment and education applications. Gesture recognition based on this model is implemented as a finite state machine that incorporates the results of algorithms for the classification of each component, but is itself independent of those algorithms. The paper also describes an unencumbered gesture recognition system built using this model and recognition strategy, a single low-cost camera, and relatively simple image-based algorithms for segmentation and for classifying hand poses, movements, and location. Tests show the model allowed the definition of the desired gestures for three target applications: a commercial computer game and two educational 3D environments. Our system was able to recognize these user gestures, and several others, in real time. We also identified a need for the recognition of incomplete gestures and for a more robust segmentation strategy.
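The abstract's core idea, composing many gestures from a small set of classified components and recognizing them with a finite state machine that is independent of the component classifiers, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names, component labels, and matching rule are all assumptions, and the actual model supports far richer combinations.

```python
# Sketch of component-based gesture recognition via a finite state machine.
# All names and labels here are hypothetical; per-frame classifiers are
# assumed to output symbolic labels for pose, movement, and location.
from dataclasses import dataclass


@dataclass(frozen=True)
class Components:
    """One frame of classifier output (hypothetical label sets)."""
    pose: str       # e.g. "open", "fist", "point"
    movement: str   # e.g. "still", "left", "up"
    location: str   # e.g. "center", "top"


class GestureFSM:
    """Recognizes one gesture as an ordered sequence of component states.

    The machine never looks at images: it only consumes the symbolic
    labels produced by whatever classification algorithms are plugged in,
    mirroring the model's independence from those algorithms.
    """

    def __init__(self, name: str, steps: list[Components]):
        self.name = name
        self.steps = steps
        self.state = 0  # index of the next expected component combination

    def update(self, observed: Components) -> bool:
        """Feed one frame of classifier output; True when the gesture completes."""
        if observed == self.steps[self.state]:
            self.state += 1
            if self.state == len(self.steps):
                self.state = 0  # reset so the gesture can be recognized again
                return True
        return False


# Usage: a hypothetical "swipe left" defined from two component combinations.
swipe = GestureFSM("swipe-left", [
    Components("open", "still", "center"),
    Components("open", "left", "center"),
])
frames = [
    Components("fist", "still", "center"),  # no match, FSM stays at step 0
    Components("open", "still", "center"),  # matches step 0
    Components("open", "left", "center"),   # matches step 1, gesture complete
]
hits = [swipe.update(f) for f in frames]
print(hits)  # [False, False, True]
```

Because each gesture is just a short sequence over a small component vocabulary, defining thousands of distinct signals reduces to enumerating sequences rather than training a classifier per gesture.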
