
Trajectory‐based view‐invariant hand gesture recognition by fusing shape and orientation
Author(s) - Wu Xingyu, Mao Xia, Chen Lijiang, Xue Yuli
Publication year - 2015
Publication title - IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2014.0368
Subject(s) - artificial intelligence , computer vision , computer science , invariant (physics) , gesture recognition , orientation (vector space) , trajectory , gesture , pattern recognition (psychology) , mathematics , geometry , physics , astronomy , mathematical physics
Traditional studies in vision‐based hand gesture recognition remain rooted in view‐dependent representations, so users are forced to stand fronto‐parallel to the camera. View‐invariant gesture recognition addresses this problem by making the recognition result independent of viewpoint changes. However, in current work this view‐invariance comes at the price of conflating gesture patterns that have similar trajectory curve shapes but different semantic meanings; for example, the gesture ‘push’ can be mistaken for ‘drag’ from another viewpoint. To address this shortcoming, in this study the authors use a shape descriptor to extract the view‐invariant features of a three‐dimensional (3D) trajectory. Because the shape features are invariant to omnidirectional viewpoint changes, orientation features are then added to weight different rotation angles, so that similar trajectory shapes are better separated. The proposed method was evaluated on two databases: a popular Australian Sign Language database and a challenging Kinect Hand Trajectory database. Experimental results show that the proposed algorithm achieves a higher average recognition rate than state‐of‐the‐art approaches, and better distinguishes confusing gestures while meeting the view‐invariance condition.
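The core idea of a view‐invariant trajectory shape descriptor can be illustrated with differential‐geometric quantities such as curvature, which is unchanged under any rigid rotation of the 3D trajectory (i.e., a viewpoint change). The sketch below is only an illustration of this invariance property, not the authors' actual descriptor; it assumes the trajectory is given as an N×3 array of sampled 3D points:

```python
import numpy as np

def curvature_descriptor(traj):
    """Discrete curvature along an N x 3 trajectory.

    Curvature kappa = ||v x a|| / ||v||^3 depends only on the
    intrinsic shape of the curve, so it is invariant to rigid
    rotations and translations (viewpoint changes).
    """
    v = np.gradient(traj, axis=0)            # velocity estimate
    a = np.gradient(v, axis=0)               # acceleration estimate
    speed = np.linalg.norm(v, axis=1)
    return np.linalg.norm(np.cross(v, a), axis=1) / np.maximum(speed**3, 1e-12)

# Toy "gesture": a helical 3D trajectory with 200 samples.
t = np.linspace(0.0, 4.0 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

# Simulate a viewpoint change with a random orthogonal rotation.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
rotated = traj @ Q.T

# Shape features match despite the rotation.
assert np.allclose(curvature_descriptor(traj),
                   curvature_descriptor(rotated), atol=1e-8)
```

Note that this invariance is exactly the double‐edged sword the abstract describes: because such descriptors ignore orientation entirely, gestures like ‘push’ and ‘drag’ that trace similar curves become indistinguishable, which is why the proposed method re‐introduces orientation features as a weighting.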