Open Access
Automatic feature point detection and tracking of human actions in time-of-flight videos
Author(s) -
Xiaohui Yuan,
Longbo Kong,
Dengchao Feng,
Zhenchun Wei
Publication year - 2017
Publication title -
IEEE/CAA Journal of Automatica Sinica
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.277
H-Index - 41
eISSN - 2329-9274
pISSN - 2329-9266
DOI - 10.1109/JAS.2017.7510625
Subject(s) - computing and processing , communication, networking and broadcast technologies , general topics for engineers , robotics and control systems
Detecting feature points on the human body in video frames is a key step in tracking human movements. Existing methods leverage models of human pose and per-pixel classification of the body image, yet occlusion and robustness remain open challenges. In this paper, we present an automatic, model-free feature point detection and action tracking method using a time-of-flight camera. Our method automatically detects feature points for movement abstraction. To overcome errors caused by misdetection and occlusion, a refinement method is devised that uses the trajectory of the feature points to correct erroneous detections. Experiments were conducted on videos acquired with a Microsoft Kinect camera and on a publicly available video set, with comparisons against state-of-the-art methods. The results demonstrate that our proposed method delivers improved and reliable performance, with an average accuracy in the range of 90%. The trajectory-based refinement also proved effective, recovering detections with a success rate of 93.7%. Our method processed a frame in an average time of 71.1 ms.
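To illustrate the idea of trajectory-based refinement described in the abstract, the sketch below shows one simple way such a correction could work: a detection that deviates sharply from the point's recent trajectory is treated as a misdetection and replaced by a linear prediction from the preceding frames. This is a hypothetical illustration of the general technique, not the authors' actual algorithm; the function name, the `max_jump` threshold, and the linear-prediction rule are all assumptions for the example.

```python
def refine_trajectory(points, max_jump=30.0):
    """Refine a feature-point trajectory by rejecting implausible jumps.

    points   -- list of (x, y) detections for one feature point, one per frame
    max_jump -- maximum plausible deviation (in pixels) from the position
                linearly predicted by the two preceding frames (assumed value)

    Returns a new list in which detections that jump farther than
    `max_jump` from the prediction are replaced by the prediction itself.
    """
    refined = list(points)
    for t in range(2, len(refined)):
        # Constant-velocity prediction from the two previous (refined) frames.
        px = 2 * refined[t - 1][0] - refined[t - 2][0]
        py = 2 * refined[t - 1][1] - refined[t - 2][1]
        dx = refined[t][0] - px
        dy = refined[t][1] - py
        if (dx * dx + dy * dy) ** 0.5 > max_jump:
            # Deviation too large: treat as misdetection, use the prediction.
            refined[t] = (px, py)
    return refined
```

For example, a point moving steadily along the x-axis whose detector briefly fires on the wrong limb would have the outlying frame snapped back onto the predicted path, while well-behaved frames pass through unchanged.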
