Open Access
Multiple Kinect Sensor Fusion for Human Skeleton Tracking Using Kalman Filtering
Author(s) - Sungphill Moon, Youngbin Park, Dong Wook Ko, Il Hong Suh
Publication year - 2016
Publication title - International Journal of Advanced Robotic Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.394
H-Index - 46
eISSN - 1729-8814
pISSN - 1729-8806
DOI - 10.5772/62415
Subject(s) - computer science, computer vision, artificial intelligence, Kalman filter, sensor fusion, tracking system, ground truth, workspace, robot
Kinect sensors can achieve considerable skeleton tracking performance in a convenient and low-cost manner. However, a Kinect sensor often produces poor skeleton poses due to self-occlusion, a common problem among most vision-based sensing systems. A simple way to mitigate this problem is to place multiple Kinect sensors in a workspace and combine the measurements from the different sensors; this, however, raises a new issue known as the data fusion problem. In this research, we developed a human skeleton tracking system based on the Kalman filter framework, in which multiple Kinect sensors are used to correct inaccurate tracking data from any single Kinect sensor. Our contribution is a method that determines the reliability of each tracked 3D joint position and then combines the multiple observations according to their measurement confidence. We evaluate the proposed approach by comparing its output with ground truth obtained from a commercial marker-based motion-capture system.
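The fusion scheme the abstract describes can be sketched as a confidence-weighted Kalman update. The following is a minimal illustration only, not the authors' exact algorithm: it assumes a constant-position motion model, a per-axis scalar filter, and a hypothetical mapping from a sensor's confidence score to measurement noise (`r = base_r / confidence`).

```python
# Hedged sketch (NOT the paper's exact method): fuse 3D joint positions
# reported by several Kinect sensors, weighting each observation by a
# confidence score. Higher confidence -> lower assumed measurement noise.

class JointKalman:
    """Tracks one joint's 3D position with a constant-position model."""

    def __init__(self, init_pos, init_var=1.0, process_var=0.01):
        self.x = list(init_pos)      # state estimate (x, y, z)
        self.p = [init_var] * 3      # per-axis estimate variance
        self.q = process_var         # process noise variance (assumed)

    def predict(self):
        # Constant-position model: state unchanged, uncertainty grows.
        self.p = [p + self.q for p in self.p]

    def update(self, observations, base_r=0.05):
        """observations: list of (pos_xyz, confidence) pairs, one per sensor.

        Confidence is assumed to lie in (0, 1]; the noise mapping
        r = base_r / confidence is an illustrative choice, not the
        paper's reliability measure.
        """
        for pos, conf in observations:
            if conf <= 0.0:
                continue             # skip untracked/occluded joints
            r = base_r / conf        # less confident -> noisier measurement
            for i in range(3):
                k = self.p[i] / (self.p[i] + r)       # Kalman gain
                self.x[i] += k * (pos[i] - self.x[i])  # correct estimate
                self.p[i] *= (1.0 - k)                 # shrink variance
        return tuple(self.x)


kf = JointKalman((0.0, 0.0, 0.0))
kf.predict()
# Two sensors observe the same joint; the second is less trusted.
est = kf.update([((1.0, 1.0, 1.0), 0.9), ((0.9, 1.1, 1.0), 0.2)])
```

Sequential scalar updates like this are equivalent to a batch update with a diagonal measurement covariance, which is why processing one sensor at a time is a common simplification in multi-sensor fusion.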
