Open Access
Localized fusion of Shape and Appearance features for 3D Human Pose Estimation
Author(s) - Suman Sedai, Mohammed Bennamoun, Du Q. Huynh
Publication year - 2010
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.24.51
Subject(s) - pose , artificial intelligence , computer vision , fusion , computer science , sensor fusion , 3d pose estimation , pattern recognition (psychology)
This paper presents a learning-based method for combining shape and appearance feature types for 3D human pose estimation from single-view images. Our method clusters the 3D pose space into several modular regions and, in each region, learns regressors for both feature types together with their optimal fusion scenario. In this way the complementary information of the individual feature types is exploited, leading to improved pose estimation. We train and evaluate our method on a synchronized video and 3D motion dataset. Our experimental results show that the proposed feature-combination method yields more accurate pose estimation than either feature type used alone.
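The cluster-then-fuse recipe the abstract describes can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: the synthetic data, the Ridge regressors, the inverse-error fusion weights, and the choice of three clusters are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins for shape features, appearance features, and 3D poses.
n, d_shape, d_app, d_pose, k = 300, 20, 30, 12, 3
poses = rng.normal(size=(n, d_pose))
shape_feats = poses @ rng.normal(size=(d_pose, d_shape)) + 0.1 * rng.normal(size=(n, d_shape))
app_feats = poses @ rng.normal(size=(d_pose, d_app)) + 0.3 * rng.normal(size=(n, d_app))

# 1. Cluster the 3D pose space into k modular regions.
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(poses)
labels = km.labels_

# 2. In each region, train one regressor per feature type and derive
#    a fusion weight from each regressor's error in that region
#    (inverse-error weighting is an assumption, not the paper's scheme).
experts = {}
for c in range(k):
    idx = labels == c
    reg_s = Ridge(alpha=1.0).fit(shape_feats[idx], poses[idx])
    reg_a = Ridge(alpha=1.0).fit(app_feats[idx], poses[idx])
    err_s = np.mean((reg_s.predict(shape_feats[idx]) - poses[idx]) ** 2)
    err_a = np.mean((reg_a.predict(app_feats[idx]) - poses[idx]) ** 2)
    w_s = (1 / err_s) / (1 / err_s + 1 / err_a)
    experts[c] = (reg_s, reg_a, w_s)

def estimate_pose(x_shape, x_app, cluster):
    """Fuse the two feature-specific regressors for the given pose region."""
    reg_s, reg_a, w_s = experts[cluster]
    return w_s * reg_s.predict(x_shape[None]) + (1 - w_s) * reg_a.predict(x_app[None])

pred = estimate_pose(shape_feats[0], app_feats[0], labels[0])
```

The region-specific fusion weight is the key idea: where one feature type is more reliable (lower error in that pose cluster), its regressor dominates the combined estimate.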
