Open Access
Training‐based head pose estimation under monocular vision
Author(s) - Guo Zhizhi, Zhou Qianxiang, Liu Zhongqi, Liu Chunhui
Publication year - 2016
Publication title - IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2015.0457
Subject(s) - pose estimation , artificial intelligence , computer science , monocular vision , computer vision , 3D pose estimation , articulated body pose estimation , feature vector , pattern recognition
Although many 3D head pose estimation methods based on monocular vision can achieve an accuracy of 5°, reducing the number of required training samples and avoiding the use of hardware parameters as input features remain among the biggest challenges in the field of head pose estimation. To address these challenges, the authors propose an accurate head pose estimation method that can act as an extension to facial key point detection systems. The basic idea is to use the normalised distances between key points as input features, and to use ℓ1-minimisation to select a set of sparse training samples that reflect the mapping between the feature vector space and the head pose space. The linear combination of the head poses corresponding to these samples represents the head pose of the test sample. The experimental results show that the authors' method achieves an accuracy of 2.6° without any extra hardware parameters or subject-specific information. In addition, under large head movement and varying illumination, the authors' method is still able to estimate the head pose.
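The abstract's pipeline — normalised inter-keypoint distances as features, ℓ1-minimisation to pick a sparse set of training samples, and a linear combination of their poses as the estimate — can be sketched as below. This is a minimal illustration, not the authors' implementation: the ISTA solver, the regularisation weight `lam`, the pairwise-distance feature, and the direct weighted combination of poses are all assumptions introduced here for clarity.

```python
import numpy as np

def normalised_distances(keypoints):
    """Feature vector of pairwise distances between facial key points,
    divided by the largest distance so the feature is scale-invariant.
    keypoints: (K, 2) array of image coordinates."""
    pts = np.asarray(keypoints, dtype=float)
    K = len(pts)
    d = np.array([np.linalg.norm(pts[i] - pts[j])
                  for i in range(K) for j in range(i + 1, K)])
    return d / d.max()

def ista_l1(F, f, lam=1e-3, n_iter=1000):
    """Sparse coefficients x minimising 0.5*||F x - f||^2 + lam*||x||_1
    via ISTA (iterative soft-thresholding); stands in for the paper's
    l1-minimisation step, which is an assumption here."""
    L = np.linalg.norm(F, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(F.shape[1])
    for _ in range(n_iter):
        z = x - F.T @ (F @ x - f) / L    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def estimate_pose(F_train, poses_train, f_test, lam=1e-3):
    """F_train: (d, n) matrix whose columns are training feature vectors.
    poses_train: (n, 3) yaw/pitch/roll per training sample.
    Returns the linear combination of training poses weighted by the
    sparse coefficients (the combination rule is a simplifying assumption)."""
    x = ista_l1(F_train, f_test, lam)
    return poses_train.T @ x
```

When the test feature vector coincides with one training sample's feature vector, the sparse code concentrates on that single column, so the estimate collapses to that sample's pose — the degenerate case that makes the sparse-selection idea easy to check.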
