Integrated face detection, tracking, and pose estimation
Author(s) -
Masayuki Miyama,
Yoshio Matsuda
Publication year - 2012
Publication title -
Kanazawa University Repository for Academic Resources (DSpace), Kanazawa University
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1109/icosp.2012.6491760
Subject(s) - computer vision , artificial intelligence , computer science , pose estimation , initialization , face detection , jitter , facial motion capture , tracking , motion estimation , pattern recognition , facial recognition system
This paper proposes an integrated method for face detection, tracking, and head pose estimation. We use the de facto standard Viola-Jones method for face and face-part detection, and adopt affine motion model estimation for tracking. Combining the two enables efficient detection within a search area limited by tracking, and reduces false detections by keeping processing consistent with earlier results. In addition, the method re-initializes the position and size of the face and face parts in every frame; this re-initialization immediately corrects tracking jitter. The head pose is estimated from the coordinates of both eyes and the mouth, with the nose as the coordinate origin. The computational cost is low because only these three points are used. Experimental results show accurate head pose estimation: the average error is 6.50 deg in yaw and 7.65 deg in pitch. © 2012 IEEE
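The abstract describes estimating yaw and pitch from just three landmarks (both eyes and the mouth) expressed relative to the nose as the origin. The paper's exact formulation is not given here, so the following is only a minimal geometric sketch of the idea: horizontal eye asymmetry about the nose drives yaw, and vertical eye/mouth asymmetry about the nose drives pitch. The function name, the `asin` mapping, and the example coordinates are all illustrative assumptions, not the authors' method.

```python
import math

def estimate_head_pose(left_eye, right_eye, mouth):
    """Rough yaw/pitch estimate in degrees from three facial landmarks
    given as (x, y) offsets relative to the nose (the origin).

    Illustrative heuristic only -- not the paper's exact formulation.
    """
    # Yaw: when the head turns, the nose shifts toward one eye, so the
    # two eyes' horizontal offsets from the nose become asymmetric.
    lx, rx = left_eye[0], right_eye[0]
    eye_span = rx - lx                      # horizontal inter-eye distance
    asym = (rx + lx) / eye_span             # 0 for a frontal face
    yaw = math.degrees(math.asin(max(-1.0, min(1.0, asym))))

    # Pitch: when the head nods, the nose shifts vertically between the
    # eye line and the mouth, making their vertical offsets asymmetric.
    ey = (left_eye[1] + right_eye[1]) / 2.0  # eye-midpoint y (relative to nose)
    my = mouth[1]                            # mouth y (relative to nose)
    face_h = my - ey                         # vertical eye-to-mouth distance
    ratio = (my + ey) / face_h               # 0 when nose is vertically centered
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, ratio))))
    return yaw, pitch

# Frontal face: landmarks symmetric about the nose -> both angles are zero.
yaw, pitch = estimate_head_pose((-30.0, -25.0), (30.0, -25.0), (0.0, 25.0))
```

For a face turned to one side (e.g. `left_eye=(-40, -25)`, `right_eye=(20, -25)`), the eye asymmetry becomes nonzero and the sketch reports a correspondingly signed yaw. Using only three points keeps the per-frame cost negligible, which matches the low-cost claim in the abstract.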