Open Access
Head pose‐free gaze estimation using domain adaptation
Author(s) - Ahn Byungtae, Seo Minseok, Choi DongGeol
Publication year - 2021
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12247
Subject(s) - artificial intelligence , gaze , computer science , computer vision , convolutional neural network , grayscale , pose , adaptation (eye) , feature (linguistics) , head (geology) , domain (mathematical analysis) , pattern recognition (psychology) , human head , image (mathematics) , mathematics , mathematical analysis , linguistics , philosophy , physics , geomorphology , acoustics , optics , absorption (acoustics) , geology
Human gaze information has been widely used in various areas, such as medical diagnosis and human–computer interaction (HCI). This study proposes a head pose‐free 3D gaze estimation method using a deep convolutional neural network (DCNN). To infer gaze direction, only a small grayscale image is required, without any special devices such as an infrared (IR) illuminator or an RGBD sensor. A domain adaptation method to reduce the feature gap between real and synthetic image data is also proposed. Moreover, a novel synthetic dataset (SynFace) that contains head poses, gaze directions, and facial landmarks is established and released. The proposed method outperforms state‐of‐the‐art methods and achieves a mean error of less than 4°.
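The abstract does not disclose the network architecture or the specific domain adaptation scheme. As an illustration only, the sketch below shows one common way such a system can be structured: a small CNN that regresses a 3D gaze vector from a grayscale crop, combined with adversarial domain adaptation via a gradient-reversal layer so that features from real and synthetic images become indistinguishable. All layer sizes, input resolution (64×64), and class/function names here are hypothetical assumptions, not the authors' published design.

```python
# Illustrative sketch only; NOT the architecture from the paper.
# Assumes 1-channel 64x64 face/eye crops and a 3D unit gaze vector as the target.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GazeDANN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor over small grayscale crops (1 x 64 x 64 assumed).
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32 x 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16 x 16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Regression head: 3D gaze direction, without requiring a separate head-pose input.
        self.gaze_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
        # Domain classifier (real vs. synthetic); trained adversarially through the
        # gradient-reversal layer so the shared features become domain-invariant.
        self.domain_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        gaze = nn.functional.normalize(self.gaze_head(f), dim=1)
        domain_logits = self.domain_head(GradReverse.apply(f, lambd))
        return gaze, domain_logits


if __name__ == "__main__":
    model = GazeDANN()
    imgs = torch.randn(8, 1, 64, 64)   # a batch of grayscale crops
    gaze, dom = model(imgs)
    print(gaze.shape, dom.shape)       # (8, 3), (8, 2)
```

In this kind of setup, the mean angular error quoted in the abstract (under 4°) would be computed as the angle between the predicted and ground-truth gaze vectors, e.g. `torch.rad2deg(torch.acos((pred * gt).sum(dim=1).clamp(-1, 1)))` averaged over the test set; the exact evaluation protocol used in the paper is not given here.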
