Can Your Eyes Tell Me How You Think? A Gaze Directed Estimation of the Mental Activity
Author(s) -
Laura Florea,
Corneliu Florea,
Ruxandra Vrânceanu,
Constantin Vertan
Publication year - 2013
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.27.60
Subject(s) - computer science, artificial intelligence, computer vision, landmark, concatenation (mathematics), position (finance), iris recognition, gaze, luminance, perceptron, matching (statistics), iris (biosensor), segmentation, pattern recognition (psychology), mathematics, artificial neural network, biometrics, statistics, finance, combinatorics, economics
We investigate the possibility of estimating the cognitive process used by a person when addressing a mental challenge by following the Eye Accessing Cue (EAC) model from the Neuro-Linguistic Programming (NLP) theory [1]. This model, shown in figure 1, describes the eye movements that are not used for visual tasks (non-visual movements) and suggests that the direction of gaze, in such a case, can be an indicator of the internal representational system used by a person facing a given query. The actual EAC is identified from the relative position of the iris with respect to the eye socket (lid edges). Our approach is to determine the four limits of the eye socket (the inner and outer corners and the upper and lower lids) together with the iris center, and to subsequently analyze the identified region. The flowchart of the entire method is presented in figure 2.

The schematic of the method used to independently locate each eye landmark is shown in figure 3. Given the face square found by the Viola-Jones algorithm [4] and the eye centers located by the method from [3], we fuse information related to position, normalized luminance, template matching and shape constraining. For position and luminance we construct priors over the training database, while for template matching we describe a patch by the concatenation of its integral and edge projections on the horizontal and vertical directions. The score of how likely a patch is to be centered on the true landmark position is given by a multilayer perceptron. For the shape constraint, inspired by the CLM [2], we construct the probability density function in the eigenspace of the shapes in the training set. By ordering the landmarks according to a prior confidence (e.g. the eye outer corners are more reliable than the upper and lower eye boundaries) and keeping all points fixed except the current least reliable one, we build the likelihood of the various candidate positions for that landmark. This information is fused with that of the previous stages and the landmark positions are iteratively refined. The final landmark position is taken as the weighted center of mass of the convex combination between the initial stages and the shape likelihood.

To study the specifics of gaze direction we introduce the Eye-Chimera database, which comprises 1172 frontal face images grouped according to the 7 gaze directions, with a set of 5 points marked for each eye: the iris center and 4 points delimiting the bounding box.

Recognizing individual EACs. The recognition of the EAC case (gaze direction) is done by identifying the position of the iris center inside the eye socket, complemented by information from the interior of the delimited eye shape. The interior of the eye quadrilateral is described by integral projections normalized to 32 samples. For the actual recognition we trained a random forest that takes as input the EAC feature (landmark positions and integral features). We consider two types of recognition situations: three cases (looking left, right or straight ahead) and, respectively, all seven EAC cases.
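To make the template-matching stage concrete, the following Python/NumPy sketch builds the projection-based patch descriptor described above. It is an illustration rather than the authors' code: the function name is hypothetical, and the use of row/column means and of gradient magnitudes as the edge map are assumptions the abstract does not fix. The resulting vector is what the multilayer perceptron would score.

import numpy as np

def projection_descriptor(patch):
    # Describe a grayscale patch by concatenating its integral and edge
    # projections on the horizontal and vertical directions.
    p = patch.astype(np.float64)
    ip_rows = p.mean(axis=1)   # row-wise integral projection
    ip_cols = p.mean(axis=0)   # column-wise integral projection
    gy, gx = np.gradient(p)    # simple gradients as the edge map (assumption)
    e = np.hypot(gx, gy)
    ep_rows = e.mean(axis=1)   # row-wise edge projection
    ep_cols = e.mean(axis=0)   # column-wise edge projection
    return np.concatenate([ip_rows, ip_cols, ep_rows, ep_cols])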
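The shape constraint can be sketched as a Gaussian density in the PCA eigenspace of the training shapes; the Gaussian form and the number of retained modes are assumptions, since the abstract only specifies a probability density function in the eigenspace.

import numpy as np

def fit_shape_eigenspace(train_shapes, n_modes=8):
    # train_shapes: (N, 2L) array, each row a flattened set of L landmarks.
    X = np.asarray(train_shapes, dtype=np.float64)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = (s ** 2) / (len(X) - 1)  # eigenvalues of the shape covariance
    return mean, Vt[:n_modes], variances[:n_modes]

def shape_log_likelihood(shape, mean, modes, variances):
    # Gaussian log-density of a candidate shape, evaluated in the eigenspace.
    b = modes @ (np.asarray(shape, np.float64) - mean)  # eigenspace coefficients
    return -0.5 * np.sum(b ** 2 / variances + np.log(2.0 * np.pi * variances))

Candidate positions for the current least reliable landmark are scored by substituting them into the shape vector, with all other points fixed, and evaluating this likelihood.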
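One plausible reading of the final fusion step, written out with an assumed mixing weight \alpha and score maps S_{\mathrm{init}} (the fused position/luminance/template score) and S_{\mathrm{shape}} (the shape likelihood), neither of which is named in the abstract:

\hat{\mathbf{x}} = \frac{\sum_{\mathbf{x} \in \mathcal{N}} \mathbf{x}\, S(\mathbf{x})}{\sum_{\mathbf{x} \in \mathcal{N}} S(\mathbf{x})}, \qquad S(\mathbf{x}) = \alpha\, S_{\mathrm{init}}(\mathbf{x}) + (1 - \alpha)\, S_{\mathrm{shape}}(\mathbf{x}), \quad \alpha \in [0, 1],

where \mathcal{N} is the neighborhood of candidate positions around the current estimate.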
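For the recognition stage, a minimal sketch assuming scikit-learn's RandomForestClassifier as the random forest and linear interpolation for the 32-sample normalization; the helper names and the forest size are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def normalized_projections(eye_patch, n_samples=32):
    # Integral projections of the eye interior, each resampled to n_samples.
    p = eye_patch.astype(np.float64)
    rows, cols = p.mean(axis=1), p.mean(axis=0)
    grid = np.linspace(0.0, 1.0, n_samples)
    rows = np.interp(grid, np.linspace(0.0, 1.0, len(rows)), rows)
    cols = np.interp(grid, np.linspace(0.0, 1.0, len(cols)), cols)
    return np.concatenate([rows, cols])

def eac_feature(landmarks, eye_patch):
    # EAC feature: the 5 landmarks per eye (iris center + 4 box points)
    # plus the normalized integral projections of the eye interior.
    return np.concatenate([np.asarray(landmarks, np.float64).ravel(),
                           normalized_projections(eye_patch)])

# Training on features extracted from the Eye-Chimera annotations:
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)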
