Open Access
Observers’ cognitive states modulate how visual inputs relate to gaze control.
Author(s) -
Omid Kardan,
John M. Henderson,
Grigori Yourganov,
Marc G. Berman
Publication year - 2016
Publication title -
Journal of Experimental Psychology: Human Perception and Performance
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.691
H-Index - 148
eISSN - 1939-1277
pISSN - 0096-1523
DOI - 10.1037/xhp0000224
Subject(s) - eye movement , visual search , gaze contingency paradigm , psychology , gaze , cognitive psychology , memorization , eye tracking , cognition , artificial intelligence , visual perception , perception , computer science , neuroscience
Previous research has shown that eye movements depend both on the visual features of the environment and on the viewer's top-down knowledge. An important open question is the degree to which the viewer's visual goals modulate how the visual features of scenes guide eye movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks (search, memorization, and aesthetic judgment) on 135 scenes while their eye movements were tracked. Canonical correlation analyses showed that eye movements were reliably more related to low-level visual features at fixations during visual search than during aesthetic judgment or scene memorization. The relevance of individual visual features to eye movements also differed across tasks. This task-dependent modulation of the relationship between visual features and eye movements was further demonstrated with classification analyses, in which classifiers were trained to predict the viewing task from eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independently of the temporal and spatial properties of eye movements. When classifying across participants, edge density and saliency at fixations were as important as eye movements for successful prediction of task; entropy and hue were also significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were additional significant contributors. The canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye movements.
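The canonical correlation analysis the abstract refers to relates one set of variables (visual features at fixations) to another (eye-movement measures) by finding maximally correlated linear combinations of each. A minimal illustrative sketch is below; the variable names and the three features/measures are hypothetical stand-ins chosen for illustration, not the study's actual variables, and the QR-plus-SVD formulation is one standard way to compute canonical correlations.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two data matrices (rows = fixations),
    computed by column-centring, QR-decomposing each matrix, and taking the
    singular values of the product of the orthonormal factors."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)  # guard against tiny numerical overshoot

rng = np.random.default_rng(0)
n = 200  # hypothetical number of fixations
# Hypothetical low-level visual features at each fixation
# (e.g. edge density, saliency, entropy):
features = rng.normal(size=(n, 3))
# Hypothetical eye-movement measures partly driven by those features
# (e.g. fixation duration, saccade amplitude, fixation count):
eyemoves = 0.8 * features + 0.6 * rng.normal(size=(n, 3))

rho = canonical_correlations(features, eyemoves)
print(rho)  # canonical correlations, largest first
```

In the study's logic, a larger leading canonical correlation during search than during memorization or aesthetic judgment would indicate that eye movements track low-level visual features more tightly under that task goal.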
