Probing Bottom-up Processing with Multistable Images
Author(s) -
Ozgur E. Akman,
Richard A. Clement,
David S. Broomhead,
Sabira K. Mannan,
Ian R. Moorhead,
Hugh R. Wilson
Publication year - 2009
Publication title -
Journal of Eye Movement Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.25
H-Index - 20
ISSN - 1995-8692
DOI - 10.16910/jemr.1.3.4
Subject(s) - top-down and bottom-up processing, fixation (eye movement), computer science, weighting, visual processing, artificial intelligence, computer vision, eye movement, computation, image processing, pattern recognition (psychology), neuroscience, psychology, algorithm, image (mathematics), perception
The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli, because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of 8 subjects were recorded during free viewing of the Marroquin pattern, in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted with appropriate weighting of lateral connections in existing V1 models.
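The comparison described in the abstract can be caricatured in a short sketch: oriented, V1-like (Gabor) filtering of an image, combined with a tunable lateral-interaction weight, yields a saliency map whose peaks serve as candidate fixation targets for comparison against recorded fixations. This is a minimal illustration under assumed parameters, not the authors' model: the function names, the filter-bank settings, and in particular the use of a simple neighbourhood-pooling term as a stand-in for weighted lateral connections are all illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor filter, the standard model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def conv2_same(image, kernel):
    """Same-size 2-D convolution via FFT (circular boundary conditions)."""
    H, W = image.shape
    kh, kw = kernel.shape
    pad = np.zeros((H, W))
    pad[:kh, :kw] = kernel
    # roll the kernel so its centre sits at the origin (no output shift)
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))

def saliency_map(image, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                 lateral_weight=0.5, ksize=15, wavelength=6.0, sigma=3.0):
    """Pool rectified oriented-filter energy, then add a weighted term in
    which each unit is boosted by its neighbourhood's pooled activity --
    a crude, assumed stand-in for weighted lateral connections."""
    energy = np.zeros_like(image, dtype=float)
    for theta in thetas:
        k = gabor_kernel(ksize, wavelength, theta, sigma)
        energy += conv2_same(image, k) ** 2
    box = np.ones((ksize, ksize)) / ksize ** 2
    return energy + lateral_weight * conv2_same(energy, box)

def simulated_fixations(salmap, n=5):
    """Take the n most salient pixel locations as simulated fixation targets."""
    flat = np.argsort(salmap, axis=None)[::-1][:n]
    return np.column_stack(np.unravel_index(flat, salmap.shape))

# Toy demo: a vertical grating patch in an otherwise blank image.
img = np.zeros((64, 64))
_, xx = np.mgrid[0:16, 0:16]
img[24:40, 24:40] = np.cos(2 * np.pi * xx / 6.0)
sal = saliency_map(img)
fix = simulated_fixations(sal, n=3)
```

In a study like the one above, the simulated fixation coordinates would be compared statistically with observers' recorded fixations; here they simply fall on the only salient structure in the toy image.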