Predicting visual search performance by quantifying stimuli similarities
Author(s) -
Tamar Avraham,
Yaffa Yeshurun,
Michael Lindenbaum
Publication year - 2008
Publication title -
Journal of Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.126
H-Index - 113
ISSN - 1534-7362
DOI - 10.1167/8.4.9
Subject(s) - visual search, artificial intelligence, homogeneity (statistics), computer science, orientation (vector space), pattern recognition (psychology), visual attention, similarity (geometry), psychology, machine learning, perception, mathematics, neuroscience, image (mathematics), geometry
The effect of distractor homogeneity and target-distractor similarity on visual search was previously explored under two models designed for computer vision. We extend these models here to account for internal noise and to evaluate their ability to predict human performance. In four experiments, observers searched for a horizontal target among distractors of different orientation (orientation search; Experiments 1 and 2) or a gray target among distractors of different color (color search; Experiments 3 and 4). Distractor homogeneity and target-distractor similarity were systematically manipulated. We then tested our models' ability to predict the search performance of human observers. Our models' predictions were closer to human performance than those of other prominent quantitative models.
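The abstract describes two factors that drive search difficulty: how similar the target is to the distractors, and how heterogeneous the distractors are among themselves. The sketch below is not the authors' models (which extend two computer-vision search models with internal noise); it is a minimal, hypothetical Python illustration of how those two quantities, plus additive internal noise, could be combined into a single difficulty score. The function name, the particular scoring formula, and the noise_sd parameter are all assumptions introduced here for illustration.

```python
import numpy as np

def predicted_search_difficulty(target, distractors, noise_sd=0.05, rng=None):
    """Toy difficulty score: search gets harder as target-distractor
    similarity rises and as distractor heterogeneity rises.

    target:      scalar feature value (e.g., orientation in degrees)
    distractors: sequence of distractor feature values
    noise_sd:    std. dev. of additive internal noise on each feature
    """
    rng = np.random.default_rng() if rng is None else rng
    distractors = np.asarray(distractors, dtype=float)

    # Perturb every feature value with Gaussian internal noise.
    noisy_target = target + rng.normal(0.0, noise_sd)
    noisy_distractors = distractors + rng.normal(0.0, noise_sd, distractors.shape)

    # Target-distractor similarity: smallest feature distance to the target.
    td_distance = np.min(np.abs(noisy_distractors - noisy_target))

    # Distractor heterogeneity: spread of the distractor features.
    dd_spread = np.std(noisy_distractors)

    # Harder when the target resembles some distractor (small td_distance)
    # and when the distractors are heterogeneous (large dd_spread).
    return (1.0 + dd_spread) / (td_distance + 1e-9)

# Example: a horizontal target (0 deg) among tilted distractors.
easy = predicted_search_difficulty(0.0, [30, 30, 30, 30])  # homogeneous, dissimilar
hard = predicted_search_difficulty(0.0, [5, 12, 20, 28])   # heterogeneous, one similar
print(easy < hard)  # expected: True
```

The score rises as the nearest distractor approaches the target and as distractor spread grows, which matches the qualitative pattern the abstract manipulates; the paper's actual models quantify these stimulus similarities differently and were fit against the human orientation- and color-search data.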