Open Access
Memory models of visual search - searching in-the-head vs. in-the-world?
Author(s) -
Hansjörg Neth,
Wayne D. Gray,
Christopher W. Myers
Publication year - 2010
Publication title -
Journal of Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.126
H-Index - 113
ISSN - 1534-7362
DOI - 10.1167/5.8.417
Subject(s) - visual search, alphanumeric, working memory, perception, cognitive psychology, computer science, cognition, psychology, eye movement, stimulus (psychology), visual short-term memory, cognitive science, computational model, artificial intelligence, neuroscience, programming language
Visual search takes place whenever we are looking for something. But when a stimulus has been visually encoded on a previous occasion, memory processes can supplement or compete with eye movements during search. While previous research has mostly focused on the perceptual features that allow us to identify a target among distractors in single-shot searches (Wolfe, 1998, Psych. Science), recent findings have highlighted the contributions of visual short-term memory (VSTM) to search processes (Alvarez & Cavanagh, 2004, Psych. Science). We present a paradigm of repeated serial search that attempts to illuminate the potential roles of working memory (Anderson & Matessa, 1997, Psych. Review) and VSTM in visual search.

A series of simple process models exemplifies various ways in which memory for items and/or locations can facilitate or obviate search. Within a cognitive engineering approach, we developed multiple computational models that allowed us to explore and explicate the consequences of assumptions about VSTM capacity and organization, and about the interaction between long-term memory and VSTM. Each model yielded a distinct performance profile based on the sequential order of target stimuli.

We investigated our model predictions in an experiment that employed a serial search paradigm. Each of 10 targets (bearing alphanumeric labels) had to be found on average twice per trial. As some items could be mere distractors, and the next target was announced (auditorily) only when the current target was found, participants could not anticipate the target sequence. Detailed comparisons between search performance, eye-movement data, and our computational models show clear evidence of memory processes for both target and distractor information, both within a single search and across multiple searches. In addition, a between-subjects manipulation of target visibility shows that the use of knowledge in-the-head (i.e., memory) increases as the perceptual-motor costs of visual access increase. The work reported was supported by a grant from the Air Force Office of Scientific Research, AFOSR #F49620-031-0143.
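The trade-off the abstract describes, between searching in-the-world (inspecting items by eye) and in-the-head (retrieving known items from a limited-capacity VSTM), can be sketched as a toy process model. This is not the authors' model; the capacity parameter, recency-based decay, and fixation-count cost measure are illustrative assumptions only.

```python
import random

def search_cost(targets, display, vstm_capacity=4):
    """Toy repeated-serial-search simulation (illustrative, not the
    published model): returns the number of item inspections needed
    to find each target, given a limited-capacity VSTM that stores
    recently inspected items, most recent first."""
    vstm = []   # recency-ordered store of known item labels
    costs = []
    for target in targets:
        if target in vstm:
            # Knowledge in-the-head: retrieve the item from memory.
            costs.append(1)
        else:
            # Knowledge in-the-world: inspect unknown items in random order.
            unknown = [item for item in display if item not in vstm]
            random.shuffle(unknown)
            fixations = unknown.index(target) + 1
            costs.append(fixations)
            # Encode every inspected item into VSTM; oldest entries decay.
            for item in unknown[:fixations]:
                if item in vstm:
                    vstm.remove(item)
                vstm.insert(0, item)
            vstm[:] = vstm[:vstm_capacity]
        # Refresh the just-found target at the front of VSTM.
        if target in vstm:
            vstm.remove(target)
        vstm.insert(0, target)
        vstm[:] = vstm[:vstm_capacity]
    return costs
```

Under this sketch, raising the cost of visual access (or lowering VSTM capacity toward zero) shifts performance toward pure in-the-world search, while a repeated target that is still held in VSTM is found in a single step, mirroring the memory effects reported across successive searches.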
