An image-computable model of how endogenous and exogenous attention differentially alter visual perception
Author(s) - Michael Jigo, David J. Heeger, Marisa Carrasco
Publication year - 2021
Publication title - Proceedings of the National Academy of Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 5.011
H-Index - 771
eISSN - 1091-6490
pISSN - 0027-8424
DOI - 10.1073/pnas.2106436118
Subject(s) - visual perception , spatial attention , cognitive psychology , computer science , artificial intelligence , psychology , cognitive science , computer vision , neuroscience
Significance - Visual attention alters perception. Endogenous (voluntary) and exogenous (involuntary) spatial attention shape perception by prioritizing some visual information and ignoring other information. Each attention type induces different perceptual consequences. Endogenous attention flexibly optimizes the visibility of fine-grained or coarse-scale visual features, whereas exogenous attention inflexibly enhances fine details, even when detrimental to perception. The computations that govern these differences are unknown. We developed a computational model that predicts human behavior and these distinct attentional effects directly from the visual displays used in previous experiments. At the model’s core, each attention type adjusts sensory processing with different selectivity across levels of visual detail. The model explains several phenomena, including the uniform improvements induced by endogenous attention and the visual improvements and impairments induced by exogenous attention.
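The core idea, that each attention type applies a gain with different selectivity across spatial frequency, can be sketched in a toy form. The code below is not the authors' published model; it is a minimal illustration assuming Gaussian gain profiles over log spatial frequency, with hypothetical parameters chosen so that endogenous attention is broadly tuned (flexible across scales) while exogenous attention is narrowly tuned and shifted toward high spatial frequencies.

```python
import numpy as np

def attention_gain(freqs, center, bandwidth, amplitude, baseline=1.0):
    """Illustrative Gaussian gain profile over log2 spatial frequency.

    freqs     : spatial frequencies in cycles/deg
    center    : peak of the gain profile, in log2(cycles/deg)
    bandwidth : width (std. dev.) of the profile in octaves
    amplitude : maximum boost above the unattended baseline
    """
    log_f = np.log2(freqs)
    return baseline + amplitude * np.exp(-((log_f - center) ** 2)
                                         / (2.0 * bandwidth ** 2))

# Spatial frequencies from 0.5 to 16 cycles/deg (log-spaced)
freqs = np.logspace(-1, 4, 64, base=2)

# Hypothetical parameters (not fitted values from the paper):
# endogenous gain is broad, so enhancement is nearly uniform across scales;
# exogenous gain is narrow and centered on higher spatial frequencies,
# which can impair performance when coarse features matter.
endo_gain = attention_gain(freqs, center=1.0, bandwidth=3.0, amplitude=0.5)
exo_gain = attention_gain(freqs, center=3.0, bandwidth=0.8, amplitude=0.5)
```

Under these assumed parameters, the exogenous profile peaks at a higher spatial frequency than the endogenous one, and the endogenous profile varies much less across the frequency range, mirroring the uniform versus selective effects described above.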