Two‐stage model in perceptual learning: toward a unified theory
Author(s) -
Shibata Kazuhisa,
Sagi Dov,
Watanabe Takeo
Publication year - 2014
Publication title -
Annals of the New York Academy of Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.712
H-Index - 248
eISSN - 1749-6632
pISSN - 0077-8923
DOI - 10.1111/nyas.12419
Subject(s) - perception, stimulus (psychology), cognitive psychology, visual field, visual perception, psychology, neuroscience, perceptual learning, feature (linguistics), generalization, computer science, artificial intelligence, mathematical analysis, linguistics, philosophy, mathematics
Training on, or exposure to, a visual feature leads to long‐term improvement in performance on visual tasks that employ this feature. Such performance improvements, and the processes that govern them, are called visual perceptual learning (VPL). As an ever‐greater volume of research accumulates in the field, we have reached a point where a unifying model of VPL should be sought. A new wave of research findings has exposed diverging results along three major axes in VPL: specificity versus generalization of VPL, lower versus higher brain locus of VPL, and task‐relevant versus task‐irrelevant VPL. In this review, we propose a new theoretical model that suggests the involvement of two different stages in VPL: a low‐level, stimulus‐driven stage, and a higher‐level stage dominated by task demands. If experimentally verified, this model would not only constructively unify the currently divergent results in the VPL field, but would also lead to a significantly better understanding of visual plasticity, which may, in turn, inform interventions to ameliorate diseases affecting vision and other pathological or age‐related visual and nonvisual declines.
