Open Access
Visual perceptual learning generalizes to untrained effectors
Author(s) - Asmara Awada, Shahab Bakhtiari, Christopher C. Pack
Publication year - 2021
Publication title - Journal of Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.126
H-Index - 113
ISSN - 1534-7362
DOI - 10.1167/jov.21.3.10
Subject(s) - perception, saccade, stimulus (psychology), neuroscience, cognitive psychology, perceptual learning, visual perception, psychology, computer science, visual search, task (project management), artificial intelligence, eye movement, management, economics
Visual perceptual learning (VPL) is an improvement in visual function following training. Although the practical utility of VPL was once thought to be limited by its specificity to the precise stimuli used during training, more recent work has shown that such specificity can be overcome with appropriate training protocols. In contrast, relatively little is known about the extent to which VPL exhibits motor specificity. Previous studies have yielded mixed results. In this work, we have examined the effector specificity of VPL by training observers on a motion discrimination task that maintains the same visual stimulus (drifting grating) and task structure, but that requires different effectors to indicate the response (saccade vs. button press). We find that, in these conditions, VPL transfers fully between a manual and an oculomotor response. These results are consistent with the idea that VPL entails the learning of a decision rule that can generalize across effectors.
