
Sequence learning recodes cortical representations instead of strengthening initial ones
Author(s) - Kristjan Kalm, Dennis Norris
Publication year - 2021
Publication title - PLOS Computational Biology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.628
H-Index - 182
eISSN - 1553-7358
pISSN - 1553-734X
DOI - 10.1371/journal.pcbi.1008969
Subject(s) - sequence learning , associative learning , recall , representation , artificial neural network , machine learning , artificial intelligence , cognitive psychology , psychology , computer science
We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions about how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as are common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.
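The contrast between the two accounts can be illustrated with a toy sketch (this is an illustrative assumption, not the authors' actual models): an associative learner that accumulates pairwise transition weights confuses two overlapping sequences built from the same items, whereas a recoder that stores each sequence as its own chunk keeps both intact.

```python
# Toy sketch (hypothetical, not the paper's models): pairwise
# associations vs. chunk recoding for two overlapping sequences.
from collections import defaultdict

def train_associative(sequences):
    """Accumulate item-to-item transition weights across sequences."""
    weights = defaultdict(float)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            weights[(a, b)] += 1.0
    return weights

def recall_associative(weights, start, length):
    """Greedily follow the strongest outgoing association."""
    out = [start]
    for _ in range(length - 1):
        nxt = [(w, b) for (a, b), w in weights.items() if a == out[-1]]
        if not nxt:
            break
        out.append(max(nxt)[1])
    return "".join(out)

def train_chunks(sequences):
    """Recoding sketch: each whole sequence becomes one stored chunk."""
    return {i: seq for i, seq in enumerate(sequences)}

seqs = ["ABCD", "ACBD"]  # overlapping: same items, different orders
w = train_associative(seqs)
# The blended pairwise weights recover neither original order:
print(recall_associative(w, "A", 4))        # a mixture of the two sequences
print(sorted(train_chunks(seqs).values()))  # chunk store keeps both orders
```

Here the associative store superimposes the transitions of both sequences, so greedy recall from "A" yields a blend that matches neither input, while the chunk store trivially separates them. This is one way to picture why overlapping sequences overwhelm a purely associative scheme.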