iMinerva: A Mathematical Model of Distributional Statistical Learning
Author(s) - Thiessen Erik D., Pavlik Philip I.
Publication year - 2013
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/cogs.12011
Subject(s) - statistical learning , computer science , set (abstract data type) , statistical model , artificial intelligence , natural language processing , domain (mathematical analysis) , word (group theory) , linguistics , mathematics , mathematical analysis , philosophy , programming language
Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: they are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in many different aspects of language learning, including the learning of phonetic categories, the use of phonemic distinctions in word learning, and the discovery of non-adjacent relations. On the surface, these different aspects share few commonalities. Despite this, we demonstrate that the same computational framework can account for learning in all of these tasks. These results support two conclusions. The first is that much, and perhaps all, of distributional statistical learning can be explained by the same underlying set of processes. The second is that some aspects of language can be learned due to domain-general characteristics of memory.
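The abstract frames distributional learning as a by-product of domain-general memory, and the model's name points to the MINERVA tradition of exemplar memory models. The sketch below is not the published iMinerva implementation; it is a minimal MINERVA 2-style exemplar memory (traces activated by the cube of their similarity to a probe, with the echo as the activation-weighted sum of traces) intended only to illustrate how frequency and variability of stored exemplars can shape retrieval. The feature coding, learning-rate parameter, and toy input distributions are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a MINERVA 2-style exemplar memory, the model family that
# iMinerva's name points to. Parameter values and the toy input below are
# illustrative assumptions, not the published iMinerva parameters.

rng = np.random.default_rng(0)

def encode(exemplars, learning_rate=0.7):
    """Store each exemplar as a trace; each feature is encoded with
    probability `learning_rate` (unencoded features are set to 0)."""
    exemplars = np.asarray(exemplars, dtype=float)
    mask = rng.random(exemplars.shape) < learning_rate
    return exemplars * mask

def echo(probe, memory):
    """Probe memory: each trace is activated by the cube of its similarity
    to the probe; the echo is the activation-weighted sum of traces."""
    probe = np.asarray(probe, dtype=float)
    # Similarity: dot product normalized by the number of features that are
    # nonzero in either the probe or the trace.
    relevant = (memory != 0) | (probe != 0)
    n_relevant = np.maximum(relevant.sum(axis=1), 1)
    similarity = (memory * probe).sum(axis=1) / n_relevant
    activation = similarity ** 3          # cubing preserves sign, sharpens tuning
    intensity = activation.sum()          # echo intensity: overall familiarity
    content = activation @ memory         # echo content: retrieved pattern
    return intensity, content

# Toy input: many exemplars drawn from one distribution and a few from
# another, mimicking a skewed frequency distribution in the input.
category_a = np.sign(rng.normal(loc=0.8, size=(40, 10)))
category_b = np.sign(rng.normal(loc=-0.8, size=(5, 10)))
memory = encode(np.vstack([category_a, category_b]))

# A partial probe resembling the frequent category: the echo content
# completes it toward the central tendency of that distribution.
probe = np.zeros(10)
probe[:3] = 1.0
intensity, content = echo(probe, memory)
print("echo intensity:", round(float(intensity), 3))
print("echo content (first 5 features):", np.round(content[:5], 2))
```

In this toy run, the more frequent and less variable distribution dominates the echo, which is the sense in which an exemplar memory of this kind responds to distributional regularities without any learning mechanism dedicated to language.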