Learnability of Embedded Syntactic Structures Depends on Prosodic Cues
Author(s) - Mueller Jutta L., Bahlmann Jörg, Friederici Angela D.
Publication year - 2010
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/j.1551-6709.2009.01093.x
Subject(s) - learnability, syllable, grammar, computer science, natural language processing, artificial intelligence, linguistics, speech recognition, philosophy
The ability to process center‐embedded structures has been claimed to represent a core function of the language faculty. Recently, several studies have investigated the learning of center‐embedded dependencies in artificial grammar settings. Yet some of the results seem to question the learnability of these structures in artificial grammar tasks. Here, we tested under which exposure conditions learning of center‐embedded structures in an artificial grammar is possible. We used naturally spoken syllable sequences and varied the presence of prosodic cues. The results suggest that mere distributional information does not suffice for successful learning. Prosodic cues marking the boundaries of the major relevant units, however, can lead to learning success. Thus, our data are consistent with the hypothesis that center‐embedded syntactic structures can be learned in artificial grammar tasks if language‐like acoustic cues are provided.