Learning Diphone‐Based Segmentation
Author(s) - Daland, Robert; Pierrehumbert, Janet B.
Publication year - 2010
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/j.1551-6709.2010.01160.x
Subject(s) - phrase , computer science , segmentation , natural language processing , artificial intelligence , bayes' theorem , word learning , speech recognition , machine learning , bayesian probability , linguistics , vocabulary
Abstract
This paper reconsiders the diphone‐based word segmentation model of Cairns, Shillcock, Chater, and Levy (1997) and Hockema (2006), previously thought to be unlearnable. A statistically principled learning model is developed using Bayes’ theorem and reasonable assumptions about infants’ implicit knowledge. The ability to recover phrase‐medial word boundaries is tested using phonetic corpora derived from spontaneous interactions with children and adults. The (unsupervised and semi‐supervised) learning models are shown to exhibit several crucial properties. First, only a small amount of language exposure is required to achieve the model’s ceiling performance, equivalent to between 1 day and 1 month of caregiver input. Second, the models are robust to variation, both in the free parameter and the input representation. Finally, both the learning and baseline models exhibit undersegmentation, argued to have significant ramifications for speech processing as a whole.
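The core statistic behind diphone‐based segmentation is the probability that a word boundary falls inside a given diphone (a pair of adjacent phones). The sketch below is a minimal illustrative toy, not the model from the paper: it estimates P(boundary | diphone) from a corpus with boundaries already marked (the paper's learning models instead derive these statistics without explicit boundary supervision, via Bayes' theorem), then segments new phone strings by inserting a boundary wherever the estimate exceeds a threshold. All function names and the threshold value are illustrative assumptions.

```python
from collections import defaultdict

def train_diphone_model(segmented_utterances):
    """Estimate P(boundary | diphone) from utterances given as lists of words,
    where each word is a list of phone symbols.
    NOTE: supervised toy version for illustration only."""
    boundary_count = defaultdict(int)  # diphones straddling a word boundary
    total_count = defaultdict(int)     # all diphone occurrences
    for words in segmented_utterances:
        phones = [p for w in words for p in w]
        # positions j such that a word boundary falls between phones[j] and phones[j+1]
        ends, i = set(), -1
        for w in words[:-1]:
            i += len(w)
            ends.add(i)
        for j in range(len(phones) - 1):
            d = (phones[j], phones[j + 1])
            total_count[d] += 1
            if j in ends:
                boundary_count[d] += 1
    return {d: boundary_count[d] / total_count[d] for d in total_count}

def segment(phones, model, threshold=0.5):
    """Insert a word boundary wherever P(boundary | diphone) > threshold.
    Unseen diphones default to probability 0 (no boundary), which biases
    the segmenter toward undersegmentation."""
    words, current = [], [phones[0]]
    for j in range(len(phones) - 1):
        if model.get((phones[j], phones[j + 1]), 0.0) > threshold:
            words.append(current)
            current = []
        current.append(phones[j + 1])
    words.append(current)
    return words

# Toy usage: one "utterance" of two words, then resegment the same phone string.
corpus = [[["d", "o", "g"], ["r", "a", "n"]]]
model = train_diphone_model(corpus)
print(segment(["d", "o", "g", "r", "a", "n"], model))
# → [['d', 'o', 'g'], ['r', 'a', 'n']]
```

Treating unseen diphones as non-boundaries mirrors the undersegmentation tendency the abstract describes: the model only splits where it has positive evidence for a boundary.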