Open Access
Lexical Strata and Phonotactic Perplexity Minimization
Author(s) - Eric Rosen
Publication year - 2021
Publication title - Proceedings of the Annual Meetings on Phonology
Language(s) - English
Resource type - Journals
ISSN - 2377-3324
DOI - 10.3765/amp.v9i0.4918
Subject(s) - phonotactics, perplexity, computer science, grammar, artificial intelligence, natural language processing, linguistics, language model, speech recognition, phonology
We present a model of gradient phonotactics that is shown to reduce overall phoneme uncertainty in a language when the phonotactic grammar is modularized, in an unsupervised fashion, into more than one sub-grammar. Our model is a recurrent neural network language model (Elman 1990) which, when applied as two separate, randomly initialized modules to a corpus of Japanese words, learns lexical subdivisions that closely correlate with two of the main lexical strata for Japanese (Yamato and Sino-Japanese) proposed by Itô and Mester (1995). We find that the gradient phonotactics learned by the model, which are based on the entire prior context of a phoneme, reveal a continuum of gradient strata membership, similar to the gradient membership proposed by Hayes (2016) for the Native vs. Latinate stratification in English.
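The abstract only describes the architecture at a high level. The sketch below is one possible way to set up a two-module, phoneme-level Elman RNN language model whose word-to-module assignment is driven by perplexity, written in PyTorch; the hard-assignment training scheme, the hyperparameters, and all names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: two Elman-style RNN phoneme language models compete
# for words, and each word is trained on whichever module currently assigns it
# the lower negative log-likelihood (a hard-assignment stand-in for unsupervised
# modularization). All details here are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElmanPhonemeLM(nn.Module):
    """Simple recurrent (Elman 1990) language model over phoneme sequences."""
    def __init__(self, n_phonemes, dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, dim)
        self.rnn = nn.RNN(dim, dim, batch_first=True)   # tanh Elman cell
        self.out = nn.Linear(dim, n_phonemes)

    def forward(self, phoneme_ids):                      # (batch, time)
        h, _ = self.rnn(self.embed(phoneme_ids))
        return self.out(h)                               # (batch, time, n_phonemes)

def word_nll(model, word):
    """Mean negative log-likelihood of one word (list of phoneme ids with BOS/EOS)."""
    x = torch.tensor(word[:-1]).unsqueeze(0)             # predict each next phoneme
    y = torch.tensor(word[1:]).unsqueeze(0)
    logits = model(x)
    return F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))

def train_two_strata(corpus, n_phonemes, epochs=10):
    """Hard-assignment training: each word updates the sub-grammar that
    currently explains it better (lower per-phoneme uncertainty)."""
    modules = [ElmanPhonemeLM(n_phonemes), ElmanPhonemeLM(n_phonemes)]
    opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]
    for _ in range(epochs):
        for word in corpus:
            with torch.no_grad():
                losses = [word_nll(m, word) for m in modules]
            k = int(torch.argmin(torch.stack(losses)))   # winning sub-grammar
            opts[k].zero_grad()
            word_nll(modules[k], word).backward()
            opts[k].step()
    return modules
```

In a setup like this, the module that scores a word with the lower perplexity can be read as that word's inferred stratum, and the gap between the two modules' scores gives a gradient measure of stratum membership of the kind the abstract describes.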
