Open Access
Improving Low-Resource Morphological Learning with Intermediate Forms from Finite State Transducers
Author(s) -
Sarah Moeller,
Ghazaleh Kazeminejad,
Andrew Cowell,
Mans Hulden
Publication year - 2019
Language(s) - English
DOI - 10.33011/computel.v1i.427
Subject(s) - morpheme , computer science , allomorph , encoder , natural language processing , morphophonology , artificial intelligence , linguistics , speech recognition , phonology
Neural encoder-decoder models are usually applied to morphology learning as an end-to-end process, without considering the underlying phonological representations that linguists posit as abstract forms to which morphophonological rules apply. Finite State Transducers for morphology, on the other hand, are developed to contain these underlying forms as an intermediate representation. This paper shows that training a bidirectional two-step encoder-decoder model of Arapaho verbs to learn two separate mappings, one between morphological tags and abstract morphemes and another between those morphemes and surface allomorphs, improves results when training data is limited to 10,000 to 30,000 examples of inflected word forms.
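The two-step idea can be illustrated with a toy example. The sketch below is not the paper's neural model or its Arapaho data; it only demonstrates the kind of intermediate representation an FST exposes, using an invented English-plural lexicon and a single hypothetical morphophonological rule (the underlying suffix morphophoneme "z" surfaces as "es" after sibilants, "s" elsewhere).

```python
# Toy sketch of a two-step morphology pipeline (illustrative only):
# step 1 maps a lemma plus tag to an underlying morpheme sequence,
# step 2 applies a morphophonological rule to produce the surface allomorph.

# Step 1: tags -> abstract underlying form (the FST-style intermediate form).
UNDERLYING = {
    ("cat", "PL"): "cat+z",
    ("bus", "PL"): "bus+z",
    ("fox", "PL"): "fox+z",
}

def realize(underlying: str) -> str:
    """Step 2: rewrite the underlying form into its surface allomorph."""
    stem, suffix = underlying.split("+")
    assert suffix == "z", "toy grammar only knows the plural morphophoneme 'z'"
    # Sibilant-final stems take the 'es' allomorph; others take 's'.
    if stem.endswith(("s", "x", "z", "ch", "sh")):
        return stem + "es"
    return stem + "s"

def inflect(lemma: str, tag: str) -> str:
    underlying = UNDERLYING[(lemma, tag)]  # step 1
    return realize(underlying)             # step 2

print(inflect("cat", "PL"))  # cats
print(inflect("bus", "PL"))  # buses
```

In the paper's setting, each of these two mappings is learned by an encoder-decoder model rather than hard-coded, with the FST supplying the intermediate underlying forms as supervision.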
