The Inside-Outside Recursive Neural Network model for Dependency Parsing
Author(s) - Phong Ba Le, Willem Zuidema
Publication year - 2014
Language(s) - English
Resource type - Conference proceedings
DOI - 10.3115/v1/d14-1081
Subject(s) - perplexity , treebank , computer science , dependency parsing , artificial neural network , parsing , dependency grammar , generative model , context representation , artificial intelligence , recurrent neural network , theoretical computer science , generative grammar , language model
We propose the first implementation of an infinite-order generative dependency model. The model is based on a new recursive neural network architecture, the Inside-Outside Recursive Neural Network. This architecture allows information to flow not only bottom-up, as in traditional recursive neural networks, but also top-down. This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity seven times lower than the traditional third-order model using counting, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores.
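The two passes described in the abstract can be sketched in miniature. The following is a hedged illustration, not the paper's implementation: the dimensionality, weight matrices, activation function, and the zero-vector context for the root are all assumptions, and the matrices are random rather than learned. It shows the key structural idea only: a node's content (inner) representation is built bottom-up from its children, while its context (outer) representation is built top-down from its parent's context and its sibling's content, so the two kinds of representation interact.

```python
# Hedged sketch of an inside-outside recursion over a binary tree.
# All weights are random placeholders; the real model learns them.
import numpy as np

d = 4  # representation dimensionality (assumption)
rng = np.random.default_rng(0)
W_inner = rng.standard_normal((d, 2 * d)) * 0.1  # hypothetical composition matrix
W_outer = rng.standard_normal((d, 2 * d)) * 0.1  # hypothetical context matrix

class Node:
    def __init__(self, word=None, left=None, right=None):
        self.word, self.left, self.right = word, left, right
        self.inner = None   # content representation (computed bottom-up)
        self.outer = None   # context representation (computed top-down)

def compute_inner(node, embeddings):
    """Bottom-up pass: a node's content comes from its children (or its word)."""
    if node.word is not None:
        node.inner = embeddings[node.word]
    else:
        compute_inner(node.left, embeddings)
        compute_inner(node.right, embeddings)
        node.inner = np.tanh(
            W_inner @ np.concatenate([node.left.inner, node.right.inner]))
    return node.inner

def compute_outer(node, parent_outer=None, sibling_inner=None):
    """Top-down pass: a node's context combines the parent's context
    with the sibling's content representation."""
    if parent_outer is None:
        node.outer = np.zeros(d)  # empty context for the root (assumption)
    else:
        node.outer = np.tanh(
            W_outer @ np.concatenate([parent_outer, sibling_inner]))
    if node.word is None:
        compute_outer(node.left, node.outer, node.right.inner)
        compute_outer(node.right, node.outer, node.left.inner)

# Tiny example: ((the dog) barks), with random word embeddings.
embeddings = {w: rng.standard_normal(d) * 0.1 for w in ["the", "dog", "barks"]}
tree = Node(left=Node(left=Node(word="the"), right=Node(word="dog")),
            right=Node(word="barks"))
compute_inner(tree, embeddings)
compute_outer(tree)
# After both passes every node carries a content and a context vector.
```

Because every constituent ends up with both vectors, a generative model can condition the next dependent on the full context of a position, which is what lets the order of the dependency model grow without bound.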