Open Access
Universal Discourse Representation Structure Parsing
Author(s) - Jiangming Liu, Shay B. Cohen, Mirella Lapata, Johan Bos
Publication year - 2021
Publication title - Computational Linguistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.314
H-Index - 98
eISSN - 1530-9312
pISSN - 0891-2017
DOI - 10.1162/coli_a_00406
Subject(s) - computer science , parsing , natural language processing , artificial intelligence , transformer , semantic role labeling , linguistics
We consider the task of cross-lingual semantic parsing in the style of Discourse Representation Theory (DRT), where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and use it to obtain semantic resources in multiple languages following two learning schemes. The Many-to-One approach translates non-English text to English and then runs a relatively accurate English parser on the translated text, while the One-to-Many approach translates gold-standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.
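The two transfer schemes described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code: `translate` and `english_parser` are hypothetical stand-ins for a machine-translation system and the trained English DRS parser.

```python
# Sketch of the two cross-lingual transfer schemes.
# `translate` and `english_parser` are placeholder stubs; a real
# pipeline would call an MT system and a trained Transformer parser.

def translate(text, src, tgt):
    # Placeholder MT system: tags the text with its translation direction.
    return f"[{src}->{tgt}] {text}"

def english_parser(text):
    # Placeholder for the relatively accurate English semantic parser.
    return {"input": text, "drs": "<semantic representation>"}

def many_to_one(sentence, src_lang):
    """Translate non-English input into English, then parse the
    translation with the English parser."""
    english = translate(sentence, src_lang, "en")
    return english_parser(english)

def one_to_many(gold_pairs, tgt_lang):
    """Translate gold-standard English sentences into the target
    language, pairing each translation with the original gold meaning
    representation to form training data for a target-language parser."""
    return [(translate(sentence, "en", tgt_lang), drs)
            for sentence, drs in gold_pairs]
```

In Many-to-One, a single English parser serves all languages at inference time; in One-to-Many, the translations plus gold annotations become (silver-standard) training sets, one parser per target language.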
