Open Access
Automatic annotation of context and speech acts for dialogue corpora
Author(s) -
Kallirroi Georgila,
Oliver Lemon,
James Henderson,
Johanna D. Moore
Publication year - 2009
Publication title -
natural language engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.29
H-Index - 54
eISSN - 1469-8110
pISSN - 1351-3249
DOI - 10.1017/s1351324909005105
Subject(s) - computer science , annotation , natural language processing , utterance , artificial intelligence , context , task , interpretation , baseline , speech recognition
Richly annotated dialogue corpora are essential for new research directions in statistical learning approaches to dialogue management, context-sensitive interpretation, and context-sensitive speech recognition. In particular, large dialogue corpora annotated with contextual information and speech acts are urgently required. We explore how existing dialogue corpora (usually consisting of utterance transcriptions) can be automatically processed to yield new corpora where dialogue context and speech acts are accurately represented. We present a conceptual and computational framework for generating such corpora. As an example, we present and evaluate an automatic annotation system which builds ‘Information State Update’ (ISU) representations of dialogue context for the Communicator (2000 and 2001) corpora of human–machine dialogues (2,331 dialogues). The purposes of this annotation are to generate corpora for reinforcement learning of dialogue policies, for building user simulations, for evaluating different dialogue strategies against a baseline, and for training models for context-dependent interpretation and speech recognition. The automatic annotation system parses system and user utterances into speech acts and builds up sequences of dialogue context representations using an ISU dialogue manager. We present the architecture of the automatic annotation system and a detailed example to illustrate how the system components interact to produce the annotations. We also evaluate the annotations with respect to the task completion metrics of the original corpus, and in comparison to hand-annotated data and annotations produced by a baseline automatic system. The automatic annotations perform well and largely outperform the baseline automatic annotations on all measures. The resulting annotated corpus has been used to train high-quality user simulations and to learn successful dialogue strategies. The final corpus will be made publicly available.
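The ISU approach the abstract describes can be illustrated with a minimal sketch: each utterance is mapped to a speech act, and an information state (here, a dict of task slots plus a speech-act history) is updated turn by turn, yielding one context snapshot per annotated utterance. All names, the keyword-based act classifier, and the slot-filling rule below are illustrative assumptions, not the paper's actual tag set or annotation system.

```python
from dataclasses import dataclass, field

def classify_speech_act(utterance: str) -> str:
    """Toy keyword-based parser mapping an utterance to a speech act.
    A hypothetical stand-in for the paper's speech-act parser."""
    text = utterance.lower()
    if "?" in text or text.startswith(("where", "when", "what")):
        return "request_info"
    if text.startswith(("yes", "no")):
        return "confirm"
    return "provide_info"

@dataclass
class InformationState:
    """Minimal dialogue context: filled task slots plus a speech-act history."""
    slots: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def update(self, speaker: str, utterance: str) -> str:
        """Information State Update: record the act and extend the context."""
        act = classify_speech_act(utterance)
        self.history.append((speaker, act))
        if act == "provide_info":
            # A real system would run a semantic parser to fill named task
            # slots (e.g. destination city); here we just store the utterance.
            self.slots[f"slot_{len(self.slots)}"] = utterance
        return act

# Annotate a short flight-booking exchange in the style of the Communicator
# domain: each turn yields (speaker, speech act, context size so far).
state = InformationState()
annotations = []
for speaker, utt in [("system", "Where would you like to fly?"),
                     ("user", "I want to go to Boston."),
                     ("system", "Leaving from which city?"),
                     ("user", "From Denver.")]:
    act = state.update(speaker, utt)
    annotations.append((speaker, act, len(state.history)))
```

Running this produces a sequence of per-turn annotations paired with a growing context representation, which is the shape of corpus the abstract says is needed for reinforcement learning of dialogue policies and for training user simulations.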
