Is It That Difficult to Find a Good Preference Order for the Incremental Algorithm?
Author(s) - Emiel Krahmer, Ruud Koolen, Mariët Theune
Publication year - 2012
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/j.1551-6709.2012.01258.x
Subject(s) - preference, order (exchange), domain (mathematical analysis), computer science, preference learning, algorithm, mathematics, artificial intelligence, statistics, economics, mathematical analysis, finance
In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well‐known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre‐determined, domain‐dependent Preference Order. The authors argue that there are potentially many different Preference Orders that could be considered, while often no evidence is available to determine which is a good one. In this brief note, however, we suggest (based on a learning curve experiment) that finding a Preference Order for a new domain may not be so difficult after all, as long as one has access to a handful of human‐produced descriptions collected in a semantically transparent way. We argue that this is due to the fact that it is both more important and less difficult to get a good ordering of the head than of the tail of a Preference Order.
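For readers unfamiliar with the algorithm under discussion, the following is a minimal sketch of the Incremental Algorithm (Dale & Reiter, 1995): attributes are tried in the Preference Order, and an attribute is added to the description only if it rules out at least one remaining distractor. The object representation, attribute names, and example domain below are illustrative assumptions, not taken from the article, and the sketch omits refinements such as always including the type attribute.

```python
def incremental_algorithm(target, distractors, preference_order):
    """Select attribute-value pairs that distinguish `target` from `distractors`,
    trying attributes in the given Preference Order."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        # Include the attribute only if it excludes at least one distractor.
        if any(d.get(attr) != value for d in remaining):
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break  # the description now uniquely identifies the target
    # A non-empty `remaining` means no distinguishing description was found.
    return description, remaining

# Illustrative usage with a hypothetical domain; with this Preference Order,
# type is tried first, then colour, then size.
target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [
    {"type": "chair", "colour": "blue", "size": "large"},
    {"type": "table", "colour": "red", "size": "small"},
]
print(incremental_algorithm(target, distractors, ["type", "colour", "size"]))
# -> ({'type': 'chair', 'colour': 'red'}, [])
```

Because attributes early in the Preference Order are considered first and most often suffice to exclude all distractors, the ordering of the head of the list has far more influence on the output than the ordering of the tail, which is the intuition behind the note's argument.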
