
Leveraging Linguistic Coordination in Reranking N-Best Candidates For End-to-End Response Selection Using BERT
Author(s) -
Mingzhi Yu,
Diane Litman
Publication year - 2021
Publication title -
Proceedings of the ... International Florida Artificial Intelligence Research Society Conference
Language(s) - English
Resource type - Journals
eISSN - 2334-0762
pISSN - 2334-0754
DOI - 10.32473/flairs.v34i1.128491
Subject(s) - computer science , natural language processing , artificial intelligence , language model , conversation , linguistics
Retrieval-based dialogue systems select the best response from many candidates. Although many state-of-the-art models have shown promising performance on dialogue response selection tasks, a sizable gap remains between R@1 and R@10 performance. To address this, we propose to leverage linguistic coordination (the phenomenon that individuals tend to develop similar linguistic behaviors in conversation) to rerank the N-best candidates produced by BERT, a state-of-the-art pre-trained language model. Our results show an improvement in R@1 over BERT baselines, demonstrating the utility of repairing machine-generated outputs by leveraging a linguistic theory.
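The reranking idea described in the abstract can be sketched in a few lines: score each of the model's N-best candidates for how much it coordinates with the dialogue context (e.g., overlap in function-word usage), then mix that score with the model's own score. This is a minimal, illustrative sketch only; the function-word list, the scoring scheme, and the mixing weight `alpha` are all assumptions, not the authors' exact method.

```python
# Hypothetical sketch of reranking N-best candidates with a
# linguistic-coordination signal. All names and scores are illustrative.

# Toy set of function-word markers; coordination is often measured over
# such categories (articles, pronouns, quantifiers, negations).
FUNCTION_WORDS = {"the", "a", "an", "i", "you", "we", "all", "some", "not"}

def coordination_score(context: str, candidate: str) -> float:
    """Fraction of the context's function words echoed by the candidate."""
    ctx = {w for w in context.lower().split() if w in FUNCTION_WORDS}
    cand = {w for w in candidate.lower().split() if w in FUNCTION_WORDS}
    if not ctx:
        return 0.0
    return len(ctx & cand) / len(ctx)

def rerank(context, candidates, model_scores, alpha=0.7):
    """Rerank candidates by a weighted mix of model score and coordination."""
    mixed = [
        (alpha * s + (1 - alpha) * coordination_score(context, c), c)
        for c, s in zip(candidates, model_scores)
    ]
    return [c for _, c in sorted(mixed, reverse=True)]

context = "do you have all the files i sent"
cands = ["yes i have all the files you sent", "ok", "files received"]
scores = [0.90, 0.92, 0.88]  # pretend selection-model scores favor "ok"
print(rerank(context, cands, scores)[0])
# → "yes i have all the files you sent"
```

Here the coordination term overrides the model's slight preference for the generic "ok", promoting the candidate that mirrors the context's pronouns and articles, which is the kind of repair of N-best output the paper studies.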