
Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive Learning
Open Access
Author(s): Jiaan Wang, Jianfeng Qu, Kexin Wang, Zhixu Li, Wen Hua, Ximing Li, An Liu
Publication year: 2024
Abstract: Knowledge-grounded dialogue (KGD) learns to generate an informative response based on a given dialogue context and external knowledge (e.g., knowledge graphs; KGs). Recently, the emergence of large language models (LLMs) and pre-training techniques has brought great success to knowledge-grounded dialogue. However, when building KGD systems for real applications, various real-world noises are inevitable. For example, the dialogue context might involve perturbations such as misspellings and abbreviations. In addition, KGs typically suffer from incompleteness and might also contain erroneous and outdated facts. Such real-world noises pose a challenge to the robustness of KGD systems and hinder their application in the real world. In this paper, we propose an entity-based contrastive learning framework for improving the robustness of KGD. Specifically, we make use of the entity information in a KGD sample to create its positive and negative samples, which involve semantic-irrelevant and semantic-relevant perturbations, respectively. The contrastive learning framework makes the KGD model aware of these two types of perturbations, so that it generates informative responses from the potentially noisy inputs encountered in real applications. Experimental results on three benchmark datasets show that our method achieves new state-of-the-art performance in terms of automatic evaluation scores, verifying its effectiveness and potential. Furthermore, we show that our method generates better responses than comparison models in both the noisy and the few-shot settings.
Language(s): English
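
To make the idea of entity-based contrastive learning concrete, below is a minimal, hypothetical sketch of a contrastive objective in the spirit the abstract describes: a positive sample carries a semantic-irrelevant perturbation (e.g., a misspelled context), while negative samples carry semantic-relevant ones (e.g., swapped entities), and the model is trained to rank the positive above the negatives. The function name `entity_contrastive_loss`, the InfoNCE-style loss form, the temperature value, and the assumption that encoder representations for the original and perturbed inputs are already available are all illustrative choices, not the paper's exact formulation.

```python
# Hypothetical sketch: InfoNCE-style contrastive loss over encoder representations
# of an original KGD input (anchor), a semantic-irrelevant perturbation (positive),
# and semantic-relevant perturbations (negatives). Not the paper's exact method.
import torch
import torch.nn.functional as F


def entity_contrastive_loss(anchor: torch.Tensor,
                            positive: torch.Tensor,
                            negatives: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """anchor:    (batch, dim)    encoding of the original dialogue context + KG
       positive:  (batch, dim)    encoding of a semantic-irrelevant perturbation
       negatives: (batch, k, dim) encodings of semantic-relevant perturbations"""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Cosine similarity with the positive sample: (batch, 1)
    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True) / temperature
    # Cosine similarity with each negative sample: (batch, k)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature

    # The positive sample (index 0) should be ranked above all negatives.
    logits = torch.cat([pos_sim, neg_sim], dim=-1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)


# Usage with random stand-in encodings; in practice these would come from the
# KGD model's encoder applied to the original and perturbed inputs.
if __name__ == "__main__":
    batch, k, dim = 4, 3, 256
    loss = entity_contrastive_loss(torch.randn(batch, dim),
                                   torch.randn(batch, dim),
                                   torch.randn(batch, k, dim))
    print(loss.item())
```

In such a setup, the contrastive term would typically be added to the standard response-generation loss, pulling representations of harmless perturbations toward the original input while pushing away entity-altering ones.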
