
SSM-Seq2Seq: A Novel Speaking Style Neural Conversation Model
Author(s) - Boran Wang, Yingxiang Sun
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1576/1/012001
Subject(s) - computer science , conversation , style (visual arts) , speech recognition , artificial intelligence , natural language processing
Open-domain personalized dialogue systems have attracted growing attention because of their ability to generate interesting, personalized responses. To incorporate speaking style, existing methods first train a response generator on a non-personalized conversational dataset and a speaking style extractor on a personalized non-conversational dataset, and then generate personalized responses through a parameter-sharing mechanism. However, the speaking styles of the two training datasets differ substantially, which makes the performance of these methods suboptimal. Intuitively, narrowing the gap between the two datasets' speaking styles should improve performance. In this paper, we therefore propose a novel speaking style memory sequence-to-sequence (SSM-Seq2Seq) model, which incorporates the speaking style information from the personalized non-conversational dataset into the response generator's training dataset to eliminate this gap. Extensive experiments show that the proposed approach yields substantial improvements over competitive baselines.
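The abstract does not detail the SSM-Seq2Seq architecture, but the core idea, carrying style information from a non-conversational corpus into the generator's training data, can be illustrated at the data level. The sketch below is purely hypothetical and is not the paper's neural model: it ranks tokens that are distinctively frequent in a persona corpus relative to a generic conversational corpus (a stand-in for a style extractor), then prepends them as style markers to each training pair. All function names and the `<style:...>` marker format are invented for illustration.

```python
from collections import Counter

def extract_style_tokens(persona_corpus, generic_corpus, top_k=3):
    # Hypothetical style extractor: rank tokens by how much more
    # frequent they are in the persona (non-conversational) corpus
    # than in the generic conversational corpus.
    persona = Counter(tok for line in persona_corpus for tok in line.split())
    generic = Counter(tok for line in generic_corpus for tok in line.split())
    ratio = {t: c / (generic.get(t, 0) + 1) for t, c in persona.items()}
    return [t for t, _ in sorted(ratio.items(), key=lambda kv: -kv[1])[:top_k]]

def inject_style(pairs, style_tokens):
    # Prepend style markers to each source utterance so the generator's
    # training data reflects the target speaking style.
    prefix = " ".join(f"<style:{t}>" for t in style_tokens)
    return [(f"{prefix} {src}", tgt) for src, tgt in pairs]

# Toy usage: a pirate-styled persona corpus vs. a plain one.
persona = ["ahoy matey treasure", "ahoy ye scallywag"]
generic = ["hello how are you", "fine thanks"]
tokens = extract_style_tokens(persona, generic, top_k=1)
styled = inject_style([("how are you", "fine")], tokens)
```

In the actual model, the style signal would be a learned memory consulted by the decoder rather than literal token prefixes; this sketch only mirrors the data-level intuition of closing the style gap between the two training corpora.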