Building a Discourse-Argument Hybrid System for Vietnamese Why-Question Answering
Author(s) -
Chinh Trong Nguyen,
Dang Tuan Nguyen
Publication year - 2021
Publication title -
Computational Intelligence and Neuroscience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.605
H-Index - 52
eISSN - 1687-5273
pISSN - 1687-5265
DOI - 10.1155/2021/6550871
Subject(s) - question answering, computer science, natural language processing, artificial intelligence, sentence, inference, argument (complex analysis), set (abstract data type), test (biology), test set, natural language, reading comprehension, task (project management), similarity (geometry), vietnamese, comprehension, reading (process), linguistics, paleontology, biochemistry, chemistry, philosophy, management, economics, image (mathematics), biology, programming language
Recently, many deep learning models have achieved high results on the question answering task, with overall F1 scores above 0.88 on SQuAD datasets. However, many of these models have quite low F1 scores on why-questions, ranging from 0.57 to 0.7 on the SQuAD v1.1 development set. This means these models are more appropriate for extracting answers to factoid questions than to why-questions. Why-questions are asked when explanations are needed. These explanations are possibly arguments or simply subjective opinions. Therefore, we propose an approach to finding the answer to a why-question using discourse analysis and natural language inference. In our approach, natural language inference is applied to identify implicit arguments at the sentence level and is also applied in sentence similarity calculation. Discourse analysis is applied to identify the explicit arguments and the opinions at the sentence level in documents. The results from these two methods are the answer candidates from which the final answer for each why-question is selected. We also implement a system based on our approach. Given a why-question and a document, as in a reading comprehension test, our system can provide an answer. We test our system with a Vietnamese-translated test set that contains all why-questions of the SQuAD v1.1 development set. The test results show that our system cannot beat a deep learning model in F1 score; however, our system can answer more questions (answer rate of 77.0%) than the deep learning model (answer rate of 61.0%).
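To illustrate the overall pipeline the abstract describes, the sketch below shows one way the two candidate sources (discourse-based explicit arguments and NLI-based implicit arguments) could feed a final selection step. It is a minimal, hypothetical sketch in Python: the function names, the lexical-overlap stand-ins for the NLI and similarity components, and the connective list are assumptions for illustration only, not the authors' actual Vietnamese models.

# Hypothetical sketch of the discourse-argument hybrid answer selection.
# The NLI and similarity components are approximated by crude lexical
# overlap so the sketch stays self-contained and runnable.

from typing import List, Optional


def split_sentences(document: str) -> List[str]:
    """Naive sentence splitter, used only to keep the sketch self-contained."""
    return [s.strip() for s in document.split(".") if s.strip()]


def nli_entails(premise: str, hypothesis: str) -> bool:
    """Stand-in for the NLI component that detects implicit arguments.
    A trained NLI model would be used in practice; here, lexical overlap."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1) > 0.5


def discourse_explanations(sentences: List[str]) -> List[str]:
    """Stand-in for discourse analysis: keep sentences carrying explicit
    causal or explanatory connectives (explicit arguments or opinions)."""
    markers = ("because", "since", "therefore", "as a result", "so that")
    return [s for s in sentences if any(m in s.lower() for m in markers)]


def answer_why_question(question: str, document: str) -> Optional[str]:
    """Collect candidates from both components, then pick the candidate
    most similar to the question (similarity also approximated lexically)."""
    sentences = split_sentences(document)
    candidates = discourse_explanations(sentences)
    candidates += [s for s in sentences if nli_entails(s, question)]
    if not candidates:
        return None  # the system may decline to answer
    q = set(question.lower().split())
    return max(candidates, key=lambda s: len(q & set(s.lower().split())))


if __name__ == "__main__":
    doc = ("The match was cancelled because heavy rain flooded the pitch. "
           "Fans were disappointed. Therefore, tickets were refunded.")
    print(answer_why_question("Why was the match cancelled?", doc))

In this toy run the discourse component returns the two connective-bearing sentences, the overlap-based stand-in for NLI adds the first sentence again, and the selection step returns the sentence about the flooded pitch as the explanation. Returning None when no candidate is found mirrors the ability of the described system to leave a question unanswered, which is what the answer rate in the abstract measures.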