Open Access
VS3‐NET: Neural variational inference model for machine‐reading comprehension
Author(s) - Park Cheoneum, Lee Changki, Song Heejun
Publication year - 2019
Publication title - ETRI Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.295
H-Index - 46
eISSN - 2233-7326
pISSN - 1225-6463
DOI - 10.4218/etrij.2018-0467
Subject(s) - computer science, inference, artificial intelligence, sentence, artificial neural network, machine learning, latent variable, natural language processing, comprehension, language model
We propose the VS3‐NET model to solve the task of question answering with machine‐reading comprehension, which searches for an appropriate answer in a given context. VS3‐NET trains a latent variable for each question using variational inference, built on a simple recurrent unit‐based sentence‐encoding model and self‐matching networks. The types of questions vary, and the answer depends on the question type. To perform efficient inference and learning, we introduce neural question‐type models to approximate the prior and posterior distributions of the latent variables, and we use these approximated distributions to optimize a reparameterized variational lower bound. The context given in machine‐reading comprehension usually comprises several sentences, and performance degrades as the context becomes longer. Therefore, we model a hierarchical structure using sentence encoding. Experimental results show that the proposed VS3‐NET model achieves an exact‐match score of 76.8% and an F1 score of 84.5% on the SQuAD test set.
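
The abstract describes approximating the prior and posterior distributions of a per-question latent variable with neural networks and optimizing a reparameterized variational lower bound. The sketch below illustrates that general mechanism only; the Gaussian latent, the PyTorch framing, and all layer sizes and names are assumptions made for illustration and are not the authors' implementation.

# Minimal sketch (not the authors' code) of a reparameterized variational
# lower bound for a per-question latent variable. A "prior" network is
# conditioned on the question encoding, and a "posterior" network is
# additionally conditioned on the context encoding. Gaussian latents and
# all dimensions are assumptions.
import torch
import torch.nn as nn

class QuestionTypeLatent(nn.Module):
    def __init__(self, enc_dim=128, latent_dim=16):
        super().__init__()
        # prior p(z | question): mean and log-variance from the question encoding
        self.prior_net = nn.Linear(enc_dim, 2 * latent_dim)
        # posterior q(z | question, context): conditioned on both encodings
        self.post_net = nn.Linear(2 * enc_dim, 2 * latent_dim)

    def forward(self, q_enc, c_enc):
        p_mu, p_logvar = self.prior_net(q_enc).chunk(2, dim=-1)
        q_mu, q_logvar = self.post_net(
            torch.cat([q_enc, c_enc], dim=-1)).chunk(2, dim=-1)
        # reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        eps = torch.randn_like(q_mu)
        z = q_mu + torch.exp(0.5 * q_logvar) * eps
        # closed-form KL(q || p) between two diagonal Gaussians
        kl = 0.5 * (p_logvar - q_logvar
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                    - 1.0).sum(dim=-1)
        return z, kl

# Under these assumptions, the lower bound combines the answer-span
# log-likelihood from a reader that consumes z with the KL term:
#   ELBO = E_q[log p(answer | context, question, z)] - KL(q || p)
# so a batch loss would be span_nll + kl.mean().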
