Open Access
Query-Based Retrieval Using Universal Sentence Encoder
Author(s) - Deepthi Godavarthi, A. Mary Sowjanya
Publication year - 2021
Publication title - Revue d'Intelligence Artificielle
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.146
H-Index - 14
eISSN - 1958-5748
pISSN - 0992-499X
DOI - 10.18280/ria.350404
Subject(s) - sentence , computer science , encoder , natural language processing , word (group theory) , artificial intelligence , context (archaeology) , embedding , word embedding , question answering , linguistics , paleontology , philosophy , biology , operating system
In natural language processing, many tasks can be implemented using the features provided by word embeddings. However, word embeddings alone are not sufficient for representing larger chunks of text such as sentences. Sentence embeddings address this issue: complete sentences, together with their semantic information, are represented as vectors so that a machine can more easily understand the context. In this paper, we propose a Question Answering System (QAS) based on sentence embeddings. Our goal is to answer a user query from the provided context by extracting the sentence in which the correct answer is present. Traditionally, InferSent models have been used on SQuAD for building QAS. More recently, the Universal Sentence Encoder has been developed with the USE-CNN and USE-Trans variants. In this paper, we use another variant of the Universal Sentence Encoder, the Deep Averaging Network (DAN), to obtain pre-trained sentence embeddings. The results on the SQuAD 2.0 dataset indicate that our approach (USE with DAN) performs well compared to Facebook's InferSent embeddings.
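The retrieval step the abstract describes can be illustrated with a minimal sketch: embed the user query and every sentence of the context with the pre-trained Universal Sentence Encoder (DAN variant) and return the sentence whose embedding is most similar to the query. The TF Hub module handle, the cosine-similarity scoring, and the toy context below are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of query-based sentence retrieval with USE (DAN variant).
# Assumes the standard TF Hub handle for the DAN model; the authors' exact
# pipeline and scoring on SQuAD 2.0 may differ.
import numpy as np
import tensorflow_hub as hub

# Pre-trained Universal Sentence Encoder, Deep Averaging Network variant.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def retrieve_answer_sentence(query, context_sentences):
    """Return the context sentence whose embedding is closest to the query."""
    vectors = embed([query] + context_sentences).numpy()
    query_vec, sent_vecs = vectors[0], vectors[1:]
    # Cosine similarity between the query and each candidate sentence.
    sims = sent_vecs @ query_vec / (
        np.linalg.norm(sent_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return context_sentences[int(np.argmax(sims))]

if __name__ == "__main__":
    context = [
        "The Normans were descended from Norse raiders.",
        "They settled in northern France in the 10th century.",
        "Their duchy was named Normandy after them.",
    ]
    print(retrieve_answer_sentence("Where did the Normans settle?", context))
```

In this sketch the answer-bearing sentence is selected purely by embedding similarity, which is the retrieval behaviour the abstract attributes to the USE-with-DAN approach.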
