Open Access
Knowledge Base Question Answering Based on Multi-head Attention Mechanism and Relative Position Coding
Author(s) -
Gan Liu,
Yang Xiao
Publication year - 2022
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/2203/1/012056
Subject(s) - encode , computer science , coding (social sciences) , benchmark (surveying) , position (finance) , question answering , artificial intelligence , theoretical computer science , mathematics , statistics , biochemistry , chemistry , geodesy , finance , economics , gene , geography
Most current knowledge base question answering models use RNNs and their derivatives, such as BiLSTM, to encode the question, which limits the model's parallel computing capability. To address this problem, we replace BiLSTM with a Transformer encoder to model and encode the question, aiming to improve the model's parallel computing efficiency. At the same time, because the absolute position encoding used in the Transformer encoder provides insufficient relative position information, we propose replacing it with relative position encoding. Experimental results show that our model reduces training time and achieves reasonable results on the WebQuestions benchmark dataset.
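The paper's own implementation is not included on this page, so the following is only a minimal sketch of the core idea the abstract describes: multi-head self-attention augmented with learned relative position embeddings (in the style of Shaw et al., 2018) rather than absolute position encodings. It assumes PyTorch; the class name, dimensions, and the clipping distance `max_rel_dist` are illustrative choices, not the authors' settings.

```python
# Sketch: multi-head self-attention with learned relative position embeddings.
# All names and hyperparameters are illustrative assumptions.
import math
import torch
import torch.nn as nn


class RelativeMultiHeadAttention(nn.Module):
    def __init__(self, d_model=256, num_heads=8, max_rel_dist=16):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.max_rel_dist = max_rel_dist
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One embedding per clipped relative distance in [-max_rel_dist, max_rel_dist].
        self.rel_emb = nn.Embedding(2 * max_rel_dist + 1, self.d_head)

    def forward(self, x, mask=None):
        # x: (batch, seq_len, d_model) question token embeddings
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):  # -> (batch, heads, seq_len, d_head)
            return t.view(b, n, self.num_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)

        # Content-based attention scores: (b, h, n, n)
        scores = torch.matmul(q, k.transpose(-2, -1))

        # Relative position term: clip pairwise distances, look up embeddings.
        pos = torch.arange(n, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_rel_dist, self.max_rel_dist)
        rel = rel + self.max_rel_dist              # shift indices to [0, 2*max_rel_dist]
        rel_k = self.rel_emb(rel)                  # (n, n, d_head)
        rel_scores = torch.einsum('bhid,ijd->bhij', q, rel_k)

        attn = (scores + rel_scores) / math.sqrt(self.d_head)
        if mask is not None:
            attn = attn.masked_fill(mask == 0, float('-inf'))
        attn = attn.softmax(dim=-1)

        out = torch.matmul(attn, v)                # (b, h, n, d_head)
        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.out(out)


# Usage: encode a batch of question embeddings without absolute position encodings.
layer = RelativeMultiHeadAttention()
questions = torch.randn(4, 20, 256)               # (batch, seq_len, d_model)
encoded = layer(questions)
print(encoded.shape)                               # torch.Size([4, 20, 256])
```

Because each token attends over the whole sequence in a single matrix operation, such an encoder processes the question in parallel rather than step by step as BiLSTM does, which is the efficiency gain the abstract refers to.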
