Open Access
Twice fine‐tuning deep neural networks for paraphrase identification
Author(s) - Ko Bowon, Choi HoJin
Publication year - 2020
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2019.4183
Subject(s) - paraphrase, computer science, treebank, task (project management), natural language processing, artificial intelligence, sentence, artificial neural network, speech recognition, word (group theory), scrambling, deep learning, language model, linguistics, algorithm, philosophy, management, parsing, economics
In this Letter, the authors introduce a novel approach to learning representations for sentence‐level paraphrase identification (PI) using BERT and ten natural language processing tasks. Their method trains BERT, an unsupervisedly pre‐trained model, on two successive tasks to detect whether two sentences are in a paraphrase relation. Unlike the conventional approach, which fine‐tunes pre‐trained BERT directly on the target task such as PI, the proposed twice fine‐tuning first fine‐tunes BERT on an auxiliary task (e.g. the general language understanding evaluation tasks, question answering, and the paraphrase adversaries from word scrambling task) and then fine‐tunes it on the target PI task. As a result, the multi‐fine‐tuned BERT model outperformed the model fine‐tuned only on the Microsoft Research Paraphrase Corpus (MRPC), a paraphrase dataset, in every case except one involving the Stanford Sentiment Treebank‐2 (SST‐2). Multi‐task fine‐tuning is a simple idea but experimentally powerful. Experiments show that fine‐tuning BERT on the PI task alone already gives strong performance, but additionally fine‐tuning on similar tasks can further improve performance (a 3.4 percentage‐point absolute improvement) and be competitive with state‐of‐the‐art systems.
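
The Letter does not include an implementation, but the two-stage procedure it describes can be sketched roughly as follows. The sketch assumes the Hugging Face Transformers and PyTorch libraries; the fine_tune helper, the hyperparameters, and the toy sentence pairs are illustrative assumptions standing in for the GLUE, question-answering, PAWS and MRPC data used in the Letter, not the authors' actual setup.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def encode(pairs, labels):
    # Encode sentence pairs in the standard BERT pair format ([CLS] A [SEP] B [SEP]).
    batch = tokenizer([a for a, _ in pairs], [b for _, b in pairs],
                      padding=True, truncation=True, return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    return batch

def fine_tune(model, batches, epochs=1, lr=2e-5):
    # Plain supervised fine-tuning loop; the same routine is reused for both stages.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in batches:
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

# Stage 1: fine-tune on an intermediate sentence-pair task
# (the Letter uses GLUE tasks, question answering and PAWS; toy data here).
intermediate_batches = [encode([("Flights from New York to Florida.",
                                 "Flights from Florida to New York.")], [0])]
model = fine_tune(model, intermediate_batches)

# Stage 2: continue fine-tuning the same weights on the target PI task
# (MRPC-style paraphrase pairs; toy data here).
target_batches = [encode([("He said the food was delicious.",
                           "The food was delicious, he said.")], [1])]
model = fine_tune(model, target_batches)

The key design point is that stage 2 starts from the weights produced by stage 1 rather than from the original pre-trained checkpoint, which is what distinguishes twice fine-tuning from ordinary single-task fine-tuning on MRPC.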
