
Construction and Application of a Data-Driven Abstract Extraction Model for English Text
Author(s) - Hui Peng
Publication year - 2022
Publication title - Scientific Programming
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.269
H-Index - 36
eISSN - 1875-919X
pISSN - 1058-9244
DOI - 10.1155/2022/9497783
Subject(s) - computer science, automatic summarization, natural language processing, artificial intelligence, sentence, inference, embedding, transformer, generative model, text generation, text graph, topic model, graph, information retrieval, generative grammar, theoretical computer science
In this paper, a single English text is taken as the research object, and automatic extraction of text summaries is studied with data-driven methods. The paper first establishes connection relationships between the sentences of an article and proposes an automatic summary-extraction method based on a graph model and a topic model. The method combines a text graph model, complex network theory, and the LDA topic model to construct a composite sentence-scoring function that computes per-sentence weights and outputs the sentences above a text-length threshold, in descending order of weight, as the summary. The algorithm improves the readability of the summary while still conveying enough of the text's information.

The paper then proposes a BERT-based topic-aware summarization model built on a neural topic model. The approach matches the latent topic embeddings encoded by the neural topic model against BERT's embedding representations to guide topic generation toward the semantic representation of the text, and it learns topic inference and summary generation jointly, end to end, through a transformer architecture, capturing semantic features while modelling long-range dependencies with self-attention.

Finally, the paper improves both extractive and generative algorithms on top of pretrained models, strengthening their memory of global information. Combining the advantages of the two, a new joint model is proposed that generates summaries more consistent with the original topic and with a lower repetition rate when article information is evenly distributed. Comparative experiments were conducted on several datasets, and a small private dataset with uniformly distributed information was also constructed.
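As a rough illustration of the extractive stage (not the paper's implementation; the weighting parameter `alpha`, the word-overlap similarity, and the degree-centrality choice are all assumptions), a composite sentence score that blends graph centrality with topic relevance might look like:

```python
from itertools import combinations

def similarity(toks1, toks2):
    """Word-overlap similarity between two tokenized sentences."""
    w1, w2 = set(toks1), set(toks2)
    if not w1 or not w2:
        return 0.0
    return len(w1 & w2) / (len(w1) + len(w2))

def score_sentences(sentences, topic_words, alpha=0.6):
    """Composite score: graph centrality blended with topic relevance."""
    tokenized = [s.lower().split() for s in sentences]
    # Graph centrality: each sentence is a node; edge weights are
    # pairwise similarities, and a node's centrality is its weighted degree.
    centrality = [0.0] * len(sentences)
    for i, j in combinations(range(len(sentences)), 2):
        sim = similarity(tokenized[i], tokenized[j])
        centrality[i] += sim
        centrality[j] += sim
    # Topic relevance: fraction of a sentence's words that belong to the
    # top words of the dominant topic (stand-in for an LDA topic here).
    topic = [sum(w in topic_words for w in toks) / len(toks) if toks else 0.0
             for toks in tokenized]
    return [alpha * c + (1 - alpha) * t for c, t in zip(centrality, topic)]

def summarize(sentences, topic_words, k=2):
    """Return the k highest-scoring sentences as the summary."""
    scores = score_sentences(sentences, topic_words)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in ranked[:k]]
```

In the paper's full method the topic weights would come from a fitted LDA model and the graph statistics from complex-network measures rather than plain degree centrality; the sketch only shows how the two signals can be fused into one ranking.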
Across several comparative experiments, the evaluation metrics improved by up to 2.5 percentage points, demonstrating the effectiveness of the method, and a prototype system for automatic abstract generation was built to showcase the results.
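The record does not name the evaluation metric; summarization work is conventionally scored with ROUGE, so as a labeled assumption, a minimal ROUGE-1 recall computation (the kind of metric a "2.5 percentage point" improvement would refer to) looks like:

```python
from collections import Counter

def rouge_1_recall(candidate, reference):
    """Unigram recall: fraction of reference words covered by the candidate.

    Uses clipped counts, so repeating a word in the candidate does not
    earn more credit than its count in the reference.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

For example, `rouge_1_recall("the cat sat", "the cat sat on the mat")` covers 3 of the 6 reference tokens, giving 0.5; a full evaluation would also report ROUGE-2 and ROUGE-L.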