
Extractive Text Summarization using Recurrent Neural Networks with Attention Mechanism
Author(s) - Shimirwa Aline Valerie
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5121/csit.2021.111518
Subject(s) - automatic summarization, extractive summarization, multi-document summarization, artificial neural network, encoder, attention mechanism, machine learning, generalization, artificial intelligence, computer science
Extractive summarization aims to select the most important sentences or words from a document to generate a summary. Traditional summarization approaches have relied extensively on manually designed features. In this paper, we propose a data-driven technique based on a recurrent neural network equipped with an attention mechanism. We set up a general framework consisting of a hierarchical sentence encoder and an attention-based sentence extractor, which allows us to build and explore a variety of extractive summarization models. Comprehensive experiments on two benchmark datasets show that training extractive models with Reward Augmented Maximum Likelihood (RAML) can improve the model's generalization capability, and that the complicated components of state-of-the-art extractive models do not outperform simpler ones. We hope our work offers useful hints for future research on extractive text summarization.
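The pipeline described in the abstract can be sketched in a few lines: a hierarchical encoder first builds sentence vectors from word embeddings, and an attention-based extractor then scores sentences against a document representation and selects the top-scoring ones. The sketch below is illustrative only, not the paper's actual model: it substitutes mean pooling where the paper uses RNN encoders, uses random toy embeddings, and all names (`encode_sentence`, `attention_scores`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_sentence(word_vecs):
    # Hierarchical step 1: pool word embeddings into one sentence vector.
    # (The paper uses RNN encoders; mean pooling stands in for simplicity.)
    return word_vecs.mean(axis=0)

def attention_scores(sent_vecs, doc_vec):
    # Attention-based extractor: score each sentence against the document
    # representation, then normalize with a softmax.
    logits = sent_vecs @ doc_vec
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy document: 4 sentences with variable word counts, 8-dim embeddings.
document = [rng.normal(size=(n, 8)) for n in (5, 3, 7, 4)]

# Hierarchical step 2: pool sentence vectors into a document vector.
sent_vecs = np.stack([encode_sentence(s) for s in document])
doc_vec = sent_vecs.mean(axis=0)

# Extract the two top-scoring sentences as the summary.
weights = attention_scores(sent_vecs, doc_vec)
summary_idx = np.argsort(weights)[::-1][:2]
print(sorted(summary_idx.tolist()))
```

In a trained model the attention weights would be produced by learned parameters and supervised (e.g. via RAML-style reward-weighted likelihood); here they simply illustrate how sentence scoring and selection fit together.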