
RoBERTa-BiLSTM: A Context-Aware Hybrid Model for Sentiment Analysis
Author(s) - Md Mostafizer Rahman, Ariful Islam Shiplu, Yutaka Watanobe, Md Ashad Alam
Publication year - 2025
Publication title - IEEE Transactions on Emerging Topics in Computational Intelligence
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 1.135
H-Index - 21
eISSN - 2471-285X
DOI - 10.1109/tetci.2025.3572150
Subject(s) - computing and processing
Abstract - With the rapid advancement of technology and its easy accessibility, online activity has become an integral part of everyday human life. Expressing opinions, providing feedback, and sharing feelings by commenting on various platforms, including social media, education, business, entertainment, and sports, have become common practice. Effectively analyzing these comments to uncover latent intentions holds immense value for strategic decision-making across various domains. However, several challenges hinder sentiment analysis, including the lexical diversity of comments, long-range dependencies within the text, unknown symbols and words, and imbalanced datasets. Moreover, existing sentiment analysis approaches have mostly relied on sequential models to encode texts with long-range dependencies, which requires longer execution time because the text is processed sequentially. In contrast, the Transformer requires less execution time owing to its parallel processing. In this work, we introduce a novel hybrid deep learning model, RoBERTa-BiLSTM, which combines the Robustly Optimized BERT Pretraining Approach (RoBERTa) with Bidirectional Long Short-Term Memory (BiLSTM) networks. RoBERTa is used to generate meaningful word embedding vectors, while BiLSTM effectively captures the contextual semantics of texts with long-range dependencies. The RoBERTa-BiLSTM hybrid model thus leverages the strengths of both sequential and Transformer models to enhance sentiment analysis performance. We conducted experiments on the IMDb, Twitter US Airline, and Sentiment140 datasets to evaluate the proposed model against existing state-of-the-art methods. The experimental results show that RoBERTa-BiLSTM surpasses baseline models (e.g., BERT, RoBERTa-base, RoBERTa-GRU, and RoBERTa-LSTM), achieving accuracies of 80.74%, 92.36%, and 82.25% on the Twitter US Airline, IMDb, and Sentiment140 datasets, respectively, and F1-scores of 80.73%, 92.35%, and 82.25% on the same datasets.
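
To make the described pipeline concrete, below is a minimal PyTorch sketch of a RoBERTa-BiLSTM classifier of the kind the abstract outlines: RoBERTa supplies contextual word embeddings and a BiLSTM encodes them before a classification head. The checkpoint name ("roberta-base"), the LSTM hidden size, the three-class output, and the last-time-step pooling are illustrative assumptions, not the paper's exact configuration.

# A minimal sketch of a RoBERTa-BiLSTM hybrid classifier, assuming the
# Hugging Face "roberta-base" checkpoint and hypothetical hyperparameters
# (LSTM hidden size, number of classes); the paper's setup may differ.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class RobertaBiLSTM(nn.Module):
    def __init__(self, num_classes=3, lstm_hidden=256):
        super().__init__()
        # RoBERTa produces contextual word embeddings (768-dim for roberta-base)
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        # Bidirectional LSTM reads the embedding sequence in both directions
        self.bilstm = nn.LSTM(
            input_size=self.roberta.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        # Classification head over the concatenated forward/backward states
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Token-level embeddings from RoBERTa: (batch, seq_len, hidden)
        embeddings = self.roberta(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # BiLSTM captures long-range context across the sequence
        lstm_out, _ = self.bilstm(embeddings)
        # Use the final time step's bidirectional state for classification
        return self.classifier(lstm_out[:, -1, :])

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
batch = tokenizer(["The flight was delayed but the crew was great."],
                  return_tensors="pt", padding=True, truncation=True)
model = RobertaBiLSTM()
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 3])

In practice, the RoBERTa output could also be mean-pooled or the [CLS]-position state could be used instead of the last time step; the sketch only illustrates how the Transformer embeddings feed the recurrent encoder.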