Open Access
Community pooling: LDA topic modeling in Twitter
Author(s) -
Solange Oliveira Rezende,
Emanuel Silva
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.52591/lxai2021072410
Subject(s) - latent dirichlet allocation, pooling, topic model, computer science, scheme (mathematics), information retrieval, artificial intelligence, social media, machine learning, world wide web, mathematical analysis, mathematics
Aspect-Based Sentiment Analysis (ABSA) tasks aim to identify consumers’ opinions about different aspects of products or services. BERT-based language models have been used successfully in applications that require a deep understanding of language, such as sentiment analysis. This paper investigates the use of disentangled learning to improve BERT-based textual representations in ABSA tasks. Motivated by the success of disentangled representation learning in computer vision, which aims to obtain explanatory factors of the data representations, we explore the recent DeBERTa model (Decoding-enhanced BERT with Disentangled Attention) to disentangle the syntactic and semantic features in a BERT architecture. Experimental results show that incorporating disentangled attention with a simple fine-tuning strategy for downstream tasks outperforms state-of-the-art models on ABSA benchmark datasets.
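The disentangled attention mechanism the abstract refers to scores each query–key pair as the sum of three terms: content-to-content, content-to-position, and position-to-content, where positions enter only through bucketed relative distances. A minimal single-head numpy sketch of that scoring rule (toy random weights, not the authors' trained model; the bucketing function and dimensions here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d = 6, 8   # toy sequence length and head dimension (assumed)
k = 4               # maximum relative distance considered

# Token content states and shared relative-position embeddings.
H = rng.normal(size=(seq_len, d))
P = rng.normal(size=(2 * k, d))

# Separate projections for content and position, the core of disentanglement.
Wq_c, Wk_c = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wq_r, Wk_r = rng.normal(size=(d, d)), rng.normal(size=(d, d))

Qc, Kc = H @ Wq_c, H @ Wk_c   # content queries / keys
Qr, Kr = P @ Wq_r, P @ Wk_r   # position queries / keys

def delta(i, j, k):
    """Relative distance i - j clipped into the bucket range [0, 2k)."""
    d_ij = i - j
    if d_ij <= -k:
        return 0
    if d_ij >= k:
        return 2 * k - 1
    return d_ij + k

# Score = content-to-content + content-to-position + position-to-content,
# scaled by sqrt(3d) since three dot-product terms are summed.
A = np.empty((seq_len, seq_len))
for i in range(seq_len):
    for j in range(seq_len):
        c2c = Qc[i] @ Kc[j]
        c2p = Qc[i] @ Kr[delta(i, j, k)]
        p2c = Kc[j] @ Qr[delta(j, i, k)]
        A[i, j] = (c2c + c2p + p2c) / np.sqrt(3 * d)

# Softmax over keys yields the attention weights.
weights = np.exp(A - A.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
```

Because position information is kept in its own projections rather than added into the token embeddings, the syntactic (positional) and semantic (content) contributions to each attention score remain separable, which is the property the paper exploits for ABSA fine-tuning.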
