Open Access
Comparison of word embeddings in text classification based on RNN and CNN
Author(s) -
Merlin Susan David,
Shini Renjith
Publication year - 2021
Publication title -
IOP Conference Series: Materials Science and Engineering
Language(s) - English
Resource type - Journals
eISSN - 1757-899X
pISSN - 1757-8981
DOI - 10.1088/1757-899X/1187/1/012029
Subject(s) - computer science , word (group theory) , word embedding , artificial intelligence , embedding , deep learning , field (mathematics) , recurrent neural network , natural language processing , document classification , pattern recognition (psychology) , artificial neural network , mathematics , geometry , pure mathematics
This paper presents a comparison of word embeddings for text classification using RNN and CNN models. Deep learning methods such as RNNs and CNNs have proven popular in the field of image classification. Among deep learning techniques in NLP, the CNN is the most popular model because of its simplicity and parallelism, even on large datasets. The word embedding techniques employed are GloVe and fastText, and the choice of embedding produced a marked difference in model accuracy. GloVe can perform poorly when embedding rare words; to address this issue, the fastText method is used. Deep neural networks with fastText showed a remarkable improvement in accuracy over GloVe, although fastText took longer to train. Accuracy was further improved by reducing the batch size. Finally, we conclude that word embeddings have a major impact on the performance of text classification models.
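The rare-word advantage described above comes from fastText representing a word as the average of its character n-gram vectors, so even a word absent from the training vocabulary still receives a meaningful embedding, whereas a fixed GloVe lookup table has no entry for it. The toy sketch below (all names and the hash-derived vectors are hypothetical illustrations, not the paper's code or the fastText library API) shows this subword composition in pure Python:

```python
# Sketch of fastText-style subword embedding lookup (hypothetical
# helper names and toy vectors; illustrates the idea, not the real model).
import hashlib

DIM = 8          # embedding dimension (toy size)
BUCKETS = 1000   # hash buckets for subword n-grams

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of '<word>', fastText-style boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def bucket_vector(ngram):
    """Deterministic pseudo-embedding for an n-gram's hash bucket."""
    h = int(hashlib.md5(ngram.encode()).hexdigest(), 16) % BUCKETS
    return [((h * (i + 1)) % 17 - 8) / 8.0 for i in range(DIM)]

def embed(word):
    """Average the subword vectors: even an unseen (rare) word gets
    a nonzero embedding, unlike a fixed GloVe lookup table."""
    grams = char_ngrams(word)
    vec = [0.0] * DIM
    for g in grams:
        vec = [a + b for a, b in zip(vec, bucket_vector(g))]
    return [v / len(grams) for v in vec]

print(len(char_ngrams("where")))  # 14 n-grams for "<where>"
print(len(embed("unseenword")))   # a DIM-length vector even for an OOV word
```

Because the n-gram table is shared across the whole vocabulary, a rare word like a misspelling or a domain term borrows signal from the frequent words it shares subwords with; this is the mechanism behind the accuracy gain over GloVe reported above.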