
Comparing Classical ML Models with Quantum ML Models with Parametrized Circuits for Sentiment Analysis Task
Author(s) -
Nisheeth Joshi,
Pragya Katyayan,
Syed Afroz Ahmed
Publication year - 2021
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1854/1/012032
Subject(s) - artificial intelligence, support vector machine, parameterized complexity, boosting (machine learning), gradient boosting, computer science, machine learning, quantum machine learning, classifier (UML), random forest, sentiment analysis, pattern recognition (psychology), quantum, algorithm, quantum algorithm, physics, quantum mechanics
This paper studies the performance of classical and quantum machine learning models for the sentiment analysis task. Popular machine learning algorithms, viz. support vector machine (SVM), gradient boosting (GB), and random forest (RF), are compared with a variational quantum classifier (VQC) built on two parameterized circuits, viz. EfficientSU2 and RealAmplitudes. The VQC experiments used IBM Quantum Experience and IBM Qiskit, while the classical machine learning models used scikit-learn. It was found that the performance of the VQC was slightly better than that of the popular machine learning algorithms. The experiments used a popular restaurant sentiment analysis dataset. Features were extracted from this dataset and then reduced to 5 features by applying PCA. The quantum ML models were trained for 100 epochs and 150 epochs. Overall, four quantum ML models and three classical ML models were trained. The performance of the trained models was evaluated using standard evaluation measures, viz. accuracy, precision, recall, and F-score. In all cases, the EfficientSU2-based model trained for 100 epochs performed better than all other models, producing an accuracy of 74.5% and an F-score of 0.7605, the highest across all the trained models.