Open Access
Categorizing Vaccine Confidence With a Transformer-Based Machine Learning Model: Analysis of Nuances of Vaccine Sentiment in Twitter Discourse
Author(s) -
Per Egil Kummervold,
Sam Martin,
Sara Dada,
Eliz Kilich,
Chermain Denny,
Pauline Paterson,
Heidi J. Larson
Publication year - 2021
Publication title -
JMIR Medical Informatics
Language(s) - English
Resource type - Journals
ISSN - 2291-9694
DOI - 10.2196/29584
Subject(s) - social media , artificial intelligence , computer science , machine learning , psychological intervention , natural language processing , sentiment analysis , annotation , set (abstract data type) , test set , information retrieval , world wide web , medicine , nursing , programming language
Background: Social media has become an established platform for individuals to discuss and debate various subjects, including vaccination. With growing conversations on the web and lower-than-desired maternal vaccination uptake rates, these conversations could provide useful insights to inform future interventions. However, owing to the volume of web-based posts, manual annotation and analysis are difficult and time-consuming. Automated approaches to this type of analysis, such as natural language processing, have faced challenges in extracting complex stances, such as attitudes toward vaccination, from large amounts of text.

Objective: The aim of this study is to build upon recent advances in transformer-based machine learning methods and test whether transformer-based machine learning could be used as a tool to assess the stance expressed in social media posts toward vaccination during pregnancy.

Methods: A total of 16,604 tweets posted between November 1, 2018, and April 30, 2019, were selected using keyword searches related to maternal vaccination. After excluding irrelevant tweets, the remaining tweets were coded by 3 individual researchers into the categories Promotional, Discouraging, Ambiguous, and Neutral or No Stance. After creating a final data set of 2722 unique tweets, multiple machine learning techniques were trained on part of this data set and then tested and compared with the human annotators.

Results: We found the accuracy of the machine learning techniques to be 81.8% (F-score=0.78) compared with the agreed score among the 3 annotators. For comparison, the accuracies of the individual annotators compared with the final score were 83.3%, 77.9%, and 77.5%.

Conclusions: This study demonstrates that we can achieve close to the same accuracy in categorizing tweets using our machine learning models as could be expected from a single human coder. This reliable and accurate automated process could free valuable time and resources for conducting such analyses, in addition to informing potentially effective and necessary interventions.
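The Results section compares model output against the adjudicated human labels using accuracy and an F-score. As an illustrative sketch only (not the study's actual evaluation code), these two metrics can be computed for the four stance categories as follows, assuming both the gold and predicted labels are plain category strings:

```python
# Illustrative sketch: accuracy and macro-averaged F1 for the four
# stance categories described in the study. Category names are taken
# from the abstract; the functions themselves are a generic evaluation
# sketch, not the authors' pipeline.
CATEGORIES = ["Promotional", "Discouraging", "Ambiguous", "Neutral or No Stance"]

def accuracy(gold, pred):
    """Fraction of tweets where the predicted stance matches the adjudicated label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred, categories=CATEGORIES):
    """Average the per-category F1 scores, weighting each category equally."""
    f1_scores = []
    for c in categories:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))  # true positives
        fp = sum(g != c and p == c for g, p in zip(gold, pred))  # false positives
        fn = sum(g == c and p != c for g, p in zip(gold, pred))  # false negatives
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1_scores.append(
            2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0
        )
    return sum(f1_scores) / len(f1_scores)
```

Macro-averaging treats the four categories equally regardless of how many tweets fall into each, which matters when classes such as Discouraging are rarer than Promotional; the abstract does not state which averaging the authors used, so this choice is an assumption.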
