Evaluation of COVID-19 Information Provided by Digital Voice Assistants
Author(s) -
Alysee Shin Ying Goh,
Li Lian Wong,
Kevin Yi-Lwern Yap
Publication year - 2021
Publication title -
International Journal of Digital Health
Language(s) - English
Resource type - Journals
ISSN - 2634-4580
DOI - 10.29337/ijdh.25
Subject(s) - misinformation , telehealth , computer science , covid-19 , internet privacy , medicine , telemedicine , computer security , health care , infectious disease (medical specialty)
Background: Digital voice assistants are widely used for health information-seeking activities during the COVID-19 pandemic. Because COVID-19 information changes rapidly, the COVID-related information provided by voice assistants needs to be evaluated to ensure that consumers' needs are met and to prevent misinformation. The objective of this study was to evaluate COVID-related information provided by voice assistants in terms of relevance, accuracy, comprehensiveness, user-friendliness and reliability.

Materials and Methods: The voice assistants evaluated were Amazon Alexa, Google Home, Google Assistant, Samsung Bixby, Apple Siri and Microsoft Cortana. Two evaluators posed COVID-19 questions to the voice assistants and evaluated the responses for relevance, accuracy, comprehensiveness, user-friendliness and reliability. Questions were obtained from the World Health Organization, governmental websites, forums and search trends. Data were analyzed using Pearson's correlation, independent-samples t-tests and Wilcoxon rank-sum tests.

Results: Google Assistant and Siri performed the best across all evaluation parameters, with mean scores of 84.0% and 80.6% respectively. Bixby performed the worst among the smartphone-based voice assistants (65.8%). Among the non-smartphone voice assistants, Google Home performed the best (60.7%), followed by Alexa (43.1%) and Cortana (13.3%). Smartphone-based voice assistants had higher mean scores than voice assistants on other platforms (76.8% versus 39.1%, p = 0.064). Google Assistant consistently scored better than Google Home on all evaluation parameters. A decreasing score trend from Google Assistant to Siri, Bixby, Google Home, Alexa and Cortana was observed for the majority of the evaluation criteria, except accuracy, comprehensiveness and credibility.
Conclusion: Google Assistant and Apple Siri were able to provide users with relevant, accurate, comprehensive, user-friendly, and reliable information regarding COVID-19. With the rapidly evolving information on this pandemic, users need to be discerning when obtaining COVID-19 information from voice assistants.
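The smartphone versus non-smartphone comparison above can be illustrated with a small exact Wilcoxon rank-sum (Mann-Whitney) computation. This is a minimal pure-Python sketch assuming the six published per-assistant mean scores as inputs; the study's reported p = 0.064 was presumably computed on the underlying per-question or per-criterion scores, so the p-value here is illustrative of the procedure only, not a reproduction of the paper's result.

```python
from itertools import combinations

def exact_rank_sum_p(x, y):
    """Exact two-sided Wilcoxon rank-sum (Mann-Whitney U) p-value,
    computed by enumerating every possible group assignment.
    Only practical for tiny samples such as the six scores here."""
    def u_stat(a, b):
        # Number of (a_i, b_j) pairs with a_i > b_j; ties count 0.5.
        return sum((ai > bj) + 0.5 * (ai == bj) for ai in a for bj in b)

    observed = u_stat(x, y)
    pooled = x + y
    mu = len(x) * len(y) / 2  # mean of U under the null hypothesis
    count = total = 0
    for idx in combinations(range(len(pooled)), len(x)):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        # Count assignments at least as extreme as the observed split.
        if abs(u_stat(a, b) - mu) >= abs(observed - mu) - 1e-9:
            count += 1
        total += 1
    return count / total

# Mean scores from the abstract (illustrative grouping).
smartphone = [84.0, 80.6, 65.8]   # Google Assistant, Siri, Bixby
other      = [60.7, 43.1, 13.3]   # Google Home, Alexa, Cortana

p = exact_rank_sum_p(smartphone, other)
```

With three scores per group, every smartphone score exceeds every non-smartphone score, so the observed split is one of the two most extreme of the 20 possible assignments and the exact two-sided p is 2/20 = 0.10, which is why such small group sizes cannot reach conventional significance even with complete separation.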