Mechanical Turk or volunteer participant? Comparing the two samples in the study of intelligent personal assistants
Author(s) - Lopatovska Irene, Korshakova Elena
Publication year - 2020
Publication title - Proceedings of the Association for Information Science and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.193
H-Index - 14
ISSN - 2373-9231
DOI - 10.1002/pra2.236
Subject(s) - demographics, crowdsourcing, task (project management), relevance (law), psychology, medical education, applied psychology, computer science, world wide web, medicine, engineering, sociology, demography, systems engineering, law, political science
A challenge in academic and practitioner research is recruiting study participants who match target demographics, possess a desired skillset, and will participate for little to no compensation. An alternative to the struggles of traditional participant recruitment is crowdsourcing participants through online labor markets, such as Amazon Mechanical Turk (AMT). AMT is a platform that provides tools for finding and recruiting participants with diverse demographics, skills, and experiences. This paper aims to demystify the use of crowdsourcing, and particularly AMT, by comparing the performance of traditionally recruited volunteers and AMT participants on tasks related to the evaluation of intelligent personal assistants (IPAs, such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana). The comparison of AMT and non‐AMT samples indicated that, while the two samples differed in demographics, their task performance was not significantly different. The paper discusses the costs and benefits of using AMT samples and will be of particular relevance to researchers who employ questionnaires and/or task‐specific data collection methods in their work.