Open Access
Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches
Author(s) -
Jinqiu Kuang,
Lauren Argo,
Greg Stoddard,
Bruce E. Bray,
Qing Zeng-Treitler
Publication year - 2015
Publication title -
Journal of Medical Internet Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.446
H-Index - 142
eISSN - 1439-4456
pISSN - 1438-8871
DOI - 10.2196/jmir.4582
Subject(s) - crowdsourcing, sample (material), Likert scale, applied psychology, set (abstract data type), medicine, psychology, computer science, data science, world wide web, developmental psychology, chemistry, chromatography, programming language
Background: Compared to traditional methods of participant recruitment, online crowdsourcing platforms offer a fast and low-cost alternative. Amazon Mechanical Turk (MTurk), a large and well-known crowdsourcing service, has become the leading platform for crowdsourced recruitment.

Objective: To explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs.

Methods: A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. This set of illustrations (n=486) was first tested through an in-person survey in a hospital setting (n=150) and then with online MTurk participants (n=150). We analyzed the survey results to determine their comparability.

Results: Both the demographics and the pictograph recognition rates of the online participants differed from those of the in-person participants. In the multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct).

Conclusions: The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but it cannot replace them.
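A minimal sketch of the kind of adjusted comparison described in the Results: an ordinary least squares model with a group indicator (MTurk vs. in-person) and demographic covariates, from which the coefficient on the group term gives the adjusted mean difference in recognition ratings. The column names (rating, group, age, sex, education) and the input file are hypothetical; the paper's actual variables and model specification may differ.

```python
# Hedged sketch of a multivariable linear regression comparing mean
# pictograph recognition ratings (4-point Likert) between an MTurk sample
# and an in-person hospital sample, adjusting for demographics.
# Column names and the CSV file are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

# One row per rating, with a 'group' column taking values "mturk" or "hospital"
df = pd.read_csv("pictograph_ratings.csv")  # hypothetical data file

# OLS with the group indicator plus demographic covariates
model = smf.ols(
    "rating ~ C(group, Treatment('hospital')) + age + C(sex) + C(education)",
    data=df,
).fit()

print(model.summary())
# The coefficient on the C(group)[T.mturk] term estimates the adjusted mean
# difference between groups (reported in the abstract as 0.18, 95% CI 0.08-0.28).
```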
