Data Quality from Crowdsourced Surveys: A Mixed Method Inquiry into Perceptions of Amazon's Mechanical Turk Masters
Author(s) -
Lovett Matt,
Bajaba Saleh,
Lovett Myra,
Simmering Marcia J.
Publication year - 2018
Publication title -
Applied Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.497
H-Index - 88
eISSN - 1464-0597
pISSN - 0269-994X
DOI - 10.1111/apps.12124
Subject(s) - crowdsourcing , amazon rainforest , data collection , data quality , data science , perception , quality (philosophy) , survey data collection , qualitative property , psychology , computer science , world wide web , sociology , marketing , business , statistics , social science , mathematics , neuroscience , biology , machine learning , ecology , metric (unit) , philosophy , epistemology
Researchers in the social sciences are increasingly turning to online data collection panels for research purposes. While there is evidence that crowdsourcing platforms such as Amazon's Mechanical Turk can produce data as reliable as more traditional survey collection methods, little is known about Amazon's Mechanical Turk's most experienced respondents, their perceptions of crowdsourced data, and the degree to which these affect data quality. The current study utilises both quantitative and qualitative data to investigate Amazon's Mechanical Turk Masters' perceptions of and attitudes towards data quality (e.g. inattention). Recommendations for researchers using crowdsourced data are provided.
