Attention by design: Using attention checks to detect inattentive respondents and improve data quality
Author(s) - Abbey, James D.; Meloy, Margaret G.
Publication year - 2017
Publication title - Journal of Operations Management
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.649
H-Index - 191
eISSN - 1873-1317
pISSN - 0272-6963
DOI - 10.1016/j.jom.2017.06.001
Subject(s) - prima facie, empirical research, sample size determination, Type I and Type II errors, attrition, cognitive psychology, statistical power, data quality, statistics, econometrics, psychology, operations management
This paper examines attention checks and manipulation validations to detect inattentive respondents in primary empirical data collection. These prima facie attention checks range from the simple, such as reverse scaling, first proposed a century ago, to more recent and involved methods, such as evaluating response patterns and timed responses via online data-capture tools. The attention-check validations also range from easily implemented mechanisms, such as automatic detection through directed queries, to highly intensive investigation of responses by the researcher. The latter has the potential to introduce inadvertent researcher bias, as the researcher's judgment may affect the interpretation of the data. The empirical findings of the present work reveal that construct and scale validations show consistently significant improvement in fit statistics, a finding of particular use to researchers working predominantly with scales and constructs in their empirical models. However, based on the rudimentary experimental models employed in the analysis, attention checks generally do not show a consistent, systematic improvement in the significance of test statistics for experimental manipulations. This latter result indicates that, by their very nature, attention checks may trigger an inherent trade-off between the loss of sample subjects (lowered power and increased Type II error) and the potential of capitalizing on chance alone (the possibility that previously significant results were in fact Type I errors). The analysis also shows that attrition rates due to attention checks, upwards of 70% in some observed samples, are far larger than typically assumed. Such loss rates raise the specter that studies not validating attention may inadvertently increase their Type I error rate. The manuscript provides general guidelines for various attention checks, discusses the psychological nuances of the methods, and highlights the delicate balance among incentive alignment, monetary compensation, and the subsequently triggered mood of respondents.
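
To make the trade-off concrete, the sketch below shows one common way such a screen is applied in practice. It is an illustrative example only, not the authors' procedure: the file name and the columns attention_item, condition, and dv are hypothetical, and the directed query assumes respondents were instructed to select a specific response option.

import pandas as pd
from scipy import stats

# Hypothetical survey export; column names are placeholders.
df = pd.read_csv("responses.csv")

# Directed query: respondents were told to choose option 2 on this item.
passed = df["attention_item"] == 2
attrition_rate = 1 - passed.mean()
print(f"Attrition due to the attention check: {attrition_rate:.1%}")

# Re-run a simple manipulation test on the full sample and on the
# attentive subsample to see how the test statistic and the usable
# sample size shift once inattentive respondents are excluded.
def manipulation_test(data):
    treated = data.loc[data["condition"] == "treatment", "dv"]
    control = data.loc[data["condition"] == "control", "dv"]
    return stats.ttest_ind(treated, control, equal_var=False)

print("Full sample:    ", manipulation_test(df))
print("Attentive only: ", manipulation_test(df[passed]))

A high attrition rate paired with a materially different test result after exclusion is the pattern the abstract flags: the attentive-only analysis may gain validity but lose power, while the unscreened analysis may owe its significance to Type I error.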
