Using calibration and interobserver agreement algorithms to assess the accuracy and precision of data from electronic and pen‐and‐paper continuous recording methods
Author(s) - Phillips Katrina J., Mudford Oliver C., Zeleny Jason R., Elliffe Douglas
Publication year - 2014
Publication title - Behavioral Interventions
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.605
H-Index - 34
eISSN - 1099-078X
pISSN - 1072-0847
DOI - 10.1002/bin.1395
Subject(s) - touchscreen , laptop , calibration , accuracy and precision , computer science , computer hardware , statistics
It is often assumed that electronic recording by observers necessarily provides better-quality data than pen‐and‐paper methods. Fifteen novice observers recorded rates of responding from 10 role‐played video samples using one of three continuous recording input formats: keyboard (laptop), touchscreen (personal digital assistants), or pen‐and‐paper. We evaluated the quality of the observers' data against criterion records using calibration and interobserver agreement algorithms. The calibration analysis showed that observers in the touchscreen group produced the most consistently accurate and precise data; the keyboard group showed wide variation in accuracy and precision; and the pen‐and‐paper group was significantly less precise than the touchscreen group. We conclude that although electronic recording has the potential to be as accurate as, and more precise than, pen‐and‐paper methods, this is far from guaranteed. Analyses of observers' errors inform recommendations for improving data accuracy and precision when using each method. Copyright © 2014 John Wiley & Sons, Ltd.
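
The abstract does not spell out the specific calibration or interobserver agreement (IOA) algorithms used, so the Python sketch below is only an illustration of the general idea: comparing observers' per-sample response counts against criterion records to index accuracy (bias) and precision (spread), plus a simple exact-agreement IOA percentage. The function names, example counts, and the exact-agreement formula are assumptions for demonstration, not the authors' method.

```python
# Illustrative sketch only: the paper's specific algorithms are not given in
# this abstract. Function names, example counts, and the exact-agreement
# formula are assumptions, not the authors' method.
from statistics import mean, stdev

def calibration_summary(observed, criterion):
    """Compare an observer's per-sample response counts with criterion counts.

    Returns (bias, spread): the mean signed error as a rough accuracy index
    and the standard deviation of the errors as a rough precision index.
    """
    errors = [obs - crit for obs, crit in zip(observed, criterion)]
    bias = mean(errors)
    spread = stdev(errors) if len(errors) > 1 else 0.0
    return bias, spread

def exact_agreement_ioa(record_a, record_b):
    """Percentage of samples on which two records report identical counts
    (a simple exact-agreement form of interobserver agreement)."""
    agreements = sum(1 for a, b in zip(record_a, record_b) if a == b)
    return 100.0 * agreements / len(record_a)

if __name__ == "__main__":
    criterion = [12, 8, 15, 9, 11]       # hypothetical criterion counts per video
    touchscreen = [12, 8, 14, 9, 11]     # hypothetical touchscreen observer
    pen_and_paper = [10, 9, 18, 7, 12]   # hypothetical pen-and-paper observer

    print(calibration_summary(touchscreen, criterion))    # small bias, small spread
    print(calibration_summary(pen_and_paper, criterion))  # larger spread = less precise
    print(exact_agreement_ioa(touchscreen, criterion))    # 80.0
```

In this hypothetical setup, a smaller spread relative to the criterion corresponds to the greater precision reported for the touchscreen group, while the wider spread for the pen‐and‐paper data mirrors the lower precision described in the abstract.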
