Open Access
A New Remote Guided Method for Supervised Web-Based Cognitive Testing to Ensure High-Quality Data: Development and Usability Study
Author(s) -
Victoria Leong,
Kausar Raheel,
Jia Yi Sim,
Kriti Kacker,
Vasilis M Karlaftis,
Chrysoula Vassiliu,
Kastoori Kalaivanan,
Annabel Chen,
Trevor W. Robbins,
Barbara J. Sahakian,
Zoe Kourtzi
Publication year - 2022
Publication title -
Journal of Medical Internet Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.446
H-Index - 142
eISSN - 1439-4456
pISSN - 1438-8871
DOI - 10.2196/28368
Subject(s) - usability, data quality, computer science, data collection, quality (philosophy), web application, world wide web, human–computer interaction, engineering, statistics, metric (unit), philosophy, operations management, mathematics, epistemology
Background
The global COVID-19 pandemic has triggered a fundamental reexamination of how human psychological research can be conducted safely and robustly in a new era of digital working and physical distancing. Web-based testing has risen to the forefront as a promising solution for the rapid mass collection of cognitive data without requiring human contact. However, a long-standing debate exists over the data quality and validity of web-based studies. This study examines the opportunities and challenges afforded by the societal shift toward web-based testing and highlights an urgent need to establish a standard data quality assurance framework for online studies.

Objective
This study aims to develop and validate a new supervised online testing methodology, remote guided testing (RGT).

Methods
A total of 85 healthy young adults were tested on 10 cognitive tasks assessing executive functioning (flexibility, memory, and inhibition) and learning. Tasks were administered either face-to-face in the laboratory (n=41) or online using remote guided testing (n=44) and delivered using identical web-based platforms (Cambridge Neuropsychological Test Automated Battery, Inquisit, and i-ABC). Data quality was assessed using detailed trial-level measures (missed trials, outlying and excluded responses, and response times) and overall task performance measures.

Results
Across all data quality and performance measures, RGT data were statistically equivalent to in-person data collected in the laboratory (P>.40 for all comparisons). Moreover, RGT participants outperformed the laboratory group on measured verbal intelligence (P<.001), which could reflect test environment differences, including possible effects of mask-wearing on communication.

Conclusions
These data suggest that the RGT methodology could help ameliorate concerns regarding online data quality—particularly for studies involving high-risk or rare cohorts—and offer an alternative for collecting high-quality human cognitive data without requiring in-person physical attendance.
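The trial-level quality measures named in the Methods (missed trials, outlying responses, and response times) can be summarized per participant before any group comparison. The sketch below is illustrative only: the data layout, field names, and the 3-SD outlier rule are assumptions, not the paper's exact pipeline.

```python
from statistics import mean, stdev

def quality_metrics(trials, rt_sd_cutoff=3.0):
    """Summarize trial-level data quality for one participant.

    `trials` is a list of dicts, each with an 'rt' key holding the
    response time in milliseconds, or None for a missed trial.
    The 3-SD outlier criterion is a common convention and is an
    assumption here, not taken from the study itself.
    """
    n = len(trials)
    missed = sum(1 for t in trials if t["rt"] is None)
    rts = [t["rt"] for t in trials if t["rt"] is not None]
    mu, sd = mean(rts), stdev(rts)
    # Flag responses more than rt_sd_cutoff SDs from the mean.
    outliers = sum(1 for rt in rts if abs(rt - mu) > rt_sd_cutoff * sd)
    return {
        "missed_rate": missed / n,
        "outlier_rate": outliers / len(rts),
        # Mean RT after excluding flagged outliers.
        "mean_rt": mean(rt for rt in rts
                        if abs(rt - mu) <= rt_sd_cutoff * sd),
    }
```

Metrics like these, computed for each participant in the laboratory and RGT groups, could then feed a standard two-sample comparison of the kind reported in the Results.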
