O3‐03‐04: Cogstate and NIH Toolbox‐Cognitive Computerized Test Batteries: Repeat Assessments among African‐American Elders
Author(s) -
Shair Sarah,
Kavcic Voyko,
Garcia Sarah,
Lichtenberg Peter,
Paulson Henry L.,
Davis Kacy,
Overall Janet,
Rose Edna,
Campbell Stephen,
Teboe Sherry,
Bhaumik Arijit,
Hampstead Benjamin,
Dodge Hiroko H.,
Giordani Bruno
Publication year - 2016
Publication title -
Alzheimer's & Dementia
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.713
H-Index - 118
eISSN - 1552-5279
pISSN - 1552-5260
DOI - 10.1016/j.jalz.2016.06.521
Subject(s) - psychology , cognitive test , cognition , working memory , episodic memory , executive functions , medicine , toolbox , clinical psychology , executive dysfunction , gerontology , audiology , psychiatry , neuropsychology , computer science , programming language
Background: Unsupervised web-based cognitive assessment could widen and refine recruitment into clinical trials while reducing costs. However, it poses reliability challenges that are not present in supervised testing. We aimed to establish: 1) the comparability of web-based testing to supervised testing, and 2) measures of participant compliance and engagement.

Methods: Six hundred participants aged 18 to 70 were recruited for online testing and matched on age and gender to 94 participants assessed in a supervised setting. Participants completed an adaptive test of episodic memory (Paired Associates Learning, PAL) from CANTAB. Demographic and background data (e.g., age, education) were also recorded. Participants in supervised testing were assessed on iPads, whereas participants in online testing used a variety of systems, including desktop computers, laptops, and tablet devices.

Results: There was no difference in PAL errors between supervised and web-based testing. Within web-based testing, there was no difference between hardware platforms or browsers. However, trial-by-trial timing data showed highly variable and slow reaction times in a number of participants during web-based testing, outside the bounds seen in supervised testing. This was associated with more PAL errors (r = .35) and younger age (r = -.21). Test-retest reliability increased when participants with speed and variability parameters outside those of supervised testing were excluded. Web browser activity monitoring revealed whether participants tabbed to a different browser window during task performance (n = 200). This behaviour was associated with poorer PAL performance (t = -2.09, df = 161.5, p = 0.03), greater RT variability (t = -3.48, df = 119.6, p < 0.01), and younger age (t = 3.4157, df = 192.9, p < 0.01).
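The group comparisons in the Results report non-integer degrees of freedom, which is consistent with Welch's unequal-variance t-test. A minimal sketch of such a comparison is shown below; the PAL error scores for the two groups are synthetic, illustrative values, not the study's data.

```python
from scipy import stats

# Illustrative PAL error counts (synthetic data, assumed for this sketch):
# participants who tabbed to another browser window vs. those who stayed on task.
tabbed_away = [14, 11, 16, 13, 18, 15, 12, 17]
stayed_on_task = [9, 12, 10, 8, 11, 13, 9, 10]

# Welch's t-test (equal_var=False), which yields the non-integer df
# seen in the reported statistics.
t_stat, p_value = stats.ttest_ind(tabbed_away, stayed_on_task, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With these illustrative samples, tab-switchers make more errors and the test returns a small p-value, mirroring the direction of the reported effect.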
Conclusions: Behavioural metrics can be used to reliably identify lack of task engagement in web-based testing. Good task engagement is typically seen in older participants. A combination of detailed task behaviour analysis and monitoring technology is recommended in remote testing. Once these safeguards are in place, online testing is feasible and produces similar results to those obtained in supervised settings, making this technology suitable for remote assessment of cognitive function for recruitment and research purposes.
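The exclusion rule described above, dropping participants whose reaction-time speed and variability fall outside the bounds observed under supervised testing, can be sketched as follows. The threshold values and the function name are assumptions for illustration; the abstract does not specify the actual bounds.

```python
import statistics

# Assumed bounds derived from the supervised cohort (illustrative values only,
# not taken from the study).
SUPERVISED_MEDIAN_RT_MAX_MS = 5000   # assumed cap on median reaction time
SUPERVISED_RT_SD_MAX_MS = 1500       # assumed cap on RT standard deviation

def passes_engagement_check(reaction_times_ms):
    """Return True if a participant's trial-by-trial reaction times fall
    within the (assumed) speed and variability bounds seen in supervised
    testing; participants failing the check would be excluded."""
    median_rt = statistics.median(reaction_times_ms)
    rt_sd = statistics.pstdev(reaction_times_ms)
    return (median_rt <= SUPERVISED_MEDIAN_RT_MAX_MS
            and rt_sd <= SUPERVISED_RT_SD_MAX_MS)

# A steady responder passes; a slow, erratic responder is flagged for exclusion.
steady = [900, 950, 1020, 880, 940]
erratic = [800, 9000, 1200, 14000, 700]
print(passes_engagement_check(steady))   # True
print(passes_engagement_check(erratic))  # False
```

In practice the bounds would be estimated from the supervised sample's RT distribution; the point of the sketch is that a simple per-participant speed-and-variability filter is enough to implement the engagement screening the authors recommend.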