
Developing Digital Tools for Remote Clinical Research: How to Evaluate the Validity and Practicality of Active Assessments in Field Settings
Author(s) - Jennifer Ferrar, Gareth J Griffith, Caroline Skirrow, Nathan Cashdollar, Nick Taptiklis, James Dobson, Fiona Cree, Francesca Cormack, Jennifer H. Barnett, Marcus R. Munafò
Publication year - 2021
Publication title - Journal of Medical Internet Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.446
H-Index - 142
eISSN - 1439-4456
pISSN - 1438-8871
DOI - 10.2196/26004
Subject(s) - computer science , process (computing) , field (mathematics) , data science , data collection , automatic identification and data capture , data validation , human–computer interaction , statistics , mathematics , database , pure mathematics , speech recognition , operating system
The ability of remote research tools to collect granular, high-frequency data on symptoms and digital biomarkers is a key strength: it circumvents many limitations of traditional clinical trials and improves the capture of clinically relevant data. This approach allows researchers to establish more robust baselines and derive novel phenotypes, improving precision in diagnosis and accuracy in outcomes. The process of developing these tools, however, is complex, because data need to be collected at a frequency that is meaningful but not burdensome for the participant or patient. Furthermore, traditional validation techniques, which rely on fixed testing conditions, may be inappropriate for tools designed to capture data under flexible conditions. This paper discusses the process for determining whether a digital assessment is suitable for remote research and offers suggestions on how to validate these novel tools.
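As a purely illustrative sketch of one validation step the abstract alludes to (it is not the authors' specific method), convergent validity of a remote assessment can be checked by correlating participants' remote scores against an in-clinic reference measure; the data and variable names below are hypothetical.

```python
# Minimal sketch (hypothetical data): convergent validity of a remote assessment
# against an in-clinic reference, using a Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired scores for the same participants (one value per setting).
in_clinic = np.array([12.0, 15.5, 9.8, 14.2, 11.1, 13.7, 10.4, 16.0])
remote = np.array([11.4, 16.1, 10.2, 13.8, 11.9, 13.0, 10.9, 15.2])

r, p = pearsonr(in_clinic, remote)
print(f"Convergent validity (Pearson r): {r:.2f}, p = {p:.3f}")

# With repeated remote sessions, scores could first be summarised per
# participant (e.g. the median across sessions) to reduce session-to-session
# noise arising from the flexible testing conditions discussed in the paper.
```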