Empirical assessment of bias in machine learning diagnostic test accuracy studies
Author(s) -
Ryan Crowley,
Yuan Tan,
John P. A. Ioannidis
Publication year - 2020
Publication title -
Journal of the American Medical Informatics Association
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.614
H-Index - 150
eISSN - 1527-974X
pISSN - 1067-5027
DOI - 10.1093/jamia/ocaa075
Subject(s) - generalizability theory, blinding, confidence interval, diagnostic accuracy, medicine, diagnostic odds ratio, clinical study design, test (biology), sample size determination, research design, machine learning, artificial intelligence, computer science, statistics, medical physics, randomized controlled trial, pathology, clinical trial, radiology, mathematics, paleontology, biology
Machine learning (ML) diagnostic tools have significant potential to improve health care. However, methodological pitfalls may affect the diagnostic test accuracy studies used to appraise such tools. We aimed to evaluate the prevalence and reporting of key design characteristics within this literature. Further, we sought to empirically assess whether these design features are associated with different estimates of diagnostic accuracy.