Open Access
Artificial Intelligence-Based Differential Diagnosis: Development and Validation of a Probabilistic Model to Address Lack of Large-Scale Clinical Datasets
Author(s) - Shahrukh Chishti, Karan Raj Jaggi, Anuj Saini, Gaurav Agarwal, Ashish Ranjan
Publication year - 2020
Publication title - Journal of Medical Internet Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.446
H-Index - 142
eISSN - 1439-4456
pISSN - 1438-8871
DOI - 10.2196/17550
Subject(s) - medical diagnosis, overfitting, probabilistic logic, machine learning, gold standard (test), computer science, artificial intelligence, set (abstract data type), scale (ratio), recall, statistical model, presentation (obstetrics), disease, data mining, medicine, statistics, mathematics, pathology, psychology, artificial neural network, physics, quantum mechanics, cognitive psychology, radiology, programming language
Background: Machine-learning and deep-learning algorithms for clinical diagnosis are inherently dependent on the availability of large-scale clinical datasets. The lack of such datasets, and inherent problems such as overfitting, often necessitate the development of innovative solutions. Probabilistic modeling closely mimics the rationale behind clinical diagnosis and represents a unique solution.

Objective: The aim of this study was to develop and validate a probabilistic model for differential diagnosis in different medical domains.

Methods: Numerical values of symptom-disease associations were used to mathematically represent medical domain knowledge. These values served as the core engine of the probabilistic model. For a given set of symptoms, the model produced a ranked list of differential diagnoses, which was compared to the differential diagnosis constructed by a physician in a consult. Practicing medical specialists were integral to the development and validation of the model. Clinical vignettes (patient case studies) were used to compare the accuracy of doctors and the model against the assumed gold standard. The accuracy analysis was carried out over the following metrics: top-3 accuracy, precision, and recall.

Results: The model demonstrated a statistically significant improvement (P=.002) in diagnostic accuracy (85%) compared to the doctors' performance (67%). This advantage was retained across all three categories of clinical vignettes: 100% vs 82% (P<.001) for highly specific disease presentation, 83% vs 65% (P=.005) for moderately specific disease presentation, and 72% vs 49% (P<.001) for nonspecific disease presentation. The model performed slightly better than the doctors' average in precision (62% vs 60%, P=.43) but showed no improvement in recall (53% vs 56%, P=.27); neither difference was statistically significant.
Conclusions: The present study demonstrates a marked improvement over previously reported results, attributable to the development of a stable probabilistic framework that uses symptom-disease associations to mathematically represent medical domain knowledge. The current iteration relies on static, manually curated values for calculating the degree of association. Shifting to values derived from real-world data represents the next step in model development.
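The ranking step described in the Methods — scoring candidate diseases from numerical symptom-disease association values and returning a ranked differential — can be sketched as a naive-Bayes-style scorer. The disease names, association weights, and the small floor weight for unlisted pairs below are illustrative assumptions for the sketch, not the authors' curated data or actual algorithm:

```python
import math

# Hypothetical symptom-disease association strengths in (0, 1];
# the paper's curated values are not public.
ASSOCIATIONS = {
    "influenza":   {"fever": 0.9, "cough": 0.8, "myalgia": 0.7},
    "common_cold": {"cough": 0.7, "sneezing": 0.9, "fever": 0.3},
    "pneumonia":   {"fever": 0.8, "cough": 0.9, "dyspnea": 0.8},
}

def rank_differentials(symptoms, associations=ASSOCIATIONS):
    """Score each disease by summing log association weights of the
    presented symptoms (an independence assumption), then return
    diseases ranked from most to least likely."""
    scores = {}
    for disease, weights in associations.items():
        # A small floor weight for unlisted symptom-disease pairs
        # keeps one missing association from eliminating a disease.
        scores[disease] = sum(
            math.log(weights.get(s, 0.01)) for s in symptoms
        )
    return sorted(scores, key=scores.get, reverse=True)

def top_k_accuracy(cases, k=3):
    """Fraction of cases whose gold-standard diagnosis appears in the
    model's top-k ranked list (the paper's top-3 accuracy metric)."""
    hits = sum(
        gold in rank_differentials(symptoms)[:k]
        for symptoms, gold in cases
    )
    return hits / len(cases)

cases = [
    (["fever", "cough", "myalgia"], "influenza"),
    (["cough", "sneezing"], "common_cold"),
]
print(rank_differentials(["fever", "cough", "myalgia"])[0])  # influenza
print(top_k_accuracy(cases, k=3))
```

The log-sum form makes relative ranking stable even when individual weights are small; precision and recall against a physician-built differential can then be computed per vignette in the same way.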
