Open Access
Deep transfer learning artificial intelligence accurately stages COVID-19 lung disease severity on portable chest radiographs
Author(s) -
Jocelyn Zhu,
Beiyi Shen,
Almas Abbasi,
Mahsa Hoshmand-Kochi,
Haifang Li,
Timothy Q. Duong
Publication year - 2020
Publication title -
PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0236621
Subject(s) - radiography , medicine , transfer of learning , convolutional neural network , covid-19 , ground glass opacity , deep learning , artificial intelligence , kappa , standard deviation , lung , cohen's kappa , radiology , nuclear medicine , disease , machine learning , statistics , computer science , mathematics , infectious disease (medical specialty) , adenocarcinoma , geometry , cancer
This study employed deep-learning convolutional neural networks (CNNs) to stage lung disease severity of Coronavirus Disease 2019 (COVID-19) infection on portable chest x-ray (CXR), with radiologist scores of disease severity as ground truth. The study consisted of 131 portable CXRs from 84 COVID-19 patients (51 men, 55.1±14.9 years old; 29 women, 60.1±14.3 years old; 4 with missing information). Three expert chest radiologists scored the left and right lung separately based on the degree of opacity (0–3) and geographic extent (0–4). A deep-learning CNN was used to predict the lung disease severity scores. Data were split into 80% training and 20% testing datasets. Correlations between AI-predicted and radiologist scores were analyzed, and traditional learning was compared with transfer learning. The average opacity score was 2.52 (range: 0–6) with a standard deviation of 0.25 (9.9%) across the three readers. The average geographic extent score was 3.42 (range: 0–8) with a standard deviation of 0.57 (16.7%) across the three readers. The inter-rater agreement yielded a Fleiss' Kappa of 0.45 for the opacity score and 0.71 for the extent score. AI-predicted scores strongly correlated with radiologist scores, with the top model yielding a correlation coefficient (R²) of 0.90 (range: 0.73–0.90 for traditional learning and 0.83–0.90 for transfer learning) and a mean absolute error of 8.5% (ranges: 17.2–21.0% and 8.5–15.5%, respectively). Transfer learning generally performed better. In conclusion, a deep-learning CNN accurately stages disease severity on portable CXRs of COVID-19 lung infection. This approach may prove useful to stage lung disease severity, prognosticate, and predict treatment response and survival, thereby informing risk management and resource allocation.
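The abstract does not specify the network architecture, framework, or hyperparameters, so the following is only a minimal sketch of the kind of transfer-learning regression pipeline it describes, assuming a TensorFlow/Keras workflow, an ImageNet-pretrained ResNet50 backbone, and hypothetical arrays of preprocessed CXR images and radiologist severity scores (the file names below are placeholders, not from the paper). It illustrates the 80%/20% split, a pretrained CNN with a small regression head predicting the two severity scores, and evaluation of agreement with the radiologist scores via R² and mean absolute error.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error
import tensorflow as tf

# Hypothetical inputs: `images` is an (N, 224, 224, 3) array of preprocessed
# portable CXRs; `scores` is an (N, 2) array of radiologist-assigned
# [opacity (0-6), geographic extent (0-8)] scores summed over both lungs.
images = np.load("cxr_images.npy")   # placeholder path
scores = np.load("cxr_scores.npy")   # placeholder path

# 80% training / 20% testing split, as described in the abstract.
x_train, x_test, y_train, y_test = train_test_split(
    images, scores, test_size=0.2, random_state=42)

# Transfer learning: ImageNet-pretrained backbone with a small regression head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained weights; optionally unfreeze later

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(2)(x)  # predicted [opacity, extent] scores
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mae")
model.fit(x_train, y_train, validation_split=0.1, epochs=20, batch_size=8)

# Evaluate agreement with radiologist scores on the held-out 20%.
y_pred = model.predict(x_test)
print("R^2:", r2_score(y_test, y_pred))
print("MAE:", mean_absolute_error(y_test, y_pred))

Under these assumptions, the "traditional learning" comparison in the abstract would correspond to training the same network from random initialization (weights=None) rather than from ImageNet weights.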
