Deep learning‐based detection and stage grading for optimising diagnosis of diabetic retinopathy
Author(s) - Wang Yuelin, Yu Miao, Hu Bojie, Jin Xuemin, Li Yibin, Zhang Xiao, Zhang Yongpeng, Gong Di, Wu Chan, Zhang Bilei, Yang Jingyuan, Li Bing, Yuan Mingzhen, Mo Bin, Wei Qijie, Zhao Jianchun, Ding Dayong, Yang Jingyun, Li Xirong, Yu Weihong, Chen Youxin
Publication year - 2021
Publication title - Diabetes/Metabolism Research and Reviews
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.307
H-Index - 110
eISSN - 1520-7560
pISSN - 1520-7552
DOI - 10.1002/dmrr.3445
Subject(s) - medicine, cotton wool spots, diabetic retinopathy, grading (engineering), fundus (uterus), receiver operating characteristic, stage (stratigraphy), lesion, ophthalmology, artificial intelligence, test set, optometry, radiology, surgery, diabetes mellitus, computer science, paleontology, civil engineering, biology, engineering, endocrinology
Abstract -
Aims: To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning‐based lesion detection and stage grading.
Materials and Methods: A set of 12,252 eligible fundus images of diabetic patients was manually annotated by 45 licenced ophthalmologists and randomly split into training, validation and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images served as an external test set. For automated referable DR identification, four deep learning models were developed, differing in whether two factors were included: DR‐related lesions and DR stages. Sensitivity, specificity and the area under the receiver operating characteristic curve (AUC) were reported for referable DR identification, while precision and recall were reported for lesion detection.
Results: Adding lesion information to the five‐stage grading model improved the AUC (0.943 vs. 0.938), sensitivity (90.6% vs. 90.5%) and specificity (80.7% vs. 78.5%) for identifying referable DR in the internal test set. Adding stage information to the lesion‐based model increased the AUC (0.943 vs. 0.936) and sensitivity (90.6% vs. 76.7%) for identifying referable DR in the internal test set. Similar trends were seen in the external test set. DR lesion types detected with high precision were preretinal haemorrhage, hard exudate, vitreous haemorrhage, neovascularisation, cotton wool spots and fibrous proliferation.
Conclusions: The automated model described herein used DR lesion and stage information to identify referable DR and showed better diagnostic value than models built without this information.
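The evaluation protocol summarised in the abstract (a random 7:1:2 split into training, validation and internal test sets, and threshold-based sensitivity, specificity and AUC for referable DR) can be illustrated with a minimal sketch. This is not the authors' code: the function names, the fixed seed and the 0.5 decision threshold are assumptions, and it presumes binary referable-DR labels and model probability scores are already available.

```python
# Minimal sketch (assumptions, not the published pipeline): 7:1:2 data split
# and referable-DR metrics as reported in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

SEED = 42  # hypothetical fixed seed for a reproducible random split


def split_7_1_2(image_ids, labels):
    """Randomly split into training, validation and internal test sets (7:1:2)."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        image_ids, labels, test_size=0.3, random_state=SEED, stratify=labels)
    # Two thirds of the remaining 30% -> 20% test, leaving 10% for validation.
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=2 / 3, random_state=SEED, stratify=y_rest)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)


def referable_dr_metrics(y_true, y_score, threshold=0.5):
    """AUC, sensitivity and specificity for referable-DR identification.

    y_true:  1 = referable DR (moderate nonproliferative DR or worse), 0 = not.
    y_score: model probability of referable DR per image.
    """
    auc = roc_auc_score(y_true, y_score)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # proportion of referable eyes flagged
    specificity = tn / (tn + fp)  # proportion of non-referable eyes cleared
    return auc, sensitivity, specificity
```

For the lesion-detection branch, the abstract reports precision and recall per lesion type; with the same confusion-matrix counts these follow as precision = tp / (tp + fp) and recall = tp / (tp + fn).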