Open Access
Automatic geographic atrophy segmentation using optical attenuation in OCT scans with deep learning
Author(s) -
Zhongdi Chu,
Liang Wang,
Xiao Hua Zhou,
Yingying Shi,
Yuxuan Cheng,
Rita Laiginhas,
Hao Zhou,
Mengxi Shen,
Qinqin Zhang,
Luís de Sisternes,
Aaron Lee,
Giovanni Gregori,
Philip J. Rosenfeld,
Ke Wang
Publication year - 2022
Publication title -
Biomedical Optics Express
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.362
H-Index - 86
ISSN - 2156-7085
DOI - 10.1364/boe.449314
Subject(s) - optical coherence tomography, artificial intelligence, deep learning, segmentation, computer science, nuclear medicine, geographic atrophy, attenuation, correlation, pattern recognition (psychology), ophthalmology, medicine, macular degeneration, mathematics, optics, physics, geometry
A deep learning algorithm was developed to automatically identify, segment, and quantify geographic atrophy (GA) based on optical attenuation coefficients (OACs) calculated from optical coherence tomography (OCT) datasets. Normal eyes and eyes with GA secondary to age-related macular degeneration were imaged with swept-source OCT using 6 × 6 mm scanning patterns. OACs calculated from OCT scans were used to generate customized composite en face OAC images. GA lesions were identified and measured using customized en face sub-retinal pigment epithelium (subRPE) OCT images. Two deep learning models with the same U-Net architecture were trained using OAC images and subRPE OCT images. Model performance was evaluated using Dice similarity coefficients (DSCs). The GA areas were calculated and compared with manual segmentations using Pearson's correlation and Bland-Altman plots. In total, 80 GA eyes and 60 normal eyes were included in this study, of which 16 GA eyes and 12 normal eyes were used to test the models. Both models identified GA with 100% sensitivity and specificity at the subject level. For the GA eyes, the model trained with OAC images achieved significantly higher DSCs, a stronger correlation with the manual results, and a smaller mean bias than the model trained with subRPE OCT images (0.940 ± 0.032 vs 0.889 ± 0.056, p = 0.03, paired t-test; r = 0.995 vs r = 0.959; mean bias = 0.011 mm² vs mean bias = 0.117 mm²). In summary, the proposed deep learning model using composite OAC images effectively and accurately identified, segmented, and quantified GA using OCT scans.
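The evaluation metrics named in the abstract (the Dice similarity coefficient for segmentation overlap, and the mean bias from a Bland-Altman comparison of areas) can be sketched as follows. This is a minimal illustration with toy masks, not the authors' code; the function names and example arrays are assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Define DSC = 1 when both masks are empty (perfect agreement).
    return 2.0 * float(intersection) / total if total > 0 else 1.0

def bland_altman_bias(measured: np.ndarray, reference: np.ndarray) -> float:
    """Mean bias: the mean of paired differences, i.e. the horizontal
    center line of a Bland-Altman plot."""
    return float(np.mean(np.asarray(measured) - np.asarray(reference)))

# Toy example: a hypothetical predicted GA mask vs. a manual mask.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]])

dsc = dice_coefficient(pred, truth)   # 2*3 / (4+3) ≈ 0.857
bias = bland_altman_bias([1.2, 2.4], [1.0, 2.0])  # mean of (0.2, 0.4) = 0.3
```

In practice, per-eye GA areas would be obtained by multiplying each mask's pixel count by the en face pixel area (in mm²) before computing the Pearson correlation and Bland-Altman statistics.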
