
Merging Information From Infrared and Autofluorescence Fundus Images for Monitoring of Chorioretinal Atrophic Lesions
Author(s) -
Giovanni Ometto,
Giovanni Montesano,
Saman Sadeghi Afgeh,
Georgios Lazaridis,
Xiaoxuan Liu,
Pearse A. Keane,
David P. Crabb,
Alastair K. Denniston
Publication year - 2020
Publication title -
Translational Vision Science & Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.508
H-Index - 21
ISSN - 2164-2591
DOI - 10.1167/tvst.9.9.38
Subject(s) - artificial intelligence , segmentation , sørensen–dice coefficient , fundus (eye) , computer science , pixel , computer vision , autofluorescence , image segmentation , ophthalmology , pattern recognition (psychology) , medicine , optics , physics , fluorescence
Purpose: To develop a method for automated detection and progression analysis of chorioretinal atrophic lesions using the combined information of standard infrared (IR) and autofluorescence (AF) fundus images.

Methods: Eighteen eyes (from 16 subjects) with punctate inner choroidopathy were analyzed. Macular IR and blue AF images were acquired in all eyes with a Spectralis HRA+OCT device (Heidelberg Engineering, Heidelberg, Germany). Two clinical experts manually segmented chorioretinal lesions on the AF images. AF images were aligned to the corresponding IR images. Two random forest models were trained to classify pixels as lesion or non-lesion: one based on the AF image only, the other on the aligned IR-AF pair. The models were validated using leave-one-out cross-validation and tested against the manual segmentations to compare their performance. A time series from one eye was used to evaluate the IR-AF-based method in a case study.

Results: The method based on the AF images alone correctly classified 95% of pixels (i.e., inside vs. outside the lesion) with a Dice coefficient of 0.80. The method based on the combined IR-AF correctly classified 96% of pixels with a Dice coefficient of 0.84.

Conclusions: Automated segmentation of chorioretinal lesions using combined IR and AF information agrees more closely with manual segmentation than the same method based on AF alone. Merging information from multimodal images improves automatic, objective segmentation of chorioretinal lesions even when based on a small dataset.

Translational Relevance: Merged information from multimodal images improves segmentation performance for chorioretinal lesions.
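The evaluation pipeline described in the abstract (per-pixel classification, leave-one-out cross-validation across eyes, and scoring with pixel accuracy and the Sørensen–Dice coefficient) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are hypothetical, and a simple global intensity-threshold classifier stands in for the paper's random forest models so the sketch stays self-contained.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Sørensen–Dice coefficient between two binary lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def pixel_accuracy(pred, truth):
    """Fraction of pixels classified correctly (in vs. out of the lesion)."""
    return float(np.mean(pred.astype(bool) == truth.astype(bool)))

def leave_one_out_dice(eyes):
    """Leave-one-out cross-validation over a list of (features, mask) pairs,
    one pair per eye. A threshold on the feature image, fitted on the
    training eyes, stands in for the random forest classifier."""
    scores = []
    for i, (feat_test, mask_test) in enumerate(eyes):
        train = [e for j, e in enumerate(eyes) if j != i]
        # Fit: pick the intensity threshold maximising mean training Dice.
        best_t, best_d = 0.0, -1.0
        for t in np.linspace(0.0, 1.0, 21):
            d = np.mean([dice_coefficient(f > t, m) for f, m in train])
            if d > best_d:
                best_t, best_d = t, d
        # Test on the held-out eye.
        scores.append(dice_coefficient(feat_test > best_t, mask_test))
    return float(np.mean(scores))
```

In the paper's setting, the feature image for each pixel would carry AF intensity alone or the aligned IR-AF pair, and the threshold rule would be replaced by a random forest trained on those per-pixel features; the cross-validation and scoring structure is unchanged.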