
A Detectability Analysis of Retinitis Pigmentosa Using Novel SE-ResNet Based Deep Learning Model and Color Fundus Images
Author(s) -
Rubina Rashid,
Waqar Aslam,
Arif Mehmood,
Debora Libertad Ramirez Vargas,
Isabel De La Torre Diez,
Imran Ashraf
Publication year - 2024
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2024.3367977
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Retinitis pigmentosa (RP) is a group of genetic retinal disorders characterized by progressive vision loss culminating in blindness. Identifying the pigment signs (PS) linked with RP is crucial for monitoring, and possibly slowing, the disease's degenerative course. However, segmenting and detecting PS is challenging because PS are difficult to distinguish from blood vessels and vary in size, shape, and color. Recently, advances in deep learning techniques have shown impressive results in medical image analysis, especially in ophthalmology. This study presents an approach for classifying pigment signs in color fundus images of RP patients using a modified squeeze-and-excitation ResNet (SE-ResNet) architecture. This variant combines the efficiency of residual skip connections with the attention mechanism of the SE block to strengthen feature representation. The SE-ResNet model was fine-tuned to determine the layer configuration that best balances performance metrics and computational cost. We trained the proposed model on the RIPS dataset, which comprises images from patients diagnosed at various RP stages. Experimental results confirm the efficacy of the proposed model in classifying the different types of pigment signs associated with RP. On the testing set, the model achieved an accuracy, sensitivity, specificity, and F-measure of 99.16%, 97.70%, 96.93%, and 90.47% on GT1, and 99.37%, 97.80%, 97.44%, and 90.60% on GT2, respectively. Given its performance, this model is an excellent candidate for integration into computer-aided diagnostic systems for RP, aiming to enhance patient care and vision-related healthcare services.
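The abstract's central mechanism, the squeeze-and-excitation (SE) block, can be summarized as: squeeze each channel to a scalar by global average pooling, pass the scalars through a small bottleneck (FC, ReLU, FC, sigmoid) to obtain per-channel gates, then rescale each channel by its gate. The paper does not publish code, so the sketch below is an illustrative, dependency-free rendering of that idea under assumed shapes; the function name `squeeze_excite` and the weight arguments `w1`/`w2` are hypothetical and stand in for learned parameters.

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Apply a squeeze-and-excitation gate to per-channel feature maps.

    feature_maps: list of C channels, each a 2D list (H x W) of floats.
    w1: reduction weights, shape (C//r) x C (r = reduction ratio).
    w2: expansion weights, shape C x (C//r).
    Returns channel-recalibrated feature maps of the same shape.
    """
    # Squeeze: global average pooling collapses each H x W map to one scalar.
    z = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0])) for fm in feature_maps]
    # Excitation: bottleneck FC -> ReLU, then FC -> sigmoid, yields one gate per channel.
    hidden = [max(0.0, sum(w * zc for w, zc in zip(row, z))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden)))) for row in w2]
    # Scale: reweight every value in a channel by that channel's gate.
    return [[[v * g for v in row] for row in fm] for fm, g in zip(feature_maps, gates)]
```

In the paper's variant, the output of such a block is added back through a residual skip connection, so the SE gates modulate which channels the residual branch emphasizes rather than replacing the identity path.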