Open Access
Transformer-Based DME Classification Using Retinal OCT Images Without Data Augmentation: An Evaluation of ViT-B16 and ViT-B32 with Optimizer Impact
Author(s) -
K C Pavithra,
Preetham Kumar,
M Geetha,
Sulatha V Bhandary,
K B Ajitha Shenoy,
Guruprasad Rao,
Steven Fernandes,
Akshat Tulsani
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3620945
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Diabetic macular edema (DME) remains the most prevalent cause of impaired vision in people with diabetes. To assess DME, ophthalmologists routinely use optical coherence tomography (OCT), a retinal imaging modality. Alongside clinical assessment, computerized diagnosis based on deep learning (DL) and OCT has emerged as a vital tool. The main limitation of DL is that model training requires a large amount of data, and most medical datasets are too small to train DL models effectively. Classical data augmentation often fails to deliver the anticipated gains, so transfer learning (TL) is an appropriate strategy for dealing with this issue. Without using any augmentation strategies, we investigate the effectiveness of Vision Transformer (ViT) models in classifying DME OCT images. Two ViT variants, ViT-B16 and ViT-B32, were fine-tuned on a public and a private dataset using three optimization algorithms: Adam, SGD, and RMSProp. We report the statistical measures accuracy (AC), recall (RE), and precision (PR). Additionally, gradient-weighted class activation mapping (Grad-CAM) heatmaps are employed to illustrate the model's predictions, providing insight into its decision-making process. The findings show that ViT-B16 consistently outperformed ViT-B32 on both datasets; the Adam optimizer produced the best recall (with a highest score of 100%), while in certain cases RMSProp delivered the highest precision. For statistical rigor we used 5-fold cross-validation, and we also compared ViT-B16 to CNN baselines (ResNet-50, ResNet-101, and EfficientNet-B3); the ViTs consistently outperformed the CNN baselines, albeit at a greater computational cost. Our findings indicate that OCT image classification performance can be improved by pairing finer-resolution transformer models with suitable optimization strategies.
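
The sketch below illustrates the kind of transfer-learning setup the abstract describes: an ImageNet-pretrained ViT-B16 fine-tuned on OCT images with no augmentation (resize and normalize only), with the three compared optimizers selectable. It is a minimal sketch assuming PyTorch and the timm library; the dataset path, class count, learning rate, batch size, and epoch count are illustrative assumptions, not values from the paper.

```python
# Minimal ViT fine-tuning sketch (assumptions: timm, hypothetical dataset path,
# illustrative hyperparameters). Swap "vit_base_patch16_224" for
# "vit_base_patch32_224" to obtain the ViT-B32 variant.
import timm
import torch
from torch import nn
from torchvision import datasets, transforms

NUM_CLASSES = 2  # assumption: DME vs. normal
IMG_SIZE = 224

# No augmentation: only resizing and ImageNet normalization.
preprocess = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("oct_dataset/train", transform=preprocess)  # hypothetical path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# ImageNet-pretrained ViT-B16 with a fresh classification head (transfer learning).
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES)

def make_optimizer(name, params, lr=1e-4):
    """The three optimizers compared in the paper."""
    if name == "adam":
        return torch.optim.Adam(params, lr=lr)
    if name == "sgd":
        return torch.optim.SGD(params, lr=lr, momentum=0.9)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=lr)
    raise ValueError(name)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = make_optimizer("adam", model.parameters())
criterion = nn.CrossEntropyLoss()

for epoch in range(10):  # epoch count is an assumption
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```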
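The evaluation protocol (5-fold cross-validation reporting AC, RE, and PR) could look like the following sketch, assuming scikit-learn and reusing `train_ds` from the fine-tuning sketch above; the `predict` helper is hypothetical and stands in for per-fold training and inference, which the abstract does not detail.

```python
# Hedged sketch of 5-fold cross-validation with the accuracy (AC),
# recall (RE), and precision (PR) measures named in the abstract.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, recall_score, precision_score

paths = np.array([p for p, _ in train_ds.samples])
labels = np.array([l for _, l in train_ds.samples])

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_metrics = []
for fold, (tr_idx, va_idx) in enumerate(skf.split(paths, labels)):
    # ... fine-tune a fresh ViT-B16 on paths[tr_idx] (see sketch above) ...
    y_true = labels[va_idx]
    y_pred = predict(paths[va_idx])  # `predict` is a hypothetical helper
    fold_metrics.append((
        accuracy_score(y_true, y_pred),
        recall_score(y_true, y_pred),
        precision_score(y_true, y_pred),
    ))
print("mean AC/RE/PR over 5 folds:", np.mean(fold_metrics, axis=0))
```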
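For the Grad-CAM heatmaps, one possible implementation is sketched below using the third-party pytorch-grad-cam package (not named in the paper). Because a ViT emits a sequence of patch tokens rather than a spatial feature map, the tokens must be reshaped back into a 14x14 grid before the CAM is computed; targeting the final block's first LayerNorm is a common choice for ViTs. `model` and `preprocess` come from the fine-tuning sketch, and the file path and target class index are assumptions.

```python
# Grad-CAM sketch for the fine-tuned ViT (assumption: pytorch-grad-cam package).
import numpy as np
import torch
from PIL import Image
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

def vit_reshape_transform(tensor, height=14, width=14):
    # Drop the class token, then reshape (B, 196, C) -> (B, C, 14, 14).
    result = tensor[:, 1:, :].reshape(tensor.size(0), height, width, tensor.size(2))
    return result.permute(0, 3, 1, 2)

target_layers = [model.blocks[-1].norm1]  # common target layer for timm ViTs
cam = GradCAM(model=model, target_layers=target_layers,
              reshape_transform=vit_reshape_transform)

image = Image.open("oct_dataset/test/dme/sample.png").convert("RGB")  # hypothetical path
input_tensor = preprocess(image).unsqueeze(0)
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(1)])[0]  # class 1 = DME (assumed)

rgb = np.float32(image.resize((224, 224))) / 255.0
overlay = show_cam_on_image(rgb, grayscale_cam, use_rgb=True)  # heatmap over the OCT scan
```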
