Encouraging Discriminative Attention Through Contrastive Explainability Learning for Lung Cancer Diagnosis
Author(s) - V Shravya, Meghana Sunil, B Natarajan, R Elakkiya
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Journal
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3616056
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Lung cancer diagnosis from CT scans is critical for early detection, but existing deep learning methods often lack interpretability, especially in highlighting medically relevant regions. Most approaches optimize only for prediction accuracy, leaving model explanations unstructured and inconsistent across samples. We introduce Contrastive Explainability Learning (CEL), a novel training approach that aligns Grad-CAM heatmaps across class-consistent samples while enforcing dissimilarity across different classes. Unlike prior methods, CEL integrates explanation supervision directly into the loss function, enabling interpretable representation learning without sacrificing accuracy. Using only a lightweight, spatially attended CNN, our model achieves strong performance (99.2% accuracy, 99.5% F1 score) on the IQ-OTH/NCCD dataset and generalizes robustly (93.0% accuracy) to the more complex HF Lung Cancer dataset with multiple cancer subtypes. Statistical analysis across multiple trials confirms that these improvements are significant (p < 0.01). Comprehensive comparisons with alternative XAI methods show that CEL produces more consistent, discriminative explanations with minimal computational overhead. Experiments show that our contrastive saliency framework guides the CNN to focus on class-specific anatomical regions, improving both transparency and diagnostic trust while maintaining efficiency suitable for clinical deployment.
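The abstract describes CEL's core mechanism: Grad-CAM heatmaps are computed during training, and a contrastive term in the loss pulls same-class heatmaps together while pushing different-class heatmaps apart. The sketch below is a minimal, hypothetical PyTorch rendering of that idea as read from the abstract alone; the paper's actual architecture, loss weighting, and hyperparameters are not stated here, so every name and value (ActHook, cel_loss, the 0.1 weight, the 0.5 margin, the toy CNN) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ActHook:
    """Forward hook retaining the activations of one conv layer."""
    def __init__(self, layer):
        self.acts = None
        layer.register_forward_hook(lambda m, i, o: setattr(self, "acts", o))

def gradcam_maps(logits, labels, acts):
    # Differentiable Grad-CAM: create_graph=True keeps the saliency
    # computation inside the autograd graph, so the explanation loss
    # can backpropagate into the network weights.
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, acts, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)        # GAP of gradients
    cam = F.relu((weights * acts).sum(dim=1)).flatten(1)  # (B, H*W)
    return cam / (cam.norm(dim=1, keepdim=True) + 1e-8)   # unit-norm heatmaps

def cel_loss(cams, labels, margin=0.5):
    # Pull same-class heatmaps together (cosine similarity toward 1);
    # push different-class heatmap similarity below the margin.
    sim = cams @ cams.t()
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    off_diag = 1 - torch.eye(len(labels), device=cams.device)
    pos = ((1 - sim) * same * off_diag).sum() / (same * off_diag).sum().clamp(min=1)
    neg = (F.relu(sim - margin) * (1 - same)).sum() / (1 - same).sum().clamp(min=1)
    return pos + neg

# Toy joint objective: cross-entropy plus a weighted CEL term.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # index 2: hooked conv layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3),
)
hook = ActHook(model[2])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 1, 64, 64)   # stand-in for a batch of CT slices
y = torch.randint(0, 3, (8,))   # e.g. normal / benign / malignant
logits = model(x)
loss = F.cross_entropy(logits, y) + 0.1 * cel_loss(gradcam_maps(logits, y, hook.acts), y)
opt.zero_grad()
loss.backward()
opt.step()

The key detail in this reading is create_graph=True: it keeps the Grad-CAM computation differentiable, which is what lets explanation supervision act directly on the weights alongside the classification loss rather than as a post-hoc visualization.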