LeSegGAN: A Hybrid Attention-Based GAN for Accurate Lesion Segmentation in Dermatological Images
Author(s) -
Mithun Kumar Kar,
Vipin Venugopal,
B N Anoop
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3621107
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Accurate segmentation of skin lesions from dermatological images is essential for the early detection of melanoma and other skin cancers. Conventional methods based on convolutional neural networks (CNNs) and transformer architectures often struggle to capture both local and global contextual features, to delineate irregular lesion boundaries, and to remain robust against artifacts such as hair, shadows, and illumination variations. To overcome these challenges, we introduce LeSegGAN, a hybrid attention-enhanced generative adversarial network (GAN) framework for robust skin lesion segmentation. The generator combines convolutional and inception modules with residual connections and channel attention to extract multi-scale features, while a vision transformer (ViT)-based discriminator improves segmentation accuracy through adversarial learning. A composite loss function integrating weighted binary cross-entropy, Dice, and focal losses further addresses class imbalance and enhances performance. LeSegGAN is evaluated on four benchmark datasets: Waterloo skin cancer, MED-NODE, SD-260, and ISIC-2016. On these datasets, LeSegGAN consistently outperforms five state-of-the-art deep learning models (UNet, UNet++, SegNet, FCN, and DTP-Net), achieving accuracies of 0.9943, 0.9759, 0.9873, and 0.9724, respectively, with corresponding IoU scores of 0.9451, 0.9664, 0.8709, and 0.7717. These results highlight LeSegGAN's strong generalization ability and robustness, demonstrating its potential for integration into computer-aided diagnostic systems for automated skin cancer detection.
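The abstract names a composite loss combining weighted binary cross-entropy, Dice, and focal terms to handle the class imbalance typical of lesion masks (lesion pixels are far rarer than background). The paper's exact weights and hyperparameters are not given here, so the following is a minimal NumPy sketch under assumed defaults: the positive-class weight `pos_weight`, the focal exponent `gamma`, and the equal term weights `w_bce`, `w_dice`, `w_focal` are all illustrative choices, not values from the paper.

```python
import numpy as np

def weighted_bce(pred, target, pos_weight=2.0, eps=1e-7):
    """Weighted binary cross-entropy: up-weights the rarer lesion (positive) pixels.
    pos_weight=2.0 is an assumed value, not taken from the paper."""
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(pos_weight * target * np.log(pred)
                    + (1 - target) * np.log(1 - pred))

def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient: penalizes poor region overlap directly."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy, well-classified pixels so training
    focuses on hard ones (e.g. ambiguous lesion boundaries)."""
    pred = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, pred, 1 - pred)  # prob. of the true class
    return -np.mean((1 - pt) ** gamma * np.log(pt))

def composite_loss(pred, target, w_bce=1.0, w_dice=1.0, w_focal=1.0):
    """Weighted sum of the three terms; equal weights are an assumption."""
    return (w_bce * weighted_bce(pred, target)
            + w_dice * dice_loss(pred, target)
            + w_focal * focal_loss(pred, target))
```

A segmentation that matches the ground-truth mask closely scores a much lower composite loss than one that inverts it, which is the gradient signal the generator trains against (in the actual model this would be implemented in an autodiff framework alongside the adversarial term).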
