Open Access
SLRP: Improved heatmap generation via selective layer‐wise relevance propagation
Author(s) - Jung YeonJee, Han SeungHo, Choi HoJin
Publication year - 2021
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12061
Subject(s) - softmax function , relevance (law) , computer science , artificial intelligence , discriminative model , deep learning , pattern recognition (psychology) , layer (electronics) , machine learning , backpropagation , artificial neural network , materials science , political science , law , composite material
Deep learning has recently been applied to various areas of artificial intelligence, where it has displayed excellent performance. However, many deep‐learning models are black boxes, which makes it difficult to interpret the models and understand their predictions. Explainability is crucial for critical real‐world systems in fields such as defense, aerospace, and security. To address this problem, the concept of explainable artificial intelligence has emerged. For image classification, various approaches have been proposed to visually explain a model's prediction. A typical approach is layer‐wise relevance propagation, which generates a heatmap in which each pixel value represents that pixel's contribution to the model's prediction. However, even advanced versions of layer‐wise relevance propagation, such as contrastive layer‐wise relevance propagation and softmax‐gradient layer‐wise relevance propagation, have limitations. Here, selective layer‐wise relevance propagation is proposed; it generates a clearer heatmap than existing methods by combining relevance‐based and gradient‐based methods. To evaluate the proposed method and verify its effectiveness, we conduct comparative experiments. Qualitative and quantitative results show that selective layer‐wise relevance propagation produces less noisy, class‐discriminative, and object‐preserving results. The proposed method can be used to improve the explainability of deep‐learning models in image classification.
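As background for the abstract's description of layer‐wise relevance propagation, the sketch below illustrates the commonly used epsilon rule for redistributing relevance through a single fully connected layer. It is a minimal NumPy illustration of the general LRP principle only, not the authors' SLRP implementation, and all names in it (lrp_epsilon_dense, a, W, b, R_out, eps) are assumptions introduced for the example.

# Minimal sketch (illustrative, not the paper's code): the LRP epsilon rule
# for one dense layer. Relevance assigned to the layer's outputs is pushed
# back onto its inputs in proportion to each input's contribution
# z_ij = a_i * W_ij to the pre-activation z_j.
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    z = a @ W + b                                  # pre-activations, shape (n_out,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # epsilon stabiliser avoids division by ~0
    s = R_out / z                                  # per-output relevance per unit of activation
    c = W @ s                                      # redistribute through the weights
    return a * c                                   # input relevance, shape (n_in,)

# Toy usage: propagate the predicted class's relevance through one layer.
rng = np.random.default_rng(0)
a = rng.random(4)                                  # input activations
W = rng.normal(size=(4, 3))                        # layer weights
b = np.zeros(3)
R_out = np.array([0.0, 1.0, 0.0])                  # relevance placed on the predicted class
R_in = lrp_epsilon_dense(a, W, b, R_out)
print(R_in, R_in.sum())                            # relevance is (approximately) conserved

Applying such a rule layer by layer, from the output back to the input, yields a per-pixel relevance map (the heatmap the abstract refers to); SLRP, as described above, additionally combines relevance-based and gradient-based signals to make that map cleaner and more class-discriminative.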
