Visual diagnostics of an explainer model: Tools for the assessment of LIME explanations
Author(s) - Katherine Goode, Heike Hofmann
Publication year - 2021
Publication title - Statistical Analysis and Data Mining: The ASA Data Science Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 33
eISSN - 1932-1872
pISSN - 1932-1864
DOI - 10.1002/sam.11500
Subject(s) - black box , LIME , matching (statistics) , computer science , artificial intelligence , machine learning , data mining , statistics
Abstract The importance of providing explanations for predictions made by black‐box models has led to the development of explainer model methods such as LIME (local interpretable model‐agnostic explanations). LIME uses a surrogate model to explain the relationship between predictor variables and predictions from a black‐box model in a local region around a prediction of interest. However, the quality of the resulting explanations relies on how well the explainer model captures the black‐box model in the specified local region. Here we introduce three visual diagnostics to assess the quality of LIME explanations: (1) explanation scatterplots, (2) assessment metric plots, and (3) feature heatmaps. We apply the visual diagnostics to a forensic bullet matching dataset and show examples where the LIME explanations depend on the tuning parameter values and where the explainer model oversimplifies the black‐box model. Our examples raise concerns about the claims made for LIME, concerns that align with other criticisms in the literature.
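
The abstract describes LIME fitting a local surrogate model around a prediction of interest, with explanation quality depending on tuning parameter values such as the width of the locality kernel. As a minimal illustrative sketch only (not the authors' tooling, which targets a black-box bullet matching model), the Python example below generates a LIME explanation for a generic classifier using the open-source lime package; the iris data, the random forest, and the kernel_width value are assumptions chosen purely to keep the example self-contained.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Stand-in black-box model (assumption: any classifier with predict_proba works here)
data = load_iris()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME surrogate explainer; kernel_width is one of the tuning parameters
# that the abstract says the resulting explanations can depend on
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
    kernel_width=3.0,  # assumed value; the lime default is 0.75 * sqrt(n_features)
)

# Explain the black-box prediction for one instance of interest
instance = X[0]
pred = int(black_box.predict(instance.reshape(1, -1))[0])
exp = explainer.explain_instance(
    instance, black_box.predict_proba, labels=(pred,), num_features=4
)

print(exp.as_list(label=pred))   # feature-weight pairs from the local surrogate
print("local surrogate R^2:", exp.score)  # rough fidelity check of the local fit

Rerunning this with different kernel_width values and comparing the feature weights and the local R^2 gives a crude, non-visual version of the sensitivity the paper's diagnostics are designed to expose.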
