Open Access
Improving Anchor-based Explanations
Author(s) - Julien Delaunay, Luis Galárraga, Christine Largouët
Publication year - 2020
Publication title - HAL (Centre pour la Communication Scientifique Directe)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/3340531.3417461
Subject(s) - computer science , focus (optics) , fidelity , quality (philosophy) , simple (philosophy) , selection (genetic algorithm) , discretization , artificial intelligence , machine learning , epistemology , mathematics , telecommunications , mathematical analysis , philosophy , physics , optics
Rule-based explanations are a popular method to understand the rationale behind the answers of complex machine learning (ML) classifiers. Recent approaches, such as Anchors, focus on local explanations based on if-then rules that hold in the vicinity of a target instance. This has proved effective at producing faithful explanations, yet anchor-based explanations are not free of limitations. These include long, overly specific rules as well as explanations of low fidelity. This work presents two simple methods that can mitigate such issues on tabular and textual data. The first approach proposes a careful selection of the discretization method for numerical attributes in tabular datasets. The second applies the notion of pertinent negatives to explanations on textual data. Our experimental evaluation shows the positive impact of such methods on the quality of anchor-based explanations.
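To illustrate why the choice of discretization matters for rule predicates, the following is a hedged NumPy sketch (not the authors' implementation; the skewed income data and the number of bins are invented for illustration). On skewed data, equal-width bins crowd most instances into one bin, producing broad, uninformative predicates, whereas quantile bins keep bins balanced and predicates tighter:

```python
import numpy as np

# Hypothetical skewed numerical feature (e.g., income), invented for illustration.
rng = np.random.default_rng(0)
incomes = rng.exponential(scale=30_000, size=1000)

# Equal-width bins over the observed range: on skewed data most values
# fall into the first bin, so a rule predicate like "income <= 52000"
# covers nearly everything and discriminates little.
uniform_edges = np.linspace(incomes.min(), incomes.max(), 5)
u_codes = np.digitize(incomes, uniform_edges[1:-1])

# Quantile bins: each bin holds roughly the same number of instances,
# yielding tighter predicates such as "q1 < income <= median".
quantile_edges = np.quantile(incomes, [0.25, 0.5, 0.75])
q_codes = np.digitize(incomes, quantile_edges)

counts_uniform = np.bincount(u_codes, minlength=4)
counts_quantile = np.bincount(q_codes, minlength=4)
```

Here `counts_uniform` is heavily concentrated in the first bin, while `counts_quantile` is near-uniform; anchor rules built over the quantile codes can therefore be both shorter and more precise.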
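The second idea, pertinent negatives, explains a prediction partly by what is absent: a term not in the document whose addition would flip the classifier's decision. A minimal toy sketch of that notion (the classifier, document, and vocabulary below are invented stand-ins; the paper applies the idea to real black-box text classifiers):

```python
def classify(tokens):
    # Toy stand-in for a black-box sentiment classifier.
    return "positive" if "great" in tokens or "excellent" in tokens else "negative"

def pertinent_negatives(tokens, candidates):
    # A pertinent negative is a candidate word absent from the document
    # whose insertion changes the classifier's prediction.
    base = classify(tokens)
    return [w for w in candidates
            if w not in tokens and classify(tokens + [w]) != base]

doc = ["the", "plot", "was", "slow"]
vocab = ["great", "boring", "excellent", "slow"]
print(pertinent_negatives(doc, vocab))  # → ['great', 'excellent']
```

The explanation can then be phrased as "classified negative because 'great' and 'excellent' are absent", complementing the anchor rule over the words that are present.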
