You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods
Author(s) - Botty Dimanov, Umang Bhatt, Mateja Jamnik, Adrian Weller
Publication year - 2020
Language(s) - English
DOI - 10.3233/faia200380
Transparency of algorithmic systems has been discussed as a way for end-users and regulators to develop appropriate trust in machine learning models. One popular approach, LIME [26], even suggests that model explanations can answer the question “Why should I trust you?” Here we show a straightforward method for modifying a pre-trained model to manipulate the output of many popular feature importance explanation methods with little change in accuracy, thus demonstrating the danger of trusting such explanation methods. We show how this explanation attack can mask a model’s discriminatory use of a sensitive feature, raising strong concerns about using such explanation methods to check model fairness.
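To make the kind of attack described above concrete, here is a minimal sketch of one way such a manipulation could be implemented. This is an illustrative assumption, not necessarily the authors' exact procedure: it fine-tunes a pre-trained PyTorch classifier with an added penalty on the input gradient of a chosen sensitive feature, so that gradient-based feature-importance methods report near-zero attribution for that feature while the task loss keeps accuracy largely intact. The model, data loader, sensitive_idx, and alpha below are hypothetical placeholders.

# Sketch (assumption): fine-tune a pre-trained classifier so gradient-based
# explainers assign near-zero importance to a chosen "sensitive" input feature,
# while the cross-entropy term preserves predictive accuracy.
import torch
import torch.nn as nn

def attack_fine_tune(model, loader, sensitive_idx, alpha=10.0, epochs=5, lr=1e-4):
    # `model` maps a 2D tabular batch (N, d) to class logits; `sensitive_idx`
    # is the column of the feature whose apparent importance we suppress.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x = x.clone().requires_grad_(True)
            logits = model(x)
            task_loss = ce(logits, y)
            # Gradient of the task loss w.r.t. the inputs: this is what
            # gradient-based feature-importance methods pick up on.
            input_grad, = torch.autograd.grad(task_loss, x, create_graph=True)
            explanation_penalty = input_grad[:, sensitive_idx].abs().mean()
            loss = task_loss + alpha * explanation_penalty
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

After such fine-tuning, one would check on held-out data that test accuracy remains close to the original model's while the reported importance of the sensitive feature drops sharply; that combination is exactly why the paper argues these explanation methods cannot, on their own, be trusted as fairness checks.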
