Open Access
The Uncertainty of Counterfactuals in Deep Learning
Author(s) - Katherine Brown, Doug Talbert, Steve Talbert
Publication year - 2021
Publication title - Proceedings of the ... International Florida Artificial Intelligence Research Society Conference
Language(s) - English
Resource type - Journals
eISSN - 2334-0762
pISSN - 2334-0754
DOI - 10.32473/flairs.v34i1.128795
Subject(s) - counterfactual conditional, counterfactual thinking, artificial intelligence, computer science, artificial neural network, machine learning, autoencoder, deep learning, econometrics, epistemology, mathematics, philosophy
Counterfactuals have become a useful tool for explainable Artificial Intelligence (XAI). A counterfactual applies perturbations to a data instance so that a machine learning model yields an alternate classification. Several algorithms have been designed to generate counterfactuals for deep neural networks; however, despite their growing use in mission-critical fields, there has been no investigation to date into the epistemic uncertainty of generated counterfactuals. This could result in risk-prone explanations being used in these fields. In this work, we use several data sets to compare the epistemic uncertainty of original instances to that of the counterfactuals generated from those instances. As part of our analysis, we also measure the extent to which those counterfactuals can be considered anomalies in their data sets. We find that counterfactual uncertainty is higher on three of the four data sets tested. Moreover, our experiments suggest a possible connection between the reconstruction error of a deep autoencoder and the difference in epistemic uncertainty between a deep neural network's training data and counterfactuals generated from that training data.
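The two measurements the abstract compares lend themselves to a short illustration. Below is a minimal sketch, not the authors' implementation: it uses predictive entropy under Monte Carlo dropout as a stand-in for the epistemic uncertainty of a deep classifier, and the per-instance reconstruction error of a deep autoencoder as the anomaly score. The architectures, dropout rate, 50-sample budget, and toy data are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): compare the epistemic
# uncertainty of an instance and its counterfactual via MC dropout, and score
# anomalousness via autoencoder reconstruction error.
import torch
import torch.nn as nn

class DropoutClassifier(nn.Module):
    """Toy classifier; the dropout layers enable Monte Carlo sampling."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def epistemic_uncertainty(model: nn.Module, x: torch.Tensor,
                          n_samples: int = 50) -> torch.Tensor:
    """Predictive entropy under MC dropout: keep dropout active at inference
    and average the softmax output over n_samples stochastic forward passes."""
    model.train()  # leaves dropout sampling on
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

class Autoencoder(nn.Module):
    """Toy deep autoencoder; high reconstruction error flags anomalies."""
    def __init__(self, n_features: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, 4))
        self.decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def reconstruction_error(ae: Autoencoder, x: torch.Tensor) -> torch.Tensor:
    """Per-instance mean squared reconstruction error."""
    ae.eval()
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=-1)

# Usage sketch on toy data: an original instance and a stand-in counterfactual.
x = torch.randn(1, 10)
x_cf = x + 0.5 * torch.randn(1, 10)  # placeholder for a generated counterfactual
clf, ae = DropoutClassifier(10, 2), Autoencoder(10)
print(epistemic_uncertainty(clf, x), epistemic_uncertainty(clf, x_cf))
print(reconstruction_error(ae, x), reconstruction_error(ae, x_cf))
```

One design note on the sketch: predictive entropy mixes aleatoric and epistemic components; the mutual-information (BALD) form of MC dropout would isolate the epistemic part, but the entropy form keeps the example short.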
