Bayesian-AIME: Quantifying Uncertainty and Enhancing Stability in Approximate Inverse Model Explanations
Author(s) - Takafumi Nakanishi
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3617984
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Explainable artificial intelligence (XAI), which provides the rationale behind machine learning decisions, is essential for deploying AI in high-risk domains. However, most XAI methods provide only single-point explanations without indicating their reliability and are often unstable, i.e., small changes in training data can lead to significantly different explanations. This study addresses these limitations by proposing a Bayesian-approximate inverse model explanation method, an extension of AIME that incorporates a Bayesian framework. By modeling the inverse operator of a black-box model probabilistically, Bayesian-AIME estimates feature importance as a posterior distribution. This enables the assignment of 95% credible intervals (CIs) to feature importance scores, facilitating quantitative assessment of explanation reliability. Using the posterior mean also improves explanation stability. The effectiveness of the method was evaluated via three experiments: (1) stability analysis via bootstrapping, (2) validation of CIs on synthetic data with known importance, and (3) case studies on Titanic survival prediction and breast cancer diagnosis. The results showed that Bayesian-AIME outperformed existing methods in terms of stability and conveyed meaningful uncertainty, thus enhancing model interpretability. This study contributes to strengthening the reliability of XAI for practical applications.
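The core idea in the abstract, treating feature-importance scores as a posterior distribution so that each score carries a 95% credible interval and the posterior mean serves as a stabilized point estimate, can be illustrated with a minimal sketch. The sketch below uses a closed-form Bayesian linear regression posterior as a stand-in; the function name `bayesian_linear_posterior` and all hyperparameter values are illustrative assumptions, not the paper's AIME inverse-operator formulation.

```python
import numpy as np

def bayesian_linear_posterior(X, y, alpha=1.0, sigma2=1.0):
    """Closed-form posterior for Bayesian linear regression.

    Illustrative stand-in for a probabilistic inverse operator:
    prior w ~ N(0, alpha^-1 I), Gaussian noise with variance sigma2.
    Returns the posterior mean and covariance of the weights.
    """
    d = X.shape[1]
    # Posterior precision combines the prior and the data term.
    precision = alpha * np.eye(d) + (X.T @ X) / sigma2
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / sigma2
    return mean, cov

# Synthetic data with known importance: feature 1 is irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, 0.0, -1.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

mean, cov = bayesian_linear_posterior(X, y, alpha=1.0, sigma2=0.01)
std = np.sqrt(np.diag(cov))
# 95% credible interval per feature-importance score.
ci_lo, ci_hi = mean - 1.96 * std, mean + 1.96 * std
```

Here the posterior mean plays the role of the stabilized importance estimate, and wide intervals flag scores that should not be trusted as single points.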