Open Access
Artificial intelligence explainability: the technical and ethical dimensions
Author(s) - John McDermid, Yan Jia, Zoë Porter, Ibrahim Habli
Publication year - 2021
Publication title - Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.074
H-Index - 169
eISSN - 1471-2962
pISSN - 1364-503X
DOI - 10.1098/rsta.2020.0363
Subject(s) - artificial intelligence , explainability , accountability , engineering ethics , stakeholder analysis , computer science , knowledge management
In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as ‘AI explainability’ or ‘XAI’ methods. This paper presents an overview of XAI methods, and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that use of XAI methods must be linked to explanations of human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue ‘Towards symbiotic autonomous systems’.
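The paper itself surveys XAI methods rather than prescribing any single one. Purely as a concrete illustration of the kind of technique it discusses, the sketch below applies permutation feature importance, a widely used model-agnostic explainability method, using scikit-learn. The dataset, model choice and hyperparameters here are illustrative assumptions, not drawn from the paper.

```python
# Illustrative sketch (not from the paper): permutation feature importance,
# a model-agnostic XAI technique that estimates how much each input feature
# contributes to a model's predictive performance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Assumed example dataset and model, chosen only for demonstration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features with their mean importance.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda item: item[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

An output like this can serve different stakeholder purposes, in the sense the paper analyses: a designer might use it to debug the model, while a regulator might use it to check that the model does not rely on impermissible features.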
