Blockchain for explainable and trustworthy artificial intelligence
Author(s) -
Mohamed Nassar,
Khaled Salah,
Muhammad Habib ur Rehman,
Davor Svetinovic
Publication year - 2019
Publication title -
Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.506
H-Index - 47
eISSN - 1942-4795
pISSN - 1942-4787
DOI - 10.1002/widm.1340
Subject(s) - computer science , adversarial system , key (lock) , artificial intelligence , trustworthiness , big data , implementation , data science , applications of artificial intelligence , machine learning , computer security , data mining , software engineering
The increasing computational power and the proliferation of big data are empowering Artificial Intelligence (AI) to achieve massive adoption and applicability in many fields. The lack of explanation for the decisions made by today's AI algorithms is a major drawback in critical decision-making systems. For example, deep learning does not offer control over, or reasoning about, its internal processes or outputs. More importantly, current black-box AI implementations are subject to bias and adversarial attacks that may poison the learning or inference processes. Explainable AI (XAI) is an emerging class of AI algorithms that provide explanations for their decisions. In this paper, we propose a framework for achieving more trustworthy and explainable AI by leveraging features of blockchain, smart contracts, trusted oracles, and decentralized storage. We specify a framework for complex AI systems in which decision outcomes are reached through decentralized consensus among multiple AI and XAI predictors. The paper discusses how the proposed framework can be utilized in key application areas with practical use cases.
This article is categorized under:
Technologies > Machine Learning
Technologies > Computer Architectures for Data Mining
Fundamental Concepts of Data and Knowledge > Key Design Issues in Data Mining
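The abstract's central idea, reaching a decision by consensus among multiple AI and XAI predictors, can be illustrated with a minimal local sketch. This is an assumption-laden toy, not the paper's implementation: the predictor outputs, the explanation strings, and the simple-majority rule are all hypothetical stand-ins, and the actual framework reaches consensus on-chain via smart contracts and trusted oracles rather than in a single process.

```python
from collections import Counter
from typing import List, Sequence, Tuple

# A prediction pairs a label with a human-readable explanation, standing
# in for the output of one AI/XAI predictor. Both fields are illustrative.
Prediction = Tuple[str, str]  # (label, explanation)

def consensus(predictions: Sequence[Prediction]) -> Tuple[str, int, List[str]]:
    """Return the majority label, its vote count, and the explanations
    offered by the predictors that voted for it.

    Simple majority is an assumed consensus rule; the paper's framework
    would enforce this kind of aggregation through a smart contract.
    """
    labels = [label for label, _ in predictions]
    winner, votes = Counter(labels).most_common(1)[0]
    supporting = [expl for label, expl in predictions if label == winner]
    return winner, votes, supporting

# Three hypothetical predictors voting on a loan decision:
preds = [
    ("approve", "low risk score"),
    ("approve", "stable income features dominate"),
    ("deny", "recent default flagged"),
]
label, votes, reasons = consensus(preds)
print(label, votes)  # approve 2
```

In the paper's setting, each predictor's label and explanation would be submitted via a trusted oracle and recorded on decentralized storage, so that the vote tally and its supporting explanations remain auditable after the fact.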