Open Access
TOWARDS USER-CENTRIC EXPLANATIONS FOR EXPLAINABLE MODELS: A REVIEW
Author(s) -
Hassan Ali,
Riza Sulaiman,
Mansoor Abdullateef Abdulgabber,
Hasan Kahtan
Publication year - 2021
Publication title -
Journal of Information System and Technology Management
Language(s) - English
Resource type - Journals
ISSN - 0128-1666
DOI - 10.35631/jistm.622004
Subject(s) - transparency , computer science , artificial intelligence , data science , knowledge management , computer security
Recent advances in artificial intelligence (AI), particularly in machine learning (ML), have produced highly successful models with encouraging results across diverse applications. Despite this promise, without transparency into how ML models reach their outputs, stakeholders find it difficult to trust those outputs, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study reviews the literature on human-centric machine learning and new approaches to user-centric explanations for deep learning models, and highlights the challenges and opportunities facing this area of research. The goal is for the review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of deploying machine learning models is gaining the trust of end users.