Open Access
An Introduction on Interpretable Machine Learning
Author(s) -
Neel Shah,
Sheetal Jeshwani
Publication year - 2020
Publication title -
International Journal of Innovative Technology and Exploring Engineering
Language(s) - English
Resource type - Journals
ISSN - 2278-3075
DOI - 10.35940/ijitee.g1023.0597s20
Subject(s) - interpretability , artificial intelligence , machine learning , computer science , interpretation (philosophy) , feature (linguistics) , philosophy , linguistics , programming language
As Artificial Intelligence penetrates all aspects of human life, questions about ethical practices and fair use increasingly arise, motivating the research community to look inside these systems and develop methods to interpret Artificial Intelligence/Machine Learning models. Interpretability can not only help answer these ethical questions but also provide insights into the inner workings of machine learning models, which is crucial for building trust and understanding how a model makes its decisions. Furthermore, in many machine learning applications, interpretability is the primary value they offer. In practice, however, many developers select models based on accuracy alone and disregard a model's level of interpretability, which can be problematic because the predictions of many high-accuracy models are not easily explainable. In this paper, we introduce the concepts of machine learning model interpretability and interpretable machine learning, and the methods used for interpretation and explanation.
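To make the idea of a model-agnostic interpretation method concrete, the sketch below implements permutation feature importance, one widely used technique of the kind the abstract alludes to. This example is illustrative only and is not taken from the paper; the synthetic data, the linear model, and all function names are assumptions made for the demonstration.

```python
# Illustrative sketch (not from the paper): permutation feature importance,
# a simple model-agnostic interpretation method. A feature is "important"
# if shuffling its values noticeably degrades the model's predictions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a plain linear model by least squares (stand-in for any black box).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X_eval, y_eval):
    """Mean squared error of the fitted model on the given data."""
    return float(np.mean((X_eval @ coef - y_eval) ** 2))

baseline = mse(X, y)

def permutation_importance(X_eval, y_eval, feature, n_repeats=10):
    """Average increase in error when one feature's column is shuffled."""
    increases = []
    for _ in range(n_repeats):
        X_perm = X_eval.copy()
        X_perm[:, feature] = rng.permutation(X_perm[:, feature])
        increases.append(mse(X_perm, y_eval) - baseline)
    return float(np.mean(increases))

importances = [permutation_importance(X, y, j) for j in range(3)]
for j, imp in enumerate(importances):
    print(f"feature {j}: importance {imp:.3f}")
```

On this synthetic data the ranking recovers the true structure: the strongly used feature scores highest, the weak one scores low, and the irrelevant one near zero. The same procedure applies unchanged to any predictive model, which is what makes it useful for interpreting otherwise opaque, high-accuracy models.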
