Human Knowledge in Constructing AI Systems — Neural Logic Networks Approach towards an Explainable AI
Author(s) - Liya Ding
Publication year - 2018
Publication title - Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2018.08.129
Subject(s) - computer science , connectionism , artificial intelligence , fuzzy logic , artificial neural network , black box , interpretation (philosophy) , domain (mathematical analysis) , criticism , domain knowledge , programming language , art , mathematical analysis , mathematics , literature
To build easy-to-use AI and machine learning systems, it is crucial to gain users' trust. Trust comes from understanding the reasoning behind an AI system's conclusions and results. Recent research efforts on Explainable AI (XAI) reflect the importance of explainability in responding to the criticism of "black box" AI. Neural Logic Networks (NLN) are a research effort to embed logic reasoning (binary or fuzzy) into connectionist models while taking humans' domain knowledge into consideration. The reasoning carried out on such network structures allows interpretation beyond binary logic. This article discusses the potential contribution of the NLN approach to making reasoning more explainable.
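The abstract describes embedding logic reasoning, binary or fuzzy, into a connectionist structure so that a rule from human domain knowledge remains readable in the network. A minimal sketch of this idea is shown below; the particular operators chosen (product t-norm for AND, probabilistic sum for OR) and the example rule are illustrative assumptions, not details taken from the paper.

```python
# Sketch of fuzzy-logic "neurons": each node computes a logic connective
# over truth values in [0, 1], so the network structure mirrors the rule.

def fuzzy_and(values):
    """Fuzzy conjunction via the product t-norm (assumed choice)."""
    result = 1.0
    for v in values:
        result *= v
    return result

def fuzzy_or(values):
    """Fuzzy disjunction via the probabilistic sum t-conorm (assumed choice)."""
    result = 0.0
    for v in values:
        result = result + v - result * v
    return result

def fuzzy_not(v):
    """Fuzzy negation via the standard complement."""
    return 1.0 - v

# A hypothetical domain rule encoded as a tiny network:
#   risk = (high_fever AND cough) OR (NOT vaccinated)
def risk(high_fever, cough, vaccinated):
    return fuzzy_or([fuzzy_and([high_fever, cough]), fuzzy_not(vaccinated)])

print(risk(0.9, 0.8, 1.0))  # graded truth value, traceable back to the rule
```

Because each node is a named connective rather than an opaque weighted sum, an output can be explained by walking the rule structure; at crisp inputs (0 or 1) the network reduces to ordinary Boolean logic.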