
Interpretable Neural Network Construction: From Neural Network to Interpretable Neural Tree
Author(s) - Xuming Ouyang, Cunguang Feng
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1550/3/032154
Subject(s) - interpretability, artificial neural network, computer science, artificial intelligence, tree (set theory), time delay neural network, nervous system network models, stochastic neural network, function (biology), transformation (genetics), machine learning, types of artificial neural networks, mathematics, mathematical analysis, biochemistry, chemistry, evolutionary biology, gene, biology
Neural networks have achieved outstanding results in many fields, but compared with traditional machine learning models they have poor interpretability, which greatly limits their practical application. Many researchers have therefore tried to combine neural networks with traditional models to improve interpretability, but these methods either degrade performance or are computationally intensive. In this paper, we propose to transform a neural network into an interpretable neural tree. In the interpretable neural tree, each node contains a transformation function and a routing function. Each transformation function corresponds to a layer in the neural network and controls the data transformation, while the routing function controls the direction of data flow through the tree structure. Our experiments indicate that the interpretable neural tree makes the neural network interpretable to some extent while maintaining its performance.
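To illustrate the node structure described in the abstract, below is a minimal sketch, not the authors' actual construction. It assumes the transformation function at each node is a single dense layer (standing in for one layer of the source network) and the routing function is a sigmoid gate that softly splits traffic between two children; the class `NeuralTreeNode` and all dimensions are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class NeuralTreeNode(nn.Module):
    """One node of a hypothetical interpretable neural tree.

    Each node applies a transformation function (mirroring one layer of
    the original network) and, at internal nodes, a routing function that
    decides how much of the signal flows to the left versus right child.
    """

    def __init__(self, in_dim, out_dim, depth, max_depth):
        super().__init__()
        # Transformation function: corresponds to one layer of the network.
        self.transform = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.is_leaf = depth >= max_depth
        if not self.is_leaf:
            # Routing function: probability of sending the sample left.
            self.router = nn.Sequential(nn.Linear(out_dim, 1), nn.Sigmoid())
            self.left = NeuralTreeNode(out_dim, out_dim, depth + 1, max_depth)
            self.right = NeuralTreeNode(out_dim, out_dim, depth + 1, max_depth)

    def forward(self, x):
        h = self.transform(x)
        if self.is_leaf:
            return h
        p_left = self.router(h)  # routing probability, shape (batch, 1)
        # Soft routing keeps the tree differentiable; inspecting p_left
        # along a sample's path is what gives the structure interpretability.
        return p_left * self.left(h) + (1 - p_left) * self.right(h)

# Example usage: a depth-2 tree followed by a small classification head.
tree = NeuralTreeNode(in_dim=16, out_dim=32, depth=0, max_depth=2)
head = nn.Linear(32, 3)
logits = head(tree(torch.randn(8, 16)))
print(logits.shape)  # torch.Size([8, 3])
```

Soft (probabilistic) routing is one common way such trees are kept trainable end to end; a hard arg-max routing at inference time would recover a conventional, path-by-path interpretable decision tree.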