Visualization analysis of feed forward neural network input contribution
Author(s) -
Alsakran Jamal,
Rodan Ali,
Alhindawi Nouh,
Faris Hossam
Publication year - 2014
Publication title -
Scientific Research and Essays
Language(s) - English
Resource type - Journals
ISSN - 1992-2248
DOI - 10.5897/sre2014.5895
Subject(s) - interpretability , visualization , computer science , artificial neural network , sensitivity (control systems) , process (computing) , artificial intelligence , pruning , machine learning , data mining , mathematics , engineering
The complexity of a domain problem can slow or even hinder the learning process of neural networks. Such obstacles are difficult to overcome because neural networks, as reported in the literature, lack interpretability of their internal structures. In this paper, we present a visualization approach capable of enhancing the understanding of neural networks. Our approach visualizes input and weight contributions, supports sensitivity analysis, and provides guidance in pruning less influential features, thereby reducing the complexity of the domain problem while maintaining acceptable error rates. We conduct experiments on various datasets to show the effectiveness of our approach.
Key words: Neural network, visualization, input contribution, sensitivity analysis
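The idea of measuring input contributions through sensitivity analysis can be illustrated with a minimal sketch. The following is not the authors' implementation; it assumes a small feed-forward network with random placeholder weights (in practice these would come from a trained model) and estimates each input's influence on the output by finite-difference perturbation, so that the least influential features become candidates for pruning:

```python
import numpy as np

# Minimal sketch (not the paper's method): perturbation-based input
# sensitivity for a feed-forward network. Weights are random placeholders;
# a real analysis would use the weights of a trained network.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 6))   # input -> hidden weights (4 inputs, 6 hidden units)
W2 = rng.normal(size=(6, 1))   # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)        # hidden-layer activations
    return np.tanh(h @ W2)     # scalar network output

def input_sensitivity(x, eps=1e-4):
    """Finite-difference magnitude of the output change per input feature."""
    base = forward(x)
    sens = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        xp = x.copy()
        xp[i] += eps
        sens[i] = np.abs((forward(xp) - base) / eps).item()
    return sens

x = np.array([0.5, -1.0, 0.2, 0.8])
s = input_sensitivity(x)
ranking = np.argsort(s)[::-1]  # most influential input feature first
print(ranking, s)
```

Features at the tail of `ranking` contribute least to the output and, in the spirit of the approach described in the abstract, could be pruned while monitoring the resulting error rate.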
Accelerating Research
John Eccles House, Robert Robinson Avenue,
Oxford Science Park, Oxford
OX4 4GP, United Kingdom