A Computational Framework for Implementation of Neural Networks on Multi-Core Machines
Author(s) -
Wenduo Wang,
Yi Lu Murphey,
Paul Watta
Publication year - 2015
Publication title -
Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2015.07.282
Subject(s) - computer science , artificial neural network , backpropagation , reusability , flexibility (engineering) , artificial intelligence , generalization , multi core processor , abstraction , machine learning , algorithm , software , parallel computing , programming language , mathematical analysis , philosophy , statistics , mathematics , epistemology
This paper presents a computational framework, the Generic Programmable Neural Network (GPNN), for efficient implementation of back-propagation-based neural learning algorithms on multi-core machines. GPNN has three components: parallelization of neural learning, abstraction of network components, and compile-time generalization. Together, these components make GPNN an efficient framework for fast implementation of back-propagation-based neural learning algorithms, and they provide flexibility and reusability when modifying neural network topologies. GPNN was applied to four neural learning algorithms: classic back-propagation (BP), quick propagation (QP), resilient propagation (RP), and the Levenberg-Marquardt (LM) algorithm. Experiments were conducted to evaluate the effectiveness of GPNN, and the results show that the neural learning algorithms implemented in GPNN are more efficient than the corresponding functions provided by MATLAB.
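The abstract does not give implementation details of GPNN's parallelization component. As a rough illustration of the kind of multi-core gradient work it describes, the sketch below splits per-sample gradient accumulation for a single linear layer across CPU threads and then applies a classic back-propagation (gradient-descent) update. It is written in C++ with std::thread; the toy data, learning rate, and all identifiers are illustrative assumptions, not the paper's GPNN API.

// Minimal sketch (not the paper's GPNN API): per-sample gradient work for one
// linear layer is split across CPU threads, then a classic back-propagation
// (gradient-descent) weight update is applied. All names and values are illustrative.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

int main() {
    const std::size_t n_samples = 1000, n_features = 8;
    const double lr = 0.5;
    const int n_epochs = 500;

    // Toy regression data: target y = sum of the input features.
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<std::vector<double>> X(n_samples, std::vector<double>(n_features));
    std::vector<double> y(n_samples);
    for (std::size_t i = 0; i < n_samples; ++i) {
        for (std::size_t j = 0; j < n_features; ++j) X[i][j] = uni(rng);
        y[i] = std::accumulate(X[i].begin(), X[i].end(), 0.0);
    }

    std::vector<double> w(n_features, 0.0);
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());

    for (int epoch = 0; epoch < n_epochs; ++epoch) {
        // One partial-gradient buffer per thread avoids locking during accumulation.
        std::vector<std::vector<double>> partial(
            n_threads, std::vector<double>(n_features, 0.0));
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < n_threads; ++t) {
            workers.emplace_back([&, t] {
                for (std::size_t i = t; i < n_samples; i += n_threads) {
                    double pred = std::inner_product(w.begin(), w.end(),
                                                     X[i].begin(), 0.0);
                    double err = pred - y[i];            // dLoss/dpred for squared error
                    for (std::size_t j = 0; j < n_features; ++j)
                        partial[t][j] += err * X[i][j];  // accumulate per-sample gradient
                }
            });
        }
        for (auto& th : workers) th.join();

        // Reduce the thread-local gradients and take one gradient-descent step.
        for (std::size_t j = 0; j < n_features; ++j) {
            double g = 0.0;
            for (unsigned t = 0; t < n_threads; ++t) g += partial[t][j];
            w[j] -= lr * g / static_cast<double>(n_samples);
        }
    }

    std::cout << "learned weights (true value for each is 1.0):";
    for (double wj : w) std::cout << ' ' << wj;
    std::cout << '\n';
    return 0;
}

The thread-local gradient buffers followed by a serial reduction are one common way to avoid lock contention in multi-core back-propagation; the paper's actual design may differ.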