Open Access
A Computational Framework for Implementation of Neural Networks on Multi-Core Machine
Author(s) - Wenduo Wang, Yi Lu Murphey, Paul Watta
Publication year - 2015
Publication title - Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2015.07.282
Subject(s) - computer science , artificial neural network , backpropagation , reusability , flexibility (engineering) , artificial intelligence , generalization , multi core processor , abstraction , machine learning , algorithm , software , parallel computing , programming language , mathematical analysis , philosophy , statistics , mathematics , epistemology
This paper presents a computational framework, the Generic Programmable Neural Network (GPNN), for efficient implementation of back-propagation-based neural learning algorithms on multi-core machines. GPNN has three components: parallelization of neural learning, abstraction of network components, and compile-time generalization. Together, these components make GPNN an efficient framework for fast implementation of back-propagation-based neural learning algorithms, and they provide flexibility and reusability for modifying neural network topologies. GPNN was applied to four neural learning algorithms: classic back-propagation (BP), quick propagation (QP), resilient propagation (RP), and the Levenberg-Marquardt (LM) algorithm. Experiments were conducted to evaluate the effectiveness of GPNN, and the results show that the neural learning algorithms implemented in GPNN are more efficient than their respective functions provided by MATLAB.
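To make the classic BP variant concrete, the following is a minimal sketch of back-propagation training on a tiny 2-2-1 sigmoid network, learning XOR. It is an illustration of the standard algorithm only, not the paper's GPNN framework or its parallelized implementation; the network size, learning rate, and epoch count are assumptions chosen for the example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# Weights for a 2-input, 2-hidden, 1-output network (illustrative sizes).
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5  # learning rate (assumed, not from the paper)

def forward(x):
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j])
         for j in range(2)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)
    return h, o

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = total_loss()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Output delta: derivative of squared error through the sigmoid.
        d_o = (o - t) * o * (1 - o)
        # Hidden deltas: errors propagated back through w_ho.
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent weight updates.
        for j in range(2):
            w_ho[j] -= lr * d_o * h[j]
            for i in range(2):
                w_ih[j][i] -= lr * d_h[j] * x[i]
            b_h[j] -= lr * d_h[j]
        b_o -= lr * d_o

print(total_loss() < initial_loss)  # training reduces the squared error
```

The QP, RP, and LM variants evaluated in the paper replace the plain gradient-descent update step above with their own weight-update rules, which is the kind of modification the paper's abstraction layer is designed to make easy.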
