A Parallelization Method for Neural Network Learning
Author(s) - Tsuchida Yuta, Yoshioka Michifumi
Publication year - 2015
Publication title - Electrical Engineering in Japan
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.136
H-Index - 28
eISSN - 1520-6416
pISSN - 0424-7760
DOI - 10.1002/eej.22694
Subject(s) - computer science , graphics , graphics processing unit , general purpose computing on graphics processing units , artificial neural network , parallel computing , signal processing , simple (philosophy) , cuda , parallelism (grammar) , parallel processing , image processing , central processing unit , electronic circuit , computer engineering , computational science , artificial intelligence , computer hardware , computer graphics (images) , digital signal processing , image (mathematics) , philosophy , epistemology , electrical engineering , engineering
SUMMARY Recently, General-Purpose computing on Graphics Processing Units (GPGPU), which uses the GPU not only for graphics processing but also for general-purpose computation, has been investigated because GPUs developed for 3DCG and video processing offer higher performance than CPUs. Because the GPU contains dedicated circuits for drawing graphics, it is characterized by a large number of simple arithmetic circuits. This characteristic is promising not only for graphics processing but also for massively parallel computation. In this research, we apply this technology to neural network learning, a form of intelligent signal processing. In previous work, we proposed three methods for speeding up neural network learning; one of them, the parallelization of pattern processing, left room for improvement. In this paper, we report that the weight coefficients of the neurons can be updated simultaneously by changing the order of the pattern calculations. The proposed calculation method is evaluated on test data sets, and the results confirm that it converges similarly to the conventional method. We also propose an optimal implementation method for the GPU. The proposed method is found to be three to six times faster than the conventional method.
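The abstract describes computing the contributions of many training patterns simultaneously rather than updating the weights pattern by pattern. The paper's exact GPU algorithm is not given here, so the following is only a minimal NumPy sketch of the general pattern-parallel idea: the forward pass and gradient terms for all patterns are evaluated at once with matrix operations (the kind of work a GPU parallelizes), and the weight update is applied once over the whole batch. The network shape, data, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 patterns, 3 inputs, 2 outputs (illustrative assumption).
X = rng.normal(size=(8, 3))
W_true = rng.normal(size=(3, 2))
Y = np.tanh(X @ W_true)

W = np.zeros((3, 2))   # single tanh layer, for illustration only
lr = 0.5
for _ in range(2000):
    out = np.tanh(X @ W)                # forward pass for ALL patterns at once
    err = out - Y                       # per-pattern output error
    grad = X.T @ (err * (1 - out**2))   # gradient summed over all patterns
    W -= lr * grad / len(X)             # one simultaneous weight update

mse = float(np.mean((np.tanh(X @ W) - Y) ** 2))
```

On a GPU the same matrix products would be dispatched to many simple arithmetic units, which is the characteristic of graphics hardware the abstract highlights; this sketch only shows the reordering of the computation, not the paper's optimized implementation.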
