
Efficient spiking neural network training and inference with reduced precision memory and computing
Author(s) -
Wang Yi,
Shahbazi Karim,
Zhang Hao,
Oh KwangIl,
Lee JaeJin,
Ko SeokBum
Publication year - 2019
Publication title -
IET Computers & Digital Techniques
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.219
H-Index - 46
ISSN - 1751-861X
DOI - 10.1049/iet-cdt.2019.0115
Subject(s) - floating point , computer science , computer hardware , mnist database , fixed point arithmetic , spiking neural network , integer (computer science) , memory footprint , algorithm , artificial neural network , parallel computing , artificial intelligence , programming language , operating system
In this study, reduced-precision operations are investigated to improve the speed and energy efficiency of SNN implementation. Instead of the 32-bit single-precision floating-point format, small floating-point and fixed-point formats are used to represent SNN parameters and to perform SNN operations. The analyses cover both training and inference of a leaky integrate-and-fire (LIF) model-based SNN that is trained to classify the handwritten digits in the MNIST database. The results show that for SNN inference, a floating-point format with a 4-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer bits and 7 fraction bits, can be used without any accuracy degradation. For training, a floating-point format with a 5-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer bits and 10 fraction bits, preserves full accuracy. The proposed reduced-precision formats can be used in SNN hardware accelerator design, with the choice between floating point and fixed point determined by design requirements. A case study of an SNN implementation on a field-programmable gate array (FPGA) device is performed. With reduced-precision numerical formats, memory footprint, computing speed, and resource utilisation are improved; as a result, the energy efficiency of the SNN implementation is also improved.
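The quantisation behaviour the abstract describes can be sketched in software. The snippet below is a minimal illustration, not the authors' implementation: it rounds values to a signed fixed-point format (e.g. the Q6.7 inference format or Q6.10 training format) and to a small floating-point format with a configurable exponent/mantissa split, then applies the fixed-point version inside a simple LIF membrane update. The threshold, leak factor, and reset-to-zero rule are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def to_fixed_point(x, int_bits=6, frac_bits=7):
    """Quantise x to a signed fixed-point format (default Q6.7, the
    paper's lossless inference format): round to the nearest multiple
    of 2**-frac_bits and saturate to the representable range."""
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** int_bits - 2.0 ** (-frac_bits)
    min_val = -(2.0 ** int_bits)
    return np.clip(np.round(np.asarray(x, dtype=np.float64) * scale) / scale,
                   min_val, max_val)

def to_small_float(x, exp_bits=4, man_bits=3):
    """Quantise x to a small float with exp_bits exponent and man_bits
    mantissa bits (4e3m is the paper's inference format). Sketch only:
    rounds the mantissa and clamps the exponent; subnormals and
    overflow handling are simplified."""
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    mag = np.abs(x[nz])
    bias = 2 ** (exp_bits - 1) - 1
    e = np.clip(np.floor(np.log2(mag)), 1 - bias, bias)   # exponent range
    m = np.round(mag / 2.0 ** e * 2 ** man_bits) / 2 ** man_bits
    out[nz] = np.sign(x[nz]) * m * 2.0 ** e
    return out

def lif_step(v, i_in, v_th=1.0, leak=0.9, int_bits=6, frac_bits=10):
    """One leaky integrate-and-fire step with the state kept in the
    Q6.10 training fixed-point format. v_th, leak, and the reset rule
    are illustrative choices, not values from the paper."""
    q = lambda z: to_fixed_point(z, int_bits, frac_bits)
    v = q(q(leak) * v + i_in)                 # quantised leaky integration
    spikes = (v >= v_th).astype(np.float64)   # fire where threshold crossed
    v = v * (1.0 - spikes)                    # reset fired neurons to zero
    return v, spikes
```

In a real accelerator the rounding and saturation above would be implicit in the datapath width; simulating them in floating point like this is a common way to predict the accuracy impact of a candidate format before committing it to hardware.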