Open Access
Slightly-slacked dropout for improving neural network learning on FPGA
Author(s) - Sota Sawaguchi, Hiroaki Nishi
Publication year - 2018
Publication title - ICT Express
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.733
H-Index - 22
ISSN - 2405-9595
DOI - 10.1016/j.icte.2018.04.006
Subject(s) - dropout (neural networks), field programmable gate array, artificial neural network, computer science, artificial intelligence, machine learning, embedded system
Neural Network Learning (NNL) is compute-intensive. It often involves dropout, a technique that effectively regularizes the network to avoid overfitting. As such, a hardware accelerator for dropout NNL has been proposed; however, the existing method incurs a substantial data-transfer cost between hardware and software. This paper proposes Slightly-Slacked Dropout (SS-Dropout), a novel deterministic dropout technique that addresses the transfer cost while accelerating the process. Experimental results show that SS-Dropout improves on both the standard and the dropout NNL accelerators, yielding a 1.55x speed-up and a three-order-of-magnitude reduction in transfer cost, respectively.
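
The abstract summarizes SS-Dropout without detailing how its deterministic masks are built, so as background only, below is a minimal NumPy sketch of the standard (random) inverted-dropout baseline that such accelerators implement; all function and variable names are illustrative assumptions, not taken from the paper. In a software/hardware split, the per-iteration random mask is the kind of data that moves between host and FPGA, which is presumably the transfer cost a deterministic scheme can reduce.

import numpy as np

def dropout_forward(x, p_drop=0.5, train=True, rng=None):
    # Standard inverted dropout (illustrative; not the paper's SS-Dropout).
    # During training, each activation is zeroed with probability p_drop and
    # the survivors are scaled by 1/(1 - p_drop), so no rescaling is needed
    # at inference time.
    if not train or p_drop == 0.0:
        return x, None
    rng = rng or np.random.default_rng()
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask, mask

def dropout_backward(dout, mask):
    # Gradients flow only through the units kept in the forward pass.
    return dout if mask is None else dout * mask

# Usage: apply dropout to a batch of hidden activations during training.
h = np.random.default_rng(0).standard_normal((4, 8))
h_train, mask = dropout_forward(h, p_drop=0.5, train=True)
h_eval, _ = dropout_forward(h, train=False)  # identity at inference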
