Open Access
Optimization of Energy Efficiency for FPGA-Based Convolutional Neural Networks Accelerator
Author(s) -
Yongming Tang,
Rongshi Dai,
Yi Xie
Publication year - 2020
Publication title -
journal of physics. conference series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1487/1/012028
Subject(s) - flops, field programmable gate array, computer science, convolutional neural network, efficient energy use, parallel computing, energy (signal processing), embedded system, computer hardware, artificial intelligence, engineering, mathematics, electrical engineering, statistics
Convolutional neural networks (CNNs) are widely applied to image recognition and achieve high recognition accuracy. CNNs are commonly implemented on general-purpose processors and can be accelerated on FPGAs. CNNs have a distinctive computation pattern, but general-purpose processors execute it inefficiently and cannot meet energy efficiency requirements, and previous FPGA studies did not address energy-efficient implementation. We propose energy efficiency models and implement a high-energy-efficiency CNN on FPGA. We implemented the LeNet-5 network model on the GENESYS 2 board and compared it with traditional processors and previous studies. The computing throughputs of the CPU, GPU and FPGA are 3.831 GFLOPS, 27.143 GFLOPS and 19.61 GFLOPS respectively, and their power consumptions are 32.15 W, 52 W and 4.152 W respectively. The resulting energy efficiencies (GFLOPS/W) are 0.119 GFLOPS/W, 0.522 GFLOPS/W and 4.723 GFLOPS/W, so the energy efficiency of the FPGA is far superior to that of the CPU and GPU. The energy efficiency we achieve on FPGA is also higher than that reported in FPL2009 and FPGA2015, demonstrating strong experimental results in energy efficiency.
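The energy efficiency metric above is simply throughput divided by power (GFLOPS/W). As a minimal sketch, the Python snippet below reproduces the reported ratios from the throughput and power values quoted in the abstract; the platform labels and dictionary layout are illustrative assumptions, not part of the authors' implementation.

```python
# Energy efficiency = throughput / power, in GFLOPS per watt.
# Throughput and power values are taken from the abstract;
# the structure and labels here are for illustration only.
platforms = {
    "CPU":  {"throughput_gflops": 3.831,  "power_w": 32.15},
    "GPU":  {"throughput_gflops": 27.143, "power_w": 52.0},
    "FPGA": {"throughput_gflops": 19.61,  "power_w": 4.152},
}

for name, stats in platforms.items():
    efficiency = stats["throughput_gflops"] / stats["power_w"]
    print(f"{name}: {efficiency:.3f} GFLOPS/W")

# Expected output (matching the abstract's figures):
# CPU: 0.119 GFLOPS/W
# GPU: 0.522 GFLOPS/W
# FPGA: 4.723 GFLOPS/W
```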
