Open Access
A Review of FPGA‐Based Custom Computing Architecture for Convolutional Neural Network Inference
Author(s) -
Xiyuan Peng,
Jinxiang Yu,
Bowen Yao,
Liansheng Liu,
Yu Peng
Publication year - 2021
Publication title -
Chinese Journal of Electronics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.267
H-Index - 25
eISSN - 2075-5597
pISSN - 1022-4653
DOI - 10.1049/cje.2020.11.002
Subject(s) - computer science, field programmable gate array, inference, convolutional neural network, computer architecture, computer engineering, overhead (engineering), process (computing), computation, embedded system, edge device, architecture, distributed computing, artificial intelligence, algorithm, cloud computing, art, visual arts, operating system
Convolutional neural networks (CNNs) have been widely adopted in many tasks. Their inference is often performed on edge devices, where computing resources and power consumption are limited. At present, the performance of general-purpose processors cannot meet the requirements of CNN models with high computational complexity and a large number of parameters. Field-programmable gate array (FPGA)-based custom computing architecture is a promising solution for further enhancing CNN inference performance. Software/hardware co-design can effectively reduce computing overhead and improve inference performance while preserving accuracy. In this paper, the mainstream methods of CNN structure design, hardware-oriented model compression, and FPGA-based custom architecture design are summarized, and the improvement in CNN inference performance is demonstrated through an example. Challenges and possible future research directions are outlined to foster research efforts in this domain.
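To illustrate what "hardware-oriented model compression" typically involves, the sketch below shows post-training fixed-point quantization, one of the techniques this class of work surveys. The function names, the per-tensor symmetric scheme, and the 8-bit format are illustrative assumptions, not the specific method of the reviewed paper.

```python
# Illustrative sketch (not the paper's method): symmetric per-tensor
# post-training quantization of float weights to signed 8-bit integers.
# On an FPGA, the int8 values feed cheap fixed-point multipliers (e.g.
# DSP slices), and the single float scale is folded into one multiply
# at the layer output.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 values plus a per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for accuracy checking."""
    return q.astype(np.float32) * scale

# Usage: the reconstruction error is bounded by half a quantization
# step, so accuracy loss is small relative to the weight range.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
max_err = np.max(np.abs(dequantize(q, s) - w))
```

Storing int8 weights also cuts on-chip memory traffic by 4x versus float32, which matters as much as the cheaper multipliers on bandwidth-limited edge devices.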
