Open Access
An area‐efficient memory‐based multiplier powering eight parallel multiplications for convolutional neural network processors
Author(s) -
Choi Seongrim,
Cho Suhwan,
Nam ByeongGyu
Publication year - 2021
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12206
Subject(s) - multiplier , computer science , convolutional neural network , parallel computing , arithmetic , artificial neural network , computer architecture , electronic engineering , computer hardware , artificial intelligence , mathematics , engineering
Convolutional neural networks (CNNs) are widely used in deep learning applications because of their best‐in‐class classification performance. However, a CNN requires a very large number of multiply‐accumulate (MAC) operations to realize human‐level cognition capabilities. An area‐efficient multiplier is therefore essential to integrate many MAC units into a CNN processor. In this letter, we present an area‐efficient memory‐based multiplier targeting CNN processing. The proposed architecture adopts a 32‐port memory shared across eight parallel multiplications. Simulation results show that the area is reduced by 18.4% compared with the state‐of‐the‐art memory‐based multiplier.
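The abstract does not detail the circuit, but the general idea behind a memory‐based (look‐up‐table) multiplier can be illustrated in software: precompute the products of a weight with every small operand chunk once, then let several multiplications read from that shared table instead of each instantiating a full multiplier. The sketch below is a hypothetical software analogy, not the authors' hardware design; the nibble width, operand width, and the `parallel_multiply` helper are illustrative assumptions.

```python
def build_lut(weight):
    # Precompute weight * n for every 4-bit nibble n (16 entries).
    # In hardware this table would live in a shared multi-port memory.
    return [weight * n for n in range(16)]

def lut_multiply(lut, x):
    # Split an 8-bit operand into low/high nibbles, read two table
    # entries, and combine them with a shift-add -- the only arithmetic
    # left after the table look-ups.
    lo, hi = x & 0xF, (x >> 4) & 0xF
    return lut[lo] + (lut[hi] << 4)

def parallel_multiply(weight, xs):
    # One shared table serves every multiplication in the batch,
    # loosely mirroring how a multi-port memory can be shared across
    # several multipliers instead of duplicating the table per unit.
    lut = build_lut(weight)
    return [lut_multiply(lut, x) for x in xs]

print(parallel_multiply(3, [0, 1, 255]))  # -> [0, 3, 765]
```

Sharing one table across many operand streams is what makes the approach area‐efficient: the table cost is amortized, and only the cheap shift‐add logic is replicated per multiplication.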
