Open Access
Optimization of the Convolution Operation to Accelerate Deep Neural Networks in FPGA
Author(s) -
Malathi Devendran,
Indumathi Rajendran,
Vijayakumar Ponnusamy,
Diwakar R. Marur
Publication year - 2021
Publication title -
Revue d'Intelligence Artificielle
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.146
H-Index - 14
eISSN - 1958-5748
pISSN - 0992-499X
DOI - 10.18280/ria.350610
Subject(s) - loop unrolling, computer science, convolution (computer science), field programmable gate array, computation, convolutional neural network, parallel computing, gate array, speedup, MATLAB, artificial neural network, pixel, deep learning, artificial intelligence, process (computing), computer hardware, algorithm, compiler, programming language, operating system
In recent years, image-related machine learning tasks have been widely addressed with Convolutional Neural Networks (CNNs), which achieve high accuracy in image recognition. Because a CNN involves a very large number of computations, a hardware accelerator such as a Field Programmable Gate Array (FPGA) is employed. More than 90% of the operations in a CNN are convolutions. The objective of this work is to reduce the computation time while increasing the height, width, and pixel intensity levels of the input image. Most of the execution time of an image processing program is spent in loops. Loop optimization is the process of increasing speed and reducing the overheads associated with loops; it plays a crucial role in improving performance and making effective use of parallel processing capabilities. Loop unrolling is one such loop optimization technique. In this work, a CNN with four levels of loop unrolling is used, which reduces delay compared with the conventional Xilinx implementation. With the help of strides and padding, 40% of the computation time is saved, a result verified in MATLAB.
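As a concrete illustration of the loop-unrolling idea the abstract describes, the C sketch below writes a direct convolution as the nested loop levels of a CNN layer and manually unrolls the output-channel loop by a factor of four. All dimensions, identifiers, and the unroll factor here are illustrative assumptions, not values from the paper; on an FPGA the same transformation would typically be expressed with an HLS unroll pragma rather than by hand.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative dimensions; real layer sizes come from the network. */
    #define IN_CH   4                 /* input feature maps                  */
    #define OUT_CH  8                 /* output feature maps                 */
    #define IN_DIM  8                 /* input height = width                */
    #define K       3                 /* kernel height = width               */
    #define OUT_DIM (IN_DIM - K + 1)  /* stride 1, no padding                */
    #define UNROLL  4                 /* unroll factor; OUT_CH % UNROLL == 0 */

    /* Direct convolution written as the loop levels a CNN layer exposes:
     * kernel window, input channels, output pixels, and output channels.
     * The output-channel loop is unrolled by hand (the four MAC lines in
     * the body must match UNROLL), so each iteration holds independent
     * multiply-accumulates that can map to parallel FPGA multipliers. */
    static void conv2d(const float in[IN_CH][IN_DIM][IN_DIM],
                       const float w[OUT_CH][IN_CH][K][K],
                       float out[OUT_CH][OUT_DIM][OUT_DIM])
    {
        memset(out, 0, sizeof(float) * OUT_CH * OUT_DIM * OUT_DIM);

        for (int oy = 0; oy < OUT_DIM; oy++)                /* output pixels   */
            for (int ox = 0; ox < OUT_DIM; ox++)
                for (int oc = 0; oc < OUT_CH; oc += UNROLL) /* output channels */
                    for (int ic = 0; ic < IN_CH; ic++)      /* input channels  */
                        for (int ky = 0; ky < K; ky++)      /* kernel window   */
                            for (int kx = 0; kx < K; kx++) {
                                float px = in[ic][oy + ky][ox + kx];
                                /* UNROLL independent MACs per loop trip */
                                out[oc + 0][oy][ox] += px * w[oc + 0][ic][ky][kx];
                                out[oc + 1][oy][ox] += px * w[oc + 1][ic][ky][kx];
                                out[oc + 2][oy][ox] += px * w[oc + 2][ic][ky][kx];
                                out[oc + 3][oy][ox] += px * w[oc + 3][ic][ky][kx];
                            }
    }

    int main(void)
    {
        static float in[IN_CH][IN_DIM][IN_DIM];
        static float w[OUT_CH][IN_CH][K][K];
        static float out[OUT_CH][OUT_DIM][OUT_DIM];

        /* Simple deterministic fill so the result is easy to check. */
        for (int c = 0; c < IN_CH; c++)
            for (int y = 0; y < IN_DIM; y++)
                for (int x = 0; x < IN_DIM; x++)
                    in[c][y][x] = 1.0f;
        for (int o = 0; o < OUT_CH; o++)
            for (int c = 0; c < IN_CH; c++)
                for (int y = 0; y < K; y++)
                    for (int x = 0; x < K; x++)
                        w[o][c][y][x] = 0.5f;

        conv2d(in, w, out);
        /* Each output = IN_CH * K * K * 1.0 * 0.5 = 18.0 */
        printf("out[0][0][0] = %f\n", out[0][0][0]);
        return 0;
    }

Unrolling exposes four independent multiply-accumulates per iteration, which a synthesis tool can place on parallel multipliers, trading silicon area for the reduction in delay the abstract reports.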
