
Compression of Convolutional Neural Network for Natural Language Processing
Author(s) -
Krzysztof Wróbel,
Michał Karwatowski,
Maciej Wielgosz,
Marcin Pietroń,
Kazimierz Wiatr
Publication year - 2020
Publication title -
Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.145
H-Index - 5
eISSN - 2300-7036
pISSN - 1508-2806
DOI - 10.7494/csci.2020.21.1.3375
Subject(s) - computer science, memory footprint, convolutional neural network, pruning, artificial intelligence, quantization (signal processing), field programmable gate array, artificial neural network, process (computing), image compression, pattern recognition (psychology), image processing, embedded system, computer vision, image (mathematics), programming language, agronomy, biology
Convolutional Neural Networks (CNNs) were originally created for image-classification tasks but were quickly applied to other domains, including Natural Language Processing (NLP). Nowadays, solutions based on artificial intelligence appear on mobile devices and in embedded systems, which places constraints on, among other things, memory and power consumption. Due to the memory and computing requirements of CNNs, they need to be compressed before they can be mapped to hardware. This paper presents the results of compressing efficient CNNs for sentiment analysis. The main steps involve pruning and quantization. The process of mapping the compressed network to an FPGA and the results of this implementation are described. The conducted simulations showed that a 5-bit width is enough to ensure no drop in accuracy compared to the floating-point version of the network. Additionally, the memory footprint was significantly reduced (by between 85% and 93% compared to the original model).
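The abstract names pruning and quantization as the main compression steps but does not specify the exact criteria used. As a minimal sketch, assuming magnitude-based pruning and uniform symmetric quantization (a common baseline; the paper's actual scheme may differ), the two steps could look like this:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    # Zero out the fraction `sparsity` of weights with the smallest
    # absolute value (a standard magnitude-pruning criterion; the
    # paper's own criterion is not given in the abstract).
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    pruned = weights.copy()
    if k == 0:
        return pruned
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_uniform(weights, bits=5):
    # Uniform symmetric quantization: map weights onto a signed grid
    # with 2**(bits-1) - 1 positive levels, then scale back to floats.
    max_abs = np.max(np.abs(weights))
    if max_abs == 0.0:
        return weights.copy()
    levels = 2 ** (bits - 1) - 1   # 15 levels per sign for 5-bit
    scale = max_abs / levels
    return np.round(weights / scale) * scale

# Illustrative values only; the sparsity level is hypothetical.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)
w_pruned = prune_by_magnitude(w, sparsity=0.5)
w_q = quantize_uniform(w_pruned, bits=5)
print("sparsity:", float(np.mean(w_q == 0.0)))
print("distinct levels:", np.unique(w_q).size)
```

After both steps, the weight tensor is at least 50% zeros and takes at most 2^5 distinct values, which is what enables the memory-footprint reduction reported above when the result is stored in a compressed, low-bit-width format on the FPGA.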