Open Access
MultiAdapt: A Neural Network Adaptation For Pruning Filters Based on Multi-layers Group
Author(s) -
Jie Yang,
Zhihong Xie,
Ping Li
Publication year - 2021
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1873/1/012062
Subject(s) - pruning, computer science, convolutional neural network, reduction (mathematics), artificial neural network, algorithm, artificial intelligence, pattern recognition (psychology), base (topology), deep learning, mathematics, mathematical analysis, geometry, agronomy, biology
Deep convolutional neural networks are widely used in AI applications. The most advanced networks are becoming deeper and wider, causing some large convolutional neural networks to exceed the size limits of servers or applications. Pruning algorithms provide a way to reduce the size of a neural network while keeping accuracy as high as possible. Automatic progressive pruning is one of the most widely used approaches: in each iteration it prunes a certain layer of the network to increase sparsity while preserving accuracy as much as possible. In this article, we design a new automatic progressive pruning algorithm named MultiAdapt. MultiAdapt combines a combinatorial selection method with a greedy algorithm. This multi-layer progressive pruning method greatly enlarges the search space of the greedy algorithm, making it possible to obtain a better pruned network. We use MultiAdapt to prune the large neural networks VGG-16 and ResNet. The experimental results show that MultiAdapt achieves a better balance between model size and accuracy than other mainstream methods. For image classification on the ImageNet dataset, our method achieves 88.72% and 90.55% TOP-5 accuracy on VGG-16 and ResNet at 50% sparsity, while obtaining nearly a 2× reduction in parameters and floating-point operations, a larger reduction than recent popular methods.
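The paper's code is not reproduced on this page; the sketch below (in Python) only illustrates how a greedy, progressive pruning loop over multi-layer groups of the kind described in the abstract could be organized. The helper functions evaluate_accuracy, prune_filters, and current_sparsity are hypothetical placeholders for model-specific logic and are not part of the published method.

import itertools
from typing import Callable, Dict, List

def multiadapt_style_prune(
    model,
    layers: List[str],
    target_sparsity: float,
    step: float,
    group_size: int,
    evaluate_accuracy: Callable[[object], float],            # assumed helper: validation accuracy of a model
    prune_filters: Callable[[object, Dict[str, float]], object],  # assumed helper: prune extra ratio per named layer
    current_sparsity: Callable[[object], float],              # assumed helper: overall fraction of pruned weights
):
    """Greedily prune groups of layers until the sparsity target is reached.

    Each iteration tries every combination of `group_size` layers with a small
    additional pruning ratio `step` and keeps the group whose pruning costs the
    least accuracy. Using multi-layer groups (group_size > 1) enlarges the
    search space compared with single-layer greedy pruning.
    """
    while current_sparsity(model) < target_sparsity:
        best_model, best_acc = None, float("-inf")
        # Enumerate candidate layer groups (the combinatorial part).
        for group in itertools.combinations(layers, group_size):
            candidate = prune_filters(model, {name: step for name in group})
            acc = evaluate_accuracy(candidate)
            if acc > best_acc:
                best_model, best_acc = candidate, acc
        # Greedy step: commit to the least-damaging group and continue.
        model = best_model
    return model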
