
Dense Attention Convolutional Network for Image Classification
Author(s) - Han Zhang, Kun Qin, Ye Zhang, Zhili Li, Kai Xu
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1651/1/012184
Subject(s) - discriminative model, convolutional neural network, computer science, artificial intelligence, pattern recognition (psychology), feature extraction, feature (linguistics), process (computing), deep learning, channel (broadcasting), feature learning, margin (machine learning), machine learning, computer network, philosophy, linguistics, operating system
Convolutional neural networks (CNNs) have made rapid progress on a range of visual tasks, but the purely bottom-up convolutional feature extraction process falls short of mimicking human visual perception, which takes advantage of discriminative features. Although attention modules, which extract top-down discriminative features, have been widely investigated in recent years, current attention modules interrupt the bottom-up convolutional feature extraction process. To tackle this challenge, in this paper we introduce a dense connection structure that fuses the discriminative features from attention modules with the convolutional features, which we term dense attention learning. In addition, to alleviate the over-fitting problem caused by rapid feature dimension growth, we propose a channel-wise attention module to compress and refine the convolutional features. Based on these strategies, we build a dense attention convolutional neural network (DA-CNN) for visual recognition. Extensive experiments on four challenging datasets, CIFAR-10, CIFAR-100, SVHN, and ImageNet, demonstrate that DA-CNN outperforms many state-of-the-art methods. Moreover, the effectiveness of dense attention learning and of the channel-wise attention module is also validated.
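The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the two ideas it names: a channel-wise attention module that compresses and refines convolutional features, and dense connections that concatenate attention-refined features with earlier features. The class names (ChannelAttention, DenseAttentionBlock), the SE-style squeeze-and-excitation gating, and all layer widths are assumptions for illustration, not the authors' exact DA-CNN design.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Assumed channel-wise attention: squeeze spatial dims, score each
    channel, reweight the features, then compress them with a 1x1 conv."""
    def __init__(self, in_channels, out_channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial squeeze
        self.fc = nn.Sequential(
            nn.Linear(in_channels, in_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels // reduction, in_channels),
            nn.Sigmoid(),
        )
        # 1x1 conv compresses the reweighted features to a smaller width,
        # limiting the dimension growth mentioned in the abstract
        self.compress = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return self.compress(x * w)  # refined and compressed features


class DenseAttentionBlock(nn.Module):
    """Assumed dense-attention block: each layer convolves the concatenation
    of all earlier features and appends its attention-refined output, so
    bottom-up convolutional and top-down attention features are fused."""
    def __init__(self, in_channels, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        c = in_channels
        for _ in range(num_layers):
            conv = nn.Sequential(
                nn.BatchNorm2d(c), nn.ReLU(inplace=True),
                nn.Conv2d(c, growth, kernel_size=3, padding=1, bias=False),
            )
            attn = ChannelAttention(growth, growth)
            self.layers.append(nn.ModuleDict({"conv": conv, "attn": attn}))
            c += growth  # dense concatenation grows the channel count

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            conv_out = layer["conv"](torch.cat(feats, dim=1))
            feats.append(layer["attn"](conv_out))  # fuse refined features
        return torch.cat(feats, dim=1)


if __name__ == "__main__":
    block = DenseAttentionBlock(in_channels=64)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 192, 32, 32]): 64 + 4 * 32 channels
```

In this sketch the attention module sits beside the convolution rather than replacing it, and its output is merged back through concatenation, which is one plausible reading of "fusing discriminative features from attention modules and convolutional features" without interrupting the bottom-up path.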