Open Access
One Draw Attack: Fool Deep Learning Models With Modifying Fewer Pixels
Author(s) - Shuangjia Gu, Wanping Liu, Li Zhi
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1827/1/012179
Subject(s) - computer science , artificial intelligence , adversarial system , pixel , norm (philosophy) , bézier curve , focus (optics) , image (mathematics) , image processing , deep learning , pattern recognition (psychology) , machine learning , computer vision , mathematics , physics , geometry , optics , political science , law
Deep learning plays the leading role in the fields of image recognition, natural language processing, and speech processing. However, many studies reveal that the existence of adversarial examples makes models vulnerable. Most research focuses on restricting a norm distance, which results in perturbing images too much. With the examples given in this paper, we demonstrate that restricting the number of modified pixels, rather than a norm distance, produces images that are easier for human vision to classify correctly, while still achieving a high success rate of non-targeted attacks on VGG16, ResNet50, and DenseNet201. Our method adds Bézier curves to the original image, searching for their parameters with the differential evolution (DE) algorithm. Through continuous evolution, keeping good individuals and eliminating bad ones, DE guides the search toward the best solution. With DE, no gradient information about the target model is needed; therefore, our method is robust across different models. Adding more Bézier curves to the original image yields a higher attack success rate, which further shows that our method is effective. We find that the distribution of predicted labels concentrates on a few categories, meaning that some labels are easier to attack than most others. We also believe the encoding order of labels has an influence on the distribution of misclassified labels.
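The black-box search the abstract describes can be sketched as follows: rasterize a quadratic Bézier curve onto the image, then run a minimal DE/rand/1/bin loop over the curve's control points to minimize the model's confidence in the true label. The rasterizer, the DE hyperparameters (population size, F, CR), and the quadratic Bézier form are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def bezier_points(p0, p1, p2, n=50):
    """Sample n points along a quadratic Bézier curve with control points p0, p1, p2."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 2) * p0 + 2 * (1 - t) * t * p1 + (t ** 2) * p2

def draw_curve(image, params, value=1.0):
    """Rasterize a quadratic Bézier curve onto a copy of the image.

    params = [x0, y0, x1, y1, x2, y2], the three control points in pixel coords.
    Only the few pixels touched by the curve are modified.
    """
    img = image.copy()
    h, w = img.shape[:2]
    pts = bezier_points(np.asarray(params[0:2], float),
                        np.asarray(params[2:4], float),
                        np.asarray(params[4:6], float))
    for x, y in pts:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            img[yi, xi] = value
    return img

def differential_evolution(fitness, bounds, pop_size=20, iters=50,
                           F=0.5, CR=0.7, rng=None):
    """Minimal DE/rand/1/bin: mutate, crossover, and keep the fitter individual."""
    if rng is None:
        rng = np.random.default_rng(0)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    scores = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct individuals other than i and form a mutant.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one mutant coordinate through.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial)
            if s < scores[i]:  # greedy selection: good individuals survive
                pop[i], scores[i] = trial, s
    best = int(np.argmin(scores))
    return pop[best], scores[best]
```

For a non-targeted attack, the fitness would be `lambda p: model_confidence(draw_curve(image, p), true_label)` with bounds covering the image dimensions; since DE queries only this scalar score, no model gradients are required, matching the black-box property claimed in the abstract.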
