
Generating adversarial examples without specifying a target model
Author(s) -
Gaoming Yang,
Mingwei Li,
Xianjing Fang,
Zhang Ji,
Xingzhu Liang
Publication year - 2021
Publication title -
PeerJ Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.806
H-Index - 24
ISSN - 2376-5992
DOI - 10.7717/peerj-cs.702
Subject(s) - MNIST database, adversarial system, computer science, set (abstract data type), black box, threat model, artificial intelligence, training set, machine learning, deep learning, theoretical computer science, data mining, computer security, programming language
Adversarial examples are regarded as a security threat to deep learning models, and many methods exist for generating them. However, most existing methods require query access to the target model, and in practical settings an attacker who issues too many queries is easily detected; this problem is especially acute in the black-box setting. To address it, we propose the Attack Without a Target Model (AWTM). Because AWTM does not specify any target model when generating adversarial examples, it never needs to query the target. Experimental results show a maximum attack success rate of 81.78% on the MNIST dataset and 87.99% on the CIFAR-10 dataset. In addition, as a GAN-based method, AWTM has a low time cost.
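The core idea the abstract describes, crafting an adversarial example with a generator alone, so that no query ever reaches a target classifier, can be sketched as follows. This is a minimal illustration, not the paper's actual AWTM architecture: the generator here is a toy, randomly initialized one-layer network, whereas AWTM trains its generator with a GAN objective; all names, sizes, and the epsilon bound are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: a generator maps random noise to a bounded
# perturbation that is added to a clean image. The key property shown
# is that crafting the example requires NO query to any target model.
# (Toy stand-in: untrained one-layer generator; AWTM trains a real GAN.)

RNG = np.random.default_rng(0)

def generator(noise, weights):
    """Toy one-layer 'generator': noise -> perturbation values in [-1, 1]."""
    return np.tanh(noise @ weights)

def craft_adversarial(image, eps=0.1, noise_dim=16):
    """Add a generator-produced perturbation, bounded by eps in L-infinity,
    then clip the result back to the valid pixel range [0, 1]."""
    weights = RNG.standard_normal((noise_dim, image.size))
    z = RNG.standard_normal(noise_dim)                 # random latent code
    delta = eps * generator(z, weights).reshape(image.shape)
    return np.clip(image + delta, 0.0, 1.0)

image = RNG.random((28, 28))      # stand-in for a 28x28 MNIST image
adv = craft_adversarial(image)    # no classifier was queried at any point
```

The clipping step guarantees the perturbed image stays a valid image while the `tanh` output scaled by `eps` keeps the perturbation within the L-infinity budget, mirroring the standard constraint on imperceptible adversarial perturbations.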