
Targeted Adversarial Examples Generating Method Based on cVAE in Black Box Settings
Author(s) -
Tingyue YU,
Shen WANG,
Chunrui ZHANG,
Zhenbang WANG,
Yetian LI,
Xiangzhan YU
Publication year - 2021
Publication title -
Chinese Journal of Electronics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.267
H-Index - 25
eISSN - 2075-5597
pISSN - 1022-4653
DOI - 10.1049/cje.2021.06.009
Subject(s) - adversarial system , computer science , autoencoder , artificial intelligence , robustness , deep learning , black box , machine learning
In recent years, adversarial examples have become one of the most important security threats to deep learning applications. To test the security of deep learning models in adversarial environments, much research focuses on generating adversarial examples quickly and efficiently. Existing methods based on generative adversarial networks cannot effectively generate targeted adversarial examples in black-box settings, and gradient-based generating methods suffer from poor temporal performance. To address both problems, this paper proposes an adversarial example generating method based on a conditional variational autoencoder (cVAE). The cVAE is elaborately designed to generate adversarial examples without most of the detailed information about the attacked deep learning model, whose output can be controlled arbitrarily by the crafted inputs; the generated examples can then be used to test the robustness of deep learning models against adversarial attacks. Experimental results show that, in a black-box environment, the proposed method achieves an attack success rate comparable to, and better temporal performance than, existing gradient-based generating methods.
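The core idea described in the abstract — conditioning a VAE on the attacker-chosen target class so that decoding yields an input the victim model should classify as that target — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's architecture: the layer sizes, the use of untrained random weights, and the simple concatenation-based conditioning are all assumptions made for brevity.

```python
# Hypothetical sketch of targeted generation with a conditional VAE:
# the encoder sees (input, one-hot target label) and the decoder sees
# (latent sample, one-hot target label), so the target class controls
# the generated output. Random weights stand in for trained parameters.
import numpy as np

rng = np.random.default_rng(0)

D, C, Z = 784, 10, 16          # image dim, number of classes, latent dim (assumed)

W_enc = rng.normal(scale=0.01, size=(D + C, 2 * Z))   # encoder: (x, y_t) -> (mu, logvar)
W_dec = rng.normal(scale=0.01, size=(Z + C, D))       # decoder: (z, y_t) -> x_adv

def one_hot(label, n=C):
    v = np.zeros(n)
    v[label] = 1.0
    return v

def generate_targeted(x, target):
    """Encode x conditioned on the target class, sample z, decode x_adv."""
    y = one_hot(target)
    h = np.concatenate([x, y]) @ W_enc
    mu, logvar = h[:Z], h[Z:]
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=Z)   # reparameterisation trick
    logits = np.concatenate([z, y]) @ W_dec
    return 1.0 / (1.0 + np.exp(-logits))                 # sigmoid keeps pixels in [0, 1]

x = rng.uniform(size=D)                # stand-in input image in [0, 1]
x_adv = generate_targeted(x, target=3)
```

In a black-box attack, training such a generator would only require querying the victim model's outputs (to encourage `x_adv` to be classified as `target`), which is why generation at test time needs no gradients from the attacked model and is fast compared with iterative gradient-based methods.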