
Compressive imaging for defending deep neural networks from adversarial attacks
Author(s) - Vladislav Kravets, Bahram Javidi, Adrian Stern
Publication year - 2021
Publication title - Optics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.524
H-Index - 272
eISSN - 1539-4794
pISSN - 0146-9592
DOI - 10.1364/ol.418808
Subject(s) - adversarial system, computer science, convolutional neural network, artificial intelligence, ghost imaging, encode, pixel, object classification, deep neural networks, computer vision, deep learning, image, pattern recognition
Despite their outstanding performance, convolutional deep neural networks (DNNs) are vulnerable to small adversarial perturbations. In this Letter, we introduce a novel approach to thwart such attacks: we employ compressive sensing (CS) to defend DNNs from adversarial perturbations while simultaneously encoding the image, thereby preventing counterattacks. We present computer simulations and optical experimental results for object classification of adversarial images captured with a CS single-pixel camera.
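
To illustrate the idea behind the abstract, below is a minimal, hedged sketch (not the authors' implementation) of how a single-pixel-camera CS acquisition, y = Φx, can act as an encoding front end: a classifier trained directly on the measurement vectors y never sees pixel space, so a pixel-domain adversarial perturbation is scrambled by the secret sensing matrix before reaching it. All names, sizes, the seed, and the perturbation model below are illustrative assumptions.

```python
# Sketch under stated assumptions: emulate single-pixel CS measurements.
import numpy as np

rng = np.random.default_rng(seed=42)   # the seed stands in for a secret key

n_pixels = 32 * 32                     # flattened image size (assumed)
n_measurements = 256                   # 25% sampling ratio (assumed)

# Random +/-1 Bernoulli patterns, one row per single-pixel exposure.
phi = rng.choice([-1.0, 1.0], size=(n_measurements, n_pixels))

def cs_measure(image: np.ndarray) -> np.ndarray:
    """Project a flattened image onto the sensing patterns, emulating the
    sequence of bucket-detector readings of a single-pixel camera."""
    return phi @ image.reshape(-1).astype(np.float64)

# A clean image and a stand-in adversarially perturbed copy (sign noise
# here is only a placeholder for a real attack such as FGSM).
img = rng.random((32, 32))
adv = np.clip(img + 0.05 * np.sign(rng.standard_normal(img.shape)), 0.0, 1.0)

y_clean, y_adv = cs_measure(img), cs_measure(adv)
# A DNN defended this way would be trained on vectors like y_clean; the
# structured pixel-space perturbation is spread diffusely across y_adv.
print(y_clean.shape, np.linalg.norm(y_adv - y_clean))
```

Because an attacker without the sensing matrix cannot predict how a pixel-domain perturbation maps into the measurement domain, the random patterns play the dual role of compression and encryption that the abstract attributes to CS.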