Increasing robustness of deep neural network models against adversarial attacks
Author(s) - Pankaj D. Bhagwat, Pratibha Shingare
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1797/1/012005
Subject(s) - adversarial system, computer science, artificial intelligence, robustness, deep learning, artificial neural network, machine learning, deep neural networks, pattern recognition, data mining
In autonomous driving, detecting objects correctly is critical, yet studies have shown that placing a small adversarial pattern on an object can intentionally fool a network. Small, deliberate changes to the input can significantly distort the output of a deep neural network model, making learned models vulnerable to such perturbations and leaving them with a wide scope for failure. Defending against these intentional attacks therefore makes the system more robust. In this project, we combine multiple techniques for defending against adversarial attacks. The first is adversarial training, which augments the training dataset with adversarial examples; the second pre-processes the input data before it is fed to the model; and the third randomly selects among several image pre-processing methods. The third technique is intended to confuse an attacker who knows which pre-processing method is in use, since the transformation actually applied is chosen at random from multiple options. We measure the robustness of the deep learned model in terms of accuracy, comparing the designed system against the original model, and test with both clean samples and adversarial images generated from the dataset. Among the combined methods, adversarial training proved the best defence against white-box attacks. Had stronger defences been used within the random selection of image transformations, the system could have performed considerably better; nevertheless, random selection served its purpose of confusing the attacker by varying the transformations applied.
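
The first technique, adversarial training, augments each training batch with perturbed copies of the inputs. The abstract does not name the attack used to generate those examples, so the sketch below assumes an FGSM-style perturbation in PyTorch; model, train_loader, optimizer and epsilon are illustrative placeholders rather than the authors' actual setup.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon):
    # FGSM: x' = x + epsilon * sign(grad_x loss), clamped to valid pixels.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    # Train on clean and adversarial batches together so the model keeps
    # its clean accuracy while learning to resist small perturbations.
    model.train()
    for images, labels in train_loader:
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()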
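The third technique, random selection of the image pre-processing method, can be sketched as follows: an attacker who knows one transformation cannot tailor a perturbation to it when the transformation actually applied is drawn at random per input. The specific transforms and the 224x224 input size below are assumptions for illustration, not the defences evaluated in the paper.

import random
import torch
import torchvision.transforms as T

# Candidate pre-processing transforms; stronger defences could be
# substituted here, as the abstract suggests.
PREPROCESSORS = [
    T.Compose([T.GaussianBlur(kernel_size=3), T.Resize((224, 224))]),
    T.RandomResizedCrop(size=224, scale=(0.9, 1.0)),
    T.Compose([T.Resize(256), T.CenterCrop(224)]),
]

def robust_predict(model, image):
    # image: CHW float tensor in [0, 1]. Draw one transform at random,
    # pre-process, then classify.
    transform = random.choice(PREPROCESSORS)
    x = transform(image).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        return model(x).argmax(dim=1)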
