Open Access
Maximising robustness and diversity for improving the deep neural network safety
Author(s) - Esmaeili Bardia, Akhavanpour Alireza, Sabokrou Mohammad
Publication year - 2021
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12070
Subject(s) - adversarial attacks, robustness, computer science, artificial neural networks, deep neural networks, artificial intelligence, encoder, machine learning, computer security
This article proposes a novel yet efficient defence method against adversarial attacks, aimed at improving the safety of deep neural networks. Removing adversarial noise by refining adversarial samples has been widely investigated as a defence strategy in previous works, but such methods are easily broken if an attacker has access to both the main and the refiner networks. To cope with this weakness, the authors propose to refine the input samples using a set of encoder–decoders trained to reconstruct the samples in completely different feature spaces. To this end, the authors learn several encoder–decoder networks and force their latent spaces to be maximally diverse. In this way, if an attacker gains access to one of the refiner networks, the others can still act as defence networks. The evaluation of the proposed method confirms its performance against adversarial samples.
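
The abstract does not give implementation details, but the core idea (several encoder–decoder refiners whose latent codes are pushed apart by a diversity term) can be sketched roughly as follows. This is a minimal PyTorch sketch under assumed choices: the layer sizes, the number of refiners K, the diversity measure (squared cosine similarity between latent codes), and the loss weight are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: K encoder-decoder refiners trained to reconstruct clean
# inputs, with a penalty that pushes their latent codes apart. All sizes,
# K, and the 0.1 weight are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Refiner(nn.Module):
    """One encoder-decoder that maps an input back to itself."""
    def __init__(self, in_dim=784, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def diversity_loss(latents):
    """Penalise similarity between different refiners' latent codes for the
    same batch (here: mean squared cosine similarity, an assumed choice)."""
    loss, count = 0.0, 0
    for i in range(len(latents)):
        for j in range(i + 1, len(latents)):
            zi = F.normalize(latents[i], dim=1)
            zj = F.normalize(latents[j], dim=1)
            loss = loss + (zi * zj).sum(dim=1).pow(2).mean()
            count += 1
    return loss / max(count, 1)


K = 3  # number of refiners (illustrative)
refiners = [Refiner() for _ in range(K)]
opt = torch.optim.Adam([p for r in refiners for p in r.parameters()], lr=1e-3)

x = torch.rand(32, 784)  # stand-in for a batch of clean training images

# One training step: reconstruct the clean inputs with every refiner while
# keeping their latent representations of the same inputs dissimilar.
recons, latents = zip(*(r(x) for r in refiners))
recon_loss = sum(F.mse_loss(xr, x) for xr in recons)
loss = recon_loss + 0.1 * diversity_loss(list(latents))

opt.zero_grad()
loss.backward()
opt.step()

# At test time, an input would be passed through one of the refiners before
# the main classifier; if that refiner is compromised, the remaining ones
# (operating in different feature spaces) can take its place.
```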
