Open Access
Adversarial Image Perturbation with a Genetic Algorithm
Author(s) - Rok Kukovec, Špela Pečnik, Iztok Fister, Sašo Karakatič
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18690/978-961-286-516-0.6
Subject(s) - computer science , artificial intelligence , adversarial system , convolutional neural network , image (mathematics) , artificial neural network , pattern recognition (psychology) , image quality , computer vision , genetic algorithm , human visual system model , algorithm , machine learning
The quality of image recognition with neural network models relies heavily on filters and parameters optimized through the training process. These filters differ from how humans see and recognize objects around them. The difference between machine and human recognition yields a noticeable gap, which is prone to exploitation. The workings of these algorithms can be compromised with adversarial perturbations of images, where images are modified seemingly imperceptibly, such that humans see little to no difference, but the neural network classifies the motif incorrectly. This paper explores adversarial image modification with an evolutionary algorithm, so that the AlexNet convolutional neural network cannot recognize previously clear motifs while the human perceptibility of the image is preserved. The experiment was implemented in Python and tested on the ILSVRC dataset. Original images and their recreated counterparts were compared and contrasted using visual assessment and statistical metrics. The findings suggest that the human eye, without prior knowledge, will hardly spot the difference compared to the original images.
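The abstract describes evolving a perturbation that fools AlexNet while remaining visually negligible. The sketch below is not the authors' implementation; it is a minimal illustration of the general idea, assuming torchvision's pretrained AlexNet, ImageNet-style preprocessing, and a simple elitist evolutionary loop over bounded additive noise masks. Population size, mutation scale, and the noise bound `eps` are illustrative choices, not values from the paper.

```python
import random
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval().to(device)

# Standard ImageNet preprocessing for AlexNet.
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def true_class_prob(img, label):
    """Softmax probability AlexNet assigns to the original (true) label."""
    with torch.no_grad():
        logits = model(img.unsqueeze(0).to(device))
    return torch.softmax(logits, dim=1)[0, label].item()

def evolve_perturbation(image_path, label, pop_size=20, generations=100, eps=0.03):
    """Evolve a small additive noise mask that lowers AlexNet's confidence in
    `label` while the perturbation stays visually negligible (|noise| <= eps)."""
    x = preprocess(Image.open(image_path).convert("RGB"))
    population = [torch.empty_like(x).uniform_(-eps, eps) for _ in range(pop_size)]
    best = population[0]
    for _ in range(generations):
        # Fitness: a lower probability of the true class is better (minimization).
        scored = sorted(population, key=lambda p: true_class_prob(x + p, label))
        best, parents = scored[0], scored[: pop_size // 4]    # elitist selection
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = random.choice(parents), random.choice(parents)
            mask = torch.rand_like(x) < 0.5                    # uniform crossover
            child = torch.where(mask, a, b)
            child = child + torch.randn_like(x) * eps * 0.1    # Gaussian mutation
            offspring.append(child.clamp(-eps, eps))
        population = parents + offspring
    return x, best  # preprocessed image and the fittest noise mask found
```

With the perturbation bounded by a small `eps`, the modified image typically looks unchanged to a human observer, which mirrors the perceptibility requirement discussed in the paper; the paper's own genetic operators, fitness definition, and similarity metrics may differ from this sketch.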
