Open Access
Face Recognition System Against Adversarial Attack Using Convolutional Neural Network
Author(s) - Ansam Kadhim, Salah Al-Darraji
Publication year - 2021
Publication title -
iraqi journal for electrical and electronic engineering/al-maǧallaẗ al-ʻirāqiyyaẗ al-handasaẗ al-kahrabāʼiyyaẗ wa-al-ilikttrūniyyaẗ
Language(s) - English
Resource type - Journals
eISSN - 2078-6069
pISSN - 1814-5892
DOI - 10.37917/ijeee.18.1.1
Subject(s) - computer science , convolutional neural network , artificial intelligence , facial recognition system , deep learning , pattern recognition , artificial neural network , computer vision , computer security
Face recognition is the technology that verifies or identifies faces in images, videos, or real-time streams, and it is used in applications such as security and employee-attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces correctly: noisy images mixed with the originals confuse the recognition results. Attacks that exploit this weakness include the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect a face recognition system against these attacks: images are first distorted with the different attacks, and the recognition model, a Convolutional Neural Network (CNN), is then trained on both the original and the distorted images. Diverse experiments combining original and distorted images were conducted to test the effectiveness of the system, which achieved an accuracy of 93% under the FGSM attack, 97% under DeepFool, and 95% under PGD.
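The abstract describes two steps: generating adversarially distorted images (FGSM perturbs an input by a small step in the direction of the sign of the input gradient of the loss) and then training on the mix of clean and distorted images. A minimal sketch of that idea is shown below, using a logistic-regression model as a stand-in for the paper's CNN (the model, the weights, and the step size `eps` are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Illustrative stand-in model: logistic regression with fixed weights.
# The paper uses a CNN; the FGSM formula is the same in both cases.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y):
    """Binary cross-entropy loss for input x and label y."""
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def loss_grad_x(x, y):
    """Gradient of the loss with respect to the INPUT x (not the weights)."""
    p = sigmoid(x @ w + b)
    return (p - y) * w

def fgsm(x, y, eps=0.1):
    """FGSM: x_adv = x + eps * sign(grad_x L(x, y))."""
    return x + eps * np.sign(loss_grad_x(x, y))

# One clean sample and its FGSM-distorted version.
x = rng.normal(size=8)
y = 1.0
x_adv = fgsm(x, y, eps=0.1)

# Adversarial training then fits the model on clean + distorted data together,
# mirroring the paper's "original and distorted images" training set.
X_train = np.vstack([x, x_adv])
y_train = np.array([y, y])
```

The perturbation is bounded by `eps` in every component, so the distorted image stays visually close to the original while the loss increases, which is exactly the weakness the mixed-data training is meant to remove.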
