
Deeply vulnerable: a study of the robustness of face recognition to presentation attacks
Author(s) -
Amir Mohammadi,
Sushil Bhattacharjee,
Sébastien Marcel
Publication year - 2018
Publication title -
IET Biometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.434
H-Index - 28
eISSN - 2047-4946
pISSN - 2047-4938
DOI - 10.1049/iet-bmt.2017.0079
Subject(s) - computer science , facial recognition system , artificial intelligence , deep learning , deep neural networks , artificial neural network , machine learning , pattern recognition , computer security , robustness , vulnerability (computing) , trustworthiness , biometrics
This study examines the vulnerability of deep-learning-based face-recognition (FR) methods to presentation attacks (PAs). Recently proposed FR methods based on deep neural networks (DNNs) have been shown to outperform most other methods by a significant margin. In a trustworthy face-verification system, however, maximising recognition performance alone is not sufficient: the system should also be able to resist various kinds of attacks, including PAs. Previous experience has shown that the PA vulnerability of FR systems tends to increase with face-verification accuracy. Using several publicly available PA datasets, the authors show that DNN-based FR systems compensate for the variability between bona fide and PA samples and tend to score them similarly, which makes such systems extremely vulnerable to PAs. Experiments show the vulnerability of the studied DNN-based FR systems to be consistently higher than 90%, and often higher than 98%.
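The vulnerability figures quoted above are typically obtained by fixing a verification threshold on bona fide data and then counting how many attack presentations are accepted at that threshold (the ISO/IEC 30107-3 IAPMR metric). The paper's exact evaluation protocol is not reproduced here; the sketch below is a minimal illustration under the assumption that scores are similarity values, with hypothetical helper names `far_threshold` and `iapmr`:

```python
import numpy as np

def far_threshold(impostor_scores, far=0.001):
    """Return the similarity threshold at which the false-acceptance
    rate on zero-effort impostor scores is (approximately) `far`."""
    s = np.sort(np.asarray(impostor_scores))[::-1]  # descending
    # Accept at least one impostor score so the threshold is well defined.
    k = max(int(np.floor(far * len(s))), 1)
    return s[k - 1]

def iapmr(attack_scores, threshold):
    """Impostor Attack Presentation Match Rate: the fraction of
    presentation-attack scores accepted as genuine at `threshold`."""
    return float(np.mean(np.asarray(attack_scores) >= threshold))

# Toy example with made-up scores (not from the paper):
impostors = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
t = far_threshold(impostors, far=0.1)
attacks = np.array([0.95, 1.0, 1.1, 0.5])
print(iapmr(attacks, t))  # fraction of attacks scoring above the threshold
```

A vulnerability "higher than 90%" in this framing means more than 90% of attack presentations score above the operating threshold, i.e. the FR system matches them to the targeted identity.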