
An exploratory study of interpretability for face presentation attack detection
Author(s) -
Sequeira Ana F.,
Gonçalves Tiago,
Silva Wilson,
Pinto João Ribeiro,
Cardoso Jaime S.
Publication year - 2021
Publication title -
IET Biometrics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.434
H-Index - 28
eISSN - 2047-4946
pISSN - 2047-4938
DOI - 10.1049/bme2.12045
Subject(s) - interpretability , computer science , machine learning , biometrics , artificial intelligence , robustness , convolutional neural network , deep learning , presentation attack detection
Biometric recognition and presentation attack detection (PAD) methods rely strongly on deep learning algorithms. Though often more accurate, these models operate as complex black boxes. Interpretability tools are now being used to delve deeper into the operation of these methods, which is why this work advocates their integration into the PAD scenario. Building upon previous work, a face PAD model based on convolutional neural networks was implemented and evaluated both with traditional PAD metrics and with interpretability tools. The stability of the explanations obtained when testing models against attacks that were known or unknown during training is also evaluated. To overcome the limitations of direct comparison, a suitable representation of the explanations is constructed to quantify how much two explanations differ from each other. From the point of view of interpretability, the results obtained in intra-class and inter-class comparisons led to the conclusion that the presence of more attacks during training has a positive effect on the generalisation and robustness of the models. This exploratory study confirms the urgent need to establish new approaches in biometrics that incorporate interpretability tools. Moreover, methodologies are needed to assess and compare the quality of explanations.
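The abstract mentions constructing a representation of the explanations so that two explanations can be compared quantitatively. A minimal sketch of one such comparison, assuming the explanations are saliency-style heatmaps represented as arrays (the cosine-distance choice here is an illustration, not necessarily the measure used in the paper):

```python
import numpy as np

def explanation_distance(expl_a, expl_b):
    """Quantify how much two explanation heatmaps differ.

    Flattens each map, L2-normalises it so that overall scale
    differences do not dominate, and returns the cosine distance
    (0 = identical direction, up to 2 = opposite).
    """
    a = np.asarray(expl_a, dtype=float).ravel()
    b = np.asarray(expl_b, dtype=float).ravel()
    a = a / (np.linalg.norm(a) + 1e-12)  # avoid division by zero
    b = b / (np.linalg.norm(b) + 1e-12)
    return 1.0 - float(np.dot(a, b))

# Identical maps give distance ~0; disjoint maps give distance ~1.
m = np.array([[0.0, 1.0], [0.5, 0.2]])
print(explanation_distance(m, m))
```

With such a scalar distance, intra-class comparisons (explanations for samples of the same attack type) and inter-class comparisons (bona fide versus attack, or across attack types) can be summarised and contrasted statistically.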