
Integrating Fine-Grained Classification and Motion Relation Analysis for Face Anti-Spoofing
Author(s) - Ziyang Cheng, Xiafen Zhang
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3573790
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Abstract - Face Anti-Spoofing (FAS) is the core technique for protecting face recognition systems from face presentation attacks. Facial movement is a commonly used cue in FAS, but existing movement-based anti-spoofing methods usually extract motion features from the face as a whole, ignoring micro-movements in local facial regions. Additionally, the fine-grained texture information of local facial regions is not well integrated with the motion cues. We therefore propose a face anti-spoofing method that integrates fine-grained classification and motion relation analysis. Specifically, we propose a Motion Clues Attention Network (MCAN) and a Fine-Grained Classification Network (FGCN). By introducing an attention mechanism, MCAN uses the RAFT optical flow algorithm to adaptively focus on the direction and intensity of micro-movements in key facial regions, distinguishing the natural movement of real faces from the static or repetitive motion patterns of spoofing attacks. Inspired by recent face recognition research, we design the Distribution-Based Additive Margin Softmax-Similarity (DAMS-SIM) loss function in FGCN to address the asymmetry between real and spoofed samples, enabling the network to capture local fine-grained texture differences between real faces and spoofing attacks. Finally, a simple feature fusion network combines the features of MCAN and FGCN to produce the final classification result. Extensive experiments on CASIA-FASD and Replay-Attack show that the proposed method achieves the best performance compared with other methods. In addition, we created our own dataset, PR-FASD, to evaluate the model's generalization ability, and the method achieves good results on it.
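The sketch below is not the authors' released code; it only illustrates, under stated assumptions, the kind of pipeline the abstract describes: RAFT optical flow between consecutive face frames (via torchvision's real raft_small model), a simple learned spatial attention that pools the flow field into a motion descriptor, and a standard additive margin softmax (AM-Softmax) loss, the family that DAMS-SIM builds on; the exact DAMS-SIM formulation is not reproduced here. The module and function names (MotionAttention, am_softmax_loss) are illustrative, not names from the paper.

```python
# Minimal sketch, assuming two 224x224 face crops per sample and a binary
# real/spoof label. raft_small and its preset transforms are real torchvision
# APIs; the attention module and loss are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models.optical_flow import raft_small, Raft_Small_Weights


class MotionAttention(nn.Module):
    """Pools a dense flow field into a motion descriptor with learned spatial attention."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        # 2 input channels: horizontal and vertical flow components.
        self.score = nn.Sequential(
            nn.Conv2d(2, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),
        )

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        # flow: (N, 2, H, W) -> attention-pooled motion descriptor (N, 2)
        attn = self.score(flow)                        # (N, 1, H, W) attention logits
        attn = torch.softmax(attn.flatten(2), dim=-1)  # normalize over all H*W locations
        return (flow.flatten(2) * attn).sum(dim=-1)    # weighted spatial sum


def am_softmax_loss(features, class_weight, labels, s=30.0, m=0.35):
    """Standard AM-Softmax: cosine logits with an additive margin on the true class."""
    f = F.normalize(features, dim=1)
    w = F.normalize(class_weight, dim=1)
    cos = f @ w.t()                                    # (N, num_classes) cosine similarities
    margin = F.one_hot(labels, cos.size(1)).float() * m
    return F.cross_entropy(s * (cos - margin), labels)


if __name__ == "__main__":
    # Two consecutive frames per sample; spatial size must be divisible by 8 for RAFT.
    frame_t = torch.rand(2, 3, 224, 224)
    frame_t1 = torch.rand(2, 3, 224, 224)

    weights = Raft_Small_Weights.DEFAULT               # downloads pretrained weights on first use
    raft = raft_small(weights=weights).eval()
    frame_t, frame_t1 = weights.transforms()(frame_t, frame_t1)

    with torch.no_grad():
        flow = raft(frame_t, frame_t1)[-1]             # most refined prediction, (N, 2, H, W)

    motion = MotionAttention()(flow)                   # (N, 2) motion descriptor
    class_weight = nn.Parameter(torch.randn(2, 2))     # 2 classes (real / spoof), 2-dim features
    labels = torch.tensor([1, 0])
    loss = am_softmax_loss(motion, class_weight, labels)
    print(flow.shape, motion.shape, loss.item())
```

In a full system along the lines of the abstract, the pooled motion descriptor would be concatenated with texture features from a fine-grained classification branch before the fusion classifier, and the plain AM-Softmax above would be replaced by the paper's asymmetric DAMS-SIM variant.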