Open Access
Compression-Aware Hybrid Framework for Deep Fake Detection in Low-Quality Video
Author(s) -
Lagsoun Abdel Motalib,
Oujaoura Mustapha,
Hedabou Mustapha
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3592358
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Deep fakes pose a growing threat to digital media integrity by generating highly realistic fake videos that are difficult to detect, especially under the high compression levels commonly used on social media platforms. These compression artifacts often degrade the performance of deep fake detectors, making reliable detection even more challenging. In this paper, we propose a handcrafted deep fake detection framework that integrates wavelet transforms and Conv3D-based spatiotemporal descriptors for feature extraction, followed by a lightweight ResNet-inspired classifier. Unlike end-to-end deep neural networks, our method emphasizes interpretability and computational efficiency while maintaining high detection accuracy under diverse real-world conditions. We evaluated four configurations based on input modality and attention mechanisms: RGB with attention, RGB without attention, grayscale with attention, and grayscale without attention. Experiments were conducted on the FaceForensics++ dataset (C23 and C40 compression levels) and Celeb-DF v2 (C0 and C40), across intra- and inter-compression settings as well as cross-dataset scenarios. Results show that RGB inputs without attention achieve the highest accuracy on FaceForensics++, while grayscale inputs without attention perform best in cross-dataset evaluations on Celeb-DF v2, attaining strong AUC scores. Despite its handcrafted nature, our approach matches or surpasses existing state-of-the-art (SOTA) methods. Grad-CAM visualizations further reveal both strengths and failure modes (e.g., occlusion and misalignment), offering valuable insights for refinement. These findings underscore the potential of our framework for efficient and effective deep fake detection in low-resource and real-time environments.
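The full method is not given in this abstract, but the wavelet-based feature-extraction idea it describes can be illustrated with a minimal sketch. The code below applies a single-level 2D Haar wavelet decomposition to each frame of a clip and summarizes the high-frequency subbands, which is where blending and compression artifacts typically concentrate. The wavelet family, the per-clip summary, and the function names here are illustrative assumptions, not the paper's actual pipeline (which uses Conv3D-based spatiotemporal descriptors and a ResNet-inspired classifier):

```python
import numpy as np

def haar_dwt2(frame):
    """Single-level 2D Haar wavelet transform of a grayscale frame.

    Returns (LL, LH, HL, HH) subbands; the high-frequency subbands
    (LH, HL, HH) tend to carry the blending and compression artifacts
    that deep fake detectors exploit. Assumes even H and W.
    """
    f = frame.astype(np.float64)
    # Horizontal pass: average / difference of adjacent column pairs.
    lo = (f[:, 0::2] + f[:, 1::2]) / 2.0
    hi = (f[:, 0::2] - f[:, 1::2]) / 2.0
    # Vertical pass on each intermediate result.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def clip_descriptor(clip):
    """Toy per-clip descriptor: mean absolute high-frequency energy per frame.

    `clip` has shape (T, H, W); the (T, 3) output is a hand-rolled
    stand-in for the learned spatiotemporal features in the abstract.
    """
    feats = []
    for frame in clip:
        _, LH, HL, HH = haar_dwt2(frame)
        feats.append([np.abs(LH).mean(), np.abs(HL).mean(), np.abs(HH).mean()])
    return np.array(feats)

# Example: an 8-frame, 64x64 grayscale clip of random noise.
clip = np.random.default_rng(0).uniform(0, 255, size=(8, 64, 64))
desc = clip_descriptor(clip)
print(desc.shape)  # (8, 3)
```

A flat (constant-intensity) frame yields zero high-frequency energy under this transform, so the descriptor responds only to spatial detail, which is the intuition behind using wavelet subbands to expose manipulation traces that survive compression.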
