Open Access
FedChallenger: A Robust Challenge-Response and Aggregation Strategy to Defend Poisoning Attacks in Federated Learning
Author(s) -
M.A. Moyeen,
Kuljeet Kaur,
Anjali Agarwal,
S. Ricardo Manzano,
Marzia Zaman,
Nishith Goel
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3592207
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Growing data privacy concerns in smart applications have spurred the development of Federated Learning (FL), a novel approach that enables heterogeneous clients to jointly train a global model without exchanging private data. However, FL faces significant challenges in aggregating model updates from different client devices, as malicious participants can poison data and model updates to corrupt the global model. To enhance the global model’s accuracy, many state-of-the-art defence strategies in federated learning rely on aggregation-based security mechanisms. However, the global model can be more accurate still if attackers are excluded from training altogether. Therefore, this research proposes a dual-layer defence mechanism called FedChallenger to detect and prevent malicious client participation in the FL training process. The first layer incorporates a zero-trust, challenge-response-based trusted exchange, while the second layer applies a variant of the Trimmed-Mean aggregation strategy that combines pairwise cosine similarity with Median Absolute Deviation (MAD) to mitigate malicious model parameters during aggregation. Extensive evaluation on the MNIST, FMNIST, EMNIST, and CIFAR-10 datasets demonstrates that the proposed FedChallenger outperforms state-of-the-art approaches, including Stake, Shap, Cluster, Trimmed-Mean, Krum, FedAvg, and DUEL, in both attack and non-attack scenarios. Under adversarial conditions with model and data poisoning attacks, FedChallenger achieves a 3-10% improvement in global model accuracy over the closest contender, along with 1.1-2.2 times faster convergence. Additionally, it attains a 2-3% higher F1-Score than the best competing technique while maintaining robustness against varying attack intensities across different dataset complexities.
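The abstract's second defence layer (a Trimmed-Mean variant driven by pairwise cosine similarity and MAD) can be illustrated with a minimal sketch. The full paper is behind a paywall here, so the function below is an assumption-laden reconstruction, not the authors' implementation: each client's flattened update is scored by its median cosine similarity to all other updates, scores that deviate from the median score by more than a MAD-based threshold are trimmed, and the surviving updates are averaged.

```python
import numpy as np

def mad_cosine_trimmed_mean(updates, k=2.5):
    """Hypothetical sketch of a MAD + cosine-similarity trimmed aggregation.

    updates: list of 1-D numpy arrays (flattened client model updates).
    k: trimming multiplier; clients whose similarity score deviates more
       than k scaled MADs from the median score are discarded.
    Returns (aggregated_update, keep_mask).
    """
    U = np.stack(updates)                          # (n_clients, n_params)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    V = U / np.clip(norms, 1e-12, None)            # unit-normalised updates
    S = V @ V.T                                    # pairwise cosine similarity
    np.fill_diagonal(S, np.nan)                    # ignore self-similarity
    scores = np.nanmedian(S, axis=1)               # typical agreement per client

    med = np.median(scores)
    mad = max(np.median(np.abs(scores - med)), 1e-12)
    # 1.4826 rescales the MAD to a standard-deviation-like unit
    keep = np.abs(scores - med) <= k * 1.4826 * mad
    return U[keep].mean(axis=0), keep
```

Under this sketch, poisoned updates that point away from the honest consensus receive low (often negative) median similarity scores, fall outside the MAD band, and are excluded before averaging, which is the exclusion-before-aggregation behaviour the abstract argues for.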

