
Robust and Privacy-Preserving Federated Learning Against Malicious Clients: A Bulyan-Based Adaptive Differential Privacy Framework
Author(s) -
Stuti Pandey,
Onkar Singh,
Ashish Pandey,
Chandrasen Pandey
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3596627
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Federated learning (FL) enables collaborative model training across decentralized clients without sharing raw data. However, malicious or Byzantine clients may compromise global model integrity through adversarial updates, while the addition of differential privacy (DP) noise can significantly reduce accuracy if not carefully managed. To address these dual challenges, we propose a robust and privacy-preserving FL framework featuring three core innovations: (i) a Bulyan-based aggregator that discards outlier gradients to neutralize adversarial behavior, (ii) a GroupNorm-based convolutional neural network (CNN) design for DP compatibility, and (iii) an adaptive noise-scheduling mechanism that gradually reduces noise variance across training rounds. Specifically, we configure the Bulyan aggregator with k = 7, retain K = 5 gradients per round, and use a trim ratio of γ = 0.2. The DP noise follows an exponential decay schedule σ_r = 0.30·exp(−0.01r). Experimental results on CIFAR-10 under a 10% malicious-client scenario demonstrate that our method consistently mitigates poisoned updates and achieves a final global accuracy of 23.58% while maintaining a rigorous privacy budget of ε = 1.5, δ = 10⁻⁵. Despite an unavoidable performance gap relative to non-private or non-adversarial baselines, legitimate clients reach local accuracies near 80%. This outcome underscores the interplay between robust aggregation and flexible DP tuning in preserving model integrity and privacy. Our framework thus paves the way for future research on advanced aggregator heuristics, per-layer noise calibration, and refined defenses against sophisticated adversaries in large-scale FL.
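The two quantitative ingredients of the abstract — the Bulyan-style robust aggregation (select K = 5 gradients via nearest-neighbor scoring, then take a coordinate-wise trimmed mean with γ = 0.2) and the exponential noise schedule σ_r = 0.30·exp(−0.01r) — can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation; the function names and the exact selection rule (a Multi-Krum-like score summing distances to the k nearest neighbors) are assumptions based on the standard Bulyan formulation.

```python
import numpy as np

def noise_scale(round_idx, sigma0=0.30, decay=0.01):
    """Adaptive DP noise schedule from the abstract: sigma_r = 0.30 * exp(-0.01 * r)."""
    return sigma0 * np.exp(-decay * round_idx)

def bulyan_aggregate(gradients, k=7, keep=5, trim_ratio=0.2):
    """Simplified Bulyan-style aggregation (illustrative sketch).

    Stage 1 (Multi-Krum-like selection): repeatedly pick the gradient whose
    summed distance to its k nearest neighbors is smallest, until `keep`
    gradients have been selected.
    Stage 2 (coordinate-wise trimmed mean): per coordinate, drop the extreme
    values according to `trim_ratio` and average the remainder.
    """
    grads = [np.asarray(g, dtype=float) for g in gradients]
    selected, remaining = [], list(range(len(grads)))
    for _ in range(keep):
        scores = []
        for i in remaining:
            dists = sorted(np.linalg.norm(grads[i] - grads[j])
                           for j in remaining if j != i)
            scores.append((sum(dists[:k]), i))  # distance to k nearest neighbors
        _, best = min(scores)
        selected.append(grads[best])
        remaining.remove(best)
    stacked = np.sort(np.stack(selected), axis=0)  # shape (keep, dim)
    n_trim = int(trim_ratio * len(selected))       # gamma = 0.2, keep = 5 -> trim 1 each side
    if n_trim > 0:
        stacked = stacked[n_trim:-n_trim]
    return stacked.mean(axis=0)
```

With ten clients and one obviously poisoned update, the poisoned gradient accumulates large distances to its neighbors, is never selected in stage 1, and any residual outlier coordinates are clipped by the trimmed mean in stage 2.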