Open Access
Depriving the Survival Space of Adversaries Against Poisoned Gradients in Federated Learning
Author(s) -
Jianrong Lu,
Shengshan Hu,
Wei Wan,
Minghui Li,
Leo Yu Zhang,
Lulu Xue,
Haohan Wang,
Hai Jin
Publication year - 2024
Publication title -
IEEE Transactions on Information Forensics and Security
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.613
H-Index - 133
eISSN - 1556-6021
pISSN - 1556-6013
DOI - 10.1109/TIFS.2024.3360869
Subject(s) - signal processing and analysis , computing and processing , communication, networking and broadcast technologies
Federated learning (FL) allows clients at the edge to learn a shared global model without disclosing their private data. However, FL is susceptible to poisoning attacks, wherein an adversary injects tainted local models that ultimately corrupt the global model. Although various defensive mechanisms have been developed to combat poisoning attacks, they all fall short of securing practical FL scenarios with heterogeneous and unbalanced data distributions. Moreover, the cutting-edge defenses currently at our disposal demand access to a proprietary dataset that closely mirrors the distribution of clients' data, which runs counter to the fundamental principle of privacy protection in FL. Devising an effective defense that applies to practical FL therefore remains challenging. In this work, we strive to narrow the divide between FL defense and its practical use. We first present a general framework for understanding the effect of poisoning attacks in FL when the training data is not independent and identically distributed (non-IID). We then propose HeteroFL, a novel FL scheme that incorporates four complementary defensive strategies. These tactics are applied in succession to refine the aggregated model toward the global optimum. Finally, we devise an adaptive attack tailored to HeteroFL, aimed at offering a more thorough evaluation of its robustness. Our extensive experiments over heterogeneous datasets and models show that HeteroFL surpasses all state-of-the-art defenses in thwarting various poisoning attacks, i.e., HeteroFL achieves global model accuracies comparable to the baseline, whereas other defenses suffer a significant accuracy reduction ranging from 34% to 79%.
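To make the threat model concrete, the toy sketch below (not the HeteroFL scheme itself, and not code from the paper) simulates plain federated averaging with one malicious client that submits a scaled, sign-flipped update, and contrasts it with a classic robust aggregator, the coordinate-wise trimmed mean. All client counts, update values, and function names are illustrative assumptions.

```python
import numpy as np

def fedavg(updates):
    # Plain federated averaging: coordinate-wise mean of all client updates.
    return np.mean(updates, axis=0)

def trimmed_mean(updates, k=1):
    # Coordinate-wise trimmed mean: sort each coordinate across clients and
    # discard the k smallest and k largest values before averaging.
    # This is a generic robust aggregator, shown only to illustrate the idea
    # of limiting a poisoned update's influence.
    s = np.sort(updates, axis=0)
    return np.mean(s[k:len(updates) - k], axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(1.0, 0.1, size=(9, 4))   # 9 honest clients near the true update (~1.0)
poisoned = np.full((1, 4), -50.0)            # 1 adversary sends a scaled, flipped update
updates = np.vstack([honest, poisoned])

print(fedavg(updates))        # dragged far below 1.0 by the single poisoned update
print(trimmed_mean(updates))  # stays close to 1.0: the outlier is trimmed away
```

A single adversary suffices to move the plain average arbitrarily far, which is why the abstract stresses that defenses must bound adversarial influence; the harder part, addressed by the paper, is doing so when honest non-IID clients also look like outliers.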
