
A Framework Integrating Federated Learning and Fog Computing Based on Client Sampling and Dynamic Thresholding Techniques
Author(s) -
Dang Van Thang,
Artem Volkov,
Ammar Muthanna,
Ibrahim A. Elgendy,
Reem Alkanhel,
Dushantha Nalin K. Jayakody,
Andrey Koucheryavy
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3571979
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
The exponential growth in the number of Internet of Things (IoT) devices and the vast quantity of data they generate pose a significant challenge to the efficacy of traditional centralized training models. Federated Learning (FL) is a machine learning framework that effectively addresses this challenge while also mitigating concerns about data privacy. Furthermore, fog computing (FC) represents a robust distributed computing methodology with the potential to bolster and propel the advancement of FL. An integrated distributed architecture combining FL and FC can overcome the limitations of traditional centralized architectures, offering a promising solution for the future. One objective of this architectural framework is to relieve the communication links in the core network by training a model on data distributed across many clients. Various techniques and frameworks have been developed and implemented, including approaches to model compression and to handling data and device heterogeneity, and these have demonstrated effectiveness in specific contexts. In this paper, we introduce a novel gradient-driven client-sampling framework that tightly couples Federated Learning with Fog Computing. By dynamically adjusting per-round thresholds based on local gradient change rates, our method selects only the most informative clients and leverages fog nodes for partial aggregation, thereby minimizing redundant transmissions, accelerating convergence under heterogeneous data, and offloading the central server. Extensive simulations on MNIST and CIFAR-10 demonstrate that our approach reduces cumulative communication by 39% and 31%, respectively, without sacrificing convergence speed or final accuracy.
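The abstract's pipeline (threshold-based client selection from gradient change rates, partial aggregation at fog nodes, final aggregation at the server) can be sketched as below. This is a minimal illustration, not the paper's actual algorithm: the specific threshold rule (here, the mean gradient-change rate) and the helper names `select_clients`, `fog_partial_aggregate`, and `server_aggregate` are assumptions for demonstration, and models are plain weight vectors.

```python
def select_clients(grad_rates, scale=1.0):
    """Dynamic threshold (assumed rule): scale times the mean
    gradient-change rate across clients; only clients at or above
    the threshold are sampled this round."""
    threshold = scale * sum(grad_rates.values()) / len(grad_rates)
    return {cid for cid, rate in grad_rates.items() if rate >= threshold}

def fog_partial_aggregate(updates, fog_map):
    """Each fog node averages the updates of its selected clients,
    so only one partially aggregated model per fog node travels upstream."""
    fog_models = {}
    for fog, clients in fog_map.items():
        picked = [updates[c] for c in clients if c in updates]
        if picked:  # fog nodes with no sampled clients send nothing
            fog_models[fog] = [sum(vals) / len(picked) for vals in zip(*picked)]
    return fog_models

def server_aggregate(fog_models):
    """Central server averages the fog-level partial aggregates."""
    models = list(fog_models.values())
    return [sum(vals) / len(models) for vals in zip(*models)]

# One simulated round with four clients under two fog nodes.
grad_rates = {0: 0.9, 1: 0.1, 2: 0.5, 3: 0.5}
selected = select_clients(grad_rates)          # threshold = 0.5 -> {0, 2, 3}
updates = {0: [1.0, 2.0], 2: [3.0, 4.0], 3: [5.0, 6.0]}  # client 1 stays silent
fog_map = {"fog_a": [0, 1], "fog_b": [2, 3]}
global_model = server_aggregate(fog_partial_aggregate(updates, fog_map))
```

Client 1's low gradient-change rate keeps it below the threshold, so its update is never transmitted, which is the source of the communication savings the abstract reports.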