Open Access
Taming Subnet-Drift in D2D-Enabled Fog Learning: A Hierarchical Gradient Tracking Approach
Author(s): Evan Chen, Shiqiang Wang, Christopher G. Brinton
Publication year: 2024
Federated learning (FL) encounters scalability challenges when implemented over fog networks. Semi-decentralized FL (SD-FL) proposes a solution that divides model cooperation into two stages: at the lower stage, device-to-device (D2D) communication is employed for local model aggregations within subnetworks (subnets), while the upper stage handles device-server (DS) communications for global model aggregations. However, existing SD-FL schemes are based on gradient diversity assumptions that become performance bottlenecks as data distributions become more heterogeneous. In this work, we develop semi-decentralized gradient tracking (SD-GT), the first SD-FL methodology that removes the need for such assumptions by incorporating tracking terms into device updates for each communication layer. Analytical characterization of SD-GT reveals convergence upper bounds for both non-convex and strongly-convex problems, for a suitable choice of step size. We employ the resulting bounds in the development of a co-optimization algorithm for optimizing subnet sampling rates and D2D rounds according to a performance-efficiency trade-off. Our subsequent numerical evaluations demonstrate that SD-GT obtains substantial improvements in trained model quality and communication cost relative to baselines in SD-FL and gradient tracking on several datasets.
Language(s): English
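
The two-stage structure described in the abstract (D2D consensus rounds within each subnet at the lower layer, a device-server aggregation at the upper layer) and the idea of tracking terms in device updates can be illustrated with a toy sketch. The code below is not the authors' SD-GT algorithm; it is a minimal, hypothetical demonstration that runs standard decentralized gradient tracking inside each subnet and then a plain global average at the server. The quadratic objectives, subnet layout, mixing matrix, step size `alpha`, and round counts `tau_d2d` and `global_rounds` are all illustrative assumptions.

```python
# Illustrative sketch of a two-stage (semi-decentralized) setup with
# gradient tracking inside subnets. Not the paper's SD-GT method.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                              # model dimension
subnets = [list(range(0, 4)), list(range(4, 8))]   # two subnets of 4 devices each
n = sum(len(s) for s in subnets)

# Each device i holds a local quadratic loss f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.normal(size=(d, d)) for _ in range(n)]
b = [rng.normal(size=d) for _ in range(n)]

def grad(i, x):
    """Gradient of device i's local quadratic loss."""
    return A[i].T @ (A[i] @ x - b[i])

def mixing_matrix(k):
    """Doubly-stochastic mixing matrix for a fully connected subnet of size k."""
    return np.full((k, k), 1.0 / k)

alpha, tau_d2d, global_rounds = 0.01, 3, 50
x = [np.zeros(d) for _ in range(n)]        # device models
y = [grad(i, x[i]) for i in range(n)]      # gradient trackers, initialized to local gradients

for _ in range(global_rounds):
    # Lower stage: D2D gradient-tracking rounds within each subnet.
    for subnet in subnets:
        W = mixing_matrix(len(subnet))
        for _ in range(tau_d2d):
            x_old = [x[i].copy() for i in subnet]
            g_old = [grad(i, x[i]) for i in subnet]
            # Mix models over D2D links, corrected by the tracking term y_i.
            x_new = [sum(W[a, c] * x_old[c] for c in range(len(subnet))) - alpha * y[i]
                     for a, i in enumerate(subnet)]
            # Mix trackers and add the local gradient increment.
            y_new = [sum(W[a, c] * y[subnet[c]] for c in range(len(subnet)))
                     + grad(i, x_new[a]) - g_old[a]
                     for a, i in enumerate(subnet)]
            for a, i in enumerate(subnet):
                x[i], y[i] = x_new[a], y_new[a]
    # Upper stage: device-server round, here simply a global average of all models.
    x_bar = sum(x) / n
    x = [x_bar.copy() for _ in range(n)]

print("final average gradient norm:",
      np.linalg.norm(sum(grad(i, x[i]) for i in range(n)) / n))
```

In this sketch the server step is a plain model average rather than a tracked DS-layer update, and subnet sampling is not modeled; the point is only to show how per-device tracking terms replace reliance on gradient-diversity assumptions during within-subnet aggregation.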
