DRACO: Decentralized Asynchronous Federated Learning over Row-Stochastic Wireless Networks
Author(s) - Eunjeong Jeong, Marios Kountouris
Publication year - 2025
Publication title - IEEE Open Journal of the Communications Society
Language(s) - English
Resource type - Magazines
eISSN - 2644-125X
DOI - 10.1109/ojcoms.2025.3574098
Subject(s) - Communication, Networking and Broadcast Technologies
Emerging technologies and use cases, such as smart Internet of Things (IoT), Internet of Agents, and Edge AI, have generated significant interest in training neural networks over fully decentralized, serverless networks. A major obstacle in this context is ensuring stable convergence without imposing stringent assumptions, such as identical data distributions across devices or synchronized updates. In this paper, we introduce DRACO, a novel framework for decentralized asynchronous Stochastic Gradient Descent (SGD) over row-stochastic gossip wireless networks. Our approach leverages continuous communication, allowing edge devices to perform local training and exchange model updates along a continuous timeline, thereby eliminating the need for synchronized timing. Additionally, our algorithm decouples communication and computation schedules, enabling complete autonomy for all users while effectively addressing straggler issues. Through a thorough convergence analysis, we show that DRACO achieves high performance in decentralized optimization while maintaining low variance across users even without predefined scheduling policies. Numerical experiments further validate the effectiveness of our approach, demonstrating that controlling the maximum number of received messages per client significantly reduces redundant communication costs while maintaining robust learning performance.
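To make the mechanism described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of asynchronous decentralized SGD with row-stochastic mixing and a cap on the number of received messages per client. All names, the number of nodes, the toy quadratic objectives, and the random broadcast pattern are assumptions introduced purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_NODES = 8        # hypothetical number of edge devices
DIM = 10             # hypothetical model dimension
LR = 0.05            # hypothetical local learning rate
MAX_RECEIVED = 3     # cap on messages mixed per update, as discussed in the abstract

# Toy local objectives: node i minimizes ||x - target_i||^2 (non-identical data).
targets = rng.normal(size=(NUM_NODES, DIM))
models = rng.normal(size=(NUM_NODES, DIM))
inboxes = [[] for _ in range(NUM_NODES)]  # models received asynchronously from neighbors

def local_gradient(i, x):
    """Gradient of the toy quadratic objective at node i."""
    return 2.0 * (x - targets[i])

def step(i):
    """One asynchronous update at node i: local SGD step, then row-stochastic mixing."""
    # Local computation, independent of any communication schedule.
    models[i] -= LR * local_gradient(i, models[i])

    # Mix with at most MAX_RECEIVED of the models received so far.
    received = inboxes[i][:MAX_RECEIVED]
    inboxes[i] = []
    if received:
        # Row-stochastic weights: node i's own mixing row sums to one,
        # regardless of how many messages actually arrived.
        weights = np.full(len(received) + 1, 1.0 / (len(received) + 1))
        stacked = np.vstack([models[i]] + received)
        models[i] = weights @ stacked

    # Broadcast the updated model to a random subset of nodes,
    # standing in for unreliable wireless links.
    for j in rng.choice(NUM_NODES, size=2, replace=False):
        if j != i:
            inboxes[j].append(models[i].copy())

# Event-driven timeline: nodes wake up in an arbitrary, unsynchronized order.
for _ in range(2000):
    step(rng.integers(NUM_NODES))

print("consensus spread:", np.linalg.norm(models - models.mean(axis=0)))
```

In this sketch, only the receiving node normalizes its mixing weights, so the mixing matrix is row-stochastic rather than doubly stochastic; capping `MAX_RECEIVED` mirrors the abstract's observation that bounding the number of messages mixed per client limits redundant communication without stopping the models from contracting toward consensus.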
