Federated Learning with Random Communication and Dynamic Aggregation
Author(s) - Ruo-lin Huang, Ting Lu, Yiyang Luo, Guohua Liu, Shan Chang
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5121/csit.2021.111816
Subject(s) - computer science, hyperparameter, federated learning, random variable, task (project management), random access, artificial intelligence, communications system, machine learning, algorithm, data mining, computer network, mathematics, management, statistics, economics
Federated Learning (FL) is a setting that allows clients to train a joint global model collaboratively while keeping their data local. Because FL offers data confidentiality and distributed computation, interest in this area has increased. In this paper, we design a new FL algorithm named FedRAD, built on two proposed methods: random communication and dynamic aggregation. The random communication method lets an FL system combine a fixed communication interval with constrained variable intervals within a single task. The dynamic aggregation method reforms the aggregation weights so that they update automatically. Both methods aim to improve model performance. We evaluated the two methods separately and compared FedRAD with three algorithms across three hyperparameter settings. Results on CIFAR-10 demonstrate that each method performs well, and that FedRAD achieves higher classification accuracy than state-of-the-art FL algorithms.
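The two ideas summarized in the abstract can be sketched in code. This is a minimal illustration only: the function names, the interval bounds, and the inverse-loss weighting rule are assumptions chosen for clarity, not the authors' actual FedRAD implementation.

```python
import random

def next_interval(base=5, jitter=2, rng=random):
    # Random communication (illustrative): mix a fixed interval with a
    # constrained variable offset, so clients synchronize with the server
    # every base +/- jitter local steps rather than at a fixed period.
    return base + rng.randint(-jitter, jitter)

def dynamic_weights(client_losses):
    # Dynamic aggregation (illustrative): re-derive aggregation weights
    # each round instead of using fixed data-size weights. Here a lower
    # client loss yields a higher weight; this rule is an assumption.
    inv = [1.0 / (loss + 1e-8) for loss in client_losses]
    total = sum(inv)
    return [w / total for w in inv]

def aggregate(client_models, weights):
    # FedAvg-style weighted average over flat parameter vectors.
    dim = len(client_models[0])
    return [sum(w * m[i] for w, m in zip(weights, client_models))
            for i in range(dim)]
```

For example, `dynamic_weights([0.5, 1.0, 2.0])` assigns the largest weight to the lowest-loss client, and `aggregate([[1.0, 2.0], [3.0, 4.0]], [0.5, 0.5])` returns the elementwise mean `[2.0, 3.0]`.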
Accelerating Research
Robert Robinson Avenue,
Oxford Science Park, Oxford
OX4 4GP, United Kingdom