Open Access
A strategy to the reduction of communication overhead and overfitting in Federated Learning
Author(s) -
Alex Barros,
Denis Rosário,
Eduardo Cerqueira,
Nelson L. S. da Fonseca
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5753/wgrs.2021.17181
Subject(s) - federated learning , machine learning , artificial intelligence , distributed computing , communication overhead , overfitting , convergence , artificial neural network , computer science
Federated learning (FL) is a framework for training machine learning models on decentralized data, which is often unbalanced and non-IID. Adaptive methods can be used to accelerate convergence, reducing the number of rounds of local computation and of communication with a centralized server. This paper proposes an adaptive controller that adjusts the number of local epochs using a Poisson distribution, avoiding overfitting of the aggregated model and promoting fast convergence. Our results indicate that simply increasing the number of local updates should be avoided, and that a complementary mechanism is needed to maintain model performance. We evaluate the impact of an increasing number of local epochs on FedAVG and FedADAM.
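The abstract does not state the controller's exact rule, so the following is only a minimal sketch of the general idea: a FedAvg-style loop in which each round's local-epoch budget is drawn from a Poisson distribution rather than fixed. The toy linear task, the mean parameter `base_epochs`, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-IID setup: three clients hold differently shifted slices of the
# same linear problem y = 3x + noise.
TRUE_W = 3.0
clients = []
for shift in (-2.0, 0.0, 2.0):
    x = rng.normal(loc=shift, size=50)
    y = TRUE_W * x + rng.normal(scale=0.1, size=50)
    clients.append((x, y))

def local_train(w, x, y, epochs, lr=0.05):
    """Run `epochs` full-batch gradient-descent passes on one client's data."""
    for _ in range(epochs):
        grad = 2.0 * np.mean((w * x - y) * x)  # d/dw of mean squared error
        w = w - lr * grad
    return w

w_global = 0.0
base_epochs = 5  # assumed mean of the Poisson epoch controller
for _ in range(20):
    # Controller sketch: sample this round's local-epoch count from a
    # Poisson distribution (clipped to at least 1) instead of fixing it.
    epochs = max(1, int(rng.poisson(base_epochs)))
    local_ws = [local_train(w_global, x, y, epochs) for x, y in clients]
    w_global = float(np.mean(local_ws))  # FedAvg: average the client models

print(w_global)  # should approach the true weight of 3.0
```

Randomizing the epoch count this way keeps the average local workload at `base_epochs` while preventing every round from running long enough for clients to overfit their local slices, which is the failure mode the paper's controller targets.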
