
A strategy to the reduction of communication overhead and overfitting in Federated Learning
Author(s) -
Ana P. Barros,
Denis Rosário,
Eduardo Cerqueira,
Nelson L. S. da Fonseca
Publication year - 2021
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5753/wgrs.2021.17181
Subject(s) - federated learning , overfitting , communication overhead , convergence , distributed computing , machine learning , artificial neural network , artificial intelligence , computation , algorithm , computer science
Federated learning (FL) is a framework for training machine learning models on decentralized data, which is often unbalanced and non-IID. Adaptive methods can be used to accelerate convergence, reducing the number of rounds of local computation and communication with a centralized server. This paper proposes an adaptive controller that adjusts the number of local epochs, employing a Poisson distribution to avoid overfitting of the aggregated model while promoting fast convergence. Our results indicate that simply increasing the number of local updates should be avoided, and that a complementary mechanism is needed to sustain model performance. We evaluate the impact of an increasing number of local epochs on FedAVG and FedADAM.
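To make the idea of Poisson-controlled local epochs concrete, the sketch below simulates a FedAvg-style training loop in which the number of local epochs per round is drawn from a Poisson distribution. This is an illustrative reconstruction, not the authors' controller: the linear model, client setup, lambda value, and size-weighted averaging are assumptions made for the example.

```python
# Minimal sketch: FedAvg-style rounds with Poisson-sampled local epochs.
# All specifics (model, data, EPOCH_MEAN) are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 5
ROUNDS = 10
LR = 0.05
EPOCH_MEAN = 3.0  # lambda of the Poisson distribution (assumed value)

# Synthetic, deliberately unbalanced client datasets.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(NUM_CLIENTS):
    n = int(rng.integers(20, 200))           # unbalanced dataset sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

def local_train(w, X, y, epochs):
    """Run `epochs` passes of full-batch gradient descent on one client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= LR * grad
    return w

global_w = np.zeros(2)
for r in range(ROUNDS):
    # Number of local epochs sampled from a Poisson distribution (at least 1).
    epochs = max(1, int(rng.poisson(EPOCH_MEAN)))
    local_ws, sizes = [], []
    for X, y in clients:
        local_ws.append(local_train(global_w, X, y, epochs))
        sizes.append(len(y))
    # FedAvg aggregation: average client models weighted by dataset size.
    global_w = np.average(local_ws, axis=0, weights=np.array(sizes, dtype=float))
    loss = np.mean([np.mean((X @ global_w - y) ** 2) for X, y in clients])
    print(f"round {r:2d}  epochs={epochs}  loss={loss:.4f}")
```

Randomizing the epoch count keeps the average amount of local computation low while preventing every client from repeatedly over-optimizing on its own data in a given round; the actual controller proposed in the paper adapts this behavior per round rather than using the fixed lambda assumed here.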