FedUR: Federated Learning Optimization Through Adaptive Centralized Learning Optimizers

Introducing adaptiveness to federated learning has recently opened a new avenue for optimizing its convergence performance. However, existing works extend adaptive learning strategies originally designed for centralized machine learning to federated learning in a naïve fashion, which does not necessarily improve convergence performance or reduce communication overhead as expected. In this paper, we thoroughly investigate these centralized adaptive learning strategies and propose an adaptive Federated learning algorithm targeting the model parameter Update Rule, called FedUR. We derive convergence upper bounds under FedUR with respect to both local iterations and global aggregations. By comparing them with the convergence upper bounds of the original federated learning, we theoretically analyze how these strategies should be tuned so that federated learning effectively improves convergence performance and reduces overall communication overhead. Extensive experiments on several real datasets and machine learning models show that FedUR effectively increases the final convergence accuracy while requiring even lower communication overhead.
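Since the abstract does not spell out the update rule itself, the following minimal sketch only illustrates the general idea of carrying a centralized adaptive optimizer over to the federated model-parameter update rule: the server treats the averaged client update as a pseudo-gradient and applies an Adam-style adaptive step to it. All function names, hyperparameters, and the synthetic least-squares task are illustrative assumptions, not the paper's actual FedUR algorithm.

```python
# Hedged sketch: server-side adaptive update rule on top of federated averaging.
# NOT the authors' FedUR; an assumed Adam-style rule for illustration only.
import numpy as np

def client_update(w, data, lr=0.01, local_steps=5):
    """Run a few local SGD steps on a least-squares objective; return new weights."""
    X, y = data
    w = w.copy()
    for _ in range(local_steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def server_adaptive_step(w, delta, state, lr=0.1, b1=0.9, b2=0.99, eps=1e-8):
    """Adam-style server rule: treat the average client delta as a pseudo-gradient
    (bias correction omitted to keep the sketch short)."""
    state["m"] = b1 * state["m"] + (1 - b1) * delta
    state["v"] = b2 * state["v"] + (1 - b2) * delta ** 2
    return w + lr * state["m"] / (np.sqrt(state["v"]) + eps), state

# Synthetic federated setup: 4 clients sharing one underlying linear model.
rng = np.random.default_rng(0)
d = 3
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, d))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(d)
state = {"m": np.zeros(d), "v": np.zeros(d)}
for _ in range(100):  # global aggregation rounds
    # Each client starts from the global model and runs local iterations.
    local_ws = [client_update(w, data) for data in clients]
    # Average client movement drives the adaptive global update rule.
    delta = np.mean([lw - w for lw in local_ws], axis=0)
    w, state = server_adaptive_step(w, delta, state)

print("recovered weights:", np.round(w, 2))  # should approach true_w
```

In this sketch the choice of server learning rate and moment decay factors plays the role of the tuning the paper analyzes: set poorly, the adaptive rule can converge no faster than plain averaging, which matches the abstract's point that naïve extensions of centralized optimizers need not help.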
Source: IEEE Transactions on Signal Processing