Communication-Efficient Federated Learning: A Variance-Reduced Stochastic Approach With Adaptive Sparsification

Federated learning (FL) is an emerging distributed machine learning paradigm that aims to train models without gathering data from its sources to a central processing unit. A typical FL framework consists of a central server and a number of computing devices (aka workers). Training a model under the FL framework usually consumes a massive amount of communication resources, because the server and the devices must communicate with each other frequently. To alleviate this communication burden, in this paper we propose to adaptively sparsify the gradient vector that each device transmits to the server, thus significantly reducing the amount of information that needs to be sent. The proposed algorithm builds on sparsified SAGA, a well-known variance-reduced stochastic algorithm. In the proposed algorithm, after the gradient vector is sparsified using a conventional sparsification operator, an adaptive sparsification step is further applied to identify the most informative elements of the sparsified gradient vector. Convergence analysis indicates that the proposed algorithm enjoys a linear convergence rate. Numerical results show that the adaptive sparsification mechanism substantially improves communication efficiency: to achieve the same classification accuracy, the proposed method reduces the communication overhead by at least 60% compared with existing state-of-the-art sparsification methods.
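To make the two-stage mechanism concrete, the sketch below is a minimal illustration, not the paper's implementation. It combines a standard SAGA variance-reduced gradient estimate on each worker with a conventional top-k sparsification operator, followed by a second adaptive selection stage. The abstract does not specify the adaptive criterion, so adaptive_select here uses an energy-threshold rule purely as a stand-in; the class name SAGAWorker and the parameters grads_fn, k, and rho are likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(g, k):
    # Conventional top-k sparsification operator: keep the k
    # largest-magnitude entries of g, zero out the rest.
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def adaptive_select(g, rho=0.9):
    # Hypothetical adaptive second stage (an assumption, not the
    # paper's exact rule): among the surviving entries, keep the
    # smallest set carrying a fraction rho of the vector's energy.
    idx = np.argsort(-np.abs(g))
    energy = np.cumsum(g[idx] ** 2)
    m = np.searchsorted(energy, rho * energy[-1]) + 1
    out = np.zeros_like(g)
    out[idx[:m]] = g[idx[:m]]
    return out

class SAGAWorker:
    # Minimal SAGA-style worker: one stored gradient slot per local sample.
    def __init__(self, grads_fn, n, d):
        self.grads_fn = grads_fn       # grads_fn(i, x) -> gradient of sample i at x
        self.table = np.zeros((n, d))  # SAGA memory of past per-sample gradients
        self.avg = np.zeros(d)         # running average of the table
        self.n = n

    def message(self, x, k):
        i = rng.integers(self.n)
        g = self.grads_fn(i, x)
        # SAGA variance-reduced gradient estimate.
        v = g - self.table[i] + self.avg
        # Refresh the stored gradient and its running average.
        self.avg += (g - self.table[i]) / self.n
        self.table[i] = g
        # Two-stage compression of the upload to the server.
        return adaptive_select(top_k(v, k))
```

Because the second stage only ever removes entries from the top-k output, the uploaded vector is at most k-sparse, so the adaptive step can only reduce, never increase, the per-round communication cost.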
Source: IEEE Transactions on Signal Processing