Asymptotic Analysis of Federated Learning Under Event-Triggered Communication

Federated learning (FL) is a collaborative machine learning (ML) paradigm that relies on persistent communication between a central server and multiple edge devices. However, frequently transmitting large ML models can consume considerable communication resources, which is especially costly for wireless network nodes. In this paper, we develop a communication-efficient protocol that reduces the number of communication instances in each round while preserving the convergence rate and asymptotic distribution properties. First, we propose a novel communication-efficient FL algorithm that employs an event-triggered communication mechanism: each edge device updates its model by running stochastic gradient descent on locally sampled data, and the central server aggregates the local models by model averaging. Communication is reduced because each edge device, and likewise the central server, transmits its updated model only when the difference between the current model and the last communicated model exceeds a threshold. The thresholds of the devices and the server need not be coordinated, and neither the thresholds nor the step sizes are constrained to specific forms. Under mild conditions on the loss functions, step sizes, and thresholds, we establish three types of asymptotic results for the proposed algorithm: convergence in expectation, almost-sure convergence, and the asymptotic distribution of the estimation error. In addition, we show that by fine-tuning th...
Source: IEEE Transactions on Signal Processing
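
To make the event-triggered mechanism concrete, the following is a minimal sketch of event-triggered federated averaging on a toy least-squares problem. It is not the paper's exact algorithm: the quadratic loss, the specific step-size and threshold schedules, and all variable names are illustrative assumptions; the sketch only reproduces the core rule that a node transmits its model when it has drifted sufficiently from the last communicated copy.

```python
# Minimal sketch of event-triggered federated averaging (illustrative only).
# Loss, schedules, and constants are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
DIM, DEVICES, ROUNDS, LOCAL_STEPS = 5, 4, 200, 1

# Synthetic local data: device i holds (A_i, b_i) with loss ||A_i w - b_i||^2.
data = [(rng.normal(size=(20, DIM)), rng.normal(size=20)) for _ in range(DEVICES)]

def local_grad(w, A, b, batch=5):
    """Stochastic gradient of the local least-squares loss on a minibatch."""
    idx = rng.choice(len(b), size=batch, replace=False)
    Ab, bb = A[idx], b[idx]
    return 2.0 * Ab.T @ (Ab @ w - bb) / batch

w_server = np.zeros(DIM)                       # current global model
w_sent_server = w_server.copy()                # last model broadcast by server
w_local = [w_server.copy() for _ in range(DEVICES)]
w_sent_local = [w_server.copy() for _ in range(DEVICES)]  # last uploaded copies
uplinks = downlinks = 0

for t in range(1, ROUNDS + 1):
    step = 1.0 / (t + 10)          # diminishing step size (assumed form)
    thresh = 0.5 / t               # decaying trigger threshold (assumed form)

    # Each device runs local SGD from its current view of the global model.
    for i, (A, b) in enumerate(data):
        for _ in range(LOCAL_STEPS):
            w_local[i] -= step * local_grad(w_local[i], A, b)
        # Event-triggered uplink: transmit only if the local model has moved
        # far enough from the last communicated copy.
        if np.linalg.norm(w_local[i] - w_sent_local[i]) > thresh:
            w_sent_local[i] = w_local[i].copy()
            uplinks += 1

    # Server averages the most recently *communicated* local models.
    w_server = np.mean(w_sent_local, axis=0)

    # Event-triggered downlink: broadcast only if the aggregate changed enough.
    if np.linalg.norm(w_server - w_sent_server) > thresh:
        w_sent_server = w_server.copy()
        downlinks += 1
        for i in range(DEVICES):
            w_local[i] = w_sent_server.copy()  # devices resync to the broadcast

print(f"uplink messages: {uplinks}/{ROUNDS * DEVICES}, "
      f"downlink broadcasts: {downlinks}/{ROUNDS}")
```

With decaying thresholds of this kind, both counters typically stay well below the totals of a fully synchronized scheme, which is the communication saving the abstract refers to; the paper's analysis concerns how such thresholds and step sizes can be chosen without degrading the convergence rate or the asymptotic distribution.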