Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in an Adversarial Environment

This paper studies distributed online learning under Byzantine attacks. The performance of an online learning algorithm is often characterized by its (adversarial) regret, which evaluates the quality of one-step-ahead decision-making when the environment supplies adversarial losses; a sublinear regret bound is preferred. We prove, however, that even with a class of state-of-the-art robust aggregation rules, distributed online gradient descent can only achieve a linear adversarial regret bound in an adversarial environment with Byzantine participants, and this bound is tight. Such linear regret is an inevitable consequence of Byzantine attacks, although its constant can be controlled to a reasonable level. Interestingly, when the environment is not fully adversarial, so that the losses of the honest participants are i.i.d. (independent and identically distributed), we show that sublinear stochastic regret, in contrast to the adversarial regret above, is attainable. We develop Byzantine-robust distributed online momentum algorithms that achieve such sublinear stochastic regret bounds for a class of robust aggregation rules. Numerical experiments corroborate our theoretical analysis.
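For concreteness, the adversarial regret referred to above is usually defined as follows; the notation (T rounds, decision x_t, loss f_t, feasible set X) is the standard one and is assumed here rather than quoted from the paper:

```latex
% Standard adversarial regret over T rounds (notation assumed, not
% quoted from the paper): x_t is the decision at round t, f_t the loss
% chosen by the environment, and \mathcal{X} the feasible set.
\[
  \mathrm{Reg}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x).
\]
% Sublinear means Reg_T = o(T). The paper's lower bound says that under
% Byzantine attacks in a fully adversarial environment, Reg_T grows as
% Omega(T) no matter which rule from the robust-aggregation class is used.
```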
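As a minimal sketch of the kind of algorithm the abstract describes, the following Python simulates a server that robustly aggregates participants' momentum-corrected gradients with a coordinate-wise trimmed mean, one representative member of the class of robust aggregation rules. All names (trimmed_mean, robust_online_momentum, loss_grads, lr, beta) are illustrative assumptions rather than the paper's notation, and in the paper the momentum is maintained locally by each participant rather than simulated at the server.

```python
import numpy as np

def trimmed_mean(msgs: np.ndarray, b: int) -> np.ndarray:
    """Coordinate-wise trimmed mean: in each coordinate, drop the b
    largest and b smallest values across participants, then average.
    One example of a robust aggregation rule; requires n > 2b."""
    srt = np.sort(msgs, axis=0)              # sort each coordinate across participants
    return srt[b:msgs.shape[0] - b].mean(axis=0)

def robust_online_momentum(loss_grads, n, b, dim, T, lr=0.1, beta=0.9):
    """Hypothetical server loop: each round, collect one (possibly
    Byzantine) message per participant, apply momentum, aggregate
    robustly, and take a descent step.

    loss_grads(t, x) -> (n, dim) array of round-t messages; honest rows
    are stochastic gradients, Byzantine rows may be arbitrary vectors.
    """
    x = np.zeros(dim)
    m = np.zeros((n, dim))                   # per-participant momentum buffers
    for t in range(T):
        g = loss_grads(t, x)
        m = beta * m + (1.0 - beta) * g      # momentum averages out gradient noise
        x = x - lr * trimmed_mean(m, b)      # robust aggregation limits Byzantine influence
    return x
```

The design intuition matches the abstract's stochastic result: with i.i.d. honest losses, local momentum averages out gradient noise across rounds, shrinking the spread of the honest messages so that the robust aggregate tracks the true gradient closely, which is what makes sublinear stochastic regret possible.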
Source: IEEE Transactions on Signal Processing