Online Optimization Under Randomly Corrupted Attacks

Existing algorithms in online optimization usually rely on trustworthy information, e.g., reliable knowledge of gradients, which makes them vulnerable to attacks. To account for this security issue in online optimization, this paper investigates the effect of randomly corrupted attacks, which can corrupt gradient information arbitrarily. To counter such attacks, an onLine muLtiple normaLized Gradient Descent (L3GD) algorithm is proposed. Under mild conditions, the algorithm is proven to achieve satisfactory expected dynamic regret, i.e., $\mathcal{O}(\min\{P_{T}^{*}+T^{\frac{3}{4}},\,S_{T}^{*}+\sqrt{T}+\sum_{t=1}^{T} \|\nabla f_{t}(x_{t}^{*})\|^{2}\})$ and $\mathcal{O}(F_{T}+T^{\frac{3}{4}})$, without convexity assumptions, where $P_{T}^{*}$, $S_{T}^{*}$, and $F_{T}$ denote the path-length, the squared path-length, and the functional variation, respectively. These results are comparable to those of state-of-the-art algorithms in the absence of randomly corrupted attacks. To the best of our knowledge, this paper is the first to consider randomly corrupted attacks in online optimization. Simulations on both synthetic examples and real-world datasets, namely MNIST and CIFAR-10, corroborate the resilience of L3GD.
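To illustrate the general idea of normalized gradient descent under corrupted gradient feedback, the following is a minimal sketch, not the paper's exact L3GD algorithm: it assumes the attack replaces the gradient with an arbitrary vector with some probability, that several gradient queries are normalized and averaged per round, and that the feasible set is a bounded $\ell_2$ ball. All function names, parameters (e.g., `m`, `p_corrupt`, `radius`), and the drifting quadratic loss are hypothetical choices for illustration only.

```python
# Hypothetical sketch of online normalized gradient descent under a randomly
# corrupted gradient oracle (not the authors' exact L3GD method).
import numpy as np

def corrupted_gradient(grad_fn, x, p_corrupt, rng, scale=10.0):
    """Return the true gradient with probability 1 - p_corrupt; otherwise return
    an arbitrary (here: random) vector standing in for the corruption attack."""
    g = grad_fn(x)
    if rng.random() < p_corrupt:
        return scale * rng.normal(size=g.shape)  # arbitrarily corrupted feedback
    return g

def normalized_gd_step(x, grad_fn, eta, m, p_corrupt, rng, radius=10.0):
    """One round: average m normalized gradient queries, descend, and project
    onto an l2 ball of the given radius (assumed bounded feasible set)."""
    avg = np.zeros_like(x)
    for _ in range(m):
        g = corrupted_gradient(grad_fn, x, p_corrupt, rng)
        norm = np.linalg.norm(g)
        if norm > 0:
            # normalization bounds the influence of any single corrupted query
            avg += g / norm
    avg /= m
    x_new = x - eta * avg
    nx = np.linalg.norm(x_new)
    if nx > radius:
        x_new *= radius / nx  # projection back onto the feasible ball
    return x_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.array([5.0, -3.0])
    for t in range(1, 201):
        target = np.array([np.cos(0.01 * t), np.sin(0.01 * t)])  # slowly drifting optimum
        grad_fn = lambda z: 2.0 * (z - target)                    # f_t(x) = ||x - target||^2
        x = normalized_gd_step(x, grad_fn, eta=0.5 / np.sqrt(t), m=5, p_corrupt=0.3, rng=rng)
    print("final iterate:", x)
```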