Transferability Properties of Graph Neural Networks

Graph neural networks (GNNs) are composed of layers consisting of graph convolutions and pointwise nonlinearities. Due to their invariance and stability properties, GNNs are provably successful at learning representations from data supported on moderate-scale graphs; however, they are difficult to train on large-scale graphs. In this article, we study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs. We use graph limits called graphons to define limit objects for graph filters and GNNs, namely graphon filters and graphon neural networks (${\mathbf{W}}$NNs), which we interpret as generative models for graph filters and GNNs. We then show that graphon filters and ${\mathbf{W}}$NNs can be approximated by graph filters and GNNs sampled from them on weighted and stochastic graphs. Because the error of these approximations can be upper bounded, a triangle-inequality argument further bounds the error incurred when a graph filter or a GNN is transferred across graphs. Our results show (i) that the transference error decreases with the graph size and (ii) that graph filters exhibit a transferability-discriminability tradeoff, which in GNNs is alleviated by the scattering behavior of the nonlinearity. These findings are demonstrated empirically on a recommendation problem and a decentralized control task.
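To make the triangle-inequality step explicit (a sketch in notation of our choosing, not an equation quoted from the paper): write $\Phi(x;\mathbf{S})$ for a filter or GNN with fixed coefficients instantiated on a graph with shift operator $\mathbf{S}$, and let $\mathbf{S}_n$ and $\mathbf{S}_m$ be graphs of sizes $n$ and $m$ sampled from the same graphon $\mathbf{W}$. Comparing graph outputs through the step functions they induce on $[0,1]$, the transference error splits into two graphon-approximation errors,

$$\|\Phi(x;\mathbf{S}_n) - \Phi(x;\mathbf{S}_m)\| \leq \|\Phi(x;\mathbf{S}_n) - \Phi(x;\mathbf{W})\| + \|\Phi(x;\mathbf{W}) - \Phi(x;\mathbf{S}_m)\|,$$

each of which vanishes as the corresponding graph grows, which is the mechanism behind finding (i).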
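As a concrete, purely illustrative sketch of these objects, the NumPy snippet below builds a graph convolution as a polynomial in a normalized shift operator, composes it with a pointwise ReLU to form a one-layer GNN, samples weighted graphs of several sizes from a single graphon, and estimates the transference error against a large-graph proxy for the ${\mathbf{W}}$NN limit. The graphon $W(s,t)=e^{-|s-t|}$, the filter taps, and every function name here are assumptions of ours, not artifacts of the paper:

```python
import numpy as np

def sample_weighted_graph(W, n):
    """n-node weighted graph sampled from a graphon W: [0,1]^2 -> [0,1].
    Nodes sit at regular latent points u_i = (i + 1/2)/n and the weight of
    edge (i, j) is W(u_i, u_j); drawing Bernoulli(W(u_i, u_j)) edges instead
    would yield the stochastic-graph variant."""
    u = (np.arange(n) + 0.5) / n
    A = W(u[:, None], u[None, :])
    np.fill_diagonal(A, 0.0)  # no self-loops (an O(1/n) perturbation)
    return u, A

def graph_filter(A, x, h):
    """Graph convolution y = sum_k h[k] S^k x with shift operator S = A/n.
    The 1/n normalization makes S approximate the graphon's integral
    operator, so the same taps h define a filter at every graph size."""
    S = A / A.shape[0]
    y, z = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

def gnn_layer(A, x, h):
    """One GNN layer: graph convolution followed by a pointwise ReLU."""
    return np.maximum(graph_filter(A, x, h), 0.0)

def induced_step(y, grid):
    """Step function on [0,1] induced by the graph signal y, so outputs on
    graphs of different sizes can be compared in L2([0,1])."""
    n = len(y)
    return y[np.minimum((grid * n).astype(int), n - 1)]

W = lambda s, t: np.exp(-np.abs(s - t))  # illustrative graphon
h = np.array([0.5, 0.3, 0.2])            # filter taps shared across sizes
grid = np.linspace(0.0, 1.0, 2048, endpoint=False)

# A large graph stands in for the graphon (WNN) limit.
u_ref, A_ref = sample_weighted_graph(W, 2048)
Y_ref = induced_step(gnn_layer(A_ref, np.cos(2 * np.pi * u_ref), h), grid)

for n in (64, 256, 1024):
    u, A = sample_weighted_graph(W, n)
    Y = induced_step(gnn_layer(A, np.cos(2 * np.pi * u), h), grid)
    err = np.sqrt(np.mean((Y - Y_ref) ** 2))  # L2([0,1]) transference error
    print(f"n = {n:4d}   transference error = {err:.4f}")
```

In this toy setting the printed error should shrink as $n$ grows, mirroring finding (i): all graph sizes share the same filter taps, and larger samples discretize the graphon more finely.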
Source: IEEE Transactions on Signal Processing