Multi-Timescale Ensemble $Q$-Learning for Markov Decision Process Policy Optimization

Reinforcement learning (RL) is a classical tool for solving network control and policy optimization problems in unknown environments. The original $Q$-learning suffers from performance and complexity challenges in very large networks. Herein, a novel model-free ensemble reinforcement learning algorithm that adapts classical $Q$-learning is proposed to handle these challenges for networks that admit Markov decision process (MDP) models. Multiple $Q$-learning algorithms are run in parallel on multiple, distinct, synthetically created and structurally related Markovian environments; their outputs are fused through an adaptive weighting mechanism based on the Jensen-Shannon divergence (JSD) to obtain an approximately optimal policy with low complexity. Theoretical justification of the algorithm, including the convergence of key statistics and of the $Q$-functions, is provided. Numerical results across several network models show that the proposed algorithm can achieve up to 55% lower average policy error with up to 50% less runtime complexity than state-of-the-art $Q$-learning algorithms, and they validate the assumptions made in the theoretical analysis.
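The abstract does not specify the exact fusion rule, so the following is a minimal sketch of one plausible JSD-based adaptive weighting step, assuming each ensemble member maintains a tabular $Q$-function whose softmax policy is compared against the ensemble mean and down-weighted in proportion to its divergence. All names (`fuse_q_tables`, `js_divergence`), the softmax temperature, and the $\exp(-\mathrm{JSD})$ weighting are illustrative assumptions, not the paper's stated method.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def js_divergence(p, q, eps=1e-12):
    """Per-state Jensen-Shannon divergence between two policy distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b), axis=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fuse_q_tables(q_tables, temperature=1.0):
    """Fuse per-environment Q-tables with JSD-based adaptive weights (illustrative).

    q_tables: array of shape (K, S, A) -- K ensemble members, S states, A actions.
    Members whose induced softmax policies diverge more from the ensemble mean
    policy receive smaller weights.
    """
    q_tables = np.asarray(q_tables, dtype=float)
    policies = softmax(q_tables / temperature, axis=-1)   # (K, S, A)
    mean_policy = policies.mean(axis=0)                   # (S, A)
    # Average per-state JSD of each member against the ensemble mean policy.
    divergences = np.array([js_divergence(p, mean_policy).mean() for p in policies])
    weights = np.exp(-divergences)                         # assumed weighting rule
    weights /= weights.sum()
    fused_q = np.tensordot(weights, q_tables, axes=(0, 0))  # (S, A)
    return fused_q, weights

# Usage example: three noisy Q-tables for a toy 4-state, 2-action MDP.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 2))
ensemble = np.stack([base + 0.1 * rng.normal(size=base.shape) for _ in range(3)])
q_fused, w = fuse_q_tables(ensemble)
print("fusion weights:", np.round(w, 3))
print("greedy policy:", q_fused.argmax(axis=1))
```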
Source: IEEE Transactions on Signal Processing