Tensor and Matrix Low-Rank Value-Function Approximation in Reinforcement Learning

Value function (VF) approximation is a central problem in reinforcement learning (RL). Classical non-parametric VF estimation suffers from the curse of dimensionality, so parsimonious parametric models have been adopted to approximate VFs in high-dimensional spaces, with most efforts focused on linear and neural-network-based approaches. In contrast, this paper puts forth a parsimonious non-parametric approach in which stochastic low-rank algorithms estimate the VF matrix in an online and model-free fashion. Furthermore, since VFs tend to be multi-dimensional, we propose replacing the classical VF matrix representation with a tensor (multi-way array) representation and then using the PARAFAC decomposition to design an online, model-free tensor low-rank algorithm. Different versions of the algorithms are proposed, their complexity is analyzed, and their performance is assessed numerically on standardized RL environments.
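To make the matrix case concrete, the sketch below illustrates the general idea of online, model-free low-rank VF estimation: the Q-matrix is modeled as a product of state and action factors, Q ≈ UVᵀ, and both factors are updated with a stochastic TD-style rule. This is a minimal illustration under assumed placeholder dynamics and hyperparameters, not the authors' exact algorithm; the tensor variant in the paper factors each state-action dimension analogously via PARAFAC.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's algorithm): low-rank
# Q-matrix estimation with stochastic updates of the two factors.
n_states, n_actions, rank = 50, 4, 3
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((n_states, rank))   # state factors
V = 0.1 * rng.standard_normal((n_actions, rank))  # action factors
alpha, gamma, eps = 0.05, 0.95, 0.1

def q_row(s):
    # Approximate Q-values for all actions in state s from the factors.
    return U[s] @ V.T

def step(s, a):
    # Placeholder environment: random next state and small random reward.
    return int(rng.integers(n_states)), float(rng.standard_normal()) * 0.1

for episode in range(200):
    s = int(rng.integers(n_states))
    for t in range(100):
        # Epsilon-greedy action selection on the low-rank estimate.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q_row(s)))
        s_next, r = step(s, a)
        # TD error computed from the factored Q estimate.
        td = r + gamma * np.max(q_row(s_next)) - U[s] @ V[a]
        # Stochastic, alternating-style updates of the two factors.
        U[s] += alpha * td * V[a]
        V[a] += alpha * td * U[s]
        s = s_next
```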
Source: IEEE Transactions on Signal Processing