Mutual benefits: Combining reinforcement learning with sequential sampling models

Publication date: Available online 14 November 2019
Source: Neuropsychologia
Author(s): Steven Miletić, Russell J. Boag, Birte U. Forstmann

Abstract
Reinforcement learning models of error-driven learning and sequential-sampling models of decision making have provided significant insight into the neural basis of a variety of cognitive processes. Until recently, model-based cognitive neuroscience research using the two frameworks evolved separately and independently. Recent efforts have illustrated the complementary nature of both modelling traditions and shown how they can be integrated into a unified theoretical framework that explains trial-by-trial dependencies in choice behavior as well as response time distributions. Here, we review the theoretical background for integrating the two classes of models, and review recent empirical efforts towards this goal. We furthermore argue that integrating both modelling traditions provides mutual benefits for both fields, and highlight the promise of this approach for cognitive modelling and model-based cognitive neuroscience.
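To make the integration concrete, the following is a minimal sketch of one common way the two model classes are combined: a Rescorla-Wagner error-driven update maintains value estimates across trials, and on each trial the value difference sets the drift rate of a diffusion process, which jointly produces a choice and a response time. All parameter names (`alpha`, `scale`, `threshold`, etc.) are illustrative assumptions, not the specific model of the article.

```python
import random

def simulate_rl_ddm(n_trials=200, alpha=0.1, scale=2.0, threshold=1.0,
                    dt=0.001, noise=1.0, p_reward=(0.8, 0.2), seed=42):
    """Sketch of an RL-DDM hybrid (illustrative parameterization).

    Rescorla-Wagner learning supplies trial-by-trial value estimates;
    their difference scales the drift rate of a two-boundary diffusion
    process, yielding both a choice and a response time per trial.
    """
    rng = random.Random(seed)
    q = [0.5, 0.5]                 # value estimates for the two options
    choices, rts = [], []
    for _ in range(n_trials):
        # Drift rate is proportional to the current value difference.
        v = scale * (q[0] - q[1])
        # Euler simulation of the diffusion process between +/- threshold.
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += v * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        choice = 0 if x >= threshold else 1
        # Probabilistic reward, then error-driven (Rescorla-Wagner) update.
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])
        choices.append(choice)
        rts.append(t)
    return choices, rts, q
```

Because the drift rate tracks the learned value difference, the simulated agent's choices and response-time distributions both change over trials as learning progresses, which is exactly the kind of joint trial-by-trial dependency the abstract describes.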