Reinforcement-learning in fronto-striatal circuits
We review the current state of knowledge on the computational and neural mechanisms of reinforcement-learning, with a particular focus on fronto-striatal circuits. We divide the literature in this area into five broad research themes: the target of the learning, that is, whether the learner acquires the value of stimuli or the value of actions; the nature and complexity of the algorithm used to drive the learning and inference process; how learned values are converted into choices and associated actions; and the nature of the state representations, and of other cognitive machinery, that supports the implementation of various reinforcement-learning operations. An emerging fifth theme focuses on how the brain allocates or arbitrates control over different reinforcement-learning sub-systems or "experts". We outline what is known about the role of the prefrontal cortex and striatum in implementing each of these functions. We then conclude by arguing that it will be necessary to build bridges from algorithmic-level descriptions of computational reinforcement-learning to implementational-level models in order to better understand how reinforcement-learning emerges from multiple distributed neural networks in the brain.
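As a minimal sketch of the algorithmic level the abstract refers to, the following Python example illustrates two of the themes above: a prediction-error-driven value update (Rescorla-Wagner / Q-learning style) and a softmax rule for converting learned values into choices. The task structure, learning rate, and inverse temperature are all hypothetical values chosen for illustration, not parameters from any study reviewed here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 2
alpha = 0.1      # learning rate (illustrative value)
beta = 3.0       # softmax inverse temperature (illustrative value)
q = np.zeros(n_actions)          # learned action values
p_reward = np.array([0.2, 0.8])  # hypothetical reward probabilities per action


def softmax(values, beta):
    """Convert learned values into choice probabilities."""
    e = np.exp(beta * (values - values.max()))  # subtract max for stability
    return e / e.sum()


for trial in range(1000):
    probs = softmax(q, beta)
    action = rng.choice(n_actions, p=probs)
    reward = float(rng.random() < p_reward[action])
    # prediction error: difference between obtained and expected reward
    q[action] += alpha * (reward - q[action])

# After learning, the value estimate for the richer action should be higher,
# so the softmax allocates most choices to it.
```

This is only one family of algorithm (model-free, incremental); the review also covers more complex inference-based schemes and the arbitration between such sub-systems.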
© The Author(s), under exclusive licence to American College of Neuropsychopharmacology 2021. Received 04 March 2021; Revised 06 July 2021; Accepted 09 July 2021; Published 05 August 2021.

Funding: JOD is supported by grants from the National Institute of Mental Health (R01MH11425, R01MH121089, R21MH120805, and the NIMH Caltech Conte Center on the neurobiology of social decision-making, P50MH094258) and the National Institute on Drug Abuse (R01DA040011). This work was supported in part by the intramural research program of NIMH (BBA: ZIA MH002928).

Author contributions: BBA and JOD jointly planned the scope of the review, reviewed the literature, prepared figures, wrote the paper, and revised the paper following reviewer comments. The authors declare no competing interests.