Reinforcement learning with associative or discriminative generalization across states and actions: fMRI at 3 T and 7 T
The model-free algorithms of "reinforcement learning" (RL) have gained clout across disciplines, but so too have model-based alternatives. The present study emphasizes other dimensions of this model space by considering associative or discriminative generalization across states and actions. This "generalized reinforcement learning" (GRL) model, a frugal extension of RL, parsimoniously retains the single reward-prediction error (RPE), but the scope of learning goes beyond the experienced state and action. Instead, the generalized RPE is efficiently relayed for bidirectional counterfactual updating of value estimates for other representations. Aided by structural information, but as an implicit rather than explicit cognitive map, GRL provided the most precise account of human behavior and individual differences in a reversal-learning task with hierarchical structure that encouraged inverse generalization across both states and actions. Reflecting inference that could be true, false (i.e., overgeneralization), or absent (i.e., undergeneralization), state generalization distinguished those who learned well more strongly than action generalization did. With high-resolution high-field fMRI targeting the dopaminergic midbrain, the GRL model's RPE signals (alongside value and decision signals) were localized within not only the striatum but also the substantia nigra and the ventral tegmental area, including specific effects of generalization that also extend to the hippocampus. Factoring in generalization as a multidimensional process in value-based learning, these findings shed light on complexities that, while challenging classic RL, can still be resolved within the bounds of its core computations.
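The GRL update described above can be illustrated with a minimal sketch: a single RPE from the experienced state-action pair also drives sign-inverted ("inverse") counterfactual updates of the non-experienced state and action. The two-state/two-action structure, the function name, and the parameter names (`alpha`, `g_state`, `g_action`) are illustrative assumptions for exposition, not the authors' implementation.

```python
def grl_update(Q, state, action, reward,
               alpha=0.3, g_state=0.5, g_action=0.5):
    """One GRL-style update on a 2x2 value table Q[state][action].

    Illustrative sketch only: a single reward-prediction error (RPE)
    updates the experienced value directly and is relayed, with
    opposite sign, to the counterfactual state and action. Parameters
    alpha (learning rate), g_state, and g_action (generalization
    weights) are assumed names, not the published model's notation.
    """
    rpe = reward - Q[state][action]      # single reward-prediction error
    other_s = 1 - state                  # counterfactual state
    other_a = 1 - action                 # counterfactual action

    # Direct update of the experienced state-action value.
    Q[state][action] += alpha * rpe
    # Inverse (sign-flipped) generalization across actions and states.
    Q[state][other_a] -= alpha * g_action * rpe
    Q[other_s][action] -= alpha * g_state * rpe
    return rpe

# Example: reward after choosing action 0 in state 0 raises Q[0][0]
# while lowering the value of the counterfactual alternatives.
Q = [[0.0, 0.0], [0.0, 0.0]]
grl_update(Q, state=0, action=0, reward=1.0)
```

Setting `g_state` and `g_action` to zero recovers a standard model-free update confined to the experienced state and action; nonzero values capture the over- or undergeneralization described in the abstract.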
Additional Information. © 2022 The Authors. Human Brain Mapping published by Wiley Periodicals LLC. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Received: 19 January 2022. Revised: 20 May 2022. Accepted: 10 June 2022.
Scott T. Grafton and John P. O'Doherty are co-senior authors. This study originated at a workshop on "Learning in Networks" supported by the National Institute for Mathematical and Biological Synthesis.
Funding. STG was supported by the Institute for Collaborative Biotechnologies under Cooperative Agreement W911NF-19-2-0026 and grant W911NF-16-1-0474 from the Army Research Office. JPOD was supported by National Institute on Drug Abuse grant R01 DA040011 and the National Institute of Mental Health's Caltech Conte Center for Social Decision Making (P50 MH094258). JMT was supported by National Institute of Mental Health grant P50 MH094258. AWT was supported by National Institute of Biomedical Imaging and Bioengineering grant P41 EB015922. JIG was supported by National Institute of Mental Health grant R01 MH115557. DSB was supported by Army Research Office grants W911NF-16-1-0474 and W911NF-18-1-0244. CAH was supported by the Klingenstein-Simons Neuroscience Fellowship.
Data Availability Statement. Data are available at https://neurovault.org/collections/RLVWMYCQ/.
Supplemental Material - hbm25988-sup-0001-supinfo.pdf