Journal Article | Open Access | Published October 19, 2017

Distinct prediction errors in mesostriatal circuits of the human brain mediate learning about the values of both states and actions: evidence from high-resolution fMRI

Abstract


Prediction-error signals consistent with formal models of "reinforcement learning" (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models—namely, "actor/critic" models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning.
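The two prediction-error subtypes that the paradigm was designed to dissociate can be written as standard temporal-difference errors. As a minimal illustrative sketch (tabular updates with arbitrary parameter values; these are generic textbook forms, not the specific model fits from the study), the critic's state-value-prediction error depends only on state values, whereas the action-value-prediction error is tied to the value of the chosen action:

```python
import numpy as np

# Illustrative constants (assumed, not taken from the study).
GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate

def svpe(r, V, s, s_next, terminal=False):
    """State-value-prediction error (actor/critic): independent of the action taken."""
    v_next = 0.0 if terminal else V[s_next]
    return r + GAMMA * v_next - V[s]

def avpe(r, Q, s, a, s_next, terminal=False):
    """Action-value-prediction error (Q-learning): specific to the chosen action."""
    q_next = 0.0 if terminal else np.max(Q[s_next])
    return r + GAMMA * q_next - Q[s, a]

# Toy example: 2 states, 2 actions, values initialized to zero.
V = np.zeros(2)
Q = np.zeros((2, 2))

# One rewarded transition s=0 -> s=1 (reward 1) under action a=0:
d_state = svpe(1.0, V, s=0, s_next=1)        # = 1.0, since all values start at 0
d_action = avpe(1.0, Q, s=0, a=0, s_next=1)  # = 1.0

V[0] += ALPHA * d_state      # critic's state-value update
Q[0, 0] += ALPHA * d_action  # direct action-value update
```

Note the contrast the abstract relies on: after learning, `svpe` would report the same error regardless of which action produced the transition, while `avpe` differs across actions — the signature used to tell the two architectures apart in the fMRI data.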

Additional Information

© 2017 Colas et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Received: June 12, 2017; Accepted: October 9, 2017; Published: October 19, 2017.

Data Availability Statement: Data are available at https://neurovault.org/collections/ETRQWPUH/.

This study was funded by National Institutes of Health (https://www.nih.gov/) grants R01DA033077 (supported by OppNet, NIH's Basic Behavioral and Social Science Opportunity Network) and R01DA040011 to JPOD, as well as by the National Science Foundation (https://www.nsf.gov/) Graduate Research Fellowship Program for JTC. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors have declared that no competing interests exist.

Author Contributions: Conceptualization: Jaron T. Colas, Tobias Larsen, John P. O'Doherty. Formal analysis: Jaron T. Colas. Funding acquisition: John P. O'Doherty. Investigation: Jaron T. Colas, Tobias Larsen. Methodology: Jaron T. Colas. Resources: Wolfgang M. Pauli, J. Michael Tyszka. Supervision: John P. O'Doherty. Visualization: Jaron T. Colas. Writing - original draft: Jaron T. Colas, John P. O'Doherty. Writing - review & editing: Jaron T. Colas, Wolfgang M. Pauli, J. Michael Tyszka, John P. O'Doherty.

Attached Files

Published - journal.pcbi.1005810.pdf

Supplemental Material - 5515324.zip


Files (7.5 MB)

Additional details

August 19, 2023
October 23, 2023