CaltechAUTHORS
  A Caltech Library Service

Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments

Cross, Logan and Cockburn, Jeff and Yue, Yisong and O'Doherty, John P. (2020) Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments. Neuron. ISSN 0896-6273. (In Press) https://resolver.caltech.edu/CaltechAUTHORS:20201217-133745087

PDF (Document S1. Table S1 and Figures S1–S8) - Supplemental Material, 1882 KB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20201217-133745087

Abstract

Humans possess an exceptional aptitude to efficiently make decisions from high-dimensional sensory observations. However, it is unknown how the brain compactly represents the current state of the environment to guide this process. The deep Q-network (DQN) achieves this by capturing highly nonlinear mappings from multivariate inputs to the values of potential actions. We deployed DQN as a model of brain activity and behavior in participants playing three Atari video games during fMRI. Hidden layers of DQN exhibited a striking resemblance to voxel activity in a distributed sensorimotor network, extending throughout the dorsal visual pathway into posterior parietal cortex. Neural state-space representations emerged from nonlinear transformations of the pixel space bridging perception to action and reward. These transformations reshape axes to reflect relevant high-level features and strip away information about task-irrelevant sensory features. Our findings shed light on the neural encoding of task representations for decision-making in real-world situations.
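The abstract's core computational idea — a deep Q-network that nonlinearly transforms a high-dimensional pixel observation into a compact representation and reads out one value per candidate action — can be illustrated with a minimal sketch. The two-layer architecture, layer sizes, and random (untrained) weights below are illustrative assumptions, not the network used in the paper.

```python
import random

def relu(x):
    # Elementwise nonlinearity applied to the hidden layer.
    return x if x > 0.0 else 0.0

def make_layer(n_in, n_out, rng):
    # Random weights stand in for trained DQN parameters (assumption).
    return [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, x, activation=None):
    # One fully connected layer: weighted sum per output unit.
    out = []
    for weights in layer:
        s = sum(w * xi for w, xi in zip(weights, x))
        out.append(activation(s) if activation else s)
    return out

def q_values(obs, hidden, output):
    # Nonlinear transformation of the "pixel" input into a compact hidden
    # representation, then a linear readout of one Q-value per action.
    h = forward(hidden, obs, relu)
    return forward(output, h)

rng = random.Random(0)
n_pixels, n_hidden, n_actions = 64, 16, 4
hidden = make_layer(n_pixels, n_hidden, rng)
output = make_layer(n_hidden, n_actions, rng)

obs = [rng.random() for _ in range(n_pixels)]        # stand-in pixel frame
qs = q_values(obs, hidden, output)
action = max(range(n_actions), key=lambda a: qs[a])  # greedy action selection
```

The hidden layer here plays the role the paper assigns to DQN's intermediate representations: it collapses the raw input axes into features that directly determine action values, which is the quantity compared against voxel activity.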


Item Type: Article
Related URLs:
https://doi.org/10.1016/j.neuron.2020.11.021 (DOI; Article)
ORCID:
Yue, Yisong: 0000-0001-9127-1989
O'Doherty, John P.: 0000-0003-0016-3531
Additional Information: © 2020 Elsevier. Received 10 July 2020, Revised 15 October 2020, Accepted 17 November 2020, Available online 15 December 2020. This work is supported by NIDA (grant R01DA040011 to J.O.D. and L.C.) and the NIMH Caltech Conte Center for the Neurobiology of Social Decision-Making (grant P50 MH094258 to J.O.D.). We would like to thank Kiyohito Iigaya and other members of the O'Doherty lab for helpful feedback and discussions. Author Contributions: L.C., J.C., and J.P.O. designed the project. L.C. and J.C. developed the experimental protocol and collected data. L.C. performed the analyses and wrote the draft of the manuscript. L.C., J.C., Y.Y., and J.P.O. discussed analyses and edited the manuscript. J.P.O. acquired funding. The authors declare no competing interests.
Funders:
NIH: R01DA040011
NIH: P50 MH094258
Subject Keywords: fMRI; decision-making; deep reinforcement learning; naturalistic task; computational neuroscience
Record Number: CaltechAUTHORS:20201217-133745087
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20201217-133745087
Official Citation: Logan Cross, Jeff Cockburn, Yisong Yue, John P. O'Doherty, Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments, Neuron, 2020, ISSN 0896-6273, https://doi.org/10.1016/j.neuron.2020.11.021.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 107165
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 17 Dec 2020 22:25
Last Modified: 17 Dec 2020 22:25
