
Reading the mind's eye: Decoding category information during mental imagery

Reddy, Leila and Tsuchiya, Naotsugu and Serre, Thomas (2010) Reading the mind's eye: Decoding category information during mental imagery. NeuroImage, 50 (2). pp. 818-825. ISSN 1053-8119.

Full text is not posted in this repository. Consult Related URLs below.


Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects, or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of "diagnostic voxels" (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom-up input, cortical back projections can selectively re-activate specific patterns of neural activity.
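The cross-decoding analysis described above (train a classifier on patterns from the viewed condition, test it on the imagery condition) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the voxel count, noise levels, and the choice of a linear SVM are assumptions for demonstration only.

```python
# Illustrative sketch of cross-condition multi-voxel pattern decoding.
# All data here are simulated; category "prototypes" stand in for the
# shared pattern structure between perception and imagery.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels = 200  # assumed ROI size, for illustration
categories = ["food", "tools", "faces", "buildings"]

# One characteristic activation pattern per category, shared across
# conditions (imagery trials get more noise, i.e., a weaker signal).
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def simulate_trials(n_per_category, noise):
    X, y = [], []
    for c in categories:
        for _ in range(n_per_category):
            X.append(prototypes[c] + rng.normal(scale=noise, size=n_voxels))
            y.append(c)
    return np.array(X), np.array(y)

X_viewed, y_viewed = simulate_trials(20, noise=1.0)
X_imagery, y_imagery = simulate_trials(20, noise=2.0)

# Cross-decoding: fit on the viewed condition, evaluate on imagery.
clf = LinearSVC().fit(X_viewed, y_viewed)
cross_acc = clf.score(X_imagery, y_imagery)
print(f"cross-decoding accuracy: {cross_acc:.2f} (chance = 0.25)")
```

Above-chance cross-decoding accuracy in this setup reflects that the two conditions share a common pattern structure, which is the logic behind comparing within-condition and cross-condition classification in the study.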

Item Type: Article
Related URLs:
URL | URL Type | Description
Additional Information: © 2010 Elsevier. Received 10 November 2009; accepted 28 November 2009. Available online 11 December 2009. We gratefully acknowledge funding from the Fondation pour la Recherche Médicale and the Fyssen Foundation ASUPS AO1 Université P. Sabatier Toulouse (2009) to L.R., DARPA (IPTO and DSO) and NSF to T.S., and the Japan Society for the Promotion of Science to N.T. We also thank John Serences for the use of his meridian mapping code, Niko Kriegeskorte for useful discussions, and Rufin VanRullen, Tomaso Poggio and Michèle Fabre-Thorpe for comments on the manuscript. Finally, we are grateful to Christof Koch for discussions pertaining to the design of the study and for providing financial support for this project (Mathers foundation and NIH).
Group: Koch Laboratory (KLAB)
Funding Agency | Grant Number
Fondation pour la Recherche Médicale | UNSPECIFIED
Fyssen Foundation ASUPS AO1 Université P. Sabatier Toulouse | UNSPECIFIED
Defense Advanced Research Projects Agency (DARPA) | UNSPECIFIED
Japan Society for the Promotion of Science | UNSPECIFIED
Mathers Foundation | UNSPECIFIED
Subject Keywords: Imagery; Perception; fMRI; Multi-voxel pattern analysis; Occipito-temporal cortex; Object recognition
Issue or Number: 2
Record Number: CaltechAUTHORS:20100316-101018911
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 17748
Deposited By: Tony Diaz
Deposited On: 22 Mar 2010 02:28
Last Modified: 03 Oct 2019 01:32
