Journal Article | Open Access | Published May 6, 2015

Intelligent Information Loss: The Coding of Facial Identity, Head Pose, and Non-Face Information in the Macaque Face Patch System


Faces are a behaviorally important class of visual stimuli for primates. Recent work in macaque monkeys has identified six discrete face areas where most neurons have higher firing rates to images of faces compared with other objects (Tsao et al., 2006). While neurons in these areas appear to have different tuning (Freiwald and Tsao, 2010; Issa and DiCarlo, 2012), exactly what types of information, and consequently which visual behaviors, the neural populations within each face area can support remains unknown. Here we use population decoding to better characterize three of these face patches (ML/MF, AL, and AM). We show that neural activity in all patches contains information that discriminates between the broad categories of face and nonface objects, between individual faces, and between nonface stimuli. Information is present in both high and low firing rate regimes. However, there are significant differences between the patches, with the most anterior patch showing relatively weaker representation of nonface stimuli. Additionally, we find that pose-invariant face identity information increases as one moves to more anterior patches, while information about the orientation of the head decreases. Finally, we show that all the information we can extract from the population is present in patterns of activity across neurons, and there is relatively little information in the total activity of the population. These findings give new insight into the representations constructed by the face patch system and how they are successively transformed.
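The population decoding approach summarized in the abstract can be illustrated with a toy sketch. This is not the authors' analysis code: the simulated firing-rate patterns, noise level, stimulus labels, and nearest-centroid readout below are all illustrative assumptions standing in for real neural recordings and the paper's actual classifiers. The sketch shows the core idea — train a classifier on a population's responses to labeled stimuli, then test whether held-out trials can be decoded above chance.

```python
# Toy population-decoding sketch (illustrative assumptions, not the paper's code):
# each stimulus evokes a distinct mean firing-rate pattern across a simulated
# population; a nearest-centroid classifier reads stimulus identity back out.
import random

random.seed(0)

N_NEURONS = 30
STIMULI = ["face_A", "face_B", "object"]

# Hypothetical tuning: each stimulus gets a random mean-rate pattern (Hz).
PATTERNS = {s: [random.uniform(0.0, 20.0) for _ in range(N_NEURONS)]
            for s in STIMULI}

def simulate_trial(stim):
    """One trial: the stimulus's rate pattern plus independent Gaussian noise."""
    return [random.gauss(mean, 2.0) for mean in PATTERNS[stim]]

def centroid(trials):
    """Mean response of each neuron across training trials (class template)."""
    return [sum(t[i] for t in trials) / len(trials) for i in range(N_NEURONS)]

def classify(trial, centroids):
    """Assign the trial to the stimulus with the nearest template (Euclidean)."""
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(trial, c))
    return min(centroids, key=lambda s: sqdist(centroids[s]))

# Separate training and test trials for each stimulus.
train = {s: [simulate_trial(s) for _ in range(20)] for s in STIMULI}
test = [(s, simulate_trial(s)) for s in STIMULI for _ in range(20)]

centroids = {s: centroid(ts) for s, ts in train.items()}
accuracy = sum(classify(t, centroids) == s for s, t in test) / len(test)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1/len(STIMULI):.2f})")
```

Decoding accuracy well above chance (1/3 here) indicates the population activity carries the corresponding information; in the paper this logic is applied separately to face-versus-object category, individual face identity, head pose, and nonface stimuli in each patch.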

Additional Information

© 2015 the authors. For the first six months after publication, SfN's license will be exclusive. Beginning six months after publication, the Work will be made freely available to the public on SfN's website to copy, distribute, or display under a Creative Commons Attribution 4.0 International (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/). Received July 25, 2014; revised March 23, 2015; accepted March 25, 2015. This work was supported by the Center for Brains, Minds and Machines, funded by National Science Foundation (NSF) STC award CCF-1231216. Additional support came from National Institutes of Health (NIH) Grant R01-EY019702 and a Klingenstein Fellowship to D.T.; NIH Grant R01-EY021594 and a Pew Scholarship in the Biomedical Sciences to W.A.F.; and Defense Advanced Research Projects Agency (DARPA) grants (Information Processing Techniques Office and Defense Sciences Office), NSF Grants NSF-0640097 and NSF-0827427, and grants from Adobe, Honda Research Institute USA, and King Abdullah University of Science and Technology to B. DeVore. We would also like to thank Tomaso Poggio for his continual support and Jim Mutch for his help creating the S1 and C2 HMAX features. Author contributions: W.A.F. and D.T. designed research; E.M.M., W.A.F., and D.T. performed research; E.M.M. and M.B. analyzed data; E.M.M. and D.T. wrote the paper.

Attached Files

Published - 7069.full.pdf (408.2 kB)

Supplemental Material - Meyers_JNeuosci_2015_supplementary_material.pdf (941.3 kB)
