CaltechAUTHORS
  A Caltech Library Service

Neural Networks with Recurrent Generative Feedback

Huang, Yujia and Gornet, James and Dai, Sihui and Yu, Zhiding and Nguyen, Tan and Tsao, Doris Y. and Anandkumar, Anima (2020) Neural Networks with Recurrent Generative Feedback. In: Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020). Advances in Neural Information Processing Systems . https://resolver.caltech.edu/CaltechAUTHORS:20201106-120201944

PDF (Published Version) - 4MB - See Usage Policy.
PDF (Supplemental Material) - 428kB - See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20201106-120201944

Abstract

Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment. Inspired by this hypothesis, we enforce self-consistency in neural networks by incorporating generative recurrent feedback. We instantiate this design on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces generative feedback with latent variables to existing CNN architectures, where consistent predictions are made through alternating MAP inference under a Bayesian framework. In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
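The alternating-inference idea in the abstract can be illustrated with a minimal sketch: a feedforward pass infers latent variables from the input, a generative feedback pass regenerates the input from those latents, and the two are alternated until the prediction stabilizes. The linear encoder/decoder, the `cnn_f_sketch` function, and the fixed iteration count below are all illustrative assumptions, not the paper's actual CNN-F implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the feedforward CNN and its generative
# feedback counterpart. Names and the update rule are hypothetical.
W = rng.normal(size=(8, 16)) / 4.0     # encoder weights: input (16) -> latent (8)

def encode(x):
    """Feedforward pass: MAP-style estimate of the latent from the input."""
    return W @ x

def decode(z):
    """Generative feedback: reconstruct the input from the latent."""
    return W.T @ z

def cnn_f_sketch(x, n_iter=5):
    """Alternate bottom-up inference and top-down generation, feeding the
    reconstruction back in as the next input estimate (self-consistency)."""
    x_est = x
    for _ in range(n_iter):
        z = encode(x_est)              # bottom-up: infer latent
        x_est = decode(z)              # top-down: regenerate input
    return encode(x_est)               # prediction from the self-consistent input

x = rng.normal(size=16)
z_final = cnn_f_sketch(x)
print(z_final.shape)                   # latent-sized output, here (8,)
```

In the actual CNN-F, each alternation step is a MAP inference under a Bayesian model rather than a plain reconstruction, but the control flow (iterate feedforward and feedback passes until the prediction is consistent) follows this pattern.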


Item Type: Book Section
Related URLs:
URL | URL Type | Description
https://proceedings.neurips.cc/paper/2020/hash/0660895c22f8a14eb039bfb9beb0778f-Abstract.html | Publisher | Article
https://arxiv.org/abs/2007.09200 | arXiv | Discussion Paper
ORCID:
Author | ORCID
Huang, Yujia | 0000-0001-7667-8342
Gornet, James | 0000-0002-5431-7340
Tsao, Doris Y. | 0000-0003-1083-1919
Additional Information: We thank Chaowei Xiao, Haotao Wang, Jean Kossaifi, and Francisco Luongo for the valuable feedback. Y. Huang is supported by DARPA LwLL grants. J. Gornet is supported by the NIH Predoctoral Training in Quantitative Neuroscience 1T32NS105595-01A1. D. Y. Tsao is supported by the Howard Hughes Medical Institute and the Tianqiao and Chrissy Chen Institute for Neuroscience. A. Anandkumar is supported in part by the Bren endowed chair, DARPA LwLL grants, the Tianqiao and Chrissy Chen Institute for Neuroscience, and Microsoft, Google, and Adobe faculty fellowships.
Group: Tianqiao and Chrissy Chen Institute for Neuroscience
Funders:
Funding Agency | Grant Number
NIH Predoctoral Fellowship | 1T32NS105595-01A1
Howard Hughes Medical Institute (HHMI) | UNSPECIFIED
Tianqiao and Chrissy Chen Institute for Neuroscience | UNSPECIFIED
Bren Professor of Computing and Mathematical Sciences | UNSPECIFIED
Defense Advanced Research Projects Agency (DARPA) | UNSPECIFIED
Learning with Less Labels (LwLL) | UNSPECIFIED
Microsoft Faculty Fellowship | UNSPECIFIED
Google Faculty Research Award | UNSPECIFIED
Adobe | UNSPECIFIED
Record Number: CaltechAUTHORS:20201106-120201944
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20201106-120201944
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 106486
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 06 Nov 2020 22:16
Last Modified: 06 Nov 2020 23:23
