Neural Networks with Recurrent Generative Feedback
Abstract
Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment. Inspired by this hypothesis, we enforce self-consistency in neural networks by incorporating generative recurrent feedback. We instantiate this design on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces generative feedback with latent variables into existing CNN architectures, where consistent predictions are made through alternating MAP inference under a Bayesian framework. In our experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
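The self-consistency idea in the abstract can be illustrated with a toy sketch. The code below is a hypothetical, heavily simplified stand-in for the paper's alternating MAP inference, not the authors' implementation: a linear "feedforward" encoder infers latent beliefs, a tied-weight "generative" decoder maps them back to input space, and the generated input is mixed with the observation until the loop stabilizes. All names (`cnn_f_step`, `self_consistent_inference`, the mixing weight) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)) * 0.1  # hypothetical encoder weights


def relu(z):
    return np.maximum(z, 0.0)


def cnn_f_step(x):
    """One bottom-up inference pass followed by top-down generation."""
    h = relu(W @ x)      # feedforward: infer latent beliefs from the input
    x_gen = W.T @ h      # generative feedback: reconstruct the input from beliefs
    return h, x_gen


def self_consistent_inference(x, n_iters=5, mix=0.5):
    """Alternate inference and generation, blending the generated input
    back with the observation -- a crude analogue of the self-consistency
    loop between the internal model and the external environment."""
    x_cur = x.copy()
    for _ in range(n_iters):
        h, x_gen = cnn_f_step(x_cur)
        x_cur = mix * x + (1.0 - mix) * x_gen
    return h, x_cur


x = rng.standard_normal(8)
h, x_final = self_consistent_inference(x)
```

In the actual CNN-F, the encoder is a full CNN, the decoder is a structured generative model with latent variables, and the alternating updates are derived as MAP inference; the sketch only conveys the shape of the recurrent feedback loop.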
Additional Information
We thank Chaowei Xiao, Haotao Wang, Jean Kossaifi, and Francisco Luongo for their valuable feedback. Y. Huang is supported by DARPA LwLL grants. J. Gornet is supported by the NIH Predoctoral Training in Quantitative Neuroscience 1T32NS105595-01A1. D. Y. Tsao is supported by the Howard Hughes Medical Institute and the Tianqiao and Chrissy Chen Institute for Neuroscience. A. Anandkumar is supported in part by the Bren endowed chair, DARPA LwLL grants, the Tianqiao and Chrissy Chen Institute for Neuroscience, and Microsoft, Google, and Adobe faculty fellowships.
Attached Files
- Published - NeurIPS-2020-neural-networks-with-recurrent-generative-feedback-Paper.pdf (428.8 kB, md5:78efded318848593edb0e69e94754dc5)
- Supplemental Material - NeurIPS-2020-neural-networks-with-recurrent-generative-feedback-Supplemental.pdf (4.2 MB, md5:4d7d9b60ca547652f3cbc4bb045ac132)
Additional details
- Eprint ID
- 106486
- DOI
- 10.48550/arXiv.2007.09200
- Resolver ID
- CaltechAUTHORS:20201106-120201944
- NIH Predoctoral Fellowship
- 1T32NS105595-01A1
- Howard Hughes Medical Institute (HHMI)
- Tianqiao and Chrissy Chen Institute for Neuroscience
- Bren Professor of Computing and Mathematical Sciences
- Defense Advanced Research Projects Agency (DARPA)
- Learning with Less Labels (LwLL)
- Microsoft Faculty Fellowship
- Google Faculty Research Award
- Adobe
- Created
- 2020-11-06
- Updated
- 2023-06-02
- Caltech groups
- Tianqiao and Chrissy Chen Institute for Neuroscience, Division of Biology and Biological Engineering