
Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning

Ho, Nhat and Nguyen, Tan and Patel, Ankit and Anandkumar, Anima and Jordan, Michael I. and Baraniuk, Richard G. (2018) Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning. . (Unpublished)

PDF (Submitted Version); see Usage Policy.


Unsupervised and semi-supervised learning are important problems that are especially challenging with complex data like natural images. Progress on these problems would accelerate if we had access to appropriate generative models under which to pose the associated inference tasks. Inspired by the success of Convolutional Neural Networks (CNNs) for supervised prediction in images, we design the Neural Rendering Model (NRM), a new probabilistic generative model whose inference calculations correspond to those in a given CNN architecture. The NRM uses the given CNN to design the prior distribution in the probabilistic model. Furthermore, the NRM generates images from coarse to fine scales. It introduces a small set of latent variables at each level, and enforces dependencies among all the latent variables via a conjugate prior distribution. This conjugate prior yields a new regularizer for training CNNs, based on paths rendered in the generative model: the Rendering Path Normalization (RPN). We demonstrate that this regularizer improves generalization, both in theory and in practice. In addition, likelihood estimation in the NRM yields training losses for CNNs, and inspired by this, we design a new loss, the Max-Min cross entropy, which outperforms the traditional cross-entropy loss for object classification. The Max-Min cross entropy suggests a new deep network architecture, namely the Max-Min network, which can learn from less labeled data while maintaining good prediction performance. Our experiments demonstrate that the NRM with the RPN and the Max-Min architecture exceeds or matches the state of the art on benchmarks including SVHN, CIFAR10, and CIFAR100 for semi-supervised and supervised learning tasks.
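The abstract positions the Max-Min cross entropy against the traditional cross-entropy loss. For context, here is a minimal NumPy sketch of that standard baseline; the function names `softmax` and `cross_entropy` are illustrative, and the paper's Max-Min variant and NRM-derived losses are not reproduced here.

```python
# Baseline softmax cross-entropy loss: the standard objective that the
# abstract's Max-Min cross entropy is compared against. Generic sketch,
# not the paper's NRM-derived formulation.
import numpy as np

def softmax(logits):
    # Subtract the per-row max for numerical stability before exponentiating.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true class over the batch.
    probs = softmax(logits)
    n = logits.shape[0]
    return -np.log(probs[np.arange(n), labels]).mean()

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 3.0, 0.2]])
labels = np.array([0, 1])
loss = cross_entropy(logits, labels)  # scalar loss for this toy batch
```

Minimizing this quantity is equivalent to maximum-likelihood training of the classifier, which is the connection the abstract draws between likelihood estimation in the NRM and CNN training losses.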

Item Type: Report or Paper (Discussion Paper)
Related URLs: Paper
ORCID:
Patel, Ankit: 0000-0001-6319-658X
Baraniuk, Richard G.: 0000-0002-0721-8999
Additional Information: Nhat Ho and Tan Nguyen contributed equally to this work. Anima Anandkumar, Michael I. Jordan, and Richard G. Baraniuk contributed equally to this work. First of all, we are very grateful to Amazon AI for providing a highly stimulating research environment for us to start this research project and for further supporting our research through their cloud credits program. We would also like to express our sincere thanks to Gautam Dasarathy for great discussions. Furthermore, we would like to thank Doris Y. Tsao for suggesting and providing references for connections between our model and feedforward and feedback connections in the brain. Many people during Tan Nguyen's internship at Amazon AI have helped by providing comments and suggestions on our work, including Stefano Soatto, Zack C. Lipton, Yu-Xiang Wang, Kamyar Azizzadenesheli, Fanny Yang, Jean Kossaifi, Michael Tschannen, Ashish Khetan, and Jeremy Bernstein. We also wish to thank Sheng Zha, who provided immense help with the MXNet framework in implementing our models. Finally, we would like to thank the members of the DSP group at Rice, the Machine Learning group at UC Berkeley, and Anima Anandkumar's TensorLab at Caltech, who have always been supportive throughout the time it has taken to finish this project.
Subject Keywords: neural nets, generative models, semi-supervised learning, cross-entropy, statistical guarantee
Record Number: CaltechAUTHORS:20190327-085814265
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94181
Deposited By: George Porter
Deposited On: 28 Mar 2019 14:54
Last Modified: 09 Mar 2020 13:19
