Published August 2019
Journal Article

Solving Bayesian inverse problems from the perspective of deep generative networks


Deep generative networks have achieved great success in high-dimensional density approximation, especially for applications in natural images and language. In this paper, we investigate their capability to capture the posterior distribution in Bayesian inverse problems by learning a transport map. Because only the unnormalized density of the posterior is available, training methods that learn from posterior samples, such as variational autoencoders and generative adversarial networks, are not applicable in our setting. We propose a class of network training methods that can be combined with sample-based Bayesian inference algorithms, such as various MCMC algorithms, the ensemble Kalman filter, and Stein variational gradient descent. Our experimental results show the pros and cons of deep generative networks in Bayesian inverse problems. They also reveal the potential of our proposed methodology for capturing high-dimensional probability distributions.
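To illustrate the kind of sample-based inference the abstract refers to, here is a minimal sketch (not the authors' code) of Stein variational gradient descent (SVGD), one of the algorithms named above. It transports a set of particles toward a target density known only up to normalization, using the target's score function. The 1D Gaussian target N(2, 1) and all parameter choices below are hypothetical, chosen purely for demonstration.

```python
import numpy as np

def grad_log_p(x):
    # Score of the unnormalized target exp(-(x - 2)^2 / 2), i.e. N(2, 1).
    # Only the unnormalized density is needed, as in the Bayesian setting.
    return -(x - 2.0)

def svgd_step(x, step=0.1):
    """One SVGD update on a 1D particle array x."""
    n = x.shape[0]
    diff = x[:, None] - x[None, :]                      # pairwise x_i - x_j
    h = np.median(np.abs(diff)) + 1e-6                  # bandwidth heuristic
    k = np.exp(-diff**2 / (2 * h**2))                   # RBF kernel matrix
    # Attractive term: kernel-weighted scores pull particles toward the mode.
    # Repulsive term: kernel gradients keep particles spread out.
    attract = k @ grad_log_p(x)
    repel = (diff / h**2 * k).sum(axis=1)
    return x + step * (attract + repel) / n

rng = np.random.default_rng(0)
particles = rng.normal(-5.0, 1.0, size=50)   # start far from the target
for _ in range(500):
    particles = svgd_step(particles)
print(particles.mean())                      # drifts toward the target mean 2
```

The same particle-update loop could serve as the sampler whose output trains a transport map, which is the role such algorithms play in the methodology the abstract describes.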

Additional Information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019. Received: 1 February 2019 / Accepted: 20 May 2019. The research of T. Y. Hou, K. C. Lam, and S. Zhang was in part supported by NSF Grant DMS-1613861. We would also like to thank Microsoft Research for providing the computing facility used in carrying out some of the computations reported in this paper.

Additional details

August 19, 2023