CaltechAUTHORS
  A Caltech Library Service

Analysis of the Gibbs Sampler for Hierarchical Inverse Problems

Agapiou, Sergios and Bardsley, Jonathan M. and Papaspiliopoulos, Omiros and Stuart, Andrew M. (2014) Analysis of the Gibbs Sampler for Hierarchical Inverse Problems. SIAM/ASA Journal on Uncertainty Quantification, 2 (1). pp. 511-544. ISSN 2166-2525. https://resolver.caltech.edu/CaltechAUTHORS:20160719-141859444

PDF (Published Version), 1716 KB. See Usage Policy.
PDF (Submitted Version), 1924 KB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20160719-141859444

Abstract

Many inverse problems arising in applications come from continuum models where the unknown parameter is a field. In practice the unknown field is discretized, resulting in a problem in ℝ^N, with the understanding that refining the discretization, that is, increasing N, will often be desirable. In the context of Bayesian inversion this situation suggests the importance of two issues: (i) defining hyperparameters in such a way that they are interpretable in the continuum limit N → ∞ and so that their values may be compared between different discretization levels; and (ii) understanding the efficiency of algorithms for probing the posterior distribution as a function of large N. Here we address these two issues in the context of linear inverse problems subject to additive Gaussian noise within a hierarchical modeling framework based on a Gaussian prior for the unknown field and an inverse-gamma prior for a hyperparameter, namely the amplitude of the prior variance. The structure of the model is such that the Gibbs sampler can be easily implemented for probing the posterior distribution. Subscribing to the dogma that one should think infinite-dimensionally before implementing in finite dimensions, we present function space intuition and provide rigorous theory showing that as N increases, the component of the Gibbs sampler for sampling the amplitude of the prior variance becomes increasingly slow. We discuss a reparametrization of the prior variance that is robust with respect to the increase in dimension; we give numerical experiments which demonstrate that our reparametrization prevents the slowing down. Our intuition on the behavior of the prior hyperparameter, with and without reparametrization, is sufficiently general to include a broad class of nonlinear inverse problems as well as other families of hyperpriors.
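The hierarchical structure described in the abstract can be made concrete with a minimal numerical sketch. The following Python code is not from the paper; it is a hypothetical toy setup (a smoothing operator K, identity prior covariance C0, and illustrative hyperparameter values a0, b0) illustrating the two-block Gibbs sampler for a linear inverse problem y = Ku + noise, with a Gaussian prior u | δ ~ N(0, δ⁻¹ C0) and a gamma hyperprior on the precision amplitude δ (equivalently, an inverse-gamma prior on the prior-variance amplitude 1/δ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized linear inverse problem: y = K u + noise.
N = 50                              # discretization level of the unknown field
K = np.tril(np.ones((N, N))) / N    # toy smoothing (integration) operator
u_true = np.sin(2 * np.pi * np.linspace(0, 1, N))
sigma2 = 1e-4                       # known noise variance
y = K @ K.sum(axis=0) * 0 + K @ u_true + np.sqrt(sigma2) * rng.standard_normal(N)

# Prior: u | delta ~ N(0, delta^{-1} C0); hyperprior delta ~ Gamma(a0, rate b0),
# i.e. an inverse-gamma prior on the amplitude 1/delta of the prior variance.
C0_inv = np.eye(N)                  # identity prior covariance for simplicity
a0, b0 = 1.0, 1e-4                  # illustrative hyperparameter values

def gibbs(n_iter=2000):
    """Alternate between the two conjugate conditional distributions."""
    delta = 1.0
    delta_chain = np.empty(n_iter)
    for i in range(n_iter):
        # u | delta, y : Gaussian with precision P = K^T K / sigma2 + delta * C0^{-1}
        P = K.T @ K / sigma2 + delta * C0_inv
        L = np.linalg.cholesky(P)
        mean = np.linalg.solve(P, K.T @ y / sigma2)
        u = mean + np.linalg.solve(L.T, rng.standard_normal(N))
        # delta | u : conjugate Gamma(a0 + N/2, rate b0 + u^T C0^{-1} u / 2) update
        delta = rng.gamma(a0 + N / 2, 1.0 / (b0 + 0.5 * u @ C0_inv @ u))
        delta_chain[i] = delta
    return delta_chain

chain = gibbs()
print("posterior mean of delta:", chain.mean())
```

Note that the δ-update depends on u only through the quadratic form uᵀC0⁻¹u, whose dimension dependence is what drives the slowdown of this component as N grows; the paper's reparametrization of the prior variance is designed to remove exactly that dependence.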


Item Type:Article
Related URLs:
URL | URL Type | Description
http://dx.doi.org/10.1137/130944229 | DOI | Article
http://epubs.siam.org/doi/abs/10.1137/130944229 | Publisher | Article
https://arxiv.org/abs/1311.1138 | arXiv | Discussion Paper
Additional Information:© 2014 SIAM. Received by the editors November 5, 2013; accepted for publication (in revised form) July 14, 2014; published electronically September 23, 2014.
Subject Keywords:Gaussian process priors, Markov chain Monte Carlo, inverse covariance operators, hierarchical models, diffusion limit
Other Numbering System:
Other Numbering System Name | Other Numbering System ID
Andrew Stuart | J114
Issue or Number:1
Classification Code:AMS subject classifications. 62G20, 62C10, 62D05, 45Q05
Record Number:CaltechAUTHORS:20160719-141859444
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:20160719-141859444
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:69114
Collection:CaltechAUTHORS
Deposited By: Linda Taddeo
Deposited On:19 Jul 2016 22:44
Last Modified:03 Oct 2019 10:19
