CaltechAUTHORS
  A Caltech Library Service

Hyperparameter Estimation in Bayesian MAP Estimation: Parameterizations and Consistency

Dunlop, Matthew M. and Helin, Tapio and Stuart, Andrew M. (2019) Hyperparameter Estimation in Bayesian MAP Estimation: Parameterizations and Consistency. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20190722-134133717

PDF (Submitted Version), 835 kB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20190722-134133717

Abstract

The Bayesian formulation of inverse problems is attractive for three primary reasons: it provides a clear modelling framework, a means for uncertainty quantification, and principled learning of hyperparameters. The posterior distribution may be explored by sampling methods, but for many problems doing so is computationally infeasible. In this situation, maximum a posteriori (MAP) estimators are often sought. Whilst these are relatively cheap to compute and have an attractive variational formulation, a key drawback is their lack of invariance under change of parameterization. This issue is particularly significant when hierarchical priors are employed to learn hyperparameters. In this paper we study the effect of the choice of parameterization on MAP estimators when a conditionally Gaussian hierarchical prior distribution is employed. Specifically, we consider the centred parameterization, the natural choice in which the unknown state is solved for directly, and the noncentred parameterization, which works with a whitened Gaussian as the unknown state variable and arises when considering dimension-robust MCMC algorithms; MAP estimation is well-defined in the nonparametric setting only for the noncentred parameterization. However, we show that MAP estimates based on the noncentred parameterization are not consistent as estimators of hyperparameters; conversely, we show that limits of finite-dimensional centred MAP estimators are consistent as the dimension tends to infinity. We also consider empirical Bayesian hyperparameter estimation, show consistency of these estimates, and demonstrate that they are more robust with respect to noise than centred MAP estimates. An underpinning concept throughout is that hyperparameters may only be recovered up to measure equivalence, a well-known phenomenon in the context of the Ornstein-Uhlenbeck process.
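The centred/noncentred distinction described in the abstract can be sketched in a few lines. This is a minimal illustration only, assuming a one-dimensional Ornstein-Uhlenbeck-type covariance with a single length-scale hyperparameter; the covariance family and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

def cov(theta, n=n):
    # Hypothetical OU-type covariance C(theta) on a grid of [0, 1];
    # theta plays the role of a length-scale hyperparameter.
    x = np.linspace(0.0, 1.0, n)
    return np.exp(-np.abs(x[:, None] - x[None, :]) / theta) + 1e-10 * np.eye(n)

theta = 0.2
C = cov(theta)
L = np.linalg.cholesky(C)  # C(theta) = L L^T

# Centred parameterization: the unknown state u is solved for directly,
# with conditionally Gaussian prior u | theta ~ N(0, C(theta)).
u_centred = L @ rng.standard_normal(n)

# Noncentred parameterization: the unknown is a whitened variable
# xi ~ N(0, I), independent of theta; the state is recovered as
# u = L(theta) xi, so the theta-dependence is moved out of the prior.
xi = rng.standard_normal(n)
u_noncentred = L @ xi
```

Under the prior the two constructions produce the same distribution for the state; the paper's point is that MAP estimation, unlike sampling, is not invariant under this change of variables.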


Item Type: Report or Paper (Discussion Paper)
Related URLs:
  http://arxiv.org/abs/1905.04365 (arXiv; Discussion Paper)
ORCID:
  Dunlop, Matthew M.: 0000-0001-7718-3755
Additional Information: The work of AMS and MMD is funded by US National Science Foundation (NSF) grant DMS-1818977 and AFOSR grant FA9550-17-1-0185.
Funders:
  NSF: DMS-1818977
  Air Force Office of Scientific Research (AFOSR): FA9550-17-1-0185
Subject Keywords: Bayesian inverse problems, hierarchical Bayesian, MAP estimation, optimization, nonparametric inference, hyperparameter inference, consistency of estimators
Classification Code: AMS subject classifications: 62G05, 62C10, 62G20, 45Q05
Record Number: CaltechAUTHORS:20190722-134133717
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20190722-134133717
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 97329
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 22 Jul 2019 20:47
Last Modified: 03 Oct 2019 21:30
