
Iterative Updating of Model Error for Bayesian Inversion

Calvetti, Daniela and Dunlop, Matthew and Somersalo, Erkki and Stuart, Andrew (2018) Iterative Updating of Model Error for Bayesian Inversion. Inverse Problems, 34 (2). Art. No. 025008. ISSN 0266-5611. doi:10.1088/1361-6420/aaa34d.

PDF - Submitted Version (see Usage Policy).


In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
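In the linear Gaussian case mentioned in the abstract, the iterative update has a closed form. The sketch below is a hypothetical illustration of that idea, not the authors' code: the forward model `F`, its cheap approximation `A`, the dimensions, and the noise levels are all made up, and the model error `(F - A) x` is approximated as Gaussian noise independent of `x` (the so-called enhanced error model). Each pass estimates the model-error distribution under the current posterior and re-solves the Gaussian inverse problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 20, 5                     # hypothetical data/parameter dimensions
F = rng.standard_normal((n_obs, n_par))  # stand-in for the accurate forward model
A = F + 0.1 * rng.standard_normal((n_obs, n_par))  # cheap approximate model
D = F - A                                # model discrepancy operator
C0 = np.eye(n_par)                       # Gaussian prior covariance
Gamma = 0.01 * np.eye(n_obs)             # observation-noise covariance

x_true = rng.standard_normal(n_par)
y = F @ x_true + rng.multivariate_normal(np.zeros(n_obs), Gamma)

# Iteratively update the model-error statistics and re-invert.
mu, Sigma = np.zeros(n_par), C0          # start from the prior
for _ in range(10):
    # Model error m = D x with x ~ N(mu, Sigma), treated as independent
    # Gaussian noise: m ~ N(D mu, D Sigma D^T).
    m_mean = D @ mu
    Gamma_k = Gamma + D @ Sigma @ D.T
    Gk_inv = np.linalg.inv(Gamma_k)
    # Standard linear-Gaussian posterior for the data model y = A x + m + e.
    Sigma = np.linalg.inv(np.linalg.inv(C0) + A.T @ Gk_inv @ A)
    mu = Sigma @ A.T @ Gk_inv @ (y - m_mean)

print("reconstruction error:", np.linalg.norm(mu - x_true))
```

In this toy setting no rejections occur and the expensive model enters only through the discrepancy operator `F - A`, mirroring the paper's point that the algorithm requires only limited full-model evaluations; the geometric convergence claimed for the linear Gaussian case shows up as the iterates stabilizing after a few passes.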

Item Type: Article
Related URLs: Paper
ORCID: Dunlop, Matthew (0000-0001-7718-3755)
Additional Information: © 2018 IOP Publishing Ltd. Published 18 January 2018. The work of D Calvetti is partially supported by NSF grant DMS-1522334. E Somersalo’s work is partly supported by the NSF grant DMS-1312424. The research of AM Stuart was partially supported by the EPSRC programme grant EQUIP, by AFOSR Grant FA9550-17-1-0185 and ONR Grant N00014-17-1-2079. M Dunlop was partially supported by the EPSRC MASDOC Graduate Training Program. Both M Dunlop and AM Stuart are supported by DARPA funded program Enabling Quantification of Uncertainty in Physical Systems (EQUiPS), contract W911NF-15-2-0121.
Funding Agency | Grant Number
Engineering and Physical Sciences Research Council (EPSRC) | UNSPECIFIED
Air Force Office of Scientific Research (AFOSR) | FA9550-17-1-0185
Office of Naval Research (ONR) | N00014-17-1-2079
Army Research Office (ARO) | W911NF-15-2-0121
Subject Keywords: Model discrepancy, Discretization error, Particle approximation, Importance sampling, Electrical impedance tomography, Darcy flow
Issue or Number: 2
Record Number: CaltechAUTHORS:20170801-155824887
Persistent URL:
Official Citation: Daniela Calvetti et al 2018 Inverse Problems 34 025008
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 79719
Deposited By: Tony Diaz
Deposited On: 01 Aug 2017 23:06
Last Modified: 15 Nov 2021 17:50
