Computationally efficient variational approximations for Bayesian inverse problems

Abstract

The major drawback of the Bayesian approach to model calibration is the computational burden of characterizing the posterior distribution of the unknown model parameters, which arises because typical Markov chain Monte Carlo (MCMC) samplers require thousands of forward model evaluations. In this work, we develop a variational Bayesian approach to model calibration that uses an information-theoretic criterion to recast posterior inference as an optimization problem. Specifically, we parameterize the posterior using the family of Gaussian mixtures and seek to minimize the information loss incurred by replacing the true posterior with its approximation. Our approach is of particular importance in underdetermined problems with expensive forward models, in which neither the classical approach of minimizing a (potentially regularized) misfit function nor MCMC is a viable option. We test our methodology on two surrogate-free examples and show that it dramatically outperforms MCMC methods.
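
At its core, the method replaces posterior sampling with optimization. The sketch below is a minimal, hypothetical Python illustration of that idea: it fits a single Gaussian (one mixture component, rather than the paper's full Gaussian-mixture family) to a toy one-dimensional posterior by maximizing a Monte Carlo estimate of the evidence lower bound, which is equivalent to minimizing the KL divergence to the posterior up to an additive constant. The forward model f(theta) = theta**2, all names, and all numerical settings are assumptions for illustration, not the paper's examples.

import numpy as np
from scipy.optimize import minimize

# Toy 1D inverse problem (illustrative only): recover theta from a
# noisy observation of the forward model f(theta) = theta**2.
Y_OBS, NOISE_STD, PRIOR_STD = 2.0, 0.5, 3.0

def log_post(theta):
    """Unnormalized log-posterior: Gaussian likelihood times Gaussian prior."""
    log_lik = -0.5 * ((theta**2 - Y_OBS) / NOISE_STD) ** 2
    log_prior = -0.5 * (theta / PRIOR_STD) ** 2
    return log_lik + log_prior

# Fixed base samples make the Monte Carlo ELBO deterministic, so a
# standard optimizer can be applied (reparameterization trick).
rng = np.random.default_rng(0)
eps = rng.standard_normal(1000)

def neg_elbo(params):
    """Negative ELBO; minimizing it minimizes KL(q || posterior)
    up to the (constant) log evidence."""
    mu, log_sigma = params
    theta = mu + np.exp(log_sigma) * eps          # samples theta ~ q
    entropy = 0.5 * np.log(2 * np.pi * np.e) + log_sigma
    return -(np.mean(log_post(theta)) + entropy)

res = minimize(neg_elbo, x0=[1.0, 0.0], method="Nelder-Mead")
mu_opt, sigma_opt = res.x[0], np.exp(res.x[1])
print(f"variational approximation: q(theta) = N({mu_opt:.3f}, {sigma_opt:.3f}**2)")

Because minimizing KL(q || posterior) is mode-seeking, a single Gaussian captures only one of the two posterior modes in this toy problem; a Gaussian-mixture family, as used in the paper, is what allows such multimodal posteriors to be represented.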

Publication
Journal of Verification, Validation and Uncertainty Quantification