We propose a hybrid approach that combines ideas from deterministic and sampling-based methods and that is more accurate than the fast approximations currently in use while being computationally cheaper than Monte Carlo sampling. Our method applies to inverse problems with a Gaussian prior π(f), Gaussian errors as in (1), and a differentiable forward map h.
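Equations (1) and (2) are referenced but not reproduced in this excerpt. As a reading aid, the following minimal Python sketch sets up the kind of model this setting typically denotes: data y = h(f) + e with Gaussian errors and a Gaussian prior on f. The toy forward map h and all variable names are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical instance of the assumed setting (all names are
# illustrative): data y = h(f) + e with Gaussian errors
# e ~ N(0, sigma2 * I) as in (1), and a Gaussian prior
# f ~ N(mu0, Sigma0); h is differentiable but nonlinear.
rng = np.random.default_rng(0)

def h(f):
    # Toy differentiable, nonlinear forward map; the paper's h is
    # application-specific (e.g. a physical model).
    return np.array([f[0] ** 2 + f[1], np.sin(f[0]) * f[1]])

sigma2 = 0.1 ** 2                # error variance
mu0 = np.zeros(2)                # prior mean
Sigma0 = np.eye(2)               # prior covariance
f_true = rng.standard_normal(2)  # latent "truth" used to simulate data
y = h(f_true) + np.sqrt(sigma2) * rng.standard_normal(2)
```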
Inverse problems are widespread with applications including computer
graphics (Aristidou et al., 2017), geology (Reid et al., 2013), medical imaging
(Bertero and Piana, 2006), and robotics (Duka, 2014). The Bayesian approach
to inference on f in (1) is increasingly receiving attention (Kaipio and
Somersalo, 2005; Reid et al., 2013) as it provides natural uncertainty
quantification on f. It is analytically attractive to take the prior π(f) to be
Gaussian. If in addition the forward map h is linear, then the posterior
computation in (2) is tractable as in Reid et al. (2013). For nonlinear h, the
posterior often does not have an analytical solution. While Markov chain Monte Carlo (MCMC) methods can approximate the posterior by sampling, they are computationally infeasible at the scale of most real-world inverse problems. As a result, deterministic posterior approximations for
nonlinear inverse problems are popular. For instance, Steinberg and Bonilla
(2014) obtain a Gaussian approximation to π(f | y) by solving (2) using a linearization of h. By linearizing h iteratively, they obtain a Gauss-Newton algorithm that converges to a Laplace approximation of π(f | y) with a Gauss-Newton approximation to the Hessian. Gehre and Jin (2014) provide
another example of a fast posterior approximation in inverse problems. They
use expectation propagation (EP, Minka, 2001). As is common in EP, the
approximating distribution is factorized. Matching the moments, or expectations, between the true posterior π(f | y) and the approximating factors is then done via numerical integration, which is feasible because of the low dimensionality of each factor.
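To make the iterated-linearization idea concrete, here is a minimal sketch in the spirit of the Gauss-Newton scheme attributed to Steinberg and Bonilla (2014) above, not the authors' implementation. Each iteration replaces h by its linearization at the current mean and solves the resulting linear-Gaussian problem in closed form; for a linear h, one iteration already recovers the exact posterior mentioned earlier. It reuses the hypothetical model objects (h, y, mu0, Sigma0, sigma2) from the sketch above.

```python
def jacobian(h, f, eps=1e-6):
    # Central finite-difference Jacobian of the forward map; in
    # practice one would use analytic or automatic derivatives.
    d = f.size
    J = np.empty((h(f).size, d))
    for j in range(d):
        step = np.zeros(d)
        step[j] = eps
        J[:, j] = (h(f + step) - h(f - step)) / (2 * eps)
    return J

def gauss_newton_laplace(h, y, mu0, Sigma0, sigma2, n_iter=20):
    # Iteratively linearize h at the current mean and solve the
    # resulting linear-Gaussian problem in closed form; for linear h
    # a single iteration gives the exact Gaussian posterior.
    P0 = np.linalg.inv(Sigma0)  # prior precision
    m = mu0.copy()
    for _ in range(n_iter):
        J = jacobian(h, m)
        r = y - h(m) + J @ m       # residual of the linearized model
        P = P0 + J.T @ J / sigma2  # Gauss-Newton posterior precision
        m = np.linalg.solve(P, P0 @ mu0 + J.T @ r / sigma2)
    return m, np.linalg.inv(P)     # Laplace mean and covariance

m_lap, S_lap = gauss_newton_laplace(h, y, mu0, Sigma0, sigma2)
```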
At the highest level, our method is an EP algorithm, but it employs sampling inside the steps of the EP iterations: for each factor of the approximating posterior, we linearize h as in Steinberg and Bonilla (2014), but only as an intermediate step. The resulting Laplace approximation serves as the proposal distribution for importance sampling (IS), which further refines the approximation. We regularize the posterior covariance
estimates from the importance sampler based on the structure implied by the
inverse problem in (1). This use of EP at a high level and iterative application
of importance sampling relates to ideas in Gelman et al. (2014) and adaptive
importance sampling (Cornuet et al., 2012). Gianniotis (2019) also obtains an
improved approximation starting from a Laplace approximation but uses
variational inference rather than importance sampling and does not consider an EP-type factorization. We name our method EP-IS.
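The sketch below illustrates the IS refinement step just described, using the Gauss-Newton Laplace approximation from the earlier sketch as proposal: self-normalized importance sampling yields moment estimates of π(f | y). It shows only this one building block; the EP factorization and the structure-based covariance regularization that are specific to EP-IS are omitted, and SciPy is assumed available.

```python
from scipy.stats import multivariate_normal

def is_refine(h, y, mu0, Sigma0, sigma2, m_lap, S_lap, n_samples=5000):
    # Self-normalized importance sampling with the Laplace
    # approximation N(m_lap, S_lap) as proposal; returns moment
    # estimates of pi(f | y). One building block of EP-IS only: the
    # EP factorization and covariance regularization are omitted.
    rng = np.random.default_rng(1)
    q = multivariate_normal(m_lap, S_lap)
    F = q.rvs(size=n_samples, random_state=rng)  # proposal draws
    prior = multivariate_normal(mu0, Sigma0)
    resid = y - np.apply_along_axis(h, 1, F)     # data residuals
    log_lik = -0.5 * np.sum(resid**2, axis=1) / sigma2
    log_w = prior.logpdf(F) + log_lik - q.logpdf(F)
    w = np.exp(log_w - log_w.max())              # stabilized weights
    w /= w.sum()
    mean = w @ F                                 # posterior mean estimate
    centered = F - mean
    cov = centered.T @ (w[:, None] * centered)   # posterior covariance
    return mean, cov

mean_post, cov_post = is_refine(h, y, mu0, Sigma0, sigma2, m_lap, S_lap)
```

Computing the weights on the log scale and subtracting the maximum before exponentiating avoids numerical underflow, a standard precaution in self-normalized importance sampling.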