Page 161 - Contributed Paper Session (CPS) - Volume 2

CPS1488 Willem van den B. et al.
            blurring matrix A by first setting A_ij = exp{−min(|i − j|, d − |i − j|)²/25},
            i, j = 1, . . . , d, and then scaling its rows to sum to one. Take the forward map
            H(F) to consist of n = 30 elements selected at random with replacement from
            the elements with odd indices, as a type of subsampling, of the d-dimensional
            vector A(F ⊙ F ⊙ F) where ‘⊙’ denotes the elementwise or Hadamard
            product. Set σ² = 1 and generate y according to (1) with F fixed to a prior draw.
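The setup above can be sketched in code as follows. This is only an illustration of the stated construction: the periodic distance min(|i − j|, d − |i − j|) in the blur kernel, a standard normal prior on F, and the 0-based reading of "odd indices" are assumptions, not details confirmed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma2 = 100, 30, 1.0  # dimension, number of observations, noise variance

# Blurring matrix: A_ij = exp{-min(|i-j|, d-|i-j|)^2 / 25} (periodic distance
# assumed), rows scaled to sum to one.
i, j = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
dist = np.minimum(np.abs(i - j), d - np.abs(i - j)).astype(float)
A = np.exp(-dist**2 / 25.0)
A /= A.sum(axis=1, keepdims=True)

# Forward map H(F): n elements drawn with replacement from the odd-indexed
# entries of A (F ⊙ F ⊙ F).  The paper's indexing convention is assumed to
# map to 0-based indices 0, 2, 4, ... here.
idx = rng.choice(np.arange(0, d, 2), size=n, replace=True)

def H(F):
    # F ** 3 is the elementwise (Hadamard) cube F ⊙ F ⊙ F.
    return (A @ F**3)[idx]

# Generate y according to (1): noisy observation of H at a prior draw of F
# (a standard normal prior is assumed for illustration).
F_true = rng.standard_normal(d)
y = H(F_true) + np.sqrt(sigma2) * rng.standard_normal(n)
```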
                We approximate the posterior p(F | y) using a Laplace approximation via
            Taylor-series linearization as in Steinberg and Bonilla (2014), and using EP-IS
            based on randomly partitioning y into K = 2 vectors of length nk = 15, with 20
            iterations of Step 2 of Algorithm 1 and 10 iterations of Step 3 with 10,000
            importance samples each. The very first computation of the Jacobian matrix Jµ
            is not done at the initialization µ = 01×d but rather at a µ drawn from the prior p(F) since
            the forward map H has a saddle point at zero such that Step 2 of Algorithm 1
            would remain stuck at its initialization. EP-IS is run twice: once with the
            low-rank regularization of ΣIS such that Λk is of rank nk, and once with
            covariance tapering using a Wendland-1 taper function of width 2/5. For
            comparison, we draw 100,000 posterior samples using a random-walk
            Metropolis algorithm which does not approximate the forward map H.
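Covariance tapering multiplies the covariance elementwise by a compactly supported correlation function, which preserves positive semi-definiteness by the Schur product theorem while zeroing out long-range entries. A minimal sketch with a Wendland-1 taper of width 2/5; the unit-interval grid coordinates and the illustrative covariance are assumptions, not the paper's exact construction.

```python
import numpy as np

def wendland1(r, width):
    """Wendland-1 taper: equals 1 at r = 0 and is exactly 0 for r >= width."""
    s = np.clip(r / width, 0.0, 1.0)
    return (1.0 - s) ** 4 * (1.0 + 4.0 * s)

def taper_covariance(Sigma, coords, width=0.4):
    """Hadamard-multiply a covariance matrix by the taper evaluated at
    pairwise distances between coordinates, giving a sparse PSD matrix."""
    r = np.abs(coords[:, None] - coords[None, :])
    return Sigma * wendland1(r, width)

# Illustration on a hypothetical exponential covariance over a grid in [0, 1].
d = 100
Sigma = np.exp(-np.abs(np.subtract.outer(np.arange(d), np.arange(d))) / 10.0)
coords = np.linspace(0.0, 1.0, d)  # assumed layout of the d coordinates
Sigma_tapered = taper_covariance(Sigma, coords, width=0.4)  # width 2/5
```

The taper leaves the diagonal untouched and forces entries between points farther than 2/5 apart to exactly zero, which is what makes the tapered matrix cheap to store and factorize.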
                Figure 1 summarizes the results of one run of the simulation. The Laplace
            approximation has the most trouble capturing the posterior mean, while both
            applications of EP-IS perform similarly in this respect. The uncertainty
            quantification of the Laplace approximation is also cruder than that of EP-IS.
            Covariance tapering in EP-IS leads to virtually spot-on uncertainty
            quantification, while the low-rank matrix regularization slightly overestimates
            uncertainty in this simulation. Through sampling, EP-IS thus improves on the
            posterior approximation provided by linearization, at a computational cost
            between that of the Laplace approximation and the Metropolis algorithm. See
            Table 2 for the computational cost.
                We repeat this simulation 20 times and compute the Wasserstein-2
            distance between the d = 100 marginals of the empirical MCMC distribution
            of F from the Metropolis algorithm and the Gaussian approximations. We then
            average this distance over the d marginals for each simulation and
            approximation method. The results are in Table 1. Unlike in Figure 1,
            covariance tapering does not outperform the low-rank regularization: the
            tapering sometimes leads to divergence, resulting in the large third quartile
            shown.
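For one-dimensional marginals, the Wasserstein-2 distance between the empirical MCMC distribution and a Gaussian approximation has a closed quantile-function form, W2² = ∫₀¹ (F⁻¹(u) − G⁻¹(u))² du, which can be approximated by quadrature. A sketch along those lines; the grid size is an arbitrary choice, not a detail from the paper.

```python
import numpy as np
from scipy.stats import norm

def w2_empirical_vs_gaussian(samples, mean, sd, grid=1000):
    """Wasserstein-2 distance between the empirical distribution of a 1-D
    sample and N(mean, sd^2), via midpoint quadrature of the quantile-function
    formula W2^2 = integral over (0, 1) of (F^-1(u) - G^-1(u))^2 du."""
    u = (np.arange(grid) + 0.5) / grid       # midpoint grid on (0, 1)
    q_emp = np.quantile(samples, u)          # empirical quantile function
    q_gauss = mean + sd * norm.ppf(u)        # Gaussian quantile function
    return np.sqrt(np.mean((q_emp - q_gauss) ** 2))

# Sanity check: samples from N(0, 1) are close to N(0, 1) and roughly
# distance 1 from N(1, 1), since shifting the mean by delta adds delta to W2.
rng = np.random.default_rng(1)
samples = rng.standard_normal(100_000)
d_same = w2_empirical_vs_gaussian(samples, 0.0, 1.0)
d_shifted = w2_empirical_vs_gaussian(samples, 1.0, 1.0)
```

Averaging this distance over the d = 100 marginals, as described above, then gives one summary number per simulation and approximation method.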
                Table 2 summarizes the computation times of the 20 simulations. The
            Laplace approximation is fastest. Both EP-IS algorithms take a similar amount
            of time, and their increased accuracy comes at a computational cost.
            Importantly, EP-IS is about twice as fast as the Metropolis algorithm in this
            setup.


                                                               150 | ISI WSC 2019