Page 363 - Special Topic Session (STS) - Volume 3

STS550 Matteo Mogliani
            as the number of predictors increases. To deal with this issue, in this work we
            adopt  an  alternative  Empirical  Bayes  approach  that  relies  on  stochastic
            approximation  algorithms  to  solve  maximization  problems  when  the
            likelihood  function  is  intractable,  by  mimicking  standard  iterative  methods
            such  as  the  gradient  algorithm.  This  approach  is  computationally  efficient,
            because  it  requires  only  a  single  Monte  Carlo  run.  Using  a  stochastic
            approximation to solve the maximization problem, we get an approximate EM
            algorithm, where both E- and M-steps are approximately implemented. Hence,
            marginal maximum likelihood estimates of the hyper-parameters and draws
            from the posterior distribution of the parameters are both obtained using a
            single run of the Gibbs sampler.
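As an illustration of the stochastic-approximation idea (on a toy normal-means model, not the paper's BMIDAS sampler), the exact E-step can be replaced by a single posterior draw, with the complete-data sufficient statistic averaged using Robbins-Monro step sizes; the model, data sizes and step-size schedule below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ N(theta_i, 1) with theta_i ~ N(0, tau2_true)
n, tau2_true = 500, 2.0
theta_true = rng.normal(0.0, np.sqrt(tau2_true), n)
y = theta_true + rng.normal(size=n)

# Approximate EM via stochastic approximation: one posterior draw per
# iteration replaces the exact E-step, and a Robbins-Monro average of
# the sufficient statistic feeds a closed-form M-step.
tau2, s = 1.0, 1.0
for t in range(1, 2001):
    # "E-step": single Gibbs-style draw of theta | y, tau2;
    # posterior mean is shrink * y and posterior variance is shrink.
    shrink = tau2 / (1.0 + tau2)
    theta = rng.normal(shrink * y, np.sqrt(shrink))
    # stochastic approximation of the complete-data sufficient statistic
    gamma = 1.0 / t  # step sizes: sum diverges, sum of squares converges
    s = (1 - gamma) * s + gamma * np.mean(theta**2)
    # "M-step": closed-form maximizer given the averaged statistic
    tau2 = s

print(round(tau2, 2))  # converges toward the marginal MLE of tau2
```

Here the marginal maximum likelihood estimate is available in closed form (the sample second moment of y minus one), which makes it easy to verify that the single-run stochastic scheme converges to it.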

            3.  Results
                We  evaluate  the  performance of  the  proposed  models  through Monte
            Carlo experiments. For this purpose, we use a DGP similar to Equation (1),
            involving 30 or 50 predictors sampled at frequency 3, with 200 in-sample
            observations. The predictors all follow the same stationary AR(1) process,
            but only five are relevant in the model. As for the weighting function
            B(c; ϑ), we choose an exponential Almon lag function. We investigate three
            alternative  weighting  schemes  that  correspond  to  fast-decaying  weights,
            slow-decaying weights, and near-flat weights. For ease of analysis we assume
            ℎ = 0. In this specification, the error terms are assumed i.i.d. normally
            distributed, but the design matrix is allowed to exhibit a moderate to
            extremely high correlation structure.
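The three weighting schemes can be generated from the exponential Almon lag function, which assigns weight proportional to exp(ϑ₁c + ϑ₂c²) to lag c and normalizes the weights to sum to one. A minimal sketch follows; the parameter values and number of lags are illustrative choices, not the paper's actual settings:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon lag weights over lags c = 1..n_lags."""
    c = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * c + theta2 * c**2)
    return w / w.sum()

# Illustrative parameter values (assumed, not taken from the paper):
fast = exp_almon_weights(-1.0, 0.0, 12)    # fast-decaying weights
slow = exp_almon_weights(-0.1, 0.0, 12)    # slow-decaying weights
flat = exp_almon_weights(-0.005, 0.0, 12)  # near-flat weights
```

With ϑ₂ = 0 the scheme decays geometrically in the lag; making ϑ₁ more negative concentrates weight on the most recent observations, while values near zero spread weight almost uniformly across lags.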
                We compute the average mean squared error (MSE), the average variance
            (VAR), and the average squared bias (BIAS2) over R Monte Carlo replications.
            Further, we evaluate the selection ability of the models by computing the True
            Positive Rate (TPR), the False Positive Rate (FPR), and the Matthews correlation
            coefficient (MCC). Simulation results point to a number of interesting features.
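The evaluation metrics above can be sketched as follows, with estimates stacked by replication for the MSE = VAR + BIAS² decomposition and boolean selection masks for the classification metrics; the example arrays are illustrative, not the paper's simulation output:

```python
import numpy as np

def mse_decomposition(estimates, truth):
    """Average MSE, VAR and squared bias over Monte Carlo replications (rows)."""
    bias2 = (estimates.mean(axis=0) - truth) ** 2
    var = estimates.var(axis=0)
    return (var + bias2).mean(), var.mean(), bias2.mean()

def selection_metrics(selected, active):
    """TPR, FPR and Matthews correlation coefficient from boolean masks."""
    tp = int(np.sum(selected & active))
    fp = int(np.sum(selected & ~active))
    fn = int(np.sum(~selected & active))
    tn = int(np.sum(~selected & ~active))
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return tpr, fpr, mcc

# Illustrative masks: 2 of 5 predictors truly active, 3 selected.
tpr, fpr, mcc = selection_metrics(
    selected=np.array([True, True, False, False, True]),
    active=np.array([True, False, False, False, True]),
)
```

The MCC is a useful complement to TPR and FPR here because it balances all four cells of the confusion matrix, which matters when the active set is small relative to the number of candidate predictors.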
            First, the models perform overall quite similarly in terms of MSE, although the
            BMIDAS-AGL-SS seems to perform somewhat better across DGPs, mainly by
            providing the smallest bias. This leads to the highest TPR and the lowest
            FPR for this model, entailing better classification of the active and
            inactive sets across
            simulations.  Second,  the  MSE  increases  substantially  with  the  degree  of
            correlation in the design matrix, but it tends to decrease with more irrelevant
            predictors. It follows that the models' ability to select and estimate the
            coefficients of the relevant variables is preserved regardless of the
            increase in the degree of sparsity. This result is confirmed by the TPR,
            which is relatively high, hovering around 80-90% for moderate correlation,
            and overall stable across the different numbers of predictors, suggesting
            that the models can select the correct sparsity pattern with high
            probability even in finite samples. However, it is worth noting that the
            TPR drops to 30-50% for very

                                                               352 | ISI WSC 2019