are used in estimation. We also note that while a general version of the
bridging-with-factors model can be found in Giannone, Reichlin and Small
(2008), our model is simplified in that it makes no assumptions about how the
factors evolve over time; such assumptions tend to restrict the behavior of
the factors.
Model-averaging algorithms
Five procedures are selected to average among specifications of the above
GB and BF models. We define three categories of model averaging: simple
averaging, information-criterion-based averaging, and Bates-Granger (BG)
averaging with LOO-CV.¹ We focus narrowly on averaging the nowcasts
associated with each model, but because the models are linear, this is
equivalent to averaging the parameters.
In the first category, simple averaging, models are averaged using either
equally weighted means or medians of the models, giving us a total of two
simple averaging techniques. Simple averaging is typically optimal when short
samples hamper precise estimation of the weights (Smith and Wallis, 2009).
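As a concrete illustration, the two simple-averaging rules amount to the
minimal sketch below; the nowcast values are hypothetical.

import numpy as np

# Hypothetical nowcasts from H = 5 candidate model specifications.
nowcasts = np.array([2.1, 1.8, 2.4, 2.0, 1.7])

mean_nowcast = nowcasts.mean()        # equally weighted mean
median_nowcast = np.median(nowcasts)  # median across models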
In the second category, IC-based averaging, weights are defined to be
proportional to the exponential of the negative of an information criterion;
in general,
$$ w_h = \frac{\exp\{-\mathrm{IC}_h\}}{\sum_{h'=1}^{H} \exp\{-\mathrm{IC}_{h'}\}}. \tag{4} $$
Here, $h$ indexes the model, ranging from 1 to $H$, and $\mathrm{IC}_h$ is the
information criterion associated with the estimated model $h$. In this paper,
we use the Akaike information criterion (AIC) and the Bayesian information
criterion (BIC) for this purpose, for a total of two IC-based averaging
techniques.
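A minimal sketch of equation (4), using hypothetical AIC values; subtracting
the minimum criterion before exponentiating cancels in the ratio but guards
against numerical underflow when the criteria are large.

import numpy as np

def ic_weights(ic):
    # Weights proportional to exp(-IC_h), as in equation (4).
    ic = np.asarray(ic, dtype=float)
    w = np.exp(-(ic - ic.min()))  # shift by min(ic) for numerical stability
    return w / w.sum()

# Hypothetical AIC values and nowcasts for H = 5 estimated models.
aic = np.array([103.2, 101.7, 104.9, 102.3, 101.9])
nowcasts = np.array([2.1, 1.8, 2.4, 2.0, 1.7])
combined = ic_weights(aic) @ nowcasts  # IC-weighted average nowcast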
The third model-averaging method, BG averaging (Bates and Granger,
1969), has weights that are proportional to the inverse of the sample forecast
variance of model $h$, denoted $\hat{\sigma}_h^2$,

$$ w_h = \frac{1/\hat{\sigma}_h^2}{\sum_{h'=1}^{H} 1/\hat{\sigma}_{h'}^2}. \tag{5} $$
To ensure that we have a “clean” pseudo-out-of-sample subset with which to
compare nowcasts, the inverse forecast variances are computed in-sample
using LOO-CV. The LOO-CV algorithm iterates over the in-sample
observations, leaving each observation out once. An error is computed for
each observation that was left out, based on parameters estimated from the
remaining observations.
¹ Each of these techniques is discussed by Diks and Vrugt (2017). Our Bates-Granger
technique differs slightly in that we are using the leave-one-out cross-validation errors,
rather than in-sample residuals.
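The sketch below illustrates this procedure under simplifying assumptions:
the candidate models are plain OLS regressions on hypothetical simulated
data, and each model's forecast variance in equation (5) is estimated by its
mean squared LOO-CV error.

import numpy as np

def loocv_errors(X, y):
    # Leave-one-out errors for an OLS regression of y on X
    # (X should already contain a constant column).
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                        # drop observation i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errors[i] = y[i] - X[i] @ beta                  # error on the held-out point
    return errors

def bates_granger_weights(errors_by_model):
    # Inverse-variance weights of equation (5); each model's forecast
    # variance is estimated by its mean squared LOO-CV error.
    inv_var = np.array([1.0 / np.mean(e ** 2) for e in errors_by_model])
    return inv_var / inv_var.sum()

# Hypothetical example: two regression specifications on simulated data.
rng = np.random.default_rng(0)
n = 40
x = rng.normal(size=(n, 2))
y = 0.5 * x[:, 0] + rng.normal(size=n)
X1 = np.column_stack([np.ones(n), x[:, 0]])   # smaller specification
X2 = np.column_stack([np.ones(n), x])         # larger specification
weights = bates_granger_weights([loocv_errors(X1, y), loocv_errors(X2, y)])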