This yields the residual matrix ε̂_t = Y_t − φ̂Y_{t−1} − f̂_t that will be used for the sieve bootstrap procedure in the succeeding algorithm.
Algorithm 3: Sieve Bootstrap on the Bivariate Residual Matrix

The Gram-Schmidt process generates a series of quantities by means of scalar products of vectors. Winch (1996) proved that these quantities are identical to those that arise in the solution of the normal equations by compact elimination methods. The Gram-Schmidt orthogonalization is applied to the covariance matrix Σ̂ of the residual matrix ε̂, and the orthogonal transformation P is derived.
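A minimal numerical sketch of this construction is given below, assuming the residuals from Algorithm 2 are stored in an n × 2 array E (the name is illustrative). The paper derives P by Gram-Schmidt orthogonalization (see the Appendix); the sketch instead uses NumPy's Cholesky factorization as a stand-in square root satisfying Σ̂ = PP′.

    import numpy as np

    def residual_square_root(E):
        """Estimate the residual covariance and a square-root matrix P with
        Sigma_hat = P P'. E is assumed to be an n x 2 residual matrix."""
        E_c = E - E.mean(axis=0)              # center the residuals
        sigma_hat = (E_c.T @ E_c) / len(E_c)  # estimated covariance matrix
        # Stand-in for the paper's Gram-Schmidt construction: a Cholesky
        # factor is also a square root of Sigma_hat.
        P = np.linalg.cholesky(sigma_hat)
        return sigma_hat, P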
1.  From the residual matrix ε̂_t = Y_t − φ̂Y_{t−1} − f̂_t in Algorithm 2, derive the orthogonal transformation P of the covariance matrix Σ̂ of ε̂ by the Gram-Schmidt orthogonalization [refer to Appendix]. P is the square root matrix of Σ̂; that is, Σ̂ = PP′.
2.  Generate a bivariate matrix W of length n such that W = [W_1, W_2], where W_1, W_2 ~ N(0,1).
3.  Resample the rows of W B times (B ≥ 200), each resample of length n. Multiply the orthogonal transformation P to each of the B resampled matrices W^(b), so that Cov(PW^(b)) = P Cov(W^(b)) P′ = P I P′ = PP′ = Σ̂, b = 1, …, B.
4.  From each ε^(b) = PW^(b), recreate B time series Y_t^(b) = φ̂Y_{t−1}^(b) + f̂_t + ε_t^(b), for b = 1, …, B, where φ̂ is the initial estimate of φ and f̂ is the estimate of the smooth functions of SPC obtained in Step 1.
5.  Estimate the parameters until convergence on each recreated time series as in Algorithm 2.
6.  Obtain φ̂^(1), φ̂^(2), …, φ̂^(B) from each of the recreated time series Y_t^(1), Y_t^(2), …, Y_t^(B).
7.  The average of the φ̂^(b), b = 1, …, B, is the final estimate of φ; a computational sketch of Steps 2 to 7 follows this list.
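The loop below is a rough computational sketch of Steps 2 to 7 under assumed inputs: the square-root matrix P from Step 1, a 2 × 2 initial estimate phi_hat, an n × 2 array f_hat holding the fitted smooth-function values, and a refit callable standing in for the re-estimation cycle of Algorithm 2. All names and defaults are illustrative, not the paper's code.

    import numpy as np

    rng = np.random.default_rng(0)

    def sieve_bootstrap_phi(P, phi_hat, f_hat, refit, B=200):
        """Sketch of Steps 2-7: resample standardized innovations, rescale them
        with P, rebuild the series, re-estimate phi, and average the estimates."""
        n = f_hat.shape[0]
        phi_reps = []
        for b in range(B):
            W = rng.standard_normal((n, 2))          # Step 2: bivariate N(0,1) matrix
            W_b = W[rng.integers(0, n, size=n), :]   # Step 3: resample rows of W
            eps_b = W_b @ P.T                        # rescale so Cov(P W) = P P' = Sigma_hat
            Y_b = np.zeros((n, 2))                   # Step 4: recreate the series
            for t in range(1, n):                    # (initial value fixed at 0 for the sketch)
                Y_b[t] = phi_hat @ Y_b[t - 1] + f_hat[t] + eps_b[t]
            phi_reps.append(refit(Y_b))              # Steps 5-6: re-estimate phi as in Algorithm 2
        return np.mean(phi_reps, axis=0)             # Step 7: average as the final estimate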
                                       
Lagged values up to lag h of each of the generated input series X_{i,t}, i = 1, …, p, are included in the covariate matrix X_{t−j}, j = 1, …, h and i = 1, …, p. That is, the covariate matrix contains X_{i,t−1}, …, X_{i,t−h} for every input series i.
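As a small illustration of how such a lagged covariate matrix can be assembled, assuming the generated input series are stored column-wise in an n × p array (the function name is hypothetical):

    import numpy as np

    def lagged_covariate_matrix(X, h):
        """Stack lags 1..h of every input series column-wise. Each row of the
        result holds X_{t-1}, ..., X_{t-h} for all p series; the first h time
        points are dropped because their lags are unavailable."""
        n, p = X.shape
        lags = [X[h - j : n - j, :] for j in range(1, h + 1)]  # lag-j block
        return np.hstack(lags)                                 # shape (n - h, p * h)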

Each of the X_{1,t}, X_{2,t}, …, X_{p,t} baseline input time series has the same mean of 20 and the same variance of 4. Each baseline input series is simulated from the AR(1) process

    X_{i,t} − 20 = 0.5(X_{i,t−1} − 20) + a_{i,t},   where a_{i,t} ~ N(0,1).          [3]
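A brief sketch of simulating these baseline inputs according to equation [3]; the function name, the number of series, and the short warm-up before recording are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_baseline_inputs(n, p, warm_up=200):
        """Simulate p independent AR(1) input series with mean 20 as in [3]:
        X_t - 20 = 0.5 (X_{t-1} - 20) + a_t, with a_t ~ N(0, 1)."""
        x = np.full(p, 20.0)                   # start each series at its mean
        out = np.empty((n, p))
        for t in range(warm_up + n):
            x = 20.0 + 0.5 * (x - 20.0) + rng.standard_normal(p)
            if t >= warm_up:                   # keep only post-warm-up values
                out[t - warm_up] = x
        return out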
                                                              
                                                 
                                      
The bivariate time series data Y_t is generated using its immediate past value Y_{t−1} with the output autocorrelation matrix φ, the function of the covariate matrix X_{t−j}, and the error matrix ε_t ~ N(0, Σ_ε), where

    Σ_ε = [ σ²_11  σ_12  ]
          [ σ_12   σ²_22 ]

and σ_12 ≠ 0. A burn-in period of 1,000 is used for the initialization of values.
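The sketch below mirrors this data-generating process: Y_t depends on Y_{t−1} through φ, on the lagged covariates through a function f, and on a correlated bivariate error with σ_12 ≠ 0, and the first 1,000 generated values are discarded as burn-in. The particular φ, f, and covariance entries are illustrative placeholders, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_bivariate_output(X, phi, f, sigma_eps, burn_in=1000):
        """X: (burn_in + n) x p input series; phi: 2x2 output autocorrelation
        matrix; f: maps a row of covariates to a length-2 vector; sigma_eps:
        2x2 error covariance with nonzero off-diagonal. Returns n x 2 output."""
        T = X.shape[0]
        eps = rng.multivariate_normal(np.zeros(2), sigma_eps, size=T)
        Y = np.zeros((T, 2))
        for t in range(1, T):
            Y[t] = phi @ Y[t - 1] + f(X[t - 1]) + eps[t]   # covariates enter at lag 1 here
        return Y[burn_in:]                                 # discard the burn-in period

    # Illustrative usage (placeholder settings):
    phi = np.array([[0.4, 0.1], [0.1, 0.4]])
    sigma_eps = np.array([[1.0, 0.5], [0.5, 1.0]])         # sigma_12 != 0
    X = 20.0 + 2.0 * rng.standard_normal((2000, 3))        # stand-in for the inputs of [3]
    f = lambda x: np.array([0.05 * x.sum(), 0.03 * x.sum()])
    Y = simulate_bivariate_output(X, phi, f, sigma_eps, burn_in=1000)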
To test the sensitivity of the model to misspecification error, the variance of the error matrix is made 3 or 6 times larger. To achieve robust