Page 122 - Contributed Paper Session (CPS) - Volume 5

CPS1141 Mahdi Roozbeh
symmetric, positive definite known matrix and $\sigma^2$ is an unknown parameter. To estimate the parameters of model (2.1), we first remove the nonparametric effect. Assuming $\beta$ to be known, a natural nonparametric estimator of $f(\cdot)$ is $\hat{f}(t) = k^\top(t)(y - X\beta)$, with $k(t) = (K_{\omega_n}(t, t_1), \ldots, K_{\omega_n}(t, t_n))^\top$, where $K_{\omega_n}(\cdot,\cdot)$ is a kernel function of order $m$ with bandwidth parameter $\omega_n$. For the existence of $(\hat{\beta}, \hat{f})$ at the optimal convergence rate $n^{-4/5}$ in semiparametric regression models with probability one, we need some conditions on the kernel function; see Muller (2000) for more details. Replacing $f(t)$ by $\hat{f}(t)$ in (2.1), the model simplifies to

\[
\tilde{y} = \tilde{X}\beta + \tilde{\varepsilon}, \qquad (2.2)
\]

where $\tilde{y} = (I_n - K)y$ and $\tilde{X} = (I_n - K)X$; $K$ is the smoother matrix with $(i,j)$th component $K_{\omega_n}(t_i, t_j)$. We can estimate the linear parameter $\beta$ in (2.1) under the assumption $\mathrm{cov}(\varepsilon) = \sigma^2 V$ by minimizing the generalized sum of squared errors

\[
\mathrm{SSE}(\beta) = (\tilde{y} - \tilde{X}\beta)^\top V^{-1} (\tilde{y} - \tilde{X}\beta). \qquad (2.3)
\]
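The transformation above can be illustrated numerically. A minimal sketch, assuming a Gaussian kernel with row-normalized (Nadaraya-Watson) weights for the smoother matrix $K$; the data, bandwidth, and all names here are illustrative, not the paper's:

```python
import numpy as np

def smoother_matrix(t, bandwidth):
    """Smoother matrix K with (i, j)th entry built from K_wn(t_i, t_j);
    rows are normalized so each smoothed value is a weighted average."""
    d = (t[:, None] - t[None, :]) / bandwidth
    W = np.exp(-0.5 * d ** 2)                 # Gaussian kernel (illustrative choice)
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n = 100
t = np.sort(rng.uniform(0.0, 1.0, n))
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + np.sin(2 * np.pi * t) + rng.normal(0, 0.1, n)

K = smoother_matrix(t, bandwidth=0.1)
I = np.eye(n)
y_tilde = (I - K) @ y                         # remove the nonparametric effect
X_tilde = (I - K) @ X
```

With $\tilde{y}$ and $\tilde{X}$ in hand, (2.3) reduces to a weighted least squares problem in $\beta$ alone.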
The unique minimizer of (2.3) is the partially generalized least squares estimator (PGLSE) given by

\[
\hat{\beta} = (\tilde{X}^\top V^{-1} \tilde{X})^{-1} \tilde{X}^\top V^{-1} \tilde{y}.
\]
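The closed-form minimizer of (2.3) is a standard generalized least squares computation; a minimal numerical sketch (data and names illustrative), noting that with $V = I_n$ the PGLSE reduces to ordinary least squares on the transformed data:

```python
import numpy as np

def pglse(X_t, y_t, V):
    """PGLSE: beta_hat = (X~' V^{-1} X~)^{-1} X~' V^{-1} y~ (V assumed SPD)."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X_t.T @ Vi @ X_t, X_t.T @ Vi @ y_t)

rng = np.random.default_rng(1)
n, p = 80, 3
X_t = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
y_t = X_t @ beta + 0.01 * rng.normal(size=n)

beta_hat = pglse(X_t, y_t, np.eye(n))         # V = I: plain OLS on (y~, X~)
```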
Motivated by Fallahpour et al. (2012), we partition the regression parameter $\beta$ as $\beta = (\beta_1^\top, \beta_2^\top)^\top$, where the subvector $\beta_i$ has dimension $p_i$, $i = 1, 2$, and $p_1 + p_2 = p$. Thus the underlying model has the form

\[
\tilde{y} = \tilde{X}_1 \beta_1 + \tilde{X}_2 \beta_2 + \tilde{\varepsilon},
\]

where $\tilde{X}$ is partitioned according to $(\tilde{X}_1, \tilde{X}_2)$ in such a way that $\tilde{X}_i$ is an $n \times p_i$ submatrix, $i = 1, 2$. With respect to this partitioning, the PGLSEs of $\beta_1$ and $\beta_2$ are respectively given by

\[
\hat{\beta}_1 = (\tilde{X}_1^\top A_2 \tilde{X}_1)^{-1} \tilde{X}_1^\top A_2 \tilde{y}, \qquad
\hat{\beta}_2 = (\tilde{X}_2^\top A_1 \tilde{X}_2)^{-1} \tilde{X}_2^\top A_1 \tilde{y},
\]

where $A_j = V^{-1} - V^{-1}\tilde{X}_j(\tilde{X}_j^\top V^{-1}\tilde{X}_j)^{-1}\tilde{X}_j^\top V^{-1}$, $j = 1, 2$.
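As a consistency check, the partitioned estimators must coincide with the corresponding blocks of the full PGLSE (a Frisch-Waugh-type identity). A sketch with illustrative data, writing the annihilator as $A_j = V^{-1} - V^{-1}\tilde{X}_j(\tilde{X}_j^\top V^{-1}\tilde{X}_j)^{-1}\tilde{X}_j^\top V^{-1}$ (this notation is ours for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p1, p2 = 60, 2, 3
X1 = rng.normal(size=(n, p1))
X2 = rng.normal(size=(n, p2))
X = np.hstack([X1, X2])
y = X @ rng.normal(size=p1 + p2) + rng.normal(size=n)
V = np.eye(n)                                 # illustrative; any SPD V works
Vi = np.linalg.inv(V)

def annihilator(Xj, Vi):
    """A_j = V^{-1} - V^{-1} X_j (X_j' V^{-1} X_j)^{-1} X_j' V^{-1}."""
    B = Vi @ Xj
    return Vi - B @ np.linalg.solve(Xj.T @ B, B.T)

A2, A1 = annihilator(X2, Vi), annihilator(X1, Vi)
beta1 = np.linalg.solve(X1.T @ A2 @ X1, X1.T @ A2 @ y)   # PGLSE of beta_1
beta2 = np.linalg.solve(X2.T @ A1 @ X2, X2.T @ A1 @ y)   # PGLSE of beta_2
beta_full = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)  # full PGLSE
```

Stacking `beta1` and `beta2` recovers `beta_full` up to numerical tolerance, confirming the partitioned formulas.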

The sparse model is defined when $H_0: \beta_2 = 0$ is true. In this paper, we refer to the sparse model as the restricted semiparametric regression model (RSRM). For the RSRM, the partially generalized restricted least squares estimator (PGRLSE) has the form

\[
\hat{\beta}_1^{\mathrm{R}} = (\tilde{X}_1^\top V^{-1} \tilde{X}_1)^{-1} \tilde{X}_1^\top V^{-1} \tilde{y}.
\]
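Under sparsity ($\beta_2 = 0$) the restricted fit uses only $\tilde{X}_1$, dropping the variance contributed by estimating $\beta_2$. A small simulation sketch of the two estimators of $\beta_1$ (with $V = I_n$ for simplicity; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p1, p2 = 200, 2, 3
X1 = rng.normal(size=(n, p1))
X2 = rng.normal(size=(n, p2))
beta1_true = np.array([1.5, -0.5])
y = X1 @ beta1_true + rng.normal(size=n)      # sparse model: beta_2 = 0 holds

X = np.hstack([X1, X2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]    # PGLSE (V = I)
beta_restr = np.linalg.lstsq(X1, y, rcond=None)[0]  # PGRLSE: beta_2 fixed at 0

err_full = np.sum((beta_full[:p1] - beta1_true) ** 2)
err_restr = np.sum((beta_restr - beta1_true) ** 2)
```

On average over repeated draws, `err_restr` is no larger than `err_full` when the restriction is true, which is the sense in which the PGRLSE dominates under sparsity.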



According to Saleh (2006), the PGRLSE performs better than the PGLSE when the model is sparse; however, the former estimator performs poorly as $\beta_2$ deviates


111 | ISI WSC 2019