
CPS1973 Matúš M. et al.
In general, for λ → 0 we expect changepoints to occur at each observation and at every level of smoothness, while for λ → ∞ no changepoints are expected and the final fit is fully determined by the vector of parameters in ℝ^p of the smooth function alone.
The minimization problem (3) is convex and can be solved using standard optimization tools. The smoothing parameter and the sparsity parameter, both strictly positive, can be selected, for instance, by some cross-validation technique. Alternatively, one can compute the whole solution path over the sparsity parameter using the LARS algorithm (Efron et al., 2004) and choose the final model from the set of plausible models along this path.
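
As a minimal sketch (not the authors' implementation) of these two selection strategies: the design matrix Z and the response y below are synthetic placeholders standing in for the basis expansion that model (2) would induce.

```python
# Sketch: pick the sparsity parameter by cross-validation, or trace the
# whole LASSO solution path with LARS and choose a model along it.
# Z and y are synthetic placeholders for the design implied by model (2).
import numpy as np
from sklearn.linear_model import LassoCV, lars_path

rng = np.random.default_rng(0)
n, q = 200, 60
Z = rng.standard_normal((n, q))                   # placeholder design matrix
beta_true = np.zeros(q)
beta_true[[5, 20]] = [2.0, -1.5]                  # two active "changepoint" coefficients
y = Z @ beta_true + 0.3 * rng.standard_normal(n)

# Whole solution path via LARS (Efron et al., 2004):
alphas, _, coefs = lars_path(Z, y, method="lasso")
# coefs[:, k] is the coefficient vector at penalty level alphas[k];
# the final model is chosen among these candidates.

# Or select a single penalty level by cross-validation:
fit = LassoCV(cv=5).fit(Z, y)
print("selected penalty:", fit.alpha_)
print("active coefficients:", np.flatnonzero(fit.coef_))
```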

2.1 Independent Changepoints: The ℓ₁-type penalty term P_λ(β_0, . . . , β_{p−1}) in (3) can take various forms. Let us first mention the simplest scenario, where there is no hierarchical restriction imposed on the changepoint occurrences: any discontinuity point in the function itself or in its derivatives can occur on its own. This property can be expressed by a specific penalty form, where

(4)    P_λ(β_0, . . . , β_{p−1}) = ∑_{j=0}^{p−1} ∑_{i=1}^{n} |β_i^{(j)}|.
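
To make the bookkeeping in (4) concrete, here is a tiny sketch; the array layout (smoothness levels in rows, candidate locations in columns) is an assumption made only for this illustration.

```python
# Independent-changepoint penalty (4): a plain l1 norm over all parameters
# beta_i^{(j)}, i.e. over every smoothness level j and every location i.
import numpy as np

def penalty_independent(beta):
    """beta[j, i-1] plays the role of beta_i^{(j)}; shape (p, n)."""
    return float(np.abs(beta).sum())

beta = np.array([[0.0, 1.2, 0.0],     # level j = 0: a jump in the function itself
                 [0.0, 0.0, -0.7]])   # level j = 1: a break in the first derivative
print(penalty_independent(beta))      # |1.2| + |-0.7| = 1.9
```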

Alternatively, one can consider a whole set of regularization parameters λ_J = (λ_{J0}, . . . , λ_{J(p−1)})^⊤ to control for the sparsity in each smoothness level j ∈ {0, . . . , p − 1} separately.
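
Continuing the same illustrative layout, a brief sketch of this per-level variant; the concrete weights below are arbitrary and serve only as an example.

```python
# Per-level penalty: each smoothness level j gets its own sparsity
# parameter, so breaks in derivatives can be penalized differently
# from jumps in the function itself.
import numpy as np

def penalty_per_level(beta, lam):
    """beta: shape (p, n); lam: shape (p,), one sparsity parameter per level."""
    return float(lam @ np.abs(beta).sum(axis=1))

beta = np.array([[0.0, 1.2, 0.0],
                 [0.0, 0.0, -0.7]])
lam = np.array([1.0, 5.0])            # illustrative: slope breaks cost five times more
print(penalty_per_level(beta, lam))   # 1.0*1.2 + 5.0*0.7 = 4.7
```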

2.2 Simultaneous Changepoints: Unlike the previous situation, in some scenarios it can be suitable to link the changepoints at a given location across all levels j ∈ {0, . . . , p − 1}. The motivation comes from practical examples where the shock processes in (2) are expected to become active all at the same point. This property can be implemented in (3) by replacing the standard LASSO penalty in (4) with the group LASSO penalty

(5)    P_λ(β_0, . . . , β_{p−1}) = ∑_{i=1}^{n} ( ∑_{j=0}^{p−1} (β_i^{(j)})² )^{1/2},

which either selects the whole group of parameters β_i^{(0)}, . . . , β_i^{(p−1)} for some i ∈ {1, . . . , n} to be nonzero, or sets all parameters within this group exactly to zero.
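
Under the same illustrative layout as above, a sketch of how such a group penalty scores the parameters: the whole column of a candidate location enters or leaves the model together.

```python
# Group LASSO penalty (5): one group per location i, containing
# beta_i^{(0)}, ..., beta_i^{(p-1)}; summing the groupwise l2 norms
# forces whole groups to be zero or nonzero simultaneously.
import numpy as np

def penalty_group(beta):
    """beta: shape (p, n); each column is one group."""
    return float(np.linalg.norm(beta, axis=0).sum())

beta = np.array([[0.0, 1.2, 0.0],
                 [0.0, 0.9, 0.0]])    # location i = 2 is active at both levels
print(penalty_group(beta))            # sqrt(1.2**2 + 0.9**2) = 1.5
```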

2.3 Hierarchical Changepoints: An innovative approach to changepoints in nonparametric regression models can be obtained by using the overlap
