Page 272 - Contributed Paper Session (CPS) - Volume 2
CPS1854 Shonosuke S. et al.
estimators of β and b are defined as the minimizer of Q(β, b). Although Q(β, b) seems hard to minimize directly because of its complicated structure, the minimization can be carried out easily via the MM-algorithm (Hunter and Lange, 2004).
2.2 MM-algorithm
Our MM-algorithm consists of the following iterative updates:
$$\beta^{(t+1)} \leftarrow \left(\sum_{i=1}^{m}\sum_{j=1}^{n_i} w_{ij}^{(t)}\, x_{ij}x_{ij}^{\top}\right)^{-1}\sum_{i=1}^{m}\sum_{j=1}^{n_i} w_{ij}^{(t)}\, x_{ij}\left(y_{ij}-z_{ij}^{\top}b_i^{(t)}\right)$$

$$\sigma^{2(t+1)} \leftarrow \frac{1+\gamma}{\sum_{i=1}^{m}\sum_{j=1}^{n_i} w_{ij}^{(t)}}\sum_{i=1}^{m}\sum_{j=1}^{n_i} w_{ij}^{(t)}\left(y_{ij}-x_{ij}^{\top}\beta^{(t+1)}-z_{ij}^{\top}b_i^{(t)}\right)^{2}$$

$$b_i^{(t+1)} \leftarrow \left\{\sigma^{-2(t+1)}\sum_{j=1}^{n_i} w_{ij}^{(t)}\, z_{ij}z_{ij}^{\top} + u_i^{(t)}\,\Psi^{(t)-1}\right\}^{-1}\sigma^{-2(t+1)}\sum_{j=1}^{n_i} w_{ij}^{(t)}\, z_{ij}\left(y_{ij}-x_{ij}^{\top}\beta^{(t+1)}\right)$$

$$\Psi^{(t+1)} \leftarrow \frac{1+\gamma}{\sum_{i=1}^{m} u_i^{(t)}}\sum_{i=1}^{m} u_i^{(t)}\, b_i^{(t+1)}b_i^{(t+1)\top}$$

where $w_{ij}^{(t)}$ and $u_i^{(t)}$ denote the robustness weights evaluated at the current parameter values.
This updating process ensures that the value of the objective function (5) decreases monotonically at each step.
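The iteration above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a random-intercept model and density-power-type weights of the hypothetical form w_ij = exp(−γ r_ij² / (2σ²)) and u_i = exp(−γ b_i² / (2τ²)); the names `mm_robust_lmm`, `gamma`, and `group` are our own.

```python
import numpy as np

def mm_robust_lmm(y, X, group, gamma=0.3, n_iter=50):
    """Hypothetical MM-style iteration for a robust random-intercept model.

    Each step alternates weighted least squares for beta, a weighted
    residual variance for sigma^2, a weighted ridge (BLUP-type) update
    for the random intercepts b_i, and a weighted variance for tau^2.
    Outlying observations receive small weights and are downweighted.
    """
    m = int(group.max()) + 1
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # non-robust start
    b = np.zeros(m)                               # random intercepts
    sigma2, tau2 = 1.0, 1.0                       # error / RE variances
    for _ in range(n_iter):
        resid = y - X @ beta - b[group]
        w = np.exp(-gamma * resid**2 / (2 * sigma2))  # observation weights
        u = np.exp(-gamma * b**2 / (2 * tau2))        # cluster weights
        # beta: weighted least squares on the partial residual y - b
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ (y - b[group]))
        # sigma^2: weighted residual variance (with the (1+gamma) factor)
        resid = y - X @ beta - b[group]
        sigma2 = (1 + gamma) * np.sum(w * resid**2) / np.sum(w)
        # b_i: weighted ridge/BLUP-type update, cluster by cluster
        sw = np.bincount(group, weights=w, minlength=m)
        sr = np.bincount(group, weights=w * (y - X @ beta), minlength=m)
        b = (sr / sigma2) / (sw / sigma2 + u / tau2)
        # tau^2: weighted variance of the random effects
        tau2 = (1 + gamma) * np.sum(u * b**2) / np.sum(u)
    return beta, b, sigma2, tau2
```

Because every update is a closed-form weighted average, each sweep is as cheap as one pass of standard mixed-model fitting; robustness comes entirely through the weights.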
3. Robust Joint Selection via Regularization
We can add penalty terms to Q(β, b) to achieve robust and joint sparse estimation of β and b. Following Hui et al. (2017), we consider the following penalized objective function:

$$Q_{p}(\beta, b) = Q(\beta, b) + \lambda\sum_{k=1}^{p} w_{k}\,|\beta_{k}| + \lambda\sum_{\ell=1}^{q} \tilde{w}_{\ell}\,\|b_{\circ\ell}\|, \qquad (6)$$

where $w_{k}$ and $\tilde{w}_{\ell}$ are adaptive weights based on preliminary estimates of $\beta_{k}$ and $b_{\circ\ell}$, respectively, $b_{\circ\ell} = (b_{1\ell},\ldots,b_{m\ell})^{\top}$ collects all the coefficients corresponding to the $\ell$th random effect, and $\|\cdot\|$ denotes the L2 norm. That is, we use an adaptive lasso penalty with weights $w_{k}$ for the fixed effects and an adaptive group lasso penalty with weights $\tilde{w}_{\ell}$ for the random effects, linked by a single tuning parameter λ > 0.
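Both penalties in (6) admit closed-form proximal (thresholding) operators, which is what keeps penalized updates cheap: the adaptive lasso thresholds each fixed-effect coordinate, while the adaptive group lasso shrinks each random-effect block toward zero as a whole. A minimal sketch (function names are ours, not from the paper):

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: proximal operator of t * |.|_1.

    Used for the adaptive lasso part; coordinate k is thresholded at
    t = lambda * w_k, so small coefficients are set exactly to zero.
    """
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def group_soft_threshold(z, t):
    """Blockwise soft-thresholding: proximal operator of t * ||.||_2.

    Used for the adaptive group lasso part; the whole block z (all
    coefficients of one random effect) is zeroed if its norm <= t,
    otherwise shrunk toward zero, giving joint (groupwise) selection.
    """
    norm = np.linalg.norm(z)
    if norm <= t:
        return np.zeros_like(z)
    return (1.0 - t / norm) * z
```

With the adaptive weights of (6), coordinate k of β would be thresholded at λw_k and the ℓth random-effect block at λw̃_ℓ; groupwise thresholding is why an entire random effect can be removed from the model at once.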
We can again use the MM-algorithm to obtain the minimizer of Q_p(β, b). A majorization function of Q_p(β, b) is obtained as follows:
261 | ISI WSC 2019