Mathematics – Statistics Theory
Scientific paper
2005-08-16
Annals of Statistics 2005, Vol. 33, No. 4, 1617-1642
Published at http://dx.doi.org/10.1214/009053605000000200 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
10.1214/009053605000000200
Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests.
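To make the perturb-then-minorize idea concrete, the following is a minimal illustrative sketch (Python/NumPy, not the authors' code) for the special case of a least-squares likelihood with an L1 (LASSO) penalty. The nondifferentiable penalty term lam*|b_j| is majorized at the current iterate by a quadratic with curvature lam/(|b_j| + eps), where eps is the small perturbation; each MM update then reduces to a ridge-type linear solve. The function name mm_lasso, the choice of eps, and the synthetic data are assumptions made for this example; for other penalties the same pattern applies with the penalty derivative replacing lam in the weights.

    # Sketch of an MM iteration for L1-penalized least squares.
    # Majorization used: |b| <= b^2 / (2*(|b_k| + eps)) + const.
    import numpy as np

    def mm_lasso(X, y, lam, eps=1e-6, max_iter=200, tol=1e-8):
        """Minimize 0.5*||y - X b||^2 + lam*sum(|b_j|) by MM updates."""
        n, p = X.shape
        XtX, Xty = X.T @ X, X.T @ y
        b = np.linalg.solve(XtX + lam * np.eye(p), Xty)   # ridge start
        for _ in range(max_iter):
            D = np.diag(lam / (np.abs(b) + eps))          # majorizer curvature
            b_new = np.linalg.solve(XtX + D, Xty)         # MM (ridge-type) update
            if np.max(np.abs(b_new - b)) < tol:
                b = b_new
                break
            b = b_new
        return b

    # Small usage example on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
    y = X @ beta_true + rng.standard_normal(100)
    print(np.round(mm_lasso(X, y, lam=5.0), 3))

Because the perturbation keeps the majorizer's curvature finite even when a coefficient reaches zero, the iterates shrink small coefficients toward zero rather than setting them exactly to zero; in practice coefficients below a small threshold can be treated as excluded.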
David R. Hunter
Runze Li