Mathematics – Statistics Theory
Scientific paper
2007-02-23
Annals of Statistics 2006, Vol. 34, No. 5, 2367-2386
Published at http://dx.doi.org/10.1214/009053606000000768 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
10.1214/009053606000000768
Let $(Y,X_1,\ldots,X_m)$ be a random vector. It is desired to predict $Y$ based on $(X_1,\ldots,X_m)$. Examples of prediction methods are regression, classification using logistic regression or separating hyperplanes, and so on. We consider the problem of best subset selection, and study it in the context $m=n^{\alpha}$, $\alpha>1$, where $n$ is the number of observations. We investigate procedures that are based on empirical risk minimization. It is shown that, in common cases, we should aim to find the best subset among those of size which is of order $o(n/\log(n))$. It is also shown that, in some ``asymptotic sense,'' when assuming a certain sparsity condition, there is no loss in letting $m$ be much larger than $n$, for example, $m=n^{\alpha}$, $\alpha>1$. This is in comparison to starting with the ``best'' subset of size smaller than $n$, regardless of the value of $\alpha$. We then study conditions under which empirical risk minimization subject to an $l_1$ constraint yields nearly the best subset. These results extend some recent results obtained by Greenshtein and Ritov. Finally, we present a high-dimensional simulation study of a ``boosting type'' classification procedure.
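The regime the abstract studies, $m = n^{\alpha}$ predictors with $\alpha > 1$ under a sparsity condition, can be illustrated with $l_1$-penalized least squares, the Lagrangian form of empirical risk minimization under an $l_1$ constraint. The sketch below is not the paper's procedure; it is a minimal illustration assuming a synthetic sparse linear model and scikit-learn's `Lasso` (the penalty level `0.1` is an arbitrary choice, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative setup: m = n^alpha predictors, alpha > 1, so m >> n,
# with a sparse true coefficient vector (the sparsity condition under
# which letting m greatly exceed n incurs no asymptotic loss).
rng = np.random.default_rng(0)
n, alpha = 100, 1.5
m = int(n ** alpha)                       # m = n^alpha = 1000 > n

beta = np.zeros(m)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]    # only 5 relevant predictors

X = rng.standard_normal((n, m))
y = X @ beta + 0.5 * rng.standard_normal(n)

# Penalized form of l1-constrained empirical risk minimization:
# minimize (1/2n)||y - X b||^2 + alpha * ||b||_1, equivalent to the
# constraint ||b||_1 <= B for a corresponding bound B.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("nonzero coefficients:", len(selected))
```

Even though $m \gg n$, the $l_1$ penalty drives most estimated coefficients exactly to zero, so the fitted model effectively performs a subset selection of size far below $m$.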