The Dantzig selector: Statistical estimation when $p$ is much larger than $n$

Mathematics – Statistics Theory

Scientific paper

Details

This paper is discussed in: [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], …

DOI: 10.1214/009053606000001523

In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y=X\beta+z$, where $\beta\in\mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n\ll p$, and the $z_i$'s are i.i.d. $N(0,\sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator--we call it the Dantzig selector--which is a solution to the $\ell_1$-regularization problem \[\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad\text{subject to}\quad\|X^*r\|_{\ell_{\infty}}\leq(1+t^{-1})\sqrt{2\log p}\cdot\sigma,\] where $r$ is the residual vector $y-X\tilde{\beta}$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, \[\|\hat{\beta}-\beta\|_{\ell_2}^2\le C^2\cdot 2\log p\cdot\Biggl(\sigma^2+\sum_i\min(\beta_i^2,\sigma^2)\Biggr).\] Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which were above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
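As a purely illustrative note (not part of the abstract and not the authors' code), the closing remark about recasting the convex program as an LP can be sketched in Python with scipy.optimize.linprog: writing $|\tilde{\beta}_i|\le u_i$ turns the $\ell_1$ objective into a linear one, and the $\ell_\infty$ constraint on $X^*r$ becomes $2p$ linear inequalities. The helper name dantzig_selector, the choice $t=3$, and the toy data below are assumptions made for the example.

import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, sigma, t=3.0):
    """Sketch: min ||b||_1 s.t. ||X^T (y - X b)||_inf <= (1 + 1/t) sqrt(2 log p) sigma."""
    n, p = X.shape
    lam = (1.0 + 1.0 / t) * np.sqrt(2.0 * np.log(p)) * sigma

    # LP variables z = [beta (p entries), u (p entries)]; minimize sum(u)
    # under the standard split |beta_i| <= u_i.
    c = np.concatenate([np.zeros(p), np.ones(p)])

    I = np.eye(p)
    G = X.T @ X        # p x p Gram matrix
    Xty = X.T @ y

    A_ub = np.vstack([
        np.hstack([ I, -I]),                #  beta - u <= 0
        np.hstack([-I, -I]),                # -beta - u <= 0
        np.hstack([ G, np.zeros((p, p))]),  #  X^T X beta <= X^T y + lam
        np.hstack([-G, np.zeros((p, p))]),  # -X^T X beta <= lam - X^T y
    ])
    b_ub = np.concatenate([np.zeros(2 * p), Xty + lam, lam - Xty])

    bounds = [(None, None)] * p + [(0.0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Toy usage with a sparse beta and n << p.
rng = np.random.default_rng(0)
n, p, sigma = 72, 256, 1.0
X = rng.standard_normal((n, p)) / np.sqrt(n)  # columns roughly unit-normed
beta = np.zeros(p)
beta[:5] = 5.0
y = X @ beta + sigma * rng.standard_normal(n)
beta_hat = dantzig_selector(X, y, sigma)

The dense constraint matrices are used only for clarity; for large $p$ a sparse LP formulation or a dedicated solver would be preferable.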

Profile ID: LFWR-SCP-O-175156
