$L_0$ regularized estimation for nonlinear models that have sparse underlying linear structures

Mathematics – Statistics Theory

Scientific paper

Details

We study the estimation of $\beta$ in the nonlinear model $y = f(X^{\top}\beta) + \epsilon$, where $f$ is a known nonlinear transformation, $\beta$ has sparse nonzero coordinates, and the number of observations can be much smaller than the number of parameters ($n \ll p$). We show that in order to bound the $L_2$ error of the $L_0$ regularized estimator $\hat\beta$, i.e., $\|\hat\beta - \beta\|_2$, it suffices to establish two conditions. Based on this, we obtain bounds on the $L_2$ error for (1) $L_0$ regularized maximum likelihood estimation (MLE) for exponential linear models and (2) $L_0$ regularized least squares (LS) regression for the more general case where $f$ is analytic. For the analytic case, we rely on a power series expansion of $f$, which requires taking into account the singularities of $f$.
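As a point of reference, an $L_0$ regularized least squares estimator of the kind described above is typically of the following form; the penalized (rather than constrained) formulation and the tuning parameter $\lambda$ are illustrative assumptions, not taken from the paper:

$$\hat\beta \in \arg\min_{\beta \in \mathbb{R}^{p}} \; \frac{1}{n}\sum_{i=1}^{n} \big(y_i - f(x_i^{\top}\beta)\big)^2 + \lambda \,\|\beta\|_0,$$

where $\|\beta\|_0$ counts the nonzero coordinates of $\beta$ and $x_i^{\top}$ denotes the $i$-th row of $X$. In the MLE setting of case (1), the squared loss would be replaced by the negative log-likelihood of the exponential linear model.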
