Sharp oracle inequalities for the prediction of a high-dimensional matrix
Mathematics – Statistics Theory
Scientific paper
2010-08-28
We observe $(X_i,Y_i)_{i=1}^n$, where the $Y_i$'s are real-valued outputs and the $X_i$'s are $m\times T$ matrices. Given a new input $X$, we want to predict the associated output $Y$. We focus on the high-dimensional setting where $mT \gg n$, which includes the matrix completion problem with noise, among other problems. We consider linear prediction procedures based on different penalizations involving a mixture of several norms: the nuclear norm, the Frobenius norm and the $\ell_1$-norm. For these procedures, we prove sharp oracle inequalities from a statistical learning theory point of view. A surprising feature of our results is that the rates of convergence do not depend directly on $m$ and $T$. The analysis is conducted without the usual incoherence condition on the unknown matrix or restricted isometry condition on the sampling operator. Moreover, our results are the first for this problem to analyze penalization (such as nuclear norm penalization) as a regularization algorithm: our oracle inequalities show that these procedures achieve a prediction accuracy close to that of the deterministic oracle, provided that the regularization parameters are well chosen.
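As a schematic illustration only (the exact functional and tuning parameters are specified in the paper), a penalized linear prediction procedure of the kind described above can be written, assuming a least-squares data-fitting term, as
$$\hat A \in \operatorname*{argmin}_{A \in \mathbb{R}^{m\times T}} \Big\{ \frac{1}{n}\sum_{i=1}^n \big(Y_i - \langle X_i, A\rangle\big)^2 + \lambda_1 \|A\|_{S_1} + \lambda_2 \|A\|_{S_2}^2 + \lambda_3 \|A\|_{1} \Big\},$$
where $\langle X_i, A\rangle = \operatorname{tr}(X_i^\top A)$, $\|\cdot\|_{S_1}$ denotes the nuclear norm, $\|\cdot\|_{S_2}$ the Frobenius norm, $\|\cdot\|_{1}$ the entrywise $\ell_1$-norm, and $\lambda_1,\lambda_2,\lambda_3 \ge 0$ are regularization parameters; the particular combination of penalty terms shown here (including whether the Frobenius term is squared) is an assumption for illustration, not the paper's exact choice.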
Stéphane Gaïffas
Guillaume Lecué