Physics – Condensed Matter – Disordered Systems and Neural Networks
Scientific paper
2009-07-18
J. Stat. Mech. (2009) P10009
18 pages, 9 eps figures
10.1088/1742-5468/2009/10/P10009
One of the crucial tasks in many inference problems is the extraction of sparse information from a given number of high-dimensional measurements. In machine learning, this is frequently achieved by using, as a penalty term, the $L_p$ norm of the model parameters, with $p\leq 1$ for efficient dilution. Here we propose a statistical-mechanics analysis of the problem in the setting of perceptron memorization and generalization. Using a replica approach, we are able to evaluate the relative performance of naive dilution (obtained by learning without dilution, followed by applying a threshold to the model parameters), $L_1$ dilution (which is frequently used in convex optimization) and $L_0$ dilution (which is optimal but computationally hard to implement). Whereas both $L_p$ diluted approaches clearly outperform the naive approach, we find a small region where $L_0$ works almost perfectly and strongly outperforms the simpler-to-implement $L_1$ dilution.
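The two practical schemes contrasted in the abstract can be illustrated with a minimal NumPy sketch (this is only illustrative, not the paper's replica calculation: the teacher, sample sizes, learning rate and penalty strength below are all hypothetical choices). Naive dilution trains an ordinary perceptron and thresholds the weights afterwards; $L_1$ dilution shrinks the weights during learning via the soft-thresholding proximal operator of the $L_1$ norm, which sets irrelevant couplings exactly to zero.

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the L1 norm: shrink each weight toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train_perceptron(X, y, lam=0.0, lr=0.1, epochs=200, seed=0):
    """Perceptron learning; lam > 0 adds an L1 proximal (soft-thresholding)
    step after each epoch, i.e. L1 dilution during learning."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w) <= 0:       # misclassified (or on the boundary)
                w += lr * y[i] * X[i]        # standard perceptron update
        if lam > 0.0:
            w = soft_threshold(w, lr * lam)  # L1 shrinkage step
    return w

# Hypothetical sparse teacher: only 3 of 20 input components carry information.
rng = np.random.default_rng(1)
n, d = 400, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)

w_dense = train_perceptron(X, y)              # learning without dilution
w_naive = w_dense.copy()
w_naive[np.abs(w_naive) < 0.2] = 0.0          # naive dilution: threshold afterwards
w_l1 = train_perceptron(X, y, lam=0.5)        # L1 dilution during learning
```

$L_0$ dilution has no such convex proximal step, which is why the paper treats it as optimal but computationally hard.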
Lage-Castellanos Alejandro
Pagnani Andrea
Weigt Martin
Statistical mechanics of sparse generalization and model selection