Fundamental Tradeoffs for Sparsity Pattern Recovery

Scientific paper – Computer Science, Information Theory
2010-06-16
Recovery of the sparsity pattern (or support) of a sparse vector from a small number of noisy linear samples is a common problem that arises in signal processing and statistics. In the high dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the sampling rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases a computationally simple thresholding estimator is near-optimal. Upper bounds on the sampling rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for two different estimators. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.
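The computationally simple thresholding estimator mentioned in the abstract can be illustrated with a small numerical sketch. Everything below is an illustrative assumption rather than the paper's exact construction: the problem sizes, the Gaussian sampling matrix, the unit-magnitude signal model, and the particular correlate-and-keep-k-largest rule (a common variant of thresholding when the sparsity level is known).

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact estimator):
# recover the support of a sparse vector x from noisy samples y = A x + w.
rng = np.random.default_rng(0)
n, m, k = 1000, 400, 20          # vector length, number of samples, sparsity

# Sparse vector with unit-magnitude nonzeros on a random support.
x = np.zeros(n)
support = np.sort(rng.choice(n, size=k, replace=False))
x[support] = rng.choice([-1.0, 1.0], size=k)

# Gaussian sampling matrix with (approximately) unit-norm columns.
A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))

# Noise level chosen so the per-sample SNR is a fixed constant.
snr = 10.0
noise_std = np.sqrt((x @ x) / (m * snr))
y = A @ x + rng.normal(scale=noise_std, size=m)

# Thresholding estimator: correlate each column of A with y and
# keep the k indices with the largest correlation magnitudes.
stats = np.abs(A.T @ y)
est_support = np.sort(np.argsort(stats)[-k:])

# Fraction of support errors (missed detections plus false alarms).
missed = np.setdiff1d(support, est_support).size
false_alarms = np.setdiff1d(est_support, support).size
frac_errors = (missed + false_alarms) / (2 * k)
```

With the sampling rate m/n and SNR held constant as above, the estimator typically recovers most but not all of the support, matching the abstract's point that a small constant fraction of errors is achievable even though exact (or vanishing-error) recovery is not.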
Michael Gastpar
Galen Reeves