Learning from compressed observations

Computer Science – Information Theory

Scientific paper


Details

6 pages; submitted to the 2007 IEEE Information Theory Workshop (ITW 2007)


The problem of statistical learning is to construct a predictor of a random variable $Y$ as a function of a related random variable $X$ on the basis of an i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable predictors are drawn from some specified class, and the goal is to approach asymptotically the performance (expected loss) of the best predictor in the class. We consider the setting in which one has perfect observation of the $X$-part of the sample, while the $Y$-part has to be communicated at some finite bit rate. The encoding of the $Y$-values is allowed to depend on the $X$-values. Under suitable regularity conditions on the admissible predictors, the underlying family of probability distributions and the loss function, we give an information-theoretic characterization of achievable predictor performance in terms of conditional distortion-rate functions. The ideas are illustrated on the example of nonparametric regression in Gaussian noise.


