Discrete MDL Predicts in Total Variation

Mathematics – Probability

Scientific paper


Details

15 LaTeX pages

The Minimum Description Length (MDL) principle selects the model with the shortest code for data plus model. We show that for a countable class of models, the MDL predictions are close to the true distribution in a strong sense. The result is completely general: no independence, ergodicity, stationarity, identifiability, or other assumption on the model class needs to be made. More formally, we show that for any countable class of models, the distributions selected by MDL (or MAP) asymptotically predict (merge with) the true measure in the class in total variation distance. Implications for non-i.i.d. domains such as time-series forecasting, discriminative learning, and reinforcement learning are discussed.
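The two-part MDL selection described in the abstract can be sketched in a few lines. This is an illustrative toy example, not code from the paper: it uses a hypothetical countable class of Bernoulli models with uniform prior weights, and picks the model minimizing model bits plus data bits.

```python
import math

# Hypothetical countable model class: Bernoulli parameters with prior weights.
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
weights = [1 / len(thetas)] * len(thetas)   # uniform prior (assumption)

def codelength(theta, w, data):
    """Two-part code: model bits -log2 w plus data bits -log2 P(data | theta)."""
    ones = sum(data)
    zeros = len(data) - ones
    data_bits = -(ones * math.log2(theta) + zeros * math.log2(1 - theta))
    return -math.log2(w) + data_bits

data = [1, 1, 0, 1, 1, 1, 0, 1]   # toy binary sequence, 6 ones out of 8

# MDL/MAP selection: the model whose total description length is shortest.
mdl_theta = min(zip(thetas, weights),
                key=lambda tw: codelength(tw[0], tw[1], data))[0]
print(mdl_theta)  # → 0.7, the class member closest to the empirical frequency
```

Under a uniform prior the model-cost term is constant, so selection here reduces to maximum likelihood; with non-uniform weights the `-log2 w` term penalizes a priori complex models, which is the MAP view of MDL mentioned in the abstract.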

