The Redundancy of a Computable Code on a Noncomputable Distribution

Scientific paper
Statistics – Machine Learning
2009-01-15
5 pages; an introduction to a longer article
We introduce new definitions of universal and superuniversal computable codes, based on a code's ability to approximate Kolmogorov complexity within a prescribed margin for all individual sequences from a given set. Such sets of sequences may be singled out almost surely with respect to certain probability measures. Consider a measure parameterized with a real parameter and put an arbitrary prior on the parameter. The Bayesian measure is the expectation of the parameterized measure with respect to the prior. It turns out that a modified Shannon-Fano code for any computable Bayesian measure, which we call the Bayesian code, is superuniversal, for prior-almost every parameter, on a set of sequences of parameterized-measure one. By this result, in the typical setting of mathematical statistics no computable code achieves redundancy ultimately much smaller than that of the Bayesian code. This motivates a further characteristic of computable codes, the catch-up time: the length of data at which the code length drops below the Kolmogorov complexity plus the prescribed margin. Some codes may have smaller catch-up times than Bayesian codes.
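To make the central construction concrete: for a Bernoulli(theta) source with a uniform prior on theta, the expectation over theta has a closed form, so both the Bayesian measure and its Shannon-Fano code length can be computed exactly. The sketch below is our illustration under these assumptions, not the paper's construction; the function names and the rounding convention are ours.

    # A minimal sketch, assuming a Bernoulli(theta) model with a uniform
    # prior -- an illustrative instance of the abstract's Bayesian measure,
    # not the paper's own code. All names here are hypothetical.
    from math import ceil, comb, log2

    def bayesian_measure(xs):
        """Q(x^n) = E_prior[theta^k * (1 - theta)^(n - k)] for a bit string
        xs with k ones; under the uniform prior this integral has the
        closed form 1 / ((n + 1) * C(n, k))."""
        n, k = len(xs), sum(xs)
        return 1.0 / ((n + 1) * comb(n, k))

    def shannon_fano_length(p):
        """Bits a Shannon-Fano code assigns to an event of probability p:
        ceil(-log2 p)."""
        return ceil(-log2(p))

    # Example: a 20-bit sequence with 8 ones.
    xs = [1, 0, 0, 1, 0] * 4
    print(shannon_fano_length(bayesian_measure(xs)))  # 22 bits

In this conjugate toy case the Bayesian code length exceeds -log2 of the best parameterized measure by roughly (1/2)*log2(n) bits, the familiar mixture-code redundancy; the abstract's claim is that, measured against Kolmogorov complexity itself, a computable code cannot do ultimately much better, although it may reach the margin sooner, i.e., have a smaller catch-up time.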