Ensemble estimators for multivariate entropy estimation
Mathematics – Statistics Theory
Scientific paper
2012-03-26
The problem of estimating density functionals such as entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffers from the curse of dimensionality: the exponent in the MSE rate of convergence shrinks as the dimension $d$ of the samples increases. In particular, the rate is often glacially slow, of order $O(T^{-\gamma/d})$, where $T$ is the number of samples and $\gamma>0$ is a rate parameter. Examples of such estimators include kernel density estimators, $k$-NN density estimators, $k$-NN entropy estimators, and intrinsic dimension estimators. In this paper, we propose a weighted convex combination of an ensemble of such estimators, where the weights can be chosen so that the weighted estimator converges at the much faster, dimension-invariant rate of $O(T^{-1})$. Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem that can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator in two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
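The ensemble construction described in the abstract lends itself to a compact implementation. The Python sketch below is a hedged illustration of the general idea, not the paper's exact algorithm: it uses Kozachenko-Leonenko $k$-NN entropy estimates as the base ensemble and computes minimum-norm weights that sum to one while annihilating fractional-power bias terms $k^{i/d}$, $i=1,\dots,d-1$. The precise constraint set, weight norm, and base estimator in the paper may differ, and all function names and parameter values here are illustrative assumptions.

```python
# Hedged sketch: weighted convex combination of k-NN entropy estimators,
# with weights chosen by a small offline optimization (here solved in
# closed form as a minimum-norm linear system via the pseudoinverse).
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(X, k):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    T, d = X.shape
    dist, _ = cKDTree(X).query(X, k=k + 1)   # k+1: first hit is the point itself
    eps = dist[:, k]                         # distance to k-th true neighbour
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return digamma(T) - digamma(k) + log_cd + d * np.mean(np.log(eps))

def ensemble_weights(ks, d):
    """Minimum-norm weights: sum to 1, cancel k**(i/d) terms for i = 1..d-1."""
    ks = np.asarray(ks, dtype=float)
    rows = [np.ones_like(ks)]                     # sum_l w_l = 1
    rows += [ks ** (i / d) for i in range(1, d)]  # sum_l w_l k_l^{i/d} = 0
    A = np.vstack(rows)
    b = np.zeros(A.shape[0])
    b[0] = 1.0
    return np.linalg.pinv(A) @ b                  # least-norm solution of A w = b

# Usage on synthetic data (hypothetical parameter choices):
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))  # T = 2000 samples in d = 3
ks = [8, 16, 32, 64, 128]           # ensemble of base estimators
w = ensemble_weights(ks, d=X.shape[1])
H_hat = sum(wi * kl_entropy(X, k) for wi, k in zip(w, ks))
print(H_hat)  # true Gaussian entropy: 0.5 * 3 * log(2*pi*e) ~ 4.257
```

Because the weights depend only on the ensemble parameters $k$ and the dimension $d$, not on the data, they can be computed once offline, consistent with the abstract's claim that no training data is required.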
Kumar Sricharan, Alfred O. Hero III