Computer Science – Information Theory
Scientific paper
2009-12-23
25 pages
This paper investigates maximizers of the information divergence from an exponential family $E$. It is shown that the $rI$-projection of a maximizer $P$ to $E$ is a convex combination of $P$ and a probability measure $P_-$ with disjoint support and the same value of the sufficient statistics $A$. This observation can be used to transform the original problem of maximizing $D(\cdot||E)$ over the set of all probability measures into the maximization of a function $\overline{D}$ over a convex subset of $\ker A$. The global maximizers of both problems correspond to each other. Furthermore, finding all local maximizers of $\overline{D}$ yields all local maximizers of $D(\cdot||E)$. This paper also proposes two algorithms to find the maximizers of $\overline{D}$ and applies them to two examples, where the maximizers of $D(\cdot||E)$ were not known before.
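As a concrete illustration of the quantity being maximized (this is a sketch of a standard textbook example, not the paper's two algorithms): for the independence model $E$ of two binary variables, the $rI$-projection of $P$ onto $E$ is the product of its marginals, so $D(P||E)$ equals the mutual information $I(X;Y)$. Its global maximum, $\log 2$, is attained by the perfectly correlated uniform distribution, and a coarse grid search over the simplex recovers it.

```python
# Illustrative sketch: D(P||E) for E = independence model on {0,1}^2,
# which reduces to the mutual information I(X;Y). Not the paper's method;
# a brute-force grid search standing in for the proposed algorithms.
import itertools
import math

def divergence_from_independence(p):
    """D(P||E) for P = (p00, p01, p10, p11); E = independence model."""
    px = [p[0] + p[1], p[2] + p[3]]   # marginal distribution of X
    py = [p[0] + p[2], p[1] + p[3]]   # marginal distribution of Y
    d = 0.0
    for (i, j), pij in zip(itertools.product((0, 1), repeat=2), p):
        if pij > 0:                   # 0 * log 0 = 0 by convention
            d += pij * math.log(pij / (px[i] * py[j]))
    return d

# coarse grid search over the probability simplex on 4 outcomes
best, argbest = -1.0, None
n = 20
for a in range(n + 1):
    for b in range(n + 1 - a):
        for c in range(n + 1 - a - b):
            p = (a / n, b / n, c / n, (n - a - b - c) / n)
            d = divergence_from_independence(p)
            if d > best:
                best, argbest = d, p
# best is log 2, attained at a perfectly correlated distribution
```

Note that the maximizer found here has support disjoint from one of the other global maximizers with the same marginals, mirroring the disjoint-support structure of $P$ and $P_-$ described in the abstract.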