Bregman divergence as general framework to estimate unnormalized statistical models
Scientific paper – Computer Science, Learning
2012-02-14
We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.
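One of the methods the abstract names, noise-contrastive estimation, illustrates the supervised-learning connection concretely: the parameters of an unnormalized model (including a free log-normalizer) are fit by logistic regression that discriminates data from samples of a known noise distribution. Below is a minimal NumPy sketch under illustrative assumptions, not the paper's own code: the model is an unnormalized 1D Gaussian with parameters `mu`, `log_s2`, and free log-normalizer `c`, the noise is N(0, 4), and `nu` is the noise-to-data sample ratio.

```python
import numpy as np

# Sketch of noise-contrastive estimation (NCE) for an unnormalized model
# p(x; theta) ∝ exp(-0.5 * (x - mu)^2 / s2), with log-normalizer c treated
# as a free parameter. All names here are illustrative, not from the paper.

rng = np.random.default_rng(0)

x_data = rng.normal(1.0, 1.0, size=5000)            # data: N(1, 1)
nu = 1.0                                            # noise-to-data ratio
x_noise = rng.normal(0.0, 2.0, size=int(nu * 5000)) # noise: N(0, 4)

def log_pn(x):
    # Log-density of the noise distribution N(0, 4)
    return -0.5 * np.log(2 * np.pi * 4.0) - x**2 / 8.0

def log_pm(x, mu, log_s2, c):
    # Unnormalized model log-density plus free log-normalizer c
    return -0.5 * (x - mu)**2 / np.exp(log_s2) + c

def nce_loss_grad(params):
    # Logistic-regression objective: classify data (1) vs noise (0) based on
    # the log-ratio G(x) = log pm(x) - log pn(x) - log nu.
    mu, log_s2, c = params
    G_d = log_pm(x_data, mu, log_s2, c) - log_pn(x_data) - np.log(nu)
    G_n = log_pm(x_noise, mu, log_s2, c) - log_pn(x_noise) - np.log(nu)
    h_d = 1.0 / (1.0 + np.exp(-G_d))   # posterior P(data | x) on data points
    h_n = 1.0 / (1.0 + np.exp(-G_n))   # posterior P(data | x) on noise points
    loss = -(np.mean(np.log(h_d + 1e-12))
             + nu * np.mean(np.log(1.0 - h_n + 1e-12)))
    s2 = np.exp(log_s2)
    def grads(x):
        # d log_pm / d(mu, log_s2, c), stacked as rows
        return np.stack([(x - mu) / s2,
                         0.5 * (x - mu)**2 / s2,
                         np.ones_like(x)])
    g = -(grads(x_data) @ (1.0 - h_d) / x_data.size
          - nu * grads(x_noise) @ h_n / x_noise.size)
    return loss, g

# Plain gradient descent on the NCE loss
params = np.array([0.0, 0.0, 0.0])
for _ in range(2000):
    loss, g = nce_loss_grad(params)
    params -= 0.05 * g
```

Because `c` is optimized freely, normalization is never computed explicitly; at the optimum the classifier's log-odds match the true data/noise log-ratio, which is the sense in which estimation reduces to supervised learning.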
Michael Gutmann
Jun-ichiro Hirayama