Learning Topic Models by Belief Propagation
Computer Science – Learning
Scientific paper
2011-09-15
14 pages, 17 figures
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which has attracted worldwide interest and touches on many important applications in text mining, computer vision and computational biology. This paper represents LDA as a factor graph within the Markov random field (MRF) framework, which enables the classic loopy belief propagation (BP) algorithm for approximate inference and parameter estimation. Although two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document data sets. Furthermore, the BP algorithm has the potential to become a generic learning scheme for variants of LDA-based topic models. To this end, we show how to learn two typical variants of LDA-based topic models, the author-topic model (ATM) and the relational topic model (RTM), using BP based on the factor graph representation.
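As a rough illustration of the kind of message passing the abstract describes, the following is a minimal sketch (not the authors' reference implementation) of synchronous loopy BP for LDA on a document-word count matrix. The function name `bp_lda`, the variable names (`n_dw`, `mu`, `alpha`, `beta`), the leave-one-out form of the update, and the fixed iteration count are all illustrative assumptions, not taken from the paper.

```python
# Sketch of synchronous loopy BP for LDA (illustrative; see lead-in caveats).
import numpy as np

def bp_lda(n_dw, K, alpha=0.01, beta=0.01, iters=100, seed=0):
    """n_dw: (D, W) matrix of word counts; K: number of topics."""
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    # mu[d, w, k]: message (approximate posterior) that word w in doc d has topic k
    mu = rng.random((D, W, K))
    mu /= mu.sum(axis=2, keepdims=True)

    for _ in range(iters):
        # Expected topic counts implied by the current messages
        theta = np.einsum('dw,dwk->dk', n_dw, mu)   # document-topic counts
        phi = np.einsum('dw,dwk->wk', n_dw, mu)     # word-topic counts
        nk = phi.sum(axis=0)                        # total count per topic

        # Leave-one-out ("cavity") counts: exclude each token's own contribution
        token = n_dw[:, :, None] * mu
        theta_minus = theta[:, None, :] - token
        phi_minus = phi[None, :, :] - token
        nk_minus = nk[None, None, :] - token

        # Message update: document-side factor times word-side factor, normalized over topics
        mu = (theta_minus + alpha) * (phi_minus + beta) / (nk_minus + W * beta)
        mu /= mu.sum(axis=2, keepdims=True)

    # Smoothed, normalized parameter estimates
    theta = np.einsum('dw,dwk->dk', n_dw, mu) + alpha
    theta /= theta.sum(axis=1, keepdims=True)
    phi = np.einsum('dw,dwk->wk', n_dw, mu) + beta
    phi /= phi.sum(axis=0, keepdims=True)
    return theta, phi
```

The update multiplies a document-side term (how often the document uses each topic, excluding the current token) by a word-side term (how often each topic generates the word, excluding the current token), which is the general shape of BP-style message updates for LDA; the exact normalization and scheduling in the paper may differ.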
Cheung William K.
Liu Jiming
Zeng Jia