Computer Science – Computation and Language
Scientific paper
1997-06-09
9 pages, 4 PostScript figures, uses psfig.sty and aclap.sty; to appear in the proceedings of EMNLP-2
We consider the use of language models whose size and accuracy are intermediate between different-order n-gram models. Two types of models are studied in particular. Aggregate Markov models are class-based bigram models in which the mapping from words to classes is probabilistic. Mixed-order Markov models combine bigram models whose predictions are conditioned on different words. Both types of models are trained by Expectation-Maximization (EM) algorithms for maximum likelihood estimation. We examine smoothing procedures in which these models are interposed between different-order n-grams. This is found to significantly reduce the perplexity of unseen word combinations.
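To make the aggregate Markov model concrete: it factors the bigram probability through a latent class, P(w2 | w1) = Σ_c P(c | w1) P(w2 | c), and both factors are fit by EM. The sketch below is an illustrative reimplementation under that factorization, not the authors' code; the function name `em_step` and the dense-array parameterization are my own assumptions.

```python
import numpy as np

def em_step(bigrams, p_c_given_w1, p_w2_given_c):
    """One EM update for an aggregate Markov model (class-based bigram).

    bigrams       : (N, 2) int array of observed (w1, w2) pairs
    p_c_given_w1  : (V, C) array, rows sum to 1  -- P(c | w1)
    p_w2_given_c  : (C, V) array, rows sum to 1  -- P(w2 | c)
    """
    V, C = p_c_given_w1.shape
    # E-step: posterior over the latent class for each observed bigram,
    # post[n, c] ∝ P(c | w1_n) * P(w2_n | c).
    joint = p_c_given_w1[bigrams[:, 0]] * p_w2_given_c.T[bigrams[:, 1]]
    post = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate both factors from expected class counts.
    new_p_c_given_w1 = np.zeros((V, C))
    np.add.at(new_p_c_given_w1, bigrams[:, 0], post)
    new_p_c_given_w1 /= np.maximum(
        new_p_c_given_w1.sum(axis=1, keepdims=True), 1e-12)
    new_p_w2_given_c = np.zeros((C, V))
    np.add.at(new_p_w2_given_c.T, bigrams[:, 1], post)
    new_p_w2_given_c /= np.maximum(
        new_p_w2_given_c.sum(axis=1, keepdims=True), 1e-12)
    return new_p_c_given_w1, new_p_w2_given_c
```

With C far smaller than the vocabulary size V, the model has O(VC) parameters instead of the O(V^2) of a full bigram, which is the sense in which its size and accuracy fall between unigram and bigram models. The mixed-order models in the paper are trained by an analogous EM procedure over mixture weights.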
Fernando Pereira
Lawrence Saul
Aggregate and mixed-order Markov models for statistical language processing