Aggregate and mixed-order Markov models for statistical language processing

Computer Science – Computation and Language

Scientific paper


Details

9 pages, 4 PostScript figures, uses psfig.sty and aclap.sty; to appear in the proceedings of EMNLP-2


We consider the use of language models whose size and accuracy are intermediate between different order n-gram models. Two types of models are studied in particular. Aggregate Markov models are class-based bigram models in which the mapping from words to classes is probabilistic. Mixed-order Markov models combine bigram models whose predictions are conditioned on different words. Both types of models are trained by Expectation-Maximization (EM) algorithms for maximum likelihood estimation. We examine smoothing procedures in which these models are interposed between different order n-grams. This is found to significantly reduce the perplexity of unseen word combinations.


