A Bit of Progress in Language Modeling

Computer Science – Computation and Language

Scientific paper


Details

73 pages, extended version of paper to appear in Computer Speech and Language


In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser-Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We find some significant interactions, especially with smoothing and clustering techniques. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38% and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8.9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. This is the extended version of the paper; it contains additional details and proofs, and is designed to be a good introduction to the state of the art in language modeling.
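The abstract equates a 50% perplexity reduction with one bit of entropy. This follows directly from the fact that cross-entropy (in bits per word) is the base-2 logarithm of perplexity, so halving perplexity subtracts exactly one bit. A minimal sketch of that arithmetic (the function name `entropy_reduction_bits` is illustrative, not from the paper):

```python
import math

def entropy_reduction_bits(baseline_ppl, new_ppl):
    # Cross-entropy (bits/word) is log2(perplexity), so the entropy
    # saved is the log2 of the perplexity ratio.
    return math.log2(baseline_ppl / new_ppl)

# Halving perplexity (a 50% reduction) saves exactly one bit:
print(round(entropy_reduction_bits(100.0, 50.0), 6))   # 1.0
# The paper's 38% reduction corresponds to about 0.69 bits:
print(round(entropy_reduction_bits(100.0, 62.0), 2))   # 0.69
```

This is why the reported 38–50% perplexity reductions are summarized as roughly one bit of entropy at the high end.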
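Of the smoothing techniques named above, interpolated Kneser-Ney is the one the paper finds consistently strongest. The key idea is to discount each observed count by a fixed amount and redistribute the saved mass to a "continuation" distribution that asks how many distinct contexts a word follows, rather than how often it occurs. The sketch below implements only the bigram case with a single fixed discount (the paper works with trigrams and higher orders, and tunes discounts); the helper name `interpolated_kneser_ney_bigram` and the discount value are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def interpolated_kneser_ney_bigram(tokens, discount=0.75):
    """Return P(w | v) under interpolated Kneser-Ney smoothing,
    bigram case only (sketch; single fixed discount)."""
    bigrams = list(zip(tokens, tokens[1:]))
    bigram_counts = Counter(bigrams)
    history_counts = Counter(tokens[:-1])            # c(v), as a bigram history
    bigram_types = set(bigrams)
    # Continuation count: how many distinct histories each word follows.
    continuation = Counter(w for (_, w) in bigram_types)
    # How many distinct words follow each history (used for the back-off weight).
    followers = Counter(v for (v, _) in bigram_types)
    total_types = len(bigram_types)

    def prob(w, v):
        p_cont = continuation[w] / total_types       # continuation unigram
        c_v = history_counts[v]
        if c_v == 0:                                 # unseen history: pure back-off
            return p_cont
        discounted = max(bigram_counts[(v, w)] - discount, 0.0) / c_v
        lam = discount * followers[v] / c_v          # mass saved by discounting
        return discounted + lam * p_cont
    return prob

# Usage: probabilities over the vocabulary sum to 1 for a seen history.
p = interpolated_kneser_ney_bigram("the cat sat on the mat the cat ran".split())
print(round(sum(p(w, "the") for w in
                {"cat", "sat", "on", "the", "mat", "ran"}), 6))   # 1.0
```

The continuation count is what distinguishes Kneser-Ney from simpler interpolation: a word seen often but only after one fixed context (e.g. "Francisco") gets a low back-off probability, which is where much of the method's advantage comes from.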
