Entropy-based Pruning of Backoff Language Models
Computer Science – Computation and Language
Scientific paper
2000-06-11
Proceedings DARPA Broadcast News Transcription and Understanding Workshop, pp. 270-274, Lansdowne, VA, 1998
5 pages. Typos in published version fixed
A criterion for pruning parameters from N-gram backoff language models is developed, based on the relative entropy between the original and the pruned model. It is shown that the relative entropy resulting from pruning a single N-gram can be computed exactly and efficiently for backoff models. The relative entropy measure can be expressed as a relative change in training set perplexity. This leads to a simple pruning criterion whereby all N-grams that change perplexity by less than a threshold are removed from the model. Experiments show that a production-quality Hub4 LM can be reduced to 26% of its original size without increasing recognition error. We also compare the approach to a heuristic pruning criterion by Seymore and Rosenfeld (1996), and show that their approach can be interpreted as an approximation to the relative entropy criterion. Experimentally, both approaches select similar sets of N-grams (about 85% overlap), with the exact relative entropy criterion giving marginally better performance.
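To make the criterion concrete, the following is a minimal sketch of the kind of pruning the abstract describes, restricted to a toy bigram backoff model. The data structures (unigrams, bigrams, hist_prob), helper names, and toy probabilities are illustrative assumptions, not the paper's or SRILM's implementation; the general case handles arbitrary N-gram orders and backs off recursively.

import math

def backoff_weight(history, bigrams, unigrams):
    """alpha(h): leftover probability mass after the explicit bigrams,
    normalized by the unigram mass of the same words, so p(.|h) sums to one."""
    explicit = bigrams.get(history, {})
    left = 1.0 - sum(explicit.values())
    norm = 1.0 - sum(unigrams[w] for w in explicit)
    return left / norm

def pruning_score(history, word, bigrams, unigrams, hist_prob):
    """Relative change in training-set perplexity, exp(D(p||p')) - 1, caused by
    dropping the single explicit bigram (history, word) and letting the model
    back off to the unigram with a recomputed backoff weight."""
    explicit = bigrams[history]
    p_old = explicit[word]
    alpha_old = backoff_weight(history, bigrams, unigrams)

    # Backoff weight after the bigram is removed from the explicit set.
    remaining = {w: p for w, p in explicit.items() if w != word}
    alpha_new = (1.0 - sum(remaining.values())) / \
                (1.0 - sum(unigrams[w] for w in remaining))
    p_new = alpha_new * unigrams[word]          # pruned model backs off for `word`

    # Total conditional probability of words that already back off under h.
    backed_off_mass = 1.0 - sum(explicit.values())

    d = -hist_prob[history] * (
        p_old * (math.log(p_new) - math.log(p_old))
        + (math.log(alpha_new) - math.log(alpha_old)) * backed_off_mass
    )
    return math.exp(d) - 1.0

def prune(bigrams, unigrams, hist_prob, threshold):
    """Score every explicit bigram against the original model, then drop all
    bigrams whose relative perplexity increase falls below the threshold."""
    kept = {}
    for h, words in bigrams.items():
        survivors = {w: p for w, p in words.items()
                     if pruning_score(h, w, bigrams, unigrams, hist_prob) >= threshold}
        if survivors:
            kept[h] = survivors
    return kept

# Toy example (probabilities are made up):
unigrams  = {"the": 0.4, "cat": 0.3, "sat": 0.3}
bigrams   = {"the": {"cat": 0.5, "sat": 0.2}}   # explicit p(w | "the")
hist_prob = {"the": 0.4}                        # marginal p(h) from training data
print(prune(bigrams, unigrams, hist_prob, threshold=1e-3))

Because the recomputed backoff weight and the entropy change depend only on quantities already stored in the model, no additional pass over training data is needed, which is what makes the exact criterion cheap to apply.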