Does Baum-Welch Re-estimation Help Taggers?
Computer Science – Computation and Language
Scientific paper, 1994-10-21
Uses aclap.sty. Appeared in ANLP 94.
In part-of-speech tagging with a hidden Markov model, a statistical model is used to assign grammatical categories to words in a text. Early work in the field relied on a corpus which had been tagged by a human annotator to train the model. More recently, Cutting {\it et al.} (1992) suggest that training can be achieved with a minimal lexicon and a limited amount of {\em a priori} information about probabilities, by using Baum-Welch re-estimation to refine the model automatically. In this paper, I report two experiments designed to determine how much manual training information is needed. The first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve good accuracy. The second experiment reveals that there are three distinct patterns of Baum-Welch re-estimation. In two of the patterns, the re-estimation ultimately reduces the accuracy of the tagging rather than improving it. Which pattern applies can be predicted from the quality of the initial model and the similarity between the tagged training corpus (if any) and the corpus to be tagged. Heuristics for using re-estimation effectively are given. The conclusions are broadly in agreement with those of Merialdo (1994), but give greater detail about the contributions of different parts of the model.
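For reference, the re-estimation procedure at issue is standard Baum-Welch (forward-backward) training; the notation below is the conventional one rather than anything taken from the paper. With transition probabilities $a_{ij}$ between tags, lexical (emission) probabilities $b_i(w)$, and an observed word sequence $o_1 \ldots o_T$, the forward and backward probabilities are
\[
\alpha_t(i) = \Big[ \sum_j \alpha_{t-1}(j)\, a_{ji} \Big] b_i(o_t),
\qquad
\beta_t(i) = \sum_j a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j),
\]
with $\alpha_1(i) = \pi_i\, b_i(o_1)$ and $\beta_T(i) = 1$. Each iteration then replaces the model parameters by
\[
\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)},
\qquad
\hat{b}_i(w) = \frac{\sum_{t \,:\, o_t = w} \gamma_t(i)}{\sum_{t=1}^{T} \gamma_t(i)},
\]
where $\gamma_t(i) \propto \alpha_t(i)\,\beta_t(i)$ is the probability of tag $i$ at position $t$ and $\xi_t(i,j) \propto \alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)$ is the probability of a transition from tag $i$ to tag $j$ at that position. The experiments reported in the paper concern whether iterating these updates from a given starting model raises or lowers tagging accuracy.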