Automating Coreference: The Role of Annotated Training Data
Computer Science – Computation and Language
Scientific paper
1998-03-02
4 pages, 5 figures. To appear in the AAAI Spring Symposium on Applying Machine Learning to Discourse Processing. The Alembic W
We report here on a study of interannotator agreement in the coreference task as defined by the Message Understanding Conference (MUC-6 and MUC-7). Based on feedback from annotators, we clarified and simplified the annotation specification. We then performed an analysis of disagreement among several annotators, concluding that only 16% of the disagreements represented genuine disagreement about coreference; the remainder of the cases were mostly typographical errors or omissions, easily reconciled. Initially, we measured interannotator agreement in the low 80s for precision and recall. To try to improve upon this, we ran several experiments. In our final experiment, we separated the tagging of candidate noun phrases from the linking of actual coreferring expressions. This method shows promise (interannotator agreement climbed to the low 90s), but it needs more extensive validation. These results position the research community to broaden the coreference task to multiple languages, and possibly to different kinds of coreference.
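The precision and recall figures cited above come from the MUC coreference scoring scheme, which scores coreference links model-theoretically (Vilain et al., 1995): for each key chain, count how many links would be needed to re-merge its partition under the response chains. The sketch below is an illustrative reimplementation of that idea for comparing two annotators, not the official MUC scorer; chains are represented as sets of mention ids, and the `agreement` helper is a name chosen here for clarity.

```python
def muc_score(key, response):
    """Link-based MUC recall of `response` against `key` (Vilain et al., 1995).

    Each argument is a list of chains, where a chain is a set of mention ids.
    Returns sum(|S| - |p(S)|) / sum(|S| - 1) over key chains S, where p(S)
    is the partition of S induced by the response chains.
    """
    num = den = 0
    for chain in key:
        # Partition the key chain by response chains; mentions absent from
        # every response chain become singleton partitions of their own.
        parts = set()
        for m in chain:
            holder = next((i for i, r in enumerate(response) if m in r), None)
            parts.add(("chain", holder) if holder is not None else ("singleton", m))
        num += len(chain) - len(parts)
        den += len(chain) - 1
    return num / den if den else 1.0

def agreement(ann_a, ann_b):
    """Interannotator agreement: score annotator B's chains against A's."""
    recall = muc_score(ann_a, ann_b)     # A's links recovered by B
    precision = muc_score(ann_b, ann_a)  # B's links confirmed by A
    return precision, recall
```

For example, if annotator A links mentions {1, 2, 3} into one chain while annotator B splits off mention 3, B recovers only one of A's two links (recall 0.5) while every link B posits is also in A (precision 1.0). Disagreements of this splitting kind are exactly what the error analysis above distinguishes from mere typographical omissions.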
John Burger
Lynette Hirschman
Patricia Robinson
Marc Vilain