Applying Reliability Metrics to Co-Reference Annotation
Computer Science – Computation and Language
Scientific paper
1997-06-10
10 pages, 2-column format; uuencoded, gzipped, tarfile
Studies of the contextual and linguistic factors that constrain discourse phenomena such as reference are coming to depend increasingly on annotated language corpora. In preparing such corpora, it is important to evaluate the reliability of the annotation, but methods for doing so have not been readily available. In this report, I present a method for computing the reliability of coreference annotation. First I review a method, proposed by Marc Vilain and his collaborators, for applying the information retrieval metrics of recall and precision to coreference annotation. I show how this method makes it possible to construct contingency tables for computing Cohen's Kappa, a familiar reliability metric. By comparing recall and precision to reliability on the same data sets, I also show that recall and precision can be misleadingly high. Because Kappa factors out chance agreement among coders, it is a preferable measure for developing annotated corpora where no pre-existing target annotation exists.
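The two computations the abstract refers to can be sketched briefly. The following is a minimal Python sketch, not taken from the paper: `vilain_recall` implements the model-theoretic recall of Vilain et al. (1995) over coreference chains, with precision obtained by swapping the roles of key and response, and `cohens_kappa` computes Kappa from a square contingency table of coder-by-coder category counts. The chain representation (sets of mention ids), the function names, and the example data are assumptions for illustration only.

    def vilain_recall(key_chains, response_chains):
        """Model-theoretic recall of Vilain et al. (1995).

        key_chains, response_chains: lists of sets of mention ids,
        each set being one coreference equivalence class.
        """
        numerator, denominator = 0, 0
        for key in key_chains:
            # Partition the key chain by the response chains that intersect it.
            covered = set()
            n_parts = 0
            for resp in response_chains:
                if key & resp:
                    n_parts += 1
                    covered |= key & resp
            # Key mentions absent from every response chain are singleton parts.
            n_parts += len(key - covered)
            numerator += len(key) - n_parts    # links correctly recovered
            denominator += len(key) - 1        # links needed to span the chain
        return numerator / denominator if denominator else 1.0

    def vilain_precision(key_chains, response_chains):
        # Precision is recall with key and response swapped.
        return vilain_recall(response_chains, key_chains)

    def cohens_kappa(table):
        """Cohen's Kappa from a square contingency table.

        table[i][j] counts items coder A put in category i and coder B in j.
        Kappa = (P_observed - P_chance) / (1 - P_chance).
        """
        total = float(sum(sum(row) for row in table))
        k = len(table)
        p_observed = sum(table[i][i] for i in range(k)) / total
        p_chance = sum(
            (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
            for i in range(k)
        )
        return (p_observed - p_chance) / (1 - p_chance)

    if __name__ == "__main__":
        # Two coders' chains over mentions 1..5 (hypothetical data).
        coder_a = [{1, 2, 3}, {4, 5}]
        coder_b = [{1, 2}, {3, 4, 5}]
        print(vilain_recall(coder_a, coder_b))     # 0.666...
        print(vilain_precision(coder_a, coder_b))  # 0.666...

        # 2x2 link-agreement table: rows = coder A (link / no link),
        # columns = coder B; counts over mention pairs (hypothetical).
        table = [[4, 2], [1, 3]]
        print(cohens_kappa(table))                 # 0.4

Note how the two measures answer different questions on the same data: the Vilain scores reward any overlap between chains, while Kappa discounts the agreement two coders would reach by chance given their marginal category frequencies, which is why the abstract argues it is the safer measure when no gold-standard annotation exists.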