Tagger Evaluation Given Hierarchical Tag Sets

Computer Science – Computation and Language

Scientific paper

Details

The preprint is 7 pages and laid out differently than the printed version.

We present methods for evaluating human and automatic taggers that extend current practice in three ways. First, we show how to evaluate taggers that assign multiple tags to each test instance, even if they do not assign probabilities. Second, we show how to accommodate a common property of manually constructed "gold standards" that are typically used for objective evaluation, namely that there is often more than one correct answer. Third, we show how to measure performance when the set of possible tags is tree-structured in an IS-A hierarchy. To illustrate how our methods can be used to measure inter-annotator agreement, we show how to compute the kappa coefficient over hierarchical tag sets.
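
The abstract does not give the exact formulation, but as a rough illustration of the kind of measure described, the sketch below computes a weighted kappa coefficient in which two annotators' tags earn partial credit in proportion to a Wu-Palmer-style similarity over a toy IS-A hierarchy. The hierarchy, the similarity function, and the `weighted_kappa` helper are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch: kappa over a hierarchical tag set, where credit for a pair of
# tags depends on their proximity in an IS-A tree (an assumption for illustration).
from collections import defaultdict

# Toy IS-A hierarchy: child -> parent (the root "entity" has no entry).
PARENT = {
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "bird": "animal",
    "animal": "entity",
}

def ancestors(tag):
    """Path from tag up to the root, including tag itself."""
    path = [tag]
    while tag in PARENT:
        tag = PARENT[tag]
        path.append(tag)
    return path

def depth(tag):
    return len(ancestors(tag))

def similarity(a, b):
    """Wu-Palmer-style similarity: 2 * depth(LCA) / (depth(a) + depth(b))."""
    anc_b = set(ancestors(b))
    lca = next(t for t in ancestors(a) if t in anc_b)  # lowest common ancestor
    return 2.0 * depth(lca) / (depth(a) + depth(b))

def weighted_kappa(pairs):
    """Kappa where per-item agreement is the hierarchy similarity, not exact match."""
    tags = sorted({t for pair in pairs for t in pair})
    n = len(pairs)
    # Observed agreement: average similarity of the two annotators' tags.
    p_o = sum(similarity(a, b) for a, b in pairs) / n
    # Expected agreement under independence of the annotators' tag distributions.
    marg1, marg2 = defaultdict(float), defaultdict(float)
    for a, b in pairs:
        marg1[a] += 1.0 / n
        marg2[b] += 1.0 / n
    p_e = sum(marg1[a] * marg2[b] * similarity(a, b) for a in tags for b in tags)
    return (p_o - p_e) / (1.0 - p_e)

if __name__ == "__main__":
    # Two annotators tag five instances; "cat" vs "dog" earns partial credit
    # because both tags share the ancestor "mammal".
    annotations = [("dog", "dog"), ("cat", "dog"), ("bird", "bird"),
                   ("mammal", "cat"), ("dog", "mammal")]
    print(f"hierarchical kappa = {weighted_kappa(annotations):.3f}")
```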
