Computer Science – Computation and Language
Scientific paper
1998-12-22
31 pages, 7 figures, 10 tables. Uses 11pt, fullname, and a4wide TeX styles. Pre-print version of article to appear in Machine Learning.
We show that in language learning, contrary to received wisdom, keeping exceptional training instances in memory can be beneficial for generalization accuracy. We investigate this phenomenon empirically on a selection of benchmark natural language processing tasks: grapheme-to-phoneme conversion, part-of-speech tagging, prepositional-phrase attachment, and base noun phrase chunking. In a first series of experiments we combine memory-based learning with training set editing techniques, in which instances are edited based on their typicality and class prediction strength. Results show that editing exceptional instances (with low typicality or low class prediction strength) tends to harm generalization accuracy. In a second series of experiments we compare memory-based learning and decision-tree learning methods on the same selection of tasks, and find that decision-tree learning often performs worse than memory-based learning. Moreover, the decrease in performance can be linked to the degree of abstraction from exceptions (i.e., pruning or eagerness). We provide explanations for both results in terms of the properties of the natural language processing tasks and the learning algorithms.
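The editing experiments described above can be sketched in code. The following is a minimal illustration of memory-based (1-nearest-neighbour) learning with instance editing by class prediction strength (CPS), not the authors' actual implementation; the overlap metric, the leave-one-out CPS estimate, and the editing threshold are simplifying assumptions.

```python
# Minimal sketch: memory-based learning with CPS-based editing.
# Feature values are symbolic tuples; instances are (features, class) pairs.
from collections import Counter

def overlap(a, b):
    # Overlap metric: number of mismatching symbolic feature values.
    return sum(x != y for x, y in zip(a, b))

def classify(features, memory):
    # 1-NN: return the class of the closest stored training instance.
    _, label = min(memory, key=lambda m: overlap(features, m[0]))
    return label

def prediction_strength(memory):
    # CPS of a stored instance: of the training items for which it is
    # the nearest neighbour (leave-one-out), the fraction whose class
    # it predicts correctly. Instances never used as a neighbour get 1.0.
    correct, total = Counter(), Counter()
    for i, (feats, label) in enumerate(memory):
        rest = memory[:i] + memory[i + 1:]
        j = min(range(len(rest)), key=lambda k: overlap(feats, rest[k][0]))
        nb = j if j < i else j + 1  # map index in `rest` back to `memory`
        total[nb] += 1
        if rest[j][1] == label:
            correct[nb] += 1
    return [correct[i] / total[i] if total[i] else 1.0
            for i in range(len(memory))]

def edit(memory, threshold=0.5):
    # Editing step: discard 'exceptional' instances with low CPS --
    # the operation the paper argues tends to harm generalization.
    cps = prediction_strength(memory)
    return [inst for inst, s in zip(memory, cps) if s >= threshold]
```

The edited memory is always a subset of the original; raising the threshold discards more low-strength (exceptional) instances before classification.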
Walter Daelemans
Antal van den Bosch
Jakub Zavrel
Forgetting Exceptions is Harmful in Language Learning