Computer Science – Information Theory
Scientific paper
2011-04-30
(2nd version: 19 pages, 5 figures, 7 tables. Theorems on Bayesian classifiers are extended to multiple variables. Appendix B f
In this study, both Bayesian classifiers and mutual-information classifiers are examined for binary classification with and without a reject option. General decision rules, expressed in terms of distinctions among error types and reject types, are derived for Bayesian classifiers. A formal analysis reveals a parameter redundancy in the cost terms when abstention is enforced. This redundancy implies an intrinsic "non-consistency" problem in interpreting cost terms. When no data are available to determine the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classification. In contrast, mutual-information classifiers can derive an objective solution from the given data, yielding a reasonable balance among error types and reject types. Numerical examples using both types of classifiers, including extremely class-imbalanced cases, confirm the theoretical differences. Finally, we briefly summarize the respective application advantages of Bayesian classifiers and mutual-information classifiers.
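As a rough illustration of the two decision principles contrasted in the abstract (not the paper's exact formulation), the sketch below compares a Chow-style Bayesian reject rule, whose confidence threshold comes from a user-supplied rejection cost, with a mutual-information classifier that instead picks the threshold maximizing the empirical mutual information I(Y; D) between true labels and decisions. All function names and the cost parameterization are hypothetical.

```python
import numpy as np

def bayes_decision(posterior_p1, reject_cost=0.2):
    # Chow-style rule: predict the class with the larger posterior,
    # but reject (-1) when the top posterior falls below 1 - reject_cost.
    p = np.asarray(posterior_p1, dtype=float)
    decisions = np.where(p >= 0.5, 1, 0)
    confidence = np.maximum(p, 1.0 - p)
    return np.where(confidence < 1.0 - reject_cost, -1, decisions)

def mutual_information(y, d):
    # Empirical mutual information I(Y; D) in bits between true labels y
    # and decisions d (the reject outcome counts as a third decision value).
    y, d = np.asarray(y), np.asarray(d)
    mi = 0.0
    for yv in np.unique(y):
        for dv in np.unique(d):
            p_joint = np.mean((y == yv) & (d == dv))
            if p_joint > 0:
                mi += p_joint * np.log2(
                    p_joint / (np.mean(y == yv) * np.mean(d == dv)))
    return mi

def mi_threshold(posterior_p1, y, grid=np.linspace(0.5, 0.99, 50)):
    # Mutual-information classifier: instead of fixing a rejection cost,
    # search for the confidence threshold that maximizes I(Y; D) on data.
    best_t, best_mi = 0.5, -1.0
    for t in grid:
        d = bayes_decision(posterior_p1, reject_cost=1.0 - t)
        mi = mutual_information(y, d)
        if mi > best_mi:
            best_t, best_mi = t, mi
    return best_t
```

The contrast mirrors the abstract's point: the Bayesian rule needs cost terms supplied from outside the data, while the mutual-information criterion is determined entirely by the observed labels and decisions.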
Paper title: What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?