Feature Selection via Regularized Trees
Computer Science – Learning
Scientific paper, 2012-01-07
8 pages; The 2012 International Joint Conference on Neural Networks (IJCNN), IEEE, 2012
We propose a tree regularization framework that enables many tree models to perform feature selection efficiently. The key idea of the framework is to penalize selecting a new feature for splitting when its gain (e.g. information gain) is similar to that of the features used in previous splits. The regularization framework is applied here to random forests and boosted trees, and can be easily applied to other tree models. Experimental studies show that the regularized trees can select high-quality feature subsets with regard to both strong and weak classifiers. Because tree models naturally handle categorical and numerical variables, missing values, different scales between variables, interactions, and nonlinearities, the tree regularization framework provides an effective and efficient feature selection solution for many practical problems.
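The core idea in the abstract can be sketched in a few lines: when choosing a split, keep the full gain for features already used in earlier splits, and multiply a new feature's gain by a coefficient below one, so a new feature is only selected when it is substantially better. The helper names, the penalty coefficient `lam`, and the toy data below are illustrative assumptions, not the paper's exact implementation:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(column, y, threshold):
    """Information gain of splitting y by column <= threshold."""
    left = [yi for xi, yi in zip(column, y) if xi <= threshold]
    right = [yi for xi, yi in zip(column, y) if xi > threshold]
    if not left or not right:
        return 0.0
    n = len(y)
    return (entropy(y)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

def regularized_gain(gain, feature, used_features, lam=0.8):
    """Penalize the gain of a feature not yet used in earlier splits.

    Features already in `used_features` keep their full gain; a new
    feature's gain is multiplied by lam (0 < lam <= 1), so it wins
    only when clearly better than the already-used features.
    """
    return gain if feature in used_features else lam * gain

# Toy split selection: feature 0 was used before, feature 1 was not.
X = [[0, 1], [1, 0], [0, 0], [1, 1]]
y = [0, 1, 0, 1]
used = {0}
gains = {
    j: regularized_gain(
        information_gain([row[j] for row in X], y, 0.5), j, used
    )
    for j in range(2)
}
best = max(gains, key=gains.get)  # feature 0 is preferred here
```

Repeating this selection at every node and adding each chosen feature to `used_features` yields a compact feature subset as a by-product of growing the tree ensemble.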
Houtao Deng
George Runger