Feature Selection for Value Function Approximation Using Bayesian Model Selection
Scientific paper
Computer Science – Artificial Intelligence
European Conference on Machine Learning (ECML'09)
2012-01-31
Feature selection in reinforcement learning (RL), i.e., choosing basis functions such that useful approximations of the unknown value function can be obtained, is one of the main challenges in scaling RL to real-world applications. Here we consider the Gaussian-process-based framework GPTD for approximate policy evaluation and propose feature selection through marginal likelihood optimization of the associated hyperparameters. Our approach has two appealing benefits: (1) given just sample transitions, we can solve the policy evaluation problem fully automatically (without looking at the learning task and, in theory, independently of the dimensionality of the state space), and (2) model selection allows us to consider more sophisticated kernels, which in turn enable us to identify relevant subspaces and eliminate irrelevant state variables, so that we achieve substantial computational savings and improved prediction performance.
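To make the core mechanism concrete, here is a minimal sketch, not the paper's GPTD implementation: it shows marginal likelihood optimization of per-dimension (ARD) kernel length scales in a plain Gaussian process regression setting, standing in for value-function approximation. The toy data, dimensionality, and use of scikit-learn are all illustrative assumptions.

```python
# Sketch only: ARD hyperparameter selection via marginal likelihood,
# on synthetic data where only the first input dimension is relevant.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 4))                     # 4 "state variables"
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)   # only dim 0 matters

# One length scale per input dimension (ARD); fit() maximizes the
# log marginal likelihood over these hyperparameters.
kernel = RBF(length_scale=np.ones(4), length_scale_bounds=(1e-2, 1e3)) \
         + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5).fit(X, y)

# Irrelevant dimensions are driven toward large length scales, so the
# learned scales act as a feature-relevance measure.
print("learned length scales:", gpr.kernel_.k1.length_scale)
print("log marginal likelihood:", gpr.log_marginal_likelihood_value_)
```

Running this, the length scales for dimensions 1 through 3 grow large relative to dimension 0, which is the sense in which marginal likelihood optimization identifies relevant subspaces and eliminates irrelevant state variables.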
Tobias Jung
Peter Stone