Computer Science – Learning
Scientific paper
2012-03-15
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010)
Most conventional Reinforcement Learning (RL) algorithms aim to optimize decision-making rules in terms of the expected return. However, especially for risk-management purposes, other risk-sensitive criteria such as the value-at-risk or the expected shortfall are sometimes preferred in real applications. Here, we describe a parametric method for estimating the density of returns, which allows us to handle various criteria in a unified manner. We first extend the Bellman equation for the conditional expected return to cover the conditional probability density of returns. We then derive an extension of the TD-learning algorithm for estimating return densities in an unknown environment. As test instances, several parametric density estimation algorithms are presented for the Gaussian, Laplace, and skewed Laplace distributions. Through numerical experiments, we show that these algorithms lead to risk-sensitive as well as robust RL paradigms.
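To make the idea concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: a TD-style update for a Gaussian parametrization of the return density on a toy chain MDP. The chain, the reward noise, and the constants `GAMMA` and `ALPHA` are all assumptions introduced here for illustration. The mean is updated with the ordinary TD error, and the variance is moved toward the second central moment of the bootstrapped target, after which a risk criterion such as the value-at-risk follows in closed form from the fitted Gaussian.

```python
# Illustrative sketch (not the authors' exact method): TD-style updates for
# a Gaussian parametrization of the return density on a hypothetical
# 3-state cyclic chain with noisy rewards.
import random
from statistics import NormalDist

GAMMA, ALPHA = 0.9, 0.05        # discount factor and learning rate (assumed)
N_STATES = 3
mu = [0.0] * N_STATES           # estimated mean return per state
var = [1.0] * N_STATES          # estimated return variance per state

random.seed(0)
for _ in range(5000):
    s = random.randrange(N_STATES)
    s_next = (s + 1) % N_STATES             # deterministic chain transition
    r = 1.0 + random.gauss(0.0, 0.5)        # noisy reward sample
    delta = r + GAMMA * mu[s_next] - mu[s]  # ordinary TD error for the mean
    mu[s] += ALPHA * delta
    # Moment-matching update for the variance: the bootstrapped target's
    # second central moment around mu[s] is delta**2 + GAMMA**2 * var[s_next].
    var[s] += ALPHA * (delta**2 + GAMMA**2 * var[s_next] - var[s])

# With a Gaussian return density, criteria such as the 5% value-at-risk
# are available in closed form from the estimated parameters.
value_at_risk = NormalDist(mu[0], var[0] ** 0.5).inv_cdf(0.05)
print(mu[0], var[0], value_at_risk)
```

Because the return density, rather than only its mean, is estimated, the same fitted parameters support the expected return, the value-at-risk, and the expected shortfall without re-running learning for each criterion.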
Hirotaka Hachiya
Hisashi Kashima
Tetsuro Morimura
Masashi Sugiyama
Toshiyuki Tanaka