Feature Reinforcement Learning: Part I: Unstructured MDPs
Scientific paper – Computer Science (Learning)
2009-06-09
Journal of Artificial General Intelligence, 1 (2009), pages 3-24
24 LaTeX pages, 5 diagrams
General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian. On the other hand, reinforcement learning is well-developed for small finite state Markov decision processes (MDPs). Until now, extracting the right state representations out of bare observations, that is, reducing the general agent setup to the MDP framework, has been an art that involves significant effort by designers. The primary goal of this work is to automate the reduction process and thereby significantly expand the scope of many existing reinforcement learning algorithms and the agents that employ them. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in Part II. The role of POMDPs is also considered there.
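To make the reduction concrete, the sketch below shows the general idea in Python: a feature map phi compresses the interaction history of (observation, action, reward) tuples into a finite MDP state, after which a standard method such as tabular Q-learning applies. This is a hypothetical illustration, not the paper's algorithm or criterion; the particular phi (last observation only), the env_step interface, and the toy environment are assumptions made for this example.

import random
from collections import defaultdict

def phi(history):
    # Hypothetical feature map: reduce the full history of
    # (observation, action, reward) tuples to an MDP state.
    # Simplest possible choice -- the most recent observation.
    return history[-1][0] if history else None

def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    # Standard tabular Q-learning update on the phi-induced MDP.
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def run_agent(env_step, n_steps=1000, actions=(0, 1), eps=0.1):
    # env_step(action) -> (observation, reward) is an assumed interface.
    Q = defaultdict(float)
    history = []
    obs, reward = env_step(random.choice(actions))
    history.append((obs, None, reward))
    for _ in range(n_steps):
        s = phi(history)
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        obs, reward = env_step(a)
        history.append((obs, a, reward))
        q_learning_step(Q, s, a, reward, phi(history), actions)
    return Q

# Toy two-state environment for demonstration: action 1 toggles a
# hidden bit; the reward equals the current bit.
state = {"x": 0}
def toy_env(action):
    if action == 1:
        state["x"] = 1 - state["x"]
    return state["x"], float(state["x"])

Q = run_agent(toy_env)

Note that this sketch fixes phi by hand; the paper's contribution is a formal criterion for selecting such a map automatically, which is exactly the step this example sidesteps.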