Sparse Reward Processes
Computer Science – Learning
Scientific paper
2012-01-12
15 pages, 2 figures
We introduce a class of learning problems where the agent is presented with a series of tasks. Intuitively, if there is a relation among those tasks, then the information gained during the execution of one task has value for the execution of another. Consequently, the agent is intrinsically motivated to explore its environment beyond the degree necessary to solve the current task at hand. We develop a decision-theoretic setting that generalises standard reinforcement learning tasks and captures this intuition. More precisely, we consider a multi-stage stochastic game between a learning agent and an opponent. We posit that the setting is a good model for the problem of life-long learning in uncertain environments: resources must be spent learning about currently important tasks, while effort must also be allocated to learning about aspects of the world that are not relevant at the moment, because unpredictable future events may change the decision maker's priorities. Thus, in some sense, the model "explains" the necessity of curiosity. Apart from introducing the general formalism, the paper provides algorithms, which are evaluated experimentally in some exemplary domains, and proves performance bounds for some cases of the problem.
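
The abstract only describes the setting in words. The toy Python simulation below is a heavily simplified sketch of the underlying intuition, not the paper's formal model or its algorithms: the chain environment, the task-to-goal mapping, the opponent that re-draws the active task at each stage, and the assumption that a goal state is recognisable on contact are all invented here purely for illustration.

import random

# Illustrative sketch (assumptions throughout, not the paper's construction):
# one shared environment, several tasks, and an external "opponent" that picks
# which task is currently rewarded. Reward is sparse: only the active task's
# goal state pays. Knowledge gathered while exploring during one stage can be
# reused when the active task changes, which is the intuition behind exploring
# beyond what the current task strictly requires.

N_STATES = 12                         # toy chain environment (assumption)
GOAL_OF_TASK = {0: 3, 1: 7, 2: 10}    # hidden mapping: task -> rewarding state
N_STAGES = 30                         # number of task switches
STEPS_PER_STAGE = 40                  # interaction budget per stage

def run_agent(explore_prob, seed=0):
    rng = random.Random(seed)
    known_goals = {}                  # task -> goal state, learned from experience
    state, total_reward = 0, 0
    for _ in range(N_STAGES):
        task = rng.randrange(len(GOAL_OF_TASK))   # opponent announces the active task
        for _ in range(STEPS_PER_STAGE):
            if task in known_goals and rng.random() >= explore_prob:
                goal = known_goals[task]
                action = 1 if goal > state else -1   # exploit: walk toward known goal
            else:
                action = rng.choice([-1, 1])         # explore the chain
            state = max(0, min(N_STATES - 1, state + action))
            if state == GOAL_OF_TASK[task]:          # only the active task is rewarded
                total_reward += 1
            # Assumed for illustration: any goal state is recognisable on contact,
            # so exploration also reveals goals of currently irrelevant tasks.
            for other, g in GOAL_OF_TASK.items():
                if state == g:
                    known_goals[other] = g
    return total_reward

if __name__ == "__main__":
    for eps in (0.0, 0.1, 0.3):
        print(f"explore_prob={eps}: return={run_agent(eps)}")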