Computer Science – Learning
Scientific paper
2008-10-31
Theoretical Computer Science, 405(3) (2008), pp. 274–284
20 pages
We address reinforcement learning problems in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions; that is, environments more general than (PO)MDPs. The agent's task is to attain the best possible asymptotic reward when the true generating environment is unknown but belongs to a known countable family of environments. We identify sufficient conditions on the class of environments under which there exists an agent that attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to different probabilistic assumptions known in reinforcement learning and related fields, such as Markov decision processes and mixing conditions.
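The setting described above — an agent that must perform well without knowing which member of a known class of environments it is facing — can be illustrated with a toy sketch. This is not the paper's construction; it is a hypothetical example assuming a finite class of two-armed bandit environments, where the agent maintains a Bayesian posterior over the class and acts greedily with respect to the posterior mixture.

```python
import random

class BayesMixtureAgent:
    """Toy agent for a known finite class of two-armed bandits.

    Each candidate environment is a pair (p_arm0, p_arm1) of reward
    probabilities; the true environment is unknown but lies in the class.
    """

    def __init__(self, candidates):
        self.candidates = candidates
        # Uniform prior over the candidate environments.
        self.weights = [1.0 / len(candidates)] * len(candidates)

    def act(self):
        # Expected reward of each arm under the posterior mixture.
        means = [
            sum(w * env[a] for w, env in zip(self.weights, self.candidates))
            for a in (0, 1)
        ]
        return 0 if means[0] >= means[1] else 1

    def update(self, action, reward):
        # Bayes update: weight each candidate by the likelihood of the
        # observed binary reward, then renormalize.
        for i, env in enumerate(self.candidates):
            p = env[action]
            self.weights[i] *= p if reward == 1 else (1.0 - p)
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

def run(agent, true_env, steps, rng):
    total = 0
    for _ in range(steps):
        a = agent.act()
        r = 1 if rng.random() < true_env[a] else 0
        agent.update(a, r)
        total += r
    return total

rng = random.Random(0)
candidates = [(0.9, 0.1), (0.1, 0.9)]  # the known class of environments
true_env = candidates[1]               # the agent does not know this
agent = BayesMixtureAgent(candidates)
reward = run(agent, true_env, 1000, rng)
print(reward)
```

In this simple i.i.d. case the posterior concentrates on the true environment and the average reward approaches the optimum; the point of the paper is precisely that, for environments with arbitrary dependence on the past, such convergence requires additional conditions on the class.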
Marcus Hutter
Daniil Ryabko
On the Possibility of Learning in Reactive Environments with Arbitrary Dependence