Computer Science – Artificial Intelligence
Scientific paper
2009-04-21
This paper is the extended version of a similarly named paper appearing in ICML'09, containing the rigorous proofs of the main theorem.
In this paper we propose an algorithm for polynomial-time reinforcement learning in factored Markov decision processes (FMDPs). The factored optimistic initial model (FOIM) algorithm maintains an empirical model of the FMDP in a conventional way and always follows the greedy policy with respect to its model. The only trick of the algorithm is that the model is initialized optimistically. We prove that with suitable initialization (i) FOIM converges to the fixed point of approximate value iteration (AVI); (ii) the number of steps on which the agent makes non-near-optimal decisions (with respect to the solution of AVI) is polynomial in all relevant quantities; and (iii) the per-step costs of the algorithm are also polynomial. To the best of our knowledge, FOIM is the first algorithm with these properties. This extended version contains the rigorous proofs of the main theorem. A version of this paper appeared in ICML'09.
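The core idea, that optimistic model initialization alone can drive exploration under a purely greedy policy, can be illustrated outside the factored setting. The sketch below is not the paper's FOIM algorithm: it is a hypothetical toy example using tabular Q-learning on a 5-state chain MDP, where every Q-value starts at the upper bound on return, r_max / (1 - gamma), so unvisited state-action pairs look attractive and greedy action selection explores without any explicit exploration bonus.

```python
import numpy as np

# Toy illustration (NOT the paper's FOIM): optimistic initialization plus a
# purely greedy policy, here with tabular Q-learning on a 5-state chain.
# The environment and all constants below are hypothetical.
n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
goal, gamma, alpha = n_states - 1, 0.95, 0.5
r_max = 1.0                         # reward 1 on reaching the goal, else 0

# Optimistic initialization: every Q-value starts at an upper bound on the
# discounted return, so untried (s, a) pairs dominate the greedy choice.
Q = np.full((n_states, n_actions), r_max / (1.0 - gamma))

s = 0
for _ in range(2000):
    a = int(np.argmax(Q[s]))        # always greedy with respect to the model
    s_next = min(s + 1, goal) if a == 1 else max(s - 1, 0)
    r = r_max if s_next == goal else 0.0
    # Standard Q-learning backup; optimism decays as pairs are visited.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = 0 if s_next == goal else s_next   # reset the episode at the goal

greedy_policy = np.argmax(Q, axis=1)      # learned greedy policy per state
```

After training, the greedy policy moves right toward the goal from every non-goal state, even though no epsilon-greedy or exploration bonus was ever used; the initial optimism was driven down only at state-action pairs that were actually tried. FOIM applies the same principle to the model (rather than a Q-table) of a factored MDP, which is what makes the polynomial-time guarantees of the paper possible.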
Lorincz Andras
Szita Istvan
Optimistic Initialization and Greediness Lead to Polynomial Time Learning in Factored MDPs - Extended Version