General Discounting versus Average Reward
Computer Science – Learning
Scientific paper
2006-05-09
Proc. 17th International Conf. on Algorithmic Learning Theory (ALT 2006), pages 244-258
17 pages, 1 table
Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U from cycle 1 to m (average value) with the future discounted reward V from cycle k to infinity (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that U for m → ∞ and V for k → ∞ are asymptotically equal, provided both limits exist. Further, if the effective horizon grows linearly with k or faster, then the existence of the limit of U implies that the limit of V exists. Conversely, if the effective horizon grows linearly with k or slower, then the existence of the limit of V implies that the limit of U exists.
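The following is a small numerical sketch, not taken from the paper, illustrating the abstract's claim that the average value U and the discounted value V approach the same limit. The symbols U, V, and the discount sequence gamma follow the abstract; everything else is an assumption: V is taken in normalized form (divided by the discount mass, so it is comparable with the average U), the oscillating reward sequence and the quadratic discount gamma(i) = 1/i^2 are illustrative choices, and the infinite sums are truncated at a finite n.

# A minimal sketch (assumptions as noted above): average value U versus
# normalized discounted value V for a non-geometric discount sequence.

def U(rewards, m):
    # Average value: mean reward over cycles 1..m.
    return sum(rewards[:m]) / m

def V(rewards, gamma, k, n):
    # Normalized discounted value from cycle k, with the infinite sums
    # truncated at cycle n (an approximation for this sketch):
    # V = (sum_{i=k}^{n} gamma(i)*r_i) / (sum_{i=k}^{n} gamma(i)).
    num = sum(gamma(i) * rewards[i - 1] for i in range(k, n + 1))
    den = sum(gamma(i) for i in range(k, n + 1))
    return num / den

def gamma(i):
    # Quadratic discount (assumed here): the remaining discount mass behaves
    # like 1/k, so the effective horizon grows roughly linearly in k.
    return 1.0 / i**2

n = 100_000
# Oscillating rewards 0, 1, 0, 1, ... whose running average converges to 1/2.
rewards = [float(i % 2 == 0) for i in range(1, n + 1)]

print(U(rewards, n))               # ~0.5
print(V(rewards, gamma, 1000, n))  # ~0.5 for large k (with n >> k)

Both printed values come out close to 1/2, the common limit. Linear horizon growth is the boundary case in the abstract's two implications, so for this discount sequence existence of either limit implies existence of the other.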