General Discounting versus Average Reward

Computer Science – Learning

Scientific paper


Details

17 pages, 1 table


Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U over cycles 1 to m (average value) with the future discounted reward V from cycle k to infinity (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that U for m → ∞ and V for k → ∞ are asymptotically equal, provided both limits exist. Further, if the effective horizon grows linearly with k or faster, then existence of the limit of U implies that the limit of V exists. Conversely, if the effective horizon grows linearly with k or slower, then existence of the limit of V implies that the limit of U exists.
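The comparison in the abstract can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's construction: it uses the standard normalized definitions U_m = (1/m) Σ_{i=1}^m r_i and V_k = (Σ_{i=k}^∞ γ_i r_i) / (Σ_{i=k}^∞ γ_i), picks an illustrative non-geometric discount sequence γ_i = i^{-2} and a reward sequence r_i = 1 − 1/i converging to 1, and truncates the infinite sums at a finite horizon.

```python
def U(rewards, m):
    """Average value: mean reward over cycles 1..m."""
    return sum(rewards[:m]) / m

def V(rewards, discounts, k):
    """Discounted value from cycle k: normalized discounted reward sum."""
    num = sum(g * r for g, r in zip(discounts[k - 1:], rewards[k - 1:]))
    den = sum(discounts[k - 1:])
    return num / den

N = 100_000  # finite truncation standing in for the infinite horizon
rewards = [1.0 - 1.0 / i for i in range(1, N + 1)]    # r_i -> 1
discounts = [i ** -2 for i in range(1, N + 1)]        # summable, non-geometric

# Both values should be close to the common limit 1, consistent with the
# asymptotic equality claimed in the abstract (when both limits exist).
print(f"U_m for m={N}: {U(rewards, N):.4f}")
print(f"V_k for k={N // 2}: {V(rewards, discounts, N // 2):.4f}")
```

For this polynomial discount the effective horizon plausibly grows proportionally to k, which is the boundary case the abstract singles out; with a geometric discount instead, the effective horizon stays bounded and the equivalence can fail in one direction.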

