Multiarmed Bandit Problems with Delayed Feedback

Computer Science – Data Structures and Algorithms

Scientific paper

Details

This version provides simpler proofs of the main results and highlights the technical novelty.

In this paper we initiate the study of optimization of bandit-type problems in scenarios where the feedback of a play is not immediately known. This arises naturally in allocation problems, which have been studied extensively in the literature, albeit in the absence of delays in the feedback. We study this problem in the Bayesian setting. In the presence of delays, no solution with provable guarantees is known to exist with sub-exponential running time. We show that bandit problems with delayed feedback that arise in allocation settings can be forced to have significant structure, with a slight loss in optimality. This structure gives us the ability to reason about the relationship of single-arm policies to the entangled optimum policy, and eventually leads to an O(1) approximation for a significantly general class of priors. The structural insights we develop are of key interest and carry over to the setting where the feedback of an action is available instantaneously, and we improve all previous results in this setting as well.
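To make the delayed-feedback setting concrete, the following is a minimal, hypothetical sketch of the problem described in the abstract, not the paper's policy: a Bernoulli bandit with Beta priors in which each play's reward is revealed only after a fixed delay. The use of Thompson sampling, the arm means, the delay, and the horizon are all illustrative assumptions.

```python
import random
from collections import deque

# Hypothetical illustration (not the paper's algorithm): a Bernoulli bandit
# with Beta priors where each arm's reward is revealed only after a fixed
# delay D. Thompson sampling is used as a simple baseline policy; the number
# of arms, delay, horizon, and true means are made-up parameters.

NUM_ARMS = 3
DELAY = 5            # rounds before a play's feedback arrives
HORIZON = 1000
TRUE_MEANS = [0.3, 0.5, 0.7]   # unknown to the policy

# Beta(1, 1) prior on each arm's success probability.
alpha = [1.0] * NUM_ARMS
beta = [1.0] * NUM_ARMS

pending = deque()  # (reveal_round, arm, reward) for plays with outstanding feedback

for t in range(HORIZON):
    # Incorporate any feedback that has just become available.
    while pending and pending[0][0] <= t:
        _, arm, reward = pending.popleft()
        alpha[arm] += reward
        beta[arm] += 1 - reward

    # Thompson sampling: draw a mean from each posterior, play the best draw.
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(NUM_ARMS)]
    chosen = max(range(NUM_ARMS), key=lambda i: samples[i])

    # The reward is realized now but only observed after DELAY rounds.
    reward = 1 if random.random() < TRUE_MEANS[chosen] else 0
    pending.append((t + DELAY, chosen, reward))

print("Posterior means:",
      [round(alpha[i] / (alpha[i] + beta[i]), 3) for i in range(NUM_ARMS)])
```

The point of the sketch is only the bookkeeping: during the delay window the posterior is updated from stale information, which is exactly the difficulty the paper's structural results address.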
