Linearly Parameterized Bandits

Computer Science – Learning

Scientific paper


Details

40 pages; updated results and references


We consider bandit problems involving a large (possibly infinite) collection of arms, in which the expected reward of each arm is a linear function of an $r$-dimensional random vector $\mathbf{Z} \in \mathbb{R}^r$, where $r \geq 2$. The objective is to minimize the cumulative regret and Bayes risk. When the set of arms corresponds to the unit sphere, we prove that the regret and Bayes risk are of order $\Theta(r \sqrt{T})$, by establishing a lower bound for an arbitrary policy, and showing that a matching upper bound is obtained through a policy that alternates between exploration and exploitation phases. The phase-based policy is also shown to be effective if the set of arms satisfies a strong convexity condition. For the case of a general set of arms, we describe a near-optimal policy whose regret and Bayes risk admit upper bounds of the form $O(r \sqrt{T} \log^{3/2} T)$.
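The phase-based idea from the abstract can be sketched in simulation: explore by pulling the $r$ coordinate directions to estimate the unknown parameter vector, then exploit by playing the greedy arm on the unit sphere. This is a minimal illustrative sketch, not the paper's policy; the phase lengths, noise model, and estimator below are assumptions chosen for simplicity.

```python
import numpy as np

def phased_linear_bandit(z, T, r, rng=None):
    """Illustrative phase-based policy for a linear bandit on the unit sphere.

    The expected reward of an arm u (a unit vector) is u @ z, where z is
    unknown to the policy. Exploration pulls each of the r coordinate
    directions once; exploitation plays the greedy arm z_hat / ||z_hat||
    for a number of rounds that grows with the phase index. The schedule
    and Gaussian noise level (0.1) are hypothetical choices.
    """
    rng = rng or np.random.default_rng(0)
    sums = np.zeros(r)      # running reward sums per coordinate direction
    counts = np.zeros(r)    # pull counts per coordinate direction
    total_reward = 0.0
    t, phase = 0, 1
    while t < T:
        # Exploration: one noisy pull per coordinate direction.
        for i in range(r):
            if t >= T:
                break
            u = np.zeros(r)
            u[i] = 1.0
            reward = u @ z + rng.normal(0.0, 0.1)
            sums[i] += reward
            counts[i] += 1
            total_reward += reward
            t += 1
        # Coordinate-wise sample-mean estimate of z.
        z_hat = sums / np.maximum(counts, 1)
        # Exploitation: play the greedy unit-sphere arm for phase * r rounds.
        greedy = z_hat / max(np.linalg.norm(z_hat), 1e-12)
        for _ in range(phase * r):
            if t >= T:
                break
            total_reward += greedy @ z + rng.normal(0.0, 0.1)
            t += 1
        phase += 1
    return total_reward
```

Because the exploitation phases lengthen over time, the fraction of rounds spent exploring shrinks, which is the mechanism behind the $\Theta(r\sqrt{T})$ regret guarantee for the unit-sphere case described above.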
