Contextual Bandit Algorithms with Supervised Learning Guarantees

Computer Science – Learning

Scientific paper


Details

10 pages


We address the problem of learning in an online, bandit setting where the learner must repeatedly select among $K$ actions, but only receives partial feedback based on its choices. We establish two new facts: First, using a new algorithm called Exp4.P, we show that it is possible to compete with the best in a set of $N$ experts with probability $1-\delta$ while incurring regret at most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is evaluated empirically on a large-scale, real-world dataset. Second, we give a new algorithm called VE that competes with a possibly infinite set of policies of VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln(T) + \ln (1/\delta))})$ with probability $1-\delta$. These guarantees improve on those of all previous algorithms, whether in a stochastic or adversarial environment, and bring us closer to providing supervised-learning-type guarantees for the contextual bandit setting.
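The Exp4.P guarantee above is obtained by exponential weighting over expert advice, using importance-weighted reward estimates together with a confidence (variance) bonus that yields the high-probability bound. The following is a minimal Python sketch of that idea, not the paper's exact algorithm: the expert interface, reward signature, exploration floor, and parameter settings are all assumptions made for illustration.

```python
import math
import random

def exp4p(experts, reward_fn, K, T, delta):
    """Illustrative sketch of an Exp4.P-style update.

    `experts` is a list of callables t -> probability vector over K actions
    (advice); `reward_fn(t, a)` returns a reward in [0, 1]. Both are
    hypothetical interfaces for this sketch. Returns the total reward.
    """
    N = len(experts)
    p_min = math.sqrt(math.log(N) / (K * T))            # exploration floor (assumed setting)
    bonus = math.sqrt(math.log(N / delta) / (K * T))    # confidence-bonus scale
    w = [1.0] * N                                       # expert weights
    total_reward = 0.0
    for t in range(T):
        advice = [e(t) for e in experts]                # N distributions over K actions
        W = sum(w)
        # Mix expert advice into an action distribution, floored at p_min
        # (flooring then renormalizing is a simplification of the paper's mixing).
        p = [max(p_min,
                 sum(w[j] * advice[j][a] for j in range(N)) / W)
             for a in range(K)]
        s = sum(p)
        p = [x / s for x in p]
        a_t = random.choices(range(K), weights=p)[0]
        r = reward_fn(t, a_t)
        total_reward += r
        # Importance-weighted reward estimate: unbiased for each action.
        y_hat = [r / p[a] if a == a_t else 0.0 for a in range(K)]
        for j in range(N):
            g_hat = sum(advice[j][a] * y_hat[a] for a in range(K))  # estimated expert gain
            v_hat = sum(advice[j][a] / p[a] for a in range(K))      # variance proxy
            w[j] *= math.exp(p_min / 2 * (g_hat + v_hat * bonus))
    return total_reward
```

The variance term `v_hat` is what distinguishes this style of update from plain EXP4: experts whose reward estimates are noisy receive a compensating bonus, which is what allows the regret bound to hold with probability $1-\delta$ rather than only in expectation.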


Profile ID: LFWR-SCP-O-373230
