Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design

Computer Science – Learning

Scientific paper


Details


Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
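The abstract describes GP-UCB as an upper-confidence rule: at each round, pick the candidate maximizing the posterior mean plus a scaled posterior standard deviation. The sketch below is a minimal, self-contained illustration of that idea over a finite candidate set, not the paper's implementation; the RBF kernel, length scale, noise level, and the particular beta schedule are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential (RBF) kernel between 1-D point sets a and b
    # (length scale `ls` is an illustrative choice, not from the paper).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # GP posterior mean and variance at candidate points Xs,
    # given noisy observations (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(rbf(Xs, Xs)) - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, var

def gp_ucb(f, candidates, T=30, noise=1e-2, delta=0.1, rng=None):
    # GP-UCB loop: choose x_t = argmax mu(x) + sqrt(beta_t) * sigma(x),
    # then observe a noisy payoff f(x_t) + eps.
    rng = np.random.default_rng(rng)
    X, y = [], []
    for t in range(1, T + 1):
        # A beta_t schedule of this log(|D| t^2 / delta) form appears in
        # finite-arm UCB analyses; constants here are illustrative.
        beta = 2 * np.log(len(candidates) * t**2 * np.pi**2 / (6 * delta))
        if not X:
            x = candidates[rng.integers(len(candidates))]  # first pick: random
        else:
            mu, var = gp_posterior(np.array(X), np.array(y), candidates, noise)
            x = candidates[np.argmax(mu + np.sqrt(beta * var))]
        X.append(x)
        y.append(f(x) + noise * rng.standard_normal())
    return np.array(X), np.array(y)

# Usage: maximize a toy function with its peak at x = 0.6.
cands = np.linspace(0.0, 1.0, 101)
X, y = gp_ucb(lambda x: -(x - 0.6) ** 2, cands, T=30, rng=0)
```

The confidence width `sqrt(beta) * sigma` shrinks where the GP has been sampled, so the rule trades off exploring high-variance regions against exploiting high-mean ones; the paper's regret analysis bounds the cumulative cost of this trade-off via the maximal information gain.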




Profile ID: LFWR-SCP-O-301570
