Algorithm Selection as a Bandit Problem with Unbounded Losses

Computer Science – Artificial Intelligence

Scientific paper

Details

15 pages, 2 figures

Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required setting an arbitrary bound on algorithm runtimes, thus invalidating the solver's optimal regret. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem with partial information and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret that also holds for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
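The online approach described in the abstract interleaves selection with learning: on each instance one solver is chosen, only its runtime (the loss) is observed, and the selection distribution is updated, with no bound on runtimes known in advance. The sketch below is a minimal, hypothetical Exp3-style loop in Python illustrating that setting; it is not the solver adapted in the paper, and the names `select_online`, `solvers`, and `instances` are assumptions for illustration. The unknown loss bound is handled here with a simple doubling trick on the runtime scale.

```python
import math
import random

def select_online(solvers, instances, eta=0.1):
    """Online algorithm selection as a bandit with partial information.

    A rough Exp3-style sketch (not the paper's exact solver): on each
    instance we pick one solver, observe only its runtime, and rescale
    losses with a doubling trick because no bound on runtimes is known
    in advance.
    """
    k = len(solvers)
    weights = [1.0] * k
    scale = 1.0          # current guess at the maximum runtime (loss bound)
    total_time = 0.0

    for instance in instances:
        # Selection distribution proportional to the current weights.
        total_w = sum(weights)
        probs = [w / total_w for w in weights]

        # Partial information: run only the sampled solver, see only its loss.
        i = random.choices(range(k), weights=probs)[0]
        runtime = solvers[i](instance)
        total_time += runtime

        # Doubling trick: if the observed loss exceeds the current scale,
        # grow the scale and restart the weights, skipping this update.
        if runtime > scale:
            while runtime > scale:
                scale *= 2.0
            weights = [1.0] * k
            continue

        # Importance-weighted loss estimate for the chosen arm only.
        loss_est = (runtime / scale) / probs[i]
        weights[i] *= math.exp(-eta * loss_est)

    return total_time
```

Each entry of `solvers` is assumed to be a callable that runs on an instance and returns its runtime; in practice the losses could equally be capped per-instance runtimes or log-runtimes.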
