Computer Science – Learning
Scientific paper
2011-10-14
6 Pages
Bayesian optimization (BO) algorithms aim to optimize an unknown, expensive-to-evaluate function using as few evaluations/experiments as possible. Most proposed BO algorithms are sequential, selecting only one experiment at each iteration. This can be time-inefficient when each experiment takes a long time and more than one experiment could be run concurrently. On the other hand, requesting a fixed-size batch of experiments at each iteration degrades optimization performance compared to sequential policies. In this paper, we present an algorithm that requests a batch of experiments at each time step t, where the batch size p_t is determined dynamically at each step. Our algorithm is based on the observation that the experiments selected by the sequential policy can sometimes be almost independent of each other. Our algorithm identifies such scenarios and requests those experiments at the same time without degrading performance. We evaluate our proposed method using the Expected Improvement policy, and the results on eight real and synthetic benchmarks show substantial speedup with little impact on performance.
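The abstract only describes the approach at a high level, so the Python snippet below is a minimal illustrative sketch of the general idea, not the authors' algorithm. It simulates the sequential Expected Improvement policy with a Gaussian-process surrogate (scikit-learn) and keeps growing the batch while the next pick is nearly uncorrelated, under the GP posterior, with points already in the batch; the correlation test, the constant-liar-style hallucination of pending outcomes, and all names and thresholds (dynamic_batch, expected_improvement, corr_threshold) are assumptions made for illustration.

    # Illustrative sketch only: a dynamic-batch variant of Expected Improvement (EI).
    # The stopping rule (low posterior correlation between batch points) is an
    # assumption for illustration, not the paper's exact criterion.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expected_improvement(mu, sigma, best):
        """EI for maximization, given posterior mean/std and the incumbent best value."""
        sigma = np.maximum(sigma, 1e-12)
        z = (mu - best) / sigma
        return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    def dynamic_batch(gp, X_obs, y_obs, candidates, corr_threshold=0.3, max_batch=10):
        """Select a variable-size batch: keep adding the sequential EI pick as long as
        it is nearly uncorrelated (under the GP posterior) with the current batch."""
        batch = []
        X_aug, y_aug = X_obs.copy(), y_obs.copy()
        for _ in range(max_batch):
            gp.fit(X_aug, y_aug)
            mu, sigma = gp.predict(candidates, return_std=True)
            ei = expected_improvement(mu, sigma, y_obs.max())
            x_next = candidates[np.argmax(ei)]
            if batch:
                # Posterior correlation between the new pick and earlier batch points.
                pts = np.vstack([np.array(batch), x_next[None, :]])
                _, cov = gp.predict(pts, return_cov=True)
                std = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
                corr = cov[-1, :-1] / (std[-1] * std[:-1])
                if np.max(np.abs(corr)) > corr_threshold:
                    break  # Too dependent on earlier picks: stop growing the batch.
            batch.append(x_next)
            # "Hallucinate" the pending outcome with the posterior mean so the next
            # EI pick accounts for the experiment already in the batch.
            X_aug = np.vstack([X_aug, x_next[None, :]])
            y_aug = np.append(y_aug, gp.predict(x_next[None, :]))
        return np.array(batch)

    # Toy usage on a 1-D synthetic function.
    rng = np.random.default_rng(0)
    f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x
    X_obs = rng.uniform(-1, 2, size=(5, 1))
    y_obs = f(X_obs).ravel()
    candidates = np.linspace(-1, 2, 200)[:, None]
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
    print(dynamic_batch(gp, X_obs, y_obs, candidates))

The returned batch can have any size from one up to max_batch, mirroring the variable p_t in the abstract; a stricter corr_threshold yields smaller, more conservative batches.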
Javad Azimi
Xiaoli Fern
Ali Jalali