Optimizing Monotone Functions Can Be Difficult
Computer Science – Neural and Evolutionary Computing
Scientific paper
2010-10-07
Preliminary version appeared at PPSN XI. Compared to version 1, a small bug in the constants was fixed ($\gamma$ is slightly l
Extending previous analyses of function classes such as linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotone. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant $c$ in the mutation probability $p(n) = c/n$ can make a decisive difference. We show that if $c < 1$, then the (1+1) evolutionary algorithm finds the optimum of every such function in $\Theta(n \log n)$ iterations. For $c = 1$, we can still prove an upper bound of $O(n^{3/2})$. However, for $c > 33$, we present a strictly monotone function on which, with overwhelming probability, the (1+1) evolutionary algorithm does not find the optimum within $2^{\Omega(n)}$ iterations. This is the first time a constant-factor change of the mutation probability has been observed to change the runtime by more than a constant factor.
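To make the setting concrete, here is a minimal Python sketch of the (1+1) evolutionary algorithm with standard bit mutation at rate $p(n) = c/n$, the algorithm analyzed in the abstract. The function names (`one_plus_one_ea`, `onemax`) and all parameter values are illustrative choices of ours, not taken from the paper; OneMax merely serves as a simple strictly monotone example.

```python
import random


def one_plus_one_ea(f, n, c=1.0, max_iters=100_000):
    """Maximize a pseudo-Boolean function f: {0,1}^n -> R with the (1+1) EA.

    Standard bit mutation: each bit flips independently with
    probability p = c/n. Elitist selection: the offspring replaces
    the parent iff it is at least as fit.
    """
    p = c / n
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        # Create the offspring by flipping each bit with probability p.
        y = [bit ^ (random.random() < p) for bit in x]
        fy = f(y)
        if fy >= fx:  # accept if no worse than the parent
            x, fx = y, fy
    return x, fx


def onemax(x):
    # OneMax counts the one-bits; it is strictly monotone.
    return sum(x)


if __name__ == "__main__":
    n = 100
    # c < 1 puts us in the regime where the paper proves
    # Theta(n log n) iterations for every strictly monotone function.
    best, value = one_plus_one_ea(onemax, n, c=0.5)
    print(f"best fitness found: {value} / {n}")
```

The paper's hardness result says that no such simple budget suffices for large mutation rates: for $c > 33$ there is a strictly monotone function on which this loop would, with overwhelming probability, still not have found the optimum after exponentially many iterations.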
Benjamin Doerr
Thomas Jansen
Dirk Sudholt
Carola Winzen
Christine Zarges