Recent Results on No-Free-Lunch Theorems for Optimization

Computer Science – Neural and Evolutionary Computing

Scientific paper


Details

10 pages, LaTeX, see http://www.neuroinformatik.rub.de/PROJECTS/SONN/


The sharpened No-Free-Lunch theorem (NFL theorem) states that the performance of all optimization algorithms averaged over any finite set F of functions is equal if and only if F is closed under permutation (c.u.p.) and each target function in F is equally likely. In this paper, we first summarize some consequences of this theorem, which have been proven recently: the average number of evaluations needed to find a desirable (e.g., optimal) solution can be calculated; the number of subsets that are c.u.p. is negligible compared to the overall number of possible subsets; and problem classes relevant in practice are not likely to be c.u.p. Second, as the main result, the NFL theorem is extended. Necessary and sufficient conditions for NFL results to hold are given for arbitrary, non-uniform distributions of target functions. This yields the most general NFL theorem for optimization presented so far.
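As a toy illustration of the c.u.p. condition (a sketch, not an example from the paper): over the set of *all* functions from a small domain to {0, 1}, which is trivially closed under permutation, every fixed deterministic search order needs the same average number of evaluations to locate a global maximum; restricting to a subset that is not c.u.p. breaks this equality.

```python
from itertools import product, permutations

X = [0, 1, 2]  # tiny search space
# All functions X -> {0, 1}; this full set is closed under permutation.
functions = [dict(zip(X, vals)) for vals in product([0, 1], repeat=len(X))]

def hitting_time(order, f):
    """1-based number of evaluations until a global maximum of f is found."""
    best = max(f.values())
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i

def avg_performance(order, F):
    return sum(hitting_time(order, f) for f in F) / len(F)

# NFL over the c.u.p. set: every evaluation order has the same average.
averages = {order: avg_performance(order, functions)
            for order in permutations(X)}
assert len(set(averages.values())) == 1

# A subset that is NOT c.u.p.: the single function whose only maximum is at x = 0.
F_biased = [f for f in functions if (f[0], f[1], f[2]) == (1, 0, 0)]
# An algorithm evaluating x = 0 first now beats one evaluating it last.
print(avg_performance((0, 1, 2), F_biased))  # 1.0
print(avg_performance((2, 1, 0), F_biased))  # 3.0
```

The assertion confirms the averaged-performance equality on the permutation-closed set, while the biased subset shows how non-c.u.p. problem classes (the practically relevant case noted in the abstract) let one algorithm dominate another.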



