It is important to bear in mind exactly what all of this does (and does not) imply about the relationship between natural selection in the biological world and optimization (e.g., via genetic algorithms). To this end, consider the extremely simplified view in which natural selection is viewed as optimization over a cost or "fitness" function that is static over time.

In this paper we measure an algorithm's performance based on *all* X values it has sampled since it began, and therefore we don't allow an algorithm to resample points it has already visited. Our NFL theorem states that all algorithms are equivalent by this measure. However, one might consider different measures. In particular, we may be interested in the evolution through time of "generations" consisting of temporally contiguous subsets of our population, generations that are updated by our search algorithm. In such a scenario, it *does* make sense to resample points already visited. Moreover, our NFL theorem does not apply to this alternative kind of performance measure. For example, according to this alternative performance measure, an algorithm that resamples old points in X that are fit and adds them to the current generation will always do better than one that resamples old points that are not fit.

Now when we examine the biological world around us, we are implicitly using this second kind of measure; we only see the organisms from the current generation. In addition, natural selection means that only (essential characteristics of) good points in X are kept around from one generation to the next. Accordingly, using this second kind of performance measure, one expects that the average fitness across a generation improves with time. (Or would if the environment - i.e., cost function - didn't change in time, etc.) This is nothing more than the tautology that natural selection improves the fitness of the members of a generation.

However, this empirical evidence that natural selection performs well according to this second measure does not mean anything concerning its performance according to the first measure. In particular, it does not mean that if we wish to do a search, and *are* able to keep around all points sampled so far, that we have any reason to believe that natural selection is an effective search strategy. Nor does it mean that natural selection works well as far as the tail of the measure based on the entire population is concerned. Yet it is precisely that tail that is of interest to the engineering world.

In short, the empirical evidence of the biological world does not indicate in any sense that natural selection is an effective search strategy, even in the biological world. We simply have not had a chance to observe the behavior of alternative strategies. For all we know, the strategy of breeding only the least fit members of the population may have done a better job at finding the extrema of the cost function faced by biological organisms. The experiment just has not been done. The breed-the-worst strategy will in general result in worse recent *generations*, but using that strategy implies nothing about the quality of the *populations* over the long term. If, however, we relax the unrealistic assumption that the fitness function is constant over time, then it is possible that there may be disadvantages to a breed-the-worst policy.

Wolpert and Macready (1995)
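The contrast between the two performance measures above can be made concrete with a small sketch. This is a hypothetical toy example, not from the paper: a "resample the fit" step improves the generation-based measure while leaving the history-based measure (best value over all points ever sampled) untouched, because no new point of X is visited.

```python
def f(x):
    return x  # toy fitness: higher x is fitter

history = [3, 7, 1]     # all points of X sampled so far (first measure)
generation = [3, 7, 1]  # current generation (second measure)

# "Resample the fit": the next generation keeps only the best old points.
next_gen = sorted(generation, key=f, reverse=True)[:2]  # [7, 3]

gen_mean_before = sum(map(f, generation)) / len(generation)  # ~3.67
gen_mean_after = sum(map(f, next_gen)) / len(next_gen)       # 5.0

# The generation-based measure improves...
assert gen_mean_after > gen_mean_before
# ...but the history-based measure is unchanged: no new point of X was
# visited, so the best-ever-sampled value stays at 7.
assert max(map(f, history)) == max(map(f, history + next_gen)) == 7
```

The same mechanics illustrate the breed-the-worst remark: inverting the sort order would depress the generation mean without changing which points of X the history contains.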

"In this part, we mostly try to raise questions concerning the validity of applying the genetic model to the problem of optimization."

"We also speculate on the nature of natural genetics from a computational search perspective, and ponder the somewhat philosophical question of how life could arise despite the NFL theorems. In particular, we contend that the fact that life evolved does not imply that genetic search is an efficient *universal* problem solver."

"So what does all of this say about using genetic evolution as a model of search in optimization settings? Basically it says that the faith in using a GA as a blind search optimization engine is misplaced. It is based on an understanding of genetic evolution that does not necessarily apply within a computational optimization setting.

In other words, there is no reason to believe that a genetic algorithm will be of any more general use than any other approach to optimization. And in particular, if a GA is going to be used, then the user will have to perform an analysis of the problem within the context of the GA and determine operations and representations that are compatible with the problem and will enhance the search. If heuristic methods are sufficient for the researcher's problem then again there is no a priori reason to choose a GA over other standard heuristic programming methods that might lend themselves to the problem at hand."

Culberson (1996)

"We shortly discuss scenarios where some structural knowledge on the function to be optimized is given. Then optimization techniques like Evolutionary Algorithms certainly can have an advantage. But this is no argument in favor of the claim that Evolutionary Algorithms do better than other optimization techniques" (p 2)

"Specific algorithms are very successful but only recently Michalewicz (1998) has presented a problem specific crossover operator (inver-over) and has obtained some impressive results with Evolutionary Algorithms. It seems to be necessary to use problem specific components for Evolutionary Algorithms in order to compete with other problem specific algorithms." (p 3)

"Hence, the claim that Evolutionary Algorithms are superior to other optimization techniques is meaningful only in some type of restricted black box scenario." (p 4)

"Indeed, all explanations of the success of Evolutionary Algorithms are based on the fact that we can deduce something on unknown function values from known ones" (p 6)

"So one may hope that the random modules of Evolutionary Algorithms implicitly benefit from the dependencies implied by the bounded complexity of the considered functions." (p 6)

Droste, Jansen and Wegener (1998)

"Therefore, we present in Section 5 a natural and simple function which is difficult for all typically used search heuristics."

"A non-artificial and simple function which is hard for simulated annealing and evolutionary algorithms"

"However, there are simple non-artificial functions where some frequently used search heuristics can be proved to need exponential time with overwhelming probability and where it is conjectured that this holds for all “reasonable” search heuristics."

Droste, Jansen and Wegener (2002)

"However, there are a number of so-called “black-box” optimisation algorithms, which use little or no problem-specific information. Key examples of the black-box approach are Evolutionary Algorithms (EAs) and Simulated Annealing (SA). Random search is also a black-box optimisation algorithm, and so represents an important benchmark against which the performance of other search algorithms may be measured."

Montgomery (2002)

"Contrary to Dembski’s assertions, the NFL theorems do not at all prohibit Darwinian evolution (or make evolutionary algorithms in general incapable of outperforming a random sampling – which is Dembski’s position)."

"Dembski’s conclusions from the NFL theorems are indeed unsubstantiated since these theorems, contrary to Dembski’s assertion, in no way mean that evolutionary algorithms cannot outperform a random sampling."

"The NFL theorems do not at all prevent evolutionary algorithms from outperforming a random sampling (or blind search) because these theorems are about performance averaged over all possible fitness functions."

Perakh (2003)
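The averaging claim in these quotes can be checked exhaustively on a tiny domain. The following is an illustrative sketch only: it compares two fixed, non-adaptive visit orders (the simplest deterministic non-resampling "algorithms") over *all* possible fitness functions on a three-point space, and confirms that neither produces a better collection of value-sequences than the other.

```python
from itertools import product

X = (0, 1, 2)  # tiny search space
Y = (0, 1)     # possible fitness values

def trace(order, f):
    """Values seen by an algorithm that samples X in a fixed order."""
    return tuple(f[x] for x in order)

# Two deterministic, non-resampling "algorithms": fixed visit orders.
order_a = (0, 1, 2)
order_b = (2, 0, 1)

# All |Y|^|X| = 8 fitness functions f: X -> Y.
functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]

# Averaged over every function, both algorithms produce exactly the same
# multiset of value-sequences -- neither outperforms the other.
traces_a = sorted(trace(order_a, f) for f in functions)
traces_b = sorted(trace(order_b, f) for f in functions)
assert traces_a == traces_b
```

On any *single* fitness function, of course, one order can beat the other, which is exactly the room the quotes above claim for evolutionary algorithms outperforming random sampling on particular problems.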

Shallit (2002), Book Review

- CULBERSON, Joseph C. On the Futility of Blind Search, 1996. [about 79]
- DROSTE, Stefan, Thomas JANSEN and Ingo WEGENER, Optimization with randomized search heuristics---the (A)NFL theorem, realistic scenarios, and difficult functions, *Theoretical Computer Science*, 287 (2002) 131-144. [about 75]
- DROSTE, Stefan, Thomas JANSEN and Ingo WEGENER, Perhaps Not a Free Lunch But At Least a Free Appetizer [about 77]

- MONTGOMERY, James, The No Free Lunch Theorems for Optimisation: An Overview [1]
- PERAKH, Mark, The No Free Lunch Theorems and their Applications to Evolutionary Algorithms, 2003.
- RADCLIFFE, Nicholas J. and Patrick D. SURRY, Fundamental Limitations on Search Algorithms: Evolutionary Computing in Perspective
- SHALLIT, Jeffrey, No Free Lunch: Why Specified Complexity Cannot be Purchased Without Intelligence, William Dembski, Rowman and Littlefield, 2002. *BioSystems* 66 (2002) 93-99.

- SPEARS, William M. and Kenneth A. De JONG, Dining with GAs: Operator Lunch Theorems
- WOLPERT, David H. and William G. MACREADY, No Free Lunch Theorems for Search (Technical Report SFI-TR-95-02-010). Santa Fe, NM, USA: Santa Fe Institute, 1995. [about 364]