(EC) In EC, we often want to maintain diversity in the POPULATION. Sometimes a FITNESS function may be known to be multimodal, and we want to locate all the peaks. We may consider each peak in the fitness function as analogous to a niche. By applying techniques such as fitness sharing (Goldberg & Richardson, [ICGA87]), the population can be prevented from converging on a single peak, and instead stable SUB-POPULATIONs form at each peak. This is analogous to different species occupying different niches. See also SPECIES, SPECIATION.
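The following is a minimal sketch of fitness sharing in Python, illustrating how derating each individual's FITNESS by a "niche count" discourages the population from crowding onto one peak. The bit-string genotypes, Hamming distance, and the parameter names sigma_share and alpha are illustrative assumptions, not part of Goldberg & Richardson's original formulation.

    # Sketch of fitness sharing: an individual's raw fitness is divided by
    # the number of neighbours sharing its niche, so crowded peaks are
    # penalised and stable sub-populations can persist on several peaks.
    # (sigma_share, alpha and the Hamming-distance metric are assumptions.)

    def hamming(a, b):
        """Distance between two bit-string genotypes."""
        return sum(x != y for x, y in zip(a, b))

    def shared_fitness(population, raw_fitness, sigma_share=3, alpha=1.0):
        """Return derated (shared) fitness values for the population."""
        shared = []
        for i, ind_i in enumerate(population):
            niche_count = 0.0
            for ind_j in population:
                d = hamming(ind_i, ind_j)
                if d < sigma_share:                      # within the same niche
                    niche_count += 1.0 - (d / sigma_share) ** alpha
            shared.append(raw_fitness[i] / niche_count)  # niche_count >= 1 (self)
        return shared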
For any pair of search algorithms, there are "as many" problems for which the first algorithm outperforms the second as for which the reverse is true. One consequence of this is that if we don't put any domain knowledge into our algorithm, it is as likely to perform worse than random search as it is to perform better. This is true for all algorithms, including GENETIC ALGORITHMs.
More detailed description:
The NFL work of Wolpert and Macready is a framework that addresses the core aspects of search, focusing on the connection between FITNESS functions and effective search algorithms. The central importance of this connection is demonstrated by the No Free Lunch theorem which states that averaged over all problems, all search algorithms perform equally. This result implies that if we are comparing a genetic algorithm to some other algorithm (e.g., simulated annealing, or even random search) and the genetic algorithm performs better on some class of problems, then the other algorithm necessarily performs better on problems outside the class. Thus it is essential to incorporate knowledge of the problem into the search algorithm.
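The equal-average-performance claim can be checked directly on a toy problem. The sketch below enumerates every fitness function from a tiny domain to a tiny codomain and shows that two fixed, non-revisiting visiting orders need the same total number of evaluations to first sample a global maximum. The domain and codomain sizes and the two example orders are assumptions chosen only to keep the enumeration small; they are not part of Wolpert and Macready's presentation.

    # Hedged illustration of the NFL result: averaged over ALL functions
    # from a finite domain to a finite codomain, any two fixed
    # non-revisiting search orders perform identically.
    from itertools import product

    DOMAIN = range(4)            # search points 0..3 (assumed toy sizes)
    VALUES = range(3)            # possible fitness values 0..2

    def evals_to_max(order, f):
        """Evaluations a fixed visiting order needs to first hit a global maximum of f."""
        best = max(f)
        for steps, x in enumerate(order, start=1):
            if f[x] == best:
                return steps

    order_a = [0, 1, 2, 3]       # "left to right" search
    order_b = [3, 1, 0, 2]       # some other fixed order

    total_a = total_b = 0
    for f in product(VALUES, repeat=len(DOMAIN)):   # every possible fitness function
        total_a += evals_to_max(order_a, f)
        total_b += evals_to_max(order_b, f)

    print(total_a, total_b)      # identical totals: neither order is better on average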
The NFL framework also does the following: it provides a geometric interpretation of what it means for an algorithm to be well matched to a problem; it provides information theoretic insight into the search procedure; it investigates time-varying fitness functions; it proves that independent of the fitness function, one cannot (without prior domain knowledge) successfully choose between two algorithms based on their previous behavior; it provides a number of formal measures of how well an algorithm performs; and it addresses the difficulty of OPTIMIZATION problems from a viewpoint outside of traditional computational complexity.