Evolutionary algorithms
Evolutionary algorithms (see Fogel, 2006; Bäck, 1996; Michalewicz and Fogel, 2005; and many others) are optimization procedures that generally rely on a population of contending ideas to solve a particular problem. The ideas compete for survival, which provides the opportunity to generate new ideas, which in turn again compete for survival. Thus, the idea of iterated variation and selection that is common to evolutionary processes is modeled in an algorithm and used to iteratively improve the quality of solutions.
Darwinian evolution is the great unifying principle of biology, the principle that links all organisms together. Each individual is a historical event, with a past that provided the basis for its coming to life, and a future that it may influence. Over many generations, random variation and natural selection can shape the behaviors of individuals and species to meet the demands of an environment. Evolution is an optimization process that can often lead, as Darwin wrote, to "organs of extreme perfection" (Darwin, 1859). While not all organs are perfect and not all adaptations remain functional, harnessing the evolutionary process within a computer (i.e., creating an evolutionary algorithm) can provide a means for addressing complex problems typified by chaos, chance, and nonlinear interactions.
Basic approach
Using an evolutionary algorithm to address a problem requires choosing at least five elements: (1) a representation for solutions, (2) variation operators to apply, (3) fitness criteria (or a simple performance index), (4) a selection method, and (5) an initialization. Figure 1 provides a flowchart of the process. Representation refers to the data structure used within the computer to store solutions. Variation operators are applied to existing solutions to create new solutions; usually, variation is based on random perturbation with respect to some probability distribution. The fitness criteria measure the quality of any given solution, either in terms of maximizing fitness, minimizing error, or potentially some multiplicity of conflicting criteria. The selection method uses the score obtained for each solution to determine which to save and which to eliminate from the population at each generation; those solutions that survive are described as parents. The initialization of an evolutionary algorithm can be completely random, or it can incorporate human or other expertise about solutions that may work better than others. Many refinements, such as enforcing diversity (Della Cioppa et al., 2004), self-adaptation of control parameters (Rudolph, 2001; Fogel et al., 2001), and having selection depend on user feedback (Takagi, 2001; Caldwell and Johnston, 1991), extend the applicability of evolutionary algorithms to a large domain of optimization problems.
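To make the five elements concrete, the Python sketch below puts them together for a hypothetical one-dimensional problem (maximize f(x) = -(x - 3)^2). The representation, operators, and parameter values here are illustrative assumptions, not a prescribed design.

```python
import random

def fitness(x):
    # (3) Fitness criterion: quality to be maximized (toy objective).
    return -(x - 3.0) ** 2

def mutate(x, sigma=0.1):
    # (2) Variation operator: random Gaussian perturbation of a parent.
    return x + random.gauss(0.0, sigma)

def evolve(pop_size=20, generations=100):
    # (5) Initialization: random solutions; (1) representation: a single float.
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(x) for x in population]
        # (4) Selection: the best pop_size of parents plus offspring survive.
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]
    return population[0]

print(evolve())  # converges toward x = 3.0
```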
Origins
The origins of evolutionary algorithms date back to the 1950s and 1960s. Some of the first experiments performed on John von Neumann’s computer in Princeton involved what is now called artificial life. In 1953, Nils Barricelli (Barricelli, 1954) wrote a program for von Neumann’s machine that simulated an environment divided into a grid of cells. Numbers resided in each cell and migrated to neighboring cells based on a set of rules. When two numbers collided in the same cell, they competed for survival. Barricelli was interested in studying the emergent properties that would arise from such a simulation. He found that even with very simple rules for propagating throughout the environment, certain numeric patterns would evolve, and some patterns could persist only when other patterns were also present. Barricelli’s first experiments thus already demonstrated something akin to symbiosis.
Random variation in evolutionary algorithms can come in many forms. Even as early as the 1950s, researchers such as Alex Fraser (Fraser, 1957) in Canberra, Australia, were experimenting with different means for simulating sexual recombination between multiple solutions in order to study genetic systems. The common form of recombination was a crossover operator that took parts of one solution and matched them up with parts of another. But other researchers tried different mechanisms. For example, in 1962, Hans Bremermann (Bremermann, 1962) of the University of California at Berkeley suggested a blending recombination in which parameters from multiple solutions, even more than two, could be averaged and used in evolutionary optimization. When coupled with mutation operators that act on only a single solution, rather than on pairs or higher-order couplings, such operators can generate a robust and often surprisingly efficient search of many complicated solution spaces (see the sketch below). Other forms of recombination were offered in the 1960s, including Lawrence Fogel’s suggestion of mating finite state machines that served as predictors of future environments, as a route to generating artificial intelligence (Fogel et al., 1966).
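The mechanics of the two styles of recombination are easy to see in code. The Python sketch below contrasts one-point crossover with a Bremermann-style blend; the parameter vectors are made-up examples.

```python
import random

def one_point_crossover(a, b):
    # Splice a random prefix of one parent onto the suffix of the other.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def blend_recombination(parents):
    # Average each parameter across two or more parents.
    n = len(parents)
    return [sum(p[i] for p in parents) / n for i in range(len(parents[0]))]

a, b, c = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]
print(one_point_crossover(a, b))       # e.g., [1.0, 5.0, 6.0]
print(blend_recombination([a, b, c]))  # [4.0, 5.0, 6.0]
```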
Additional early contributions to evolutionary algorithms included the work of George Friedman (Friedman, 1956) on evolutionary robotics, George Box (Box, 1957) on simulating evolution for industrial plant productivity, Richard Friedberg (Friedberg, 1958) on evolving computer programs, Ingo Rechenberg and Hans-Paul Schwefel (Rechenberg, 1965; Schwefel, 1965) on evolving physical devices, Howard Kaufman (Kaufman, 1967) on evolving mathematical systems, Michael Conrad on simulating ecosystems (Conrad, 1969), John Holland (Holland, 1967) on modeling adaptive systems, George Burgin on evolving solutions to games (Burgin, 1969) and many others.
Basic mathematical properties
Some simple mathematical results based on Markov chains indicate that versions of evolutionary algorithms can be made to converge to the best possible solutions in the limit (Fogel, 1992; Rudolph, 1994). Other analyses provide convergence rates on mathematically tractable functions or with local approximations (Bäck and Hoffmeister, 1994; Beyer, 2001). These theoretical findings are, however, not very practical. It is much more advantageous to view evolutionary methods as a means for generating useful solutions rather than perfect solutions. In addition, the rates of convergence on real problems may not be particularly meaningful because the problems are often dynamic and change during the time they are being solved. In comparison to traditional problem-solving techniques, evolutionary algorithms are often much faster and more adept at adapting to changes in the environment, because whatever knowledge has been learned about how to solve a problem is contained in the collection of individual solutions that has survived up to that point.
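The flavor of these limit results can be sketched in a single inequality. As a simplified illustration in the spirit of (Fogel, 1992; Rudolph, 1994), and not a verbatim theorem: suppose the best solution found so far is never discarded (elitism), and suppose variation can reach a global optimum from any population with probability at least \(\delta > 0\) per generation. Writing \(f^{*}_{t}\) for the best fitness in the population at generation \(t\) and \(f_{\max}\) for the global optimum,

\[
\Pr\bigl[f^{*}_{t} = f_{\max}\bigr] \;\ge\; 1 - (1 - \delta)^{t} \;\longrightarrow\; 1 \quad \text{as } t \to \infty .
\]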
The utility and applicability of evolutionary algorithms require a shift in thinking that may take some time to displace conventional engineering practice. The typical approach to problem solving is to reduce complex situations into simpler elements – building blocks – and then work to optimize each of these components. The hope is that when brought back together, these optimized components will yield the best design. But it is well known that outstanding individuals do not necessarily make a great team. Evolution works in an entirely different manner: only the entire cohesive functional unit is measured by selection. As François Jacob offered, "Evolution does not work like an engineer. It works like a tinkerer" (Jacob, 1977). Owing to the complex effects of genetic interactions, where a single genetic change can affect myriad behavioral traits (a relationship known as pleiotropy), evolution tinkers with the underlying genetics and simultaneously generates a diverse array of new behaviors. Selection then culls out the least appropriate designs.
Even though there have been some attempts to analyze evolutionary algorithms as a parallel to human engineering, focusing on recombination as a mechanism for bringing together "good ideas," there is no reason to expect that the evolutionary model of learning is the model that we use as engineers. Moreover, there is little reason to expect that we will be able to improve evolutionary algorithms by capturing idealized models of genetic operators that occur in nature. The effects of those operators may be wholly different in the natural setting. In evolutionary algorithms, variation operators must be tailored to the task at hand in order to gain the most benefit and achieve the greatest efficiency.
This notion was set down in mathematics in what is called the "no free lunch theorem" (Wolpert and Macready, 1997). Under some overarching assumptions, all algorithms perform exactly the same on average when applied across all possible functions. More plainly, blind random search is, on average, as good at finding the maximum of all functions as is hill climbing. A consequence of this result is that if an evolutionary algorithm is good for a certain class of problems, it must be worse than random search on some other class. There is no single best construction for an evolutionary search (or any other search algorithm). Much of the early theory from one branch of evolutionary algorithms, namely genetic algorithms, that supported using binary encodings, one-point crossover, and proportional selection has now been set aside as either incorrect or of no practical value (Macready and Wolpert, 1998; Fogel and Ghozeil, 1997). What remains is a significant challenge for the future: analyzing classes of optimization problems and determining useful means for identifying which types of evolutionary algorithms are appropriate in each case. This is a wide-open area of investigation and it promises to be quite difficult.
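Schematically, the theorem can be written as follows (after Wolpert and Macready, 1997). Let \(d^{y}_{m}\) denote the sequence of \(m\) cost values sampled by a search algorithm. Then for any two algorithms \(a_{1}\) and \(a_{2}\),

\[
\sum_{f} \Pr\bigl(d^{y}_{m} \mid f, m, a_{1}\bigr) \;=\; \sum_{f} \Pr\bigl(d^{y}_{m} \mid f, m, a_{2}\bigr),
\]

where the sum runs over all functions \(f : \mathcal{X} \to \mathcal{Y}\) on a finite search space. Any performance measure computed from \(d^{y}_{m}\), averaged over all \(f\), is therefore identical for all algorithms.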
Application areas
Evolutionary algorithms have been applied to virtually every imaginable area of optimization. Even an incomplete reference listing here would be ineffective at establishing the basis of prior art; however, the reader who is new to this field should be aware that successful applications of evolutionary algorithms have been implemented in industrial scheduling, production mix optimization, financial forecasting, military planning, anomaly screening, pharmaceutical design, diagnostic device optimization, image and other signal analysis, data mining, video game character development, credit scoring, modeling social conflicts, and a long list of other cases.
Coevolution
The typical requirement for an external performance index or fitness criterion can sometimes be difficult to fulfill. For example, suppose the challenge is to develop a strategy in a new game for which there are no human experts to provide feedback on whether or not a particular move is good (i.e., a reinforcement learning setting). In this case, a coevolutionary approach can be adopted in which a population of strategies for the game compete against one another and the winners survive to become new parents. This method has been used for many years, as far back as the 1960s (Reed et al., 1967), but more recently it has gained favor for creating solutions in competitive environments such as games (Fogel, 2002; Fogel et al., 2004; Kim and Cho, 2005; Stanley et al., 2005; Runarsson and Lucas, 2005). Another well-known work in this area was provided by Hillis (1992), who evolved sorting networks in competing populations. This approach was also followed in (Sebald and Schlenzig, 1994) to evolve closed-loop controllers for simulated medical treatment of patients.
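A minimal coevolutionary loop can be sketched in Python as follows; fitness is defined only by wins in round-robin play within the population, with no external criterion. The "game" here (the larger value wins) is a stand-in for a real game, and all parameters are illustrative assumptions.

```python
import random

def play(p, q):
    # Hypothetical stand-in game: the larger strategy value wins.
    return 1 if p > q else 0

def coevolve(pop_size=10, generations=50):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [s + random.gauss(0.0, 0.05) for s in population]
        contenders = population + offspring
        # Round-robin competition: fitness = total wins against all others.
        wins = [sum(play(contenders[i], contenders[j])
                    for j in range(len(contenders)) if j != i)
                for i in range(len(contenders))]
        ranked = sorted(range(len(contenders)),
                        key=lambda i: wins[i], reverse=True)
        population = [contenders[i] for i in ranked[:pop_size]]
    return population

print(coevolve()[:3])
```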
Related areas
Evolutionary algorithms have many variations, and readers may see terms such as genetic algorithms, evolution strategies, evolutionary programming, and others. With few exceptions, these terms are now mostly historical monikers rather than markers of distinct technical approaches. The initial development of each of these areas took place mostly independently, with nuanced variations occurring in each. Today, however, with better theory to explain how evolutionary search algorithms function, it is clear that canonical forms of these approaches do not offer generally superior approaches to optimization, despite some of the early claims that were made, and may not offer much benefit on certain problems that were once thought to be well suited for their use (for details, see Holland, 1975; Schwefel, 1981; Fogel, 1998; Eshelman and Schaffer, 1993, among others).
Other methods that have similarities to evolutionary algorithms have been offered within the past two decades. These include differential evolution (Price et al., 2005), particle swarm optimization (Eberhart et al., 2001), ant colony optimization (Dorigo and Stuetzle, 2004), cultural algorithms (Reynolds, 1994), and multi-objective optimization (Cvetkovic and Parmee, 2002).
Future vision
Computing power has now caught up with the concept of evolving solutions to complex problems. Evolution is an inherently parallel process in which potential solutions can often be evaluated simultaneously rather than sequentially. In these cases, a parallel distributed computing architecture offers a tremendous speedup. An evolutionary algorithm can be implemented with multiple populations on independent processors, with occasional migration applied to introduce solutions from different populations (see the sketch below). The parallel approach to supercomputing is becoming increasingly attractive and appears to be perfectly suited to evolutionary problem solving.
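The multiple-population scheme is often called an island model. The Python sketch below runs the islands sequentially for simplicity, though each could run on its own processor; the ring migration topology, toy objective, and parameters are illustrative assumptions.

```python
import random

def fitness(x):
    # Toy objective to maximize; stands in for a real problem.
    return -abs(x - 42.0)

def step(island):
    # One generation on one island: mutate everyone, keep the best half.
    offspring = [x + random.gauss(0.0, 1.0) for x in island]
    return sorted(island + offspring, key=fitness, reverse=True)[:len(island)]

def island_model(n_islands=4, island_size=10, generations=100, migrate_every=10):
    islands = [[random.uniform(0.0, 100.0) for _ in range(island_size)]
               for _ in range(n_islands)]
    for g in range(1, generations + 1):
        islands = [step(isl) for isl in islands]
        if g % migrate_every == 0:
            # Ring migration: each island's best replaces the next island's worst.
            bests = [isl[0] for isl in islands]  # islands are sorted best-first
            for i in range(n_islands):
                islands[i][-1] = bests[(i - 1) % n_islands]
    return max((x for isl in islands for x in isl), key=fitness)

print(island_model())  # converges toward 42.0
```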
References
- Bäck, T. (1996) Evolutionary Algorithms in Theory and Practice, Oxford University Press, NY.
- Bäck, T. and Hoffmeister, F. (1994) “Basic aspects of evolution strategies,” Statistics and Computing, Vol. 4:2, pp. 51-63.
- Barricelli, N.A. (1954) “Esempi numerici di processi di evoluzione” [Numerical examples of evolutionary processes], Methodos, pp. 45-68.
- Beyer, H.-G. (2001) “On the performance of the (1,λ)-evolution strategies for the ridge function class,” IEEE Transactions on Evolutionary Computation, Vol. 5:3, pp. 218-235.
- Box, G.E.P. (1957) “Evolutionary operation: A method for increasing industrial productivity,” Applied Statistics, Vol. 6:2, pp. 81-101.
- Bremermann, H.J. (1962) “Optimization through evolution and recombination,” Self-Organizing Systems-1962, M. Yovits, G.T. Jacobi, and G.D. Goldstein (Eds.), Spartan Books, Washington D.C., pp. 93-106.
- Burgin, G.H. (1969) “On playing two-person zero-sum games against non-minimax players,” IEEE Transactions on Systems Science and Cybernetics, Vol. 5:4, pp. 369-370.
- Caldwell, C. and Johnston, V.S. (1991) “Tracking a criminal suspect through ‘face-space’ with a genetic algorithm,” in Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann, pp. 416-421.
- Conrad, M. (1969) “Computer experiments on the evolution of coadaptation in a primitive ecosystem,” Ph.D. dissertation, Stanford University.
- Cvetkovic, D. and Parmee, I.C. (2002) “Preferences and their application in evolutionary multiobjective optimisation,” IEEE Transactions on Evolutionary Computation, Vol. 6:1, pp. 42-57.
- Darwin, C. (1859) On the Origin of Species, John Murray, London.
- Della Cioppa, A., De Stefano, C., Marcelli, A. (2004) “On the role of population size and niche radius in fitness sharing,” IEEE Transactions on Evolutionary Computation, Vol. 8:6, pp. 580-592.
- Dorigo, M. and Stuetzle T. (2004) Ant Colony Optimization, MIT Press, Cambridge, MA.
- Eberhart, R.C., Shi, Y., and Kennedy, J. (2001) Swarm Intelligence, Morgan Kaufmann, San Francisco, CA.
- Eshelman, L.J. and Schaffer, J.D. (1993) “Crossover’s niche,” Proc. 5th Intern. Conf. on Genetic Algorithms, S. Forrest (Ed.), Morgan Kaufmann, San Mateo, CA, pp 9-14.
- Fogel, D.B. (1992) “Evolving artificial intelligence,” Ph.D. dissertation, UC San Diego.
- Fogel, D.B. (ed.) (1998) Evolutionary Computation: The Fossil Record, IEEE Press, Piscataway, NJ.
- Fogel, D.B. (2002) Blondie24: Playing at the Edge of AI, Morgan Kaufmann, San Francisco, CA.
- Fogel, D.B. (2006) Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, 3rd Ed., IEEE Press, NY.
- Fogel, D.B., Fogel, G.B. and Ohkura, K. (2001) “Multiple-vector self-adaptation in evolutionary algorithms,” BioSystems, Vol. 61:2-3, pp. 155-162.
- Fogel, D.B. and Ghozeil, A. (1997) “A note on representations and variation operators,” IEEE Transactions on Evolutionary Computation, Vol. 1:2, pp. 159-161.
- Fogel, D.B., Hays, T.J., Hahn, S.L., and Quon, J. (2004) “A self-learning evolutionary chess program,” Proceedings of the IEEE, Vol. 92:12, pp. 1947-1954.
- Fogel, L.J., Owens, A.J., Walsh, M.J. (1966) Artificial Intelligence through Simulated Evolution, John Wiley, NY.
- Fraser, A.S. (1957) “Simulation of genetic systems by automatic digital computers. I. Introduction,” Australian Journal of Biological Sciences, Vol. 10, pp. 484-491.
- Friedberg, R.M. (1958) “A learning machine: Part 1,” IBM Journal of Research and Development, Vol. 2:1, pp. 2-13.
- Friedman, G.J. (1956) “Selective feedback computers for engineering synthesis and nervous system analogy,” master’s thesis, UCLA.
- Hillis, W.D. (1992) “Co-evolving parasites improve simulated evolution as an optimization procedure,” in Artificial Life II, C.G. Langton, C. Taylor, J.D. Farmer, and S. Rasmussen (eds.), Addison-Wesley, Reading, MA, pp. 313-324.
- Holland, J. (1967) “Nonlinear environments permitting efficient adaptation,” Computer and Information Sciences-II, J.T. Tou (Ed.), Academic Press, NY, pp. 147-164.
- Holland, J.H. (1975) Adaptation in Natural and Artificial Systems, Univ. Michigan Press, Ann Arbor, MI.
- Jacob, F. (1977) “Evolution and tinkering,” Science, Vol. 196, pp. 1161-1166.
- Kaufman, H. (1967) “An experimental investigation of process identification by competitive evolution,” IEEE Trans. Systems Science and Cybernetics, Vol. SSC-3:1, pp. 11-16.
- Kim, K.-J. and Cho, S.-B. (2005) “Systematically incorporating domain-specific knowledge into evolutionary speciated checkers,” IEEE Transactions on Evolutionary Computation, Vol. 9:6, pp. 615-627.
- Macready, W.G. and Wolpert, D.H. (1998) “Bandit Problems and the Exploration/Exploitation Tradeoff,” IEEE Trans. Evolutionary Computation, Vol. 2:1, pp. 2-22.
- Michalewicz, Z. and Fogel, D.B. (2005) How to Solve It: Modern Heuristics, 2nd Ed., Springer, Berlin.
- Price, K.V., Storn, R.M. and Lampinen J.A. (2005) Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin.
- Rechenberg, I. (1965) “Cybernetic solution path of an experimental problem,” Royal Aircraft Establishment, Library Translation 1122.
- Reed, J., Toombs, R. and Barricelli, N.A. (1967) “Simulation of biological evolution and machine learning. I. Selection of self-reproducing numeric patterns by data processing machines, effects on hereditary control, mutation type and crossing,” Journal of Theoretical Biology, Vol. 17, pp. 319-342.
- Reynolds, R.G. (1994) “An introduction to cultural algorithms,” Proc. 3rd Annual Conf. on Evolutionary Programming, A.V. Sebald and L.J. Fogel (Eds.), World Scientific, River Edge, NJ.
- Rudolph, G. (1994) “Convergence analysis of canonical genetic algorithms,” IEEE Transactions on Neural Networks, Vol. 5:1, pp. 96-101.
- Rudolph, G. (2001) “Self-adaptive mutations may lead to premature convergence,” IEEE Transactions on Evolutionary Computation, Vol. 5:4, pp. 410-414.
- Runarsson, T.P. and Lucas, S.M. (2005) “Coevolution versus self-play temporal difference learning for acquiring position evaluation in small board go,” IEEE Transactions on Evolutionary Computation, Vol. 9:6, pp. 628-640.
- Schwefel, H.-P. (1965) “Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik” [Cybernetic evolution as a strategy for experimental research in fluid mechanics], Dipl.-Ing. thesis, Technical University of Berlin.
- Schwefel, H.-P. (1981) Numerical Optimization of Computer Models, John Wiley, Chichester, U.K.
- Sebald, A.V. and Schlenzig, J. (1994) “Minimax design of neural-net controllers for uncertain plants,” IEEE Transactions on Neural Networks, Vol. 5:1, pp. 73-82.
- Stanley, K., Bryant, B.D., and Miikkulainen, R. (2005) “Real-time neuroevolution in the NERO video game,” IEEE Transactions on Evolutionary Computation, Vol. 9:6, pp. 653-668.
- Takagi, H. (2001) “Interactive evolutionary computation: Fusion of the capacities of EC optimization and human evaluation,” Proceedings of the IEEE, Vol. 89:9, pp. 1275-1296.
- Wolpert, D.H. and Macready, W.G. (1997) “No Free Lunch Theorems for Optimization,” IEEE Trans. Evolutionary Computation, Vol. 1:1, pp. 67-82.
See also
Evolution strategies, Genetic algorithms, Genetic programming, Differential evolution, Particle swarm optimization, Ant colony optimization, Cultural algorithms, Multiobjective evolutionary algorithms