Evolution strategy

In computer science, an evolution strategy (ES) is an optimization technique based on ideas of evolution. It belongs to the general class of evolutionary computation or artificial evolution methodologies.

History

The 'evolution strategy' optimization technique was created in the early 1960s and developed further in the 1970s and later by Ingo Rechenberg, Hans-Paul Schwefel and their co-workers.

Methods

Evolution strategies use natural problem-dependent representations, so problem space and search space are identical. In common with evolutionary algorithms, the operators are applied in a loop. An iteration of the loop is called a generation. The sequence of generations is continued until a termination criterion is met.
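
As a minimal sketch, this overall loop can be written as follows in Python; the operator functions recombine, mutate, evaluate and select are hypothetical parameters standing in for the concrete operators described below, not part of any standard library:

```python
def evolution_strategy(population, recombine, mutate, evaluate, select,
                       lambda_, max_generations):
    """Generic ES generation loop (sketch); all operators are supplied by the caller."""
    for generation in range(max_generations):        # termination criterion: generation budget
        # Variation: create lambda_ offspring from the current parent population.
        offspring = [mutate(recombine(population)) for _ in range(lambda_)]
        fitnesses = [evaluate(child) for child in offspring]
        # Selection of the next parent population, e.g. (mu,lambda) or (mu+lambda).
        population = select(population, offspring, fitnesses)
        # Further termination criteria (target fitness reached, stagnation, time
        # budget) would be checked here.
    return population
```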

The special feature of the ES is the self-adaptation of mutation step sizes and the coevolution associated with it. The ES is briefly presented here in its standard form,[1][2][3] noting that many variants exist.[4][5][6][7] The real-valued chromosome contains, in addition to the [math]\displaystyle{ n }[/math] decision variables, [math]\displaystyle{ n' }[/math] mutation step sizes [math]\displaystyle{ {\sigma}_j }[/math], where [math]\displaystyle{ 1\leq j\leq n'\leq n }[/math]. Often a single mutation step size is used for all decision variables, or each decision variable has its own step size. Mate selection to produce [math]\displaystyle{ \lambda }[/math] offspring is random, i.e. independent of fitness. First, new mutation step sizes are generated per mating by intermediate recombination of the parental [math]\displaystyle{ {\sigma }_{j} }[/math] with subsequent mutation as follows:

[math]\displaystyle{ {\sigma}'_j = \sigma_j \cdot e^{(\mathcal{N}(0,1)-\mathcal{N}_j(0,1))} }[/math]

where [math]\displaystyle{ \mathcal{N}(0,1) }[/math] is a normally distributed random variable with mean [math]\displaystyle{ 0 }[/math] and standard deviation [math]\displaystyle{ 1 }[/math]. The draw [math]\displaystyle{ \mathcal{N}(0,1) }[/math] is made once per offspring and applies to all [math]\displaystyle{ {\sigma}'_j }[/math], while [math]\displaystyle{ \mathcal{N}_j(0,1) }[/math] is drawn anew for each [math]\displaystyle{ {\sigma}'_j }[/math]. Next, discrete recombination of the decision variables is followed by a mutation that uses the new mutation step sizes as standard deviations of the normal distribution. The new decision variables [math]\displaystyle{ x_j' }[/math] are calculated as follows:

[math]\displaystyle{ x_j'=x_j+\mathcal{N}_j(0,{\sigma}_j') }[/math]

This results in an evolutionary search on two levels: first, on the level of the problem itself, and second, on the level of the mutation step sizes. In this way the ES can approach its target in ever finer steps. However, there is also the risk that, once the step sizes have shrunk, larger invalid regions of the search space can be crossed only with difficulty.
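
The variation step described above can be sketched as follows in Python with NumPy, assuming one step size per decision variable ([math]\displaystyle{ n'=n }[/math]) and a population stored as a list of (x, sigma) array pairs; the function name make_offspring is chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng()

def make_offspring(population):
    """Create one offspring from a population given as a list of (x, sigma) array pairs."""
    # Mate selection is random, i.e. independent of fitness.
    i, k = rng.choice(len(population), size=2, replace=False)
    (x_a, sigma_a), (x_b, sigma_b) = population[i], population[k]
    n = x_a.size

    # Intermediate recombination of the parental step sizes ...
    sigma_child = 0.5 * (sigma_a + sigma_b)
    # ... followed by log-normal mutation: one global draw N(0,1) shared by all
    # sigma'_j and one fresh draw N_j(0,1) per component, as in the formula above.
    sigma_child = sigma_child * np.exp(rng.standard_normal() - rng.standard_normal(n))

    # Discrete recombination of the decision variables: each x_j is taken from one parent.
    x_child = np.where(rng.random(n) < 0.5, x_a, x_b)

    # Mutation with the new step sizes as standard deviations of the normal distribution.
    x_child = x_child + sigma_child * rng.standard_normal(n)
    return x_child, sigma_child
```

Published formulations often also scale the two normal draws by learning rates (commonly written [math]\displaystyle{ \tau' }[/math] and [math]\displaystyle{ \tau }[/math]); the sketch mirrors the simplified form of the formula given above.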

The ES has two variants of best (truncation) selection for generating the next parent population: in the [math]\displaystyle{ (\mu ,\lambda ) }[/math]-ES, only the [math]\displaystyle{ \mu }[/math] best offspring are used, whereas in the elitist [math]\displaystyle{ (\mu +\lambda ) }[/math]-ES, the [math]\displaystyle{ \mu }[/math] best individuals are selected from parents and offspring combined.
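
A minimal sketch of the two selection variants, assuming minimization and a fitness key function supplied by the caller (individuals can be arbitrary objects):

```python
def select_comma(parents, offspring, mu, fitness):
    """(mu, lambda) selection: the next parents are the mu best offspring only."""
    return sorted(offspring, key=fitness)[:mu]                      # minimization assumed

def select_plus(parents, offspring, mu, fitness):
    """(mu + lambda) selection (elitist): the mu best out of parents and offspring."""
    return sorted(list(parents) + list(offspring), key=fitness)[:mu]
```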

Bäck and Schwefel recommend setting [math]\displaystyle{ \lambda }[/math] to about seven times the population size [math]\displaystyle{ \mu }[/math],[2] where [math]\displaystyle{ \mu }[/math] must not be chosen too small because of the resulting strong selection pressure. Suitable values for [math]\displaystyle{ \mu }[/math] are application-dependent and must be determined experimentally.

Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying covariance matrix, are controlled in practice either by self-adaptation or by covariance matrix adaptation (CMA-ES).[6] When the mutation step is drawn from a multivariate normal distribution using an evolving covariance matrix, it has been hypothesized that this adapted matrix approximates the inverse Hessian of the search landscape. This hypothesis has been proven for a static model relying on a quadratic approximation.[8]
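
As an illustration only (not the CMA-ES update rules themselves), the following sketch shows how a covariance matrix shapes a correlated mutation drawn from a multivariate normal distribution; the function name correlated_mutation is hypothetical:

```python
import numpy as np

rng = np.random.default_rng()

def correlated_mutation(x, C, step_size=1.0):
    """Mutate x with a draw from the multivariate normal N(0, step_size**2 * C).

    C is a symmetric, positive-definite covariance matrix: its diagonal encodes
    per-coordinate step sizes, its off-diagonal entries encode correlations between
    coordinates. Adapting C across generations (as CMA-ES does) requires additional
    update rules that are not shown here.
    """
    return x + step_size * rng.multivariate_normal(np.zeros(len(x)), C)
```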

The selection of the next generation in evolution strategies is deterministic and based only on the fitness rankings, not on the actual fitness values. The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function. The simplest evolution strategy operates on a population of size two: the current point (parent) and the result of its mutation. Only if the mutant's fitness is at least as good as the parent's does it become the parent of the next generation; otherwise the mutant is disregarded. This is a [math]\displaystyle{ \mathit{(1+1)} }[/math]-ES. More generally, [math]\displaystyle{ \lambda }[/math] mutants can be generated and compete with the parent, which is called a [math]\displaystyle{ \mathit{(1+\lambda )} }[/math]-ES. In the [math]\displaystyle{ (1,\lambda ) }[/math]-ES the best mutant becomes the parent of the next generation while the current parent is always disregarded. For some of these variants, proofs of linear convergence (in a stochastic sense) have been derived on unimodal objective functions.[9][10]
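
A minimal (1+1)-ES sketch, assuming minimization and, for brevity, a fixed mutation step size; practical variants usually adapt the step size (for example with the classic 1/5 success rule):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_plus_one_es(f, x0, sigma=0.1, generations=10_000):
    """Minimal (1+1)-ES minimizing f with a fixed mutation step size sigma."""
    parent = np.asarray(x0, dtype=float)
    parent_fitness = f(parent)
    for _ in range(generations):
        mutant = parent + sigma * rng.standard_normal(parent.size)
        mutant_fitness = f(mutant)
        # The mutant replaces the parent only if it is at least as good.
        if mutant_fitness <= parent_fitness:
            parent, parent_fitness = mutant, mutant_fitness
    return parent, parent_fitness

# Example: minimize the sphere function f(x) = sum(x_j^2).
best_x, best_f = one_plus_one_es(lambda x: float(np.sum(x ** 2)), x0=[3.0, -2.0, 1.5])
```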

References

  1. Schwefel, Hans-Paul (1995). Evolution and Optimum Seeking. Sixth-generation computer technology series. New York: Wiley. ISBN 978-0-471-57148-3. 
  2. Bäck, Thomas; Schwefel, Hans-Paul (1993). "An Overview of Evolutionary Algorithms for Parameter Optimization" (in en). Evolutionary Computation 1 (1): 1–23. doi:10.1162/evco.1993.1.1.1. ISSN 1063-6560. https://direct.mit.edu/evco/article/1/1/1-23/1092. 
  3. Schwefel, Hans-Paul; Rudolph, Günter; Bäck, Thomas (1995), "Contemporary Evolution Strategies", Proceedings of the Third European Conference on Advances in Artificial Life (Berlin, Heidelberg: Springer): pp. 893–907, doi:10.1007/3-540-59496-5_351, ISBN 978-3-540-59496-3, https://www.researchgate.net/publication/2606149_Contemporary_Evolution_Strategies 
  4. Bäck, Thomas; Hoffmeister, Frank; Schwefel, Hans-Paul (1991), "A Survey of Evolution Strategies", Proceedings of the Fourth International Conference on Genetic Algorithms (ICGA) (San Mateo, Calif: Morgan Kaufmann), ISBN 978-1-55860-208-3 
  5. Coelho, V. N.; Coelho, I. M.; Souza, M. J. F.; Oliveira, T. A.; Cota, L. P.; Haddad, M. N.; Mladenovic, N.; Silva, R. C. P. et al. (2016). "Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems" (in en). Evolutionary Computation 24 (4): 637–666. doi:10.1162/EVCO_a_00187. ISSN 1063-6560. https://direct.mit.edu/evco/article/24/4/637-666/1033. 
  6. Hansen, Nikolaus; Ostermeier, Andreas (2001). "Completely Derandomized Self-Adaptation in Evolution Strategies" (in en). Evolutionary Computation 9 (2): 159–195. doi:10.1162/106365601750190398. ISSN 1063-6560. https://direct.mit.edu/evco/article/9/2/159-195/892. 
  7. Hansen, Nikolaus; Kern, Stefan (2004), Yao, Xin; Burke, Edmund K.; Lozano, José A. et al., eds., "Evaluating the CMA Evolution Strategy on Multimodal Test Functions", Parallel Problem Solving from Nature - PPSN VIII (Berlin, Heidelberg: Springer) 3242: pp. 282–291, doi:10.1007/978-3-540-30217-9_29, ISBN 978-3-540-23092-2, http://link.springer.com/10.1007/978-3-540-30217-9_29 
  8. Shir, O.M.; A. Yehudayoff (2020). "On the covariance-Hessian relation in evolution strategies". Theoretical Computer Science (Elsevier) 801: 157–174. doi:10.1016/j.tcs.2019.09.002. https://www.sciencedirect.com/science/article/pii/S0304397519305468. 
  9. Auger, A. (2005). "Convergence results for the (1,λ)-SA-ES using the theory of φ-irreducible Markov chains". Theoretical Computer Science (Elsevier) 334 (1–3): 35–69. doi:10.1016/j.tcs.2004.11.017. 
  10. Jägersküpper, J. (2006). "How the (1+1) ES using isotropic mutations minimizes positive definite quadratic forms". Theoretical Computer Science (Elsevier) 361 (1): 38–56. doi:10.1016/j.tcs.2006.04.004. 

Bibliography

  • Ingo Rechenberg (1971): Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Frommann-Holzboog (1973). ISBN:3-7728-1642-8
  • Hans-Paul Schwefel (1974): Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
  • Hans-Paul Schwefel: Evolution and Optimum Seeking. New York: Wiley & Sons 1995. ISBN:0-471-57148-2
  • H.-G. Beyer and H.-P. Schwefel. Evolution Strategies: A Comprehensive Introduction. Natural Computing, 1(1):3–52, 2002.
  • Hans-Georg Beyer: The Theory of Evolution Strategies. Springer, April 27, 2001. ISBN:3-540-67297-4
  • Ingo Rechenberg: Evolutionsstrategie '94. Stuttgart: Frommann-Holzboog 1994. ISBN:3-7728-1642-8
  • J. Klockgether and H. P. Schwefel (1970). Two-Phase Nozzle And Hollow Core Jet Experiments. AEG-Forschungsinstitut. MDH Staustrahlrohr Project Group. Berlin, Federal Republic of Germany. Proceedings of the 11th Symposium on Engineering Aspects of Magneto-Hydrodynamics, Caltech, Pasadena, Calif., March 24–26, 1970.
  • M. Emmerich, O.M. Shir, and H. Wang: Evolution Strategies. In: Handbook of Heuristics, 1-31. Springer International Publishing (2018).
