Stochastic optimization

From HandWiki

Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, through random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates; some methods use random iterates to solve stochastic problems, combining both meanings of the term.[1] Stochastic optimization methods generalize deterministic methods for deterministic problems.

Methods for stochastic functions

Partly random input data arise in areas such as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system,[2][3] and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:

  - stochastic approximation (SA), by Robbins and Monro (1951)[4]
  - finite-difference SA, by Kiefer and Wolfowitz (1952)[5]
  - simultaneous perturbation SA, by Spall (1992)[6]
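As an illustration, stochastic approximation in the Robbins–Monro sense can be sketched in a few lines. The quadratic test function, noise model, and step-size schedule below are illustrative assumptions, not prescriptions from the article:

```python
import random

def noisy_grad(x):
    # Gradient of the test function f(x) = (x - 3)^2, observed with
    # additive Gaussian noise (hypothetical example).
    return 2.0 * (x - 3.0) + random.gauss(0.0, 1.0)

def robbins_monro(x0, n_iters=5000):
    """Stochastic approximation sketch: the gains a_n = 1/(n + 1) satisfy
    sum(a_n) = infinity and sum(a_n^2) < infinity, the classical
    conditions for convergence despite the measurement noise."""
    x = x0
    for n in range(n_iters):
        x -= noisy_grad(x) / (n + 1)
    return x

random.seed(0)
estimate = robbins_monro(0.0)  # should approach the true minimizer x* = 3
```

Because the step sizes shrink, the noise is averaged out over iterations instead of being chased; a fixed step size would instead leave the iterate fluctuating around the minimizer.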

Randomized search methods

On the other hand, even when the data set consists of precise measurements, some methods introduce randomness into the search process to accelerate progress.[7] Such randomness can also make the method less sensitive to modeling errors, and the injected noise can be used to obtain interval estimates of the minimum of a function via extreme value statistics.[8][9] Further, the injected randomness may enable the method to escape a local optimum and eventually approach a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost certain good performance uniformly across many data sets, for many sorts of problems. Stochastic optimization methods of this kind include:

  - simulated annealing, by S. Kirkpatrick et al. (1983)[10]
  - probability collectives, by D. H. Wolpert and S. R. Bieniawski (2011)[11]
  - reactive search optimization (RSO), by R. Battiti and G. Tecchiolli (1994)[12][13]
  - the cross-entropy method, by R. Rubinstein and D. Kroese (2004)[14]
  - random search, by A. Zhigljavsky (1991)[15]
  - informational search[16]
  - stochastic tunneling[17]
  - parallel tempering, also known as replica exchange[18]
  - genetic algorithms, by D. Goldberg (1989)[19]
  - COOMA, by S. Tavridovich (2017)[20]
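A minimal simulated-annealing sketch shows the key mechanism: a worse candidate is still accepted with probability exp(-delta/T), which lets the search climb out of local minima while the temperature T is cooled toward zero. The multimodal test function, cooling schedule, and step size are illustrative choices, not part of the original method description:

```python
import math
import random

def simulated_annealing(f, x0, n_iters=20000, t0=1.0, step=1.0):
    """Simulated annealing sketch (cf. Kirkpatrick et al. [10]):
    downhill moves are always accepted; uphill moves are accepted
    with probability exp(-(fc - fx) / t), where t is slowly cooled."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for n in range(1, n_iters + 1):
        t = t0 / math.log(n + 1)              # slow logarithmic cooling
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:                    # track the best point seen
                best, fbest = x, fx
    return best, fbest

# Hypothetical multimodal objective; its global minimum lies near x = -0.3.
f = lambda x: x * x + 4.0 * math.sin(5.0 * x)
random.seed(1)
best, fbest = simulated_annealing(f, x0=4.0)
```

Keeping a separate record of the best point visited is a common practical safeguard, since the current state may wander uphill at higher temperatures.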

In contrast, some authors have argued that randomization can only improve a deterministic algorithm if the deterministic algorithm was poorly designed in the first place.[21] Fred W. Glover[22] argues that reliance on random elements may prevent the development of more intelligent and better deterministic components. The way in which results of stochastic optimization algorithms are usually presented (e.g., reporting only the average, or even the best, of N runs without any mention of the spread) may also create a positive bias towards randomness.

In fact, some important open problems in computational complexity theory concern whether randomness allows problems to be solved in less time (e.g., the P = BPP question). Sometimes the convergence of a stochastic local search algorithm to the optimal solution follows directly from the fact that, at each iteration, the algorithm has a probability greater than zero of jumping from any solution to any other solution in the search space, so the optimal solution will eventually be found. If no additional structure can be exploited, the average time such a random search takes to find the solution is the same as for an exhaustive search. The no free lunch theorem for optimization establishes conditions under which the computational cost of finding a solution, averaged over all problems in a class, is the same for every solution method. Moreover, it has been proved that essential semantic properties of stochastic local search algorithms, such as whether they will find the optimal solution or a solution within some distance of the optimal value, are undecidable in general. The reason is that these algorithms can simulate any program (i.e., they are Turing-complete), even when their basic ingredients (e.g., fitness function, crossover and mutation operators) are required to be very simple.[23]
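The convergence argument above can be seen in its purest form in random search over a finite space: every point is sampled with positive probability at each iteration, so the optimum is hit eventually with probability one, while the expected number of evaluations is on the order of an exhaustive sweep. The search space and objective below are hypothetical illustrations:

```python
import random

def pure_random_search(f, space, f_star, max_iters=100000):
    """Pure random search: each iteration samples any point of `space`
    with probability 1/len(space), so the optimum is found eventually;
    the expected number of evaluations, len(space), matches the cost
    of an exhaustive sweep, as the text above notes."""
    best_x, best_f = None, float("inf")
    for n in range(1, max_iters + 1):
        x = random.choice(space)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
        if best_f <= f_star:          # stop once the known optimum is hit
            return best_x, n
    return best_x, max_iters

# Hypothetical objective over 1000 integers; the global minimum is f(0) = 0.
space = list(range(-500, 500))
f = lambda x: x * x
random.seed(2)
x_star, n_evals = pure_random_search(f, space, f_star=0)
```

With 1000 points, the hitting time is geometric with success probability 1/1000 per draw, so roughly a thousand evaluations are needed on average, exactly what scanning the space directly would cost.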

References

  1. Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. Wiley. ISBN 978-0-471-33052-3. http://www.jhuapl.edu/ISSO. 
  2. Fu, M. C. (2002). "Optimization for Simulation: Theory vs. Practice". INFORMS Journal on Computing 14 (3): 192–227. doi:10.1287/ijoc.14.3.192.113. 
  3. Campi, M. C.; Garatti, S. (2008). "The Exact Feasibility of Randomized Solutions of Uncertain Convex Programs". SIAM Journal on Optimization 19 (3): 1211–1230. 
  4. Robbins, H.; Monro, S. (1951). "A Stochastic Approximation Method". Annals of Mathematical Statistics 22 (3): 400–407. doi:10.1214/aoms/1177729586. 
  5. J. Kiefer; J. Wolfowitz (1952). "Stochastic Estimation of the Maximum of a Regression Function". Annals of Mathematical Statistics 23 (3): 462–466. doi:10.1214/aoms/1177729392. 
  6. Spall, J. C. (1992). "Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation". IEEE Transactions on Automatic Control 37 (3): 332–341. doi:10.1109/9.119632. http://www.jhuapl.edu/SPSA. 
  7. Hoos, Holger H.; Stützle, Thomas (2004). Stochastic Local Search: Foundations and Applications. Morgan Kaufmann / Elsevier. 
  8. M. de Carvalho (2011). "Confidence intervals for the minimum of a function using extreme value statistics". International Journal of Mathematical Modelling and Numerical Optimisation 2 (3): 288–296. doi:10.1504/IJMMNO.2011.040793. https://www.maths.ed.ac.uk/~mdecarv/papers/decarvalho2011.pdf. 
  9. M. de Carvalho (2012). "A generalization of the Solis-Wets method". Journal of Statistical Planning and Inference 142 (3): 633‒644. doi:10.1016/j.jspi.2011.08.016. https://www.maths.ed.ac.uk/~mdecarv/papers/decarvalho2012c.pdf. 
  10. S. Kirkpatrick; C. D. Gelatt; M. P. Vecchi (1983). "Optimization by Simulated Annealing". Science 220 (4598): 671–680. doi:10.1126/science.220.4598.671. PMID 17813860. Bibcode: 1983Sci...220..671K. http://citeseer.ist.psu.edu/kirkpatrick83optimization.html. 
  11. D.H. Wolpert; S.R. Bieniawski (2011). "Probability Collectives in Optimization". http://www.santafe.edu/research/working-papers/abstract/f752fdb9c2b41e4e04947d7531421d61/. 
  12. Battiti, Roberto; Gianpietro Tecchiolli (1994). "The reactive tabu search". ORSA Journal on Computing 6 (2): 126–140. doi:10.1287/ijoc.6.2.126. http://rtm.science.unitn.it/~battiti/archive/TheReactiveTabuSearch.PDF. 
  13. Battiti, Roberto; Mauro Brunato; Franco Mascia (2008). Reactive Search and Intelligent Optimization. Springer Verlag. ISBN 978-0-387-09623-0. 
  14. Rubinstein, R. Y.; Kroese, D. P. (2004). The Cross-Entropy Method. Springer-Verlag. ISBN 978-0-387-21240-1. 
  15. Zhigljavsky, A. A. (1991). Theory of Global Random Search. Kluwer Academic. ISBN 978-0-7923-1122-5. 
  16. Kagan E.; Ben-Gal I. (2014). "A Group-Testing Algorithm with Online Informational Learning". IIE Transactions 46 (2): 164–184. doi:10.1080/0740817X.2013.803639. 
  17. W. Wenzel; K. Hamacher (1999). "Stochastic tunneling approach for global optimization of complex potential energy landscapes". Phys. Rev. Lett. 82 (15): 3003. doi:10.1103/PhysRevLett.82.3003. Bibcode: 1999PhRvL..82.3003W. 
  18. E. Marinari; G. Parisi (1992). "Simulated tempering: A new Monte Carlo scheme". Europhys. Lett. 19 (6): 451–458. doi:10.1209/0295-5075/19/6/002. Bibcode: 1992EL.....19..451M. 
  19. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley. ISBN 978-0-201-15767-3. http://www-illigal.ge.uiuc.edu. 
  20. Tavridovich, S. A. (2017). "COOMA: an object-oriented stochastic optimization algorithm". International Journal of Advanced Studies 7 (2): 26–47. doi:10.12731/2227-930x-2017-2-26-47. http://journal-s.org/index.php/ijas/article/view/10121/pdf. 
  21. Yudkowsky, Eliezer. "Worse Than Random - LessWrong". http://lesswrong.com/lw/vp/worse_than_random/. 
  22. Glover, F. (2007). "Tabu search—uncharted domains". Annals of Operations Research 149: 89–98. doi:10.1007/s10479-006-0113-9. 
  23. Daniel Loscos, Narciso Martí-Oliet and Ismael Rodríguez (2022). "Generalization and completeness of stochastic local search algorithms". Swarm and Evolutionary Computation 68. doi:10.1016/j.swevo.2021.100982. 
