Beam search

In computer science, beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to a chosen heuristic. But in beam search, only a predetermined number of best partial solutions are kept as candidates.[1] It is thus a greedy algorithm. Implemented with an unlimited set of candidates, beam search becomes a backtracking algorithm.

The term "beam search" was coined by Raj Reddy of Carnegie Mellon University in 1977.[2]

Details

Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level, sorting them in increasing order of heuristic cost.[3] However, it only stores a predetermined number of best states at each level, β, called the beam width. Only those states are expanded next. The greater the beam width, the fewer states are pruned. With an infinite beam width, no states are pruned and beam search is identical to breadth-first search. The beam width bounds the memory required to perform the search. Since a goal state could potentially be pruned, beam search sacrifices completeness (the guarantee that an algorithm will terminate with a solution, if one exists). Beam search is not optimal (that is, there is no guarantee that it will find the best solution).[4]
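
The procedure can be sketched as follows. This is an illustrative outline only, not a reference implementation: successors, cost and is_goal are assumed, caller-supplied functions, and states are treated as opaque values.

    from heapq import nsmallest

    def beam_search(start, successors, cost, is_goal, beam_width):
        # Current level of the search tree; expansion proceeds level by
        # level, as in breadth-first search.
        beam = [start]
        while beam:
            # Generate all successors of the states at the current level.
            candidates = [s for state in beam for s in successors(state)]
            # Return a goal state as soon as one is generated.
            for s in candidates:
                if is_goal(s):
                    return s
            # Prune: keep only the beam_width states with the lowest
            # heuristic cost.
            beam = nsmallest(beam_width, candidates, key=cost)
        # A goal may have been pruned away: beam search is incomplete.
        return None

With beam_width large enough that nothing is ever pruned, the loop degenerates into plain breadth-first search, matching the behaviour described above.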

Uses

Beam search is most often used to maintain tractability in large systems with insufficient memory to store the entire search tree.[5] For example, it has been used in many machine translation systems[6] (the state of the art now primarily uses neural machine translation methods, especially large language models). To select the best translation, each part of the input is processed, and many different ways of translating the words appear. The best translations according to their sentence structures are kept, and the rest are discarded. The translator then evaluates the translations according to a given criterion, choosing the translation that best meets the given goals. The first use of a beam search was in the Harpy Speech Recognition System, CMU 1976.[7]
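
In sequence generation settings such as machine translation, the same idea is applied to partial output sequences scored by a model. The sketch below is illustrative only: next_token_scores stands in for whatever model supplies (token, log-probability) pairs for the next word, and is not part of any particular system.

    def beam_decode(next_token_scores, beam_width, max_len, eos="<eos>"):
        # Each hypothesis is (accumulated log-probability, list of tokens).
        beam = [(0.0, [])]
        for _ in range(max_len):
            candidates = []
            for score, prefix in beam:
                # Finished hypotheses are carried over unchanged.
                if prefix and prefix[-1] == eos:
                    candidates.append((score, prefix))
                    continue
                # next_token_scores is an assumed callable returning
                # (token, log-probability) pairs for the next word.
                for token, logp in next_token_scores(prefix):
                    candidates.append((score + logp, prefix + [token]))
            # Keep only the beam_width highest-scoring partial translations.
            beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
        # Return the tokens of the best hypothesis found.
        return max(beam, key=lambda c: c[0])[1]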

Variants

Beam search has been made complete by combining it with depth-first search, resulting in beam stack search[8] and depth-first beam search,[5] and with limited discrepancy search,[5] resulting in beam search using limited discrepancy backtracking[5] (BULB). The resulting search algorithms are anytime algorithms that find good but likely sub-optimal solutions quickly, like beam search, then backtrack and continue to find improved solutions until convergence to an optimal solution.

In the context of local search, local beam search refers to a specific algorithm that begins by selecting β randomly generated states and then, at each level of the search tree, considers β new states among all the possible successors of the current ones, until it reaches a goal.[9][10]
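
A minimal sketch of local beam search, assuming caller-supplied random_state, successors, value and is_goal functions (higher value is assumed to be better):

    def local_beam_search(random_state, successors, value, is_goal, beam_width):
        # Start from beam_width randomly generated states.
        beam = [random_state() for _ in range(beam_width)]
        while True:
            for state in beam:
                if is_goal(state):
                    return state
            # Pool the successors of every state in the beam ...
            candidates = [s for state in beam for s in successors(state)]
            if not candidates:
                return None
            # ... and keep only the beam_width best of them for the next step.
            beam = sorted(candidates, key=value, reverse=True)[:beam_width]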

Since local beam search often ends up on local maxima, a common solution is to choose the next β states in a random way, with a probability dependent on the heuristic evaluation of the states. This kind of search is called stochastic beam search.[11]
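
Only the selection step changes with respect to the deterministic sketch above. The snippet below is one possible reading, assuming a heuristic value function and a softmax-style weighting; the temperature parameter is purely illustrative.

    import math
    import random

    def stochastic_select(candidates, value, beam_width, temperature=1.0):
        # Sample beam_width successors with probability proportional to
        # exp(value / temperature) instead of deterministically taking the
        # best ones. Note: random.choices samples with replacement.
        weights = [math.exp(value(s) / temperature) for s in candidates]
        return random.choices(candidates, weights=weights, k=beam_width)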

Other variants are flexible beam search and recovery beam search.[10]

References

  1. "FOLDOC - Computing Dictionary". http://foldoc.org/index.cgi?query=beam+search&action=Search. 
  2. Defense Technical Information Center (1977-08-01) (in English). DTIC ADA049288: Speech Understanding Systems. Summary of Results of the Five-Year Research Effort at Carnegie-Mellon University. http://archive.org/details/DTIC_ADA049288. 
  3. "BRITISH MUSEUM SEARCH". http://bradley.bradley.edu/~chris/searches.html. 
  4. Norvig, Peter (1992). Paradigms of Artificial Intelligence Programming: Case Studies in Common LISP. Morgan Kaufmann. ISBN 9781558601918. https://books.google.com/books?id=X4mhySvjqUAC. 
  5. Furcy, D.; Koenig, S. (2005). "Limited discrepancy beam search". Proceedings of the 19th international joint conference on Artificial intelligence. Morgan Kaufmann. pp. 125–131. https://dl.acm.org/doi/abs/10.5555/1642293.1642313. 
  6. Tillmann, C.; Ney, H. (2003). "Word reordering and a dynamic programming beam search algorithm for statistical machine translation". Computational Linguistics 29 (1): 97–133. doi:10.1162/089120103321337458. https://direct.mit.edu/coli/article-abstract/29/1/97/1794. 
  7. Lowerre, Bruce T. (1976). The Harpy Speech Recognition System (PhD). Carnegie Mellon University.
  8. Zhou, Rong; Hansen, Eric (2005). "Beam-Stack Search: Integrating Backtracking with Beam Search". ICAPS. pp. 90–98. http://www.aaai.org/Library/ICAPS/2005/icaps05-010.php. Retrieved 2011-04-09. 
  9. Svetlana Lazebnik. "Local search algorithms". University of North Carolina at Chapel Hill, Department of Computer Science. p. 15. https://www.cs.unc.edu/~lazebnik/fall10/lec06_local_search.pdf. 
  10. Pushpak Bhattacharyya. "Beam Search". Indian Institute of Technology Bombay, Department of Computer Science and Engineering (CSE). pp. 39–40. https://www.cse.iitb.ac.in/~cs344/2011/slides/cs344-beam-search-2feb11.pptx. 
  11. James Parker (2017-09-28). "Local Search". University of Minnesota. p. 17. http://www-users.cselabs.umn.edu/classes/Fall-2017/csci4511/slides/week4/9.28.17.pdf.