Learning augmented algorithm


A learning augmented algorithm is an algorithm that can make use of a prediction to improve its performance.[1] Whereas a regular algorithm receives only the problem instance as input, a learning augmented algorithm accepts an extra parameter. This extra parameter is often a prediction of some property of the solution. The algorithm then uses this prediction to improve its running time or the quality of its output.

Description

A learning augmented algorithm typically takes an input [math]\displaystyle{ (\mathcal{I}, \mathcal{A}) }[/math]. Here [math]\displaystyle{ \mathcal{I} }[/math] is a problem instance and [math]\displaystyle{ \mathcal{A} }[/math] is the advice: a prediction about a certain property of the optimal solution. The type of the problem instance and the prediction depend on the algorithm. Learning augmented algorithms usually satisfy the following two properties:

  • Consistency. A learning augmented algorithm is said to be consistent if the algorithm can be proven to have a good performance when it is provided with an accurate prediction.[1] Usually, this is quantified by giving a bound on the performance that depends on the error in the prediction.
  • Robustness. An algorithm is called robust if its worst-case performance can be bounded even if the given prediction is inaccurate.[1]

Learning augmented algorithms generally do not prescribe how the prediction should be done. For this purpose machine learning can be used.[citation needed]

Examples

Binary search

Binary search is an algorithm for finding elements of a sorted list [math]\displaystyle{ x_1,\ldots,x_n }[/math]. It needs [math]\displaystyle{ O(\log(n)) }[/math] steps to find an element with some known value [math]\displaystyle{ y }[/math] in a list of length [math]\displaystyle{ n }[/math]. With a prediction [math]\displaystyle{ i }[/math] for the position of [math]\displaystyle{ y }[/math], the following learning augmented algorithm can be used.[1]

  • First, look at position [math]\displaystyle{ i }[/math] in the list. If [math]\displaystyle{ x_i=y }[/math], the element has been found.
  • If [math]\displaystyle{ x_i\lt y }[/math], look at positions [math]\displaystyle{ i+1,i+2,i+4,\ldots }[/math] until an index [math]\displaystyle{ j }[/math] with [math]\displaystyle{ x_j\geq y }[/math] is found.
    • Now perform a binary search on [math]\displaystyle{ x_i,\ldots, x_j }[/math].
  • If [math]\displaystyle{ x_i\gt y }[/math], do the same as in the previous case, but instead consider [math]\displaystyle{ i-1,i-2,i-4,\ldots }[/math].

The error is defined to be [math]\displaystyle{ \eta=|i-i^*| }[/math], where [math]\displaystyle{ i^* }[/math] is the real index of [math]\displaystyle{ y }[/math]. In the learning augmented algorithm, probing the positions [math]\displaystyle{ i+1,i+2,i+4,\ldots }[/math] takes at most [math]\displaystyle{ \lceil\log_2(\eta)\rceil+1 }[/math] steps, since the probe offsets double until they overshoot the error. Then a binary search is performed on a sublist of size at most [math]\displaystyle{ 2\eta }[/math], which takes [math]\displaystyle{ O(\log_2(\eta)) }[/math] steps. This makes the total running time of the algorithm [math]\displaystyle{ O(\log_2(\eta)) }[/math]. So, when the error is small, the algorithm is faster than a normal binary search. This shows that the algorithm is consistent. Even in the worst case, the error is at most [math]\displaystyle{ n }[/math], in which case the algorithm takes at most [math]\displaystyle{ O(\log(n)) }[/math] steps, so the algorithm is robust.
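The steps above can be sketched in Python as follows. This is a minimal illustration, not an implementation from the cited source; the function name is our own, and details such as clamping an out-of-range prediction and handling an absent element are added assumptions.

```python
def predicted_binary_search(xs, y, i):
    """Search sorted list xs for y, starting from the predicted index i.

    Runs in O(log eta) time, where eta = |i - i*| is the prediction error.
    Returns the index of y, or -1 if y is not present (an added convention).
    """
    n = len(xs)
    if n == 0:
        return -1
    i = max(0, min(i, n - 1))  # clamp the prediction into range (assumption)
    if xs[i] == y:
        return i
    if xs[i] < y:
        # Probe positions i+1, i+2, i+4, ... until an index j with xs[j] >= y.
        step, lo = 1, i + 1
        j = i + step
        while j < n and xs[j] < y:
            lo = j + 1
            step *= 2
            j = i + step
        hi = min(j, n - 1)
    else:
        # Symmetric case: probe i-1, i-2, i-4, ... until xs[j] <= y.
        step, hi = 1, i - 1
        j = i - step
        while j >= 0 and xs[j] > y:
            hi = j - 1
            step *= 2
            j = i - step
        lo = max(j, 0)
    # Ordinary binary search on the bracketed range xs[lo..hi].
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == y:
            return mid
        if xs[mid] < y:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

When the prediction is exact the element is found in one probe, and a prediction that is off by [math]\displaystyle{ \eta }[/math] positions costs only logarithmically many extra probes.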

More examples

Learning augmented algorithms are known for:

  • The ski rental problem[2]
  • Bipartite matching[3]
  • Weighted paging[4]

References

  1. 1.0 1.1 1.2 1.3 Mitzenmacher, Michael; Vassilvitskii, Sergei (31 December 2020). "Algorithms with Predictions". Beyond the Worst-Case Analysis of Algorithms. Cambridge University Press. pp. 646–662. doi:10.1017/9781108637435.037. 
  2. Wang, Shufan; Li, Jian; Wang, Shiqiang (2020). "Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice". NIPS'20: Proceedings of the 34th International Conference on Neural Information Processing Systems. ISBN 1-71382-954-1. OCLC 1263313383. 
  3. Dinitz, Michael; Im, Sungjin; Lavastida, Thomas; Moseley, Benjamin; Vassilvitskii, Sergei (2021). "Faster Matchings via Learned Duals". Advances in Neural Information Processing Systems. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2021/file/5616060fb8ae85d93f334e7267307664-Paper.pdf.
  4. Bansal, Nikhil; Coester, Christian; Kumar, Ravi; Purohit, Manish; Vee, Erik (January 2022). "Learning-Augmented Weighted Paging". Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics. pp. 67–89. doi:10.1137/1.9781611977073.4. 