Algorithm selection

Algorithm selection (sometimes also called per-instance algorithm selection or offline algorithm selection) is a meta-algorithmic technique to choose an algorithm from a portfolio on an instance-by-instance basis. It is motivated by the observation that on many practical problems, different algorithms have different performance characteristics: one algorithm may perform well on some instances but poorly on others, while another algorithm shows the opposite behavior. If we can identify which algorithm to use for which instance, we can exploit the strengths of each algorithm and improve overall performance. This is what algorithm selection aims to do. The only prerequisite for applying algorithm selection techniques is that there exists (or can be constructed) a set of complementary algorithms.

Definition

Given a portfolio [math]\displaystyle{ \mathcal{P} }[/math] of algorithms [math]\displaystyle{ \mathcal{A} \in \mathcal{P} }[/math], a set of instances [math]\displaystyle{ i \in \mathcal{I} }[/math] and a cost metric [math]\displaystyle{ m: \mathcal{P} \times \mathcal{I} \to \mathbb{R} }[/math], the algorithm selection problem consists of finding a mapping [math]\displaystyle{ s: \mathcal{I} \to \mathcal{P} }[/math] from instances [math]\displaystyle{ \mathcal{I} }[/math] to algorithms [math]\displaystyle{ \mathcal{P} }[/math] such that the cost [math]\displaystyle{ \sum_{i \in \mathcal{I}} m(s(i),i) }[/math] across all instances is optimized.[1][2]
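
The following minimal Python sketch (with hypothetical solver names and cost values) illustrates the definition: given a matrix of measured costs [math]\displaystyle{ m(\mathcal{A},i) }[/math], it evaluates the total cost of a selection mapping, here the perfect (oracle) selector.

  # Hypothetical cost matrix: m[a][i] is the cost (e.g., runtime in seconds)
  # of algorithm a on instance i.
  m = {
      "solver_A": {"inst1": 2.0, "inst2": 30.0, "inst3": 1.5},
      "solver_B": {"inst1": 25.0, "inst2": 3.0, "inst3": 20.0},
  }
  instances = ["inst1", "inst2", "inst3"]

  def total_cost(selector, m, instances):
      # Cost of a selection mapping s: sum of m(s(i), i) over all instances.
      return sum(m[selector(i)][i] for i in instances)

  # A perfect (oracle) selector picks the cheapest algorithm for every instance;
  # algorithm selection asks for a mapping that approaches this cost.
  oracle = lambda i: min(m, key=lambda a: m[a][i])
  print(total_cost(oracle, m, instances))  # 6.5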

Examples

Boolean satisfiability problem (and other hard combinatorial problems)

A well-known application of algorithm selection is the Boolean satisfiability problem. Here, the portfolio of algorithms is a set of (complementary) SAT solvers, the instances are Boolean formulas, and the cost metric is, for example, average runtime or the number of unsolved instances. The goal is thus to select a well-performing SAT solver for each individual instance. In the same way, algorithm selection can be applied to many other [math]\displaystyle{ \mathcal{NP} }[/math]-hard problems (such as mixed integer programming, CSP, AI planning, TSP, MAXSAT, QBF and answer set programming). Competition-winning systems in SAT are SATzilla,[3] 3S[4] and CSHC.[5]

Machine learning

In machine learning, algorithm selection is better known as meta-learning. Here, the portfolio of algorithms consists of machine learning algorithms (e.g., Random Forest, SVM, DNN), the instances are data sets, and the cost metric is, for example, the error rate. The goal is thus to predict which machine learning algorithm will have a small error on each data set.

Instance features

The algorithm selection problem is mainly solved with machine learning techniques. By representing problem instances with numerical features [math]\displaystyle{ f }[/math], algorithm selection can be cast as a multi-class classification problem: learning a mapping [math]\displaystyle{ f_{i} \mapsto \mathcal{A} }[/math] from the features of a given instance [math]\displaystyle{ i }[/math] to a well-performing algorithm.
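
As a minimal sketch of this classification view (assuming scikit-learn is available; the feature vectors and best-algorithm labels are hypothetical), a multi-class classifier can be trained to map instance features to the label of a well-performing algorithm:

  # Sketch: algorithm selection as multi-class classification (hypothetical data).
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier

  rng = np.random.default_rng(0)
  X_train = rng.random((300, 5))                                          # instance features f_i
  best_algo = rng.choice(["solver_A", "solver_B", "solver_C"], size=300)  # best algorithm per training instance

  selector = RandomForestClassifier(random_state=0).fit(X_train, best_algo)
  print(selector.predict(rng.random((1, 5))))  # predicted algorithm for a new instance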

Instance features are numerical representations of instances. For example, for Boolean formulas we can count the number of variables, the number of clauses and the average clause length,[6] and for ML data sets the number of samples, the number of features and the class balance, to get an impression of their characteristics.
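
As a concrete sketch (the clause representation is chosen for illustration: a clause is a list of non-zero integers, where a negative integer denotes a negated variable), such static features of a CNF formula can be computed directly from its clause list:

  # Sketch: simple static features of a Boolean formula in CNF.
  cnf = [[1, -2, 3], [-1, 2], [2, 3, -4, 1]]

  def cnf_features(clauses):
      variables = {abs(lit) for clause in clauses for lit in clause}
      return {
          "num_vars": len(variables),
          "num_clauses": len(clauses),
          "avg_clause_len": sum(len(c) for c in clauses) / len(clauses),
          "clauses_to_vars": len(clauses) / len(variables),
      }

  print(cnf_features(cnf))
  # {'num_vars': 4, 'num_clauses': 3, 'avg_clause_len': 3.0, 'clauses_to_vars': 0.75}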

Static vs. probing features

We distinguish between two kinds of features:

  1. Static features are in most cases counts and statistics (e.g., the clauses-to-variables ratio in SAT). These features range from very cheap ones (e.g., the number of variables) to very complex ones (e.g., statistics about variable-clause graphs).
  2. Probing features (sometimes also called landmarking features) are computed by running some analysis of algorithm behavior on an instance (e.g., the accuracy of a cheap decision tree algorithm on an ML data set, or briefly running a stochastic local search solver on a Boolean formula). These features are often more costly to compute than simple static features; a minimal sketch of a probing feature follows this list.
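
The following minimal sketch of a probing (landmarking) feature assumes scikit-learn is available: the cross-validated accuracy of a deliberately shallow decision tree on a data set serves as a feature describing that data set, next to two cheap static counts.

  # Sketch: a landmarking/probing feature for an ML data set (assuming scikit-learn).
  from sklearn.datasets import load_iris
  from sklearn.model_selection import cross_val_score
  from sklearn.tree import DecisionTreeClassifier

  X, y = load_iris(return_X_y=True)

  # The accuracy of a cheap learner (a depth-1 decision "stump") is a probing feature;
  # the number of samples and the number of features are static features.
  stump_accuracy = cross_val_score(DecisionTreeClassifier(max_depth=1), X, y, cv=5).mean()
  print({"n_samples": X.shape[0], "n_features": X.shape[1], "stump_accuracy": stump_accuracy})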

Feature costs

Depending on the performance metric [math]\displaystyle{ m }[/math] used, feature computation can be associated with costs. For example, if running time is the performance metric, the time to compute the instance features has to be included in the performance of an algorithm selection system. SAT solving is a concrete example where such feature costs cannot be neglected, since instance features for CNF formulas can be either very cheap (e.g., the number of variables can be read in constant time for CNFs in the DIMACS format) or very expensive (e.g., graph features, which can cost tens or hundreds of seconds).

In such scenarios, it is important to take the overhead of feature computation into account in practice; otherwise, a misleading impression of the performance of the algorithm selection approach is created. For example, if the decision of which algorithm to choose can be made with perfect accuracy, but the features are the running times of the portfolio algorithms themselves, there is no benefit to the portfolio approach. This would not be obvious if feature costs were omitted.
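
A small sketch with hypothetical numbers shows how feature costs enter the evaluation when the metric is running time: the time spent on feature computation is charged to the selection system on top of the runtime of the algorithm it selects.

  # Sketch: charging feature-computation time to the selection system (hypothetical numbers).
  feature_time = {"inst1": 0.4, "inst2": 12.0}      # seconds spent computing instance features
  selected_runtime = {"inst1": 2.0, "inst2": 3.0}   # runtime of the algorithm selected per instance

  for inst in feature_time:
      effective_cost = feature_time[inst] + selected_runtime[inst]
      print(inst, "effective cost:", effective_cost)
  # On inst2, expensive features eat up most of the gain a good selection could provide.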

Approaches

Regression approach

One of the first successful algorithm selection approaches predicted the performance of each algorithm [math]\displaystyle{ \hat{m}_{\mathcal{A}}: \mathcal{I} \to \mathbb{R} }[/math] and selected the algorithm with the best predicted performance [math]\displaystyle{ \arg\min_{\mathcal{A}\in\mathcal{P}} \hat{m}_{\mathcal{A}}(i) }[/math] for an instance [math]\displaystyle{ i }[/math].[3]
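
A minimal sketch of such a regression approach, assuming scikit-learn and hypothetical training data: one regression model per algorithm predicts its runtime from instance features, and the algorithm with the smallest prediction is selected.

  # Sketch of the regression approach (hypothetical data, assuming scikit-learn).
  import numpy as np
  from sklearn.ensemble import RandomForestRegressor

  rng = np.random.default_rng(0)
  X_train = rng.random((100, 5))                                          # instance features
  runtimes = {a: rng.random(100) * 10 for a in ["solver_A", "solver_B"]}  # measured runtimes

  # One performance model per algorithm in the portfolio.
  models = {a: RandomForestRegressor(random_state=0).fit(X_train, runtimes[a]) for a in runtimes}

  def select(features):
      # Pick the algorithm with the smallest predicted runtime (the argmin over the portfolio).
      return min(models, key=lambda a: models[a].predict(features.reshape(1, -1))[0])

  print(select(rng.random(5)))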

Clustering approach

A common assumption is that the given set of instances [math]\displaystyle{ \mathcal{I} }[/math] can be clustered into homogeneous subsets, and that for each of these subsets there is a single algorithm that performs well on all instances in it. So, training consists of identifying the homogeneous clusters via an unsupervised clustering approach and associating an algorithm with each cluster. A new instance is assigned to a cluster and the associated algorithm is selected.[7]
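
A minimal sketch of this clustering approach, assuming scikit-learn and hypothetical training data: instances are clustered in feature space, each cluster is mapped to the algorithm with the lowest average cost on its members, and a new instance receives the algorithm of its nearest cluster.

  # Sketch of the clustering approach (hypothetical data, assuming scikit-learn).
  import numpy as np
  from sklearn.cluster import KMeans

  rng = np.random.default_rng(0)
  X_train = rng.random((200, 5))                                       # instance features
  cost = {a: rng.random(200) * 10 for a in ["solver_A", "solver_B"]}   # measured cost per instance

  kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_train)

  # Associate each cluster with the algorithm of lowest mean cost on its members.
  cluster_algo = {}
  for c in range(4):
      members = kmeans.labels_ == c
      cluster_algo[c] = min(cost, key=lambda a: cost[a][members].mean())

  def select(features):
      return cluster_algo[kmeans.predict(features.reshape(1, -1))[0]]

  print(select(rng.random(5)))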

A more modern approach is cost-sensitive hierarchical clustering[5] using supervised learning to identify the homogeneous instance subsets.

Pairwise cost-sensitive classification approach

A common approach for multi-class classification is to learn pairwise models between every pair of classes (here algorithms) and choose the class that was predicted most often by the pairwise models. We can weight the instances of the pairwise prediction problem by the performance difference between the two algorithms. This is motivated by the fact that we care most about getting predictions with large differences correct, but the penalty for an incorrect prediction is small if there is almost no performance difference. Therefore, each instance [math]\displaystyle{ i }[/math] for training a classification model [math]\displaystyle{ \mathcal{A}_1 }[/math] vs [math]\displaystyle{ \mathcal{A}_2 }[/math] is associated with a cost [math]\displaystyle{ |m(\mathcal{A}_1,i) - m(\mathcal{A}_2,i)| }[/math].[8]
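
A minimal sketch, assuming scikit-learn and hypothetical data, for a single pair of algorithms [math]\displaystyle{ \mathcal{A}_1 }[/math] vs [math]\displaystyle{ \mathcal{A}_2 }[/math]: the classifier predicts which of the two is better, with each training instance weighted by the absolute performance difference; a full selector would vote over all pairwise models.

  # Sketch of pairwise cost-sensitive classification for one pair of algorithms (hypothetical data).
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier

  rng = np.random.default_rng(0)
  X_train = rng.random((200, 5))                          # instance features
  m1, m2 = rng.random(200) * 10, rng.random(200) * 10     # costs of A1 and A2 per instance

  labels = (m1 < m2).astype(int)                          # 1 if A1 is better, 0 if A2 is better
  weights = np.abs(m1 - m2)                               # cost |m(A1,i) - m(A2,i)| per instance

  clf = RandomForestClassifier(random_state=0).fit(X_train, labels, sample_weight=weights)

  def select(features):
      return "A1" if clf.predict(features.reshape(1, -1))[0] == 1 else "A2"

  print(select(rng.random(5)))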

Requirements

Figure: Clustering of SAT solvers from the SAT12-INDU ASlib scenario according to the Spearman correlation coefficient.
Figure: Shapley values for complementarity analysis on the SAT12-INDU ASlib scenario.[9]

The algorithm selection problem can be effectively applied under the following assumptions:

  • The portfolio [math]\displaystyle{ \mathcal{P} }[/math] of algorithms is complementary with respect to the instance set [math]\displaystyle{ \mathcal{I} }[/math], i.e., there is no single algorithm [math]\displaystyle{ \mathcal{A} \in \mathcal{P} }[/math] that dominates the performance of all other algorithms over [math]\displaystyle{ \mathcal{I} }[/math] (see the figure captions above for examples of complementarity analysis, and the sketch after this list).
  • In some applications, the computation of instance features is associated with a cost. For example, if the cost metric is running time, we also have to consider the time to compute the instance features. In such cases, the cost of computing the features should not exceed the performance gain achieved through algorithm selection.
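
The complementarity of a portfolio can be checked with a small sketch on a hypothetical cost matrix: the single best solver (SBS) is compared with the virtual best solver (VBS, i.e., perfect per-instance selection), and each algorithm's marginal contribution to the VBS is computed. A large SBS-VBS gap and non-zero marginal contributions indicate a complementary portfolio.

  # Sketch: complementarity analysis on a hypothetical cost matrix m[a][i].
  m = {
      "solver_A": {"i1": 2.0, "i2": 30.0, "i3": 1.5},
      "solver_B": {"i1": 25.0, "i2": 3.0, "i3": 20.0},
  }
  insts = ["i1", "i2", "i3"]

  def vbs(portfolio):
      # Cost of the virtual best solver (perfect per-instance selection) over a portfolio.
      return sum(min(m[a][i] for a in portfolio) for i in insts)

  sbs = min(m, key=lambda a: sum(m[a][i] for i in insts))
  print("SBS cost:", sum(m[sbs][i] for i in insts), "VBS cost:", vbs(m))  # 33.5 vs 6.5

  # Marginal contribution of each algorithm: how much the VBS worsens if it is dropped.
  for a in m:
      print(a, "marginal contribution:", vbs([b for b in m if b != a]) - vbs(m))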

Application domains

Algorithm selection is not limited to single domains but can be applied to any kind of algorithm as long as the above requirements are satisfied. Application domains include hard combinatorial problems such as SAT, mixed integer programming, CSP, AI planning, TSP, MAXSAT, QBF and answer set programming, as well as machine learning (meta-learning).

For an extensive list of literature about algorithm selection, we refer to a literature overview.[10]

Variants of algorithm selection

Online selection

Online algorithm selection refers to switching between different algorithms during the solving process. This is useful as a hyper-heuristic. In contrast, offline algorithm selection selects an algorithm for a given instance only once and before the solving process.

Computation of schedules

An extension of algorithm selection is the per-instance algorithm scheduling problem, in which we do not select only a single solver, but a time budget for each algorithm, on a per-instance basis. This approach improves the performance of selection systems in particular when the instance features are not very informative and a wrong selection of a single solver is likely.[11]
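
A minimal sketch of executing such a schedule, with hypothetical algorithm names and runtimes: the algorithms are run one after another with their assigned time budgets until one of them solves the instance.

  # Sketch: executing a per-instance algorithm schedule (hypothetical runtimes).
  # true_runtime[a] is the (in practice unknown) time algorithm a needs on this instance.
  true_runtime = {"local_search": 42.0, "cdcl_solver": 120.0}

  def run_schedule(schedule):
      # schedule: list of (algorithm, time budget) pairs, run in order until one solves.
      elapsed = 0.0
      for algorithm, budget in schedule:
          if true_runtime[algorithm] <= budget:   # solved within its slice of the budget
              return algorithm, elapsed + true_runtime[algorithm]
          elapsed += budget                       # budget exhausted, move on to the next algorithm
      return None, elapsed

  print(run_schedule([("local_search", 10.0), ("cdcl_solver", 290.0)]))
  # -> ('cdcl_solver', 130.0): the 10 s slice for local_search is wasted, then cdcl_solver solves it.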

Selection of parallel portfolios

Given the increasing importance of parallel computation, an extension of algorithm selection to this setting is parallel portfolio selection, in which we select a subset of the algorithms to run simultaneously in a parallel portfolio.[12]
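
As a simplified sketch (it chooses one fixed subset for the whole instance set rather than per instance, and the cost matrix is hypothetical), a parallel portfolio of size k can be chosen greedily: running k algorithms in parallel means that, for each instance, the fastest member of the subset determines the cost.

  # Sketch: greedy selection of a parallel portfolio of size k from a hypothetical cost matrix.
  m = {
      "A": {"i1": 2.0, "i2": 30.0, "i3": 9.0},
      "B": {"i1": 25.0, "i2": 3.0, "i3": 20.0},
      "C": {"i1": 4.0, "i2": 28.0, "i3": 1.5},
  }
  insts = ["i1", "i2", "i3"]

  def parallel_cost(subset):
      # Running a subset in parallel: per instance, the fastest member finishes first.
      return sum(min(m[a][i] for a in subset) for i in insts)

  def greedy_portfolio(k):
      chosen = []
      while len(chosen) < k:
          best = min((a for a in m if a not in chosen), key=lambda a: parallel_cost(chosen + [a]))
          chosen.append(best)
      return chosen

  print(greedy_portfolio(2), parallel_cost(greedy_portfolio(2)))  # ['C', 'B'] 8.5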

References

  1. Rice, John R. (1976). "The Algorithm Selection Problem". Advances in Computers. 15. pp. 65–118. doi:10.1016/S0065-2458(08)60520-3. ISBN 9780120121151. https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1098&context=cstech. 
  2. Bischl, Bernd; Kerschke, Pascal; Kotthoff, Lars; Lindauer, Marius; Malitsky, Yuri; Fréchette, Alexandre; Hoos, Holger; Hutter, Frank et al. (2016). "ASlib: A benchmark library for algorithm selection". Artificial Intelligence 237: 41–58. doi:10.1016/j.artint.2016.04.003. 
  3. 3.0 3.1 L. Xu; F. Hutter; H. Hoos; K. Leyton-Brown (2008). "SATzilla: Portfolio-based Algorithm Selection for SAT". Journal of Artificial Intelligence Research 32: 565–606. doi:10.1613/jair.2490. 
  4. S. Kadioglu; Y. Malitsky; A. Sabharwal; H. Samulowitz; M. Sellmann (2011). "Algorithm Selection and Scheduling". in Lee, J.. Principles and Practice of Constraint Programming. Lecture Notes in Computer Science. 6876. pp. 454–469. doi:10.1007/978-3-642-23786-7_35. ISBN 978-3-642-23785-0. 
  5. 5.0 5.1 Y. Malitsky; A. Sabharwal; H. Samulowitz; M. Sellmann (2013). "Algorithm Portfolios Based on Cost-Sensitive Hierarchical Clustering". Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence. pp. 608–614. ISBN 978-1-57735-633-2. 
  6. E. Nudelman; K. Leyton-Brown; H. Hoos; A. Devkar; Y. Shoham (2004). "Understanding Random SAT: Beyond the Clauses-to-Variables Ratio". Proceedings of CP. https://ai.stanford.edu/~shoham/www%20papers/CP04randomsat.pdf. 
  7. S. Kadioglu; Y. Malitsky; M. Sellmann; K. Tierney (2010). "ISAC – Instance-Specific Algorithm Configuration". Proceedings of the European Conference on Artificial Intelligence. https://wiwi.uni-paderborn.de/fileadmin/dep3ls7/Downloads/Publikationen/PDFs/isac-ecai2010.pdf. 
  8. L. Xu; F. Hutter; J. Shen; H. Hoos; K. Leyton-Brown (2012). "SATzilla2012: Improved Algorithm Selection Based on Cost-sensitive Classification Models". Proceedings of the SAT Challenge 2012: Solver and Benchmark Descriptions. http://www.academia.edu/download/30605172/sc2012_proceedings.pdf#page=58. (dead link)
  9. A. Frechette; L. Kotthoff; T. Michalak; T. Rahwan; H. Hoos; K. Leyton-Brown (2016). "Using the Shapley Value to Analyze Algorithm Portfolios". Proceedings of the AAAI Conference on Artificial Intelligence 30. doi:10.1609/aaai.v30i1.10440. https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/download/12495/12107. 
  10. Kotthoff, Lars (2016). "Algorithm Selection for Combinatorial Search Problems: A Survey". Data Mining and Constraint Programming. Springer, Cham. pp. 149–190.
  11. M. Lindauer; R. Bergdoll; F. Hutter (2016). "An Empirical Study of Per-Instance Algorithm Scheduling". Proceedings of the International Conference on Learning and Intelligent Optimization. Lecture Notes in Computer Science 10079: 253–259. doi:10.1007/978-3-319-50349-3_20. ISBN 978-3-319-50348-6. http://ml.informatik.uni-freiburg.de/papers/16-LION-ASschedules.pdf. 
  12. M. Lindauer; H. Hoos; F. Hutter (2015). "From Sequential Algorithm Selection to Parallel Portfolio Selection". Proceedings of the International Conference on Learning and Intelligent Optimization. Lecture Notes in Computer Science 8994: 1–16. doi:10.1007/978-3-319-19084-6_1. ISBN 978-3-319-19083-9. https://ml.informatik.uni-freiburg.de/papers/15-LION-ParallelAS.pdf.