Configuration linear program

The configuration linear program (configuration-LP) is a linear programming technique used for solving combinatorial optimization problems. It was introduced in the context of the cutting stock problem.[1][2] Later, it was applied to the bin packing[3][4] and job scheduling problems.[5][6] In the configuration-LP, there is a variable for each possible configuration, that is, for each multiset of items that can fit in a single bin (such configurations are also known as patterns). Usually, the number of configurations is exponential in the problem size, but in some cases it is possible to attain approximate solutions using only a polynomial number of configurations.

In bin packing

The integral LP

In the bin packing problem, there are n items with different sizes. The goal is to pack the items into a minimum number of bins, where each bin can contain items with a total size of at most B. A feasible configuration is a multiset of sizes with a sum of at most B.

  • Example:[7] suppose the item sizes are 3,3,3,3,3,4,4,4,4,4, and B=12. Then the possible configurations are: 3, 33, 333, 3333, 4, 34, 334, 44, 344, 444. If we had only three items of size 3, then we could not use the configuration 3333.
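
The configurations in this example can also be enumerated mechanically. The following sketch (plain Python; the names sizes, counts and B are illustrative, not from the source) lists every multiset of sizes with a positive total size of at most B, with multiplicities also capped by the number of available items:

```python
from itertools import product

sizes = [3, 4]     # distinct item sizes
counts = [5, 5]    # number of available items of each size
B = 12             # bin capacity

# A configuration is a multiset of sizes with total size at most B.
# Represent it by a tuple of multiplicities, one entry per size.
configs = [mult for mult in product(*(range(c + 1) for c in counts))
           if 0 < sum(m * s for m, s in zip(mult, sizes)) <= B]

for mult in configs:
    print("".join(str(s) * m for s, m in zip(sizes, mult)))
# Prints the ten configurations listed above (in a different order).
```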

Denote by S the set of different sizes (and their number). Denote by C the set of different configurations (and their number). For each size s in S and configuration c in C, denote:

  • ns - the number of items of size s.
  • as,c - the number of occurrences of size s in configuration c.
  • xc - a variable denoting the number of bins with configuration c.

Then, the configuration LP of bin-packing is:

[math]\displaystyle{ \text{minimize}~~~\sum_{c\in C}x_c~~~\text{subject to} }[/math]

[math]\displaystyle{ \sum_{c\in C}a_{s,c}x_c \geq n_s }[/math] for all s in S (that is, all ns items of size s are packed).

[math]\displaystyle{ x_c\in\{0,\ldots,n\} }[/math] for all c in C (since at most n bins are needed overall, each configuration is used at most n times).
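
For small instances, the integer program can be written down explicitly and handed to a solver. The following sketch (an illustration only, not an algorithm from the cited papers) builds the matrix of coefficients as,c for the earlier example (five items of size 3, five items of size 4, B = 12) and solves the ILP with SciPy; the integrality argument requires SciPy 1.9 or later.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog   # SciPy >= 1.9 for the `integrality` argument

sizes = [3, 4]        # distinct sizes (the set S)
n_s = [5, 5]          # number of items of each size
B = 12                # bin capacity

# One column per feasible configuration: the S-by-C matrix a_{s,c}.
configs = [m for m in product(*(range(k + 1) for k in n_s))
           if 0 < sum(mi * si for mi, si in zip(m, sizes)) <= B]
A = np.array(configs).T

n_items = sum(n_s)
res = linprog(np.ones(len(configs)),           # minimize the number of bins
              A_ub=-A, b_ub=-np.array(n_s),    # A x >= n  <=>  -A x <= -n
              bounds=(0, n_items),             # 0 <= x_c <= n
              integrality=np.ones(len(configs)),
              method="highs")
print(res.fun)   # optimal number of bins (3 for this instance: 444, 344, 3333)
```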

The configuration LP is an integer linear program, so in general it is NP-hard. Moreover, the program itself is generally very large: it has C variables and S constraints. If the smallest item size is eB (for some fraction e in (0,1)), then each bin can contain at most 1/e items, so the number of configurations C is roughly [math]\displaystyle{ S^{1/e} }[/math], which can be very large when e is small. If e is considered a constant, the integer LP can be solved by exhaustive search: there are at most [math]\displaystyle{ S^{1/e} }[/math] configurations, and for each configuration there are at most n possible values, so there are at most [math]\displaystyle{ n^{S^{1/e}} }[/math] combinations to check. For each combination, we have to check S constraints, so the run-time is [math]\displaystyle{ S\cdot n^{S^{1/e}} }[/math], which is polynomial in n when S and e are constants.[7]

However, this ILP serves as a basis for several approximation algorithms. The main idea of these algorithms is to reduce the original instance into a new instance in which S is small and e is large, so C is relatively small. Then, the ILP can be solved either by complete search (if S, C are sufficiently small), or by relaxing it into a fractional LP.

The fractional LP

The fractional configuration LP of bin-packing is the linear programming relaxation of the above ILP. It replaces the last constraint [math]\displaystyle{ x_c\in\{0,\ldots,n\} }[/math] with the constraint [math]\displaystyle{ x_c \geq 0 }[/math]. In other words, each configuration can be used a fractional number of times. The relaxation was first presented by Gilmore and Gomory,[2] and it is often called the Gilmore-Gomory linear program.[8]

  • Example: suppose there are 31 items of size 3 and 7 items of size 4, and the bin-size is 10. The configurations are: 4, 44, 34, 334, 3, 33, 333. The constraints are [0,0,1,2,1,2,3]·x ≥ 31 and [1,2,1,1,0,0,0]·x ≥ 7. An optimal solution to the fractional LP is [0,0,0,7,0,0,17/3], that is, there are 7 bins of configuration 334 and 17/3 bins of configuration 333. Note that only two different configurations are needed.

In short, the fractional LP can be written as follows:

[math]\displaystyle{ \text{minimize}~~\mathbf{1}\cdot \mathbf{x}~~~\text{s.t.}~~ A \mathbf{x}\geq \mathbf{n}~~~\text{and}~~ \mathbf{x}\geq 0 }[/math]

where 1 is the vector (1,...,1) of size C, A is an S-by-C matrix in which each column represents a single configuration, and n is the vector (n1,...,nS).
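
For instance, the example above (31 items of size 3, 7 items of size 4, B = 10) can be solved in this matrix form with any LP solver. The following sketch uses SciPy (an illustration; the configuration ordering 4, 44, 34, 334, 3, 33, 333 is the one used in the example) and recovers the optimal value 7 + 17/3 = 38/3 fractional bins:

```python
import numpy as np
from scipy.optimize import linprog

# Columns: configurations 4, 44, 34, 334, 3, 33, 333.
# Rows: size 3, then size 4.
A = np.array([[0, 0, 1, 2, 1, 2, 3],
              [1, 2, 1, 1, 0, 0, 0]])
n = np.array([31, 7])

res = linprog(np.ones(A.shape[1]),     # minimize the total number of bins
              A_ub=-A, b_ub=-n,        # A x >= n
              bounds=(0, None),        # x >= 0, no integrality constraint
              method="highs")
print(res.fun)   # 12.666... = 38/3
print(res.x)     # [0, 0, 0, 7, 0, 0, 5.666...]: 7 bins of 334 and 17/3 bins of 333
```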

Solving the fractional LP

A linear program with no integrality constraints can be solved in time polynomial in the number of variables and constraints. The problem is that the number of variables in the fractional configuration LP is equal to the number of possible configurations, which might be huge. Karmarkar and Karp[9] present an algorithm that overcomes this problem.

First, they construct the dual linear program of the fractional LP:

[math]\displaystyle{ \text{maximize}~~\mathbf{n}\cdot \mathbf{y}~~~\text{s.t.}~~ A^T \mathbf{y} \leq \mathbf{1}~~~\text{and}~~ \mathbf{y}\geq 0 }[/math].

It has S variables y1,...,yS, and C constraints: for each configuration c, there is a constraint [math]\displaystyle{ A^c\cdot y\leq 1 }[/math], where [math]\displaystyle{ A^c }[/math] is the column of A representing the configuration c. It has the following economic interpretation.[9] For each size s, we should determine a nonnegative price ys. Our profit is the total price of all items. We want to maximize the profit n·y, subject to the constraints that the total price of the items in each configuration is at most 1.

Second, they apply a variant of the ellipsoid method, which does not need to list all the constraints explicitly; it only needs a separation oracle. A separation oracle is an algorithm that, given a vector y, either asserts that it is feasible, or finds a constraint that it violates. The separation oracle for the dual LP can be implemented by solving the knapsack problem with sizes s and values y: if the optimal solution of the knapsack problem has a total value of at most 1, then y is feasible; if it is larger than 1, then y is not feasible, and the optimal solution of the knapsack problem identifies a configuration whose constraint is violated.
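
A sketch of such an oracle is given below, assuming integer item sizes so that the knapsack can be solved exactly by dynamic programming over the capacity B (the function name and the tolerance are illustrative; as discussed next, Karmarkar and Karp actually use an approximate knapsack solver):

```python
def separation_oracle(sizes, y, B, n_s, tol=1e-9):
    """Separation oracle for the dual LP (a sketch, assuming integer sizes).

    Solves a bounded knapsack: pick at most n_s[i] items of size sizes[i],
    with total size at most B, maximizing the total price under y.  If the
    maximum price exceeds 1, the maximizing configuration violates its
    dual constraint A^c . y <= 1.
    """
    S = len(sizes)
    # dp[w] = (best total price with total size at most w, multiplicity vector)
    dp = [(0.0, [0] * S) for _ in range(B + 1)]
    for i in range(S):
        s, price, cap = sizes[i], y[i], n_s[i]
        for w in range(B, s - 1, -1):
            for k in range(1, min(cap, w // s) + 1):
                value = dp[w - k * s][0] + k * price
                if value > dp[w][0]:
                    mult = dp[w - k * s][1][:]
                    mult[i] += k
                    dp[w] = (value, mult)
    best_value, best_config = dp[B]
    if best_value <= 1 + tol:
        return None          # y is feasible for the dual LP
    return best_config       # a configuration whose dual constraint is violated
```

For the running example (sizes 3 and 4, B = 10), calling separation_oracle([3, 4], [1/3, 1/3], 10, [31, 7]) returns None: no configuration has a total price above 1, so these prices are dual-feasible.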

Third, they show that, with an approximate solution to the knapsack problem, one can get an approximate solution to the dual LP, and from this, an approximate solution to the primal LP; see Karmarkar-Karp bin packing algorithms.

All in all, for any tolerance factor h, the algorithm finds a basic feasible solution of cost at most LOPT(I) + h, where LOPT(I) is the optimal value of the fractional LP on instance I, and runs in time:

[math]\displaystyle{ O\left(S^8 \log{S} \log^2(\frac{S n}{e h}) + \frac{S^4 n \log{S}}{h}\log(\frac{S n}{e h}) \right) }[/math],

where S is the number of different sizes, n is the number of items, and the size of the smallest item is eB. In particular, if e ≥ 1/n and h=1, the algorithm finds a solution with at most LOPT+1 bins in time: [math]\displaystyle{ O\left(S^8 \log{S} \log^2{n} + S^4 n \log{S}\log{n} \right) }[/math]. A randomized variant of this algorithm runs in expected time:

[math]\displaystyle{ O\left(S^7 \log{S} \log^2(\frac{S n}{e h}) + \frac{S^4 n \log{S}}{h}\log(\frac{S n}{e h}) \right) }[/math].

Rounding the fractional LP

Karmarkar and Karp further developed a way to round the fractional LP into an approximate solution to the integral LP; see Karmarkar-Karp bin packing algorithms. Their proof shows that the additive integrality gap of this LP is in O(log2(n)). Later, Hoberg and Rothvoss[10] improved their result and proved that the integrality gap is in O(log(n)). The best known lower bound on the integrality gap is a constant Ω(1). Finding the exact integrality gap is an open problem.

In bin covering

In the bin covering problem, there are n items with different sizes. The goal is to pack the items into a maximum number of bins, where each bin must contain items with a total size of at least B. A natural configuration LP for this problem could be:

[math]\displaystyle{ \text{maximize}~~\mathbf{1}\cdot \mathbf{x}~~~\text{s.t.}~~ A \mathbf{x}\leq \mathbf{n}~~~\text{and}~~ \mathbf{x}\geq 0 }[/math]

where A represents all configurations of items with sum at least B (one can take only the inclusion-minimal configurations). The problem with this LP is that, in the bin-covering problem, handling small items is problematic, since small items may be essential for the optimal solution. With small items allowed, the number of configurations may be too large even for the technique of Karmarkar and Karp. Csirik, Johnson and Kenyon[11] present an alternative LP. First, they define a set of items that are called small. Let T be the total size of all small items. Then, they construct a matrix A representing all configurations with sum less than 2B. Then, they consider the above LP with one additional constraint:

[math]\displaystyle{ \text{maximize}~~\mathbf{1}\cdot \mathbf{x}~~~\text{s.t.} }[/math]

[math]\displaystyle{ A \mathbf{x}\leq \mathbf{n} }[/math]

[math]\displaystyle{ \sum_{c\in C: sum(c)\lt B} (B-sum(c))\cdot x_c \leq T }[/math]

[math]\displaystyle{ \mathbf{x}\geq 0 }[/math]

The additional constraint guarantees that the "vacant space" in the bins can be filled by the small items. The dual of this LP is more complex and cannot be solved by a simple knapsack separation oracle. Csirik, Johnson and Kenyon[11] present a method to solve it approximately in time exponential in 1/epsilon; Jansen and Solis-Oba[12] present an improved method that solves it approximately in time polynomial in 1/epsilon.
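
To make the construction concrete, the following sketch builds this LP for a small hypothetical instance and solves its relaxation with SciPy. The sizes, counts and T below are illustrative, and the configurations are taken literally as stated above (all configurations of non-small items with total size less than 2B):

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

B = 10
large_sizes = [6, 7]      # sizes of the non-small items (hypothetical instance)
n_large = [4, 2]          # number of items of each such size
T = 9                     # total size of the small items

# Configurations of non-small items with total size in (0, 2B).
configs = [m for m in product(*(range(k + 1) for k in n_large))
           if 0 < sum(mi * si for mi, si in zip(m, large_sizes)) < 2 * B]
A = np.array(configs).T
sums = A.T @ np.array(large_sizes)          # total size of each configuration

# Extra row: (B - sum(c)) for configurations with sum(c) < B, else 0.
deficit = np.where(sums < B, B - sums, 0)

A_ub = np.vstack([A, deficit])
b_ub = np.array(n_large + [T])

res = linprog(-np.ones(len(configs)),       # maximize 1.x  <=>  minimize -1.x
              A_ub=A_ub, b_ub=b_ub,
              bounds=(0, None), method="highs")
print(-res.fun)   # fractional optimum: 4.375 covered bins for this toy instance
```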

In machine scheduling

In the problem of unrelated-machines scheduling, there are m different machines that should process n different jobs. When machine i processes job j, it takes time pi,j. The goal is to partition the jobs among the machines such that the maximum completion time of a machine is as small as possible. The decision version of this problem is: given a time T, is there a partition in which the completion time of every machine is at most T?

For each machine i, there are finitely many subsets of jobs that can be processed by machine i in time at most T. Each such subset is called a configuration for machine i. Denote by Ci(T) the set of all configurations for machine i, given time T. For each machine i and configuration c in Ci(T), define a variable [math]\displaystyle{ x_{i,c} }[/math] which equals 1 if the configuration used by machine i is c, and 0 otherwise. Then, the LP constraints are as follows (a construction sketch appears after the list):

  • [math]\displaystyle{ \sum_{c\in C_i(T)}x_{i,c} = 1 }[/math] for every machine i in 1,...,m;
  • [math]\displaystyle{ \sum_{i=1}^m \sum_{c\ni j, c\in C_i(T)}x_{i,c} = 1 }[/math] for every job j in 1,...,n;
  • [math]\displaystyle{ x_{i,c} \in \{0,1\} }[/math] for every machine i and configuration c.
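
The sketch below (a hypothetical instance; the processing times, T and names are illustrative) enumerates the configurations Ci(T) for each machine and decides feasibility of the resulting integer program with SciPy's MILP backend (SciPy 1.9 or later):

```python
import numpy as np
from itertools import chain, combinations
from scipy.optimize import linprog   # SciPy >= 1.9 for the `integrality` argument

# p[i][j]: time for machine i to process job j (hypothetical data).
p = [[3, 5, 4],
     [6, 2, 7]]
m, n = len(p), len(p[0])
T = 7                                # the deadline being tested

def configurations(i):
    """All subsets of jobs that machine i can finish within time T."""
    subsets = chain.from_iterable(combinations(range(n), r) for r in range(n + 1))
    return [c for c in subsets if sum(p[i][j] for j in c) <= T]

# One 0/1 variable x_{i,c} per (machine, configuration) pair.
variables = [(i, c) for i in range(m) for c in configurations(i)]

A_eq = np.zeros((m + n, len(variables)))
for col, (i, c) in enumerate(variables):
    A_eq[i, col] = 1                 # machine i uses exactly one configuration
    for j in c:
        A_eq[m + j, col] = 1         # job j is assigned exactly once
b_eq = np.ones(m + n)

res = linprog(np.zeros(len(variables)),      # pure feasibility: any objective works
              A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), integrality=1, method="highs")
print("feasible for T =", T, ":", res.status == 0)
# True here: machine 0 runs jobs {0, 2} (time 7), machine 1 runs job {1} (time 2).
```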

Properties

The integrality gap of the configuration-LP for unrelated-machines scheduling is 2.[5]

References

  1. Eisemann, Kurt (1957-04-01). "The Trim Problem". Management Science 3 (3): 279–284. doi:10.1287/mnsc.3.3.279. ISSN 0025-1909. https://pubsonline.informs.org/doi/abs/10.1287/mnsc.3.3.279. 
  2. 2.0 2.1 Gilmore, P. C.; Gomory, R. E. (1961). "A Linear Programming Approach to the Cutting-Stock Problem". Operations Research 9 (6): 849–859. doi:10.1287/opre.9.6.849. 
  3. Karmarkar, Narendra; Karp, Richard M. (1982-11-01). "An efficient approximation scheme for the one-dimensional bin-packing problem". 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982): 312–320. doi:10.1109/SFCS.1982.61. https://ieeexplore.ieee.org/document/4568405. 
  4. Bansal, Nikhil; Caprara, Alberto; Sviridenko, Maxim (2006-10-01). "Improved approximation algorithms for multidimensional bin packing problems". 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). pp. 697–708. doi:10.1109/FOCS.2006.38. ISBN 0-7695-2720-5. https://ieeexplore.ieee.org/document/4031404. 
  5. 5.0 5.1 Verschae, José; Wiese, Andreas (2014-08-01). "On the configuration-LP for scheduling on unrelated machines" (in en). Journal of Scheduling 17 (4): 371–383. doi:10.1007/s10951-013-0359-4. ISSN 1099-1425. https://doi.org/10.1007/s10951-013-0359-4. 
  6. Knop, Dušan; Koutecký, Martin (2020-03-04). "Scheduling Kernels via Configuration LP". arXiv:2003.02187 [cs.DS].
  7. 7.0 7.1 Claire Mathieu. "Approximation Algorithms Part I, Week 3: bin packing". https://www.coursera.org/learn/approximation-algorithms-part-1/home/week/3. 
  8. Rothvoß, T. (2013-10-01). "Approximating Bin Packing within O(log OPT · Log Log OPT) Bins". 2013 IEEE 54th Annual Symposium on Foundations of Computer Science. pp. 20–29. doi:10.1109/FOCS.2013.11. ISBN 978-0-7695-5135-7. https://ieeexplore.ieee.org/document/6686137. 
  9. 9.0 9.1 Karmarkar, Narendra; Karp, Richard M. (November 1982). "An efficient approximation scheme for the one-dimensional bin-packing problem". 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982): 312–320. doi:10.1109/SFCS.1982.61. https://ieeexplore.ieee.org/document/4568405/references#references. 
  10. Hoberg, Rebecca; Rothvoss, Thomas (2017). "A Logarithmic Additive Integrality Gap for Bin Packing". Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics. pp. 2616–2625. doi:10.1137/1.9781611974782.172. ISBN 978-1-61197-478-2. 
  11. 11.0 11.1 Csirik, Janos; Johnson, David S.; Kenyon, Claire (2001-01-09). "Better approximation algorithms for bin covering". SODA '01: Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics. pp. 557–566. ISBN 978-0-89871-490-6. https://dl.acm.org/doi/abs/10.5555/365411.365533. 
  12. Jansen, Klaus; Solis-Oba, Roberto (2002-11-21). "An Asymptotic Fully Polynomial Time Approximation Scheme for Bin Covering". Algorithms and Computation. Lecture Notes in Computer Science. 2518. Springer-Verlag. pp. 175–186. doi:10.1007/3-540-36136-7_16. ISBN 978-3-540-00142-3. https://dl.acm.org/doi/abs/10.5555/646345.689912.