Set cover problem

Figure: Example of an instance of the set cover problem.

The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory.

Given a set of elements {1, 2, …, n} (called the universe) and a collection S of m subsets whose union equals the universe, the set cover problem is to identify the smallest sub-collection of S whose union equals the universe. For example, consider the universe U = {1, 2, 3, 4, 5} and the collection of sets S = { {1, 2, 3}, {2, 4}, {3, 4}, {4, 5} }. Clearly the union of S is U. However, we can cover all of the elements with only two sets: { {1, 2, 3}, {4, 5} }; see the figure above. Therefore, the solution to the set cover problem has size 2.
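For an instance this small, an optimal cover can be found by exhaustive search. The following Python sketch (a brute-force illustration, not an efficient method) enumerates sub-collections of S in order of increasing size and returns the first one whose union is U:

```python
from itertools import combinations

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]

def minimum_cover(universe, subsets):
    """Exhaustive search: try all sub-collections in order of increasing size."""
    for size in range(1, len(subsets) + 1):
        for combo in combinations(subsets, size):
            if set().union(*combo) == universe:
                return list(combo)
    return None  # the given subsets do not cover the universe

print(minimum_cover(universe, subsets))  # [{1, 2, 3}, {4, 5}]
```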

More formally, given a universe [math]\displaystyle{ \mathcal{U} }[/math] and a family [math]\displaystyle{ \mathcal{S} }[/math] of subsets of [math]\displaystyle{ \mathcal{U} }[/math], a set cover is a subfamily [math]\displaystyle{ \mathcal{C}\subseteq\mathcal{S} }[/math] of sets whose union is [math]\displaystyle{ \mathcal{U} }[/math].

  • In the set cover decision problem, the input is a pair [math]\displaystyle{ (\mathcal{U},\mathcal{S}) }[/math] and an integer [math]\displaystyle{ k }[/math]; the question is whether there is a set cover of size [math]\displaystyle{ k }[/math] or less.
  • In the set cover optimization problem, the input is a pair [math]\displaystyle{ (\mathcal{U},\mathcal{S}) }[/math], and the task is to find a set cover that uses the fewest sets.

The decision version of set covering is NP-complete; it is one of the 21 problems Karp showed to be NP-complete in 1972. The optimization/search version of set cover is NP-hard.[1] It is a problem "whose study has led to the development of fundamental techniques for the entire field" of approximation algorithms.[2]

Variants

In the weighted set cover problem, each set is assigned a positive weight (representing its cost), and the goal is to find a set cover with the smallest total weight. The usual (unweighted) set cover corresponds to all sets having a weight of 1.

In the fractional set cover problem, it is allowed to select fractions of sets, rather than entire sets. A fractional set cover is an assignment of a fraction (a number in [0,1]) to each set in [math]\displaystyle{ \mathcal{S} }[/math], such that for each element x in the universe, the sum of fractions of sets that contain x is at least 1. The goal is to find a fractional set cover in which the sum of fractions is as small as possible. Note that a (usual) set cover is equivalent to a fractional set cover in which all fractions are either 0 or 1; therefore, the size of the smallest fractional cover is at most the size of the smallest cover, but may be smaller. For example, consider the universe U = {1, 2, 3} and the collection of sets S = { {1, 2}, {2, 3}, {3, 1} }. The smallest set cover has a size of 2, e.g. { {1, 2}, {2, 3} }. But there is a fractional set cover of size 1.5, in which a 0.5 fraction of each set is taken.
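The fractional cover in this example can be verified directly: every element belongs to exactly two of the three sets, so assigning a fraction of 0.5 to each set gives each element a total coverage of 0.5 + 0.5 = 1, at a total cost of 3 × 0.5 = 1.5. A short Python sketch of this check:

```python
universe = {1, 2, 3}
subsets = [{1, 2}, {2, 3}, {3, 1}]
fractions = [0.5, 0.5, 0.5]  # fraction assigned to each set

# Feasibility: every element must be covered with total fraction at least 1.
for e in universe:
    assert sum(f for s, f in zip(subsets, fractions) if e in s) >= 1

print(sum(fractions))  # total size of the fractional cover: 1.5
```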

Linear program formulation

The set cover problem can be formulated as the following integer linear program (ILP).[3]

minimize [math]\displaystyle{ \sum_{s \in \mathcal S} x_s }[/math] (minimize the number of sets)
subject to [math]\displaystyle{ \sum_{s\colon e \in s} x_s \geqslant 1 }[/math] for all [math]\displaystyle{ e\in \mathcal U }[/math] (cover every element of the universe)
[math]\displaystyle{ x_s \in \{0,1\} }[/math] for all [math]\displaystyle{ s\in \mathcal S }[/math]. (every set is either in the set cover or not)

For a more compact representation of the covering constraint, one can define an incidence matrix [math]\displaystyle{ A }[/math], where each row corresponds to an element and each column corresponds to a set, and [math]\displaystyle{ A_{e,s}=1 }[/math] if element e is in set s, and [math]\displaystyle{ A_{e,s}=0 }[/math] otherwise. Then, the covering constraint can be written as [math]\displaystyle{ A x \geqslant 1 }[/math].
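For the running example with U = {1, 2, 3, 4, 5}, the incidence matrix can be written down directly; the following sketch builds it with rows indexed by elements and columns by sets:

```python
universe = [1, 2, 3, 4, 5]
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]

# Incidence matrix: A[e][s] = 1 if element e is in set s, else 0.
A = [[1 if e in s else 0 for s in subsets] for e in universe]
for row in A:
    print(row)
# [1, 0, 0, 0]
# [1, 1, 0, 0]
# [1, 0, 1, 0]
# [0, 1, 1, 1]
# [0, 0, 0, 1]
```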

Weighted set cover is described by a program identical to the one given above, except that the objective function to minimize is [math]\displaystyle{ \sum_{s \in \mathcal S} w_s x_s }[/math], where [math]\displaystyle{ w_{s} }[/math] is the weight of set [math]\displaystyle{ s\in \mathcal{S} }[/math].

Fractional set cover is described by a program identical to the one given above, except that [math]\displaystyle{ x_s }[/math] can be non-integer, so the last constraint is replaced by [math]\displaystyle{ 0 \leq x_s\leq 1 }[/math].

This linear program belongs to the more general class of LPs for covering problems, as all the coefficients in the objective function and both sides of the constraints are non-negative. The integrality gap of the ILP is at most [math]\displaystyle{ \scriptstyle \log n }[/math] (where [math]\displaystyle{ \scriptstyle n }[/math] is the size of the universe). It has been shown that its relaxation indeed gives a factor-[math]\displaystyle{ \scriptstyle \log n }[/math] approximation algorithm for the minimum set cover problem.[4] See randomized rounding for a detailed explanation.
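As an illustration, the LP relaxation of the running example can be solved with an off-the-shelf solver. The sketch below assumes SciPy is available and passes the covering constraint [math]\displaystyle{ A x \geqslant 1 }[/math] to scipy.optimize.linprog in the equivalent form [math]\displaystyle{ -A x \leqslant -1 }[/math]:

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

universe = [1, 2, 3, 4, 5]
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]

A = np.array([[1 if e in s else 0 for s in subsets] for e in universe])
c = np.ones(len(subsets))  # objective: minimize the total (fractional) number of sets

# Covering constraint A x >= 1, rewritten as -A x <= -1 for linprog.
res = linprog(c, A_ub=-A, b_ub=-np.ones(len(universe)), bounds=(0, 1))
print(res.x, res.fun)  # for this instance the relaxation happens to be integral: x = (1, 0, 0, 1), value 2
```

The value of the relaxation is a lower bound on the optimal (integral) set cover, and rounding such a fractional solution is the basis of the factor-[math]\displaystyle{ \log n }[/math] approximation mentioned above.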

Hitting set formulation

Set covering is equivalent to the hitting set problem. This is seen by observing that an instance of set covering can be viewed as an arbitrary bipartite graph, with the universe represented by vertices on the left, the sets represented by vertices on the right, and edges representing the membership of elements in sets. The task is then to find a minimum-cardinality subset of left-vertices that is adjacent to (i.e., "hits") each of the right-vertices, which is precisely the hitting set problem; converting between the two problems amounts to interchanging the two sides of the graph.
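Concretely, interchanging the two sides means transposing the element–set incidence relation, as in the following sketch (the function name is chosen here only for illustration):

```python
def to_hitting_set_instance(universe, subsets):
    """Dual instance: for each element, the indices of the sets containing it.
    A minimum hitting set of these index-sets is exactly a minimum set cover."""
    return {e: {i for i, s in enumerate(subsets) if e in s} for e in universe}

universe = [1, 2, 3, 4, 5]
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(to_hitting_set_instance(universe, subsets))
# {1: {0}, 2: {0, 1}, 3: {0, 2}, 4: {1, 2, 3}, 5: {3}}
```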

In the field of computational geometry, a hitting set for a collection of geometrical objects is also called a stabbing set or piercing set.[5]

Greedy algorithm

There is a greedy algorithm for polynomial time approximation of set covering that chooses sets according to one rule: at each stage, choose the set that contains the largest number of uncovered elements. This method can be implemented in time linear in the sum of sizes of the input sets, using a bucket queue to prioritize the sets.[6] It achieves an approximation ratio of [math]\displaystyle{ H(s) }[/math], where [math]\displaystyle{ s }[/math] is the size of the set to be covered (that is, of the universe).[7] In other words, it finds a covering that may be [math]\displaystyle{ H(n) }[/math] times as large as the minimum one, where [math]\displaystyle{ H(n) }[/math] is the [math]\displaystyle{ n }[/math]-th harmonic number: [math]\displaystyle{ H(n) = \sum_{k=1}^{n} \frac{1}{k} \le \ln{n} +1 }[/math].
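A minimal Python sketch of this greedy rule (using a simple linear scan at each step rather than the bucket queue mentioned above, so it runs in polynomial rather than linear time):

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the set containing the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(universe, subsets))  # [{1, 2, 3}, {4, 5}]
```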

This greedy algorithm actually achieves an approximation ratio of [math]\displaystyle{ H(s^\prime) }[/math], where [math]\displaystyle{ s^\prime }[/math] is the cardinality of the largest set in [math]\displaystyle{ \mathcal{S} }[/math]. For [math]\displaystyle{ \delta }[/math]-dense instances, however, there exists a [math]\displaystyle{ c \ln{m} }[/math]-approximation algorithm for every [math]\displaystyle{ c \gt 0 }[/math].[8]

Figure: Tight example for the greedy algorithm with k = 3.

There is a standard example on which the greedy algorithm achieves an approximation ratio of [math]\displaystyle{ \log_2(n)/2 }[/math]. The universe consists of [math]\displaystyle{ n=2^{(k+1)}-2 }[/math] elements. The set system consists of [math]\displaystyle{ k }[/math] pairwise disjoint sets [math]\displaystyle{ S_1,\ldots,S_k }[/math] with sizes [math]\displaystyle{ 2,4,8,\ldots,2^k }[/math] respectively, as well as two additional disjoint sets [math]\displaystyle{ T_0,T_1 }[/math], each of which contains half of the elements from each [math]\displaystyle{ S_i }[/math]. On this input, the greedy algorithm takes the sets [math]\displaystyle{ S_k,\ldots,S_1 }[/math], in that order, while the optimal solution consists only of [math]\displaystyle{ T_0 }[/math] and [math]\displaystyle{ T_1 }[/math]. An example of such an input for [math]\displaystyle{ k=3 }[/math] is described by the figure above.
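The construction can be written out explicitly; the sketch below builds such an instance for a given k and applies the greedy rule from the previous sketch to it:

```python
def tight_instance(k):
    """Universe of 2**(k+1) - 2 elements on which greedy uses k sets while the optimum uses 2."""
    S = [{(i, j) for j in range(2 ** i)} for i in range(1, k + 1)]               # |S_i| = 2**i, pairwise disjoint
    T0 = {(i, j) for i in range(1, k + 1) for j in range(2 ** (i - 1))}          # first half of each S_i
    T1 = {(i, j) for i in range(1, k + 1) for j in range(2 ** (i - 1), 2 ** i)}  # second half of each S_i
    return T0 | T1, S + [T0, T1]

universe, subsets = tight_instance(3)
print(len(universe))  # 14 == 2**(3+1) - 2

# Apply the greedy rule (same rule as in the previous sketch).
uncovered, cover = set(universe), []
while uncovered:
    best = max(subsets, key=lambda s: len(s & uncovered))
    cover.append(best)
    uncovered -= best
print(len(cover))  # 3: greedy takes S_3, then S_2, then S_1; the optimum {T_0, T_1} has size 2
```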

Inapproximability results show that the greedy algorithm is essentially the best-possible polynomial time approximation algorithm for set cover up to lower order terms (see Inapproximability results below), under plausible complexity assumptions. A tighter analysis for the greedy algorithm shows that the approximation ratio is exactly [math]\displaystyle{ \ln{n} - \ln{\ln{n}} + \Theta(1) }[/math].[9]

Low-frequency systems

If each element occurs in at most f sets, then a solution can be found in polynomial time that approximates the optimum to within a factor of f using LP relaxation.

If the constraint [math]\displaystyle{ x_S\in\{0,1\} }[/math] is replaced by [math]\displaystyle{ x_S \geq 0 }[/math] for all S in [math]\displaystyle{ \mathcal{S} }[/math] in the integer linear program shown above, then it becomes a (non-integer) linear program L. The algorithm can be described as follows:

  1. Find an optimal solution O for the program L using some polynomial-time method of solving linear programs.
  2. Pick all sets [math]\displaystyle{ S }[/math] for which the corresponding variable [math]\displaystyle{ x_S }[/math] has value at least 1/f in the solution O; a sketch of these two steps is given below.[10]
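A sketch of this rounding, again assuming SciPy for the LP solve. The key observation is that every element lies in at most f sets whose fractions sum to at least 1, so at least one of them has [math]\displaystyle{ x_S \ge 1/f }[/math]; the picked sets therefore form a cover of cost at most f times the LP optimum:

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

def lp_rounding_cover(universe, subsets):
    """f-approximation: solve the LP relaxation, keep every set with x_S >= 1/f."""
    elements = list(universe)
    A = np.array([[1 if e in s else 0 for s in subsets] for e in elements])
    f = int(A.sum(axis=1).max())  # maximum frequency of any element
    res = linprog(np.ones(len(subsets)), A_ub=-A, b_ub=-np.ones(len(elements)), bounds=(0, 1))
    return [s for s, x in zip(subsets, res.x) if x >= 1 / f - 1e-9]  # tolerance for floating point

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(lp_rounding_cover(universe, subsets))  # here f = 3 and the result is [{1, 2, 3}, {4, 5}]
```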

Inapproximability results

When [math]\displaystyle{ n }[/math] refers to the size of the universe, Lund and Yannakakis showed that set covering cannot be approximated in polynomial time to within a factor of [math]\displaystyle{ \tfrac{1}{2}\log_2{n} \approx 0.72\ln{n} }[/math], unless NP has quasi-polynomial time algorithms. Feige (1998) improved this lower bound to [math]\displaystyle{ \bigl(1-o(1)\bigr)\cdot\ln{n} }[/math] under the same assumptions, which essentially matches the approximation ratio achieved by the greedy algorithm. Raz and Safra established a lower bound of [math]\displaystyle{ c\cdot\ln{n} }[/math], where [math]\displaystyle{ c }[/math] is a certain constant, under the weaker assumption that P [math]\displaystyle{ \not= }[/math] NP. A similar result with a higher value of [math]\displaystyle{ c }[/math] was later proved by Alon and Moshkovitz. Dinur and Steurer showed optimal inapproximability by proving that it cannot be approximated to [math]\displaystyle{ \bigl(1 - o(1)\bigr) \cdot \ln{n} }[/math] unless P [math]\displaystyle{ = }[/math] NP.

Weighted set cover

Relaxing the integer linear program for weighted set cover stated above, one may use randomized rounding to get an [math]\displaystyle{ O(\log n) }[/math]-factor approximation. The greedy algorithm for unweighted set cover can also be adapted to the weighted case: at each stage, choose the set that minimizes the ratio of its weight to the number of uncovered elements it contains.[11]
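A sketch of the randomized-rounding approach, under the same SciPy assumption as the earlier LP sketches: solve the weighted LP relaxation, then include each set independently with probability equal to its fractional value, repeating the rounding pass until every element is covered (O(log n) passes suffice with high probability):

```python
import random
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

def randomized_rounding_cover(universe, subsets, weights, seed=0):
    """Weighted set cover via LP relaxation plus randomized rounding."""
    rng = random.Random(seed)
    elements = list(universe)
    A = np.array([[1 if e in s else 0 for s in subsets] for e in elements])
    # Weighted LP relaxation: minimize w.x subject to A x >= 1 and 0 <= x <= 1.
    res = linprog(np.asarray(weights, dtype=float), A_ub=-A,
                  b_ub=-np.ones(len(elements)), bounds=(0, 1))
    x = res.x
    chosen, covered = set(), set()
    # Rounding passes: pick set i with probability x_i, until everything is covered.
    while covered != set(elements):
        for i, xi in enumerate(x):
            if i not in chosen and rng.random() < xi:
                chosen.add(i)
                covered |= subsets[i]
    return [subsets[i] for i in sorted(chosen)]

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
weights = [3.0, 1.0, 1.0, 2.0]  # example weights, chosen here only for illustration
print(randomized_rounding_cover(universe, subsets, weights))  # [{1, 2, 3}, {4, 5}]
```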

Related problems

  • Hitting set is an equivalent reformulation of Set Cover.
  • Vertex cover is a special case of Hitting Set.
  • Edge cover is a special case of Set Cover.
  • Geometric set cover is a special case of Set Cover when the universe is a set of points in [math]\displaystyle{ \mathbb{R}^d }[/math] and the sets are induced by the intersection of the universe and geometric shapes (e.g., disks, rectangles).
  • Set packing
  • Maximum coverage problem is to choose at most k sets to cover as many elements as possible.
  • Dominating set is the problem of selecting a set of vertices (the dominating set) in a graph such that all other vertices are adjacent to at least one vertex in the dominating set. The Dominating set problem was shown to be NP complete through a reduction from Set cover.
  • Exact cover problem is to choose a set cover with no element included in more than one covering set.
  • Red Blue Set Cover.[12]
  • Set-cover abduction.
  • Monotone dualization is a computational problem equivalent to either listing all minimal hitting sets or listing all minimal set covers of a given set family.[13]

Notes

  1. Korte & Vygen 2012, p. 414.
  2. (Vazirani 2001)
  3. (Vazirani 2001)
  4. (Vazirani 2001)
  5. Nielsen, Frank (2000-09-06). "Fast stabbing of boxes in high dimensions". Theoretical Computer Science 246 (1): 53–72. doi:10.1016/S0304-3975(98)00336-3. ISSN 0304-3975. http://www.lix.polytechnique.fr/%7Enielsen/pdf/2000-FastStabbingBoxes-TCS.pdf. 
  6. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. "Exercise 35.3-3". Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 1122. ISBN 0-262-03384-4. 
  7. "A Greedy Heuristic for the Set-Covering Problem", Mathematics of Operations Research 4 (3): 233–235, August 1979, doi:10.1287/moor.4.3.233 
  8. Karpinski & Zelikovsky 1998
  9. Slavík, Petr (1996). "A tight analysis of the greedy algorithm for set cover". STOC '96, pp. 435–441. doi:10.1145/237814.237991
  10. (Vazirani 2001)
  11. (Vazirani 2001)
  12. Sandia National Laboratories; United States Department of Energy, Office of Scientific and Technical Information (1999). On the Red-Blue Set Cover Problem. OCLC 68396743. http://worldcat.org/oclc/68396743.
  13. Gainer-Dewar, Andrew; Vera-Licona, Paola (2017), "The minimal hitting set generation problem: algorithms and computation", SIAM Journal on Discrete Mathematics 31 (1): 63–100, doi:10.1137/15M1055024 
