Submodular set function
In mathematics, a submodular set function (also known as a submodular function) is a set function that, informally, describes the relationship between a set of inputs and an output, where adding more of one input yields a decreasing additional benefit (diminishing returns). This natural diminishing-returns property makes submodular functions suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences) and electrical networks. Recently, submodular functions have also found utility in several real-world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization and many other domains.[1][2][3][4]
Definition
If [math]\displaystyle{ \Omega }[/math] is a finite set, a submodular function is a set function [math]\displaystyle{ f:2^{\Omega}\rightarrow \mathbb{R} }[/math], where [math]\displaystyle{ 2^\Omega }[/math] denotes the power set of [math]\displaystyle{ \Omega }[/math], which satisfies one of the following equivalent conditions.[5]
- For every [math]\displaystyle{ X, Y \subseteq \Omega }[/math] with [math]\displaystyle{ X \subseteq Y }[/math] and every [math]\displaystyle{ x \in \Omega \setminus Y }[/math] we have that [math]\displaystyle{ f(X\cup \{x\})-f(X)\geq f(Y\cup \{x\})-f(Y) }[/math].
- For every [math]\displaystyle{ S, T \subseteq \Omega }[/math] we have that [math]\displaystyle{ f(S)+f(T)\geq f(S\cup T)+f(S\cap T) }[/math].
- For every [math]\displaystyle{ X\subseteq \Omega }[/math] and [math]\displaystyle{ x_1,x_2\in \Omega\backslash X }[/math] such that [math]\displaystyle{ x_1\neq x_2 }[/math] we have that [math]\displaystyle{ f(X\cup \{x_1\})+f(X\cup \{x_2\})\geq f(X\cup \{x_1,x_2\})+f(X) }[/math].
A nonnegative submodular function is also a subadditive function, but a subadditive function need not be submodular. If [math]\displaystyle{ \Omega }[/math] is not assumed finite, then the above conditions are not equivalent. In particular a function [math]\displaystyle{ f }[/math] defined by [math]\displaystyle{ f(S) = 1 }[/math] if [math]\displaystyle{ S }[/math] is finite and [math]\displaystyle{ f(S) = 0 }[/math] if [math]\displaystyle{ S }[/math] is infinite satisfies the first condition above, but the second condition fails when [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] are infinite sets with finite intersection.
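The equivalence of these conditions can be checked exhaustively on a small ground set. The following Python sketch, using an illustrative toy coverage function (the sets E1, E2, E3 are arbitrary example data, not from the text), verifies the second condition by brute force:

```python
# Brute-force check of the submodularity condition
# f(S) + f(T) >= f(S | T) + f(S & T) for a toy coverage function.
from itertools import chain, combinations

E = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}}   # illustrative collection of subsets
Omega = set(E)

def f(S):
    """Coverage function: size of the union of the chosen subsets."""
    return len(set().union(*(E[i] for i in S))) if S else 0

def subsets(U):
    s = list(U)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

submodular = all(f(S) + f(T) >= f(S | T) + f(S & T)
                 for S in subsets(Omega) for T in subsets(Omega))
print(submodular)  # True: coverage functions are submodular
```

The same loop with the first condition (diminishing marginal gains) would succeed as well, since the conditions are equivalent on a finite ground set.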
Types and examples of submodular functions
Monotone
A set function [math]\displaystyle{ f }[/math] is monotone if for every [math]\displaystyle{ T\subseteq S }[/math] we have that [math]\displaystyle{ f(T)\leq f(S) }[/math]. Examples of monotone submodular functions include:
- Linear (Modular) functions
- Any function of the form [math]\displaystyle{ f(S)=\sum_{i\in S}w_i }[/math] is called a linear function. Additionally if [math]\displaystyle{ \forall i,w_i\geq 0 }[/math] then f is monotone.
- Budget-additive functions
- Any function of the form [math]\displaystyle{ f(S)=\min\left\{B,~\sum_{i\in S}w_i\right\} }[/math], where each [math]\displaystyle{ w_i\geq 0 }[/math] and [math]\displaystyle{ B\geq 0 }[/math], is called budget-additive.[6]
- Coverage functions
- Let [math]\displaystyle{ \Omega=\{E_1,E_2,\ldots,E_n\} }[/math] be a collection of subsets of some ground set [math]\displaystyle{ \Omega' }[/math]. The function [math]\displaystyle{ f(S)=\left|\bigcup_{E_i\in S}E_i\right| }[/math] for [math]\displaystyle{ S\subseteq \Omega }[/math] is called a coverage function. This can be generalized by adding non-negative weights to the elements.
- Entropy
- Let [math]\displaystyle{ \Omega=\{X_1,X_2,\ldots,X_n\} }[/math] be a set of random variables. Then for any [math]\displaystyle{ S\subseteq \Omega }[/math] we have that [math]\displaystyle{ H(S) }[/math] is a submodular function, where [math]\displaystyle{ H(S) }[/math] is the entropy of the set of random variables [math]\displaystyle{ S }[/math], a fact known as Shannon's inequality.[7] Further inequalities for the entropy function are known to hold, see entropic vector.
- Matroid rank functions
- Let [math]\displaystyle{ \Omega=\{e_1,e_2,\dots,e_n\} }[/math] be the ground set on which a matroid is defined. Then the rank function of the matroid is a submodular function.[8]
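As a concrete sanity check of one of the examples above, the short Python sketch below verifies by enumeration that a budget-additive function is both monotone and submodular; the weights and budget are arbitrary illustrative values:

```python
# Check that f(S) = min(B, sum_{i in S} w_i) is monotone and submodular
# on a small ground set, by enumerating all pairs of subsets.
from itertools import chain, combinations

w = {1: 2.0, 2: 3.0, 3: 5.0}   # illustrative weights
B = 6.0                         # illustrative budget
Omega = set(w)

def f(S):
    return min(B, sum(w[i] for i in S))

def subsets(U):
    s = list(U)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

monotone = all(f(T) <= f(S) for S in subsets(Omega) for T in subsets(S))
submodular = all(f(S) + f(T) >= f(S | T) + f(S & T)
                 for S in subsets(Omega) for T in subsets(Omega))
print(monotone, submodular)  # True True
```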
Non-monotone
A submodular function that is not monotone is called non-monotone.
Symmetric
A non-monotone submodular function [math]\displaystyle{ f }[/math] is called symmetric if for every [math]\displaystyle{ S\subseteq \Omega }[/math] we have that [math]\displaystyle{ f(S)=f(\Omega-S) }[/math]. Examples of symmetric non-monotone submodular functions include:
- Graph cuts
- Let [math]\displaystyle{ \Omega=\{v_1,v_2,\dots,v_n\} }[/math] be the vertices of a graph. For any set of vertices [math]\displaystyle{ S\subseteq \Omega }[/math] let [math]\displaystyle{ f(S) }[/math] denote the number of edges [math]\displaystyle{ e=(u,v) }[/math] such that [math]\displaystyle{ u\in S }[/math] and [math]\displaystyle{ v\in \Omega-S }[/math]. This can be generalized by adding non-negative weights to the edges.
- Mutual information
- Let [math]\displaystyle{ \Omega=\{X_1,X_2,\ldots,X_n\} }[/math] be a set of random variables. Then for any [math]\displaystyle{ S\subseteq \Omega }[/math] we have that [math]\displaystyle{ f(S)=I(S;\Omega-S) }[/math] is a submodular function, where [math]\displaystyle{ I(S;\Omega-S) }[/math] is the mutual information.
Asymmetric
A non-monotone submodular function which is not symmetric is called asymmetric.
- Directed cuts
- Let [math]\displaystyle{ \Omega=\{v_1,v_2,\dots,v_n\} }[/math] be the vertices of a directed graph. For any set of vertices [math]\displaystyle{ S\subseteq \Omega }[/math] let [math]\displaystyle{ f(S) }[/math] denote the number of edges [math]\displaystyle{ e=(u,v) }[/math] such that [math]\displaystyle{ u\in S }[/math] and [math]\displaystyle{ v\in \Omega-S }[/math]. This can be generalized by adding non-negative weights to the directed edges.
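The contrast between the symmetric and asymmetric cut examples can be illustrated in a few lines of Python; the graphs below are toy data chosen only to make the difference visible:

```python
# Undirected cuts satisfy f(S) = f(Omega - S); directed cuts need not.
from itertools import chain, combinations

Omega = {1, 2, 3}
undirected_edges = [(1, 2), (2, 3)]
directed_arcs = [(1, 2), (2, 3)]   # arcs u -> v

def cut_undirected(S):
    return sum(1 for u, v in undirected_edges if (u in S) != (v in S))

def cut_directed(S):
    return sum(1 for u, v in directed_arcs if u in S and v not in S)

def subsets(U):
    s = list(U)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

print(all(cut_undirected(S) == cut_undirected(Omega - S)
          for S in subsets(Omega)))  # True: undirected cut is symmetric
print(all(cut_directed(S) == cut_directed(Omega - S)
          for S in subsets(Omega)))  # False: e.g. S = {1} cuts arc (1,2),
                                     # but Omega - S = {2,3} cuts nothing
```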
Continuous extensions of submodular set functions
Often, given a submodular set function that describes the values of various sets, we need to compute the values of fractional sets. For example: we know that the value of receiving house A and house B is V, and we want to know the value of receiving 40% of house A and 60% of house B. To this end, we need a continuous extension of the submodular set function.
Formally, a set function [math]\displaystyle{ f:2^{\Omega}\rightarrow \mathbb{R} }[/math] with [math]\displaystyle{ |\Omega|=n }[/math] can be represented as a function on [math]\displaystyle{ \{0, 1\}^{n} }[/math], by associating each [math]\displaystyle{ S\subseteq \Omega }[/math] with a binary vector [math]\displaystyle{ x^{S}\in \{0, 1\}^{n} }[/math] such that [math]\displaystyle{ x_{i}^{S}=1 }[/math] when [math]\displaystyle{ i\in S }[/math], and [math]\displaystyle{ x_{i}^{S}=0 }[/math] otherwise. A continuous extension of [math]\displaystyle{ f }[/math] is a continuous function [math]\displaystyle{ F:[0, 1]^{n}\rightarrow \mathbb{R} }[/math], that matches the value of [math]\displaystyle{ f }[/math] on [math]\displaystyle{ x\in \{0, 1\}^{n} }[/math], i.e. [math]\displaystyle{ F(x^{S})=f(S) }[/math].
Several kinds of continuous extensions of submodular functions are commonly used, which are described below.
Lovász extension
This extension is named after mathematician László Lovász.[9] Consider any vector [math]\displaystyle{ \mathbf{x}=\{x_1,x_2,\dots,x_n\} }[/math] such that each [math]\displaystyle{ 0\leq x_i\leq 1 }[/math]. Then the Lovász extension is defined as
[math]\displaystyle{ f^L(\mathbf{x})=\mathbb{E}(f(\{i|x_i\geq \lambda\})) }[/math]
where the expectation is over [math]\displaystyle{ \lambda }[/math] chosen from the uniform distribution on the interval [math]\displaystyle{ [0,1] }[/math]. The Lovász extension is a convex function if and only if [math]\displaystyle{ f }[/math] is a submodular function.
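Since [math]\displaystyle{ \lambda }[/math] is uniform on [math]\displaystyle{ [0,1] }[/math], the expectation can be computed exactly by sorting the coordinates and weighting each threshold set by the length of the interval of [math]\displaystyle{ \lambda }[/math] that produces it. A minimal Python sketch of this threshold formula, exercised on an illustrative toy coverage function:

```python
# Lovász extension via the sorting/threshold formula, equivalent to
# E_lambda[f({i : x_i >= lambda})] with lambda uniform on [0, 1].
def lovasz_extension(f, x):
    """f: set function on frozensets of indices; x: values in [0, 1]."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])   # indices by decreasing x_i
    xs = [x[i] for i in order]
    value = (1.0 - xs[0]) * f(frozenset())          # lambda above every x_i
    S = set()
    for k in range(n):
        S.add(order[k])
        lo = xs[k + 1] if k + 1 < n else 0.0
        value += (xs[k] - lo) * f(frozenset(S))     # lambda in (lo, xs[k]]
    return value

# Toy coverage-style function (illustrative data):
E = {0: {1, 2}, 1: {2, 3}}
f = lambda S: len(set().union(*(E[i] for i in S))) if S else 0
print(lovasz_extension(f, [1.0, 0.0]))  # 2.0, i.e. f({0})
```

On integral vectors the extension matches the set function, e.g. [math]\displaystyle{ f^L(1,1)=f(\{0,1\})=3 }[/math] here, as an extension must.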
Multilinear extension
Consider any vector [math]\displaystyle{ \mathbf{x}=\{x_1,x_2,\ldots,x_n\} }[/math] such that each [math]\displaystyle{ 0\leq x_i\leq 1 }[/math]. Then the multilinear extension is defined as [10][11][math]\displaystyle{ F(\mathbf{x})=\sum_{S\subseteq \Omega} f(S) \prod_{i\in S} x_i \prod_{i\notin S} (1-x_i) }[/math].
Intuitively, [math]\displaystyle{ x_i }[/math] represents the probability that item [math]\displaystyle{ i }[/math] is chosen for the set. For every set [math]\displaystyle{ S }[/math], the product of the two inner terms is the probability that the chosen set is exactly [math]\displaystyle{ S }[/math]. Therefore, the sum is the expected value of [math]\displaystyle{ f }[/math] for the set formed by choosing each item [math]\displaystyle{ i }[/math] independently at random with probability [math]\displaystyle{ x_i }[/math].
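The defining sum can be evaluated directly for small ground sets (it is exponential in [math]\displaystyle{ n }[/math], so this is only a sketch for illustration, using an arbitrary concave-of-cardinality function as the example):

```python
# Direct evaluation of the multilinear extension
# F(x) = sum_S f(S) * prod_{i in S} x_i * prod_{i not in S} (1 - x_i).
def multilinear_extension(f, x):
    n = len(x)
    total = 0.0
    for bits in range(2 ** n):                       # enumerate all subsets
        S = frozenset(i for i in range(n) if bits >> i & 1)
        p = 1.0
        for i in range(n):                           # P[chosen set == S]
            p *= x[i] if i in S else 1.0 - x[i]
        total += p * f(S)
    return total

f = lambda S: len(S) ** 0.5   # illustrative monotone submodular function
print(round(multilinear_extension(f, [1.0, 1.0]), 3))  # 1.414 = sqrt(2) = f({0,1})
```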
Convex closure
Consider any vector [math]\displaystyle{ \mathbf{x}=\{x_1,x_2,\dots,x_n\} }[/math] such that each [math]\displaystyle{ 0\leq x_i\leq 1 }[/math]. Then the convex closure is defined as [math]\displaystyle{ f^-(\mathbf{x})=\min\left(\sum_S \alpha_S f(S):\sum_S \alpha_S 1_S=\mathbf{x},\sum_S \alpha_S=1,\alpha_S\geq 0\right) }[/math].
The convex closure of any set function is convex over [math]\displaystyle{ [0,1]^n }[/math].
Concave closure
Consider any vector [math]\displaystyle{ \mathbf{x}=\{x_1,x_2,\dots,x_n\} }[/math] such that each [math]\displaystyle{ 0\leq x_i\leq 1 }[/math]. Then the concave closure is defined as [math]\displaystyle{ f^+(\mathbf{x})=\max\left(\sum_S \alpha_S f(S):\sum_S \alpha_S 1_S=\mathbf{x},\sum_S \alpha_S=1,\alpha_S\geq 0\right) }[/math].
Relations between continuous extensions
For the extensions discussed above, it can be shown that [math]\displaystyle{ f^{+}(\mathbf{x}) \geq F(\mathbf{x}) \geq f^{-}(\mathbf{x})=f^L(\mathbf{x}) }[/math] when [math]\displaystyle{ f }[/math] is submodular.[12]
Properties
- The class of submodular functions is closed under non-negative linear combinations. Consider any submodular function [math]\displaystyle{ f_1,f_2,\ldots,f_k }[/math] and non-negative numbers [math]\displaystyle{ \alpha_1,\alpha_2,\ldots,\alpha_k }[/math]. Then the function [math]\displaystyle{ g }[/math] defined by [math]\displaystyle{ g(S)=\sum_{i=1}^k \alpha_i f_i(S) }[/math] is submodular.
- For any submodular function [math]\displaystyle{ f }[/math], the function defined by [math]\displaystyle{ g(S)=f(\Omega \setminus S) }[/math] is submodular.
- The function [math]\displaystyle{ g(S)=\min(f(S),c) }[/math], where [math]\displaystyle{ c }[/math] is a real number, is submodular whenever [math]\displaystyle{ f }[/math] is monotone submodular. More generally, [math]\displaystyle{ g(S)=h(f(S)) }[/math] is submodular for any non-decreasing concave function [math]\displaystyle{ h }[/math].
- Consider a random process where a set [math]\displaystyle{ T }[/math] is chosen by including each element of [math]\displaystyle{ \Omega }[/math] in [math]\displaystyle{ T }[/math] independently with probability [math]\displaystyle{ p }[/math]. Then the following inequality holds: [math]\displaystyle{ \mathbb{E}[f(T)]\geq p f(\Omega)+(1-p) f(\varnothing) }[/math], where [math]\displaystyle{ \varnothing }[/math] is the empty set. More generally, consider the following random process where a set [math]\displaystyle{ S }[/math] is constructed as follows. For each [math]\displaystyle{ 1\leq i\leq l }[/math], given [math]\displaystyle{ A_i\subseteq \Omega }[/math], construct [math]\displaystyle{ S_i }[/math] by including each element of [math]\displaystyle{ A_i }[/math] in [math]\displaystyle{ S_i }[/math] independently with probability [math]\displaystyle{ p_i }[/math], and let [math]\displaystyle{ S=\cup_{i=1}^l S_i }[/math]. Then the following inequality holds: [math]\displaystyle{ \mathbb{E}[f(S)]\geq \sum_{R\subseteq [l]} \prod_{i\in R}p_i \prod_{i\notin R}(1-p_i)f(\cup_{i\in R}A_i) }[/math].[citation needed]
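The first sampling inequality above can be verified exactly on a small instance by enumerating all outcomes of the random process; the coverage function below is illustrative toy data:

```python
# Exact check (by enumeration over all outcomes) of the inequality
# E[f(T)] >= p * f(Omega) + (1 - p) * f(empty set).
E = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}}   # illustrative coverage instance
Omega = list(E)

def f(S):
    return len(set().union(*(E[i] for i in S))) if S else 0

def expected_value(p):
    """E[f(T)] when each element enters T independently with probability p."""
    total = 0.0
    for bits in range(2 ** len(Omega)):
        T = frozenset(Omega[i] for i in range(len(Omega)) if bits >> i & 1)
        prob = 1.0
        for i in range(len(Omega)):
            prob *= p if Omega[i] in T else 1.0 - p
        total += prob * f(T)
    return total

p = 0.5
# Here E[f(T)] = 2.5 while the bound p*f(Omega) + (1-p)*f({}) = 2.0.
print(expected_value(p) >= p * f(set(Omega)) + (1 - p) * f(set()))  # True
```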
Optimization problems
Submodular functions have properties very similar to those of convex and concave functions. For this reason, many optimization problems usually stated for convex or concave functions can also be posed as the problem of maximizing or minimizing a submodular function subject to some constraints.
Submodular set function minimization
The hardness of minimizing a submodular set function depends on constraints imposed on the problem.
- The unconstrained problem of minimizing a submodular function is computable in polynomial time,[13][14] and even in strongly-polynomial time.[15][16] Computing the minimum cut in a graph is a special case of this minimization problem.
- The problem of minimizing a submodular function with a cardinality lower bound is NP-hard, with polynomial factor lower bounds on the approximation factor.[17][18]
Submodular set function maximization
Unlike the case of minimization, maximizing a generic submodular function is NP-hard even in the unconstrained setting. Thus, most work in this field is concerned with polynomial-time approximation algorithms, including greedy algorithms and local search algorithms.
- The problem of maximizing a non-negative submodular function admits a 1/2 approximation algorithm.[19][20] Computing the maximum cut of a graph is a special case of this problem.
- The problem of maximizing a monotone submodular function subject to a cardinality constraint admits a [math]\displaystyle{ 1 - 1/e }[/math] approximation algorithm.[21][page needed][22] The maximum coverage problem is a special case of this problem.
- The problem of maximizing a monotone submodular function subject to a matroid constraint (which subsumes the case above) also admits a [math]\displaystyle{ 1 - 1/e }[/math] approximation algorithm.[23][24][25]
Many of these algorithms can be unified within a semi-differential based framework of algorithms.[18]
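The greedy algorithm behind the [math]\displaystyle{ 1 - 1/e }[/math] guarantee for cardinality-constrained monotone maximization is simple enough to sketch directly: repeatedly add the element with the largest marginal gain. The instance below is a toy max-coverage example (illustrative data):

```python
# Greedy algorithm for max f(S) s.t. |S| <= k, f monotone submodular.
# Achieves a (1 - 1/e) approximation (Nemhauser, Wolsey and Fisher, 1978).
def greedy_max(f, ground, k):
    S = set()
    for _ in range(k):
        # Pick the element with the largest marginal gain f(S + e) - f(S).
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S), default=None)
        if best is None:
            break
        S.add(best)
    return S

# Toy max-coverage instance:
E = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5}}
f = lambda S: len(set().union(*(E[i] for i in S))) if S else 0
S = greedy_max(f, set(E), 2)
print(sorted(S), f(S))  # [1, 3] 5
```

On this instance the greedy solution happens to be optimal (it covers all 5 elements); in general only the [math]\displaystyle{ 1 - 1/e }[/math] fraction is guaranteed, and that bound is tight for max coverage unless P = NP.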
Related optimization problems
Apart from submodular minimization and maximization, there are several other natural optimization problems related to submodular functions.
- Minimizing the difference between two submodular functions[26] is not only NP-hard, but also inapproximable.[27]
- Minimization/maximization of a submodular function subject to a submodular level set constraint (also known as submodular optimization subject to submodular cover or submodular knapsack constraint) admits bounded approximation guarantees.[28]
- Partitioning data based on a submodular function to maximize the average welfare is known as the submodular welfare problem, which also admits bounded approximation guarantees (see welfare maximization).
Applications
Submodular functions naturally occur in several real-world applications, in economics, game theory, machine learning and computer vision.[4][29] Owing to the diminishing-returns property, submodular functions naturally model costs of items, since there is often a larger discount as one buys more items. Submodular functions model notions of complexity, similarity and cooperation when they appear in minimization problems. In maximization problems, on the other hand, they model notions of diversity, information and coverage.
Citations
- ↑ H. Lin and J. Bilmes, A Class of Submodular Functions for Document Summarization, ACL-2011.
- ↑ S. Tschiatschek, R. Iyer, H. Wei and J. Bilmes, Learning Mixtures of Submodular Functions for Image Collection Summarization, NIPS-2014.
- ↑ A. Krause and C. Guestrin, Near-optimal nonmyopic value of information in graphical models, UAI-2005.
- ↑ 4.0 4.1 A. Krause and C. Guestrin, Beyond Convexity: Submodularity in Machine Learning, Tutorial at ICML-2008
- ↑ (Schrijver 2003, §44, p. 766)
- ↑ Buchbinder, Niv; Feldman, Moran (2018). "Submodular Functions Maximization Problems". in Gonzalez, Teofilo F.. Handbook of Approximation Algorithms and Metaheuristics, Second Edition: Methodologies and Traditional Applications. Chapman and Hall/CRC. doi:10.1201/9781351236423. ISBN 9781351236423. https://www.taylorfrancis.com/chapters/edit/10.1201/9781351236423-42/submodular-functions-maximization-problems-niv-buchbinder-moran-feldman.
- ↑ "Information Processing and Learning". cmu. https://www.cs.cmu.edu/~aarti/Class/10704_Spring15/lecs/lec3.pdf.
- ↑ Fujishige (2005) p.22
- ↑ Lovász, L. (1983). "Submodular functions and convexity". Mathematical Programming the State of the Art. pp. 235–257. doi:10.1007/978-3-642-68874-4_10. ISBN 978-3-642-68876-8.
- ↑ Vondrak, Jan (2008-05-17). "Optimal approximation for the submodular welfare problem in the value oracle model". Proceedings of the fortieth annual ACM symposium on Theory of computing. STOC '08. New York, NY, USA: Association for Computing Machinery. pp. 67–74. doi:10.1145/1374376.1374389. ISBN 978-1-60558-047-0. https://doi.org/10.1145/1374376.1374389.
- ↑ Calinescu, Gruia; Chekuri, Chandra; Pál, Martin; Vondrák, Jan (January 2011). "Maximizing a Monotone Submodular Function Subject to a Matroid Constraint" (in en). SIAM Journal on Computing 40 (6): 1740–1766. doi:10.1137/080733991. ISSN 0097-5397. http://epubs.siam.org/doi/10.1137/080733991.
- ↑ Vondrák, Jan. "Polyhedral techniques in combinatorial optimization: Lecture 17". https://theory.stanford.edu/~jvondrak/CS369P/lec17.pdf.
- ↑ Grötschel, M.; Lovasz, L.; Schrijver, A. (1981). "The ellipsoid method and its consequences in combinatorial optimization". Combinatorica 1 (2): 169–197. doi:10.1007/BF02579273.
- ↑ Cunningham, W. H. (1985). "On submodular function minimization". Combinatorica 5 (3): 185–192. doi:10.1007/BF02579361.
- ↑ Iwata, S.; Fleischer, L.; Fujishige, S. (2001). "A combinatorial strongly polynomial algorithm for minimizing submodular functions". J. ACM 48 (4): 761–777. doi:10.1145/502090.502096.
- ↑ Schrijver, A. (2000). "A combinatorial algorithm minimizing submodular functions in strongly polynomial time". J. Combin. Theory Ser. B 80 (2): 346–355. doi:10.1006/jctb.2000.1989. https://ir.cwi.nl/pub/2108.
- ↑ Z. Svitkina and L. Fleischer, Submodular approximation: Sampling-based algorithms and lower bounds, SIAM Journal on Computing (2011).
- ↑ 18.0 18.1 R. Iyer, S. Jegelka and J. Bilmes, Fast Semidifferential based submodular function optimization, Proc. ICML (2013).
- ↑ U. Feige, V. Mirrokni and J. Vondrák, Maximizing non-monotone submodular functions, Proc. of 48th FOCS (2007), pp. 461–471.
- ↑ N. Buchbinder, M. Feldman, J. Naor and R. Schwartz, A tight linear time (1/2)-approximation for unconstrained submodular maximization, Proc. of 53rd FOCS (2012), pp. 649-658.
- ↑ Nemhauser, George; Wolsey, L. A.; Fisher, M. L. (1978). "An analysis of approximations for maximizing submodular set functions I". Mathematical Programming 14 (14): 265–294. doi:10.1007/BF01588971.
- ↑ Williamson, David P.. "Bridging Continuous and Discrete Optimization: Lecture 23". https://people.orie.cornell.edu/dpw/orie6334/lecture23.pdf.
- ↑ G. Calinescu, C. Chekuri, M. Pál and J. Vondrák, Maximizing a submodular set function subject to a matroid constraint, SIAM J. Comp. 40:6 (2011), 1740-1766.
- ↑ M. Feldman, J. Naor and R. Schwartz, A unified continuous greedy algorithm for submodular maximization, Proc. of 52nd FOCS (2011).
- ↑ Y. Filmus, J. Ward, A tight combinatorial algorithm for submodular maximization subject to a matroid constraint, Proc. of 53rd FOCS (2012), pp. 659-668.
- ↑ M. Narasimhan and J. Bilmes, A submodular-supermodular procedure with applications to discriminative structure learning, In Proc. UAI (2005).
- ↑ R. Iyer and J. Bilmes, Algorithms for Approximate Minimization of the Difference between Submodular Functions, In Proc. UAI (2012).
- ↑ R. Iyer and J. Bilmes, Submodular Optimization Subject to Submodular Cover and Submodular Knapsack Constraints, In Advances of NIPS (2013).
- ↑ J. Bilmes, Submodularity in Machine Learning Applications, Tutorial at AAAI-2015.
References
- Schrijver, Alexander (2003), Combinatorial Optimization, Springer, ISBN 3-540-44389-4
- Lee, Jon (2004), A First Course in Combinatorial Optimization, Cambridge University Press, ISBN 0-521-01012-8
- Fujishige, Satoru (2005), Submodular Functions and Optimization, Elsevier, ISBN 0-444-52086-4
- Narayanan, H. (1997), Submodular Functions and Electrical Networks, Elsevier, ISBN 0-444-82523-1
- Oxley, James G. (1992), Matroid theory, Oxford Science Publications, Oxford: Oxford University Press, ISBN 0-19-853563-5
External links
- http://www.cs.berkeley.edu/~stefje/references.html has a longer bibliography
- http://submodularity.org/ includes further material on the subject
Original source: https://en.wikipedia.org/wiki/Submodular set function.