Indicator function



A three-dimensional plot of an indicator function, shown over a square two-dimensional domain (set X): the "raised" portion overlays those two-dimensional points which are members of the "indicated" subset (A).

In mathematics, an indicator function or a characteristic function of a subset of a set is a function that maps elements of the subset to one, and all other elements to zero. That is, if A is a subset of some set X, then [math]\displaystyle{ \mathbf{1}_{A}(x)=1 }[/math] if [math]\displaystyle{ x\in A, }[/math] and [math]\displaystyle{ \mathbf{1}_{A}(x)=0 }[/math] otherwise, where [math]\displaystyle{ \mathbf{1}_A }[/math] is a common notation for the indicator function. Other common notations are [math]\displaystyle{ I_A, }[/math] and [math]\displaystyle{ \chi_A. }[/math]

The indicator function of A is the Iverson bracket of the property of belonging to A; that is,

[math]\displaystyle{ \mathbf{1}_{A}(x)=[x\in A]. }[/math]

For example, the Dirichlet function is the indicator function of the rational numbers as a subset of the real numbers.

Definition

The indicator function of a subset A of a set X is a function

[math]\displaystyle{ \mathbf{1}_A \colon X \to \{ 0, 1 \} }[/math]

defined as

[math]\displaystyle{ \mathbf{1}_A(x) := \begin{cases} 1 ~&\text{ if }~ x \in A~, \\ 0 ~&\text{ if }~ x \notin A~. \end{cases} }[/math]

The Iverson bracket provides the equivalent notation [math]\displaystyle{ [x\in A] }[/math] or ⟦x ∈ A⟧ to be used instead of [math]\displaystyle{ \mathbf{1}_{A}(x)\,. }[/math]

The function [math]\displaystyle{ \mathbf{1}_A }[/math] is sometimes denoted IA, χA, KA, or even just A.[lower-alpha 1][lower-alpha 2]
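
For illustration, the definition translates directly into code; the following minimal Python sketch (the helper name and the example set are arbitrary choices) returns 1 on members of A and 0 elsewhere:

```python
def indicator(A):
    """Return the indicator function 1_A of the set A as a Python callable."""
    return lambda x: 1 if x in A else 0

# Example: the indicator of the even digits, viewed as a subset of {0, ..., 9}.
evens = {0, 2, 4, 6, 8}
one_evens = indicator(evens)
print([one_evens(x) for x in range(10)])  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```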

Notation and terminology

The notation [math]\displaystyle{ \chi_A }[/math] is also used to denote the characteristic function in convex analysis, which is defined differently from the indicator function described here: it takes the value 0 at points of [math]\displaystyle{ A }[/math] and the value [math]\displaystyle{ +\infty }[/math] at points outside [math]\displaystyle{ A }[/math].

A related concept in statistics is that of a dummy variable. (This should not be confused with a "dummy variable" in the usual mathematical sense, that is, a bound variable.)

The term "characteristic function" has an unrelated meaning in classic probability theory. For this reason, traditional probabilists use the term indicator function for the function defined here almost exclusively, while mathematicians in other fields are more likely to use the term characteristic function[lower-alpha 1] to describe the function that indicates membership in a set.

In fuzzy logic and modern many-valued logic, predicates are the characteristic functions of a probability distribution. That is, the strict true/false valuation of the predicate is replaced by a quantity interpreted as the degree of truth.

Basic properties

The indicator or characteristic function of a subset A of some set X maps elements of X to the range [math]\displaystyle{ \{0,1\} }[/math].

This mapping is surjective only when A is a non-empty proper subset of X. If [math]\displaystyle{ A \equiv X, }[/math] then [math]\displaystyle{ \mathbf{1}_A=1. }[/math] By a similar argument, if [math]\displaystyle{ A\equiv\emptyset }[/math] then [math]\displaystyle{ \mathbf{1}_A=0. }[/math]

If [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are two subsets of [math]\displaystyle{ X, }[/math] then [math]\displaystyle{ \begin{align} \mathbf{1}_{A\cap B} &= \min\{\mathbf{1}_A,\mathbf{1}_B\} = \mathbf{1}_A \cdot\mathbf{1}_B, \\ \mathbf{1}_{A\cup B} &= \max\{{\mathbf{1}_A,\mathbf{1}_B}\} = \mathbf{1}_A + \mathbf{1}_B - \mathbf{1}_A \cdot\mathbf{1}_B, \end{align} }[/math]

and the indicator function of the complement of [math]\displaystyle{ A, }[/math] that is [math]\displaystyle{ A^\complement, }[/math] is: [math]\displaystyle{ \mathbf{1}_{A^\complement} = 1-\mathbf{1}_A. }[/math]
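
These identities can be checked pointwise; the following Python sketch does so on a small, arbitrarily chosen universe (all set choices are illustrative):

```python
# Pointwise check of the intersection, union and complement identities on a small
# universe X with subsets A and B (all choices below are illustrative).
X = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def ind(S, x):
    """Indicator 1_S(x): 1 if x belongs to S, 0 otherwise."""
    return 1 if x in S else 0

for x in X:
    a, b = ind(A, x), ind(B, x)
    assert ind(A & B, x) == min(a, b) == a * b          # intersection
    assert ind(A | B, x) == max(a, b) == a + b - a * b  # union
    assert ind(X - A, x) == 1 - a                       # complement within X
print("all identities hold on this example")
```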

More generally, suppose [math]\displaystyle{ A_1, \dotsc, A_n }[/math] is a collection of subsets of X. For any [math]\displaystyle{ x \in X: }[/math]

[math]\displaystyle{ \prod_{k=1}^{n} ( 1 - \mathbf{1}_{A_k}(x)) }[/math]

is clearly a product of 0s and 1s. This product has the value 1 at precisely those [math]\displaystyle{ x \in X }[/math] that belong to none of the sets [math]\displaystyle{ A_k }[/math] and is 0 otherwise. That is,

[math]\displaystyle{ \prod_{k=1}^{n} ( 1 - \mathbf{1}_{A_k}) = \mathbf{1}_{X - \bigcup_{k} A_k} = 1 - \mathbf{1}_{\bigcup_{k} A_k}. }[/math]

Expanding the product on the left hand side,

[math]\displaystyle{ \mathbf{1}_{\bigcup_{k} A_k}= 1 - \sum_{F \subseteq \{1, 2, \dotsc, n\}} (-1)^{|F|} \mathbf{1}_{\bigcap_{k \in F} A_k} = \sum_{\emptyset \neq F \subseteq \{1, 2, \dotsc, n\}} (-1)^{|F|+1} \mathbf{1}_{\bigcap_{k \in F} A_k} }[/math]

where [math]\displaystyle{ |F| }[/math] is the cardinality of F. This is one form of the principle of inclusion-exclusion.
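
The identity can also be verified by direct enumeration, as in the following Python sketch (the universe and the three subsets are arbitrary illustrative choices):

```python
from itertools import combinations

# Enumerative check of the inclusion-exclusion identity for indicator functions.
X = set(range(12))
sets = [{1, 2, 3, 4}, {3, 4, 5}, {1, 5, 6, 7}]
n = len(sets)

def ind(S, x):
    return 1 if x in S else 0

union = set().union(*sets)
for x in X:
    # Right-hand side: sum over non-empty F of (-1)^(|F|+1) * 1_{intersection over k in F}.
    rhs = sum(
        (-1) ** (len(F) + 1) * ind(set.intersection(*(sets[k] for k in F)), x)
        for r in range(1, n + 1)
        for F in combinations(range(n), r)
    )
    assert ind(union, x) == rhs
print("inclusion-exclusion verified on this example")
```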

As suggested by the previous example, the indicator function is a useful notational device in combinatorics. The notation is used in other places as well, for instance in probability theory: if X is a probability space with probability measure [math]\displaystyle{ \operatorname{P} }[/math] and A is a measurable set, then [math]\displaystyle{ \mathbf{1}_A }[/math] becomes a random variable whose expected value is equal to the probability of A:

[math]\displaystyle{ \operatorname{E}(\mathbf{1}_A)= \int_{X} \mathbf{1}_A(x)\,d\operatorname{P} = \int_{A} d\operatorname{P} = \operatorname{P}(A). }[/math]

This identity is used in a simple proof of Markov's inequality.
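
A short Monte Carlo sketch in Python illustrates the identity [math]\displaystyle{ \operatorname{E}(\mathbf{1}_A) = \operatorname{P}(A) }[/math] (the event, a uniform draw falling below 0.3, is an arbitrary choice with known probability):

```python
import random

# Monte Carlo sketch of E(1_A) = P(A).  The event A = {U < 0.3} for a uniform
# draw U on [0, 1) is an illustrative choice with P(A) = 0.3.
random.seed(0)
n = 100_000
indicators = [1 if random.random() < 0.3 else 0 for _ in range(n)]
print(sum(indicators) / n)  # sample mean of 1_A, close to 0.3
```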

In many cases, such as order theory, the inverse of the indicator function may be defined. This is commonly called the generalized Möbius function, as a generalization of the inverse of the indicator function in elementary number theory, the Möbius function. (See paragraph below about the use of the inverse in classical recursion theory.)

Mean, variance and covariance

Given a probability space [math]\displaystyle{ \textstyle (\Omega, \mathcal F, \operatorname{P}) }[/math] with [math]\displaystyle{ A \in \mathcal F, }[/math] the indicator random variable [math]\displaystyle{ \mathbf{1}_A \colon \Omega \rightarrow \mathbb{R} }[/math] is defined by [math]\displaystyle{ \mathbf{1}_A (\omega) = 1 }[/math] if [math]\displaystyle{ \omega \in A, }[/math] otherwise [math]\displaystyle{ \mathbf{1}_A (\omega) = 0. }[/math]

Mean
[math]\displaystyle{ \operatorname{E}(\mathbf{1}_A (\omega)) = \operatorname{P}(A) }[/math] (also called "Fundamental Bridge").
Variance
[math]\displaystyle{ \operatorname{Var}(\mathbf{1}_A (\omega)) = \operatorname{P}(A)(1 - \operatorname{P}(A)) }[/math]
Covariance
[math]\displaystyle{ \operatorname{Cov}(\mathbf{1}_A (\omega), \mathbf{1}_B (\omega)) = \operatorname{P}(A \cap B) - \operatorname{P}(A)\operatorname{P}(B) }[/math]
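
A simulation sketch along the same lines (the events A and B below are arbitrary choices with known probabilities) recovers all three formulas:

```python
import random

# Simulation sketch of the mean, variance and covariance formulas.  For a uniform
# draw U on [0, 1), take A = {U < 0.5} and B = {U < 0.2} (illustrative), so that
# P(A) = 0.5, P(B) = 0.2 and P(A ∩ B) = 0.2 because B is contained in A.
random.seed(1)
n = 200_000
u = [random.random() for _ in range(n)]
ia = [1 if x < 0.5 else 0 for x in u]
ib = [1 if x < 0.2 else 0 for x in u]

mean_a = sum(ia) / n
mean_b = sum(ib) / n
var_a = sum((a - mean_a) ** 2 for a in ia) / n
cov_ab = sum((a - mean_a) * (b - mean_b) for a, b in zip(ia, ib)) / n

print(mean_a)  # ≈ P(A) = 0.5
print(var_a)   # ≈ P(A)(1 - P(A)) = 0.25
print(cov_ab)  # ≈ P(A ∩ B) - P(A)P(B) = 0.2 - 0.1 = 0.1
```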

Characteristic function in recursion theory, Gödel's and Kleene's representing function

Kurt Gödel described the representing function in his 1934 paper "On undecidable propositions of formal mathematical systems" (the "¬" indicates logical inversion, i.e. "NOT"):[1](p42)

There shall correspond to each class or relation R a representing function [math]\displaystyle{ \phi(x_1, \ldots x_n) = 0 }[/math] if [math]\displaystyle{ R(x_1,\ldots x_n) }[/math] and [math]\displaystyle{ \phi(x_1,\ldots x_n) = 1 }[/math] if [math]\displaystyle{ \neg R(x_1,\ldots x_n). }[/math]

Kleene offers the same definition in the context of the primitive recursive functions: the representing function φ of a predicate P takes the value 0 if the predicate is true and 1 if the predicate is false.[2]

For example, because the product of characteristic functions [math]\displaystyle{ \phi_1 * \phi_2 * \cdots * \phi_n = 0 }[/math] whenever any one of the functions equals 0, it plays the role of logical OR: IF [math]\displaystyle{ \phi_1 = 0 }[/math] OR [math]\displaystyle{ \phi_2 = 0 }[/math] OR ... OR [math]\displaystyle{ \phi_n = 0 }[/math] THEN their product is 0. What appears to the modern reader as the representing function's logical inversion, i.e. the representing function is 0 when the function R is "true" or "satisfied", plays a useful role in Kleene's definition of the logical functions OR, AND, and IMPLY,[2]:228 the bounded-[2]:228 and unbounded-[2]:279 ff mu operators, and the CASE function.[2]:229
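
This convention is easy to mimic in code; the following Python sketch (the predicates are chosen purely for illustration) builds representing functions that return 0 for "true" and shows their product acting as logical OR:

```python
# Sketch of Goedel/Kleene representing functions, where 0 encodes "true" and 1
# encodes "false" (the opposite of the modern indicator convention).
def representing(predicate):
    """Return the representing function of a predicate: 0 if it holds, 1 otherwise."""
    return lambda *args: 0 if predicate(*args) else 1

phi_even = representing(lambda n: n % 2 == 0)
phi_small = representing(lambda n: n < 10)

# The product of representing functions plays the role of logical OR:
# it is 0 ("true") as soon as at least one factor is 0.
for n in (3, 4, 15, 16):
    print(n, phi_even(n) * phi_small(n))  # 0 whenever n is even OR n < 10
```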

Characteristic function in fuzzy set theory

In classical mathematics, characteristic functions of sets only take values 1 (members) or 0 (non-members). In fuzzy set theory, characteristic functions are generalized to take values in the real unit interval [0, 1], or more generally, in some algebra or structure (usually required to be at least a poset or lattice). Such generalized characteristic functions are more usually called membership functions, and the corresponding "sets" are called fuzzy sets. Fuzzy sets model the gradual change in the membership degree seen in many real-world predicates like "tall", "warm", etc.
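
For instance, a membership function for the fuzzy predicate "tall" might be sketched in Python as follows (the thresholds of 150 cm and 190 cm are arbitrary illustrative choices):

```python
# A membership function for the fuzzy predicate "tall": degree 0 below 150 cm,
# degree 1 above 190 cm, linear in between (thresholds are illustrative).
def tall(height_cm):
    return min(1.0, max(0.0, (height_cm - 150.0) / 40.0))

print(tall(140), tall(170), tall(200))  # 0.0 0.5 1.0
```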

Smoothness

In general, the indicator function of a set is not smooth; it is continuous if and only if the set is both open and closed (clopen). In the algebraic geometry of finite fields, however, every affine variety admits a (Zariski) continuous indicator function.[3] Given a finite set of functions [math]\displaystyle{ f_\alpha \in \mathbb{F}_q[x_1,\ldots,x_n] }[/math] let [math]\displaystyle{ V = \left\{ x \in \mathbb{F}_q^n : f_\alpha(x) = 0 \text{ for all } \alpha \right\} }[/math] be their common vanishing locus. Then, the function [math]\displaystyle{ P(x) = \prod_\alpha\left(1 - f_\alpha(x)^{q-1}\right) }[/math] acts as an indicator function for [math]\displaystyle{ V }[/math]. If [math]\displaystyle{ x \in V }[/math] then [math]\displaystyle{ P(x) = 1 }[/math]; otherwise, for some [math]\displaystyle{ f_\alpha }[/math] we have [math]\displaystyle{ f_\alpha(x) \neq 0 }[/math], which implies that [math]\displaystyle{ f_\alpha(x)^{q-1} = 1 }[/math] and hence [math]\displaystyle{ P(x) = 0 }[/math].
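
A small computational sketch over the prime field [math]\displaystyle{ \mathbb{F}_7 }[/math] illustrates this construction (the two polynomials below are arbitrary illustrative choices):

```python
# Sketch over the prime field F_7.  By Fermat's little theorem, a**(q-1) is
# congruent to 1 mod q for every nonzero a, so P(x) = prod(1 - f(x)**(q-1))
# is 1 on the vanishing locus V and 0 elsewhere.
q = 7
fs = [lambda x, y: (x + y) % q, lambda x, y: (x * y + 1) % q]

def P(x, y):
    val = 1
    for f in fs:
        val = (val * (1 - pow(f(x, y), q - 1, q))) % q
    return val

V = [(x, y) for x in range(q) for y in range(q) if all(f(x, y) == 0 for f in fs)]
assert all(P(x, y) == (1 if (x, y) in V else 0) for x in range(q) for y in range(q))
print(V)  # the points of the variety; P equals 1 exactly there
```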

Although indicator functions are not smooth, they admit weak derivatives. For example, consider the Heaviside step function [math]\displaystyle{ H(x) := \mathbf{1}_{x \gt 0}. }[/math] The distributional derivative of the Heaviside step function is equal to the Dirac delta function, i.e. [math]\displaystyle{ \frac{d H(x)}{dx}=\delta(x), }[/math] and similarly the distributional derivative of [math]\displaystyle{ G(x) := \mathbf{1}_{x \lt 0} }[/math] is [math]\displaystyle{ \frac{d G(x)}{dx}=-\delta(x). }[/math]
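
This can be checked directly against a smooth, compactly supported test function [math]\displaystyle{ \varphi }[/math]: by the definition of the distributional derivative and integration by parts,

[math]\displaystyle{ \left\langle \frac{dH}{dx}, \varphi \right\rangle = -\int_{-\infty}^{\infty} H(x)\,\varphi'(x)\,dx = -\int_{0}^{\infty} \varphi'(x)\,dx = \varphi(0) = \langle \delta, \varphi \rangle. }[/math]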

Thus the derivative of the Heaviside step function can be seen as the inward normal derivative at the boundary of the domain given by the positive half-line. In higher dimensions, the derivative naturally generalises to the inward normal derivative, while the Heaviside step function naturally generalises to the indicator function of some domain D. The surface of D will be denoted by S. Proceeding, it can be derived that the inward normal derivative of the indicator gives rise to a 'surface delta function', which can be indicated by [math]\displaystyle{ \delta_S(\mathbf{x}) }[/math]: [math]\displaystyle{ \delta_S(\mathbf{x}) = -\mathbf{n}_x \cdot \nabla_x\mathbf{1}_{\mathbf{x}\in D} }[/math] where n is the outward normal of the surface S. This 'surface delta function' has the following property:[4] [math]\displaystyle{ -\int_{\R^n}f(\mathbf{x})\,\mathbf{n}_x\cdot\nabla_x\mathbf{1}_{\mathbf{x}\in D}\;d^{n}\mathbf{x} = \oint_{S}\,f(\mathbf{\beta})\;d^{n-1}\mathbf{\beta}. }[/math]

By setting the function f equal to one, it follows that the inward normal derivative of the indicator integrates to the numerical value of the surface area S.
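
A numerical Python sketch of this last fact, assuming a simple linear mollification of the indicator of a disk (all parameters are illustrative), recovers the circumference [math]\displaystyle{ 2\pi r }[/math]:

```python
import numpy as np

# Numerical sketch: replace the indicator of a disk of radius r with a linear ramp
# of half-width eps.  For this radially decreasing profile, |grad u| equals the
# inward normal derivative -n . grad u, and its integral over the plane
# approximates the boundary length 2*pi*r.
r, eps, h = 1.0, 0.05, 0.005
x = np.arange(-1.5, 1.5, h)
X, Y = np.meshgrid(x, x)
rho = np.sqrt(X**2 + Y**2)
u = np.clip((r + eps - rho) / (2 * eps), 0.0, 1.0)  # mollified indicator of the disk

gy, gx = np.gradient(u, h)                       # finite-difference gradient
surface = np.sum(np.sqrt(gx**2 + gy**2)) * h**2  # integral of |grad u|
print(surface, 2 * np.pi * r)                    # both close to 6.283...
```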

Notes

  1. 1.0 1.1 The Greek letter χ appears because it is the initial letter of the Greek word χαρακτήρ, which is the ultimate origin of the word characteristic.
  2. The set of all indicator functions on X can be identified with [math]\displaystyle{ \mathcal{P}(X), }[/math] the power set of X. Consequently, both sets are sometimes denoted by [math]\displaystyle{ 2^X. }[/math] This is a special case ([math]\displaystyle{ Y = \{0,1\} = 2 }[/math]) of the notation [math]\displaystyle{ Y^X }[/math] for the set of all functions [math]\displaystyle{ f:X \to Y. }[/math]

References

  1. Davis, Martin, ed (1965). The Undecidable. New York, NY: Raven Press Books. pp. 41–74. 
  2. 2.0 2.1 2.2 2.3 2.4 Kleene, Stephen (1971). Introduction to Metamathematics (Sixth reprint, with corrections ed.). Netherlands: Wolters-Noordhoff Publishing and North Holland Publishing Company. p. 227. 
  3. Serre. Course in Arithmetic. p. 5. 
  4. Lange, Rutger-Jan (2012). "Potential theory, path integrals and the Laplacian of the indicator". Journal of High Energy Physics 2012 (11): 29–30. doi:10.1007/JHEP11(2012)032. Bibcode2012JHEP...11..032L. 

Sources

  • Folland, G.B. (1999). Real Analysis: Modern Techniques and Their Applications (Second ed.). John Wiley & Sons, Inc. ISBN 978-0-471-31716-6. 
  • Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 5.2: Indicator random variables". Introduction to Algorithms (Second ed.). MIT Press and McGraw-Hill. pp. 94–99. ISBN 978-0-262-03293-3. 
  • Davis, Martin, ed (1965). The Undecidable. New York, NY: Raven Press Books. 
  • Kleene, Stephen (1971). Introduction to Metamathematics (Sixth reprint, with corrections ed.). Netherlands: Wolters-Noordhoff Publishing and North Holland Publishing Company. 
  • Boolos, George; Burgess, John P.; Jeffrey, Richard C. (2002). Computability and Logic. Cambridge UK: Cambridge University Press. ISBN 978-0-521-00758-0. 
  • Zadeh, L.A. (1965). "Fuzzy sets". Information and Control 8 (3): 338–353. Wikidata Q25938993
  • Goguen, Joseph (1967). "L-fuzzy sets". Journal of Mathematical Analysis and Applications 18 (1): 145–174. doi:10.1016/0022-247X(67)90189-8.