Rademacher complexity

In computational learning theory (machine learning and theory of computation), Rademacher complexity, named after Hans Rademacher, measures the richness of a class of sets with respect to a probability distribution. The concept can also be extended to real-valued functions.

Definitions

Rademacher complexity of a set

Given a set [math]\displaystyle{ A\subseteq \mathbb{R}^m }[/math], the Rademacher complexity of A is defined as follows:[1][2]:326

[math]\displaystyle{ \operatorname{Rad}(A) := \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{a \in A} \sum_{i=1}^m \sigma_i a_i \right] }[/math]

where [math]\displaystyle{ \sigma_1, \sigma_2, \dots, \sigma_m }[/math] are independent random variables drawn from the Rademacher distribution, i.e. [math]\displaystyle{ \Pr(\sigma_i = +1) = \Pr(\sigma_i = -1) = 1/2 }[/math] for [math]\displaystyle{ i=1,2,\dots,m }[/math], and [math]\displaystyle{ a=(a_1, \ldots, a_m) }[/math]. Some authors take the absolute value of the sum before taking the supremum, but if [math]\displaystyle{ A }[/math] is symmetric this makes no difference.
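
For a finite set [math]\displaystyle{ A }[/math] with small [math]\displaystyle{ m }[/math], this expectation can be computed exactly by enumerating all [math]\displaystyle{ 2^m }[/math] sign vectors. The following minimal Python (NumPy) sketch does so; the helper name rademacher_complexity is ours, introduced only for illustration.

```python
import itertools
import numpy as np

def rademacher_complexity(A):
    """Exact Rad(A) for a finite set A of vectors in R^m (feasible only for small m).

    A is given as an array of shape (number of vectors, m).  All 2^m Rademacher
    sign vectors sigma are enumerated; for each, the supremum of sigma . a over
    a in A is taken, and the results are averaged and divided by m, matching the
    definition above.
    """
    A = np.asarray(A, dtype=float)
    m = A.shape[1]
    total = 0.0
    for signs in itertools.product([-1.0, 1.0], repeat=m):
        sigma = np.array(signs)
        total += np.max(A @ sigma)   # sup over a in A of sum_i sigma_i * a_i
    return total / (2 ** m) / m      # average over all sign vectors, then the 1/m factor
```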

Rademacher complexity of a function class

Let [math]\displaystyle{ S=(z_1, z_2, \dots, z_m) \in Z^m }[/math] be a sample of points and consider a function class [math]\displaystyle{ \mathcal{F} }[/math] of real-valued functions over [math]\displaystyle{ Z }[/math]. Then, the empirical Rademacher complexity of [math]\displaystyle{ \mathcal{F} }[/math] given [math]\displaystyle{ S }[/math] is defined as:

[math]\displaystyle{ \operatorname{Rad}_S(\mathcal{F}) = \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{f \in \mathcal{F}} \sum_{i=1}^m \sigma_i f(z_i) \right] }[/math]

This can also be written using the previous definition:[2]:326

[math]\displaystyle{ \operatorname{Rad}_S(\mathcal{F}) = \operatorname{Rad}(\mathcal{F} \circ S) }[/math]

where [math]\displaystyle{ \mathcal{F} \circ S }[/math] denotes function composition, i.e.:

[math]\displaystyle{ \mathcal{F} \circ S := \{ (f(z_1),\ldots,f(z_m))\mid f\in \mathcal{F}\} }[/math]

Let [math]\displaystyle{ P }[/math] be a probability distribution over [math]\displaystyle{ Z }[/math]. The Rademacher complexity of the function class [math]\displaystyle{ \mathcal{F} }[/math] with respect to [math]\displaystyle{ P }[/math] for sample size [math]\displaystyle{ m }[/math] is:

[math]\displaystyle{ \operatorname{Rad}_{P,m}(\mathcal{F}) := \mathbb{E}_{S\sim P^m} \left[ \operatorname{Rad}_S(\mathcal{F}) \right] }[/math]

where the above expectation is taken over an independent and identically distributed (i.i.d.) sample [math]\displaystyle{ S=(z_1, z_2, \dots, z_m) }[/math] generated according to [math]\displaystyle{ P }[/math].
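
The empirical Rademacher complexity can also be estimated by Monte Carlo sampling of the sign vectors. The sketch below does this for one specific, assumed function class, the linear functions [math]\displaystyle{ x \mapsto w\cdot x }[/math] with [math]\displaystyle{ \|w\|_2\leq 1 }[/math], for which the supremum has the closed form [math]\displaystyle{ \|\sum_i \sigma_i x_i\|_2 }[/math]; the helper name empirical_rademacher_linear is ours.

```python
import numpy as np

def empirical_rademacher_linear(X, n_draws=10_000, rng=None):
    """Monte Carlo estimate of Rad_S(F) for the illustrative class
    F = { x -> w . x : ||w||_2 <= 1 }, on the sample X of shape (m, n).

    For this class the supremum over f has the closed form
        sup_{||w||_2 <= 1} sum_i sigma_i (w . x_i) = || sum_i sigma_i x_i ||_2,
    so only that norm has to be averaged over random sign vectors.
    """
    rng = np.random.default_rng(rng)
    m = X.shape[0]
    sigmas = rng.choice([-1.0, 1.0], size=(n_draws, m))  # i.i.d. Rademacher signs
    sups = np.linalg.norm(sigmas @ X, axis=1)            # ||sum_i sigma_i x_i||_2 per draw
    return sups.mean() / m

# Example: a sample of m = 50 points in R^5 drawn from a standard normal "P".
X = np.random.default_rng(0).normal(size=(50, 5))
print(empirical_rademacher_linear(X))
```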

Intuition

The Rademacher complexity is typically applied to a function class of models used for classification, with the goal of measuring their ability to classify points drawn from a probability space under arbitrary labellings. When the function class is rich enough, it contains functions that can adapt to each arrangement of labels simulated by the random draw of [math]\displaystyle{ \sigma_i }[/math] under the expectation, so the supremum in the definition is large on average.

Examples

1. [math]\displaystyle{ A }[/math] contains a single vector, e.g., [math]\displaystyle{ A = \{(a,b)\} \subset \mathbb{R}^2 }[/math]. Then:

[math]\displaystyle{ \operatorname{Rad}(A) = {1\over 2}\cdot \left({1\over 4}\cdot(a+b) + {1\over 4}\cdot(a-b) + {1\over 4}\cdot(-a+b) + {1\over 4}\cdot(-a-b)\right) = 0 }[/math]

The same is true for every singleton hypothesis class.[3]:56

2. [math]\displaystyle{ A }[/math] contains two vectors, e.g., [math]\displaystyle{ A = \{(1,1),(1,2)\} \subset \mathbb{R}^2 }[/math]. Then:

[math]\displaystyle{ \begin{align} \operatorname{Rad}(A) & = {1\over 2}\cdot \left({1\over 4}\cdot\max(1+1, 1+2) + {1\over 4}\cdot\max(1-1, 1-2) + {1\over 4} \cdot \max(-1+1, -1+2) + {1\over 4}\cdot\max(-1-1, -1-2)\right) \\[5pt] & = {1\over 8}(3+0+1-2) = {1\over 4} \end{align} }[/math]
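
Both worked examples can be reproduced with the rademacher_complexity sketch from the definitions section (a quick check, assuming that sketch is in scope):

```python
import numpy as np

# Reproducing the two worked examples with the rademacher_complexity sketch above.
A_singleton = np.array([[3.0, -2.0]])          # any singleton {(a, b)}
print(rademacher_complexity(A_singleton))      # 0.0

A_pair = np.array([[1.0, 1.0], [1.0, 2.0]])    # the set {(1,1), (1,2)}
print(rademacher_complexity(A_pair))           # 0.25
```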

Using the Rademacher complexity

The Rademacher complexity can be used to derive data-dependent upper bounds on the learnability of function classes. Intuitively, a function class with smaller Rademacher complexity is easier to learn.

Bounding the representativeness

In machine learning, it is desired to have a training set that represents the true distribution of some sample data [math]\displaystyle{ S }[/math]. This can be quantified using the notion of representativeness. Denote by [math]\displaystyle{ P }[/math] the probability distribution from which the samples are drawn. Denote by [math]\displaystyle{ H }[/math] the set of hypotheses (potential classifiers) and denote by [math]\displaystyle{ F }[/math] the corresponding set of error functions, i.e., for every hypothesis [math]\displaystyle{ h\in H }[/math], there is a function [math]\displaystyle{ f_h\in F }[/math] that maps each training sample (features, label) to the error of the classifier [math]\displaystyle{ h }[/math] (note that in this case hypothesis and classifier are used interchangeably). For example, in the case that [math]\displaystyle{ h }[/math] represents a binary classifier, the error function is a 0–1 loss function, i.e. the error function [math]\displaystyle{ f_h }[/math] returns 0 if [math]\displaystyle{ h }[/math] correctly classifies a sample and 1 otherwise. We omit the index and write [math]\displaystyle{ f }[/math] instead of [math]\displaystyle{ f_h }[/math] when the underlying hypothesis is irrelevant. Define:

[math]\displaystyle{ L_P(f) := \mathbb E_{z\sim P}[f(z)] }[/math] – the expected error of some error function [math]\displaystyle{ f\in F }[/math] on the real distribution [math]\displaystyle{ P }[/math];
[math]\displaystyle{ L_S(f) := {1\over m} \sum_{i=1}^m f(z_i) }[/math] – the estimated error of some error function [math]\displaystyle{ f\in F }[/math] on the sample [math]\displaystyle{ S }[/math].

The representativeness of the sample [math]\displaystyle{ S }[/math], with respect to [math]\displaystyle{ P }[/math] and [math]\displaystyle{ F }[/math], is defined as:

[math]\displaystyle{ \operatorname{Rep}_P(F,S) := \sup_{f\in F} (L_P(f) - L_S(f)) }[/math]

Smaller representativeness is better, since it provides a way to avoid overfitting: it means that the true error of a classifier is not much higher than its estimated error, so selecting a classifier that has low estimated error ensures that the true error is also low. Note however that the concept of representativeness is relative and hence cannot be compared across distinct samples.

The expected representativeness of a sample can be bounded above by the Rademacher complexity of the function class:[2]:326

[math]\displaystyle{ \mathbb E_{S\sim P^m} [\operatorname{Rep}_P(F,S)] \leq 2 \cdot \mathbb E_{S\sim P^m} [\operatorname{Rad}(F\circ S)] }[/math]
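
A small simulation can illustrate this bound. The setup below is our own illustrative choice, not taken from the text: threshold classifiers on the unit interval with 0–1 loss and uniformly distributed data, for which the true error of each hypothesis is known in closed form, so both sides of the inequality can be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: h_t(x) = 1[x >= t] on [0, 1], x ~ Uniform[0, 1], true label
# 1[x >= 0.5], 0-1 loss.  The true error of h_t is then exactly |t - 0.5|.
thresholds = np.linspace(0.0, 1.0, 21)     # finite hypothesis class
m = 30                                      # sample size
n_samples, n_sigma = 200, 2000              # Monte Carlo repetitions

def loss_matrix(x):
    """Row k = loss vector (f_{h_{t_k}}(z_1), ..., f_{h_{t_k}}(z_m)) of 0-1 losses."""
    y = (x >= 0.5).astype(float)
    preds = (x[None, :] >= thresholds[:, None]).astype(float)
    return (preds != y[None, :]).astype(float)

def empirical_rad(L):
    """Monte Carlo estimate of Rad(F o S) for the finite set of loss vectors L."""
    sigmas = rng.choice([-1.0, 1.0], size=(n_sigma, L.shape[1]))
    return np.max(sigmas @ L.T, axis=1).mean() / L.shape[1]

reps, rads = [], []
for _ in range(n_samples):
    x = rng.uniform(size=m)
    L = loss_matrix(x)
    true_err = np.abs(thresholds - 0.5)      # L_P(f) for each hypothesis
    emp_err = L.mean(axis=1)                 # L_S(f) for each hypothesis
    reps.append(np.max(true_err - emp_err))  # Rep_P(F, S)
    rads.append(empirical_rad(L))

print(np.mean(reps), 2 * np.mean(rads))      # expect: first value <= second value
```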

Bounding the generalization error

When the Rademacher complexity is small, it is possible to learn the hypothesis class H using empirical risk minimization.

For example, with a binary error function,[2]:328 for every [math]\displaystyle{ \delta\gt 0 }[/math], with probability at least [math]\displaystyle{ 1-\delta }[/math], the following holds simultaneously for every hypothesis [math]\displaystyle{ h\in H }[/math]:

[math]\displaystyle{ L_P(h) - L_S(h) \leq 2 \operatorname{Rad}(F\circ S) + 4 \sqrt{2\ln(4/\delta)\over m} }[/math]

Bounding the Rademacher complexity

Since smaller Rademacher complexity is better, it is useful to have upper bounds on the Rademacher complexity of various function sets. The following rules can be used to upper-bound the Rademacher complexity of a set [math]\displaystyle{ A \subset \mathbb{R}^m }[/math].[2]:329–330

1. If all vectors in [math]\displaystyle{ A }[/math] are translated by a constant vector [math]\displaystyle{ a_0 \in \mathbb{R}^m }[/math], then Rad(A) does not change.

2. If all vectors in [math]\displaystyle{ A }[/math] are multiplied by a scalar [math]\displaystyle{ c\in \mathbb{R} }[/math], then Rad(A) is multiplied by [math]\displaystyle{ |c| }[/math].

3. [math]\displaystyle{ \operatorname{Rad}(A+B) = \operatorname{Rad}(A) + \operatorname{Rad}(B) }[/math], where [math]\displaystyle{ A+B := \{a+b \mid a\in A, b\in B\} }[/math].[3]:56

4. (Kakade & Tewari Lemma) If all vectors in [math]\displaystyle{ A }[/math] are operated by a Lipschitz function, then Rad(A) is (at most) multiplied by the Lipschitz constant of the function. In particular, if all vectors in [math]\displaystyle{ A }[/math] are operated by a contraction mapping, then Rad(A) strictly decreases.

5. The Rademacher complexity of the convex hull of [math]\displaystyle{ A }[/math] equals Rad(A).

6. (Massart Lemma) The Rademacher complexity of a finite set grows logarithmically with the set size. Formally, let [math]\displaystyle{ A }[/math] be a set of [math]\displaystyle{ N }[/math] vectors in [math]\displaystyle{ \mathbb{R}^m }[/math], and let [math]\displaystyle{ \bar{a} }[/math] be the mean of the vectors in [math]\displaystyle{ A }[/math]. Then:

[math]\displaystyle{ \operatorname{Rad}(A) \leq \max_{a\in A} \|a-\bar{a}\| \cdot {\sqrt{2\log N}\over m} }[/math]

In particular, if [math]\displaystyle{ A }[/math] is a set of binary vectors, the norm is at most [math]\displaystyle{ \sqrt{m} }[/math], so:

[math]\displaystyle{ \operatorname{Rad}(A) \leq \sqrt{2\log N \over m} }[/math]
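
Massart's bound for binary vectors can be checked numerically by reusing the exact rademacher_complexity sketch from above (the set A here is random and purely illustrative):

```python
import numpy as np

# Numerical check of Massart's bound sqrt(2 log N / m) for binary vectors.
rng = np.random.default_rng(1)
m, N = 10, 8
A = rng.integers(0, 2, size=(N, m)).astype(float)   # N binary vectors in {0,1}^m
print(rademacher_complexity(A))                      # exact value
print(np.sqrt(2 * np.log(N) / m))                    # Massart upper bound
```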

Bounds related to the VC dimension

Let [math]\displaystyle{ H }[/math] be a set family whose VC dimension is [math]\displaystyle{ d }[/math]. It is known that the growth function of [math]\displaystyle{ H }[/math] is bounded as:

for all [math]\displaystyle{ m\gt d+1 }[/math]: [math]\displaystyle{ \operatorname{Growth}(H,m)\leq (em/d)^d }[/math]

This means that, for every set [math]\displaystyle{ h }[/math] with at most [math]\displaystyle{ m }[/math] elements, [math]\displaystyle{ |H\cap h|\leq (em/d)^d }[/math], where [math]\displaystyle{ H\cap h := \{g\cap h\mid g\in H\} }[/math]. Identifying each set in [math]\displaystyle{ H\cap h }[/math] with its indicator vector on the elements of [math]\displaystyle{ h }[/math], the set-family [math]\displaystyle{ H\cap h }[/math] can be considered as a set of binary vectors in [math]\displaystyle{ \mathbb{R}^m }[/math]. Substituting this in Massart's lemma gives:

[math]\displaystyle{ \operatorname{Rad}(H\cap h) \leq {\sqrt{2 d \log(em/d) \over m}} }[/math]

With more advanced techniques (Dudley's entropy bound and Haussler's upper bound[4]) one can show, for example, that there exists a constant [math]\displaystyle{ C }[/math], such that any class of [math]\displaystyle{ \{0,1\} }[/math]-indicator functions with Vapnik–Chervonenkis dimension [math]\displaystyle{ d }[/math] has Rademacher complexity upper-bounded by [math]\displaystyle{ C\sqrt{\frac{d}{m}} }[/math].

Bounds related to linear classes

The following bounds are related to linear operations on [math]\displaystyle{ S }[/math] – a constant set of [math]\displaystyle{ m }[/math] vectors in [math]\displaystyle{ \mathbb{R}^n }[/math].[2]:332–333

1. Define [math]\displaystyle{ A_2 = \{(w\cdot x_1,\ldots,w\cdot x_m) \mid \|w\|_2\leq 1\} = }[/math] the set of dot-products of the vectors in [math]\displaystyle{ S }[/math] with vectors in the unit ball. Then:

[math]\displaystyle{ \operatorname{Rad}(A_2) \leq {\max_i\|x_i\|_2 \over \sqrt{m}} }[/math]

2. Define [math]\displaystyle{ A_1 = \{(w\cdot x_1,\ldots,w\cdot x_m) \mid \|w\|_1\leq 1\} = }[/math] the set of dot-products of the vectors in [math]\displaystyle{ S }[/math] with vectors in the unit ball of the 1-norm. Then:

[math]\displaystyle{ \operatorname{Rad}(A_1) \leq \max_i\|x_i\|_\infty\cdot \sqrt{2\log(2n) \over m} }[/math]
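
Bound 1 can be checked numerically with the empirical_rademacher_linear sketch from earlier (the sample X below is arbitrary illustrative data):

```python
import numpy as np

# Comparing the Monte Carlo estimate of Rad(A_2) with the 2-norm bound.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))                             # m = 100 vectors in R^10
print(empirical_rademacher_linear(X, rng=3))               # estimate of Rad(A_2)
print(np.max(np.linalg.norm(X, axis=1)) / np.sqrt(100))    # bound max_i ||x_i||_2 / sqrt(m)
```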

Bounds related to covering numbers

The following bound relates the Rademacher complexity of a set [math]\displaystyle{ A }[/math] to its external covering number – the number of balls of a given radius [math]\displaystyle{ r }[/math] whose union contains [math]\displaystyle{ A }[/math]. The bound is attributed to Dudley.[2]:338

Suppose [math]\displaystyle{ A\subset \mathbb{R}^m }[/math] is a set of vectors whose length (norm) is at most [math]\displaystyle{ c }[/math]. Then, for every integer [math]\displaystyle{ M\gt 0 }[/math]:

[math]\displaystyle{ \operatorname{Rad}(A) \leq {c\cdot 2^{-M}\over \sqrt{m}} + {6c \over m}\cdot \sum_{i=1}^M 2^{-i}\sqrt{\log\left(N^{\text{ext}}_{c\cdot 2^{-i}}(A)\right)} }[/math]

In particular, if [math]\displaystyle{ A }[/math] lies in a d-dimensional subspace of [math]\displaystyle{ \mathbb{R}^m }[/math], then:

[math]\displaystyle{ \forall r\gt 0: N^{\text{ext}}_r(A) \leq (2 c \sqrt{d}/r)^d }[/math]

Substituting this in the previous bound gives the following bound on the Rademacher complexity:

[math]\displaystyle{ \operatorname{Rad}(A) \leq {6c \over m}\cdot \bigg(\sqrt{d\log(2\sqrt{d})} + 2\sqrt{d}\bigg) = O\bigg({c\sqrt{d\log(d)}\over m}\bigg) }[/math]

Gaussian complexity

Gaussian complexity is a similar complexity measure with a similar meaning, obtained by replacing the Rademacher variables [math]\displaystyle{ \sigma_i }[/math] in the definition with Gaussian i.i.d. random variables [math]\displaystyle{ g_i }[/math] with zero mean and variance 1, i.e. [math]\displaystyle{ g_i \sim \mathcal{N}(0,1) }[/math]. Gaussian and Rademacher complexities are known to be equivalent up to logarithmic factors.

Equivalence of Rademacher and Gaussian complexity

Given a set [math]\displaystyle{ A\subseteq\mathbb{R}^n }[/math], the following holds:[5]
[math]\displaystyle{ \frac{G(A)}{2\sqrt{\log{n}}} \leq \text{Rad}(A) \leq \sqrt{\frac{\pi}{2}}G(A) }[/math]
where [math]\displaystyle{ G(A) }[/math] is the Gaussian complexity of A. As an example, consider the Rademacher and Gaussian complexities of the unit ball of the 1-norm in [math]\displaystyle{ \mathbb{R}^n }[/math]. Its Rademacher complexity (without the [math]\displaystyle{ 1/m }[/math] normalization) is exactly 1, whereas its Gaussian complexity is of order [math]\displaystyle{ \sqrt{\log n} }[/math] (which can be shown by applying known properties of suprema of sets of subgaussian random variables).[5]
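
This comparison is easy to reproduce numerically. The sketch below also omits the common [math]\displaystyle{ 1/m }[/math] normalization, which scales both complexities by the same factor and so does not affect the comparison:

```python
import numpy as np

# Illustrating the l1-ball example.  For A = {a : ||a||_1 <= 1} in R^n,
# sup_{a in A} v . a = ||v||_inf, so the (unnormalized) Rademacher complexity
# equals E ||sigma||_inf = 1 exactly, while the Gaussian analogue E ||g||_inf
# grows roughly like sqrt(2 log n).
rng = np.random.default_rng(4)
for n in (10, 100, 1000):
    sigma = rng.choice([-1.0, 1.0], size=(5000, n))
    g = rng.normal(size=(5000, n))
    print(n,
          np.abs(sigma).max(axis=1).mean(),   # always exactly 1
          np.abs(g).max(axis=1).mean(),       # grows roughly like sqrt(2 log n)
          np.sqrt(2 * np.log(n)))
```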

References

  1. Balcan, Maria-Florina (November 15–17, 2011). "Machine Learning Theory – Rademacher Complexity". https://www.cs.cmu.edu/~ninamf/ML11/lect1117.pdf. Retrieved 10 December 2016. 
  2. Chapter 26 in Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning – from Theory to Algorithms. Cambridge University Press. ISBN 9781107057135. 
  3. Mohri, Mehryar; Rostamizadeh, Afshin; Talwalkar, Ameet (2012). Foundations of Machine Learning. MIT Press. ISBN 9780262018258. 
  4. Bousquet, O. (2004). "Introduction to Statistical Learning Theory". Advanced Lectures on Machine Learning, Lecture Notes in Computer Science, vol. 3176, pp. 169–207. doi:10.1007/978-3-540-28650-9_8
  5. Wainwright, Martin J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press. Exercise 5.5. ISBN 978-1-108-62777-1. OCLC 1089254580. https://www.worldcat.org/oclc/1089254580. 
  • Peter L. Bartlett, Shahar Mendelson (2002) Rademacher and Gaussian Complexities: Risk Bounds and Structural Results. Journal of Machine Learning Research 3 463–482
  • Giorgio Gnecco, Marcello Sanguineti (2008) Approximation Error Bounds via Rademacher's Complexity. Applied Mathematical Sciences, Vol. 2, 2008, no. 4, 153–176