Wasserstein metric


In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space [math]\displaystyle{ M }[/math]. It is named after Leonid Vaseršteĭn.

Intuitively, if each distribution is viewed as a unit amount of earth (soil) piled on [math]\displaystyle{ M }[/math], the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. This problem was first formalised by Gaspard Monge in 1781. Because of this analogy, the metric is known in computer science as the earth mover's distance.

The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after learning of it in the work of Leonid Vaseršteĭn on Markov processes describing large systems of automata[1] (Russian, 1969). However the metric was first defined by Leonid Kantorovich in The Mathematical Method of Production Planning and Organization[2] (Russian original 1939) in the context of optimal transport planning of goods and materials. Some scholars thus encourage use of the terms "Kantorovich metric" and "Kantorovich distance". Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaseršteĭn" being of German origin).

Definition

Let [math]\displaystyle{ (M,d) }[/math] be a metric space for which every Borel probability measure on [math]\displaystyle{ M }[/math] is a Radon measure (a so-called Radon space). For [math]\displaystyle{ p\ge1 }[/math], let [math]\displaystyle{ P_p(M) }[/math] denote the collection of all probability measures [math]\displaystyle{ \mu }[/math] on [math]\displaystyle{ M }[/math] with finite [math]\displaystyle{ p^{\text{th}} }[/math] moment, that is, there exists some [math]\displaystyle{ x_0 }[/math] in [math]\displaystyle{ M }[/math] such that:

[math]\displaystyle{ \int_M d(x, x_0)^{p} \, \mathrm{d} \mu (x) \lt \infty. }[/math]

The [math]\displaystyle{ p^\text{th} }[/math] Wasserstein distance between two probability measures [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math] in [math]\displaystyle{ P_p(M) }[/math] is defined as

[math]\displaystyle{ W_p (\mu, \nu):=\left( \inf_{\gamma \in \Gamma (\mu, \nu)} \int_{M \times M} d(x, y)^p \, \mathrm{d} \gamma (x, y) \right)^{1/p}, }[/math]

where [math]\displaystyle{ \Gamma(\mu,\nu) }[/math] denotes the collection of all measures on [math]\displaystyle{ M \times M }[/math] with marginals [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math] on the first and second factors respectively. (The set [math]\displaystyle{ \Gamma(\mu,\nu) }[/math] is also called the set of all couplings of [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math].)

The above distance is usually denoted [math]\displaystyle{ W_p(\mu,\nu) }[/math] (typically among authors who prefer the "Wasserstein" spelling) or [math]\displaystyle{ \ell_p(\mu,\nu) }[/math] (typically among authors who prefer the "Vaserstein" spelling). The remainder of this article will use the [math]\displaystyle{ W_p }[/math] notation.

The Wasserstein metric may be equivalently defined by

[math]\displaystyle{ W_{p} (\mu, \nu) = \left( \inf \operatorname{\mathbf{E}} \big[ d( X , Y )^p \big] \right)^{1/p}, }[/math]

where [math]\displaystyle{ \mathbf{E}[Z] }[/math] denotes the expected value of a random variable [math]\displaystyle{ Z }[/math] and the infimum is taken over all joint distributions of the random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] with marginals [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math] respectively.
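As a computational illustration of this definition, the sketch below evaluates [math]\displaystyle{ W_p }[/math] between two discrete measures with finite support by solving the linear program over couplings directly with SciPy's linprog. The support points, weights and exponent are arbitrary example choices, and the helper function wasserstein_lp is introduced here only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(x, y, mu, nu, p=2):
    """p-Wasserstein distance between the discrete measures
    sum_i mu[i]*delta_{x[i]} and sum_j nu[j]*delta_{y[j]} on the real line,
    computed by minimising the transport cost over all couplings."""
    n, m = len(x), len(y)
    cost = np.abs(x[:, None] - y[None, :]) ** p      # d(x_i, y_j)^p
    # Decision variable: the coupling gamma, flattened row-major to length n*m.
    # Equality constraints: row sums of gamma equal mu, column sums equal nu.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0             # sum_j gamma[i, j] = mu[i]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                      # sum_i gamma[i, j] = nu[j]
    b_eq = np.concatenate([mu, nu])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun ** (1.0 / p)

# Example: mu = (delta_0 + delta_1)/2 and nu = (delta_1 + delta_2)/2.
x, mu = np.array([0.0, 1.0]), np.array([0.5, 0.5])
y, nu = np.array([1.0, 2.0]), np.array([0.5, 0.5])
print(wasserstein_lp(x, y, mu, nu, p=2))             # prints 1.0
```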

Intuition and connection to optimal transport

[Figure] Two one-dimensional distributions [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math], plotted on the x and y axes, and one possible joint distribution that defines a transport plan between them. The joint distribution/transport plan is not unique.

One way to understand the above definition is to consider the optimal transport problem. That is, for a distribution of mass [math]\displaystyle{ \mu(x) }[/math] on a space [math]\displaystyle{ X }[/math], we wish to transport the mass in such a way that it is transformed into the distribution [math]\displaystyle{ \nu(x) }[/math] on the same space; transforming the 'pile of earth' [math]\displaystyle{ \mu }[/math] to the pile [math]\displaystyle{ \nu }[/math]. This problem only makes sense if the pile to be created has the same mass as the pile to be moved; therefore without loss of generality assume that [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math] are probability distributions containing a total mass of 1. Assume also that there is given some cost function

[math]\displaystyle{ c \colon X \times X \to [0, \infty) }[/math]

that gives the cost of transporting a unit mass from the point [math]\displaystyle{ x }[/math] to the point [math]\displaystyle{ y }[/math]. A transport plan to move [math]\displaystyle{ \mu }[/math] into [math]\displaystyle{ \nu }[/math] can be described by a function [math]\displaystyle{ \gamma(x,y) }[/math] which gives the amount of mass to move from [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math]. One can imagine the task as moving a pile of earth of shape [math]\displaystyle{ \mu }[/math] into a hole in the ground of shape [math]\displaystyle{ \nu }[/math] so that, at the end, both the pile of earth and the hole in the ground completely vanish. For this plan to be meaningful, it must satisfy the following properties:

[math]\displaystyle{ \begin{align} \int \gamma(x,y) \,\mathrm{d} y = \mu(x) & \qquad \text{(the amount of earth moved out of point } x \text{ needs to equal the amount that was there to begin with)} \\ \int \gamma(x,y) \,\mathrm{d} x = \nu(y) & \qquad \text{(the amount of earth moved into point } y \text{ needs to equal the depth of the hole that was there at the beginning)} \end{align} }[/math]

That is, that the total mass moved out of an infinitesimal region around [math]\displaystyle{ x }[/math] must be equal to [math]\displaystyle{ \mu(x) \mathrm{d}x }[/math] and the total mass moved into a region around [math]\displaystyle{ y }[/math] must be [math]\displaystyle{ \nu(y)\mathrm{d}y }[/math]. This is equivalent to the requirement that [math]\displaystyle{ \gamma }[/math] be a joint probability distribution with marginals [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math]. Thus, the infinitesimal mass transported from [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math] is [math]\displaystyle{ \gamma(x,y) \, \mathrm{d} x \, \mathrm{d} y }[/math], and the cost of moving is [math]\displaystyle{ c(x,y) \gamma(x,y) \, \mathrm{d} x \, \mathrm{d} y }[/math], following the definition of the cost function. Therefore, the total cost of a transport plan [math]\displaystyle{ \gamma }[/math] is

[math]\displaystyle{ \iint c(x,y) \gamma(x,y) \, \mathrm{d} x \, \mathrm{d} y = \int c(x,y) \, \mathrm{d} \gamma(x,y) }[/math]

The plan [math]\displaystyle{ \gamma }[/math] is not unique; the optimal transport plan is the plan with the minimal cost out of all possible transport plans. As mentioned, the requirement for a plan to be valid is that it is a joint distribution with marginals [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math]; letting [math]\displaystyle{ \Gamma }[/math] denote the set of all such measures as in the first section, the cost of the optimal plan is

[math]\displaystyle{ C = \inf_{\gamma \in \Gamma(\mu, \nu)} \int c(x,y) \, \mathrm{d} \gamma(x,y) }[/math]

If the cost of a move is simply the distance between the two points, then the optimal cost is identical to the definition of the [math]\displaystyle{ W_1 }[/math] distance.
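For a concrete illustration, take [math]\displaystyle{ \mu = \tfrac{1}{2} \delta_0 + \tfrac{1}{2} \delta_1 }[/math] and [math]\displaystyle{ \nu = \tfrac{1}{2} \delta_1 + \tfrac{1}{2} \delta_2 }[/math] on the real line with cost [math]\displaystyle{ c(x,y) = |x - y|^2 }[/math]. Any transport plan moves some amount [math]\displaystyle{ a \in [0, \tfrac{1}{2}] }[/math] of mass from 0 to 1, and hence [math]\displaystyle{ \tfrac{1}{2} - a }[/math] from 0 to 2, [math]\displaystyle{ \tfrac{1}{2} - a }[/math] from 1 to 1 and [math]\displaystyle{ a }[/math] from 1 to 2, for a total cost of

[math]\displaystyle{ a \cdot 1 + \left( \tfrac{1}{2} - a \right) \cdot 4 + \left( \tfrac{1}{2} - a \right) \cdot 0 + a \cdot 1 = 2 - 2a . }[/math]

This is minimised by [math]\displaystyle{ a = \tfrac{1}{2} }[/math], the plan that shifts each half-unit of mass one unit to the right, so the optimal cost is 1 and [math]\displaystyle{ W_2(\mu, \nu) = 1 }[/math].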

Examples

Point masses (degenerate distributions)

Let [math]\displaystyle{ \mu_{1} = \delta_{a_{1}} }[/math] and [math]\displaystyle{ \mu_{2} = \delta_{a_{2}} }[/math] be two degenerate distributions (i.e. Dirac delta distributions) located at points [math]\displaystyle{ a_{1} }[/math] and [math]\displaystyle{ a_{2} }[/math] in [math]\displaystyle{ \mathbb{R} }[/math]. There is only one possible coupling of these two measures, namely the point mass [math]\displaystyle{ \delta_{(a_{1}, a_{2})} }[/math] located at [math]\displaystyle{ (a_{1}, a_{2}) \in \mathbb{R}^{2} }[/math]. Thus, using the usual absolute value function as the distance function on [math]\displaystyle{ \mathbb{R} }[/math], for any [math]\displaystyle{ p \geq 1 }[/math], the [math]\displaystyle{ p }[/math]-Wasserstein distance between [math]\displaystyle{ \mu_{1} }[/math] and [math]\displaystyle{ \mu_2 }[/math] is

[math]\displaystyle{ W_p (\mu_1, \mu_2) = | a_1 - a_2 | . }[/math]

By similar reasoning, if [math]\displaystyle{ \mu_{1} = \delta_{a_{1}} }[/math] and [math]\displaystyle{ \mu_{2} = \delta_{a_{2}} }[/math] are point masses located at points [math]\displaystyle{ a_{1} }[/math] and [math]\displaystyle{ a_{2} }[/math] in [math]\displaystyle{ \mathbb{R}^{n} }[/math], and we use the usual Euclidean norm on [math]\displaystyle{ \mathbb{R}^{n} }[/math] as the distance function, then

[math]\displaystyle{ W_p (\mu_1, \mu_2) = \| a_1 - a_2 \|_2 . }[/math]

Normal distributions

Let [math]\displaystyle{ \mu_1 = \mathcal{N}(m_1, C_1) }[/math] and [math]\displaystyle{ \mu_2 = \mathcal{N}(m_2, C_2) }[/math] be two non-degenerate Gaussian measures (i.e. normal distributions) on [math]\displaystyle{ \mathbb{R}^n }[/math], with respective expected values [math]\displaystyle{ m_1 }[/math] and [math]\displaystyle{ m_2 \in \mathbb{R}^n }[/math] and symmetric positive semi-definite covariance matrices [math]\displaystyle{ C_{1} }[/math] and [math]\displaystyle{ C_2 \in \mathbb{R}^{n \times n} }[/math]. Then,[3] with respect to the usual Euclidean norm on [math]\displaystyle{ \mathbb{R}^{n} }[/math], the 2-Wasserstein distance between [math]\displaystyle{ \mu_{1} }[/math] and [math]\displaystyle{ \mu_{2} }[/math] is

[math]\displaystyle{ W_{2} (\mu_1, \mu_2)^2 = \| m_1 - m_2 \|_2^2 + \mathop{\mathrm{trace}} \bigl( C_1 + C_2 - 2 \bigl( C_2^{1/2} C_1 C_2^{1/2} \bigr)^{1/2} \bigr) . }[/math]

This result generalises the earlier example of the Wasserstein distance between two point masses (at least in the case [math]\displaystyle{ p = 2 }[/math]), since a point mass can be regarded as a normal distribution with covariance matrix equal to zero, in which case the trace term disappears and only the term involving the Euclidean distance between the means remains.
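The closed form above lends itself to direct numerical evaluation. The following sketch computes it with NumPy and SciPy's matrix square root; the means and covariances are arbitrary example values, and the negligible imaginary parts that scipy.linalg.sqrtm may return for near-singular inputs are discarded.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, C1, m2, C2):
    """2-Wasserstein distance between N(m1, C1) and N(m2, C2)."""
    C2_half = np.real(sqrtm(C2))
    cross = np.real(sqrtm(C2_half @ C1 @ C2_half))
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2.0 * cross)
    return np.sqrt(max(w2_sq, 0.0))    # clip tiny negative rounding errors

# Arbitrary example values:
m1, C1 = np.array([0.0, 0.0]), np.array([[1.0, 0.3], [0.3, 1.0]])
m2, C2 = np.array([1.0, -1.0]), np.array([[2.0, 0.0], [0.0, 0.5]])
print(gaussian_w2(m1, C1, m2, C2))
```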

One-dimensional distributions

Let [math]\displaystyle{ \mu_1, \mu_2 \in P_p(\mathbb{R}) }[/math] be probability measures on [math]\displaystyle{ \mathbb{R} }[/math], and denote their cumulative distribution functions by [math]\displaystyle{ F_1(x) }[/math] and [math]\displaystyle{ F_2(x) }[/math]. Then the transport problem has an analytic solution: Optimal transport preserves the order of probability mass elements, so the mass at quantile [math]\displaystyle{ q }[/math] of [math]\displaystyle{ \mu_1 }[/math] moves to quantile [math]\displaystyle{ q }[/math] of [math]\displaystyle{ \mu_2 }[/math]. Thus, the [math]\displaystyle{ p }[/math]-Wasserstein distance between [math]\displaystyle{ \mu_1 }[/math] and [math]\displaystyle{ \mu_2 }[/math] is

[math]\displaystyle{ W_p(\mu_1, \mu_2) = \left(\int_0^1 \left| F_1^{-1}(q) - F_2^{-1}(q) \right|^p \, \mathrm{d} q\right)^{1/p} }[/math]

where [math]\displaystyle{ F_1^{-1} }[/math] and [math]\displaystyle{ F_2^{-1} }[/math] are the quantile functions (inverse CDFs). In the case of [math]\displaystyle{ p=1 }[/math], a change of variables leads to the formula

[math]\displaystyle{ W_1(\mu_1, \mu_2) = \int_{\mathbb{R}} \left| F_1(x) - F_2(x) \right| \, \mathrm{d} x }[/math].
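For empirical distributions these formulas are straightforward to evaluate: scipy.stats.wasserstein_distance implements the [math]\displaystyle{ p = 1 }[/math] case, and for two samples of equal size the empirical quantile functions are simply the sorted samples, which gives a short formula for general [math]\displaystyle{ p }[/math]. A minimal sketch, with arbitrarily chosen sample distributions:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=5000)   # sample from mu_1 (example choice)
y = rng.normal(1.0, 2.0, size=5000)   # sample from mu_2 (example choice)

# W_1 between the two empirical distributions.
print(wasserstein_distance(x, y))

def wasserstein_1d(x, y, p=2):
    """p-Wasserstein distance between two equal-size empirical samples,
    using the quantile-function formula: the sorted samples are the
    empirical quantile functions evaluated on a uniform grid."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

print(wasserstein_1d(x, y, p=1))      # agrees with the SciPy value above
print(wasserstein_1d(x, y, p=2))
```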

Applications

The Wasserstein metric is a natural way to compare the probability distributions of two variables X and Y, where one variable is derived from the other by small, non-uniform perturbations (random or deterministic).

In computer science, for example, the metric W1 is widely used to compare discrete distributions, e.g. the color histograms of two digital images; see earth mover's distance for more details.

In their paper 'Wasserstein GAN', Arjovsky et al.[4] use the Wasserstein-1 metric as a way to improve the original framework of Generative Adversarial Networks (GAN), to alleviate the vanishing gradient and mode collapse issues. The special case of normal distributions is used in the Fréchet inception distance.

The Wasserstein metric has a formal link with Procrustes analysis, with application to chirality measures,[5] and to shape analysis.[6]

In computational biology, the Wasserstein metric can be used to compare persistence diagrams of cytometry datasets.[7]

The Wasserstein metric has also been used in inverse problems in geophysics.[8]

Properties

Metric structure

It can be shown that Wp satisfies all the axioms of a metric on Pp(M). Furthermore, convergence with respect to Wp is equivalent to the usual weak convergence of measures plus convergence of pth moments.[9]

Dual representation of W1

The following dual representation of W1 is a special case of the duality theorem of Kantorovich and Rubinstein (1958): when μ and ν have bounded support,

[math]\displaystyle{ W_1 (\mu, \nu) = \sup \left\{ \left. \int_{M} f(x) \, \mathrm{d} (\mu - \nu) (x) \right| \text{continuous } f : M \to \mathbb{R}, \operatorname{Lip} (f) \leq 1 \right\}, }[/math]

where Lip(f) denotes the minimal Lipschitz constant for f.

Compare this with the definition of the Radon metric:

[math]\displaystyle{ \rho (\mu, \nu) := \sup \left\{ \left. \int_M f(x) \, \mathrm{d} (\mu - \nu) (x) \right| \text{continuous } f : M \to [-1, 1] \right\}. }[/math]

If the metric d is bounded by some constant C, then

[math]\displaystyle{ 2 W_1 (\mu, \nu) \leq C \rho (\mu, \nu), }[/math]

and so convergence in the Radon metric (identical to total variation convergence when M is a Polish space) implies convergence in the Wasserstein metric, but not vice versa.
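On a finite metric space the dual representation is itself a finite linear program: one maximises [math]\displaystyle{ \sum_i f(x_i) (\mu_i - \nu_i) }[/math] over vectors [math]\displaystyle{ f }[/math] satisfying [math]\displaystyle{ |f(x_i) - f(x_j)| \leq d(x_i, x_j) }[/math] for every pair of points. The following sketch, with arbitrary example points and weights, should return the same value as the primal, coupling-based computation of W1.

```python
import numpy as np
from scipy.optimize import linprog

def w1_dual(x, mu, nu):
    """W_1 between two distributions mu and nu on the same finite point set x,
    via the Kantorovich-Rubinstein dual: maximise <f, mu - nu> over 1-Lipschitz f."""
    n = len(x)
    d = np.abs(x[:, None] - x[None, :])            # metric on the finite space
    # Lipschitz constraints f_i - f_j <= d_ij for every ordered pair (i, j).
    rows, b_ub = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                row = np.zeros(n)
                row[i], row[j] = 1.0, -1.0
                rows.append(row)
                b_ub.append(d[i, j])
    # linprog minimises, so minimise -(mu - nu) . f and negate the result.
    res = linprog(-(mu - nu), A_ub=np.array(rows), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n, method="highs")
    return -res.fun

# Example: mu = (delta_0 + delta_1)/2 and nu = (delta_1 + delta_2)/2.
x = np.array([0.0, 1.0, 2.0])
mu = np.array([0.5, 0.5, 0.0])
nu = np.array([0.0, 0.5, 0.5])
print(w1_dual(x, mu, nu))                          # prints 1.0
```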

Equivalence of W2 and a negative-order Sobolev norm

Under suitable assumptions, the Wasserstein distance [math]\displaystyle{ W_2 }[/math] of order two is Lipschitz equivalent to a negative-order homogeneous Sobolev norm.[10] More precisely, if we take [math]\displaystyle{ M }[/math] to be a connected Riemannian manifold equipped with a positive measure [math]\displaystyle{ \pi }[/math], then we may define for [math]\displaystyle{ f \colon M \to \mathbb{R} }[/math] the seminorm

[math]\displaystyle{ \| f \|_{\dot{H}^{1}(\pi)}^{2} = \int_{M} | \nabla f(x) |^{2} \, \pi(\mathrm{d} x) }[/math]

and for a signed measure [math]\displaystyle{ \mu }[/math] on [math]\displaystyle{ M }[/math] the dual norm

[math]\displaystyle{ \| \mu \|_{\dot{H}^{-1}(\pi)} = \sup \bigg\{ | \langle f, \mu \rangle | \,\bigg|\, \| f \|_{\dot{H}^{1}(\pi)} \leq 1 \bigg\} . }[/math]

Then any two probability measures [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math] on [math]\displaystyle{ M }[/math] satisfy the upper bound

[math]\displaystyle{ W_{2} (\mu, \nu) \leq 2 \| \mu - \nu \|_{\dot{H}^{-1}(\mu)} . }[/math]

In the other direction, if [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \nu }[/math] each have densities with respect to the standard volume measure on [math]\displaystyle{ M }[/math] that are both bounded above by some [math]\displaystyle{ 0 \lt C \lt \infty }[/math], and [math]\displaystyle{ M }[/math] has non-negative Ricci curvature, then

[math]\displaystyle{ \| \mu - \nu \|_{\dot{H}^{-1}(\pi)} \leq \sqrt{C} W_{2} (\mu, \nu) . }[/math]

Separability and completeness

For any p ≥ 1, the metric space (Pp(M), Wp) is separable, and is complete if (M, d) is separable and complete.[11]

References

  1. "Markov processes over denumerable products of spaces, describing large systems of automata". Problemy Peredači Informacii 5 (3): 64–72. 1969. http://www.mathnet.ru/links/b90e28ab49fa878b215cbbdfa0235264/ppi1811.pdf. 
  2. "Mathematical Methods of Organizing and Planning Production". Management Science 6 (4): 366–422. 1939. doi:10.1287/mnsc.6.4.366. 
  3. "The distance between two random vectors with given dispersion matrices". Linear Algebra and Its Application 48: 257–263. October 1982. doi:10.1016/0024-3795(82)90112-4. ISSN 0024-3795. 
  4. "Wasserstein Generative Adversarial Networks". International Conference on Machine Learning 214-223: 214–223. July 2017. https://proceedings.mlr.press/v70/arjovsky17a.html. 
  5. "Chiral mixtures". Journal of Mathematical Physics 43 (8): 4147–4157. 2002. doi:10.1063/1.1484559. Bibcode2002JMP....43.4147P. https://hal.archives-ouvertes.fr/hal-02122882/file/PMP.JMP_2002.pdf. 
  6. "From shape similarity to shape complementarity: toward a docking theory". Journal of Mathematical Chemistry 35 (3): 147–158. 2004. doi:10.1023/B:JOMC.0000033252.59423.6b. 
  7. "Determining clinically relevant features in cytometry data using persistent homology". PLOS Computational Biology 18 (3): e1009931. March 2022. doi:10.1371/journal.pcbi.1009931. PMID 35312683. Bibcode2022PLSCB..18E9931M. 
  8. Frederick, Christina; Yang, Yunan (2022-05-06). "Seeing through rock with help from optimal transport". Snapshots of Modern Mathematics from Oberwolfach. doi:10.14760/SNAP-2022-004-EN. https://publications.mfo.de/handle/mfo/3941. 
  9. "An elementary proof of the triangle inequality for the Wasserstein metric". Proceedings of the American Mathematical Society 136 (1): 333–339. 2008. doi:10.1090/S0002-9939-07-09020-X. https://www.ams.org/journals/proc/2008-136-01/S0002-9939-07-09020-X/home.html. 
  10. "Comparison between W2 distance and −1 norm, and localization of Wasserstein distance". ESAIM: Control, Optimisation and Calculus of Variations 24 (4): 1489–1501. October 2018. doi:10.1051/cocv/2017050. ISSN 1292-8119.  (See Theorems 2.1 and 2.5.)
  11. "The Monge–Kantorovich problem: achievements, connections, and perspectives". Russian Mathematical Surveys 67 (5): 785–890. October 2012. doi:10.1070/RM2012v067n05ABEH004808. Bibcode2012RuMaS..67..785B. 
