von Mises–Fisher distribution

From HandWiki

In directional statistics, the von Mises–Fisher distribution (named after Richard von Mises and Ronald Fisher) is a probability distribution on the [math]\displaystyle{ (p-1) }[/math]-sphere in [math]\displaystyle{ \mathbb{R}^{p} }[/math]. If [math]\displaystyle{ p=2 }[/math], the distribution reduces to the von Mises distribution on the circle.

Definition

The probability density function of the von Mises–Fisher distribution for the random p-dimensional unit vector [math]\displaystyle{ \mathbf{x} }[/math] is given by:

[math]\displaystyle{ f_{p}(\mathbf{x}; \boldsymbol{\mu}, \kappa) = C_{p}(\kappa) \exp \left( {\kappa \boldsymbol{\mu}^\mathsf{T} \mathbf{x} } \right), }[/math]

where [math]\displaystyle{ \kappa \ge 0, \left \Vert \boldsymbol{\mu} \right \Vert = 1 }[/math] and the normalization constant [math]\displaystyle{ C_{p}(\kappa) }[/math] is equal to

[math]\displaystyle{ C_{p}(\kappa)=\frac {\kappa^{p/2-1}} {(2\pi)^{p/2}I_{p/2-1}(\kappa)}, }[/math]

where [math]\displaystyle{ I_{v} }[/math] denotes the modified Bessel function of the first kind at order [math]\displaystyle{ v }[/math]. If [math]\displaystyle{ p = 3 }[/math], the normalization constant reduces to

[math]\displaystyle{ C_{3}(\kappa) = \frac {\kappa} {4\pi\sinh \kappa} = \frac {\kappa} {2\pi(e^{\kappa}-e^{-\kappa})}. }[/math]

The parameters [math]\displaystyle{ \boldsymbol{\mu} }[/math] and [math]\displaystyle{ \kappa }[/math] are called the mean direction and concentration parameter, respectively. The greater the value of [math]\displaystyle{ \kappa }[/math], the higher the concentration of the distribution around the mean direction [math]\displaystyle{ \boldsymbol{\mu} }[/math]. The distribution is unimodal for [math]\displaystyle{ \kappa \gt 0 }[/math], and is uniform on the sphere for [math]\displaystyle{ \kappa = 0 }[/math].
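For concreteness, the density can be evaluated numerically directly from the formulas above. The following Python sketch uses SciPy's modified Bessel function iv; the function name vmf_log_density is illustrative, not taken from any particular library:

    import numpy as np
    from scipy.special import iv

    def vmf_log_density(x, mu, kappa):
        # log f_p(x; mu, kappa) = log C_p(kappa) + kappa * mu^T x, with
        # log C_p(kappa) = (p/2 - 1) log kappa - (p/2) log(2 pi) - log I_{p/2-1}(kappa)
        p = len(mu)
        log_c = (p/2 - 1)*np.log(kappa) - (p/2)*np.log(2*np.pi) - np.log(iv(p/2 - 1, kappa))
        return log_c + kappa * np.dot(mu, x)

    # example on S^2 (p = 3): density at the mean direction for kappa = 5
    mu = np.array([0.0, 0.0, 1.0])
    print(np.exp(vmf_log_density(mu, mu, kappa=5.0)))

For very large [math]\displaystyle{ \kappa }[/math], the exponentially scaled Bessel function scipy.special.ive can be used instead of iv to avoid overflow in the normalization constant.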

The von Mises–Fisher distribution for [math]\displaystyle{ p=3 }[/math] is also called the Fisher distribution.[1][2] It was first used to model the interaction of electric dipoles in an electric field.[3] Other applications are found in geology, bioinformatics, and text mining.

Note on the normalization constant

In the textbook Directional Statistics[3] by Mardia and Jupp, the normalization constant given for the von Mises–Fisher probability density differs, at first sight, from the one given here: [math]\displaystyle{ C_{p}(\kappa) }[/math]. In that book, the normalization constant is specified as:

[math]\displaystyle{ C^*_{p}(\kappa)=\frac {(\frac{\kappa}2)^{p/2-1}} {\Gamma(p/2)I_{p/2-1}(\kappa)} }[/math]

where [math]\displaystyle{ \Gamma }[/math] is the gamma function. This is resolved by noting that Mardia and Jupp give the density "with respect to the uniform distribution", while the density here is specified in the usual way, with respect to Lebesgue measure. The density (w.r.t. Lebesgue measure) of the uniform distribution is the reciprocal of the surface area of the (p-1)-sphere, so that the uniform density function is given by the constant:

[math]\displaystyle{ C_{p}(0)=\frac{\Gamma(p/2)}{2\pi^{p/2}} }[/math]

It then follows that:

[math]\displaystyle{ C^*_{p}(\kappa) = \frac{C_{p}(\kappa)}{C_{p}(0)} }[/math]

While the value for [math]\displaystyle{ C_{p}(0) }[/math] was derived above via the surface area, the same result may be obtained by setting [math]\displaystyle{ \kappa=0 }[/math] in the above formula for [math]\displaystyle{ C_{p}(\kappa) }[/math]. This can be done by noting that the series expansion for [math]\displaystyle{ I_{p/2-1}(\kappa) }[/math] divided by [math]\displaystyle{ \kappa^{p/2-1} }[/math] has but one non-zero term at [math]\displaystyle{ \kappa=0 }[/math]. (To evaluate that term, one needs to use the definition [math]\displaystyle{ 0^0=1 }[/math].)
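The relation between the two conventions can also be checked numerically; a minimal sketch (helper names C, C_star and C0 are illustrative):

    import numpy as np
    from scipy.special import iv, gamma

    def C(p, kappa):        # normalization w.r.t. Lebesgue measure on the sphere
        return kappa**(p/2 - 1) / ((2*np.pi)**(p/2) * iv(p/2 - 1, kappa))

    def C_star(p, kappa):   # Mardia-Jupp constant (density w.r.t. the uniform distribution)
        return (kappa/2)**(p/2 - 1) / (gamma(p/2) * iv(p/2 - 1, kappa))

    def C0(p):              # uniform density: reciprocal surface area of the (p-1)-sphere
        return gamma(p/2) / (2*np.pi**(p/2))

    p, kappa = 5, 2.7
    print(np.isclose(C_star(p, kappa), C(p, kappa) / C0(p)))   # True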

Support

The support of the Von Mises–Fisher distribution is the hypersphere, or more specifically, the [math]\displaystyle{ (p-1) }[/math]-sphere, denoted as

[math]\displaystyle{ S^{p-1} = \left\{ \mathbf{x} \in \mathbb{R}^p : \left\| \mathbf{x} \right\| = 1 \right\} }[/math]

This is a [math]\displaystyle{ (p-1) }[/math]-dimensional manifold embedded in [math]\displaystyle{ p }[/math]-dimensional Euclidean space, [math]\displaystyle{ \mathbb{R}^p }[/math].

Relation to normal distribution

Starting from a normal distribution with isotropic covariance [math]\displaystyle{ \kappa^{-1}\mathbf{I} }[/math] and mean [math]\displaystyle{ \boldsymbol{\mu} }[/math] of length [math]\displaystyle{ r\gt 0 }[/math], whose density function is:

[math]\displaystyle{ G_{p}(\mathbf{x}; \boldsymbol{\mu}, \kappa) = \left(\sqrt{\frac{\kappa}{2\pi}}\right)^p \exp\left( -\kappa \frac{(\mathbf{x}-\boldsymbol{\mu})'(\mathbf{x}-\boldsymbol{\mu})}{2} \right), }[/math]

the Von Mises–Fisher distribution is obtained by conditioning on [math]\displaystyle{ \left\|\mathbf{x}\right\|=1 }[/math]. By expanding

[math]\displaystyle{ (\mathbf{x}-\boldsymbol{\mu})'(\mathbf{x}-\boldsymbol{\mu}) = \mathbf{x}'\mathbf{x} + \boldsymbol{\mu}'\boldsymbol{\mu} - 2\boldsymbol{\mu}' \mathbf{x}, }[/math]

and using the fact that the first two right-hand-side terms are fixed, the von Mises–Fisher density [math]\displaystyle{ f_{p}(\mathbf{x}; r^{-1}\boldsymbol{\mu}, r\kappa) }[/math] is recovered by recomputing the normalization constant, i.e. by integrating over [math]\displaystyle{ \mathbf{x} }[/math] on the unit sphere. If [math]\displaystyle{ r=0 }[/math], we get the uniform distribution, with density [math]\displaystyle{ f_{p}(\mathbf{x}; \boldsymbol{0}, 0) }[/math].

More succinctly, the restriction of any isotropic multivariate normal density to the unit hypersphere gives a von Mises–Fisher density, up to normalization.

This construction can be generalized by starting with a normal distribution with a general covariance matrix, in which case conditioning on [math]\displaystyle{ \left\|\mathbf{x}\right\|=1 }[/math] gives the Fisher-Bingham distribution.
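The proportionality claim is easy to verify numerically. In the following sketch (the helper names are illustrative), the isotropic normal density and the corresponding VMF density are evaluated at several unit vectors and their ratio is constant:

    import numpy as np
    from scipy.special import iv

    def vmf_density(x, mu, kappa):
        p = len(mu)
        c = kappa**(p/2 - 1) / ((2*np.pi)**(p/2) * iv(p/2 - 1, kappa))
        return c * np.exp(kappa * np.dot(mu, x))

    def isotropic_normal_density(x, mu, kappa):
        p = len(mu)
        return (kappa/(2*np.pi))**(p/2) * np.exp(-kappa * np.dot(x - mu, x - mu) / 2)

    rng = np.random.default_rng(0)
    mu = np.array([1.0, 2.0, 2.0])                      # normal mean of length r = 3
    r, kappa = np.linalg.norm(mu), 0.8
    xs = rng.normal(size=(4, 3))
    xs /= np.linalg.norm(xs, axis=1, keepdims=True)     # arbitrary unit vectors

    ratios = [isotropic_normal_density(x, mu, kappa) / vmf_density(x, mu/r, r*kappa) for x in xs]
    print(np.allclose(ratios, ratios[0]))               # True: equal up to one constant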

Estimation of parameters

Mean direction

A series of N independent unit vectors [math]\displaystyle{ x_i }[/math] are drawn from a von Mises–Fisher distribution. The maximum likelihood estimate of the mean direction [math]\displaystyle{ \mu }[/math] is simply the normalized arithmetic mean, a sufficient statistic:[3]

[math]\displaystyle{ \mu = \bar{x}/\bar{R}, \text{where } \bar{x} = \frac{1}{N}\sum_i^N x_i, \text{and } \bar{R} = \|\bar{x}\|. }[/math]
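In code this estimate is a one-liner; a minimal NumPy sketch (the function name mean_direction is illustrative):

    import numpy as np

    def mean_direction(X):
        # X: (N, p) array whose rows are sample unit vectors
        x_bar = X.mean(axis=0)                 # arithmetic mean
        R_bar = np.linalg.norm(x_bar)          # mean resultant length
        return x_bar / R_bar, R_bar            # (estimated mean direction, R_bar)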

Concentration parameter

Use the modified Bessel function of the first kind to define

[math]\displaystyle{ A_{p}(\kappa) = \frac {I_{p/2}(\kappa)} {I_{p/2-1}(\kappa)} . }[/math]

Then:

[math]\displaystyle{ \kappa = A_p^{-1}(\bar{R}) . }[/math]

Thus [math]\displaystyle{ \kappa }[/math] is the solution to

[math]\displaystyle{ A_p(\kappa) = \frac{\left\| \sum_i^N x_i \right\|}{N} = \bar{R} . }[/math]

A simple approximation to [math]\displaystyle{ \kappa }[/math] is (Sra, 2011)

[math]\displaystyle{ \hat{\kappa} = \frac{\bar{R}(p-\bar{R}^2)}{1-\bar{R}^2} . }[/math]

A more accurate inversion can be obtained by iterating Newton's method a few times:

[math]\displaystyle{ \hat{\kappa}_1 = \hat{\kappa} - \frac{A_p(\hat{\kappa}) - \bar{R}}{1-A_p(\hat{\kappa})^2 - \frac{p - 1}{\hat{\kappa}} A_p(\hat{\kappa})} , }[/math]
[math]\displaystyle{ \hat{\kappa}_2 = \hat{\kappa}_1 - \frac{A_p(\hat{\kappa}_1)-\bar{R}}{1-A_p(\hat{\kappa}_1)^2-\frac{p - 1}{\hat{\kappa}_1} A_p(\hat{\kappa}_1)} . }[/math]
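A sketch of this estimator in Python, combining Sra's approximation with the Newton refinement above (SciPy's iv supplies the Bessel functions; names and the number of Newton steps are illustrative):

    import numpy as np
    from scipy.special import iv

    def A(p, kappa):
        return iv(p/2, kappa) / iv(p/2 - 1, kappa)

    def estimate_kappa(p, R_bar, newton_steps=2):
        kappa = R_bar * (p - R_bar**2) / (1 - R_bar**2)      # Sra (2011) approximation
        for _ in range(newton_steps):                        # Newton refinement
            a = A(p, kappa)
            kappa -= (a - R_bar) / (1 - a**2 - (p - 1)/kappa * a)
        return kappa

    print(estimate_kappa(p=3, R_bar=0.7))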

Standard error

For N ≥ 25, the estimated spherical standard error of the sample mean direction can be computed as:[4]

[math]\displaystyle{ \hat{\sigma} = \left(\frac{d}{N\bar{R}^2}\right)^{1/2} }[/math]

where

[math]\displaystyle{ d = 1 - \frac{1}{N} \sum_i^N \left(\mu^Tx_i\right)^2 }[/math]

It is then possible to approximate a [math]\displaystyle{ 100(1-\alpha)\% }[/math] spherical confidence interval (a confidence cone) about [math]\displaystyle{ \mu }[/math] with semi-vertical angle:

[math]\displaystyle{ q = \arcsin\left(e_\alpha^{1/2}\hat{\sigma}\right), }[/math] where [math]\displaystyle{ e_\alpha = -\ln(\alpha). }[/math]

For example, for a 95% confidence cone, [math]\displaystyle{ \alpha = 0.05, e_\alpha = -\ln(0.05) = 2.996, }[/math] and thus [math]\displaystyle{ q = \arcsin(1.731\hat{\sigma}). }[/math]
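The following sketch gathers these formulas into a single routine (the name confidence_cone is illustrative; the returned angle is in radians):

    import numpy as np

    def confidence_cone(X, alpha=0.05):
        # X: (N, p) array of sample unit vectors, N >= 25 recommended
        N = X.shape[0]
        x_bar = X.mean(axis=0)
        R_bar = np.linalg.norm(x_bar)
        mu_hat = x_bar / R_bar
        d = 1 - np.mean((X @ mu_hat)**2)
        sigma_hat = np.sqrt(d / (N * R_bar**2))              # spherical standard error
        q = np.arcsin(np.sqrt(-np.log(alpha)) * sigma_hat)   # semi-vertical angle of the cone
        return mu_hat, sigma_hat, q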

Expected value

The expected value of the Von Mises–Fisher distribution is not on the unit hypersphere, but instead has a length of less than one. This length is given by [math]\displaystyle{ A_p(\kappa) }[/math] as defined above. For a Von Mises–Fisher distribution with mean direction [math]\displaystyle{ \boldsymbol{\mu} }[/math] and concentration [math]\displaystyle{ \kappa\gt 0 }[/math], the expected value is:

[math]\displaystyle{ A_p(\kappa)\boldsymbol{\mu} }[/math].

For [math]\displaystyle{ \kappa=0 }[/math], the expected value is at the origin. For finite [math]\displaystyle{ \kappa\gt 0 }[/math], the length of the expected value is strictly between zero and one, and is a monotonically increasing function of [math]\displaystyle{ \kappa }[/math].

The empirical mean (arithmetic average) of a collection of points on the unit hypersphere behaves in a similar manner, being close to the origin for widely spread data and close to the sphere for concentrated data. Indeed, for the Von Mises–Fisher distribution, the expected value of the maximum-likelihood estimate based on a collection of points is equal to the empirical mean of those points.

Entropy and KL divergence

The expected value can be used to compute differential entropy and KL divergence.

The differential entropy of [math]\displaystyle{ \text{VMF}(\boldsymbol{\mu}, \kappa) }[/math] is:

[math]\displaystyle{ \bigl\langle -\log f_{p}(\mathbf{x}; \boldsymbol{\mu}, \kappa)\bigr\rangle_{\mathbf{x}\sim\text{VMF}(\boldsymbol{\mu}, \kappa)} =-\log f_{p}(A_p(\kappa)\boldsymbol{\mu}; \boldsymbol{\mu}, \kappa) = -\log C_p(\kappa) -\kappa A_p(\kappa) }[/math]

where the angle brackets denote expectation. Notice that the entropy is a function of [math]\displaystyle{ \kappa }[/math] only.

The KL divergence between [math]\displaystyle{ \text{VMF}(\boldsymbol{\mu_0}, \kappa_0) }[/math] and [math]\displaystyle{ \text{VMF}(\boldsymbol{\mu_1}, \kappa_1) }[/math] is:

[math]\displaystyle{ \Bigl\langle \log \frac{f_{p}(\mathbf{x}; \boldsymbol{\mu_0}, \kappa_0)} {f_{p}(\mathbf{x}; \boldsymbol{\mu_1}, \kappa_1)} \Bigr\rangle_{\mathbf{x}\sim\text{VMF}(\boldsymbol{\mu_0}, \kappa_0)} =\log \frac{f_{p}(A_p(\kappa_0)\boldsymbol{\mu_0}; \boldsymbol{\mu_0}, \kappa_0)} {f_{p}(A_p(\kappa_0)\boldsymbol{\mu_0}; \boldsymbol{\mu_1}, \kappa_1)} }[/math]
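Both quantities can be computed from [math]\displaystyle{ \log C_p(\kappa) }[/math] and [math]\displaystyle{ A_p(\kappa) }[/math]; a minimal Python sketch of the two formulas above (illustrative names, SciPy Bessel functions):

    import numpy as np
    from scipy.special import iv

    def log_C(p, kappa):
        return (p/2 - 1)*np.log(kappa) - (p/2)*np.log(2*np.pi) - np.log(iv(p/2 - 1, kappa))

    def A(p, kappa):
        return iv(p/2, kappa) / iv(p/2 - 1, kappa)

    def vmf_entropy(p, kappa):
        return -log_C(p, kappa) - kappa * A(p, kappa)

    def vmf_kl(mu0, kappa0, mu1, kappa1):
        # evaluate the log-ratio of the two densities at x = A_p(kappa0) * mu0
        p = len(mu0)
        return (log_C(p, kappa0) - log_C(p, kappa1)
                + A(p, kappa0) * (kappa0 - kappa1 * np.dot(mu1, mu0)))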

Transformation

Von Mises-Fisher (VMF) distributions are closed under orthogonal linear transforms. Let [math]\displaystyle{ \mathbf{U} }[/math] be a [math]\displaystyle{ p }[/math]-by-[math]\displaystyle{ p }[/math] orthogonal matrix. Let [math]\displaystyle{ \mathbf{x}\sim\text{VMF}(\boldsymbol\mu,\kappa) }[/math] and apply the invertible linear transform: [math]\displaystyle{ \mathbf{y}=\mathbf{Ux} }[/math]. The inverse transform is [math]\displaystyle{ \mathbf{x}=\mathbf{U'y} }[/math], because the inverse of an orthogonal matrix is its transpose: [math]\displaystyle{ \mathbf{U}^{-1}=\mathbf{U}' }[/math]. The Jacobian of the transform is [math]\displaystyle{ \mathbf{U} }[/math], for which the absolute value of its determinant is 1, also because of the orthogonality. Using these facts and the form of the VMF density, it follows that:

[math]\displaystyle{ \mathbf{y}\sim\text{VMF}(\mathbf{U}\boldsymbol{\mu},\kappa). }[/math]

One may verify that since [math]\displaystyle{ \boldsymbol{\mu} }[/math] and [math]\displaystyle{ \mathbf{x} }[/math] are unit vectors, then by the orthogonality, so are [math]\displaystyle{ \mathbf{U}\boldsymbol{\mu} }[/math] and [math]\displaystyle{ \mathbf{y} }[/math].
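Since [math]\displaystyle{ (\mathbf{U}\boldsymbol{\mu})^\mathsf{T}(\mathbf{U}\mathbf{x}) = \boldsymbol{\mu}^\mathsf{T}\mathbf{x} }[/math], the exponent of the density is unchanged when both vectors are rotated, which a quick numerical sketch confirms:

    import numpy as np

    rng = np.random.default_rng(1)
    p, kappa = 4, 3.0
    U, _ = np.linalg.qr(rng.normal(size=(p, p)))     # a random orthogonal matrix
    mu = np.zeros(p); mu[0] = 1.0
    x = rng.normal(size=p); x /= np.linalg.norm(x)

    # equal exponents imply equal densities (the normalization depends only on kappa)
    print(np.isclose(kappa * mu @ x, kappa * (U @ mu) @ (U @ x)))   # True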

Pseudo-random number generation

General case

An algorithm for drawing pseudo-random samples from the von Mises–Fisher (VMF) distribution was given by Ulrich[5] and later corrected by Wood.[6] An implementation in R is given by Hornik and Grün,[7] and a fast Python implementation is described by Pinzón and Jung.[8]

To simulate from a VMF distribution on the [math]\displaystyle{ (p-1) }[/math]-dimensional unit sphere, [math]\displaystyle{ S^{p-1} }[/math], with mean direction [math]\displaystyle{ \boldsymbol{\mu}\in S^{p-1} }[/math], these algorithms use the following radial-tangential decomposition for a point [math]\displaystyle{ \mathbf{x}\in S^{p-1}\subset\mathbb{R}^p }[/math]:

[math]\displaystyle{ \mathbf{x} = t\boldsymbol{\mu}+\sqrt{1-t^2}\mathbf{v} }[/math]

where [math]\displaystyle{ \mathbf{v}\in\mathbb{R}^p }[/math] is a unit vector perpendicular to [math]\displaystyle{ \boldsymbol{\mu} }[/math], lying in the tangential [math]\displaystyle{ (p-2) }[/math]-dimensional unit subsphere, while [math]\displaystyle{ t\in[-1,1] }[/math]. To draw a sample [math]\displaystyle{ \mathbf{x} }[/math] from a VMF with parameters [math]\displaystyle{ \boldsymbol{\mu} }[/math] and [math]\displaystyle{ \kappa }[/math], [math]\displaystyle{ \mathbf{v} }[/math] must be drawn from the uniform distribution on the tangential subsphere, and the radial component [math]\displaystyle{ t }[/math] must be drawn independently from the distribution with density:

[math]\displaystyle{ f_\text{radial}(t;\kappa,p)=\frac{(\kappa/2)^\nu} {\Gamma(\frac12)\Gamma(\nu+\frac12)I_\nu(\kappa)} e^{t\kappa}(1-t^2)^{\nu-\frac12} }[/math]

where [math]\displaystyle{ \nu=\frac{p}2-1 }[/math]. The normalization constant for this density may be verified by using:

[math]\displaystyle{ I_\nu(\kappa) = \frac{(\kappa/2)^\nu}{\Gamma(\frac12)\Gamma(\nu+\frac12)} \int_{-1}^{1}e^{t\kappa}(1-t^2)^{\nu-\frac12}\,dt }[/math]

as given in Appendix 1 (A.3) in Directional Statistics.[3] Drawing the [math]\displaystyle{ t }[/math] samples from this density using a rejection sampling algorithm is explained in the above references. To draw the uniform [math]\displaystyle{ \mathbf{v} }[/math] samples perpendicular to [math]\displaystyle{ \boldsymbol{\mu} }[/math], see the algorithm of Pinzón and Jung,[8] or use a Householder transform as explained in Algorithm 1 of De Cao and Aziz.[9]
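A compact Python sketch of this construction follows. It draws [math]\displaystyle{ t }[/math] by a Wood-style rejection scheme and, for simplicity, draws the tangential direction by projecting a Gaussian vector orthogonally to [math]\displaystyle{ \boldsymbol{\mu} }[/math] and normalizing, rather than by a Householder transform; names and defaults are illustrative and not taken from the cited implementations:

    import numpy as np

    def sample_vmf(mu, kappa, n, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        mu = np.asarray(mu, dtype=float)        # assumed to be a unit vector
        p = mu.size
        # Wood (1994)-style rejection sampler for the radial component t
        b = (-2*kappa + np.sqrt(4*kappa**2 + (p - 1)**2)) / (p - 1)
        x0 = (1 - b) / (1 + b)
        c = kappa*x0 + (p - 1)*np.log(1 - x0**2)
        out = np.empty((n, p))
        for i in range(n):
            while True:
                z = rng.beta((p - 1)/2, (p - 1)/2)
                t = (1 - (1 + b)*z) / (1 - (1 - b)*z)
                if kappa*t + (p - 1)*np.log(1 - x0*t) - c >= np.log(rng.uniform()):
                    break
            v = rng.normal(size=p)              # uniform direction perpendicular to mu ...
            v -= (v @ mu) * mu                  # ... by projecting out the mu-component
            v /= np.linalg.norm(v)
            out[i] = t*mu + np.sqrt(1 - t**2)*v
        return out

    X = sample_vmf([0.0, 0.0, 1.0], kappa=10.0, n=1000)
    print(X.mean(axis=0))                       # roughly A_3(10) * mu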

3-D sphere

To generate a von Mises–Fisher distributed pseudo-random spherical 3-D unit vector[10][11] [math]\displaystyle{ \mathbf X_{s} }[/math] on the sphere [math]\displaystyle{ S^{2} }[/math] for a given [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \kappa }[/math], define

[math]\displaystyle{ \mathbf X_{s} = [\phi, \theta , r] }[/math]

where [math]\displaystyle{ \phi }[/math] is the polar angle, [math]\displaystyle{ \theta }[/math] the equatorial angle, and [math]\displaystyle{ r=1 }[/math] the distance to the center of the sphere.

For [math]\displaystyle{ \mathbf \mu = [0,(\cdot),1] }[/math], i.e. a mean direction along the polar axis (the equatorial angle of [math]\displaystyle{ \boldsymbol{\mu} }[/math] is then arbitrary), the pseudo-random vector is given by

[math]\displaystyle{ \mathbf X_{s} = [\arccos W, V , 1] }[/math]

where [math]\displaystyle{ V }[/math] is sampled from the continuous uniform distribution [math]\displaystyle{ U(a,b) }[/math] with lower bound [math]\displaystyle{ a }[/math] and upper bound [math]\displaystyle{ b }[/math]

[math]\displaystyle{ V \sim U(0, 2\pi) }[/math]

and

[math]\displaystyle{ W = 1+ \frac {1} {\kappa} (\ln\xi+\ln(1- \frac {\xi-1} {\xi}e^{-2\kappa})) }[/math]

where [math]\displaystyle{ \xi }[/math] is sampled from the standard continuous uniform distribution [math]\displaystyle{ U(0,1) }[/math]

[math]\displaystyle{ \xi \sim U(0, 1) }[/math]

Here, [math]\displaystyle{ W }[/math] should be set to [math]\displaystyle{ W = 1 }[/math] when [math]\displaystyle{ \xi=0 }[/math] (where the expression above is undefined), and [math]\displaystyle{ \mathbf X_{s} }[/math] can be rotated to match any other desired mean direction [math]\displaystyle{ \mu }[/math].
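A short Python sketch of this recipe, returning Cartesian coordinates for the polar mean direction [math]\displaystyle{ \boldsymbol{\mu}=[0,0,1] }[/math] (the function name is illustrative):

    import numpy as np

    def sample_vmf_s2_polar(kappa, n, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        xi = rng.uniform(size=n)
        V = rng.uniform(0.0, 2*np.pi, size=n)
        with np.errstate(divide='ignore', invalid='ignore'):
            W = 1 + (np.log(xi) + np.log(1 - (xi - 1)/xi * np.exp(-2*kappa))) / kappa
        W = np.where(xi == 0, 1.0, W)                      # the xi = 0 case noted above
        sin_phi = np.sqrt(np.clip(1 - W**2, 0.0, 1.0))     # sin(arccos W)
        # Cartesian coordinates: polar angle arccos(W), equatorial angle V, radius 1
        return np.column_stack([sin_phi*np.cos(V), sin_phi*np.sin(V), W])

    print(sample_vmf_s2_polar(kappa=20.0, n=3))

Samples for an arbitrary mean direction can then be obtained by applying a rotation that maps [math]\displaystyle{ [0,0,1] }[/math] to the desired [math]\displaystyle{ \boldsymbol{\mu} }[/math], as noted above.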

Distribution of polar angle

For [math]\displaystyle{ p = 3 }[/math], the angle θ between [math]\displaystyle{ \mathbf{x} }[/math] and [math]\displaystyle{ \boldsymbol{\mu} }[/math] satisfies [math]\displaystyle{ \cos\theta=\boldsymbol{\mu}^\mathsf{T} \mathbf{x} }[/math]. It has the distribution

[math]\displaystyle{ p(\theta)=\int d^2x\, f(x; \boldsymbol{\mu}, \kappa)\, \delta\left(\theta-\arccos(\boldsymbol{\mu}^\mathsf{T} \mathbf{x})\right) }[/math],

which can be easily evaluated as

[math]\displaystyle{ p(\theta)=2\pi C_3(\kappa)\,\sin\theta\, e^{\kappa\cos\theta} }[/math].

For the general case, [math]\displaystyle{ p\ge2 }[/math], the distribution for the cosine of this angle:

[math]\displaystyle{ \cos\theta = t = \boldsymbol{\mu}^\mathsf{T} \mathbf{x} }[/math]

is given by [math]\displaystyle{ f_\text{radial}(t;\kappa,p) }[/math], as explained above.

The uniform hypersphere distribution

When [math]\displaystyle{ \kappa=0 }[/math], the Von Mises–Fisher distribution, [math]\displaystyle{ \text{VMF}(\boldsymbol{\mu},\kappa) }[/math] on [math]\displaystyle{ S^{p-1} }[/math] simplifies to the uniform distribution on [math]\displaystyle{ S^{p-1}\subset\mathbb{R}^p }[/math]. The density is constant with value [math]\displaystyle{ C_p(0) }[/math]. Pseudo-random samples can be generated by generating samples in [math]\displaystyle{ \mathbb{R}^p }[/math] from the standard multivariate normal distribution, followed by normalization to unit norm.
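For example (a minimal sketch; the function name is illustrative):

    import numpy as np

    def sample_uniform_sphere(p, n, rng=None):
        # standard normal vectors, normalized onto the unit sphere S^{p-1}
        rng = np.random.default_rng() if rng is None else rng
        X = rng.normal(size=(n, p))
        return X / np.linalg.norm(X, axis=1, keepdims=True)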

Component marginal of uniform distribution

For [math]\displaystyle{ 1\le i\le p }[/math], let [math]\displaystyle{ x_i }[/math] be any component of [math]\displaystyle{ \mathbf{x}\in S^{p-1} }[/math]. The marginal distribution for [math]\displaystyle{ x_i }[/math] has the density:[12][13]

[math]\displaystyle{ f_i(x_i;p) = f_\text{radial}(x_i;\kappa=0,p)=\frac{(1-x_i^2)^{\frac{p-1}2-1}}{B\bigl(\frac12,\frac{p-1}2\bigr)} }[/math]

where [math]\displaystyle{ B(\alpha,\beta) }[/math] is the beta function. This distribution may be better understood by highlighting its relation to the beta distribution:

[math]\displaystyle{ \begin{align} x_i^2&\sim\text{Beta}\bigl(\frac12,\frac{p-1}2\bigr) &&\text{and}& \frac{x_i+1}{2}&\sim\text{Beta}\bigl(\frac{p-1}2,\frac{p-1}2\bigr) \end{align} }[/math]

where the Legendre duplication formula is useful to understand the relationships between the normalization constants of the various densities above.

Note that the components of [math]\displaystyle{ \mathbf{x}\in S^{p-1} }[/math] are not independent, so that the uniform density is not the product of the marginal densities; and [math]\displaystyle{ \mathbf{x} }[/math] cannot be assembled by independent sampling of the components.
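These marginal relations can be checked by simulation; a sketch using a Kolmogorov–Smirnov test (SciPy's kstest) against the stated beta distributions:

    import numpy as np
    from scipy import stats

    p, n = 8, 50_000
    rng = np.random.default_rng(0)
    X = rng.normal(size=(n, p))
    X /= np.linalg.norm(X, axis=1, keepdims=True)     # uniform samples on S^{p-1}
    xi = X[:, 0]                                      # any one component

    # large p-values: the samples are consistent with the stated beta laws
    print(stats.kstest(xi**2, 'beta', args=(0.5, (p - 1)/2)).pvalue)
    print(stats.kstest((xi + 1)/2, 'beta', args=((p - 1)/2, (p - 1)/2)).pvalue)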

Distribution of dot-products

In machine learning, especially in image classification, to-be-classified inputs (e.g. images) are often compared using cosine similarity, which is the dot product between intermediate representations in the form of unit vectors (termed embeddings). The dimensionality is typically high, with [math]\displaystyle{ p }[/math] at least several hundred. The deep neural networks that extract embeddings for classification should learn to spread the classes as far apart as possible, and ideally this should give classes that are uniformly distributed on [math]\displaystyle{ S^{p-1} }[/math].[14] For a better statistical understanding of across-class cosine similarity, the distribution of dot-products between unit vectors independently sampled from the uniform distribution may be helpful.


Let [math]\displaystyle{ \mathbf{x},\mathbf{y}\in S^{p-1} }[/math] be unit vectors in [math]\displaystyle{ \mathbb{R}^p }[/math], independently sampled from the uniform distribution. Define:

[math]\displaystyle{ \begin{align} t&=\mathbf{x}'\mathbf{y}\in[-1,1], & r&=\frac{t+1}{2}\in[0,1], & s&=\text{logit}(r) =\log\frac{1+t}{1-t} \in\mathbb{R} \end{align} }[/math]

where [math]\displaystyle{ t }[/math] is the dot-product and [math]\displaystyle{ r,s }[/math] are transformed versions of it. Then the distribution for [math]\displaystyle{ t }[/math] is the same as the marginal component distribution given above;[13] the distribution for [math]\displaystyle{ r }[/math] is symmetric beta and the distribution for [math]\displaystyle{ s }[/math] is symmetric logistic-beta:

[math]\displaystyle{ \begin{align} r&\sim \text{Beta}\bigl(\frac{p-1}2,\frac{p-1}2\bigr), & s&\sim B_\sigma\bigl(\frac{p-1}2,\frac{p-1}2\bigr) \end{align} }[/math]

The means and variances are:

[math]\displaystyle{ \begin{align} E[t]&=0, & E[r]&=\frac12, & E[s]&=0, \end{align} }[/math]

and

[math]\displaystyle{ \begin{align} \text{var}[t]&=\frac1p, & \text{var}[r]&=\frac1{4p}, & \text{var}[s]&=2\psi'\bigl(\frac{p-1}2\bigr)\approx\frac4{p-1} \end{align} }[/math]

where [math]\displaystyle{ \psi'=\psi^{(1)} }[/math] is the first polygamma function. The variances decrease, the distributions of all three variables become more Gaussian, and the final approximation gets better as the dimensionality, [math]\displaystyle{ p }[/math], is increased.
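A small simulation (a sketch, using a typical embedding dimension) illustrates these statistics:

    import numpy as np

    p, n = 512, 100_000
    rng = np.random.default_rng(0)
    x = rng.normal(size=(n, p)); x /= np.linalg.norm(x, axis=1, keepdims=True)
    y = rng.normal(size=(n, p)); y /= np.linalg.norm(y, axis=1, keepdims=True)

    t = np.sum(x*y, axis=1)                 # dot-products of independent uniform unit vectors
    s = np.log((1 + t)/(1 - t))             # logit-transformed version
    print(t.var(), 1/p)                     # close to each other
    print(s.var(), 4/(p - 1))               # close to each other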

Generalizations

Matrix Von Mises-Fisher

The matrix von Mises-Fisher distribution (also known as matrix Langevin distribution[15][16]) has the density

[math]\displaystyle{ f_{n, p}(\mathbf{X}; \mathbf{F}) \propto \exp(\operatorname{tr}(\mathbf{F}^\mathsf{T}\mathbf{X})) }[/math]

supported on the Stiefel manifold of [math]\displaystyle{ n \times p }[/math] orthonormal p-frames [math]\displaystyle{ \mathbf{X} }[/math], where [math]\displaystyle{ \mathbf{F} }[/math] is an arbitrary [math]\displaystyle{ n \times p }[/math] real matrix.[17][18]

Saw distributions

Ulrich,[5] in designing an algorithm for sampling from the VMF distribution, makes use of a family of distributions named after and explored by John G. Saw.[19] A Saw distribution is a distribution on the [math]\displaystyle{ (p-1) }[/math]-sphere, [math]\displaystyle{ S^{p-1} }[/math], with modal vector [math]\displaystyle{ \boldsymbol{\mu}\in S^{p-1} }[/math] and concentration [math]\displaystyle{ \kappa\ge0 }[/math], and of which the density function has the form:

[math]\displaystyle{ f_\text{Saw}(\mathbf{x};\boldsymbol\mu,\kappa) = \frac{g(\kappa\mathbf x'\boldsymbol\mu)}{K_p(\kappa)} }[/math]

where [math]\displaystyle{ g }[/math] is a non-negative, increasing function; and where [math]\displaystyle{ K_p(\kappa) }[/math] is the normalization constant. The above-mentioned radial-tangential decomposition generalizes to the Saw family, and the radial component [math]\displaystyle{ t=\mathbf x'\boldsymbol\mu }[/math] has the density:

[math]\displaystyle{ f_\text{Saw-radial}(t;\kappa)=\frac{2\pi^{p/2}}{\Gamma(p/2)}\frac{g(\kappa t)(1-t^2)^{(p-3)/2}}{B\bigl(\frac12,\frac{p-1}2\bigr)K_p(\kappa)}. }[/math]

where [math]\displaystyle{ B }[/math] is the beta function. Also notice that the left-hand factor of the radial density is the surface area of [math]\displaystyle{ S^{p-1} }[/math].

By setting [math]\displaystyle{ g(\kappa\mathbf x'\boldsymbol\mu)=e^{\kappa\mathbf x'\boldsymbol\mu} }[/math], one recovers the VMF distribution.

See also

References

  1. Fisher, R. A. (1953). "Dispersion on a sphere". Proc. R. Soc. Lond. A 217 (1130): 295–305. doi:10.1098/rspa.1953.0064. Bibcode1953RSPSA.217..295F. 
  2. Watson, G. S. (1980). "Distributions on the Circle and on the Sphere". J. Appl. Probab. 19: 265–280. doi:10.2307/3213566. 
  3. 3.0 3.1 3.2 3.3 Mardia, Kanti; Jupp, P. E. (1999). Directional Statistics. John Wiley & Sons Ltd.. ISBN 978-0-471-95333-3. 
  4. Embleton, N. I. Fisher, T. Lewis, B. J. J. (1993). Statistical analysis of spherical data (1st pbk. ed.). Cambridge: Cambridge University Press. pp. 115–116. ISBN 0-521-45699-1. https://archive.org/details/statisticalanaly0000fish/page/115. 
  5. 5.0 5.1 Ulrich, Gary (1984). "Computer generation of distributions on the m-sphere". Applied Statistics 33 (2): 158–163. doi:10.2307/2347441. https://www.jstor.org/stable/2347441. 
  6. Wood, Andrew T (1994). "Simulation of the Von Mises Fisher distribution". Communications in Statistics - Simulation and Computation 23 (1): 157–164. doi:10.1080/03610919408813161. https://www.tandfonline.com/doi/abs/10.1080/03610919408813161. 
  7. Hornik, Kurt; Grün, Bettina (2014). "movMF: An R Package for Fitting Mixtures of Von Mises-Fisher Distributions". Journal of Statistical Software 58 (10). doi:10.18637/jss.v058.i10. https://www.jstatsoft.org/article/view/v058i10. 
  8. 8.0 8.1 Pinzón, Carlos; Jung, Kangsoo (2023-03-03) (in en), Fast Python sampler for the von Mises Fisher distribution, https://hal.science/hal-04004568, retrieved 2023-03-30 
  9. De Cao, Nicola; Aziz, Wilker (13 Feb 2023). "The Power Spherical distribution". arXiv:2006.04437 [stat.ML].
  10. Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark (2018-04-06). "Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization" (in English). Solid Earth 9 (2): 385–402. doi:10.5194/se-9-385-2018. ISSN 1869-9510. Bibcode2018SolE....9..385P. https://se.copernicus.org/articles/9/385/2018/. 
  11. A., Wood, Andrew T. (1992). Simulation of the Von Mises Fisher distribution. Centre for Mathematics & its Applications, Australian National University. OCLC 221030477. http://worldcat.org/oclc/221030477. 
  12. Gosmann, J; Eliasmith, C (2016). "Optimizing Semantic Pointer Representations for Symbol-Like Processing in Spiking Neural Networks". PLOS ONE 11 (2): e0149928. doi:10.1371/journal.pone.0149928. PMID 26900931. Bibcode2016PLoSO..1149928G. 
  13. 13.0 13.1 Voelker, Aaron R.; Gosmann, Jan; Stewart, Terrence C.. "Efficiently sampling vectors and coordinates from the n-sphere and n-ball". Centre for Theoretical Neuroscience – Technical Report, 2017. http://compneuro.uwaterloo.ca/files/publications/voelker.2017.pdf. 
  14. Wang, Tongzhou; Isola, Phillip (2020). "Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere". International Conference on Machine Learning (ICML). 
  15. Pal, Subhadip; Sengupta, Subhajit; Mitra, Riten; Banerjee, Arunava (2020). "Conjugate Priors and Posterior Inference for the Matrix Langevin Distribution on the Stiefel Manifold". Bayesian Analysis 15 (3): 871–908. doi:10.1214/19-BA1176. ISSN 1936-0975. 
  16. Chikuse, Yasuko (1 May 2003). "Concentrated matrix Langevin distributions" (in en). Journal of Multivariate Analysis 85 (2): 375–394. doi:10.1016/S0047-259X(02)00065-9. ISSN 0047-259X. 
  17. Jupp (1979). "Maximum likelihood estimators for the matrix von Mises-Fisher and Bingham distributions". The Annals of Statistics 7 (3): 599–606. doi:10.1214/aos/1176344681. https://projecteuclid.org/euclid.aos/1176344681. 
  18. Downs (1972). "Orientational statistics". Biometrika 59 (3): 665–676. doi:10.1093/biomet/59.3.665. 
  19. Saw, John G (1978). "A family of distributions on the m-sphere and some hypothesis tests". Biometrika 65: 69–73. doi:10.2307/2335278. https://www.jstor.org/stable/2335278. 

Further reading

  • Dhillon, I., Sra, S. (2003) "Modeling Data using Directional Distributions". Tech. rep., University of Texas, Austin.
  • Banerjee, A., Dhillon, I. S., Ghosh, J., & Sra, S. (2005). "Clustering on the unit hypersphere using von Mises-Fisher distributions". Journal of Machine Learning Research, 6(Sep), 1345-1382.
  • Sra, S. (2011). "A short note on parameter approximation for von Mises-Fisher distributions: And a fast implementation of I_s(x)". Computational Statistics 27: 177–190. doi:10.1007/s00180-011-0232-x.