Stein discrepancy
A Stein discrepancy is a statistical divergence between two probability measures that is rooted in Stein's method. It was first formulated as a tool to assess the quality of Markov chain Monte Carlo samplers,[1] but has since been used in diverse settings in statistics, machine learning and computer science.[2]
Definition
Let [math]\displaystyle{ \mathcal{X} }[/math] be a measurable space and let [math]\displaystyle{ \mathcal{M} }[/math] be a set of measurable functions of the form [math]\displaystyle{ m : \mathcal{X} \rightarrow \mathbb{R} }[/math]. A natural notion of distance between two probability distributions [math]\displaystyle{ P }[/math], [math]\displaystyle{ Q }[/math], defined on [math]\displaystyle{ \mathcal{X} }[/math], is provided by an integral probability metric[3]
- [math]\displaystyle{ (1.1) \quad d_{\mathcal{M}}(P , Q) := \sup_{m \in \mathcal{M}} |\mathbb{E}_{X \sim P}[m(X)] - \mathbb{E}_{Y \sim Q}[m(Y)]| , }[/math]
where for the purposes of exposition we assume that the expectations exist, and that the set [math]\displaystyle{ \mathcal{M} }[/math] is sufficiently rich that (1.1) is indeed a metric on the set of probability distributions on [math]\displaystyle{ \mathcal{X} }[/math], i.e. [math]\displaystyle{ d_{\mathcal{M}}(P,Q) = 0 }[/math] if and only if [math]\displaystyle{ P=Q }[/math]. The choice of the set [math]\displaystyle{ \mathcal{M} }[/math] determines the topological properties of (1.1). However, evaluating (1.1) requires access to both [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math], which often renders its direct computation impractical.
Stein's method is a theoretical tool that can be used to bound (1.1). Specifically, we suppose that we can identify an operator [math]\displaystyle{ \mathcal{A}_{P} }[/math] and a set [math]\displaystyle{ \mathcal{F}_{P} }[/math] of real-valued functions in the domain of [math]\displaystyle{ \mathcal{A}_{P} }[/math], both of which may be [math]\displaystyle{ P }[/math]-dependent, such that for each [math]\displaystyle{ m \in \mathcal{M} }[/math] there exists a solution [math]\displaystyle{ f_m \in \mathcal{F}_{P} }[/math] to the Stein equation
- [math]\displaystyle{ (1.2) \quad m(x) - \mathbb{E}_{X \sim P}[m(X)] = \mathcal{A}_{P} f_m(x) . }[/math]
The operator [math]\displaystyle{ \mathcal{A}_{P} }[/math] is termed a Stein operator and the set [math]\displaystyle{ \mathcal{F}_{P} }[/math] is called a Stein set. Substituting (1.2) into (1.1), we obtain an upper bound
- [math]\displaystyle{ d_{\mathcal{M}}(P , Q) = \sup_{m \in \mathcal{M}} |\mathbb{E}_{Y \sim Q}[m(Y) - \mathbb{E}_{X \sim P}[m(X)] ]| = \sup_{m \in \mathcal{M}} |\mathbb{E}_{Y \sim Q}[ \mathcal{A}_{P} f_m(Y) ]| \leq \sup_{f \in \mathcal{F}_{P}} |\mathbb{E}_{Y \sim Q}[\mathcal{A}_{P} f(Y)]| }[/math] .
This resulting bound
- [math]\displaystyle{ D_P(Q) := \sup_{f \in \mathcal{F}_P} |\mathbb{E}_{Y \sim Q}[\mathcal{A}_P f(Y)]| }[/math]
is called a Stein discrepancy.[1] In contrast to the original integral probability metric [math]\displaystyle{ d_{\mathcal{M}}(P , Q) }[/math], it may be possible to analyse or compute [math]\displaystyle{ D_{P}(Q) }[/math] using expectations only with respect to the distribution [math]\displaystyle{ Q }[/math].
Examples
Several different Stein discrepancies have been studied, with some of the most widely used presented next.
Classical Stein discrepancy
For a probability distribution [math]\displaystyle{ P }[/math] with positive and differentiable density function [math]\displaystyle{ p }[/math] on a convex set [math]\displaystyle{ \mathcal{X} \subseteq \mathbb{R}^d }[/math], whose boundary is denoted [math]\displaystyle{ \partial \mathcal{X} }[/math], the combination of the Langevin–Stein operator [math]\displaystyle{ \mathcal{A}_{P} f = \nabla \cdot f + f \cdot \nabla \log p }[/math] and the classical Stein set
- [math]\displaystyle{ \mathcal{F}_P = \left\{ f : \mathcal{X} \rightarrow \mathbb{R}^d \,\Biggl\vert\, \sup_{x \neq y} \max \left( \|f(x)\| , \|\nabla f(x) \|, \frac{\|\nabla f(x) - \nabla f(y) \|}{\|x-y\|} \right) \leq 1 , \; \langle f(x) , n(x) \rangle = 0 \; \forall x \in \partial \mathcal{X} \right\} }[/math]
yields the classical Stein discrepancy.[1] Here [math]\displaystyle{ \|\cdot\| }[/math] denotes the Euclidean norm, [math]\displaystyle{ \langle \cdot , \cdot \rangle }[/math] the Euclidean inner product, and [math]\displaystyle{ \| M \| = \textstyle \sup_{v \in \mathbb{R}^d, \|v\| = 1} \|Mv\| }[/math] the associated operator norm for matrices [math]\displaystyle{ M \in \R^{d \times d} }[/math]; [math]\displaystyle{ n(x) }[/math] denotes the outward unit normal to [math]\displaystyle{ \partial \mathcal{X} }[/math] at location [math]\displaystyle{ x \in \partial \mathcal{X} }[/math]. If [math]\displaystyle{ \mathcal{X} = \R^d }[/math] then we interpret [math]\displaystyle{ \partial \mathcal{X} = \emptyset }[/math].
In the univariate case [math]\displaystyle{ d=1 }[/math], the classical Stein discrepancy can be computed exactly by solving a quadratically constrained quadratic program.[1]
Graph Stein discrepancy
The first known computable Stein discrepancies were the graph Stein discrepancies (GSDs). Given a discrete distribution [math]\displaystyle{ Q = \textstyle \sum_{i=1}^n w_i \delta(x_i) }[/math], one can define the graph [math]\displaystyle{ G }[/math] with vertex set [math]\displaystyle{ V = \{x_1, \dots, x_n\} }[/math] and edge set [math]\displaystyle{ E \subseteq V \times V }[/math]. From this graph, one can define the graph Stein set as
- [math]\displaystyle{ \begin{align} \mathcal{F}_P = \Big\{ f : \mathcal{X} \rightarrow \mathbb{R}^d & \,\Bigl\vert\, \max \left(\|f(v)\|_\infty, \|\nabla f(v)\|_\infty, {\textstyle\frac{\|f(x) - f(y)\|_\infty}{\|x - y\|_1}}, {\textstyle \frac{\|\nabla f(x) - \nabla f(y)\|_\infty}{\|x - y\|_1}}\right) \le 1, \\[8pt] & {\textstyle\frac{\|f(x) - f(y) - {\nabla f(x)}{(x - y)}\|_\infty}{\frac{1}{2}\|x - y\|_1^2} \leq 1}, {\textstyle\frac{\|f(x) - f(y) -{\nabla f(y)}{(x - y)}\|_\infty}{\frac{1}{2}\|x - y\|_1^2} \leq 1}, \; \forall v \in \operatorname{supp}(Q), (x,y)\in E \Big\}. \end{align} }[/math]
The combination of the Langevin–Stein operator and the graph Stein set is called the graph Stein discrepancy (GSD). The GSD is actually the solution of a finite-dimensional linear program, with the size of [math]\displaystyle{ E }[/math] as low as linear in [math]\displaystyle{ n }[/math], meaning that the GSD can be efficiently computed.[1]
Kernel Stein discrepancy
The supremum arising in the definition of Stein discrepancy can be evaluated in closed form using a particular choice of Stein set. Indeed, let [math]\displaystyle{ \mathcal{F}_P = \{f \in H(K) : \|f\|_{H(K)} \leq 1\} }[/math] be the unit ball in a (possibly vector-valued) reproducing kernel Hilbert space [math]\displaystyle{ H(K) }[/math] with reproducing kernel [math]\displaystyle{ K }[/math], whose elements are in the domain of the Stein operator [math]\displaystyle{ \mathcal{A}_P }[/math]. Suppose that
- For each fixed [math]\displaystyle{ x \in \mathcal{X} }[/math], the map [math]\displaystyle{ f \mapsto \mathcal{A}_P[f](x) }[/math] is a continuous linear functional on [math]\displaystyle{ \mathcal{F}_P }[/math].
- [math]\displaystyle{ \mathbb{E}_{X \sim Q}[ \mathcal{A}_P \mathcal{A}_P' K(X,X) ] \lt \infty }[/math].
where the Stein operator [math]\displaystyle{ \mathcal{A}_P }[/math] acts on the first argument of [math]\displaystyle{ K(\cdot,\cdot) }[/math] and [math]\displaystyle{ \mathcal{A}_P' }[/math] acts on the second argument. Then it can be shown[4] that
- [math]\displaystyle{ D_P(Q) = \sqrt{ \mathbb{E}_{X,X' \sim Q} [ \mathcal{A}_P \mathcal{A}_P' K(X,X') ] } }[/math],
where the random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ X' }[/math] in the expectation are independent. In particular, if [math]\displaystyle{ Q = \sum_{i=1}^n w_i \delta(x_i) }[/math] is a discrete distribution on [math]\displaystyle{ \mathcal{X} }[/math], then the Stein discrepancy takes the closed form
- [math]\displaystyle{ D_P(Q) = \sqrt{ \sum_{i=1}^n \sum_{j=1}^n w_i w_j \mathcal{A}_P \mathcal{A}_P' K(x_i,x_j) }. }[/math]
A Stein discrepancy constructed in this manner is called a kernel Stein discrepancy[5][6][7][8] and the construction is closely connected to the theory of kernel embedding of probability distributions.
Let [math]\displaystyle{ k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R} }[/math] be a reproducing kernel. For a probability distribution [math]\displaystyle{ P }[/math] with positive and differentiable density function [math]\displaystyle{ p }[/math] on [math]\displaystyle{ \mathcal{X} = \R^d }[/math], the combination of the Langevin–Stein operator [math]\displaystyle{ \mathcal{A}_{P} f = \nabla \cdot f + f \cdot \nabla \log p }[/math] and the Stein set
- [math]\displaystyle{ \mathcal{F}_P = \left\{f \in H(k) \times \dots \times H(k) : \sum_{i=1}^d \|f_i\|_{H(k)}^2 \leq 1\right\}, }[/math]
associated to the matrix-valued reproducing kernel [math]\displaystyle{ K(x,x') = k(x,x') I_{d \times d} }[/math], yields a kernel Stein discrepancy with[5]
- [math]\displaystyle{ \mathcal{A}_P \mathcal{A}_P' K(x,x') = \nabla_x \cdot \nabla_{x'} k(x,x') + \nabla_x k(x,x') \cdot \nabla_{x'} \log p(x') +\nabla_{x'} k(x,x') \cdot \nabla_x \log p(x) + k(x,x') \nabla_x \log p(x) \cdot \nabla_{x'} \log p(x') }[/math]
where [math]\displaystyle{ \nabla_x }[/math] (resp. [math]\displaystyle{ \nabla_{x'} }[/math]) indicates the gradient with respect to the argument indexed by [math]\displaystyle{ x }[/math] (resp. [math]\displaystyle{ x' }[/math]).
Concretely, if we take the inverse multi-quadric kernel [math]\displaystyle{ k(x,x') = (1 + (x-x')^\top \Sigma^{-1} (x-x') )^{-\beta} }[/math] with parameters [math]\displaystyle{ \beta \gt 0 }[/math] and [math]\displaystyle{ \Sigma \in \mathbb{R}^{d \times d} }[/math] a symmetric positive definite matrix, and if we denote [math]\displaystyle{ u(x) = \nabla \log p(x) }[/math], then we have
[math]\displaystyle{ (2.1) \quad \mathcal{A}_P \mathcal{A}_P' K(x,x') = - \frac{4 \beta (\beta + 1) (x-x')^\top \Sigma^{-2} (x-x')}{ \left(1 + (x-x')^\top \Sigma^{-1} (x-x') \right)^{\beta + 2} } + 2 \beta \left[ \frac{ \text{tr}(\Sigma^{-1}) + [u(x) - u(x')]^\top \Sigma^{-1} (x-x') }{ \left(1 + (x-x')^\top \Sigma^{-1} (x-x') \right)^{1+\beta} } \right] + \frac{ u(x)^\top u(x') }{ \left(1 + (x-x')^\top \Sigma^{-1} (x-x') \right)^{\beta} } }[/math].
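As a concrete illustration (a minimal sketch, not taken from the cited references, with hypothetical function names), the following Python code evaluates (2.1) with [math]\displaystyle{ \Sigma = I_d }[/math] and [math]\displaystyle{ \beta = 1/2 }[/math] for a standard Gaussian target, whose score is [math]\displaystyle{ u(x) = -x }[/math], and substitutes the result into the closed-form expression for [math]\displaystyle{ D_P(Q) }[/math] when [math]\displaystyle{ Q }[/math] is a discrete distribution.

```python
import numpy as np

def imq_stein_kernel(X, score, beta=0.5):
    """Evaluate the Stein kernel A_P A_P' K(x, x') of (2.1), with Sigma = I_d,
    on all pairs of rows of X.

    X     : (n, d) array of points.
    score : function returning u(x) = grad log p(x) for each row of X.
    """
    d = X.shape[1]
    U = score(X)                              # (n, d) score evaluations
    diff = X[:, None, :] - X[None, :, :]      # (n, n, d) pairwise differences x - x'
    sq = np.sum(diff ** 2, axis=-1)           # (n, n) squared Euclidean distances
    base = 1.0 + sq                           # 1 + (x - x')^T (x - x')
    term1 = -4.0 * beta * (beta + 1.0) * sq / base ** (beta + 2)
    term2 = 2.0 * beta * (d + np.einsum('ijk,ijk->ij',
                                        U[:, None, :] - U[None, :, :], diff)) / base ** (beta + 1)
    term3 = (U @ U.T) / base ** beta
    return term1 + term2 + term3

def kernel_stein_discrepancy(X, weights, score, beta=0.5):
    """Closed-form kernel Stein discrepancy of a weighted discrete distribution."""
    KP = imq_stein_kernel(X, score, beta)
    return np.sqrt(weights @ KP @ weights)

# Example: quality of i.i.d. samples as an approximation of a standard Gaussian
# target, whose score is u(x) = -x (no normalisation constant is needed).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w = np.full(200, 1.0 / 200)
print(kernel_stein_discrepancy(X, w, score=lambda x: -x))
```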
Diffusion Stein discrepancy
Diffusion Stein discrepancies[9] generalize the Langevin Stein operator [math]\displaystyle{ \mathcal{A}_{P} f = \nabla \cdot f + f \cdot \nabla \log p = \textstyle\frac{1}{p}\nabla \cdot (f p) }[/math] to a class of diffusion Stein operators [math]\displaystyle{ \mathcal{A}_{P} f = \textstyle\frac{1}{p}\nabla \cdot (m f p) }[/math], each representing an Itô diffusion that has [math]\displaystyle{ P }[/math] as its stationary distribution. Here, [math]\displaystyle{ m }[/math] is a matrix-valued function determined by the infinitesimal generator of the diffusion.
Other Stein discrepancies
Additional Stein discrepancies have been developed for constrained domains,[10] non-Euclidean domains,[11][12][10] discrete domains,[13][14] and improved scalability,[15][16] as well as gradient-free Stein discrepancies in which derivatives of the density [math]\displaystyle{ p }[/math] are circumvented.[17]
Properties
The flexibility in the choice of Stein operator and Stein set in the construction of Stein discrepancy precludes general statements of a theoretical nature. However, much is known about the particular Stein discrepancies described above.
Computable without the normalisation constant
Stein discrepancy can sometimes be computed in challenging settings where the probability distribution [math]\displaystyle{ P }[/math] admits a probability density function [math]\displaystyle{ p }[/math] (with respect to an appropriate reference measure on [math]\displaystyle{ \mathcal{X} }[/math]) of the form [math]\displaystyle{ p(x) = \textstyle \frac{1}{Z} \tilde{p}(x) }[/math], where [math]\displaystyle{ \tilde{p}(x) }[/math] and its derivative can be numerically evaluated but whose normalisation constant [math]\displaystyle{ Z }[/math] is not easily computed or approximated. Considering (2.1), we observe that the dependence of [math]\displaystyle{ \mathcal{A}_P \mathcal{A}_P' K(x,x') }[/math] on [math]\displaystyle{ P }[/math] occurs only through the term
- [math]\displaystyle{ u(x) = \nabla \log p(x) = \nabla \log \left( \frac{\tilde{p}(x)}{Z} \right) = \nabla \log \tilde{p}(x) - \nabla \log Z = \nabla \log \tilde{p}(x) }[/math]
which does not depend on the normalisation constant [math]\displaystyle{ Z }[/math].
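For example, if [math]\displaystyle{ \tilde{p}(x) = \exp\left( -\tfrac{1}{2} (x-\mu)^\top \Sigma^{-1} (x-\mu) \right) }[/math] is an unnormalised Gaussian density, then [math]\displaystyle{ u(x) = \nabla \log \tilde{p}(x) = -\Sigma^{-1}(x-\mu) }[/math] can be evaluated without knowledge of the normalisation constant [math]\displaystyle{ Z = (2\pi)^{d/2} \det(\Sigma)^{1/2} }[/math].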
Stein discrepancy as a statistical divergence
A basic requirement of Stein discrepancy is that it is a statistical divergence, meaning that [math]\displaystyle{ D_P(Q) \geq 0 }[/math] and [math]\displaystyle{ D_P(Q) = 0 }[/math] if and only if [math]\displaystyle{ Q=P }[/math]. This property can be shown to hold for the classical Stein discrepancy[1] and the kernel Stein discrepancy,[6][7][8] provided that appropriate regularity conditions hold.
Convergence control
A stronger property, compared to being a statistical divergence, is convergence control, meaning that [math]\displaystyle{ D_P(Q_n) \rightarrow 0 }[/math] implies [math]\displaystyle{ Q_n }[/math] converges to [math]\displaystyle{ P }[/math] in a sense to be specified. For example, under appropriate regularity conditions, both the classical Stein discrepancy and graph Stein discrepancy enjoy Wasserstein convergence control, meaning that [math]\displaystyle{ D_P(Q_n) \rightarrow 0 }[/math] implies that the Wasserstein metric between [math]\displaystyle{ Q_n }[/math] and [math]\displaystyle{ P }[/math] converges to zero.[1][18][9] For the kernel Stein discrepancy, weak convergence control has been established[8][19] under regularity conditions on the distribution [math]\displaystyle{ P }[/math] and the reproducing kernel [math]\displaystyle{ K }[/math], which are applicable in particular to (2.1). Other well-known choices of [math]\displaystyle{ K }[/math], such as those based on the Gaussian kernel, provably do not enjoy weak convergence control.[8]
Convergence detection
The converse property to convergence control is convergence detection, meaning that [math]\displaystyle{ D_P(Q_n) \rightarrow 0 }[/math] whenever [math]\displaystyle{ Q_n }[/math] converges to [math]\displaystyle{ P }[/math] in a sense to be specified. For example, under appropriate regularity conditions, classical Stein discrepancy enjoys a particular form of mean-square convergence detection,[1][9] meaning that [math]\displaystyle{ D_P(Q_n) \rightarrow 0 }[/math] whenever [math]\displaystyle{ X_n \sim Q_n }[/math] converges in mean-square to [math]\displaystyle{ X \sim P }[/math] and [math]\displaystyle{ \nabla \log p(X_n) }[/math] converges in mean-square to [math]\displaystyle{ \nabla \log p(X) }[/math]. For kernel Stein discrepancy, Wasserstein convergence detection has been established,[8] under appropriate regularity conditions on the distribution [math]\displaystyle{ P }[/math] and the reproducing kernel [math]\displaystyle{ K }[/math].
Applications of Stein discrepancy
Several applications of Stein discrepancy have been proposed, some of which are now described.
Optimal quantisation
Given a probability distribution [math]\displaystyle{ P }[/math] defined on a measurable space [math]\displaystyle{ \mathcal{X} }[/math], the quantisation task is to select a small number of states [math]\displaystyle{ x_1,\dots,x_n \in \mathcal{X} }[/math] such that the associated discrete distribution [math]\displaystyle{ Q^n = \frac{1}{n} \sum_{i=1}^n \delta(x_i) }[/math] is an accurate approximation of [math]\displaystyle{ P }[/math] in a sense to be specified.
Stein points[19] are the result of performing optimal quantisation via minimisation of Stein discrepancy:
[math]\displaystyle{ (3.1) \quad \underset{x_1,\dots,x_n \in \mathcal{X}}{\operatorname{arg\,min}} \; D_{P}\left( \frac{1}{n} \sum_{i=1}^n \delta(x_i) \right) }[/math]
Under appropriate regularity conditions, it can be shown[19] that [math]\displaystyle{ D_P(Q^n) \rightarrow 0 }[/math] as [math]\displaystyle{ n \rightarrow \infty }[/math]. Thus, if the Stein discrepancy enjoys convergence control, it follows that [math]\displaystyle{ Q^n }[/math] converges to [math]\displaystyle{ P }[/math]. Extensions of this result, to allow for imperfect numerical optimisation, have also been derived.[19][21][20]
Sophisticated optimisation algorithms have been designed to perform efficient quantisation based on Stein discrepancy, including gradient flow algorithms that aim to minimise kernel Stein discrepancy over an appropriate space of probability measures.[22]
Optimal weighted approximation
If one is allowed to consider weighted combinations of point masses, then more accurate approximation is possible compared to (3.1). For simplicity of exposition, suppose we are given a set of states [math]\displaystyle{ \{x_i\}_{i=1}^n \subset \mathcal{X} }[/math]. Then the optimal weighted combination of the point masses [math]\displaystyle{ \delta(x_i) }[/math], i.e.
- [math]\displaystyle{ Q_n := \sum_{i=1}^n w_i^* \delta(x_i), \qquad w^* \in \underset{w_1 + \cdots + w_n = 1}{\operatorname{arg\,min}} \; D_P\left( \sum_{i=1}^n w_i \delta(x_i) \right), }[/math]
which minimises the Stein discrepancy, can be obtained in closed form when a kernel Stein discrepancy is used.[5] Some authors[23][24] consider imposing, in addition, a non-negativity constraint on the weights, i.e. [math]\displaystyle{ w_i \geq 0 }[/math]. In both cases, however, computing the optimal weights [math]\displaystyle{ w^* }[/math] can involve solving systems of linear equations that are numerically ill-conditioned. It has been shown[20] that greedy approximation of [math]\displaystyle{ Q_n }[/math] using an un-weighted combination of [math]\displaystyle{ m \ll n }[/math] states can reduce this computational requirement. In particular, the greedy Stein thinning algorithm
- [math]\displaystyle{ Q_{n,m} := \frac{1}{m} \sum_{i=1}^m \delta(x_{\pi(i)}), \qquad \pi(m) \in \underset{j=1,\dots,n}{\operatorname{arg\,min}} \; D_P\left( \frac{1}{m} \sum_{i=1}^{m-1} \delta(x_{\pi(i)}) + \frac{1}{m} \delta(x_j) \right) }[/math]
has been shown to satisfy an error bound
- [math]\displaystyle{ D_P(Q_{n,m}) = D_P(Q_n) + O\left(\sqrt{\frac{\log m}{m}} \right). }[/math]
Non-myopic and mini-batch generalisations of the greedy algorithm have been demonstrated[25] to yield further improvement in approximation quality relative to computational cost.
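A minimal sketch of the greedy Stein thinning rule displayed above, assuming a precomputed Stein kernel Gram matrix (for example, the output of the hypothetical imq_stein_kernel function sketched earlier):

```python
import numpy as np

def stein_thinning(KP, m):
    """Greedily select m indices so that the un-weighted empirical distribution on
    the selected points has small kernel Stein discrepancy.

    KP : (n, n) Stein kernel Gram matrix with KP[i, j] = A_P A_P' K(x_i, x_j).
    m  : number of points to select (repeats are permitted, as in the rule above).
    """
    n = KP.shape[0]
    selected = []
    running = np.zeros(n)   # running[j] = sum of KP[i, j] over already-selected i
    for _ in range(m):
        # The squared discrepancy of selected + {j} equals, up to terms constant
        # in j, KP[j, j] + 2 * running[j]; minimise this over the candidates j.
        j = int(np.argmin(np.diag(KP) + 2.0 * running))
        selected.append(j)
        running += KP[j]
    return selected

# Usage sketch: KP could be imq_stein_kernel(X, score) from the earlier example;
# the thinned measure then places mass 1/m on each of the rows X[stein_thinning(KP, m)].
```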
Variational inference
Stein discrepancy has been exploited as a variational objective in variational Bayesian methods.[26][27] Given a collection [math]\displaystyle{ \{Q_\theta\}_{\theta \in \Theta} }[/math] of probability distributions on [math]\displaystyle{ \mathcal{X} }[/math], parametrised by [math]\displaystyle{ \theta \in \Theta }[/math], one can seek the distribution in this collection that best approximates a distribution [math]\displaystyle{ P }[/math] of interest:
- [math]\displaystyle{ \underset{\theta \in \Theta}{\operatorname{arg\,min}} \; D_P(Q_\theta) }[/math]
A possible advantage of Stein discrepancy in this context,[27] compared to the traditional Kullback–Leibler variational objective, is that [math]\displaystyle{ Q_\theta }[/math] need not be absolutely continuous with respect to [math]\displaystyle{ P }[/math] in order for [math]\displaystyle{ D_P(Q_\theta) }[/math] to be well-defined. This property can be used to circumvent the use of flow-based generative models, for example, which impose diffeomorphism constraints in order to enforce absolute continuity of [math]\displaystyle{ Q_\theta }[/math] and [math]\displaystyle{ P }[/math].
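A minimal sketch of this variational use of Stein discrepancy, under illustrative assumptions not drawn from the cited works: the variational family [math]\displaystyle{ \{Q_\theta\} }[/math] consists of diagonal Gaussian distributions, the target [math]\displaystyle{ P }[/math] is accessed only through its score, a fixed set of base samples is transformed by the variational parameters (a reparameterisation), and the empirical kernel Stein discrepancy of the transformed samples is minimised numerically.

```python
import numpy as np
from scipy.optimize import minimize

def imq_stein_kernel(X, score, beta=0.5):
    # Stein kernel (2.1) with Sigma = I_d, as in the earlier sketch.
    d = X.shape[1]
    U = score(X)
    diff = X[:, None, :] - X[None, :, :]
    sq = np.sum(diff ** 2, axis=-1)
    base = 1.0 + sq
    return (-4.0 * beta * (beta + 1.0) * sq / base ** (beta + 2)
            + 2.0 * beta * (d + np.einsum('ijk,ijk->ij',
                                          U[:, None, :] - U[None, :, :], diff)) / base ** (beta + 1)
            + (U @ U.T) / base ** beta)

# Target P, accessed only through its score; here a Gaussian with mean (1, -1).
target_score = lambda x: -(x - np.array([1.0, -1.0]))

rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 2))        # fixed base samples, reused at every evaluation

def ksd2(theta):
    """Squared KSD of Q_theta, a diagonal Gaussian with mean mu and log-scales rho."""
    mu, rho = theta[:2], theta[2:]
    X = mu + np.exp(rho) * Z         # reparameterised samples from Q_theta
    return imq_stein_kernel(X, target_score).mean()

result = minimize(ksd2, x0=np.zeros(4), method='Nelder-Mead')
print(result.x[:2], np.exp(result.x[2:]))   # fitted mean and scales of Q_theta
```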
Statistical estimation
Stein discrepancy has been proposed as a tool to fit parametric statistical models to data. Given a dataset [math]\displaystyle{ \{x_i\}_{i=1}^n \subset \mathcal{X} }[/math], consider the associated discrete distribution [math]\displaystyle{ Q^n = \textstyle \frac{1}{n}\sum_{i=1}^n \delta(x_i) }[/math]. For a given parametric collection [math]\displaystyle{ \{P_\theta\}_{\theta \in \Theta} }[/math] of probability distributions on [math]\displaystyle{ \mathcal{X} }[/math], one can estimate a value of the parameter [math]\displaystyle{ \theta }[/math] which is compatible with the dataset using a minimum Stein discrepancy estimator[28]
- [math]\displaystyle{ \underset{\theta \in \Theta}{\operatorname{arg\,min}} \; D_{P_\theta}(Q^n). }[/math]
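A minimal sketch of a minimum Stein discrepancy estimator under illustrative assumptions (not taken from the cited reference): the model [math]\displaystyle{ \{P_\theta\} }[/math] is a Gaussian location family with identity covariance, so that its score is [math]\displaystyle{ u_\theta(x) = \theta - x }[/math], and the squared kernel Stein discrepancy of the dataset is minimised over [math]\displaystyle{ \theta }[/math] numerically.

```python
import numpy as np
from scipy.optimize import minimize

def imq_stein_kernel(X, score, beta=0.5):
    # Stein kernel (2.1) with Sigma = I_d, as in the earlier sketch.
    d = X.shape[1]
    U = score(X)
    diff = X[:, None, :] - X[None, :, :]
    sq = np.sum(diff ** 2, axis=-1)
    base = 1.0 + sq
    return (-4.0 * beta * (beta + 1.0) * sq / base ** (beta + 2)
            + 2.0 * beta * (d + np.einsum('ijk,ijk->ij',
                                          U[:, None, :] - U[None, :, :], diff)) / base ** (beta + 1)
            + (U @ U.T) / base ** beta)

rng = np.random.default_rng(3)
data = rng.normal(loc=[2.0, -1.0], size=(150, 2))     # observed dataset

def ksd2(theta):
    """Squared kernel Stein discrepancy between the data and P_theta = N(theta, I)."""
    return imq_stein_kernel(data, score=lambda x: theta - x).mean()

theta_hat = minimize(ksd2, x0=np.zeros(2), method='Nelder-Mead').x
print(theta_hat)   # expected to lie close to the true location (2, -1)
```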
The approach is closely related to the framework of minimum distance estimation, with the role of the "distance" being played by the Stein discrepancy. Alternatively, a generalised Bayesian approach to estimation of the parameter [math]\displaystyle{ \theta }[/math] can be considered[4] where, given a prior probability distribution with density function [math]\displaystyle{ \pi(\theta) }[/math], [math]\displaystyle{ \theta \in \Theta }[/math], (with respect to an appropriate reference measure on [math]\displaystyle{ \Theta }[/math]), one constructs a generalised posterior with probability density function
- [math]\displaystyle{ \pi^n(\theta) \propto \pi(\theta) \exp\left( - \gamma D_{P_\theta}(Q^n)^2 \right) , }[/math]
for some [math]\displaystyle{ \gamma \gt 0 }[/math] to be specified or determined.
Hypothesis testing
The Stein discrepancy has also been used as a test statistic for performing goodness-of-fit testing[6][7] and comparing latent variable models.[29] Since the aforementioned tests have a computational cost quadratic in the sample size, alternatives have been developed with (near-)linear runtimes.[30][15]
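The cited tests employ more refined procedures to approximate the null distribution of the test statistic; the sketch below instead uses a crude Monte Carlo approximation that applies only when the null hypothesis is simple and [math]\displaystyle{ P }[/math] can be sampled, and is intended purely as an illustration of kernel Stein discrepancy as a goodness-of-fit statistic.

```python
import numpy as np

def imq_stein_kernel(X, score, beta=0.5):
    # Stein kernel (2.1) with Sigma = I_d, as in the earlier sketch.
    d = X.shape[1]
    U = score(X)
    diff = X[:, None, :] - X[None, :, :]
    sq = np.sum(diff ** 2, axis=-1)
    base = 1.0 + sq
    return (-4.0 * beta * (beta + 1.0) * sq / base ** (beta + 2)
            + 2.0 * beta * (d + np.einsum('ijk,ijk->ij',
                                          U[:, None, :] - U[None, :, :], diff)) / base ** (beta + 1)
            + (U @ U.T) / base ** beta)

def ksd2(X, score):
    return imq_stein_kernel(X, score).mean()

# Null hypothesis: the data were drawn from P = N(0, I_d), with score u(x) = -x.
score = lambda x: -x
rng = np.random.default_rng(2)
X_obs = rng.normal(loc=0.5, size=(100, 2))   # data actually drawn from a shifted Gaussian

t_obs = ksd2(X_obs, score)
t_null = [ksd2(rng.normal(size=X_obs.shape), score) for _ in range(200)]
p_value = (1 + sum(t >= t_obs for t in t_null)) / (1 + len(t_null))
print(p_value)   # a small p-value is evidence against the null hypothesis
```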
References
- ↑ 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 J. Gorham and L. Mackey. Measuring Sample Quality with Stein's Method. Advances in Neural Information Processing Systems, 2015.
- ↑ Anastasiou, A., Barp, A., Briol, F-X., Ebner, B., Gaunt, R. E., Ghaderinezhad, F., Gorham, J., Gretton, A., Ley, C., Liu, Q., Mackey, L., Oates, C. J., Reinert, G. & Swan, Y. (2021). Stein’s method meets statistics: A review of some recent developments. arXiv:2105.03481.
- ↑ Müller, Alfred (1997). "Integral Probability Metrics and Their Generating Classes of Functions". Advances in Applied Probability 29 (2): 429–443. doi:10.2307/1428011. ISSN 0001-8678. https://www.cambridge.org/core/product/identifier/S000186780002807X/type/journal_article.
- ↑ 4.0 4.1 Matsubara, T., Knoblauch, J., Briol, F-X., Oates, C. J. Robust Generalised Bayesian Inference for Intractable Likelihoods. arXiv:2104.07359.
- ↑ 5.0 5.1 5.2 Oates, C. J., Girolami, M., & Chopin, N. (2017). Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society B: Statistical Methodology, 79(3), 695–718.
- ↑ 6.0 6.1 6.2 Liu, Q., Lee, J. D., & Jordan, M. I. (2016). A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation. International Conference on Machine Learning, 276–284.
- ↑ 7.0 7.1 7.2 Chwialkowski, K., Strathmann, H., & Gretton, A. (2016). A kernel test of goodness of fit. International Conference on Machine Learning, 2606–2615.
- ↑ 8.0 8.1 8.2 8.3 8.4 Gorham J, Mackey L. Measuring sample quality with kernels. International Conference on Machine Learning 2017 Jul 17 (pp. 1292-1301). PMLR.
- ↑ 9.0 9.1 9.2 Gorham, J., Duncan, A. B., Vollmer, S. J., & Mackey, L. (2019). Measuring sample quality with diffusions. The Annals of Applied Probability, 29(5), 2884-2928.
- ↑ 10.0 10.1 Shi, J., Liu, C., & Mackey, L. (2021). Sampling with Mirrored Stein Operators. arXiv preprint arXiv:2106.12506
- ↑ Barp A, Oates CJ, Porcu E, Girolami M. A Riemann-Stein kernel method. arXiv preprint arXiv:1810.04946. 2018.
- ↑ Xu W, Matsuda T. Interpretable Stein Goodness-of-fit Tests on Riemannian Manifolds. In ICML 2021.
- ↑ Yang J, Liu Q, Rao V, Neville J. Goodness-of-fit testing for discrete distributions via Stein discrepancy. In ICML 2018 (pp. 5561-5570). PMLR.
- ↑ Shi J, Zhou Y, Hwang J, Titsias M, Mackey L. Gradient Estimation with Discrete Stein Operators. arXiv preprint arXiv:2202.09497. 2022.
- ↑ 15.0 15.1 Huggins JH, Mackey L. Random Feature Stein Discrepancies. In NeurIPS 2018.
- ↑ Gorham J, Raj A, Mackey L. Stochastic Stein Discrepancies. In NeurIPS 2020.
- ↑ Fisher M, Oates CJ. Gradient-Free Kernel Stein Discrepancy. arXiv preprint arXiv:2207.02636. 2022.
- ↑ Mackey, L., & Gorham, J. (2016). Multivariate Stein factors for a class of strongly log-concave distributions. Electronic Communications in Probability, 21, 1-14.
- ↑ 19.0 19.1 19.2 19.3 Chen WY, Mackey L, Gorham J, Briol FX, Oates CJ. Stein points. In International Conference on Machine Learning 2018 (pp. 844-853). PMLR.
- ↑ 20.0 20.1 20.2 Riabiz M, Chen W, Cockayne J, Swietach P, Niederer SA, Mackey L, Oates CJ. Optimal thinning of MCMC output. Journal of the Royal Statistical Society B: Statistical Methodology, to appear. 2021. arXiv:2005.03952
- ↑ Chen WY, Barp A, Briol FX, Gorham J, Girolami M, Mackey L, Oates CJ. Stein Point Markov Chain Monte Carlo. International Conference on Machine Learning (ICML 2019). arXiv:1905.03673
- ↑ Korba A, Aubin-Frankowski PC, Majewski S, Ablin P. "Kernel Stein Discrepancy Descent." arXiv preprint arXiv:2105.09994. 2021.
- ↑ Liu Q, Lee J. Black-box importance sampling. In Artificial Intelligence and Statistics 2017 (pp. 952-961). PMLR.
- ↑ Hodgkinson L, Salomone R, Roosta F. The reproducing Stein kernel approach for post-hoc corrected sampling. arXiv preprint arXiv:2001.09266. 2020.
- ↑ Teymur O, Gorham J, Riabiz M, Oates CJ. Optimal quantisation of probability measures using maximum mean discrepancy. In International Conference on Artificial Intelligence and Statistics 2021 (pp. 1027-1035). PMLR.
- ↑ Ranganath R, Tran D, Altosaar J, Blei D. Operator variational inference. Advances in Neural Information Processing Systems. 2016;29:496-504.
- ↑ 27.0 27.1 Fisher M, Nolan T, Graham M, Prangle D, Oates CJ. Measure transport with kernel Stein discrepancy. International Conference on Artificial Intelligence and Statistics 2021 (pp. 1054-1062). PMLR.
- ↑ Barp, A., Briol, F.-X., Duncan, A. B., Girolami, M., & Mackey, L. (2019). Minimum Stein discrepancy estimators. Neural Information Processing Systems, 12964–12976.
- ↑ Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., & Gretton, A. (2019). A kernel Stein test for comparing latent variable models. arXiv preprint arXiv:1907.00586.
- ↑ Jitkrittum W, Xu W, Szabó Z, Fukumizu K, Gretton A. A Linear-Time Kernel Goodness-of-Fit Test. Advances in Neural Information Processing Systems, 2017.
Original source: https://en.wikipedia.org/wiki/Stein_discrepancy.