Distributional data analysis

From HandWiki

Distributional data analysis is a branch of nonparametric statistics related to functional data analysis. It is concerned with random objects that are probability distributions, i.e., with the statistical analysis of samples of random distributions, where each atom of a sample is a distribution. One of the main challenges in distributional data analysis is that the space of probability distributions, while a convex space, is not a vector space.

Notation

Let [math]\displaystyle{ \nu }[/math] be a probability measure on [math]\displaystyle{ D }[/math], where [math]\displaystyle{ D \subset \R^p }[/math] with [math]\displaystyle{ p \ge 1 }[/math]. The probability measure [math]\displaystyle{ \nu }[/math] can be equivalently characterized by its cumulative distribution function [math]\displaystyle{ F }[/math] or by its probability density function [math]\displaystyle{ f }[/math], if the latter exists. For univariate distributions, i.e., [math]\displaystyle{ p = 1 }[/math], the quantile function [math]\displaystyle{ Q=F^{-1} }[/math] can also be used.

Let [math]\displaystyle{ \mathcal{F} }[/math] be a space of distributions [math]\displaystyle{ \nu }[/math] and let [math]\displaystyle{ d }[/math] be a metric on [math]\displaystyle{ \mathcal{F} }[/math] so that [math]\displaystyle{ (\mathcal{F}, d) }[/math] forms a metric space. There are various metrics available for [math]\displaystyle{ d }[/math].[1] For example, suppose [math]\displaystyle{ \nu_1, \; \nu_2 \in \mathcal{F} }[/math], and let [math]\displaystyle{ f_1 }[/math] and [math]\displaystyle{ f_2 }[/math] be the density functions of [math]\displaystyle{ \nu_1 }[/math] and [math]\displaystyle{ \nu_2 }[/math], respectively. The Fisher-Rao metric is defined as [math]\displaystyle{ d_{FR}(f_1, f_2) = \arccos \left( \int_D \sqrt{f_1(x) f_2(x)} dx \right). }[/math]
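For densities discretized on a common grid, the Fisher-Rao distance above can be evaluated by numerical integration. The following Python sketch is an illustration only (the function name and the toy Gaussian densities are assumptions, not from the cited sources):

```python
import numpy as np

def fisher_rao(f1, f2, grid):
    """Fisher-Rao distance between two densities evaluated on a common uniform grid."""
    dx = grid[1] - grid[0]
    # Bhattacharyya coefficient: the integral of sqrt(f1 * f2) over the domain
    bc = np.sum(np.sqrt(f1 * f2)) * dx
    # Clip guards against round-off pushing the coefficient slightly above 1
    return np.arccos(np.clip(bc, 0.0, 1.0))

# Toy example: two unit-variance Gaussian densities with different centers
grid = np.linspace(-10.0, 10.0, 2001)
f1 = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
f2 = np.exp(-0.5 * (grid - 1.0)**2) / np.sqrt(2 * np.pi)

d = fisher_rao(f1, f2, grid)  # 0 for identical densities, pi/2 for disjoint supports
```

The distance is bounded between 0 and [math]\displaystyle{ \pi/2 }[/math], since the Bhattacharyya coefficient inside the arccosine lies in [0, 1].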

For univariate distributions, let [math]\displaystyle{ Q_1 }[/math] and [math]\displaystyle{ Q_2 }[/math] be the quantile functions of [math]\displaystyle{ \nu_1 }[/math] and [math]\displaystyle{ \nu_2 }[/math]. Denote the [math]\displaystyle{ L^p }[/math]-Wasserstein space as [math]\displaystyle{ \mathcal{W}_p }[/math], the space of distributions with finite [math]\displaystyle{ p }[/math]-th moments. Then, for [math]\displaystyle{ \nu_1, \; \nu_2 \in \mathcal{W}_p }[/math], the [math]\displaystyle{ L^p }[/math]-Wasserstein metric is defined as [math]\displaystyle{ d_{W_p}(\nu_1, \nu_2) = \left( \int_0^1 |Q_1(s) - Q_2(s)|^p ds \right)^{1/p}. }[/math]
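Because the univariate Wasserstein metric involves only the quantile functions, it can be computed numerically from quantiles evaluated on a grid of (0, 1). A minimal Python sketch, assuming an equally spaced grid and using SciPy's normal quantiles as illustrative toy data:

```python
import numpy as np
from scipy.stats import norm

def wasserstein_p(q1, q2, p=2):
    """L^p-Wasserstein distance from quantile functions evaluated on an
    equally spaced grid of (0, 1); the integral reduces to an average."""
    return np.mean(np.abs(q1 - q2) ** p) ** (1.0 / p)

# Quantile functions of N(0, 1) and N(2, 1) on a grid of (0, 1)
s = (np.arange(1000) + 0.5) / 1000
q1 = norm.ppf(s, loc=0.0)
q2 = norm.ppf(s, loc=2.0)

d = wasserstein_p(q1, q2)  # W_2 between N(0,1) and N(2,1) equals the mean shift, 2
```

For two normal distributions with equal variance the quantile functions differ by a constant, so the distance equals the shift in the means.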

Mean and variance

For a probability measure [math]\displaystyle{ \nu \in \mathcal{F} }[/math], consider a random process [math]\displaystyle{ \mathfrak{F} }[/math] such that [math]\displaystyle{ \nu \sim \mathfrak{F} }[/math]. One way to define mean and variance of [math]\displaystyle{ \nu }[/math] is to introduce the Fréchet mean and the Fréchet variance. With respect to the metric [math]\displaystyle{ d }[/math] on [math]\displaystyle{ \mathcal{F} }[/math], the Fréchet mean [math]\displaystyle{ \mu_\oplus }[/math], also known as the barycenter, and the Fréchet variance [math]\displaystyle{ V_\oplus }[/math] are defined as[2] [math]\displaystyle{ \begin{align} \mu_\oplus &= \operatorname{argmin}_{\mu \in \mathcal{F}} \mathbb{E}[d^2(\nu, \mu)], \\ V_\oplus &= \mathbb{E}[d^2(\nu, \mu_\oplus)]. \end{align} }[/math]

A widely used example is the Wasserstein-Fréchet mean, or simply the Wasserstein mean, which is the Fréchet mean with the [math]\displaystyle{ L^2 }[/math]-Wasserstein metric [math]\displaystyle{ d_{W_2} }[/math].[3] For [math]\displaystyle{ \nu, \; \mu \in \mathcal{W}_2 }[/math], let [math]\displaystyle{ Q_\nu, \; Q_\mu }[/math] be the quantile functions of [math]\displaystyle{ \nu }[/math] and [math]\displaystyle{ \mu }[/math], respectively. The Wasserstein mean and the Wasserstein variance are defined as [math]\displaystyle{ \begin{align} \mu_\oplus^* &= \operatorname{argmin}_{\mu \in \mathcal{W}_2} \mathbb{E} \left[ \int_0^1 (Q_\nu (s) - Q_\mu (s))^2 ds \right], \\ V_\oplus^* &= \mathbb{E} \left[ \int_0^1 (Q_\nu (s) - Q_{\mu_\oplus^*} (s))^2 ds \right]. \end{align} }[/math]
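For univariate distributions the minimization has a closed form: the quantile function of the Wasserstein mean is the (expected) pointwise average of the quantile functions. A toy Python sketch on a location family of Gaussians (the data and variable names are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

# A sample of random distributions: N(m_i, 1) with random centers m_i
rng = np.random.default_rng(0)
centers = rng.normal(size=200)
s = (np.arange(1000) + 0.5) / 1000       # grid on (0, 1)
Q = norm.ppf(s) + centers[:, None]       # row i: quantile function of nu_i

# The quantile function of the sample Wasserstein mean is the pointwise
# average of the sample quantile functions
Q_mean = Q.mean(axis=0)

# Sample Wasserstein variance: average squared W_2 distance to the mean
V = np.mean(np.mean((Q - Q_mean) ** 2, axis=1))
```

In this location family every distribution is a shifted copy of the same shape, so the Wasserstein variance reduces to the ordinary variance of the centers.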

Modes of variation

Modes of variation are useful concepts for depicting the variation of data around the mean function. Based on the Karhunen-Loève representation, modes of variation show the contribution of each eigenfunction to the variation of the data around the mean.

Functional principal component analysis

Functional principal component analysis (FPCA) can be directly applied to probability density functions.[4] Consider a distribution process [math]\displaystyle{ \nu \sim \mathfrak{F} }[/math] and let [math]\displaystyle{ f }[/math] be the density function of [math]\displaystyle{ \nu }[/math]. Define the mean density function as [math]\displaystyle{ \mu(t) = \mathbb{E}\left[f(t)\right] }[/math] and the covariance function as [math]\displaystyle{ G(s,t) = \operatorname{Cov}(f(s), f(t)) }[/math], with orthonormal eigenfunctions [math]\displaystyle{ \{\phi_j\}_{j=1}^\infty }[/math] and eigenvalues [math]\displaystyle{ \{\lambda_j\}_{j=1}^\infty }[/math].

By the Karhunen-Loève theorem, [math]\displaystyle{ f(t) = \mu(t) + \sum_{j=1}^\infty \xi_j \phi_j(t) }[/math], where the principal components are [math]\displaystyle{ \xi_j = \int_D [f(t) - \mu(t)] \phi_j(t) dt }[/math]. The [math]\displaystyle{ j }[/math]th mode of variation is defined as [math]\displaystyle{ g_{j}(t, \alpha) = \mu(t) + \alpha \sqrt{\lambda_j} \phi_j(t), \quad t \in D, \; \alpha \in [-A, A] }[/math] for some constant [math]\displaystyle{ A }[/math], such as 2 or 3.
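A minimal numerical sketch of these modes of variation, discretizing the covariance function on a grid; the toy Gaussian sample and the scaling conventions are illustrative assumptions:

```python
import numpy as np

# Toy sample of densities on a grid: Gaussians with random centers
rng = np.random.default_rng(1)
grid = np.linspace(-6.0, 6.0, 241)
dt = grid[1] - grid[0]
mus = rng.normal(scale=0.5, size=100)
F = np.exp(-0.5 * (grid - mus[:, None]) ** 2) / np.sqrt(2 * np.pi)  # rows: f_i

mean_f = F.mean(axis=0)                    # estimate of mu(t)
G = np.cov(F, rowvar=False)                # discretized covariance G(s, t)

# Eigendecomposition of the covariance operator: scale the matrix eigenpairs
# so eigenfunctions are orthonormal in L^2 and eigenvalues match the operator
w, v = np.linalg.eigh(G * dt)
lam = w[::-1]                              # eigenvalues, descending
phi = v[:, ::-1] / np.sqrt(dt)             # orthonormal eigenfunctions

# j-th mode of variation evaluated at alpha = -A, 0, A
A, j = 2.0, 0
modes = [mean_f + a * np.sqrt(lam[j]) * phi[:, j] for a in (-A, 0.0, A)]
```

At [math]\displaystyle{ \alpha = 0 }[/math] the mode coincides with the mean density, and varying [math]\displaystyle{ \alpha }[/math] traces out the variation explained by the [math]\displaystyle{ j }[/math]th eigenfunction.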

Transformation FPCA

Assume the probability density functions [math]\displaystyle{ f }[/math] exist, and let [math]\displaystyle{ \mathcal{F}_f }[/math] be the space of density functions. Transformation approaches introduce a continuous and invertible transformation [math]\displaystyle{ \Psi: \mathcal{F}_f \to \mathbb{H} }[/math], where [math]\displaystyle{ \mathbb{H} }[/math] is a Hilbert space of functions. For instance, the log quantile density transformation or the centered log ratio transformation are popular choices.[5][6]

For [math]\displaystyle{ f \in \mathcal{F}_f }[/math], let [math]\displaystyle{ Y = \Psi(f) }[/math] denote the transformed functional variable. The mean function [math]\displaystyle{ \mu_Y(t) = \mathbb{E}\left[Y(t)\right] }[/math] and the covariance function [math]\displaystyle{ G_Y(s,t) = \operatorname{Cov}(Y(s), Y(t)) }[/math] are defined accordingly, and let [math]\displaystyle{ \{\lambda_j, \phi_j\}_{j=1}^\infty }[/math] be the eigenpairs of [math]\displaystyle{ G_Y(s,t) }[/math]. The Karhunen-Loève decomposition gives [math]\displaystyle{ Y(t) = \mu_Y(t) + \sum_{j=1}^\infty \xi_j \phi_j(t) }[/math], where [math]\displaystyle{ \xi_j = \int_D [Y(t) - \mu_Y(t)] \phi_j(t) dt }[/math]. Then, the [math]\displaystyle{ j }[/math]th transformation mode of variation is defined as[7] [math]\displaystyle{ g_{j}^{TF}(t, \alpha) = \Psi^{-1} \left( \mu_Y + \alpha \sqrt{\lambda_j}\phi_j \right)(t), \quad t \in D, \; \alpha \in [-A, A]. }[/math]
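As an illustration of one transformation choice, the following sketch implements the centered log-ratio transform and its inverse for strictly positive densities on a uniform grid. It is a simplified version: the function names, the stabilization trick, and the toy data are assumptions, not from the cited sources.

```python
import numpy as np

def clr(f, dt):
    """Centered log-ratio transform of a strictly positive density on a uniform grid."""
    logf = np.log(f)
    return logf - np.mean(logf)        # center log f over the domain

def clr_inverse(y, dt):
    """Inverse transform: exponentiate and renormalize to a density."""
    g = np.exp(y - y.max())            # subtract max for numerical stability
    return g / (np.sum(g) * dt)

grid = np.linspace(-5.0, 5.0, 501)
dt = grid[1] - grid[0]
f = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
f = f / (np.sum(f) * dt)               # renormalize on the truncated domain

f_back = clr_inverse(clr(f, dt), dt)   # recovers f up to numerical error
```

Modes of variation are then computed by FPCA on the transformed functions and mapped back through the inverse transform, as in the definition above.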

Log FPCA and Wasserstein Geodesic PCA

Endowed with metrics such as the Wasserstein metric [math]\displaystyle{ d_{W_2} }[/math] or the Fisher-Rao metric [math]\displaystyle{ d_{FR} }[/math], we can employ the (pseudo) Riemannian structure of [math]\displaystyle{ \mathcal{F} }[/math]. Denote the tangent space at the Fréchet mean [math]\displaystyle{ \mu_\oplus }[/math] as [math]\displaystyle{ T_{\mu_\oplus} }[/math], and define the logarithm and exponential maps [math]\displaystyle{ \log_{\mu_\oplus}:\mathcal{F} \to T_{\mu_\oplus} }[/math] and [math]\displaystyle{ \exp_{\mu_\oplus}: T_{\mu_\oplus} \to \mathcal{F} }[/math]. Let [math]\displaystyle{ Y }[/math] be the projected density onto the tangent space, [math]\displaystyle{ Y = \log_{\mu_\oplus}(f) }[/math].

In Log FPCA, FPCA is performed on [math]\displaystyle{ Y }[/math] and the result is mapped back to [math]\displaystyle{ \mathcal{F} }[/math] using the exponential map.[8] Therefore, with [math]\displaystyle{ Y(t) = \mu_Y(t) + \sum_{j=1}^\infty \xi_j \phi_j(t) }[/math], the [math]\displaystyle{ j }[/math]th Log FPCA mode of variation is defined as [math]\displaystyle{ g_j^{Log}(t, \alpha) = \exp_{\mu_\oplus} \left( \mu_Y + \alpha \sqrt{\lambda_j} \phi_j \right)(t), \quad t \in D, \; \alpha \in [-A, A]. }[/math]

As a special case, consider the [math]\displaystyle{ L^2 }[/math]-Wasserstein space [math]\displaystyle{ \mathcal{W}_2 }[/math], a random distribution [math]\displaystyle{ \nu \in \mathcal{W}_2 }[/math], and a subset [math]\displaystyle{ G \subset \mathcal{W}_2 }[/math]. Let [math]\displaystyle{ d_{W_2}(\nu, G) = \inf_{\mu \in G} d_{W_2}(\nu, \mu) }[/math] and [math]\displaystyle{ K_{W_2}(G) = \mathbb{E}\left[d_{W_2}^2(\nu, G) \right] }[/math]. Let [math]\displaystyle{ \operatorname{CL}(\mathcal{W}_2) }[/math] be the metric space of nonempty, closed subsets of [math]\displaystyle{ \mathcal{W}_2 }[/math], endowed with the Hausdorff distance, and define [math]\displaystyle{ \operatorname{CG}_{\nu_0, k}(\mathcal{W}_2) = \{G \in \operatorname{CL}(\mathcal{W}_2) : \nu_0 \in G, G \text{ is a geodesic set s.t. }\operatorname{dim}(G) \le k \}, \; k \ge 1. }[/math] Let the reference measure [math]\displaystyle{ \nu_0 }[/math] be the Wasserstein mean [math]\displaystyle{ \mu_\oplus }[/math]. Then, a principal geodesic subspace (PGS) of dimension [math]\displaystyle{ k }[/math] with respect to [math]\displaystyle{ \mu_\oplus }[/math] is a set [math]\displaystyle{ G_k = \operatorname{argmin}_{G \in \operatorname{CG}_{\mu_\oplus, k}(\mathcal{W}_2)} K_{W_2}(G) }[/math].[9][10]

Note that the tangent space [math]\displaystyle{ T_{\mu_\oplus} }[/math] is a subspace of [math]\displaystyle{ L^2_{\mu_\oplus} }[/math], the Hilbert space of [math]\displaystyle{ {\mu_\oplus} }[/math]-square-integrable functions. Obtaining the PGS is equivalent to performing PCA in [math]\displaystyle{ L^2_{\mu_\oplus} }[/math] under the constraint of lying in a convex and closed subset.[10] Therefore, Log FPCA can be viewed as a simple approximation of Wasserstein geodesic PCA obtained by relaxing the geodesicity constraint, while alternative techniques have also been suggested.[9][10]

Distributional regression

Fréchet regression

Fréchet regression is a generalization of regression to settings where the responses take values in a metric space and the predictors are Euclidean.[11][12] Using the Wasserstein metric [math]\displaystyle{ d_{W_2} }[/math], Fréchet regression models can be applied to distributional objects. The global Wasserstein-Fréchet regression model is defined as

[math]\displaystyle{ \begin{align} m_\oplus (x) &= \operatorname{argmin}_{\omega \in \mathcal{F}} \mathbb{E}\left[ s_G(X,x) d_{W_2}^2(\nu,\omega) \right], \\ s_G(X,x) &= 1 + (X - \mathbb{E}[X])^\top \operatorname{Var}(X)^{-1} (x - \mathbb{E}[X]), \end{align} }[/math]    (1)

which generalizes the standard linear regression.
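For univariate distributional responses, the minimizer can be computed explicitly by averaging quantile functions with the weights [math]\displaystyle{ s_G }[/math]. A toy Python sketch under this quantile parametrization; the data and names are illustrative assumptions, and in general the weighted average may need projection onto nondecreasing functions:

```python
import numpy as np
from scipy.stats import norm

# Toy data: scalar predictor X_i, distributional response nu_i = N(2*X_i, 1)
rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=n)
s = (np.arange(500) + 0.5) / 500
Q = norm.ppf(s) + 2.0 * X[:, None]       # row i: quantile function of nu_i

def global_frechet_weights(X, x):
    """Weights s_G(X_i, x) from the global Frechet regression criterion (scalar X)."""
    Xc = X - X.mean()
    return 1.0 + Xc * (x - X.mean()) / X.var()

# In Wasserstein space the weighted Frechet mean has quantile function equal
# to the weighted average of the sample quantile functions
x0 = 1.0
w = global_frechet_weights(X, x0)
Q_hat = np.mean(w[:, None] * Q, axis=0)  # fitted quantile function at x0
```

In this linear toy model the fitted quantile function at [math]\displaystyle{ x_0 }[/math] recovers the quantile function of [math]\displaystyle{ N(2x_0, 1) }[/math].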

For the local Wasserstein-Fréchet regression, consider a scalar predictor [math]\displaystyle{ X\in \mathbb{R} }[/math] and introduce a smoothing kernel [math]\displaystyle{ K_h(\cdot) = h^{-1}K(\cdot/h) }[/math]. The local Fréchet regression model, which generalizes the local linear regression model, is defined as [math]\displaystyle{ \begin{align} l_\oplus (x) &= \operatorname{argmin}_{\omega \in \mathcal{F}} \mathbb{E}\left[ s_L(X,x,h) d_{W_2}^2(\nu,\omega) \right],\\ s_L(X,x,h) &= \sigma_0^{-2} \{ K_h(X-x)[\mu_2 - \mu_1 (X-x)]\}, \end{align} }[/math] where [math]\displaystyle{ \mu_j = \mathbb{E} \left[K_h(X-x)(X-x)^j \right] }[/math], [math]\displaystyle{ j = 0,1,2, }[/math] and [math]\displaystyle{ \sigma_0^2 = \mu_0 \mu_2 - \mu_1^2 }[/math].
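The local weights [math]\displaystyle{ s_L }[/math] can be computed directly from their definition. A short sketch with a Gaussian kernel (the kernel choice and toy data are illustrative assumptions):

```python
import numpy as np

def local_frechet_weights(X, x, h):
    """Weights s_L(X_i, x, h) for local Frechet regression with a Gaussian kernel."""
    Kh = np.exp(-0.5 * ((X - x) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    mu0, mu1, mu2 = (np.mean(Kh * (X - x) ** j) for j in range(3))
    sigma0_sq = mu0 * mu2 - mu1 ** 2
    return Kh * (mu2 - mu1 * (X - x)) / sigma0_sq

rng = np.random.default_rng(3)
X = rng.normal(size=400)
w = local_frechet_weights(X, x=0.5, h=0.3)
# By construction the weights average to one: mean(s_L) = (mu0*mu2 - mu1^2)/sigma0^2
```

As in local linear regression, observations near [math]\displaystyle{ x }[/math] receive the largest weights, and the correction term involving [math]\displaystyle{ \mu_1 }[/math] removes boundary bias.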

Transformation based approaches

Consider the case where the response variable [math]\displaystyle{ \nu }[/math] is a probability distribution. With the space of density functions [math]\displaystyle{ \mathcal{F}_f }[/math] and a Hilbert space of functions [math]\displaystyle{ \mathbb{H} }[/math], consider continuous and invertible transformations [math]\displaystyle{ \Psi: \mathcal{F}_f \to \mathbb{H} }[/math]. Examples of transformations include the log hazard transformation, the log quantile density transformation, and the centered log-ratio transformation. Linear methods such as functional linear models are applied to the transformed variables, and the fitted models are interpreted back in the original density space [math]\displaystyle{ \mathcal{F}_f }[/math] using the inverse transformation.[12]

Random object approaches

In Wasserstein regression, both predictors [math]\displaystyle{ \omega }[/math] and responses [math]\displaystyle{ \nu }[/math] can be distributional objects. Let [math]\displaystyle{ \omega_{\oplus} }[/math] and [math]\displaystyle{ \nu_{\oplus} }[/math] be the Wasserstein means of [math]\displaystyle{ \omega }[/math] and [math]\displaystyle{ \nu }[/math], respectively. The Wasserstein regression model is defined as [math]\displaystyle{ \mathbb{E}(\log_{\nu_{\oplus}} \nu | \log_{\omega_{\oplus}} \omega) = \Gamma(\log_{\omega_{\oplus}} \omega), }[/math] with a linear regression operator [math]\displaystyle{ \Gamma g(t) = \langle \beta(\cdot, t),g \rangle_{\omega_{\oplus}}, \; t \in D, \; g \in T_{\omega_{\oplus}}, \; \beta:D^2 \to \R. }[/math] Estimation of the regression operator is based on empirical estimators obtained from samples.[13] The Fisher-Rao metric [math]\displaystyle{ d_{FR} }[/math] can also be used in a similar fashion.[12][14]

Hypothesis testing

Wasserstein F-test

The Wasserstein [math]\displaystyle{ F }[/math]-test has been proposed to test for the effects of the predictors in the Fréchet regression framework with the Wasserstein metric.[15] Consider Euclidean predictors [math]\displaystyle{ X \in \R^p }[/math] and distributional responses [math]\displaystyle{ \nu \in \mathcal{W}_2 }[/math]. Denote the Wasserstein mean of [math]\displaystyle{ \nu }[/math] as [math]\displaystyle{ \mu_\oplus^* }[/math], and the sample Wasserstein mean as [math]\displaystyle{ \hat{\mu}_\oplus^* }[/math]. Consider the global Wasserstein-Fréchet regression model [math]\displaystyle{ m_\oplus (x) }[/math] defined in (1), which is the conditional Wasserstein mean given [math]\displaystyle{ X=x }[/math]. The estimator [math]\displaystyle{ \hat{m}_\oplus (x) }[/math] of [math]\displaystyle{ m_\oplus (x) }[/math] is obtained by minimizing the empirical version of the criterion.

Let [math]\displaystyle{ F }[/math], [math]\displaystyle{ Q }[/math], [math]\displaystyle{ f }[/math], [math]\displaystyle{ F_\oplus^* }[/math], [math]\displaystyle{ Q_\oplus^* }[/math], [math]\displaystyle{ f_\oplus^* }[/math], [math]\displaystyle{ F_\oplus(x) }[/math], [math]\displaystyle{ Q_\oplus(x) }[/math], and [math]\displaystyle{ f_\oplus(x) }[/math] denote the cumulative distribution, quantile, and density functions of [math]\displaystyle{ \nu }[/math], [math]\displaystyle{ \mu_\oplus^* }[/math], and [math]\displaystyle{ m_\oplus(x) }[/math], respectively. For a pair [math]\displaystyle{ (X, \nu) }[/math], define [math]\displaystyle{ T = Q \circ F_\oplus (X) }[/math] to be the optimal transport map from [math]\displaystyle{ m_\oplus(X) }[/math] to [math]\displaystyle{ \nu }[/math]. Also, define [math]\displaystyle{ S = Q_\oplus (X) \circ F_\oplus^* }[/math], the optimal transport map from [math]\displaystyle{ \mu_\oplus^* }[/math] to [math]\displaystyle{ m_\oplus(X) }[/math]. Finally, define the covariance kernel [math]\displaystyle{ K(u, v) = \mathbb{E}[\operatorname{Cov}((T\circ S)(u), (T\circ S)(v) )] }[/math], with Mercer decomposition [math]\displaystyle{ K(u, v) = \sum_{j=1}^\infty \lambda_j \phi_j(u) \phi_j(v) }[/math].

If there are no regression effects, the conditional Wasserstein mean equals the Wasserstein mean. That is, the hypotheses for the test of no effects are [math]\displaystyle{ H_0: m_\oplus (x) \equiv \mu_\oplus^* \quad \text{vs.} \quad H_1: \text{Not }H_0. }[/math] To test these hypotheses, the proposed global Wasserstein [math]\displaystyle{ F }[/math]-statistic and its asymptotic distribution are [math]\displaystyle{ F_G = \sum_{i=1}^n d_{W_2}^2( \hat{m}_\oplus (X_i), \hat{\mu}_\oplus^*), \quad F_G|X_1, \cdots, X_n \overset{d}{\longrightarrow} \sum_{j=1}^\infty \lambda_j V_j \; a.s., }[/math] where [math]\displaystyle{ V_j \overset{iid}{\sim} \chi_p^2 }[/math].[15] An extension to hypothesis testing for partial regression effects, as well as alternative testing approximations using Satterthwaite's approximation or a bootstrap approach, have been proposed.[15]
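A toy numerical sketch of the statistic [math]\displaystyle{ F_G }[/math] for univariate responses, representing each fitted conditional Wasserstein mean by a weighted average of quantile functions. The simulated data and the simplifications (no projection step, scalar predictor) are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm

# Toy data under the null: distributional responses do not depend on X
rng = np.random.default_rng(4)
n = 200
X = rng.normal(size=n)
s = (np.arange(400) + 0.5) / 400
Q = norm.ppf(s) + 0.1 * rng.normal(size=(n, 1))  # random centers, no X effect

Q_bar = Q.mean(axis=0)                  # sample Wasserstein mean (quantile function)

# Fitted global Frechet regression at each X_i: weighted averages of quantiles
Xc = X - X.mean()
W = 1.0 + np.outer(Xc, Xc) / X.var()    # W[i, j] = s_G(X_j, X_i)
Q_hat = W @ Q / n                        # row i: quantile function of m_hat(X_i)

# Global Wasserstein F-statistic: sum of squared W_2 distances to the mean
F_G = np.sum(np.mean((Q_hat - Q_bar) ** 2, axis=1))
```

Under the null of no regression effect the fitted means stay close to the overall Wasserstein mean, so the statistic is small; it would be compared against the weighted chi-squared limit above.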

Tests for the intrinsic mean

The Hilbert sphere [math]\displaystyle{ \mathcal{S}^\infty }[/math] is defined as [math]\displaystyle{ \mathcal{S}^\infty = \left\{f \in \mathbb{H} : \| f \|_{\mathbb{H}}=1 \right\} }[/math], where [math]\displaystyle{ \mathbb{H} }[/math] is a separable infinite-dimensional Hilbert space with inner product [math]\displaystyle{ \langle \cdot, \cdot \rangle_{\mathbb{H}} }[/math] and norm [math]\displaystyle{ \| \cdot \|_{\mathbb{H}} }[/math]. Consider the space of square root densities [math]\displaystyle{ \mathcal{X} = \left\{ x:D \to \mathbb{R}: x = \sqrt{f}, \int_D f(t)dt = 1 \right\} }[/math]. Then, under the Fisher-Rao metric [math]\displaystyle{ d_{FR} }[/math], [math]\displaystyle{ \mathcal{X} }[/math] is the positive orthant of the Hilbert sphere [math]\displaystyle{ \mathcal{S}^\infty }[/math] with [math]\displaystyle{ \mathbb{H} = L^2(D) }[/math].

Define a chart [math]\displaystyle{ \tau: U \subset \mathcal{S}^\infty \to \mathbb{G} }[/math] to be a smooth homeomorphism that maps [math]\displaystyle{ U }[/math] onto an open subset [math]\displaystyle{ \tau(U) }[/math] of a separable Hilbert space [math]\displaystyle{ \mathbb{G} }[/math], providing coordinates. For example, [math]\displaystyle{ \tau }[/math] can be the logarithm map.[14]

Consider a random element [math]\displaystyle{ x = \sqrt{f} \in \mathcal{X} }[/math] equipped with the Fisher-Rao metric, and write its Fréchet mean as [math]\displaystyle{ \mu }[/math]. Let [math]\displaystyle{ \hat{\mu} }[/math] be the empirical estimator of [math]\displaystyle{ \mu }[/math] based on [math]\displaystyle{ n }[/math] samples. Then a central limit theorem for [math]\displaystyle{ \hat{\mu}_\tau = \tau(\hat{\mu}) }[/math] and [math]\displaystyle{ \mu_\tau = \tau(\mu) }[/math] holds: [math]\displaystyle{ \sqrt{n}(\hat{\mu}_\tau - \mu_\tau ) \overset{L}{\longrightarrow} Z, \; n \to \infty }[/math], where [math]\displaystyle{ Z }[/math] is a Gaussian random element in [math]\displaystyle{ \mathbb{G} }[/math] with mean 0 and covariance operator [math]\displaystyle{ \mathcal{T} }[/math]. Let [math]\displaystyle{ (\lambda_k, \phi_k)_{k=1}^\infty }[/math] and [math]\displaystyle{ (\hat\lambda_k, \hat\phi_k)_{k=1}^\infty }[/math] be the eigenvalue-eigenfunction pairs of [math]\displaystyle{ \mathcal{T} }[/math] and of the estimated covariance operator [math]\displaystyle{ \hat{\mathcal{T}} }[/math], respectively.

Consider the one-sample hypothesis test [math]\displaystyle{ H_0: \mu = \mu_0 \quad \text{vs.} \quad H_1: \mu \neq \mu_0, }[/math] with [math]\displaystyle{ \mu_0 \in \mathcal{S}^\infty }[/math]. Denote by [math]\displaystyle{ \| \cdot \|_{\mathbb{G}} }[/math] and [math]\displaystyle{ \langle \cdot, \cdot \rangle_{\mathbb{G}} }[/math] the norm and inner product in [math]\displaystyle{ \mathbb{G} }[/math]. The test statistics and their limiting distributions are [math]\displaystyle{ \begin{align} T_1 &= n \| \tau(\hat{\mu}) - \tau(\mu_0)\|_\mathbb{G}^2 \overset{L}{\longrightarrow} \sum_{k=1}^\infty \lambda_k W_k, \\ S_1 &= n \sum_{k=1}^K \frac{\langle \tau(\hat{\mu}) - \tau(\mu_0), \hat{\phi}_k \rangle_\mathbb{G}^2}{\hat{\lambda}_k} \overset{L}{\longrightarrow} \chi_K^2, \end{align} }[/math] where [math]\displaystyle{ W_k \overset{iid}{\sim} \chi_1^2 }[/math]. In practice, the tests can be carried out by simulating the limiting distributions via Monte Carlo, or by bootstrap tests. Extensions to the two-sample test and the paired test have also been proposed.[14]

Distributional time series

Autoregressive (AR) models for distributional time series are constructed by defining a notion of stationarity and a notion of difference between distributions, using [math]\displaystyle{ d_{W_2} }[/math] or [math]\displaystyle{ d_{FR} }[/math].

In the Wasserstein autoregressive (WAR) model, consider a stationary density time series [math]\displaystyle{ f_t }[/math] with Wasserstein mean [math]\displaystyle{ f_\oplus }[/math].[16] Denote the difference between [math]\displaystyle{ f_t }[/math] and [math]\displaystyle{ f_\oplus }[/math] using the logarithm map, [math]\displaystyle{ f_t \ominus f_{\oplus} = \log_{f_\oplus} f_t = T_t - \text{id} }[/math], where [math]\displaystyle{ T_t = Q_t \circ F_\oplus }[/math] is the optimal transport from [math]\displaystyle{ f_\oplus }[/math] to [math]\displaystyle{ f_t }[/math], in which [math]\displaystyle{ Q_t }[/math] is the quantile function of [math]\displaystyle{ f_t }[/math] and [math]\displaystyle{ F_{\oplus} }[/math] is the cdf of [math]\displaystyle{ f_{\oplus} }[/math]. An [math]\displaystyle{ AR(1) }[/math] model on the tangent space [math]\displaystyle{ T_{f_\oplus} }[/math] is defined as [math]\displaystyle{ V_t = \beta V_{t-1} + \epsilon_t, \; t \in \mathbb{Z}, }[/math] for [math]\displaystyle{ V_t \in T_{f_\oplus} }[/math], with autoregressive parameter [math]\displaystyle{ \beta \in \mathbb{R} }[/math] and mean-zero i.i.d. innovations [math]\displaystyle{ \epsilon_t }[/math]. Under proper conditions, the measures [math]\displaystyle{ \mu_t = \exp_{f_\oplus}(V_t) }[/math] have densities [math]\displaystyle{ f_t }[/math], and [math]\displaystyle{ V_t = \log_{f_\oplus}(\mu_t) }[/math]. Accordingly, the [math]\displaystyle{ WAR(1) }[/math] model, with a natural extension to order [math]\displaystyle{ p }[/math], is defined as [math]\displaystyle{ T_t - \text{id} = \beta (T_{t-1} - \text{id} ) + \epsilon_t. }[/math]
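A toy sketch of fitting a WAR(1)-type model: for a location family, the tangent vectors evaluated along the quantile function of the Wasserstein mean reduce to differences of quantile functions, so [math]\displaystyle{ \beta }[/math] can be estimated by least squares. The simulation design and the simple estimator are illustrative assumptions, not the estimation procedure of the cited work:

```python
import numpy as np
from scipy.stats import norm

# Simulate a WAR(1)-type series: distributions N(m_t, 1) with AR(1) centers
rng = np.random.default_rng(5)
beta_true, T = 0.6, 500
m = np.zeros(T)
for t in range(1, T):
    m[t] = beta_true * m[t - 1] + rng.normal(scale=0.3)

s = (np.arange(200) + 0.5) / 200
Q = norm.ppf(s) + m[:, None]          # quantile functions of the series
Q_mean = Q.mean(axis=0)                # estimate of the Wasserstein mean

# Tangent vectors at the Wasserstein mean: (T_t - id) composed with the mean
# quantile function equals Q_t - Q_mean
V = Q - Q_mean

# Least-squares estimate of the autoregressive parameter beta
beta_hat = np.sum(V[1:] * V[:-1]) / np.sum(V[:-1] ** 2)
```

With a long enough series the least-squares estimate concentrates around the true autoregressive parameter.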

On the other hand, the spherical autoregressive (SAR) model considers the Fisher-Rao metric.[17] Following the setting of the section on tests for the intrinsic mean, let [math]\displaystyle{ x_t \in \mathcal{X} }[/math] with Fréchet mean [math]\displaystyle{ \mu_x }[/math]. Let [math]\displaystyle{ \theta = \arccos(\langle x_t, \mu_x \rangle ) }[/math], the geodesic distance between [math]\displaystyle{ x_t }[/math] and [math]\displaystyle{ \mu_x }[/math]. Define a rotation operator [math]\displaystyle{ Q_{x_t, \mu_x} }[/math] that rotates [math]\displaystyle{ x_t }[/math] to [math]\displaystyle{ \mu_x }[/math]. The spherical difference between [math]\displaystyle{ x_t }[/math] and [math]\displaystyle{ \mu_x }[/math] is represented as [math]\displaystyle{ R_t = x_t \ominus \mu_x = \theta Q_{x_t, \mu_x} }[/math]. Assuming that [math]\displaystyle{ R_t }[/math] is a stationary sequence with Fréchet mean [math]\displaystyle{ \mu_R = \mathbb{E}[R_t] }[/math], the [math]\displaystyle{ SAR(1) }[/math] model is defined as [math]\displaystyle{ R_t - \mu_R = \beta (R_{t-1} - \mu_R) + \epsilon_t, }[/math] with mean-zero i.i.d. innovations [math]\displaystyle{ \epsilon_t }[/math]. An alternative model, the difference-based spherical autoregressive (DSAR) model, is defined with [math]\displaystyle{ R_t = x_{t+1} \ominus x_t }[/math], with natural extensions to order [math]\displaystyle{ p }[/math]. A similar extension to the Wasserstein space has also been introduced.[18]

References

  1. Deza, M.M.; Deza, E. (2013). Encyclopedia of distances. Springer. 
  2. Fréchet, M. (1948). "Les éléments aléatoires de nature quelconque dans un espace distancié". Annales de l'Institut Henri Poincaré 10 (4): 215–310. 
  3. Agueh, A.; Carlier, G. (2011). "Barycenters in the Wasserstein space". SIAM Journal on Mathematical Analysis 43 (2): 904–924. doi:10.1137/100805741. https://hal.archives-ouvertes.fr/hal-00637399/file/AC_bary_revis.pdf. 
  4. Kneip, A.; Utikal, K.J. (2001). "Inference for density families using functional principal component analysis". Journal of the American Statistical Association 96 (454): 519–532. doi:10.1198/016214501753168235. 
  5. Petersen, A.; Müller, H.-G. (2016). "Functional data analysis for density functions by transformation to a Hilbert space". Annals of Statistics 44 (1): 183–218. doi:10.1214/15-AOS1363. 
  6. van den Boogaart, K.G.; Egozcue, J.J.; Pawlowsky-Glahn, V. (2014). "Bayes Hilbert spaces". Australian and New Zealand Journal of Statistics 56 (2): 171–194. doi:10.1111/anzs.12074. 
  7. Petersen, A.; Müller, H.-G. (2016). "Functional data analysis for density functions by transformation to a Hilbert space". Annals of Statistics 44 (1): 183–218. doi:10.1214/15-AOS1363. 
  8. Fletcher, T.F.; Lu, C.; Pizer, S.M.; Joshi, S. (2004). "Principal geodesic analysis for the study of nonlinear statistics of shape". IEEE Transactions on Medical Imaging 23 (8): 995–1005. doi:10.1109/TMI.2004.831793. PMID 15338733. 
  9. 9.0 9.1 Bigot, J.; Gouet, R.; Klein, T.; López, A. (2017). "Geodesic PCA in the Wasserstein space by convex PCA". Annales de l'institut Henri Poincare (B) Probability and Statistics 53 (1): 1–26. doi:10.1214/15-AIHP706. Bibcode2017AnIHP..53....1B. https://hal.archives-ouvertes.fr/hal-01978864/file/AIHP706.pdf. 
  10. 10.0 10.1 10.2 Cazelles, E.; Seguy, V.; Bigot, J.; Cuturi, M.; Papadakis, N. (2018). "Geodesic PCA versus Log-PCA of histograms in the Wasserstein space". SIAM Journal on Scientific Computing 40 (2): B429–B456. doi:10.1137/17M1143459. Bibcode2018SJSC...40B.429C. 
  11. Petersen, A.; Müller, H.-G. (2019). "Fréchet regression for random objects with Euclidean predictors". Annals of Statistics 47 (2): 691–719. doi:10.1214/17-AOS1624. 
  12. 12.0 12.1 12.2 Petersen, A.; Zhang, C.; Kokoszka, P. (2022). "Modeling probability density functions as data objects". Econometrics and Statistics 21: 159–178. doi:10.1016/j.ecosta.2021.04.004. 
  13. Chen, Y.; Lin, Z.; Müller, H.-G. (2023). "Wasserstein regression". Journal of the American Statistical Association 118 (542): 869–882. doi:10.1080/01621459.2021.1956937. 
  14. 14.0 14.1 14.2 Dai, X. (2022). "Statistical inference on the Hilbert sphere with application to random densities". Electronic Journal of Statistics 16 (1): 700–736. doi:10.1214/21-EJS1942. 
  15. 15.0 15.1 15.2 Petersen, A.; Liu, X.; Divani, A.A. (2021). "Wasserstein F-tests and confidence bands for the Fréchet regression of density response curves". Annals of Statistics 49 (1): 590–611. doi:10.1214/20-AOS1971. 
  16. Zhang, C.; Kokoszka, P.; Petersen, A. (2022). "Wasserstein autoregressive models for density time series". Journal of Time Series Analysis 43 (1): 30–52. doi:10.1111/jtsa.12590. 
  17. Zhu, C.; Müller, H.-G. (2023). "Spherical autoregressive models, with application to distributional and compositional time series". Journal of Econometrics. doi:10.1016/j.jeconom.2022.12.008. 
  18. Zhu, C.; Müller, H.-G. (2023). "Autoregressive optimal transport models". Journal of the Royal Statistical Society Series B: Statistical Methodology 85 (3): 1012–1033. doi:10.1093/jrsssb/qkad051. PMID 37521164.