Autocovariance
In probability theory and statistics, given a stochastic process, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. Autocovariance is closely related to the autocorrelation of the process in question.
Auto-covariance of stochastic processes
Definition
With the usual notation [math]\displaystyle{ \operatorname{E} }[/math] for the expectation operator, if the stochastic process [math]\displaystyle{ \left\{X_t\right\} }[/math] has the mean function [math]\displaystyle{ \mu_t = \operatorname{E}[X_t] }[/math], then the autocovariance is given by[1]:p. 162
- [math]\displaystyle{ \operatorname{K}_{XX}(t_1,t_2) = \operatorname{cov}\left[X_{t_1}, X_{t_2}\right] = \operatorname{E}[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})] = \operatorname{E}[X_{t_1} X_{t_2}] - \mu_{t_1} \mu_{t_2} }[/math]
where [math]\displaystyle{ t_1 }[/math] and [math]\displaystyle{ t_2 }[/math] are two instances in time.
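As an illustrative numerical sketch (not part of the original article), the two-time autocovariance can be estimated by averaging across repeated realizations of a process. The AR(1) model, the time points, and all variable names below are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many realizations of an AR(1) process X_t = 0.8 X_{t-1} + e_t.
n_paths, n_steps, phi = 10_000, 50, 0.8
X = np.zeros((n_paths, n_steps))
for t in range(1, n_steps):
    X[:, t] = phi * X[:, t - 1] + rng.standard_normal(n_paths)

# Sample autocovariance between two time points t1 and t2, estimated
# across realizations: E[(X_t1 - mu_t1)(X_t2 - mu_t2)].
t1, t2 = 30, 35
k_hat = np.mean((X[:, t1] - X[:, t1].mean()) * (X[:, t2] - X[:, t2].mean()))

# Once the AR(1) has (approximately) reached stationarity,
# K(t1, t2) = phi**|t2 - t1| / (1 - phi**2).
k_theory = phi ** abs(t2 - t1) / (1 - phi ** 2)
print(k_hat, k_theory)  # the two values should be close
```

The average over realizations is the direct empirical counterpart of the expectation in the definition; for a single observed series one instead averages over time, which requires the stationarity assumptions of the next section.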
Definition for weakly stationary process
If [math]\displaystyle{ \left\{X_t\right\} }[/math] is a weakly stationary (WSS) process, then the following are true:[1]:p. 163
- [math]\displaystyle{ \mu_{t_1} = \mu_{t_2} \triangleq \mu }[/math] for all [math]\displaystyle{ t_1,t_2 }[/math]
and
- [math]\displaystyle{ \operatorname{E}[|X_t|^2] \lt \infty }[/math] for all [math]\displaystyle{ t }[/math]
and
- [math]\displaystyle{ \operatorname{K}_{XX}(t_1,t_2) = \operatorname{K}_{XX}(t_2 - t_1,0) \triangleq \operatorname{K}_{XX}(t_2 - t_1) = \operatorname{K}_{XX}(\tau), }[/math]
where [math]\displaystyle{ \tau = t_2 - t_1 }[/math] is the lag time, or the amount of time by which the signal has been shifted.
The autocovariance function of a WSS process is therefore given by:[2]:p. 517
- [math]\displaystyle{ \operatorname{K}_{XX}(\tau) = \operatorname{E}[(X_t - \mu_t)(X_{t- \tau} - \mu_{t- \tau})] = \operatorname{E}[X_t X_{t-\tau}] - \mu_t \mu_{t-\tau} }[/math]
which is equivalent to
- [math]\displaystyle{ \operatorname{K}_{XX}(\tau) = \operatorname{E}[(X_{t+ \tau} - \mu_{t +\tau})(X_{t} - \mu_{t})] = \operatorname{E}[X_{t+\tau} X_t] - \mu^2 }[/math].
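For a WSS process, the autocovariance function can be estimated from a single long series by averaging lagged products over time. A minimal sketch (the `sample_autocovariance` helper and the white-noise example are assumptions made for illustration):

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased sample autocovariance K(tau) of a single WSS series,
    using the conventional 1/n normalization at every lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.dot(xc[: n - tau], xc[tau:]) / n
                     for tau in range(max_lag + 1)])

rng = np.random.default_rng(1)
# White noise with variance 4: K(0) = 4 and K(tau) ≈ 0 for tau > 0.
x = 2.0 * rng.standard_normal(100_000)
k = sample_autocovariance(x, 3)
print(k)  # roughly [4, 0, 0, 0]
```

The 1/n normalization is biased but guarantees a positive semidefinite estimate; dividing by n − τ instead gives the unbiased variant.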
Normalization
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.
The definition of the normalized auto-correlation of a stochastic process is
- [math]\displaystyle{ \rho_{XX}(t_1,t_2) = \frac{\operatorname{K}_{XX}(t_1,t_2)}{\sigma_{t_1}\sigma_{t_2}} = \frac{\operatorname{E}[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})]}{\sigma_{t_1}\sigma_{t_2}} }[/math].
If the function [math]\displaystyle{ \rho_{XX} }[/math] is well-defined, its value must lie in the range [math]\displaystyle{ [-1,1] }[/math], with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For a WSS process, the definition is
- [math]\displaystyle{ \rho_{XX}(\tau) = \frac{\operatorname{K}_{XX}(\tau)}{\sigma^2} = \frac{\operatorname{E}[(X_t - \mu)(X_{t+\tau} - \mu)]}{\sigma^2} }[/math].
where
- [math]\displaystyle{ \operatorname{K}_{XX}(0) = \sigma^2 }[/math].
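Dividing the estimated autocovariance by its lag-zero value gives the normalized autocorrelation. A small sketch under an assumed AR(1) model, for which the theoretical result is [math]\displaystyle{ \rho_{XX}(\tau) = \phi^\tau }[/math]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Long AR(1) sample; its normalized autocorrelation is rho(tau) = phi**tau.
phi, n = 0.6, 200_000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

xc = x - x.mean()
k = np.array([np.dot(xc[: n - tau], xc[tau:]) / n for tau in range(4)])
rho = k / k[0]          # rho(tau) = K(tau) / K(0) = K(tau) / sigma^2
print(rho)              # approximately [1, 0.6, 0.36, 0.216]
```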
Properties
Symmetry property
- [math]\displaystyle{ \operatorname{K}_{XX}(t_1,t_2) = \overline{\operatorname{K}_{XX}(t_2,t_1)} }[/math][3]:p.169
and, correspondingly, for a WSS process:
- [math]\displaystyle{ \operatorname{K}_{XX}(\tau) = \overline{\operatorname{K}_{XX}(-\tau)} }[/math][3]:p.173
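The conjugation in the symmetry property matters only for complex-valued processes, where the convention [math]\displaystyle{ \operatorname{K}_{XX}(t_1,t_2) = \operatorname{E}[(X_{t_1} - \mu_{t_1})\overline{(X_{t_2} - \mu_{t_2})}] }[/math] is used. A sketch with an assumed random-phasor process (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Many realizations of a complex process: a random phasor observed at two
# times, where the second observation is the phasor advanced by 0.3 rad.
n = 100_000
phase = rng.uniform(0, 2 * np.pi, n)
z1 = np.exp(1j * phase)              # X at time t1
z2 = np.exp(1j * (phase + 0.3))      # X at time t2

def ccov(a, b):
    # Complex autocovariance: conjugate on the second factor.
    return np.mean((a - a.mean()) * np.conj(b - b.mean()))

k12, k21 = ccov(z1, z2), ccov(z2, z1)
print(k12, np.conj(k21))   # equal up to floating-point noise
```

For a real-valued WSS process the conjugate is vacuous and the property reduces to the even symmetry [math]\displaystyle{ \operatorname{K}_{XX}(\tau) = \operatorname{K}_{XX}(-\tau) }[/math].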
Linear filtering
The autocovariance of a linearly filtered process [math]\displaystyle{ \left\{Y_t\right\} }[/math]
- [math]\displaystyle{ Y_t = \sum_{k=-\infty}^\infty a_k X_{t+k}\, }[/math]
is
- [math]\displaystyle{ K_{YY}(\tau) = \sum_{k,l=-\infty}^\infty a_k a_l K_{XX}(\tau+k-l).\, }[/math]
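The filtering formula can be checked numerically. Taking white noise ([math]\displaystyle{ K_{XX}(\tau) = \sigma^2 \delta_\tau }[/math]) and a two-tap filter (both assumptions made for this example), the formula predicts [math]\displaystyle{ K_{YY}(\tau) = \sigma^2 \sum_k a_k a_{k+\tau} }[/math]:

```python
import numpy as np

rng = np.random.default_rng(4)

# Filter white noise X (sigma^2 = 1) with coefficients a = [1.0, 0.5]:
# Y_t = 1.0 * X_t + 0.5 * X_{t+1}.
a = np.array([1.0, 0.5])
n = 200_000
x = rng.standard_normal(n)
y = a[0] * x[:-1] + a[1] * x[1:]

yc = y - y.mean()
m = len(y)
k_yy = np.array([np.dot(yc[: m - tau], yc[tau:]) / m for tau in range(3)])

# Predicted by K_YY(tau) = sum_{k,l} a_k a_l K_XX(tau + k - l):
# K_YY(0) = 1 + 0.25 = 1.25, K_YY(1) = 1 * 0.5 = 0.5, K_YY(2) = 0.
print(k_yy)
```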
Calculating turbulent diffusivity
Autocovariance can be used to calculate turbulent diffusivity.[4] Turbulence in a flow causes the velocity to fluctuate in space and time, so the turbulence can be characterized through the statistics of those fluctuations[citation needed].
Reynolds decomposition is used to define the velocity fluctuations [math]\displaystyle{ u'(x,t) }[/math] (assume a one-dimensional problem, with [math]\displaystyle{ U(x,t) }[/math] the velocity along the [math]\displaystyle{ x }[/math] direction):
- [math]\displaystyle{ U(x,t) = \langle U(x,t) \rangle + u'(x,t), }[/math]
where [math]\displaystyle{ U(x,t) }[/math] is the true velocity, and [math]\displaystyle{ \langle U(x,t) \rangle }[/math] is the expected value of velocity. If we choose a correct [math]\displaystyle{ \langle U(x,t) \rangle }[/math], all of the stochastic components of the turbulent velocity will be included in [math]\displaystyle{ u'(x,t) }[/math]. To determine [math]\displaystyle{ \langle U(x,t) \rangle }[/math], a set of velocity measurements that are assembled from points in space, moments in time or repeated experiments is required.
If we assume the turbulent flux [math]\displaystyle{ \langle u'c' \rangle }[/math] ([math]\displaystyle{ c' = c - \langle c \rangle }[/math], and c is the concentration term) can be caused by a random walk, we can use Fick's laws of diffusion to express the turbulent flux term:
- [math]\displaystyle{ J_{\text{turbulence}_x} = \langle u'c' \rangle \approx D_{T_x} \frac{\partial \langle c \rangle}{\partial x}. }[/math]
The velocity autocovariance is defined as
- [math]\displaystyle{ K_{XX} \equiv \langle u'(t_0) u'(t_0 + \tau)\rangle }[/math] or [math]\displaystyle{ K_{XX} \equiv \langle u'(x_0) u'(x_0 + r)\rangle, }[/math]
where [math]\displaystyle{ \tau }[/math] is the lag time, and [math]\displaystyle{ r }[/math] is the lag distance.
The turbulent diffusivity [math]\displaystyle{ D_{T_x} }[/math] can be calculated using the following 3 methods:
- If we have velocity data along a Lagrangian trajectory:
- [math]\displaystyle{ D_{T_x} = \int_0^\infty \langle u'(t_0) u'(t_0 + \tau)\rangle \,d\tau. }[/math]
- If we have velocity data at one fixed (Eulerian) location[citation needed]:
- [math]\displaystyle{ D_{T_x} \approx [0.3 \pm 0.1] \left[\frac{\langle u'u' \rangle + \langle u \rangle^2}{\langle u'u' \rangle}\right] \int_0^\infty \langle u'(t_0) u'(t_0 + \tau)\rangle \,d\tau. }[/math]
- If we have velocity information at two fixed (Eulerian) locations[citation needed]:
- [math]\displaystyle{ D_{T_x} \approx [0.4 \pm 0.1] \left[\frac{1}{\langle u'u' \rangle}\right] \int_0^\infty \langle u'(x_0) u'(x_0 + r)\rangle \,dr, }[/math]
where [math]\displaystyle{ r }[/math] is the separation between the two fixed locations.
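The Lagrangian case can be sketched numerically by integrating the estimated velocity autocovariance over lag. The synthetic velocity record below is an assumed Ornstein-Uhlenbeck-like AR(1) series with integral time scale [math]\displaystyle{ T }[/math], for which [math]\displaystyle{ K(\tau) = \langle u'u' \rangle e^{-\tau/T} }[/math] and hence [math]\displaystyle{ D_{T_x} = \langle u'u' \rangle T }[/math]:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic Lagrangian velocity record with integral time scale T and
# variance var_u, so K(tau) = var_u * exp(-tau / T) and D = var_u * T.
dt, T, var_u, n = 0.01, 1.0, 0.09, 400_000
phi = np.exp(-dt / T)
u = np.empty(n)
u[0] = 0.0
noise = rng.standard_normal(n) * np.sqrt(var_u * (1 - phi ** 2))
for t in range(1, n):
    u[t] = phi * u[t - 1] + noise[t]

uc = u - u.mean()
max_lag = int(8 * T / dt)        # integrate out to 8 time scales
k = np.array([np.dot(uc[: n - tau], uc[tau:]) / n
              for tau in range(max_lag + 1)])
D = float(np.sum(k) * dt)        # D ≈ ∫_0^∞ K(tau) d tau
print(D, var_u * T)              # the two should be close
```

In practice the integral is truncated once the autocovariance has decayed to noise level; integrating much further only accumulates sampling error.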
Auto-covariance of random vectors
For a random vector [math]\displaystyle{ \mathbf{X} = (X_1, \ldots, X_n)^{\mathrm{T}} }[/math] with mean vector [math]\displaystyle{ \mathbf{\mu_X} = \operatorname{E}[\mathbf{X}] }[/math], the auto-covariance matrix is defined as
- [math]\displaystyle{ \operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{cov}[\mathbf{X},\mathbf{X}] = \operatorname{E}[(\mathbf{X} - \mathbf{\mu_X})(\mathbf{X} - \mathbf{\mu_X})^{\mathrm{T}}] = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathrm{T}}] - \mathbf{\mu_X}\mathbf{\mu_X}^{\mathrm{T}}. }[/math]
See also
- Autoregressive process
- Correlation
- Cross-covariance
- Cross-correlation
- Noise covariance estimation (as an application example)
References
- ↑ 1.0 1.1 Hsu, Hwei (1997). Probability, random variables, and random processes. McGraw-Hill. ISBN 978-0-07-030644-8. https://archive.org/details/schaumsoutlineof00hsuh.
- ↑ Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 978-0-521-19395-5.
- ↑ 3.0 3.1 Kun Il Park, Fundamentals of Probability and Stochastic Processes with Applications to Communications, Springer, 2018, ISBN 978-3-319-68074-3
- ↑ Taylor, G. I. (1922-01-01). "Diffusion by Continuous Movements" (in en). Proceedings of the London Mathematical Society s2-20 (1): 196–212. doi:10.1112/plms/s2-20.1.196. ISSN 1460-244X. https://zenodo.org/record/1433523/files/article.pdf.
Further reading
- Hoel, P. G. (1984). Mathematical Statistics (Fifth ed.). New York: Wiley. ISBN 978-0-471-89045-4.
- Lecture notes on autocovariance from WHOI
Original source: https://en.wikipedia.org/wiki/Autocovariance.