Gauss–Markov process

From HandWiki
Revision as of 20:50, 6 February 2024 by Steve2012

Gauss–Markov stochastic processes (named after Carl Friedrich Gauss and Andrey Markov) are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes.[1][2] A stationary Gauss–Markov process is unique[citation needed] up to rescaling; such a process is also known as an Ornstein–Uhlenbeck process.

Gauss–Markov processes obey Langevin equations.[3]
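For the stationary case, the Langevin equation can be written as the Ornstein–Uhlenbeck stochastic differential equation (this parametrization, chosen to match the variance [math]\displaystyle{ \sigma^{2} }[/math] and time constant [math]\displaystyle{ \beta^{-1} }[/math] used below, is a standard convention rather than a form taken from the cited source):

[math]\displaystyle{ dX(t) = -\beta X(t)\,dt + \sqrt{2\beta}\,\sigma\, dW(t), }[/math]

where W(t) is a standard Wiener process. The drift term relaxes the process toward zero at rate [math]\displaystyle{ \beta }[/math], while the diffusion term maintains the stationary variance [math]\displaystyle{ \sigma^{2} }[/math].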

Basic properties

Every Gauss–Markov process X(t) possesses the three following properties:[4]

  1. If h(t) is a non-zero scalar function of t, then Z(t) = h(t)X(t) is also a Gauss–Markov process.
  2. If f(t) is a non-decreasing scalar function of t, then Z(t) = X(f(t)) is also a Gauss–Markov process.
  3. If the process is non-degenerate and mean-square continuous, then there exist a non-zero scalar function h(t) and a strictly increasing scalar function f(t) such that X(t) = h(t)W(f(t)), where W(t) is the standard Wiener process.

Property (3) means that every non-degenerate mean-square continuous Gauss–Markov process can be synthesized from the standard Wiener process (SWP).
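As a minimal sketch of this synthesis, the stationary process with variance σ² and time constant β⁻¹ admits the representation X(t) = h(t)W(f(t)) with h(t) = σe^(−βt) and f(t) = e^(2βt) (the Doob transform of the Ornstein–Uhlenbeck process). The function name, grid, and parameter values below are illustrative choices, not from the article:

```python
import numpy as np

# Sketch of property (3): build a stationary Gauss–Markov (Ornstein–Uhlenbeck)
# process from a standard Wiener process via X(t) = h(t) * W(f(t)),
# with h(t) = sigma * exp(-beta*t) and f(t) = exp(2*beta*t).
# Names and parameter values are illustrative, not from the article.

def ou_from_wiener(t, sigma=1.0, beta=1.0, seed=0):
    """Sample X(t_i) = sigma*exp(-beta*t_i) * W(exp(2*beta*t_i)) on a grid t."""
    t = np.asarray(t, dtype=float)
    rng = np.random.default_rng(seed)
    f = np.exp(2.0 * beta * t)                  # strictly increasing time change
    # Independent Wiener increments over the time-changed grid (W(0) = 0)
    dW = rng.normal(0.0, np.sqrt(np.diff(f, prepend=0.0)))
    W = np.cumsum(dW)                           # W(f(t_0)), W(f(t_1)), ...
    return sigma * np.exp(-beta * t) * W

# Each X(t_i) has mean 0, variance sigma^2, and
# covariance sigma^2 * exp(-beta * |t_i - t_j|)
x = ou_from_wiener(np.linspace(0.0, 5.0, 501))
```

A short covariance calculation confirms the construction: E[X(s)X(t)] = σ²e^(−β(s+t)) min(e^(2βs), e^(2βt)) = σ²e^(−β|t−s|).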

Other properties

A stationary Gauss–Markov process with variance [math]\displaystyle{ \textbf{E}(X^{2}(t)) = \sigma^{2} }[/math] and time constant [math]\displaystyle{ \beta^{-1} }[/math] has the following properties.

  • Exponential autocorrelation: [math]\displaystyle{ \textbf{R}_{x}(\tau) = \sigma^{2}e^{-\beta |\tau|}. }[/math]
  • A power spectral density (PSD) function that has the same shape as the Cauchy distribution: [math]\displaystyle{ \textbf{S}_{x}(j\omega) = \frac{2\sigma^{2}\beta}{\omega^{2} + \beta^{2}}. }[/math] (Note that the Cauchy distribution and this spectrum differ by scale factors.)
  • The above yields the spectral factorization [math]\displaystyle{ \textbf{S}_{x}(s) = \frac{2\sigma^{2}\beta}{-s^{2} + \beta^{2}} = \frac{\sqrt{2\beta}\,\sigma}{(s + \beta)} \cdot\frac{\sqrt{2\beta}\,\sigma}{(-s + \beta)}, }[/math] which is important in Wiener filtering and other areas.
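The exponential autocorrelation can be checked numerically. Sampling the stationary process at step Δt gives the exact AR(1) recursion X[k+1] = aX[k] + σ√(1 − a²)ε_k with a = e^(−βΔt), a standard discretization; the function name and parameter values below are illustrative:

```python
import numpy as np

# Sketch: numerically check R_x(tau) = sigma^2 * exp(-beta*|tau|) using the
# exact AR(1) discretization of the stationary Gauss–Markov process:
#   X[k+1] = a*X[k] + sigma*sqrt(1 - a^2)*eps_k,   a = exp(-beta*dt).
# Names and parameter values are illustrative, not from the article.

def simulate_stationary_gm(n, sigma=1.0, beta=0.5, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    a = np.exp(-beta * dt)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)               # start in the stationary law
    eps = rng.normal(0.0, sigma * np.sqrt(1.0 - a * a), size=n - 1)
    for k in range(n - 1):
        x[k + 1] = a * x[k] + eps[k]
    return x

sigma, beta, dt = 1.0, 0.5, 0.1
x = simulate_stationary_gm(200_000, sigma, beta, dt)
lag = 10                                        # tau = lag*dt = 1.0
r_hat = np.mean(x[:-lag] * x[lag:])             # sample autocorrelation at tau
r_theory = sigma**2 * np.exp(-beta * lag * dt)  # sigma^2 * e^(-beta*tau) ~ 0.6065
```

With a long enough path, the sample autocorrelation at each lag approaches σ²e^(−βτ); the PSD could likewise be checked against the Cauchy-shaped expression above via a periodogram.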

There are also some trivial exceptions to all of the above.[clarification needed]

References

  1. C. E. Rasmussen & C. K. I. Williams (2006). Gaussian Processes for Machine Learning. MIT Press. p. Appendix B. ISBN 026218253X. http://www.gaussianprocess.org/gpml/chapters/RWB.pdf. 
  2. Lamon, Pierre (2008). 3D-Position Tracking and Control for All-Terrain Robots. Springer. pp. 93–95. ISBN 978-3-540-78286-5. https://archive.org/details/dpositiontrackin00lamo. 
  3. Bob Schutz, Byron Tapley, George H. Born (2004-06-26). Statistical Orbit Determination. pp. 230. ISBN 978-0-08-054173-0. https://books.google.com/books?id=Ct3qN1VCHewC&q=Gauss%E2%80%93Markov+process+%22langevin+equation%22&pg=PA230. 
  4. C. B. Mehr and J. A. McFadden (1965). "Certain Properties of Gaussian Processes and Their First-Passage Times". Journal of the Royal Statistical Society, Series B (Methodological), Vol. 27, No. 3, pp. 505–522.