Fractional Brownian motion

In probability theory, fractional Brownian motion (fBm), also called fractal Brownian motion, is a generalization of Brownian motion. Unlike classical Brownian motion, the increments of fBm need not be independent. fBm is a continuous-time Gaussian process [math]\displaystyle{ B_H(t) }[/math] on [math]\displaystyle{ [0, T] }[/math] that starts at zero, has expectation zero for all [math]\displaystyle{ t }[/math] in [math]\displaystyle{ [0, T] }[/math], and has the following covariance function:

[math]\displaystyle{ E[B_H(t) B_H (s)]=\tfrac{1}{2} (|t|^{2H}+|s|^{2H}-|t-s|^{2H}), }[/math]

where H is a real number in (0, 1), called the Hurst index or Hurst parameter associated with the fractional Brownian motion. The Hurst exponent describes the raggedness of the resultant motion, with a higher value leading to a smoother motion. It was introduced by (Mandelbrot & van Ness 1968).

The value of H determines what kind of process the fBm is:

  • if H = 1/2 then the process is in fact a Brownian motion or Wiener process;
  • if H > 1/2 then the increments of the process are positively correlated;
  • if H < 1/2 then the increments of the process are negatively correlated.

Fractional Brownian motion has stationary increments X(t) = BH(s+t) − BH(s) (the value is the same for any s). The increment process X(t) is known as fractional Gaussian noise.
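The correlation structure of fractional Gaussian noise follows directly from the covariance function above: the lag-k autocovariance of unit-spaced increments is γ(k) = ½(|k+1|^{2H} − 2|k|^{2H} + |k−1|^{2H}). A quick numerical check of the three cases for H (the function name is illustrative):

```python
import numpy as np

def fgn_autocovariance(k, H):
    """Autocovariance of fractional Gaussian noise at lag k, derived from
    the fBm covariance: gamma(k) = (|k+1|^2H - 2|k|^2H + |k-1|^2H)/2."""
    k = np.abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

# Consecutive increments: positively correlated for H > 1/2,
# uncorrelated for H = 1/2, negatively correlated for H < 1/2.
print(fgn_autocovariance(1, 0.75))  # > 0
print(fgn_autocovariance(1, 0.25))  # < 0
print(fgn_autocovariance(1, 0.5))   # 0
```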

There is also a generalization of fractional Brownian motion: n-th order fractional Brownian motion, abbreviated as n-fBm.[1] n-fBm is a Gaussian, self-similar, non-stationary process whose increments of order n are stationary. For n = 1, n-fBm is classical fBm.

Like the Brownian motion that it generalizes, fractional Brownian motion is named after the 19th-century botanist Robert Brown; fractional Gaussian noise is named after the mathematician Carl Friedrich Gauss.

Background and definition

Prior to the introduction of the fractional Brownian motion, (Lévy 1953) used the Riemann–Liouville fractional integral to define the process

[math]\displaystyle{ \tilde B_H(t) = \frac{1}{\Gamma(H+1/2)}\int_0^t (t-s)^{H-1/2} \, dB(s) }[/math]

where integration is with respect to the white noise measure dB(s). This integral turns out to be ill-suited as a definition of fractional Brownian motion because of its over-emphasis of the origin (Mandelbrot & van Ness 1968); in particular, it does not have stationary increments.

The idea instead is to use a different fractional integral of white noise to define the process: the Weyl integral

[math]\displaystyle{ B_H (t) = B_H (0) + \frac{1}{\Gamma(H+1/2)}\left\{\int_{-\infty}^0\left[(t-s)^{H-1/2}-(-s)^{H-1/2}\right]\,dB(s) + \int_0^t (t-s)^{H-1/2}\,dB(s)\right\} }[/math]

for t > 0 (and similarly for t < 0). The resulting process has stationary increments.

The main difference between fractional Brownian motion and regular Brownian motion is that while the increments of Brownian motion are independent, the increments of fractional Brownian motion are not. If H > 1/2, the increments are positively autocorrelated: an increasing pattern in the previous steps makes it likely that the current step is increasing as well. If H < 1/2, the autocorrelation is negative.

Properties

Self-similarity

The process is self-similar, since in terms of probability distributions:

[math]\displaystyle{ B_H (at) \sim |a|^{H}B_H (t). }[/math]

This property follows from the fact that the covariance function is homogeneous of order 2H, and can be considered a fractal property. fBm can also be defined as the unique mean-zero Gaussian process, null at the origin, with stationary and self-similar increments.
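The homogeneity of order 2H can be verified numerically from the covariance formula given in the introduction (the function name and the chosen values of H, a, t, s are illustrative):

```python
def fbm_cov(t, s, H):
    # Covariance of fBm: E[B_H(t) B_H(s)]
    return 0.5 * (abs(t) ** (2 * H) + abs(s) ** (2 * H)
                  - abs(t - s) ** (2 * H))

H, a, t, s = 0.7, 3.0, 1.5, 0.4
# Homogeneity of order 2H: R(at, as) = a^(2H) R(t, s)
assert abs(fbm_cov(a * t, a * s, H) - a ** (2 * H) * fbm_cov(t, s, H)) < 1e-12
```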

Stationary increments

It has stationary increments:

[math]\displaystyle{ B_H (t) - B_H (s)\; \sim \; B_H (t-s). }[/math]

Long-range dependence

For H > 1/2 the process exhibits long-range dependence,

[math]\displaystyle{ \sum_{n=1}^\infty E[B_H (1)(B_H (n+1)-B_H (n))] = \infty. }[/math]
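Each summand equals E[B_H(1)(B_H(n+1) − B_H(n))] = ½((n+1)^{2H} − 2n^{2H} + (n−1)^{2H}), which behaves like H(2H−1)n^{2H−2} for large n and so is summable only when H < 1/2. A numerical illustration (the function name is illustrative):

```python
import numpy as np

def fgn_cov(n, H):
    # E[B_H(1)(B_H(n+1) - B_H(n))] = ((n+1)^2H - 2 n^2H + (n-1)^2H)/2
    return 0.5 * ((n + 1) ** (2 * H) - 2 * n ** (2 * H)
                  + np.abs(n - 1) ** (2 * H))

n = np.arange(1, 10 ** 6, dtype=float)
print(fgn_cov(n, 0.75).sum())  # keeps growing as the range extends (diverges)
print(fgn_cov(n, 0.25).sum())  # approaches a finite limit
```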

Regularity

Sample paths are almost surely nowhere differentiable. However, almost all trajectories are locally Hölder continuous of any order strictly less than H: for each such trajectory, for every T > 0 and every ε > 0 there exists a (random) constant c such that

[math]\displaystyle{ |B_H (t)-B_H (s)| \le c |t-s|^{H-\varepsilon} }[/math]

for 0 < s,t < T.

Dimension

With probability 1, the graph of BH(t) has both Hausdorff dimension[2] and box dimension [3] of 2−H.

Integration

As with regular Brownian motion, one can define stochastic integrals with respect to fractional Brownian motion, usually called "fractional stochastic integrals". Unlike regular Brownian motion, however, fractional Brownian motion is not a semimartingale (except when H = 1/2), so the standard Itô theory does not apply directly.

Frequency-domain interpretation

Just as Brownian motion can be viewed as white noise filtered by [math]\displaystyle{ \omega^{-1} }[/math] (i.e. integrated), fractional Brownian motion is white noise filtered by [math]\displaystyle{ \omega^{-H-1/2} }[/math] (corresponding to fractional integration).
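This filtering view suggests an approximate spectral-synthesis simulation: shape the Fourier coefficients of Gaussian noise by [math]\displaystyle{ \omega^{-H-1/2} }[/math] and invert the transform. The sketch below is a heuristic approximation, not an exact fBm sampler, and the function name is illustrative:

```python
import numpy as np

def fbm_spectral(n, H, rng=None):
    """Heuristic spectral synthesis: scale the rFFT of complex Gaussian
    noise by omega^-(H + 1/2) and transform back. Approximate only."""
    rng = np.random.default_rng() if rng is None else rng
    freq = np.fft.rfftfreq(n)
    amp = np.zeros_like(freq)
    amp[1:] = freq[1:] ** (-(H + 0.5))        # filter gain per the text
    noise = (rng.standard_normal(len(freq))
             + 1j * rng.standard_normal(len(freq)))
    path = np.fft.irfft(amp * noise, n)
    return path - path[0]                     # start the path at zero

x = fbm_spectral(1024, H=0.75)
```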

Sample paths

Practical computer realisations of an fBm can be generated,[4] although they are only finite approximations. The sample paths chosen can be thought of as showing discrete sampled points of an fBm process. Three realizations are shown below, each with 1000 points of an fBm with Hurst parameter 0.75.

"H" = 0.75 realisation 1
"H" = 0.75 realisation 2
"H" = 0.75 realisation 3

Realizations of three different types of fBm are shown below, each showing 1000 points, the first with Hurst parameter 0.15, the second with Hurst parameter 0.55, and the third with Hurst parameter 0.95. The higher the Hurst parameter is, the smoother the curve will be.

"H" = 0.15
"H" = 0.55
"H" = 0.95

Method 1 of simulation

One can simulate sample paths of an fBm using methods for generating stationary Gaussian processes with known covariance function. The simplest method relies on the Cholesky decomposition of the covariance matrix (explained below), which on a grid of size [math]\displaystyle{ n }[/math] has complexity of order [math]\displaystyle{ O(n^3) }[/math]. A more complex, but computationally faster, method is the circulant embedding method of (Dietrich & Newsam 1997).

Suppose we want to simulate the values of the fBm at times [math]\displaystyle{ t_1, \ldots, t_n }[/math] using the Cholesky decomposition method.

  • Form the matrix [math]\displaystyle{ \Gamma=\bigl(R(t_i,\, t_j), i,j=1,\ldots,\, n\bigr) }[/math] where [math]\displaystyle{ \,R(t,s)=(s^{2H}+t^{2H}-|t-s|^{2H})/2 }[/math].
  • Compute [math]\displaystyle{ \,\Sigma }[/math] the square root matrix of [math]\displaystyle{ \,\Gamma }[/math], i.e. [math]\displaystyle{ \,\Sigma^2 = \Gamma }[/math]. Loosely speaking, [math]\displaystyle{ \,\Sigma }[/math] is the "standard deviation" matrix associated to the variance-covariance matrix [math]\displaystyle{ \,\Gamma }[/math].
  • Construct a vector [math]\displaystyle{ \,v }[/math] of n numbers drawn independently according to a standard Gaussian distribution.
  • Define [math]\displaystyle{ \,u=\Sigma v }[/math]; then [math]\displaystyle{ \,u }[/math] gives the values of a sample path of an fBm at the times [math]\displaystyle{ t_1, \ldots, t_n }[/math].
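The steps above can be sketched as follows. Instead of the symmetric square root [math]\displaystyle{ \,\Sigma }[/math], this uses the Cholesky factor L with L Lᵀ = Γ; the product L v has the same Gaussian distribution as Σ v. Function names and parameter values are illustrative:

```python
import numpy as np

def fbm_cholesky(t, H, rng=None):
    """Sample fBm at the (distinct, positive) times in t via the
    covariance-matrix method, using the Cholesky factor of Gamma."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(t, dtype=float)
    # Covariance R(t_i, t_j) = (t_i^2H + t_j^2H - |t_i - t_j|^2H)/2
    gamma = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                   - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(gamma)
    v = rng.standard_normal(len(t))   # i.i.d. standard Gaussians
    return L @ v

# Times must exclude 0 (R(0, 0) = 0 would make Gamma singular):
path = fbm_cholesky(np.linspace(0.01, 1.0, 100), H=0.75)
```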

In order to compute [math]\displaystyle{ \,\Sigma }[/math], we can use for instance the Cholesky decomposition method. An alternative method uses the eigenvalues of [math]\displaystyle{ \,\Gamma }[/math]:

  • Since [math]\displaystyle{ \,\Gamma }[/math] is a symmetric, positive-definite matrix, all of its eigenvalues [math]\displaystyle{ \,\lambda_i }[/math] satisfy [math]\displaystyle{ \,\lambda_i\gt 0 }[/math] ([math]\displaystyle{ i=1,\dots,n }[/math]).
  • Let [math]\displaystyle{ \,\Lambda }[/math] be the diagonal matrix of the eigenvalues, i.e. [math]\displaystyle{ \Lambda_{ij} = \lambda_i\,\delta_{ij} }[/math] where [math]\displaystyle{ \delta_{ij} }[/math] is the Kronecker delta. We define [math]\displaystyle{ \Lambda^{1/2} }[/math] as the diagonal matrix with entries [math]\displaystyle{ \lambda_i^ {1/2} }[/math], i.e. [math]\displaystyle{ \Lambda_{ij}^{1/2} = \lambda_i^{1/2}\,\delta_{ij} }[/math].

Note that the result is real-valued because [math]\displaystyle{ \lambda_i\gt 0 }[/math].

  • Let [math]\displaystyle{ \,v_i }[/math] be an eigenvector associated with the eigenvalue [math]\displaystyle{ \,\lambda_i }[/math]. Define [math]\displaystyle{ \,P }[/math] as the matrix whose [math]\displaystyle{ i }[/math]-th column is the eigenvector [math]\displaystyle{ \,v_i }[/math].

Note that since the eigenvectors are linearly independent, the matrix [math]\displaystyle{ \,P }[/math] is invertible.

  • It follows then that [math]\displaystyle{ \Sigma = P\,\Lambda^{1/2}\,P^{-1} }[/math] because [math]\displaystyle{ \Gamma= P\,\Lambda\,P^{-1} }[/math].
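The eigenvalue route can be sketched in a few lines; since [math]\displaystyle{ \,\Gamma }[/math] is symmetric, the eigenvector matrix can be taken orthogonal, so P⁻¹ = Pᵀ. Function name and the chosen H and times are illustrative:

```python
import numpy as np

def sqrtm_eig(gamma):
    """Symmetric square root via eigendecomposition:
    Sigma = P diag(sqrt(lambda_i)) P^T, with orthogonal P from eigh."""
    lam, P = np.linalg.eigh(gamma)        # symmetric input: real spectrum
    return P @ np.diag(np.sqrt(lam)) @ P.T

# Check Sigma^2 = Gamma on a small fBm covariance matrix (H = 0.7 assumed):
H = 0.7
t = np.array([0.2, 0.5, 0.8, 1.0])
gamma = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
               - np.abs(t[:, None] - t[None, :]) ** (2 * H))
sigma = sqrtm_eig(gamma)
print(np.allclose(sigma @ sigma, gamma))  # True
```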

Method 2 of simulation

It is also known that [5]

[math]\displaystyle{ B_H (t)=\int_0^t K_H(t,s) \, dB(s) }[/math]

where B is a standard Brownian motion and

[math]\displaystyle{ K_H(t,s)=\frac{(t-s)^{H-\frac{1}{2}}}{\Gamma(H+\frac{1}{2})}\;_2F_1\left(H-\frac{1}{2},\ \frac{1}{2}-H;\ H+\frac{1}{2};\ 1-\frac{t}{s} \right), }[/math]

where [math]\displaystyle{ _2F_1 }[/math] is the Gauss hypergeometric function.

Say we want to simulate an fBm at points [math]\displaystyle{ 0=t_0\lt t_1\lt \cdots \lt t_n=T }[/math].

  • Construct a vector of n numbers drawn independently according to a standard Gaussian distribution.
  • Multiply it component-wise by [math]\displaystyle{ \sqrt{T/n} }[/math] to obtain the increments of a Brownian motion on [0, T]. Denote this vector by [math]\displaystyle{ (\delta B_1, \ldots, \delta B_n) }[/math].
  • For each [math]\displaystyle{ t_j }[/math], compute
[math]\displaystyle{ B_H (t_j)=\frac{n}{T}\sum_{i=0}^{j-1} \int_{t_i}^{t_{i+1}} K_H(t_j,\, s)\, ds \ \delta B_{i+1}. }[/math]

The integral may be efficiently computed by Gaussian quadrature.
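The procedure above can be sketched as follows, assuming SciPy is available for the hypergeometric function; the subinterval integrals use Gauss–Legendre quadrature, and the function names, grid, and parameter values are illustrative:

```python
import numpy as np
from scipy.special import hyp2f1, gamma as Gamma

def K(t, s, H):
    # Volterra kernel K_H(t, s) for 0 < s < t, as in the formula above
    return ((t - s) ** (H - 0.5) / Gamma(H + 0.5)
            * hyp2f1(H - 0.5, 0.5 - H, H + 0.5, 1.0 - t / s))

def fbm_volterra(n, T, H, rng=None, deg=10):
    """Sketch of the kernel method on a uniform grid of n steps on [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, T, n + 1)
    dB = rng.standard_normal(n) * np.sqrt(T / n)   # Brownian increments
    x, w = np.polynomial.legendre.leggauss(deg)    # nodes/weights on [-1, 1]
    B_H = np.zeros(n + 1)
    for j in range(1, n + 1):
        acc = 0.0
        for i in range(j):
            a, b = t[i], t[i + 1]
            s = 0.5 * (b - a) * x + 0.5 * (a + b)  # map nodes into (a, b)
            integral = 0.5 * (b - a) * np.dot(w, K(t[j], s, H))
            acc += (n / T) * integral * dB[i]      # dB[i] is delta B_{i+1}
        B_H[j] = acc
    return t, B_H

times, path = fbm_volterra(n=50, T=1.0, H=0.75)
```

The double loop makes this O(n²) in the number of grid points, which is acceptable for a sketch; faster variants precompute the kernel integrals.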

Notes

  1. Perrin et al., 2001.
  2. Orey, 1970.
  3. Falconer, Kenneth (2003). Fractal Geometry Mathematical Foundations and Applications (2 ed.). Wiley. p. 268. ISBN 0-470-84861-8. https://archive.org/details/FractalGeometry/page/n289/mode/2up. Retrieved 23 January 2024. 
  4. Kroese, D.P.; Botev, Z.I. (2014). "Spatial Process Generation". Lectures on Stochastic Geometry, Spatial Statistics and Random Fields, Volume II: Analysis, Modeling and Simulation of Complex Structures. Springer-Verlag, Berlin. Bibcode: 2013arXiv1308.0399K.
  5. Decreusefond, L.; Üstünel, A. S. (1999). Stochastic analysis of the fractional Brownian motion. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.201.5698&rep=rep1&type=pdf. 

References

  • Beran, J. (1994), Statistics for Long-Memory Processes, Chapman & Hall, ISBN 0-412-04901-5 .
  • Craigmile P.F. (2003), "Simulating a class of stationary Gaussian processes using the Davies–Harte Algorithm, with application to long memory processes", Journal of Time Series Analysis, 24: 505–511.
  • Dieker, T. (2004). Simulation of fractional Brownian motion (PDF) (M.Sc. thesis). Retrieved 29 December 2012.
  • Dietrich, C. R.; Newsam, G. N. (1997), "Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix", SIAM Journal on Scientific Computing 18 (4): 1088–1107, doi:10.1137/s1064827592240555, Bibcode: 1997SJSC...18.1088D.
  • Falconer, Kenneth (2003), Fractal Geometry Mathematical Foundations and Applications (2 ed.), Wiley, pp. 267-271, ISBN 0-470-84861-8, https://archive.org/details/FractalGeometry/page/n289/mode/2up, retrieved 23 January 2024 .
  • Lévy, P. (1953), Random functions: General theory with special references to Laplacian random functions, University of California Publications in Statistics, 1, pp. 331–390 .
  • Mandelbrot, B.; van Ness, J.W. (1968), "Fractional Brownian motions, fractional noises and applications", SIAM Review 10 (4): 422–437, doi:10.1137/1010093, Bibcode: 1968SIAMR..10..422M.
  • Orey, Steven (1970), "Gaussian sample functions and the Hausdorff dimension of level crossings", Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 15 (3): 249–256, doi:10.1007/BF00534922 .
  • Perrin, E.; Harba, R.; Berzin-Joseph, C.; Iribarren, I.; Bonami, A. (2001). "nth-order fractional Brownian motion and fractional Gaussian noises". IEEE Transactions on Signal Processing 49 (5): 1049–1059. doi:10.1109/78.917808. Bibcode: 2001ITSP...49.1049P. https://ieeexplore.ieee.org/document/917808.
  • Samorodnitsky G., Taqqu M.S. (1994), Stable Non-Gaussian Random Processes, Chapter 7: "Self-similar processes" (Chapman & Hall).
