Onsager–Machlup function

The Onsager–Machlup function is a function that summarizes the dynamics of a continuous stochastic process. It is used to define a probability density for a stochastic process, and it is similar to the Lagrangian of a dynamical system. It is named after Lars Onsager and Stefan Machlup, who were the first to consider such probability densities.[1]

The dynamics of a continuous stochastic process X from time t = 0 to t = T in one dimension, satisfying a stochastic differential equation

[math]\displaystyle{ dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t }[/math]

where W is a Wiener process, can be approximately described by the joint probability density function of its values xi at a finite number of times ti:

[math]\displaystyle{ p(x_1,\ldots,x_n) = \left( \prod^{n-1}_{i=1} \frac{1}{\sqrt{2\pi\sigma(x_i)^2\Delta t_i}} \right) \exp\left(-\sum^{n-1}_{i=1} L\left(x_i,\frac{x_{i+1}-x_i}{\Delta t_i}\right) \, \Delta t_i \right) }[/math]

where

[math]\displaystyle{ L(x,v) = \frac{1}{2}\left(\frac{v - b(x)}{\sigma(x)}\right)^2 }[/math]

and Δti = ti+1 − ti > 0, t1 = 0 and tn = T. A similar approximation is possible for processes in higher dimensions. The approximation is more accurate for smaller time steps Δti, but in the limit Δti → 0 the probability density function becomes ill-defined, one reason being that the product of terms

[math]\displaystyle{ \frac{1}{\sqrt{2\pi\sigma(x_i)^2\Delta t_i}} }[/math]

diverges to infinity. To nevertheless define a density for the continuous stochastic process X, one considers ratios of the probabilities that X lies within a small distance ε of smooth curves φ1 and φ2:[2]

[math]\displaystyle{ \frac{P\left( \left |X_t - \varphi_1(t) \right| \leq \varepsilon \text{ for every }t\in[0,T] \right)}{P\left( \left |X_t - \varphi_2(t) \right | \leq \varepsilon \text{ for every }t\in[0,T] \right)} \to \exp\left(-\int^T_0 L \left (\varphi_1(t),\dot{\varphi}_1(t) \right ) \, dt + \int^T_0 L \left (\varphi_2(t),\dot{\varphi}_2(t) \right) \, dt \right) }[/math]

as ε → 0, where L is the Onsager–Machlup function.
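As a numerical illustration of the discretized density above, the following sketch simulates one path of the stochastic differential equation with the Euler–Maruyama scheme and evaluates the sum in the exponent; the drift b(x) = −x and constant diffusion coefficient σ = 1 are illustrative assumptions, not taken from the article. The diverging prefactor is deliberately omitted, since it is exactly the product discussed above.

    import numpy as np

    # Minimal sketch: simulate dX_t = b(X_t) dt + sigma(X_t) dW_t by Euler-Maruyama
    # and evaluate the discretized Onsager-Machlup action
    #   sum_i L(x_i, (x_{i+1} - x_i)/dt) * dt,  L(x, v) = 0.5*((v - b(x))/sigma(x))**2.
    # The drift and diffusion below are illustrative assumptions.
    rng = np.random.default_rng(0)

    def b(x):
        return -x        # assumed drift

    def sigma(x):
        return 1.0       # assumed constant diffusion coefficient

    T, n = 1.0, 1000
    dt = T / n
    x = np.empty(n + 1)
    x[0] = 0.0
    for i in range(n):
        x[i + 1] = x[i] + b(x[i]) * dt + sigma(x[i]) * np.sqrt(dt) * rng.standard_normal()

    v = np.diff(x) / dt                                   # finite-difference velocities
    L = 0.5 * ((v - b(x[:-1])) / sigma(x[:-1])) ** 2      # discretized Onsager-Machlup function
    print("discretized action:", np.sum(L * dt))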

Definition

Consider a d-dimensional Riemannian manifold M and a diffusion process X = {Xt : 0 ≤ t ≤ T} on M with infinitesimal generator 1/2ΔM + b, where ΔM is the Laplace–Beltrami operator and b is a vector field. For any two smooth curves φ1, φ2 : [0, T] → M,

[math]\displaystyle{ \lim_{\varepsilon\downarrow0} \frac{P\left( \rho(X_t,\varphi_1(t)) \leq \varepsilon \text{ for every }t\in[0,T] \right)}{P\left( \rho(X_t,\varphi_2(t)) \leq \varepsilon \text{ for every }t\in[0,T] \right)} = \exp\left( -\int^T_0 L \left (\varphi_1(t),\dot{\varphi}_1(t) \right ) \, dt +\int^T_0 L \left (\varphi_2(t),\dot{\varphi}_2(t) \right ) \, dt \right) }[/math]

where ρ is the Riemannian distance, [math]\displaystyle{ \scriptstyle \dot{\varphi}_1, \dot{\varphi}_2 }[/math] denote the first derivatives of φ1, φ2, and L is called the Onsager–Machlup function.

The Onsager–Machlup function is given by[3][4][5]

[math]\displaystyle{ L(x,v) = \tfrac{1}{2}\|v-b(x)\|_x^2 +\tfrac{1}{2}\operatorname{div}\, b(x) - \tfrac{1}{12}R(x), }[/math]

where || ⋅ ||x is the Riemannian norm in the tangent space Tx(M) at x, div b(x) is the divergence of b at x, and R(x) is the scalar curvature at x.
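As a minimal computational sketch (an illustration under assumptions, not part of the cited results), the formula can be evaluated directly in the flat case M = Rd, where the Riemannian norm is the Euclidean norm, the scalar curvature vanishes, and the divergence of an assumed drift field is approximated by central finite differences:

    import numpy as np

    def om_function(x, v, b, h=1e-5, scalar_curvature=lambda y: 0.0):
        # Onsager-Machlup function 0.5*||v - b(x)||^2 + 0.5*div b(x) - R(x)/12
        # on flat R^d; div b is approximated by central finite differences.
        x = np.asarray(x, dtype=float)
        v = np.asarray(v, dtype=float)
        div_b = 0.0
        for i in range(x.size):
            e = np.zeros(x.size)
            e[i] = h
            div_b += (b(x + e)[i] - b(x - e)[i]) / (2.0 * h)
        return 0.5 * np.sum((v - b(x)) ** 2) + 0.5 * div_b - scalar_curvature(x) / 12.0

    # Example with the assumed linear drift b(x) = -x, for which div b = -d
    print(om_function([1.0, 0.0], [0.5, 0.5], lambda x: -x))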

Examples

The following examples give explicit expressions for the Onsager–Machlup function of some continuous stochastic processes.

Wiener process on the real line

The Onsager–Machlup function of a Wiener process on the real line R is given by[6]

[math]\displaystyle{ L(x,v)=\tfrac{1}{2}|v|^2. }[/math]

Proof: Let X = {Xt : 0 ≤ t ≤ T} be a Wiener process on R and let φ : [0, T] → R be a twice differentiable curve such that φ(0) = X0. Define another process Xφ = {Xtφ : 0 ≤ t ≤ T} by Xtφ = Xt − φ(t) and a measure Pφ by

[math]\displaystyle{ dP^\varphi = \exp\left( \int^T_0\dot{\varphi}(t) \, dX^\varphi_t + \int^T_0\tfrac{1}{2} \left |\dot{\varphi}(t) \right |^2 \, dt \right) \, dP. }[/math]

For every ε > 0, the probability that |Xt − φ(t)| ≤ ε for every t ∈ [0, T] satisfies

[math]\displaystyle{ \begin{align} P \left ( \left |X_t-\varphi(t) \right |\leq\varepsilon \text{ for every }t\in[0,T] \right ) &=P\left ( \left |X^\varphi_t \right|\leq\varepsilon \text{ for every }t\in[0,T] \right) \\ &=\int_{\left \{ \left |X^\varphi_t \right |\leq\varepsilon\text{ for every }t\in[0,T] \right\}} \exp\left( -\int^T_0\dot{\varphi}(t) \, dX^\varphi_t -\int^T_0\tfrac{1}{2}|\dot{\varphi}(t)|^2 \, dt \right) \, dP^\varphi. \end{align} }[/math]

By Girsanov's theorem, the distribution of Xφ under Pφ equals the distribution of X under P, so the former may be replaced by the latter in the integral:

[math]\displaystyle{ P(|X_t-\varphi(t)|\leq\varepsilon \text{ for every }t\in[0,T])=\int_{\left \{ \left |X^\varphi_t \right |\leq\varepsilon\text{ for every }t\in[0,T] \right\}} \exp\left( -\int^T_0\dot{\varphi}(t) \, dX_t -\int^T_0\tfrac{1}{2}|\dot{\varphi}(t)|^2 \, dt \right) \, dP. }[/math]

By Itō's lemma it holds that

[math]\displaystyle{ \int^T_0\dot{\varphi}(t) \, dX_t = \dot{\varphi}(T)X_T - \int^T_0\ddot{\varphi}(t)X_t \, dt, }[/math]

where [math]\displaystyle{ \scriptstyle \ddot{\varphi} }[/math] is the second derivative of φ (the boundary term at t = 0 vanishes because X0 = 0). The right-hand side is of order ε on the event where |Xt| ≤ ε for every t ∈ [0, T], so this term vanishes in the limit ε → 0, hence

[math]\displaystyle{ \lim_{\varepsilon\downarrow 0} \frac{P(|X_t-\varphi(t)|\leq\varepsilon \text{ for every }t\in[0,T])}{P(|X_t|\leq\varepsilon\text{ for every } t \in [0,T])} =\exp\left( -\int^T_0\tfrac{1}{2}|\dot{\varphi}(t)|^2 \, dt \right). }[/math]
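This ratio can be checked numerically. The sketch below (with the assumed curve φ(t) = at and illustrative parameter values, none taken from the references) estimates both tube probabilities by Monte Carlo over a time grid and compares their ratio with exp(−½∫|φ̇|² dt) = exp(−a²T/2); since ε is finite and the supremum is only taken over grid points, the agreement is approximate.

    import numpy as np

    # Crude Monte Carlo check of the tube-probability ratio for a Wiener process.
    # phi(t) = a*t and all parameter values are illustrative assumptions.
    rng = np.random.default_rng(1)
    T, n, a, eps, n_paths = 0.5, 100, 1.0, 0.5, 100_000
    dt = T / n
    t = np.linspace(dt, T, n)                      # grid points t_1, ..., t_n

    increments = np.sqrt(dt) * rng.standard_normal((n_paths, n))
    paths = np.cumsum(increments, axis=1)          # Wiener paths at the grid points

    in_tube_phi = np.all(np.abs(paths - a * t) <= eps, axis=1).mean()
    in_tube_zero = np.all(np.abs(paths) <= eps, axis=1).mean()

    print("estimated ratio:", in_tube_phi / in_tube_zero)
    print("predicted ratio:", np.exp(-0.5 * a**2 * T))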

Diffusion processes with constant diffusion coefficient on Euclidean space

The Onsager–Machlup function in the one-dimensional case with constant diffusion coefficient σ is given by[7]

[math]\displaystyle{ L(x,v)=\frac{1}{2}\left|\frac{v-b(x)}{\sigma}\right|^2 + \frac{1}{2}\frac{db}{dx}(x). }[/math]
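For instance, with the illustrative drift b(x) = −θx (an Ornstein–Uhlenbeck-type choice, not one of the cited examples) one has db/dx = −θ, so the formula above gives

[math]\displaystyle{ L(x,v)=\frac{1}{2}\left|\frac{v+\theta x}{\sigma}\right|^2 - \frac{\theta}{2}. }[/math]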

In the d-dimensional case, with σ equal to the unit matrix, it is given by[8]

[math]\displaystyle{ L(x,v)=\frac{1}{2}\|v-b(x)\|^2 + \frac{1}{2}(\operatorname{div}\, b)(x), }[/math]

where || ⋅ || is the Euclidean norm and

[math]\displaystyle{ (\operatorname{div}\, b)(x) = \sum_{i=1}^d \frac{\partial}{\partial x_i} b_i(x). }[/math]
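For example, with an assumed linear drift b(x) = Ax for a constant d × d matrix A (an illustrative choice), (div b)(x) = tr A, and the formula gives

[math]\displaystyle{ L(x,v)=\tfrac{1}{2}\|v-Ax\|^2 + \tfrac{1}{2}\operatorname{tr} A. }[/math]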

Generalizations

Generalizations have been obtained by weakening the differentiability condition on the curve φ.[9] Rather than taking the maximum distance between the stochastic process and the curve over a time interval, other conditions have been considered such as distances based on completely convex norms[10] and Hölder, Besov and Sobolev type norms.[11]

Applications

The Onsager–Machlup function can be used for purposes of reweighting and sampling trajectories,[12] as well as for determining the most probable trajectory of a diffusion process.[13][14]
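As a sketch of the second use, a most probable trajectory between fixed endpoints can be approximated by minimizing the discretized Onsager–Machlup action over the interior points of a path. The code below uses an assumed one-dimensional drift b(x) = −x, unit diffusion coefficient, and hypothetical endpoint values, none taken from the cited references; the minimizer approximates a solution of the Euler–Lagrange equation of L.

    import numpy as np
    from scipy.optimize import minimize

    def b(x):
        return -x                     # assumed drift

    def db_dx(x):
        return -np.ones_like(x)       # its derivative

    T, n = 1.0, 100
    dt = T / n
    x_start, x_end = 0.0, 1.0         # hypothetical fixed endpoints

    def action(interior):
        # Discretized Onsager-Machlup action with L(x,v) = 0.5*(v - b(x))**2 + 0.5*b'(x)
        x = np.concatenate(([x_start], interior, [x_end]))
        v = np.diff(x) / dt
        L = 0.5 * (v - b(x[:-1])) ** 2 + 0.5 * db_dx(x[:-1])
        return np.sum(L * dt)

    initial = np.linspace(x_start, x_end, n + 1)[1:-1]    # straight-line initial guess
    result = minimize(action, initial, method="L-BFGS-B")
    path = np.concatenate(([x_start], result.x, [x_end]))
    print("minimized discretized action:", result.fun)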

References

  1. Onsager, L. and Machlup, S. (1953)
  2. Stratonovich, R. (1971)
  3. Takahashi, Y. and Watanabe, S. (1980)
  4. Fujita, T. and Kotani, S. (1982)
  5. Wittich, Olaf
  6. Ikeda, N. and Watanabe, S. (1980), Chapter VI, Section 9
  7. Dürr, D. and Bach, A. (1978)
  8. Ikeda, N. and Watanabe, S. (1980), Chapter VI, Section 9
  9. Zeitouni, O. (1989)
  10. Shepp, L. and Zeitouni, O. (1993)
  11. Capitaine, M. (1995)
  12. Adib, A.B. (2008)
  13. Adib, A.B. (2008)
  14. Dürr, D. and Bach, A. (1978)
