Stochastic logarithm
In stochastic calculus, the stochastic logarithm of a semimartingale [math]\displaystyle{ Y }[/math] such that [math]\displaystyle{ Y\neq0 }[/math] and [math]\displaystyle{ Y_-\neq0 }[/math] is the semimartingale [math]\displaystyle{ X }[/math] given by[1][math]\displaystyle{ dX_t=\frac{dY_t}{Y_{t-}},\quad X_0=0. }[/math]In layperson's terms, the stochastic logarithm of [math]\displaystyle{ Y }[/math] measures the cumulative percentage change in [math]\displaystyle{ Y }[/math].
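The defining relation can be sketched in discrete time, where the integral becomes a cumulative sum of per-step percentage changes. A minimal Python sketch with a hypothetical sample path:

```python
def stochastic_log(path):
    """Discrete stochastic logarithm: cumulative sum of dY / Y_-."""
    x = [0.0]
    for prev, curr in zip(path, path[1:]):
        x.append(x[-1] + (curr - prev) / prev)  # dX = dY / Y_{t-}
    return x

Y = [100.0, 110.0, 99.0, 108.9]  # hypothetical price path
print(stochastic_log(Y))         # each increment is that step's percentage change
```

Here a 10% rise, a 10% fall, and another 10% rise accumulate to 0.1, even though the terminal value differs from a 10% gain on the initial one.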
Notation and terminology
The process [math]\displaystyle{ X }[/math] obtained above is commonly denoted [math]\displaystyle{ \mathcal{L}(Y) }[/math]. The terminology stochastic logarithm arises from the similarity of [math]\displaystyle{ \mathcal{L}(Y) }[/math] to the natural logarithm [math]\displaystyle{ \log(Y) }[/math]: If [math]\displaystyle{ Y }[/math] is absolutely continuous with respect to time and [math]\displaystyle{ Y\neq 0 }[/math], then [math]\displaystyle{ X }[/math] solves, path-by-path, the differential equation [math]\displaystyle{ \frac{dX_t}{dt} = \frac{\frac{dY_t}{dt}}{Y_t}, }[/math]whose solution is [math]\displaystyle{ X =\log|Y|-\log|Y_0| }[/math].
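Under these smoothness assumptions the claim can be checked numerically. The sketch below uses the hypothetical path [math]\displaystyle{ Y_t = e^{\sin t} }[/math], for which [math]\displaystyle{ \log Y_t - \log Y_0 = \sin t }[/math]:

```python
import math

def stochastic_log_smooth(f, t_end, n=100_000):
    """Riemann-sum approximation of  int_0^t dY_s / Y_s  for a smooth path Y = f."""
    x, dt = 0.0, t_end / n
    for k in range(n):
        y_prev = f(k * dt)
        x += (f((k + 1) * dt) - y_prev) / y_prev
    return x

t = 2.0
approx = stochastic_log_smooth(lambda s: math.exp(math.sin(s)), t)
exact = math.sin(t) - math.sin(0.0)  # log Y_t - log Y_0 for this path
print(approx, exact)                 # agree up to discretization error
```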
General formula and special cases
- Without any assumptions on the semimartingale [math]\displaystyle{ Y }[/math] (other than [math]\displaystyle{ Y\neq 0, Y_-\neq 0 }[/math]), one has[1][math]\displaystyle{ \mathcal{L}(Y)_t = \log\Biggl|\frac{Y_t}{Y_0}\Biggr| +\frac12\int_0^t\frac{d[Y]^c_s}{Y_{s-}^2} +\sum_{s\le t}\Biggl(\log\Biggl| 1 + \frac{\Delta Y_s}{Y_{s-}} \Biggr| -\frac{\Delta Y_s}{Y_{s-}}\Biggr),\qquad t\ge0, }[/math]where [math]\displaystyle{ [Y]^c }[/math] is the continuous part of the quadratic variation of [math]\displaystyle{ Y }[/math] and the sum extends over the (countably many) jumps of [math]\displaystyle{ Y }[/math] up to time [math]\displaystyle{ t }[/math].
- If [math]\displaystyle{ Y }[/math] is continuous, then [math]\displaystyle{ \mathcal{L}(Y)_t = \log\Biggl|\frac{Y_t}{Y_0}\Biggr| +\frac12\int_0^t\frac{d[Y]^c_s}{Y_{s-}^2},\qquad t\ge0. }[/math]In particular, if [math]\displaystyle{ Y }[/math] is a geometric Brownian motion, then [math]\displaystyle{ X }[/math] is a Brownian motion with a constant drift rate.
- If [math]\displaystyle{ Y }[/math] is continuous and of finite variation, then[math]\displaystyle{ \mathcal{L}(Y) = \log\Biggl|\frac{Y}{Y_0}\Biggr|. }[/math]Here [math]\displaystyle{ Y }[/math] need not be differentiable with respect to time; for example, [math]\displaystyle{ Y }[/math] can equal 1 plus the Cantor function.
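The continuous-path formula can be illustrated on a simulated geometric Brownian motion: up to discretization error, the cumulative sum of [math]\displaystyle{ dY_s/Y_{s-} }[/math] matches [math]\displaystyle{ \log(Y_t/Y_0) }[/math] plus half the discrete quadratic variation. A sketch with illustrative parameters (drift 0.05, volatility 0.2):

```python
import math, random

random.seed(0)
mu, sigma, T, n = 0.05, 0.2, 1.0, 100_000   # illustrative parameters
dt = T / n

y, slog, half_qv = 1.0, 0.0, 0.0            # Y_0 = 1
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    dy = y * (mu * dt + sigma * dw)  # Euler step of the geometric Brownian motion
    r = dy / y                       # one increment of L(Y)
    slog += r
    half_qv += 0.5 * r * r           # discrete analogue of (1/2) d[Y]^c_s / Y_{s-}^2
    y += dy

print(slog, math.log(y) + half_qv)   # the two sides of the formula
```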
Properties
- The stochastic logarithm is the inverse operation to the stochastic exponential: If [math]\displaystyle{ \Delta X\neq -1 }[/math], then [math]\displaystyle{ \mathcal{L}(\mathcal{E}(X)) = X-X_0 }[/math]. Conversely, if [math]\displaystyle{ Y\neq 0 }[/math] and [math]\displaystyle{ Y_-\neq 0 }[/math], then [math]\displaystyle{ \mathcal{E}(\mathcal{L}(Y)) = Y/Y_0 }[/math].[1]
- Unlike the natural logarithm [math]\displaystyle{ \log(Y_t) }[/math], which depends only on the value of [math]\displaystyle{ Y }[/math] at time [math]\displaystyle{ t }[/math], the stochastic logarithm [math]\displaystyle{ \mathcal{L}(Y)_t }[/math] depends not only on [math]\displaystyle{ Y_t }[/math] but also on the whole history of [math]\displaystyle{ Y }[/math] in the time interval [math]\displaystyle{ [0,t] }[/math]. For this reason one must write [math]\displaystyle{ \mathcal{L}(Y)_t }[/math] and not [math]\displaystyle{ \mathcal{L}(Y_t) }[/math].
- The stochastic logarithm of a local martingale that does not vanish together with its left limit is again a local martingale.
- All the formulae and properties above also apply to the stochastic logarithm of a complex-valued [math]\displaystyle{ Y }[/math].
- The stochastic logarithm can also be defined for processes [math]\displaystyle{ Y }[/math] that are absorbed in zero after jumping to zero. Such a definition is meaningful up to the first time that [math]\displaystyle{ Y }[/math] reaches [math]\displaystyle{ 0 }[/math] continuously.[2]
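The inverse relation in the first property holds exactly in discrete time, where [math]\displaystyle{ \mathcal{E}(X) }[/math] is the cumulative product of [math]\displaystyle{ 1+\Delta X }[/math] and [math]\displaystyle{ \mathcal{L}(Y) }[/math] is the cumulative sum of [math]\displaystyle{ \Delta Y/Y_- }[/math]. A minimal sketch with hypothetical increments:

```python
def stoch_exp(increments):
    """E(X): cumulative product of (1 + dX), started at 1 (requires dX != -1)."""
    e = [1.0]
    for dx in increments:
        e.append(e[-1] * (1.0 + dx))
    return e

def stoch_log(path):
    """L(Y): cumulative sum of dY / Y_-, started at 0."""
    x = [0.0]
    for prev, curr in zip(path, path[1:]):
        x.append(x[-1] + (curr - prev) / prev)
    return x

dX = [0.1, -0.05, 0.2]           # hypothetical increments of X, with X_0 = 0
recovered = stoch_log(stoch_exp(dX))
print(recovered)                 # recovers the running sums of dX, i.e. X - X_0
```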
Useful identities
- Converse of the Yor formula:[1] If [math]\displaystyle{ Y^{(1)},Y^{(2)} }[/math] do not vanish together with their left limits, then[math]\displaystyle{ \mathcal{L}\bigl(Y^{(1)}Y^{(2)}\bigr) = \mathcal{L}\bigl(Y^{(1)}\bigr) + \mathcal{L}\bigl(Y^{(2)}\bigr) + \bigl[\mathcal{L}\bigl(Y^{(1)}\bigr),\mathcal{L}\bigl(Y^{(2)}\bigr)\bigr]. }[/math]
- Stochastic logarithm of [math]\displaystyle{ 1/\mathcal{E}(X) }[/math]:[2] If [math]\displaystyle{ \Delta X\neq -1 }[/math], then[math]\displaystyle{ \mathcal{L}\biggl(\frac{1}{\mathcal{E}(X)}\biggr)_t = X_0-X_t-[X]^c_t +\sum_{s\leq t}\frac{(\Delta X_s)^2}{1+\Delta X_s}. }[/math]
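Both identities hold exactly in discrete time when every increment is treated as a jump, so that [math]\displaystyle{ [X]^c = 0 }[/math]. A sketch with hypothetical paths and jump sizes:

```python
def stoch_log_increments(path):
    """Per-step increments dY / Y_- of the discrete stochastic logarithm."""
    return [(c - p) / p for p, c in zip(path, path[1:])]

# Converse of the Yor formula: with a = dY1/Y1_-, b = dY2/Y2_- one has
# (1 + a)(1 + b) - 1 = a + b + a*b, so the identity holds step by step.
Y1 = [1.0, 1.2, 1.1, 1.4]               # hypothetical nonvanishing paths
Y2 = [2.0, 1.8, 2.1, 2.0]
a, b = stoch_log_increments(Y1), stoch_log_increments(Y2)
lhs_yor = sum(stoch_log_increments([u * v for u, v in zip(Y1, Y2)]))
rhs_yor = sum(a) + sum(b) + sum(x * y for x, y in zip(a, b))  # bracket term

# Stochastic logarithm of 1/E(X): one step gives
# d(1/E) / (1/E)_- = 1/(1 + dX) - 1 = -dX + dX^2 / (1 + dX).
dX = [0.1, -0.2, 0.3, 0.05]             # hypothetical jumps of X, all != -1
e, inv_path = 1.0, [1.0]
for dx in dX:
    e *= 1.0 + dx                       # stochastic exponential E(X)
    inv_path.append(1.0 / e)
lhs_inv = sum(stoch_log_increments(inv_path))
rhs_inv = -sum(dX) + sum(dx * dx / (1.0 + dx) for dx in dX)  # X_0 - X_t + jump sum

print(lhs_yor, rhs_yor)                 # agree up to rounding
print(lhs_inv, rhs_inv)
```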
Applications
- Girsanov's theorem can be paraphrased as follows: Let [math]\displaystyle{ Q }[/math] be a probability measure equivalent to another probability measure [math]\displaystyle{ P }[/math]. Denote by [math]\displaystyle{ Z }[/math] the uniformly integrable martingale closed by [math]\displaystyle{ Z_\infty = dQ/dP }[/math]. For a semimartingale [math]\displaystyle{ U }[/math] the following are equivalent:
- Process [math]\displaystyle{ U }[/math] is special under [math]\displaystyle{ Q }[/math].
- Process [math]\displaystyle{ U+[U,\mathcal{L}(Z)] }[/math] is special under [math]\displaystyle{ P }[/math].
- If either of these conditions holds, then the [math]\displaystyle{ Q }[/math]-drift of [math]\displaystyle{ U }[/math] equals the [math]\displaystyle{ P }[/math]-drift of [math]\displaystyle{ U+[U,\mathcal{L}(Z)] }[/math].
References
- ↑ 1.0 1.1 1.2 1.3 Jacod, Jean; Shiryaev, Albert Nikolaevich (2003). Limit theorems for stochastic processes (2nd ed.). Berlin: Springer. pp. 134–138. ISBN 3-540-43932-3. OCLC 50554399. https://www.worldcat.org/oclc/50554399.
- ↑ 2.0 2.1 Larsson, Martin; Ruf, Johannes (2019). "Stochastic exponentials and logarithms on stochastic intervals — A survey" (in en). Journal of Mathematical Analysis and Applications 476 (1): 2–12. doi:10.1016/j.jmaa.2018.11.040. https://linkinghub.elsevier.com/retrieve/pii/S0022247X18309867.
Original source: https://en.wikipedia.org/wiki/Stochastic logarithm.