Antithetic variates
In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Because the error of a Monte Carlo estimate decreases only as one over the square root of the number of sample paths, a very large number of paths is required to obtain an accurate result. The antithetic variates method reduces the variance of the simulation results.[1][2]
Underlying principle
The antithetic variates technique consists of taking, for every sample path obtained, its antithetic path: given a path [math]\displaystyle{ \{\varepsilon_1,\dots,\varepsilon_M\} }[/math], also take [math]\displaystyle{ \{-\varepsilon_1,\dots,-\varepsilon_M\} }[/math]. The advantage of this technique is twofold: it reduces the number of normal draws needed to generate N paths, and it reduces the variance of the sample paths, improving the precision.
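As an illustration, the following is a minimal sketch in Python (assuming NumPy; the path count, path length, and seed are arbitrary choices) of how each set of normal draws produces both a path and its antithetic mirror at no extra sampling cost:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 10                                # N simulated paths of M increments each

eps = rng.standard_normal((N, M))           # draws {eps_1, ..., eps_M} for each path
paths = np.cumsum(eps, axis=1)              # e.g. Brownian-type paths built from the draws
antithetic_paths = np.cumsum(-eps, axis=1)  # mirrored paths, reusing the same draws

print(paths.shape, antithetic_paths.shape)  # 2N paths obtained from N sets of draws
```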
Suppose that we would like to estimate
- [math]\displaystyle{ \theta = \mathrm{E}( h(X) ) = \mathrm{E}( Y ) \, }[/math]
For that we have generated two samples
- [math]\displaystyle{ Y_1\text{ and }Y_2 \, }[/math]
An unbiased estimate of [math]\displaystyle{ {\theta} }[/math] is given by
- [math]\displaystyle{ \hat \theta = \frac{Y_1 + Y_2}{2}. }[/math]
Its variance is
- [math]\displaystyle{ \text{Var}(\hat \theta) = \frac{\text{Var}(Y_1) + \text{Var}(Y_2) + 2\text{Cov}(Y_1,Y_2)}{4} }[/math]
so the variance is reduced whenever [math]\displaystyle{ \text{Cov}(Y_1,Y_2) }[/math] is negative, that is, whenever the two samples are negatively correlated.
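The identity above can be checked numerically. The following Python sketch (assuming NumPy; the bivariate-normal test pair and its covariance are arbitrary illustrative choices, not part of the method) compares both sides of the formula and shows that a negative covariance lowers the variance of the pairwise estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# A negatively correlated pair (Y1, Y2), drawn here from a bivariate normal
# purely as a test case; any joint distribution would serve for the identity.
true_cov = np.array([[1.0, -0.6],
                     [-0.6, 1.0]])
y1, y2 = rng.multivariate_normal([0.0, 0.0], true_cov, size=100_000).T

theta_hat = 0.5 * (y1 + y2)      # the pairwise estimator defined above
lhs = theta_hat.var(ddof=1)      # empirical Var(theta_hat)

c = np.cov(y1, y2, ddof=1)       # empirical variances and covariance of Y1, Y2
rhs = (c[0, 0] + c[1, 1] + 2.0 * c[0, 1]) / 4.0

print(lhs, rhs)                  # the two sides agree up to floating-point rounding
print(c[0, 1] < 0)               # Cov(Y1, Y2) < 0, so the variance is reduced
```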
Example 1
If the variable X follows a uniform distribution on [0, 1], the first sample will be [math]\displaystyle{ u_1, \ldots, u_n }[/math], where each [math]\displaystyle{ u_i }[/math] is drawn from U(0, 1). The second sample is built from [math]\displaystyle{ u'_1, \ldots, u'_n }[/math], where [math]\displaystyle{ u'_i = 1-u_i }[/math] for each i. If the [math]\displaystyle{ u_i }[/math] are uniform on [0, 1], so are the [math]\displaystyle{ u'_i }[/math]. Furthermore, their covariance is negative, which allows for variance reduction.
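A short Python sketch of this construction (assuming NumPy; sample size and seed are illustrative): the transformed sample has the same mean and variance as the original one, while the covariance between the two samples is negative, approximately −1/12:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.random(100_000)          # u_i drawn from U(0, 1)
u_prime = 1.0 - u                # antithetic sample u'_i = 1 - u_i

print(u_prime.mean(), u_prime.var())   # ~0.5 and ~1/12, just like u itself
print(np.cov(u, u_prime)[0, 1])        # ~ -1/12, i.e. negative covariance
```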
Example 2: integral calculation
We would like to estimate
- [math]\displaystyle{ I = \int_0^1 \frac{1}{1+x} \, \mathrm{d}x. }[/math]
The exact result is [math]\displaystyle{ I=\ln 2 \approx 0.69314718 }[/math]. This integral can be seen as the expected value of [math]\displaystyle{ f(U) }[/math], where
- [math]\displaystyle{ f(x) = \frac{1}{1+x} }[/math]
and U follows a uniform distribution on [0, 1].
The following table compares the classical Monte Carlo estimate (sample size 2n, with n = 1500) to the antithetic variates estimate (sample size n, completed with the transformed sample [math]\displaystyle{ 1-u_i }[/math]):

| Method | Estimate | Standard error |
|---|---|---|
| Classical estimate | 0.69365 | 0.00255 |
| Antithetic variates | 0.69399 | 0.00063 |
The antithetic variates method yields a substantially smaller standard error, demonstrating the variance reduction.
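The comparison can be reproduced with a short Python sketch (assuming NumPy; the seed is arbitrary, so the exact figures will differ slightly from those in the table):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1500
f = lambda x: 1.0 / (1.0 + x)

# Classical Monte Carlo: 2n independent uniforms.
u_classical = rng.random(2 * n)
y_classical = f(u_classical)
est_classical = y_classical.mean()
se_classical = y_classical.std(ddof=1) / np.sqrt(2 * n)

# Antithetic variates: n uniforms completed with their mirrored counterparts,
# averaged pairwise so that the n pair means are i.i.d.
u = rng.random(n)
pair_means = 0.5 * (f(u) + f(1.0 - u))
est_antithetic = pair_means.mean()
se_antithetic = pair_means.std(ddof=1) / np.sqrt(n)

print(est_classical, se_classical)     # close to ln 2 ~ 0.6931, larger standard error
print(est_antithetic, se_antithetic)   # close to ln 2, noticeably smaller standard error
```

Note that the antithetic standard error is computed from the n pair averages, since the two members of a pair are not independent.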
References
- ↑ Botev, Z.; Ridder, A. (2017). "Variance Reduction". Wiley StatsRef: Statistics Reference Online: 1–6. doi:10.1002/9781118445112.stat07975. ISBN 9781118445112.
- ↑ Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo Methods. John Wiley & Sons. Chapter 9.3.
Original source: https://en.wikipedia.org/wiki/Antithetic_variates.