Autoregressive conditional heteroskedasticity

In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms;[1] often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.[2]

ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering, i.e. periods of swings interspersed with periods of relative calm. ARCH-type models are sometimes considered to be in the family of stochastic volatility models, although this is strictly incorrect since at time t the volatility is completely pre-determined (deterministic) given previous values.[3]

Model specification

To model a time series using an ARCH process, let [math]\displaystyle{ ~\epsilon_t~ }[/math] denote the error terms (return residuals with respect to a mean process), i.e. the terms of the series. These [math]\displaystyle{ ~\epsilon_t~ }[/math] are split into a stochastic piece [math]\displaystyle{ z_t }[/math] and a time-dependent standard deviation [math]\displaystyle{ \sigma_t }[/math] that characterizes the typical size of the terms, so that

[math]\displaystyle{ ~\epsilon_t=\sigma_t z_t ~ }[/math]

The random variable [math]\displaystyle{ z_t }[/math] is a strong white noise process. The series [math]\displaystyle{ \sigma_t^2 }[/math] is modeled by

[math]\displaystyle{ \sigma_t^2=\alpha_0+\alpha_1 \epsilon_{t-1}^2+\cdots+\alpha_q \epsilon_{t-q}^2 = \alpha_0 + \sum_{i=1}^q \alpha_{i} \epsilon_{t-i}^2 }[/math],
where [math]\displaystyle{ ~\alpha_0\gt 0~ }[/math] and [math]\displaystyle{ \alpha_i\ge 0,~i\gt 0 }[/math].
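As a concrete illustration, the following Python sketch simulates an ARCH(1) process; all parameter values are illustrative assumptions, not taken from the text.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative ARCH(1) parameters: alpha_0 > 0, 0 <= alpha_1 < 1
    alpha0, alpha1 = 0.2, 0.5
    T = 1000

    eps = np.zeros(T)                    # innovations epsilon_t
    sigma2 = np.zeros(T)                 # conditional variances sigma_t^2
    sigma2[0] = alpha0 / (1 - alpha1)    # start at the unconditional variance

    for t in range(1, T):
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2          # ARCH(1) recursion
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()    # eps_t = sigma_t * z_t

    print("sample variance:", eps.var(), "unconditional variance:", alpha0 / (1 - alpha1))

Runs of large |eps_t| tend to cluster together, which is the volatility clustering mentioned above.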

An ARCH(q) model can be estimated using ordinary least squares. Engle (1982) proposed a Lagrange multiplier test for whether the residuals [math]\displaystyle{ \epsilon_t }[/math] exhibit time-varying heteroskedasticity. The procedure is as follows (a code sketch follows the list):

  1. Estimate the best fitting autoregressive model AR(q) [math]\displaystyle{ y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \epsilon_t }[/math].
  2. Obtain the squared errors [math]\displaystyle{ \hat \epsilon_t^2 }[/math] and regress them on a constant and q of their own lagged values:
    [math]\displaystyle{ \hat \epsilon_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \hat \epsilon_{t-i}^2 }[/math]
    where q is the length of ARCH lags.
  3. The null hypothesis is that, in the absence of ARCH components, [math]\displaystyle{ \alpha_i = 0 }[/math] for all [math]\displaystyle{ i = 1, \cdots, q }[/math]. The alternative hypothesis is that, in the presence of ARCH components, at least one of the estimated [math]\displaystyle{ \alpha_i }[/math] coefficients is significant. In a sample of T residuals under the null hypothesis of no ARCH errors, the test statistic T'R² follows a [math]\displaystyle{ \chi^2 }[/math] distribution with q degrees of freedom, where [math]\displaystyle{ T' }[/math] is the number of observations in the regression of the squared residuals on their lags (i.e. [math]\displaystyle{ T'=T-q }[/math]). If T'R² is greater than the chi-squared critical value, we reject the null hypothesis and conclude there is an ARCH effect in the ARMA model. If T'R² is smaller than the chi-squared critical value, we do not reject the null hypothesis.
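The procedure above can be coded directly with ordinary least squares. A minimal Python sketch (the residual series and the lag length q are illustrative assumptions):

    import numpy as np
    from scipy import stats

    def arch_lm_test(resid, q):
        """Regress squared residuals on a constant and q of their own lags,
        then compare T'R^2 with a chi-squared(q) critical value."""
        e2 = np.asarray(resid) ** 2
        y = e2[q:]                                      # dependent variable, T' = T - q values
        X = np.column_stack([np.ones(len(y))] +         # constant
                            [e2[q - i:-i] for i in range(1, q + 1)])   # lagged squared residuals
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid_reg = y - X @ beta
        r2 = 1 - np.sum(resid_reg ** 2) / np.sum((y - y.mean()) ** 2)
        stat = len(y) * r2                              # T'R^2
        return stat, stats.chi2.sf(stat, df=q)          # statistic and p-value

    # White-noise residuals should usually not reject the null of no ARCH effects
    rng = np.random.default_rng(1)
    print(arch_lm_test(rng.standard_normal(500), q=4))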

GARCH

If an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.[2]

In that case, the GARCH(p, q) model (where p is the order of the GARCH terms [math]\displaystyle{ ~\sigma^2 }[/math] and q is the order of the ARCH terms [math]\displaystyle{ ~\epsilon^2 }[/math]), following the notation of the original paper, is given by

[math]\displaystyle{ y_t=x'_t b +\epsilon_t }[/math]

[math]\displaystyle{ \epsilon_t| \psi_{t-1} \sim\mathcal{N}(0, \sigma^2_t) }[/math]

[math]\displaystyle{ \sigma_t^2=\omega + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 + \beta_1 \sigma_{t-1}^2 + \cdots + \beta_p\sigma_{t-p}^2 = \omega + \sum_{i=1}^q \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^p \beta_i \sigma_{t-i}^2 }[/math]
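In practice, the GARCH(1,1) case of the variance equation above is typically estimated by maximum likelihood. A minimal sketch assuming the third-party Python package "arch" is installed (the return series here is a white-noise placeholder, not real data):

    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(0)
    returns = 100 * rng.standard_normal(1000)   # placeholder return series (illustrative)

    # Constant mean, GARCH(1,1) variance: sigma_t^2 = omega + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2
    model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="normal")
    result = model.fit(disp="off")

    print(result.params)                   # estimated mean and variance parameters (omega, alpha, beta, ...)
    vol = result.conditional_volatility    # fitted sigma_t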

Generally, when testing for heteroskedasticity in econometric models, the best test is the White test. However, when dealing with time series data, this means testing for ARCH and GARCH errors.

Exponentially weighted moving average (EWMA) is an alternative model in a separate class of exponential smoothing models. As an alternative to GARCH modelling, it has some attractive properties, such as placing greater weight on more recent observations, but also drawbacks, such as an arbitrary decay factor that introduces subjectivity into the estimation.

GARCH(p, q) model specification

The lag length p of a GARCH(p, q) process is established in three steps:

  1. Estimate the best fitting AR(q) model
    [math]\displaystyle{ y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \epsilon_t }[/math].
  2. Compute and plot the autocorrelations of [math]\displaystyle{ \epsilon^2 }[/math] by
    [math]\displaystyle{ \rho(i) = {{\sum^T_{t=i+1} (\hat \epsilon^2_t - \hat \sigma^2_t) (\hat \epsilon^2_{t-i} - \hat \sigma^2_{t-i})} \over {\sum^T_{t=1} (\hat \epsilon^2_t - \hat \sigma^2_t)^2}} }[/math]
  3. The asymptotic (i.e. large-sample) standard deviation of [math]\displaystyle{ \rho (i) }[/math] is [math]\displaystyle{ 1/\sqrt{T} }[/math]. Individual values larger than this indicate GARCH errors. To estimate the total number of lags, use the Ljung–Box test until these values are no longer significant at, say, the 10% level. The Ljung–Box Q-statistic follows a [math]\displaystyle{ \chi^2 }[/math] distribution with n degrees of freedom if the squared residuals [math]\displaystyle{ \epsilon^2_t }[/math] are uncorrelated. It is recommended to consider up to T/4 values of n. The null hypothesis states that there are no ARCH or GARCH errors. Rejecting the null thus means that such errors exist in the conditional variance. (A code sketch of steps 2 and 3 follows this list.)
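A minimal Python sketch of steps 2 and 3 (illustrative; here the sample mean of the squared residuals is used as the reference level, and the Ljung–Box statistic is computed over the first n lags):

    import numpy as np
    from scipy import stats

    def squared_resid_diagnostics(resid, n_lags):
        e2 = np.asarray(resid) ** 2
        T = len(e2)
        dev = e2 - e2.mean()                       # deviations of eps_t^2 from their mean
        denom = np.sum(dev ** 2)
        rho = np.array([np.sum(dev[i:] * dev[:-i]) / denom
                        for i in range(1, n_lags + 1)])   # autocorrelations rho(i)

        band = 1 / np.sqrt(T)                      # asymptotic standard deviation of rho(i)
        q_stat = T * (T + 2) * np.sum(rho ** 2 / (T - np.arange(1, n_lags + 1)))  # Ljung-Box Q
        p_value = stats.chi2.sf(q_stat, df=n_lags)
        return rho, band, q_stat, p_value

    rng = np.random.default_rng(2)
    rho, band, q, p = squared_resid_diagnostics(rng.standard_normal(500), n_lags=10)
    print(np.abs(rho) > band)    # lags whose autocorrelation exceeds 1/sqrt(T)
    print(q, p)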

NGARCH

NAGARCH

Nonlinear Asymmetric GARCH(1,1) (NAGARCH) is a model with the specification:[4][5]

[math]\displaystyle{ ~\sigma_{t}^2= ~\omega + ~\alpha (~\epsilon_{t-1} - ~\theta~\sigma_{t-1})^2 + ~\beta ~\sigma_{t-1}^2 }[/math],
where [math]\displaystyle{ ~\alpha\geq 0 , ~\beta \geq 0 , ~\omega \gt 0 }[/math] and [math]\displaystyle{ ~\alpha (1 + ~\theta^2) + ~\beta \lt 1 }[/math], which ensures the non-negativity and stationarity of the variance process.

For stock returns, parameter [math]\displaystyle{ ~ \theta }[/math] is usually estimated to be positive; in this case, it reflects a phenomenon commonly referred to as the "leverage effect", signifying that negative returns increase future volatility by a larger amount than positive returns of the same magnitude.[4][5]
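To see the asymmetry, the following short sketch (parameter values are illustrative and satisfy the stationarity condition above) applies the NAGARCH update to a negative and a positive shock of the same magnitude:

    # Illustrative NAGARCH(1,1) parameters with alpha*(1 + theta**2) + beta < 1
    omega, alpha, beta, theta = 0.05, 0.08, 0.85, 0.6

    def nagarch_next_var(eps_prev, sigma_prev):
        # sigma_t^2 = omega + alpha*(eps_{t-1} - theta*sigma_{t-1})**2 + beta*sigma_{t-1}**2
        return omega + alpha * (eps_prev - theta * sigma_prev) ** 2 + beta * sigma_prev ** 2

    sigma_prev = 1.0
    print(nagarch_next_var(-1.0, sigma_prev))   # negative shock: larger next-period variance
    print(nagarch_next_var(+1.0, sigma_prev))   # positive shock of the same magnitude: smaller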

This model should not be confused with the NARCH model and its NGARCH extension, introduced by Higgins and Bera in 1992.[6]

IGARCH

Integrated Generalized Autoregressive Conditional Heteroskedasticity (IGARCH) is a restricted version of the GARCH model in which the persistent parameters sum to one, imposing a unit root in the GARCH process. The condition for this is

[math]\displaystyle{ \sum^p_{i=1} ~\beta_{i} +\sum_{i=1}^q~\alpha_{i} = 1 }[/math].

EGARCH

The exponential generalized autoregressive conditional heteroskedastic (EGARCH) model by Nelson (1991) is another form of the GARCH model. Formally, an EGARCH(p,q) is given by:

[math]\displaystyle{ \log\sigma_{t}^2=\omega+\sum_{k=1}^{q}\beta_{k}g(Z_{t-k})+\sum_{k=1}^{p}\alpha_{k}\log\sigma_{t-k}^{2} }[/math]

where [math]\displaystyle{ g(Z_{t})=\theta Z_{t}+\lambda(|Z_{t}|-E(|Z_{t}|)) }[/math], [math]\displaystyle{ \sigma_{t}^{2} }[/math] is the conditional variance, [math]\displaystyle{ \omega }[/math], [math]\displaystyle{ \beta }[/math], [math]\displaystyle{ \alpha }[/math], [math]\displaystyle{ \theta }[/math] and [math]\displaystyle{ \lambda }[/math] are coefficients. [math]\displaystyle{ Z_{t} }[/math] may be a standard normal variable or come from a generalized error distribution. The formulation for [math]\displaystyle{ g(Z_{t}) }[/math] allows the sign and the magnitude of [math]\displaystyle{ Z_{t} }[/math] to have separate effects on the volatility. This is particularly useful in an asset pricing context.[7][8]

Since [math]\displaystyle{ \log\sigma_{t}^{2} }[/math] may be negative, there are no sign restrictions for the parameters.
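A minimal EGARCH(1,1) simulation sketch, following the indexing above (beta multiplies g(Z) and alpha multiplies the lagged log-variance); coefficient values are illustrative, and E|Z| = sqrt(2/pi) for a standard normal Z:

    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative EGARCH(1,1) coefficients (no sign restrictions are required)
    omega, beta1, alpha1, theta, lam = -0.1, 0.2, 0.95, -0.05, 0.15
    E_abs_Z = np.sqrt(2 / np.pi)            # E|Z| for a standard normal Z

    T = 1000
    log_sigma2 = np.zeros(T)
    z = rng.standard_normal(T)

    for t in range(1, T):
        g = theta * z[t - 1] + lam * (np.abs(z[t - 1]) - E_abs_Z)
        log_sigma2[t] = omega + beta1 * g + alpha1 * log_sigma2[t - 1]

    eps = np.exp(0.5 * log_sigma2) * z      # eps_t = sigma_t * z_t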

GARCH-M

The GARCH-in-mean (GARCH-M) model adds a heteroskedasticity term into the mean equation. It has the specification:

[math]\displaystyle{ y_t = ~\beta x_t + ~\lambda ~\sigma_t + ~\epsilon_t }[/math]

The residual [math]\displaystyle{ ~\epsilon_t }[/math] is defined as:

[math]\displaystyle{ ~\epsilon_t = ~\sigma_t ~\times z_t }[/math]
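A minimal sketch of a GARCH-M process, pairing the mean equation above with a GARCH(1,1) variance (all values are illustrative assumptions); the term lambda*sigma_t acts as a volatility-dependent premium in the mean:

    import numpy as np

    rng = np.random.default_rng(4)

    beta, lam = 0.5, 0.3                 # mean-equation coefficients (illustrative)
    omega, a1, b1 = 0.1, 0.1, 0.8        # GARCH(1,1) variance parameters (illustrative)

    T = 500
    x = rng.standard_normal(T)           # placeholder exogenous regressor x_t
    y = np.zeros(T)
    eps = np.zeros(T)
    sigma2 = np.full(T, omega / (1 - a1 - b1))

    for t in range(1, T):
        sigma2[t] = omega + a1 * eps[t - 1] ** 2 + b1 * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
        y[t] = beta * x[t] + lam * np.sqrt(sigma2[t]) + eps[t]   # heteroskedasticity term in the mean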

QGARCH

The Quadratic GARCH (QGARCH) model by Sentana (1995) is used to model asymmetric effects of positive and negative shocks.

In the example of a GARCH(1,1) model, the residual process [math]\displaystyle{ ~\epsilon_t }[/math] is

[math]\displaystyle{ ~\epsilon_t = ~\sigma_t z_t }[/math]

where [math]\displaystyle{ z_t }[/math] is i.i.d. and

[math]\displaystyle{ ~\sigma_t^2 = K + ~\alpha ~\epsilon_{t-1}^2 + ~\beta ~\sigma_{t-1}^2 + ~\phi ~\epsilon_{t-1} }[/math]
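A short sketch of the QGARCH(1,1) update (parameter values are illustrative); the extra linear term phi*eps_{t-1}, with phi negative, makes negative shocks raise the variance more than positive shocks of the same size:

    # Illustrative QGARCH(1,1) parameters
    K, alpha, beta, phi = 0.1, 0.1, 0.8, -0.08

    def qgarch_next_var(eps_prev, sigma2_prev):
        # sigma_t^2 = K + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2 + phi*eps_{t-1}
        return K + alpha * eps_prev ** 2 + beta * sigma2_prev + phi * eps_prev

    print(qgarch_next_var(-1.0, 1.0), qgarch_next_var(+1.0, 1.0))   # asymmetric response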

GJR-GARCH

Similar to QGARCH, the Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model by Glosten, Jagannathan and Runkle (1993) also models asymmetry in the ARCH process. The suggestion is to model [math]\displaystyle{ ~\epsilon_t = ~\sigma_t z_t }[/math] where [math]\displaystyle{ z_t }[/math] is i.i.d., and

[math]\displaystyle{ ~\sigma_t^2 = K + ~\delta ~\sigma_{t-1}^2 + ~\alpha ~\epsilon_{t-1}^2 + ~\phi ~\epsilon_{t-1}^2 I_{t-1} }[/math]

where [math]\displaystyle{ I_{t-1} = 0 }[/math] if [math]\displaystyle{ ~\epsilon_{t-1} \ge 0 }[/math], and [math]\displaystyle{ I_{t-1} = 1 }[/math] if [math]\displaystyle{ ~\epsilon_{t-1} \lt 0 }[/math].
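A short sketch of the GJR-GARCH(1,1) update (illustrative parameter values); the indicator adds the extra term phi*eps_{t-1}^2 only when the previous shock is negative:

    # Illustrative GJR-GARCH(1,1) parameters
    K, delta, alpha, phi = 0.1, 0.8, 0.05, 0.1

    def gjr_next_var(eps_prev, sigma2_prev):
        indicator = 1.0 if eps_prev < 0 else 0.0    # I_{t-1}
        return K + delta * sigma2_prev + alpha * eps_prev ** 2 + phi * eps_prev ** 2 * indicator

    print(gjr_next_var(-1.0, 1.0), gjr_next_var(+1.0, 1.0))   # negative shock adds phi*eps^2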

TGARCH model

The Threshold GARCH (TGARCH) model by Zakoian (1994) is similar to GJR-GARCH. The specification is in terms of the conditional standard deviation instead of the conditional variance:

[math]\displaystyle{ ~\sigma_t = K + ~\delta ~\sigma_{t-1} + ~\alpha_1^{+} ~\epsilon_{t-1}^{+} + ~\alpha_1^{-} ~\epsilon_{t-1}^{-} }[/math]

where [math]\displaystyle{ ~\epsilon_{t-1}^{+} = ~\epsilon_{t-1} }[/math] if [math]\displaystyle{ ~\epsilon_{t-1} \gt 0 }[/math], and [math]\displaystyle{ ~\epsilon_{t-1}^{+} = 0 }[/math] if [math]\displaystyle{ ~\epsilon_{t-1} \le 0 }[/math]. Likewise, [math]\displaystyle{ ~\epsilon_{t-1}^{-} = ~\epsilon_{t-1} }[/math] if [math]\displaystyle{ ~\epsilon_{t-1} \le 0 }[/math], and [math]\displaystyle{ ~\epsilon_{t-1}^{-} = 0 }[/math] if [math]\displaystyle{ ~\epsilon_{t-1} \gt 0 }[/math].
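A short TGARCH(1,1) sketch; note that it updates the conditional standard deviation rather than the variance. The values are illustrative, and alpha_1^- is taken negative here so that the negative-shock term raises volatility:

    # Illustrative TGARCH(1,1) parameters
    K, delta, alpha_plus, alpha_minus = 0.1, 0.8, 0.05, -0.15

    def tgarch_next_std(eps_prev, sigma_prev):
        eps_plus = eps_prev if eps_prev > 0 else 0.0     # eps_{t-1}^+
        eps_minus = eps_prev if eps_prev <= 0 else 0.0   # eps_{t-1}^-
        return K + delta * sigma_prev + alpha_plus * eps_plus + alpha_minus * eps_minus

    print(tgarch_next_std(+1.0, 1.0), tgarch_next_std(-1.0, 1.0))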

fGARCH

Hentschel's fGARCH model,[9] also known as Family GARCH, is an omnibus model that nests a variety of other popular symmetric and asymmetric GARCH models including APARCH, GJR, AVGARCH, NGARCH, etc.

COGARCH

In 2004, Claudia Klüppelberg, Alexander Lindner and Ross Maller proposed a continuous-time generalization of the discrete-time GARCH(1,1) process. The idea is to start with the GARCH(1,1) model equations

[math]\displaystyle{ \epsilon_t = \sigma_t z_t, }[/math]
[math]\displaystyle{ \sigma_t^2 = \alpha_0 + \alpha_1 \epsilon^2_{t-1} + \beta_1 \sigma^2_{t-1} = \alpha_0 + \alpha_1 \sigma_{t-1}^2 z_{t-1}^2 + \beta_1 \sigma^2_{t-1}, }[/math]

and then to replace the strong white noise process [math]\displaystyle{ z_t }[/math] by the infinitesimal increments [math]\displaystyle{ \mathrm{d}L_t }[/math] of a Lévy process [math]\displaystyle{ (L_t)_{t\geq0} }[/math], and the squared noise process [math]\displaystyle{ z^2_t }[/math] by the increments [math]\displaystyle{ \mathrm{d}[L,L]^\mathrm{d}_t }[/math], where

[math]\displaystyle{ [L,L]^\mathrm{d}_t = \sum_{s\in[0,t]} (\Delta L_s)^2,\quad t\geq0, }[/math]

is the purely discontinuous part of the quadratic variation process of [math]\displaystyle{ L }[/math]. The result is the following system of stochastic differential equations:

[math]\displaystyle{ \mathrm{d}G_t = \sigma_{t-} \,\mathrm{d}L_t, }[/math]
[math]\displaystyle{ \mathrm{d}\sigma_t^2 = (\beta - \eta \sigma^2_t)\,\mathrm{d}t + \varphi \sigma_{t-}^2 \,\mathrm{d}[L,L]^\mathrm{d}_t, }[/math]

where the positive parameters [math]\displaystyle{ \beta }[/math], [math]\displaystyle{ \eta }[/math] and [math]\displaystyle{ \varphi }[/math] are determined by [math]\displaystyle{ \alpha_0 }[/math], [math]\displaystyle{ \alpha_1 }[/math] and [math]\displaystyle{ \beta_1 }[/math]. Now given some initial condition [math]\displaystyle{ (G_0,\sigma^2_0) }[/math], the system above has a pathwise unique solution [math]\displaystyle{ (G_t,\sigma^2_t)_{t\geq0} }[/math] which is then called the continuous-time GARCH (COGARCH) model.[10]
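The following is a hedged Euler-type simulation sketch of the COGARCH system; the compound Poisson driver, the jump-size distribution, and all parameter values are illustrative assumptions rather than choices made in the original paper:

    import numpy as np

    rng = np.random.default_rng(5)

    beta, eta, phi = 0.04, 0.05, 0.03     # positive COGARCH parameters (illustrative)
    jump_rate = 1.0                       # intensity of the compound Poisson Levy driver
    T, dt = 100.0, 0.01
    n_steps = int(T / dt)

    G = 0.0
    sigma2 = beta / eta                   # start near the mean level of the variance
    path = np.empty(n_steps)

    for k in range(n_steps):
        # continuous part: d(sigma^2) = (beta - eta*sigma^2) dt
        sigma2 += (beta - eta * sigma2) * dt
        # jump part: with probability jump_rate*dt a jump dL of the Levy process occurs
        if rng.random() < jump_rate * dt:
            dL = rng.standard_normal()             # jump size (illustrative distribution)
            G += np.sqrt(sigma2) * dL              # dG = sigma_{t-} dL
            sigma2 += phi * sigma2 * dL ** 2       # d(sigma^2) += phi * sigma_{t-}^2 * (dL)^2
        path[k] = G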

ZD-GARCH

Unlike the GARCH model, the Zero-Drift GARCH (ZD-GARCH) model by Li, Zhang, Zhu and Ling (2018)[11] sets the drift term [math]\displaystyle{ ~\omega= 0 }[/math] in the first-order GARCH model. The ZD-GARCH model models [math]\displaystyle{ ~\epsilon_t = ~\sigma_t z_t }[/math], where [math]\displaystyle{ z_t }[/math] is i.i.d., and

[math]\displaystyle{ ~\sigma_t^2 = ~\alpha_{1} ~\epsilon_{t-1}^2 + ~\beta_{1} ~\sigma_{t-1}^2. }[/math]

The ZD-GARCH model does not require [math]\displaystyle{ ~\alpha_{1} + ~\beta_{1}= 1 }[/math], and hence it nests the exponentially weighted moving average (EWMA) model in "RiskMetrics". Since the drift term [math]\displaystyle{ ~\omega= 0 }[/math], the ZD-GARCH model is always non-stationary, and its statistical inference methods are quite different from those for the classical GARCH model. Based on historical data, the parameters [math]\displaystyle{ ~\alpha_{1} }[/math] and [math]\displaystyle{ ~\beta_{1} }[/math] can be estimated by the generalized QMLE method.
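A short sketch showing that, when alpha_1 + beta_1 = 1, the ZD-GARCH recursion coincides with a RiskMetrics-style EWMA update of the variance (the decay factor is illustrative):

    # ZD-GARCH(1,1) update: sigma_t^2 = alpha1*eps_{t-1}^2 + beta1*sigma_{t-1}^2
    def zd_garch_next_var(eps_prev, sigma2_prev, alpha1, beta1):
        return alpha1 * eps_prev ** 2 + beta1 * sigma2_prev

    lam = 0.94                                   # EWMA decay factor (illustrative)
    print(zd_garch_next_var(0.5, 1.0, 1 - lam, lam))
    print(lam * 1.0 + (1 - lam) * 0.5 ** 2)      # identical EWMA update of the variance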

Spatial GARCH

Spatial GARCH processes by Otto, Schmid and Garthoff (2018)[12] are the spatial equivalent of the temporal generalized autoregressive conditional heteroscedasticity (GARCH) models. In contrast to the temporal ARCH model, in which the distribution is known given the full information set for the prior periods, the distribution is not straightforward in the spatial and spatiotemporal setting due to the interdependence between neighboring spatial locations. The spatial model is given by [math]\displaystyle{ ~\epsilon(s_i) = ~\sigma(s_i) z(s_i) }[/math] and

[math]\displaystyle{ ~\sigma(s_i)^2 = ~\alpha_i + \sum_{v=1}^{n} \rho w_{iv} \epsilon(s_v)^2, }[/math]

where [math]\displaystyle{ ~s_i }[/math] denotes the [math]\displaystyle{ i }[/math]-th spatial location, [math]\displaystyle{ ~w_{iv} }[/math] refers to the [math]\displaystyle{ iv }[/math]-th entry of a spatial weight matrix, and [math]\displaystyle{ w_{ii}=0 }[/math] for [math]\displaystyle{ ~i = 1, ..., n }[/math]. The spatial weight matrix defines which locations are considered to be adjacent.
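Because sigma(s_i)^2 depends on epsilon(s_v)^2 = sigma(s_v)^2 z(s_v)^2 at neighboring locations, the variances have to be obtained jointly. A minimal NumPy sketch for a small row-standardized weight matrix (all values are illustrative assumptions):

    import numpy as np

    # Illustrative setup: 4 locations on a line, neighbors one step apart
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    W = W / W.sum(axis=1, keepdims=True)      # row-standardized spatial weights, w_ii = 0

    alpha = np.full(4, 0.5)                   # location-specific intercepts alpha_i
    rho = 0.4                                 # spatial dependence parameter (illustrative)
    z = np.array([0.5, -1.2, 0.3, 0.9])       # i.i.d. shocks z(s_i), fixed for the example

    # sigma_i^2 = alpha_i + rho * sum_v w_iv * sigma_v^2 * z_v^2
    # in matrix form: (I - rho * W @ diag(z^2)) sigma^2 = alpha
    sigma2 = np.linalg.solve(np.eye(4) - rho * W * z ** 2, alpha)
    eps = np.sqrt(sigma2) * z                 # epsilon(s_i) = sigma(s_i) * z(s_i)
    print(sigma2)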

Gaussian process-driven GARCH

In a different vein, the machine learning community has proposed the use of Gaussian process regression models to obtain a GARCH scheme.[13] This results in a nonparametric modelling scheme, which allows for (i) robustness to overfitting, since the model marginalises over its parameters to perform inference under a Bayesian rationale, and (ii) the capture of highly nonlinear dependencies without increasing model complexity.[citation needed]

References

  1. Engle, Robert F. (1982). "Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation". Econometrica 50 (4): 987–1007. doi:10.2307/1912773. 
  2. Bollerslev, Tim (1986). "Generalized Autoregressive Conditional Heteroskedasticity". Journal of Econometrics 31 (3): 307–327. doi:10.1016/0304-4076(86)90063-1. 
  3. Brooks, Chris (2014). Introductory Econometrics for Finance (3rd ed.). Cambridge: Cambridge University Press. p. 461. ISBN 9781107661455. 
  4. Engle, Robert F.; Ng, Victor K. (1993). "Measuring and testing the impact of news on volatility". Journal of Finance 48 (5): 1749–1778. doi:10.1111/j.1540-6261.1993.tb05127.x. http://www.finance.martinsewell.com/stylized-facts/volatility/EngleNg1993.pdf. "It is not yet clear in the finance literature that the asymmetric properties of variances are due to changing leverage. The name "leverage effect" is used simply because it is popular among researchers when referring to such a phenomenon.". 
  5. Posedel, Petra (2006). "Analysis Of The Exchange Rate And Pricing Foreign Currency Options On The Croatian Market: The Ngarch Model As An Alternative To The Black Scholes Model". Financial Theory and Practice 30 (4): 347–368. http://www.ijf.hr/eng/FTP/2006/4/posedel.pdf. "Special attention to the model is given by the parameter of asymmetry [theta (θ)] which describes the correlation between returns and variance. ... In the case of analyzing stock returns, the positive value of [theta] reflects the empirically well known leverage effect indicating that a downward movement in the price of a stock causes more of an increase in variance more than a same value downward movement in the price of a stock, meaning that returns and variance are negatively correlated". 
  6. Higgins, M.L; Bera, A.K (1992). "A Class of Nonlinear Arch Models". International Economic Review 33 (1): 137–158. doi:10.2307/2526988. 
  7. St. Pierre, Eilleen F. (1998). "Estimating EGARCH-M Models: Science or Art". The Quarterly Review of Economics and Finance 38 (2): 167–180. doi:10.1016/S1062-9769(99)80110-0. 
  8. Chatterjee, Swarn; Hubble, Amy (2016). "Day-Of-The-Week Effect In US Biotechnology Stocks—Do Policy Changes And Economic Cycles Matter?". Annals of Financial Economics 11 (2): 1–17. doi:10.1142/S2010495216500081. 
  9. Hentschel, Ludger (1995). "All in the family Nesting symmetric and asymmetric GARCH models". Journal of Financial Economics 39 (1): 71–104. doi:10.1016/0304-405X(94)00821-H. 
  10. Klüppelberg, C.; Lindner, A.; Maller, R. (2004). "A continuous-time GARCH process driven by a Lévy process: stationarity and second-order behaviour". Journal of Applied Probability 41 (3): 601–622. doi:10.1239/jap/1091543413. http://nbn-resolving.de/urn:nbn:de:bvb:19-epub-1794-9. 
  11. Li, D.; Zhang, X.; Zhu, K.; Ling, S. (2018). "The ZD-GARCH model: A new way to study heteroscedasticity". Journal of Econometrics 202 (1): 1–17. doi:10.1016/j.jeconom.2017.09.003. https://mpra.ub.uni-muenchen.de/68621/1/MPRA_paper_68621.pdf. 
  12. Otto, P.; Schmid, W.; Garthoff, R. (2018). "Generalised spatial and spatiotemporal autoregressive conditional heteroscedasticity". Spatial Statistics 26 (1): 125–145. doi:10.1016/j.spasta.2018.07.005. 
  13. Platanios, E.; Chatzis, S. (2014). "Gaussian process-mixture conditional heteroscedasticity". IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (5): 889–900. doi:10.1109/TPAMI.2013.183. PMID 26353224. 

Further reading