Cochrane–Orcutt estimation

Cochrane–Orcutt estimation is a procedure in econometrics that adjusts a linear model for serial correlation in the error term. Developed in the 1940s, it is named after statisticians Donald Cochrane and Guy Orcutt.[1]

Theory

Consider the model

[math]\displaystyle{ y_t = \alpha + X_t \beta+\varepsilon_t,\, }[/math]

where [math]\displaystyle{ y_{t} }[/math] is the value of the dependent variable of interest at time t, [math]\displaystyle{ \beta }[/math] is a column vector of coefficients to be estimated, [math]\displaystyle{ X_{t} }[/math] is a row vector of explanatory variables at time t, and [math]\displaystyle{ \varepsilon_t }[/math] is the error term at time t.

If it is found, for instance via the Durbin–Watson statistic, that the error term is serially correlated over time, then standard statistical inference as normally applied to regressions is invalid because the standard errors are estimated with bias. To avoid this problem, the residuals must be modeled. If the process generating the residuals is found to be a stationary first-order autoregressive structure,[2] [math]\displaystyle{ \varepsilon_t =\rho \varepsilon_{t-1}+e_t,\ |\rho| \lt 1 }[/math], with the errors {[math]\displaystyle{ e_t }[/math]} being white noise, then the Cochrane–Orcutt procedure can be used to transform the model by taking a quasi-difference:

[math]\displaystyle{ y_t - \rho y_{t-1} = \alpha(1-\rho)+(X_t - \rho X_{t-1})\beta + e_t. \, }[/math]

In this specification the error terms are white noise, so statistical inference is valid. The sum of squared residuals (the sum of the squared estimates of [math]\displaystyle{ e_t }[/math]) is then minimized with respect to [math]\displaystyle{ (\alpha,\beta) }[/math], conditional on [math]\displaystyle{ \rho }[/math].
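
As an illustration, the following is a minimal sketch in Python of the transformation for a [math]\displaystyle{ \rho }[/math] that is assumed known, using simulated data (all numerical values and names are purely illustrative): quasi-difference both the dependent variable and the regressors, then run ordinary least squares on the shortened sample.

<syntaxhighlight lang="python">
import numpy as np

def quasi_difference(y, X, rho):
    """Cochrane-Orcutt quasi-difference: y_t - rho*y_{t-1} and
    X_t - rho*X_{t-1}; the first observation is dropped."""
    return y[1:] - rho * y[:-1], X[1:] - rho * X[:-1]

# Simulated example with a known rho (values are illustrative only).
rng = np.random.default_rng(0)
T, rho, alpha, beta = 200, 0.6, 1.0, 2.0
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + rng.normal()   # stationary AR(1) errors
y = alpha + beta * x + eps

X = np.column_stack([np.ones(T), x])           # constant + regressor
y_star, X_star = quasi_difference(y, X, rho)
coef, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
# The constant column is scaled to (1 - rho) by the transformation,
# so coef[0] still estimates alpha and coef[1] estimates beta.
print(coef)
</syntaxhighlight>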

Inefficiency

The transformation suggested by Cochrane and Orcutt disregards the first observation of a time series, causing a loss of efficiency that can be substantial in small samples.[3] A superior transformation, which retains the first observation with a weight of [math]\displaystyle{ \sqrt{1-\rho^{2}} }[/math], was first suggested by Prais and Winsten,[4] and later independently by Kadiyala.[5]
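
Concretely, under the stationary AR(1) assumption above, the retained first observation enters the transformed regression as

[math]\displaystyle{ \sqrt{1-\rho^2}\, y_1 = \alpha\sqrt{1-\rho^2} + \sqrt{1-\rho^2}\, X_1 \beta + \sqrt{1-\rho^2}\,\varepsilon_1, }[/math]

whose error [math]\displaystyle{ \sqrt{1-\rho^2}\,\varepsilon_1 }[/math] has the same variance as the quasi-differenced errors [math]\displaystyle{ e_t }[/math], since [math]\displaystyle{ \operatorname{Var}(\varepsilon_1)=\sigma_e^2/(1-\rho^2) }[/math], where [math]\displaystyle{ \sigma_e^2 }[/math] denotes the variance of the white-noise errors.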

Estimating the autoregressive parameter

If [math]\displaystyle{ \rho }[/math] is not known, it is estimated by first regressing the untransformed model and obtaining the residuals {[math]\displaystyle{ \hat{\varepsilon}_t }[/math]}, and then regressing [math]\displaystyle{ \hat{\varepsilon}_t }[/math] on [math]\displaystyle{ \hat{\varepsilon}_{t-1} }[/math], which yields an estimate of [math]\displaystyle{ \rho }[/math] and makes the transformed regression sketched above feasible. (Note that one data point, the first, is lost in this residual regression.) This autoregression of estimated residuals can be done once, with the resulting value of [math]\displaystyle{ \rho }[/math] used in the transformed regression of y, or the procedure can be iterated: the coefficients from the transformed regression are used to compute new residuals of the untransformed model, which are autoregressed again to update the estimate of [math]\displaystyle{ \rho }[/math], and so on until no substantial change in the estimated value of [math]\displaystyle{ \rho }[/math] is observed.
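
A minimal sketch of this iteration in Python (the function name is illustrative; the design matrix X is assumed to include a column of ones for the intercept):

<syntaxhighlight lang="python">
import numpy as np

def cochrane_orcutt(y, X, tol=1e-6, max_iter=100):
    """Iterative Cochrane-Orcutt estimation (sketch). X is assumed to
    include a leading column of ones; the first observation is dropped
    by the quasi-difference, as in the original procedure."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS on the untransformed model
    rho = 0.0
    for _ in range(max_iter):
        resid = y - X @ coef                       # residuals at the current estimates
        # Regress the residuals on their own lag (no constant); this
        # autoregression loses the first data point.
        rho_new = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
        # Quasi-difference with the updated rho and re-estimate by OLS.
        y_star = y[1:] - rho_new * y[:-1]
        X_star = X[1:] - rho_new * X[:-1]
        coef, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return coef, rho
</syntaxhighlight>

Packaged implementations of this kind of feasible generalized least squares are available (for example, the GLSAR class in the Python statsmodels library); the sketch above is only meant to make the individual steps explicit.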

Note, however, that the iterative Cochrane–Orcutt procedure might converge to a local rather than the global minimum of the residual sum of squares.[6][7][8] This problem disappears when the Prais–Winsten transformation, which keeps the initial observation, is used instead.[9]
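
Alternatively, under the same AR(1) assumption, the residual sum of squares of the transformed regression can be evaluated over a grid of [math]\displaystyle{ \rho }[/math] values and the global minimizer kept, in the spirit of the Hildreth–Lu search procedure; a minimal sketch (names illustrative, X again assumed to contain a constant column):

<syntaxhighlight lang="python">
import numpy as np

def grid_search_rho(y, X, grid=np.linspace(-0.99, 0.99, 199)):
    """Choose rho by minimizing the residual sum of squares of the
    quasi-differenced regression over a grid of candidate values."""
    best_rho, best_coef, best_ssr = None, None, np.inf
    for rho in grid:
        y_star = y[1:] - rho * y[:-1]
        X_star = X[1:] - rho * X[:-1]
        coef, resid_ss, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
        ssr = resid_ss[0] if resid_ss.size else np.sum((y_star - X_star @ coef) ** 2)
        if ssr < best_ssr:
            best_rho, best_coef, best_ssr = rho, coef, ssr
    return best_rho, best_coef
</syntaxhighlight>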

References

  1. Cochrane, D.; Orcutt, G. H. (1949). "Application of Least Squares Regression to Relationships Containing Auto-Correlated Error Terms". Journal of the American Statistical Association 44 (245): 32–61. doi:10.1080/01621459.1949.10483290. 
  2. Wooldridge, Jeffrey M. (2013). Introductory Econometrics: A Modern Approach (Fifth international ed.). Mason, OH: South-Western. pp. 409–415. ISBN 978-1-111-53439-4. 
  3. Rao, Potluri; Griliches, Zvi (1969). "Small-Sample Properties of Several Two-Stage Regression Methods in the Context of Auto-Correlated Errors". Journal of the American Statistical Association 64 (325): 253–272. doi:10.1080/01621459.1969.10500968. 
  4. Prais, S. J.; Winsten, C. B. (1954). "Trend Estimators and Serial Correlation". Cowles Commission Discussion Paper No. 383 (Chicago). https://cowles.yale.edu/sites/default/files/files/pub/cdp/s-0383.pdf. 
  5. Kadiyala, Koteswara Rao (1968). "A Transformation Used to Circumvent the Problem of Autocorrelation". Econometrica 36 (1): 93–96. doi:10.2307/1909605. 
  6. Dufour, J. M.; Gaudry, M. J. I.; Liem, T. C. (1980). "The Cochrane-Orcutt procedure numerical examples of multiple admissible minima". Economics Letters 6 (1): 43–48. doi:10.1016/0165-1765(80)90055-5. 
  7. Oxley, Leslie T.; Roberts, Colin J. (1982). "Pitfalls in the Application of the Cochrane‐Orcutt Technique". Oxford Bulletin of Economics and Statistics 44 (3): 227–240. doi:10.1111/j.1468-0084.1982.mp44003003.x. 
  8. Dufour, J. M.; Gaudry, M. J. I.; Hafer, R. W. (1983). "A warning on the use of the Cochrane-Orcutt procedure based on a money demand equation". Empirical Economics 8 (2): 111–117. doi:10.1007/BF01973194. 
  9. Doran, Howard; Kmenta, Jan (1992). "Multiple Minima in the Estimation of Models With Autoregressive Disturbances". Review of Economics and Statistics 74 (2): 354–357. doi:10.2307/2109671. 
