Prais–Winsten estimation


In econometrics, Prais–Winsten estimation is a procedure for handling first-order autoregressive, AR(1), serial correlation in a linear model. Conceived by Sigbert Prais and Christopher Winsten in 1954,[1] it is a modification of Cochrane–Orcutt estimation that does not lose the first observation, which yields greater efficiency and makes it a special case of feasible generalized least squares.[2]

Theory

Consider the model

[math]\displaystyle{ y_t = \alpha + X_t \beta+\varepsilon_t,\, }[/math]

where [math]\displaystyle{ y_{t} }[/math] is the time series of interest at time t, [math]\displaystyle{ \beta }[/math] is a vector of coefficients, [math]\displaystyle{ X_{t} }[/math] is a matrix of explanatory variables, and [math]\displaystyle{ \varepsilon_t }[/math] is the error term. The error term can be serially correlated over time: [math]\displaystyle{ \varepsilon_t =\rho \varepsilon_{t-1}+e_t,\ |\rho| \lt 1 }[/math] and [math]\displaystyle{ e_t }[/math] is white noise. In addition to the Cochrane–Orcutt transformation, which is

[math]\displaystyle{ y_t - \rho y_{t-1} = \alpha(1-\rho)+(X_t - \rho X_{t-1})\beta + e_t, \, }[/math]

for t = 2,3,...,T, the Prais–Winsten procedure makes a reasonable transformation for t = 1 in the following form:

[math]\displaystyle{ \sqrt{1-\rho^2}y_1 = \alpha\sqrt{1-\rho^2}+\left(\sqrt{1-\rho^2}X_1\right)\beta + \sqrt{1-\rho^2}\varepsilon_1. \, }[/math]

Ordinary least squares estimation is then applied to the transformed model.
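As a concrete illustration, the following Python sketch applies the transformation with a known [math]\displaystyle{ \rho }[/math] to simulated data and then runs ordinary least squares; the parameter values and the single-regressor setup are assumptions made for the example.

```python
# A minimal sketch of the Prais-Winsten transformation with known rho;
# all parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, rho, alpha, beta = 200, 0.7, 1.0, 2.0

# Simulate one regressor and AR(1) errors driven by unit-variance white noise.
x = rng.normal(size=T)
e = rng.normal(size=T)
eps = np.empty(T)
eps[0] = e[0] / np.sqrt(1 - rho**2)          # draw from the stationary law
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + e[t]
y = alpha + beta * x + eps

# Quasi-difference for t >= 2 and rescale (rather than drop) t = 1.
w = np.sqrt(1 - rho**2)
y_star = np.concatenate(([w * y[0]], y[1:] - rho * y[:-1]))
x_star = np.concatenate(([w * x[0]], x[1:] - rho * x[:-1]))
c_star = np.concatenate(([w], np.full(T - 1, 1 - rho)))  # transformed constant

# Ordinary least squares on the transformed model.
Z = np.column_stack([c_star, x_star])
theta_hat, *_ = np.linalg.lstsq(Z, y_star, rcond=None)
print("alpha_hat, beta_hat =", theta_hat)    # close to (1.0, 2.0)
```

Note that the constant regressor must be transformed as well: it becomes [math]\displaystyle{ 1-\rho }[/math] for [math]\displaystyle{ t \ge 2 }[/math] and [math]\displaystyle{ \sqrt{1-\rho^2} }[/math] for [math]\displaystyle{ t = 1 }[/math].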

Estimation procedure

First, note that because [math]\displaystyle{ e_t }[/math] is white noise and therefore uncorrelated with [math]\displaystyle{ \varepsilon_{t-1} }[/math],

[math]\displaystyle{ \mathrm{var}(\varepsilon_t)=\mathrm{var}(\rho\varepsilon_{t-1}+e_t)=\rho^2 \mathrm{var}(\varepsilon_{t-1}) +\mathrm{var}(e_t) }[/math]

Since [math]\displaystyle{ \varepsilon_t }[/math] is stationary, its variance is constant over time, so

[math]\displaystyle{ (1-\rho^2 )\mathrm{var}(\varepsilon_t)= \mathrm{var}(e_t) }[/math]

and thus,

[math]\displaystyle{ \mathrm{var}(\varepsilon_t)=\frac{\mathrm{var}(e_t)}{(1-\rho^2 )} }[/math]
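As a quick numerical sanity check (an illustrative simulation, not part of the original derivation), the sample variance of a long simulated AR(1) path can be compared against this formula:

```python
# Verify var(eps) = var(e) / (1 - rho^2) by simulation; rho and the sample
# size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.7, 200_000
e = rng.normal(size=n)                     # white noise with variance 1
eps = np.empty(n)
eps[0] = e[0] / np.sqrt(1 - rho**2)        # start from the stationary law
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + e[t]

print(eps.var())                           # sample variance, roughly 1.96
print(1 / (1 - rho**2))                    # theoretical value 1/(1 - 0.49)
```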

Without loss of generality, suppose the variance of the white noise is 1. To perform the estimation in a compact way, consider the autocovariance function of the error term of the model below:

[math]\displaystyle{ \mathrm{cov}(\varepsilon_t,\varepsilon_{t+h})=\rho^h \mathrm{var}(\varepsilon_t)=\frac{\rho^h}{1-\rho^2}, \text{ for } h=0,\pm 1, \pm 2, \dots \, . }[/math]

It is easy to see that the variance–covariance matrix, [math]\displaystyle{ \mathbf{\Omega} }[/math], of the model is

[math]\displaystyle{ \mathbf{\Omega} = \begin{bmatrix} \frac{1}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \frac{\rho^2}{1-\rho^2} & \cdots & \frac{\rho^{T-1}}{1-\rho^2} \\[8pt] \frac{\rho}{1-\rho^2} & \frac{1}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \cdots & \frac{\rho^{T-2}}{1-\rho^2} \\[8pt] \frac{\rho^2}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \frac{1}{1-\rho^2} & \cdots & \frac{\rho^{T-3}}{1-\rho^2} \\[8pt] \vdots & \vdots & \vdots & \ddots & \vdots \\[8pt] \frac{\rho^{T-1}}{1-\rho^2} & \frac{\rho^{T-2}}{1-\rho^2} & \frac{\rho^{T-3}}{1-\rho^2} & \cdots & \frac{1}{1-\rho^2} \end{bmatrix}. }[/math]
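Because [math]\displaystyle{ \mathbf{\Omega} }[/math] is a symmetric Toeplitz matrix, it can be built directly from its first column, as in this short sketch (the values of [math]\displaystyle{ \rho }[/math] and T are assumptions chosen for illustration):

```python
# Build Omega from the autocovariances rho^h / (1 - rho^2).
import numpy as np
from scipy.linalg import toeplitz

rho, T = 0.7, 5
first_col = rho ** np.arange(T) / (1 - rho**2)  # h = 0, 1, ..., T-1
Omega = toeplitz(first_col)                     # symmetric Toeplitz matrix
print(Omega)
```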

Having [math]\displaystyle{ \rho }[/math] (or an estimate of it), we see that,

[math]\displaystyle{ \hat{\Theta}=(\mathbf{Z}^{\mathsf{T}}\mathbf{\Omega}^{-1}\mathbf{Z})^{-1}(\mathbf{Z}^{\mathsf{T}}\mathbf{\Omega}^{-1}\mathbf{Y}), \, }[/math]

where [math]\displaystyle{ \mathbf{Z} }[/math] is the matrix of observations on the independent variables (Xt, t = 1, 2, ..., T), including a column of ones, [math]\displaystyle{ \mathbf{Y} }[/math] is a vector stacking the observations on the dependent variable (yt, t = 1, 2, ..., T), and [math]\displaystyle{ \hat{\Theta} }[/math] stacks the estimates of the model parameters [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math].
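The estimator can be computed directly from this formula, as in the following sketch; the data here are simulated placeholders, and a linear solve replaces the outer matrix inversion for numerical stability:

```python
# GLS estimate (Z' Omega^{-1} Z)^{-1} Z' Omega^{-1} Y with an assumed rho.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
T, rho = 100, 0.5
X = rng.normal(size=T)
Y = 1.0 + 2.0 * X + rng.normal(size=T)        # placeholder data
Z = np.column_stack([np.ones(T), X])          # column of ones plus regressor

Omega_inv = np.linalg.inv(toeplitz(rho ** np.arange(T) / (1 - rho**2)))
theta_hat = np.linalg.solve(Z.T @ Omega_inv @ Z, Z.T @ Omega_inv @ Y)
print(theta_hat)                              # estimates of (alpha, beta)
```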

Note

To see why the treatment of the initial observation proposed by Prais and Winsten (1954) is reasonable, it is helpful to consider the mechanics of the generalized least squares estimation procedure sketched above. The inverse of [math]\displaystyle{ \mathbf{\Omega} }[/math] can be decomposed as [math]\displaystyle{ \mathbf{\Omega}^{-1}=\mathbf{G}^{\mathsf{T}}\mathbf{G} }[/math] with[3]

[math]\displaystyle{ \mathbf{G} = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & \cdots & 0 \\ -\rho & 1 & 0 & \cdots & 0 \\ 0 & -\rho & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}. }[/math]

Pre-multiplying the model, written in matrix notation, by this matrix yields the transformed model of Prais–Winsten: the first row of [math]\displaystyle{ \mathbf{G} }[/math] produces the rescaled [math]\displaystyle{ t = 1 }[/math] observation, while the remaining rows produce the Cochrane–Orcutt quasi-differences.
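The following illustrative check (with arbitrary assumed values of [math]\displaystyle{ \rho }[/math] and T) confirms numerically that [math]\displaystyle{ \mathbf{G}^{\mathsf{T}}\mathbf{G} }[/math] equals [math]\displaystyle{ \mathbf{\Omega}^{-1} }[/math] when the white-noise variance is normalized to one:

```python
# Check G' G = Omega^{-1} for the Prais-Winsten transformation matrix G.
import numpy as np
from scipy.linalg import toeplitz

rho, T = 0.6, 6
G = np.eye(T)
G[0, 0] = np.sqrt(1 - rho**2)                    # rescaled first observation
G[np.arange(1, T), np.arange(T - 1)] = -rho      # quasi-differencing rows

Omega = toeplitz(rho ** np.arange(T) / (1 - rho**2))
print(np.allclose(G.T @ G, np.linalg.inv(Omega)))  # True
```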

Restrictions

The error term is still restricted to be of an AR(1) type. If [math]\displaystyle{ \rho }[/math] is not known, a recursive procedure (Cochrane–Orcutt estimation) or a grid search (Hildreth–Lu estimation) may be used to make the estimation feasible. Alternatively, a full information maximum likelihood procedure that estimates all parameters simultaneously has been suggested by Beach and MacKinnon.[4][5]
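A minimal sketch of such an iterated feasible procedure is given below; the lag-one autocorrelation estimator of [math]\displaystyle{ \rho }[/math], the convergence tolerance, and the single-regressor interface are assumptions made for illustration rather than a fixed specification.

```python
# Iterated feasible Prais-Winsten: alternate between estimating rho from
# residuals and re-fitting OLS on the transformed data.
import numpy as np

def prais_winsten(y, x, tol=1e-8, max_iter=100):
    T = len(y)
    Z = np.column_stack([np.ones(T), x])
    theta, *_ = np.linalg.lstsq(Z, y, rcond=None)   # OLS starting values
    rho = 0.0
    for _ in range(max_iter):
        resid = y - Z @ theta
        # Lag-one autocorrelation of the residuals as the estimate of rho.
        rho_new = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
        w = np.sqrt(1 - rho_new**2)
        # Transform every column of Z (including the constant) and y.
        y_star = np.concatenate(([w * y[0]], y[1:] - rho_new * y[:-1]))
        Z_star = np.vstack([w * Z[0], Z[1:] - rho_new * Z[:-1]])
        theta, *_ = np.linalg.lstsq(Z_star, y_star, rcond=None)
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return theta, rho_new

# Usage: theta_hat, rho_hat = prais_winsten(y, x) on data generated as in
# the first sketch should recover alpha, beta, and rho up to sampling error.
```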

References

  1. Prais, S. J.; Winsten, C. B. (1954). "Trend Estimators and Serial Correlation". Cowles Commission Discussion Paper No. 383 (Chicago). https://cowles.yale.edu/sites/default/files/files/pub/cdp/s-0383.pdf. 
  2. Johnston, John (1972). Econometric Methods (2nd ed.). New York: McGraw-Hill. pp. 259–265. ISBN 9780070326798. https://books.google.com/books?id=aBOaAAAAIAAJ&pg=259. 
  3. Kadiyala, Koteswara Rao (1968). "A Transformation Used to Circumvent the Problem of Autocorrelation". Econometrica 36 (1): 93–96. doi:10.2307/1909605. 
  4. Beach, Charles M.; MacKinnon, James G. (1978). "A Maximum Likelihood Procedure for Regression with Autocorrelated Errors". Econometrica 46 (1): 51–58. doi:10.2307/1913644. 
  5. Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 190–191. ISBN 0-674-00560-0. https://books.google.com/books?id=0bzGQE14CwEC&pg=PA190. 
