Generalized method of moments
In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable. The method requires that a certain number of moment conditions be specified for the model. These moment conditions are functions of the model parameters and the data, such that their expectation is zero at the parameters' true values. The GMM method then minimizes a certain norm of the sample averages of the moment conditions, and can therefore be thought of as a special case of minimum-distance estimation.[1]
The GMM estimators are known to be consistent, asymptotically normal, and most efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions. GMM was developed by Lars Peter Hansen in 1982 as a generalization of the method of moments,[2] introduced by Karl Pearson in 1894. However, these estimators are mathematically equivalent to those based on "orthogonality conditions" (Sargan, 1958, 1959) or "unbiased estimating equations" (Huber, 1967; Wang et al., 1997).
Description
Suppose the available data consists of T observations {Yt } t = 1,...,T, where each observation Yt is an n-dimensional multivariate random variable. We assume that the data come from a certain statistical model, defined up to an unknown parameter θ ∈ Θ. The goal of the estimation problem is to find the “true” value of this parameter, θ0, or at least a reasonably close estimate.
A general assumption of GMM is that the data Yt are generated by a weakly stationary ergodic stochastic process. (The case of independent and identically distributed (iid) variables Yt is a special case of this condition.)
In order to apply GMM, we need to have "moment conditions", that is, we need to know a vector-valued function g(Y,θ) such that
- [math]\displaystyle{ m(\theta_0) \equiv \operatorname{E}[\,g(Y_t,\theta_0)\,]=0, }[/math]
where E denotes expectation, and Yt is a generic observation. Moreover, the function m(θ) must differ from zero for θ ≠ θ0, otherwise the parameter θ will not be point-identified.
The basic idea behind GMM is to replace the theoretical expected value E[⋅] with its empirical analog, the sample average:
- [math]\displaystyle{ \hat{m}(\theta) \equiv \frac{1}{T}\sum_{t=1}^T g(Y_t,\theta) }[/math]
and then to minimize the norm of this expression with respect to θ. The minimizing value of θ is our estimate for θ0.
By the law of large numbers, [math]\displaystyle{ \scriptstyle\hat{m}(\theta)\,\approx\;\operatorname{E}[g(Y_t,\theta)]\,=\,m(\theta) }[/math] for large values of T, and thus we expect that [math]\displaystyle{ \scriptstyle\hat{m}(\theta_0)\;\approx\;m(\theta_0)\;=\;0 }[/math]. The generalized method of moments looks for a number [math]\displaystyle{ \scriptstyle\hat\theta }[/math] which would make [math]\displaystyle{ \scriptstyle\hat{m}(\;\!\hat\theta\;\!) }[/math] as close to zero as possible. Mathematically, this is equivalent to minimizing a certain norm of [math]\displaystyle{ \scriptstyle\hat{m}(\theta) }[/math] (norm of m, denoted as ||m||, measures the distance between m and zero). The properties of the resulting estimator will depend on the particular choice of the norm function, and therefore the theory of GMM considers an entire family of norms, defined as
- [math]\displaystyle{ \| \hat{m}(\theta) \|^2_{W} = \hat{m}(\theta)^{\mathsf{T}}\,W\hat{m}(\theta), }[/math]
where W is a positive-definite weighting matrix, and [math]\displaystyle{ m^{\mathsf{T}} }[/math] denotes transposition. In practice, the weighting matrix W is computed from the available data set, and the resulting estimate is denoted [math]\displaystyle{ \scriptstyle\hat{W} }[/math]. Thus, the GMM estimator can be written as
- [math]\displaystyle{ \hat\theta = \operatorname{arg}\min_{\theta\in\Theta} \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\bigg)^{\mathsf{T}} \hat{W} \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\bigg) }[/math]
Under suitable conditions this estimator is consistent, asymptotically normal, and with right choice of weighting matrix [math]\displaystyle{ \scriptstyle\hat{W} }[/math] also asymptotically efficient.
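To make the estimator above concrete, the following is a minimal sketch in Python (using NumPy and SciPy) of the GMM objective for a toy problem: estimating a mean and variance from the moment conditions E[Y − μ] = 0 and E[(Y − μ)² − σ²] = 0, with the identity matrix as weighting matrix. The data, function names, and starting values are illustrative assumptions, not part of the original text.

```python
# Minimal GMM sketch (illustrative assumptions throughout).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Y = rng.normal(loc=2.0, scale=1.5, size=500)   # simulated data, T = 500

def g(Y, theta):
    """Moment function g(Y_t, theta): returns a T x k array of moment terms."""
    mu, sigma2 = theta
    return np.column_stack([Y - mu, (Y - mu) ** 2 - sigma2])

def gmm_objective(theta, Y, W):
    """Quadratic form m_hat(theta)' W m_hat(theta) with the sample moments m_hat."""
    m_hat = g(Y, theta).mean(axis=0)           # (1/T) sum_t g(Y_t, theta)
    return m_hat @ W @ m_hat

W = np.eye(2)                                  # identity weighting matrix
res = minimize(gmm_objective, x0=np.array([0.0, 1.0]), args=(Y, W))
print(res.x)                                   # estimates of (mu, sigma^2)
```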
Properties
Consistency
Consistency is a statistical property of an estimator stating that, given a sufficient number of observations, the estimator will converge in probability to the true value of the parameter:
- [math]\displaystyle{ \hat\theta \xrightarrow{p} \theta_0\ \text{as}\ T\to\infty. }[/math]
Sufficient conditions for a GMM estimator to be consistent are as follows:
- [math]\displaystyle{ \hat{W}_T \xrightarrow{p} W, }[/math] where W is a positive semi-definite matrix,
- [math]\displaystyle{ \,W\operatorname{E}[\,g(Y_t,\theta)\,]=0 }[/math] only for [math]\displaystyle{ \,\theta=\theta_0, }[/math]
- The space of possible parameters [math]\displaystyle{ \Theta \subset \mathbb{R}^{k} }[/math] is compact,
- [math]\displaystyle{ \,g(Y,\theta) }[/math] is continuous at each θ with probability one,
- [math]\displaystyle{ \operatorname{E}[\,\textstyle\sup_{\theta\in\Theta} \lVert g(Y,\theta)\rVert\,]\lt \infty. }[/math]
The second condition here (the so-called global identification condition) is often particularly hard to verify. There exist simpler necessary but not sufficient conditions, which may be used to detect a non-identification problem:
- Order condition. The dimension of moment function m(θ) should be at least as large as the dimension of parameter vector θ.
- Local identification. If g(Y,θ) is continuously differentiable in a neighborhood of [math]\displaystyle{ \theta_0 }[/math], then matrix [math]\displaystyle{ W\operatorname{E}[\nabla_\theta g(Y_t,\theta_0)] }[/math] must have full column rank.
In practice applied econometricians often simply assume that global identification holds, without actually proving it.[3]:2127
Asymptotic normality
Asymptotic normality is a useful property, as it allows us to construct confidence bands for the estimator, and conduct different tests. Before we can make a statement about the asymptotic distribution of the GMM estimator, we need to define two auxiliary matrices:
- [math]\displaystyle{ G = \operatorname{E}[\,\nabla_{\!\theta}\,g(Y_t,\theta_0)\,], \qquad \Omega = \operatorname{E}[\,g(Y_t,\theta_0)g(Y_t,\theta_0)^{\mathsf{T}}\,] }[/math]
Then under conditions 1–6 listed below, the GMM estimator will be asymptotically normal with limiting distribution:
[math]\displaystyle{ \sqrt{T}\big(\hat\theta - \theta_0\big)\ \xrightarrow{d}\ \mathcal{N}\big[0, (G^{\mathsf{T}}WG)^{-1}G^{\mathsf{T}}W\Omega W^{\mathsf{T}}G(G^{\mathsf{T}}W^{\mathsf{T}}G)^{-1}\big]. }[/math]
Conditions:
- [math]\displaystyle{ \hat\theta }[/math] is consistent (see previous section),
- The set of possible parameters [math]\displaystyle{ \Theta \subset \mathbb{R}^{k} }[/math] is compact,
- [math]\displaystyle{ \,g(Y,\theta) }[/math] is continuously differentiable in some neighborhood N of [math]\displaystyle{ \theta_0 }[/math] with probability one,
- [math]\displaystyle{ \operatorname{E}[\,\lVert g(Y_t,\theta) \rVert^2\,]\lt \infty, }[/math]
- [math]\displaystyle{ \operatorname{E}[\,\textstyle\sup_{\theta\in N}\lVert \nabla_\theta g(Y_t,\theta) \rVert\,]\lt \infty, }[/math]
- the matrix [math]\displaystyle{ G^{\mathsf{T}}WG }[/math] is nonsingular.
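In practice the matrices G and Ω in the limiting distribution are unknown and are replaced by sample analogues evaluated at the estimate. The sketch below, continuing the toy mean/variance example from the Description section, plugs a numerically differentiated Ĝ and the sample outer-product Ω̂ into the sandwich formula to obtain standard errors; the helper names and numerical settings are assumptions made for illustration.

```python
# Sandwich covariance sketch for the toy mean/variance example (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(loc=2.0, scale=1.5, size=500)
theta_hat = np.array([Y.mean(), Y.var()])       # exactly-identified GMM solution

def g(Y, theta):
    mu, sigma2 = theta
    return np.column_stack([Y - mu, (Y - mu) ** 2 - sigma2])

def jacobian_of_moments(Y, theta, eps=1e-6):
    """Numerical G_hat = d m_hat / d theta (k x l matrix), by forward differences."""
    base = g(Y, theta).mean(axis=0)
    cols = []
    for j in range(len(theta)):
        step = np.zeros_like(theta)
        step[j] = eps
        cols.append((g(Y, theta + step).mean(axis=0) - base) / eps)
    return np.column_stack(cols)

T = len(Y)
G_hat = jacobian_of_moments(Y, theta_hat)
Omega_hat = g(Y, theta_hat).T @ g(Y, theta_hat) / T
W = np.eye(2)
bread = np.linalg.inv(G_hat.T @ W @ G_hat)
V = bread @ G_hat.T @ W @ Omega_hat @ W @ G_hat @ bread   # sandwich formula
std_err = np.sqrt(np.diag(V) / T)                          # standard errors of theta_hat
print(std_err)
```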
Relative efficiency
So far we have said nothing about the choice of matrix W, except that it must be positive semi-definite. In fact any such matrix will produce a consistent and asymptotically normal GMM estimator; the only difference will be in the asymptotic variance of that estimator. It can be shown that taking
- [math]\displaystyle{ W \propto\ \Omega^{-1} }[/math]
will result in the most efficient estimator in the class of all (generalized) method of moments estimators. Only with an infinite number of orthogonality conditions is the smallest possible variance, the Cramér–Rao bound, attained.
In this case the formula for the asymptotic distribution of the GMM estimator simplifies to
- [math]\displaystyle{ \sqrt{T}\big(\hat\theta - \theta_0\big)\ \xrightarrow{d}\ \mathcal{N}\big[0, (G^{\mathsf{T}}\,\Omega^{-1}G)^{-1}\big] }[/math]
The proof that such a choice of weighting matrix is indeed locally optimal is often adopted with slight modifications when establishing efficiency of other estimators. As a rule of thumb, a weighting matrix is closer to optimal the closer the resulting asymptotic variance comes to the Cramér–Rao bound.
Proof. We will consider the difference between asymptotic variance with arbitrary W and asymptotic variance with [math]\displaystyle{ W=\Omega^{-1} }[/math]. If we can factor this difference into a symmetric product of the form [math]\displaystyle{ CC^{\mathsf{T}} }[/math] for some matrix C, then it will guarantee that this difference is nonnegative-definite, and thus [math]\displaystyle{ W=\Omega^{-1} }[/math] will be optimal by definition.
- [math]\displaystyle{ \begin{align} V(W)-V(\Omega^{-1}) &= (G^{\mathsf{T}}WG)^{-1}G^{\mathsf{T}}W\Omega WG(G^{\mathsf{T}}WG)^{-1} - (G^{\mathsf{T}}\Omega^{-1}G)^{-1} \\ &= (G^{\mathsf{T}}WG)^{-1}\Big(G^{\mathsf{T}}W\Omega WG - G^{\mathsf{T}}WG(G^{\mathsf{T}}\Omega^{-1}G)^{-1}G^{\mathsf{T}}WG\Big)(G^{\mathsf{T}}WG)^{-1} \\ &= (G^{\mathsf{T}}WG)^{-1}G^{\mathsf{T}}W\Omega^{1/2}\Big(I - \Omega^{-1/2}G(G^{\mathsf{T}}\Omega^{-1}G)^{-1}G^{\mathsf{T}}\Omega^{-1/2}\Big)\Omega^{1/2}WG(G^{\mathsf{T}}WG)^{-1} \\ &= A(I-B)A^{\mathsf{T}}, \end{align} }[/math]
where we introduced matrices A and B in order to slightly simplify notation; I is an identity matrix. We can see that matrix B here is symmetric and idempotent: [math]\displaystyle{ B^2=B }[/math]. This means I−B is symmetric and idempotent as well: [math]\displaystyle{ I-B=(I-B)(I-B)^{\mathsf{T}} }[/math]. Thus we can continue to factor the previous expression as
- [math]\displaystyle{ A(I-B)A^{\mathsf{T}} = A(I-B)(I-B)^{\mathsf{T}}A^{\mathsf{T}} = \Big(A(I-B)\Big)\Big(A(I-B)\Big)^{\mathsf{T}} \geq 0. }[/math]
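As a quick numerical illustration of this efficiency result (not part of the original argument), one can draw an arbitrary G, a positive-definite Ω, and a positive-definite W, and check that all eigenvalues of V(W) − V(Ω⁻¹) are nonnegative. The dimensions and matrices below are made up for the check.

```python
# Numerical check that V(W) - V(Omega^{-1}) is positive semi-definite (illustrative).
import numpy as np

rng = np.random.default_rng(1)
k, l = 5, 3                                                 # moments k >= parameters l
G = rng.normal(size=(k, l))
A = rng.normal(size=(k, k)); Omega = A @ A.T + np.eye(k)    # positive-definite Omega
B = rng.normal(size=(k, k)); W = B @ B.T + np.eye(k)        # arbitrary PD weighting matrix

def V(W):
    bread = np.linalg.inv(G.T @ W @ G)
    return bread @ G.T @ W @ Omega @ W @ G @ bread

diff = V(W) - V(np.linalg.inv(Omega))
print(np.linalg.eigvalsh(diff))                 # all eigenvalues >= 0 (up to rounding)
```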
Implementation
One difficulty with implementing the outlined method is that we cannot take W = Ω−1 because, by the definition of matrix Ω, we need to know the value of θ0 in order to compute this matrix, and θ0 is precisely the quantity we do not know and are trying to estimate in the first place. In the case of Yt being iid we can estimate W as
- [math]\displaystyle{ \hat{W}_T(\hat\theta) = \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\hat\theta)g(Y_t,\hat\theta)^{\mathsf{T}}\bigg)^{-1}. }[/math]
Several approaches exist to deal with this issue, the first one being the most popular:
- Two-step feasible GMM:
- Step 1: Take W = I (the identity matrix) or some other positive-definite matrix, and compute preliminary GMM estimate [math]\displaystyle{ \scriptstyle\hat\theta_{(1)} }[/math]. This estimator is consistent for θ0, although not efficient.
- Step 2: [math]\displaystyle{ \hat{W}_T(\hat\theta_{(1)}) }[/math] converges in probability to Ω−1 and therefore if we compute [math]\displaystyle{ \scriptstyle\hat\theta }[/math] with this weighting matrix, the estimator will be asymptotically efficient.
- Iterated GMM. Essentially the same procedure as 2-step GMM, except that the matrix [math]\displaystyle{ \hat{W}_T }[/math] is recalculated several times. That is, the estimate obtained in step 2 is used to calculate the weighting matrix for step 3, and so on until some convergence criterion is met.
- [math]\displaystyle{ \hat\theta_{(i+1)} = \operatorname{arg}\min_{\theta\in\Theta}\bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\bigg)^{\mathsf{T}} \hat{W}_T(\hat\theta_{(i)}) \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\bigg) }[/math]
- Continuously updating GMM (CUGMM, or CUE). Estimates [math]\displaystyle{ \scriptstyle\hat\theta }[/math] simultaneously with estimating the weighting matrix W:
- [math]\displaystyle{ \hat\theta = \operatorname{arg}\min_{\theta\in\Theta} \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\bigg)^{\mathsf{T}} \hat{W}_T(\theta) \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\bigg) }[/math]
Another important issue in the implementation of the minimization procedure is that the algorithm must search through a (possibly high-dimensional) parameter space Θ and find the value of θ that minimizes the objective function. No generic recommendation for such a procedure exists; it is the subject of its own field, numerical optimization.
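As an illustration of the two-step feasible procedure described above, here is a hedged sketch for a linear instrumental-variables model with the moment condition E[z_t(y_t − x_tβ)] = 0 (one of the cases listed in the Scope section below). The simulated data, the single-parameter setup, and all names are assumptions made for the example.

```python
# Two-step feasible GMM sketch for a simulated linear IV model (illustrative).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T = 1000
z = rng.normal(size=(T, 3))                     # 3 instruments
u = rng.normal(size=T)                          # structural error
x = z @ np.array([1.0, 0.5, -0.5]) + 0.5 * u + rng.normal(size=T)  # endogenous regressor
y = 2.0 * x + u                                 # true beta = 2

def g(beta):
    """T x 3 array of moment terms z_t * (y_t - x_t * beta)."""
    return z * (y - x * beta)[:, None]

def objective(beta, W):
    m_hat = g(beta[0]).mean(axis=0)
    return m_hat @ W @ m_hat

# Step 1: identity weighting matrix -> consistent but inefficient estimate
beta1 = minimize(objective, x0=np.array([0.0]), args=(np.eye(3),)).x
# Step 2: efficient weighting matrix estimated at the step-1 value
Omega_hat = g(beta1[0]).T @ g(beta1[0]) / T
W_hat = np.linalg.inv(Omega_hat)
beta2 = minimize(objective, x0=beta1, args=(W_hat,)).x
print(beta1, beta2)
```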
Sargan–Hansen J-test
When the number of moment conditions is greater than the dimension of the parameter vector θ, the model is said to be over-identified. Sargan (1958) proposed tests for over-identifying restrictions based on instrumental variables estimators that are distributed in large samples as Chi-square variables with degrees of freedom that depend on the number of over-identifying restrictions. Subsequently, Hansen (1982) applied this test to the mathematically equivalent formulation of GMM estimators. Note, however, that such statistics can be negative in empirical applications where the models are misspecified, and likelihood ratio tests can yield insights since the models are estimated under both null and alternative hypotheses (Bhargava and Sargan, 1983).
Conceptually we can check whether [math]\displaystyle{ \hat{m}(\hat\theta) }[/math] is sufficiently close to zero to suggest that the model fits the data well. The GMM method has then replaced the problem of solving the equation [math]\displaystyle{ \hat{m}(\theta)=0 }[/math], which chooses [math]\displaystyle{ \theta }[/math] to match the restrictions exactly, by a minimization calculation. The minimization can always be conducted even when no [math]\displaystyle{ \theta_0 }[/math] exists such that [math]\displaystyle{ m(\theta_0)=0 }[/math]. This is what the J-test does. The J-test is also called a test for over-identifying restrictions.
Formally we consider two hypotheses:
- [math]\displaystyle{ H_0:\ m(\theta_0)=0 }[/math] (the null hypothesis that the model is “valid”), and
- [math]\displaystyle{ H_1:\ m(\theta)\neq 0,\ \forall \theta\in\Theta }[/math] (the alternative hypothesis that model is “invalid”; the data does not come close to meeting the restrictions)
Under hypothesis [math]\displaystyle{ H_0 }[/math], the following so-called J-statistic is asymptotically chi-squared distributed with k − ℓ degrees of freedom. Define J to be:
- [math]\displaystyle{ J \equiv T \cdot \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\hat\theta)\bigg)^{\mathsf{T}} \hat{W}_T \bigg(\frac{1}{T}\sum_{t=1}^T g(Y_t,\hat\theta)\bigg)\ \xrightarrow{d}\ \chi^2_{k-\ell} }[/math] under [math]\displaystyle{ H_0, }[/math]
where [math]\displaystyle{ \hat\theta }[/math] is the GMM estimator of the parameter [math]\displaystyle{ \theta_0 }[/math], k is the number of moment conditions (dimension of vector g), and l is the number of estimated parameters (dimension of vector θ). Matrix [math]\displaystyle{ \hat{W}_T }[/math] must converge in probability to [math]\displaystyle{ \Omega^{-1} }[/math], the efficient weighting matrix (note that previously we only required that W be proportional to [math]\displaystyle{ \Omega^{-1} }[/math] for estimator to be efficient; however in order to conduct the J-test W must be exactly equal to [math]\displaystyle{ \Omega^{-1} }[/math], not simply proportional).
Under the alternative hypothesis [math]\displaystyle{ H_1 }[/math], the J-statistic is asymptotically unbounded:
- [math]\displaystyle{ J\ \xrightarrow{p}\ \infty }[/math] under [math]\displaystyle{ H_1 }[/math]
To conduct the test we compute the value of J from the data. It is a nonnegative number. We compare it with (for example) the 0.95 quantile of the [math]\displaystyle{ \chi^2_{k-\ell} }[/math] distribution:
- [math]\displaystyle{ H_0 }[/math] is rejected at the 5% significance level if [math]\displaystyle{ J \gt q_{0.95}^{\chi^2_{k-\ell}} }[/math]
- [math]\displaystyle{ H_0 }[/math] cannot be rejected at the 5% significance level if [math]\displaystyle{ J \lt q_{0.95}^{\chi^2_{k-\ell}} }[/math]
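A sketch of the test computation follows, reusing the same illustrative over-identified IV setup as in the Implementation section (k = 3 moment conditions, ℓ = 1 parameter, hence 2 degrees of freedom). The efficient weighting matrix is estimated at a first-step estimate, as required above; all data and names are assumptions.

```python
# J-test sketch for the simulated over-identified IV example (illustrative).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(2)
T = 1000
z = rng.normal(size=(T, 3))
u = rng.normal(size=T)
x = z @ np.array([1.0, 0.5, -0.5]) + 0.5 * u + rng.normal(size=T)
y = 2.0 * x + u

def g(beta):
    return z * (y - x * beta)[:, None]

def objective(beta, W):
    m_hat = g(beta[0]).mean(axis=0)
    return m_hat @ W @ m_hat

beta1 = minimize(objective, x0=np.array([0.0]), args=(np.eye(3),)).x
W_hat = np.linalg.inv(g(beta1[0]).T @ g(beta1[0]) / T)      # estimated efficient weight
res = minimize(objective, x0=beta1, args=(W_hat,))

J = T * res.fun                                 # T times the minimized objective
critical = chi2.ppf(0.95, df=3 - 1)             # k - l = 2 degrees of freedom
print(J, critical, J > critical)                # reject H0 if J exceeds the quantile
```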
Scope
Many other popular estimation techniques can be cast in terms of GMM optimization:
- Ordinary least squares (OLS) is equivalent to GMM with moment conditions:
- [math]\displaystyle{ \operatorname{E}[\,x_t(y_t - x_t^{\mathsf{T}}\beta)\,]=0 }[/math]
- Weighted least squares (WLS):
- [math]\displaystyle{ \operatorname{E}[\,x_t(y_t - x_t^{\mathsf{T}}\beta)/\sigma^2(x_t)\,]=0 }[/math]
- Instrumental variables regression (IV):
- [math]\displaystyle{ \operatorname{E}[\,z_t(y_t - x_t^{\mathsf{T}}\beta)\,]=0 }[/math]
- Non-linear least squares (NLLS):
- [math]\displaystyle{ \operatorname{E}[\,\nabla_{\!\beta}\, g(x_t,\beta)\cdot(y_t - g(x_t,\beta))\,]=0 }[/math]
- Maximum likelihood estimation (MLE):
- [math]\displaystyle{ \operatorname{E}[\,\nabla_{\!\theta} \ln f(x_t,\theta) \,]=0 }[/math]
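As a quick check of the first equivalence in this list, solving the sample analogue of the OLS moment condition, (1/T) Σ x_t(y_t − x_tᵀβ) = 0, reproduces the ordinary least-squares estimator; the simulated data below are purely illustrative.

```python
# OLS as GMM: the sample moment condition gives the normal equations (illustrative).
import numpy as np

rng = np.random.default_rng(3)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])   # intercept + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=T)

# (1/T) sum_t x_t (y_t - x_t' beta) = 0  is the linear system (X'X) beta = X'y.
beta_gmm = np.linalg.solve(X.T @ X, X.T @ y)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_gmm, beta_ols))                   # True
```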
An alternative to the GMM
In the method of moments article, an alternative to the original (non-generalized) method of moments (MoM) is described, with references to some applications and a list of theoretical advantages and disadvantages relative to the traditional method. This Bayesian-like MoM (BL-MoM) is distinct from all the related methods described above, which are subsumed by the GMM.[5][6] The literature does not contain a direct comparison between the GMM and the BL-MoM in specific applications.
See also
- Method of maximum likelihood
- Generalized empirical likelihood
- Arellano–Bond estimator
- Approximate Bayesian computation
References
- ↑ Hayashi, Fumio (2000). Econometrics. Princeton University Press. p. 206. ISBN 0-691-01018-8. https://books.google.com/books?id=QyIW8WUIyzcC&pg=PA206.
- ↑ Hansen, Lars Peter (1982). "Large Sample Properties of Generalized Method of Moments Estimators". Econometrica 50 (4): 1029–1054. doi:10.2307/1912775.
- ↑ Newey, W.; McFadden, D. (1994). "Large sample estimation and hypothesis testing". Handbook of Econometrics. 4. Elsevier Science. pp. 2111–2245. doi:10.1016/S1573-4412(05)80005-4. ISBN 9780444887665.
- ↑ Hansen, Lars Peter; Heaton, John; Yaron, Amir (1996). "Finite-sample properties of some alternative GMM estimators". Journal of Business & Economic Statistics 14 (3): 262–280. doi:10.1080/07350015.1996.10524656. http://dspace.mit.edu/bitstream/1721.1/47970/1/finitesampleprop00hans.pdf.
- ↑ Armitage, Peter, ed. (2005). Encyclopedia of Biostatistics (1st ed.). Wiley. doi:10.1002/0470011815. ISBN 978-0-470-84907-1. https://onlinelibrary.wiley.com/doi/book/10.1002/0470011815.
- ↑ Godambe, V. P., ed (2002). Estimating functions. Oxford statistical science series (Repr ed.). Oxford: Clarendon Press. ISBN 978-0-19-852228-7.
Further reading
- Huber, P. (1967). The behavior of maximum likelihood estimates under nonstandard conditions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability 1, 221-233.
- Newey W., McFadden D. (1994). Large sample estimation and hypothesis testing, in Handbook of Econometrics, Ch.36. Elsevier Science.
- Imbens, Guido W.; Spady, Richard H.; Johnson, Phillip (1998). "Information theoretic approaches to inference in moment condition models". Econometrica 66 (2): 333–357. doi:10.2307/2998561. http://www.nber.org/papers/t0186.pdf.
- Sargan, J.D. (1958). The estimation of economic relationships using instrumental variables. Econometrica, 26, 393-415.
- Sargan, J.D. (1959). The estimation of relationships with autocorrelated residuals by the use of instrumental variables. Journal of the Royal Statistical Society B, 21, 91-105.
- Wang, C.Y., Wang, S., and Carroll, R. (1997). Estimation in choice-based sampling with measurement error and bootstrap analysis. Journal of Econometrics, 77, 65-86.
- Bhargava, A., and Sargan, J.D. (1983). Estimating dynamic random effects from panel data covering short time periods. Econometrica, 51, 6, 1635-1659.
- Hayashi, Fumio (2000). Econometrics. Princeton: Princeton University Press. ISBN 0-691-01018-8.
- Hansen, Lars Peter (2002). "Method of Moments". in Smelser, N. J.; Baltes, P. B.. International Encyclopedia of the Social and Behavioral Sciences. Oxford: Pergamon.
- Hall, Alastair R. (2005). Generalized Method of Moments. Advanced Texts in Econometrics. Oxford University Press. ISBN 0-19-877520-2.
- Faciane, Kirby Adam Jr. (2006). Statistics for Empirical and Quantitative Finance. Statistics for Empirical and Quantitative Finance. H.C. Baird. ISBN 0-9788208-9-4.
- Special issues of Journal of Business and Economic Statistics: vol. 14, no. 3 and vol. 20, no. 4.
Original source: https://en.wikipedia.org/wiki/Generalized_method_of_moments