Two-step M-estimator

Two-step M-estimators deal with M-estimation problems that require preliminary estimation to obtain the parameter of interest. Two-step M-estimation differs from the usual M-estimation problem because the asymptotic distribution of the second-step estimator generally depends on the first-step estimator. Accounting for this change in the asymptotic distribution is important for valid inference.

Description

The class of two-step M-estimators includes Heckman's sample selection estimator,[1] weighted non-linear least squares, and ordinary least squares with generated regressors.[2]

To fix ideas, let [math]\displaystyle{ \{W_{i}\}^n_{i=1} \subseteq R^d }[/math] be an i.i.d. sample. [math]\displaystyle{ \Theta }[/math] and [math]\displaystyle{ \Gamma }[/math] are subsets of the Euclidean spaces [math]\displaystyle{ R^p }[/math] and [math]\displaystyle{ R^q }[/math], respectively. Given a function [math]\displaystyle{ m(\cdot,\cdot,\cdot): R^d \times \Theta \times \Gamma\rightarrow R }[/math], the two-step M-estimator [math]\displaystyle{ \hat\theta }[/math] is defined as:

[math]\displaystyle{ \hat \theta:=\arg\max_{\theta\in\Theta}\frac{1}{n}\sum_{i}m\bigl(W_{i},\theta,\hat\gamma\bigr) }[/math]

where [math]\displaystyle{ \hat\gamma }[/math] is an M-estimate of a nuisance parameter that needs to be calculated in the first step.
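
To make the definition concrete, the following is a minimal sketch of a two-step M-estimator in the spirit of weighted non-linear least squares, one of the examples above. The first step estimates a variance-model parameter [math]\displaystyle{ \hat\gamma }[/math]; the second step maximizes the weighted sample objective. The exponential mean and variance models, and all names in the code, are illustrative assumptions rather than part of the general setup.

    # Sketch of a two-step M-estimator (hypothetical weighted NLS example).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.uniform(0.5, 2.0, n)
    theta0, gamma0 = 1.5, 0.8
    # Heteroskedastic data: Var(y | x) = exp(gamma0 * x) (assumed model)
    y = np.exp(theta0 * x) + rng.normal(0.0, np.exp(0.5 * gamma0 * x))

    def mu(x, theta):  # conditional-mean model (illustrative)
        return np.exp(theta * x)

    # First step: estimate the nuisance parameter gamma by regressing
    # squared preliminary residuals on the variance model (itself an
    # M-estimation problem).
    prelim = minimize(lambda t: np.mean((y - mu(x, t)) ** 2), x0=[1.0]).x
    r2 = (y - mu(x, prelim)) ** 2
    gamma_hat = minimize(lambda g: np.mean((r2 - np.exp(g * x)) ** 2), x0=[0.5]).x

    # Second step: maximize (1/n) * sum_i m(W_i, theta, gamma_hat), here with
    # m = -(y - mu(x, theta))^2 / exp(gamma_hat * x), i.e. weighted NLS.
    w = np.exp(-gamma_hat * x)
    theta_hat = minimize(lambda t: np.mean(w * (y - mu(x, t)) ** 2), x0=[1.0]).x
    print("theta_hat:", theta_hat)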

Consistency of two-step M-estimators can be verified by checking the consistency conditions for usual M-estimators, although some modification might be necessary. In practice, the important condition to check is the identification condition.[2] If [math]\displaystyle{ \hat\gamma\rightarrow\gamma^* }[/math] in probability, where [math]\displaystyle{ \gamma^* }[/math] is a non-random vector, then the identification condition is that [math]\displaystyle{ E[m(W_{1},\theta,\gamma^*)] }[/math] has a unique maximizer over [math]\displaystyle{ \Theta }[/math].

Asymptotic distribution

Under regularity conditions, two-step M-estimators are asymptotically normal. An important point to note is that the asymptotic variance of a two-step M-estimator is generally not the same as that of the usual M-estimator in which the first-step estimation is not necessary.[3] This fact is intuitive: [math]\displaystyle{ \hat\gamma }[/math] is a random object, and its variability should influence the estimation of [math]\displaystyle{ \theta }[/math]. However, there exists a special case in which the asymptotic variance of the two-step M-estimator takes the same form as if there were no first-step estimation procedure. This special case occurs if:

[math]\displaystyle{ E\left[\frac{\partial^2}{\partial\theta\,\partial\gamma}m(W_{1},\theta_{0},\gamma^*)\right]=0 }[/math]

where [math]\displaystyle{ \theta_{0} }[/math] is the true value of [math]\displaystyle{ \theta }[/math] and [math]\displaystyle{ \gamma^* }[/math] is the probability limit of [math]\displaystyle{ \hat\gamma }[/math].[3] To interpret this condition, first note that under regularity conditions, [math]\displaystyle{ E\left[\frac{\partial}{\partial\theta}m(W_{1},\theta_{0},\gamma^*)\right]=0 }[/math], since [math]\displaystyle{ \theta_{0} }[/math] is the maximizer of [math]\displaystyle{ E[m(W_{1},\theta,\gamma^*)] }[/math]. The condition above thus implies that a small perturbation in γ has no impact on the first-order condition. Hence, in large samples, the variability of [math]\displaystyle{ \hat\gamma }[/math] does not affect the argmax of the objective function, which explains why the asymptotic variance is unaffected. Of course, this result is valid only as the sample size tends to infinity, so the finite-sample properties could be quite different.
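
A sketch of the argument, following the general treatment in Newey and McFadden,[3] makes the mechanism explicit. Under regularity conditions, a first-order expansion of the sample first-order condition around [math]\displaystyle{ (\theta_{0},\gamma^*) }[/math] gives

[math]\displaystyle{ \sqrt{n}(\hat\theta-\theta_{0})=-H^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i}\frac{\partial}{\partial\theta}m(W_{i},\theta_{0},\gamma^*)+F\sqrt{n}(\hat\gamma-\gamma^*)\right)+o_{p}(1) }[/math]

where [math]\displaystyle{ H:=E\left[\frac{\partial^2}{\partial\theta\,\partial\theta^{\mathrm T}}m(W_{1},\theta_{0},\gamma^*)\right] }[/math] and [math]\displaystyle{ F:=E\left[\frac{\partial^2}{\partial\theta\,\partial\gamma}m(W_{1},\theta_{0},\gamma^*)\right] }[/math]. When [math]\displaystyle{ F=0 }[/math], the term carrying the first-step sampling noise vanishes, and the asymptotic variance collapses to the usual M-estimator sandwich formula.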

Involving MLE

When the first step is a maximum likelihood estimator, then under some assumptions, the two-step M-estimator is asymptotically more efficient (i.e. has a smaller asymptotic variance) than the M-estimator with the first-step parameter known. Consistency and asymptotic normality of the estimator follow from the general result on two-step M-estimators.[4]

Let [math]\displaystyle{ \{V_{i},W_{i},Z_{i}\}^n_{i=1} }[/math] be a random sample and let the second-step M-estimator [math]\displaystyle{ \widehat{\theta} }[/math] be the following:

[math]\displaystyle{ \widehat{\theta} := \underset{\theta\in\Theta}{\operatorname{arg\max}}\sum_{i=1}^n m(v_i,w_i,z_i: \theta,\widehat{\gamma}) }[/math]

where [math]\displaystyle{ \widehat{\gamma } }[/math] is the parameter estimated by maximum likelihood in the first step. For the MLE,

[math]\displaystyle{ \widehat{\gamma } := \underset{\gamma\in\Gamma}{\operatorname{arg\max}}\sum_{i=1}^n \log f(v_{i} : z_{i} , \gamma) }[/math]

where f is the conditional density of V given Z. Now, suppose that given Z, V is conditionally independent of W. This is called the conditional independence assumption or selection on observables.[4][5] Intuitively, this condition means that Z is a good predictor of V so that once conditioned on Z, V has no systematic dependence on W. Under the conditional independence assumption, the asymptotic variance of the two-step estimator is:

[math]\displaystyle{ \mathrm E [\nabla_\theta s(\theta_0, \gamma_0)]^{-1} \mathrm E[g(\theta_0, \gamma_0) g(\theta_0, \gamma_0)^{\mathrm T}] \mathrm E[\nabla_\theta s(\theta_0,\gamma_0)]^{-1} }[/math]

where

[math]\displaystyle{ \begin{align} g(\theta,\gamma) &:= s(\theta,\gamma)-\mathrm E[ s(\theta , \gamma) \nabla_\gamma d(\gamma)^{\mathrm T} ] \mathrm E[\nabla_\gamma d(\gamma) \nabla_\gamma d(\gamma)^{\mathrm T} ]^{-1} d(\gamma) \\ s(\theta,\gamma) &:= \nabla_\theta m(V, W, Z: \theta, \gamma) \\ d(\gamma) &:= \nabla_\gamma \log f (V : Z, \gamma) \end{align} }[/math]

and [math]\displaystyle{ \nabla }[/math] represents the partial derivative with respect to a row vector. In the case where γ0 is known, the asymptotic variance is

[math]\displaystyle{ \mathrm E[\nabla_\theta s(\theta_0, \gamma_0)]^{-1} \mathrm E[s(\theta_0, \gamma_0 )s(\theta_0, \gamma_0 )^{\mathrm T}] \mathrm E[\nabla_\theta s(\theta_0, \gamma_0)]^{-1} }[/math]

and therefore, unless [math]\displaystyle{ \mathrm E[ s(\theta, \gamma) \nabla_\gamma d(\gamma)^{\mathrm T} ]=0 }[/math], the two-step M-estimator is more efficient than the usual M-estimator. This fact suggests that even when γ0 is known a priori, there is an efficiency gain from estimating γ by MLE. An application of this result can be found, for example, in treatment effect estimation.[4]
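
The corrected variance lends itself to a simple plug-in computation. The sketch below assumes the per-observation arrays s, d, and the Jacobian of s with respect to θ have already been evaluated at the estimates; the function name and array shapes are illustrative assumptions.

    import numpy as np

    def two_step_avar(s, d, grad_s):
        """Plug-in variance of the second-step estimator theta_hat.

        s      : (n, p) rows s(theta_hat, gamma_hat), the second-step score
        d      : (n, q) rows d(gamma_hat), the first-step MLE score
        grad_s : (n, p, p) per-observation Jacobians of s w.r.t. theta
        """
        n = s.shape[0]
        # g_i = s_i - E[s d^T] E[d d^T]^{-1} d_i: project the first-step
        # score out of the second-step score.
        Esd = s.T @ d / n                           # estimates E[s d^T], (p, q)
        Edd = d.T @ d / n                           # estimates E[d d^T], (q, q)
        g = s - d @ np.linalg.solve(Edd, Esd.T)     # (n, p)
        H_inv = np.linalg.inv(grad_s.mean(axis=0))  # E[grad_theta s]^{-1}
        # The displayed formula is the variance of sqrt(n)*(theta_hat - theta_0);
        # dividing by n gives an estimate of Var(theta_hat).
        return H_inv @ (g.T @ g / n) @ H_inv.T / n

    # Since E[g g^T] = E[s s^T] - E[s d^T] E[d d^T]^{-1} E[d s^T], replacing
    # g by s (the known-gamma_0 case) can only increase the middle term.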

References

  1. Heckman, J.J. (1976), "The Common Structure of Statistical Models of Truncation, Sample Selection, and Limited Dependent Variables and a Simple Estimator for Such Models", Annals of Economic and Social Measurement, 5, 475–492.
  2. Wooldridge, J.M. (2002), Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
  3. Newey, W.K. and D. McFadden (1994), "Large Sample Estimation and Hypothesis Testing", in R.F. Engle and D. McFadden, eds., Handbook of Econometrics, Vol. 4, Amsterdam: North-Holland.
  4. Wooldridge, J.M. (2002), Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
  5. Heckman, J.J. and R. Robb (1985), "Alternative Methods for Evaluating the Impact of Interventions: An Overview", Journal of Econometrics, 30, 239–267.