Control function (econometrics)

Control functions (also known as two-stage residual inclusion) are statistical methods to correct for endogeneity problems by modelling the endogeneity in the error term. The approach thereby differs in important ways from other methods that address the same econometric problem. Instrumental variables, for example, model the endogenous variable X as an (often invertible) function of a relevant and exogenous instrument Z. Panel analysis exploits special data properties to difference out unobserved heterogeneity that is assumed to be fixed over time.

Control functions were introduced by Heckman and Robb,[1] although the principle can be traced back to earlier papers.[2] They are popular in part because they work for non-invertible models (such as discrete choice models) and allow for heterogeneous effects, where effects at the individual level can differ from effects at the aggregate level.[3] A well-known example of the control function approach is the Heckman correction.

Formal definition

Assume we start from a standard endogenous variable setup with additive errors, where X is an endogenous variable, and Z is an exogenous variable that can serve as an instrument.

[math]\displaystyle{ Y = g(X) + U }[/math]  (1)

[math]\displaystyle{ X = \pi(Z) + V }[/math]  (2)

[math]\displaystyle{ E[U \mid Z,V] = E[U \mid V] }[/math]  (3)

[math]\displaystyle{ E[V \mid Z] = 0 }[/math]  (4)

A popular instrumental variable approach is to use a two-step procedure: estimate equation (2) first and then use the estimates from this first step to estimate equation (1) in a second step. The control function approach, by contrast, exploits the fact that the model implies

[math]\displaystyle{ E[Y \mid Z,V] = g(X) + E[U \mid Z,V] = g(X) + E[U \mid V] = g(X) + h(V) }[/math]  (5)

The function h(V) is effectively the control function that models the endogeneity, and it is from this role that the econometric approach takes its name.[4]
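
To make the two steps concrete, the following is a minimal sketch on simulated data, assuming a linear outcome g(X) = Xβ and a linear control function h(V) = ρV; the data-generating process, the variable names, and the use of statsmodels are illustrative rather than part of the original treatment.

```python
# Two-step control function for a linear model: a minimal sketch on
# simulated data, assuming g(X) = X*beta and h(V) = rho*V.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulate an endogenous X: U and V are correlated, Z is exogenous.
z = rng.normal(size=n)
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)          # endogeneity: Corr(U, V) != 0
x = 1.0 + 0.5 * z + v                     # first stage: X = pi(Z) + V
y = 2.0 + 1.5 * x + u                     # outcome: Y = g(X) + U

# Step 1: regress X on Z and keep the residuals V-hat.
first_stage = sm.OLS(x, sm.add_constant(z)).fit()
v_hat = first_stage.resid

# Step 2: regress Y on X and V-hat; including V-hat controls for E[U | V].
second_stage = sm.OLS(y, sm.add_constant(np.column_stack([x, v_hat]))).fit()
print(second_stage.params)   # slope on X is close to the true value 1.5
```

Note that the second-stage standard errors reported this way ignore the fact that V-hat is estimated; see the variance correction below.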

In a Rubin causal model potential outcomes framework, where Y1 is the outcome variable of people for whom the participation indicator D equals 1, the control function approach leads to the following model

[math]\displaystyle{ E[Y_1 \mid X,Z,D = 1] = \mu_1(X) + E[U \mid D = 1] }[/math]  (6)

as long as the potential outcomes Y0 and Y1 are independent of D conditional on X and Z.[5]
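
Under joint normality of the errors, the correction term E[U | D = 1] reduces to a scaled inverse Mills ratio, which is the Heckman correction mentioned above. The following is a minimal sketch of that two-step procedure on simulated data; the data-generating process, the variable names, and the use of statsmodels are illustrative.

```python
# Heckman-style two-step correction: a minimal sketch on simulated data.
# Under joint normality, the control function is the inverse Mills ratio
# from a first-stage probit of participation D on the selection covariates.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20000

z = rng.normal(size=n)                               # excluded instrument
x = rng.normal(size=n)                               # outcome regressor
errors = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
d = (0.5 + 1.0 * z - 1.0 * x + errors[:, 0] > 0).astype(int)   # participation
y = 1.0 + 2.0 * x + errors[:, 1]                     # used only where D = 1

# Step 1: probit of D on the selection covariates, then the inverse Mills ratio.
W = sm.add_constant(np.column_stack([z, x]))
probit = sm.Probit(d, W).fit(disp=0)
index = W @ probit.params
mills = norm.pdf(index) / norm.cdf(index)

# Step 2: outcome regression on the D = 1 subsample with the Mills ratio added.
sel = d == 1
X2 = sm.add_constant(np.column_stack([x[sel], mills[sel]]))
print(sm.OLS(y[sel], X2).fit().params)   # slope on x recovers roughly 2.0
```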

Variance correction

Since the second-stage regression includes generated regressors, its variance-covariance matrix needs to be adjusted.[6][7]
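
The analytical Murphy–Topel adjustment can be tedious to derive by hand; one common alternative in practice is a paired bootstrap that re-estimates both stages on each resample. The sketch below reuses the simulated arrays y, x, z from the linear example above; the helper function and the choice of 500 replications are illustrative.

```python
# Paired bootstrap over both stages: one way to obtain standard errors that
# account for the generated regressor V-hat. Assumes the arrays y, x, z from
# the earlier linear sketch are in scope.
import numpy as np
import statsmodels.api as sm

def two_step(y, x, z):
    """Two-step control function estimator; returns the second-stage coefficients."""
    v_hat = sm.OLS(x, sm.add_constant(z)).fit().resid
    exog = sm.add_constant(np.column_stack([x, v_hat]))
    return sm.OLS(y, exog).fit().params

rng = np.random.default_rng(2)
n = len(y)
draws = np.empty((500, 3))
for b in range(500):
    idx = rng.integers(0, n, size=n)      # resample observations with replacement
    draws[b] = two_step(y[idx], x[idx], z[idx])

print(draws.std(axis=0))   # bootstrap standard errors for (const, beta, rho)
```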

Examples

Endogeneity in Poisson regression

Wooldridge and Terza provide a methodology to both deal with and test for endogeneity within the exponential regression framework, which the following discussion follows closely.[8] While the example focuses on a Poisson regression model, it is possible to generalize to other exponential regression models, although this may come at the cost of additional assumptions (e.g. for binary response or censored data models).

Assume the following exponential regression model, where [math]\displaystyle{ a_i }[/math] is an unobserved term in the latent variable. We allow for correlation between [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ x_i }[/math] (implying [math]\displaystyle{ x_i }[/math] is possibly endogenous), but rule out such correlation between [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ z_i }[/math].

[math]\displaystyle{ \operatorname E[y_i \mid x_i, z_i, a_i] = \exp(x_i b_0 + z_i c_0+a_i) }[/math]

The variables [math]\displaystyle{ z_i }[/math] serve as instrumental variables for the potentially endogenous [math]\displaystyle{ x_i }[/math]. One can assume a linear relationship between these two variables or alternatively project the endogenous variable [math]\displaystyle{ x_i }[/math] onto the instruments to get the following reduced form equation:

[math]\displaystyle{ x_i=z_i\Pi+v_i }[/math]  (1)

The usual rank condition is needed to ensure identification. The endogeneity is then modeled in the following way, where [math]\displaystyle{ \rho }[/math] determines the severity of endogeneity and [math]\displaystyle{ v_i }[/math] is assumed to be independent of [math]\displaystyle{ e_i }[/math].

[math]\displaystyle{ a_i=v_i \rho+e_i }[/math]

Imposing these assumptions, assuming the models are correctly specified, and normalizing [math]\displaystyle{ \operatorname E[\exp(e_i)]=1 }[/math], we can rewrite the conditional mean as follows:

[math]\displaystyle{ \operatorname E[y_i \mid x_i, z_i , v_i] = \exp (x_i b_0 + z_i c_0 +v_i\rho) }[/math]  (2)

If [math]\displaystyle{ v_i }[/math] were known at this point, it would be possible to estimate the relevant parameters by quasi-maximum likelihood estimation (QMLE). Following the two-step procedure, Wooldridge and Terza propose estimating equation (1) by ordinary least squares. The fitted residuals from this regression can then be plugged into estimating equation (2), and QMLE methods will lead to consistent estimators of the parameters of interest. Significance tests on [math]\displaystyle{ \hat\rho }[/math] can then be used to test for endogeneity within the model.
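
A minimal sketch of this two-step procedure on simulated data is given below; the data-generating process and variable names are illustrative, and the exogenous regressors [math]\displaystyle{ z_i }[/math] are dropped from the outcome equation (i.e. [math]\displaystyle{ c_0 = 0 }[/math]) purely to keep the example short.

```python
# Two-stage residual inclusion for a Poisson model: a minimal sketch on
# simulated data in the spirit of the Wooldridge-Terza two-step procedure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 10000

z = rng.normal(size=n)
v = rng.normal(scale=0.5, size=n)
e = rng.normal(scale=0.3, size=n)
a = 0.7 * v + e                                   # a_i = v_i*rho + e_i
x = 0.5 * z + v                                   # reduced form x_i = z_i*Pi + v_i
y = rng.poisson(np.exp(0.2 + 0.4 * x + a))        # exponential conditional mean

# Step 1: OLS of x on z; keep the fitted residuals v-hat.
v_hat = sm.OLS(x, sm.add_constant(z)).fit().resid

# Step 2: Poisson QMLE of y on x and v-hat; the t-statistic on v-hat is the
# endogeneity test. (cov_type="HC0" is robust to non-Poisson variance but does
# not adjust for the generated regressor; see the variance correction above.)
exog = sm.add_constant(np.column_stack([x, v_hat]))
poisson = sm.GLM(y, exog, family=sm.families.Poisson()).fit(cov_type="HC0")
print(poisson.summary())
```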

Extensions

The original Heckit procedure makes distributional assumptions about the error terms; however, more flexible estimation approaches with weaker distributional assumptions have been established.[9] Furthermore, Blundell and Powell show how the control function approach can be particularly helpful in models with nonadditive errors, such as discrete choice models.[10] This latter approach, however, does implicitly make strong distributional and functional form assumptions.[5]
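
In the additive-error setup above, one simple step in this more flexible direction is to approximate h(V) by a low-order polynomial in the first-stage residuals rather than a single linear term. The sketch below reuses the simulated arrays y, x, z from the linear example in the formal-definition section; the choice of a cubic is illustrative.

```python
# A more flexible control function: approximate h(V) with a cubic polynomial
# in the first-stage residuals instead of a single linear term. Assumes the
# arrays y, x, z from the earlier linear sketch are in scope.
import numpy as np
import statsmodels.api as sm

v_hat = sm.OLS(x, sm.add_constant(z)).fit().resid
controls = np.column_stack([v_hat, v_hat**2, v_hat**3])
exog = sm.add_constant(np.column_stack([x, controls]))
print(sm.OLS(y, exog).fit().params)   # coefficient on x under a flexible h(V)
```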

References

  1. Heckman, James J.; Robb, Richard (1985). "Alternative methods for evaluating the impact of interventions". Journal of Econometrics (Elsevier BV) 30 (1–2): 239–267. doi:10.1016/0304-4076(85)90139-3. ISSN 0304-4076. 
  2. Telser, L. G. (1964). "Iterative Estimation of a Set of Linear Regression Equations". Journal of the American Statistical Association 59 (307): 845–862. doi:10.1080/01621459.1964.10480731. 
  3. Arellano, M. (2008). "Binary Models with Endogenous Explanatory Variables". Class notes. https://www.cemfi.es/~arellano/binary-endogeneity.pdf. 
  4. Arellano, M. (2003): Endogeneity and Instruments in Nonparametric Models. Comments to papers by Darolles, Florens & Renault; and Blundell & Powell. Advances in Economics and Econometrics, Theory and Applications, Eight World Congress. Volume II, ed. by M. Dewatripont, L.P. Hansen, and S.J. Turnovsky. Cambridge University Press, Cambridge.
  5. Heckman, J. J., and E. J. Vytlacil (2007): Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Econometric Estimators to Evaluate Social Programs, and to Forecast the Effects in New Environments. Handbook of Econometrics, Vol 6, ed. by J. J. Heckman and E. E. Leamer. North Holland.
  6. Murphy, Kevin M.; Topel, Robert H. (1985). "Estimation and Inference in Two-Step Econometric Models". Journal of Business & Economic Statistics 3 (4): 370–379. 
  7. Gauger, Jean (1989). "The Generated Regressor Correction: Impacts Upon Inferences in Hypothesis Testing". Journal of Macroeconomics 11 (3): 383–395. doi:10.1016/0164-0704(89)90065-7. 
  8. Wooldridge 1997, pp. 382–383; Terza 1998
  9. Matzkin, R. L. (2003). "Nonparametric Estimation of Nonadditive Random Functions". Econometrica 71 (5): 1339–1375. doi:10.1111/1468-0262.00452. 
  10. Blundell, R., and J. L. Powell (2003): Endogeneity in Nonparametric and Semiparametric Regression Models. Advances in Economics and Econometrics, Theory and Applications, Eight World Congress. Volume II, ed. by M. Dewatripont, L.P. Hansen, and S.J. Turnovsky. Cambridge University Press, Cambridge.
