Mills ratio

In probability theory, the Mills ratio (or Mills's ratio[1]) of a continuous random variable [math]\displaystyle{ X }[/math] is the function

[math]\displaystyle{ m(x) := \frac{\bar{F}(x)}{f(x)} , }[/math]

where [math]\displaystyle{ f(x) }[/math] is the probability density function, and

[math]\displaystyle{ \bar{F}(x) := \Pr[X\gt x] = \int_x^{+\infty} f(u)\, du }[/math]

is the complementary cumulative distribution function (also called the survival function). The concept is named after John P. Mills.[2] The Mills ratio is related to the hazard rate h(x), which is defined as[3]

[math]\displaystyle{ h(x):=\lim_{\delta\to 0} \frac{1}{\delta}\Pr[x \lt X \leq x + \delta | X \gt x] }[/math]

by

[math]\displaystyle{ m(x) = \frac{1}{h(x)}. }[/math]
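For concreteness, the following minimal Python sketch computes the Mills ratio and the hazard rate of a standard normal variable using scipy.stats (the helper names mills_ratio and hazard_rate are illustrative, not part of any library):

```python
# Minimal sketch: Mills ratio m(x) = survival function / density, and the
# hazard rate as its reciprocal, for a standard normal variable.
from scipy.stats import norm

def mills_ratio(x, dist=norm):
    """Mills ratio: complementary CDF divided by the density."""
    return dist.sf(x) / dist.pdf(x)

def hazard_rate(x, dist=norm):
    """Hazard rate: the reciprocal of the Mills ratio."""
    return 1.0 / mills_ratio(x, dist)

print(mills_ratio(1.0))  # ≈ 0.6557
print(hazard_rate(1.0))  # ≈ 1.5251
```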

Upper and lower bounds

When [math]\displaystyle{ X }[/math] has a standard normal distribution, the following bounds hold for [math]\displaystyle{ x\gt 0 }[/math]:

[math]\displaystyle{ \frac{x}{x^2 + 1} \lt m(x) \lt \frac{1}{x} }[/math][4][5]
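These bounds are straightforward to check numerically; the following sketch (an illustrative grid of points, computing m(x) directly as the survival function over the density) verifies them:

```python
# Check x/(x^2 + 1) < m(x) < 1/x for the standard normal at a few x > 0.
import numpy as np
from scipy.stats import norm

xs = np.array([0.5, 1.0, 2.0, 5.0])
m = norm.sf(xs) / norm.pdf(xs)   # Mills ratio
lower = xs / (xs**2 + 1)
upper = 1.0 / xs
assert np.all((lower < m) & (m < upper))
```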


Example

If [math]\displaystyle{ X }[/math] has a standard normal distribution, then

[math]\displaystyle{ m(x) \sim 1/x , \, }[/math]

where the sign [math]\displaystyle{ \sim }[/math] means that the quotient of the two functions converges to 1 as [math]\displaystyle{ x\to+\infty }[/math]; see Q-function for details. More precise asymptotics can be given.[6]
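As a quick numerical illustration of this asymptotic behaviour (a sketch, not drawn from the cited sources), the product x·m(x) tends to 1:

```python
# x * m(x) -> 1 as x grows, consistent with m(x) ~ 1/x.
from scipy.stats import norm

for x in [1.0, 2.0, 5.0, 10.0]:
    m = norm.sf(x) / norm.pdf(x)
    print(x, x * m)  # ≈ 0.6557, 0.8427, 0.9641, 0.9903
```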

Inverse Mills ratio

The inverse Mills ratio is the ratio of the probability density function to the complementary cumulative distribution function of a distribution. Its use is often motivated by the following property of the truncated normal distribution. If X is a random variable having a normal distribution with mean μ and variance σ², then

[math]\displaystyle{ \begin{align} & \operatorname{E}[\,X\,|\ X \gt \alpha \,] = \mu + \sigma \frac {\phi\big(\tfrac{\alpha-\mu}{\sigma}\big)}{1-\Phi\big(\tfrac{\alpha-\mu}{\sigma}\big)}, \\ & \operatorname{E}[\,X\,|\ X \lt \alpha \,] = \mu - \sigma \frac {\phi\big(\tfrac{\alpha-\mu}{\sigma}\big)}{\Phi\big(\tfrac{\alpha-\mu}{\sigma}\big)}, \end{align} }[/math]

where [math]\displaystyle{ \alpha }[/math] is a constant, [math]\displaystyle{ \phi }[/math] denotes the standard normal density function, and [math]\displaystyle{ \Phi }[/math] is the standard normal cumulative distribution function. The two fractions are the inverse Mills ratios.[7]
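The first identity can be verified numerically; the sketch below uses illustrative values of μ, σ, and α and compares the formula against scipy.stats.truncnorm:

```python
# E[X | X > a] = mu + sigma * phi(z)/(1 - Phi(z)), with z = (a - mu)/sigma.
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma, a = 1.0, 2.0, 0.5          # illustrative values
z = (a - mu) / sigma

imr = norm.pdf(z) / norm.sf(z)        # inverse Mills ratio
mean_formula = mu + sigma * imr

# truncnorm takes the truncation bounds in standardised units
mean_scipy = truncnorm.mean(z, np.inf, loc=mu, scale=sigma)

print(mean_formula, mean_scipy)       # both ≈ 2.2917
```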

Use in regression

A common application of the inverse Mills ratio (sometimes also called the “non-selection hazard”) arises in regression analysis to account for possible selection bias. If a dependent variable is censored (i.e., a positive outcome is not observed for all observations), the observations concentrate at zero. This problem was first acknowledged by Tobin (1958), who showed that unless it is taken into account in the estimation procedure, ordinary least squares estimation will produce biased parameter estimates.[8] With censored dependent variables there is a violation of the Gauss–Markov assumption of zero correlation between the independent variables and the error term.[9]

James Heckman proposed a two-stage estimation procedure using the inverse Mills ratio to correct for selection bias.[10][11] In the first stage, the probability of observing a positive outcome of the dependent variable is modeled with a probit model. The inverse Mills ratio must be generated from the estimation of a probit model; a logit model cannot be used, since the derivation relies on the probit assumption that the error term follows a standard normal distribution.[10] The estimated parameters are used to calculate the inverse Mills ratio, which is then included as an additional explanatory variable in the second-stage OLS estimation.[12]
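A schematic version of the two-stage procedure on synthetic data might look as follows (a sketch using statsmodels; the data-generating process and variable names are illustrative, not taken from Heckman's paper):

```python
# Heckman two-step sketch: probit selection equation, then OLS with the
# inverse Mills ratio as an additional regressor. Synthetic data throughout.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
Z = sm.add_constant(rng.normal(size=(n, 2)))   # selection-equation covariates
X = sm.add_constant(Z[:, 1:2])                 # outcome-equation covariate
errors = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], n)
selected = (Z @ np.array([0.5, 1.0, 1.0]) + errors[:, 0]) > 0
outcome = X @ np.array([2.0, 1.5]) + errors[:, 1]

# Stage 1: probit for the selection indicator, then the inverse Mills ratio
probit = sm.Probit(selected.astype(float), Z).fit(disp=0)
zb = Z @ probit.params                         # linear predictor
imr = norm.pdf(zb) / norm.cdf(zb)

# Stage 2: OLS on the selected sample with the IMR as an extra regressor
ols = sm.OLS(outcome[selected],
             np.column_stack([X[selected], imr[selected]])).fit()
print(ols.params)                              # last coefficient multiplies the IMR
```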

References

  1. Grimmett, G.; Stirzaker, D. (2001). Probability and Random Processes (3rd ed.). Oxford: Oxford University Press. p. 98. ISBN 0-19-857223-9. https://books.google.com/books?id=G3ig-0M4wSIC&pg=PA98. 
  2. Mills, John P. (1926). "Table of the Ratio: Area to Bounding Ordinate, for Any Portion of Normal Curve". Biometrika 18 (3/4): 395–400. doi:10.1093/biomet/18.3-4.395. 
  3. Klein, J. P.; Moeschberger, M. L. (2003). Survival Analysis: Techniques for Censored and Truncated Data. New York: Springer. p. 27. ISBN 0-387-95399-X. https://books.google.com/books?id=aO7xBwAAQBAJ&pg=PA27. 
  4. "Upper & lower bounds for the normal distribution function" (in en-US). 2018-06-02. https://www.johndcook.com/blog/norm-dist-bounds/. 
  5. Wainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge: Cambridge University Press. doi:10.1017/9781108627771. 
  6. Small, Christopher G. (2010). Expansions and Asymptotics for Statistics. Monographs on Statistics & Applied Probability. 115. CRC Press. pp. 48, 50–51, 88–90. ISBN 978-1-4200-1102-9. https://books.google.com/books?id=uXexXLoZnZAC&pg=PA48. 
  7. Greene, W. H. (2003). Econometric Analysis (Fifth ed.). Prentice-Hall. p. 759. ISBN 0-13-066189-9. 
  8. Tobin, J. (1958). "Estimation of relationships for limited dependent variables". Econometrica 26 (1): 24–36. doi:10.2307/1907382. http://cowles.yale.edu/sites/default/files/files/pub/d00/d0003-r.pdf. 
  9. Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 366–368. ISBN 0-674-00560-0. https://archive.org/details/advancedeconomet00amem. 
  10. Heckman, J. J. (1979). "Sample Selection as a Specification Error". Econometrica 47 (1): 153–161. doi:10.2307/1912352. 
  11. Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 368–373. ISBN 0-674-00560-0. https://archive.org/details/advancedeconomet00amem. 
  12. Heckman, J. J. (1976). "The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models". Annals of Economic and Social Measurement 5 (4): 475–492. 
