Linear probability model
In statistics, a linear probability model (LPM) is a special case of a binary regression model. Here the dependent variable for each observation takes values which are either 0 or 1. The probability of observing a 0 or 1 in any one case is treated as depending on one or more explanatory variables. For the "linear probability model", this relationship is a particularly simple one, and allows the model to be fitted by linear regression.
The model assumes that, for a binary outcome (Bernoulli trial), [math]\displaystyle{ Y }[/math], and its associated vector of explanatory variables, [math]\displaystyle{ X }[/math],[1]
- [math]\displaystyle{ \Pr(Y=1 | X=x) = x'\beta . }[/math]
For this model,
- [math]\displaystyle{ E[Y|X] = 0\cdot \Pr(Y=0|X) +1\cdot \Pr(Y=1|X) = \Pr(Y=1|X) =x'\beta, }[/math]
and hence the vector of parameters β can be estimated using least squares. Fitting by ordinary least squares is inefficient,[1] because the conditional variance [math]\displaystyle{ \operatorname{Var}(Y|X=x) = x'\beta(1-x'\beta) }[/math] varies between observations; the fit can be improved by an iterative scheme based on weighted least squares,[1] in which the fitted model from the previous iteration supplies estimates of these conditional variances. This approach can be related to fitting the model by maximum likelihood.[1]
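A minimal sketch of this two-step scheme is given below, using NumPy and simulated data; the data-generating values, sample size and variable names are illustrative assumptions rather than part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
x = rng.uniform(0.0, 1.0, size=n)
X = np.column_stack([np.ones(n), x])      # design matrix with an intercept
p_true = 0.2 + 0.5 * x                    # assumed true P(Y=1|X=x), kept inside (0, 1)
y = rng.binomial(1, p_true)

# Step 1: ordinary least squares gives consistent but inefficient estimates of beta.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: weight each observation by the inverse of its estimated conditional
# variance Var(Y|X=x) = p(x)(1 - p(x)) and solve the weighted normal equations.
p_hat = np.clip(X @ beta_ols, 1e-3, 1 - 1e-3)   # keep the weights well defined
w = 1.0 / (p_hat * (1.0 - p_hat))
Xw = X * w[:, None]                             # equivalent to W X with W = diag(w)
beta_wls = np.linalg.solve(Xw.T @ X, Xw.T @ y)

print("OLS estimate:", beta_ols)
print("WLS estimate:", beta_wls)
```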
A drawback of this model is that, unless restrictions are placed on [math]\displaystyle{ \beta }[/math], the estimated coefficients can imply probabilities outside the unit interval [math]\displaystyle{ [0,1] }[/math]. For this reason, models such as the logit model or the probit model are more commonly used.
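The sketch below illustrates this drawback on simulated data using the statsmodels package (the data-generating process and evaluation points are assumptions made purely for illustration): at extreme covariate values the fitted LPM returns "probabilities" outside [0, 1], whereas a logit fit does not.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
X = sm.add_constant(x)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))   # assumed logistic data-generating process
y = rng.binomial(1, p_true)

lpm = sm.OLS(y, X).fit()                  # linear probability model via least squares
logit = sm.Logit(y, X).fit(disp=0)        # logit model for comparison

x_new = sm.add_constant(np.array([-3.0, 0.0, 3.0]))  # evaluation points, incl. extremes
print("LPM predictions:  ", lpm.predict(x_new))   # can fall below 0 or exceed 1
print("Logit predictions:", logit.predict(x_new)) # always strictly inside (0, 1)
```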
Latent-variable formulation
More formally, the LPM can arise from a latent-variable formulation, usually found in the econometrics literature.[2] Assume the following regression model with a latent (unobservable) dependent variable:
- [math]\displaystyle{ y^* = b_0+ \mathbf x'\mathbf b + \varepsilon,\;\; \varepsilon\mid \mathbf x\sim U(-a,a). }[/math]
The critical assumption here is that the error term of this regression is a uniform random variable symmetric around zero, and hence has mean zero. For [math]\displaystyle{ \varepsilon \in [-a,a] }[/math], its cumulative distribution function is [math]\displaystyle{ F_{\varepsilon|\mathbf x}(\varepsilon\mid \mathbf x) = \frac {\varepsilon + a}{2a} }[/math] (the derivation below assumes that [math]\displaystyle{ b_0+ \mathbf x'\mathbf b }[/math] lies in [math]\displaystyle{ (-a,a) }[/math], so that this expression applies).
Define the indicator variable [math]\displaystyle{ y = 1 }[/math] if [math]\displaystyle{ y^* \gt 0 }[/math], and zero otherwise, and consider the conditional probability
- [math]\displaystyle{ {\rm Pr}(y =1\mid \mathbf x ) = {\rm Pr}(y^* \gt 0\mid \mathbf x) = {\rm Pr}(b_0+ \mathbf x'\mathbf b + \varepsilon\gt 0\mid \mathbf x) }[/math]
- [math]\displaystyle{ = {\rm Pr}(\varepsilon \gt - b_0- \mathbf x'\mathbf b\mid \mathbf x) = 1- {\rm Pr}(\varepsilon \leq - b_0- \mathbf x'\mathbf b\mid \mathbf x) }[/math]
- [math]\displaystyle{ =1- F_{\varepsilon|\mathbf x}(- b_0- \mathbf x'\mathbf b\mid \mathbf x) =1- \frac {- b_0- \mathbf x'\mathbf b + a}{2a} = \frac {b_0+a}{2a}+\frac {\mathbf x'\mathbf b}{2a}. }[/math]
This is exactly the linear probability model,
- [math]\displaystyle{ P(y =1\mid \mathbf x )= \beta_0 + \mathbf x'\beta }[/math]
with the mapping
- [math]\displaystyle{ \beta_0 = \frac {b_0+a}{2a},\;\; \beta=\frac{\mathbf b}{2a}. }[/math]
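This mapping can be checked with a short simulation; the sketch below uses NumPy, with the parameter values [math]\displaystyle{ b_0 = 0.1 }[/math], [math]\displaystyle{ b = 0.4 }[/math] and [math]\displaystyle{ a = 1 }[/math] chosen as arbitrary assumptions such that [math]\displaystyle{ b_0 + xb }[/math] always stays inside [math]\displaystyle{ (-a,a) }[/math].

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
b0, b, a = 0.1, 0.4, 1.0                  # assumed latent-model parameters

x = rng.uniform(-1.0, 1.0, size=n)
eps = rng.uniform(-a, a, size=n)          # symmetric, mean-zero uniform error
y_star = b0 + b * x + eps                 # latent dependent variable
y = (y_star > 0).astype(float)            # observed binary indicator

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit of the LPM

print("estimated (beta_0, beta):", beta_hat)
print("implied by the mapping:  ", [(b0 + a) / (2 * a), b / (2 * a)])
```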
This method is a general device for obtaining a conditional probability model of a binary variable: if the error term is assumed to follow a logistic distribution, we obtain the logit model; if it is assumed to be normal, we obtain the probit model; and if it is assumed to follow the distribution of the logarithm of a Weibull random variable (an extreme-value distribution), we obtain the complementary log-log model.
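The sketch below illustrates the three resulting response functions numerically (SciPy is used only for the distribution functions, and the index values are arbitrary assumptions); for a symmetric error distribution with CDF F, the derivation above gives [math]\displaystyle{ \Pr(y=1\mid \mathbf x) = 1 - F(-b_0 - \mathbf x'\mathbf b) = F(b_0 + \mathbf x'\mathbf b) }[/math].

```python
import numpy as np
from scipy.stats import logistic, norm

index = np.linspace(-2.0, 2.0, 5)          # assumed values of b0 + x'b

p_logit = logistic.cdf(index)              # logistic error      -> logit model
p_probit = norm.cdf(index)                 # normal error        -> probit model
p_cloglog = 1.0 - np.exp(-np.exp(index))   # extreme-value (log-Weibull) error
                                           #                     -> complementary log-log model

print("logit:  ", p_logit)
print("probit: ", p_probit)
print("cloglog:", p_cloglog)
```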
See also
References
Further reading
- Aldrich, John H.; Nelson, Forrest D. (1984). "The Linear Probability Model". Linear Probability, Logit, and Probit Models. Sage. pp. 9–29. ISBN 0-8039-2133-0. https://books.google.com/books?id=z0tmctgE1OYC&pg=PA9.
- Amemiya, Takeshi (1985). "Qualitative Response Models". Advanced Econometrics. Oxford: Basil Blackwell. pp. 267–359. ISBN 0-631-13345-3. https://books.google.com/books?id=0bzGQE14CwEC&pg=PA267.
- Wooldridge, Jeffrey M. (2013). "A Binary Dependent Variable: The Linear Probability Model". Introductory Econometrics: A Modern Approach (5th international ed.). Mason, OH: South-Western. pp. 238–243. ISBN 978-1-111-53439-4.
- Horrace, William C.; Oaxaca, Ronald L. (2006). "Results on the Bias and Inconsistency of Ordinary Least Squares for the Linear Probability Model". Economics Letters 90: 321–327.
Original source: https://en.wikipedia.org/wiki/Linear_probability_model.