Errors-in-variables models
In statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.^{[citation needed]}
In the case when some regressors have been measured with errors, estimation based on the standard assumption leads to inconsistent estimates, meaning that the parameter estimates do not tend to the true values even in very large samples. For simple linear regression the effect is an underestimate of the coefficient, known as the attenuation bias. In nonlinear models the direction of the bias is likely to be more complicated.^{[1]}^{[2]}^{[3]}
Motivating example
Consider a simple linear regression model of the form
 [math]\displaystyle{ y_{t} = \alpha + \beta x_{t}^{*} + \varepsilon_t\,, \quad t=1,\ldots,T, }[/math]
where [math]\displaystyle{ x_{t}^{*} }[/math] denotes the true but unobserved regressor. Instead we observe this value with an error:
 [math]\displaystyle{ x_{t} = x_{t}^{*} + \eta_{t}\, }[/math]
where the measurement error [math]\displaystyle{ \eta_{t} }[/math] is assumed to be independent of the true value [math]\displaystyle{ x_{t}^{*} }[/math].
If the [math]\displaystyle{ y_{t} }[/math]′s are simply regressed on the [math]\displaystyle{ x_{t} }[/math]′s (see simple linear regression), then the estimator for the slope coefficient is
 [math]\displaystyle{ \hat{\beta}_x = \frac{\tfrac{1}{T}\sum_{t = 1}^T(x_t-\bar{x})(y_t-\bar{y})} {\tfrac{1}{T}\sum_{t=1}^T(x_t-\bar{x})^2}\,, }[/math]
which converges as the sample size [math]\displaystyle{ T }[/math] increases without bound:
 [math]\displaystyle{ \hat{\beta}_x \xrightarrow{p} \frac{\operatorname{Cov}[\,x_t,y_t\,]}{\operatorname{Var}[\,x_t\,]} = \frac{\beta \sigma^2_{x^*}} {\sigma_{x^*}^2 + \sigma_\eta^2} = \frac{\beta} {1 + \sigma_\eta^2/\sigma_{x^*}^2}\,. }[/math]
This is in contrast to the "true" effect of [math]\displaystyle{ \beta }[/math], which would be estimated using the [math]\displaystyle{ x_{t}^{*} }[/math]:
 [math]\displaystyle{ \hat{\beta} = \frac{\tfrac{1}{T}\sum_{t=1}^T(x^*_t-\bar{x}^*)(y_t-\bar{y})} {\tfrac{1}{T}\sum_{t=1}^T(x^*_t-\bar{x}^*)^2}\,. }[/math]
Variances are nonnegative, so that in the limit the estimated [math]\displaystyle{ \hat{\beta}_x }[/math] is smaller than [math]\displaystyle{ \hat{\beta} }[/math], an effect which statisticians call attenuation or regression dilution.^{[4]} Thus the ‘naïve’ least squares estimator [math]\displaystyle{ \hat{\beta}_x }[/math] is an inconsistent estimator for [math]\displaystyle{ \beta }[/math]. However, [math]\displaystyle{ \hat{\beta}_x }[/math] is a consistent estimator of the parameter required for a best linear predictor of [math]\displaystyle{ y }[/math] given the observed [math]\displaystyle{ x_t }[/math]: in some applications this may be what is required, rather than an estimate of the ‘true’ regression coefficient [math]\displaystyle{ \beta }[/math], although that would assume that the variance of the errors in the estimation and prediction is identical. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the [math]\displaystyle{ y_{t} }[/math]′s to the actually observed [math]\displaystyle{ x_{t} }[/math]′s, in a simple linear regression, is given by
 [math]\displaystyle{ \beta_x = \frac{\operatorname{Cov}[\,x_t,y_t\,]}{\operatorname{Var}[\,x_t\,]} . }[/math]
It is this coefficient, rather than [math]\displaystyle{ \beta }[/math], that would be required for constructing a predictor of [math]\displaystyle{ y }[/math] based on an observed [math]\displaystyle{ x }[/math] which is subject to noise.
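The attenuation factor 1/(1 + σ²_η/σ²_{x*}) above is easy to reproduce numerically. The following sketch (hypothetical parameter values, plain NumPy) regresses y on the noisy x and compares the naive slope with its probability limit:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
alpha, beta = 1.0, 2.0                         # assumed true intercept and slope
sigma_xstar, sigma_eta, sigma_eps = 1.0, 0.5, 0.3

x_star = rng.normal(0.0, sigma_xstar, T)       # latent regressor
x = x_star + rng.normal(0.0, sigma_eta, T)     # observed with measurement error
y = alpha + beta * x_star + rng.normal(0.0, sigma_eps, T)

# naive least-squares slope of y on the observed x
beta_x = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
# theoretical probability limit: beta / (1 + sigma_eta^2 / sigma_xstar^2)
beta_limit = beta / (1.0 + sigma_eta**2 / sigma_xstar**2)

print(f"naive slope {beta_x:.3f}, plim {beta_limit:.3f}")   # both near 1.6, well below beta = 2
```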
It can be argued that almost all existing data sets contain errors of different nature and magnitude, so that attenuation bias is extremely frequent (although in multivariate regression the direction of bias is ambiguous^{[5]}). Jerry Hausman sees this as an iron law of econometrics: "The magnitude of the estimate is usually smaller than expected."^{[6]}
Specification
Usually measurement error models are described using the latent variables approach. If [math]\displaystyle{ y }[/math] is the response variable and [math]\displaystyle{ x }[/math] are observed values of the regressors, then it is assumed there exist some latent variables [math]\displaystyle{ y^{*} }[/math] and [math]\displaystyle{ x^{*} }[/math] which follow the model's “true” functional relationship [math]\displaystyle{ g(\cdot) }[/math], and such that the observed quantities are their noisy observations:
 [math]\displaystyle{ \begin{cases} y^* = g(x^*\!,w\,|\,\theta),\\ y = y^{*} + \varepsilon, \\ x = x^{*} + \eta, \end{cases} }[/math]
where [math]\displaystyle{ \theta }[/math] is the model's parameter and [math]\displaystyle{ w }[/math] are those regressors which are assumed to be error-free (for example when linear regression contains an intercept, the regressor which corresponds to the constant certainly has no "measurement errors"). Depending on the specification these error-free regressors may or may not be treated separately; in the latter case it is simply assumed that corresponding entries in the variance matrix of [math]\displaystyle{ \eta }[/math]'s are zero.
The variables [math]\displaystyle{ y }[/math], [math]\displaystyle{ x }[/math], [math]\displaystyle{ w }[/math] are all observed, meaning that the statistician possesses a data set of [math]\displaystyle{ n }[/math] statistical units [math]\displaystyle{ \left\{ y_{i}, x_{i}, w_{i} \right\}_{i = 1, \dots, n} }[/math] which follow the data generating process described above; the latent variables [math]\displaystyle{ x^* }[/math], [math]\displaystyle{ y^* }[/math], [math]\displaystyle{ \varepsilon }[/math], and [math]\displaystyle{ \eta }[/math] are not observed however.
This specification does not encompass all the existing errors-in-variables models. For example, in some of them the function [math]\displaystyle{ g(\cdot) }[/math] may be nonparametric or semiparametric. Other approaches model the relationship between [math]\displaystyle{ y^* }[/math] and [math]\displaystyle{ x^* }[/math] as distributional instead of functional, that is they assume that [math]\displaystyle{ y^* }[/math] conditionally on [math]\displaystyle{ x^* }[/math] follows a certain (usually parametric) distribution.
Terminology and assumptions
 The observed variable [math]\displaystyle{ x }[/math] may be called the manifest, indicator, or proxy variable.
 The unobserved variable [math]\displaystyle{ x^* }[/math] may be called the latent or true variable. It may be regarded either as an unknown constant (in which case the model is called a functional model), or as a random variable (correspondingly a structural model).^{[7]}
 The relationship between the measurement error [math]\displaystyle{ \eta }[/math] and the latent variable [math]\displaystyle{ x^* }[/math] can be modeled in different ways:
 Classical errors: [math]\displaystyle{ \eta \perp x^* }[/math] the errors are independent of the latent variable. This is the most common assumption; it implies that the errors are introduced by the measuring device and that their magnitude does not depend on the value being measured.
 Mean-independence: [math]\displaystyle{ \operatorname{E}[\eta\mid x^*]\,=\,0, }[/math] the errors are mean-zero for every value of the latent regressor. This is a less restrictive assumption than the classical one,^{[8]} as it allows for the presence of heteroscedasticity or other effects in the measurement errors.
 Berkson's errors: [math]\displaystyle{ \eta\,\perp\,x, }[/math] the errors are independent of the observed regressor x.^{[9]} This assumption has very limited applicability. One example is round-off errors: for example, if a person's true age is a continuous random variable, whereas the observed age is truncated to the next smallest integer, then the truncation error is approximately independent of the observed age. Another possibility is with the fixed-design experiment: for example, if a scientist decides to make a measurement at a certain predetermined moment of time [math]\displaystyle{ x }[/math], say at [math]\displaystyle{ x = 10 s }[/math], then the real measurement may occur at some other value of [math]\displaystyle{ x^* }[/math] (for example due to her finite reaction time), and such measurement error will be generally independent of the "observed" value of the regressor.
 Misclassification errors: special case used for the dummy regressors. If [math]\displaystyle{ x^* }[/math] is an indicator of a certain event or condition (such as person is male/female, some medical treatment given/not, etc.), then the measurement error in such a regressor will correspond to incorrect classification, similar to type I and type II errors in statistical testing. In this case the error [math]\displaystyle{ \eta }[/math] may take only 3 possible values, and its distribution conditional on [math]\displaystyle{ x^* }[/math] is modeled with two parameters: [math]\displaystyle{ \alpha = \operatorname{Pr}[\eta = -1 \mid x^* = 1] }[/math], and [math]\displaystyle{ \beta =\operatorname{Pr}[\eta = 1 \mid x^*=0] }[/math]. The necessary condition for identification is that [math]\displaystyle{ \alpha + \beta \lt 1 }[/math], that is, misclassification should not happen "too often". (This idea can be generalized to discrete variables with more than two possible values.)
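As a minimal illustration of the misclassification case, the sketch below (hypothetical probabilities and effect size) simulates a binary regressor, flips it with the probabilities α and β defined above, and shows that the regression on the misclassified indicator understates the true group difference:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a_mis, b_mis = 0.10, 0.05                       # alpha = Pr[eta = -1 | x* = 1], beta = Pr[eta = 1 | x* = 0]

x_star = rng.binomial(1, 0.4, n)                # true binary indicator
flip = np.where(x_star == 1,
                rng.random(n) < a_mis,          # a true 1 recorded as 0 with probability alpha
                rng.random(n) < b_mis)          # a true 0 recorded as 1 with probability beta
x = np.where(flip, 1 - x_star, x_star)          # observed, possibly misclassified indicator

y = 3.0 + 2.0 * x_star + rng.normal(0.0, 1.0, n)   # true group difference equals 2

slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"estimated group difference: {slope:.2f}")   # noticeably below 2
```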
Linear model
Linear errors-in-variables models were studied first, probably because linear models were so widely used and they are easier than nonlinear ones. Unlike standard least squares regression (OLS), extending errors-in-variables regression (EiV) from the simple to the multivariable case is not straightforward.
Simple linear model
The simple linear errors-in-variables model was already presented in the "motivation" section:
 [math]\displaystyle{ \begin{cases} y_t = \alpha + \beta x_t^* + \varepsilon_t, \\ x_t = x_t^* + \eta_t, \end{cases} }[/math]
where all variables are scalar. Here α and β are the parameters of interest, whereas σ_{ε} and σ_{η}—standard deviations of the error terms—are the nuisance parameters. The "true" regressor x* is treated as a random variable (structural model), independent of the measurement error η (classic assumption).
This model is identifiable in two cases: either (1) the latent regressor x* is not normally distributed, or (2) x* has a normal distribution, but neither ε_{t} nor η_{t} is divisible by a normal distribution.^{[10]} That is, the parameters α, β can be consistently estimated from the data set [math]\displaystyle{ \scriptstyle(x_t,\,y_t)_{t=1}^T }[/math] without any additional information, provided the latent regressor is not Gaussian.
Before this identifiability result was established, statisticians attempted to apply the maximum likelihood technique by assuming that all variables are normal, and then concluded that the model is not identified. The suggested remedy was to assume that some of the parameters of the model are known or can be estimated from an outside source. Such estimation methods include^{[11]}
 Deming regression — assumes that the ratio δ = σ²_{ε}/σ²_{η} is known. This could be appropriate, for example, when errors in y and x are both caused by measurements and the accuracy of the measuring devices or procedures is known. The case when δ = 1 is also known as orthogonal regression.
 Regression with known reliability ratio λ = σ²_{∗}/ ( σ²_{η} + σ²_{∗}), where σ²_{∗} is the variance of the latent regressor. Such an approach may be applicable, for example, when repeated measurements of the same unit are available, or when the reliability ratio is known from an independent study. In this case the consistent estimate of the slope is equal to the least-squares estimate divided by λ (a numerical sketch is given after this list).
 Regression with known σ²_{η} may occur when the source of the errors in x's is known and their variance can be calculated. This could include rounding errors, or errors introduced by the measuring device. When σ²_{η} is known we can compute the reliability ratio as λ = ( σ²_{x} − σ²_{η}) / σ²_{x} and reduce the problem to the previous case.
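The known-error-variance case reduces to the reliability-ratio correction described above: estimate λ from the sample variance of x and divide the least-squares slope by it. A minimal sketch (assuming σ²_{η} is known exactly; all other values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50_000
beta, sigma_eta = 2.0, 0.5                       # assumed true slope and known error standard deviation

x_star = rng.normal(0.0, 1.0, T)
x = x_star + rng.normal(0.0, sigma_eta, T)
y = 1.0 + beta * x_star + rng.normal(0.0, 0.3, T)

b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)                 # attenuated least-squares slope
lam = (np.var(x, ddof=1) - sigma_eta**2) / np.var(x, ddof=1)   # reliability ratio lambda
b_corrected = b_ols / lam                                      # consistent estimate of beta

print(f"OLS slope {b_ols:.3f}, corrected slope {b_corrected:.3f}")   # corrected value is near 2
```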
Estimation methods that do not assume knowledge of some of the parameters of the model include
 Method of moments — the GMM estimator based on the third (or higher) order joint cumulants of observable variables. The slope coefficient can be estimated from ^{[12]}
 [math]\displaystyle{ \hat\beta = \frac{\hat{K}(n_1,n_2+1)}{\hat{K}(n_1+1,n_2)}, \quad n_1,n_2\gt 0, }[/math]
where (n_{1},n_{2}) are such that K(n_{1}+1,n_{2}) — the joint cumulant of (x,y) — is not zero. In the case when the third central moment of the latent regressor x* is nonzero, the formula reduces to
 [math]\displaystyle{ \hat\beta = \frac{\tfrac{1}{T}\sum_{t=1}^T (x_t-\bar x)(y_t-\bar y)^2} {\tfrac{1}{T}\sum_{t=1}^T (x_t-\bar x)^2(y_t-\bar y)}\ . }[/math]
 Instrumental variables — a regression which requires that certain additional data variables z, called instruments, are available. These variables should be uncorrelated with the errors in the equation for the dependent (outcome) variable (valid), and they should also be correlated (relevant) with the true regressors x* (a numerical sketch is given after this list). If such variables can be found, the estimator takes the form
 [math]\displaystyle{ \hat\beta = \frac{\tfrac{1}{T}\sum_{t=1}^T (z_t-\bar z)(y_t-\bar y)} {\tfrac{1}{T}\sum_{t=1}^T (z_t-\bar z)(x_t-\bar x)}\ . }[/math]
 The geometric mean functional relationship. This treats both variables as having the same reliability. The resulting slope is the geometric mean of the ordinary least squares slope and the reverse least squares slope.^{[13]}
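The two displayed estimators for the simple model are straightforward to compute. The sketch below (illustrative parameter values; a skewed latent regressor so that its third central moment is nonzero, and a second noisy measurement playing the role of the instrument) evaluates both:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200_000
beta = 2.0

x_star = rng.exponential(1.0, T)                 # skewed latent regressor (nonzero third moment)
x = x_star + rng.normal(0.0, 0.5, T)             # observed regressor with classical error
y = 1.0 + beta * x_star + rng.normal(0.0, 0.3, T)
z = x_star + rng.normal(0.0, 0.7, T)             # instrument: correlated with x*, independent of the errors

dx, dy, dz = x - x.mean(), y - y.mean(), z - z.mean()

# third-moment estimator:  beta_hat = mean(dx * dy^2) / mean(dx^2 * dy)
beta_moments = np.mean(dx * dy**2) / np.mean(dx**2 * dy)
# instrumental-variables estimator:  beta_hat = mean(dz * dy) / mean(dz * dx)
beta_iv = np.mean(dz * dy) / np.mean(dz * dx)

print(f"third-moment {beta_moments:.3f}, IV {beta_iv:.3f}")   # both near beta = 2
```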
Multivariable linear model
The multivariable model looks exactly like the simple linear model, only this time β, η_{t}, x_{t} and x*_{t} are k×1 vectors.
 [math]\displaystyle{ \begin{cases} y_t = \alpha + \beta'x_t^* + \varepsilon_t, \\ x_t = x_t^* + \eta_t. \end{cases} }[/math]
In the case when (ε_{t},η_{t}) is jointly normal, the parameter β is not identified if and only if there is a nonsingular k×k block matrix [a A], where a is a k×1 vector such that a′x* is distributed normally and independently of A′x*. In the case when ε_{t}, η_{t1},..., η_{tk} are mutually independent, the parameter β is not identified if and only if in addition to the conditions above some of the errors can be written as the sum of two independent variables one of which is normal.^{[14]}
Some of the estimation methods for multivariable linear models are
 Total least squares is an extension of Deming regression to the multivariable setting. When all the k+1 components of the vector (ε,η) have equal variances and are independent, this is equivalent to running the orthogonal regression of y on the vector x — that is, the regression which minimizes the sum of squared distances between points (y_{t},x_{t}) and the kdimensional hyperplane of "best fit".
 The method of moments estimator ^{[15]} can be constructed based on the moment conditions E[z_{t}·(y_{t} − α − β'x_{t})] = 0, where the (5k+3)dimensional vector of instruments z_{t} is defined as
 [math]\displaystyle{ \begin{align} & z_t = \left( 1\ z_{t1}'\ z_{t2}'\ z_{t3}'\ z_{t4}'\ z_{t5}'\ z_{t6}'\ z_{t7}' \right)', \quad \text{where} \\ & z_{t1} = x_t \circ x_t \\ & z_{t2} = x_t y_t \\ & z_{t3} = y_t^2 \\ & z_{t4} = x_t \circ x_t \circ x_t - 3\big(\operatorname{E}[x_tx_t'] \circ I_k\big)x_t \\ & z_{t5} = x_t \circ x_t y_t - 2\big(\operatorname{E}[y_tx_t'] \circ I_k\big)x_t - y_t\big(\operatorname{E}[x_tx_t'] \circ I_k\big)\iota_k \\ & z_{t6} = x_t y_t^2 - \operatorname{E}[y_t^2]x_t - 2y_t\operatorname{E}[x_ty_t] \\ & z_{t7} = y_t^3 - 3y_t\operatorname{E}[y_t^2] \end{align} }[/math]
where [math]\displaystyle{ \circ }[/math] designates the Hadamard product of matrices, and variables x_{t}, y_{t} have been preliminarily demeaned. The authors of the method suggest using Fuller's modified IV estimator.^{[16]}
This method can be extended to use moments higher than the third order, if necessary, and to accommodate variables measured without error.^{[17]}
 The instrumental variables approach requires us to find additional data variables z_{t} that serve as instruments for the mismeasured regressors x_{t}. This method is the simplest from the implementation point of view; however, its disadvantage is that it requires collecting additional data, which may be costly or even impossible. When the instruments can be found, the estimator takes the standard form (a numerical sketch is given after this list)
 [math]\displaystyle{ \hat\beta = \big(X'Z(Z'Z)^{-1}Z'X\big)^{-1}X'Z(Z'Z)^{-1}Z'y. }[/math]
 The impartial fitting approach treats all variables in the same way by assuming equal reliability, and does not require any distinction between explanatory and response variables as the resulting equation can be rearranged. It is the simplest measurement error model, and is a generalization of the geometric mean functional relationship mentioned above for two variables. It only requires covariances to be computed, and so can be estimated using basic spreadsheet functions. ^{[18]}
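For the instrumental-variables case, the sandwich formula above is a few lines of linear algebra. The sketch below (illustrative dimensions and noise levels; a second, independently contaminated measurement of x* serves as the instrument) recovers β where the naive least-squares fit on the contaminated X would be attenuated:

```python
import numpy as np

rng = np.random.default_rng(4)
T, k = 10_000, 2
beta = np.array([1.5, -0.7])

x_star = rng.normal(size=(T, k))                      # latent regressors
X = x_star + 0.5 * rng.normal(size=(T, k))            # mismeasured regressors
Z = x_star + 0.7 * rng.normal(size=(T, k))            # instruments (second noisy measurement)
y = x_star @ beta + 0.3 * rng.normal(size=T)

# include the constant, which is measured without error, in both X and Z
X1 = np.column_stack([np.ones(T), X])
Z1 = np.column_stack([np.ones(T), Z])

# beta_hat = (X'Z (Z'Z)^{-1} Z'X)^{-1} X'Z (Z'Z)^{-1} Z'y
PzX = Z1 @ np.linalg.solve(Z1.T @ Z1, Z1.T @ X1)      # projection of X onto the instrument space
beta_hat = np.linalg.solve(X1.T @ PzX, PzX.T @ y)

print(beta_hat)                                       # approximately [0, 1.5, -0.7]
```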
Nonlinear models
A generic nonlinear measurement error model takes the form
 [math]\displaystyle{ \begin{cases} y_t = g(x^*_t) + \varepsilon_t, \\ x_t = x^*_t + \eta_t. \end{cases} }[/math]
Here function g can be either parametric or nonparametric. When function g is parametric it will be written as g(x*, β).
For a general vector-valued regressor x* the conditions for model identifiability are not known. However, in the case of scalar x* the model is identified unless the function g is of the "log-exponential" form ^{[19]}
 [math]\displaystyle{ g(x^*) = a + b \ln\big(e^{cx^*} + d\big) }[/math]
and the latent regressor x* has density
 [math]\displaystyle{ f_{x^*}(x) = \begin{cases} A e^{-Be^{Cx}+CDx}(e^{Cx}+E)^{-F}, & \text{if}\ d\gt 0 \\ A e^{-Bx^2 + Cx}, & \text{if}\ d=0 \end{cases} }[/math]
where constants A,B,C,D,E,F may depend on a,b,c,d.
Despite this optimistic result, as of now no methods exist for estimating nonlinear errors-in-variables models without any extraneous information. However there are several techniques which make use of some additional data: either the instrumental variables, or repeated observations.
Instrumental variables methods
 Newey's simulated moments method^{[20]} for parametric models — requires that there is an additional set of observed predictor variables z_{t}, such that the true regressor can be expressed as
 [math]\displaystyle{ x^*_t = \pi_0'z_t + \sigma_0 \zeta_t, }[/math]
where π_{0} and σ_{0} are (unknown) constant matrices, and ζ_{t} ⊥ z_{t}. The coefficient π_{0} can be estimated using standard least squares regression of x on z. The distribution of ζ_{t} is unknown, however we can model it as belonging to a flexible parametric family — the Edgeworth series:
 [math]\displaystyle{ f_\zeta(v;\,\gamma) = \phi(v)\,\textstyle\sum_{j=1}^J \!\gamma_j v^j }[/math]
where ϕ is the standard normal distribution.
Simulated moments can be computed using the importance sampling algorithm: first we generate several random variables {v_{ts} ~ ϕ, s = 1,…,S, t = 1,…,T} from the standard normal distribution, then we compute the moments at tth observation as
 [math]\displaystyle{ m_t(\theta) = A(z_t) \frac{1}{S}\sum_{s=1}^S H(x_t,y_t,z_t,v_{ts};\theta) \sum_{j=1}^J\!\gamma_j v_{ts}^j, }[/math]
where θ = (β, σ, γ), A is just some function of the instrumental variables z, and H is a twocomponent vector of moments
 [math]\displaystyle{ \begin{align} & H_1(x_t,y_t,z_t,v_{ts};\theta) = y_t - g(\hat\pi'z_t + \sigma v_{ts}, \beta), \\ & H_2(x_t,y_t,z_t,v_{ts};\theta) = z_t y_t - (\hat\pi'z_t + \sigma v_{ts}) g(\hat\pi'z_t + \sigma v_{ts}, \beta) \end{align} }[/math]
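A compact sketch of how the simulated moments m_t(θ) could be assembled for a candidate θ is given below. The functional form g(x, β) = β₁ + β₂x², the choice A(z_t) = (1, z_t)', and the scalar instrument are all illustrative assumptions, not part of the method itself:

```python
import numpy as np

def simulated_moments(theta, y, z, pi_hat, V, J=2):
    """Simulated moment vectors m_t(theta) for each observation t.

    theta packs (beta_1, beta_2, sigma, gamma_1..gamma_J); V is a T-by-S array of
    standard normal draws held fixed while theta varies (importance sampling).
    """
    beta, sigma, gamma = theta[:2], theta[2], theta[3:3 + J]
    T, S = V.shape
    m = np.zeros((T, 4))
    for t in range(T):
        xs = pi_hat * z[t] + sigma * V[t]                              # simulated draws of x*_t
        w = sum(g_j * V[t] ** (j + 1) for j, g_j in enumerate(gamma))  # sum_j gamma_j v^j
        g_val = beta[0] + beta[1] * xs**2                              # assumed g(x*, beta)
        H1 = np.mean((y[t] - g_val) * w)
        H2 = np.mean((z[t] * y[t] - xs * g_val) * w)
        m[t] = np.outer([1.0, z[t]], [H1, H2]).ravel()                 # A(z_t) applied to both components
    return m

# A GMM routine would then choose theta so that the sample average of m_t(theta) is close to zero.
```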
Repeated observations
In this approach two (or maybe more) repeated observations of the regressor x* are available. Both observations contain their own measurement errors; however, those errors are required to be independent:
 [math]\displaystyle{ \begin{cases} x_{1t} = x^*_t + \eta_{1t}, \\ x_{2t} = x^*_t + \eta_{2t}, \end{cases} }[/math]
where x* ⊥ η_{1} ⊥ η_{2}. Variables η_{1}, η_{2} need not be identically distributed (although if they are, the efficiency of the estimator can be slightly improved). With only these two observations it is possible to consistently estimate the density function of x* using Kotlarski's deconvolution technique.^{[21]}
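A sketch of this first step, estimating the characteristic function of x* from the two contaminated measurements via Kotlarski's identity, is shown below. The latent distribution and error scale are illustrative choices, and the identity as used here requires the measurement errors to have mean zero:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 100_000
x_star = rng.gamma(2.0, 1.0, T)                    # latent variable (illustrative choice)
x1 = x_star + rng.normal(0.0, 0.5, T)              # two repeated measurements with
x2 = x_star + rng.normal(0.0, 0.5, T)              # independent, zero-mean errors

def phi_xstar(v_grid):
    """phi_{x*}(v) = exp( int_0^v  E[i x1 e^{i s x2}] / E[e^{i s x2}] ds ), on a grid."""
    out = np.empty(len(v_grid), dtype=complex)
    integrand_prev, acc, s_prev = 0.0 + 0.0j, 0.0 + 0.0j, 0.0
    for k, s in enumerate(v_grid):
        integrand = np.mean(1j * x1 * np.exp(1j * s * x2)) / np.mean(np.exp(1j * s * x2))
        acc += 0.5 * (integrand + integrand_prev) * (s - s_prev)   # trapezoidal accumulation
        out[k] = np.exp(acc)
        integrand_prev, s_prev = integrand, s
    return out

v = np.linspace(0.0, 2.0, 41)
estimate = phi_xstar(v)
true_cf = (1.0 - 1j * v) ** (-2.0)                 # characteristic function of Gamma(2, 1)
print(estimate[-1], true_cf[-1])                   # the two should be reasonably close
```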
 Li's conditional density method for parametric models.^{[22]} The regression equation can be written in terms of the observable variables as
 [math]\displaystyle{ \operatorname{E}[\,y_t\mid x_t\,] = \int g(x^*_t,\beta) f_{x^*\mid x}(x^*_t\mid x_t)dx^*_t , }[/math]
where it would be possible to compute the integral if we knew the conditional density function ƒ_{x*|x}. If this function could be known or estimated, then the problem turns into standard nonlinear regression, which can be estimated for example using the NLLS method.
Assuming for simplicity that η_{1}, η_{2} are identically distributed, this conditional density can be computed as [math]\displaystyle{ \hat f_{x^*\mid x}(x^*\mid x) = \frac{\hat f_{x^*}(x^*)}{\hat f_{x}(x)} \prod_{j=1}^k \hat f_{\eta_{j}}\big( x_{j} - x^*_{j} \big), }[/math]
where with slight abuse of notation x_{j} denotes the jth component of a vector.
All densities in this formula can be estimated using inversion of the empirical characteristic functions. In particular, [math]\displaystyle{ \begin{align} & \hat \varphi_{\eta_j}(v) = \frac{\hat\varphi_{x_j}(v,0)}{\hat\varphi_{x^*_j}(v)}, \quad \text{where } \hat\varphi_{x_j}(v_1,v_2) = \frac{1}{T}\sum_{t=1}^T e^{iv_1x_{1tj}+iv_2x_{2tj}}, \\ \hat\varphi_{x^*_j}(v) = \exp \int_0^v \frac{\partial\hat\varphi_{x_j}(0,v_2)/\partial v_1}{\hat\varphi_{x_j}(0,v_2)}dv_2, \\ & \hat \varphi_x(u) = \frac{1}{2T}\sum_{t=1}^T \Big( e^{iu'x_{1t}} + e^{iu'x_{2t}} \Big), \quad \hat \varphi_{x^*}(u) = \frac{\hat\varphi_x(u)}{\prod_{j=1}^k \hat\varphi_{\eta_j}(u_j)}. \end{align} }[/math]
In order to invert these characteristic functions one has to apply the inverse Fourier transform, with a trimming parameter C needed to ensure numerical stability. For example:
 [math]\displaystyle{ \hat f_x(x) = \frac{1}{(2\pi)^k} \int_{-C}^{C}\cdots\int_{-C}^C e^{-iu'x} \hat\varphi_x(u) du. }[/math]
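The sketch below (scalar case, illustrative grid sizes and trimming constant) shows such a trimmed inversion of an empirical characteristic function into a density estimate:

```python
import numpy as np

def density_from_ecf(x_grid, data, C=3.0, n_u=301):
    """Estimate a density by inverting the empirical characteristic function,
    integrating only over |u| <= C (the trimming parameter) for numerical stability."""
    u = np.linspace(-C, C, n_u)
    du = u[1] - u[0]
    ecf = np.array([np.mean(np.exp(1j * ui * data)) for ui in u])        # empirical CF
    # f_hat(x) = 1/(2*pi) * integral_{-C}^{C} e^{-i u x} ecf(u) du
    return np.array([np.real(np.sum(np.exp(-1j * u * xi) * ecf)) * du
                     for xi in x_grid]) / (2.0 * np.pi)

rng = np.random.default_rng(6)
sample = rng.normal(0.0, 1.0, 20_000)
xs = np.linspace(-2.0, 2.0, 5)
print(np.round(density_from_ecf(xs, sample), 3))   # roughly the N(0, 1) density at those points
```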
 A related estimator applies when the model is linear in the parameters but nonlinear in the latent regressor, with two repeated observations of that regressor:
 [math]\displaystyle{ \begin{cases} y_t = \textstyle \sum_{j=1}^k \beta_j g_j(x^*_t) + \sum_{j=1}^\ell \beta_{k+j}w_{jt} + \varepsilon_t, \\ x_{1t} = x^*_t + \eta_{1t}, \\ x_{2t} = x^*_t + \eta_{2t}, \end{cases} }[/math]
where w_{t} represents variables measured without errors. The regressor x* here is scalar (the method can be extended to the case of vector x* as well).
If not for the measurement errors, this would have been a standard linear model with the estimator [math]\displaystyle{ \hat{\beta} = \big(\hat{\operatorname{E}}[\,\xi_t\xi_t'\,]\big)^{-1} \hat{\operatorname{E}}[\,\xi_t y_t\,], }[/math]
where
 [math]\displaystyle{ \xi_t'= (g_1(x^*_t), \cdots ,g_k(x^*_t), w_{1,t}, \cdots , w_{l,t}). }[/math]
It turns out that all the expected values in this formula are estimable using the same deconvolution trick. In particular, for a generic observable w_{t} (which could be 1, w_{1t}, …, w_{ℓ t}, or y_{t}) and some function h (which could represent any g_{j} or g_{i}g_{j}) we have
 [math]\displaystyle{ \operatorname{E}[\,w_th(x^*_t)\,] = \frac{1}{2\pi} \int_{-\infty}^\infty \varphi_h(u)\psi_w(u)du, }[/math]
where φ_{h} is the Fourier transform of h(x*), but using the same convention as for the characteristic functions,
 [math]\displaystyle{ \varphi_h(u)=\int e^{iux}h(x)dx }[/math],
and
 [math]\displaystyle{ \psi_w(u) = \operatorname{E}[\,w_te^{iux^*}\,] = \frac{\operatorname{E}[w_te^{iux_{1t}}]}{\operatorname{E}[e^{iux_{1t}}]} \exp \int_0^u i\frac{\operatorname{E}[x_{2t}e^{ivx_{1t}}]}{\operatorname{E}[e^{ivx_{1t}}]}dv }[/math]
The resulting estimator [math]\displaystyle{ \scriptstyle\hat\beta }[/math] is consistent and asymptotically normal.
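A sketch of the key ingredient, the function ψ_w(u) displayed above, estimated from the two repeated measurements (scalar case; grid sizes are illustrative, and zero-mean measurement errors are assumed):

```python
import numpy as np

def psi_w(u_grid, w, x1, x2, n_steps=200):
    """Estimate psi_w(u) = E[w_t e^{i u x*_t}] using the formula above: a ratio of
    sample means times the exponential of a numerically integrated correction term."""
    def ratio(v):                                   # i * E[x2 e^{i v x1}] / E[e^{i v x1}]
        return 1j * np.mean(x2 * np.exp(1j * v * x1)) / np.mean(np.exp(1j * v * x1))

    out = np.empty(len(u_grid), dtype=complex)
    for k, u in enumerate(u_grid):
        vs = np.linspace(0.0, u, n_steps)
        vals = np.array([ratio(v) for v in vs])
        integral = np.sum((vals[1:] + vals[:-1]) / 2.0 * np.diff(vs))   # trapezoidal rule
        lead = np.mean(w * np.exp(1j * u * x1)) / np.mean(np.exp(1j * u * x1))
        out[k] = lead * np.exp(integral)
    return out

# With psi_w in hand, E[w_t h(x*_t)] is obtained by pairing it with the Fourier
# transform phi_h of h and integrating, as in the display above.
```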
 For a nonparametric model, where the functional form of g is left unspecified, the kernel (Nadaraya–Watson) estimator takes the form
 [math]\displaystyle{ \hat{g}(x) = \frac{\hat{\operatorname{E}}[\,y_tK_h(x^*_t - x)\,]}{\hat{\operatorname{E}}[\,K_h(x^*_t - x)\,]}, }[/math]
 where K_h is a kernel with bandwidth h; both expectations involve the latent x*_t and can be estimated with the same deconvolution technique described above.
References
 ↑ Griliches, Zvi; Ringstad, Vidar (1970). "Errors-in-the-variables bias in nonlinear contexts". Econometrica 38 (2): 368–370. doi:10.2307/1913020.
 ↑ Chesher, Andrew (1991). "The effect of measurement error". Biometrika 78 (3): 451–462. doi:10.1093/biomet/78.3.451.
 ↑ Carroll, Raymond J.; Ruppert, David; Stefanski, Leonard A.; Crainiceanu, Ciprian (2006). Measurement Error in Nonlinear Models: A Modern Perspective (Second ed.). ISBN 9781584886334. https://books.google.com/books?id=9kBx5CPZCqkC&pg=PA41.
 ↑ Greene, William H. (2003). Econometric Analysis (5th ed.). New Jersey: Prentice Hall. Chapter 5.6.1. ISBN 9780130661890. https://books.google.com/books?id=JJkWAQAAMAAJ.
 ↑ Wansbeek, T.; Meijer, E. (2000). "Measurement Error and Latent Variables". in Baltagi, B. H.. A Companion to Theoretical Econometrics. Blackwell. pp. 162–179. doi:10.1111/b.9781405106764.2003.00013.x. ISBN 9781405106764. https://books.google.com/books?id=xs55E7FsMHMC&pg=PA162.
 ↑ Hausman, Jerry A. (2001). "Mismeasured variables in econometric analysis: problems from the right and problems from the left". Journal of Economic Perspectives 15 (4): 57–67 [p. 58]. doi:10.1257/jep.15.4.57.
 ↑ Fuller, Wayne A. (1987). Measurement Error Models. John Wiley & Sons. p. 2. ISBN 9780471861874. https://books.google.com/books?id=Nalc0DkAJRYC&pg=PA2.
 ↑ Hayashi, Fumio (2000). Econometrics. Princeton University Press. pp. 7–8. ISBN 9781400823833. https://books.google.com/books?id=QyIW8WUIyzcC&pg=PA7.
 ↑ Koul, Hira; Song, Weixing (2008). "Regression model checking with Berkson measurement errors". Journal of Statistical Planning and Inference 138 (6): 1615–1628. doi:10.1016/j.jspi.2007.05.048.
 ↑ Reiersøl, Olav (1950). "Identifiability of a linear relation between variables which are subject to error". Econometrica 18 (4): 375–389 [p. 383]. doi:10.2307/1907835. A somewhat more restrictive result was established earlier by Geary, R. C. (1942). "Inherent relations between random variables". Proceedings of the Royal Irish Academy 47: 63–76. He showed that under the additional assumption that (ε, η) are jointly normal, the model is not identified if and only if x*s are normal.
 ↑ Fuller, Wayne A. (1987). "A Single Explanatory Variable". Measurement Error Models. John Wiley & Sons. pp. 1–99. ISBN 9780471861874. https://books.google.com/books?id=Nalc0DkAJRYC&pg=PA1.
 ↑ Pal, Manoranjan (1980). "Consistent moment estimators of regression coefficients in the presence of errors in variables". Journal of Econometrics 14 (3): 349–364 (pp. 360–361). doi:10.1016/0304-4076(80)90032-9.
 ↑ Xu, Shaoji (2014-10-02). "A Property of Geometric Mean Regression". The American Statistician 68 (4): 277–281. doi:10.1080/00031305.2014.962763. ISSN 0003-1305.
 ↑ BenMoshe, Dan (2020). "Identification of linear regressions with errors in all variables". Econometric Theory 37 (4): 1–31. doi:10.1017/S0266466620000250.
 ↑ Dagenais, Marcel G.; Dagenais, Denyse L. (1997). "Higher moment estimators for linear regression models with errors in the variables". Journal of Econometrics 76 (1–2): 193–221. doi:10.1016/0304-4076(95)01789-5. In an earlier paper, Pal (1980) considered a simpler case in which all components of the vector (ε, η) are independent and symmetrically distributed.
 ↑ Fuller, Wayne A. (1987). Measurement Error Models. John Wiley & Sons. p. 184. ISBN 9780471861874. https://books.google.com/books?id=Nalc0DkAJRYC&pg=PA184.
 ↑ Erickson, Timothy; Whited, Toni M. (2002). "Twostep GMM estimation of the errorsinvariables model using highorder moments". Econometric Theory 18 (3): 776–799. doi:10.1017/s0266466602183101.
 ↑ Tofallis, C. (2023). Fitting an Equation to Data Impartially. Mathematics, 11(18), 3957. https://ssrn.com/abstract=4556739 https://doi.org/10.3390/math11183957
 ↑ Schennach, S.; Hu, Y.; Lewbel, A. (2007). "Nonparametric identification of the classical errors-in-variables model without side information". Working Paper. http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1433&context=econ_papers.
 ↑ Newey, Whitney K. (2001). "Flexible simulated moment estimation of nonlinear errors-in-variables model". Review of Economics and Statistics 83 (4): 616–627. doi:10.1162/003465301753237704.
 ↑ Li, Tong; Vuong, Quang (1998). "Nonparametric estimation of the measurement error model using multiple indicators". Journal of Multivariate Analysis 65 (2): 139–165. doi:10.1006/jmva.1998.1741.
 ↑ Li, Tong (2002). "Robust and consistent estimation of nonlinear errors-in-variables models". Journal of Econometrics 110 (1): 1–26. doi:10.1016/S0304-4076(02)00120-3.
Further reading
 Dougherty, Christopher (2011). "Stochastic Regressors and Measurement Errors". Introduction to Econometrics (Fourth ed.). Oxford University Press. pp. 300–330. ISBN 9780199567089. https://books.google.com/books?id=UXucAQAAQBAJ&pg=PA300.
 Kmenta, Jan (1986). "Estimation with Deficient Data". Elements of Econometrics (Second ed.). New York: Macmillan. pp. 346–391. ISBN 9780023650703. https://books.google.com/books?id=Bxq7AAAAIAAJ&pg=PA346.
 Schennach, Susanne (2013). "Measurement Error in Nonlinear Models – A Review". in Acemoglu, Daron; Arellano, Manuel; Dekel, Eddie. Advances in Economics and Econometrics. Cambridge University Press. pp. 296–337. doi:10.1017/CBO9781139060035.009. ISBN 9781107017214.
External links
 An Historical Overview of Linear Regression with Errors in both Variables, J.W. Gillard 2006
 Lecture on Econometrics (topic: Stochastic Regressors and Measurement Error) on YouTube by Mark Thoma.
Original source: https://en.wikipedia.org/wiki/Errors-in-variables_models.