Weighted least squares

Short description: Method for model fitting in statistics

Weighted least squares (WLS), also known as weighted linear regression,[1][2] is a generalization of ordinary least squares and linear regression in which knowledge of the unequal variance of observations (heteroscedasticity) is incorporated into the regression. WLS is also a specialization of generalized least squares, in which all the off-diagonal entries of the covariance matrix of the errors are null.

Formulation

The fit of a model to a data point is measured by its residual, [math]\displaystyle{ r_i }[/math], defined as the difference between a measured value of the dependent variable, [math]\displaystyle{ y_i }[/math], and the value predicted by the model, [math]\displaystyle{ f(x_i, \boldsymbol\beta) }[/math]: [math]\displaystyle{ r_i(\boldsymbol\beta) = y_i - f(x_i, \boldsymbol\beta). }[/math]

If the errors are uncorrelated and have equal variance, then the function [math]\displaystyle{ S(\boldsymbol\beta) = \sum_i r_i(\boldsymbol\beta)^2 }[/math] is minimized at [math]\displaystyle{ \hat{\boldsymbol\beta} }[/math], such that [math]\displaystyle{ \frac{\partial S}{\partial\beta_j}(\hat{\boldsymbol\beta}) = 0 }[/math].

The Gauss–Markov theorem shows that, when this is so, [math]\displaystyle{ \hat{\boldsymbol{\beta}} }[/math] is a best linear unbiased estimator (BLUE). If, however, the measurements are uncorrelated but have different uncertainties, a modified approach might be adopted. Aitken showed that when a weighted sum of squared residuals is minimized, [math]\displaystyle{ \hat{\boldsymbol{\beta}} }[/math] is the BLUE if each weight is equal to the reciprocal of the variance of the measurement: [math]\displaystyle{ \begin{align} S &= \sum_{i=1}^n W_{ii}{r_i}^2, & W_{ii} &= \frac{1}{{\sigma_i}^2}. \end{align} }[/math]
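As a minimal numerical sketch of this weighted objective (not part of the original derivation), the sum S can be computed directly once residuals and per-observation standard deviations are available; the NumPy arrays below are hypothetical placeholders.

    import numpy as np

    # Hypothetical residuals r_i and measurement standard deviations sigma_i
    r = np.array([0.3, -0.1, 0.4, -0.2])
    sigma = np.array([0.5, 0.2, 1.0, 0.4])

    W_diag = 1.0 / sigma**2        # W_ii = 1 / sigma_i^2
    S = np.sum(W_diag * r**2)      # weighted sum of squared residuals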

The gradient equations for this sum of squares are [math]\displaystyle{ -2\sum_i W_{ii}\frac{\partial f(x_i, \boldsymbol{\beta})}{\partial\beta_j} r_i = 0,\quad j = 1, \ldots, m }[/math]

which, in a linear least squares system give the modified normal equations, [math]\displaystyle{ \sum_{i=1}^n \sum_{k=1}^m X_{ij}W_{ii}X_{ik}\hat{\beta}_k = \sum_{i=1}^n X_{ij}W_{ii}y_i,\quad j = 1, \ldots, m\,. }[/math]

When the observational errors are uncorrelated and the weight matrix, [math]\displaystyle{ W = \Omega^{-1} }[/math], is diagonal, these may be written as [math]\displaystyle{ \mathbf{\left(X^\textsf{T} WX\right)\hat{\boldsymbol{\beta}} = X^\textsf{T}Wy}. }[/math]
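A minimal sketch of solving these normal equations, assuming NumPy and a small synthetic straight-line data set (both are illustrative assumptions, not from the text); numpy.linalg.solve is used rather than forming an explicit inverse.

    import numpy as np

    # Hypothetical data: straight-line model f(x, beta) = beta_0 + beta_1 * x
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
    sigma = np.array([0.1, 0.2, 0.1, 0.3, 0.2])   # per-observation std. deviations

    X = np.column_stack([np.ones_like(x), x])     # design matrix
    W = np.diag(1.0 / sigma**2)                   # diagonal weight matrix

    # Solve (X^T W X) beta_hat = X^T W y
    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)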

If the errors are correlated, the resulting estimator is the BLUE if the weight matrix is equal to the inverse of the variance-covariance matrix of the observations.

When the errors are uncorrelated, it is convenient to simplify the calculations by factoring the weight matrix as [math]\displaystyle{ w_{ii} = \sqrt{W_{ii}} }[/math]. The normal equations can then be written in the same form as ordinary least squares: [math]\displaystyle{ \mathbf{\left(X'^\textsf{T}X'\right)\hat{\boldsymbol{\beta}} = X'^\textsf{T}y'}\, }[/math]

where we define the following scaled matrix and vector: [math]\displaystyle{ \begin{align} \mathbf{X'} &= \operatorname{diag}\left(\mathbf{w}\right) \mathbf{X},\\ \mathbf{y'} &= \operatorname{diag}\left(\mathbf{w}\right) \mathbf{y} = \mathbf{y} \oslash \mathbf{\sigma}. \end{align} }[/math]

This is a type of whitening transformation; the last expression involves an entrywise division.
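The whitening step can be illustrated with the same hypothetical straight-line data as above (an assumption for illustration, NumPy assumed): scaling each row of X and y by the square-root weights and then solving an ordinary least-squares problem reproduces the weighted estimate.

    import numpy as np

    # Hypothetical data, as in the straight-line sketch above
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
    sigma = np.array([0.1, 0.2, 0.1, 0.3, 0.2])

    X = np.column_stack([np.ones_like(x), x])
    w = 1.0 / sigma                    # w_ii = sqrt(W_ii) = 1 / sigma_i

    # Whitening: scale rows of X and y by w_ii, then solve ordinary least squares
    X_s = X * w[:, None]               # X' = diag(w) X
    y_s = y * w                        # y' = diag(w) y = y / sigma
    beta_hat, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)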

For non-linear least squares systems a similar argument shows that the normal equations should be modified as follows. [math]\displaystyle{ \mathbf{\left(J^\textsf{T}WJ\right)\, \boldsymbol\Delta\beta = J^\textsf{T}W\, \boldsymbol\Delta y}.\, }[/math]
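For the non-linear case, a single weighted Gauss–Newton-type update based on this equation can be sketched as follows; the exponential model, its Jacobian, and the data are hypothetical, and NumPy is assumed.

    import numpy as np

    # Hypothetical non-linear model f(x, beta) = beta_0 * exp(beta_1 * x)
    def f(x, beta):
        return beta[0] * np.exp(beta[1] * x)

    def jacobian(x, beta):
        # Columns: df/dbeta_0 and df/dbeta_1 evaluated at each x_i
        return np.column_stack([np.exp(beta[1] * x),
                                beta[0] * x * np.exp(beta[1] * x)])

    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    y = np.array([1.0, 1.6, 2.8, 4.4, 7.5])
    W = np.diag(1.0 / np.array([0.1, 0.1, 0.2, 0.3, 0.5])**2)

    beta = np.array([1.0, 1.0])               # current parameter estimate
    J = jacobian(x, beta)
    dy = y - f(x, beta)                       # Delta y: residuals at current beta
    # Solve (J^T W J) Delta beta = J^T W Delta y and update the parameters
    dbeta = np.linalg.solve(J.T @ W @ J, J.T @ W @ dy)
    beta = beta + dbeta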

Note that for empirical tests, the appropriate W is not known for sure and must be estimated. For this, feasible generalized least squares (FGLS) techniques may be used; in this case the procedure is specialized to a diagonal covariance matrix, thus yielding a feasible weighted least squares solution.

If the uncertainty of the observations is not known from external sources, then the weights could be estimated from the given observations. This can be useful, for example, to identify outliers. After the outliers have been removed from the data set, the weights should be reset to one.[3]
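One common two-step scheme for estimating the weights from the data (shown only as a sketch under assumed NumPy and a synthetic heteroscedastic data set; the variance model here, regressing the log squared OLS residuals on X, is one possible choice, not the method prescribed by the text):

    import numpy as np

    # Hypothetical data with non-constant error variance
    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 10.0, 50)
    X = np.column_stack([np.ones_like(x), x])
    y = 2.0 + 0.5 * x + rng.normal(scale=0.1 * x)    # noise grows with x

    # Step 1: ordinary least squares to obtain preliminary residuals
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols

    # Step 2: model the variance by regressing log(e^2) on X and
    # use the fitted values to build estimated weights
    gamma, *_ = np.linalg.lstsq(X, np.log(e**2 + 1e-12), rcond=None)
    var_hat = np.exp(X @ gamma)
    W = np.diag(1.0 / var_hat)

    # Step 3: feasible weighted least squares with the estimated weights
    beta_fwls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)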

Motivation

In some cases the observations may be weighted—for example, they may not be equally reliable. In this case, one can minimize the weighted sum of squares: [math]\displaystyle{ \underset{\boldsymbol\beta}{\operatorname{arg\ min}}\, \sum_{i=1}^{n} w_i \left|y_i - \sum_{j=1}^{m} X_{ij}\beta_j\right|^2 = \underset{\boldsymbol\beta}{\operatorname{arg\ min}}\, \left\|W^\frac{1}{2}\left(\mathbf{y} - X\boldsymbol\beta\right)\right\|^2. }[/math] where wi > 0 is the weight of the ith observation, and W is the diagonal matrix of such weights.

The weights should, ideally, be equal to the reciprocal of the variance of the measurement. (This implies that the observations are uncorrelated. If the observations are correlated, the expression [math]\displaystyle{ S = \sum_k \sum_j r_k W_{kj} r_j\, }[/math] applies. In this case the weight matrix should ideally be equal to the inverse of the variance-covariance matrix of the observations).[3] The normal equations are then: [math]\displaystyle{ \left(X^\textsf{T} W X\right)\hat{\boldsymbol{\beta}} = X^\textsf{T} W \mathbf{y}. }[/math]

This method is used in iteratively reweighted least squares.
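A minimal sketch of iteratively reweighted least squares, alternating between a weighted solve and a weight update from the current residuals; the Huber-type weight function, the synthetic data, and the fixed iteration count are illustrative assumptions, and NumPy is assumed.

    import numpy as np

    def irls(X, y, n_iter=20, delta=1.0):
        """Minimal IRLS sketch with Huber-type weights (one possible choice),
        starting from an ordinary least-squares fit."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        for _ in range(n_iter):
            r = y - X @ beta
            # Huber weights: 1 for small residuals, delta/|r| for large ones
            w = np.where(np.abs(r) <= delta,
                         1.0, delta / np.maximum(np.abs(r), 1e-12))
            W = np.diag(w)
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        return beta

    # Hypothetical usage with a small data set containing one outlier
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    X = np.column_stack([np.ones_like(x), x])
    y = np.array([1.0, 2.1, 2.9, 10.0, 5.1])   # the 10.0 is an outlier
    beta_robust = irls(X, y)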

Solution

Parameter errors and correlation

The estimated parameter values are linear combinations of the observed values [math]\displaystyle{ \hat{\boldsymbol{\beta}} = (X^\textsf{T} W X)^{-1} X^\textsf{T} W \mathbf{y}. }[/math]

Therefore, an expression for the estimated variance-covariance matrix of the parameter estimates can be obtained by error propagation from the errors in the observations. Let the variance-covariance matrix for the observations be denoted by [math]\displaystyle{ M }[/math] and that of the estimated parameters by [math]\displaystyle{ M^\beta }[/math]. Then [math]\displaystyle{ M^\beta = \left(X^\textsf{T} W X\right)^{-1} X^\textsf{T} W M W^\textsf{T} X \left(X^\textsf{T} W^\textsf{T} X\right)^{-1}. }[/math]

When [math]\displaystyle{ W = M^{-1} }[/math], this simplifies to [math]\displaystyle{ M^\beta = \left(X^\textsf{T} W X\right)^{-1}. }[/math]

When unit weights are used ([math]\displaystyle{ W = I }[/math], the identity matrix), it is implied that the experimental errors are uncorrelated and all equal: [math]\displaystyle{ M = \sigma^2 I }[/math], where [math]\displaystyle{ \sigma^2 }[/math] is the a priori variance of an observation. In any case, [math]\displaystyle{ \sigma^2 }[/math] is approximated by the reduced chi-squared [math]\displaystyle{ \chi^2_\nu }[/math]: [math]\displaystyle{ \begin{align} M^\beta &= \chi^2_\nu\left(X^\textsf{T} W X\right)^{-1}, \\ \chi^2_\nu &= S/\nu, \end{align} }[/math]

where S is the minimum value of the weighted objective function: [math]\displaystyle{ S = r^\textsf{T} W r = \left\|W^\frac{1}{2}\left(\mathbf{y} - X\hat{\boldsymbol\beta}\right)\right\|^2. }[/math]

The denominator, [math]\displaystyle{ \nu = n - m }[/math], is the number of degrees of freedom; see effective degrees of freedom for generalizations for the case of correlated observations.

In all cases, the variance of the parameter estimate [math]\displaystyle{ \hat\beta_i }[/math] is given by [math]\displaystyle{ M^\beta_{ii} }[/math] and the covariance between the parameter estimates [math]\displaystyle{ \hat\beta_i }[/math] and [math]\displaystyle{ \hat\beta_j }[/math] is given by [math]\displaystyle{ M^\beta_{ij} }[/math]. The standard deviation is the square root of variance, [math]\displaystyle{ \sigma_i = \sqrt{M^\beta_{ii}} }[/math], and the correlation coefficient is given by [math]\displaystyle{ \rho_{ij} = M^\beta_{ij}/(\sigma_i \sigma_j) }[/math]. These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors, which, by definition, cannot be quantified. Note that even though the observations may be uncorrelated, the parameters are typically correlated.
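These quantities can be computed directly from a weighted fit; the sketch below reuses the hypothetical straight-line data from the earlier examples (an illustrative assumption, NumPy assumed) and scales the parameter covariance by the reduced chi-squared as above.

    import numpy as np

    # Hypothetical weighted fit, as in the earlier straight-line sketch
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
    sigma = np.array([0.1, 0.2, 0.1, 0.3, 0.2])
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(1.0 / sigma**2)

    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    r = y - X @ beta_hat

    n, m = X.shape
    S = r @ W @ r                                  # minimum weighted objective
    chi2_nu = S / (n - m)                          # reduced chi-squared
    M_beta = chi2_nu * np.linalg.inv(X.T @ W @ X)  # estimated parameter covariance

    std_err = np.sqrt(np.diag(M_beta))             # parameter standard deviations
    corr = M_beta / np.outer(std_err, std_err)     # parameter correlation matrix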

Parameter confidence limits

It is often assumed, for want of any concrete evidence but often appealing to the central limit theorem (see Normal distribution), that the error on each observation belongs to a normal distribution with a mean of zero and standard deviation [math]\displaystyle{ \sigma }[/math]. Under that assumption the following probabilities can be derived for a single scalar parameter estimate in terms of its estimated standard error [math]\displaystyle{ se_{\beta} }[/math] (the square root of the corresponding diagonal element of [math]\displaystyle{ M^\beta }[/math], given above):

  • 68% that the interval [math]\displaystyle{ \hat\beta \pm se_\beta }[/math] encompasses the true coefficient value
  • 95% that the interval [math]\displaystyle{ \hat\beta \pm 2se_\beta }[/math] encompasses the true coefficient value
  • 99% that the interval [math]\displaystyle{ \hat\beta \pm 2.5se_\beta }[/math] encompasses the true coefficient value

The assumption is not unreasonable when n ≫ m. If the experimental errors are normally distributed, the parameters will belong to a Student's t-distribution with n − m degrees of freedom. When n ≫ m, Student's t-distribution approximates a normal distribution. Note, however, that these confidence limits cannot take systematic error into account. Also, parameter errors should be quoted to one significant figure only, as they are subject to sampling error.[4]
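A small sketch of a t-based confidence interval for a single coefficient, assuming SciPy is available; the estimate, standard error, and degrees of freedom below are hypothetical values, not taken from the text.

    from scipy import stats

    # Hypothetical parameter estimate, its standard error, and n - m d.o.f.
    beta_hat, se_beta, dof = 0.94, 0.05, 3

    t = stats.t.ppf(0.975, dof)                      # two-sided 95% critical value
    lower, upper = beta_hat - t * se_beta, beta_hat + t * se_beta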

When the number of observations is relatively small, Chebyshev's inequality can be used for an upper bound on probabilities, regardless of any assumptions about the distribution of experimental errors: the maximum probabilities that a parameter will be more than 1, 2, or 3 standard deviations away from its expectation value are 100%, 25% and 11% respectively.
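The Chebyshev bounds quoted above follow from the 1/k² rule, as this short sketch (Python assumed) makes explicit.

    # Chebyshev's inequality: P(|beta_hat - E[beta_hat]| >= k * sd) <= 1 / k^2,
    # with no distributional assumption on the experimental errors
    for k in (1, 2, 3):
        print(f"k = {k}: upper bound = {min(1.0, 1.0 / k**2):.0%}")
    # prints 100%, 25% and 11%, matching the figures quoted above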

Residual values and correlation

The residuals are related to the observations by [math]\displaystyle{ \mathbf{\hat r} = \mathbf{y} - X \hat{\boldsymbol{\beta}} = \mathbf{y} - H \mathbf{y} = (I - H) \mathbf{y}, }[/math]

where H is the idempotent matrix known as the hat matrix: [math]\displaystyle{ H = X \left(X^\textsf{T} W X\right)^{-1} X^\textsf{T} W, }[/math]

and I is the identity matrix. The variance-covariance matrix of the residuals, [math]\displaystyle{ M^\mathbf{r} }[/math], is given by [math]\displaystyle{ M^\mathbf{r} = (I - H) M (I - H)^\textsf{T}. }[/math]

Thus the residuals are correlated, even if the observations are not.

When [math]\displaystyle{ W = M^{-1} }[/math], [math]\displaystyle{ M^\mathbf{r} = (I - H) M. }[/math]
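The correlation of the residuals can be checked numerically; the sketch below reuses the hypothetical straight-line design and observation covariance from the earlier examples (illustrative assumptions, NumPy assumed).

    import numpy as np

    # Hypothetical weighted fit, as in the earlier straight-line sketch
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    sigma = np.array([0.1, 0.2, 0.1, 0.3, 0.2])
    X = np.column_stack([np.ones_like(x), x])
    M = np.diag(sigma**2)                          # observation covariance
    W = np.linalg.inv(M)                           # W = M^{-1}

    H = X @ np.linalg.inv(X.T @ W @ X) @ X.T @ W   # hat matrix
    I = np.eye(len(x))
    M_r = (I - H) @ M                              # residual covariance when W = M^{-1}
    # Off-diagonal entries of M_r are generally non-zero: the residuals
    # are correlated even though the observations are not.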

The sum of weighted residual values is equal to zero whenever the model function contains a constant term. Left-multiply the expression for the residuals by [math]\displaystyle{ X^\textsf{T} W }[/math]: [math]\displaystyle{ X^\textsf{T} W \hat{\mathbf r} = X^\textsf{T} W \mathbf{y} - X^\textsf{T} W X \hat{\boldsymbol{\beta}} = X^\textsf{T} W \mathbf{y} - \left(X^\textsf{T} W X\right) \left(X^\textsf{T} W X\right)^{-1} X^\textsf{T} W \mathbf{y} = \mathbf{0}. }[/math]

Say, for example, that the first term of the model is a constant, so that [math]\displaystyle{ X_{i1} = 1 }[/math] for all i. In that case it follows that [math]\displaystyle{ \sum_{i=1}^n X_{i1} W_{ii}\hat r_i = \sum_{i=1}^n W_{ii} \hat r_i = 0. }[/math]

Thus the fact that the sum of weighted residual values is equal to zero is not accidental, but is a consequence of the presence of the constant term in the model.
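This property is easy to verify numerically; the sketch below reuses the hypothetical weighted straight-line fit from the earlier examples (illustrative data, NumPy assumed), whose design matrix includes a constant column.

    import numpy as np

    # Hypothetical weighted straight-line fit with a constant (intercept) term
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
    sigma = np.array([0.1, 0.2, 0.1, 0.3, 0.2])
    X = np.column_stack([np.ones_like(x), x])      # first column: X_{i1} = 1
    W = np.diag(1.0 / sigma**2)

    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    r_hat = y - X @ beta_hat
    print(np.sum(np.diag(W) * r_hat))              # ~0 up to rounding error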

If experimental error follows a normal distribution, then, because of the linear relationship between residuals and observations, so should the residuals,[5] but since the observations are only a sample of the population of all possible observations, the residuals should belong to a Student's t-distribution. Studentized residuals are useful in making a statistical test for an outlier when a particular residual appears to be excessively large.

References

  1. "Weighted regression". https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-statistics/regression/supporting-topics/basics/weighted-regression/. 
  2. "Visualize a weighted regression". https://blogs.sas.com/content/iml/2016/10/05/weighted-regression.html. 
  3. Strutz, T. (2016). "Chapter 3". Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). Springer Vieweg. ISBN 978-3-658-11455-8.
  4. Mandel, John (1964). The Statistical Analysis of Experimental Data. New York: Interscience. 
  5. Mardia, K. V.; Kent, J. T.; Bibby, J. M. (1979). Multivariate analysis. New York: Academic Press. ISBN 0-12-471250-9.