Numerical methods for linear least squares
Numerical methods for linear least squares entail the numerical analysis of linear least squares problems.
Introduction
A general approach to the least squares problem [math]\displaystyle{ \min_{\boldsymbol \beta} \, \big\|\mathbf y - X \boldsymbol \beta \big\|^2 }[/math] can be described as follows. Suppose that we can find an n by m matrix S such that XS is an orthogonal projection onto the image of X. Then a solution to our minimization problem is given by
- [math]\displaystyle{ \boldsymbol \beta = S \mathbf y }[/math]
simply because
- [math]\displaystyle{ X \boldsymbol \beta = X ( S \mathbf y) = (X S) \mathbf y }[/math]
is exactly the sought orthogonal projection of [math]\displaystyle{ \mathbf y }[/math] onto the image of X (the image of X is simply the subspace spanned by the column vectors of X). A few popular ways to find such a matrix S are described below.
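As a concrete illustration of this recipe, the following is a minimal sketch assuming NumPy is available; the matrix X, the vector y, and the random-data setup are illustrative placeholders, not part of the article. It takes S to be the Moore–Penrose pseudoinverse of X, for which XS is the orthogonal projection onto the image of X.

```python
import numpy as np

# Illustrative data (assumed): an m x n design matrix X and observation vector y
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = rng.standard_normal(100)

# One choice of S: the Moore-Penrose pseudoinverse, so that X @ S is the
# orthogonal projection onto the column space (image) of X
S = np.linalg.pinv(X)
beta = S @ y                      # a least-squares solution

# Sanity check: X @ S is an orthogonal projection (idempotent and symmetric)
P = X @ S
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
```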
Inverting the matrix of the normal equations
The system [math]\displaystyle{ (\mathbf X^{\rm T} \mathbf X )\boldsymbol\beta = \mathbf X^{\rm T} \mathbf y }[/math] is known as the normal equations. The algebraic solution of the normal equations with a full-rank matrix [math]\displaystyle{ \mathbf X^{\rm T}\mathbf X }[/math] can be written as
- [math]\displaystyle{ \hat{\boldsymbol{\beta}} = (\mathbf X^ {\rm T} \mathbf X )^{-1} \mathbf X^ {\rm T} \mathbf y = \mathbf X^+ \mathbf y }[/math]
where X+ is the Moore–Penrose pseudoinverse of X. Although this equation is correct and can work in many applications, it is not computationally efficient to invert the normal-equations matrix (the Gramian matrix). An exception occurs in numerical smoothing and differentiation where an analytical expression is required.
If the matrix [math]\displaystyle{ \mathbf X^{\rm T}\mathbf X }[/math] is well-conditioned and positive definite, implying that it has full rank, the normal equations can be solved directly by using the Cholesky decomposition [math]\displaystyle{ \mathbf X^{\rm T}\mathbf X = R^{\rm T}R }[/math], where R is an upper triangular matrix, giving:
- [math]\displaystyle{ R^{\rm T} R \hat{\boldsymbol{\beta}} = X^{\rm T} \mathbf y. }[/math]
The solution is obtained in two stages, a forward substitution step, solving for z:
- [math]\displaystyle{ R^{\rm T} \mathbf z = X^{\rm T} \mathbf y, }[/math]
followed by a backward substitution, solving for [math]\displaystyle{ \hat{\boldsymbol{\beta}} }[/math]:
- [math]\displaystyle{ R \hat{\boldsymbol{\beta}}= \mathbf z. }[/math]
Both substitutions are facilitated by the triangular nature of R.
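A minimal sketch of this two-stage solve, assuming NumPy and SciPy are available (the data X and y are again illustrative placeholders), might look as follows:

```python
import numpy as np
from scipy.linalg import solve_triangular

# Illustrative data (assumed)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = rng.standard_normal(100)

# Form the normal equations: (X^T X) beta = X^T y
A = X.T @ X
b = X.T @ y

# Cholesky factorization A = R^T R with R upper triangular
# (NumPy returns the lower-triangular factor, so transpose it)
R = np.linalg.cholesky(A).T

z = solve_triangular(R.T, b, lower=True)         # forward substitution: R^T z = X^T y
beta_hat = solve_triangular(R, z, lower=False)   # backward substitution: R beta = z
```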
Orthogonal decomposition methods
Orthogonal decomposition methods of solving the least squares problem are slower than the normal equations method but are more numerically stable because they avoid forming the product [math]\displaystyle{ \mathbf X^{\rm T}\mathbf X }[/math].
The residuals are written in matrix notation as
- [math]\displaystyle{ \mathbf r= \mathbf y - X \hat{\boldsymbol{\beta}}. }[/math]
The matrix X is subjected to an orthogonal decomposition, e.g., the QR decomposition as follows.
- [math]\displaystyle{ X=Q \begin{pmatrix} R \\ 0 \end{pmatrix} \ }[/math],
where Q is an m×m orthogonal matrix ([math]\displaystyle{ Q^{\rm T}Q=I }[/math]) and R is an n×n upper triangular matrix with [math]\displaystyle{ r_{ii}\gt 0 }[/math].
The residual vector is left-multiplied by QT.
- [math]\displaystyle{ Q^{\rm T} \mathbf r = Q^{\rm T} \mathbf y - \left( Q^{\rm T} Q \right) \begin{pmatrix} R \\ 0 \end{pmatrix} \hat{\boldsymbol{\beta}}= \begin{bmatrix} \left(Q^{\rm T} \mathbf y \right)_n - R \hat{\boldsymbol{\beta}} \\ \left(Q^{\rm T} \mathbf y \right)_{m-n} \end{bmatrix} = \begin{bmatrix} \mathbf u \\ \mathbf v \end{bmatrix} }[/math]
Because Q is orthogonal, the sum of squares of the residuals, s, may be written as:
- [math]\displaystyle{ s = \|\mathbf r \|^2 = \mathbf r^{\rm T} \mathbf r = \mathbf r^{\rm T} Q Q^{\rm T} \mathbf r = \mathbf u^{\rm T} \mathbf u + \mathbf v^{\rm T} \mathbf v }[/math]
Since v does not depend on β, the minimum value of s is attained when the upper block, u, is zero. Therefore, the parameters are found by solving:
- [math]\displaystyle{ R \hat{\boldsymbol{\beta}} =\left(Q^{\rm T} \mathbf y \right)_n. }[/math]
These equations are easily solved as R is upper triangular.
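A sketch of the QR route under the same assumptions (NumPy/SciPy, placeholder data) is given below; note that NumPy's reduced QR returns only the first n columns of Q, which is all that is needed to form [math]\displaystyle{ \left(Q^{\rm T} \mathbf y \right)_n }[/math].

```python
import numpy as np
from scipy.linalg import solve_triangular

# Illustrative data (assumed)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = rng.standard_normal(100)

# Thin (reduced) QR: Q1 has orthonormal columns (m x n), R is n x n upper triangular
Q1, R = np.linalg.qr(X, mode='reduced')

# Solve R beta = (Q^T y)_n = Q1^T y by back substitution
beta_hat = solve_triangular(R, Q1.T @ y, lower=False)
```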
An alternative decomposition of X is the singular value decomposition (SVD)[1]
- [math]\displaystyle{ X = U \Sigma V^{\rm T} \ }[/math],
where U is an m by m orthogonal matrix, V is an n by n orthogonal matrix, and [math]\displaystyle{ \Sigma }[/math] is an m by n matrix with all of its elements outside of the main diagonal equal to 0. The pseudoinverse of [math]\displaystyle{ \Sigma }[/math] is easily obtained by inverting its non-zero diagonal elements and transposing. Hence,
- [math]\displaystyle{ \mathbf X \mathbf X^+ = U \Sigma V^{\rm T} V \Sigma^+ U^{\rm T} = U P U^{\rm T}, }[/math]
where P is obtained from [math]\displaystyle{ \Sigma }[/math] by replacing its non-zero diagonal elements with ones. Since [math]\displaystyle{ (\mathbf X \mathbf X^+)^* = \mathbf X \mathbf X^+ }[/math] (a property of the pseudoinverse), the matrix [math]\displaystyle{ U P U^{\rm T} }[/math] is an orthogonal projection onto the image (column space) of X. In accordance with the general approach described in the introduction above (find an S such that XS is an orthogonal projection),
- [math]\displaystyle{ S = \mathbf X^+ }[/math],
and thus,
- [math]\displaystyle{ \beta = V\Sigma^+ U^{\rm T} \mathbf y }[/math]
is a solution of the least squares problem. This method is the most computationally intensive, but it is particularly useful if the normal equations matrix, [math]\displaystyle{ \mathbf X^{\rm T}\mathbf X }[/math], is very ill-conditioned (i.e., if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured with the truncated SVD approach, which gives a more stable and exact answer by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.
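A sketch of the SVD route with truncation, again assuming NumPy and placeholder data; the relative cutoff `rcond` is an illustrative choice, not a value prescribed by the text.

```python
import numpy as np

# Illustrative data (assumed)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = rng.standard_normal(100)

# Thin SVD: X = U diag(s) V^T
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncated SVD: ignore singular values below a chosen relative threshold
rcond = 1e-12                      # illustrative cutoff (assumption)
keep = s > rcond * s.max()
s_inv = np.zeros_like(s)
s_inv[keep] = 1.0 / s[keep]

# beta = V Sigma^+ U^T y
beta_hat = Vt.T @ (s_inv * (U.T @ y))
```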
Discussion
The numerical methods for linear least squares are important because linear regression models are among the most important types of model, both as formal statistical models and for exploration of data-sets. The majority of statistical computer packages contain facilities for regression analysis that make use of linear least squares computations. Hence it is appropriate that considerable effort has been devoted to the task of ensuring that these computations are undertaken efficiently and with due regard to round-off error.
Individual statistical analyses are seldom undertaken in isolation, but rather are part of a sequence of investigatory steps. Some of the topics involved in considering numerical methods for linear least squares relate to this point. Important topics can therefore include:
- Computations where a number of similar, and often nested, models are considered for the same data-set. That is, where models with the same dependent variable but different sets of independent variables are to be considered, for essentially the same set of data-points.
- Computations for analyses that occur in a sequence, as the number of data-points increases.
- Special considerations for very extensive data-sets.
Fitting of linear models by least squares often, but not always, arises in the context of statistical analysis. It can therefore be important that considerations of computational efficiency for such problems extend to all of the auxiliary quantities required for such analyses and are not restricted to the formal solution of the linear least squares problem.
Matrix calculations, like any other, are affected by rounding errors. An early summary of these effects, regarding the choice of computation methods for matrix inversion, was provided by Wilkinson.[2]
See also
- Numerical linear algebra
- Numerical methods for non-linear least squares
References
- ↑ Lawson, C. L.; Hanson, R. J. (1974). Solving Least Squares Problems. Englewood Cliffs, NJ: Prentice-Hall. ISBN 0-13-822585-0.
- ↑ Wilkinson, J.H. (1963) "Chapter 3: Matrix Computations", Rounding Errors in Algebraic Processes, London: Her Majesty's Stationery Office (National Physical Laboratory, Notes in Applied Science, No.32)
Further reading
- R. W. Farebrother, Linear Least Squares Computations, CRC Press, 1988.
- Barlow, Jesse L. (1993), "Chapter 9: Numerical aspects of Solving Linear Least Squares Problems", in Rao, C. R., Computational Statistics, Handbook of Statistics, 9, North-Holland, ISBN 0-444-88096-8
- Björck, Åke (1996). Numerical methods for least squares problems. Philadelphia: SIAM. ISBN 0-89871-360-9.
- Goodall, Colin R. (1993), "Chapter 13: Computation using the QR decomposition", in Rao, C. R., Computational Statistics, Handbook of Statistics, 9, North-Holland, ISBN 0-444-88096-8
- National Physical Laboratory (1961), "Chapter 1: Linear Equations and Matrices: Direct Methods", Modern Computing Methods, Notes on Applied Science, 16 (2nd ed.), Her Majesty's Stationery Office
- National Physical Laboratory (1961), "Chapter 2: Linear Equations and Matrices: Direct Methods on Automatic Computers", Modern Computing Methods, Notes on Applied Science, 16 (2nd ed.), Her Majesty's Stationery Office
Original source: https://en.wikipedia.org/wiki/Numerical_methods_for_linear_least_squares