Polynomial least squares

In mathematical statistics, polynomial least squares comprises a broad range of statistical methods for estimating an underlying polynomial that describes observations. These methods overlap with polynomial regression, curve fitting, linear regression, least squares, ordinary least squares, simple linear regression, and linear least squares, and they draw on approximation theory and the method of moments. Polynomial least squares has applications in radar trackers, estimation theory, signal processing, statistics, and econometrics.

Two common applications of polynomial least squares methods are generating a low-degree polynomial that approximates a complicated function and estimating an assumed underlying polynomial from corrupted (also known as "noisy") observations. The former is commonly used in statistics and econometrics to fit a scatter plot with a first degree polynomial (that is, a linear expression).[1][2][3] The latter is commonly used in target tracking in the form of Kalman filtering, which is effectively a recursive implementation of polynomial least squares.[4][5][6][7] Estimating an assumed underlying deterministic polynomial can be used in econometrics as well.[8] In effect, both applications produce average curves as generalizations of the common average of a set of numbers, which is equivalent to zero degree polynomial least squares.[1][2][9]

In the above applications, the term "approximate" is used when no statistical measurement or observation errors are assumed, as when fitting a scatter plot. The term "estimate", derived from statistical estimation theory, is used when assuming that measurements or observations of a polynomial are corrupted.

Polynomial least squares estimate of a deterministic first degree polynomial corrupted with observation errors

Assume that the deterministic first degree polynomial [math]\displaystyle{ y }[/math], with unknown coefficients [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math], is written as

[math]\displaystyle{ y=\alpha+\beta t. }[/math]

This is corrupted with an additive stochastic process [math]\displaystyle{ \varepsilon }[/math] described as an error (noise in tracking), resulting in

[math]\displaystyle{ z=y+\varepsilon=\alpha+\beta t+\varepsilon. }[/math]

Given observations [math]\displaystyle{ z_n }[/math] from a sample, where the subscript [math]\displaystyle{ n }[/math] is the observation index, the problem is to apply polynomial least squares to estimate [math]\displaystyle{ y(t) }[/math], and to determine its variance along with its expected value.
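
As a concrete illustration (not part of the original problem statement, and assuming Python with NumPy), the following sketch generates corrupted observations [math]\displaystyle{ z_n=\alpha+\beta t_n+\varepsilon_n }[/math] using example values [math]\displaystyle{ \alpha=2 }[/math] and [math]\displaystyle{ \beta=0.5 }[/math] and zero mean, non-Gaussian errors; the later sections estimate [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math] from such samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example values for the unknown coefficients (illustration only).
alpha_true, beta_true = 2.0, 0.5

N = 50                                 # number of observations
t = np.linspace(0.0, 10.0, N)          # sampling instants t_n
y = alpha_true + beta_true * t         # deterministic first degree polynomial y(t)

# Zero mean, uncorrelated, identically distributed errors; need not be Gaussian.
eps = rng.uniform(-1.0, 1.0, size=N)

z = y + eps                            # corrupted observations z_n
```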

Definitions and assumptions

(1) The term linearity in mathematics refers to two notions that are sometimes conflated: a linear system or transformation (sometimes called an operator)[9] and a linear equation. Because the term "function" is often used to describe both a system and an equation, confusion can arise. A linear system is defined by

[math]\displaystyle{ f(ax +by)= af(x) +bf(y) }[/math]

where [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are constants and [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are variables. In a linear system, [math]\displaystyle{ E[f(x)]=f(E[x]) }[/math], where [math]\displaystyle{ E }[/math] is the linear expectation operator. A linear equation, by contrast, describes a straight line, such as the first degree polynomial above.

(2) The error [math]\displaystyle{ \varepsilon }[/math] is modeled as a zero mean stochastic process whose sample points are uncorrelated random variables with identical probability distributions (in particular, the same mean and variance) that need not be Gaussian; these samples are the inputs to polynomial least squares. Stochastic processes and random variables are described only by probability distributions.[1][2][9]

(3) Polynomial least squares is modeled as a linear signal processing system which processes statistical inputs deterministically, the output being the linearly processed empirically determined statistical estimate, variance, and expected value.[6][7][8]

(4) Polynomial least squares processing produces deterministic moments (analogous to mechanical moments), which may be regarded as moments of sample statistics but are not statistical moments.[8]

Polynomial least squares and the orthogonality principle

Approximating a function [math]\displaystyle{ z(t) }[/math] with a polynomial

[math]\displaystyle{ \hat z(t)=\sum_{j=1} ^J a_j t^{j-1} }[/math]

where hat (^) denotes the estimate and (J − 1) is the polynomial degree, can be performed by applying the orthogonality principle. The sum of squared residuals can be written as

[math]\displaystyle{ \sum_{n=1}^N (z_n - \hat z_n)^2. }[/math]

According to the orthogonality principle,[4][5][6][7][8][9][10][11] this is at its minimum when the residual vector ([math]\displaystyle{ z-\hat z }[/math]) is orthogonal to the estimate [math]\displaystyle{ \hat z }[/math], that is

[math]\displaystyle{ \sum_{n=1}^N (z_n - \hat z_n)\hat z_n=0. }[/math]

This can be described as the orthogonal projection of the data values {[math]\displaystyle{ z_n }[/math]} onto a solution in the form of the polynomial [math]\displaystyle{ \hat z(t) }[/math].[4][6][7] For N > J the system is overdetermined, and orthogonal projection reduces it to the J equations (often called the normal equations) used to compute the coefficients of the polynomial approximation.[1][10][11] The minimum sum of squared residuals is then

[math]\displaystyle{ SSR_\min = \sum_{n=1} ^N (z_n - \hat z_n)z_n. }[/math]

The advantage of using orthogonal projection is that [math]\displaystyle{ SSR_\min }[/math] can be determined for use in the polynomial least squares processed statistical variance of the estimate.[8][9][11]
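
As a minimal numerical sketch of this subsection (assuming Python with NumPy; the data and degree are illustrative), a degree J − 1 polynomial can be fitted by least squares, after which the orthogonality of the residual to the estimate and the expression for [math]\displaystyle{ SSR_\min }[/math] can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a complicated function (illustrative choice).
N, J = 100, 3                              # N samples, polynomial of degree J - 1
t = np.linspace(0.0, 2.0 * np.pi, N)
z = np.sin(t) + rng.normal(scale=0.1, size=N)

# Design matrix with columns t^0, t^1, ..., t^(J-1).
T = np.vander(t, J, increasing=True)

# Least squares coefficients a_j (equivalently, the solution of the normal equations).
a, *_ = np.linalg.lstsq(T, z, rcond=None)
z_hat = T @ a                              # the polynomial approximation at the t_n

residual = z - z_hat
print(np.dot(residual, z_hat))             # ~0: residual orthogonal to the estimate
print(np.sum(residual**2))                 # minimum sum of squared residuals
print(np.dot(residual, z))                 # equals the line above, per the orthogonality principle
```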

The empirically determined polynomial least squares output of a first degree polynomial corrupted with observation errors

To fully determine the output of polynomial least squares, a weighting function describing the processing must first be structured and then the statistical moments can be computed.

The weighting function describing the linear polynomial least squares "system"

The weighting function [math]\displaystyle{ w_n (\tau) }[/math] can be formulated from polynomial least squares to estimate the unknown [math]\displaystyle{ y(t) }[/math] as follows:[8]

[math]\displaystyle{ \hat y (\tau) = \frac {1} {N}\sum_{n=1} ^N z_n w_n (\tau) = \frac {1} {N}\sum_{n=1} ^N (\alpha+\beta t_n + \varepsilon_n) w_n (\tau) }[/math]

where N is the number of samples, [math]\displaystyle{ z_n }[/math] are random variables as samples of the stochastic [math]\displaystyle{ z }[/math] (noisy signal), and the first degree polynomial data weights are

[math]\displaystyle{ w_n(\tau)\equiv\frac{[\bar{t^2}-\bar{t}t_n+(t_n-\bar{t})\tau]}{(\bar{t^2}- \bar{t}^2)} }[/math]

which represent the linear polynomial least squares "system" and describe its processing.[8] The Greek letter [math]\displaystyle{ \tau }[/math] plays the role of the independent variable [math]\displaystyle{ t }[/math] when the fitted polynomial is evaluated to estimate the dependent variable [math]\displaystyle{ y }[/math]; it is written as [math]\displaystyle{ \tau }[/math] to avoid confusion with the sampled values [math]\displaystyle{ t_n }[/math] used during polynomial least squares processing. The overbar ( ¯ ) denotes the deterministic centroid of a quantity [math]\displaystyle{ u_n }[/math] as processed by polynomial least squares[8] – i.e., the deterministic first order moment, which may be treated as a sample average but is not interpreted here as an approximation of a first order statistical moment:

[math]\displaystyle{ \bar{u}\overset{\underset{\mathrm{def}}{}}{=}\frac {1} {N}\sum_{n=1} ^N u_n }[/math]
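
A short sketch of the first degree case (assuming Python with NumPy and illustrative values for the polynomial and the errors): the weights [math]\displaystyle{ w_n(\tau) }[/math] and the estimate [math]\displaystyle{ \hat y(\tau) }[/math] are computed directly from the overbar moments defined above, and the result coincides with an ordinary degree-1 least squares fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative noisy first degree polynomial (alpha = 2, beta = 0.5 assumed).
N = 50
t = np.linspace(0.0, 10.0, N)
z = 2.0 + 0.5 * t + rng.uniform(-1.0, 1.0, size=N)

t_bar = t.mean()            # overbar moment of t
t2_bar = (t**2).mean()      # overbar moment of t^2

def w(tau):
    """First degree polynomial least squares weights w_n(tau)."""
    return (t2_bar - t_bar * t + (t - t_bar) * tau) / (t2_bar - t_bar**2)

def y_hat(tau):
    """Estimate of y(tau) as the weighted average (1/N) * sum_n z_n w_n(tau)."""
    return np.mean(z * w(tau))

print(y_hat(4.0))                             # estimate of y at tau = 4
print(np.polyval(np.polyfit(t, z, 1), 4.0))   # same value from an ordinary degree-1 fit
```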

Empirically determined statistical moments

Applying [math]\displaystyle{ w_n (\tau) }[/math] yields

[math]\displaystyle{ \hat y(\tau)=\hat\alpha+\hat\beta \tau }[/math]

where

[math]\displaystyle{ \hat\alpha=\frac{(\bar{z}\bar{t^2}-\bar{zt}\bar{t})}{(\bar{t^2}-\bar{t}^2)}=\alpha+\frac{(\bar{\varepsilon}\bar{t^2}-\bar{{\varepsilon}t}\bar{t})}{(\bar{t^2}-\bar{t}^2)} }[/math]

and

[math]\displaystyle{ \hat\beta=\frac{(\bar{zt}-\bar{z}\bar{t})}{(\bar{t^2}-\bar{t}^2)}=\beta+\frac{(\bar{\varepsilon t}-\bar{\varepsilon}\bar{t})}{(\bar{t^2}-\bar{t}^2)} }[/math]

As linear functions of the random variables [math]\displaystyle{ \varepsilon_n }[/math], both coefficient estimates [math]\displaystyle{ \hat\alpha }[/math] and [math]\displaystyle{ \hat\beta }[/math] are random variables.[8] In the absence of the errors [math]\displaystyle{ \varepsilon_n }[/math], [math]\displaystyle{ \hat\alpha=\alpha }[/math] and [math]\displaystyle{ \hat\beta=\beta }[/math], as they should in order to satisfy that boundary condition.
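
These closed forms can be checked numerically; the sketch below (assuming Python with NumPy and illustrative values) computes [math]\displaystyle{ \hat\alpha }[/math] and [math]\displaystyle{ \hat\beta }[/math] from the overbar moments, confirms that they reduce to [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math] when the errors are removed, and compares them with a standard degree-1 least squares fit.

```python
import numpy as np

rng = np.random.default_rng(3)

alpha, beta = 2.0, 0.5                       # assumed illustrative values
N = 50
t = np.linspace(0.0, 10.0, N)
z = alpha + beta * t + rng.uniform(-1.0, 1.0, size=N)

def coefficients(t, z):
    """alpha_hat and beta_hat from the overbar (sample average) moments."""
    t_bar, t2_bar = t.mean(), (t**2).mean()
    z_bar, zt_bar = z.mean(), (z * t).mean()
    denom = t2_bar - t_bar**2
    alpha_hat = (z_bar * t2_bar - zt_bar * t_bar) / denom
    beta_hat = (zt_bar - z_bar * t_bar) / denom
    return alpha_hat, beta_hat

print(coefficients(t, z))                    # noisy estimates of (2.0, 0.5)
print(coefficients(t, alpha + beta * t))     # recovers (2.0, 0.5) when the errors are absent
print(np.polyfit(t, z, 1)[::-1])             # same estimates from a standard degree-1 fit
```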

Because the statistical expectation operator E[•] is a linear function and the sampled stochastic process errors [math]\displaystyle{ \varepsilon_n }[/math] are zero mean, the expected value of the estimate [math]\displaystyle{ \hat y }[/math] is the first order statistical moment as follows:[1][2][3][8]

[math]\displaystyle{ E[\hat y (\tau)] =\alpha+\beta\tau+ \frac {1} {N}\sum_{n=1} ^N E[\varepsilon_n] w_n (\tau)= \alpha+\beta\tau = y(\tau) }[/math]

The statistical variance in [math]\displaystyle{ \hat y }[/math] is given by the second order statistical central moment as follows:[1][2][3][8]

[math]\displaystyle{ \sigma_\hat y ^2 = E[\left(\hat y-E[\hat y]\right)^2 ]= \frac {1} {N}\frac {1} {N}\sum_{n=1} ^N \sum_{i=1} ^N w_n (\tau) E[\varepsilon_n \varepsilon_i] w_i (\tau) }[/math]

[math]\displaystyle{ =\sigma_\varepsilon ^2 \frac {1} {N}\frac {1} {N}\sum_{n=1} ^N w_n ^2 (\tau) }[/math]

because

[math]\displaystyle{ \sum_{i=1} ^N E[\varepsilon_n \varepsilon_i] w_i (\tau)=\sigma_\varepsilon^2 w_n (\tau) }[/math]

where [math]\displaystyle{ \sigma_\varepsilon^2 }[/math] is the statistical variance of the random variables [math]\displaystyle{ \varepsilon_n }[/math]; i.e., [math]\displaystyle{ E[\varepsilon_n \varepsilon_i]= \sigma_\varepsilon ^2 }[/math] for i = n and, because the [math]\displaystyle{ \varepsilon_n }[/math] are uncorrelated, [math]\displaystyle{ E[\varepsilon_n \varepsilon_i]=0 }[/math] for [math]\displaystyle{ i \ne n }[/math].[8]

Carrying out the multiplications and summations in [math]\displaystyle{ \sigma_\hat y^2 }[/math] yields[8]

[math]\displaystyle{ \sigma_\hat y^2=\sigma_\varepsilon^2\frac{(\bar{t^2}-2\bar{t}\tau+\tau^2)}{N(\bar{t^2}- \bar{t}^2)}. }[/math]
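
This closed form can be checked by simulation; the sketch below (assuming Python with NumPy and illustrative values for [math]\displaystyle{ \alpha }[/math], [math]\displaystyle{ \beta }[/math], [math]\displaystyle{ \sigma_\varepsilon }[/math] and [math]\displaystyle{ \tau }[/math]) compares the empirical variance of [math]\displaystyle{ \hat y(\tau) }[/math] over many independent noise realizations with the expression above.

```python
import numpy as np

rng = np.random.default_rng(4)

alpha, beta = 2.0, 0.5                  # assumed illustrative values
sigma_eps = 0.3                         # assumed error standard deviation
N, trials, tau = 50, 20000, 7.0

t = np.linspace(0.0, 10.0, N)
t_bar, t2_bar = t.mean(), (t**2).mean()

# Many independent realizations of the corrupted observations (one per row).
z = alpha + beta * t + rng.normal(scale=sigma_eps, size=(trials, N))

# y_hat(tau) for every realization, computed through the weighting function.
w = (t2_bar - t_bar * t + (t - t_bar) * tau) / (t2_bar - t_bar**2)
y_hat = (z * w).mean(axis=1)

empirical = y_hat.var()
predicted = sigma_eps**2 * (t2_bar - 2 * t_bar * tau + tau**2) / (N * (t2_bar - t_bar**2))
print(empirical, predicted)             # the two agree to Monte Carlo accuracy
```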

Measuring or approximating the statistical variance of the random errors

In a hardware system, such as a tracking radar, the measurement noise variance [math]\displaystyle{ \sigma_\varepsilon^2 }[/math] can be determined from measurements when there is no target return – i.e., by just taking measurements of the noise alone.

However, if polynomial least squares is used when the variance [math]\displaystyle{ \sigma_\varepsilon^2 }[/math] is not measurable (such as in econometrics or statistics), it can be estimated from the observations using the minimum sum of squared residuals [math]\displaystyle{ SSR_\min }[/math] obtained from orthogonal projection, as follows:

[math]\displaystyle{ \sigma_\varepsilon^2\approx\hat {\sigma_\varepsilon^2}= (\bar {z^2}-\hat\alpha\bar{z} - \hat \beta \bar{zt}) }[/math] [8]

As a result, to first order, expressing [math]\displaystyle{ \hat\alpha }[/math] and [math]\displaystyle{ \hat\beta }[/math] as functions of the sampled [math]\displaystyle{ z }[/math] and [math]\displaystyle{ t }[/math] gives the approximation

[math]\displaystyle{ \sigma_\hat y^2 \approx \bigg[\frac{(\bar{z^2}-\bar{z}^2)}{(\bar{t^2}-\bar{t}^2)}- \Biggl(\frac{(\bar{zt}-\bar{z}\bar{t})}{(\bar{t^2}-\bar{t}^2)}\Biggr)^2 \bigg]{\frac{(\bar{t^2}-2\bar{t}\tau+\tau^2)}N} }[/math]

which goes to zero in the absence of the errors [math]\displaystyle{ \varepsilon_n }[/math], as it should in order to satisfy that boundary condition.[8]
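
A sketch of this plug-in estimate (assuming Python with NumPy and illustrative noisy samples): [math]\displaystyle{ \hat{\sigma_\varepsilon^2} }[/math] is computed from the sample moments together with [math]\displaystyle{ \hat\alpha }[/math] and [math]\displaystyle{ \hat\beta }[/math], and then substituted into the variance expression for [math]\displaystyle{ \hat y(\tau) }[/math].

```python
import numpy as np

rng = np.random.default_rng(5)

N, tau = 50, 7.0
t = np.linspace(0.0, 10.0, N)
z = 2.0 + 0.5 * t + rng.normal(scale=0.3, size=N)   # illustrative noisy samples

t_bar, t2_bar = t.mean(), (t**2).mean()
z_bar, z2_bar, zt_bar = z.mean(), (z**2).mean(), (z * t).mean()
denom = t2_bar - t_bar**2

alpha_hat = (z_bar * t2_bar - zt_bar * t_bar) / denom
beta_hat = (zt_bar - z_bar * t_bar) / denom

# Estimated error variance from the orthogonal projection residual.
sigma_eps2_hat = z2_bar - alpha_hat * z_bar - beta_hat * zt_bar

# Plug-in estimate of the statistical variance of y_hat(tau).
sigma_yhat2 = sigma_eps2_hat * (t2_bar - 2 * t_bar * tau + tau**2) / (N * denom)
print(sigma_eps2_hat, sigma_yhat2)
```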

As a result, the samples [math]\displaystyle{ z_n }[/math] (noisy signal) are considered to be the input to the linear polynomial least squares "system" which transforms the samples into the empirically determined statistical estimate [math]\displaystyle{ \hat y (\tau) }[/math], the expected value [math]\displaystyle{ E[\hat y] }[/math], and the variance [math]\displaystyle{ \sigma_\hat y^2 }[/math].[8]

Properties of polynomial least squares modeled as a linear "system"

(1) The empirical statistical variance [math]\displaystyle{ \sigma_\hat y^2 }[/math] is a function of [math]\displaystyle{ \sigma_\varepsilon ^2 }[/math], N and [math]\displaystyle{ \tau }[/math]. Setting the derivative of [math]\displaystyle{ \sigma_\hat y^2 }[/math] with respect to [math]\displaystyle{ \tau }[/math] equal to zero shows the minimum to occur at [math]\displaystyle{ \tau=\bar t }[/math]; i.e., at the centroid (sample average) of the samples [math]\displaystyle{ t_n }[/math]. The minimum statistical variance is then [math]\displaystyle{ \frac{\sigma_\varepsilon ^2 } {N} }[/math], which is the statistical variance obtained from polynomial least squares of a zero degree polynomial – i.e., of the sample average of the [math]\displaystyle{ z_n }[/math] used to estimate [math]\displaystyle{ \alpha }[/math].[1][2][8][9] (A numerical check of this property and of property (3) is sketched after this list.)

(2) The empirical statistical variance [math]\displaystyle{ \sigma_\hat y^2 }[/math] is a quadratic function of [math]\displaystyle{ \tau }[/math]. The further [math]\displaystyle{ \tau }[/math] deviates from [math]\displaystyle{ \bar t }[/math] (even within the data window), the larger the variance [math]\displaystyle{ \sigma_\hat y^2 }[/math] due to the random variable errors [math]\displaystyle{ \varepsilon_n }[/math]. The independent variable [math]\displaystyle{ \tau }[/math] can take any value on the [math]\displaystyle{ t }[/math] axis; it is not limited to the data window and, depending on the application, will at times extend beyond it. Estimation within the data window is described as interpolation; estimation outside the data window is described as extrapolation. It is both intuitive and well known that the further the extrapolation, the larger the error.[8]

(3) The empirical statistical variance [math]\displaystyle{ \sigma_\hat y^2 }[/math] due to the random variable errors [math]\displaystyle{ \varepsilon_n }[/math] is inversely proportional to N: as N increases, the statistical variance decreases. This is well known and is the essence of filtering out the errors [math]\displaystyle{ \varepsilon_n }[/math].[1][2][8][12] The underlying purpose of polynomial least squares is to filter out the errors and thereby improve estimation accuracy by reducing the empirical statistical estimation variance. In principle, only two data points are required to estimate [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math]; however, the more data points with zero mean statistical errors are included, the smaller the empirical statistical estimation variance established by the N samples.

(4) There is an additional issue to be considered when the noise variance is not measurable: Independent of the polynomial least squares estimation, any new observations would be described by the variance [math]\displaystyle{ \sigma_\varepsilon^2\approx\hat {\sigma_\varepsilon^2}= (\bar {z^2}-\hat\alpha\bar{z} - \hat \beta \bar{zt}) }[/math].[8][9]

Thus, the polynomial least squares statistical estimation variance [math]\displaystyle{ \sigma_\hat y^2 }[/math] and the statistical variance of any new sample in [math]\displaystyle{ \sigma_\varepsilon ^2 }[/math] would both contribute to the uncertainty of any future observation. Both variances are clearly determined by polynomial least squares in advance.

(5) This concept also applies to higher degree polynomials, although the weighting function [math]\displaystyle{ w_n (\tau) }[/math] becomes more complicated. In addition, the estimation variances increase exponentially as the polynomial degree increases linearly (i.e., in unit steps); ways of dealing with this are described in [6] and [7].
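
The following sketch (assuming Python with NumPy and an illustrative error variance) numerically checks properties (1) and (3): the variance expression attains its minimum [math]\displaystyle{ \sigma_\varepsilon^2/N }[/math] at [math]\displaystyle{ \tau=\bar t }[/math], and it scales as 1/N.

```python
import numpy as np

def var_y_hat(tau, t, sigma_eps2):
    """Statistical variance of y_hat(tau) for a first degree fit over the samples t."""
    t_bar, t2_bar = t.mean(), (t**2).mean()
    return sigma_eps2 * (t2_bar - 2 * t_bar * tau + tau**2) / (len(t) * (t2_bar - t_bar**2))

sigma_eps2 = 0.09                            # assumed error variance

for N in (25, 50, 100):                      # property (3): the variance scales as 1/N
    t = np.linspace(0.0, 10.0, N)
    taus = np.linspace(-5.0, 15.0, 401)      # extends beyond the data window (extrapolation)
    v = var_y_hat(taus, t, sigma_eps2)
    # Property (1): the minimum occurs at tau = t_bar, where the variance is sigma_eps2 / N.
    print(N, taus[np.argmin(v)], t.mean(), v.min(), sigma_eps2 / N)
```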

The synergy of integrating polynomial least squares with statistical estimation theory

Modeling polynomial least squares as a linear signal processing "system" creates the synergy of integrating polynomial least squares with statistical estimation theory to deterministically process samples of an assumed polynomial corrupted with a statistically described stochastic error ε. In the absence of the error ε, statistical estimation theory is irrelevant and polynomial least squares reverts to the conventional approximation of complicated functions and scatter plots.

References

  1. Gujarati, Damodar N.; Porter, Dawn C. (2008). Basic Econometrics (5th ed.). McGraw-Hill Education. ISBN 978-0073375779. http://egei.vse.cz/english/wp-content/uploads/2012/08/Basic-Econometrics.pdf.
  2. Hansen, Bruce E. (January 16, 2015). Econometrics. http://www.ssc.wisc.edu/~bhansen/econometrics/Econometrics.pdf.
  3. Copeland, Thomas E.; Weston, John Fred; Shastri, Kuldeep (January 10, 2004). Financial Theory and Corporate Policy (4th ed.). Prentice Hall. ISBN 978-0321127211.
  4. Kálmán, Rudolf E. (March 1, 1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering 82: 35. doi:10.1115/1.3662552.
  5. Sorenson, H. W. (July 1970). "Least-squares estimation: Gauss to Kalman". IEEE Spectrum.
  6. Bell, J. W. (October 2012). "Simple Disambiguation of Orthogonal Projection in Kalman's Filter Derivation". Proceedings of the International Conference on Radar Systems, Glasgow, UK.
  7. Bell, J. W. (October 2013). "A Simple Kalman Filter Alternative: The Multi-Fractional Order Estimator". IET-RSN 7 (8).
  8. Bell, Jeff. "Ordinary Least Squares Revolutionized: Establishing the Vital Missing Empirically Determined Statistical Prediction Variance". SSRN. doi:10.2139/ssrn.2573840. http://ssrn.com/abstract=2573840. Retrieved 2019-02-27.
  9. Papoulis, A. (1965). Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York.
  10. Wylie, C. R., Jr. (1960). Advanced Engineering Mathematics. McGraw-Hill, New York.
  11. Scheid, F. (1968). Numerical Analysis. Schaum's Outline Series. McGraw-Hill, New York.
  12. Ordinary least squares