Linear predictor function
In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable.[1] This sort of function usually comes up in linear regression, where the coefficients are called regression coefficients. However, linear predictor functions also occur in various types of linear classifiers (e.g. logistic regression,[2] perceptrons,[3] support vector machines,[4] and linear discriminant analysis[5]), as well as in various other models, such as principal component analysis[6] and factor analysis. In many of these models, the coefficients are referred to as "weights".
Definition
The basic form of a linear predictor function [math]\displaystyle{ f(i) }[/math] for data point i (consisting of p explanatory variables), for i = 1, ..., n, is
- [math]\displaystyle{ f(i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}, }[/math]
where [math]\displaystyle{ x_{ik} }[/math], for k = 1, ..., p, is the value of the k-th explanatory variable for data point i, and [math]\displaystyle{ \beta_0, \ldots, \beta_p }[/math] are the coefficients (regression coefficients, weights, etc.) indicating the relative effect of a particular explanatory variable on the outcome.
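For illustration, the following is a minimal Python sketch of evaluating this expression for a single data point (the coefficient and variable values are hypothetical examples, not taken from the article):

```python
# Minimal sketch: evaluating a linear predictor function for one data point.
# All numeric values below are made-up examples.

beta = [2.0, 0.5, -1.3, 0.7]   # beta_0 (intercept), beta_1, ..., beta_p
x_i = [1.2, 3.4, 0.0]          # x_i1, ..., x_ip for data point i

# f(i) = beta_0 + beta_1 * x_i1 + ... + beta_p * x_ip
f_i = beta[0] + sum(b * x for b, x in zip(beta[1:], x_i))
print(f_i)  # 2.0 + 0.5*1.2 - 1.3*3.4 + 0.7*0.0 = -1.82
```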
Notations
It is common to write the predictor function in a more compact form as follows:
- The coefficients β0, β1, ..., βp are grouped into a single vector β of size p + 1.
- For each data point i, an additional explanatory pseudo-variable xi0 is added, with a fixed value of 1, corresponding to the intercept coefficient β0.
- The resulting explanatory variables xi0(= 1), xi1, ..., xip are then grouped into a single vector xi of size p + 1.
Vector notation
This makes it possible to write the linear predictor function as follows:
- [math]\displaystyle{ f(i)= \boldsymbol\beta \cdot \mathbf{x}_i }[/math]
using the notation for a dot product between two vectors.
Matrix notation
An equivalent form using matrix notation is as follows:
- [math]\displaystyle{ f(i)= \boldsymbol\beta^{\mathrm T} \mathbf{x}_i = \mathbf{x}^{\mathrm T}_i \boldsymbol\beta }[/math]
where [math]\displaystyle{ \boldsymbol\beta }[/math] and [math]\displaystyle{ \mathbf{x}_i }[/math] are assumed to be (p+1)-by-1 column vectors, [math]\displaystyle{ \boldsymbol\beta^{\mathrm T} }[/math] is the matrix transpose of [math]\displaystyle{ \boldsymbol\beta }[/math] (so [math]\displaystyle{ \boldsymbol\beta^{\mathrm T} }[/math] is a 1-by-(p+1) row vector), and [math]\displaystyle{ \boldsymbol\beta^{\mathrm T} \mathbf{x}_i }[/math] indicates matrix multiplication between the 1-by-(p+1) row vector and the (p+1)-by-1 column vector, producing a 1-by-1 matrix that is taken to be a scalar.
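In code, the compact form reduces to a single dot product once the pseudo-variable xi0 = 1 is prepended; a minimal NumPy sketch, reusing the same hypothetical values as above:

```python
import numpy as np

beta = np.array([2.0, 0.5, -1.3, 0.7])   # (p+1)-vector: beta_0, beta_1, ..., beta_p
x_i = np.array([1.2, 3.4, 0.0])          # the p explanatory variables for data point i

x_i_aug = np.concatenate(([1.0], x_i))   # prepend the pseudo-variable x_i0 = 1
f_i = beta @ x_i_aug                     # the dot product beta . x_i (equivalently beta^T x_i)
print(f_i)                               # -1.82, identical to the expanded sum
```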
Linear regression
An example of the usage of a linear predictor function is in linear regression, where each data point is associated with a continuous outcome yi, and the relationship is written
- [math]\displaystyle{ y_i = f(i) + \varepsilon_i = \boldsymbol\beta^{\mathrm T}\mathbf{x}_i\ + \varepsilon_i, }[/math]
where [math]\displaystyle{ \varepsilon_i }[/math] is a disturbance term or error variable — an unobserved random variable that adds noise to the linear relationship between the dependent variable and predictor function.
Stacking
In some models (standard linear regression, in particular), the equations for each of the data points i = 1, ..., n are stacked together and written in vector form as
- [math]\displaystyle{ \mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon, \, }[/math]
where
- [math]\displaystyle{ \mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad \mathbf{X} = \begin{pmatrix} \mathbf{x}^{\mathrm T}_1 \\ \mathbf{x}^{\mathrm T}_2 \\ \vdots \\ \mathbf{x}^{\mathrm T}_n \end{pmatrix} = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{pmatrix}, \quad \boldsymbol\beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}, \quad \boldsymbol\varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}. }[/math]
The matrix X is known as the design matrix and encodes all known information about the independent variables. The variables [math]\displaystyle{ \varepsilon_i }[/math] are random variables, which in standard linear regression are assumed to be independent and identically distributed according to a normal distribution with mean zero and constant variance; they express the influence of any unknown factors on the outcome.
This makes it possible to find optimal coefficients through the method of least squares using simple matrix operations. In particular, the optimal coefficients [math]\displaystyle{ \boldsymbol{\hat\beta} }[/math] as estimated by least squares can be written as follows:
- [math]\displaystyle{ \boldsymbol{\hat\beta} =( X^\mathrm T X)^{-1}X^{\mathrm T}\mathbf{y}. }[/math]
The matrix [math]\displaystyle{ ( X^\mathrm T X)^{-1}X^{\mathrm T} }[/math] is the Moore–Penrose pseudoinverse of X when X has full column rank. The use of the matrix inverse in this formula requires that X be of full column rank, i.e. that there is no perfect multicollinearity among the explanatory variables (no explanatory variable can be perfectly predicted from the others). When perfect multicollinearity is present, the singular value decomposition can be used instead to compute the pseudoinverse.
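A minimal NumPy sketch of this estimate on synthetic data may help; the data-generating coefficients below are arbitrary assumptions, and in practice `np.linalg.lstsq` or `np.linalg.pinv` (both SVD-based) are preferred over explicitly forming the inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 2

# Synthetic data: y = 1.0 + 2.0*x1 - 0.5*x2 + noise (made-up "true" coefficients).
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # design matrix with intercept column
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Normal-equations form of the least-squares estimate: (X^T X)^{-1} X^T y.
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

# Numerically safer equivalents based on the pseudoinverse / SVD.
beta_pinv = np.linalg.pinv(X) @ y
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_hat)  # all three estimates should be close to [1.0, 2.0, -0.5]
```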
Preprocessing of explanatory variables
When a fixed set of nonlinear functions is used to transform the value(s) of a data point, these functions are known as basis functions. An example is polynomial regression, which uses a linear predictor function to fit a polynomial relationship up to a given degree between two sets of data points (i.e. a single real-valued explanatory variable and a related real-valued dependent variable), by adding multiple explanatory variables corresponding to various powers of the existing explanatory variable. Mathematically, the form looks like this:
- [math]\displaystyle{ y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \cdots + \beta_p x_i^p. }[/math]
In this case, for each data point i, a set of explanatory variables is created as follows:
- [math]\displaystyle{ (x_{i1} = x_i,\quad x_{i2} = x_i^2,\quad \ldots,\quad x_{ip} = x_i^p) }[/math]
and then standard linear regression is run. The basis functions in this example would be
- [math]\displaystyle{ \boldsymbol\phi(x) = (\phi_1(x), \phi_2(x), \ldots, \phi_p(x)) = (x, x^2, \ldots, x^p). }[/math]
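A minimal NumPy sketch of this preprocessing step, expanding a single explanatory variable into polynomial features and then running ordinary least squares (the degree and the synthetic data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=50)
y = 1.0 - 2.0 * x + 0.5 * x**3 + rng.normal(scale=0.2, size=50)  # made-up cubic relationship

p = 3  # polynomial degree
# Basis functions phi_k(x) = x^k for k = 1, ..., p, plus the intercept column x^0 = 1.
X = np.column_stack([x**k for k in range(p + 1)])   # columns: 1, x, x^2, x^3

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # still ordinary linear regression
print(beta_hat)                                     # approximately [1.0, -2.0, 0.0, 0.5]
```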
This example shows that a linear predictor function can actually be much more powerful than it first appears: It only really needs to be linear in the coefficients. All sorts of non-linear functions of the explanatory variables can be fit by the model.
There is no particular need for the inputs to basis functions to be univariate or single-dimensional (nor their outputs, for that matter, although a K-dimensional output value is likely to be treated as K separate scalar-output basis functions). An example is radial basis functions (RBFs), which compute some transformed version of the distance from a fixed point:
- [math]\displaystyle{ \phi(\mathbf{x};\mathbf{c}) = \phi(||\mathbf{x} - \mathbf{c}||) = \phi(\sqrt{(x_1 - c_1)^2 + \ldots + (x_K - c_K)^2}) }[/math]
An example is the Gaussian RBF, which has the same functional form as the normal distribution:
- [math]\displaystyle{ \phi(\mathbf{x};\mathbf{c}) = e^{-b||\mathbf{x} - \mathbf{c}||^2} }[/math]
which drops off rapidly as the distance from c increases.
A possible usage of RBFs is to create one for every observed data point. This means that the result of an RBF applied to a new data point will be close to 0 unless the new point is near the center of that RBF. That is, applying the radial basis functions will pick out the nearest observed point, and its regression coefficient will dominate. The result is a form of nearest neighbor interpolation, where predictions are made simply by using the prediction of the nearest observed data point, possibly interpolating between multiple nearby data points when they are all at similar distances. This type of nearest neighbor method for prediction is often considered diametrically opposed to the type of prediction used in standard linear regression; but in fact, the transformations that can be applied to the explanatory variables in a linear predictor function are so powerful that even the nearest neighbor method can be implemented as a type of linear regression.
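A minimal NumPy sketch of this construction, placing one Gaussian RBF at every observed data point and then fitting a linear predictor in the resulting feature space (the bandwidth b and the synthetic data are illustrative assumptions):

```python
import numpy as np

def gaussian_rbf(X, centers, b=1.0):
    """Gaussian RBF features phi(x; c) = exp(-b * ||x - c||^2) for each center c."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-b * sq_dists)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(30, 2))                 # 30 observed points in 2 dimensions
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2               # made-up nonlinear target

Phi = gaussian_rbf(X, centers=X, b=10.0)             # one RBF centered at every data point
beta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear regression in RBF feature space

x_new = np.array([[0.1, -0.2]])
y_pred = gaussian_rbf(x_new, centers=X, b=10.0) @ beta_hat
print(y_pred)  # prediction dominated by the RBFs centered at the nearest observed points
```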
It is even possible to fit some functions that appear non-linear in the coefficients by transforming the coefficients into new coefficients that do appear linear. For example, a function of the form [math]\displaystyle{ a + b^2x_{i1} + \sqrt{c}x_{i2} }[/math] for coefficients [math]\displaystyle{ a,b,c }[/math] can be transformed into an appropriate linear function by applying the substitutions [math]\displaystyle{ b' = b^2, c' = \sqrt{c}, }[/math] leading to [math]\displaystyle{ a + b'x_{i1} + c'x_{i2}, }[/math] which is linear. Linear regression and similar techniques can be applied and will often still find the optimal coefficients, but the resulting error estimates will not be valid for the original coefficients.
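A minimal NumPy sketch of this reparameterization (the data and the "true" coefficient values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1, x2 = rng.normal(size=(2, n))

# "True" model y = a + b^2 * x1 + sqrt(c) * x2 with made-up a, b, c.
a, b, c = 1.5, 2.0, 9.0
y = a + b**2 * x1 + np.sqrt(c) * x2 + rng.normal(scale=0.1, size=n)

# Fit the linear reparameterization y = a + b' * x1 + c' * x2 by least squares.
X = np.column_stack([np.ones(n), x1, x2])
a_hat, b_prime, c_prime = np.linalg.lstsq(X, y, rcond=None)[0]

# Recover the original coefficients (note the sign ambiguity of b, and that
# error estimates for b' and c' do not carry over to b and c directly).
b_hat, c_hat = np.sqrt(b_prime), c_prime**2
print(a_hat, b_hat, c_hat)  # approximately 1.5, 2.0, 9.0
```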
The explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (e.g. income, age, blood pressure, etc.) and discrete variables (e.g. sex, race, political party, etc.). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), i.e. separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have the given value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" would be converted to separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows separate regression coefficients to be fit for each possible value of the discrete variable.
Note that, for K categories, not all K dummy variables are independent of each other. For example, in the above blood type example, only three of the four dummy variables are independent, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus it is really only necessary to encode three of the four possibilities as dummy variables; in fact, if all four possibilities are encoded, the overall model becomes non-identifiable. This causes problems for a number of methods, such as the simple closed-form solution used in linear regression. The solution is either to avoid such cases by eliminating one of the dummy variables, or to introduce a regularization constraint (which necessitates a more powerful, typically iterative, method for finding the optimal coefficients).[7]
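A minimal NumPy sketch of this dummy coding, dropping one category as the reference level to keep the model identifiable (the category labels and data are illustrative):

```python
import numpy as np

blood_type = np.array(["A", "B", "AB", "O", "A", "O", "B", "AB"])

# Drop one category ("O") as the reference level to avoid perfect
# multicollinearity with the intercept column.
categories = ["A", "B", "AB"]
dummies = np.column_stack([(blood_type == cat).astype(float) for cat in categories])

X = np.column_stack([np.ones(len(blood_type)), dummies])  # intercept + 3 dummy columns
print(X)
```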
References
- ↑ Makhoul, J. (1975). "Linear prediction: A tutorial review". Proceedings of the IEEE 63 (4): 561–580. doi:10.1109/PROC.1975.9792. ISSN 0018-9219. Bibcode: 1975IEEEP..63..561M.
- ↑ David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. ISBN 9780521743853. https://archive.org/details/statisticalmodel00free. "A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient"
- ↑ Rosenblatt, Frank (1957), The Perceptron--a perceiving and recognizing automaton. Report 85-460-1, Cornell Aeronautical Laboratory.
- ↑ Cortes, Corinna; Vapnik, Vladimir N. (1995). "Support-vector networks". Machine Learning 20 (3): 273–297. doi:10.1007/BF00994018. http://image.diku.dk/imagecanon/material/cortes_vapnik95.pdf.
- ↑ McLachlan, G. J. (2004). Discriminant Analysis and Statistical Pattern Recognition. Wiley Interscience. ISBN 978-0-471-69115-0.
- ↑ Jolliffe, I. T. (2002). Principal Component Analysis (2nd ed.). Springer Series in Statistics. New York: Springer. ISBN 978-0-387-95442-4.
- ↑ Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome H. (2009) (in en). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer. ISBN 978-0-387-84884-6. https://books.google.com/books?id=eBSgoAEACAAJ.
Original source: https://en.wikipedia.org/wiki/Linear_predictor_function