Correspondence analysis
Correspondence analysis (CA) is a multivariate statistical technique proposed[1] by Herman Otto Hartley (Hirschfeld)[2] and later developed by Jean-Paul Benzécri.[3] It is conceptually similar to principal component analysis, but applies to categorical rather than continuous data. In a similar manner to principal component analysis, it provides a means of displaying or summarising a set of data in two-dimensional graphical form. Its aim is to display in a biplot any structure hidden in the multivariate setting of the data table. As such it is a technique from the field of multivariate ordination. Since the variant of CA described here can be applied either with a focus on the rows or on the columns, it should in fact be called simple (symmetric) correspondence analysis.[4]
It is traditionally applied to the contingency table of a pair of nominal variables where each cell contains either a count or a zero value. If more than two categorical variables are to be summarized, a variant called multiple correspondence analysis should be chosen instead. CA may also be applied to binary data, provided that the presence/absence coding represents simplified count data, i.e. a 1 describes a positive count and a 0 stands for a count of zero. Depending on the scores used, CA preserves the chi-square distance[5][6] between either the rows or the columns of the table. Because CA is a descriptive technique, it can be applied to tables regardless of whether the chi-squared test of the table is significant.[7][8] Although the [math]\displaystyle{ \chi^2 }[/math] statistic used in inferential statistics and the chi-square distance are computationally related, they should not be confused: the chi-square distance works as a multivariate statistical distance measure in CA, while the [math]\displaystyle{ \chi^2 }[/math] statistic is in fact a scalar, not a metric.[9]
Details
Like principal component analysis, correspondence analysis creates orthogonal components (or axes) and, for each item in a table, i.e. for each row, a set of scores (sometimes called factor scores, see Factor analysis). Correspondence analysis is performed on the data table, conceived as matrix C of size m × n, where m is the number of rows and n is the number of columns. In the following mathematical description of the method, capital letters in italics refer to matrices while lowercase letters in italics refer to vectors. Understanding the following computations requires knowledge of matrix algebra.
Preprocessing
Before proceeding to the central computational step of the algorithm, the values in matrix C have to be transformed.[10] First compute a set of weights for the columns and the rows (sometimes called masses),[7][11] where row and column weights are given by the row and column vectors, respectively:
- [math]\displaystyle{ w_m = \frac{1}{n_C} C \mathbf{1}, \quad w_n = \frac{1}{n_C}\mathbf{1}^T C. }[/math]
Here [math]\displaystyle{ n_C = \sum_{i=1}^m \sum_{j=1}^n C_{ij} }[/math] is the sum of all cell values in matrix C, or, for short, the sum of C, and [math]\displaystyle{ \mathbf{1} }[/math] is a column vector of ones with the appropriate dimension.
Put in simple words, [math]\displaystyle{ w_m }[/math] is just a vector whose elements are the row sums of C divided by the sum of C, and [math]\displaystyle{ w_n }[/math] is a vector whose elements are the column sums of C divided by the sum of C.
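As a minimal sketch of this step in base R (the 3×3 table C is hypothetical, chosen only for illustration):

```r
# Hypothetical 3x3 contingency table, for illustration only
C   <- matrix(c(10,  5, 15,
                 2,  8,  4,
                 6, 12,  3), nrow = 3, byrow = TRUE)
n_C <- sum(C)             # grand total of the table
w_m <- rowSums(C) / n_C   # row masses: row sums / grand total
w_n <- colSums(C) / n_C   # column masses: column sums / grand total
```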
The weights are transformed into diagonal matrices
- [math]\displaystyle{ W_m = \operatorname{diag}(1/\sqrt{w_m}) }[/math]
and
- [math]\displaystyle{ W_n = \operatorname{diag}(1/\sqrt{w_n}) }[/math]
where the diagonal elements of [math]\displaystyle{ W_n }[/math] are [math]\displaystyle{ 1/\sqrt{w_n} }[/math] and those of [math]\displaystyle{ W_m }[/math] are [math]\displaystyle{ 1/\sqrt{w_m} }[/math], respectively, i.e. the diagonal elements are the inverses of the square roots of the masses. The off-diagonal elements are all 0.
Next, compute matrix [math]\displaystyle{ P }[/math] by dividing [math]\displaystyle{ C }[/math] by its sum
- [math]\displaystyle{ P = \frac{1}{n_C} C. }[/math]
In simple words, matrix [math]\displaystyle{ P }[/math] is just the data matrix (contingency table or binary table) transformed into proportions, i.e. each cell value is the cell's proportion of the sum of the whole table.
Finally, compute matrix [math]\displaystyle{ S }[/math], sometimes called the matrix of standardized residuals,[10] by matrix multiplication as
- [math]\displaystyle{ S = W_m(P - w_m w_n)W_n }[/math]
Note that the vectors [math]\displaystyle{ w_m }[/math] and [math]\displaystyle{ w_n }[/math] are combined in an outer product, resulting in a matrix of the same dimensions as [math]\displaystyle{ P }[/math]. In words the formula reads: matrix [math]\displaystyle{ \operatorname{outer}(w_m, w_n) }[/math] is subtracted from matrix [math]\displaystyle{ P }[/math] and the resulting matrix is scaled (weighted) by the diagonal matrices [math]\displaystyle{ W_m }[/math] and [math]\displaystyle{ W_n }[/math]. Multiplying the resulting matrix by the diagonal matrices is equivalent to multiplying the i-th row (or column) of it by the i-th element of the diagonal of [math]\displaystyle{ W_m }[/math] or [math]\displaystyle{ W_n }[/math], respectively.[12]
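The whole preprocessing step may be sketched in base R as follows (again using a hypothetical table C; outer() computes the outer product):

```r
C   <- matrix(c(10, 5, 15, 2, 8, 4, 6, 12, 3), nrow = 3, byrow = TRUE)
n_C <- sum(C)
w_m <- rowSums(C) / n_C                # row masses
w_n <- colSums(C) / n_C                # column masses
W_m <- diag(1 / sqrt(w_m))             # diagonal row weighting matrix
W_n <- diag(1 / sqrt(w_n))             # diagonal column weighting matrix
P   <- C / n_C                         # table of proportions
S   <- W_m %*% (P - outer(w_m, w_n)) %*% W_n   # standardized residuals
```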
Interpretation of preprocessing
The vectors [math]\displaystyle{ w_m }[/math] and [math]\displaystyle{ w_n }[/math] are the row and column masses or the marginal probabilities for the rows and columns, respectively. Subtracting matrix [math]\displaystyle{ \operatorname{outer}(w_m, w_n) }[/math] from matrix [math]\displaystyle{ P }[/math] is the matrix algebra version of double centering the data. Multiplying this difference by the diagonal weighting matrices results in a matrix containing weighted deviations from the origin of a vector space. This origin is defined by matrix [math]\displaystyle{ \operatorname{outer}(w_m, w_n) }[/math].
In fact matrix [math]\displaystyle{ \operatorname{outer}(w_m, w_n) }[/math] is identical to the matrix of expected (relative) frequencies in the chi-squared test. Therefore [math]\displaystyle{ S }[/math] is computationally related to the independence model used in that test. But since CA is not an inferential method, the term independence model is inappropriate here.
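This relation can be checked numerically in base R; note that chisq.test() returns expected counts, so they have to be divided by the grand total to obtain the expected relative frequencies (hypothetical table C as above):

```r
C   <- matrix(c(10, 5, 15, 2, 8, 4, 6, 12, 3), nrow = 3, byrow = TRUE)
w_m <- rowSums(C) / sum(C)
w_n <- colSums(C) / sum(C)
E   <- chisq.test(C)$expected           # expected counts under independence
all.equal(E / sum(C), outer(w_m, w_n))  # TRUE: expected relative frequencies
```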
Orthogonal components
The table [math]\displaystyle{ S }[/math] is then decomposed[10] by a singular value decomposition as
- [math]\displaystyle{ S = U\Sigma V^* \, }[/math]
where [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math] are the left and right singular vectors of [math]\displaystyle{ S }[/math] and [math]\displaystyle{ \Sigma }[/math] is a square diagonal matrix with the singular values [math]\displaystyle{ \sigma_i }[/math] of [math]\displaystyle{ S }[/math] on the diagonal. [math]\displaystyle{ \Sigma }[/math] is of dimension [math]\displaystyle{ p \leq (\min(m,n)-1) }[/math], hence [math]\displaystyle{ U }[/math] is of dimension m×p and [math]\displaystyle{ V }[/math] is of dimension n×p. As their columns are orthonormal vectors, [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math] fulfill
- [math]\displaystyle{ U^* U = V^* V = I }[/math].
In other words, the multivariate information that is contained in [math]\displaystyle{ C }[/math] as well as in [math]\displaystyle{ S }[/math] is now distributed across two (coordinate) matrices [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math] and a diagonal (scaling) matrix [math]\displaystyle{ \Sigma }[/math]. The vector space defined by them has p dimensions, i.e. at most the smaller of the number of rows and the number of columns, minus 1.
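In base R, the decomposition may be sketched as follows; svd() returns min(m, n) singular values, so the trivial dimension has to be dropped (hypothetical table C as above):

```r
C   <- matrix(c(10, 5, 15, 2, 8, 4, 6, 12, 3), nrow = 3, byrow = TRUE)
n_C <- sum(C); P <- C / n_C
w_m <- rowSums(C) / n_C; w_n <- colSums(C) / n_C
W_m <- diag(1 / sqrt(w_m)); W_n <- diag(1 / sqrt(w_n))
S   <- W_m %*% (P - outer(w_m, w_n)) %*% W_n

dec   <- svd(S)                   # singular value decomposition
p     <- min(dim(C)) - 1          # number of non-trivial dimensions
U     <- dec$u[, 1:p, drop = FALSE]
V     <- dec$v[, 1:p, drop = FALSE]
sigma <- dec$d[1:p]               # dropped singular value is (numerically) zero
```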
Inertia
While principal component analysis may be said to decompose the (co)variance, and hence its measure of success is the amount of (co)variance covered by the first few PCA axes (measured in eigenvalues), CA works with a weighted (co)variance, which is called inertia.[13] The sum of the squared singular values is the total inertia [math]\displaystyle{ \Iota }[/math] of the data table, computed as
- [math]\displaystyle{ \Iota = \sum_{i=1}^p \sigma_i^2. }[/math]
The total inertia [math]\displaystyle{ \Iota }[/math] of the data table can also be computed directly from [math]\displaystyle{ S }[/math] as
- [math]\displaystyle{ \Iota = \sum_{i=1}^m \sum_{j=1}^n s_{ij}^2. }[/math]
The amount of inertia covered by the i-th set of singular vectors is [math]\displaystyle{ \iota_i }[/math], the principal inertia. The higher the portion of inertia covered by the first few singular vectors, i.e. the larger the sum of the principal inertias in comparison to the total inertia, the more successful a CA is.[13] Therefore all principal inertia values are expressed as portions [math]\displaystyle{ \epsilon_i }[/math] of the total inertia
- [math]\displaystyle{ \epsilon_i = \sigma_i^2 / \sum_{j=1}^p \sigma_j^2 }[/math]
and are presented in the form of a scree plot. In fact a scree plot is just a bar plot of all principal inertia portions [math]\displaystyle{ \epsilon_i }[/math].
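A sketch of the inertia computations, including the scree plot, in base R (hypothetical table C as above):

```r
C   <- matrix(c(10, 5, 15, 2, 8, 4, 6, 12, 3), nrow = 3, byrow = TRUE)
n_C <- sum(C); P <- C / n_C
w_m <- rowSums(C) / n_C; w_n <- colSums(C) / n_C
S   <- diag(1 / sqrt(w_m)) %*% (P - outer(w_m, w_n)) %*% diag(1 / sqrt(w_n))
sigma <- svd(S)$d[1:(min(dim(C)) - 1)]    # non-trivial singular values

total_inertia <- sum(sigma^2)             # equals sum(S^2) up to rounding
eps <- sigma^2 / total_inertia            # portions of inertia
barplot(eps, names.arg = seq_along(eps))  # scree plot of inertia portions
```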
Coordinates
To transform the singular vectors to coordinates which preserve the chi-square distances between rows or columns, an additional weighting step is necessary. The resulting coordinates are called principal coordinates[10] in CA textbooks. If principal coordinates are used for the rows, their visualization is called a row isometric[14] scaling in econometrics and scaling 1[15] in ecology. Since the weighting includes the singular values [math]\displaystyle{ \Sigma }[/math] of the matrix of standardized residuals [math]\displaystyle{ S }[/math], these coordinates are sometimes referred to as singular value scaled singular vectors, or, somewhat misleadingly, as eigenvalue scaled eigenvectors. In fact the non-trivial eigenvectors of [math]\displaystyle{ S S^* }[/math] are the left singular vectors [math]\displaystyle{ U }[/math] of [math]\displaystyle{ S }[/math] and those of [math]\displaystyle{ S^* S }[/math] are the right singular vectors [math]\displaystyle{ V }[/math] of [math]\displaystyle{ S }[/math], while the eigenvalues of either of these matrices are the squares of the singular values [math]\displaystyle{ \Sigma }[/math]. But since all modern algorithms for CA are based on a singular value decomposition, this terminology should be avoided. In the French tradition of CA the coordinates are sometimes called (factor) scores.
Factor scores or principal coordinates for the rows of matrix C are computed by
- [math]\displaystyle{ F_m = W_m U \Sigma }[/math]
i.e. the left singular vectors are scaled by the inverses of the square roots of the row masses and by the singular values. Because principal coordinates are computed using singular values, they contain the information about the spread between the rows (or columns) in the original table. Computing the Euclidean distances between the entities in principal coordinates results in values that equal their chi-square distances, which is the reason why CA is said to "preserve chi-square distances".
Compute principal coordinates for the columns by
- [math]\displaystyle{ F_n = W_n V \Sigma. }[/math]
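Both sets of principal coordinates, and the distance property, may be sketched in base R as follows (hypothetical table C as above; dist() computes Euclidean distances between the rows of a matrix):

```r
C   <- matrix(c(10, 5, 15, 2, 8, 4, 6, 12, 3), nrow = 3, byrow = TRUE)
n_C <- sum(C); P <- C / n_C
w_m <- rowSums(C) / n_C; w_n <- colSums(C) / n_C
W_m <- diag(1 / sqrt(w_m)); W_n <- diag(1 / sqrt(w_n))
S   <- W_m %*% (P - outer(w_m, w_n)) %*% W_n
dec <- svd(S); p <- min(dim(C)) - 1

Sig <- diag(dec$d[1:p], nrow = p)    # diagonal matrix of singular values
F_m <- W_m %*% dec$u[, 1:p] %*% Sig  # row principal coordinates
F_n <- W_n %*% dec$v[, 1:p] %*% Sig  # column principal coordinates
dist(F_m)  # Euclidean distances equal the chi-square distances between rows
```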
To represent the result of CA in a proper biplot, those categories which are not plotted in principal coordinates, i.e. in chi-square distance preserving coordinates, should be plotted in so-called standard coordinates.[10] They are called standard coordinates because each vector of standard coordinates has been standardized to exhibit mean 0 and variance 1.[16] When computing standard coordinates, the singular values are omitted. This is a direct result of the biplot rule, by which one of the two sets of singular vectors must be scaled by singular values raised to the power of zero, i.e. multiplied by one, i.e. computed by omitting the singular values, if the other set of singular vectors has been scaled by the singular values. Omitting the singular values in this way ensures the existence of an inner product between the two sets of coordinates, i.e. it leads to meaningful interpretations of their spatial relations in a biplot.
In practical terms one can think of the standard coordinates as the vertices of the vector space in which the set of principal coordinates (i.e. the respective points) "exists".[17] The standard coordinates for the rows are
- [math]\displaystyle{ G_m = W_m U }[/math]
and those for the columns are
- [math]\displaystyle{ G_n = W_n V }[/math]
Note that a scaling 1[15] biplot in ecology implies the rows to be in principal and the columns to be in standard coordinates, while scaling 2 implies the rows to be in standard and the columns to be in principal coordinates. I.e. scaling 1 implies a biplot of [math]\displaystyle{ F_m }[/math] together with [math]\displaystyle{ G_n }[/math], while scaling 2 implies a biplot of [math]\displaystyle{ F_n }[/math] together with [math]\displaystyle{ G_m }[/math].
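A sketch of the standard coordinates and a scaling 1 biplot in base R (hypothetical table C as above):

```r
C   <- matrix(c(10, 5, 15, 2, 8, 4, 6, 12, 3), nrow = 3, byrow = TRUE)
n_C <- sum(C); P <- C / n_C
w_m <- rowSums(C) / n_C; w_n <- colSums(C) / n_C
W_m <- diag(1 / sqrt(w_m)); W_n <- diag(1 / sqrt(w_n))
S   <- W_m %*% (P - outer(w_m, w_n)) %*% W_n
dec <- svd(S); p <- min(dim(C)) - 1

G_m <- W_m %*% dec$u[, 1:p]                # row standard coordinates
G_n <- W_n %*% dec$v[, 1:p]                # column standard coordinates
F_m <- G_m %*% diag(dec$d[1:p], nrow = p)  # row principal coordinates

# scaling 1 biplot: rows in principal, columns in standard coordinates
rng <- apply(rbind(F_m, G_n), 2, range)    # common axis limits
plot(F_m[, 1], F_m[, 2], pch = 19, xlim = rng[, 1], ylim = rng[, 2],
     xlab = "Dimension 1", ylab = "Dimension 2")
points(G_n[, 1], G_n[, 2], pch = 17)       # column vertices
```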
Graphical representation of result
The visualization of a CA result always starts with displaying the scree plot of the principal inertia values to evaluate the success of summarizing spread by the first few singular vectors.
The actual ordination is presented in a graph which could, at first look, be confused with a complicated scatter plot. In fact it consists of two scatter plots printed one upon the other, one set of points for the rows and one for the columns. But since the graph is a biplot, a clear interpretation rule relates the two coordinate matrices used.
Usually the first two dimensions of the CA solution are plotted because they encompass the maximum amount of information about the data table that can be displayed in 2D, although other combinations of dimensions may be investigated in a biplot. A biplot is in fact a low-dimensional mapping of a part of the information contained in the original table.
As a rule of thumb, that set (rows or columns) which should be analysed with respect to its composition as measured by the other set is displayed in principal coordinates, while the other set is displayed in standard coordinates. E.g. a table displaying voting districts in rows and political parties in columns, with the cells containing the counted votes, may be displayed with the districts (rows) in principal coordinates when the focus is on ordering districts according to similar voting.
Traditionally, originating from the French tradition in CA,[18] early CA biplots mapped both entities in the same coordinate version, usually principal coordinates, but this kind of display is misleading insofar as: "Although this is called a biplot, it does not have any useful inner product relationship between the row and column scores", as Brian Ripley, maintainer of the R package MASS, points out correctly.[19] Today that kind of display should be avoided, since lay readers are usually not aware of the missing relation between the two point sets.
A scaling 1[15] biplot (rows in principal coordinates, columns in standard coordinates) is interpreted as follows:[20]
- The distances between row points approximate their chi-square distance. Points close to each other represent rows with very similar values in the original data table. I.e. they may exhibit rather similar frequencies in case of count data or closely related binary values in case of presence/absence data.
- (Column) points in standard coordinates represent the vertices of the vector space, i.e. the outer corners of something that in multidimensional space has the shape of an irregular polyhedron. Project row points onto the line connecting the origin and the standard coordinate of a column; if the projected position along that connecting line is close to the position of the standard coordinate, that row point is strongly associated with this column, i.e. in case of count data the row has a high frequency of that category and in case of presence/absence data the row is likely to exhibit a 1 in that column. Row points whose projection would require extending the connecting line beyond the origin have a lower than average value in that column. A sketch of this projection rule follows below.
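The projection rule of the second point may be sketched in base R as follows (hypothetical table C as above; the first column is chosen arbitrarily for illustration):

```r
C   <- matrix(c(10, 5, 15, 2, 8, 4, 6, 12, 3), nrow = 3, byrow = TRUE)
n_C <- sum(C); P <- C / n_C
w_m <- rowSums(C) / n_C; w_n <- colSums(C) / n_C
W_m <- diag(1 / sqrt(w_m)); W_n <- diag(1 / sqrt(w_n))
S   <- W_m %*% (P - outer(w_m, w_n)) %*% W_n
dec <- svd(S); p <- min(dim(C)) - 1
F_m <- W_m %*% dec$u[, 1:p] %*% diag(dec$d[1:p], nrow = p)  # rows, principal
G_n <- W_n %*% dec$v[, 1:p]                                 # columns, standard

v    <- G_n[1, ]                      # vertex of the first column
proj <- (F_m %*% v) / sqrt(sum(v^2))  # signed projections of the row points
proj  # large positive: strong association with column 1; negative: below average
```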
Extensions and applications
Several variants of CA are available, including detrended correspondence analysis (DCA) and canonical correspondence analysis (CCA). The latter (CCA) is used when there is information about possible causes for the similarities between the investigated entities. The extension of correspondence analysis to many categorical variables is called multiple correspondence analysis. An adaptation of correspondence analysis to the problem of discrimination based upon qualitative variables (i.e., the equivalent of discriminant analysis for qualitative data) is called discriminant correspondence analysis or barycentric discriminant analysis.
In the social sciences, correspondence analysis, and particularly its extension multiple correspondence analysis, was made known outside France through French sociologist Pierre Bourdieu's application of it.[21]
Implementations
- The data visualization system Orange includes the module orngCA.
- The statistical programming language R includes several packages which offer a function for (simple symmetric) correspondence analysis. Using the R notation package_name::function_name, the packages and respective functions are: ade4::dudi.coa(), ca::ca(), ExPosition::epCA(), FactoMineR::CA(), MASS::corresp(), and vegan::cca(). The easiest approach for beginners is ca::ca(), as there is an extensive textbook[22] accompanying that package; a quick-start sketch follows below.
- The freeware PAST (PAleontological STatistics)[23] offers (simple symmetric) correspondence analysis via the menu "Multivariate/Ordination/Correspondence (CA)".
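A hypothetical quick start with ca::ca(), assuming the ca package and its bundled smoke data set (a small staff group by smoking category table) are installed; the map argument of the plot method is assumed here to select the coordinate scaling of the biplot:

```r
library(ca)
data("smoke")             # staff groups x smoking categories
res <- ca(smoke)          # simple (symmetric) correspondence analysis
summary(res)              # principal inertias and coordinates
plot(res, map = "rowprincipal")  # rows in principal, columns in standard coordinates
```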
References
- ↑ Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-850994-4
- ↑ Hirschfeld, H.O. (1935) "A connection between correlation and contingency", Proc. Cambridge Philosophical Society, 31, 520–524
- ↑ Benzécri, J.-P. (1973). L'Analyse des Données. Volume II. L'Analyse des Correspondances. Paris, France: Dunod.
- ↑ Beh, Eric; Lombardo, Rosaria (2014). Correspondence Analysis. Theory, Practice and New Strategies. Chichester: Wiley. pp. 120. ISBN 978-1-119-95324-1.
- ↑ Greenacre, Michael (2007). Correspondence Analysis in Practice. Boca Raton: CRC Press. pp. 204. ISBN 9781584886167.
- ↑ Legendre, Pierre; Legendre, Louis (2012). Numerical Ecology. Amsterdam: Elsevier. pp. 465. ISBN 978-0-444-53868-0.
- ↑ 7.0 7.1 Greenacre, Michael (1983). Theory and Applications of Correspondence Analysis. London: Academic Press. ISBN 0-12-299050-1.
- ↑ Greenacre, Michael (2007). Correspondence Analysis in Practice, Second Edition. London: Chapman & Hall/CRC.
- ↑ Greenacre, Michael (2017). Correspondence Analysis in Practice (3rd ed.). Boca Raton: CRC Press. pp. 26–29. ISBN 9781498731775.
- ↑ 10.0 10.1 10.2 10.3 10.4 Greenacre, Michael (2007). Correspondence Analysis in Practice. Boca Raton: CRC Press. pp. 202. ISBN 9781584886167.
- ↑ Greenacre, Michael (2007). Correspondence Analysis in Practice, Second Edition. London: Chapman & Hall/CRC. pp. 202.
- ↑ Abadir, Karim; Magnus, Jan (2005). Matrix algebra. Cambridge: Cambridge University Press. pp. 24. ISBN 9786612394256.
- ↑ 13.0 13.1 Beh, Eric; Lombardo, Rosaria (2014). Correspondence Analysis. Theory, Practice and New Strategies. Chichester: Wiley. pp. 87, 129. ISBN 978-1-119-95324-1.
- ↑ Beh, Eric; Lombardo, Rosaria (2014). Correspondence Analysis. Theory, Practice and New Strategies. Chichester: Wiley. pp. 132–134. ISBN 978-1-119-95324-1.
- ↑ 15.0 15.1 15.2 Legendre, Pierre; Legendre, Louis (2012). Numerical Ecology. Amsterdam: Elsevier. pp. 470. ISBN 978-0-444-53868-0.
- ↑ Greenacre, Michael (2017). Correspondence Analysis in Practice (3rd ed.). Boca Raton: CRC Press. pp. 62. ISBN 9781498731775.
- ↑ Blasius, Jörg (2001). Korrespondenzanalyse (in German). Berlin: Walter de Gruyter. pp. 40, 60. ISBN 9783486257304.
- ↑ Greenacre, Michael (2017). Correspondence Analysis in Practice (3rd ed.). Boca Raton: CRC Press. pp. 70. doi:10.1201/9781315369983. ISBN 9781498731775.
- ↑ Ripley, Brian (2022-01-13). "MASS R package manual". Details. https://rdrr.io/cran/MASS/man/corresp.html.
- ↑ Borcard, Daniel; Gillet, Francois; Legendre, Pierre (2018). Numerical Ecology with R (2nd ed.). Cham: Springer. p. 175. doi:10.1007/978-3-319-71404-2. ISBN 9783319714042.
- ↑ Bourdieu, Pierre (1984). Distinction. Routledge. pp. 41. ISBN 0674212770. https://archive.org/details/distinctionsocia0000bour/page/41.
- ↑ Greenacre, Michael (2021). Correspondence Analysis in Practice (third ed.). London: CRC PRESS. ISBN 9780367782511.
- ↑ Hammer, Øyvind. "Past 4 - the Past of the Future". https://www.nhm.uio.no/english/research/infrastructure/past/.
External links
- Greenacre, Michael (2008), La Práctica del Análisis de Correspondencias, BBVA Foundation, Madrid, Spanish translation of Correspondence Analysis in Practice, available for free download from BBVA Foundation publications
- Greenacre, Michael (2010), Biplots in Practice, BBVA Foundation, Madrid, available for free download at multivariatestatistics.org
Original source: https://en.wikipedia.org/wiki/Correspondence analysis.