Multiple correspondence analysis


In statistics, multiple correspondence analysis (MCA) is a data analysis technique for nominal categorical data, used to detect and represent underlying structures in a data set. It does this by representing data as points in a low-dimensional Euclidean space. The procedure thus appears to be the counterpart of principal component analysis for categorical data.[1][2] MCA can be viewed as an extension of simple correspondence analysis (CA) in that it is applicable to a large set of categorical variables.

As an extension of correspondence analysis

MCA is performed by applying the CA algorithm to either an indicator matrix (also called complete disjunctive table – CDT) or a Burt table formed from these variables.[3] An indicator matrix is an individuals × variables matrix, where the rows represent individuals and the columns are dummy variables representing categories of the variables.[4] Analyzing the indicator matrix allows the direct representation of individuals as points in geometric space. The Burt table is the symmetric matrix of all two-way cross-tabulations between the categorical variables, and has an analogy to the covariance matrix of continuous variables. Analyzing the Burt table is a more natural generalization of simple correspondence analysis, and individuals or the means of groups of individuals can be added as supplementary points to the graphical display.
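
The Burt table can be obtained directly from the indicator matrix as the cross-product of the indicator matrix with itself. The following sketch, on a small hypothetical indicator matrix, is only meant to make this relation concrete (the data are invented):

    # Hypothetical indicator matrix: 4 individuals, two variables with 2 and 3 categories
    import numpy as np

    X = np.array([
        [1, 0,  0, 1, 0],
        [0, 1,  1, 0, 0],
        [1, 0,  0, 0, 1],
        [0, 1,  0, 1, 0],
    ])

    # Burt table: symmetric matrix of all two-way cross-tabulations between categories
    B = X.T @ X
    print(B)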

In the indicator matrix approach, associations between variables are uncovered by calculating the chi-square distance between different categories of the variables and between the individuals (or respondents). These associations are then represented graphically as "maps", which eases the interpretation of the structures in the data. Oppositions between rows and columns are then maximized, in order to uncover the underlying dimensions best able to describe the central oppositions in the data. As in factor analysis or principal component analysis, the first axis is the most important dimension, the second axis the second most important, and so on, in terms of the amount of variance accounted for. The number of axes to be retained for analysis is determined by calculating modified eigenvalues.

Details

Since MCA is adapted to drawing statistical conclusions from categorical variables (such as multiple-choice questions), the first step is to transform quantitative data (such as age, height, weight, or time of day) into categories, for instance by binning on quantiles.
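
For example, a quantitative variable such as age can be binned into quartiles before running MCA. A minimal sketch with pandas, using made-up ages:

    # Hypothetical example: binning a quantitative variable into quartile categories
    import pandas as pd

    ages = pd.Series([23, 35, 41, 58, 62, 29, 47, 53])
    age_cat = pd.qcut(ages, q=4, labels=["Q1", "Q2", "Q3", "Q4"])
    print(age_cat)  # each age is replaced by its quartile category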

When the dataset is completely represented as categorical variables, one can build the corresponding so-called complete disjunctive table. We denote this table [math]\displaystyle{ X }[/math]. If [math]\displaystyle{ I }[/math] persons answered a survey with [math]\displaystyle{ J }[/math] multiple-choice questions with 4 answers each, [math]\displaystyle{ X }[/math] will have [math]\displaystyle{ I }[/math] rows and [math]\displaystyle{ 4J }[/math] columns.
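
Building such a table amounts to one-hot encoding each question. A minimal sketch with pandas, on purely illustrative survey answers:

    # Complete disjunctive table (CDT) by one-hot encoding; the answers are invented
    import pandas as pd

    answers = pd.DataFrame({
        "q1": ["a", "b", "a", "c"],        # 4 respondents, 3 possible answers
        "q2": ["yes", "no", "no", "yes"],  # 4 respondents, 2 possible answers
    })

    # One 0/1 dummy column per category of each question
    X = pd.get_dummies(answers).astype(int)
    print(X)  # columns: q1_a, q1_b, q1_c, q2_no, q2_yes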

More theoretically,[5] assume [math]\displaystyle{ X }[/math] is the complete disjunctive table of [math]\displaystyle{ I }[/math] observations of [math]\displaystyle{ K }[/math] categorical variables. Assume also that the [math]\displaystyle{ k }[/math]-th variable has [math]\displaystyle{ J_k }[/math] different levels (categories) and set [math]\displaystyle{ J=\sum_{k=1}^{K} J_k }[/math]. The table [math]\displaystyle{ X }[/math] is then an [math]\displaystyle{ I \times J }[/math] matrix with all coefficients equal to [math]\displaystyle{ 0 }[/math] or [math]\displaystyle{ 1 }[/math]. Set the sum of all entries of [math]\displaystyle{ X }[/math] to be [math]\displaystyle{ N }[/math] and introduce [math]\displaystyle{ Z=X/N }[/math]. In an MCA, there are also two special vectors: [math]\displaystyle{ r }[/math], which contains the sums along the rows of [math]\displaystyle{ Z }[/math], and [math]\displaystyle{ c }[/math], which contains the sums along the columns of [math]\displaystyle{ Z }[/math]. Denote by [math]\displaystyle{ D_r = \text{diag}(r) }[/math] and [math]\displaystyle{ D_c = \text{diag}(c) }[/math] the diagonal matrices with [math]\displaystyle{ r }[/math] and [math]\displaystyle{ c }[/math] respectively on the diagonal. With these notations, computing an MCA consists essentially in the singular value decomposition of the matrix:

[math]\displaystyle{ M = D_{r}^{-1/2} (Z-r c^T ) D_{c}^{-1/2} }[/math]

The decomposition of [math]\displaystyle{ M }[/math] yields [math]\displaystyle{ P }[/math], [math]\displaystyle{ \Delta }[/math] and [math]\displaystyle{ Q }[/math] such that [math]\displaystyle{ M=P \Delta Q^T }[/math], with [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] two unitary matrices and [math]\displaystyle{ \Delta }[/math] the generalized diagonal matrix of the singular values (with the same shape as [math]\displaystyle{ Z }[/math]). The positive coefficients of [math]\displaystyle{ \Delta^2 }[/math] are the eigenvalues of [math]\displaystyle{ M^T M }[/math], i.e. the principal inertias of the analysis.

The value of MCA lies in the way observations (rows) and variables (columns) in [math]\displaystyle{ Z }[/math] can be decomposed. This decomposition is called a factor decomposition. The coordinates of the observations in the factor space are given by

[math]\displaystyle{ F = D_{r}^{-1/2} P \Delta }[/math]

The [math]\displaystyle{ i }[/math]-th row of [math]\displaystyle{ F }[/math] represents the [math]\displaystyle{ i }[/math]-th observation in the factor space. Similarly, the coordinates of the variables (in the same factor space as the observations) are given by

[math]\displaystyle{ G = D_{c}^{-1/2} Q \Delta }[/math]
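
These formulas can be sketched directly with numpy. The function below is only an illustrative implementation of this section's equations, assuming [math]\displaystyle{ X }[/math] is a complete disjunctive table stored as a 0/1 array in which every category occurs at least once:

    # Illustrative MCA via the SVD of M = D_r^{-1/2} (Z - r c^T) D_c^{-1/2}
    import numpy as np

    def mca(X):
        Z = X / X.sum()                       # Z = X / N
        r = Z.sum(axis=1)                     # row sums of Z
        c = Z.sum(axis=0)                     # column sums of Z
        Dr_isqrt = np.diag(1.0 / np.sqrt(r))  # D_r^{-1/2}
        Dc_isqrt = np.diag(1.0 / np.sqrt(c))  # D_c^{-1/2}
        M = Dr_isqrt @ (Z - np.outer(r, c)) @ Dc_isqrt
        P, delta, Qt = np.linalg.svd(M, full_matrices=False)
        F = Dr_isqrt @ P * delta              # coordinates of the observations
        G = Dc_isqrt @ Qt.T * delta           # coordinates of the variables
        return F, G, delta**2                 # delta**2: principal inertias

    # F, G, inertias = mca(X.astype(float))   # X: indicator matrix as a numpy array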

Recent works and extensions

In recent years, several students of Jean-Paul Benzécri have refined MCA and incorporated it into a more general framework of data analysis known as geometric data analysis. This involves the development of direct connections between simple correspondence analysis, principal component analysis and MCA with a form of cluster analysis known as Euclidean classification.[6]

Two extensions have great practical use.

  • It is possible to include, as active elements in the MCA, several quantitative variables. This extension is called factor analysis of mixed data (see below).
  • Very often, in questionnaires, the questions are structured into several topics. The statistical analysis needs to take this structure into account. This is the aim of multiple factor analysis, which balances the different topics (i.e. the different groups of variables) within a global analysis and provides, beyond the classical results of factorial analysis (mainly graphics of individuals and of categories), several results (indicators and graphics) specific to the group structure.

Application fields

In the social sciences, MCA is arguably best known for its application by Pierre Bourdieu,[7] notably in his books La Distinction, Homo Academicus and The State Nobility. Bourdieu argued that there was an internal link between his vision of the social as spatial and relational (captured by the notion of field) and the geometric properties of MCA.[8] Sociologists following Bourdieu's work most often opt for the analysis of the indicator matrix, rather than the Burt table, largely because of the central importance accorded to the analysis of the 'cloud of individuals'.[9]

Multiple correspondence analysis and principal component analysis

MCA can also be viewed as a PCA applied to the complete disjunctive table. To do this, the CDT must be transformed as follows. Let [math]\displaystyle{ y_{ik} }[/math] denote the general term of the CDT: [math]\displaystyle{ y_{ik} }[/math] is equal to 1 if individual [math]\displaystyle{ i }[/math] possesses category [math]\displaystyle{ k }[/math] and 0 if not. Let [math]\displaystyle{ p_k }[/math] denote the proportion of individuals possessing category [math]\displaystyle{ k }[/math]. The transformed CDT (TCDT) has as general term:

[math]\displaystyle{  x_{ik}=y_{ik}/p_k - 1 }[/math]

An unstandardized PCA applied to the TCDT, with column [math]\displaystyle{ k }[/math] weighted by [math]\displaystyle{ p_k }[/math], leads to the results of MCA.
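
A minimal sketch of the TCDT transform (the weighted PCA step itself is omitted here, since its output depends on the weighting conventions of the PCA implementation used):

    # Transformed CDT with general term x_ik = y_ik / p_k - 1
    import numpy as np

    def tcdt(Y):
        """Y: complete disjunctive (0/1) table, one row per individual."""
        p = Y.mean(axis=0)  # p_k: proportion of individuals possessing category k
        return Y / p - 1.0  # x_ik = y_ik / p_k - 1

    # An unstandardized PCA of tcdt(Y), with column k weighted by p_k and rows
    # weighted uniformly, reproduces the MCA factor axes.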

This equivalence is fully explained in a book by Jérôme Pagès.[10] It plays an important theoretical role because it opens the way to the simultaneous treatment of quantitative and qualitative variables. Two methods simultaneously analyze these two types of variables: factor analysis of mixed data and, when the active variables are partitioned into several groups, multiple factor analysis.

This equivalence does not mean that MCA is a particular case of PCA as it is not a particular case of CA. It only means that these methods are closely linked to one another, as they belong to the same family: the factorial methods.[citation needed]

Software

Numerous data analysis software packages include MCA, such as Stata and SPSS. The R package FactoMineR also features MCA; it is related to a book describing the basic methods for performing MCA.[11] There is also a Python package [1] which works with numpy array matrices; it has not yet been implemented for Spark dataframes.

References

  1. Le Roux, B. and H. Rouanet (2004). Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis. Dordrecht: Kluwer, p. 180. 
  2. Greenacre, Michael and Blasius, Jörg (editors) (2006). Multiple Correspondence Analysis and Related Methods. London: Chapman & Hall/CRC. 
  3. Greenacre, Michael (2007). Correspondence Analysis in Practice, Second Edition. London: Chapman & Hall/CRC. 
  4. Le Roux, B. and H. Rouanet (2004). Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis. Dordrecht: Kluwer, p. 179.
  5. Hervé Abdi; Dominique Valentin (2007). "Multiple correspondence analysis". https://personal.utdallas.edu/~herve/Abdi-MCA2007-pretty.pdf. 
  6. Le Roux, B. and H. Rouanet (2004). Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis. Dordrecht: Kluwer. 
  7. Scott, John & Gordon Marshall (2009): Oxford Dictionary of Sociology, p. 135. Oxford: Oxford University Press
  8. Rouanet, Henry (2000) "The Geometric Analysis of Questionnaires. The Lesson of Bourdieu's La Distinction", in Bulletin de Méthodologie Sociologique 65, pp. 4–18
  9. Lebaron, Frédéric (2009) "How Bourdieu “Quantified” Bourdieu: The Geometric Modelling of Data", in Robson and Sanders (eds.) Quantifying Theory: Pierre Bourdieu. Springer, pp. 11-30.
  10. Pagès, Jérôme (2014). Multiple Factor Analysis by Example Using R. Chapman & Hall/CRC, The R Series, London, 272 p.
  11. Husson F., Lê S. & Pagès J. (2009). Exploratory Multivariate Analysis by Example Using R. Chapman & Hall/CRC The R Series, London. ISBN:978-2-7535-0938-2

External links

  • Le Roux, B. and H. Rouanet (2004), Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis at Google Books: [2]
  • Greenacre, Michael (2008), La Práctica del Análisis de Correspondencias, BBVA Foundation, Madrid, available for free download at the foundation's web site [3]
  • FactoMineR, an R package devoted to exploratory data analysis.