Kaiser-Meyer-Olkin test
The Kaiser-Meyer-Olkin (KMO) test is a statistical measure of how suited data is for factor analysis. The test measures sampling adequacy for each variable in the model and for the complete model. The statistic is a measure of the proportion of variance among variables that might be common variance. The higher the proportion, the more suited the data is to factor analysis.[1]
History
Henry Kaiser introduced a Measure of Sampling Adequacy (MSA) of factor analytic data matrices in 1970.[2] Kaiser and Rice then modified it in 1974.[3]
Measure of sampling adequacy
The measure of sampling adequacy is calculated for each indicator as
- [math]\displaystyle{ MSA_j = \frac{\displaystyle \sum_{k\neq j} r_{jk}^2}{\displaystyle \sum_{k\neq j} r_{jk}^2+\sum_{k\neq j} p_{jk}^2} }[/math]
and indicates to what extent an indicator is suitable for a factor analysis.
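The quantities in this formula can be computed directly in R. The sketch below is illustrative only (the helper name msa_sketch is an assumption, not part of any package); it obtains the partial correlations [math]\displaystyle{ p_{jk} }[/math] from the inverse of the correlation matrix and then forms the ratio above for each indicator:
- msa_sketch <- function(x) {
-   R  <- cor(x)                                  # simple correlations r_jk
-   Ri <- solve(R)                                # inverse of the correlation matrix
-   P  <- -Ri / sqrt(outer(diag(Ri), diag(Ri)))   # partial correlations p_jk
-   diag(R) <- 0; diag(P) <- 0                    # drop the k = j terms from the sums
-   colSums(R^2) / (colSums(R^2) + colSums(P^2))  # MSA_j for each indicator
- }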
Kaiser-Meyer-Olkin criterion
- The Kaiser-Meyer-Olkin criterion is calculated over all variables jointly and takes values between 0 and 1:
- [math]\displaystyle{ KMO = \frac{\displaystyle \underset{j\neq k}{\sum\sum} r_{jk}^2}{\displaystyle \underset{j\neq k}{\sum\sum} r_{jk}^2+\underset{j\neq k}{\sum\sum} p_{jk}^2} }[/math]
Here [math]\displaystyle{ r_{jk} }[/math] is the correlation between variables [math]\displaystyle{ j }[/math] and [math]\displaystyle{ k }[/math], and [math]\displaystyle{ p_{jk} }[/math] is their partial correlation, controlling for the remaining variables.
The criterion is a function of the squared elements of the 'image' matrix compared to the squares of the original correlations. The overall measure, as well as an estimate for each item, can be calculated. The index is known as the Kaiser-Meyer-Olkin (KMO) index.[4]
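Summing the same squared terms over all pairs [math]\displaystyle{ j \neq k }[/math] gives the overall criterion. A minimal R sketch, again with an assumed helper name:
- kmo_sketch <- function(x) {
-   R  <- cor(x)                                 # simple correlations r_jk
-   Ri <- solve(R)                               # inverse correlation matrix
-   P  <- -Ri / sqrt(outer(diag(Ri), diag(Ri)))  # partial correlations p_jk
-   diag(R) <- 0; diag(P) <- 0                   # exclude the j = k terms
-   sum(R^2) / (sum(R^2) + sum(P^2))             # overall KMO criterion
- }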
Interpretation of result
In flamboyant fashion, Kaiser proposed that a KMO above .9 was marvelous, in the .80s meritorious, in the .70s middling, in the .60s mediocre, in the .50s miserable, and below .5 unacceptable.[3] In general, KMO values between 0.8 and 1 indicate that the sampling is adequate. KMO values less than 0.6 indicate that the sampling is not adequate and that remedial action should be taken; other authors put this cutoff at 0.5.[5] A KMO value close to zero means that the partial correlations are large relative to the sum of the correlations; in other words, the pattern of correlations is diffuse, which is a large problem for factor analysis.[1]
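Kaiser's verbal labels can be expressed as a simple lookup in R. The helper name kmo_label is hypothetical, and its lowest cutoff follows Kaiser and Rice's 0.5 rather than the 0.6 rule mentioned above:
- kmo_label <- function(kmo) {
-   cut(kmo, breaks = c(0, .5, .6, .7, .8, .9, 1),
-       labels = c("unacceptable", "miserable", "mediocre",
-                  "middling", "meritorious", "marvelous"),
-       right = FALSE, include.lowest = TRUE)
- }
- kmo_label(0.527)   # "miserable" for the KMO value in the example below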
An alternative measure of whether a correlation matrix is factorable is the Bartlett test, which tests the degree to which the matrix deviates from an identity matrix.[1]
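Assuming the psych package is available, Bartlett's test can be run in R with its cortest.bartlett() function, which takes a correlation matrix and the sample size (x below is a placeholder for a numeric data frame):
- library(psych)
- cortest.bartlett(cor(x), n = nrow(x))   # chi-squared test against an identity matrix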
Example in R
If the following is run in R (the KMOS function is provided by the REdaS library):
- library(REdaS)   # provides KMOS()
- set.seed(5L)
- five.samples <- data.frame("A"=rnorm(100), "B"=rnorm(100), "C"=rnorm(100),
-                            "D"=rnorm(100), "E"=rnorm(100))
- cor(five.samples)
- KMOS(five.samples, use = "pairwise.complete.obs")
The following is produced.
- Kaiser-Meyer-Olkin Statistics
- Call: KMOS(x = five.samples, use = "pairwise.complete.obs")
- Measures of Sampling Adequacy (MSA):
- A B C D E
- 0.5173978 0.5563367 0.5240787 0.4796702 0.5416592
- KMO-Criterion: 0.5269849
With a KMO criterion of about 0.53, the data is not well suited to factor analysis, as expected for five independently generated random variables.[6]
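Reference [6] documents the KMO() function in the psych package, which reports the same overall statistic together with the per-item MSAs; assuming that package is installed, an equivalent call would be:
- library(psych)
- KMO(five.samples)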
References
- ↑ "KMO and Bartlett's Test". https://www.ibm.com/docs/en/spss-statistics/23.0.0?topic=detection-kmo-bartletts-test.
- ↑ Kaiser, Henry F. (1970). "A second generation little jiffy". Psychometrika 35 (4): 401–415. doi:10.1007/BF02291817.
- ↑ Kaiser, Henry F.; Rice, John (1974). "Little Jiffy, Mark IV". Educational and Psychological Measurement 34: 111–117. doi:10.1177/001316447403400115.
- ↑ Cureton, Edward E.; d'Agostino, Ralph B. (2013). Factor Analysis. doi:10.4324/9781315799476. ISBN 9781315799476.
- ↑ Dziuban, Charles D.; Shirkey, Edwin C. (1974). "When is a correlation matrix appropriate for factor analysis? Some decision rules". Psychological Bulletin 81 (6): 358–361. doi:10.1037/h0036316.
- ↑ "KMO function - RDocumentation". https://www.rdocumentation.org/packages/psych/versions/2.1.3/topics/KMO.