D'Agostino's K-squared test
In statistics, D'Agostino's K2 test, named for Ralph D'Agostino, is a goodness-of-fit measure of departure from normality; that is, the test aims to gauge the compatibility of given data with the null hypothesis that the data are a realization of independent, identically distributed Gaussian random variables. The test is based on transformations of the sample kurtosis and skewness, and has power only against alternatives under which the distribution is skewed and/or kurtic.
Skewness and kurtosis
In the following, { xi } denotes a sample of n observations, g1 and g2 are the sample skewness and kurtosis, mj’s are the j-th sample central moments, and [math]\displaystyle{ \bar{x} }[/math] is the sample mean. Frequently in the literature related to normality testing, the skewness and kurtosis are denoted as √β1 and β2 respectively. Such notation can be inconvenient since, for example, √β1 can be a negative quantity.
The sample skewness and kurtosis are defined as
- [math]\displaystyle{ \begin{align} & g_1 = \frac{ m_3 }{ m_2^{3/2} } = \frac{\frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^3}{\left( \frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^2 \right)^{3/2}}\ , \\ & g_2 = \frac{ m_4 }{ m_2^{2} }-3 = \frac{\frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^4}{\left( \frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^2 \right)^2} - 3\ . \end{align} }[/math]
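These definitions translate directly into code. The sketch below is a minimal illustration, assuming NumPy; the function name sample_skew_kurt is chosen here and is not part of the original article.

```python
# A minimal sketch of the definitions above; NumPy and the function name
# sample_skew_kurt are choices made for this example.
import numpy as np

def sample_skew_kurt(x):
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2 = np.mean(d ** 2)          # second central moment
    m3 = np.mean(d ** 3)          # third central moment
    m4 = np.mean(d ** 4)          # fourth central moment
    g1 = m3 / m2 ** 1.5           # sample skewness
    g2 = m4 / m2 ** 2 - 3.0       # sample excess kurtosis
    return g1, g2
```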
These quantities consistently estimate the theoretical skewness and kurtosis of the distribution, respectively. Moreover, if the sample indeed comes from a normal population, then the exact finite-sample distributions of the skewness and kurtosis can themselves be analysed in terms of their means μ1, variances μ2, skewnesses γ1, and kurtoses γ2. This was done by Pearson (1931), who derived the following expressions:
- [math]\displaystyle{ \begin{align} & \mu_1(g_1) = 0, \\ & \mu_2(g_1) = \frac{ 6(n-2) }{ (n+1)(n+3) }, \\ & \gamma_1(g_1) \equiv \frac{\mu_3(g_1)}{\mu_2(g_1)^{3/2}} = 0, \\ & \gamma_2(g_1) \equiv \frac{\mu_4(g_1)}{\mu_2(g_1)^{2}}-3 = \frac{ 36(n-7)(n^2+2n-5) }{ (n-2)(n+5)(n+7)(n+9) }. \end{align} }[/math]
and
- [math]\displaystyle{ \begin{align} & \mu_1(g_2) = - \frac{6}{n+1}, \\ & \mu_2(g_2) = \frac{ 24n(n-2)(n-3) }{ (n+1)^2(n+3)(n+5) }, \\ & \gamma_1(g_2) \equiv \frac{\mu_3(g_2)}{\mu_2(g_2)^{3/2}} = \frac{6(n^2-5n+2)}{(n+7)(n+9)} \sqrt{\frac{6(n+3)(n+5)}{n(n-2)(n-3)}}, \\ & \gamma_2(g_2) \equiv \frac{\mu_4(g_2)}{\mu_2(g_2)^{2}}-3 = \frac{ 36(15n^6-36n^5-628n^4+982n^3+5777n^2-6402n+900) }{ n(n-3)(n-2)(n+7)(n+9)(n+11)(n+13) }. \end{align} }[/math]
For example, a sample of size n = 1000 drawn from a normally distributed population can be expected to have a skewness of 0 (SD 0.08) and a kurtosis of 0 (SD 0.15), where SD denotes the standard deviation.
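As a rough check of these figures, the sketch below (illustrative helper names, standard library only) evaluates the moment expressions above for a given sample size n.

```python
# Illustrative helpers evaluating the null moments of g1 and g2 given above;
# the function names are chosen here for the example.
import math

def g1_moments(n):
    mu2 = 6.0 * (n - 2) / ((n + 1) * (n + 3))
    gamma2 = (36.0 * (n - 7) * (n**2 + 2*n - 5)
              / ((n - 2) * (n + 5) * (n + 7) * (n + 9)))
    return {"mean": 0.0, "var": mu2, "skew": 0.0, "exkurt": gamma2}

def g2_moments(n):
    mu1 = -6.0 / (n + 1)
    mu2 = 24.0 * n * (n - 2) * (n - 3) / ((n + 1)**2 * (n + 3) * (n + 5))
    gamma1 = (6.0 * (n**2 - 5*n + 2) / ((n + 7) * (n + 9))
              * math.sqrt(6.0 * (n + 3) * (n + 5) / (n * (n - 2) * (n - 3))))
    gamma2 = (36.0 * (15*n**6 - 36*n**5 - 628*n**4 + 982*n**3
                      + 5777*n**2 - 6402*n + 900)
              / (n * (n - 3) * (n - 2) * (n + 7) * (n + 9) * (n + 11) * (n + 13)))
    return {"mean": mu1, "var": mu2, "skew": gamma1, "exkurt": gamma2}

print(math.sqrt(g1_moments(1000)["var"]))   # ~0.077, SD of g1 at n = 1000
print(math.sqrt(g2_moments(1000)["var"]))   # ~0.154, SD of g2 at n = 1000
```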
Transformed sample skewness and kurtosis
The sample skewness g1 and kurtosis g2 are both asymptotically normal. However, the rate of their convergence to the normal limit is frustratingly slow, especially for g2. For example, even with n = 5000 observations the sample kurtosis g2 has both a skewness and a kurtosis of approximately 0.3, which is not negligible. To remedy this situation, it has been suggested to transform the quantities g1 and g2 in a way that makes their distributions as close to standard normal as possible.
In particular, D'Agostino & Pearson (1973) suggested the following transformation for sample skewness:
- [math]\displaystyle{ Z_1(g_1) = \delta \operatorname{asinh}\left( \frac{g_1}{\alpha\sqrt{\mu_2}} \right), }[/math]
where constants α and δ are computed as
- [math]\displaystyle{ \begin{align} & W^2 = \sqrt{2\gamma_2 + 4} - 1, \\ & \delta = 1 / \sqrt{\ln W}, \\ & \alpha^2 = 2 / (W^2-1), \end{align} }[/math]
and where μ2 = μ2(g1) is the variance of g1, and γ2 = γ2(g1) is the kurtosis — the expressions given in the previous section.
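A minimal sketch of this transformation follows; the function name z1 is chosen here, and the code assumes n ≥ 8 so that γ2(g1) is positive.

```python
# Sketch of Z1(g1); the formulas follow the expressions above and are valid
# for n >= 8 (so that gamma2(g1) > 0 and ln W > 0). The name z1 is chosen here.
import math

def z1(g1, n):
    mu2 = 6.0 * (n - 2) / ((n + 1) * (n + 3))                 # variance of g1
    gamma2 = (36.0 * (n - 7) * (n**2 + 2*n - 5)
              / ((n - 2) * (n + 5) * (n + 7) * (n + 9)))      # excess kurtosis of g1
    W2 = math.sqrt(2.0 * gamma2 + 4.0) - 1.0
    delta = 1.0 / math.sqrt(math.log(math.sqrt(W2)))          # 1 / sqrt(ln W)
    alpha = math.sqrt(2.0 / (W2 - 1.0))
    return delta * math.asinh(g1 / (alpha * math.sqrt(mu2)))
```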
Similarly, Anscombe & Glynn (1983) suggested a transformation for g2, which works reasonably well for sample sizes of 20 or greater:
- [math]\displaystyle{ Z_2(g_2) = \sqrt{\frac{9A}{2}} \left\{1 - \frac{2}{9A} - \left(\frac{ 1-2/A }{ 1+\frac{g_2-\mu_1}{\sqrt{\mu_2}}\sqrt{2/(A-4)} }\right)^{\!1/3}\right\}, }[/math]
where
- [math]\displaystyle{ A = 6 + \frac{8}{\gamma_1} \left( \frac{2}{\gamma_1} + \sqrt{1+4/\gamma_1^2}\right), }[/math]
and μ1 = μ1(g2), μ2 = μ2(g2), γ1 = γ1(g2) are the quantities computed by Pearson.
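A corresponding sketch for the kurtosis transformation, again with names chosen here and intended for n ≥ 20 as noted above:

```python
# Sketch of Z2(g2) using the null moments of g2 given earlier; the name z2
# and the use of a real (signed) cube root are choices made here.
import math

def z2(g2, n):
    mu1 = -6.0 / (n + 1)                                                   # mean of g2
    mu2 = 24.0 * n * (n - 2) * (n - 3) / ((n + 1)**2 * (n + 3) * (n + 5))  # variance of g2
    gamma1 = (6.0 * (n**2 - 5*n + 2) / ((n + 7) * (n + 9))
              * math.sqrt(6.0 * (n + 3) * (n + 5) / (n * (n - 2) * (n - 3))))  # skewness of g2
    A = 6.0 + (8.0 / gamma1) * (2.0 / gamma1 + math.sqrt(1.0 + 4.0 / gamma1**2))
    x = (g2 - mu1) / math.sqrt(mu2)                            # standardized kurtosis
    ratio = (1.0 - 2.0 / A) / (1.0 + x * math.sqrt(2.0 / (A - 4.0)))
    cbrt = math.copysign(abs(ratio) ** (1.0 / 3.0), ratio)     # real cube root
    return math.sqrt(9.0 * A / 2.0) * (1.0 - 2.0 / (9.0 * A) - cbrt)
```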
Omnibus K2 statistic
Statistics Z1 and Z2 can be combined to produce an omnibus test, able to detect deviations from normality due to either skewness or kurtosis (D'Agostino, Belanger & D'Agostino 1990):
- [math]\displaystyle{ K^2 = Z_1(g_1)^2 + Z_2(g_2)^2\, }[/math]
If the null hypothesis of normality is true, then K2 is approximately χ2-distributed with 2 degrees of freedom.
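Putting the pieces together, the sketch below assumes the sample_skew_kurt, z1, and z2 helpers defined in the earlier snippets are in scope; SciPy's scipy.stats.normaltest implements the same D'Agostino–Pearson test and is shown for comparison.

```python
# A sketch of the omnibus test; sample_skew_kurt, z1 and z2 are the helpers
# defined in the earlier snippets, not a published API.
import numpy as np
from scipy import stats

def dagostino_k2(x):
    g1, g2 = sample_skew_kurt(x)
    k2 = z1(g1, len(x)) ** 2 + z2(g2, len(x)) ** 2
    p = stats.chi2.sf(k2, df=2)        # upper tail of the chi-squared(2) distribution
    return k2, p

x = np.random.default_rng(0).normal(size=500)
print(dagostino_k2(x))
print(stats.normaltest(x))             # SciPy's implementation of the same test
```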
Note that the statistics g1 and g2 are not independent, only uncorrelated. Therefore, their transforms Z1 and Z2 will also be dependent (Shenton & Bowman 1977), rendering the validity of the χ2 approximation questionable. Simulations show that under the null hypothesis the K2 test statistic is characterized by the following:
| | expected value | standard deviation | 95% quantile |
|---|---|---|---|
| n = 20 | 1.971 | 2.339 | 6.373 |
| n = 50 | 2.017 | 2.308 | 6.339 |
| n = 100 | 2.026 | 2.267 | 6.271 |
| n = 250 | 2.012 | 2.174 | 6.129 |
| n = 500 | 2.009 | 2.113 | 6.063 |
| n = 1000 | 2.000 | 2.062 | 6.038 |
| χ2(2) distribution | 2.000 | 2.000 | 5.991 |
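Numbers like those in the table can be approximated by simulation. The sketch below is illustrative (it relies on scipy.stats.normaltest for the K2 statistic, and the helper name and replication count are choices made here); it estimates the null mean, standard deviation, and 95% quantile of K2 for a chosen n.

```python
# Monte Carlo estimate of the null-distribution summary of K^2.
import numpy as np
from scipy import stats

def k2_null_summary(n, reps=20000, seed=0):
    rng = np.random.default_rng(seed)
    k2 = np.array([stats.normaltest(rng.normal(size=n)).statistic
                   for _ in range(reps)])
    return k2.mean(), k2.std(ddof=1), np.quantile(k2, 0.95)

print(k2_null_summary(100))   # should land near the n = 100 row of the table
```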
References
- Anscombe, F.J.; Glynn, William J. (1983). "Distribution of the kurtosis statistic b2 for normal statistics". Biometrika 70 (1): 227–234. doi:10.1093/biomet/70.1.227.
- D'Agostino, Ralph B. (1970). "Transformation to normality of the null distribution of g1". Biometrika 57 (3): 679–681. doi:10.1093/biomet/57.3.679.
- D'Agostino, Ralph B.; Pearson, E. S. (1973). "Tests for Departure from Normality. Empirical Results for the Distributions of b2 and √b1". Biometrika 60 (3): 613–622.
- D'Agostino, Ralph B.; Belanger, Albert; D'Agostino, Ralph B., Jr. (1990). "A suggestion for using powerful and informative tests of normality". The American Statistician 44 (4): 316–321. doi:10.2307/2684359. Archived from the original on 2012-03-25. https://web.archive.org/web/20120325140006/http://www.cee.mtu.edu/~vgriffis/CE%205620%20materials/CE5620%20Reading/DAgostino%20et%20al%20-%20normaility%20tests.pdf.
- "Note on tests for normality". Biometrika 22 (3/4): 423–424. 1931. doi:10.1093/biomet/22.3-4.423.
- Shenton, L. R.; Bowman, K. O. (1977). "A bivariate model for the distribution of √b1 and b2". Journal of the American Statistical Association 72 (357): 206–211. doi:10.1080/01621459.1977.10479940.
Original source: https://en.wikipedia.org/wiki/D'Agostino's K-squared test.