Nonparametric skew



In statistics and probability theory, the nonparametric skew is a statistic occasionally used with random variables that take real values.[1][2] It is a measure of the skewness of a random variable's distribution—that is, the distribution's tendency to "lean" to one side or the other of the mean. Its calculation does not require any knowledge of the form of the underlying distribution—hence the name nonparametric. It has some desirable properties: it is zero for any symmetric distribution; it is unaffected by shifts of location and changes of scale; and it reveals either left- or right-skewness equally well. In some statistical samples it has been shown to be less powerful[3] than the usual measures of skewness in detecting departures of the population from normality.[4]

Properties

Definition

The nonparametric skew is defined as

[math]\displaystyle{ S = \frac{ \mu - \nu } { \sigma } }[/math]

where the mean (µ), median (ν) and standard deviation (σ) of the population have their usual meanings.
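
As an illustration, a minimal Python sketch of the sample analogue of S, using the sample mean, sample median and sample standard deviation, might look as follows (the function name nonparametric_skew is an arbitrary choice):

import numpy as np

def nonparametric_skew(x, ddof=1):
    # Sample analogue of S = (mean - median) / standard deviation.
    x = np.asarray(x, dtype=float)
    return (x.mean() - np.median(x)) / x.std(ddof=ddof)

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=100_000)
print(nonparametric_skew(sample))   # close to 1 - ln(2) ≈ 0.307 for the exponential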

Properties

The nonparametric skew is one third of Pearson's second skewness coefficient and lies between −1 and +1 for any distribution.[5][6] This range is implied by the fact that the mean lies within one standard deviation of any median.[7]

Under an affine transformation of the variable (X), the value of S does not change except for a possible change in sign. In symbols

[math]\displaystyle{ S( aX + b ) = \operatorname{sign} (a)\, S(X) }[/math]

where a ≠ 0 and b are constants and S( X ) is the nonparametric skew of the variable X.
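
This follows directly from how the three ingredients transform: the mean and median of aX + b are aµ + b and aν + b, while its standard deviation is |a|σ, so that

[math]\displaystyle{ S( aX + b ) = \frac{ ( a \mu + b ) - ( a \nu + b ) }{ | a | \sigma } = \frac{ a }{ | a | } \cdot \frac{ \mu - \nu }{ \sigma } = \operatorname{sign}( a )\, S( X ) . }[/math]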

Sharper bounds

The bounds of this statistic ( ±1 ) were sharpened by Majindar[8] who showed that its absolute value is bounded by

[math]\displaystyle{ \frac{ 2 ( p q )^{ 1 / 2 } } { ( p + q )^{ 1 / 2 } } }[/math]

with

[math]\displaystyle{ p = \Pr( X \gt \operatorname{ E }( X ) ) }[/math]

and

[math]\displaystyle{ q = \Pr( X \lt \operatorname{ E }( X ) ) , }[/math]

where X is a random variable with finite variance, E() is the expectation operator and Pr() is the probability of the event occurring.

When p = q = 0.5 the absolute value of this statistic is bounded by 1. With p = 0.1 and q = 0.9, and with p = 0.01 and q = 0.99, the statistic's absolute value is bounded by 0.6 and 0.199 respectively.
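
For example, with p = 0.1 and q = 0.9 the bound evaluates to

[math]\displaystyle{ \frac{ 2 ( 0.1 \times 0.9 )^{ 1 / 2 } }{ ( 0.1 + 0.9 )^{ 1 / 2 } } = 2 \sqrt{ 0.09 } = 0.6 , }[/math]

and with p = 0.01 and q = 0.99 it evaluates to 2√0.0099 ≈ 0.199.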

Extensions

It is also known that[9]

[math]\displaystyle{ | \mu - \nu_0 | \le \operatorname{E} ( | X - \nu_0 | ) \le \operatorname{ E } ( | X - \mu | ) \le \sigma , }[/math]

where ν0 is any median and E(.) is the expectation operator.

It has been shown that

[math]\displaystyle{ \frac{ | \mu - x_q | }{ \sigma } \le \max\left( \sqrt{\frac{( 1 - q ) } { q }}, \sqrt{\frac{ q } { ( 1 - q ) } } \right) }[/math]

where xq is the qth quantile.[7] Quantiles lie between 0 and 1: the median (the 0.5 quantile) has q = 0.5. This inequality has also been used to define a measure of skewness.[10]

This latter inequality has been sharpened further.[11]

[math]\displaystyle{ \mu -\sigma \sqrt{ \frac{ 1-q }{ q } } \le x_q \le \mu + \sigma \sqrt{ \frac{ q }{1- q } } }[/math]

Another extension for a distribution with a finite mean has been published:[12]

[math]\displaystyle{ \mu - \frac{ 1 }{ 2q } \operatorname{ E }| X - \mu | \le x_q \le \mu + \frac{ 1 }{ ( 2 - 2q ) } \operatorname{ E }| X - \mu | }[/math]

The bounds in this last pair of inequalities are attained when [math]\displaystyle{ \Pr (X=a) = q }[/math] and [math]\displaystyle{ \Pr (X=b) = 1-q }[/math] for fixed numbers a < b.
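
As a numerical illustration, the following sketch (assuming scipy is available) checks the last pair of inequalities for the unit exponential distribution at q = 0.25, for which E|X − µ| = 2/e:

import numpy as np
from scipy import stats

dist = stats.expon()              # unit exponential: mu = 1, sigma = 1
q = 0.25
x_q = dist.ppf(q)                 # the q-th quantile, -ln(0.75) ≈ 0.288
mu = dist.mean()
mad = 2 * np.exp(-1.0)            # E|X - mu| = 2/e for the unit exponential
lower = mu - mad / (2 * q)
upper = mu + mad / (2 - 2 * q)
print(lower <= x_q <= upper)      # True: -0.47 <= 0.288 <= 1.49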

Finite samples

For a finite sample of size n ≥ 2, with xr the rth order statistic, m the sample mean and s the sample standard deviation corrected for degrees of freedom,[13]

[math]\displaystyle{ \frac{ | m - x_r | }{ s } \le \text{max}\left[ \sqrt{ \frac{ ( n - 1 )( r - 1 ) } { n ( n - r + 1 ) } } , \sqrt{ \frac{ ( n - 1 )( n - r ) }{ nr } } \right] }[/math]

Replacing r with n / 2 gives the result appropriate for the sample median:[14]

[math]\displaystyle{ \frac{ | m - a | }{ s } \le \sqrt{ \frac{ n^2 - n }{ n^2 } } = \sqrt{ \frac{ n - 1 }{ n } } }[/math]

where a is the sample median.
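
A quick numerical check of this bound (an illustrative sketch with an arbitrary small sample) in Python:

import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=11)   # a small, right-skewed sample
n = x.size
m, a, s = x.mean(), np.median(x), x.std(ddof=1)
print(abs(m - a) / s, np.sqrt((n - 1) / n))       # the left value never exceeds the right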

Statistical tests

Hotelling and Solomons considered the distribution of the test statistic[5]

[math]\displaystyle{ D = \frac{ n ( m - a ) }{ s } }[/math]

where n is the sample size, m is the sample mean, a is the sample median and s is the sample's standard deviation.

Statistical tests based on D have taken as their null hypothesis that the distribution is symmetric.

Gastwirth estimated the asymptotic variance of D/√n.[15] If the distribution is unimodal and symmetric about 0, the asymptotic variance lies between 1/4 and 1. Assuming a conservative estimate (putting the variance equal to 1) can lead to a true level of significance well below the nominal level.

Assuming that the underlying distribution is symmetric, Cabilio and Masaro have shown that the distribution of S is asymptotically normal.[16] The asymptotic variance depends on the underlying distribution: for the normal distribution, the asymptotic variance of √n S is 0.5708...
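
A small Monte Carlo sketch (illustrative only; the sample size and replication count are arbitrary choices) that reproduces the 0.5708... figure, which agrees with π/2 − 1:

import numpy as np

rng = np.random.default_rng(7)
n, reps = 400, 10_000
x = rng.standard_normal((reps, n))
stat = np.sqrt(n) * (x.mean(axis=1) - np.median(x, axis=1)) / x.std(axis=1, ddof=1)
print(stat.var(), np.pi / 2 - 1)   # both roughly 0.5708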

Assuming that the underlying distribution is symmetric, Zheng and Gastwirth have argued, by considering the distribution of values above and below the median, that[17]

[math]\displaystyle{ \sqrt{ 2n } \left( \frac{ m - a }{ s } \right) }[/math]

where n is the sample size, is distributed as a t distribution.

Related statistics

Antonietta Mira studied the distribution of the difference between the mean and the median.[18]

[math]\displaystyle{ \gamma_1 = 2 ( m - a ) , }[/math]

where m is the sample mean and a is the median. If the underlying distribution is symmetric, γ1 itself is asymptotically normal. This statistic had been earlier suggested by Bonferroni.[19]

Assuming a symmetric underlying distribution, a modification of S was studied by Miao, Gel and Gastwirth, who replaced the standard deviation with the scale estimate[20]

[math]\displaystyle{ J = \frac{ 1 }{ n } \sqrt { \frac{ \pi }{ 2 } } \sum{ | X_i -a | } }[/math]

where Xi are the sample values, | · | denotes the absolute value and the sum is taken over all n sample values.

The test statistic was

[math]\displaystyle{ T = \frac{ m - a } { J } . }[/math]

The scaled statistic √n T is asymptotically normal with a mean of zero for a symmetric distribution. Its asymptotic variance depends on the underlying distribution: the limiting values are, for the normal distribution, var(√n T) = 0.5708... and, for the t distribution with three degrees of freedom, var(√n T) = 0.9689...[20]
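
A sketch of how J and T might be computed from a sample (an illustration, not code from the cited paper):

import numpy as np

def mgg_statistic(x):
    # T = (m - a) / J with J = sqrt(pi/2) * (1/n) * sum |X_i - a|, where a is the sample median
    x = np.asarray(x, dtype=float)
    a = np.median(x)
    J = np.sqrt(np.pi / 2) * np.abs(x - a).mean()
    return (x.mean() - a) / J

rng = np.random.default_rng(3)
print(mgg_statistic(rng.standard_normal(1000)))   # near 0 for a symmetric distribution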

Values for individual distributions

Symmetric distributions

For symmetric probability distributions the value of the nonparametric skew is 0.

Asymmetric distributions

It is positive for right skewed distributions and negative for left skewed distributions. Absolute values ≥ 0.2 indicate marked skewness.

It may be difficult to determine S for some distributions. This is usually because a closed form for the median is not known: examples of such distributions include the gamma distribution, inverse-chi-squared distribution, the inverse-gamma distribution and the scaled inverse chi-squared distribution.

The following values for S are known:

  • Beta distribution: for 1 < α < β, where α and β are the parameters of the distribution, to a good approximation[21]
[math]\displaystyle{ S = \frac{ 1 }{ 3 }\frac{ ( \alpha - 2 \beta ) ( \alpha + \beta + 1 )^{ 1 / 2 } }{ ( \alpha + \beta - 2 / 3 ) ( \alpha \beta )^{ 1 / 2 } } }[/math]
If 1 < β < α then the positions of α and β are reversed in the formula. S is always < 0.
[math]\displaystyle{ S = \frac{ 2 }{ \beta^2 ( 4 + 5 \alpha^2 ) } }[/math]
where α is the shape parameter and β is the location parameter.
[math]\displaystyle{ \frac{ -4 }{ 3 } \le S \le \frac{ 4 }{ 3 } }[/math]
  • Chi square distribution: Although S ≥ 0, its value depends on the number of degrees of freedom (k); a numerical check appears after this list.
[math]\displaystyle{ S \approx \sqrt{ \frac{ k }{ 2 } } \left( 1 - \left( 1 - \frac{ 2 }{ 9k } \right)^3 \right) }[/math]
[math]\displaystyle{ S = 1 - \log_e( 2 ) \approx 0.31 }[/math]
[math]\displaystyle{ S = 1 - \log_e( 2 ) \approx 0.31 }[/math]
[math]\displaystyle{ S = - \frac{ polylog( 2, 1 - p ) + \ln( 1 + \sqrt{ p } ) \ln p }{ \sqrt{ -[2 polylog( 3, 1 - p ) + polylog^2( 2, 1 -p ) ] } } }[/math]
Here S is always > 0.
[math]\displaystyle{ 0 \le S \le 1 - \log_e( 2 ) }[/math]
[math]\displaystyle{ S = n^{ -3 / 2 } \sqrt{ \frac{ n - 4 }{ n - 2 } } + O( n^{ -5 / 2 } ) }[/math]
[math]\displaystyle{ S = \frac{ \Gamma \left( 1 - \frac{ 1 }{ \alpha } \right) - \frac{ 1 }{ \sqrt[ \alpha ]{ \log_e( 2 ) } } }{ \sqrt { \Gamma \left( 1 - \frac{ 2 }{ \alpha } \right ) - \left( \Gamma \left( 1 - \frac{ 1 }{ \alpha } \right ) \right )^2 } } }[/math]
  • Gamma distribution: The median can only be determined approximately for this distribution.[26] If the shape parameter α is ≥ 1 then
[math]\displaystyle{ S \approx \frac{ \beta }{ 3 \alpha + 0.2 } }[/math]
where β > 0 is the rate parameter. Here S is always > 0.
[math]\displaystyle{ S = - \frac{ \exp( \frac {-k^2 } { 2 } ) - 1 }{ \sqrt{ \exp( \frac{ k^2 }{ 2 } ) - 1 } } }[/math]
S is always < 0.
[math]\displaystyle{ S = \left( \frac{ 2^k - 1 }{ k } - 2^k \right)( 1 - 2k )^{ 0.5 } }[/math]
[math]\displaystyle{ \frac{ \sqrt{ 6 } [ \gamma + \log_e( \log_e( 2 ) ) ] }{ \pi } \approx 0.1643 }[/math]
where γ is Euler's constant.[27]
[math]\displaystyle{ S \approx \frac{ \sqrt{ 2 } - 0.6745 \sqrt{ \pi } }{ \sqrt{ \pi - 2 } } \approx 0.36279 }[/math]
[math]\displaystyle{ S = \frac{ b - \sin ( b ) }{ \sqrt{ b \tan ( b ) - b^2 } } }[/math]
The standard deviation does not exist for values of b > 4.932 (approximately). For values for which the standard deviation is defined, S is > 0.
[math]\displaystyle{ S = \frac{ 1 }{ ( e^{ \frac{ \sigma^2 }{ 2 } } + 1 ) ( e^{ \mu + \sigma^2 } ) } }[/math]
  • Log-Weibull distribution:[24]
[math]\displaystyle{ S \approx \frac{ [ \log_e( \log_e( 2 ) ) - 0.5772 ] \sqrt{ 6 } }{ \pi } \approx -0.1643 }[/math]
[math]\displaystyle{ S = \frac{ ( \alpha - 1 )( \alpha - 2 )( 1 - ( \alpha - 1 )( 2^{ 1 / \alpha } - 1 ) ) }{ \alpha^{ 1/2 } } }[/math]
[math]\displaystyle{ S \approx \frac{ \sqrt{ 2 } - 1.5382 \Gamma( \frac{ 3 }{ 2 } ) }{ \sqrt{ 2 ( \Gamma( \frac{ 5 }{ 2 } ) - \Gamma( \frac{ 3 }{ 2 } ) ) } } \approx 0.0854 }[/math]
[math]\displaystyle{ S = -1 }[/math]
[math]\displaystyle{ S = ( \alpha - 2^{ 1 / \alpha }[ \alpha - 1 ] ) ( \frac{ \alpha - 2 }{ \alpha } )^{ 1 / 2 } , }[/math]
and S is always > 0.
[math]\displaystyle{ \frac{ -\log_e(2)} { \lambda^\frac{ 1 }{ 2 } } \le S \le \frac{ 1 }{ 3 \lambda^\frac{ 1 }{ 2 } } }[/math]
where λ is the parameter of the distribution.[28]
[math]\displaystyle{ S = \sqrt { \frac{ 2 }{ 4 - \pi } } [ ( \frac{ \pi }{ 2 } )^{ 0.5 } - \log_e( 4 ) ] \approx 0.1251 }[/math]
[math]\displaystyle{ S = \frac{ \Gamma( 1 + 1 / k ) - \log_e( 2 )^{ 1 / k } }{ ( \Gamma( 1 + 2 / k ) - \Gamma( 1 + 1 / k ) )^{ 1 / 2 } }, }[/math]
where k is the shape parameter of the distribution. Here S is always > 0.
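
As a partial numerical check of some of the values above (an illustrative sketch assuming scipy is available; the chi-square line compares the exact value with the approximation quoted in the list, and the 0.1643 entry coincides with the nonparametric skew of the standard Gumbel distribution):

import numpy as np
from scipy import stats

def S(dist):
    # Nonparametric skew of a scipy frozen distribution.
    return (dist.mean() - dist.median()) / dist.std()

print(S(stats.expon()), 1 - np.log(2))        # exponential: both ≈ 0.3069
k = 10
print(S(stats.chi2(k)), np.sqrt(k / 2) * (1 - (1 - 2 / (9 * k)) ** 3))   # exact ≈ 0.147 vs approximation ≈ 0.146
print(S(stats.gumbel_r()))                    # ≈ 0.1643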

History

In 1895 Pearson first suggested measuring skewness by standardizing the difference between the mean and the mode,[29] giving

[math]\displaystyle{ \frac{ \mu - \theta } { \sigma } , }[/math]

where μ, θ and σ are the mean, mode and standard deviation of the distribution respectively. Estimating the population mode from sample data can be difficult, but for many distributions the difference between the mean and the mode is approximately three times the difference between the mean and the median,[30] which suggested to Pearson a second skewness coefficient:

[math]\displaystyle{ \frac{ 3 ( \mu - \nu ) } { \sigma } , }[/math]

where ν is the median of the distribution. Bowley dropped the factor 3 from this formula in 1901 leading to the nonparametric skew statistic.

The relationship between the median, the mean and the mode was first noted by Pearson when he was investigating his type III distributions.

Relationships between the mean, median and mode

For an arbitrary distribution the mode, median and mean may appear in any order.[31][32][33]

Analyses have been made of some of the relationships between the mean, median, mode and standard deviation,[34] and these relationships place some restrictions on the sign and magnitude of the nonparametric skew.

A simple example illustrating these relationships is the binomial distribution with n = 10 and p = 0.09.[35] This distribution when plotted has a long right tail. The mean (0.9) is to the left of the median (1) but the skew (0.906) as defined by the third standardized moment is positive. In contrast the nonparametric skew is -0.110.
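
The numbers in this example can be reproduced with scipy (an illustrative sketch):

import numpy as np
from scipy import stats

dist = stats.binom(10, 0.09)
mean, var, skew = dist.stats(moments='mvs')
print(dist.median())                                 # 1.0
print(float(skew))                                   # ≈ 0.906 (third standardized moment)
print(float((mean - dist.median()) / np.sqrt(var)))  # ≈ -0.110 (nonparametric skew)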

Pearson's rule

The rule that for some distributions the difference between the mean and the mode is three times that between the mean and the median is due to Pearson who discovered it while investigating his Type 3 distributions. It is often applied to slightly asymmetric distributions that resemble a normal distribution but it is not always true.

In 1895 Pearson noted that, for what is now known as the gamma distribution, the relation[29]

[math]\displaystyle{ \nu - \theta = 2( \mu - \nu ) }[/math]

where θ, ν and µ are the mode, median and mean of the distribution respectively, was approximately true for distributions with a large shape parameter.
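
A quick numerical check of this relation for a gamma distribution with a moderately large shape parameter (an illustrative sketch using scipy; the shape value 20 is an arbitrary choice):

from scipy import stats

a = 20                                    # shape parameter
dist = stats.gamma(a)
mode = a - 1                              # mode of the gamma distribution for a >= 1
print(dist.median() - mode)               # ≈ 0.668
print(2 * (dist.mean() - dist.median()))  # ≈ 0.665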

Doodson in 1917 proved that the median lies between the mode and the mean for moderately skewed distributions with finite fourth moments.[36] This relationship holds for all the Pearson distributions and all of these distributions have a positive nonparametric skew.

Doodson also noted that for this family of distributions to a good approximation,

[math]\displaystyle{ \theta = 3 \nu - 2 \mu , }[/math]

where θ, ν and µ are the mode, median and mean of the distribution respectively. Doodson's approximation was further investigated and confirmed by Haldane.[37] Haldane noted that samples of independent and identically distributed variates with a third cumulant had sample means that obeyed Pearson's relationship for large sample sizes. Haldane required a number of conditions for this relationship to hold, including the existence of an Edgeworth expansion and the uniqueness of both the median and the mode. Under these conditions he found that the mode and the median converged to 1/2 and 1/6 of the third moment respectively. This result was confirmed by Hall under weaker conditions using characteristic functions.[38]

Kendall and Stuart studied Doodson's relationship for the log-normal distribution, for which they found an exact relationship close to it.[39]

Hall also showed that, for a distribution with regularly varying tails and exponent α,[38]

[math]\displaystyle{ \mu - \theta = \alpha ( \mu - \nu ) }[/math]

Unimodal distributions

Gauss showed in 1823 that for a unimodal distribution[40]

[math]\displaystyle{ \sigma \le \omega \le 2 \sigma }[/math]

and

[math]\displaystyle{ | \nu - \mu | \le \sqrt{ \frac{ 3 }{ 4 } } \omega , }[/math]

where ω is the root mean square deviation from the mode.
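
For example, for the unit exponential distribution the mode is 0, so ω = √E(X²) = √2, while σ = 1 and |ν − µ| = 1 − log_e(2) ≈ 0.307; both bounds are comfortably satisfied:

[math]\displaystyle{ 1 \le \sqrt{ 2 } \le 2 , \qquad 0.307 \le \sqrt{ \tfrac{ 3 }{ 4 } } \sqrt{ 2 } \approx 1.22 . }[/math]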

For a large class of unimodal distributions that are positively skewed the mode, median and mean fall in that order.[41] Conversely, for a large class of unimodal distributions that are negatively skewed, the mean is less than the median, which in turn is less than the mode. In symbols, for these positively skewed unimodal distributions

[math]\displaystyle{ \theta \le \nu \le \mu }[/math]

and for these negatively skewed unimodal distributions

[math]\displaystyle{ \mu \le \nu \le \theta }[/math]

This class includes the important F, beta and gamma distributions.

This rule does not hold for the unimodal Weibull distribution.[42]

For a unimodal distribution the following bounds are known and are sharp:[43]

[math]\displaystyle{ \frac{| \theta - \mu |}{ \sigma } \le \sqrt{ 3 } , }[/math]
[math]\displaystyle{ \frac{| \nu - \mu |}{ \sigma } \le \sqrt{ 0.6 } , }[/math]
[math]\displaystyle{ \frac{| \theta - \nu |}{ \sigma } \le \sqrt{ 3 } , }[/math]

where μ,ν and θ are the mean, median and mode respectively.

The middle bound limits the nonparametric skew of a unimodal distribution to approximately ±0.775.

van Zwet condition

The following inequality,

[math]\displaystyle{ \theta \le \nu \le \mu , }[/math]

where θ, ν and µ are the mode, median and mean of the distribution respectively, holds if

[math]\displaystyle{ F( \nu - x ) + F( \nu + x ) \le 1 \text{ for all } x, }[/math]

where F is the cumulative distribution function of the distribution.[44] These conditions have since been generalised[33] and extended to discrete distributions.[45] Any distribution for which this holds has either a zero or a positive nonparametric skew.
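
As an illustration, the unit exponential distribution satisfies the condition as stated above, and its mode (0), median (log_e 2) and mean (1) are indeed in the stated order; a quick numerical sketch using scipy:

import numpy as np
from scipy import stats

dist = stats.expon()
nu = dist.median()                                                 # log(2)
x = np.linspace(0.0, 10.0, 1001)
print(np.all(dist.cdf(nu - x) + dist.cdf(nu + x) <= 1 + 1e-12))    # True
print(0 <= nu <= dist.mean())                                      # mode <= median <= mean: True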

Notes

Ordering of skewness

In 1964 van Zwet proposed a series of axioms for ordering measures of skewness.[46] The nonparametric skew does not satisfy these axioms.

Benford's law

Benford's law is an empirical law concerning the distribution of digits in a list of numbers. It has been suggested that random variates from distributions with a positive nonparametric skew will obey this law.[47]

Relation to Bowley's coefficient

This statistic is very similar to Bowley's coefficient of skewness[48]

[math]\displaystyle{ SK_2 = \frac{ Q_3 + Q_1 - 2 Q_2 }{ Q_3 - Q_1 } }[/math]

where Qi is the ith quartile of the distribution.

Hinkley generalised this[49]

[math]\displaystyle{ SK = \frac{ F^{-1}( 1 - \alpha ) + F^{-1}( \alpha ) - 2 Q_2 }{ F^{-1}( 1 - \alpha ) - F^{-1}( \alpha ) } }[/math]

where [math]\displaystyle{ \alpha }[/math] lies between 0 and 0.5. Bowley's coefficient is a special case with [math]\displaystyle{ \alpha }[/math] equal to 0.25.

Groeneveld and Meeden[50] removed the dependence on [math]\displaystyle{ \alpha }[/math] by integrating over it.

[math]\displaystyle{ SK_3 = \frac{ \mu - Q_2 }{ \operatorname{E} | X - Q_2 | } }[/math]

The denominator is a measure of dispersion. Replacing the denominator with the standard deviation yields the nonparametric skew.
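
For comparison, a short sketch (illustrative only) computing Bowley's coefficient, the Groeneveld–Meeden measure and the nonparametric skew from the same right-skewed sample:

import numpy as np

rng = np.random.default_rng(11)
x = rng.exponential(size=100_000)
q1, q2, q3 = np.quantile(x, [0.25, 0.5, 0.75])
bowley = (q3 + q1 - 2 * q2) / (q3 - q1)
gm = (x.mean() - q2) / np.abs(x - q2).mean()   # Groeneveld–Meeden measure
s = (x.mean() - q2) / x.std(ddof=1)            # nonparametric skew
print(bowley, gm, s)                           # roughly 0.26, 0.44 and 0.31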

References

  1. Arnold BC, Groeneveld RA (1995) Measuring skewness with respect to the mode. The American Statistician 49 (1) 34–38 DOI:10.1080/00031305.1995.10476109
  2. Rubio F.J.; Steel M.F.J. (2012) "On the Marshall–Olkin transformation as a skewing mechanism". Computational Statistics & Data Analysis Preprint
  3. Tabor J (2010) Investigating the Investigative Task: Testing for skewness - An investigation of different test statistics and their power to detect skewness. J Stat Ed 18: 1–13
  4. Doane, David P.; Seward, Lori E. (2011). "Measuring Skewness: A Forgotten Statistic?". Journal of Statistics Education 19 (2). http://www.amstat.org/publications/jse/v19n2/doane.pdf. 
  5. 5.0 5.1 Hotelling H, Solomons LM (1932) The limits of a measure of skewness. Annals Math Stat 3, 141–142
  6. Garver (1932) Concerning the limits of a measure of skewness. Ann Math Stats 3(4) 141–142
  7. 7.0 7.1 O’Cinneide CA (1990) The mean is within one standard deviation of any median. Amer Statist 44, 292–293
  8. Majindar KN (1962) "Improved bounds on a measure of skewness". Annals of Mathematical Statistics, 33, 1192–1194 doi:10.1214/aoms/1177704482
  9. Mallows CL, Richter D (1969) "Inequalities of Chebyshev type involving conditional expectations". Annals of Mathematical Statistics, 40:1922–1932
  10. Dziubinska R, Szynal D (1996) On functional measures of skewness. Applicationes Mathematicae 23(4) 395–403
  11. Dharmadhikari SS (1991) Bounds on quantiles: a comment on O'Cinneide. The Am Statist 45: 257-58
  12. Gilat D, Hill TP(1993) Quantile-locating functions and the distance between the mean and quantiles. Statistica Neerlandica 47 (4) 279–283 DOI: 10.1111/j.1467-9574.1993.tb01424.x [1]
  13. David HA (1991) Mean minus median: A comment on O'Cinneide. The Am Statist 45: 257
  14. Joarder AH, Laradji A (2004) Some inequalities in descriptive statistics. Technical Report Series TR 321
  15. Gastwirth JL (1971) "On the sign test for symmetry". Journal of the American Statistical Association 66:821–823
  16. Cabilio P, Masaro J (1996) "A simple test of symmetry about an unknown median". Canadian Journal of Statistics-Revue Canadienne De Statistique, 24:349–361
  17. Zheng T, Gastwirth J (2010) "On bootstrap tests of symmetry about an unknown median". Journal of Data Science, 8(3): 413–427
  18. Mira A (1999) "Distribution-free test for symmetry based on Bonferroni’s measure", Journal of Applied Statistics, 26:959–972
  19. Bonferroni CE (1930) Elementi di statistica generale. Seeber, Firenze
  20. 20.0 20.1 Miao W, Gel YR, Gastwirth JL (2006) "A new test of symmetry about an unknown median". In: Hsiung A, Zhang C-H, Ying Z, eds. Random Walk, Sequential Analysis and Related Topics — A Festschrift in honor of Yuan-Shih Chow. World Scientific; Singapore
  21. Kerman J (2011) "A closed-form approximation for the median of the beta distribution". arXiv:1111.0433v1
  22. Kaas R, Buhrman JM (1980) Mean, median and mode in binomial distributions. Statistica Neerlandica 34 (1) 13–18
  23. Hamza K (1995) "The smallest uniform upper bound on the distance between the mean and the median of the binomial and Poisson distributions". Statistics and Probability Letters, 23 (1) 21–25
  24. 24.0 24.1 24.2 24.3 "Archived copy". http://web.ipac.caltech.edu/staff/fmasci/home/statistics_refs/UsefulDistributions.pdf. 
  25. Terrell GR (1986) "Pearson's rule for sample medians". Technical Report 86-2[full citation needed]
  26. Banneheka BMSG, Ekanayake GEMUPD (2009) A new point estimator for the median of Gamma distribution. Viyodaya J Science 14:95–103
  27. Ferguson T. "Asymptotic Joint Distribution of Sample Mean and a Sample Quantile", Unpublished
  28. Choi KP (1994) "On the medians of Gamma distributions and an equation of Ramanujan". Proc Amer Math Soc 121 (1) 245–251
  29. 29.0 29.1 Pearson K (1895) Contributions to the Mathematical Theory of Evolution–II. Skew variation in homogeneous material. Phil Trans Roy Soc A. 186: 343–414
  30. Stuart A, Ord JK (1994) Kendall’s advanced theory of statistics. Vol 1. Distribution theory. 6th Edition. Edward Arnold, London
  31. Relationship between the mean, median, mode, and standard deviation in a unimodal distribution
  32. von Hippel, Paul T. (2005) "Mean, Median, and Skew: Correcting a Textbook Rule", Journal of Statistics Education, 13(2)
  33. 33.0 33.1 Dharmadhikari SW, Joag-dev K (1983) Mean, Median, Mode III. Statistica Neerlandica, 33: 165–168
  34. Bottomly, H. (2002, 2006) "Relationship between the mean, median, mode, and standard deviation in a unimodal distribution" Personal webpage
  35. Lesser LM (2005)."Letter to the editor" , [comment on von Hippel (2005)]. Journal of Statistics Education 13(2).
  36. Doodson AT (1917) "Relation of the mode, median and mean in frequency functions". Biometrika, 11 (4) 425–429 doi:10.1093/biomet/11.4.425
  37. Haldane JBS (1942) "The mode and median of a nearly normal distribution with given cumulants". Biometrika, 32: 294–299
  38. 38.0 38.1 Hall P (1980) "On the limiting behaviour of the mode and median of a sum of independent random variables". Annals of Probability 8: 419–430
  39. Kendall M.G., Stuart A. (1958) The advanced theory of statistics. p53 Vol 1. Griffin. London
  40. Gauss C.F. Theoria Combinationis Observationum Erroribus Minimis Obnoxiae. Pars Prior. Pars Posterior. Supplementum. Theory of the Combination of Observations Least Subject to Errors. Part One. Part Two. Supplement. 1995. Translated by G.W. Stewart. Classics in Applied Mathematics Series, Society for Industrial and Applied Mathematics, Philadelphia
  41. MacGillivray HL (1981) The mean, median, mode inequality and skewness for a class of densities. Aust J Stat 23(2) 247–250
  42. Groeneveld RA (1986) Skewness for the Weibull family. Statistica Neerlandica 40: 135–140
  43. Johnson NL, Rogers CA (1951) "The moment problem for unimodal distributions". Annals of Mathematical Statistics, 22 (3) 433–439
  44. van Zwet W.R. (1979) "Mean, median, mode II". Statistica Neerlandica 33(1) 1–5
  45. Abdous B, Theodorescu R (1998) Mean, median, mode IV. Statistica Neerlandica. 52 (3) 356–359
  46. van Zwet, W.R. (1964) "Convex transformations of random variables". Mathematics Centre Tract, 7, Mathematisch Centrum, Amsterdam
  47. Durtschi C, Hillison W, Pacini C (2004) The effective use of Benford’s Law to assist in detecting fraud in accounting data. J Forensic Accounting 5: 17–34
  48. Bowley AL (1920) Elements of statistics. New York: Charles Scribner's Sons
  49. Hinkley DV (1975) On power transformations to symmetry. Biometrika 62: 101–111
  50. Groeneveld RA, Meeden G (1984) Measuring skewness and kurtosis. The Statistician, 33: 391–399