# Spearman's rank correlation coefficient

In statistics, **Spearman's rank correlation coefficient** or **Spearman's ρ**, named after Charles Spearman and often denoted by the Greek letter [math]\displaystyle{ \rho }[/math] (rho) or as [math]\displaystyle{ r_s }[/math], is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function.

The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.

Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.

Spearman's coefficient is appropriate for both continuous and discrete ordinal variables.^{[1]}^{[2]} Both Spearman's [math]\displaystyle{ \rho }[/math] and Kendall's [math]\displaystyle{ \tau }[/math] can be formulated as special cases of a more general correlation coefficient.

## Definition and calculation

The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the rank variables.^{[3]}

For a sample of size *n*, the *n* raw scores [math]\displaystyle{ X_i, Y_i }[/math] are converted to ranks [math]\displaystyle{ \operatorname{R}({X_i}), \operatorname{R}({Y_i}) }[/math], and [math]\displaystyle{ r_s }[/math] is computed as

- [math]\displaystyle{ r_s = \rho_{\operatorname{R}(X),\operatorname{R}(Y)} = \frac{\operatorname{cov}(\operatorname{R}(X), \operatorname{R}(Y))} {\sigma_{\operatorname{R}(X)} \sigma_{\operatorname{R}(Y)}}, }[/math]

where

- [math]\displaystyle{ \rho }[/math] denotes the usual Pearson correlation coefficient, but applied to the rank variables,
- [math]\displaystyle{ \operatorname{cov}(\operatorname{R}(X), \operatorname{R}(Y)) }[/math] is the covariance of the rank variables,
- [math]\displaystyle{ \sigma_{\operatorname{R}(X)} }[/math] and [math]\displaystyle{ \sigma_{\operatorname{R}(Y)} }[/math] are the standard deviations of the rank variables.
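
For illustration, the following is a minimal Python sketch of this definition (the data values and function name are chosen only for the example): the raw scores are converted to ranks, and the ordinary Pearson correlation of the two rank vectors is returned.

```python
import numpy as np
from scipy.stats import rankdata  # assigns average ranks to ties ("fractional ranks")

def spearman_rho(x, y):
    """Spearman's rho as the Pearson correlation of the rank variables."""
    rx = rankdata(x)              # R(X_i)
    ry = rankdata(y)              # R(Y_i)
    return np.corrcoef(rx, ry)[0, 1]

x = [86, 97, 99, 100, 101]        # illustrative sample only
y = [2, 20, 28, 27, 50]
print(spearman_rho(x, y))         # 0.9 for this small sample
```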

Only if all *n* ranks are *distinct integers* can [math]\displaystyle{ r_s }[/math] be computed using the popular formula

- [math]\displaystyle{ r_s = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}, }[/math]

where

- [math]\displaystyle{ d_i = \operatorname{R}(X_i) - \operatorname{R}(Y_i) }[/math] is the difference between the two ranks of each observation,
- *n* is the number of observations.
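
As a small sketch (assuming tie-free data; the sample values are illustrative), the shortcut formula can be evaluated directly from the rank differences:

```python
import numpy as np
from scipy.stats import rankdata

def spearman_shortcut(x, y):
    """1 - 6*sum(d_i^2)/(n*(n^2 - 1)); valid only when all ranks are distinct."""
    rx, ry = rankdata(x), rankdata(y)
    d = rx - ry                            # d_i = R(X_i) - R(Y_i)
    n = len(x)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

x = [86, 97, 99, 100, 101]                 # no repeated values, so all ranks are distinct
y = [2, 20, 28, 27, 50]
print(spearman_shortcut(x, y))             # 0.9, the same value as Pearson on the ranks
```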

Consider a bivariate sample [math]\displaystyle{ (x_i, y_i),\, i=1,\dots,n }[/math] with corresponding ranks [math]\displaystyle{ (R(X_i), R(Y_i)) = (R_i, S_i) }[/math]. Then, the Spearman correlation coefficient of [math]\displaystyle{ x,y }[/math] is
[math]\displaystyle{
r_s = \frac{\sum_{i=1}^{n}(R_i- \overline{R})(S_i- \overline{S})}{\sqrt{\sum_{i=1}^{n}(R_i- \overline{R})^2}\sqrt{\sum_{i=1}^{n}(S_i- \overline{S})^2}}
}[/math]

where

- [math]\displaystyle{ \overline{R}:=\frac{1}{n}\sum_{i=1}^{n}R_i, \quad \overline{S}:=\frac{1}{n}\sum_{i=1}^{n}S_i }[/math]

Now we will show that [math]\displaystyle{ r_s }[/math] can be expressed purely in terms of [math]\displaystyle{ d_i^2 = (R_i - S_i)^2 }[/math], provided there are no ties within either sample.

First, recall the following formulas for the triangular and square pyramidal numbers:

- [math]\displaystyle{ \sum_{i=1}^{n}i = \frac{n(n+1)}{2}\,, \quad \sum_{i=1}^{n}i^2 = \frac{n(n+1)(2n+1)}{6} }[/math]

It follows that

- [math]\displaystyle{ \overline{R}=\overline{S}=\frac{1}{n}\sum_{i=1}^{n}i = \frac{(n+1)}{2}\,, \quad \sum_{i=1}^{n}R_i^2=\sum_{i=1}^{n}S_i^2=\sum_{i=1}^{n}i^2 = \frac{n(n+1)(2n+1)}{6} }[/math]

Thus,

- [math]\displaystyle{ \begin{align} \sum_{i=1}^{n} (R_i-\overline{R})^2 = \sum_{i=1}^{n} (S_i-\overline{S})^2 & = \sum_{i=1}^{n} S_i^2 - n\overline{S}^2\\ & = \sum_{i=1}^{n} i^2 - n\left(\frac{n+1}{2}\right)^2 \\ & = \frac{n(n+1)(2n+1)}{6} - n\left(\frac{n+1}{2}\right)^2 \\ & = \left (\frac{n+1}{2}\right)\left(\frac{n(2n+1)}{3} - n\frac{(n+1)}{2}\right) \\ & = \left (\frac{n+1}{2}\right)\left(\frac{n^2-n}{6}\right) \\ & = \frac{n(n^2 - 1)}{12} \end{align} }[/math]

and

- [math]\displaystyle{ \sum_{i=1}^{n}(R_i- \overline{R})(S_i- \overline{S}) = \sum_{i=1}^{n}R_iS_i - n\overline{R}\cdot\overline{S} = \sum_{i=1}^{n}R_iS_i - \frac{n(n+1)^2}{4}. }[/math]

Up to this point, we have:

- [math]\displaystyle{ \begin{align} r_s = \frac{\sum_{i=1}^{n}R_iS_i - \dfrac{n(n+1)^2}{4}}{\dfrac{n(n^2 - 1)}{12}} & = \frac{12\sum_{i=1}^{n}R_iS_i}{n(n^2 - 1)} - \frac{3n(n+1)^2}{n(n^2 - 1)} \\ & = \frac{12\sum_{i=1}^{n}R_iS_i}{n(n^2 - 1)} - \frac{3n(n+1)^2}{n(n - 1)(n+1)} \\ & = \frac{12\sum_{i=1}^{n}R_iS_i}{n(n^2 - 1)} - \frac{3(n+1)}{n - 1} \end{align} }[/math]

Now, let [math]\displaystyle{ d_i^2 := (R_i - S_i)^2 = R_i^2 + S_i^2 - 2R_iS_i\, }[/math], hence

- [math]\displaystyle{ \sum_{i=1}^{n} d_i^2 = \sum_{i=1}^{n}R_i^2 + \sum_{i=1}^{n}S_i^2 - 2\sum_{i=1}^{n}R_iS_i = 2\sum_{i=1}^{n}i^2 - 2\sum_{i=1}^{n}R_iS_i = \frac{n(n+1)(2n+1)}{3} - 2\sum_{i=1}^{n}R_iS_i. }[/math]

Now we can express [math]\displaystyle{ \sum_{i=1}^{n}R_iS_i }[/math] using [math]\displaystyle{ d_i^2 }[/math] and get

- [math]\displaystyle{ \sum_{i=1}^{n}R_iS_i = \frac{n(n+1)(2n+1)}{6} - \frac{1}{2}\sum_{i=1}^{n} d_i^2 }[/math]

Substituting this result back in the last expression of [math]\displaystyle{ r_s }[/math] gives us

- [math]\displaystyle{ \begin{align} r_s & = \frac{12\cdot\left(\dfrac{n(n+1)(2n+1)}{6} - \dfrac{1}{2}\sum_{i=1}^{n} d_i^2\right )}{n(n^2 - 1)} - \frac{3(n+1)}{n - 1} \\[2ex] & = \frac{2n(n+1)(2n+1) - 6\sum_{i=1}^{n} d_i^2}{n(n-1)(n+1)} - \frac{3(n+1)}{n - 1} \\[2ex] & = \frac{4n^3+6n^2 + 2n - 6\sum_{i=1}^{n} d_i^2 - 3n(n+1)^2}{n(n-1)(n+1)} \\[2ex] & = \frac{n^3 - n - 6\sum_{i=1}^{n} d_i^2}{n(n-1)(n+1)} \\[2ex] & = \frac{n^3 - n - 6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \\[2ex] & = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \end{align} }[/math]

Identical values are usually^{[4]} each assigned fractional ranks equal to the average of their positions in the ascending order of the values, which is equivalent to averaging over all possible permutations.

If ties are present in the data set, the simplified formula above yields incorrect results: only when all ranks in both variables are distinct does [math]\displaystyle{ \sigma_{\operatorname{R}(X)} \sigma_{\operatorname{R}(Y)} = \operatorname{Var}{(\operatorname{R}(X))} = \operatorname{Var}{(\operatorname{R}(Y))} = (n^2 - 1)/12 }[/math] hold (calculated according to the biased variance). The first equation (normalizing by the standard deviations) may be used even when ranks are normalized to [0, 1] ("relative ranks"), because it is insensitive both to translation and to linear scaling.

The simplified method should also not be used in cases where the data set is truncated; that is, when the Spearman's correlation coefficient is desired for the top *X* records (whether by pre-change rank or post-change rank, or both), the user should use the Pearson correlation coefficient formula given above.^{[5]}
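
A brief illustration of the tie problem, with made-up data: computing the Pearson correlation of the fractional (average) ranks gives the correct coefficient, while the shortcut formula drifts away from it.

```python
import numpy as np
from scipy.stats import rankdata

x = [1, 2, 2, 4, 5]                        # tie: the value 2 appears twice
y = [3, 1, 4, 2, 5]

rx = rankdata(x)                           # [1, 2.5, 2.5, 4, 5] -- fractional (average) ranks
ry = rankdata(y)

rho_ranks = np.corrcoef(rx, ry)[0, 1]      # Pearson on the ranks: correct with ties
d = rx - ry
n = len(x)
rho_shortcut = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))   # shortcut: no longer exact here

print(rho_ranks, rho_shortcut)             # approx. 0.359 vs 0.375 -- the shortcut is off
```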

## Related quantities

There are several other numerical measures that quantify the extent of statistical dependence between pairs of observations. The most common of these is the Pearson product-moment correlation coefficient, which is similar to Spearman's rank correlation but measures the "linear" relationship between the raw values rather than between their ranks.

An alternative name for the Spearman rank correlation is the “grade correlation”;^{[6]} in this, the “rank” of an observation is replaced by the “grade”. In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the “grade” of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term “grade correlation” is still in use.^{[7]}

## Interpretation

The sign of the Spearman correlation indicates the direction of association between *X* (the independent variable) and *Y* (the dependent variable). If *Y* tends to increase when *X* increases, the Spearman correlation coefficient is positive. If *Y* tends to decrease when *X* increases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency for *Y* to either increase or decrease when *X* increases. The Spearman correlation increases in magnitude as *X* and *Y* become closer to being perfectly monotone functions of each other. When *X* and *Y* are perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfectly monotone increasing relationship implies that for any two pairs of data values *X*_{i}, *Y*_{i} and *X*_{j}, *Y*_{j}, that *X*_{i} − *X*_{j} and *Y*_{i} − *Y*_{j} always have the same sign. A perfectly monotone decreasing relationship implies that these differences always have opposite signs.

The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings. First, a perfect Spearman correlation results when *X* and *Y* are related by any monotonic function. Contrast this with the Pearson correlation, which only gives a perfect value when *X* and *Y* are related by a *linear* function. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of the joint probability distribution of *X* and *Y*.
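
A small example of the first point (the data are artificial): for a strictly increasing but nonlinear relationship, the Spearman coefficient is exactly 1 while the Pearson coefficient is smaller.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.exp(x)                              # strictly increasing, but far from linear

rho, _ = spearmanr(x, y)                   # 1.0: the ranks agree exactly
r, _ = pearsonr(x, y)                      # < 1: the relationship is not linear
print(rho, r)
```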

## Example

In this example, the raw data in the table below are used to calculate the correlation between the IQ of a person and the number of hours spent in front of the TV per week.

IQ, [math]\displaystyle{ X_i }[/math] | Hours of TV per week, [math]\displaystyle{ Y_i }[/math] |
---|---|
106 | 7 |
100 | 27 |
86 | 2 |
101 | 50 |
99 | 28 |
103 | 29 |
97 | 20 |
113 | 12 |
112 | 6 |
110 | 17 |

Firstly, evaluate [math]\displaystyle{ d^2_i }[/math]. To do so use the following steps, reflected in the table below.

- Sort the data by the first column ([math]\displaystyle{ X_i }[/math]). Create a new column [math]\displaystyle{ x_i }[/math] and assign it the ranked values 1, 2, 3, ..., *n*.
- Next, sort the data by the second column ([math]\displaystyle{ Y_i }[/math]). Create a fourth column [math]\displaystyle{ y_i }[/math] and similarly assign it the ranked values 1, 2, 3, ..., *n*.
- Create a fifth column [math]\displaystyle{ d_i }[/math] to hold the differences between the two rank columns ([math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ y_i }[/math]).
- Create one final column [math]\displaystyle{ d^2_i }[/math] to hold the value of column [math]\displaystyle{ d_i }[/math] squared.

IQ, [math]\displaystyle{ X_i }[/math] | Hours of TV per week, [math]\displaystyle{ Y_i }[/math] | rank [math]\displaystyle{ x_i }[/math] | rank [math]\displaystyle{ y_i }[/math] | [math]\displaystyle{ d_i }[/math] | [math]\displaystyle{ d^2_i }[/math] |
---|---|---|---|---|---|
86 | 2 | 1 | 1 | 0 | 0 |
97 | 20 | 2 | 6 | −4 | 16 |
99 | 28 | 3 | 8 | −5 | 25 |
100 | 27 | 4 | 7 | −3 | 9 |
101 | 50 | 5 | 10 | −5 | 25 |
103 | 29 | 6 | 9 | −3 | 9 |
106 | 7 | 7 | 3 | 4 | 16 |
110 | 17 | 8 | 5 | 3 | 9 |
112 | 6 | 9 | 2 | 7 | 49 |
113 | 12 | 10 | 4 | 6 | 36 |

With [math]\displaystyle{ d^2_i }[/math] found, add them to find [math]\displaystyle{ \sum d_i^2 = 194 }[/math]. The value of *n* is 10. These values can now be substituted back into the equation

- [math]\displaystyle{ \rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)} }[/math]

to give

- [math]\displaystyle{ \rho = 1 - \frac{6 \times 194}{10(10^2 - 1)}, }[/math]

which evaluates to *ρ* = −29/165 = −0.175757575... with a *p*-value = 0.627188 (using the *t*-distribution).

That the value is close to zero shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television, the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (with ties given fractional ranks, as described above).
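
The worked example can be checked numerically; the following sketch reproduces the shortcut calculation and compares it with SciPy's `spearmanr`.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

iq    = [106, 100, 86, 101, 99, 103, 97, 113, 112, 110]
hours = [  7,  27,  2,  50, 28,  29, 20,  12,   6,  17]

d = rankdata(iq) - rankdata(hours)         # rank differences d_i
n = len(iq)
rho = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))
print(rho)                                 # -0.17575... = -29/165

rho_check, p_value = spearmanr(iq, hours)  # same coefficient; p-value ~ 0.63
print(rho_check, p_value)
```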

## Determining significance

One approach to test whether an observed value of *ρ* is significantly different from zero (*r* always satisfies −1 ≤ *r* ≤ 1) is to calculate the probability that it would be greater than or equal to the observed *r*, given the null hypothesis, by using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values in the sample and the way they are treated in computing the rank correlation.
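
A minimal sketch of such a permutation test, applied to the IQ/TV data from the example above (the number of permutations is arbitrary):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
x = [106, 100, 86, 101, 99, 103, 97, 113, 112, 110]
y = [7, 27, 2, 50, 28, 29, 20, 12, 6, 17]

observed, _ = spearmanr(x, y)
n_perm = 10_000
hits = 0
for _ in range(n_perm):
    r_perm, _ = spearmanr(x, rng.permutation(y))   # break the pairing under the null
    if abs(r_perm) >= abs(observed):               # two-sided: as or more extreme than observed
        hits += 1
print(observed, hits / n_perm)                     # permutation p-value, roughly 0.6 for these data
```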

Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value *ρ* can be carried out using the Fisher transformation:

- [math]\displaystyle{ F(r) = \frac{1}{2} \ln\frac{1 + r}{1 - r} = \operatorname{artanh} r. }[/math]

If *F*(*r*) is the Fisher transformation of *r*, the sample Spearman rank correlation coefficient, and *n* is the sample size, then

- [math]\displaystyle{ z = \sqrt{\frac{n - 3}{1.06}} F(r) }[/math]

is a *z*-score for *r*, which approximately follows a standard normal distribution under the null hypothesis of statistical independence (*ρ* = 0).^{[8]}^{[9]}

One can also test for significance using

- [math]\displaystyle{ t = r \sqrt{\frac{n - 2}{1 - r^2}}, }[/math]

which is distributed approximately as Student's *t*-distribution with *n* − 2 degrees of freedom under the null hypothesis.^{[10]} A justification for this result relies on a permutation argument.^{[11]}
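
Both approximate tests can be evaluated for the worked example above; the following sketch computes the Fisher-transformation *z*-score and the *t*-statistic with their two-sided *p*-values.

```python
import numpy as np
from scipy.stats import norm, t as t_dist

r = -29 / 165                              # Spearman rho from the worked example
n = 10

# Fisher-transformation z-test
z = np.sqrt((n - 3) / 1.06) * np.arctanh(r)
p_z = 2 * norm.sf(abs(z))                  # two-sided p-value, ~0.65

# Student-t approximation with n - 2 degrees of freedom
t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))
p_t = 2 * t_dist.sf(abs(t_stat), df=n - 2) # ~0.63, matching the example's p-value

print(z, p_z, t_stat, p_t)
```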

A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page^{[12]} and is usually referred to as Page's trend test for ordered alternatives.

## Correspondence analysis based on Spearman's *ρ*

Classic correspondence analysis is a statistical method that gives a score to every value of two nominal variables. In this way the Pearson correlation coefficient between them is maximized.

There exists an equivalent of this method, called grade correspondence analysis, which maximizes Spearman's *ρ* or Kendall's τ.^{[13]}

## Approximating Spearman's *ρ* from a stream

There are two existing approaches to approximating the Spearman's rank correlation coefficient from streaming data.^{[14]}^{[15]} The first approach^{[14]}
involves coarsening the joint distribution of [math]\displaystyle{ (X,Y) }[/math]. For continuous [math]\displaystyle{ X, Y }[/math] values: [math]\displaystyle{ m_{1}, m_{2} }[/math] cutpoints are selected for [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] respectively, discretizing
these random variables. Default cutpoints are added at [math]\displaystyle{ -\infty }[/math] and [math]\displaystyle{ \infty }[/math]. A count matrix of size [math]\displaystyle{ (m_{1}+1) \times (m_{2}+1) }[/math], denoted [math]\displaystyle{ M }[/math], is then constructed where [math]\displaystyle{ M[i,j] }[/math] stores the number of observations that
fall into the two-dimensional cell indexed by [math]\displaystyle{ (i,j) }[/math]. For streaming data, when a new observation arrives, the appropriate [math]\displaystyle{ M[i,j] }[/math] element is incremented. The Spearman's rank
correlation can then be computed, based on the count matrix [math]\displaystyle{ M }[/math], using linear algebra operations (Algorithm 2^{[14]}). Note that for discrete random
variables, no discretization procedure is necessary. This method is applicable to stationary streaming data as well as large data sets. For non-stationary streaming data, where the Spearman's rank correlation coefficient may change over time, the same procedure can be applied, but to a moving window of observations. When using a moving window, memory requirements grow linearly with chosen window size.
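
The following is a simplified illustration of the coarsening idea, not the exact linear-algebra formulation of the cited algorithm: the cutpoints are assumed fixed in advance, and the Spearman estimate is computed as a weighted Pearson correlation of the bin mid-ranks.

```python
import numpy as np

# Illustrative cutpoints, assumed chosen in advance; real choices are data-dependent.
x_cuts = np.array([-1.0, 0.0, 1.0])     # m1 = 3 cutpoints for X
y_cuts = np.array([-1.0, 0.0, 1.0])     # m2 = 3 cutpoints for Y

# Count matrix of size (m1 + 1) x (m2 + 1); implicit cutpoints at -inf and +inf.
M = np.zeros((len(x_cuts) + 1, len(y_cuts) + 1))

def update(x, y):
    """Increment the cell of the count matrix that the new observation (x, y) falls into."""
    i = np.searchsorted(x_cuts, x)      # X bin index
    j = np.searchsorted(y_cuts, y)      # Y bin index
    M[i, j] += 1

def spearman_from_counts():
    """Weighted Pearson correlation of bin mid-ranks: a simplified stand-in for
    the matrix-based estimator described in the streaming-correlation paper."""
    n = M.sum()
    row, col = M.sum(axis=1), M.sum(axis=0)
    # Mid-rank of each X bin and Y bin (average rank of the observations inside it)
    rx = np.cumsum(row) - row / 2 + 0.5
    ry = np.cumsum(col) - col / 2 + 0.5
    RX, RY = np.meshgrid(rx, ry, indexing="ij")
    w = M / n
    mx, my = (w * RX).sum(), (w * RY).sum()
    cov = (w * (RX - mx) * (RY - my)).sum()
    sx = np.sqrt((w * (RX - mx) ** 2).sum())
    sy = np.sqrt((w * (RY - my) ** 2).sum())
    return cov / (sx * sy)

rng = np.random.default_rng(0)
for _ in range(10_000):                 # simulate a stream of dependent observations
    x = rng.normal()
    update(x, 0.5 * x + rng.normal())
print(spearman_from_counts())           # estimate of the stream's Spearman correlation
```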

The second approach to approximating the Spearman's rank correlation coefficient from streaming data involves the use of Hermite series based estimators.^{[15]} These estimators, based on Hermite polynomials,
allow sequential estimation of the probability density function and cumulative distribution function in univariate and bivariate cases. Bivariate Hermite series density
estimators and univariate Hermite series based cumulative distribution function estimators are plugged into a large sample version of the
Spearman's rank correlation coefficient estimator, to give a sequential Spearman's correlation estimator. This estimator is phrased in
terms of linear algebra operations for computational efficiency (equation (8) and algorithm 1 and 2^{[15]}). These algorithms are only applicable to continuous random variable data, but have
certain advantages over the count matrix approach in this setting. The first advantage is improved accuracy when applied to large numbers of observations. The second advantage is that the Spearman's rank correlation coefficient can be
computed on non-stationary streams without relying on a moving window. Instead, the Hermite series based estimator uses an exponential weighting scheme to track time-varying Spearman's rank correlation from streaming data,
which has constant memory requirements with respect to "effective" moving window size.

## Software implementations

- R: the base "stats" package implements the test as `cor.test(x, y, method = "spearman")`; the coefficient alone can be obtained with `cor(x, y, method = "spearman")`.
- Stata: `spearman` calculates all pairwise correlation coefficients for all variables in *varlist*.
- MATLAB: `[r,p] = corr(x,y,'Type','Spearman')`, where `r` is the Spearman's rank correlation coefficient, `p` is the p-value, and `x` and `y` are vectors.^{[16]}
- Python: can be computed with the `spearmanr` function of the `scipy.stats` module.

## See also

- Kendall tau rank correlation coefficient
- Chebyshev's sum inequality, rearrangement inequality (These two articles may shed light on the mathematical properties of Spearman's *ρ*.)
- Distance correlation
- Polychoric correlation

## References

1. Scale types.
2. Lehman, Ann (2005). *Jmp For Basic Univariate And Multivariate Statistics: A Step-by-step Guide*. Cary, NC: SAS Press. p. 123. ISBN 978-1-59047-576-8. https://archive.org/details/jmpforbasicuniva00leha.
3. Myers, Jerome L.; Well, Arnold D. (2003). *Research Design and Statistical Analysis* (2nd ed.). Lawrence Erlbaum. p. 508. ISBN 978-0-8058-4037-7. https://archive.org/details/researchdesignst00jero_935.
4. Dodge, Yadolah (2010). *The Concise Encyclopedia of Statistics*. Springer-Verlag New York. p. 502. ISBN 978-0-387-31742-7. https://archive.org/details/conciseencyclope00ydod.
5. Al Jaber, Ahmed Odeh; Elayyan, Haifaa Omar (2018). *Toward Quality Assurance and Excellence in Higher Education*. River Publishers. p. 284. ISBN 978-87-93609-54-9.
6. Yule, G. U.; Kendall, M. G. (1968). *An Introduction to the Theory of Statistics* (14th ed.). Charles Griffin & Co. p. 268.
7. Piantadosi, J.; Howlett, P.; Boland, J. (2007). "Matching the grade correlation coefficient using a copula with maximum disorder". *Journal of Industrial and Management Optimization* **3** (2): 305–312. doi:10.3934/jimo.2007.3.305. http://aimsciences.org/journals/pdfs.jsp?paperID=2265&mode=abstract.
8. Choi, S. C. (1977). "Tests of Equality of Dependent Correlation Coefficients". *Biometrika* **64** (3): 645–647. doi:10.1093/biomet/64.3.645.
9. Fieller, E. C.; Hartley, H. O.; Pearson, E. S. (1957). "Tests for rank correlation coefficients. I". *Biometrika* **44** (3–4): 470–481. doi:10.1093/biomet/44.3-4.470.
10. Press; Vetterling; Teukolsky; Flannery (1992). *Numerical Recipes in C: The Art of Scientific Computing* (2nd ed.). Cambridge University Press. p. 640. https://archive.org/details/numericalrecipes00pres_0.
11. Kendall, M. G.; Stuart, A. (1973). *The Advanced Theory of Statistics, Volume 2: Inference and Relationship*. Griffin. ISBN 978-0-85264-215-3. https://archive.org/details/advancedtheoryof0001kend.
12. Page, E. B. (1963). "Ordered hypotheses for multiple treatments: A significance test for linear ranks". *Journal of the American Statistical Association* **58** (301): 216–230. doi:10.2307/2282965.
13. Kowalczyk, T.; Pleszczyńska, E.; Ruland, F., eds. (2004). *Grade Models and Methods for Data Analysis with Applications for the Analysis of Data Populations*. Studies in Fuzziness and Soft Computing. **151**. Berlin Heidelberg New York: Springer Verlag. ISBN 978-3-540-21120-4.
14. Xiao, W. (2019). "Novel Online Algorithms for Nonparametric Correlations with Application to Analyze Sensor Data". *2019 IEEE International Conference on Big Data (Big Data)*: 404–412. doi:10.1109/BigData47090.2019.9006483. ISBN 978-1-7281-0858-2.
15. Stephanou, Michael; Varughese, Melvin (July 2021). "Sequential estimation of Spearman rank correlation using Hermite series estimators". *Journal of Multivariate Analysis* **186**: 104783. doi:10.1016/j.jmva.2021.104783.
16. https://www.mathworks.com/help/stats/corr.html

## Further reading

- Corder, G. W.; Foreman, D. I. (2014). *Nonparametric Statistics: A Step-by-Step Approach*. Wiley. ISBN 978-1118840313.
- Daniel, Wayne W. (1990). "Spearman rank correlation coefficient". *Applied Nonparametric Statistics* (2nd ed.). Boston: PWS-Kent. pp. 358–365. ISBN 978-0-534-91976-4. https://books.google.com/books?id=0hPvAAAAMAAJ&pg=PA358.
- Spearman, C. (1904). "The proof and measurement of association between two things". *American Journal of Psychology* **15** (1): 72–101. doi:10.2307/1412159. http://archive.org/details/proofmeasurement00speauoft.
- Bonett, D. G.; Wright, T. A. (2000). "Sample size requirements for Pearson, Kendall, and Spearman correlations". *Psychometrika* **65**: 23–28. doi:10.1007/bf02294183.
- Kendall, M. G. (1970). *Rank correlation methods* (4th ed.). London: Griffin. ISBN 978-0-852-6419-96. OCLC 136868.
- Hollander, M.; Wolfe, D. A. (1973). *Nonparametric statistical methods*. New York: Wiley. ISBN 978-0-471-40635-8. OCLC 520735. https://archive.org/details/nonparametricsta00holl.
- Caruso, J. C.; Cliff, N. (1997). "Empirical size, coverage, and power of confidence intervals for Spearman's Rho". *Educational and Psychological Measurement* **57** (4): 637–654. doi:10.1177/0013164497057004009.

## External links

- Table of critical values of *ρ* for significance with small samples
- Spearman's Rank Correlation Coefficient – Excel Guide: sample data and formulae for Excel, developed by the Royal Geographical Society.

Original source: https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient