Chauvenet's criterion

In statistical theory, Chauvenet's criterion (named for William Chauvenet[1]) is a means of assessing whether one piece of experimental data (an outlier) from a set of observations is likely to be spurious.[2]

Derivation

The idea behind Chauvenet's criterion is to find a probability band, centred on the mean of a normal distribution, that should reasonably contain all n samples of a data set. Any data point from the n samples that lies outside this probability band can be considered an outlier and removed from the data set, and a new mean and standard deviation can be calculated from the remaining values and new sample size. The outliers are identified by finding the number of standard deviations that corresponds to the bounds of the probability band around the mean ([math]\displaystyle{ D_{\mathrm{max}} }[/math]) and comparing that value to the absolute difference between the suspected outlier and the mean, divided by the sample standard deviation (Eq.1).

[math]\displaystyle{ D_{\mathrm{max}} \ge \frac{|x - \bar x|}{s_x} }[/math]     (1)

where

  • [math]\displaystyle{ D_{\mathrm{max}} }[/math] is the maximum allowable deviation,
  • [math]\displaystyle{ | \cdot | }[/math] is the absolute value,
  • [math]\displaystyle{ x }[/math] is the value of the suspected outlier,
  • [math]\displaystyle{ \bar x }[/math] is the sample mean, and
  • [math]\displaystyle{ s_x }[/math] is the sample standard deviation.

To be considered as including all [math]\displaystyle{ n }[/math] observations in the sample, the probability band (centered on the mean) need only account for [math]\displaystyle{ n-\tfrac12 }[/math] samples (if [math]\displaystyle{ n=3 }[/math], only 2.5 of the samples must be accounted for in the probability band). Since partial samples do not exist, [math]\displaystyle{ n-\tfrac12 }[/math] (2.5 for [math]\displaystyle{ n=3 }[/math]) is effectively [math]\displaystyle{ n }[/math]. Anything less than [math]\displaystyle{ n-\tfrac12 }[/math] is effectively [math]\displaystyle{ n-1 }[/math] (2 for [math]\displaystyle{ n=3 }[/math]) and is not valid, because we want the probability band that contains [math]\displaystyle{ n }[/math] observations, not [math]\displaystyle{ n-1 }[/math]. In short, we are looking for the probability [math]\displaystyle{ P }[/math] that corresponds to [math]\displaystyle{ n-\tfrac12 }[/math] out of [math]\displaystyle{ n }[/math] samples (Eq.2).

[math]\displaystyle{ P = \frac{n-\tfrac12}{n} = 1-\tfrac1{2n} }[/math]     (2)

where

  • [math]\displaystyle{ P }[/math] is the probability covered by the band centered on the sample mean and
  • [math]\displaystyle{ n }[/math] is the sample size.

The quantity [math]\displaystyle{ \tfrac1{2n} }[/math] corresponds to the combined probability represented by the two tails of the normal distribution that fall outside of the probability band [math]\displaystyle{ P }[/math]. In order to find the standard deviation level associated with [math]\displaystyle{ P }[/math], only the probability of one of the tails of the normal distribution needs to be analyzed due to its symmetry (Eq.3).

[math]\displaystyle{ P_z = \frac1{4n} }[/math]     (3)

where

  • [math]\displaystyle{ P_z }[/math] is the probability represented by one tail of the normal distribution and
  • [math]\displaystyle{ n }[/math] is the sample size.

Eq.1 is analogous to the [math]\displaystyle{ Z }[/math]-score equation (Eq.4).

[math]\displaystyle{ Z = \frac{x-\mu}{\sigma} }[/math]     (4)

where

  • [math]\displaystyle{ Z }[/math] is the [math]\displaystyle{ Z }[/math]-score,
  • [math]\displaystyle{ x }[/math] is the sample value,
  • [math]\displaystyle{ \mu=0 }[/math] is the mean of standard normal distribution, and
  • [math]\displaystyle{ \sigma=1 }[/math] is the standard deviation of standard normal distribution.

Based on Eq.4, [math]\displaystyle{ D_{\mathrm{max}} }[/math] (Eq.1) is found by looking up the z-score corresponding to [math]\displaystyle{ P_z }[/math] in a [math]\displaystyle{ Z }[/math]-score table: [math]\displaystyle{ D_{\mathrm{max}} }[/math] is equal to the z-score for [math]\displaystyle{ P_z }[/math]. Using this method, [math]\displaystyle{ D_{\mathrm{max}} }[/math] can be determined for any sample size. In Excel, [math]\displaystyle{ D_{\mathrm{max}} }[/math] can be found with the formula =ABS(NORM.S.INV(1/(4*n))), where n is the sample size.
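
As an illustrative sketch (not part of the original text), the same lookup can be done programmatically. The snippet below assumes Python with SciPy; the helper name chauvenet_dmax is a hypothetical choice.

  from scipy.stats import norm

  def chauvenet_dmax(n):
      """Maximum allowable deviation D_max for a sample of size n.

      D_max is the positive z-score whose one-tail probability equals
      1/(4n), i.e. the (1 - 1/(4n)) quantile of the standard normal
      distribution; this mirrors =ABS(NORM.S.INV(1/(4*n))) in Excel.
      """
      return norm.ppf(1 - 1 / (4 * n))

  print(chauvenet_dmax(6))  # ~1.7317, as in the worked example below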

Calculation

To apply Chauvenet's criterion, first calculate the mean and standard deviation of the observed data. Based on how much the suspect datum differs from the mean, use the normal distribution function (or a table thereof) to determine the probability that a given data point would deviate from the mean by at least as much as the suspect data point. Multiply this probability by the number of data points taken. If the result is less than 0.5, the suspicious data point may be discarded; i.e., a reading may be rejected if the probability of obtaining its particular deviation from the mean is less than [math]\displaystyle{ \tfrac1{2n} }[/math].[citation needed]
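
A minimal sketch of this procedure, assuming Python with NumPy and SciPy (the function name chauvenet_reject is a hypothetical choice, not an established API):

  import numpy as np
  from scipy.stats import norm

  def chauvenet_reject(data):
      """Boolean mask of values rejected by Chauvenet's criterion.

      A value is rejected when the expected number of observations at
      least as far from the mean, n * P(|Z| >= z), falls below 0.5,
      i.e. when its two-tailed probability is below 1/(2n).
      """
      data = np.asarray(data, dtype=float)
      n = data.size
      mean = data.mean()
      std = data.std(ddof=1)             # sample standard deviation
      z = np.abs(data - mean) / std      # deviation in units of the standard deviation
      return n * 2.0 * norm.sf(z) < 0.5  # norm.sf(z) is the one-tail probability P(Z >= z)

  print(chauvenet_reject([9, 10, 10, 10, 11, 50]))
  # [False False False False False  True] -- only 50 is flagged

This probability-based test is equivalent to comparing each value's [math]\displaystyle{ |x - \bar x|/s_x }[/math] against [math]\displaystyle{ D_{\mathrm{max}} }[/math] from Eq.1.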

Example

For instance, suppose a value is measured experimentally in several trials as 9, 10, 10, 10, 11, and 50, and we want to find out if 50 is an outlier.

First, we find the cumulative probability corresponding to the one-tail probability [math]\displaystyle{ P_z=\tfrac1{4n} }[/math] from Eq.3.

[math]\displaystyle{ 1-P_z = 1-\frac1{4n}=1-\frac1{4\times6}=1-\frac1{24}\approx 0.9583 }[/math]

Then we find [math]\displaystyle{ D_{\mathrm{max}} }[/math] by plugging this value into the quantile function [math]\displaystyle{ Q }[/math] of the standard normal distribution.

[math]\displaystyle{ D_{\mathrm{max}}=Q(1-P_z)\approx1.7317 }[/math]


Then we find the z-score of 50, using the sample mean [math]\displaystyle{ \bar x \approx 16.67 }[/math] and sample standard deviation [math]\displaystyle{ s_x \approx 16.34 }[/math] of the six measurements.

[math]\displaystyle{ z=\frac{50-\bar x}{s_x}=\frac{50-16.67}{16.34}\approx2.04 }[/math]


From there we see that [math]\displaystyle{ z\gt D_{\mathrm{max}} }[/math] and can conclude that 50 is an outlier according to Chauvenet's criterion.
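
The figures above can be reproduced numerically; a brief check, again assuming Python with NumPy and SciPy:

  import numpy as np
  from scipy.stats import norm

  data = np.array([9, 10, 10, 10, 11, 50], dtype=float)
  n = data.size

  d_max = norm.ppf(1 - 1 / (4 * n))                 # ~1.7317
  z_50 = abs(50 - data.mean()) / data.std(ddof=1)   # ~2.04

  print(z_50 > d_max)  # True: 50 exceeds D_max and is an outlier by the criterion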

Peirce's criterion

Another method for eliminating spurious data is called Peirce's criterion. It was developed a few years before Chauvenet's criterion was published, and it is a more rigorous approach to the rational deletion of outlier data.[3] Other methods such as Grubbs's test for outliers are mentioned under the listing for Outlier.[citation needed]

Criticism

Deletion of outlier data is a controversial practice frowned upon by many scientists and science instructors; while Chauvenet's criterion provides an objective and quantitative method for data rejection, it does not make the practice more scientifically or methodologically sound, especially in small data sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known.

References

  1. Chauvenet, William. A Manual of Spherical and Practical Astronomy V. II. 1863. Reprint of 1891. 5th ed. Dover, N.Y.: 1960. pp. 474–566.
  2. Fratta, M; Scaringi, S; Drew, J E; Monguió, M; Knigge, C; Maccarone, T J; Court, J M C; Iłkiewicz, K A et al. (21 July 2021). "Population-based identification of H α-excess sources in the Gaia DR2 and IPHAS catalogues". Monthly Notices of the Royal Astronomical Society 505 (1): 1135–1152. doi:10.1093/mnras/stab1258. ISSN 0035-8711. https://academic.oup.com/mnras/article/505/1/1135/6279691. 
  3. Ross, Stephen (2003). University of New Haven article. Journal of Engineering Technology, Fall 2003. Retrieved from https://www.researchgate.net/profile/Stephen-Ross-9.
