Additive smoothing
In statistics, additive smoothing, also called Laplace smoothing[1] or Lidstone smoothing, is a technique used to smooth count data, eliminating issues caused by certain values having 0 occurrences. Given a set of observation counts [math]\displaystyle{ \textstyle { \mathbf{x}\ =\ \left\langle x_1,\, x_2,\, \ldots,\, x_d \right\rangle} }[/math] from a [math]\displaystyle{ \textstyle {d} }[/math]-dimensional multinomial distribution with [math]\displaystyle{ \textstyle {N} }[/math] trials, a "smoothed" version of the counts gives the estimator:
- [math]\displaystyle{ \hat\theta_i= \frac{x_i + \alpha}{N + \alpha d} \qquad (i=1,\ldots,d), }[/math]
where the smoothed count [math]\displaystyle{ \textstyle { \hat{x}_i=N\hat{\theta}_i} }[/math] and the "pseudocount" α > 0 is a smoothing parameter, with α = 0 corresponding to no smoothing. (This parameter is explained in § Pseudocount below.) Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) [math]\displaystyle{ \textstyle {x_i/ N} }[/math] and the uniform probability [math]\displaystyle{ \textstyle {1/d} }[/math]. Invoking Laplace's rule of succession, some authors have argued that α should be 1 (in which case the term add-one smoothing[2][3] is also used), though in practice a smaller value is typically chosen.
From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter α as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution.
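As a minimal sketch of the estimator above (the function and variable names below are illustrative, not taken from any particular library):

```python
def additive_smoothing(counts, alpha=1.0):
    """Smoothed estimates (x_i + alpha) / (N + alpha * d) for a vector of counts."""
    N = sum(counts)   # total number of trials
    d = len(counts)   # number of categories
    return [(x + alpha) / (N + alpha * d) for x in counts]

# Three categories observed 0, 2 and 8 times in N = 10 trials:
print(additive_smoothing([0, 2, 8], alpha=1.0))
# [0.0769..., 0.2307..., 0.6923...] -- the zero-count category no longer gets probability 0.
```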
History
Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow. His rationale was that even given a large sample of days with the rising sun, we still cannot be completely sure that the sun will rise tomorrow (known as the sunrise problem).[4]
Pseudocount
A pseudocount is an amount (not generally an integer, despite its name) added to the number of observed cases in order to change the expected probability in a model of those data when that probability is not known to be zero. It is so named because, roughly speaking, a pseudocount of value [math]\displaystyle{ \textstyle {\alpha} }[/math] weighs into the posterior distribution similarly to each category having an additional count of [math]\displaystyle{ \textstyle { \alpha } }[/math]. If the frequency of each item [math]\displaystyle{ \textstyle { i } }[/math] is [math]\displaystyle{ \textstyle {x_i} }[/math] out of [math]\displaystyle{ \textstyle {N} }[/math] samples, the empirical probability of event [math]\displaystyle{ \textstyle { i } }[/math] is
- [math]\displaystyle{ p_{i,\ \mathrm{empirical}} = \frac{x_i}{N} }[/math]
but the posterior probability when additively smoothed is
- [math]\displaystyle{ p_{i,\ \alpha\text{-smoothed}} = \frac{x_i + \alpha}{N + \alpha d}, }[/math]
as if to increase each count [math]\displaystyle{ \textstyle {x_i} }[/math] by [math]\displaystyle{ \textstyle {\alpha} }[/math] a priori.
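For example, if a three-category experiment yields the counts [math]\displaystyle{ \textstyle { \mathbf{x}\ =\ \left\langle 3,\, 3,\, 0 \right\rangle} }[/math] in [math]\displaystyle{ \textstyle {N = 6} }[/math] trials, then with [math]\displaystyle{ \textstyle {\alpha = 1} }[/math] the smoothed estimates are
- [math]\displaystyle{ \hat\theta_1 = \hat\theta_2 = \frac{3 + 1}{6 + 3} = \frac{4}{9}, \qquad \hat\theta_3 = \frac{0 + 1}{6 + 3} = \frac{1}{9}, }[/math]
so the unobserved third outcome receives a small but non-zero probability.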
Depending on the prior knowledge, which is sometimes a subjective value, a pseudocount may have any non-negative finite value. It may be zero (or the possibility ignored) only if the possibility is impossible by definition, such as a decimal digit of pi being a letter; or if the possibility would be rejected and so not counted, such as a computer printing a letter when a valid program for pi is run; or if the possibility is excluded because it is of no interest, such as when only the zeros and ones matter. Generally, there is also a possibility that no value may be computable or observable in a finite time (see the halting problem). At least one possibility must have a non-zero pseudocount, otherwise no prediction could be computed before the first observation. The relative values of pseudocounts represent the relative prior expected probabilities of their possibilities. The sum of the pseudocounts, which may be very large, represents the estimated weight of the prior knowledge relative to all the actual observations (each carrying a weight of one) when determining the expected probability.
In any observed data set or sample there is the possibility, especially with low-probability events and with small data sets, of a possible event not occurring. Its observed frequency is therefore zero, apparently implying a probability of zero. This oversimplification is inaccurate and often unhelpful, particularly in probability-based machine learning techniques such as artificial neural networks and hidden Markov models. By artificially adjusting the probability of rare (but not impossible) events so those probabilities are not exactly zero, zero-frequency problems are avoided. Also see Cromwell's rule.
The simplest approach is to add one to each observed number of events including the zero-count possibilities. This is sometimes called Laplace's Rule of Succession. This approach is equivalent to assuming a uniform prior distribution over the probabilities for each possible event (spanning the simplex where each probability is between 0 and 1, and they all sum to 1).
Using the Jeffreys prior approach, a pseudocount of one half should be added to each possible outcome.
Pseudocounts should be set to one only when there is no prior knowledge at all (see the principle of indifference). Given appropriate prior knowledge, however, the sum should be adjusted in proportion to the expectation that the prior probabilities are correct, despite evidence to the contrary. Higher values are appropriate inasmuch as there is prior knowledge of the true values (for a mint-condition coin, say); lower values are appropriate inasmuch as there is prior knowledge of probable bias, but of unknown degree (for a bent coin, say).
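The following sketch (illustrative code, not taken from the sources above) contrasts the add-one and Jeffreys-prior choices on the same data:

```python
def smooth(counts, alpha):
    """Additive smoothing: (x_i + alpha) / (N + alpha * d)."""
    N, d = sum(counts), len(counts)
    return [(x + alpha) / (N + alpha * d) for x in counts]

heads_tails = [7, 3]                   # 7 heads and 3 tails in 10 flips
print(smooth(heads_tails, alpha=1.0))  # add-one (uniform prior): [8/12, 4/12] = [0.666..., 0.333...]
print(smooth(heads_tails, alpha=0.5))  # Jeffreys prior:          [7.5/11, 3.5/11] ≈ [0.682, 0.318]
```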
A more complex approach is to estimate the probability of the events from other factors and adjust accordingly.
Examples
One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of an interval estimate, in particular a binomial proportion confidence interval. The best-known is due to Edwin Bidwell Wilson (Wilson 1927): the midpoint of the Wilson score interval corresponding to [math]\displaystyle{ z }[/math] standard deviations on either side is:
- [math]\displaystyle{ \frac{n_S + \frac{1}{2}z^2}{n + z^2}. }[/math]
Taking [math]\displaystyle{ \textstyle z = 2 }[/math] standard deviations to approximate a 95% confidence interval ([math]\displaystyle{ z \approx 1.96 }[/math]) yields a pseudocount of 2 for each outcome, so 4 in total, colloquially known as the "plus four rule":
- [math]\displaystyle{ \frac{n_S + 2}{n + 4}. }[/math]
This is also the midpoint of the Agresti–Coull interval (Agresti & Coull 1998).
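A short sketch of these point estimates (illustrative code; the names `wilson_midpoint` and `plus_four_estimate` are chosen here, not taken from the cited papers):

```python
def wilson_midpoint(successes, trials, z=1.96):
    """Midpoint of the Wilson score interval: (n_S + z^2/2) / (n + z^2)."""
    return (successes + z * z / 2) / (trials + z * z)

def plus_four_estimate(successes, trials):
    """The 'plus four' rule: add 2 pseudo-successes and 2 pseudo-failures."""
    return (successes + 2) / (trials + 4)

print(plus_four_estimate(0, 10))      # 2/14 ≈ 0.143 instead of the raw estimate 0.0
print(wilson_midpoint(0, 10, z=2.0))  # identical when z = 2
```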
Generalized to the case of known incidence rates
Often one is testing the bias of an unknown trial population against a control population with known parameters (incidence rates) [math]\displaystyle{ \textstyle { \mathbf{\mu}\ =\ \left\langle \mu_1,\, \mu_2,\, \ldots,\, \mu_d \right\rangle} }[/math]. In this case the uniform probability [math]\displaystyle{ \textstyle {\frac{1}{d}} }[/math] should be replaced by the known incidence rate of the control population [math]\displaystyle{ \textstyle {\mu_i} }[/math] to calculate the smoothed estimator:
- [math]\displaystyle{ \hat\theta_i= \frac{x_i + \mu_i \alpha d }{N + \alpha d } \qquad (i=1,\ldots,d). }[/math]
As a consistency check, if the empirical estimator happens to equal the incidence rate, i.e. [math]\displaystyle{ \textstyle {\mu_i} = \frac{x_i}{N} }[/math], the smoothed estimator is independent of [math]\displaystyle{ \textstyle {\alpha} }[/math] and also equals the incidence rate.
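A sketch of this generalised estimator (illustrative code; the counts and control rates below are invented):

```python
def smoothed_vs_control(counts, control_rates, alpha=1.0):
    """Shrink empirical rates towards known control incidence rates mu_i."""
    N, d = sum(counts), len(counts)
    return [(x + mu * alpha * d) / (N + alpha * d)
            for x, mu in zip(counts, control_rates)]

# Trial counts and control incidence rates for three outcomes:
print(smoothed_vs_control([1, 3, 6], [0.2, 0.3, 0.5], alpha=2.0))
# [0.1375, 0.3, 0.5625] -- the second outcome already matches its control rate (3/10 = 0.3),
# so its smoothed estimate stays at 0.3 regardless of alpha, as the consistency check predicts.
```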
Applications
Classification
Additive smoothing is commonly a component of naive Bayes classifiers.
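For instance, when estimating the per-class word likelihoods of a multinomial naive Bayes classifier, a pseudocount prevents words unseen in a class from forcing the whole likelihood product to zero. The sketch below is illustrative; the tiny corpus and all names are invented:

```python
from collections import Counter

def word_likelihoods(documents, vocabulary, alpha=1.0):
    """P(word | class) with additive smoothing over a fixed vocabulary."""
    counts = Counter(word for doc in documents for word in doc)
    total = sum(counts[w] for w in vocabulary)
    d = len(vocabulary)
    return {w: (counts[w] + alpha) / (total + alpha * d) for w in vocabulary}

spam_docs = [["win", "money", "now"], ["win", "prize"]]
vocab = ["win", "money", "now", "prize", "hello"]
probs = word_likelihoods(spam_docs, vocab, alpha=1.0)
print(probs["hello"])   # unseen in the spam class, yet (0 + 1) / (5 + 5) = 0.1 rather than 0
```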
Statistical language modelling
In a bag of words model of natural language processing and information retrieval, the data consists of the number of occurrences of each word in a document. Additive smoothing allows the assignment of non-zero probabilities to words which do not occur in the sample. Studies have shown that additive smoothing is more effective than other probability smoothing methods in several retrieval tasks such as language-model-based pseudo-relevance feedback and recommender systems.[5][6]
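A minimal sketch of such a smoothed unigram model (the document and vocabulary size below are invented for illustration):

```python
from collections import Counter

def unigram_probability(word, document_words, vocabulary_size, alpha=1.0):
    """Additively smoothed unigram probability of a word in a document."""
    counts = Counter(document_words)
    N = len(document_words)
    return (counts[word] + alpha) / (N + alpha * vocabulary_size)

doc = "the cat sat on the mat".split()
print(unigram_probability("the", doc, vocabulary_size=10000))  # (2 + 1) / (6 + 10000)
print(unigram_probability("dog", doc, vocabulary_size=10000))  # (0 + 1) / (6 + 10000), non-zero
```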
References
- ↑ C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press, p. 260.
- ↑ Jurafsky, Daniel; Martin, James H. (June 2008). Speech and Language Processing (2nd ed.). Prentice Hall. p. 132. ISBN 978-0-13-187321-6.
- ↑ Russell, Stuart; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (2nd ed.). Pearson Education, Inc. p. 863.
- ↑ Lecture 5 | Machine Learning (Stanford) at 1h10m into the lecture
- ↑ Hazimeh, Hussein; Zhai, ChengXiang. "Axiomatic Analysis of Smoothing Methods in Language Models for Pseudo-Relevance Feedback". ICTIR '15 Proceedings of the 2015 International Conference on the Theory of Information Retrieval. http://dl.acm.org/citation.cfm?id=2809471.
- ↑ Valcarce, Daniel; Parapar, Javier; Barreiro, Álvaro. "Additive Smoothing for Relevance-Based Language Modelling of Recommender Systems". CERI '16 Proceedings of the 4th Spanish Conference on Information Retrieval. http://dl.acm.org/citation.cfm?id=2934737.
Sources
- Wilson, E. B. (1927). "Probable inference, the law of succession, and statistical inference". Journal of the American Statistical Association 22 (158): 209–212. doi:10.1080/01621459.1927.10502953.
- Agresti, Alan; Coull, Brent A. (1998). "Approximate is better than 'exact' for interval estimation of binomial proportions". The American Statistician 52 (2): 119–126. doi:10.2307/2685469.
External links
- SF Chen, J Goodman (1996). "An empirical study of smoothing techniques for language modeling". Proceedings of the 34th annual meeting on Association for Computational Linguistics.