Multinomial distribution
| Parameters | [math]\displaystyle{ n \gt 0 }[/math] number of trials (integer); [math]\displaystyle{ k \gt 0 }[/math] number of mutually exclusive events (integer); [math]\displaystyle{ p_1, \ldots, p_k }[/math] event probabilities, with [math]\displaystyle{ \sum_{i=1}^k p_i = 1 }[/math] |
|---|---|
| Support | [math]\displaystyle{ \left\lbrace (x_1, \dots, x_k) \, \Big\vert\, \sum_{i=1}^k x_i = n, x_i \ge 0\ (i=1,\dots,k) \right\rbrace }[/math] |
| pmf | [math]\displaystyle{ \frac{n!}{x_1!\cdots x_k!} p_1^{x_1} \cdots p_k^{x_k} }[/math] |
| Mean | [math]\displaystyle{ \operatorname E(X_i) = np_i }[/math] |
| Variance | [math]\displaystyle{ \operatorname{Var}(X_i) = n p_i (1-p_i) }[/math], [math]\displaystyle{ \operatorname{Cov}(X_i,X_j) = - n p_i p_j~~(i\neq j) }[/math] |
| Entropy | [math]\displaystyle{ -\log(n!) - n\sum_{i=1}^kp_i\log(p_i)+\sum_{i=1}^k\sum_{x_i=0}^n\binom{n}{x_i}p_i^{x_i}(1-p_i)^{n-x_i}\log(x_i!) }[/math] |
| MGF | [math]\displaystyle{ \biggl( \sum_{i=1}^k p_i e^{t_i} \biggr)^n }[/math] |
| CF | [math]\displaystyle{ \left(\sum_{j=1}^k p_je^{it_j}\right)^n }[/math] where [math]\displaystyle{ i^2= -1 }[/math] |
| PGF | [math]\displaystyle{ \biggl( \sum_{i=1}^k p_i z_i \biggr)^n\text{ for }(z_1,\ldots,z_k)\in\mathbb{C}^k }[/math] |
In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.
When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is greater than 1, it is the binomial distribution. When k is greater than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so n determines the suffix, and k the prefix).
The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a k-sided die n times.
Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur, we have pi ≥ 0 for i = 1, ..., k and [math]\displaystyle{ \sum_{i=1}^k p_i = 1 }[/math]. Then if the random variables Xi indicate the number of times outcome number i is observed over the n trials, the vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk). While the trials are independent, their outcomes Xi are dependent because they must sum to n.
Definitions
Probability mass function
Suppose one does an experiment of extracting n balls of k different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Let Xi denote the number of extracted balls of color i (i = 1, ..., k), and let pi denote the probability that a given extraction is of color i. The probability mass function of this multinomial distribution is:
- [math]\displaystyle{ \begin{align} f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) & {} = \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k) \\ & {} = \begin{cases} { \displaystyle {n! \over x_1!\cdots x_k!}p_1^{x_1}\times\cdots\times p_k^{x_k}}, \quad & \text{when } \sum_{i=1}^k x_i=n \\ \\ 0 & \text{otherwise,} \end{cases} \end{align} }[/math]
for non-negative integers x1, ..., xk.
The probability mass function can be expressed using the gamma function as:
- [math]\displaystyle{ f(x_1,\dots, x_{k}; p_1,\ldots, p_k) = \frac{\Gamma(\sum_i x_i + 1)}{\prod_i \Gamma(x_i+1)} \prod_{i=1}^k p_i^{x_i}. }[/math]
This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.
Example
Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?
Note: Since we’re assuming that the voting population is large, it is reasonable to treat the probabilities as unchanging as voters are selected for the sample. Technically speaking, this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size[1].
- [math]\displaystyle{ \Pr(A=1,B=2,C=3) = \frac{6!}{1! 2! 3!}(0.2^1) (0.3^2) (0.5^3) = 0.135 }[/math]
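This calculation can be checked directly; the following is a minimal Python sketch (the helper `multinomial_pmf` is illustrative and not from any particular library):

```python
from math import factorial

def multinomial_pmf(x, p):
    """Probability of the count vector x = (x_1, ..., x_k) under probabilities p."""
    n = sum(x)
    coeff = factorial(n)
    for xi in x:
        coeff //= factorial(xi)           # multinomial coefficient n! / (x_1! ... x_k!)
    prob = float(coeff)
    for xi, pi in zip(x, p):
        prob *= pi ** xi                  # multiply by p_i^{x_i}
    return prob

# Election example: 1 supporter of A, 2 of B, 3 of C among 6 sampled voters.
print(multinomial_pmf((1, 2, 3), (0.2, 0.3, 0.5)))  # ≈ 0.135
```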
Properties
Expected value and variance
The expected number of times outcome i is observed over n trials is
- [math]\displaystyle{ \operatorname{E}(X_i) = n p_i.\, }[/math]
The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore
- [math]\displaystyle{ \operatorname{Var}(X_i)=np_i(1-p_i).\, }[/math]
The off-diagonal entries are the covariances:
- [math]\displaystyle{ \operatorname{Cov}(X_i,X_j)=-np_i p_j\, }[/math]
for i, j distinct.
All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.
When these expressions are combined into a matrix with i, j element [math]\displaystyle{ \operatorname{cov} (X_i,X_j), }[/math] the result is a k × k positive-semidefinite covariance matrix of rank k − 1. In the special case where k = n and where the pi are all equal, the covariance matrix is the centering matrix.
The entries of the corresponding correlation matrix are
- [math]\displaystyle{ \rho(X_i,X_i) = 1. }[/math]
- [math]\displaystyle{ \rho(X_i,X_j) = \frac{\operatorname{Cov}(X_i,X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} = \frac{-p_i p_j}{\sqrt{p_i(1-p_i) p_j(1-p_j)}} = -\sqrt{\frac{p_i p_j}{(1-p_i)(1-p_j)}}. }[/math]
Note that the number of trials n drops out of this expression.
Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.
The support of the multinomial distribution is the set
- [math]\displaystyle{ \{(n_1,\dots,n_k)\in \mathbb{N}^k \mid n_1+\cdots+n_k=n\}.\, }[/math]
Its number of elements is
- [math]\displaystyle{ {n+k-1 \choose k-1}. }[/math]
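For example, with the n = 6, k = 3 setting of the election example above, the support contains [math]\displaystyle{ \binom{8}{2} = 28 }[/math] count vectors; a one-line check in Python (3.8+):

```python
from math import comb

print(comb(6 + 3 - 1, 3 - 1))  # 28 possible count vectors for n = 6, k = 3
```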
Matrix notation
In matrix notation,
- [math]\displaystyle{ \operatorname{E}(\mathbf{X}) = n \mathbf{p},\, }[/math]
and
- [math]\displaystyle{ \operatorname{Var}(\mathbf{X}) = n \lbrace \operatorname{diag}(\mathbf{p}) - \mathbf{p} \mathbf{p}^{\rm T} \rbrace ,\, }[/math]
where pT is the row-vector transpose of the column vector p.
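A small numpy sketch of these matrix formulas, cross-checked against simulated draws (the numbers reuse the election example and are otherwise arbitrary):

```python
import numpy as np

n, p = 6, np.array([0.2, 0.3, 0.5])
mean = n * p                               # E(X) = n p
cov = n * (np.diag(p) - np.outer(p, p))    # Var(X) = n {diag(p) - p p^T}

# Empirical check: each row of `samples` is one multinomial count vector.
rng = np.random.default_rng(0)
samples = rng.multinomial(n, p, size=200_000)
print(np.allclose(samples.mean(axis=0), mean, atol=0.05))           # True
print(np.allclose(np.cov(samples, rowvar=False), cov, atol=0.05))   # True
```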
Visualization
As slices of generalized Pascal's triangle
Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension—i.e. a simplex with a grid.[citation needed]
As polynomial coefficients
Similarly, just as one can interpret the binomial distribution as the (normalized) polynomial coefficients of [math]\displaystyle{ (p + q)^n }[/math] when expanded, one can interpret the multinomial distribution as the coefficients of [math]\displaystyle{ (p_1 + p_2 + p_3 + \cdots + p_k)^n }[/math] when expanded, noting that the coefficients must sum to 1, since [math]\displaystyle{ p_1 + \cdots + p_k = 1 }[/math].
Large deviation theory
Asymptotics
By Stirling's formula, in the limit of [math]\displaystyle{ N, x_1, ..., x_n \to \infty }[/math] (in this section, [math]\displaystyle{ N }[/math] denotes the number of trials and [math]\displaystyle{ n }[/math] the number of categories), we have[math]\displaystyle{ \ln \binom{N}{x_1, \cdots x_n} p_1^{x_1} \cdots p_n^{x_n} = -N D_{KL}(\hat p \| p) - \frac 12 \sum_i \ln(\hat p_i) - \frac{n-1}{2} \ln(2\pi N) + o(1) }[/math]where [math]\displaystyle{ \hat p_i = x_i/N }[/math] can be interpreted as the empirical distribution of the data, and [math]\displaystyle{ D_{KL} }[/math] is the Kullback–Leibler divergence.
This formula can be interpreted as follows.
Consider [math]\displaystyle{ \Delta_n }[/math], the space of all possible distributions over the categories [math]\displaystyle{ \{1, 2, ..., n\} }[/math]. It is a simplex. After [math]\displaystyle{ N }[/math] independent samples from the categorical distribution [math]\displaystyle{ p }[/math] (which is how we construct the multinomial distribution), we obtain an empirical distribution [math]\displaystyle{ \hat p }[/math].
By the asymptotic formula, the probability that the empirical distribution [math]\displaystyle{ \hat p }[/math] deviates from the actual distribution [math]\displaystyle{ p }[/math] decays exponentially, at rate [math]\displaystyle{ ND_{KL}(\hat p \| p) }[/math]. The more experiments there are and the more [math]\displaystyle{ \hat p }[/math] differs from [math]\displaystyle{ p }[/math], the less likely it is to see such an empirical distribution.
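A quick numerical check of the asymptotic formula above (a minimal Python sketch; the counts and probabilities are arbitrary illustrative numbers):

```python
import math

def log_pmf(x, p):
    """Exact log of the multinomial pmf, computed with log-gamma."""
    N = sum(x)
    return (math.lgamma(N + 1)
            - sum(math.lgamma(xi + 1) for xi in x)
            + sum(xi * math.log(pi) for xi, pi in zip(x, p)))

def log_pmf_stirling(x, p):
    """Asymptotic formula: -N*D_KL(phat||p) - 1/2*sum_i log(phat_i) - (n-1)/2*log(2*pi*N)."""
    N, n = sum(x), len(x)
    phat = [xi / N for xi in x]
    dkl = sum(q * math.log(q / pi) for q, pi in zip(phat, p))
    return (-N * dkl
            - 0.5 * sum(math.log(q) for q in phat)
            - (n - 1) / 2 * math.log(2 * math.pi * N))

x, p = (180, 320, 500), (0.2, 0.3, 0.5)
print(log_pmf(x, p), log_pmf_stirling(x, p))  # the two values should agree closely
```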
If [math]\displaystyle{ A }[/math] is a closed subset of [math]\displaystyle{ \Delta_n }[/math], then by dividing [math]\displaystyle{ A }[/math] into pieces and reasoning about the exponential decay rate of [math]\displaystyle{ \Pr(\hat p \in A_\epsilon) }[/math] on each piece [math]\displaystyle{ A_\epsilon }[/math], we obtain Sanov's theorem, which states that[math]\displaystyle{ \lim_{N \to \infty} \frac 1N \ln \Pr(\hat p \in A) = - \inf_{\hat p \in A} D_{KL}(\hat p \| p). }[/math]
Concentration at large N
Due to the exponential decay, at large [math]\displaystyle{ N }[/math], almost all the probability mass is concentrated in a small neighborhood of [math]\displaystyle{ p }[/math]. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of [math]\displaystyle{ D_{KL} }[/math] to obtain[math]\displaystyle{ \ln \binom{N}{x_1, \cdots x_n} p_1^{x_1} \cdots p_n^{x_n} \approx -\frac N2 \sum_i \frac{(\hat p_i - p_i)^2}{p_i} = -\frac 12 \sum_i \frac{(x_i - Np_i)^2}{Np_i} }[/math]This resembles the log-density of a Gaussian distribution, which suggests the following theorem:
Theorem. At the [math]\displaystyle{ N\to \infty }[/math] limit, [math]\displaystyle{ N \sum_i \frac{(\hat p_i - p_i)^2}{p_i} = \sum_i \frac{(x_i - Np_i)^2}{Np_i} }[/math] converges in distribution to the chi-squared distribution [math]\displaystyle{ \chi^2(n-1) }[/math].

Proof. The space of all distributions over categories [math]\displaystyle{ \{1, 2, \ldots, n\} }[/math] is a simplex: [math]\displaystyle{ \Delta_{n} = \left\{(y_1, \ldots, y_n)\colon y_1, \ldots, y_n \geq 0, \sum_i y_i = 1\right\} }[/math], and the set of all possible empirical distributions after [math]\displaystyle{ N }[/math] experiments is a subset of the simplex: [math]\displaystyle{ \Delta_{n, N} = \left\{(x_1/N, \ldots, x_n/N)\colon x_1, \ldots, x_n \in \N, \sum_i x_i = N\right\} }[/math]. That is, it is the intersection between [math]\displaystyle{ \Delta_n }[/math] and the lattice [math]\displaystyle{ (\Z^n)/N }[/math].
As [math]\displaystyle{ N }[/math] increases, most of the probability mass is concentrated in a subset of [math]\displaystyle{ \Delta_{n, N} }[/math] near [math]\displaystyle{ p }[/math], and the probability distribution near [math]\displaystyle{ p }[/math] becomes well-approximated by [math]\displaystyle{ \binom{N}{x_1, \cdots x_n} p_1^{x_1} \cdots p_n^{x_n} \approx e^{-\frac N2 \sum_i \frac{(\hat p_i - p_i)^2}{p_i}} }[/math]From this, we see that the subset upon which the mass is concentrated has radius on the order of [math]\displaystyle{ 1/\sqrt N }[/math], but the points in the subset are separated by distance on the order of [math]\displaystyle{ 1/N }[/math], so at large [math]\displaystyle{ N }[/math], the points merge into a continuum. To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of [math]\displaystyle{ \Delta_{n, N} }[/math] in [math]\displaystyle{ \Delta_n }[/math]. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density [math]\displaystyle{ \rho(\hat p) = C e^{-\frac N2 \sum_i \frac{(\hat p_i - p_i)^2}{p_i}} }[/math], where [math]\displaystyle{ C }[/math] is a constant.
Finally, since the simplex [math]\displaystyle{ \Delta_n }[/math] is not all of [math]\displaystyle{ \R^n }[/math], but lies within an [math]\displaystyle{ (n-1) }[/math]-dimensional plane, we obtain the desired result.
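A Monte Carlo illustration of this convergence (a sketch using numpy and scipy; the parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])
N, reps = 10_000, 50_000

counts = rng.multinomial(N, p, size=reps)              # each row is one experiment
stat = ((counts - N * p) ** 2 / (N * p)).sum(axis=1)   # sum_i (x_i - N p_i)^2 / (N p_i)

# Compare empirical quantiles of the statistic with those of chi2(n - 1) = chi2(2).
qs = [0.5, 0.9, 0.99]
print(np.quantile(stat, qs))              # empirical quantiles
print(stats.chi2(df=len(p) - 1).ppf(qs))  # approx. [1.386, 4.605, 9.210]
```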
Conditional concentration at large N
The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test.
Theorem. If we impose [math]\displaystyle{ k + 1 }[/math] independent linear constraints [math]\displaystyle{ \begin{cases} \sum_i \hat p_i = 1, \\ \sum_i a_{1i} \hat p_i = b_1, \\ \sum_i a_{2i} \hat p_i = b_2, \\ \cdots, \\ \sum_i a_{ki} \hat p_i = b_k \end{cases} }[/math](notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that [math]\displaystyle{ p }[/math] satisfies all these constraints simultaneously, then, for samples from the multinomial distribution conditional on the linear constraints, at the [math]\displaystyle{ N\to \infty }[/math] limit, [math]\displaystyle{ N \sum_i \frac{(\hat p_i - p_i)^2}{p_i} = \sum_i \frac{(x_i - Np_i)^2}{Np_i} }[/math] converges in distribution to the chi-squared distribution [math]\displaystyle{ \chi^2(n-1-k) }[/math].
Proof. The same proof applies, but this time [math]\displaystyle{ \Delta_{n, N} }[/math] is the intersection of [math]\displaystyle{ (\Z^n)/N }[/math] with [math]\displaystyle{ \Delta_n }[/math] and [math]\displaystyle{ k }[/math] hyperplanes, all linearly independent, so the probability density [math]\displaystyle{ \rho(\hat p) }[/math] is restricted to an [math]\displaystyle{ (n-k-1) }[/math]-dimensional plane.
The above theorem is not entirely satisfactory, because by definition, every one of [math]\displaystyle{ \hat p_1, \hat p_2, ..., \hat p_n }[/math] must be a rational number, whereas [math]\displaystyle{ p_1, p_2, ..., p_n }[/math] may be chosen from any number in [math]\displaystyle{ [0, 1] }[/math]. In particular, [math]\displaystyle{ p }[/math] may satisfy linear constraints that no [math]\displaystyle{ \hat p }[/math] can possibly satisfy, such as [math]\displaystyle{ \sqrt{2} \; p_1 = 1 }[/math]. The next theorem fixes this issue:
Theorem.
- Given functions [math]\displaystyle{ f_1, ..., f_k }[/math], such that they are continuously differentiable in a neighborhood of [math]\displaystyle{ p }[/math], and the vectors [math]\displaystyle{ (1, 1, ..., 1), \nabla f_1(p), ..., \nabla f_k(p) }[/math] are linearly independent;
- given sequences [math]\displaystyle{ \epsilon_1(N), ..., \epsilon_k(N) }[/math], such that asymptotically [math]\displaystyle{ \frac 1N \ll \epsilon_j(N) \ll \frac{1}{\sqrt N} }[/math] for each [math]\displaystyle{ j \in \{1, ..., k\} }[/math];
- then for the multinomial distribution conditional on the constraints [math]\displaystyle{ f_1(\hat p) \in [f_1(p)- \epsilon_1(N), f_1(p) + \epsilon_1(N)], ..., f_k(\hat p) \in [f_k(p)- \epsilon_k(N), f_k(p) + \epsilon_k(N)] }[/math], we have the quantity [math]\displaystyle{ N \sum_i \frac{(\hat p_i - p_i)^2}{p_i} = \sum_i \frac{(x_i - Np_i)^2}{Np_i} }[/math] converging in distribution to [math]\displaystyle{ \chi^2(n-1-k) }[/math] at the [math]\displaystyle{ N\to \infty }[/math] limit.
In the case that all [math]\displaystyle{ p_i }[/math] are equal, the Theorem reduces to the concentration of entropies around the Maximum Entropy.[2][3]
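In practice this is the statistic used in Pearson's chi-squared test; the following is a hedged scipy sketch (the observed and expected counts are made-up illustrative numbers, and `ddof` plays the role of the number k of fitted constraints):

```python
import numpy as np
from scipy import stats

# Observed counts over n = 4 categories, and expected counts N*p_i from a model
# in which k = 1 parameter was estimated from the same data (both arrays invented).
observed = np.array([48, 161, 105, 86])
expected = np.array([52.3, 155.1, 110.2, 82.4])   # sums must match the observed total

# Pearson statistic sum_i (x_i - N p_i)^2 / (N p_i), with p-value from chi2(n - 1 - k).
stat, pvalue = stats.chisquare(observed, f_exp=expected, ddof=1)
print(stat, pvalue)
```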
Related distributions
In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range [math]\displaystyle{ 1 \dots K }[/math]; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.
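A small numpy illustration of the "1-of-K" view described above (a single multinomial trial is just a one-hot encoded categorical draw):

```python
import numpy as np

rng = np.random.default_rng(0)
p = [0.2, 0.3, 0.5]

one_hot = rng.multinomial(1, p)      # e.g. array([0, 1, 0]): one trial, 1-of-K encoded
category = int(np.argmax(one_hot))   # the same outcome as an integer label in 0..k-1
print(one_hot, category)
```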
- When k = 2, the multinomial distribution is the binomial distribution.
- Categorical distribution, the distribution of each trial; for k = 2, this is the Bernoulli distribution.
- The Dirichlet distribution is the conjugate prior of the multinomial in Bayesian statistics.
- Dirichlet-multinomial distribution.
- Beta-binomial distribution.
- Negative multinomial distribution
- Hardy–Weinberg principle (it is a trinomial distribution with probabilities [math]\displaystyle{ (\theta^2, 2 \theta (1-\theta), (1-\theta)^2) }[/math])
Statistical inference
Equivalence tests for multinomial distributions
The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions.
Let [math]\displaystyle{ q }[/math] denote a theoretical multinomial distribution and let [math]\displaystyle{ p }[/math] be a true underlying distribution. The distributions [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] are considered equivalent if [math]\displaystyle{ d(p,q)\lt \varepsilon }[/math] for a distance [math]\displaystyle{ d }[/math] and a tolerance parameter [math]\displaystyle{ \varepsilon\gt 0 }[/math]. The equivalence test problem is [math]\displaystyle{ H_0=\{d(p,q)\geq\varepsilon\} }[/math] versus [math]\displaystyle{ H_1=\{d(p,q)\lt \varepsilon\} }[/math]. The true underlying distribution [math]\displaystyle{ p }[/math] is unknown. Instead, the counting frequencies [math]\displaystyle{ p_n }[/math] are observed, where [math]\displaystyle{ n }[/math] is a sample size. An equivalence test uses [math]\displaystyle{ p_n }[/math] to reject [math]\displaystyle{ H_0 }[/math]. If [math]\displaystyle{ H_0 }[/math] can be rejected, then the equivalence between [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] is shown at a given significance level. The equivalence test for the Euclidean distance can be found in the textbook by Wellek (2010).[4] The equivalence test for the total variation distance is developed in Ostrovski (2017).[5] The exact equivalence test for a specific cumulative distance is proposed in Frey (2009).[6]
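As a rough sketch of the ingredients only (the construction of valid critical values is the subject of the cited papers and is not reproduced here), the observed frequencies and the total variation distance to a theoretical [math]\displaystyle{ q }[/math] can be computed as follows; all numbers are made up:

```python
import numpy as np

counts = np.array([21, 28, 51])              # observed counts, sample size n = 100
p_n = counts / counts.sum()                  # observed counting frequencies
q = np.array([0.2, 0.3, 0.5])                # fully specified theoretical distribution

tv_distance = 0.5 * np.abs(p_n - q).sum()    # total variation distance d(p_n, q)
epsilon = 0.05                               # tolerance parameter
print(tv_distance, tv_distance < epsilon)    # naive plug-in comparison, not a test
```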
The distance between the true underlying distribution [math]\displaystyle{ p }[/math] and a family of multinomial distributions [math]\displaystyle{ \mathcal{M} }[/math] is defined by [math]\displaystyle{ d(p, \mathcal{M})=\min_{h\in\mathcal{M}}d(p,h) }[/math]. Then the equivalence test problem is given by [math]\displaystyle{ H_0=\{d(p,\mathcal{M})\geq \varepsilon\} }[/math] and [math]\displaystyle{ H_1=\{d(p,\mathcal{M})\lt \varepsilon\} }[/math]. The distance [math]\displaystyle{ d(p,\mathcal{M}) }[/math] is usually computed using numerical optimization. Tests for this case were developed recently in Ostrovski (2018).[7]
Random variate generation
First, reorder the parameters [math]\displaystyle{ p_1, \ldots, p_k }[/math] such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable X from a uniform (0, 1) distribution. The resulting outcome is the component
- [math]\displaystyle{ j = \min \left\{ j' \in \{1,\dots,k\}\colon \left(\sum_{i=1}^{j'} p_i\right) - X \geq 0 \right\}. }[/math]
{Xj = 1, Xi = 0 for i ≠ j} is one observation from the multinomial distribution with [math]\displaystyle{ p_1, \ldots, p_k }[/math] and n = 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with n equal to the number of such repetitions.
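A direct implementation of this procedure (a minimal Python sketch; sorting the probabilities in descending order, as suggested above, is an optional speed-up and is omitted here):

```python
import random

def sample_multinomial(n, p, rng=random):
    """Draw one multinomial count vector by n repeated inverse-CDF categorical draws."""
    counts = [0] * len(p)
    for _ in range(n):
        x = rng.random()              # auxiliary uniform(0, 1) variable
        acc = 0.0
        for j, pj in enumerate(p):
            acc += pj                 # cumulative probability up to component j
            if x <= acc:              # smallest j whose cumulative probability covers x
                counts[j] += 1
                break
        else:
            counts[-1] += 1           # guard against floating-point round-off
    return counts

print(sample_multinomial(6, [0.5, 0.3, 0.2]))  # e.g. [3, 2, 1]
```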
Sampling using repeated conditional binomial samples
Given the parameters [math]\displaystyle{ p_1, p_2, \ldots, p_k }[/math] and a total for the sample [math]\displaystyle{ N }[/math] such that [math]\displaystyle{ \sum_{i=1}^k X_i = N }[/math], it is possible to sample sequentially for the number in an arbitrary state [math]\displaystyle{ X_i }[/math], by partitioning the state space into [math]\displaystyle{ i }[/math] and not-[math]\displaystyle{ i }[/math], conditioned on any prior samples already taken, repeatedly.
Algorithm: Sequential conditional binomial sampling
```python
import numpy as np

def multinomial_via_binomials(N, p, rng=None):
    """Sample (X_1, ..., X_k) with sum N by repeated conditional binomial draws."""
    rng = rng or np.random.default_rng()
    k = len(p)
    X = np.zeros(k, dtype=int)
    S, rho = N, 1.0                          # remaining trials, remaining probability mass
    for i in range(k - 1):
        if rho > 0:
            X[i] = rng.binomial(S, min(p[i] / rho, 1.0))  # clamp guards round-off
        else:
            X[i] = 0
        S -= X[i]
        rho -= p[i]
    X[k - 1] = S                             # the last component takes whatever remains
    return X
```
Heuristically, each binomial draw reduces the number remaining to be sampled, and the conditional probabilities are updated accordingly to ensure logical consistency.[8]
References
Citations
- ↑ "probability - multinomial distribution sampling" (in en). https://stats.stackexchange.com/a/335239/307588.
- ↑ Template:Cite preprint
- ↑ Template:Cite preprint
- ↑ Wellek, Stefan (2010). Testing statistical hypotheses of equivalence and noninferiority. Chapman and Hall/CRC. ISBN 978-1439808184.
- ↑ Ostrovski, Vladimir (May 2017). "Testing equivalence of multinomial distributions". Statistics & Probability Letters 124: 77–82. doi:10.1016/j.spl.2017.01.004.
- ↑ Frey, Jesse (March 2009). "An exact multinomial test for equivalence". The Canadian Journal of Statistics 37: 47–59. doi:10.1002/cjs.10000.
- ↑ Ostrovski, Vladimir (March 2018). "Testing equivalence to families of multinomial distributions with application to the independence model". Statistics & Probability Letters 139: 61–66. doi:10.1016/j.spl.2018.03.014.
- ↑ "11.5: The Multinomial Distribution" (in en). 2020-05-05. https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.05%3A_The_Multinomial_Distribution.
Sources
- Evans, Morton; Hastings, Nicholas; Peacock, Brian (2000). Statistical Distributions (3rd ed.). New York: Wiley. pp. 134–136. ISBN 0-471-37124-6. https://archive.org/details/statisticaldistr00evan_132.
- Weisstein, Eric W. "Multinomial Distribution". Wolfram Research. http://mathworld.wolfram.com/MultinomialDistribution.html.
Original source: https://en.wikipedia.org/wiki/Multinomial_distribution.