Independent and identically distributed random variables
In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent.[1] This property is usually abbreviated as i.i.d., iid, or IID. IID was first defined in statistics and finds application in different fields such as data mining and signal processing.
Introduction
Statistics commonly deals with random samples. A random sample can be thought of as a set of objects that are chosen randomly. More formally, it is "a sequence of independent, identically distributed (IID) random data points".
In other words, the terms random sample and IID are basically one and the same. In statistics, "random sample" is the typical terminology, but in probability it is more common to say "IID".
- Identically distributed means that there are no overall trends—the distribution does not fluctuate and all items in the sample are taken from the same probability distribution.
- Independent means that the sample items are all independent events. In other words, they are not connected to each other in any way;[2] knowledge of the value of one variable gives no information about the value of the other and vice versa.
Application
Independent and identically distributed random variables are often used as an assumption, which tends to simplify the underlying mathematics. In practical applications of statistical modeling, however, the assumption may or may not be realistic.[3]
The i.i.d. assumption is also used in the central limit theorem, which states that the probability distribution of the sum (or average) of i.i.d. variables with finite variance approaches a normal distribution.[4]
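The following minimal simulation sketch (using numpy; an assumed illustration, not part of the original article) shows this behaviour: standardized averages of i.i.d. exponential draws are close to a standard normal distribution.

```python
# Illustrative sketch (assumed setup): the CLT for i.i.d. draws with finite variance.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 10_000                      # sample size per average, number of averages

# Each row is an i.i.d. sample of size n from an exponential(1) distribution.
samples = rng.exponential(scale=1.0, size=(trials, n))
means = samples.mean(axis=1)                 # one sample mean per trial

# Exponential(1) has mean 1 and standard deviation 1, so standardize accordingly.
standardized = (means - 1.0) * np.sqrt(n)
print("mean close to 0:", standardized.mean())
print("std close to 1:", standardized.std())
```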
Often the i.i.d. assumption arises in the context of sequences of random variables. Then "independent and identically distributed" implies that each element in the sequence is independent of the random variables that came before it. In this way, an i.i.d. sequence differs from a Markov sequence, where the probability distribution of the nth random variable is a function of the previous random variable in the sequence (for a first-order Markov sequence). An i.i.d. sequence does not require that all outcomes in the sample space or event space be equally likely.[5] For example, repeated throws of a loaded die produce a sequence that is i.i.d., despite the outcomes being biased.
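As a small illustrative sketch (not from the article itself; the loading probabilities are assumed), one can simulate such a loaded die: every throw uses the same biased probabilities and is independent of the previous ones, so the sequence is i.i.d. even though the outcomes are not equally likely.

```python
# Sketch (assumed probabilities): i.i.d. throws of a die loaded toward 6.
import numpy as np

rng = np.random.default_rng(1)
faces = np.arange(1, 7)
biased_p = np.array([0.05, 0.05, 0.10, 0.10, 0.20, 0.50])   # hypothetical loading

throws = rng.choice(faces, size=10_000, p=biased_p)          # i.i.d. biased draws
print(np.bincount(throws, minlength=7)[1:] / len(throws))    # relative frequencies ~ biased_p
```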
In signal processing and image processing the notion of transformation to i.i.d. implies two specifications, the "i.d." part and the "i." part:
- i.d. – The signal level must be balanced on the time axis.
- i. – The signal spectrum must be flattened, i.e. transformed by filtering (such as deconvolution) to a white noise signal (i.e. a signal where all frequencies are equally present).
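As a rough sketch of the "i." step (an assumed illustration, not a prescribed method), spectral whitening can be approximated by normalizing the magnitude spectrum while keeping the phase:

```python
# Sketch: flatten a real signal's magnitude spectrum so it resembles white noise.
import numpy as np

def whiten(signal, eps=1e-12):
    """Return a version of `signal` with a (nearly) flat magnitude spectrum."""
    spectrum = np.fft.rfft(signal)
    flat = spectrum / (np.abs(spectrum) + eps)   # keep phase, discard magnitude
    return np.fft.irfft(flat, n=len(signal))

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024, endpoint=False)
colored = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)
white = whiten(colored)
print(np.abs(np.fft.rfft(white))[:5])            # magnitudes are all roughly equal
```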
Definition
Definition for two random variables
Suppose that the random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are defined to assume values in [math]\displaystyle{ I \subseteq \mathbb{R} }[/math]. Let [math]\displaystyle{ F_X(x) = \operatorname{P}(X\leq x) }[/math] and [math]\displaystyle{ F_Y(y) = \operatorname{P}(Y\leq y) }[/math] be the cumulative distribution functions of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], respectively, and denote their joint cumulative distribution function by [math]\displaystyle{ F_{X,Y}(x,y) = \operatorname{P}(X\leq x \land Y\leq y) }[/math].
Two random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are identically distributed if and only if[6] [math]\displaystyle{ F_X(x)=F_Y(x) \, \forall x \in I }[/math].
Two random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent if and only if [math]\displaystyle{ F_{X,Y}(x,y) = F_{X}(x) \cdot F_{Y}(y) \, \forall x,y \in I }[/math]. (See further Independence (probability theory) § Two random variables.)
Two random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are i.i.d. if they are independent and identically distributed, i.e. if and only if
[math]\displaystyle{ \begin{align} & F_X(x)=F_Y(x) \, & \forall x \in I \\ & F_{X,Y}(x,y) = F_{X}(x) \cdot F_{Y}(y) \, & \forall x,y \in I \end{align} }[/math]
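This definition can be checked empirically; the following sketch (an assumed illustration using numpy) draws two independent standard normal samples and compares empirical CDF values at arbitrary points.

```python
# Sketch: empirical check of identical distribution and the product rule for CDFs.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
X = rng.normal(size=n)
Y = rng.normal(size=n)                    # independent of X, same distribution

x0, y0 = 0.5, -0.3                        # arbitrary evaluation points
F_X = np.mean(X <= x0)
F_Y = np.mean(Y <= y0)
F_XY = np.mean((X <= x0) & (Y <= y0))
print(F_XY, F_X * F_Y)                    # nearly equal, as independence requires
```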
Definition for more than two random variables
The definition extends naturally to more than two random variables. We say that [math]\displaystyle{ n }[/math] random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math] are i.i.d. if they are independent (see further Independence (probability theory) § More than two random variables) and identically distributed, i.e. if and only if
[math]\displaystyle{ \begin{align} & F_{X_1}(x)=F_{X_k}(x) \, & \forall k \in \{1,\ldots,n \} \text{ and } \forall x \in I \\ & F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = F_{X_1}(x_1) \cdot \ldots \cdot F_{X_n}(x_n) \, & \forall x_1,\ldots,x_n \in I \end{align} }[/math]
where [math]\displaystyle{ F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \operatorname{P}(X_1\leq x_1 \land \ldots \land X_n\leq x_n) }[/math] denotes the joint cumulative distribution function of [math]\displaystyle{ X_1,\ldots,X_n }[/math].
Definition for independence
In probability theory, two events, [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math], are called independent if and only if [math]\displaystyle{ P(A \ \mathrm{and} \ B)=P(A)P(B) }[/math]. In the following, [math]\displaystyle{ P(AB) }[/math] is short for [math]\displaystyle{ P(A \ \mathrm{and} \ B) }[/math].
Suppose an experiment has two events, [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math]. If [math]\displaystyle{ P(A)\gt 0 }[/math], the conditional probability [math]\displaystyle{ P(B|A) }[/math] is defined. In general, the occurrence of [math]\displaystyle{ A }[/math] affects the probability of [math]\displaystyle{ B }[/math], which is what the conditional probability captures; only when the occurrence of [math]\displaystyle{ A }[/math] has no effect on the occurrence of [math]\displaystyle{ B }[/math] does [math]\displaystyle{ P(B|A)=P(B) }[/math] hold.
Note: If [math]\displaystyle{ P(A)\gt 0 }[/math] and [math]\displaystyle{ P(B)\gt 0 }[/math], then [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] cannot be both independent and mutually exclusive: independent events with positive probability can occur together, whereas mutually exclusive events with positive probability cannot be independent.
Suppose [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math], and [math]\displaystyle{ C }[/math] are three events. If [math]\displaystyle{ P(AB)=P(A)P(B) }[/math], [math]\displaystyle{ P(BC)=P(B)P(C) }[/math], [math]\displaystyle{ P(AC)=P(A)P(C) }[/math], and [math]\displaystyle{ P(ABC)=P(A)P(B)P(C) }[/math] are all satisfied, then the events [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math], and [math]\displaystyle{ C }[/math] are mutually independent.
More generally, [math]\displaystyle{ n }[/math] events [math]\displaystyle{ A_1, A_2, \ldots, A_n }[/math] are mutually independent if, for every subset of two or more of these events, the probability of their joint occurrence equals the product of their individual probabilities.
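The following sketch (an assumed example, not part of the original text) checks the four product conditions of mutual independence for three events defined on three fair, independent coin flips.

```python
# Sketch: A = "1st flip heads", B = "2nd flip heads", C = "3rd flip heads".
import numpy as np

rng = np.random.default_rng(4)
flips = rng.integers(0, 2, size=(1_000_000, 3))    # 0 = tails, 1 = heads
A, B, C = (flips[:, i] == 1 for i in range(3))

p = lambda event: event.mean()                     # empirical probability
print(p(A & B), p(A) * p(B))                       # pairwise conditions hold
print(p(B & C), p(B) * p(C))
print(p(A & C), p(A) * p(C))
print(p(A & B & C), p(A) * p(B) * p(C))            # triple condition holds as well
```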
Examples
Example 1
A sequence of outcomes of spins of a fair or unfair roulette wheel is i.i.d. One implication of this is that if the roulette ball lands on "red", for example, 20 times in a row, the next spin is no more or less likely to be "black" than on any other spin (see the gambler's fallacy).
Example 2
Toss a coin 10 times and record how many times the coin lands on heads.
- Independent – Each flip's outcome does not affect any of the other outcomes, which means the 10 results are independent of each other.
- Identically distributed – Regardless of whether the coin is fair (probability 1/2 of heads) or unfair, as long as the same coin is used for each flip, each flip will have the same probability as each other flip.
Such a sequence of two possible i.i.d. outcomes is also called a Bernoulli process.
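A minimal sketch of this example (with an assumed probability of heads) simulates ten i.i.d. flips of the same coin:

```python
# Sketch: ten i.i.d. Bernoulli(p_heads) coin flips, i.e. a short Bernoulli process.
import numpy as np

rng = np.random.default_rng(5)
p_heads = 0.5                       # any fixed value in [0, 1] keeps the flips i.i.d.
flips = rng.random(10) < p_heads    # ten independent flips of the same coin
print(flips.astype(int), "heads:", int(flips.sum()))
```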
Example 3
Roll a die 10 times and record how many times the result is 1.
- Independent – Each roll's outcome does not affect any of the others, which means the 10 results are independent of each other.
- Identically distributed – Regardless of whether the die is fair or weighted, each roll will have the same probability as each other roll. In contrast, rolling 10 different dice, some of which are weighted and some of which are not, would not produce i.i.d. variables.
Example 4
Choose a card from a standard deck of 52 cards, then place the card back in the deck. Repeat this 52 times. Record the number of kings that appear.
- Independent – Each outcome of the card will not affect the next one, which means the 52 results are independent from each other. In contrast, if each card that is drawn is kept out of the deck, subsequent draws would be affected by it (drawing one king would make drawing a second king less likely), and the result would not be independent.
- Identically distributed – Because each card is returned to the deck before the next draw, the probability of drawing a king is 4/52 on every draw, so the distribution is identical each time.
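A short sketch of this example (with an assumed card encoding): because each draw is made with replacement, the number of kings in 52 draws follows a Binomial(52, 4/52) distribution over i.i.d. draws.

```python
# Sketch: 52 draws with replacement from a 52-card deck; cards 0-3 stand for the kings.
import numpy as np

rng = np.random.default_rng(9)
draws = rng.integers(0, 52, size=52)    # each draw is uniform over the 52 cards
kings = int(np.sum(draws < 4))          # count the kings among the draws
print("kings drawn:", kings)            # expected number: 52 * 4/52 = 4
```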
Generalizations
Many results that were first proven under the assumption that the random variables are i.i.d. have been shown to be true even under a weaker distributional assumption.
Exchangeable random variables
The most general notion that shares the main properties of i.i.d. variables is that of exchangeable random variables, introduced by Bruno de Finetti.[citation needed] Exchangeability means that while variables may not be independent, future ones behave like past ones – formally, any ordering of a finite sequence of values is as likely as any other ordering – so the joint probability distribution is invariant under the symmetric group.
This provides a useful generalization – for example, sampling without replacement is not independent, but is exchangeable.
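The following sketch (an assumed illustration) draws two cards without replacement many times: the two positions have the same marginal probability of being a king (exchangeability), yet they are slightly negatively correlated (no independence).

```python
# Sketch: exchangeable but not independent draws without replacement.
import numpy as np

rng = np.random.default_rng(6)
deck = np.array([1] * 4 + [0] * 48)          # 1 = king, 0 = other card
trials = 100_000
first = np.empty(trials)
second = np.empty(trials)
for i in range(trials):
    draw = rng.permutation(deck)[:2]          # two cards without replacement
    first[i], second[i] = draw

print(first.mean(), second.mean())            # both close to 4/52 (same marginal)
print(np.corrcoef(first, second)[0, 1])       # slightly negative, so not independent
```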
Lévy process
In stochastic calculus, i.i.d. variables are thought of as a discrete-time Lévy process: each variable gives the change from one time step to the next. For example, a sequence of Bernoulli trials is interpreted as the Bernoulli process. One may generalize this to include continuous-time Lévy processes, and many Lévy processes can be seen as limits of i.i.d. variables; for instance, the Wiener process is the limit of a suitably rescaled Bernoulli process (a random walk).
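A small sketch of this discrete-time view (the rescaling shown is an assumed illustration): a random walk built from i.i.d. ±1 steps which, after rescaling, behaves approximately like a Wiener process.

```python
# Sketch: cumulative sums of i.i.d. steps as a discrete-time Levy-type process.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
steps = rng.choice([-1, 1], size=n)     # i.i.d. increments
walk = np.cumsum(steps)                 # the random walk (partial sums)
scaled = walk / np.sqrt(n)              # rescaling toward Brownian motion
print(scaled[-1])                       # across runs, roughly a N(0, 1) draw
```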
In machine learning
Machine learning relies on the massive quantities of data now being collected to deliver faster, more accurate results.[7] The historical data used for training must therefore be representative of the overall population; if it is not, the learned rules will generalize poorly or wrongly.
Under the i.i.d. hypothesis, the number of individual cases needed in the training sample can be greatly reduced.
This assumption also makes the mathematics of maximization much simpler: assuming independent and identically distributed observations simplifies the likelihood function in optimization problems. Because of the independence assumption, the likelihood function can be written as
- [math]\displaystyle{ l(\theta) = P(x_1, x_2, x_3, \ldots, x_n \mid \theta) = P(x_1 \mid \theta) P(x_2 \mid \theta) P(x_3 \mid \theta) \cdots P(x_n \mid \theta) }[/math]
To maximize the probability of the observed data, take the logarithm and maximize over the parameter θ; that is, compute
- [math]\displaystyle{ \mathop{\rm argmax}\limits_\theta \log(l(\theta)) }[/math]
where
- [math]\displaystyle{ \log(l(\theta)) = \log(P(x_1 \mid \theta)) + \log(P(x_2 \mid \theta)) + \log(P(x_3 \mid \theta)) + \cdots + \log(P(x_n \mid \theta)) }[/math]
Computers can evaluate a long sum much more efficiently and more stably than a long product of small probabilities; this simplification is the core reason for the gain in computational efficiency. The log transformation also turns the exponential factors that appear in many densities into terms that are linear in their exponents, which further simplifies the maximization.
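As a minimal sketch of this factorization (an assumed example, not the article's own), the log-likelihood of i.i.d. exponential data reduces to a sum that is easy to evaluate and maximize:

```python
# Sketch: maximum likelihood for the rate of an exponential distribution from i.i.d. data.
import numpy as np

rng = np.random.default_rng(8)
data = rng.exponential(scale=2.0, size=1_000)    # true rate is 1/2

# Independence turns the likelihood into a product, so the log-likelihood is a sum:
# log l(rate) = sum_i [log(rate) - rate * x_i] = n*log(rate) - rate*sum(x_i)
rates = np.linspace(0.01, 2.0, 2000)             # simple grid search over the rate
log_lik = len(data) * np.log(rates) - rates * data.sum()
print(rates[np.argmax(log_lik)], 1.0 / data.mean())   # both close to the true rate 0.5
```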
This hypothesis also makes the central limit theorem easy to apply in practical applications, for two reasons.
- Even if the sample comes from a complex non-Gaussian distribution, it can be approximated well, because the central limit theorem allows it to be simplified to a Gaussian distribution: for a large number of observed samples, "the sum of many random variables will have an approximately normal distribution".
- The accuracy of a model depends on the simplicity and representational power of its basic unit, as well as on the data quality: simple units are easy to interpret and to scale, and representational power combined with scale improves model accuracy. In a deep neural network, for instance, each neuron is very simple yet has strong representational power; layer by layer, the network represents increasingly complex features and so improves model accuracy.
References
- ↑ "A brief primer on probability distributions". Santa Fe Institute. 2011. http://tuvalu.santafe.edu/~aaronc/courses/7000/csci7000-001_2011_L0.pdf.
- ↑ Stephanie (2016-05-11). "IID Statistics: Independent and Identically Distributed Definition and Examples". https://www.statisticshowto.com/iid-statistics/.
- ↑ Hampel, Frank (1998), "Is statistics too difficult?", Canadian Journal of Statistics 26 (3): 497–513, doi:10.2307/3315772 (§8).
- ↑ Blum, J. R.; Chernoff, H.; Rosenblatt, M.; Teicher, H. (1958). "Central Limit Theorems for Interchangeable Processes". Canadian Journal of Mathematics 10: 222–229. doi:10.4153/CJM-1958-026-0.
- ↑ Cover, T. M.; Thomas, J. A. (2006). Elements Of Information Theory. Wiley-Interscience. pp. 57–58. ISBN 978-0-471-24195-9.
- ↑ Casella & Berger 2002, Theorem 1.5.10
- ↑ "What is Machine Learning? A Definition." (in en-US). 2020-05-05. https://www.expert.ai/blog/machine-learning-definition/.
Further reading
- Casella, George; Berger, Roger L. (2002), Statistical Inference, Duxbury Advanced Series
Original source: https://en.wikipedia.org/wiki/Independent and identically distributed random variables.