Prediction interval
In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
A simple example is given by a fair six-sided die with face values from 1 to 6. The confidence interval for the estimated expected face value will be centered around 3.5 and will become narrower as the sample size increases. However, the prediction interval for the next roll will span approximately the full range 1 to 6, no matter how many rolls have been observed so far; the simulation sketched below illustrates the contrast.
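A minimal numerical sketch of this contrast using NumPy (the sample sizes, seed, and normal approximation to the confidence interval are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (10, 1_000, 100_000):
    rolls = rng.integers(1, 7, size=n)        # n fair-die rolls (values 1..6)
    mean = rolls.mean()
    sem = rolls.std(ddof=1) / np.sqrt(n)      # standard error of the mean
    # ~95% confidence interval for the expected face value (normal approximation)
    print(f"n={n:>6}: CI for mean = ({mean - 1.96*sem:.3f}, {mean + 1.96*sem:.3f}); "
          f"next roll still falls anywhere in [1, 6]")
```

The confidence interval collapses toward 3.5 as n grows, while a prediction interval for the next roll can never shrink below the die's full range.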
Prediction intervals are used in both frequentist statistics and Bayesian statistics. A prediction interval bears the same relationship to a future observation that a frequentist confidence interval or a Bayesian credible interval bears to an unobservable population parameter: prediction intervals describe where individual future points will fall, whereas confidence intervals and credible intervals describe uncertainty about the true population mean or another unobservable quantity of interest.
Introduction
If one makes the parametric assumption that the underlying distribution is a normal distribution, and has a sample set {X1, ..., Xn}, then confidence intervals and credible intervals may be used to estimate the population mean μ and population standard deviation σ of the underlying population, while prediction intervals may be used to estimate the value of the next sample variable, Xn+1.
Alternatively, in Bayesian terms, a prediction interval can be described as a credible interval for the variable itself, rather than for a parameter of the distribution thereof.
The concept of prediction intervals need not be restricted to inference about a single future sample value but can be extended to more complicated cases. For example, in the context of river flooding where analyses are often based on annual values of the largest flow within the year, there may be interest in making inferences about the largest flood likely to be experienced within the next 50 years.
Since prediction intervals are only concerned with past and future observations, rather than unobservable population parameters, they are advocated as a better method than confidence intervals by some statisticians, such as Seymour Geisser,[citation needed] following the focus on observables by Bruno de Finetti.[citation needed]
Normal distribution
Given a sample from a normal distribution, whose parameters are unknown, it is possible to give prediction intervals in the frequentist sense, i.e., an interval [a, b] based on statistics of the sample such that on repeated experiments, Xn+1 falls in the interval the desired percentage of the time; one may call these "predictive confidence intervals".[1]
A general technique of frequentist prediction intervals is to find and compute a pivotal quantity of the observables X1, ..., Xn, Xn+1 – meaning a function of observables and parameters whose probability distribution does not depend on the parameters – that can be inverted to give a probability of the future observation Xn+1 falling in some interval computed in terms of the observed values so far, [math]\displaystyle{ X_1,\dots,X_n. }[/math] Such a pivotal quantity, depending only on observables, is called an ancillary statistic.[2] The usual method of constructing pivotal quantities is to take the difference of two variables that depend on location, so that location cancels out, and then take the ratio of two variables that depend on scale, so that scale cancels out. The most familiar pivotal quantity is the Student's t-statistic, which can be derived by this method and is used in the sequel.
Known mean, known variance
A prediction interval [ℓ, u] for a future observation X in a normal distribution N(μ, σ²) with known mean and variance may be calculated from
- [math]\displaystyle{ \gamma=P(\ell\lt X\lt u)=P\left(\frac{\ell-\mu} \sigma \lt \frac{X-\mu} \sigma \lt \frac{u-\mu} \sigma \right)=P\left(\frac{\ell-\mu} \sigma \lt Z \lt \frac{u-\mu} \sigma \right), }[/math]
where [math]\displaystyle{ Z=\frac{X-\mu}{\sigma} }[/math], the standard score of X, is distributed as standard normal.
Hence
- [math]\displaystyle{ \frac{\ell-\mu} \sigma = -z, \quad \frac{u-\mu} \sigma = z, }[/math]
or
- [math]\displaystyle{ \ell=\mu-z\sigma, \quad u=\mu+z\sigma, }[/math]
with z the quantile in the standard normal distribution for which:
- [math]\displaystyle{ \gamma=P(-z\lt Z\lt z). }[/math]
or equivalently:
- [math]\displaystyle{ \tfrac 12(1-\gamma)=P(Z\gt z). }[/math]
| Prediction interval | z |
|---|---|
| 75% | 1.15[3] |
| 90% | 1.64[3] |
| 95% | 1.96[3] |
| 99% | 2.58[3] |
The prediction interval is conventionally written as:
- [math]\displaystyle{ \left[\mu- z\sigma,\ \mu + z\sigma \right]. }[/math]
For example, to calculate the 95% prediction interval for a normal distribution with a mean (μ) of 5 and a standard deviation (σ) of 1, z is approximately 2. Therefore, the lower limit of the prediction interval is approximately 5 − (2·1) = 3 and the upper limit is approximately 5 + (2·1) = 7, giving a prediction interval of approximately 3 to 7.
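The same calculation as a short sketch, using SciPy's normal quantile function (the variable names are illustrative):

```python
from scipy.stats import norm

mu, sigma, gamma = 5.0, 1.0, 0.95
z = norm.ppf((1 + gamma) / 2)                  # two-sided quantile; about 1.96 for 95%
lower, upper = mu - z * sigma, mu + z * sigma
print(f"{gamma:.0%} prediction interval: ({lower:.2f}, {upper:.2f})")  # ~ (3.04, 6.96)
```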
Estimation of parameters
For a distribution with unknown parameters, a direct approach to prediction is to estimate the parameters and then use the associated quantile function – for example, one could use the sample mean [math]\displaystyle{ \overline{X} }[/math] as an estimate for μ and the sample variance s² as an estimate for σ². There are two natural choices for s² here – dividing by [math]\displaystyle{ (n-1) }[/math] yields an unbiased estimate, while dividing by n yields the maximum likelihood estimator – and either might be used. One then uses the quantile function with these estimated parameters [math]\displaystyle{ \Phi^{-1}_{\overline{X},s^2} }[/math] to give a prediction interval.
This approach is usable, but the resulting interval will not have the repeated sampling interpretation[4] – it is not a predictive confidence interval.
For the sequel, use the sample mean:
- [math]\displaystyle{ \overline{X} = \overline{X}_n=(X_1+\cdots+X_n)/n }[/math]
and the (unbiased) sample variance:
- [math]\displaystyle{ s^2 = s_n^2={1 \over n-1}\sum_{i=1}^n (X_i-\overline{X}_n)^2 }[/math]
Unknown mean, known variance
Given[5] a normal distribution with unknown mean μ but known variance 1, the sample mean [math]\displaystyle{ \overline{X} }[/math] of the observations [math]\displaystyle{ X_1,\dots,X_n }[/math] has distribution [math]\displaystyle{ N(\mu,1/n), }[/math] while the future observation [math]\displaystyle{ X_{n+1} }[/math] has distribution [math]\displaystyle{ N(\mu,1). }[/math] Taking the difference of these cancels the μ and yields a normal distribution of variance [math]\displaystyle{ 1+(1/n), }[/math] thus
- [math]\displaystyle{ \frac{X_{n+1}-\overline{X}}{\sqrt{1+(1/n)}} \sim N(0,1). }[/math]
Solving for [math]\displaystyle{ X_{n+1} }[/math] gives the prediction distribution [math]\displaystyle{ N(\overline{X},1+(1/n)), }[/math] from which one can compute intervals as before. This is a predictive confidence interval in the sense that if one uses a quantile range of 100p%, then on repeated applications of this computation, the future observation [math]\displaystyle{ X_{n+1} }[/math] will fall in the predicted interval 100p% of the time.
Notice that this prediction distribution is more conservative than using the estimated mean [math]\displaystyle{ \overline{X} }[/math] and known variance 1, as this uses variance [math]\displaystyle{ 1+(1/n) }[/math], hence yields wider intervals. This is necessary for the desired confidence interval property to hold.
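A sketch of this case (the data, seed, and names are illustrative; as in the text, the variance is taken as known and equal to 1):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=20)    # unknown mean, known variance 1

gamma = 0.95
z = norm.ppf((1 + gamma) / 2)
half = z * np.sqrt(1 + 1 / len(x))             # half-width widened by the 1/n term
print(x.mean() - half, x.mean() + half)        # predictive confidence interval
```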
Known mean, unknown variance
Conversely, given a normal distribution with known mean 0 but unknown variance [math]\displaystyle{ \sigma^2 }[/math], the sample variance [math]\displaystyle{ s^2 }[/math] of the observations [math]\displaystyle{ X_1,\dots,X_n }[/math] has, up to scale, a [math]\displaystyle{ \chi_{n-1}^2 }[/math] distribution; more precisely:
- [math]\displaystyle{ \frac{(n-1)s_n^2}{\sigma^2} \sim \chi_{n-1}^2. }[/math]
while the future observation [math]\displaystyle{ X_{n+1} }[/math] has distribution [math]\displaystyle{ N(0,\sigma^2). }[/math] Taking the ratio of the future observation to the sample standard deviation cancels the σ: since [math]\displaystyle{ X_{n+1}/\sigma }[/math] is standard normal and is independent of [math]\displaystyle{ (n-1)s^2/\sigma^2 \sim \chi_{n-1}^2, }[/math] the ratio follows a Student's t-distribution with n – 1 degrees of freedom:
- [math]\displaystyle{ \frac{X_{n+1}} s \sim T^{n-1}. }[/math]
Solving for [math]\displaystyle{ X_{n+1} }[/math] gives the prediction distribution [math]\displaystyle{ sT^{n-1}, }[/math] from which one can compute intervals as before.
Notice that this prediction distribution is more conservative than using a normal distribution with the estimated standard deviation [math]\displaystyle{ s }[/math] and known mean 0, as it uses the t-distribution instead of the normal distribution, hence yields wider intervals. This is necessary for the desired confidence interval property to hold.
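A sketch of this case as well (data, seed, and names are illustrative; the mean is taken as known and equal to 0):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=3.0, size=15)    # known mean 0, unknown variance

gamma = 0.95
s = x.std(ddof=1)                              # sample standard deviation s_n
t_a = t.ppf((1 + gamma) / 2, df=len(x) - 1)    # t quantile with n-1 degrees of freedom
print(-t_a * s, t_a * s)                       # prediction interval for X_{n+1}
```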
Unknown mean, unknown variance
Combining the above for a normal distribution [math]\displaystyle{ N(\mu,\sigma^2) }[/math] with both μ and σ2 unknown yields the following ancillary statistic:[6]
- [math]\displaystyle{ \frac{X_{n+1}-\overline{X}_n}{s_n\sqrt{1+1/n}} \sim T^{n-1} }[/math]
This simple combination is possible because the sample mean and sample variance of the normal distribution are independent statistics; this is only true for the normal distribution, and in fact characterizes the normal distribution.
Solving for [math]\displaystyle{ X_{n+1} }[/math] yields the prediction distribution
- [math]\displaystyle{ \overline{X}_n + s_n\sqrt{1+1/n} \cdot T^{n-1} }[/math]
The probability of [math]\displaystyle{ X_{n+1} }[/math] falling in a given interval is then:
- [math]\displaystyle{ \Pr\left(\overline{X}_n-T_a s_n\sqrt{1+(1/n)}\leq X_{n+1} \leq\overline{X}_n+T_a s_n\sqrt{1+(1/n)}\,\right)=p }[/math]
where Ta is the 100((1 + p)/2)th percentile of Student's t-distribution with n − 1 degrees of freedom. Therefore, the numbers
- [math]\displaystyle{ \overline{X}_n \pm T_a s_n \sqrt{1+(1/n)} }[/math]
are the endpoints of a 100p% prediction interval for [math]\displaystyle{ X_{n+1} }[/math].
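Putting the pieces together as a small helper (the function name and the simulated data are mine, not from the source):

```python
import numpy as np
from scipy.stats import t

def prediction_interval(x, p=0.95):
    """100p% prediction interval for the next draw from the same normal population."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t_a = t.ppf((1 + p) / 2, df=n - 1)              # upper (1+p)/2 quantile of t_{n-1}
    half = t_a * x.std(ddof=1) * np.sqrt(1 + 1 / n) # note the sqrt(1 + 1/n) widening
    return x.mean() - half, x.mean() + half

rng = np.random.default_rng(3)
sample = rng.normal(loc=10.0, scale=2.0, size=25)
print(prediction_interval(sample))
```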
Non-parametric methods
One can compute prediction intervals without any assumptions on the population, i.e. in a non-parametric way.
The residual bootstrap method can be used for constructing non-parametric prediction intervals.
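One hedged sketch of the idea for a simple location model (the resampling scheme, function name, and defaults are my own illustrative choices, not a prescription from the source):

```python
import numpy as np

def bootstrap_prediction_interval(x, p=0.95, n_boot=10_000, seed=0):
    """Residual-bootstrap prediction interval for the next observation (location model)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    resid = x - x.mean()                           # residuals about the fitted mean
    draws = np.empty(n_boot)
    for b in range(n_boot):
        star = rng.choice(resid, size=len(x), replace=True)
        mean_b = x.mean() + star.mean()            # mean refitted on a bootstrap sample
        draws[b] = mean_b + rng.choice(resid)      # plus one fresh resampled residual
    return tuple(np.quantile(draws, [(1 - p) / 2, (1 + p) / 2]))
```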
Conformal prediction
The conformal prediction method is more general. Consider the special case of using the minimum and maximum as the boundaries of a prediction interval: if one has a sample of independent, identically distributed random variables {X1, ..., Xn} from a continuous distribution, then the probability that the next observation Xn+1 will be the largest is 1/(n + 1), since each of the n + 1 observations is equally likely to be the maximum. In the same way, the probability that Xn+1 will be the smallest is 1/(n + 1). The remaining (n − 1)/(n + 1) of the time, Xn+1 falls between the sample maximum and the sample minimum of {X1, ..., Xn}. Thus, denoting the sample maximum and minimum by M and m, this yields an (n − 1)/(n + 1) prediction interval of [m, M].
Notice that while this gives the probability that a future observation will fall in a range, it does not give any estimate as to where in a segment it will fall – notably, if it falls outside the range of observed values, it may be far outside the range. See extreme value theory for further discussion. Formally, this applies not just to sampling from a population, but to any exchangeable sequence of random variables, not necessarily independent or identically distributed.
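A small simulation checking the (n − 1)/(n + 1) coverage (the exponential distribution, n, and seed are arbitrary illustrative choices; any continuous distribution behaves the same):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 9, 100_000                      # nominal coverage (n-1)/(n+1) = 0.8
hits = 0
for _ in range(trials):
    x = rng.exponential(size=n + 1)
    sample, future = x[:n], x[n]
    hits += sample.min() <= future <= sample.max()
print(hits / trials, (n - 1) / (n + 1))     # empirical vs nominal coverage
```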
Contrast with other intervals
Contrast with confidence intervals
In the formula for the predictive confidence interval, no mention is made of the unobservable population parameters μ and σ: the observed sample statistics [math]\displaystyle{ \overline{X}_n }[/math] and [math]\displaystyle{ S_n }[/math] (sample mean and standard deviation) are used, and what is estimated is the outcome of future samples.
Rather than using sample statistics as estimators of population parameters and applying confidence intervals to these estimates, one considers "the next sample" [math]\displaystyle{ X_{n+1} }[/math] as itself a statistic, and computes its sampling distribution.
In parameter confidence intervals, one estimates population parameters; if one wishes to interpret this as prediction of the next sample, one models "the next sample" as a draw from this estimated population, using the (estimated) population distribution. By contrast, in predictive confidence intervals, one uses the sampling distribution of (a statistic of) a sample of n or n + 1 observations from such a population, and the population distribution is not directly used, though the assumption about its form (though not the values of its parameters) is used in computing the sampling distribution.
In regression analysis
A common application of prediction intervals is to regression analysis.
Suppose the data is being modeled by a straight line regression:
- [math]\displaystyle{ y_i=\alpha+\beta x_i +\varepsilon_i\, }[/math]
where [math]\displaystyle{ y_i }[/math] is the response variable, [math]\displaystyle{ x_i }[/math] is the explanatory variable, εi is a random error term, and [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math] are parameters.
Given estimates [math]\displaystyle{ \hat \alpha }[/math] and [math]\displaystyle{ \hat \beta }[/math] for the parameters, such as from a simple linear regression, the predicted response value yd for a given explanatory value xd is
- [math]\displaystyle{ \hat{y}_d=\hat\alpha+\hat\beta x_d , }[/math]
(the point on the regression line), while the actual response would be
- [math]\displaystyle{ y_d=\alpha+\beta x_d +\varepsilon_d. \, }[/math]
The point estimate [math]\displaystyle{ \hat{y}_d }[/math] is called the mean response, and is an estimate of the expected value of yd, [math]\displaystyle{ E(y\mid x_d). }[/math]
A prediction interval instead gives an interval in which one expects yd to fall; this is not necessary if the actual parameters α and β are known (together with the distribution of the error term εi), but if one is estimating from a sample, then one may use the standard errors of the estimated intercept and slope ([math]\displaystyle{ \hat\alpha }[/math] and [math]\displaystyle{ \hat\beta }[/math]), as well as their correlation, to compute a prediction interval.
In regression, Faraway (2002) makes a distinction between intervals for predictions of the mean response and intervals for predictions of the observed response; the difference is essentially whether the unity term is included under the square root in the expansion factors above, as the sketch below shows. For details, see (Faraway 2002).
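A sketch of both intervals for simple linear regression, using the textbook formulas (the helper name and structure are mine; this is one standard construction, not Faraway's exact code):

```python
import numpy as np
from scipy.stats import t

def regression_intervals(x, y, x_d, p=0.95):
    """Mean-response CI and prediction interval at x_d for y = alpha + beta*x + eps."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx = ((x - x.mean()) ** 2).sum()
    beta = ((x - x.mean()) * (y - y.mean())).sum() / sxx   # slope estimate
    alpha = y.mean() - beta * x.mean()                     # intercept estimate
    resid = y - (alpha + beta * x)
    s = np.sqrt(resid @ resid / (n - 2))                   # residual standard error
    t_a = t.ppf((1 + p) / 2, df=n - 2)
    y_hat = alpha + beta * x_d
    se_mean = s * np.sqrt(1 / n + (x_d - x.mean()) ** 2 / sxx)
    se_pred = s * np.sqrt(1 + 1 / n + (x_d - x.mean()) ** 2 / sxx)  # extra unity term
    return ((y_hat - t_a * se_mean, y_hat + t_a * se_mean),
            (y_hat - t_a * se_pred, y_hat + t_a * se_pred))
```

The only difference between the two standard errors is the unity term under the square root, which is exactly the distinction Faraway draws: the prediction interval must also absorb the variance of a single new error term.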
Bayesian statistics
Seymour Geisser, a proponent of predictive inference, gives predictive applications of Bayesian statistics.[7]
In Bayesian statistics, one can compute (Bayesian) prediction intervals from the posterior probability of the random variable, as a credible interval. In theoretical work, credible intervals are not often calculated for the prediction of future events, but for inference of parameters – i.e., credible intervals of a parameter, not for the outcomes of the variable itself. However, particularly where applications are concerned with possible extreme values of yet to be observed cases, credible intervals for such values can be of practical importance.
Applications
Prediction intervals are commonly used as definitions of reference ranges, such as reference ranges for blood tests to give an idea of whether a blood test is normal or not. For this purpose, the most commonly used prediction interval is the 95% prediction interval, and a reference range based on it can be called a standard reference range.
See also
- Extrapolation
- Posterior probability
- Prediction
- Prediction band
- Seymour Geisser
- Statistical model validation
- Trend estimation
Notes
References
- Faraway, Julian J. (2002), Practical Regression and Anova using R, https://cran.r-project.org/doc/contrib/Faraway-PRA.pdf
- Geisser, Seymour (1993), Predictive Inference: An Introduction, Chapman & Hall/CRC
- Sterne, Jonathan; Kirkwood, Betty R. (2003), Essential Medical Statistics, Blackwell Science, ISBN 0-86542-871-9, https://archive.org/details/essentialmedical00kirk
Further reading
- Chatfield, C. (1993). "Calculating Interval Forecasts". Journal of Business & Economic Statistics 11 (2): 121–135. doi:10.2307/1391361.
- Lawless, J. F.; Fredette, M. (2005). "Frequentist prediction intervals and predictive distributions". Biometrika 92 (3): 529–542. doi:10.1093/biomet/92.3.529.
- Meade, N.; Islam, T. (1995). "Prediction Intervals for Growth Curve Forecasts". Journal of Forecasting 14 (5): 413–430. doi:10.1002/for.3980140502.
- ISO 16269-8 Standard Interpretation of Data, Part 8, Determination of Prediction Intervals
Original source: https://en.wikipedia.org/wiki/Prediction_interval