Parametric model

In statistics, a parametric model, parametric family, or finite-dimensional model is a particular class of statistical models. Specifically, a parametric model is a family of probability distributions that can be described using a finite number of parameters.

Definition

A statistical model is a collection of probability distributions on some sample space. We assume that the collection, 𝒫, is indexed by some set Θ. The set Θ is called the parameter set or, more commonly, the parameter space. For each θ ∈ Θ, let Fθ denote the corresponding member of the collection; so Fθ is a cumulative distribution function. Then a statistical model can be written as

[math]\displaystyle{ \mathcal{P} = \big\{ F_\theta\ \big|\ \theta\in\Theta \big\}. }[/math]

The model is a parametric model if Θ ⊆ ℝk for some positive integer k.

When the model consists of absolutely continuous distributions, it is often specified in terms of corresponding probability density functions:

[math]\displaystyle{ \mathcal{P} = \big\{ f_\theta\ \big|\ \theta\in\Theta \big\}. }[/math]
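
As a concrete illustration of this indexing, a minimal sketch (assuming Python with scipy.stats as the environment) can represent a parametric model as a function mapping a parameter θ = (μ, σ) in Θ = ℝ × (0, ∞) ⊆ ℝ² to the corresponding cumulative distribution function Fθ, here using the normal family discussed in the Examples below.

```python
# Minimal sketch of the definition: the model P = {F_theta : theta in Theta}
# is coded as a map from a finite-dimensional parameter theta to a CDF F_theta.
# Here Theta = R x (0, inf), so k = 2, and F_theta is a normal CDF (illustrative choice).
from scipy.stats import norm

def F(theta):
    """Return the CDF F_theta for theta = (mu, sigma) with sigma > 0."""
    mu, sigma = theta
    if sigma <= 0:
        raise ValueError("theta lies outside the parameter space (need sigma > 0)")
    return lambda x: norm.cdf(x, loc=mu, scale=sigma)

# Each theta in Theta selects one member of the family.
F_standard = F((0.0, 1.0))
print(F_standard(1.96))  # ~0.975: the standard normal CDF at 1.96
```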

Examples

  • The Poisson family of distributions is parametrized by a single number λ > 0:
[math]\displaystyle{ \mathcal{P} = \Big\{\ p_\lambda(j) = \tfrac{\lambda^j}{j!}e^{-\lambda},\ j=0,1,2,3,\dots \ \Big|\;\; \lambda\gt 0 \ \Big\}, }[/math]

where pλ is the probability mass function. This family is an exponential family.

  • The normal family is parametrized by θ = (μ, σ), where μ ∈ ℝ is a location parameter and σ > 0 is a scale parameter:
[math]\displaystyle{ \mathcal{P} = \Big\{\ f_\theta(x) = \tfrac{1}{\sqrt{2\pi}\sigma} \exp\left(-\tfrac{(x-\mu)^2}{2\sigma^2}\right)\ \Big|\;\; \mu\in\mathbb{R}, \sigma\gt 0 \ \Big\}. }[/math]

This parametrized family is both an exponential family and a location-scale family.

  • The Weibull translation model is parametrized by θ = (λ, β, μ), where λ > 0 is a scale parameter, β > 0 is a shape parameter, and μ ∈ ℝ is a location parameter:
[math]\displaystyle{ \mathcal{P} = \Big\{\ f_\theta(x) = \tfrac{\beta}{\lambda} \left(\tfrac{x-\mu}{\lambda}\right)^{\beta-1}\! \exp\!\big(\!-\!\big(\tfrac{x-\mu}{\lambda}\big)^\beta \big)\, \mathbf{1}_{\{x\gt \mu\}} \ \Big|\;\; \lambda\gt 0,\, \beta\gt 0,\, \mu\in\mathbb{R} \ \Big\}. }[/math]
  • The binomial model is parametrized by θ = (n, p), where n is a non-negative integer and p is a probability (i.e. 0 ≤ p ≤ 1):
[math]\displaystyle{ \mathcal{P} = \Big\{\ p_\theta(k) = \tfrac{n!}{k!(n-k)!}\, p^k (1-p)^{n-k},\ k=0,1,2,\dots, n \ \Big|\;\; n\in\mathbb{Z}_{\ge 0},\, 0 \le p \le 1 \Big\}. }[/math]

This example illustrates the definition for a model with some discrete parameters.
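
As a sanity check on these parametrizations, a short sketch (assuming Python with scipy.stats; the parameter values are arbitrary) can evaluate each closed-form pmf or pdf above and compare it with the corresponding SciPy family, including the Weibull translation model.

```python
# Numerical check that the closed-form expressions in the Examples
# agree with the corresponding scipy.stats distributions.
import math
from scipy.stats import poisson, norm, weibull_min, binom

# Poisson: p_lambda(j) = lambda^j e^{-lambda} / j!
lam, j = 2.5, 3
assert math.isclose(lam**j * math.exp(-lam) / math.factorial(j), poisson.pmf(j, lam))

# Normal: f_theta(x) with theta = (mu, sigma)
mu, sigma, x = 1.0, 2.0, 0.3
assert math.isclose(math.exp(-(x - mu)**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma),
                    norm.pdf(x, loc=mu, scale=sigma))

# Weibull translation model: theta = (lambda, beta, mu); SciPy's weibull_min
# uses shape c = beta, loc = mu, scale = lambda.
lam_w, beta, mu_w, xw = 1.5, 2.0, 0.5, 1.2   # xw > mu_w, so the indicator equals 1
z = (xw - mu_w) / lam_w
assert math.isclose((beta / lam_w) * z**(beta - 1) * math.exp(-z**beta),
                    weibull_min.pdf(xw, beta, loc=mu_w, scale=lam_w))

# Binomial: theta = (n, p), with the discrete parameter n
n, p, k = 10, 0.3, 4
assert math.isclose(math.comb(n, k) * p**k * (1 - p)**(n - k), binom.pmf(k, n, p))

print("all four parametrizations match scipy.stats")
```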

General remarks

A parametric model is called identifiable if the mapping θ ↦ Pθ is invertible, i.e. there are no two different parameter values θ1 and θ2 such that Pθ1 = Pθ2.
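
To make the condition concrete, a small sketch (assuming Python with numpy and scipy, and a deliberately redundant parametrization chosen only for illustration) shows a non-identifiable model: with θ = (a, b) ∈ ℝ² and Pθ the normal distribution N(a + b, 1), two different parameter values yield the same distribution, so θ ↦ Pθ is not invertible.

```python
# Sketch of a non-identifiable parametrization: P_theta = N(a + b, 1) for theta = (a, b).
import numpy as np
from scipy.stats import norm

def density(theta):
    a, b = theta
    return lambda x: norm.pdf(x, loc=a + b, scale=1.0)

theta1, theta2 = (0.0, 1.0), (1.0, 0.0)                      # different parameter values ...
x = np.linspace(-5.0, 5.0, 101)
print(np.allclose(density(theta1)(x), density(theta2)(x)))   # True: identical densities
```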

Comparisons with other classes of models

Parametric models are contrasted with the semi-parametric, semi-nonparametric, and non-parametric models, all of which consist of an infinite set of "parameters" for description. The distinction between these four classes is as follows:[citation needed]

  • in a "parametric" model all the parameters are in finite-dimensional parameter spaces;
  • a model is "non-parametric" if all the parameters are in infinite-dimensional parameter spaces;
  • a "semi-parametric" model contains finite-dimensional parameters of interest and infinite-dimensional nuisance parameters;
  • a "semi-nonparametric" model has both finite-dimensional and infinite-dimensional unknown parameters of interest.

Some statisticians believe that the concepts "parametric", "non-parametric", and "semi-parametric" are ambiguous.[1] It can also be noted that the set of all probability measures has the cardinality of the continuum, and therefore it is possible to parametrize any model at all by a single number in the interval (0, 1).[2] This difficulty can be avoided by considering only "smooth" parametric models.

Notes

Bibliography

  • Bickel, Peter J.; Doksum, Kjell A. (2001), Mathematical Statistics: Basic and selected topics, 1 (Second (updated printing 2007) ed.), Prentice-Hall
  • Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya'acov; Wellner, Jon A. (1998), Efficient and Adaptive Estimation for Semiparametric Models, Springer
  • Davison, A. C. (2003), Statistical Models, Cambridge University Press
  • Le Cam, Lucien; Yang, Grace Lo (2000), Asymptotics in Statistics: Some basic concepts (2nd ed.), Springer
  • Lehmann, Erich L.; Casella, George (1998), Theory of Point Estimation (2nd ed.), Springer
  • Liese, Friedrich; Miescke, Klaus-J. (2008), Statistical Decision Theory: Estimation, testing, and selection, Springer
  • Pfanzagl, Johann; with the assistance of R. Hamböker (1994), Parametric Statistical Theory, Walter de Gruyter