Location parameter
In statistics, a location parameter of a probability distribution is a scalar- or vector-valued parameter [math]\displaystyle{ x_0 }[/math] which determines the "location" or shift of the distribution. In the location parameter estimation literature, probability distributions with such a parameter are formally defined in one of the following equivalent ways:
- either as having a probability density function or probability mass function [math]\displaystyle{ f(x - x_0) }[/math];[1] or
- having a cumulative distribution function [math]\displaystyle{ F(x - x_0) }[/math];[2] or
- being defined as resulting from the random variable transformation [math]\displaystyle{ x_0 + X }[/math], where [math]\displaystyle{ X }[/math] is a random variable with a certain, possibly unknown, distribution[3] (see also § Additive noise below).
A direct example of a location parameter is the parameter [math]\displaystyle{ \mu }[/math] of the normal distribution. To see this, note that the probability density function [math]\displaystyle{ f(x | \mu, \sigma) }[/math] of a normal distribution [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math] can have the parameter [math]\displaystyle{ \mu }[/math] factored out, writing [math]\displaystyle{ f(x | \mu, \sigma) = g(x - \mu | \sigma) }[/math] with
- [math]\displaystyle{ g(y | \sigma) = \frac{1}{\sigma \sqrt{2\pi} } e^{-\frac{1}{2}\left(\frac{y}{\sigma}\right)^2} }[/math]
thus fulfilling the first of the definitions given above.
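This factoring can be verified numerically. The sketch below (plain Python; the function names are illustrative, not from any source) checks that the density of [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math] agrees with the zero-centred density evaluated at [math]\displaystyle{ x - \mu }[/math]:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def g(y, sigma):
    """Zero-centred density g(y | sigma), with mu factored out."""
    return math.exp(-0.5 * (y / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 1.5, 2.0
# f(x | mu, sigma) = g(x - mu | sigma) at several test points
for x in [-3.0, 0.0, 0.7, 4.2]:
    assert math.isclose(normal_pdf(x, mu, sigma), g(x - mu, sigma))
```

Changing [math]\displaystyle{ \mu }[/math] only changes where the density [math]\displaystyle{ g }[/math] is evaluated, not its shape.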
The above definition indicates, in the one-dimensional case, that if [math]\displaystyle{ x_0 }[/math] is increased, the probability density or mass function shifts rigidly to the right, maintaining its exact shape.
A location parameter can also be found in families having more than one parameter, such as location–scale families. In this case, the probability density function or probability mass function will be a special case of the more general form
- [math]\displaystyle{ f_{x_0,\theta}(x) = f_\theta(x-x_0) }[/math]
where [math]\displaystyle{ x_0 }[/math] is the location parameter, θ represents additional parameters, and [math]\displaystyle{ f_\theta }[/math] is a function parametrized by the additional parameters.
Definition[4]
Let [math]\displaystyle{ f(x) }[/math] be any probability density function and let [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma \gt 0 }[/math] be any given constants. Then the function
[math]\displaystyle{ g(x| \mu, \sigma)= \frac{1}{\sigma}f\left(\frac{x-\mu}{\sigma}\right) }[/math]
is a probability density function.
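As a quick sanity check of this claim, the sketch below picks an arbitrary base density [math]\displaystyle{ f }[/math] (a standard Laplace density, chosen purely for illustration), applies the location–scale transform above, and verifies numerically that the result still integrates to 1:

```python
import math

def f(x):
    """Base pdf: standard Laplace density (an arbitrary choice)."""
    return 0.5 * math.exp(-abs(x))

def g(x, mu, sigma):
    """Location-scale transform g(x | mu, sigma) = f((x - mu) / sigma) / sigma."""
    return f((x - mu) / sigma) / sigma

def integrate(fn, lo, hi, n=200000):
    """Simple trapezoidal rule over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.5 * (fn(lo) + fn(hi))
    for i in range(1, n):
        total += fn(lo + i * h)
    return total * h

# The interval [-20, 30] captures essentially all the mass for mu=3, sigma=0.5
area = integrate(lambda x: g(x, mu=3.0, sigma=0.5), -20.0, 30.0)
assert abs(area - 1.0) < 1e-4
```

The substitution [math]\displaystyle{ u = (x-\mu)/\sigma }[/math], [math]\displaystyle{ du = dx/\sigma }[/math] is what makes the [math]\displaystyle{ 1/\sigma }[/math] prefactor necessary.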
The location family is then defined as follows:
Let [math]\displaystyle{ f(x) }[/math] be any probability density function. Then the family of probability density functions [math]\displaystyle{ \mathcal{F} = \{f(x-\mu) : \mu \in \mathbb{R}\} }[/math] is called the location family with standard probability density function [math]\displaystyle{ f(x) }[/math], where [math]\displaystyle{ \mu }[/math] is called the location parameter for the family.
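The family can be sketched directly in code: each member is the standard density shifted by [math]\displaystyle{ \mu }[/math], so any feature of the shape, such as the mode, moves rigidly with [math]\displaystyle{ \mu }[/math]. The example below (plain Python; the standard normal is an arbitrary choice of standard density) checks this on a grid:

```python
import math

def f(x):
    """Standard pdf of the family: standard normal density (arbitrary choice)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def family_member(mu):
    """Member f(x - mu) of the location family generated by f."""
    return lambda x: f(x - mu)

# The mode of f is at 0, so the mode of f(x - mu) is at mu
for mu in [-2.0, 0.0, 3.5]:
    member = family_member(mu)
    xs = [mu + k / 100.0 for k in range(-300, 301)]
    mode = max(xs, key=member)
    assert abs(mode - mu) < 1e-9
```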
Additive noise
An alternative way of thinking of location families is through the concept of additive noise. If [math]\displaystyle{ x_0 }[/math] is a constant and W is random noise with probability density [math]\displaystyle{ f_W(w), }[/math] then [math]\displaystyle{ X = x_0 + W }[/math] has probability density [math]\displaystyle{ f_{x_0}(x) = f_W(x-x_0) }[/math] and its distribution is therefore part of a location family.
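A short simulation illustrates this view (plain Python standard library; the Gaussian noise and the value of [math]\displaystyle{ x_0 }[/math] are arbitrary choices): adding the constant [math]\displaystyle{ x_0 }[/math] to every draw of W shifts the whole sample rigidly, so the sample means differ by exactly [math]\displaystyle{ x_0 }[/math] up to floating-point rounding:

```python
import random

random.seed(0)
x0 = 2.5  # location parameter
# W: standard Gaussian noise; X = x0 + W belongs to the location family of W
w = [random.gauss(0.0, 1.0) for _ in range(10000)]
x = [x0 + wi for wi in w]

mean_w = sum(w) / len(w)
mean_x = sum(x) / len(x)
# The shift is deterministic: each sample point moves by x0, hence so does the mean
assert abs((mean_x - mean_w) - x0) < 1e-6
```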
Proofs
For the continuous univariate case, consider a probability density function [math]\displaystyle{ f(x | \theta), x \in [a, b] \subset \mathbb{R} }[/math], where [math]\displaystyle{ \theta }[/math] is a vector of parameters. A location parameter [math]\displaystyle{ x_0 }[/math] can be added by defining:
- [math]\displaystyle{ g(x | \theta, x_0) = f(x - x_0 | \theta), \; x \in [a + x_0, b + x_0] }[/math]
It can be proved that [math]\displaystyle{ g }[/math] is a p.d.f. by verifying that it satisfies the two conditions[5] [math]\displaystyle{ g(x | \theta, x_0) \ge 0 }[/math] and [math]\displaystyle{ \int_{-\infty}^{\infty} g(x | \theta, x_0) dx = 1 }[/math]. [math]\displaystyle{ g }[/math] integrates to 1 because:
- [math]\displaystyle{ \int_{-\infty}^{\infty} g(x | \theta, x_0) dx = \int_{a + x_0}^{b + x_0} g(x | \theta, x_0) dx = \int_{a + x_0}^{b + x_0} f(x - x_0 | \theta) dx }[/math]
now making the variable change [math]\displaystyle{ u = x - x_0 }[/math] and updating the integration interval accordingly yields:
- [math]\displaystyle{ \int_{a}^{b} f(u | \theta) du = 1 }[/math]
because [math]\displaystyle{ f(x | \theta) }[/math] is a p.d.f. by hypothesis. [math]\displaystyle{ g(x | \theta, x_0) \ge 0 }[/math] follows because [math]\displaystyle{ g }[/math] takes the same values as [math]\displaystyle{ f }[/math], which is a p.d.f. and therefore nonnegative.
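The change-of-variables argument can be checked on a concrete case. The sketch below (plain Python; the triangular base density f(x) = 2x on [0, 1] and the shift x0 = 3 are arbitrary illustrative choices) verifies that the shifted density is nonnegative and integrates to 1 over its shifted support:

```python
def f(x):
    """Base pdf f(x) = 2x on [0, 1] (a simple concrete choice)."""
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def g(x, x0):
    """Shifted pdf g(x | x0) = f(x - x0), supported on [x0, 1 + x0]."""
    return f(x - x0)

def integrate(fn, lo, hi, n=100000):
    """Simple trapezoidal rule over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.5 * (fn(lo) + fn(hi))
    for i in range(1, n):
        total += fn(lo + i * h)
    return total * h

x0 = 3.0
# Support of the shifted density is [a + x0, b + x0] = [3, 4]
area = integrate(lambda x: g(x, x0), 3.0, 4.0)
assert abs(area - 1.0) < 1e-6
# Nonnegativity: g takes the same values as f
assert all(g(3.0 + t / 100.0, x0) >= 0.0 for t in range(101))
```

Note that g(3.5 | 3) = f(0.5) = 1, matching the base density point for point after the shift.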
See also
- Central tendency
- Location test
- Invariant estimator
- Scale parameter
- Two-moment decision models
References
- ↑ Takeuchi, Kei (1971). "A Uniformly Asymptotically Efficient Estimator of a Location Parameter". Journal of the American Statistical Association 66 (334): 292–301. doi:10.1080/01621459.1971.10482258.
- ↑ Huber, Peter J. (1992). "Robust estimation of a location parameter". Breakthroughs in Statistics. Springer Series in Statistics (Springer): 492–518. doi:10.1007/978-1-4612-4380-9_35. ISBN 978-0-387-94039-7. http://projecteuclid.org/euclid.aoms/1177703732.
- ↑ Stone, Charles J. (1975). "Adaptive Maximum Likelihood Estimators of a Location Parameter". The Annals of Statistics 3 (2): 267–284. doi:10.1214/aos/1176343056.
- ↑ Casella, George; Berger, Roger (2001). Statistical Inference (2nd ed.). p. 116. ISBN 978-0534243128.
- ↑ Ross, Sheldon (2010). Introduction to probability models. Amsterdam Boston: Academic Press. ISBN 978-0-12-375686-2. OCLC 444116127.
Original source: https://en.wikipedia.org/wiki/Location parameter.