# Standardized moment


In probability theory and statistics, a standardized moment of a probability distribution is a moment (usually a higher-degree central moment) that is normalized. The normalization is typically a division by a power of the standard deviation, which renders the moment scale invariant. This has the advantage that such normalized moments differ only in properties other than variability, facilitating e.g. the comparison of the shapes of different probability distributions.

## Standard normalization

Let X be a random variable with a probability distribution P and mean value $\displaystyle{ \mu = \mathrm{E}[X] }$ (i.e. the first raw moment or moment about zero), the operator E denoting the expected value of X. Then the standardized moment of degree k is $\displaystyle{ \frac{\mu_k}{\sigma^k}, }$ that is, the ratio of the kth moment about the mean

$\displaystyle{ \mu_k = \operatorname{E} \left[ ( X - \mu )^k \right] = \int_{-\infty}^{\infty} (x - \mu)^k P(x)\,dx, }$
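As a minimal numerical sketch of this integral (the Exponential(1) density $p(x) = e^{-x}$, with mean $\mu = 1$, is an assumed example distribution, not from the source), the central moments can be approximated by simple quadrature:

```python
import math

def central_moment(k, upper=60.0, steps=200_000):
    """Approximate mu_k = integral of (x - mu)^k p(x) dx by the trapezoid rule,
    for the illustrative density p(x) = exp(-x) on [0, inf), whose mean is 1."""
    mu = 1.0
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        weight = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += weight * (x - mu) ** k * math.exp(-x)
    return total * h

# For Exponential(1), mu_2 (the variance) is 1 and mu_3 is 2.
print(central_moment(2))
print(central_moment(3))
```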

to the kth power of the standard deviation,

$\displaystyle{ \sigma^k = \left(\sqrt{\mathrm{E}\left[(X - \mu)^2\right]}\right)^k. }$

The power k is needed because moments scale as $\displaystyle{ x^k, }$ meaning that $\displaystyle{ \mu_k(\lambda X) = \lambda^k \mu_k(X): }$ they are homogeneous functions of degree k, so the standardized moment is scale invariant. Equivalently, moments carry dimension; in the above ratio defining standardized moments, the dimensions cancel, so standardized moments are dimensionless numbers.
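This scale invariance can be checked empirically. The sketch below (the exponential sample and the factor 7.5 are arbitrary illustrative choices) estimates sample standardized moments and shows they are unchanged when the data is rescaled:

```python
import random

def standardized_moment(xs, k):
    """k-th standardized moment of a sample: E[(X - mu)^k] / sigma^k."""
    n = len(xs)
    mu = sum(xs) / n
    central = lambda p: sum((x - mu) ** p for x in xs) / n
    return central(k) / central(2) ** (k / 2)

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(10_000)]
scaled = [7.5 * x for x in xs]  # lambda * X with lambda = 7.5

# Homogeneity mu_k(lambda X) = lambda^k mu_k(X) makes the ratio scale invariant:
for k in (3, 4):
    print(standardized_moment(xs, k), standardized_moment(scaled, k))
```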

The first four standardized moments are:

| Degree k | Standardized moment | Comment |
|---|---|---|
| 1 | $\displaystyle{ \tilde{\mu}_1 = \frac{\mu_1}{\sigma^1} = \frac{\operatorname{E} \left[ ( X - \mu )^1 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{1/2}} = \frac{\mu - \mu}{\sqrt{ \operatorname{E} \left[ ( X - \mu )^2 \right]}} = 0 }$ | The first standardized moment is zero, because the first moment about the mean is always zero. |
| 2 | $\displaystyle{ \tilde{\mu}_2 = \frac{\mu_2}{\sigma^2} = \frac{\operatorname{E} \left[ ( X - \mu )^2 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{2/2}} = 1 }$ | The second standardized moment is one, because the second moment about the mean equals the variance $\sigma^2$. |
| 3 | $\displaystyle{ \tilde{\mu}_3 = \frac{\mu_3}{\sigma^3} = \frac{\operatorname{E} \left[ ( X - \mu )^3 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{3/2}} }$ | The third standardized moment is a measure of skewness. |
| 4 | $\displaystyle{ \tilde{\mu}_4 = \frac{\mu_4}{\sigma^4} = \frac{\operatorname{E} \left[ ( X - \mu )^4 \right]}{( \operatorname{E} \left[ ( X - \mu )^2 \right])^{4/2}} }$ | The fourth standardized moment is the kurtosis. |

For skewness and kurtosis, alternative definitions exist, which are based on the third and fourth cumulant respectively.
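The first four standardized moments can be estimated from a sample. In this minimal sketch (assuming a normal sample as the example), degrees 1 and 2 come out as 0 and 1 by construction, while degrees 3 and 4 approach the normal distribution's skewness of 0 and kurtosis of 3:

```python
import random

def standardized_moment(xs, k):
    """k-th sample standardized moment: central moment over sigma^k."""
    mu = sum(xs) / len(xs)
    central = lambda p: sum((x - mu) ** p for x in xs) / len(xs)
    return central(k) / central(2) ** (k / 2)

random.seed(1)
sample = [random.gauss(0.0, 2.0) for _ in range(100_000)]

# Degrees 1 and 2 are fixed by construction; for a normal distribution the
# skewness is about 0 and the kurtosis about 3 (up to sampling noise).
for k in (1, 2, 3, 4):
    print(k, standardized_moment(sample, k))
```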

## Other normalizations

Another scale-invariant, dimensionless measure of a distribution's characteristics is the coefficient of variation, $\displaystyle{ \frac{\sigma}{\mu} }$. However, it is not a standardized moment: firstly, it is a reciprocal (the standard deviation over a moment rather than a moment over a power of the standard deviation), and secondly, $\displaystyle{ \mu }$ is the first moment about zero (the mean), not the first moment about the mean (which is zero).
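A short sketch of the coefficient of variation (the sample data and its units are hypothetical) shows that, like the standardized moments, it is unchanged by a change of scale:

```python
def coefficient_of_variation(xs):
    """sigma / mu for a sample (population standard deviation over the mean)."""
    mu = sum(xs) / len(xs)
    sigma = (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5
    return sigma / mu

hours = [4.0, 5.5, 6.0, 8.5]         # hypothetical measurements in hours
minutes = [60.0 * h for h in hours]  # the same data in minutes

# The ratio sigma/mu is unchanged by the change of units:
print(coefficient_of_variation(hours), coefficient_of_variation(minutes))
```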

See Normalization (statistics) for further normalizing ratios.