# Symmetric polynomial

In mathematics, a **symmetric polynomial** is a polynomial *P*(*X*_{1}, *X*_{2}, …, *X*_{n}) in *n* variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, *P* is a *symmetric polynomial* if for any permutation σ of the subscripts 1, 2, ..., *n* one has *P*(*X*_{σ(1)}, *X*_{σ(2)}, …, *X*_{σ(n)}) = *P*(*X*_{1}, *X*_{2}, …, *X*_{n}).

Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. A theorem states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every *symmetric* polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.

Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory.

## Examples

The following polynomials in two variables *X*_{1} and *X*_{2} are symmetric:

- [math]\displaystyle{ X_1^3+ X_2^3-7 }[/math]
- [math]\displaystyle{ 4 X_1^2X_2^2 +X_1^3X_2 + X_1X_2^3 +(X_1+X_2)^4 }[/math]

as is the following polynomial in three variables *X*_{1}, *X*_{2}, *X*_{3}:

- [math]\displaystyle{ X_1 X_2 X_3 - 2 X_1 X_2 - 2 X_1 X_3 - 2 X_2 X_3 }[/math]

There are many ways to make specific symmetric polynomials in any number of variables (see the various types below). An example of a somewhat different flavor is

- [math]\displaystyle{ \prod_{1\leq i\lt j\leq n}(X_i-X_j)^2 }[/math]

where a polynomial that changes sign under every exchange of two variables is constructed first, and squaring it renders the result completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant).
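As a numeric sanity check (not a proof), the following Python sketch evaluates this product of squared differences at one sample point and confirms the value is unchanged under every permutation of the arguments; the function name is illustrative:

```python
from itertools import permutations

def squared_differences(xs):
    """Product of (x_i - x_j)^2 over all pairs i < j."""
    n = len(xs)
    result = 1
    for i in range(n):
        for j in range(i + 1, n):
            result *= (xs[i] - xs[j]) ** 2
    return result

# A symmetric polynomial takes a single value over all orderings of a point.
sample = (2, 5, 11)
values = {squared_differences(p) for p in permutations(sample)}
```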

On the other hand, the polynomial in two variables

- [math]\displaystyle{ X_1 - X_2 }[/math]

is not symmetric, since if one exchanges [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ X_2 }[/math] one gets a different polynomial, [math]\displaystyle{ X_2 - X_1 }[/math]. Similarly in three variables

- [math]\displaystyle{ X_1^4X_2^2X_3 + X_1X_2^4X_3^2 + X_1^2X_2X_3^4 }[/math]

is invariant only under cyclic permutations of the three variables, which is not sufficient for it to be a symmetric polynomial. However, the following is symmetric:

- [math]\displaystyle{ X_1^4X_2^2X_3 + X_1X_2^4X_3^2 + X_1^2X_2X_3^4 + X_1^4X_2X_3^2 + X_1X_2^2X_3^4 + X_1^2X_2^4X_3 }[/math]
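The distinction between cyclic invariance and full symmetry can be spot-checked numerically. This Python sketch (helper names are illustrative) evaluates both polynomials above at one sample point under all permutations of the arguments; agreeing at a point is only a necessary condition for symmetry, but disagreeing there already disproves it:

```python
from itertools import permutations

def is_symmetric_at(f, point):
    """True if f takes a single value over all permutations of `point`
    (a necessary condition for f to be a symmetric polynomial)."""
    return len({f(*p) for p in permutations(point)}) == 1

def cyclic_only(x1, x2, x3):
    # Invariant under cyclic shifts, but not under swapping two variables.
    return x1**4 * x2**2 * x3 + x1 * x2**4 * x3**2 + x1**2 * x2 * x3**4

def symmetrized(x1, x2, x3):
    # The full orbit sum from the text: all six exponent patterns of (4, 2, 1).
    return (cyclic_only(x1, x2, x3)
            + x1**4 * x2 * x3**2 + x1 * x2**2 * x3**4 + x1**2 * x2**4 * x3)
```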

## Applications

### Galois theory

One context in which symmetric polynomial functions occur is in the study of monic univariate polynomials of degree *n* having *n* roots in a given field. These *n* roots determine the polynomial, and when they are considered as independent variables, the coefficients of the polynomial are symmetric polynomial functions of the roots. Moreover the fundamental theorem of symmetric polynomials implies that a polynomial function *f* of the *n* roots can be expressed as (another) polynomial function of the coefficients of the polynomial determined by the roots if and only if *f* is given by a symmetric polynomial.

This yields the approach to solving polynomial equations by inverting this map, "breaking" the symmetry – given the coefficients of the polynomial (the elementary symmetric polynomials in the roots), how can one recover the roots? This leads to studying solutions of polynomials using the permutation group of the roots, originally in the form of Lagrange resolvents, later developed in Galois theory.

## Relation with the roots of a monic univariate polynomial

Consider a monic polynomial in *t* of degree *n*

- [math]\displaystyle{ P=t^n+a_{n-1}t^{n-1}+\cdots+a_2t^2+a_1t+a_0 }[/math]

with coefficients *a*_{i} in some field *K*. There exist *n* roots *x*_{1},…,*x*_{n} of *P* in some possibly larger field (for instance if *K* is the field of real numbers, the roots will exist in the field of complex numbers); some of the roots might be equal, but the fact that one has *all* roots is expressed by the relation

- [math]\displaystyle{ P = t^n+a_{n-1}t^{n-1}+\cdots+a_2t^2+a_1t+a_0=(t-x_1)(t-x_2)\cdots(t-x_n). }[/math]

By comparing coefficients one finds that

- [math]\displaystyle{ \begin{align} a_{n-1}&=-x_1-x_2-\cdots-x_n\\ a_{n-2}&=x_1x_2+x_1x_3+\cdots+x_2x_3+\cdots+x_{n-1}x_n = \textstyle\sum_{1\leq i\lt j\leq n}x_ix_j\\ & {}\ \, \vdots\\ a_1&=(-1)^{n-1}(x_2x_3\cdots x_n+x_1x_3x_4\cdots x_n+\cdots+x_1x_2\cdots x_{n-2}x_n+x_1x_2\cdots x_{n-1}) = \textstyle(-1)^{n-1}\sum_{i=1}^n\prod_{j\neq i}x_j\\ a_0&=(-1)^nx_1x_2\cdots x_n. \end{align} }[/math]

These are in fact just instances of Viète's formulas. They show that all coefficients of the polynomial are given in terms of the roots by a symmetric polynomial expression: although for a given polynomial *P* there may be qualitative differences between the roots (like lying in the base field *K* or not, being simple or multiple roots), none of this affects the way the roots occur in these expressions.
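Viète's formulas can be verified numerically for a sample root set. The following Python sketch (function names are illustrative) expands (*t* − *x*_{1})⋯(*t* − *x*_{n}) by repeated multiplication and compares its coefficients with the signed elementary symmetric polynomials of the roots:

```python
from itertools import combinations
from math import prod

def poly_from_roots(roots):
    """Coefficients [a_0, ..., a_{n-1}, 1] of (t - x_1)...(t - x_n),
    lowest degree first, built by repeated multiplication by (t - r)."""
    coeffs = [1]
    for r in roots:
        coeffs = [(coeffs[k - 1] if k > 0 else 0)
                  - r * (coeffs[k] if k < len(coeffs) else 0)
                  for k in range(len(coeffs) + 1)]
    return coeffs

def e(k, xs):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, k))

roots = (2, 3, 5)
coeffs = poly_from_roots(roots)
# Vieta: the coefficient of t^(n-k) is (-1)^k * e_k(x_1, ..., x_n).
vieta = [(-1) ** k * e(k, roots) for k in range(len(roots), -1, -1)]
```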

Now one may change the point of view, by taking the roots rather than the coefficients as basic parameters for describing *P*, and considering them as indeterminates rather than as constants in an appropriate field; the coefficients *a*_{i} then become just the particular symmetric polynomials given by the above equations. Those polynomials, without the sign [math]\displaystyle{ (-1)^{n-i} }[/math], are known as the elementary symmetric polynomials in *x*_{1}, …, *x*_{n}. A basic fact, known as the **fundamental theorem of symmetric polynomials**, states that *any* symmetric polynomial in *n* variables can be given by a polynomial expression in terms of these elementary symmetric polynomials. It follows that any symmetric polynomial expression in the roots of a monic polynomial can be expressed as a polynomial in the *coefficients* of the polynomial, and in particular that its value lies in the base field *K* that contains those coefficients. Thus, when working only with such symmetric polynomial expressions in the roots, it is unnecessary to know anything particular about those roots, or to compute in any larger field than *K* in which those roots may lie. In fact the values of the roots themselves become rather irrelevant, and the necessary relations between coefficients and symmetric polynomial expressions can be found by computations in terms of symmetric polynomials only. An example of such relations are Newton's identities, which express the sum of any fixed power of the roots in terms of the elementary symmetric polynomials.

## Special kinds of symmetric polynomials

There are a few types of symmetric polynomials in the variables *X*_{1}, *X*_{2}, …, *X*_{n} that are fundamental.

### Elementary symmetric polynomials

For each nonnegative integer *k*, the elementary symmetric polynomial *e*_{k}(*X*_{1}, …, *X*_{n}) is the sum of all distinct products of *k* distinct variables. (Some authors denote it by σ_{k} instead.) For *k* = 0 there is only the empty product so *e*_{0}(*X*_{1}, …, *X*_{n}) = 1, while for *k* > *n*, no products at all can be formed, so *e*_{k}(*X*_{1}, *X*_{2}, …, *X*_{n}) = 0 in these cases. The remaining *n* elementary symmetric polynomials are building blocks for all symmetric polynomials in these variables: as mentioned above, any symmetric polynomial in the variables considered can be obtained from these elementary symmetric polynomials using multiplications and additions only. In fact one has the following more detailed facts:

- any symmetric polynomial *P* in *X*_{1}, …, *X*_{n} can be written as a polynomial expression in the polynomials *e*_{k}(*X*_{1}, …, *X*_{n}) with 1 ≤ *k* ≤ *n*;
- this expression is unique up to equivalence of polynomial expressions;
- if *P* has integral coefficients, then the polynomial expression also has integral coefficients.

For example, for *n* = 2, the relevant elementary symmetric polynomials are *e*_{1}(*X*_{1}, *X*_{2}) = *X*_{1} + *X*_{2}, and *e*_{2}(*X*_{1}, *X*_{2}) = *X*_{1}*X*_{2}. The first polynomial in the list of examples above can then be written as

- [math]\displaystyle{ X_1^3+X_2^3-7=e_1(X_1,X_2)^3-3e_2(X_1,X_2)e_1(X_1,X_2)-7 }[/math]

(for a proof that this is always possible see the fundamental theorem of symmetric polynomials).
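This identity is easy to spot-check numerically; the following Python sketch evaluates both sides on a grid of integer points (agreement on enough points is strong evidence for, though not a proof of, the polynomial identity):

```python
# Check X1^3 + X2^3 - 7 == e1^3 - 3*e2*e1 - 7 at many sample points.
def e1(x1, x2): return x1 + x2        # elementary symmetric polynomial e_1
def e2(x1, x2): return x1 * x2        # elementary symmetric polynomial e_2

def lhs(x1, x2): return x1**3 + x2**3 - 7
def rhs(x1, x2): return e1(x1, x2)**3 - 3 * e2(x1, x2) * e1(x1, x2) - 7

checks = all(lhs(a, b) == rhs(a, b) for a in range(-3, 4) for b in range(-3, 4))
```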

### Monomial symmetric polynomials

Powers and products of elementary symmetric polynomials work out to rather complicated expressions. If one seeks basic *additive* building blocks for symmetric polynomials, a more natural choice is to take those symmetric polynomials that contain only one type of monomial, with only those copies required to obtain symmetry. Any monomial in *X*_{1}, …, *X*_{n} can be written as *X*_{1}^{α1}…*X*_{n}^{αn} where the exponents α_{i} are natural numbers (possibly zero); writing α = (α_{1},…,α_{n}) this can be abbreviated to *X*^{α}. The **monomial symmetric polynomial** *m*_{α}(*X*_{1}, …, *X*_{n}) is defined as the sum of all monomials *X*^{β} where β ranges over all *distinct* permutations of (α_{1},…,α_{n}). For instance one has

- [math]\displaystyle{ m_{(3,1,1)}(X_1,X_2,X_3)=X_1^3X_2X_3+X_1X_2^3X_3+X_1X_2X_3^3 }[/math],
- [math]\displaystyle{ m_{(3,2,1)}(X_1,X_2,X_3)=X_1^3X_2^2X_3+X_1^3X_2X_3^2+X_1^2X_2^3X_3+X_1^2X_2X_3^3+X_1X_2^3X_3^2+X_1X_2^2X_3^3. }[/math]

Clearly *m*_{α} = *m*_{β} when β is a permutation of α, so one usually considers only those *m*_{α} for which α_{1} ≥ α_{2} ≥ … ≥ α_{n}, in other words for which α is a partition of an integer.
These monomial symmetric polynomials form a vector space basis: every symmetric polynomial *P* can be written as a linear combination of the monomial symmetric polynomials. To do this it suffices to separate the different types of monomial occurring in *P*. In particular if *P* has integer coefficients, then so will the linear combination.

The elementary symmetric polynomials are particular cases of monomial symmetric polynomials: for 0 ≤ *k* ≤ *n* one has

- [math]\displaystyle{ e_k(X_1,\ldots,X_n)=m_\alpha(X_1,\ldots,X_n) }[/math] where α is the partition of *k* into *k* parts 1 (followed by *n* − *k* zeros).
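The definition translates directly into code. This Python sketch (names are illustrative) builds *m*_{α} by summing over the distinct permutations of the exponent vector, and recovers *e*_{k} as the special case of *k* ones padded with zeros:

```python
from itertools import permutations
from math import prod

def m(alpha, xs):
    """Monomial symmetric polynomial m_alpha evaluated at xs: the sum of
    X^beta over all distinct permutations beta of the exponent vector alpha."""
    return sum(prod(x ** a for x, a in zip(xs, beta))
               for beta in set(permutations(alpha)))

def e(k, xs):
    """e_k as the monomial symmetric polynomial of (1, ..., 1, 0, ..., 0)."""
    return m((1,) * k + (0,) * (len(xs) - k), xs)
```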

### Power-sum symmetric polynomials

For each integer *k* ≥ 1, the monomial symmetric polynomial *m*_{(k,0,…,0)}(*X*_{1}, …, *X*_{n}) is of special interest. It is the power sum symmetric polynomial, defined as

- [math]\displaystyle{ p_k(X_1,\ldots,X_n) = X_1^k + X_2^k + \cdots + X_n^k . }[/math]

All symmetric polynomials can be obtained from the first *n* power sum symmetric polynomials by additions and multiplications, possibly involving rational coefficients. More precisely,

- Any symmetric polynomial in *X*_{1}, …, *X*_{n} can be expressed as a polynomial expression with rational coefficients in the power sum symmetric polynomials *p*_{1}(*X*_{1}, …, *X*_{n}), …, *p*_{n}(*X*_{1}, …, *X*_{n}).

In particular, the remaining power sum polynomials *p*_{k}(*X*_{1}, …, *X*_{n}) for *k* > *n* can be so expressed in terms of the first *n* power sum polynomials; for example

- [math]\displaystyle{ p_3(X_1,X_2)=\textstyle\frac32p_2(X_1,X_2)p_1(X_1,X_2)-\frac12p_1(X_1,X_2)^3. }[/math]
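This identity for two variables can be verified on a grid of sample points; the following Python sketch uses exact rational arithmetic (`fractions.Fraction`) to avoid floating-point error:

```python
from fractions import Fraction

def p(k, xs):
    """Power sum symmetric polynomial p_k evaluated at xs."""
    return sum(Fraction(x) ** k for x in xs)

# Verify p_3 = (3/2) p_2 p_1 - (1/2) p_1^3 in two variables, at sample points.
ok = all(
    p(3, (a, b)) == Fraction(3, 2) * p(2, (a, b)) * p(1, (a, b))
                  - Fraction(1, 2) * p(1, (a, b)) ** 3
    for a in range(-3, 4) for b in range(-3, 4)
)
```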

In contrast to the situation for the elementary and complete homogeneous polynomials, a symmetric polynomial in *n* variables with *integral* coefficients need not be a polynomial function with integral coefficients of the power sum symmetric polynomials.
For example, for *n* = 2, the symmetric polynomial

- [math]\displaystyle{ m_{(2,1)}(X_1,X_2) = X_1^2 X_2 + X_1 X_2^2 }[/math]

has the expression

- [math]\displaystyle{ m_{(2,1)}(X_1,X_2)= \textstyle\frac12p_1(X_1,X_2)^3-\frac12p_2(X_1,X_2)p_1(X_1,X_2). }[/math]

Using three variables one gets a different expression

- [math]\displaystyle{ \begin{align}m_{(2,1)}(X_1,X_2,X_3) &= X_1^2 X_2 + X_1 X_2^2 + X_1^2 X_3 + X_1 X_3^2 + X_2^2 X_3 + X_2 X_3^2\\ &= p_1(X_1,X_2,X_3)p_2(X_1,X_2,X_3)-p_3(X_1,X_2,X_3). \end{align} }[/math]

The corresponding expression was valid for two variables as well (it suffices to set *X*_{3} to zero), but since it involves *p*_{3}, it could not be used to illustrate the statement for *n* = 2. The example shows that whether or not the expression for a given monomial symmetric polynomial in terms of the first *n* power sum polynomials involves rational coefficients may depend on *n*. But rational coefficients are *always* needed to express elementary symmetric polynomials (except the constant ones, and *e*_{1} which coincides with the first power sum) in terms of power sum polynomials. The Newton identities provide an explicit method to do this; it involves division by integers up to *n*, which explains the rational coefficients. Because of these divisions, the mentioned statement fails in general when coefficients are taken in a field of finite characteristic; however, it is valid with coefficients in any ring containing the rational numbers.
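One direction of Newton's identities can be sketched as follows: given the values of *p*_{1}, …, *p*_{n}, the recursion *k e*_{k} = Σ_{i=1}^{k} (−1)^{i−1} *e*_{k−i} *p*_{i} recovers *e*_{1}, …, *e*_{n}; the division by *k* is exactly where the rational coefficients enter. A minimal Python implementation (function name is illustrative):

```python
from fractions import Fraction

def elementary_from_power_sums(ps):
    """Given values [p_1, ..., p_n] of the power sums, recover
    [e_0, e_1, ..., e_n] via Newton's identities:
        k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i.
    The division by k is why rational coefficients appear in general."""
    n = len(ps)
    es = [Fraction(1)]                         # e_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * es[k - i] * ps[i - 1] for i in range(1, k + 1))
        es.append(s / k)
    return es
```

For the roots (2, 3, 5) one has *p*_{1} = 10, *p*_{2} = 38, *p*_{3} = 160, and the recursion returns the elementary symmetric values 10, 31, 30.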

### Complete homogeneous symmetric polynomials

For each nonnegative integer *k*, the complete homogeneous symmetric polynomial *h*_{k}(*X*_{1}, …, *X*_{n}) is the sum of all distinct monomials of degree *k* in the variables *X*_{1}, …, *X*_{n}. For instance

- [math]\displaystyle{ h_3(X_1,X_2,X_3) = X_1^3+X_1^2X_2+X_1^2X_3+X_1X_2^2+X_1X_2X_3+X_1X_3^2+X_2^3+X_2^2X_3+X_2X_3^2+X_3^3. }[/math]

The polynomial *h*_{k}(*X*_{1}, …, *X*_{n}) is also the sum of all distinct monomial symmetric polynomials of degree *k* in *X*_{1}, …, *X*_{n}, for instance for the given example

- [math]\displaystyle{ \begin{align} h_3(X_1,X_2,X_3)&=m_{(3)}(X_1,X_2,X_3)+m_{(2,1)}(X_1,X_2,X_3)+m_{(1,1,1)}(X_1,X_2,X_3)\\ &=(X_1^3+X_2^3+X_3^3)+(X_1^2X_2+X_1^2X_3+X_1X_2^2+X_1X_3^2+X_2^2X_3+X_2X_3^2)+(X_1X_2X_3).\\ \end{align} }[/math]
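Since each monomial of degree *k* corresponds to a multiset of *k* variables, *h*_{k} can be computed by iterating over combinations with repetition; a short Python sketch:

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k: one product per
    multiset of k variables, i.e. per distinct monomial of degree k."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))
```

For instance, *h*_{3} in three variables has C(5, 2) = 10 monomials, so evaluating it at (1, 1, 1) gives 10.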

All symmetric polynomials in these variables can be built up from complete homogeneous ones: any symmetric polynomial in *X*_{1}, …, *X*_{n} can be obtained from the complete homogeneous symmetric polynomials *h*_{1}(*X*_{1}, …, *X*_{n}), …, *h*_{n}(*X*_{1}, …, *X*_{n}) via multiplications and additions. More precisely:

- Any symmetric polynomial *P* in *X*_{1}, …, *X*_{n} can be written as a polynomial expression in the polynomials *h*_{k}(*X*_{1}, …, *X*_{n}) with 1 ≤ *k* ≤ *n*.
- If *P* has integral coefficients, then the polynomial expression also has integral coefficients.

For example, for *n* = 2, the relevant complete homogeneous symmetric polynomials are *h*_{1}(*X*_{1}, *X*_{2}) = *X*_{1} + *X*_{2} and *h*_{2}(*X*_{1}, *X*_{2}) = *X*_{1}^{2} + *X*_{1}*X*_{2} + *X*_{2}^{2}. The first polynomial in the list of examples above can then be written as

- [math]\displaystyle{ X_1^3+ X_2^3-7 = -2h_1(X_1,X_2)^3+3h_1(X_1,X_2)h_2(X_1,X_2)-7. }[/math]
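As with the elementary-symmetric version of this example, the identity can be spot-checked on a grid of integer points with a few lines of Python:

```python
# Check X1^3 + X2^3 - 7 == -2*h1^3 + 3*h1*h2 - 7 at many sample points.
def h1(x1, x2): return x1 + x2                  # complete homogeneous h_1
def h2(x1, x2): return x1**2 + x1 * x2 + x2**2  # complete homogeneous h_2

ok = all(
    x1**3 + x2**3 - 7 == -2 * h1(x1, x2)**3 + 3 * h1(x1, x2) * h2(x1, x2) - 7
    for x1 in range(-3, 4) for x2 in range(-3, 4)
)
```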

As in the case of power sums, the given statement applies in particular to the complete homogeneous symmetric polynomials beyond *h*_{n}(*X*_{1}, …, *X*_{n}), allowing them to be expressed in terms of the ones up to that point; again the resulting identities become invalid when the number of variables is increased.

An important aspect of complete homogeneous symmetric polynomials is their relation to elementary symmetric polynomials, which can be expressed as the identities

- [math]\displaystyle{ \sum_{i=0}^k(-1)^i e_i(X_1,\ldots,X_n)h_{k-i}(X_1,\ldots,X_n) = 0 }[/math], for all *k* > 0, and any number of variables *n*.

Since *e*_{0}(*X*_{1}, …, *X*_{n}) and *h*_{0}(*X*_{1}, …, *X*_{n}) are both equal to 1, one can isolate either the first or the last term of these summations; the former gives a set of equations that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials, and the latter gives a set of equations that allows doing the inverse. This implicitly shows that any symmetric polynomial can be expressed in terms of the *h*_{k}(*X*_{1}, …, *X*_{n}) with 1 ≤ *k* ≤ *n*: one first expresses the symmetric polynomial in terms of the elementary symmetric polynomials, and then expresses those in terms of the mentioned complete homogeneous ones.
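The identity relating *e* and *h* can be checked numerically for several values of *k* at once; note that the convention *e*_{k} = 0 for *k* > *n* falls out of the code for free, since there are no size-*k* combinations to sum over. A Python sketch:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k (0 when k > len(xs))."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k (h_0 = 1)."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

xs = (2, 3, 5, 7)
# sum_{i=0}^{k} (-1)^i e_i h_{k-i} = 0 for every k > 0.
identity_ok = all(
    sum((-1) ** i * e(i, xs) * h(k - i, xs) for i in range(k + 1)) == 0
    for k in range(1, 6)
)
```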

### Schur polynomials

Another class of symmetric polynomials is that of the Schur polynomials, which are of fundamental importance in the applications of symmetric polynomials to representation theory. They are however not as easy to describe as the other kinds of special symmetric polynomials; see the main article for details.

## Symmetric polynomials in algebra

Symmetric polynomials are important to linear algebra, representation theory, and Galois theory. They are also important in combinatorics, where they are mostly studied through the ring of symmetric functions, which avoids having to carry around a fixed number of variables all the time.

## Alternating polynomials

Analogous to symmetric polynomials are alternating polynomials: polynomials that, rather than being *invariant* under permutation of the entries, change according to the sign of the permutation.

These are all products of the Vandermonde polynomial and a symmetric polynomial, and form a quadratic extension of the ring of symmetric polynomials: the Vandermonde polynomial is a square root of the discriminant.
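For *n* = 2 this is concrete enough to check in a few lines: the Vandermonde polynomial is *X*_{1} − *X*_{2}, and its square equals the discriminant *e*_{1}^{2} − 4*e*_{2} of the monic quadratic *t*^{2} − *e*_{1}*t* + *e*_{2} with those roots. A numeric spot-check in Python:

```python
# (x1 - x2)^2 == (x1 + x2)^2 - 4*x1*x2, i.e. Vandermonde^2 == discriminant
# of t^2 - e1*t + e2 for n = 2, checked on a grid of integer points.
ok = all(
    (x1 - x2) ** 2 == (x1 + x2) ** 2 - 4 * x1 * x2
    for x1 in range(-5, 6) for x2 in range(-5, 6)
)
```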


Original source: https://en.wikipedia.org/wiki/Symmetric_polynomial