# Symmetric polynomial

In mathematics, a **symmetric polynomial** is a polynomial *P*(*X*_{1}, *X*_{2}, …, *X*_{n}) in *n* variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, *P* is a *symmetric polynomial* if for any permutation σ of the subscripts 1, 2, ..., *n* one has *P*(*X*_{σ(1)}, *X*_{σ(2)}, …, *X*_{σ(n)}) = *P*(*X*_{1}, *X*_{2}, …, *X*_{n}).
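This invariance condition lends itself to a direct numerical check. The following Python sketch (the helper `is_symmetric`, the example polynomials, and the sample points are all illustrative choices, not from the text) evaluates a candidate polynomial at sample points under every permutation of its arguments:

```python
from itertools import permutations

def is_symmetric(P, n, samples):
    """Test numerically whether P(x_1, ..., x_n) is unchanged by
    every permutation sigma of its n arguments at the sample points."""
    return all(
        P(*pt) == P(*tuple(pt[i] for i in sigma))
        for pt in samples
        for sigma in permutations(range(n))
    )

sym = lambda x1, x2: x1**3 + x2**3 - 7   # symmetric in X1, X2
asym = lambda x1, x2: x1 - x2            # changes under the swap X1 <-> X2
pts = [(2, 5), (3, -1), (0, 7)]
print(is_symmetric(sym, 2, pts))   # True
print(is_symmetric(asym, 2, pts))  # False
```

Agreement at finitely many points is of course only a sanity check, not a proof of symmetry.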

Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. The fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every *symmetric* polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.

Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory.

The following polynomials in two variables *X*_{1} and *X*_{2} are symmetric:

*X*_{1}^{3} + *X*_{2}^{3} − 7

4*X*_{1}^{2}*X*_{2}^{2} + *X*_{1}^{3}*X*_{2} + *X*_{1}*X*_{2}^{3} + (*X*_{1} + *X*_{2})^{4}

as is the following polynomial in three variables *X*_{1}, *X*_{2}, *X*_{3}:

*X*_{1}*X*_{2}*X*_{3} − 2*X*_{1}*X*_{2} − 2*X*_{1}*X*_{3} − 2*X*_{2}*X*_{3}

There are many ways to make specific symmetric polynomials in any number of variables (see the various types below). An example of a somewhat different flavor is

∏_{1 ≤ *i* < *j* ≤ *n*} (*X*_{i} − *X*_{j})^{2}

where first a polynomial is constructed that changes sign under every exchange of variables, and taking the square renders it completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant).

On the other hand, the polynomial in three variables

*X*_{1}^{4}*X*_{2}^{2}*X*_{3} + *X*_{1}*X*_{2}^{4}*X*_{3}^{2} + *X*_{1}^{2}*X*_{2}*X*_{3}^{4}

has only symmetry under cyclic permutations of the three variables, which is not sufficient to be a symmetric polynomial. However, the following is symmetric:

*X*_{1}^{4}*X*_{2}^{2}*X*_{3} + *X*_{1}*X*_{2}^{4}*X*_{3}^{2} + *X*_{1}^{2}*X*_{2}*X*_{3}^{4} + *X*_{1}^{4}*X*_{2}*X*_{3}^{2} + *X*_{1}^{2}*X*_{2}^{4}*X*_{3} + *X*_{1}*X*_{2}^{2}*X*_{3}^{4}

One context in which symmetric polynomial functions occur is in the study of monic univariate polynomials of degree *n* having *n* roots in a given field. These *n* roots determine the polynomial, and when they are considered as independent variables, the coefficients of the polynomial are symmetric polynomial functions of the roots. Moreover the fundamental theorem of symmetric polynomials implies that a polynomial function *f* of the *n* roots can be expressed as (another) polynomial function of the coefficients of the polynomial determined by the roots if and only if *f* is given by a symmetric polynomial.

This yields the approach to solving polynomial equations by inverting this map, "breaking" the symmetry – given the coefficients of the polynomial (the elementary symmetric polynomials in the roots), how can one recover the roots? This leads to studying solutions of polynomials using the permutation group of the roots, originally in the form of Lagrange resolvents, later developed in Galois theory.

Consider a monic polynomial in *t* of degree *n*

*P* = *t*^{n} + *a*_{n−1}*t*^{n−1} + ⋯ + *a*_{2}*t*^{2} + *a*_{1}*t* + *a*_{0}

with coefficients *a*_{i} in some field *K*. There exist *n* roots *x*_{1},…,*x*_{n} of *P* in some possibly larger field (for instance if *K* is the field of real numbers, the roots will exist in the field of complex numbers); some of the roots might be equal, but the fact that one has *all* roots is expressed by the relation

*P* = *t*^{n} + *a*_{n−1}*t*^{n−1} + ⋯ + *a*_{2}*t*^{2} + *a*_{1}*t* + *a*_{0} = (*t* − *x*_{1})(*t* − *x*_{2})⋯(*t* − *x*_{n}).

By comparing coefficients one finds that

*a*_{n−1} = −*x*_{1} − *x*_{2} − ⋯ − *x*_{n}

*a*_{n−2} = *x*_{1}*x*_{2} + *x*_{1}*x*_{3} + ⋯ + *x*_{n−1}*x*_{n}

⋮

*a*_{1} = (−1)^{n−1}(*x*_{2}*x*_{3}⋯*x*_{n} + *x*_{1}*x*_{3}⋯*x*_{n} + ⋯ + *x*_{1}*x*_{2}⋯*x*_{n−1})

*a*_{0} = (−1)^{n}*x*_{1}*x*_{2}⋯*x*_{n}.

These are in fact just instances of Viète's formulas. They show that all coefficients of the polynomial are given in terms of the roots by a symmetric polynomial expression: although for a given polynomial *P* there may be qualitative differences between the roots (like lying in the base field *K* or not, being simple or multiple roots), none of this affects the way the roots occur in these expressions.
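Viète's formulas can be confirmed for concrete roots by expanding the product of the factors (*t* − *x*_{i}); a small Python sketch (the helper names and the chosen roots are illustrative):

```python
from itertools import combinations
from math import prod

def poly_from_roots(roots):
    """Expand prod_i (t - x_i); returns coefficients [1, a_{n-1}, ..., a_0],
    highest degree first."""
    coeffs = [1]
    for r in roots:
        new = coeffs + [0]        # multiply the current polynomial by t
        for i, c in enumerate(coeffs):
            new[i + 1] -= r * c   # ... and subtract r times it
        coeffs = new
    return coeffs

def e(k, xs):
    """Elementary symmetric polynomial e_k evaluated at the point xs."""
    return sum(prod(c) for c in combinations(xs, k))

roots = [2, 3, 5]
print(poly_from_roots(roots))  # [1, -10, 31, -30], i.e. t^3 - 10t^2 + 31t - 30
# The coefficient of t^(n-k) is (-1)^k * e_k(roots):
assert all(poly_from_roots(roots)[k] == (-1)**k * e(k, roots)
           for k in range(len(roots) + 1))
```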

There are a few types of symmetric polynomials in the variables *X*_{1}, *X*_{2}, …, *X*_{n} that are fundamental.

For each nonnegative integer *k*, the elementary symmetric polynomial *e*_{k}(*X*_{1}, …, *X*_{n}) is the sum of all distinct products of *k* distinct variables. (Some authors denote it by σ_{k} instead.) For *k* = 0 there is only the empty product so *e*_{0}(*X*_{1}, …, *X*_{n}) = 1, while for *k* > *n*, no products at all can be formed, so *e*_{k}(*X*_{1}, *X*_{2}, …, *X*_{n}) = 0 in these cases. The remaining *n* elementary symmetric polynomials are building blocks for all symmetric polynomials in these variables: as mentioned above, any symmetric polynomial in the variables considered can be obtained from these elementary symmetric polynomials using multiplications and additions only. In fact one has the following more detailed facts:

- Any symmetric polynomial *P* in *X*_{1}, …, *X*_{n} can be written as a polynomial expression in the polynomials *e*_{k}(*X*_{1}, …, *X*_{n}) with 1 ≤ *k* ≤ *n*.
- This expression is unique up to equivalence of polynomial expressions.
- If *P* has integral coefficients, then the polynomial expression also has integral coefficients.

For example, for *n* = 2, the relevant elementary symmetric polynomials are *e*_{1}(*X*_{1}, *X*_{2}) = *X*_{1} + *X*_{2}, and *e*_{2}(*X*_{1}, *X*_{2}) = *X*_{1}*X*_{2}. The first polynomial in the list of examples above can then be written as

*X*_{1}^{3} + *X*_{2}^{3} − 7 = *e*_{1}^{3} − 3*e*_{2}*e*_{1} − 7

(for a proof that this is always possible see the fundamental theorem of symmetric polynomials).
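The theorem can be illustrated numerically. A Python sketch, assuming the concrete symmetric polynomial *X*_{1}^{3} + *X*_{2}^{3} − 7 and the classical identity *X*_{1}^{3} + *X*_{2}^{3} − 7 = *e*_{1}^{3} − 3*e*_{2}*e*_{1} − 7 (helper names and sample points are illustrative):

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k at the point xs: the sum,
    over all k-element subsets, of the product of the chosen values."""
    return sum(prod(c) for c in combinations(xs, k))

# X1^3 + X2^3 - 7 equals e1^3 - 3*e2*e1 - 7 at every point, as the
# fundamental theorem guarantees some such expression must exist.
for x1, x2 in [(1, 2), (4, -3), (0, 6)]:
    e1, e2 = e(1, (x1, x2)), e(2, (x1, x2))
    assert x1**3 + x2**3 - 7 == e1**3 - 3 * e2 * e1 - 7
print("identity holds at all sample points")
```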

Powers and products of elementary symmetric polynomials work out to rather complicated expressions. If one seeks basic *additive* building blocks for symmetric polynomials, a more natural choice is to take those symmetric polynomials that contain only one type of monomial, with only those copies required to obtain symmetry. Any monomial in *X*_{1}, …, *X*_{n} can be written as *X*_{1}^{α_{1}}⋯*X*_{n}^{α_{n}} where the exponents α_{i} are natural numbers (possibly zero); writing α = (α_{1},…,α_{n}) this can be abbreviated to *X*^{α}. The **monomial symmetric polynomial** *m*_{α}(*X*_{1}, …, *X*_{n}) is defined as the sum of all monomials *X*^{β} where β ranges over all *distinct* permutations of (α_{1},…,α_{n}). For instance one has

*m*_{(3,1,1)}(*X*_{1}, *X*_{2}, *X*_{3}) = *X*_{1}^{3}*X*_{2}*X*_{3} + *X*_{1}*X*_{2}^{3}*X*_{3} + *X*_{1}*X*_{2}*X*_{3}^{3},

*m*_{(3,2,1)}(*X*_{1}, *X*_{2}, *X*_{3}) = *X*_{1}^{3}*X*_{2}^{2}*X*_{3} + *X*_{1}^{3}*X*_{2}*X*_{3}^{2} + *X*_{1}^{2}*X*_{2}^{3}*X*_{3} + *X*_{1}^{2}*X*_{2}*X*_{3}^{3} + *X*_{1}*X*_{2}^{3}*X*_{3}^{2} + *X*_{1}*X*_{2}^{2}*X*_{3}^{3}.

Clearly *m*_{α} = *m*_{β} when β is a permutation of α, so one usually considers only those *m*_{α} for which α_{1} ≥ α_{2} ≥ … ≥ α_{n}, in other words for which α is a partition of an integer.
These monomial symmetric polynomials form a vector space basis: every symmetric polynomial *P* can be written as a linear combination of the monomial symmetric polynomials. To do this it suffices to separate the different types of monomial occurring in *P*. In particular if *P* has integer coefficients, then so will the linear combination.
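Evaluating a monomial symmetric polynomial amounts to summing *X*^{β} over the distinct permutations β of α; a minimal Python sketch (the helper name `m` and the test values are illustrative):

```python
from itertools import permutations
from math import prod

def m(alpha, xs):
    """Monomial symmetric polynomial m_alpha at the point xs: the sum of
    x^beta over all *distinct* permutations beta of the exponent vector."""
    return sum(
        prod(x**a for x, a in zip(xs, beta))
        for beta in set(permutations(alpha))   # set() keeps distinct ones
    )

# m_(2,1)(X1, X2) = X1^2 X2 + X1 X2^2
x1, x2 = 3, 5
assert m((2, 1), (x1, x2)) == x1**2 * x2 + x1 * x2**2
# A repeated exponent vector contributes only once: m_(1,1) has one monomial.
assert m((1, 1), (x1, x2)) == x1 * x2
```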

The elementary symmetric polynomials are particular cases of monomial symmetric polynomials: for 0 ≤ *k* ≤ *n* one has

*e*_{k}(*X*_{1}, …, *X*_{n}) = *m*_{α}(*X*_{1}, …, *X*_{n}), where α = (1, …, 1, 0, …, 0) is the partition with *k* entries equal to 1 followed by *n* − *k* zeros.

For each integer *k* ≥ 1, the monomial symmetric polynomial *m*_{(k,0,…,0)}(*X*_{1}, …, *X*_{n}) is of special interest. It is the power sum symmetric polynomial, defined as

*p*_{k}(*X*_{1}, …, *X*_{n}) = *X*_{1}^{k} + *X*_{2}^{k} + ⋯ + *X*_{n}^{k}.

All symmetric polynomials can be obtained from the first *n* power sum symmetric polynomials by additions and multiplications, possibly involving rational coefficients. More precisely,

Any symmetric polynomial in *X*_{1}, …, *X*_{n} can be expressed as a polynomial expression with rational coefficients in the power sum symmetric polynomials *p*_{1}(*X*_{1}, …, *X*_{n}), …, *p*_{n}(*X*_{1}, …, *X*_{n}).

In particular, the remaining power sum polynomials *p*_{k}(*X*_{1}, …, *X*_{n}) for *k* > *n* can be expressed in terms of the first *n* power sum polynomials; for example

*p*_{3}(*X*_{1}, *X*_{2}) = (3/2) *p*_{1}(*X*_{1}, *X*_{2}) *p*_{2}(*X*_{1}, *X*_{2}) − (1/2) *p*_{1}(*X*_{1}, *X*_{2})^{3}.

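Expressing a power sum *p*_{k} with *k* > *n* in the first *n* power sums can be checked at sample points; a Python sketch for *n* = 2, using the classical relation 2*p*_{3} = 3*p*_{1}*p*_{2} − *p*_{1}^{3} (the helper name and sample values are illustrative):

```python
def p(k, xs):
    """Power sum symmetric polynomial p_k evaluated at the point xs."""
    return sum(x**k for x in xs)

# For n = 2:  p3 = (3/2) p1 p2 - (1/2) p1^3.  Clearing denominators
# gives an integer identity that can be checked exactly:
for x1, x2 in [(1, 2), (3, -4), (5, 7)]:
    p1, p2, p3 = (p(k, (x1, x2)) for k in (1, 2, 3))
    assert 2 * p3 == 3 * p1 * p2 - p1**3
print("p3 expressed via p1, p2 at all sample points")
```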
In contrast to the situation for the elementary and complete homogeneous polynomials, a symmetric polynomial in *n* variables with *integral* coefficients need not be a polynomial function with integral coefficients of the power sum symmetric polynomials.
For example, for *n* = 2, the symmetric polynomial

*m*_{(2,1)}(*X*_{1}, *X*_{2}) = *X*_{1}^{2}*X*_{2} + *X*_{1}*X*_{2}^{2}

has the expression

*m*_{(2,1)}(*X*_{1}, *X*_{2}) = (1/2) *p*_{1}(*X*_{1}, *X*_{2})^{3} − (1/2) *p*_{1}(*X*_{1}, *X*_{2}) *p*_{2}(*X*_{1}, *X*_{2}).

Using three variables one gets a different expression

*m*_{(2,1)}(*X*_{1}, *X*_{2}, *X*_{3}) = *p*_{1}(*X*_{1}, *X*_{2}, *X*_{3}) *p*_{2}(*X*_{1}, *X*_{2}, *X*_{3}) − *p*_{3}(*X*_{1}, *X*_{2}, *X*_{3}).

The corresponding expression was valid for two variables as well (it suffices to set *X*_{3} to zero), but since it involves *p*_{3}, it could not be used to illustrate the statement for *n* = 2. The example shows that whether or not the expression for a given monomial symmetric polynomial in terms of the first *n* power sum polynomials involves rational coefficients may depend on *n*. But rational coefficients are *always* needed to express elementary symmetric polynomials (except the constant ones, and *e*_{1} which coincides with the first power sum) in terms of power sum polynomials. The Newton identities provide an explicit method to do this; it involves division by integers up to *n*, which explains the rational coefficients. Because of these divisions, the mentioned statement fails in general when coefficients are taken in a field of finite characteristic; however, it is valid with coefficients in any ring containing the rational numbers.
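The Newton identities can be turned into a short recursion. A Python sketch using exact rational arithmetic (the helper name `e_from_p` and the test point are illustrative; the recursion *k*·*e*_{k} = ∑_{i=1}^{k} (−1)^{i−1} *e*_{k−i} *p*_{i} is the classical one):

```python
from fractions import Fraction

def e_from_p(power_sums):
    """Recover e_1, ..., e_n from p_1, ..., p_n using Newton's identities
    k*e_k = sum_{i=1..k} (-1)^(i-1) * e_{k-i} * p_i.
    The division by k is exactly where rational coefficients enter."""
    p = [None] + [Fraction(v) for v in power_sums]  # 1-based access to p_i
    e = [Fraction(1)]                               # e_0 = 1
    for k in range(1, len(power_sums) + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]

# Power sums of the point (2, 3, 5): p1 = 10, p2 = 38, p3 = 160.
# The recovered values match e1 = 10, e2 = 31, e3 = 30.
print(e_from_p([10, 38, 160]))  # [Fraction(10, 1), Fraction(31, 1), Fraction(30, 1)]
```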

For each nonnegative integer *k*, the complete homogeneous symmetric polynomial *h*_{k}(*X*_{1}, …, *X*_{n}) is the sum of all distinct monomials of degree *k* in the variables *X*_{1}, …, *X*_{n}. For instance

*h*_{3}(*X*_{1}, *X*_{2}, *X*_{3}) = *X*_{1}^{3} + *X*_{1}^{2}*X*_{2} + *X*_{1}^{2}*X*_{3} + *X*_{1}*X*_{2}^{2} + *X*_{1}*X*_{2}*X*_{3} + *X*_{1}*X*_{3}^{2} + *X*_{2}^{3} + *X*_{2}^{2}*X*_{3} + *X*_{2}*X*_{3}^{2} + *X*_{3}^{3}.

The polynomial *h*_{k}(*X*_{1}, …, *X*_{n}) is also the sum of all distinct monomial symmetric polynomials of degree *k* in *X*_{1}, …, *X*_{n}, for instance for the given example

*h*_{3}(*X*_{1}, *X*_{2}, *X*_{3}) = *m*_{(3)}(*X*_{1}, *X*_{2}, *X*_{3}) + *m*_{(2,1)}(*X*_{1}, *X*_{2}, *X*_{3}) + *m*_{(1,1,1)}(*X*_{1}, *X*_{2}, *X*_{3}).

All symmetric polynomials in these variables can be built up from complete homogeneous ones: any symmetric polynomial in *X*_{1}, …, *X*_{n} can be obtained from the complete homogeneous symmetric polynomials *h*_{1}(*X*_{1}, …, *X*_{n}), …, *h*_{n}(*X*_{1}, …, *X*_{n}) via multiplications and additions. More precisely:

Any symmetric polynomial *P* in *X*_{1}, …, *X*_{n} can be written as a polynomial expression in the polynomials *h*_{k}(*X*_{1}, …, *X*_{n}) with 1 ≤ *k* ≤ *n*. If *P* has integral coefficients, then the polynomial expression also has integral coefficients.

For example, for *n* = 2, the relevant complete homogeneous symmetric polynomials are *h*_{1}(*X*_{1}, *X*_{2}) = *X*_{1} + *X*_{2} and *h*_{2}(*X*_{1}, *X*_{2}) = *X*_{1}^{2} + *X*_{1}*X*_{2} + *X*_{2}^{2}. The first polynomial in the list of examples above can then be written as

*X*_{1}^{3} + *X*_{2}^{3} − 7 = −2*h*_{1}^{3} + 3*h*_{1}*h*_{2} − 7.

As in the case of power sums, the given statement applies in particular to the complete homogeneous symmetric polynomials beyond *h*_{n}(*X*_{1}, …, *X*_{n}), allowing them to be expressed in terms of the ones up to that point; again the resulting identities become invalid when the number of variables is increased.
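Complete homogeneous symmetric polynomials can be evaluated by summing over multisets of variables; a Python sketch (helper names and sample values are illustrative), which also checks the integral expression −2*h*_{1}^{3} + 3*h*_{1}*h*_{2} for *X*_{1}^{3} + *X*_{2}^{3} in two variables:

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k at the point xs:
    the sum of all distinct degree-k monomials, one per multiset."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

x1, x2 = 2, 3
h1, h2 = h(1, (x1, x2)), h(2, (x1, x2))
assert h1 == x1 + x2
assert h2 == x1**2 + x1 * x2 + x2**2
# An integral polynomial expression in h1, h2, as the theory promises:
assert x1**3 + x2**3 == -2 * h1**3 + 3 * h1 * h2
print("h-expressions verified")
```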

An important aspect of complete homogeneous symmetric polynomials is their relation to elementary symmetric polynomials, which can be expressed as the identities

∑_{i=0}^{k} (−1)^{i} *e*_{i}(*X*_{1}, …, *X*_{n}) *h*_{k−i}(*X*_{1}, …, *X*_{n}) = 0,

valid for all *k* > 0, and any number of variables *n*.

Since *e*_{0}(*X*_{1}, …, *X*_{n}) and *h*_{0}(*X*_{1}, …, *X*_{n}) are both equal to 1, one can isolate either the first or the last term of these summations; the former gives a set of equations that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials, and the latter gives a set of equations that allows doing the inverse. This implicitly shows that any symmetric polynomial can be expressed in terms of the *h*_{k}(*X*_{1}, …, *X*_{n}) with 1 ≤ *k* ≤ *n*: one first expresses the symmetric polynomial in terms of the elementary symmetric polynomials, and then expresses those in terms of the mentioned complete homogeneous ones.
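The alternating-sum relation between the *e*_{k} and the *h*_{k} can be verified numerically for small *k*; a Python sketch (the sample point and the range of *k* are illustrative choices):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial: sum over k-element subsets."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous symmetric polynomial: sum over k-multisets."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# sum_{i=0}^{k} (-1)^i * e_i * h_{k-i} vanishes for every k > 0.
xs = (2, 3, 5, 7)
for k in range(1, 6):
    total = sum((-1) ** i * e(i, xs) * h(k - i, xs) for i in range(k + 1))
    assert total == 0
print("alternating sums vanish for k = 1..5")
```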

Another class of symmetric polynomials is that of the Schur polynomials, which are of fundamental importance in the applications of symmetric polynomials to representation theory. They are however not as easy to describe as the other kinds of special symmetric polynomials; see the main article for details.

Symmetric polynomials are important to linear algebra, representation theory, and Galois theory. They are also important in combinatorics, where they are mostly studied through the ring of symmetric functions, which avoids having to carry around a fixed number of variables all the time.

Analogous to symmetric polynomials are alternating polynomials: polynomials that, rather than being *invariant* under permutation of the entries, change according to the sign of the permutation.

These are all products of the Vandermonde polynomial and a symmetric polynomial, and form a quadratic extension of the ring of symmetric polynomials: the Vandermonde polynomial is a square root of the discriminant.
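The sign behavior of the Vandermonde polynomial and the full symmetry of its square can be seen numerically; a Python sketch (the helper name and the sample point are illustrative):

```python
from itertools import combinations, permutations
from math import prod

def vandermonde(xs):
    """Vandermonde polynomial: the product of (x_i - x_j) over i < j."""
    return prod(xs[i] - xs[j] for i, j in combinations(range(len(xs)), 2))

xs = (2, 5, 11)
v = vandermonde(xs)
# A single swap (an odd permutation) flips the sign ...
assert vandermonde((5, 2, 11)) == -v
# ... so the square, the discriminant, is invariant under all permutations.
for sigma in permutations(xs):
    assert vandermonde(sigma) ** 2 == v ** 2
print("square of the Vandermonde polynomial is symmetric")
```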