# Alternant matrix

In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.

Generally, if $\displaystyle{ f_1, f_2, \dots, f_n }$ are functions from a set $\displaystyle{ X }$ to a field $\displaystyle{ F }$, and $\displaystyle{ \alpha_1, \alpha_2, \ldots, \alpha_m \in X }$, then the alternant matrix has size $\displaystyle{ m \times n }$ and is defined by

$\displaystyle{ M=\begin{bmatrix} f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1)\\ f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2)\\ f_1(\alpha_3) & f_2(\alpha_3) & \cdots & f_n(\alpha_3)\\ \vdots & \vdots & \ddots &\vdots \\ f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m)\\ \end{bmatrix} }$

or, more compactly, $\displaystyle{ M_{ij} = f_j(\alpha_i) }$. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which $\displaystyle{ f_j(\alpha)=\alpha^{j-1} }$, and Moore matrices, for which $\displaystyle{ f_j(\alpha)=\alpha^{q^{j-1}} }$.
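The compact definition $\displaystyle{ M_{ij} = f_j(\alpha_i) }$ translates directly into code. The following sketch (the helper name `alternant` is our own, not a standard API) builds an alternant matrix from a list of functions and a list of points, and recovers a Vandermonde matrix as the special case $\displaystyle{ f_j(\alpha)=\alpha^{j-1} }$:

```python
from fractions import Fraction

def alternant(funcs, points):
    """Build the m x n alternant matrix M with M[i][j] = funcs[j](points[i])."""
    return [[f(a) for f in funcs] for a in points]

# Vandermonde as a special case: f_j(x) = x**(j-1), here with n = 3 columns.
vander_funcs = [lambda x, k=k: x**k for k in range(3)]
V = alternant(vander_funcs, [Fraction(2), Fraction(3), Fraction(5)])
# V == [[1, 2, 4], [1, 3, 9], [1, 5, 25]]
```

Exact rationals (`Fraction`) are used so that later determinant and row-reduction computations are free of floating-point error.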

## Properties

• The alternant can be used to check the linear independence of the functions $\displaystyle{ f_1, f_2, \dots, f_n }$ in function space. For example, let $\displaystyle{ f_1(x) = \sin(x) }$, $\displaystyle{ f_2(x) = \cos(x) }$ and choose $\displaystyle{ \alpha_1 = 0, \alpha_2 = \pi/2 }$. Then the alternant is the matrix $\displaystyle{ \left[\begin{smallmatrix}0 & 1 \\ 1 & 0 \end{smallmatrix}\right] }$ and the alternant determinant is $\displaystyle{ -1 \neq 0 }$. Therefore $\displaystyle{ M }$ is invertible and the vectors $\displaystyle{ \{\sin(x), \cos(x)\} }$ form a basis for their span: in particular, $\displaystyle{ \sin(x) }$ and $\displaystyle{ \cos(x) }$ are linearly independent.
• Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let $\displaystyle{ f_1(x) = \sin(x) }$, $\displaystyle{ f_2(x) = \cos(x) }$ and choose $\displaystyle{ \alpha_1 = 0, \alpha_2 = \pi }$. Then the alternant is $\displaystyle{ \left[\begin{smallmatrix}0 & 1 \\ 0 & -1 \end{smallmatrix}\right] }$ and the alternant determinant is 0, but we have already seen that $\displaystyle{ \sin(x) }$ and $\displaystyle{ \cos(x) }$ are linearly independent.
• Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers A and B for which $\displaystyle{ \frac{A}{x+1} + \frac{B}{x+2} = \frac{1}{(x+1)(x+2)} }$. Choosing $\displaystyle{ f_1(x) = \frac{1}{x+1} }$, $\displaystyle{ f_2(x) = \frac{1}{x+2} }$, $\displaystyle{ f_3(x) = \frac{1}{(x+1)(x+2)} }$ and $\displaystyle{ (\alpha_1,\alpha_2,\alpha_3) = (1,2,3) }$, we obtain the alternant $\displaystyle{ \begin{bmatrix} 1/2 & 1/3 & 1/6 \\ 1/3 & 1/4 & 1/12 \\ 1/4 & 1/5 & 1/20 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix} }$. Therefore, $\displaystyle{ (1,-1,-1) }$ is in the nullspace of the matrix: that is, $\displaystyle{ f_1 - f_2 - f_3 = 0 }$. Moving $\displaystyle{ f_3 }$ to the other side of the equation gives the partial fraction decomposition $\displaystyle{ A = 1, B = -1 }$.
• If $\displaystyle{ n = m }$ and $\displaystyle{ \alpha_i = \alpha_j }$ for some $\displaystyle{ i \neq j }$, then the alternant determinant is zero (as a row is repeated).
• If $\displaystyle{ n = m }$ and the functions $\displaystyle{ f_j(x) }$ are all polynomials, then $\displaystyle{ (\alpha_j - \alpha_i) }$ divides the alternant determinant for all $\displaystyle{ 1 \leq i \lt j \leq n }$. In particular, if $\displaystyle{ V }$ is the Vandermonde matrix in the same points, then $\displaystyle{ \det V = \prod_{i \lt j} (\alpha_j - \alpha_i) }$ divides every such polynomial alternant determinant. The ratio $\displaystyle{ \frac{\det M}{\det V} }$ is therefore a polynomial in $\displaystyle{ \alpha_1, \ldots, \alpha_m }$, called the bialternant. The Schur polynomial $\displaystyle{ s_{(\lambda_1, \dots, \lambda_n)} }$ is classically defined as the bialternant of the polynomials $\displaystyle{ f_j(x) = x^{\lambda_j} }$.
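The partial-fraction computation above can be checked mechanically: build the alternant in exact rational arithmetic, row-reduce it, and read off the nullspace. A minimal sketch (the helpers `alternant` and `rref` are our own illustrative names, not a standard API):

```python
from fractions import Fraction

def alternant(funcs, points):
    """M[i][j] = funcs[j](points[i])."""
    return [[f(a) for f in funcs] for a in points]

def rref(A):
    """Gauss-Jordan reduction to reduced row echelon form; returns (R, pivot columns)."""
    A = [row[:] for row in A]
    pivots, r = [], 0
    for c in range(len(A[0])):
        pr = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        A[r] = [v / A[r][c] for v in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

f1 = lambda x: Fraction(1, x + 1)
f2 = lambda x: Fraction(1, x + 2)
f3 = lambda x: Fraction(1, (x + 1) * (x + 2))
M = alternant([f1, f2, f3], [1, 2, 3])
# M == [[1/2, 1/3, 1/6], [1/3, 1/4, 1/12], [1/4, 1/5, 1/20]]

R, pivots = rref(M)
# R == [[1, 0, 1], [0, 1, -1], [0, 0, 0]], so (1, -1, -1) spans the nullspace:
# f1 - f2 - f3 = 0, i.e. 1/(x+1) - 1/(x+2) = 1/((x+1)*(x+2)).
```

The zero row confirms that the three functions are linearly dependent, and the pivot structure recovers the coefficients $\displaystyle{ A = 1, B = -1 }$ of the partial fraction decomposition.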
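As a concrete illustration of the bialternant, a Schur polynomial can be evaluated at a point as a ratio of two alternant determinants. This sketch writes the "staircase" shift $\displaystyle{ n - j }$ into the exponents explicitly (the convention $\displaystyle{ f_j(x) = x^{\lambda_j} }$ above absorbs that shift into the $\displaystyle{ \lambda_j }$); the function names are our own:

```python
from fractions import Fraction
from itertools import permutations

def det(A):
    """Leibniz-formula determinant; fine for the small matrices used here."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = sign
        for i in range(n):
            prod *= A[i][perm[i]]
        total += prod
    return total

def schur_value(lam, xs):
    """Evaluate s_lam at the point xs as det(x_i^(lam_j + n-1-j)) / det(x_i^(n-1-j))."""
    n = len(xs)
    num = [[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[x ** (n - 1 - j) for j in range(n)] for x in xs]  # Vandermonde
    return Fraction(det(num), det(den))

# s_(2,1)(x1, x2) = x1^2*x2 + x1*x2^2, so at (2, 3) it equals 12 + 18 = 30.
print(schur_value((2, 1), (2, 3)))  # prints 30
```

Since both determinants are alternating in the $\displaystyle{ x_i }$, the ratio is a symmetric polynomial, which is why the same value appears at any permutation of the point.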
