Alternant matrix

In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.

Generally, if [math]\displaystyle{ f_1, f_2, \dots, f_n }[/math] are functions from a set [math]\displaystyle{ X }[/math] to a field [math]\displaystyle{ F }[/math], and [math]\displaystyle{ {\alpha_1, \alpha_2, \ldots, \alpha_m} \in X }[/math], then the alternant matrix has size [math]\displaystyle{ m \times n }[/math] and is defined by

[math]\displaystyle{ M=\begin{bmatrix} f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1)\\ f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2)\\ f_1(\alpha_3) & f_2(\alpha_3) & \cdots & f_n(\alpha_3)\\ \vdots & \vdots & \ddots &\vdots \\ f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m)\\ \end{bmatrix} }[/math]

or, more compactly, [math]\displaystyle{ M_{ij} = f_j(\alpha_i) }[/math]. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which [math]\displaystyle{ f_j(\alpha)=\alpha^{j-1} }[/math], and Moore matrices, for which [math]\displaystyle{ f_j(\alpha)=\alpha^{q^{j-1}} }[/math].
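
To make the definition concrete, here is a minimal Python sketch (the helper alternant is a hypothetical name, not a standard library routine) that builds [math]\displaystyle{ M_{ij} = f_j(\alpha_i) }[/math] from a list of functions and a list of points, and checks that the choice [math]\displaystyle{ f_j(\alpha)=\alpha^{j-1} }[/math] reproduces a Vandermonde matrix:

```python
import numpy as np

def alternant(funcs, points):
    """Alternant matrix M with M[i, j] = funcs[j](points[i])."""
    return np.array([[f(a) for f in funcs] for a in points], dtype=float)

# Special case f_j(a) = a**(j-1): the alternant is a Vandermonde matrix.
alphas = [2.0, 3.0, 5.0]
powers = [lambda a, k=k: a**k for k in range(3)]
M = alternant(powers, alphas)
assert np.allclose(M, np.vander(alphas, increasing=True))
```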

Properties

  • The alternant can be used to check the linear independence of the functions [math]\displaystyle{ f_1, f_2, \dots, f_n }[/math] in function space. For example, let [math]\displaystyle{ f_1(x) = \sin(x) }[/math], [math]\displaystyle{ f_2(x) = \cos(x) }[/math] and choose [math]\displaystyle{ \alpha_1 = 0, \alpha_2 = \pi/2 }[/math]. Then the alternant is the matrix [math]\displaystyle{ \left[\begin{smallmatrix}0 & 1 \\ 1 & 0 \end{smallmatrix}\right] }[/math] and the alternant determinant is [math]\displaystyle{ -1 \neq 0 }[/math]. Therefore [math]\displaystyle{ M }[/math] is invertible, so [math]\displaystyle{ \sin(x) }[/math] and [math]\displaystyle{ \cos(x) }[/math] are linearly independent.
  • Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let [math]\displaystyle{ f_1(x) = \sin(x) }[/math], [math]\displaystyle{ f_2(x) = \cos(x) }[/math] and choose [math]\displaystyle{ \alpha_1 = 0, \alpha_2 = \pi }[/math]. Then the alternant is [math]\displaystyle{ \left[\begin{smallmatrix}0 & 1 \\ 0 & -1 \end{smallmatrix}\right] }[/math] and the alternant determinant is 0, but we have already seen that [math]\displaystyle{ \sin(x) }[/math] and [math]\displaystyle{ \cos(x) }[/math] are linearly independent.
  • Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers A and B for which [math]\displaystyle{ \frac{A}{x+1} + \frac{B}{x+2} = \frac{1}{(x+1)(x+2)} }[/math]. Choosing [math]\displaystyle{ f_1(x) = \frac{1}{x+1} }[/math], [math]\displaystyle{ f_2(x) = \frac{1}{x+2} }[/math], [math]\displaystyle{ f_3(x) = \frac{1}{(x+1)(x+2)} }[/math] and [math]\displaystyle{ (\alpha_1,\alpha_2,\alpha_3) = (1,2,3) }[/math], we obtain the alternant [math]\displaystyle{ \begin{bmatrix} 1/2 & 1/3 & 1/6 \\ 1/3 & 1/4 & 1/12 \\ 1/4 & 1/5 & 1/20 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix} }[/math]. Therefore, [math]\displaystyle{ (1,-1,-1) }[/math] is in the nullspace of the matrix: that is, [math]\displaystyle{ f_1 - f_2 - f_3 = 0 }[/math]. Rearranging to [math]\displaystyle{ f_1 - f_2 = f_3 }[/math] gives the partial fraction decomposition with [math]\displaystyle{ A = 1, B = -1 }[/math] (this computation is verified in the first sketch following this list).
  • If [math]\displaystyle{ n = m }[/math] and [math]\displaystyle{ \alpha_i = \alpha_j }[/math] for some [math]\displaystyle{ i \neq j }[/math], then the alternant determinant is zero (as a row is repeated).
  • If [math]\displaystyle{ n = m }[/math] and the functions [math]\displaystyle{ f_j(x) }[/math] are all polynomials, then [math]\displaystyle{ (\alpha_j - \alpha_i) }[/math] divides the alternant determinant for all [math]\displaystyle{ 1 \leq i \lt j \leq n }[/math]. In particular, if V is a Vandermonde matrix, then [math]\displaystyle{ \prod_{i \lt j} (\alpha_j - \alpha_i) = \det V }[/math] divides such polynomial alternant determinants. The ratio [math]\displaystyle{ \frac{\det M}{\det V} }[/math] is therefore a polynomial in [math]\displaystyle{ \alpha_1, \ldots, \alpha_m }[/math] called the bialternant. The Schur polynomial [math]\displaystyle{ s_{(\lambda_1, \dots, \lambda_n)} }[/math] is classically defined as the bialternant of the polynomials [math]\displaystyle{ f_j(x) = x^{\lambda_j + n - j} }[/math] (see the second sketch following this list).
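
The nullspace computation in the partial-fraction example above can be reproduced mechanically. A minimal sympy sketch, using exact rational arithmetic so that the nullspace is found without rounding error:

```python
from sympy import Matrix, Rational

f = [lambda x: 1 / (x + 1),
     lambda x: 1 / (x + 2),
     lambda x: 1 / ((x + 1) * (x + 2))]
alphas = [Rational(1), Rational(2), Rational(3)]

# Alternant M[i, j] = f[j](alphas[i]), evaluated exactly.
M = Matrix(3, 3, lambda i, j: f[j](alphas[i]))
print(M.nullspace())
# [Matrix([[-1], [1], [1]])] -- proportional to (1, -1, -1),
# i.e. f1 - f2 - f3 = 0, recovering A = 1, B = -1.
```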
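The bialternant description of Schur polynomials can likewise be checked symbolically. A small sketch for [math]\displaystyle{ \lambda = (2,1) }[/math] in two variables, taking both determinants over the same decreasing exponent ordering so that their signs cancel:

```python
from sympy import symbols, Matrix, cancel

xs = symbols('x1 x2')
lam = (2, 1)   # the partition lambda
n = len(xs)

# Numerator: alternant determinant for f_j(x) = x**(lambda_j + n - j),
# written with 0-based j as lam[j] + n - 1 - j.
num = Matrix(n, n, lambda i, j: xs[i] ** (lam[j] + n - 1 - j)).det()
# Denominator: Vandermonde determinant with the same column ordering.
den = Matrix(n, n, lambda i, j: xs[i] ** (n - 1 - j)).det()

print(cancel(num / den))   # x1**2*x2 + x1*x2**2, i.e. s_(2,1)(x1, x2)
```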
