Moore matrix


In linear algebra, a Moore matrix, introduced by E. H. Moore (1896), is a matrix defined over a finite field. When it is a square matrix, its determinant is called a Moore determinant (this is unrelated to the Moore determinant of a quaternionic Hermitian matrix). The columns of a Moore matrix are successive powers of the Frobenius automorphism applied to its first column (beginning with the zeroth power of the Frobenius automorphism in the first column), so it is an m × n matrix [math]\displaystyle{ M=\begin{bmatrix} \alpha_1 & \alpha_1^q & \dots & \alpha_1^{q^{n-1}}\\ \alpha_2 & \alpha_2^q & \dots & \alpha_2^{q^{n-1}}\\ \alpha_3 & \alpha_3^q & \dots & \alpha_3^{q^{n-1}}\\ \vdots & \vdots & \ddots &\vdots \\ \alpha_m & \alpha_m^q & \dots & \alpha_m^{q^{n-1}}\\ \end{bmatrix} }[/math] or [math]\displaystyle{ M_{i,j} = \alpha_i^{q^{j-1}} }[/math] for all indices i and j. (Some authors use the transpose of the above matrix.)
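As an illustration (not part of the original article), the following sketch builds a small Moore matrix over GF(8) = GF(2)[x]/(x^3 + x + 1), with field elements encoded as 3-bit integers (bit i is the coefficient of x^i). The encoding, the modulus choice, and the helper names `gf_mul`, `gf_pow`, and `moore_matrix` are my own.

```python
MOD = 0b1011  # x^3 + x + 1, irreducible over GF(2); the quotient is GF(8)
DEG = 3       # extension degree over GF(2)

def gf_mul(a, b):
    """Multiply in GF(8): carry-less polynomial product, then reduce mod MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    while r.bit_length() > DEG:
        r ^= MOD << (r.bit_length() - DEG - 1)
    return r

def gf_pow(a, e):
    """Repeated multiplication; enough for the small exponents q^j used here."""
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def moore_matrix(alphas, q=2):
    """Row i holds alpha_i, alpha_i^q, alpha_i^(q^2), ...: M[i][j] = alpha_i^(q^j)."""
    n = len(alphas)
    return [[gf_pow(a, q ** j) for j in range(n)] for a in alphas]

# 1, x, x^2 in the bit encoding; each row applies Frobenius (squaring) repeatedly.
print(moore_matrix([1, 2, 4]))
```

Here q = 2, so the Frobenius automorphism is squaring, and each column of the output is the elementwise square of the previous one.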

The Moore determinant of a square Moore matrix (so m = n) can be expressed as:

[math]\displaystyle{ \det(M) = \prod_{\mathbf{c}} \left( c_1\alpha_1 + \cdots + c_n\alpha_n \right), }[/math]

where c runs over a complete set of direction vectors with coordinates in the finite field of order q, made specific by having the last non-zero entry equal to 1, i.e.,

[math]\displaystyle{ \det(M) = \prod_{1 \le i \le n} \prod_{c_1, \dots, c_{i-1} \in \mathbb{F}_q} \left( c_1\alpha_1 + \cdots + c_{i-1}\alpha_{i-1} + \alpha_i \right). }[/math]

In particular, the Moore determinant vanishes if and only if the elements α<sub>1</sub>, …, α<sub>n</sub> in the left-hand column are linearly dependent over the finite field of order q, so it is analogous to the Wronskian of several functions.
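The determinant identity and the vanishing criterion can both be checked numerically. The sketch below (my own illustration, under the same GF(8) bit-encoding assumption as above, with hypothetical helper names) computes the determinant of a 3 × 3 Moore matrix over GF(8) two ways: by the Leibniz expansion, and by the product formula with coefficients ranging over F_2. It then checks that the determinant vanishes for elements that are linearly dependent over GF(2).

```python
from itertools import permutations, product

MOD = 0b1011  # x^3 + x + 1, irreducible over GF(2); the field is GF(8)
DEG = 3
Q = 2         # the base field F_q; here addition is XOR and scalars are 0 or 1

def gf_mul(a, b):
    """Multiply in GF(8): carry-less polynomial product, then reduce mod MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    while r.bit_length() > DEG:
        r ^= MOD << (r.bit_length() - DEG - 1)
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def moore_matrix(alphas):
    """M[i][j] = alpha_i^(Q^j): successive Frobenius powers along each row."""
    n = len(alphas)
    return [[gf_pow(a, Q ** j) for j in range(n)] for a in alphas]

def det(m):
    """Leibniz expansion; in characteristic 2 the permutation signs cancel."""
    total = 0
    for p in permutations(range(len(m))):
        term = 1
        for i, j in enumerate(p):
            term = gf_mul(term, m[i][j])
        total ^= term
    return total

def moore_det_product(alphas):
    """Product over i of (c_1 a_1 + ... + c_{i-1} a_{i-1} + a_i), c_j in F_Q."""
    result = 1
    for i, a_i in enumerate(alphas):
        for cs in product(range(Q), repeat=i):
            s = a_i
            for c, a in zip(cs, alphas):
                if c:          # over F_2 the only nonzero scalar is 1
                    s ^= a     # addition in GF(2^k) is XOR
            result = gf_mul(result, s)
    return result

basis = [1, 2, 4]      # 1, x, x^2: a basis of GF(8) over GF(2)
dependent = [1, 2, 3]  # 3 = 1 + 2, so these are dependent over GF(2)
print(det(moore_matrix(basis)), moore_det_product(basis))        # equal, nonzero
print(det(moore_matrix(dependent)), moore_det_product(dependent))  # both zero
```

Both computations agree, and the determinant is zero exactly when the α's are GF(2)-linearly dependent, matching the Wronskian-like criterion above.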

Dickson used the Moore determinant in finding the modular invariants of the general linear group over a finite field.
