Zassenhaus algorithm

Short description: Algorithm for computing bases of the sum and intersection of two subspaces

In mathematics, the Zassenhaus algorithm[1] is a method to calculate a basis for the intersection and sum of two subspaces of a vector space. It is named after Hans Zassenhaus, but no publication of this algorithm by him is known.[2] It is used in computer algebra systems.[3]

Algorithm

Input

Let V be a vector space and U, W two finite-dimensional subspaces of V with the following spanning sets:

[math]\displaystyle{ U = \langle u_1, \ldots, u_n\rangle }[/math]

and

[math]\displaystyle{ W = \langle w_1, \ldots, w_k\rangle. }[/math]

Finally, let [math]\displaystyle{ B_1, \ldots, B_m }[/math] be linearly independent vectors so that each [math]\displaystyle{ u_i }[/math] and each [math]\displaystyle{ w_i }[/math] can be written as

[math]\displaystyle{ u_i = \sum_{j=1}^m a_{i,j}B_j }[/math]

and

[math]\displaystyle{ w_i = \sum_{j=1}^m b_{i,j}B_j. }[/math]

Output

The algorithm computes a basis of the sum [math]\displaystyle{ U + W }[/math] and a basis of the intersection [math]\displaystyle{ U \cap W }[/math].

Algorithm

The algorithm creates the following block matrix of size [math]\displaystyle{ ((n+k) \times (2m)) }[/math]:

[math]\displaystyle{ \begin{pmatrix} a_{1,1}&a_{1,2}&\cdots&a_{1,m}&a_{1,1}&a_{1,2}&\cdots&a_{1,m}\\ \vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots\\ a_{n,1}&a_{n,2}&\cdots&a_{n,m}&a_{n,1}&a_{n,2}&\cdots&a_{n,m}\\ b_{1,1}&b_{1,2}&\cdots&b_{1,m}&0&0&\cdots&0\\ \vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots\\ b_{k,1}&b_{k,2}&\cdots&b_{k,m}&0&0&\cdots&0 \end{pmatrix} }[/math]
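As a concrete sketch, the block matrix can be assembled from the two coordinate matrices [math]\displaystyle{ A = (a_{i,j}) }[/math] and [math]\displaystyle{ B = (b_{i,j}) }[/math]. This minimal Python helper (the function name is illustrative, not from any particular library) uses exact rational arithmetic:

```python
from fractions import Fraction

def zassenhaus_block(A, B):
    """Assemble the (n+k) x 2m block matrix  [[A, A], [B, 0]].

    A: n x m rows of coordinates a_{i,j} of the u_i,
    B: k x m rows of coordinates b_{i,j} of the w_i.
    """
    m = len(A[0])
    # each u-row appears twice: the row (u_i, u_i)
    top = [[Fraction(x) for x in row] + [Fraction(x) for x in row] for row in A]
    # each w-row is padded with zeros: the row (w_i, 0)
    bottom = [[Fraction(x) for x in row] + [Fraction(0)] * m for row in B]
    return top + bottom
```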

Using elementary row operations, this matrix is transformed into row echelon form, after which it has the following shape:

[math]\displaystyle{ \begin{pmatrix} c_{1,1}&c_{1,2}&\cdots&c_{1,m}&\bullet&\bullet&\cdots&\bullet\\ \vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots\\ c_{q,1}&c_{q,2}&\cdots&c_{q,m}&\bullet&\bullet&\cdots&\bullet\\ 0&0&\cdots&0&d_{1,1}&d_{1,2}&\cdots&d_{1,m}\\ \vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots\\ 0&0&\cdots&0&d_{\ell,1}&d_{\ell,2}&\cdots&d_{\ell,m}\\ 0&0&\cdots&0&0&0&\cdots&0\\ \vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots\\ 0&0&\cdots&0&0&0&\cdots&0 \end{pmatrix} }[/math]

Here, [math]\displaystyle{ \bullet }[/math] stands for arbitrary numbers, and the vectors [math]\displaystyle{ (c_{p,1}, c_{p,2}, \ldots, c_{p,m}) }[/math] for every [math]\displaystyle{ p \in \{1, \ldots, q\} }[/math] and [math]\displaystyle{ (d_{p,1}, \ldots, d_{p,m}) }[/math] for every [math]\displaystyle{ p\in \{1, \ldots, \ell\} }[/math] are nonzero.

Then [math]\displaystyle{ (y_1, \ldots, y_q) }[/math] with

[math]\displaystyle{ y_i := \sum_{j=1}^m c_{i,j}B_j }[/math]

is a basis of [math]\displaystyle{ U+W }[/math] and [math]\displaystyle{ (z_1, \ldots, z_\ell) }[/math] with

[math]\displaystyle{ z_i := \sum_{j=1}^m d_{i,j}B_j }[/math]

is a basis of [math]\displaystyle{ U \cap W }[/math].
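The whole procedure can be sketched in plain Python (exact arithmetic via `fractions`; function names are illustrative): build the block matrix, row-reduce it, then read the two bases off the left and right halves of the nonzero rows.

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M via Gauss-Jordan elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # scale pivot row to 1
        for i in range(rows):                        # clear the rest of column c
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def zassenhaus(U_rows, W_rows):
    """Bases of U+W and U∩W from coordinate rows of spanning sets."""
    m = len(U_rows[0])
    M = [list(u) + list(u) for u in U_rows] \
      + [list(w) + [0] * m for w in W_rows]
    sum_basis, int_basis = [], []
    for row in rref(M):
        left, right = row[:m], row[m:]
        if any(x != 0 for x in left):
            sum_basis.append(left)       # rows (c_p, ...) -> basis of U+W
        elif any(x != 0 for x in right):
            int_basis.append(right)      # rows (0, d_p)   -> basis of U∩W
    return sum_basis, int_basis
```

Note the sketch computes the fully reduced echelon form, which is slightly more work than the algorithm requires, but it makes the output bases canonical.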

Proof of correctness

First, we define [math]\displaystyle{ \pi_1 : V \times V \to V, (a, b) \mapsto a }[/math] to be the projection onto the first component.

Let [math]\displaystyle{ H:=\{(u,u) \mid u\in U\}+\{(w,0) \mid w\in W\} \subseteq V\times V. }[/math] Then [math]\displaystyle{ \pi_1(H) = U+W }[/math] and [math]\displaystyle{ H\cap(0\times V)=0\times(U\cap W) }[/math].

Also, [math]\displaystyle{ H \cap (0 \times V) }[/math] is the kernel of [math]\displaystyle{ {\pi_1|}_H }[/math], the projection restricted to [math]\displaystyle{ H }[/math]. Therefore, by the rank–nullity theorem, [math]\displaystyle{ \dim(H) = \dim(U+W)+\dim(U\cap W) }[/math].
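Spelled out, the dimension identity follows by applying rank–nullity to the restricted projection:

```latex
\begin{aligned}
\dim(H) &= \dim\ker\bigl({\pi_1|}_H\bigr) + \dim\operatorname{im}\bigl({\pi_1|}_H\bigr) \\
        &= \dim\bigl(0 \times (U \cap W)\bigr) + \dim(U + W) \\
        &= \dim(U \cap W) + \dim(U + W).
\end{aligned}
```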

The Zassenhaus algorithm computes a basis of [math]\displaystyle{ H }[/math]: the rows of the original block matrix span [math]\displaystyle{ H }[/math], and elementary row operations do not change the row space. The left halves [math]\displaystyle{ (c_{p,1}, \ldots, c_{p,m}) }[/math] of the first [math]\displaystyle{ q }[/math] rows are linearly independent (the matrix is in row echelon form) and span [math]\displaystyle{ \pi_1(H) = U+W }[/math], because the remaining nonzero rows have zero left half. Hence the vectors [math]\displaystyle{ y_i }[/math] form a basis of [math]\displaystyle{ U+W }[/math].

The rows of the form [math]\displaystyle{ (0, z_i) }[/math] (with [math]\displaystyle{ z_i \neq 0 }[/math]) clearly lie in [math]\displaystyle{ H \cap (0 \times V) }[/math], and because the matrix is in row echelon form, they are linearly independent. Since all the nonzero rows, [math]\displaystyle{ (y_i, \bullet) }[/math] and [math]\displaystyle{ (0, z_i) }[/math], together form a basis of [math]\displaystyle{ H }[/math], the dimension formula above shows that there are exactly [math]\displaystyle{ \dim(U \cap W) }[/math] rows of the form [math]\displaystyle{ (0, z_i) }[/math]. Therefore, the [math]\displaystyle{ z_i }[/math] form a basis of [math]\displaystyle{ U \cap W }[/math].

Example

Consider the two subspaces [math]\displaystyle{ U = \left\langle \left( \begin{array}{r} 1\\ -1 \\ 0 \\ 1 \end{array} \right), \left( \begin{array}{r} 0 \\ 0 \\ 1 \\ -1 \end{array} \right) \right\rangle }[/math] and [math]\displaystyle{ W = \left\langle \left( \begin{array}{r} 5 \\ 0 \\ -3 \\ 3 \end{array} \right), \left( \begin{array}{r} 0 \\ 5 \\ -3 \\ -2 \end{array} \right) \right\rangle }[/math] of the vector space [math]\displaystyle{ \mathbb{R}^4 }[/math].

Using the standard basis, we create the following matrix of dimension [math]\displaystyle{ (2 + 2) \times (2 \cdot 4) }[/math]:

[math]\displaystyle{ \left( \begin{array}{rrrrrrrr} 1 & -1 & 0 & 1 & & 1 & -1 & 0 & 1 \\ 0&0&1&-1& &0&0&1&-1 \\ \\ 5&0&-3&3& &0&0&0&0 \\ 0&5&-3&-2& &0&0&0&0 \end{array} \right). }[/math]

Using elementary row operations, we transform this matrix into the following matrix:

[math]\displaystyle{ \left( \begin{array}{rrrrrrrrr} 1 & 0 & 0 & 0 & & \bullet & \bullet & \bullet & \bullet \\ 0&1&0&-1& &\bullet&\bullet&\bullet&\bullet\\ 0&0&1&-1& &\bullet&\bullet&\bullet&\bullet\\ \\ 0&0&0&0& &1&-1&0&1 \end{array} \right) }[/math] (Some entries have been replaced by "[math]\displaystyle{ \bullet }[/math]" because they are irrelevant to the result.)

Therefore [math]\displaystyle{ \left( \left(\begin{array}{r} 1\\0\\0\\0\end{array} \right), \left( \begin{array}{r} 0\\1\\0\\-1\end{array} \right), \left( \begin{array}{r} 0\\0\\1\\-1\end{array}\right) \right) }[/math] is a basis of [math]\displaystyle{ U+W }[/math], and [math]\displaystyle{ \left( \left(\begin{array}{r} 1\\-1\\0\\1\end{array}\right) \right) }[/math] is a basis of [math]\displaystyle{ U \cap W }[/math].
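As a quick sanity check on the intersection result, the vector [math]\displaystyle{ (1, -1, 0, 1)^T }[/math] is the first spanning vector of [math]\displaystyle{ U }[/math], and it also lies in [math]\displaystyle{ W }[/math] as the combination [math]\displaystyle{ (w_1 - w_2)/5 }[/math]:

```python
from fractions import Fraction as F

u1 = [1, -1, 0, 1]       # first spanning vector of U
w1 = [5, 0, -3, 3]       # spanning vectors of W
w2 = [0, 5, -3, -2]

# (1, -1, 0, 1) is u1 itself, so it lies in U; it also equals (w1 - w2)/5,
# so it lies in W and indeed spans the one-dimensional intersection.
combo = [(F(a) - F(b)) / 5 for a, b in zip(w1, w2)]
assert combo == u1
```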

References

  1. Luks, Eugene M.; Rákóczi, Ferenc; Wright, Charles R. B. (April 1997), "Some algorithms for nilpotent permutation groups", Journal of Symbolic Computation 23 (4): 335–354, doi:10.1006/jsco.1996.0092
  2. Fischer, Gerd (2012) (in de), Lernbuch Lineare Algebra und Analytische Geometrie, Vieweg+Teubner, pp. 207–210, doi:10.1007/978-3-8348-2379-3, ISBN 978-3-8348-2378-6 
  3. The GAP Group (February 13, 2015), "24 Matrices", GAP Reference Manual, Release 4.7, http://www.gap-system.org/Manuals/doc/ref/chap24.html, retrieved 2015-06-11 
