Lie's theorem


In mathematics, specifically the theory of Lie algebras, Lie's theorem states that,[1] over an algebraically closed field of characteristic zero, if [math]\displaystyle{ \pi: \mathfrak{g} \to \mathfrak{gl}(V) }[/math] is a finite-dimensional representation of a solvable Lie algebra, then [math]\displaystyle{ \pi(\mathfrak{g}) }[/math] stabilizes a flag [math]\displaystyle{ V = V_0 \supset V_1 \supset \cdots \supset V_n = 0, \quad \operatorname{codim} V_i = i }[/math]; here "stabilizes" means [math]\displaystyle{ \pi(X) V_i \subset V_i }[/math] for each [math]\displaystyle{ X \in \mathfrak{g} }[/math] and each i. Put another way, the theorem says there is a basis for V such that all linear transformations in [math]\displaystyle{ \pi(\mathfrak{g}) }[/math] are represented by upper triangular matrices.[2] This generalizes the result of Frobenius that commuting matrices are simultaneously upper triangularizable, since commuting matrices generate an abelian Lie algebra, which is a fortiori solvable.
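For example, the Lie algebra of upper triangular [math]\displaystyle{ 2 \times 2 }[/math] matrices is solvable, and its defining representation on [math]\displaystyle{ V = k^2 }[/math] stabilizes the flag
[math]\displaystyle{ k^2 \supset k e_1 \supset 0, }[/math]
since [math]\displaystyle{ \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} e_1 = a e_1 }[/math]; Lie's theorem asserts that, after a suitable change of basis, every finite-dimensional representation of a solvable Lie algebra takes this triangular form.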

A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see #Consequences). Also, to each flag in a finite-dimensional vector space V, there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that [math]\displaystyle{ \pi(\mathfrak{g}) }[/math] is contained in some Borel subalgebra of [math]\displaystyle{ \mathfrak{gl}(V) }[/math].[1]
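For instance, for the flag [math]\displaystyle{ V_i = \operatorname{span}\{ e_1, \dots, e_{n-i} \} }[/math] in [math]\displaystyle{ k^n }[/math], the corresponding Borel subalgebra of [math]\displaystyle{ \mathfrak{gl}_n }[/math] is exactly the algebra of upper triangular matrices, since a matrix stabilizes every [math]\displaystyle{ V_i }[/math] precisely when it maps each [math]\displaystyle{ e_j }[/math] into [math]\displaystyle{ \operatorname{span}\{ e_1, \dots, e_j \} }[/math].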

Counter-example

For algebraically closed fields of characteristic p > 0, Lie's theorem holds provided the dimension of the representation is less than p (see the proof below), but it can fail for representations of dimension p. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, x, and d/dx acting on the p-dimensional vector space [math]\displaystyle{ k[x]/(x^p) }[/math], which has no common eigenvector for the algebra. Taking the semidirect product of this 3-dimensional Lie algebra by the p-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
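Concretely, for [math]\displaystyle{ p = 2 }[/math], in the basis [math]\displaystyle{ \{ 1, x \} }[/math] of [math]\displaystyle{ k[x]/(x^2) }[/math] the three spanning operators act by
[math]\displaystyle{ 1 \mapsto \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad x \mapsto \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad \frac{d}{dx} \mapsto \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, }[/math]
with [math]\displaystyle{ [d/dx, x] = 1 }[/math]. Multiplication by x is nilpotent, so a common eigenvector would have to lie in its kernel [math]\displaystyle{ kx }[/math]; but [math]\displaystyle{ (d/dx)(x) = 1 \notin kx }[/math], so there is none.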

Proof

The proof is by induction on the dimension of [math]\displaystyle{ \mathfrak{g} }[/math] and consists of several steps. (Note: the structure of the proof is very similar to that for Engel's theorem.) The base case, where [math]\displaystyle{ \mathfrak{g} }[/math] has dimension zero, is trivial, so we assume the dimension of [math]\displaystyle{ \mathfrak{g} }[/math] is positive. We also assume V is nonzero. For simplicity, we write [math]\displaystyle{ X \cdot v = \pi(X) v }[/math].

Step 1: Observe that the theorem is equivalent to the statement:[3]

  • There exists a vector in V that is an eigenvector for each linear transformation in [math]\displaystyle{ \pi(\mathfrak{g}) }[/math].
Indeed, the theorem says in particular that a nonzero vector spanning [math]\displaystyle{ V_{n-1} }[/math] is a common eigenvector for all the linear transformations in [math]\displaystyle{ \pi(\mathfrak{g}) }[/math]. Conversely, if v is a common eigenvector, take [math]\displaystyle{ V_{n-1} }[/math] to be its span; then [math]\displaystyle{ \pi(\mathfrak{g}) }[/math] admits a common eigenvector in the quotient [math]\displaystyle{ V/V_{n-1} }[/math], and one repeats the argument.
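Concretely, if [math]\displaystyle{ \bar{v} \in V/V_{n-1} }[/math] is a common eigenvector, say [math]\displaystyle{ \pi(X) \bar{v} = \mu(X) \bar{v} }[/math], pick a lift [math]\displaystyle{ v' \in V }[/math] of [math]\displaystyle{ \bar{v} }[/math] and set [math]\displaystyle{ V_{n-2} = V_{n-1} + k v' }[/math]; then [math]\displaystyle{ \pi(X) v' \in \mu(X) v' + V_{n-1} \subset V_{n-2} }[/math], so [math]\displaystyle{ V_{n-2} }[/math] is again stabilized, and iterating produces the full flag.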

Step 2: Find an ideal [math]\displaystyle{ \mathfrak{h} }[/math] of codimension one in [math]\displaystyle{ \mathfrak{g} }[/math].

Let [math]\displaystyle{ D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}] }[/math] be the derived algebra. Since [math]\displaystyle{ \mathfrak{g} }[/math] is solvable and has positive dimension, [math]\displaystyle{ D\mathfrak{g} \ne \mathfrak{g} }[/math], and so the quotient [math]\displaystyle{ \mathfrak{g}/D\mathfrak{g} }[/math] is a nonzero abelian Lie algebra. It certainly contains a subspace of codimension one, which is automatically an ideal since the quotient is abelian; under the ideal correspondence, this yields an ideal [math]\displaystyle{ \mathfrak{h} }[/math] of codimension one in [math]\displaystyle{ \mathfrak{g} }[/math].
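For example, in the two-dimensional non-abelian solvable Lie algebra with basis [math]\displaystyle{ \{ X, Y \} }[/math] and bracket [math]\displaystyle{ [X, Y] = Y }[/math], the derived algebra is [math]\displaystyle{ D\mathfrak{g} = kY }[/math], which is already an ideal of codimension one.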

Step 3: There exists some linear functional [math]\displaystyle{ \lambda }[/math] in [math]\displaystyle{ \mathfrak{h}^* }[/math] such that

[math]\displaystyle{ V_{\lambda} = \{ v \in V \mid X \cdot v = \lambda(X) v \text{ for all } X \in \mathfrak{h} \} }[/math]

is nonzero.

This follows by applying the inductive hypothesis to the solvable ideal [math]\displaystyle{ \mathfrak{h} }[/math], which has smaller dimension (it is easy to check that the eigenvalues of a common eigenvector determine a linear functional).
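Explicitly, the inductive hypothesis yields a vector v with [math]\displaystyle{ X \cdot v = \lambda(X) v }[/math] for all [math]\displaystyle{ X \in \mathfrak{h} }[/math], and the function [math]\displaystyle{ \lambda }[/math] so defined is linear because
[math]\displaystyle{ (aX + bX') \cdot v = a (X \cdot v) + b (X' \cdot v) = (a \lambda(X) + b \lambda(X')) v. }[/math]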

Step 4: [math]\displaystyle{ V_{\lambda} }[/math] is a [math]\displaystyle{ \mathfrak{g} }[/math]-module.

(Note this step proves a general fact and does not involve solvability.)
Let [math]\displaystyle{ Y }[/math] be in [math]\displaystyle{ \mathfrak{g} }[/math], [math]\displaystyle{ v \in V_{\lambda} }[/math], and set recursively [math]\displaystyle{ v_0 = v, \, v_{i+1} = Y \cdot v_i }[/math]. For any [math]\displaystyle{ X \in \mathfrak{h} }[/math], since [math]\displaystyle{ \mathfrak{h} }[/math] is an ideal, induction on i shows
[math]\displaystyle{ X \cdot v_i - \lambda(X) v_i \in \operatorname{span}\{ v_0, \dots, v_{i-1} \} }[/math].
This says that [math]\displaystyle{ X }[/math] (that is, [math]\displaystyle{ \pi(X) }[/math]) restricted to [math]\displaystyle{ U = \operatorname{span} \{ v_i \mid i \ge 0 \} }[/math] is represented, in a basis consisting of an initial segment of the [math]\displaystyle{ v_i }[/math], by an upper triangular matrix whose diagonal entries all equal [math]\displaystyle{ \lambda(X) }[/math]. Since [math]\displaystyle{ [X, Y] \in \mathfrak{h} }[/math] and U is invariant under both [math]\displaystyle{ \pi(X) }[/math] and [math]\displaystyle{ \pi(Y) }[/math], it follows that [math]\displaystyle{ \dim(U) \lambda([X, Y]) = \operatorname{tr}(\pi([X, Y])|_U) = \operatorname{tr}([\pi(X)|_U, \pi(Y)|_U]) = 0 }[/math], the trace of a commutator being zero. Since [math]\displaystyle{ \dim(U) }[/math] is invertible in the base field (this is where characteristic zero, or a representation of dimension less than p, is used), [math]\displaystyle{ \lambda([X, Y]) = 0 }[/math]. Consequently, [math]\displaystyle{ X \cdot (Y \cdot v) = Y \cdot (X \cdot v) + [X, Y] \cdot v = \lambda(X) (Y \cdot v) }[/math], so [math]\displaystyle{ Y \cdot v \in V_{\lambda} }[/math].
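In matrix form, choosing the linearly independent vectors [math]\displaystyle{ v_0, \dots, v_m }[/math] (where [math]\displaystyle{ m + 1 = \dim U }[/math]) as a basis of U, the computation above reads
[math]\displaystyle{ \pi(X)|_U = \begin{pmatrix} \lambda(X) & * & \cdots & * \\ 0 & \lambda(X) & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda(X) \end{pmatrix}, \qquad \operatorname{tr}(\pi([X, Y])|_U) = \dim(U) \, \lambda([X, Y]), }[/math]
which is the trace identity used above.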

Step 5: Finish up the proof by finding a common eigenvector.

Write [math]\displaystyle{ \mathfrak{g} = \mathfrak{h} \oplus L }[/math] (as vector spaces), where L is a one-dimensional subspace. By Step 4, [math]\displaystyle{ V_{\lambda} }[/math] is preserved by L. Since the base field k is algebraically closed, there exists an eigenvector in [math]\displaystyle{ V_{\lambda} }[/math] for some (thus every) nonzero element of L. Since that vector is also an eigenvector for each element of [math]\displaystyle{ \mathfrak{h} }[/math], it is a common eigenvector for [math]\displaystyle{ \pi(\mathfrak{g}) }[/math], and the proof is complete. [math]\displaystyle{ \square }[/math]
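As an illustration of the whole argument, consider the two-dimensional algebra above, with [math]\displaystyle{ [X, Y] = Y }[/math], acting on [math]\displaystyle{ V = k^2 }[/math] by
[math]\displaystyle{ \pi(X) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \pi(Y) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. }[/math]
Here [math]\displaystyle{ \mathfrak{h} = kY }[/math] (Step 2); the functional [math]\displaystyle{ \lambda = 0 }[/math] gives [math]\displaystyle{ V_{\lambda} = \ker \pi(Y) = k e_1 \ne 0 }[/math] (Step 3); [math]\displaystyle{ V_{\lambda} }[/math] is [math]\displaystyle{ \mathfrak{g} }[/math]-stable because [math]\displaystyle{ \pi(X) e_1 = e_1 }[/math] (Step 4); and [math]\displaystyle{ e_1 }[/math] is an eigenvector of [math]\displaystyle{ \pi(X) }[/math] (Step 5), hence a common eigenvector for [math]\displaystyle{ \pi(\mathfrak{g}) }[/math].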

Consequences

The theorem applies in particular to the adjoint representation [math]\displaystyle{ \operatorname{ad}: \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g}) }[/math] of a (finite-dimensional) solvable Lie algebra [math]\displaystyle{ \mathfrak{g} }[/math] over an algebraically closed field of characteristic zero; thus, one can choose a basis of [math]\displaystyle{ \mathfrak{g} }[/math] with respect to which [math]\displaystyle{ \operatorname{ad}(\mathfrak{g}) }[/math] consists of upper triangular matrices. It follows easily that for each [math]\displaystyle{ x, y \in \mathfrak{g} }[/math], [math]\displaystyle{ \operatorname{ad}([x, y]) = [\operatorname{ad}(x), \operatorname{ad}(y)] }[/math] has zero diagonal; i.e., [math]\displaystyle{ \operatorname{ad}([x, y]) }[/math] is a nilpotent matrix. By Engel's theorem, this implies that [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math] is a nilpotent Lie algebra; the converse is obviously true as well. Moreover, whether a linear transformation is nilpotent or not can be determined after extending the base field to its algebraic closure, so the assumption of algebraic closedness can be dropped. Hence, one concludes the statement:[4]

A finite-dimensional Lie algebra [math]\displaystyle{ \mathfrak g }[/math] over a field of characteristic zero is solvable if and only if the derived algebra [math]\displaystyle{ D \mathfrak g = [\mathfrak g, \mathfrak g] }[/math] is nilpotent.
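For example, for the two-dimensional algebra with [math]\displaystyle{ [X, Y] = Y }[/math], in the ordered basis [math]\displaystyle{ (Y, X) }[/math] one finds
[math]\displaystyle{ \operatorname{ad}(X) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \operatorname{ad}(Y) = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}, }[/math]
both upper triangular, and [math]\displaystyle{ \operatorname{ad}([X, Y]) = \operatorname{ad}(Y) }[/math] is strictly upper triangular, hence nilpotent, in accordance with the nilpotence of [math]\displaystyle{ D\mathfrak{g} = kY }[/math].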

Lie's theorem also establishes one direction in Cartan's criterion for solvability: if V is a finite-dimensional vector space over a field of characteristic zero and [math]\displaystyle{ \mathfrak{g} \subset \mathfrak{gl}(V) }[/math] a Lie subalgebra, then [math]\displaystyle{ \mathfrak{g} }[/math] is solvable if and only if [math]\displaystyle{ \operatorname{tr}(XY) = 0 }[/math] for every [math]\displaystyle{ X \in \mathfrak{g} }[/math] and [math]\displaystyle{ Y \in [\mathfrak{g}, \mathfrak{g}] }[/math].[5]

Indeed, as above, after extending the base field, the implication [math]\displaystyle{ \Rightarrow }[/math] is seen easily. (The converse is more difficult to prove.)
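For instance, for the representation [math]\displaystyle{ \pi }[/math] of the two-dimensional algebra above, [math]\displaystyle{ [\mathfrak{g}, \mathfrak{g}] = kY }[/math] and
[math]\displaystyle{ \operatorname{tr}(\pi(X)\pi(Y)) = \operatorname{tr} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0, \qquad \operatorname{tr}(\pi(Y)\pi(Y)) = 0, }[/math]
as the criterion requires of a solvable subalgebra.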

Lie's theorem (for various V) is equivalent to the statement:[6]

For a solvable Lie algebra [math]\displaystyle{ \mathfrak g }[/math], each finite-dimensional simple [math]\displaystyle{ \mathfrak{g} }[/math]-module (i.e., irreducible as a representation) has dimension one.

Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional [math]\displaystyle{ \mathfrak g }[/math]-module V, let [math]\displaystyle{ V_1 }[/math] be a maximal proper [math]\displaystyle{ \mathfrak g }[/math]-submodule (which exists by finiteness of the dimension). Then, by maximality, [math]\displaystyle{ V/V_1 }[/math] is simple and thus one-dimensional. Induction on the dimension of V now finishes the proof.

The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true without the assumption that the base field has characteristic zero.[7]
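Note that a one-dimensional representation of [math]\displaystyle{ \mathfrak{g} }[/math] is the same thing as a linear functional [math]\displaystyle{ \lambda \in \mathfrak{g}^* }[/math] vanishing on [math]\displaystyle{ [\mathfrak{g}, \mathfrak{g}] }[/math], since on a one-dimensional space [math]\displaystyle{ \pi([X, Y]) = [\pi(X), \pi(Y)] = 0 }[/math]. For the two-dimensional algebra with [math]\displaystyle{ [X, Y] = Y }[/math], the simple modules are therefore classified by the scalar [math]\displaystyle{ \lambda(X) \in k }[/math], with [math]\displaystyle{ \lambda(Y) = 0 }[/math] forced.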

Here is another quite useful application:[8]

Let [math]\displaystyle{ \mathfrak{g} }[/math] be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math]. Then each finite-dimensional simple representation [math]\displaystyle{ \pi: \mathfrak{g} \to \mathfrak{gl}(V) }[/math] is the tensor product of a simple representation of [math]\displaystyle{ \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) }[/math] with a one-dimensional representation of [math]\displaystyle{ \mathfrak{g} }[/math] (i.e., a linear functional vanishing on Lie brackets).

By Lie's theorem, we can find a linear functional [math]\displaystyle{ \lambda }[/math] of [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math] such that the weight space [math]\displaystyle{ V_{\lambda} }[/math] of [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math] is nonzero. By Step 4 of the proof of Lie's theorem, [math]\displaystyle{ V_{\lambda} }[/math] is also a [math]\displaystyle{ \mathfrak{g} }[/math]-module; since V is simple, it follows that [math]\displaystyle{ V = V_{\lambda} }[/math]. In particular, for each [math]\displaystyle{ X \in \operatorname{rad}(\mathfrak{g}) }[/math], [math]\displaystyle{ \pi(X) = \lambda(X) \operatorname{id}_V }[/math] and so [math]\displaystyle{ \operatorname{tr}(\pi(X)) = \dim(V) \lambda(X) }[/math]. Extend [math]\displaystyle{ \lambda }[/math] to a linear functional on [math]\displaystyle{ \mathfrak{g} }[/math] that vanishes on [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math]; [math]\displaystyle{ \lambda }[/math] is then a one-dimensional representation of [math]\displaystyle{ \mathfrak{g} }[/math]. Now, [math]\displaystyle{ (\pi, V) \simeq (\pi, V) \otimes (-\lambda) \otimes \lambda }[/math]. Since [math]\displaystyle{ \pi }[/math] coincides with [math]\displaystyle{ \lambda }[/math] on [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math], the representation [math]\displaystyle{ V \otimes (-\lambda) }[/math] is trivial on [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math] and is thus the restriction of a (simple) representation of [math]\displaystyle{ \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) }[/math]. [math]\displaystyle{ \square }[/math]
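For example (for [math]\displaystyle{ n \ge 2 }[/math]), take [math]\displaystyle{ \mathfrak{g} = \mathfrak{gl}_n }[/math], whose radical is the center [math]\displaystyle{ kI }[/math] of scalar matrices, so that [math]\displaystyle{ \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) \simeq \mathfrak{sl}_n }[/math]. On the standard representation [math]\displaystyle{ V = k^n }[/math], a scalar matrix [math]\displaystyle{ cI }[/math] acts by c, so [math]\displaystyle{ \lambda(cI) = c }[/math]; the extension [math]\displaystyle{ \lambda(A) = \operatorname{tr}(A)/n }[/math] vanishes on [math]\displaystyle{ [\mathfrak{gl}_n, \mathfrak{gl}_n] = \mathfrak{sl}_n }[/math], and [math]\displaystyle{ V \otimes (-\lambda) }[/math] is the standard representation of [math]\displaystyle{ \mathfrak{sl}_n \simeq \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) }[/math], in accordance with the proposition.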

References

  1. Serre, Theorem 3.
  2. Humphreys, Ch. II, § 4.1, Corollary A.
  3. Serre, Theorem 3″.
  4. Humphreys, Ch. II, § 4.1, Corollary C.
  5. Serre, Theorem 4.
  6. Serre, Theorem 3′.
  7. Jacobson, Ch. II, § 6, Lemma 5.
  8. Fulton & Harris, Proposition 9.17.
