Algebra representation
In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.
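For example, for an algebra [math]\displaystyle{ A }[/math] over a commutative ring [math]\displaystyle{ K }[/math] (take [math]\displaystyle{ K = \mathbb{Z} }[/math] for a bare ring), one standard unitalization is [math]\displaystyle{ A^+ = K \oplus A }[/math] with multiplication [math]\displaystyle{ (a, x)(b, y) = (ab,\, ay + bx + xy) }[/math] and unit [math]\displaystyle{ (1, 0) }[/math]; an [math]\displaystyle{ A }[/math]-module [math]\displaystyle{ M }[/math] then becomes a unital [math]\displaystyle{ A^+ }[/math]-module via [math]\displaystyle{ (a, x) \cdot m = am + xm. }[/math]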
Examples
Linear complex structure
One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as [math]\displaystyle{ \mathbb{C} = \mathbb{R}[x]/(x^2+1), }[/math] which corresponds to the relation [math]\displaystyle{ i^2 = -1. }[/math] Then a representation of C is a real vector space V, together with an action of C on V (a map [math]\displaystyle{ \mathbb{C} \to \mathrm{End}(V) }[/math]). Concretely, this is just an action of i, as i generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I.
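For a minimal numerical sketch (assuming NumPy; the helper act is hypothetical and chosen here only for illustration), the standard complex structure on [math]\displaystyle{ \mathbb{R}^2 }[/math] is the 90-degree rotation matrix J, and the complex number a + bi then acts as aI + bJ:

```python
import numpy as np

# A linear complex structure on R^2: J represents the action of i, so J @ J
# must equal minus the identity, mirroring i**2 = -1.
I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(J @ J, -I)

def act(a, b, v):
    """Action of the complex number a + b*i on a vector v in R^2 (illustrative helper)."""
    return (a * I + b * J) @ v

print(act(0.0, 1.0, np.array([1.0, 0.0])))   # i rotates (1, 0) to (0, 1)
```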
Polynomial algebras
Another important basic class of examples is representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in k variables over the field K is concretely a K-vector space with k commuting operators, and is often denoted [math]\displaystyle{ K[T_1,\dots,T_k], }[/math] meaning the representation of the abstract algebra [math]\displaystyle{ K[x_1,\dots,x_k] }[/math] where [math]\displaystyle{ x_i \mapsto T_i. }[/math]
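As an illustrative sketch (assuming NumPy; the matrices and the polynomial are chosen here only as an example), two commuting matrices make [math]\displaystyle{ \mathbb{R}^2 }[/math] a module over the polynomial algebra in two variables, with a polynomial acting as the corresponding operator:

```python
import numpy as np

# Two commuting operators on R^2 make it a module over the polynomial algebra
# in two variables: the variable x_i acts as the matrix T_i.
T1 = np.array([[1.0, 1.0],
               [0.0, 1.0]])
T2 = np.array([[1.0, 2.0],
               [0.0, 1.0]])
assert np.allclose(T1 @ T2, T2 @ T1)       # the operators must commute

# The representation sends a polynomial to the corresponding operator, e.g.
# p(x1, x2) = x1**2 + 3*x1*x2 - 2 is sent to the matrix below.
I = np.eye(2)
p_of_T = T1 @ T1 + 3 * (T1 @ T2) - 2 * I
print(p_of_T)
```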
A basic result about such representations is that, over an algebraically closed field, the representing matrices (which commute by assumption) are simultaneously triangularisable.
Even the case of representations of the polynomial algebra in a single variable is of interest – this is denoted by [math]\displaystyle{ K[T] }[/math] and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
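For a small illustration (assuming SymPy; the matrix is an arbitrary example chosen here), the Jordan canonical form predicted by this module-theoretic viewpoint can be computed directly:

```python
from sympy import Matrix

# K[T]-module view of a single operator: the structure theorem over the PID
# K[T] yields canonical forms; over C this is the Jordan canonical form.
M = Matrix([[5, 4],
            [-1, 1]])                 # characteristic polynomial (x - 3)**2

P, J = M.jordan_form()                # decomposition M = P * J * P**(-1)
print(J)                              # a single 2x2 Jordan block with eigenvalue 3
```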
In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.
Weights
Eigenvalues and eigenvectors can be generalized to algebra representations.
The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation [math]\displaystyle{ \lambda\colon A \to R }[/math] (i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative).[note 1] This is known as a weight, and the analog of an eigenvector and eigenspace are called weight vector and weight space.
The case of the eigenvalue of a single operator corresponds to the algebra [math]\displaystyle{ R[T], }[/math] and a map of algebras [math]\displaystyle{ R[T] \to R }[/math] is determined by which scalar it maps the generator T to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation). As the pairing [math]\displaystyle{ A \times M \to M }[/math] is bilinear, "which multiple" is an A-linear functional of A (an algebra map A → R), namely the weight. In symbols, a weight vector is a vector [math]\displaystyle{ m \in M }[/math] such that [math]\displaystyle{ am = \lambda(a)m }[/math] for all elements [math]\displaystyle{ a \in A, }[/math] for some linear functional [math]\displaystyle{ \lambda }[/math] – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication.
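As a minimal numerical check of this condition (assuming NumPy; the matrix and polynomial are chosen only for illustration), an eigenvector of a single matrix T is a weight vector for [math]\displaystyle{ R[T], }[/math] with weight given by evaluation at the eigenvalue:

```python
import numpy as np

# An eigenvector of T is a weight vector for R[T]: every p(T) in the algebra
# sends it to p(lam) times itself, so the weight is evaluation p -> p(lam).
T = np.diag([2.0, 5.0])
v = np.array([1.0, 0.0])          # eigenvector of T with eigenvalue 2
lam = 2.0

# Check the weight condition for the sample polynomial p(x) = x**3 - 4*x + 1.
p_of_T = T @ T @ T - 4 * T + np.eye(2)
p_of_lam = lam**3 - 4 * lam + 1
assert np.allclose(p_of_T @ v, p_of_lam * v)
print(p_of_lam)                   # 1.0: the multiple by which p(T) scales v
```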
Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra [math]\displaystyle{ A }[/math] – equivalently, it vanishes on the derived algebra. In terms of matrices, if [math]\displaystyle{ v }[/math] is a common eigenvector of operators [math]\displaystyle{ T }[/math] and [math]\displaystyle{ U }[/math], then [math]\displaystyle{ T U v = U T v }[/math] (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must lie in the set on which the algebra acts commutatively (which is annihilated by the derived algebra).

Thus of central interest are the free commutative algebras, namely the polynomial algebras. In the particularly simple and important case of the polynomial algebra [math]\displaystyle{ K[T_1,\dots,T_k] }[/math] in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a [math]\displaystyle{ k }[/math]-tuple of scalars [math]\displaystyle{ \lambda = (\lambda_1,\dots,\lambda_k) }[/math] corresponding to the eigenvalue of each matrix, and hence geometrically to a point in [math]\displaystyle{ k }[/math]-space. These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.
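A minimal sketch of this simplest case (assuming NumPy; the diagonal matrices are chosen only for illustration), in which a standard basis vector is a weight vector and its weight is a pair of eigenvalues:

```python
import numpy as np

# Two commuting (here diagonal) matrices acting on R^3: each standard basis
# vector is a simultaneous eigenvector, i.e. a weight vector, and its weight
# is the pair of corresponding eigenvalues, a point in 2-space.
T1 = np.diag([1.0, 2.0, 3.0])
T2 = np.diag([4.0, 5.0, 6.0])
assert np.allclose(T1 @ T2, T2 @ T1)

e0 = np.array([1.0, 0.0, 0.0])
weight = (1.0, 4.0)               # (eigenvalue of T1, eigenvalue of T2)
assert np.allclose(T1 @ e0, weight[0] * e0)
assert np.allclose(T2 @ e0, weight[1] * e0)
```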
As an application of this geometry, an algebra that is a quotient of a polynomial algebra on [math]\displaystyle{ k }[/math] generators corresponds geometrically to an algebraic variety in [math]\displaystyle{ k }[/math]-dimensional space, and any weight must fall on the variety – that is, it satisfies the defining equations of the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
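As an illustration of this point (assuming NumPy; the matrix is chosen only as an example), a representation of the quotient algebra [math]\displaystyle{ \mathbb{R}[x]/(x^2-1) }[/math] is a matrix squaring to the identity, and its weights can only be the points [math]\displaystyle{ \pm 1 }[/math] of the corresponding variety:

```python
import numpy as np

# A representation of R[x]/(x**2 - 1) is a matrix T with T @ T = I; its
# eigenvalues (weights) must satisfy the defining equation x**2 = 1.
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(T @ T, np.eye(2))      # T satisfies the defining relation

print(np.linalg.eigvals(T))               # 1 and -1 (in some order), both on x**2 = 1
```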
See also
- Representation theory
- Intertwiner
- Representation theory of Hopf algebras
- Lie algebra representation
- Schur’s lemma
- Jacobson density theorem
- Double commutant theorem
Notes
- Note that for a field, the endomorphism algebra of a one-dimensional vector space (a line) is canonically equal to the underlying field: End(L) = K, since all endomorphisms are scalar multiplication; there is thus no loss in restricting to concrete maps to the base field, rather than to abstract 1-dimensional representations. For rings there are also maps to quotient rings, which need not factor through maps to the ring itself, but again abstract 1-dimensional modules are not needed.