Dimension theorem for vector spaces


In mathematics, the dimension theorem for vector spaces states that all bases of a vector space have equally many elements. This number of elements may be finite or infinite (in the latter case, it is a cardinal number), and defines the dimension of the vector space.

Formally, the dimension theorem for vector spaces states that:

Given a vector space V, any two bases have the same cardinality.
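
For example, in [math]\displaystyle{ \mathbb{R}^2 }[/math] both the standard basis [math]\displaystyle{ \{(1,0),\,(0,1)\} }[/math] and the basis [math]\displaystyle{ \{(1,1),\,(1,-1)\} }[/math] have exactly two elements, as the theorem requires: a single vector cannot span [math]\displaystyle{ \mathbb{R}^2 }[/math], and any three vectors in [math]\displaystyle{ \mathbb{R}^2 }[/math] are linearly dependent.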

As a basis is a generating set that is linearly independent, the theorem is a consequence of the following theorem, which is also useful:

In a vector space V, if G is a generating set, and I is a linearly independent set, then the cardinality of I is not larger than the cardinality of G.

In particular, if V is finitely generated, then all its bases are finite and have the same number of elements.

While the proof of the existence of a basis for any vector space in the general case requires Zorn's lemma and is in fact equivalent to the axiom of choice, the uniqueness of the cardinality of the basis requires only the ultrafilter lemma,[1] which is strictly weaker (the proof given below, however, assumes trichotomy, i.e., that all cardinal numbers are comparable, a statement which is also equivalent to the axiom of choice). The theorem can be generalized to arbitrary R-modules for rings R having invariant basis number.

In the finitely generated case, the proof uses only elementary arguments of algebra and does not require the axiom of choice or its weaker variants.
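
As a concrete finite-dimensional illustration (a numerical sketch, not part of the proof; the specific vectors below are chosen only for the example): the standard basis generates [math]\displaystyle{ \mathbb{R}^3 }[/math] with three elements, so no four vectors in [math]\displaystyle{ \mathbb{R}^3 }[/math] can be linearly independent.

```python
import numpy as np

# Illustration: R^3 is generated by 3 vectors (the standard basis),
# so any 4 vectors in R^3 must be linearly dependent, i.e. the rank
# of the family is strictly smaller than the number of vectors.
rng = np.random.default_rng(0)
vectors = rng.integers(-5, 6, size=(4, 3)).astype(float)  # 4 vectors in R^3

rank = np.linalg.matrix_rank(vectors)
print("number of vectors:", vectors.shape[0])          # 4
print("rank of the family:", rank)                     # at most 3
print("linearly dependent:", rank < vectors.shape[0])  # True
```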

Proof

Let V be a vector space, let [math]\displaystyle{ \{a_i: i\in I\} }[/math] be a linearly independent set of elements of V, and let [math]\displaystyle{ \{b_j: j\in J\} }[/math] be a generating set. One has to prove that the cardinality of I is not larger than that of J.

If J is finite, this follows from the Steinitz exchange lemma. (Indeed, the Steinitz exchange lemma implies that every finite subset of I has cardinality not larger than that of J; hence I is finite with cardinality not larger than that of J.) In this case a proof based on matrix theory is also possible.[2]
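
One way such a matrix argument can go (a sketch, with the notation [math]\displaystyle{ c_{j,k} }[/math] chosen here for illustration): suppose [math]\displaystyle{ \{a_1, \dots, a_m\} }[/math] is linearly independent, [math]\displaystyle{ \{b_1, \dots, b_n\} }[/math] is generating, and [math]\displaystyle{ m > n. }[/math] Writing [math]\displaystyle{ a_k = \sum_{j=1}^n c_{j,k} b_j, }[/math] every solution of the homogeneous system [math]\displaystyle{ \sum_{k=1}^m c_{j,k} x_k = 0 }[/math] (one equation for each [math]\displaystyle{ j }[/math]) satisfies [math]\displaystyle{ \sum_{k=1}^m x_k a_k = \sum_{j=1}^n \left(\sum_{k=1}^m c_{j,k} x_k\right) b_j = 0. }[/math] A homogeneous system of [math]\displaystyle{ n }[/math] equations in [math]\displaystyle{ m > n }[/math] unknowns has a nonzero solution, which would give a nontrivial linear relation among the [math]\displaystyle{ a_k }[/math], contradicting their independence.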

Assume that J is infinite. If I is finite, there is nothing to prove. Thus, we may assume that I is also infinite. Let us suppose that the cardinality of I is larger than that of J.[note 1] We have to prove that this leads to a contradiction.

By Zorn's lemma, every linearly independent set is contained in a maximal linearly independent set K. This maximality implies that K spans V and is therefore a basis (maximality implies that every element of V is linearly dependent on the elements of K, and is therefore a linear combination of elements of K). As the cardinality of K is greater than or equal to the cardinality of I, one may replace [math]\displaystyle{ \{a_i: i\in I\} }[/math] with K; that is, one may suppose, without loss of generality, that [math]\displaystyle{ \{a_i: i\in I\} }[/math] is a basis.

Thus, every [math]\displaystyle{ b_j }[/math] can be written as a finite sum [math]\displaystyle{ b_j = \sum_{i\in E_j} \lambda_{i,j} a_i, }[/math] where [math]\displaystyle{ E_j }[/math] is a finite subset of [math]\displaystyle{ I. }[/math] As J is infinite and each [math]\displaystyle{ E_j }[/math] is finite, [math]\displaystyle{ \bigcup_{j \in J} E_j }[/math] has cardinality at most that of J.[note 1] Therefore [math]\displaystyle{ \bigcup_{j \in J} E_j }[/math] has cardinality smaller than that of I, so there is some [math]\displaystyle{ i_0\in I }[/math] which does not appear in any [math]\displaystyle{ E_j }[/math]. The corresponding [math]\displaystyle{ a_{i_0} }[/math] can be expressed as a finite linear combination of the [math]\displaystyle{ b_j }[/math]s, which in turn can be expressed as a finite linear combination of [math]\displaystyle{ a_i }[/math]s not involving [math]\displaystyle{ a_{i_0} }[/math]. Hence [math]\displaystyle{ a_{i_0} }[/math] is linearly dependent on the other [math]\displaystyle{ a_i }[/math]s, which provides the desired contradiction.
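
Explicitly, if [math]\displaystyle{ a_{i_0} = \sum_{j\in F} \mu_j b_j }[/math] for some finite subset [math]\displaystyle{ F \subseteq J, }[/math] then substituting the expressions for the [math]\displaystyle{ b_j }[/math] gives [math]\displaystyle{ a_{i_0} = \sum_{j\in F} \sum_{i\in E_j} \mu_j \lambda_{i,j}\, a_i, }[/math] a finite linear combination of basis vectors [math]\displaystyle{ a_i }[/math] with [math]\displaystyle{ i \neq i_0, }[/math] contradicting the linear independence of the basis.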

Kernel extension theorem for vector spaces

This application of the dimension theorem is sometimes itself called the dimension theorem. Let

T: U → V

be a linear transformation. Then

dim(range(T)) + dim(ker(T)) = dim(U),

that is, the dimension of U is equal to the dimension of the transformation's range plus the dimension of the kernel. See rank–nullity theorem for a fuller discussion.
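
As a quick numerical illustration of this identity (a sketch; the matrix below is an arbitrary example, and the kernel dimension is computed independently with SciPy's null_space):

```python
import numpy as np
from scipy.linalg import null_space

# Illustration of dim(range(T)) + dim(ker(T)) = dim(U)
# for T : R^4 -> R^3 represented by an arbitrary example matrix A.
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],   # dependent on the first row
              [0.0, 1.0, 1.0, 0.0]])

dim_U = A.shape[1]                     # dim(U) = number of columns = 4
dim_range = np.linalg.matrix_rank(A)   # dim(range(T)) = rank of A
dim_kernel = null_space(A).shape[1]    # dim(ker(T)) = size of a kernel basis

print(dim_range, "+", dim_kernel, "=", dim_U)  # 2 + 2 = 4
print(dim_range + dim_kernel == dim_U)         # True
```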

Notes

  1. This uses the axiom of choice.

References

  1. Howard, P.; Rubin, J. (1998). Consequences of the Axiom of Choice. Mathematical Surveys and Monographs, vol. 59. ISSN 0076-5376.
  2. Hoffman, K.; Kunze, R. (1971). Linear Algebra (2nd ed.). Prentice-Hall. (Theorem 4 of Chapter 2.)