Coordinate vector

In linear algebra, a coordinate vector is a representation of a vector as an ordered list of numbers (a tuple) that describes the vector in terms of a particular ordered basis.[1] A simple example is a position such as (5, 2, 1) in a 3-dimensional Cartesian coordinate system, where the basis consists of the unit vectors along the axes of the system. Coordinates are always specified relative to an ordered basis. Bases and their associated coordinate representations let one realize vector spaces and linear transformations concretely as column vectors, row vectors, and matrices; hence, they are useful in calculations.

The idea of a coordinate vector can also be used for infinite-dimensional vector spaces, as addressed below.

Definition

Let V be a vector space of dimension n over a field F and let

[math]\displaystyle{ B = \{ b_1, b_2, \ldots, b_n \} }[/math]

be an ordered basis for V. Then for every [math]\displaystyle{ v \in V }[/math] there is a unique linear combination of the basis vectors that equals [math]\displaystyle{ v }[/math]:

[math]\displaystyle{ v = \alpha _1 b_1 + \alpha _2 b_2 + \cdots + \alpha _n b_n . }[/math]

The coordinate vector of [math]\displaystyle{ v }[/math] relative to B is the sequence of coordinates

[math]\displaystyle{ [v]_B = (\alpha _1, \alpha _2, \ldots, \alpha _n) . }[/math]

This is also called the representation of [math]\displaystyle{ v }[/math] with respect to B, or the B representation of [math]\displaystyle{ v }[/math]. The [math]\displaystyle{ \alpha _1, \alpha _2, \ldots, \alpha _n }[/math] are called the coordinates of [math]\displaystyle{ v }[/math]. The order of the basis becomes important here, since it determines the order in which the coefficients are listed in the coordinate vector.
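
For example, take [math]\displaystyle{ V = \mathbb{R}^2 }[/math] with the ordered basis [math]\displaystyle{ B = \{ b_1, b_2 \} }[/math], where [math]\displaystyle{ b_1 = (1, 1) }[/math] and [math]\displaystyle{ b_2 = (1, -1) }[/math]. The vector [math]\displaystyle{ v = (3, 1) }[/math] can be written as [math]\displaystyle{ v = 2 b_1 + 1 \cdot b_2 }[/math], so [math]\displaystyle{ [v]_B = (2, 1) }[/math], which differs from its coordinate vector (3, 1) relative to the standard basis.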

Coordinate vectors of finite-dimensional vector spaces can be represented by matrices as column or row vectors. In the above notation, one can write

[math]\displaystyle{ [v]_B = \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} }[/math]

and

[math]\displaystyle{ [v]_B^T = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_n \end{bmatrix} }[/math]

where [math]\displaystyle{ [v]_B^T }[/math] is the transpose of the matrix [math]\displaystyle{ [v]_B }[/math].

The standard representation

We can mechanize the above transformation by defining a function [math]\displaystyle{ \phi_B }[/math], called the standard representation of V with respect to B, that takes every vector to its coordinate representation: [math]\displaystyle{ \phi_B(v)=[v]_B }[/math]. Then [math]\displaystyle{ \phi_B }[/math] is a linear transformation from V to [math]\displaystyle{ F^n }[/math]. In fact, it is an isomorphism, and its inverse [math]\displaystyle{ \phi_B^{-1}:F^n\to V }[/math] is simply

[math]\displaystyle{ \phi_B^{-1}(\alpha_1,\ldots,\alpha_n)=\alpha_1 b_1+\cdots+\alpha_n b_n. }[/math]

Alternatively, we could have defined [math]\displaystyle{ \phi_B^{-1} }[/math] to be the above function from the beginning, realized that [math]\displaystyle{ \phi_B^{-1} }[/math] is an isomorphism, and defined [math]\displaystyle{ \phi_B }[/math] to be its inverse.
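
Concretely, when the basis vectors of V are themselves given as real column vectors, computing [math]\displaystyle{ \phi_B }[/math] amounts to solving a linear system whose coefficient matrix has the basis vectors as columns, and [math]\displaystyle{ \phi_B^{-1} }[/math] is a matrix-vector product. The following is a minimal numpy sketch; the basis used here is only illustrative and not part of the article.

```python
import numpy as np

# Illustrative ordered basis of R^2; the columns of P are the basis vectors b1, b2.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])

def phi_B(v):
    """Standard representation: return the coordinate vector [v]_B."""
    # Solve P @ alpha = v for the coefficients alpha_1, ..., alpha_n.
    return np.linalg.solve(P, v)

def phi_B_inverse(alpha):
    """Inverse map: rebuild v = alpha_1 * b1 + ... + alpha_n * bn."""
    return P @ alpha

v = np.array([3.0, 1.0])
coords = phi_B(v)                      # array([2., 1.])
assert np.allclose(phi_B_inverse(coords), v)
```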

Examples

Example 1

Let [math]\displaystyle{ P_3 }[/math] be the space of all algebraic polynomials of degree at most 3 (i.e., the highest exponent of x can be 3). This space is a vector space, with the following polynomials as an ordered basis:

[math]\displaystyle{ B_P = \left\{ 1, x, x^2, x^3 \right\} }[/math]

with the identifications

[math]\displaystyle{ 1 := \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} ; \quad x := \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} ; \quad x^2 := \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} ; \quad x^3 := \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} }[/math]

The coordinate vector corresponding to the polynomial

[math]\displaystyle{ p \left( x \right) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 }[/math]

is

[math]\displaystyle{ \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}. }[/math]

In this representation, the differentiation operator d/dx, which we denote by D, is represented by the following matrix:

[math]\displaystyle{ Dp(x) = p'(x) ; \quad [D] = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} }[/math]

Using this method it is easy to explore properties of the operator, such as invertibility, whether it is Hermitian, anti-Hermitian, or neither, and its spectrum and eigenvalues.
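
As a concrete check of this representation, the following numpy sketch applies the matrix [D] to the coordinate vector of a sample polynomial; the particular polynomial is chosen only for illustration.

```python
import numpy as np

# Matrix of d/dx on P_3 with respect to the ordered basis {1, x, x^2, x^3}.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

# Coordinate vector of the sample polynomial p(x) = 5 + 4x + 3x^2 + 2x^3.
p = np.array([5, 4, 3, 2])

# D @ p is the coordinate vector of p'(x) = 4 + 6x + 6x^2.
print(D @ p)  # [4 6 6 0]
```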

Example 2

The Pauli matrices represent the spin operator when the spin eigenstates are transformed into vector coordinates.
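
A brief numpy sketch (illustrative, not part of the article) writes out the three Pauli matrices in the basis of the S_z eigenstates and checks that each is Hermitian with eigenvalues ±1; for a spin-1/2 particle, the components of the spin operator are these matrices scaled by ħ/2.

```python
import numpy as np

# The three Pauli matrices, written in the basis of the S_z spin eigenstates.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

for sigma in (sigma_x, sigma_y, sigma_z):
    assert np.allclose(sigma, sigma.conj().T)                 # Hermitian
    assert np.allclose(np.linalg.eigvalsh(sigma), [-1, 1])    # eigenvalues -1 and +1
```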

Basis transformation matrix

Let B and C be two different bases of a vector space V, and let us denote by [math]\displaystyle{ \lbrack M \rbrack_C^B }[/math] the matrix whose columns are the C representations of the basis vectors b1, b2, …, bn:

[math]\displaystyle{ \lbrack M\rbrack_C^B = \begin{bmatrix} \lbrack b_1\rbrack_C & \cdots & \lbrack b_n\rbrack_C \end{bmatrix} }[/math]

This matrix is referred to as the basis transformation matrix from B to C. It can be regarded as an automorphism of [math]\displaystyle{ F^n }[/math]. Any vector v represented in B can be transformed to a representation in C as follows:

[math]\displaystyle{ \lbrack v\rbrack_C = \lbrack M\rbrack_C^B \lbrack v\rbrack_B. }[/math]

Under the transformation of basis, notice that the superscript on the transformation matrix, M, and the subscript on the coordinate vector, v, are the same, and seemingly cancel, leaving the remaining subscript. While this may serve as a memory aid, it is important to note that no such cancellation, or similar mathematical operation, is taking place.
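
As an illustrative numpy sketch, the transformation matrix can be assembled column by column from the C representations of the B basis vectors; the two bases below are chosen only for demonstration.

```python
import numpy as np

# Two ordered bases of R^2, stored as columns (chosen only for illustration).
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])       # columns: b1, b2
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # columns: c1, c2

# The columns of M are the C representations [b1]_C, [b2]_C,
# obtained by solving C @ x = b_i for each basis vector of B.
M = np.linalg.solve(C, B)

v_B = np.array([2.0, 1.0])        # a vector given by its B coordinates
v = B @ v_B                       # the same vector in standard coordinates

v_C = M @ v_B                     # its C coordinates via the transformation matrix
assert np.allclose(C @ v_C, v)    # consistency check
```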

Corollary

The matrix M is an invertible matrix and [math]\displaystyle{ M^{-1} }[/math] is the basis transformation matrix from C to B. In other words,

[math]\displaystyle{ \begin{align} \operatorname{Id} &= \lbrack M\rbrack_C^B \lbrack M\rbrack_B^C = \lbrack M\rbrack_C^C \\[3pt] &= \lbrack M\rbrack_B^C \lbrack M\rbrack_C^B = \lbrack M\rbrack_B^B \end{align} }[/math]
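
This can be checked numerically; the short sketch below reuses the same illustrative bases as in the previous example.

```python
import numpy as np

# Same illustrative bases as above, stored as columns.
B = np.array([[1.0, 1.0], [1.0, -1.0]])
C = np.array([[2.0, 0.0], [0.0, 1.0]])

M_C_B = np.linalg.solve(C, B)     # basis transformation matrix from B to C
M_B_C = np.linalg.solve(B, C)     # basis transformation matrix from C to B

# Each matrix is the inverse of the other.
assert np.allclose(M_C_B @ M_B_C, np.eye(2))
assert np.allclose(M_B_C @ M_C_B, np.eye(2))
```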

Infinite-dimensional vector spaces

Suppose V is an infinite-dimensional vector space over a field F. If the dimension is κ, then there is some basis of κ elements for V. After an order is chosen, the basis can be considered an ordered basis. The elements of V are finite linear combinations of elements in the basis, which give rise to unique coordinate representations exactly as described before. The only change is that the indexing set for the coordinates is not finite. Since a given vector v is a finite linear combination of basis elements, the only nonzero entries of the coordinate vector for v will be the nonzero coefficients of the linear combination representing v. Thus the coordinate vector for v is zero except in finitely many entries.
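
One way to make this concrete is to store only the finitely many nonzero coordinates, for example as a sparse mapping from basis index to coefficient. The sketch below is only illustrative; it uses the space of all polynomials with the basis {1, x, x^2, …}.

```python
# A coordinate vector over an infinite ordered basis has finite support, so it
# can be stored as a mapping from basis index to nonzero coefficient. Here a
# polynomial is stored as {exponent: coefficient}, omitting zero entries.

def add(p, q):
    """Coordinate-wise sum of two finitely supported coordinate vectors."""
    out = dict(p)
    for k, coeff in q.items():
        out[k] = out.get(k, 0) + coeff
        if out[k] == 0:
            del out[k]
    return out

def scale(c, p):
    """Scalar multiple c * p."""
    return {k: c * coeff for k, coeff in p.items()} if c != 0 else {}

# p(x) = 1 + 2x^3,  q(x) = -2x^3 + x^10
p = {0: 1, 3: 2}
q = {3: -2, 10: 1}
print(add(p, q))     # {0: 1, 10: 1}   i.e. 1 + x^10
print(scale(3, p))   # {0: 3, 3: 6}    i.e. 3 + 6x^3
```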

The linear transformations between (possibly) infinite-dimensional vector spaces can be modeled, analogously to the finite-dimensional case, with infinite matrices. The special case of the transformations from V into V is described in the full linear ring article.

See also

References