Comparison of vector algebra and geometric algebra

Geometric algebra is an extension of vector algebra, providing additional algebraic structures on vector spaces, with geometric interpretations.

Vector algebra is specific to Euclidean 3-space, while geometric algebra applies in all dimensions and signatures, notably 3+1 spacetime as well as 2 dimensions.

Basic concepts and operations

Geometric algebra (GA) is an extension or completion of vector algebra (VA).[1] The reader is assumed to be familiar with the basic concepts and operations of VA. This article mainly concerns itself with operations in [math]\displaystyle{ \mathcal G_{3} }[/math], the GA of 3D space, and is not intended to be mathematically rigorous. In GA, vectors are not normally written boldface, as the meaning is usually clear from the context.

The fundamental difference is that GA provides a new product of vectors called the "geometric product". Elements of GA are graded multivectors: scalars are grade 0, usual vectors are grade 1, bivectors are grade 2 and the highest grade (3 in the 3D case) is traditionally called the pseudoscalar and designated [math]\displaystyle{ I }[/math].

The ungeneralized 3D vector form of the geometric product is:[2]

[math]\displaystyle{ ab=a \cdot b + a \wedge b }[/math]

that is the sum of the usual dot (inner) product and the outer (exterior) product (this last is closely related to the cross product and will be explained below).

In VA, entities such as pseudovectors and pseudoscalars need to be bolted on, whereas in GA the equivalent bivector and trivector (the pseudoscalar) exist naturally as subspaces of the algebra.

For example, applying vector calculus in 2 dimensions, such as to compute torque or curl, requires adding an artificial 3rd dimension and extending the vector field to be constant in that dimension, or alternatively considering these to be scalars. The torque or curl is then a normal vector field in this 3rd dimension. By contrast, geometric algebra in 2 dimensions defines these as a pseudoscalar field (a bivector), without requiring a 3rd dimension. Similarly, the scalar triple product is ad hoc, and can instead be expressed uniformly using the exterior product and the geometric product.
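As a concrete sketch (assuming Python with NumPy; the lever arm and force are arbitrary example values), the single bivector coefficient of the 2D wedge product reproduces the z-component that the 3D cross-product workaround would produce:

```python
import numpy as np

def wedge_2d(u, v):
    """Return the single e1^e2 (pseudoscalar) coefficient of u ^ v in 2D."""
    return u[0] * v[1] - u[1] * v[0]

# Hypothetical lever arm and force, purely in the plane.
r = np.array([2.0, 0.0])
f = np.array([0.0, 3.0])

torque_bivector = wedge_2d(r, f)   # a 2D pseudoscalar, no 3rd axis needed

# The traditional workaround: embed in 3D and take the cross product.
torque_cross = np.cross(np.append(r, 0.0), np.append(f, 0.0))[2]

assert np.isclose(torque_bivector, torque_cross)
```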

Translations between formalisms

Here are some comparisons between standard [math]\displaystyle{ {\mathbb R}^3 }[/math] vector relations and their corresponding exterior product and geometric product equivalents. All the exterior and geometric product equivalents here are good for more than three dimensions, and some also for two. In two dimensions the cross product is undefined even if what it describes (like torque) is perfectly well defined in a plane without introducing an arbitrary normal vector outside of the space.

Many of these relationships only require the introduction of the exterior product to generalize, but since that may not be familiar to somebody with only a background in vector algebra and calculus, some examples are given.

Cross and exterior products

The cross product in relation to the exterior product. In red are the orthogonal unit vector, and the "parallel" unit bivector.

[math]\displaystyle{ \mathbf u \times \mathbf v }[/math] is perpendicular to the plane containing [math]\displaystyle{ \mathbf u }[/math] and [math]\displaystyle{ \mathbf v }[/math].
[math]\displaystyle{ \mathbf u \wedge \mathbf v }[/math] is an oriented representation of the same plane.

We have the pseudoscalar [math]\displaystyle{ I = e_{1}e_{2}e_{3} }[/math] (right handed orthonormal frame) and so

[math]\displaystyle{ e_{1}I = Ie_{1}= e_{2}e_{3} }[/math] returns a bivector and
[math]\displaystyle{ I (e_{2} \wedge e_{3}) = Ie_2e_3 = -e_1 }[/math] returns a vector perpendicular to the [math]\displaystyle{ e_{2} \wedge e_{3} }[/math] plane.

This yields a convenient definition for the cross product of traditional vector algebra:

[math]\displaystyle{ {u}\times{v} = -I({u}\wedge{v}) }[/math]

(this product is antisymmetric, like the cross product). The distinction between polar and axial vectors, which is somewhat ad hoc in vector algebra, arises naturally in geometric algebra as the distinction between vectors and bivectors (elements of grade two).

The [math]\displaystyle{ I }[/math] here is a unit pseudoscalar of Euclidean 3-space, which establishes a duality between the vectors and the bivectors, and is so named because of the expected property

[math]\displaystyle{ I^2 = (e_1 e_2 e_3)^2 = e_1 e_2 e_3 e_1 e_2 e_3 = - e_1 e_2 e_1 e_3 e_2 e_3 = e_1 e_1 e_2 e_3 e_2 e_3 = - e_3 e_2 e_2 e_3 = -1 }[/math]

The equivalence of the [math]\displaystyle{ \mathbb{R}^3 }[/math] cross product and the exterior product expression above can be confirmed by direct multiplication of [math]\displaystyle{ -I = - {e_1} {e_2} {e_3} }[/math] with a determinant expansion of the exterior product

[math]\displaystyle{ u \wedge v = \sum_{1\leq i\lt j\leq 3}(u_i v_j - v_i u_j) {e_i} \wedge {e_j} = \sum_{1\leq i\lt j\leq 3}(u_i v_j - v_i u_j) {e_i} {e_j} }[/math]

See also Cross product as an exterior product. Essentially, the geometric product of a bivector and the pseudoscalar of Euclidean 3-space provides a method of calculation of the Hodge dual.
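The duality can be spot-checked numerically. In the sketch below (Python with NumPy; the vectors are arbitrary example values), mapping the bivector coefficients of [math]\displaystyle{ u \wedge v }[/math] through [math]\displaystyle{ -I }[/math] reproduces the cross product:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

# Coefficients of u ^ v on the bivector basis (e1e2, e1e3, e2e3).
w12 = u[0]*v[1] - u[1]*v[0]
w13 = u[0]*v[2] - u[2]*v[0]
w23 = u[1]*v[2] - u[2]*v[1]

# -I maps e2e3 -> e1, e1e3 -> -e2, e1e2 -> e3  (with I = e1 e2 e3, I^2 = -1).
dual = np.array([w23, -w13, w12])

assert np.allclose(dual, np.cross(u, v))
```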

Cross and commutator products

The pseudovectors/bivectors of the geometric algebra of Euclidean 3-dimensional space form a 3-dimensional vector space themselves. Let the standard unit pseudovectors/bivectors be [math]\displaystyle{ \mathbf{i} = \mathbf{e_2} \mathbf{e_3} }[/math], [math]\displaystyle{ \mathbf{j} = \mathbf{e_1} \mathbf{e_3} }[/math], and [math]\displaystyle{ \mathbf{k} = \mathbf{e_1} \mathbf{e_2} }[/math], and let the anti-commutative commutator product be defined as [math]\displaystyle{ A \times B = \tfrac{1}{2} (AB - BA) }[/math], where [math]\displaystyle{ AB }[/math] is the geometric product. The commutator product is distributive over addition and linear, as the geometric product is distributive over addition and linear.

From the definition of the commutator product, [math]\displaystyle{ \mathbf{i} }[/math], [math]\displaystyle{ \mathbf{j} }[/math] and [math]\displaystyle{ \mathbf{k} }[/math] satisfy the following equalities:

[math]\displaystyle{ \mathbf{i} \times \mathbf{j} = \tfrac{1}{2}(\mathbf{i} \mathbf{j} - \mathbf{j} \mathbf{i}) = \tfrac{1}{2}(\mathbf{e_2} \mathbf{e_3} \mathbf{e_1} \mathbf{e_3} - \mathbf{e_1} \mathbf{e_3} \mathbf{e_2} \mathbf{e_3}) = \tfrac{1}{2}(- \mathbf{e_2} \mathbf{e_3} \mathbf{e_3} \mathbf{e_1} + \mathbf{e_1} \mathbf{e_3} \mathbf{e_3} \mathbf{e_2}) = \tfrac{1}{2}(- \mathbf{e_2} \mathbf{e_1} + \mathbf{e_1} \mathbf{e_2}) = \tfrac{1}{2} (\mathbf{e_1} \mathbf{e_2} + \mathbf{e_1} \mathbf{e_2}) = \mathbf{e_1} \mathbf{e_2} = \mathbf{k} }[/math]

[math]\displaystyle{ \mathbf{j} \times \mathbf{k} = \tfrac{1}{2}(\mathbf{j} \mathbf{k} - \mathbf{k} \mathbf{j}) = \tfrac{1}{2}(\mathbf{e_1} \mathbf{e_3} \mathbf{e_1} \mathbf{e_2} - \mathbf{e_1} \mathbf{e_2} \mathbf{e_1} \mathbf{e_3}) = \tfrac{1}{2}(- \mathbf{e_3} \mathbf{e_1} \mathbf{e_1} \mathbf{e_2} + \mathbf{e_2} \mathbf{e_1} \mathbf{e_1} \mathbf{e_3}) = \tfrac{1}{2}(- \mathbf{e_3} \mathbf{e_2} + \mathbf{e_2} \mathbf{e_3}) = \tfrac{1}{2} (\mathbf{e_2} \mathbf{e_3} + \mathbf{e_2} \mathbf{e_3}) = \mathbf{e_2} \mathbf{e_3} = \mathbf{i} }[/math]

[math]\displaystyle{ \mathbf{k} \times \mathbf{i} = \tfrac{1}{2}(\mathbf{k} \mathbf{i} - \mathbf{i} \mathbf{k}) = \tfrac{1}{2}(\mathbf{e_1} \mathbf{e_2} \mathbf{e_2} \mathbf{e_3} - \mathbf{e_2} \mathbf{e_3} \mathbf{e_1} \mathbf{e_2}) = \tfrac{1}{2}(\mathbf{e_1} \mathbf{e_2} \mathbf{e_2} \mathbf{e_3} - \mathbf{e_3} \mathbf{e_2} \mathbf{e_2} \mathbf{e_1}) = \tfrac{1}{2}(\mathbf{e_1} \mathbf{e_3} - \mathbf{e_3} \mathbf{e_1}) = \tfrac{1}{2} (\mathbf{e_1} \mathbf{e_3} + \mathbf{e_1} \mathbf{e_3}) = \mathbf{e_1} \mathbf{e_3} = \mathbf{j} }[/math]

which imply, by the anti-commutativity of the commutator product, that

[math]\displaystyle{ \mathbf{j} \times \mathbf{i} = -\mathbf{k}, \qquad \mathbf{k} \times \mathbf{j} = -\mathbf{i}, \qquad \mathbf{i} \times \mathbf{k} = -\mathbf{j} }[/math]

The anti-commutativity of the commutator product also implies that [math]\displaystyle{ \mathbf{i} \times \mathbf{i} = \mathbf{j} \times \mathbf{j} = \mathbf{k} \times \mathbf{k} = 0 }[/math]

These equalities and properties are sufficient to determine the commutator product of any two pseudovectors/bivectors [math]\displaystyle{ \mathbf{A} }[/math] and [math]\displaystyle{ \mathbf{B} }[/math]. As the pseudovectors/bivectors form a vector space, each pseudovector/bivector can be defined as the sum of three orthogonal components parallel to the standard basis pseudovectors/bivectors: [math]\displaystyle{ \mathbf{A} = (A_1 \mathbf{i} + A_2 \mathbf{j} + A_3 \mathbf{k}) }[/math] [math]\displaystyle{ \mathbf{B} = (B_1 \mathbf{i} + B_2 \mathbf{j} + B_3 \mathbf{k}) }[/math]

Their commutator product [math]\displaystyle{ \mathbf{A} \times \mathbf{B} }[/math] can be expanded using its distributive property: [math]\displaystyle{ \begin{align} \mathbf{A} \times \mathbf{B} &= (A_1 \mathbf{i} + A_2 \mathbf{j} + A_3 \mathbf{k}) \times (B_1 \mathbf{i} + B_2 \mathbf{j} + B_3 \mathbf{k}) \\ &= A_1 B_1 \mathbf{i} \times \mathbf{i} + A_1 B_2 \mathbf{i} \times \mathbf{j} + A_1 B_3 \mathbf{i} \times \mathbf{k} + A_2 B_1 \mathbf{j} \times \mathbf{i} + A_2 B_2 \mathbf{j} \times \mathbf{j} + A_2 B_3 \mathbf{j} \times \mathbf{k} + A_3 B_1 \mathbf{k} \times \mathbf{i} + A_3 B_2 \mathbf{k} \times \mathbf{j} + A_3 B_3 \mathbf{k} \times \mathbf{k} \\ &= A_1 B_2 \mathbf{k} - A_1 B_3 \mathbf{j} - A_2 B_1 \mathbf{k} + A_2 B_3 \mathbf{i} + A_3 B_1 \mathbf{j} - A_3 B_2 \mathbf{i} = (A_2 B_3 - A_3 B_2) \mathbf{i} + (A_3 B_1 - A_1 B_3) \mathbf{j} + (A_1 B_2 - A_2 B_1) \mathbf{k} \end{align} }[/math] which is precisely the cross product in vector algebra for pseudovectors.
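These multiplication rules can be verified mechanically. The following sketch (Python; the tuple blade representation and function names are ad hoc for illustration) implements the geometric product on orthonormal basis blades of [math]\displaystyle{ \mathcal G_{3} }[/math] by counting swaps and cancelling repeated factors, then checks the commutator table:

```python
def blade_mul(a, b):
    """Geometric product of two orthonormal basis blades in Cl(3).
    A blade is a tuple of strictly increasing basis indices; returns (sign, blade)."""
    sign, seq = 1, list(a) + list(b)
    # Bubble-sort the concatenated index list, flipping sign on each swap,
    # then cancel equal neighbours using e_i e_i = 1.
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(seq) - 1:
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                sign = -sign
                changed = True
            elif seq[i] == seq[i + 1]:
                del seq[i:i + 2]   # e_i e_i = 1
                changed = True
            else:
                i += 1
    return sign, tuple(seq)

# Basis bivectors as in the text: i = e2 e3, j = e1 e3, k = e1 e2.
i, j, k = (2, 3), (1, 3), (1, 2)

def commutator(a, b):
    (sa, pa), (sb, pb) = blade_mul(a, b), blade_mul(b, a)
    assert pa == pb               # both orders yield the same blade, up to sign
    return (sa - sb) // 2, pa     # (1/2)(ab - ba) as (coefficient, blade)

assert commutator(i, j) == (1, k)   # i x j = k
assert commutator(j, k) == (1, i)   # j x k = i
assert commutator(k, i) == (1, j)   # k x i = j
```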

Norm of a vector

Ordinarily:

[math]\displaystyle{ {\Vert \mathbf u \Vert}^2 = \mathbf u \cdot \mathbf u }[/math]

Making use of the geometric product and the fact that the exterior product of a vector with itself is zero:

[math]\displaystyle{ \mathbf u \, \mathbf u = {\Vert \mathbf u \Vert}^2 = {\mathbf u}^2 = \mathbf u \cdot \mathbf u + \mathbf u \wedge \mathbf u = \mathbf u \cdot \mathbf u }[/math]

Lagrange identity

In three dimensions the product of two squared vector lengths can be expressed in terms of the dot and cross products

[math]\displaystyle{ {\Vert \mathbf{u} \Vert}^2 {\Vert \mathbf{v} \Vert}^2 = ({\mathbf{u} \cdot \mathbf{v}})^2 + {\Vert \mathbf{u} \times \mathbf{v} \Vert}^2 }[/math]

The corresponding generalization expressed using the geometric product is

[math]\displaystyle{ {\Vert \mathbf{u} \Vert}^2 {\Vert \mathbf{v} \Vert}^2 = ({\mathbf{u} \cdot \mathbf{v}})^2 - (\mathbf{u} \wedge \mathbf{v})^2 }[/math]

This follows from expanding the geometric product of a pair of vectors with its reverse, noting that [math]\displaystyle{ (\mathbf{u} \mathbf{v})(\mathbf{v} \mathbf{u}) = \mathbf{u} {\Vert \mathbf{v} \Vert}^2 \mathbf{u} = {\Vert \mathbf{u} \Vert}^2 {\Vert \mathbf{v} \Vert}^2 }[/math] and that [math]\displaystyle{ \mathbf{v}\mathbf{u} = \mathbf{u} \cdot \mathbf{v} - \mathbf{u} \wedge \mathbf{v} }[/math]

[math]\displaystyle{ (\mathbf{u} \mathbf{v})(\mathbf{v} \mathbf{u}) = ({\mathbf{u} \cdot \mathbf{v}} + {\mathbf{u} \wedge \mathbf{v}}) ({\mathbf{u} \cdot \mathbf{v}} - {\mathbf{u} \wedge \mathbf{v}}) }[/math]
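A numeric spot-check of the identity (Python with NumPy; random example vectors), comparing the cross-product form with the sum of squared 2x2 minors that represents [math]\displaystyle{ -(\mathbf{u} \wedge \mathbf{v})^2 }[/math]:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)

lhs = np.dot(u, u) * np.dot(v, v)
rhs = np.dot(u, v) ** 2 + np.dot(np.cross(u, v), np.cross(u, v))
assert np.isclose(lhs, rhs)

# The geometric-product form: -(u ^ v)^2 is the sum of squared 2x2 minors.
minors_sq = sum((u[a]*v[b] - u[b]*v[a])**2
                for a in range(3) for b in range(a + 1, 3))
assert np.isclose(lhs, np.dot(u, v)**2 + minors_sq)
```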

Determinant expansion of cross and wedge products

[math]\displaystyle{ \mathbf u \times \mathbf v = \sum_{i\lt j}{ \begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix} {\mathbf e}_i \times {\mathbf e}_j } }[/math]
[math]\displaystyle{ \mathbf u \wedge \mathbf v = \sum_{i\lt j}{ \begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix} {\mathbf e}_i \wedge {\mathbf e}_j } }[/math]

Linear algebra texts will often use the determinant for the solution of linear systems by Cramer's rule or for matrix inversion.

An alternative treatment is to axiomatically introduce the wedge product, and then demonstrate that this can be used directly to solve linear systems. This is shown below, and does not require sophisticated math skills to understand.

It is then possible to define determinants as nothing more than the coefficients of the wedge product in its expansion over "unit k-vectors" (terms such as [math]\displaystyle{ {\mathbf e}_i \wedge {\mathbf e}_j }[/math]), as above.

A one-by-one determinant is the coefficient of [math]\displaystyle{ \mathbf{e}_1 }[/math] for an [math]\displaystyle{ \mathbb R^1 }[/math] 1-vector.
A two-by-two determinant is the coefficient of [math]\displaystyle{ \mathbf{e}_1 \wedge \mathbf{e}_2 }[/math] for an [math]\displaystyle{ \mathbb R^2 }[/math] bivector.
A three-by-three determinant is the coefficient of [math]\displaystyle{ \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3 }[/math] for an [math]\displaystyle{ \mathbb R^3 }[/math] trivector.
...

When linear system solution is introduced via the wedge product, Cramer's rule follows as a side-effect, and there is no need to lead up to the end results with definitions of minors, matrices, matrix invertibility, adjoints, cofactors, Laplace expansions, theorems on determinant multiplication and row and column exchanges, and so forth.

Matrix Related

Matrix inversion (Cramer's rule) and determinants can be naturally expressed in terms of the wedge product.

The use of the wedge product in the solution of linear equations can be quite useful for various geometric product calculations.

Traditionally, Cramer's rule is presented as a generic algorithm, without reference to the wedge product, for solving linear equations of the form [math]\displaystyle{ A x = b }[/math] (or equivalently for inverting a matrix). Namely

[math]\displaystyle{ x = \frac{1}{|A|}\operatorname{adj}( A) b . }[/math]

This is a useful theoretic result. For numerical problems row reduction with pivots and other methods are more stable and efficient.

When the wedge product is coupled with the Clifford product and put into a natural geometric context, the fact that the determinants are used in the expression of [math]\displaystyle{ {\mathbb R}^N }[/math] parallelogram area and parallelepiped volumes (and higher-dimensional generalizations thereof) also comes as a nice side-effect.

As is also shown below, results such as Cramer's rule also follow directly from the wedge product's selection of non-identical elements. The result is then simple enough that it could be derived easily if required instead of having to remember or look up a rule.

Two variables example

[math]\displaystyle{ \begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = a x + b y = c . }[/math]

Wedging on the right with [math]\displaystyle{ b }[/math] and on the left with [math]\displaystyle{ a }[/math] eliminates [math]\displaystyle{ y }[/math] and [math]\displaystyle{ x }[/math] respectively,

[math]\displaystyle{ ( a x + b y ) \wedge b = ( a \wedge b) x = c \wedge b }[/math]
[math]\displaystyle{ a \wedge ( a x + b y ) = ( a \wedge b) y = a \wedge c }[/math]

Provided [math]\displaystyle{ a \wedge b \neq 0 }[/math] the solution is

[math]\displaystyle{ \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{ a \wedge b} \begin{bmatrix} c \wedge b \\ a \wedge c\end{bmatrix} . }[/math]

For [math]\displaystyle{ a, b \in {\mathbb R}^2 }[/math], this is Cramer's rule since the [math]\displaystyle{ {e}_1 \wedge {e}_2 }[/math] factors of the wedge products

[math]\displaystyle{ u \wedge v = \begin{vmatrix}u_1 & u_2 \\ v_1 & v_2 \end{vmatrix} {e}_1 \wedge {e}_2 }[/math]

divide out.
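This wedge-product solution is straightforward to implement. A minimal sketch (Python; the column vectors a, b, c form an arbitrary example system):

```python
def wedge2(p, q):
    """Coefficient of e1 ^ e2 for p ^ q with p, q in R^2."""
    return p[0] * q[1] - p[1] * q[0]

# Hypothetical system: a x + b y = c with column vectors a, b, c in R^2.
a, b, c = (2.0, 1.0), (1.0, 3.0), (4.0, 7.0)

d = wedge2(a, b)          # must be nonzero for a unique solution
x = wedge2(c, b) / d      # (c ^ b) / (a ^ b)
y = wedge2(a, c) / d      # (a ^ c) / (a ^ b)

assert abs(a[0]*x + b[0]*y - c[0]) < 1e-12
assert abs(a[1]*x + b[1]*y - c[1]) < 1e-12
```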

Similarly, for three or [math]\displaystyle{ N }[/math] variables, the same ideas hold:

[math]\displaystyle{ \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = d }[/math]
[math]\displaystyle{ \begin{bmatrix}x \\ y \\ z\end{bmatrix} = \frac{1}{ a \wedge b \wedge c} \begin{bmatrix} d \wedge b \wedge c \\ a \wedge d \wedge c \\ a \wedge b \wedge d \end{bmatrix} }[/math]

Again, for the three variable three equation case this is Cramer's rule since the [math]\displaystyle{ {e}_1 \wedge {e}_2 \wedge {e}_3 }[/math] factors of all the wedge products divide out, leaving the familiar determinants.

When there are more equations than variables and the equations have a solution, each of the k-vector quotients will be the same scalar. To illustrate, here is the solution of a simple example with three equations and two unknowns.

[math]\displaystyle{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} y = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} }[/math]

The right wedge product with [math]\displaystyle{ (1, 1, 1) }[/math] solves for [math]\displaystyle{ x }[/math]

[math]\displaystyle{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} x = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} }[/math]

and a left wedge product with [math]\displaystyle{ (1, 1, 0) }[/math] solves for [math]\displaystyle{ y }[/math]

[math]\displaystyle{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} y = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}. }[/math]

Observe that both of these equations have the same left-hand bivector factor [math]\displaystyle{ (1, 1, 0) \wedge (1, 1, 1) }[/math], so it need be computed only once (if it were zero it would indicate the system of equations has no solution).

Collection of results for [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] yields a Cramer's rule-like form:

[math]\displaystyle{ \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{(1, 1, 0) \wedge (1, 1, 1)} \begin{bmatrix} (1, 1, 2) \wedge (1, 1, 1) \\ (1, 1, 0) \wedge (1, 1, 2) \end{bmatrix}. }[/math]

Writing [math]\displaystyle{ {e} _i \wedge {e} _j = {e} _{ij} }[/math], we have the result:

[math]\displaystyle{ \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{{e}_{13} + {e}_{23}} \begin{bmatrix} {-{e}_{13} - {e}_{23}} \\ {2{e}_{13} +2{e}_{23}} \\ \end{bmatrix} = \begin{bmatrix} -1 \\ 2 \end{bmatrix}. }[/math]
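The same computation can be checked mechanically (a Python sketch; `wedge3` is an ad hoc helper returning the bivector coefficients on the [math]\displaystyle{ e_{12}, e_{13}, e_{23} }[/math] basis):

```python
def wedge3(p, q):
    """Bivector coefficients (e12, e13, e23) of p ^ q for p, q in R^3."""
    return (p[0]*q[1] - p[1]*q[0],
            p[0]*q[2] - p[2]*q[0],
            p[1]*q[2] - p[2]*q[1])

a, b, c = (1, 1, 0), (1, 1, 1), (1, 1, 2)

ab = wedge3(a, b)   # (0, 1, 1),  i.e.  e13 + e23
cb = wedge3(c, b)   # (0, -1, -1)
ac = wedge3(a, c)   # (0, 2, 2)

# Each quotient of parallel bivectors is a scalar: here x = -1, y = 2.
x = cb[1] / ab[1]
y = ac[1] / ab[1]

assert (x, y) == (-1.0, 2.0)
assert all(ai*x + bi*y == ci for ai, bi, ci in zip(a, b, c))
```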

Equation of a plane

For the plane of all points [math]\displaystyle{ {\mathbf r} }[/math] passing through three independent points [math]\displaystyle{ {\mathbf r}_0 }[/math], [math]\displaystyle{ {\mathbf r}_1 }[/math], and [math]\displaystyle{ {\mathbf r}_2 }[/math], the normal form of the equation is

[math]\displaystyle{ (({\mathbf r}_2 - {\mathbf r}_0) \times ({\mathbf r}_1 - {\mathbf r}_0)) \cdot ({\mathbf r} - {\mathbf r}_0) = 0. }[/math]

The equivalent wedge product equation is

[math]\displaystyle{ ({\mathbf r}_2 - {\mathbf r}_0) \wedge ({\mathbf r}_1 - {\mathbf r}_0) \wedge ({\mathbf r} - {\mathbf r}_0) = 0. }[/math]
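A numeric check of the wedge form (Python with NumPy; the three points, arbitrarily chosen in the plane [math]\displaystyle{ z = 1 }[/math], are example values). In [math]\displaystyle{ \mathbb{R}^3 }[/math] the trivector coefficient is a 3x3 determinant, which vanishes exactly for points in the plane:

```python
import numpy as np

def trivector(p, q, r):
    """Coefficient of e1 ^ e2 ^ e3 for p ^ q ^ r in R^3 (a 3x3 determinant)."""
    return np.linalg.det(np.vstack([p, q, r]))

r0, r1, r2 = np.array([0., 0., 1.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])

# A point in the plane z = 1 makes the trivector vanish...
assert np.isclose(trivector(r2 - r0, r1 - r0, np.array([3., -2., 1.]) - r0), 0.0)
# ...and a point off the plane does not.
assert not np.isclose(trivector(r2 - r0, r1 - r0, np.array([0., 0., 2.]) - r0), 0.0)
```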

Projection and rejection

Using the Gram–Schmidt process a single vector can be decomposed into two components with respect to a reference vector, namely the projection onto a unit vector in a reference direction, and the difference between the vector and that projection.

With [math]\displaystyle{ \hat{u} = u / {\Vert u \Vert} }[/math], the projection of [math]\displaystyle{ v }[/math] onto [math]\displaystyle{ \hat{u} }[/math] is

[math]\displaystyle{ \mathrm{Proj}_{{\hat{u}}}\,{v} = \hat{u} ( \hat{u} \cdot v) }[/math]

Orthogonal to that vector is the difference, designated the rejection,

[math]\displaystyle{ v - \hat{u} ( \hat{u} \cdot v) = \frac{1}{{\Vert u \Vert}^2} ( {\Vert u \Vert}^2 v - u ( u \cdot v)) }[/math]

The rejection can be expressed as a single geometric algebraic product in a few different ways

[math]\displaystyle{ \frac{ u }{{ u}^2} ( u v - u \cdot v) = \frac{1}{ u} ( u \wedge v ) = \hat{u} ( \hat{u} \wedge v ) = ( v \wedge \hat{u} ) \hat{u} }[/math]

The similarity in form between the projection and the rejection is notable. The sum of these recovers the original vector

[math]\displaystyle{ v = \hat{u} ( \hat{u} \cdot v) + \hat{u} ( \hat{u} \wedge v ) }[/math]

Here the projection is in its customary vector form. An alternate formulation is possible that puts the projection in a form that differs from the usual vector formulation

[math]\displaystyle{ v = \frac{1}{ u} ( {u} \cdot v) + \frac{1}{ u} ( {u} \wedge v ) = ( {v} \cdot u) \frac{1}{ u} + ( v \wedge u ) \frac{1}{ u} }[/math]

Working backwards from the result, it can be observed that this orthogonal decomposition result can in fact follow more directly from the definition of the geometric product itself.

[math]\displaystyle{ v = \hat{u} \hat{u} v = \hat{u} ( \hat{u} \cdot v + \hat{u} \wedge v ) }[/math]

With this approach, the original geometrical consideration is not necessarily obvious, but it is a much quicker way to get at the same algebraic result.

However, the hint that one can work backwards, coupled with the knowledge that the wedge product can be used to solve sets of linear equations (as shown above), suggests that the problem of orthogonal decomposition can be posed directly:

Let [math]\displaystyle{ v = a u + x }[/math], where [math]\displaystyle{ u \cdot x = 0 }[/math]. To discard the portion of [math]\displaystyle{ v }[/math] that is collinear with [math]\displaystyle{ u }[/math], take the exterior product

[math]\displaystyle{ u \wedge v = u \wedge (a u + x) = u \wedge x }[/math]

Here the geometric product can be employed

[math]\displaystyle{ u \wedge v = u \wedge x = u x - u \cdot x = u x }[/math]

Because the geometric product is invertible, this can be solved for [math]\displaystyle{ x }[/math]:

[math]\displaystyle{ x = \frac{1}{u}( u \wedge v ) . }[/math]

The same techniques can be applied to similar problems, such as calculation of the component of a vector in a plane and perpendicular to the plane.
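As a numeric illustration of the decomposition (Python with NumPy; u and v are arbitrary example vectors), the projection and rejection sum to the original vector, and the rejection is orthogonal to u:

```python
import numpy as np

u = np.array([3.0, 0.0, 4.0])
v = np.array([1.0, 2.0, 2.0])

uhat = u / np.linalg.norm(u)
proj = uhat * np.dot(uhat, v)           # uhat (uhat . v)
rej = v - proj                          # equals uhat (uhat ^ v) in GA

assert np.allclose(proj + rej, v)       # the decomposition recovers v
assert np.isclose(np.dot(rej, u), 0.0)  # the rejection is orthogonal to u

# The closed form (1/|u|^2)(|u|^2 v - u (u . v)) gives the same rejection.
rej2 = (np.dot(u, u) * v - u * np.dot(u, v)) / np.dot(u, u)
assert np.allclose(rej, rej2)
```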

For three dimensions the projective and rejective components of a vector with respect to an arbitrary unit vector can be expressed in terms of the dot and cross product

[math]\displaystyle{ \mathbf v = (\mathbf v \cdot \hat{\mathbf u})\hat{\mathbf u} + \hat{\mathbf u} \times (\mathbf v \times \hat{\mathbf u}) . }[/math]

For the general case the same result can be written in terms of the dot and wedge product and the geometric product of that and the unit vector

[math]\displaystyle{ \mathbf v = (\mathbf v \cdot \hat{\mathbf u})\hat{\mathbf u} + (\mathbf v \wedge \hat{\mathbf u}) \hat{\mathbf u}. }[/math]

It's also worthwhile to point out that this result can also be expressed using right or left vector division as defined by the geometric product:

[math]\displaystyle{ \mathbf v = (\mathbf v \cdot \mathbf u)\frac{1}{\mathbf u} + (\mathbf v \wedge \mathbf u) \frac{1}{\mathbf u} }[/math]
[math]\displaystyle{ \mathbf v = \frac{1}{\mathbf u}(\mathbf u \cdot \mathbf v) + \frac{1}{\mathbf u}(\mathbf u \wedge \mathbf v) . }[/math]

Like vector projection and rejection, higher-dimensional analogs of that calculation are also possible using the geometric product.

As an example, one can calculate the component of a vector perpendicular to a plane and the projection of that vector onto the plane.

Let [math]\displaystyle{ w = a u + b v + x }[/math], where [math]\displaystyle{ u \cdot x = v \cdot x = 0 }[/math]. As above, to discard the portions of [math]\displaystyle{ w }[/math] that are collinear with [math]\displaystyle{ u }[/math] or [math]\displaystyle{ v }[/math], take the wedge product

[math]\displaystyle{ w \wedge u \wedge v = (a u + b v + x) \wedge u \wedge v = x \wedge u \wedge v . }[/math]

Having done this calculation with a vector projection, one can guess that this quantity equals [math]\displaystyle{ x ( u \wedge v ) }[/math]. One can also guess that there is a vector and bivector dot-product-like quantity that allows the calculation of the component of a vector that lies in the "direction of a plane". Both of these guesses are correct, and validating them is worthwhile. However, skipping ahead slightly, this to-be-proven fact allows a nice closed-form solution for the vector component outside of the plane:

[math]\displaystyle{ x = ( w \wedge u \wedge v)\frac{1}{ u \wedge v} = \frac{1}{ u \wedge v}( u \wedge v \wedge w). }[/math]

Notice the similarities between this planar rejection result and the vector rejection result. To calculate the component of a vector outside of a plane we take the volume spanned by three vectors (trivector) and "divide out" the plane.

Independent of any use of the geometric product it can be shown that this rejection in terms of the standard basis is

[math]\displaystyle{ x = \frac{1}{(A_{u,v})^2} \sum_{i\lt j\lt k} \begin{vmatrix}w_i & w_j & w_k \\u_i & u_j & u_k \\v_i & v_j & v_k \\\end{vmatrix} \begin{vmatrix}u_i & u_j & u_k \\v_i & v_j & v_k \\ { e}_i & { e}_j & { e}_k \\ \end{vmatrix} }[/math]

where

[math]\displaystyle{ (A_{u,v})^2 = \sum_{i\lt j} {\begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix}}^2 = -( u \wedge v)^2 }[/math]

is the squared area of the parallelogram formed by [math]\displaystyle{ u }[/math], and [math]\displaystyle{ v }[/math].

The (squared) magnitude of [math]\displaystyle{ x }[/math] is

[math]\displaystyle{ {\Vert x \Vert}^2 = x \cdot w = \frac{1}{(A_{u,v})^2} \sum_{i\lt j\lt k} {\begin{vmatrix}w_i & w_j & w_k \\u_i & u_j & u_k \\v_i & v_j & v_k \\\end{vmatrix}}^2 }[/math]

Thus, the (squared) volume of the parallelepiped (base area times perpendicular height) is

[math]\displaystyle{ \sum_{i\lt j\lt k} {\begin{vmatrix}w_i & w_j & w_k \\u_i & u_j & u_k \\v_i & v_j & v_k \\\end{vmatrix}}^2 }[/math]

Note the similarity in form to the w, u, v trivector itself

[math]\displaystyle{ \sum_{i\lt j\lt k} {\begin{vmatrix}w_i & w_j & w_k \\u_i & u_j & u_k \\v_i & v_j & v_k \\\end{vmatrix}} {e}_i \wedge {e}_j \wedge {e}_k , }[/math]

which, if the set of [math]\displaystyle{ {e}_i \wedge {e}_j \wedge {e}_k }[/math] is taken as a basis for the trivector space, suggests this is the natural way to define the measure of a trivector. Loosely speaking, the measure of a vector is a length, the measure of a bivector is an area, and the measure of a trivector is a volume.
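These determinant identities can be spot-checked numerically (Python with NumPy; the vectors are arbitrary examples in [math]\displaystyle{ \mathbb{R}^4 }[/math], and the rejection is computed here by least squares, which for this purpose is equivalent to dividing the trivector by the bivector):

```python
from itertools import combinations
import numpy as np

u = np.array([1.0, 0.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0, 0.0])
w = np.array([1.0, 2.0, 3.0, 4.0])

# Rejection of w from the plane span{u, v}: subtract the least-squares
# projection of w onto that plane.
A = np.column_stack([u, v])
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
x = w - A @ coeffs

assert np.isclose(np.dot(x, u), 0.0) and np.isclose(np.dot(x, v), 0.0)

# Check the determinant identities: (A_{u,v})^2 is the sum of squared 2x2
# minors of [u; v], and |x|^2 (A_{u,v})^2 is the sum of squared 3x3 minors
# of [w; u; v].
area_sq = sum(np.linalg.det(np.array([[u[a], u[b]], [v[a], v[b]]]))**2
              for a, b in combinations(range(4), 2))
vol_sq = sum(np.linalg.det(np.array([[w[a], w[b], w[c]],
                                     [u[a], u[b], u[c]],
                                     [v[a], v[b], v[c]]]))**2
             for a, b, c in combinations(range(4), 3))
assert np.isclose(np.dot(x, x) * area_sq, vol_sq)
```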

If a vector is factored directly into projective and rejective terms using the geometric product [math]\displaystyle{ v = \frac{1}{u}( u \cdot v + u \wedge v) }[/math], then it is not necessarily obvious that the rejection term, a product of a vector and a bivector, is even a vector. Expansion of the vector bivector product in terms of the standard basis vectors has the following form

Let [math]\displaystyle{ r = \frac{1}{ u} ( u \wedge v ) = \frac{ u}{ u^2} ( u \wedge v ) = \frac{1}{{\Vert u \Vert}^2} u ( u \wedge v ) }[/math]

It can be shown that

[math]\displaystyle{ r = \frac{1}{{\Vert{ u}\Vert}^2} \sum_{i\lt j}\begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix} \begin{vmatrix}u_i & u_j\\ e_i & e_j\end{vmatrix} }[/math]

(a result that can be shown more easily straight from [math]\displaystyle{ r = v - \hat{u} ( \hat{u} \cdot v) }[/math]).

The rejective term is perpendicular to [math]\displaystyle{ u }[/math], since [math]\displaystyle{ \begin{vmatrix}u_i & u_j\\ u_i & u_j\end{vmatrix} = 0 }[/math] implies [math]\displaystyle{ r \cdot u = 0 }[/math].

The magnitude of [math]\displaystyle{ r }[/math] is

[math]\displaystyle{ {\Vert r \Vert}^2 = r \cdot v = \frac{1}{{\Vert{u}\Vert}^2} \sum_{i\lt j}\begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix}^2 . }[/math]

So, the quantity

[math]\displaystyle{ {\Vert r \Vert}^2 {\Vert{ u}\Vert}^2 = \sum_{i\lt j}\begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix}^2 }[/math]

is the squared area of the parallelogram formed by [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math].

It is also noteworthy that the bivector can be expressed as

[math]\displaystyle{ u \wedge v = \sum_{i\lt j}{ \begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix} e_i \wedge e_j } . }[/math]

Thus it is natural, if one considers each term [math]\displaystyle{ e_i \wedge e_j }[/math] as a basis vector of the bivector space, to define the (squared) "length" of that bivector as the (squared) area.

Going back to the geometric product expression for the length of the rejection [math]\displaystyle{ \frac{1}{u} ( u \wedge v ) }[/math], we see that the length of the quotient, a vector, is in this case the "length" of the bivector divided by the length of the divisor.

This may not be a general result for the length of the product of two k-vectors; however, it is a result that may help build some intuition about the significance of the algebraic operations. Namely,

When a vector is divided out of the plane (parallelogram span) formed from it and another vector, what remains is the perpendicular component of the remaining vector, and its length is the planar area divided by the length of the vector that was divided out.

Area of the parallelogram defined by u and v

If A is the area of the parallelogram defined by u and v, then

[math]\displaystyle{ A^2 = {\Vert \mathbf u \times \mathbf v \Vert}^2 = \sum_{i\lt j}{\begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix}}^2 , }[/math]

and

[math]\displaystyle{ A^2 = -(\mathbf u \wedge \mathbf v)^2 = \sum_{i\lt j}{\begin{vmatrix}u_i & u_j\\v_i & v_j\end{vmatrix}}^2 . }[/math]

Note that this squared bivector is computed with the geometric product; the computation can alternatively be stated as the Gram determinant of the two vectors.
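A quick check that the cross-product expression agrees with the Gram determinant (Python with NumPy; arbitrary example vectors):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 1.0, -2.0])

area_sq_cross = np.dot(np.cross(u, v), np.cross(u, v))

# The Gram determinant of u and v gives the same squared area.
gram = np.array([[np.dot(u, u), np.dot(u, v)],
                 [np.dot(v, u), np.dot(v, v)]])
assert np.isclose(area_sq_cross, np.linalg.det(gram))
```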

Angle between two vectors

[math]\displaystyle{ ({\sin \theta})^2 = \frac{{\Vert \mathbf u \times \mathbf v \Vert}^2}{{\Vert \mathbf u \Vert}^2 {\Vert \mathbf v \Vert}^2} }[/math]
[math]\displaystyle{ ({\sin \theta})^2 = -\frac{(\mathbf u \wedge \mathbf v)^2}{{ \mathbf u }^2 { \mathbf v }^2} }[/math]

Volume of the parallelepiped formed by three vectors

In vector algebra, the squared volume of a parallelepiped is the squared scalar triple product:

[math]\displaystyle{ V^2 = {\Vert (\mathbf u \times \mathbf v) \cdot \mathbf w \Vert}^2 = {\begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \\ \end{vmatrix}}^2 }[/math]

The corresponding wedge product expression is

[math]\displaystyle{ V^2 = -(\mathbf u \wedge \mathbf v \wedge \mathbf w)^2 = -\left(\sum_{i\lt j\lt k} \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \\ \end{vmatrix} \hat{\mathbf e}_i \wedge \hat{\mathbf e}_j \wedge \hat{\mathbf e}_k \right)^2 = \sum_{i\lt j\lt k} {\begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \\ \end{vmatrix}}^2 }[/math]
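The minor-expansion form works beyond three dimensions, where no cross product is available. A sketch (Python with NumPy; three arbitrary example vectors in R^4) checking it against the basis-independent Gram determinant:

```python
from itertools import combinations
import numpy as np

# Three vectors in R^4: the squared "volume" of their parallelepiped is the
# sum of squared 3x3 minors (the squared norm of the trivector u ^ v ^ w).
u = np.array([1.0, 0.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 0.0, 1.0])
w = np.array([0.0, 0.0, 1.0, 1.0])

M = np.vstack([u, v, w])
vol_sq = sum(np.linalg.det(M[:, list(cols)])**2
             for cols in combinations(range(4), 3))

# Cross-check against the Gram determinant, which is basis independent.
assert np.isclose(vol_sq, np.linalg.det(M @ M.T))
```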

Product of a vector and a bivector

In order to justify the normal-to-a-plane result above, a general examination of the product of a vector and a bivector is required. Namely,

[math]\displaystyle{ w ( u \wedge v ) = \sum_{i,j\lt k}w_i {e}_i {\begin{vmatrix}u_j & u_k \\ v_j & v_k \\ \end{vmatrix}} {e}_j \wedge {e}_k }[/math]

This has two parts: the vector part, where [math]\displaystyle{ i=j }[/math] or [math]\displaystyle{ i=k }[/math], and the trivector part, where no indices are equal. After some index summation trickery, grouping terms and so forth, this is

[math]\displaystyle{ w ( u \wedge v) = \sum_{i\lt j}(w_i e_j - w_j e_i ) {\begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}} + \sum_{i\lt j\lt k} {\begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix}} {e}_i \wedge {e}_j \wedge {e}_k }[/math]

The trivector term is [math]\displaystyle{ w \wedge u \wedge v }[/math]. Expansion of [math]\displaystyle{ ( u \wedge v) w }[/math] yields the same trivector term (it is the completely symmetric part), while the vector term is negated. Like the geometric product of two vectors, this geometric product can be grouped into symmetric and antisymmetric parts, each of which is a pure k-vector. By analogy, the antisymmetric part of this product can be called a generalized dot product, and is roughly speaking the dot product of a "plane" (bivector) and a vector.

The properties of this generalized dot product remain to be explored, but first here is a summary of the notation:

[math]\displaystyle{ w ( u \wedge v) = w \cdot ( u \wedge v) + w \wedge u \wedge v }[/math]

[math]\displaystyle{ ( u \wedge v) w = - w \cdot ( u \wedge v) + w \wedge u \wedge v }[/math]

[math]\displaystyle{ w \wedge u \wedge v = \frac{1}{2} ( w ( u \wedge v ) + ( u \wedge v ) w) }[/math]

[math]\displaystyle{ w \cdot ( u \wedge v) = \frac{1}{2} ( w ( u \wedge v ) - ( u \wedge v ) w) }[/math]

Let [math]\displaystyle{ w = x + y }[/math], where [math]\displaystyle{ x = a u + b v }[/math], and [math]\displaystyle{ y \cdot u = y \cdot v = 0 }[/math]. Expressing the product [math]\displaystyle{ w ( u \wedge v ) }[/math] in terms of these components gives

[math]\displaystyle{ w ( u \wedge v ) = x ( u \wedge v ) + y ( u \wedge v ) = x \cdot ( u \wedge v) + y \cdot ( u \wedge v) + y \wedge u \wedge v }[/math]

With the conditions and definitions above, and some manipulation, it can be shown that the term [math]\displaystyle{ y \cdot ( u \wedge v) = 0 }[/math], which then justifies the previous solution of the normal-to-a-plane problem. The name "dot product" for the vector term of the vector-bivector product is justified by more than the fact that it is the non-wedge term of the geometric product: this term is zero when the vector is perpendicular to the plane (bivector), and it selects only the components of the vector that lie in the plane, in analogy with the vector-vector dot product.

Derivative of a unit vector

It can be shown that a unit vector derivative can be expressed using the cross product

[math]\displaystyle{ \frac{d}{dt}\left(\frac{\mathbf r}{\Vert \mathbf r \Vert}\right) = \frac{1}{{\Vert \mathbf r \Vert}^3}\left(\mathbf r \times \frac{d \mathbf r}{dt}\right) \times \mathbf r = \left(\hat{\mathbf r} \times \frac{1}{{\Vert \mathbf r \Vert}} \frac{d \mathbf r}{dt}\right) \times \hat{\mathbf r} }[/math]

The equivalent geometric product generalization is

[math]\displaystyle{ \frac{d}{dt}\left(\frac{\mathbf r}{\Vert \mathbf r \Vert}\right) = \frac{1}{{\Vert \mathbf r \Vert}^3}\mathbf r \left(\mathbf r \wedge \frac{d \mathbf r}{dt}\right) = \frac{1}{{ \mathbf r }}\left(\hat{\mathbf r} \wedge \frac{d \mathbf r}{dt}\right) }[/math]

Thus this derivative is the component of [math]\displaystyle{ \frac{1}{{\Vert \mathbf r \Vert}}\frac{d \mathbf r}{dt} }[/math] in the direction perpendicular to [math]\displaystyle{ \mathbf r }[/math]. In other words, this is [math]\displaystyle{ \frac{1}{{\Vert \mathbf r \Vert}}\frac{d \mathbf r}{dt} }[/math] minus the projection of that vector onto [math]\displaystyle{ \hat{\mathbf r} }[/math].

This intuitively makes sense (but a picture would help) since a unit vector is constrained to circular motion, and any change to a unit vector due to a change in its generating vector has to be in the direction of the rejection of [math]\displaystyle{ \hat{\mathbf r} }[/math] from [math]\displaystyle{ \frac{d \mathbf r}{dt} }[/math]. That rejection has to be scaled by 1/|r| to get the final result.

When the objective is not comparison with the cross product, it is also notable that this unit vector derivative can be written

[math]\displaystyle{ {{ \mathbf r }} \frac{d \hat{\mathbf r}}{dt} = \hat{\mathbf r} \wedge \frac{d \mathbf r}{dt} }[/math]
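A finite-difference spot-check of the derivative formula (Python with NumPy; the curve r(t) is an arbitrary example choice):

```python
import numpy as np

def r(t):
    # A hypothetical curve; nothing is special about this choice.
    return np.array([np.cos(t), np.sin(t), t])

def rdot(t):
    return np.array([-np.sin(t), np.cos(t), 1.0])

t, h = 0.7, 1e-6
rv, rd = r(t), rdot(t)
n = np.linalg.norm(rv)

# Cross-product form of the unit-vector derivative: (1/|r|^3)(r x r') x r.
formula = np.cross(np.cross(rv, rd), rv) / n**3

# Central finite difference of r / |r|.
fd = (r(t + h)/np.linalg.norm(r(t + h))
      - r(t - h)/np.linalg.norm(r(t - h))) / (2*h)

assert np.allclose(formula, fd, atol=1e-6)
```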

See also

Citations

References and further reading