Principal axis theorem

In geometry and linear algebra, a principal axis is a certain line in a Euclidean space associated with an ellipsoid or hyperboloid, generalizing the major and minor axes of an ellipse or hyperbola. The principal axis theorem states that the principal axes are perpendicular, and gives a constructive procedure for finding them.

Mathematically, the principal axis theorem is a generalization of the method of completing the square from elementary algebra. In linear algebra and functional analysis, the principal axis theorem is a geometrical counterpart of the spectral theorem. It has applications to the statistics of principal components analysis and the singular value decomposition. In physics, the theorem is fundamental to the studies of angular momentum and birefringence.

Motivation

The equations in the Cartesian plane R2:

[math]\displaystyle{ \begin{align} \frac{x^2}{9} + \frac{y^2}{25} &= 1 \\[3pt] \frac{x^2}{9} - \frac{y^2}{25} &= 1 \end{align} }[/math]

define, respectively, an ellipse and a hyperbola. In each case, the x and y axes are the principal axes. This is easily seen, given that there are no cross-terms involving products xy in either expression. However, the situation is more complicated for equations like

[math]\displaystyle{ 5x^2 + 8xy + 5y^2 = 1. }[/math]

Here some method is required to determine whether this is an ellipse or a hyperbola. The basic observation is that if, by completing the square, the quadratic expression can be reduced to a sum of two squares then the equation defines an ellipse, whereas if it reduces to a difference of two squares then the equation represents a hyperbola:

[math]\displaystyle{ \begin{align} u(x, y)^2 + v(x, y)^2 &= 1\qquad \text{(ellipse)} \\ u(x, y)^2 - v(x, y)^2 &= 1\qquad \text{(hyperbola)}. \end{align} }[/math]

Thus, in our example expression, the problem is how to absorb the coefficient of the cross-term 8xy into the functions u and v. Formally, this problem is similar to the problem of matrix diagonalization, where one tries to find a suitable coordinate system in which the matrix of a linear transformation is diagonal. The first step is to find a matrix to which the technique of diagonalization can be applied.

The trick is to write the quadratic form as

[math]\displaystyle{ 5x^2 + 8xy + 5y^2 = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \mathbf{x}^\textsf{T} A\mathbf{x} }[/math]

where the cross-term has been split into two equal parts. The matrix A in the above decomposition is a symmetric matrix. In particular, by the spectral theorem, it has real eigenvalues and is diagonalizable by an orthogonal matrix (orthogonally diagonalizable).

To orthogonally diagonalize A, one must first find its eigenvalues, and then find an orthonormal eigenbasis. Calculation reveals that the eigenvalues of A are

[math]\displaystyle{ \lambda_1 = 1,\quad \lambda_2 = 9 }[/math]

with corresponding eigenvectors

[math]\displaystyle{ \mathbf{v}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix},\quad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. }[/math]

Dividing these by their respective lengths yields an orthonormal eigenbasis:

[math]\displaystyle{ \mathbf{u}_1 = \begin{bmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix},\quad \mathbf{u}_2 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}. }[/math]
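This computation can be checked numerically. The following is a minimal sketch assuming NumPy; numpy.linalg.eigh is appropriate here because A is symmetric, and it returns real eigenvalues in ascending order together with orthonormal eigenvectors (each determined only up to sign).

    import numpy as np

    A = np.array([[5.0, 4.0],
                  [4.0, 5.0]])

    # eigh is designed for symmetric matrices: real eigenvalues, orthonormal eigenvectors
    eigenvalues, eigenvectors = np.linalg.eigh(A)

    print(eigenvalues)   # [1. 9.]
    print(eigenvectors)  # columns are u1 and u2, each determined up to sign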

Now the matrix S = [u1 u2] is an orthogonal matrix, since it has orthonormal columns, and A is diagonalized by:

[math]\displaystyle{ A = SDS^{-1} = SDS^\textsf{T} = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2}\\ -1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 9 \end{bmatrix} \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}. }[/math]

This applies to the present problem of "diagonalizing" the quadratic form through the observation that

[math]\displaystyle{ 5x^2 + 8xy + 5y^2 = \mathbf{x}^\textsf{T} A\mathbf{x} = \mathbf{x}^\textsf{T}\left(SDS^\textsf{T}\right)\mathbf{x} = \left(S^\textsf{T} \mathbf{x}\right)^\textsf{T} D\left(S^\textsf{T} \mathbf{x}\right) = 1\left(\frac{x - y}{\sqrt{2}}\right)^2 + 9\left(\frac{x + y}{\sqrt{2}}\right)^2. }[/math]

Thus, the equation [math]\displaystyle{ 5x^2 + 8xy + 5y^2 = 1 }[/math] is that of an ellipse, since the left side can be written as the sum of two squares.
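This identity can also be verified numerically; the following sketch (again assuming NumPy, with S as computed above) compares the original quadratic form with its diagonalized version at a few random points.

    import numpy as np

    S = np.array([[1, 1],
                  [-1, 1]]) / np.sqrt(2)   # columns are u1 and u2

    rng = np.random.default_rng(0)
    for _ in range(5):
        x = rng.standard_normal(2)
        original = 5*x[0]**2 + 8*x[0]*x[1] + 5*x[1]**2
        c = S.T @ x                         # new coordinates c1, c2
        diagonalized = c[0]**2 + 9*c[1]**2
        assert np.isclose(original, diagonalized)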

It is tempting to simplify the diagonalized expression by pulling out factors of 2. However, it is important not to do this. The quantities

[math]\displaystyle{ c_1 = \frac{x - y}{\sqrt{2}},\quad c_2 = \frac{x + y}{\sqrt{2}} }[/math]

have a geometrical meaning. They determine an orthonormal coordinate system on R2. In other words, they are obtained from the original coordinates by the application of a rotation (and possibly a reflection). Consequently, one may use the c1 and c2 coordinates to make statements about lengths and angles (particularly lengths), which would be more difficult in a different choice of coordinates (obtained by rescaling them, for instance). For example, the maximum distance from the origin on the ellipse [math]\displaystyle{ c_1^2 + 9c_2^2 = 1 }[/math] occurs when c2 = 0, at the points c1 = ±1. Similarly, the minimum distance occurs when c1 = 0, at the points c2 = ±1/3.
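Because the new coordinates are related to the old ones by the rotation x = Sc, these extreme points can also be located in the original xy-plane. The following sketch (assuming NumPy, with S the orthogonal matrix computed above) maps c = (1, 0) and c = (0, 1/3) back to x and y.

    import numpy as np

    S = np.array([[1, 1],
                  [-1, 1]]) / np.sqrt(2)

    # an endpoint of the major axis (distance 1 from the origin)
    print(S @ np.array([1.0, 0.0]))   # [ 0.7071 -0.7071]
    # an endpoint of the minor axis (distance 1/3 from the origin)
    print(S @ np.array([0.0, 1/3]))   # [0.2357 0.2357]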

It is possible now to read off the major and minor axes of this ellipse. These are precisely the individual eigenspaces of the matrix A, since these are where c2 = 0 or c1 = 0. Symbolically, the principal axes are

[math]\displaystyle{ E_1 = \text{span}\left(\begin{bmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix}\right),\quad E_2 = \text{span}\left(\begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}\right). }[/math]

To summarize:

  • The equation is for an ellipse, since both eigenvalues are positive. (Otherwise, if one were positive and the other negative, it would be a hyperbola.)
  • The principal axes are the lines spanned by the eigenvectors.
  • The minimum and maximum distances to the origin can be read off the equation in diagonal form.

Using this information, it is possible to attain a clear geometrical picture of the ellipse: to graph it, for instance.
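For instance, a plot can be produced by parametrizing the ellipse in the c1, c2 coordinates and rotating back with S. The following is a minimal sketch assuming NumPy and Matplotlib are available.

    import numpy as np
    import matplotlib.pyplot as plt

    S = np.array([[1, 1],
                  [-1, 1]]) / np.sqrt(2)   # columns are the principal directions

    # in the c-coordinates the ellipse is c1^2 + 9*c2^2 = 1
    t = np.linspace(0, 2*np.pi, 400)
    c = np.vstack([np.cos(t), np.sin(t) / 3])

    xy = S @ c                              # rotate back to the original coordinates
    plt.plot(xy[0], xy[1])
    plt.gca().set_aspect("equal")
    plt.title(r"$5x^2 + 8xy + 5y^2 = 1$")
    plt.show()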

Formal statement

The principal axis theorem concerns quadratic forms in Rn, which are homogeneous polynomials of degree 2. Any quadratic form may be represented as

[math]\displaystyle{ Q(\mathbf{x}) = \mathbf{x}^\textsf{T} A\mathbf{x} }[/math]

where A is a symmetric matrix.

The first part of the theorem is contained in the following statements guaranteed by the spectral theorem:

  • The eigenvalues of A are real.
  • A is diagonalizable, and the eigenspaces of A are mutually orthogonal.

In particular, A is orthogonally diagonalizable: one may take an orthonormal basis of each eigenspace (by applying the Gram-Schmidt process within it) and, since the eigenspaces are mutually orthogonal, the union of these bases is an orthonormal eigenbasis.

For the second part, suppose that the eigenvalues of A are λ1, ..., λn (possibly repeated according to their algebraic multiplicities) and that the corresponding orthonormal eigenbasis is u1, ..., un. Writing the coordinates of x with respect to this eigenbasis as

[math]\displaystyle{ \mathbf{c} = [\mathbf{u}_1, \ldots,\mathbf{u}_n]^\textsf{T} \mathbf{x}, }[/math]

the quadratic form becomes

[math]\displaystyle{ Q(\mathbf{x}) = \lambda_1 c_1^2 + \lambda_2 c_2^2 + \dots + \lambda_n c_n^2, }[/math]

where ci is the i-th entry of c. Furthermore,

The i-th principal axis is the line determined by setting cj = 0 for all [math]\displaystyle{ j = 1, \ldots, i-1, i+1, \ldots, n }[/math]; equivalently, it is the span of the eigenvector ui.
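The same computation works in any dimension, so the theorem translates directly into a short numerical routine. The sketch below assumes NumPy; the function name principal_axes is only an illustrative choice.

    import numpy as np

    def principal_axes(A):
        """Return (eigenvalues, eigenvectors) of a symmetric matrix A.

        The quadratic form x^T A x equals sum_i lam[i] * c[i]**2, where
        c = U.T @ x and the i-th principal axis is spanned by U[:, i].
        """
        A = np.asarray(A, dtype=float)
        assert np.allclose(A, A.T), "A must be symmetric"
        lam, U = np.linalg.eigh(A)   # real eigenvalues, orthonormal columns
        return lam, U

    # Example: the quadratic form 5x^2 + 8xy + 5y^2
    lam, U = principal_axes([[5, 4], [4, 5]])
    print(lam)   # [1. 9.]  -> both positive, so the level set is an ellipse
    print(U)     # columns span the principal axes (up to sign)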
