# Raising and lowering indices

In mathematics and mathematical physics, raising and lowering indices are operations on tensors which change their type. Raising and lowering indices are a form of index manipulation in tensor expressions.

## Vectors, covectors and the metric

### Mathematical formulation

Mathematically, vectors are elements of a vector space $\displaystyle{ V }$ over a field $\displaystyle{ K }$; for use in physics, $\displaystyle{ V }$ is usually defined with $\displaystyle{ K=\mathbb{R} }$ or $\displaystyle{ \mathbb{C} }$. Concretely, if the dimension $\displaystyle{ n=\text{dim}V }$ of $\displaystyle{ V }$ is finite, then, after making a choice of basis, we can view such vector spaces as $\displaystyle{ \mathbb{R}^n }$ or $\displaystyle{ \mathbb{C}^n }$.

The dual space is the space of linear functionals mapping $\displaystyle{ V\rightarrow K }$. Concretely, in matrix notation these can be thought of as row vectors, which give a number when applied to column vectors. We denote this by $\displaystyle{ V^*:= \text{Hom}(V,K) }$, so that $\displaystyle{ \alpha \in V^* }$ is a linear map $\displaystyle{ \alpha:V\rightarrow K }$.

Then under a choice of basis $\displaystyle{ \{e_i\} }$, we can view a vector $\displaystyle{ v\in V }$ as an element of $\displaystyle{ K^n }$ with components $\displaystyle{ v^i }$ (vectors are taken by convention to have indices up). This picks out a choice of basis $\displaystyle{ \{e^i\} }$ for $\displaystyle{ V^* }$, called the dual basis, defined by the set of relations $\displaystyle{ e^i(e_j) = \delta^i_j }$.

For applications, raising and lowering is done using a structure known as the (pseudo-)metric tensor (the 'pseudo-' refers to the fact we allow the metric to be indefinite). Formally, this is a non-degenerate, symmetric bilinear form

$\displaystyle{ g:V\times V\rightarrow K \text{ a bilinear form} }$
$\displaystyle{ g(u,v) = g(v,u) \text{ for all }u,v\in V \text{ (Symmetric)} }$
$\displaystyle{ \forall v\in V\setminus\{0\}, \exists u\in V \text{ such that } g(v,u)\neq 0 \text{ (Non-degenerate)} }$

In this basis, it has components $\displaystyle{ g(e_i,e_j) = g_{ij} }$, and can be viewed as a symmetric matrix in $\displaystyle{ \text{Mat}_{n\times n}(K) }$ with these components. The inverse metric exists due to non-degeneracy and is denoted $\displaystyle{ g^{ij} }$, and as a matrix is the inverse to $\displaystyle{ g_{ij} }$.

### Raising and lowering vectors and covectors

Raising and lowering is then done in coordinates. Given a vector with components $\displaystyle{ v^i }$, we can contract with the metric to obtain a covector:

$\displaystyle{ g_{ij}v^j = v_i }$

and this is what we mean by lowering the index. Conversely, contracting a covector with the inverse metric gives a vector:

$\displaystyle{ g^{ij}\alpha_j=\alpha^i. }$

This process is called raising the index.

Raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the metric and inverse metric tensors being inverse to each other (as is suggested by the terminology):

$\displaystyle{ g^{ij}g_{jk}=g_{kj}g^{ji}={\delta^i}_k={\delta_k}^i }$

where $\displaystyle{ \delta^i_j }$ is the Kronecker delta or identity matrix.
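These coordinate operations are easy to check numerically. The following sketch (assuming NumPy is available; the metric $\displaystyle{ \text{diag}(-1,1,1,1) }$ and the vector components are illustrative choices) lowers and raises an index with `np.einsum` and verifies that the two operations are inverse to each other:

```python
import numpy as np

# Illustrative metric: the Minkowski metric in an orthonormal basis.
g = np.diag([-1.0, 1.0, 1.0, 1.0])      # g_ij
g_inv = np.linalg.inv(g)                 # g^ij (here equal to g itself)

v_up = np.array([2.0, 3.0, 5.0, 7.0])    # components v^i

# Lowering: v_i = g_ij v^j
v_down = np.einsum('ij,j->i', g, v_up)

# Raising: v^i = g^ij v_j recovers the original components
v_up_again = np.einsum('ij,j->i', g_inv, v_down)
assert np.allclose(v_up_again, v_up)

# g^ij g_jk = delta^i_k: the metric and inverse metric are matrix inverses
assert np.allclose(np.einsum('ij,jk->ik', g_inv, g), np.eye(4))
```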

Finite-dimensional real vector spaces with (pseudo-)metrics are classified up to signature, a coordinate-free property which is well-defined by Sylvester's law of inertia. Possible metrics on $\displaystyle{ n=p+q }$ dimensional real space are indexed by their signature $\displaystyle{ (p,q) }$. The metric has signature $\displaystyle{ (p,q) }$ if there exists a basis (referred to as an orthonormal basis) in which the metric takes the form $\displaystyle{ (g_{ij}) = \text{diag}(+1, \cdots, +1, -1, \cdots, -1) }$ with $\displaystyle{ p }$ positive ones and $\displaystyle{ q }$ negative ones.

The concrete space with elements which are $\displaystyle{ n }$-vectors and this concrete realization of the metric is denoted $\displaystyle{ \mathbb{R}^{p,q}=(\mathbb{R}^n,g_{ij}) }$, where the 2-tuple $\displaystyle{ (\mathbb{R}^n, g_{ij}) }$ is meant to make it clear that the underlying vector space of $\displaystyle{ \mathbb{R}^{p,q} }$ is $\displaystyle{ \mathbb{R}^n }$: equipping this vector space with the metric $\displaystyle{ g_{ij} }$ is what turns the space into $\displaystyle{ \mathbb{R}^{p,q} }$.

Examples:

• $\displaystyle{ \mathbb{R}^{3,0} }$ is a model for 3-dimensional space. The metric is equivalent to the standard dot product.
• $\displaystyle{ \mathbb{R}^{n,0} = \mathbb{R}^n }$, equivalent to $\displaystyle{ n }$ dimensional real space as an inner product space with $\displaystyle{ g_{ij} = \delta_{ij} }$. In Euclidean space, raising and lowering is not necessary, because vector and covector components coincide.
• $\displaystyle{ \mathbb{R}^{1,3} }$ is Minkowski space (or rather, Minkowski space in a choice of orthonormal basis), a model for spacetime with weak curvature. It is common convention to use Greek indices when writing expressions involving tensors in Minkowski space, while Latin indices are reserved for Euclidean space.

Well-formulated expressions are constrained by the rules of Einstein summation: any index may appear at most twice, and a repeated index must appear once raised and once lowered, so that it is contracted. With these rules we can immediately see that an expression such as

$\displaystyle{ g_{ij}v^iu^j }$

is well formulated while

$\displaystyle{ g_{ij}v_iu_j }$

is not.
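As a numerical sanity check (again assuming NumPy, with illustrative components), the well-formed pairing $\displaystyle{ g_{ij}v^iu^j }$ gives the same number whether we contract everything at once or lower one index first:

```python
import numpy as np

# Illustrative metric and vectors.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
v = np.array([1.0, 2.0, 0.0, 0.0])
u = np.array([3.0, 1.0, 4.0, 0.0])

# g_ij v^i u^j: each index appears once raised and once lowered
s1 = np.einsum('ij,i,j->', g, v, u)

# Equivalently, lower an index of v first, then contract: v_j u^j
v_down = np.einsum('ij,j->i', g, v)
s2 = np.einsum('i,i->', v_down, u)
assert np.isclose(s1, s2)
```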

### Example in Minkowski spacetime

The covariant 4-position is given by

$\displaystyle{ X_\mu = (-ct, x, y, z) }$

with components:

$\displaystyle{ X_0 = -ct, \quad X_1 = x, \quad X_2 = y, \quad X_3 = z }$

(where x,y,z are the usual Cartesian coordinates) and the Minkowski metric tensor with metric signature (− + + +) is defined as

$\displaystyle{ \eta_{\mu \nu} = \eta^{\mu \nu} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} }$

in components:

$\displaystyle{ \eta_{00} = -1, \quad \eta_{i0} = \eta_{0i} = 0,\quad \eta_{ij} = \delta_{ij}\,(i,j \neq 0). }$

To raise the index, multiply by the inverse metric tensor and contract:

$\displaystyle{ X^\lambda = \eta^{\lambda\mu}X_\mu = \eta^{\lambda 0}X_0 + \eta^{\lambda i}X_i }$

then for λ = 0:

$\displaystyle{ X^0 = \eta^{00}X_0 + \eta^{0i}X_i = -X_0 }$

and for λ = j = 1, 2, 3:

$\displaystyle{ X^j = \eta^{j0}X_0 + \eta^{ji}X_i = \delta^{ji}X_i = X_j \,. }$

So the index-raised contravariant 4-position is:

$\displaystyle{ X^\mu = (ct, x, y, z)\,. }$

This operation is equivalent to the matrix multiplication

$\displaystyle{ \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} -ct \\ x \\ y \\ z \end{pmatrix} = \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix}. }$
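The raising step above can be sketched numerically (assuming NumPy; the coordinate values $\displaystyle{ ct = 1 }$ and $\displaystyle{ (x,y,z) = (2,3,4) }$ are illustrative):

```python
import numpy as np

# Minkowski metric with signature (- + + +); here eta^{mu nu} = eta_{mu nu}.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Covariant 4-position X_mu = (-ct, x, y, z) with illustrative values.
X_down = np.array([-1.0, 2.0, 3.0, 4.0])

# X^lambda = eta^{lambda mu} X_mu, as a matrix-vector product
X_up = eta @ X_down

# The contravariant 4-position is X^mu = (ct, x, y, z)
assert np.allclose(X_up, [1.0, 2.0, 3.0, 4.0])
```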

Given two vectors, $\displaystyle{ X^\mu }$ and $\displaystyle{ Y^\mu }$, we can write down their (pseudo-)inner product in two ways:

$\displaystyle{ \eta_{\mu\nu}X^\mu Y^\nu. }$

By lowering indices, we can write this expression as

$\displaystyle{ X_\mu Y^\mu. }$

What is this in matrix notation? The first expression can be written as

$\displaystyle{ \begin{pmatrix} X^0 & X^1 & X^2 & X^3 \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} Y^0 \\ Y^1 \\ Y^2 \\ Y^3\end{pmatrix} }$

while the second is, after lowering the indices of $\displaystyle{ X^\mu }$,

$\displaystyle{ \begin{pmatrix} -X^0 & X^1 & X^2 & X^3 \end{pmatrix}\begin{pmatrix} Y^0 \\ Y^1 \\ Y^2 \\ Y^3\end{pmatrix}. }$
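Numerically (assuming NumPy, with illustrative components), the two forms of the inner product agree, as they must:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
X = np.array([1.0, 2.0, 3.0, 4.0])   # X^mu, illustrative values
Y = np.array([5.0, 6.0, 7.0, 8.0])   # Y^mu, illustrative values

# First form: eta_{mu nu} X^mu Y^nu as row vector * matrix * column vector
s1 = X @ eta @ Y

# Second form: lower the indices of X first, then contract: X_mu Y^mu
X_down = eta @ X                      # (-X^0, X^1, X^2, X^3)
s2 = X_down @ Y
assert np.isclose(s1, s2)
```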

### Coordinate free formalism

It is instructive to consider what raising and lowering means in the abstract linear algebra setting.

We first fix definitions: $\displaystyle{ V }$ is a finite-dimensional vector space over a field $\displaystyle{ K }$. Typically $\displaystyle{ K=\mathbb{R} }$ or $\displaystyle{ \mathbb{C} }$.

$\displaystyle{ \phi:V\times V\rightarrow K }$ is a non-degenerate bilinear form, that is, a map which is linear in both arguments.

By $\displaystyle{ \phi }$ being non-degenerate we mean that for each nonzero $\displaystyle{ v\in V }$, there is a $\displaystyle{ u\in V }$ such that

$\displaystyle{ \phi(v,u)\neq 0. }$

In concrete applications, $\displaystyle{ \phi }$ is often considered a structure on the vector space, for example an inner product or more generally a metric tensor which is allowed to have indefinite signature, or a symplectic form $\displaystyle{ \omega }$. Together these cover the cases where $\displaystyle{ \phi }$ is either symmetric or anti-symmetric, but in full generality $\displaystyle{ \phi }$ need not be either of these cases.

There is a partial evaluation map associated to $\displaystyle{ \phi }$,

$\displaystyle{ \phi(\cdot, - ):V\rightarrow V^*; v\mapsto \phi(v,\cdot) }$

where $\displaystyle{ \cdot }$ denotes an argument which is to be evaluated, and $\displaystyle{ - }$ denotes an argument whose evaluation is deferred. Then $\displaystyle{ \phi(v,\cdot) }$ is an element of $\displaystyle{ V^* }$, which sends $\displaystyle{ u\mapsto \phi(v,u) }$.

We made a choice to define this partial evaluation map as being evaluated on the first argument. We could just as well have defined it on the second argument, and non-degeneracy is independent of the argument chosen. Also, when $\displaystyle{ \phi }$ has well-defined (anti-)symmetry, evaluating on either argument is equivalent (up to a minus sign for anti-symmetry).

Non-degeneracy shows that the partial evaluation map is injective, or equivalently that the kernel of the map is trivial. In finite dimension, the dual space $\displaystyle{ V^* }$ has equal dimension to $\displaystyle{ V }$, so non-degeneracy is enough to conclude the map is a linear isomorphism. If $\displaystyle{ \phi }$ is a structure on the vector space, this is sometimes called the canonical isomorphism $\displaystyle{ V\rightarrow V^* }$.

It therefore has an inverse, $\displaystyle{ \phi^{-1}:V^*\rightarrow V, }$ and this is enough to define an associated bilinear form on the dual:

$\displaystyle{ \phi^{-1}:V^*\times V^*\rightarrow K, \phi^{-1}(\alpha,\beta) = \phi(\phi^{-1}(\alpha),\phi^{-1}(\beta)). }$

where the repeated use of $\displaystyle{ \phi^{-1} }$ is disambiguated by the argument taken. That is, $\displaystyle{ \phi^{-1}(\alpha) }$ is the inverse map, while $\displaystyle{ \phi^{-1}(\alpha,\beta) }$ is the bilinear form.

Checking these expressions in coordinates makes it evident that this is what raising and lowering indices means abstractly.
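A small numerical sketch (assuming NumPy; the matrix $\displaystyle{ G }$ representing $\displaystyle{ \phi }$ is an illustrative choice) of the partial evaluation map, its inverse, and the induced form on the dual:

```python
import numpy as np

# Represent a symmetric, non-degenerate bilinear form phi on R^3 by a
# matrix G, so that phi(v, u) = v^T G u. Illustrative choice:
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, -1.0]])
assert np.linalg.det(G) != 0             # non-degeneracy

# Partial evaluation: v maps to the covector phi(v, .), components G^T v
v = np.array([1.0, 2.0, 3.0])
alpha = G.T @ v

# Non-degeneracy makes this map invertible: recover v from alpha
v_back = np.linalg.solve(G.T, alpha)
assert np.allclose(v_back, v)

# Induced form on the dual: phi^{-1}(a, b) = phi(phi^{-1}(a), phi^{-1}(b))
beta = np.array([0.0, 1.0, 1.0])
lhs = alpha @ np.linalg.inv(G).T @ beta                                 # a^T G^{-T} b
rhs = np.linalg.solve(G.T, alpha) @ G @ np.linalg.solve(G.T, beta)      # phi of preimages
assert np.isclose(lhs, rhs)
```

For a symmetric $\displaystyle{ G }$ as here, $\displaystyle{ G^{-T} = G^{-1} }$, so the induced form on the dual is represented by the inverse matrix, matching the coordinate formula $\displaystyle{ g^{ij} }$.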

## Tensors

We will not develop the abstract formalism for tensors straightaway. Formally, an $\displaystyle{ (r,s) }$ tensor is an object described via its components, with $\displaystyle{ r }$ indices up and $\displaystyle{ s }$ indices down. A generic $\displaystyle{ (r,s) }$ tensor is written

$\displaystyle{ T^{\mu_1\cdots \mu_r}{}_{\nu_1\cdots \nu_s}. }$

We can use the metric tensor to raise and lower tensor indices just as we raised and lowered vector indices and raised covector indices.

### Examples

• A (0,0) tensor is a number in the field $\displaystyle{ K }$.
• A (1,0) tensor is a vector.
• A (0,1) tensor is a covector.
• A (0,2) tensor is a bilinear form. An example is the metric tensor $\displaystyle{ g_{\mu\nu}. }$
• A (1,1) tensor is a linear map. An example is the delta, $\displaystyle{ \delta^\mu{}_\nu }$, which is the identity map, or a Lorentz transformation $\displaystyle{ \Lambda^\mu{}_\nu. }$

### Example of raising and lowering

For a (0,2) tensor,[1] contracting twice with the inverse metric tensor, once for each index, raises both indices:

$\displaystyle{ A^{\mu\nu}=g^{\mu\rho}g^{\nu\sigma}A_{\rho \sigma}. }$

Similarly, contracting twice with the metric tensor, once for each index, lowers both indices:

$\displaystyle{ A_{\mu\nu}=g_{\mu\rho}g_{\nu\sigma}A^{\rho\sigma} }$
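These two operations can be checked numerically. In the sketch below (assuming NumPy; the components of $\displaystyle{ A }$ are random illustrative values), raising both indices and then lowering them again recovers the original tensor:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])     # g_{mu nu}, illustrative metric
g_inv = np.linalg.inv(g)               # g^{mu nu}

rng = np.random.default_rng(0)
A_down = rng.standard_normal((4, 4))   # A_{rho sigma}, illustrative components

# A^{mu nu} = g^{mu rho} g^{nu sigma} A_{rho sigma}
A_up = np.einsum('mr,ns,rs->mn', g_inv, g_inv, A_down)

# A_{mu nu} = g_{mu rho} g_{nu sigma} A^{rho sigma} recovers the original
A_back = np.einsum('mr,ns,rs->mn', g, g, A_up)
assert np.allclose(A_back, A_down)
```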

Let's apply this to the theory of electromagnetism.

The contravariant electromagnetic tensor in the (+ − − −) signature is given by[2]

$\displaystyle{ F^{\alpha\beta} = \begin{pmatrix} 0 & -\frac{E_x}{c} & -\frac{E_y}{c} & -\frac{E_z}{c} \\ \frac{E_x}{c} & 0 & -B_z & B_y \\ \frac{E_y}{c} & B_z & 0 & -B_x \\ \frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix}. }$

In components,

$\displaystyle{ F^{0i} = -F^{i0} = - \frac{E^i}{c} ,\quad F^{ij} = - \varepsilon^{ijk} B_k }$

To obtain the covariant tensor $\displaystyle{ F_{\alpha\beta} }$, contract with the metric tensor:

$\displaystyle{ \begin{align} F_{\alpha\beta} & = \eta_{\alpha\gamma} \eta_{\beta\delta} F^{\gamma\delta} \\ & = \eta_{\alpha 0} \eta_{\beta 0} F^{0 0} + \eta_{\alpha i} \eta_{\beta 0} F^{i 0} + \eta_{\alpha 0} \eta_{\beta i} F^{0 i} + \eta_{\alpha i} \eta_{\beta j} F^{i j} \end{align} }$

and since F00 = 0 and F0i = − Fi0, this reduces to

$\displaystyle{ F_{\alpha\beta} = \left(\eta_{\alpha i} \eta_{\beta 0} - \eta_{\alpha 0} \eta_{\beta i} \right) F^{i 0} + \eta_{\alpha i} \eta_{\beta j} F^{i j} }$

Now for α = 0, β = k = 1, 2, 3:

$\displaystyle{ \begin{align} F_{0k} & = \left(\eta_{0i} \eta_{k0} - \eta_{00} \eta_{ki} \right) F^{i0} + \eta_{0i} \eta_{kj} F^{ij} \\ & = \bigl(0 - (-\delta_{ki}) \bigr) F^{i0} + 0 \\ & = F^{k0} = - F^{0k} \\ \end{align} }$

and by antisymmetry, for α = k = 1, 2, 3, β = 0:

$\displaystyle{ F_{k0} = - F^{k0} }$

then finally for α = k = 1, 2, 3, β = l = 1, 2, 3;

$\displaystyle{ \begin{align} F_{kl} & = \left(\eta_{ k i} \eta_{ l 0} - \eta_{ k 0} \eta_{ l i} \right) F^{i 0} + \eta_{ k i} \eta_{ l j} F^{i j} \\ & = 0 + \delta_{ k i} \delta_{ l j} F^{i j} \\ & = F^{k l} \\ \end{align} }$

The (covariant) lower indexed tensor is then:

$\displaystyle{ F_{\alpha\beta} = \begin{pmatrix} 0 & \frac{E_x}{c} & \frac{E_y}{c} & \frac{E_z}{c} \\ -\frac{E_x}{c} & 0 & -B_z & B_y \\ -\frac{E_y}{c} & B_z & 0 & -B_x \\ -\frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix} }$

This operation is equivalent to the matrix multiplication

$\displaystyle{ \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & -\frac{E_x}{c} & -\frac{E_y}{c} & -\frac{E_z}{c} \\ \frac{E_x}{c} & 0 & -B_z & B_y \\ \frac{E_y}{c} & B_z & 0 & -B_x \\ \frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} =\begin{pmatrix} 0 & \frac{E_x}{c} & \frac{E_y}{c} & \frac{E_z}{c} \\ -\frac{E_x}{c} & 0 & -B_z & B_y \\ -\frac{E_y}{c} & B_z & 0 & -B_x \\ -\frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix}. }$
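This lowering can be sketched numerically (assuming NumPy; the field values are illustrative):

```python
import numpy as np

# Metric in the (+ - - -) signature convention of this example.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# F^{alpha beta} for illustrative values E/c = (1, 2, 3), B = (4, 5, 6)
Ex, Ey, Ez = 1.0, 2.0, 3.0     # components of E/c
Bx, By, Bz = 4.0, 5.0, 6.0
F_up = np.array([[0.0, -Ex, -Ey, -Ez],
                 [ Ex, 0.0, -Bz,  By],
                 [ Ey,  Bz, 0.0, -Bx],
                 [ Ez, -By,  Bx, 0.0]])

# F_{alpha beta} = eta_{alpha gamma} eta_{beta delta} F^{gamma delta},
# i.e. the matrix product eta F eta
F_down = eta @ F_up @ eta

# Only the mixed time-space (electric field) components change sign
expected = np.array([[0.0,  Ex,  Ey,  Ez],
                     [-Ex, 0.0, -Bz,  By],
                     [-Ey,  Bz, 0.0, -Bx],
                     [-Ez, -By,  Bx, 0.0]])
assert np.allclose(F_down, expected)
```

Note that since the metric appears twice in $\displaystyle{ \eta F \eta }$, replacing $\displaystyle{ \eta }$ by $\displaystyle{ -\eta }$ (the opposite signature convention) leaves $\displaystyle{ F_{\alpha\beta} }$ unchanged.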

### General rank

For a tensor of order n, all indices are raised (consistently with the above) by:[1]

$\displaystyle{ g^{j_1i_1}g^{j_2i_2}\cdots g^{j_ni_n}A_{i_1i_2\cdots i_n} = A^{j_1j_2\cdots j_n} }$

and lowered by:

$\displaystyle{ g_{j_1i_1}g_{j_2i_2}\cdots g_{j_ni_n}A^{i_1i_2\cdots i_n} = A_{j_1j_2\cdots j_n} }$

and for a mixed tensor:

$\displaystyle{ g_{p_1i_1}g_{p_2i_2}\cdots g_{p_ni_n}g^{q_1j_1}g^{q_2j_2}\cdots g^{q_mj_m}{A^{i_1i_2\cdots i_n}}_{j_1j_2\cdots j_m} = {A_{p_1p_2\cdots p_n}}^{q_1q_2\cdots q_m} }$

We need not raise or lower all indices at once: it is perfectly fine to raise or lower a single index. Lowering an index of an $\displaystyle{ (r,s) }$ tensor gives an $\displaystyle{ (r-1,s+1) }$ tensor, while raising an index gives an $\displaystyle{ (r+1,s-1) }$ tensor (where $\displaystyle{ r,s }$ have suitable values; for example, we cannot lower an index of a $\displaystyle{ (0,2) }$ tensor).
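Lowering a single index of a mixed tensor can be sketched as follows (assuming NumPy; the $\displaystyle{ (2,1) }$ tensor $\displaystyle{ T }$ is filled with random illustrative values):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])      # g_{mu nu}, illustrative metric
g_inv = np.linalg.inv(g)                # g^{mu nu}

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4, 4))      # a (2,1) tensor T^{ab}_c

# Lower only the first index: a (2,1) tensor becomes a (1,2) tensor
T_lowered = np.einsum('pa,abc->pbc', g, T)

# Raising it again recovers the original (2,1) tensor
T_back = np.einsum('ap,pbc->abc', g_inv, T_lowered)
assert np.allclose(T_back, T)
```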