Tensor calculus

In mathematics, tensor calculus, tensor analysis, or Ricci calculus is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime).

Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita,[1] it was used by Albert Einstein to develop his general theory of relativity. Unlike the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold.

Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning.

Working with Elie Cartan, a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarized the role of tensor calculus:[2]

In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.

Syntax

Tensor notation makes use of upper and lower indices on objects to label a variable object as covariant (lower index), contravariant (upper index), or mixed covariant and contravariant (having both upper and lower indices). In fact, conventional math syntax makes use of covariant indices when dealing with Cartesian coordinate systems [math]\displaystyle{ (x_1, x_2, x_3) }[/math], frequently without realizing that this is a limited use of tensor syntax as covariant indexed components.

Tensor notation allows an upper index on an object that may be confused with normal power operations in conventional math syntax; for example, [math]\displaystyle{ x^2 }[/math] may denote either the square of [math]\displaystyle{ x }[/math] or its second contravariant component, depending on context.

Key concepts

Vector decomposition

Tensor notation allows a vector ([math]\displaystyle{ \vec{V} }[/math]) to be decomposed into an Einstein summation representing the tensor contraction of a basis vector ([math]\displaystyle{ \vec{Z}_i }[/math] or [math]\displaystyle{ \vec{Z}^i }[/math]) with a component vector ([math]\displaystyle{ V_i }[/math] or [math]\displaystyle{ V^i }[/math]).

[math]\displaystyle{ \vec{V} = V^i \vec{Z}_i = V_i \vec{Z}^i }[/math]
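
As a minimal numerical sketch (not part of the original article), the contraction implied by the repeated index can be carried out directly; here numpy.einsum performs the Einstein summation, and the basis and components are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical covariant basis vectors Z_i (rows) and contravariant components V^i.
Z = np.array([[1.0, 0.0],    # Z_1
              [1.0, 2.0]])   # Z_2: a non-orthogonal basis
V_contra = np.array([3.0, 0.5])

# V = V^i Z_i -- the repeated index i is summed (Einstein summation).
V = np.einsum('i,ij->j', V_contra, Z)
print(V)  # [3.5 1. ]
</syntaxhighlight>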

Every vector has two different representations: one with contravariant components ([math]\displaystyle{ V^i }[/math]) and a covariant basis ([math]\displaystyle{ \vec{Z}_i }[/math]), and the other with covariant components ([math]\displaystyle{ V_i }[/math]) and a contravariant basis ([math]\displaystyle{ \vec{Z}^i }[/math]). Tensor objects with all upper indices are referred to as contravariant, and tensor objects with all lower indices are referred to as covariant. The need to distinguish between contravariant and covariant arises from the fact that when we dot an arbitrary vector with a basis vector of a particular coordinate system, there are two ways of interpreting the dot product: either as the projection of the basis vector onto the arbitrary vector, or as the projection of the arbitrary vector onto the basis vector. The two views are entirely equivalent, but pair different component elements with different basis vectors:

[math]\displaystyle{ \vec{V} \cdot \vec{Z}_i = V_i = \vec{V}^T \vec{Z}_i = \vec{Z}_i^T \vec{V} = {\mathrm{proj}_{\vec{Z}^i}(\vec{V})} \cdot \vec{Z}_i = {\mathrm{proj}_{\vec{V}}(\vec{Z}^i)} \cdot \vec{V} }[/math]

[math]\displaystyle{ \vec{V} \cdot \vec{Z}^i = V^i = \vec{V}^T \vec{Z}^i = {\vec{Z}^i}^T \vec{V} = {\mathrm{proj}_{\vec{Z}_i}(\vec{V})} \cdot \vec{Z}^i = {\mathrm{proj}_{\vec{V}}(\vec{Z}_i)} \cdot \vec{V} }[/math]

For example, in physics one starts with a vector field and decomposes it with respect to the covariant basis, which yields the contravariant components. For orthonormal Cartesian coordinates, the covariant and contravariant bases are identical, since the basis set in this case is just the identity matrix. However, for a non-affine coordinate system such as polar or spherical coordinates, one must distinguish between decomposition with respect to the contravariant and the covariant basis set when generating the components of the coordinate system, as in the sketch below.
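
The following sketch illustrates the distinction for polar coordinates (the evaluation point and test vector are illustrative assumptions): the covariant basis comes from differentiating the position with respect to each coordinate, the contravariant basis is its dual, and either pairing of components and bases reconstructs the same vector:

<syntaxhighlight lang="python">
import numpy as np

# Polar coordinates (r, t), position x = (r cos t, r sin t); assumed point.
r, t = 2.0, np.pi / 6

# Covariant basis: Z_i = d(position)/d(coordinate i), stored as rows.
Z_cov = np.array([[np.cos(t),      np.sin(t)],      # Z_r
                  [-r * np.sin(t), r * np.cos(t)]]) # Z_t

# Contravariant (dual) basis satisfies Z^i . Z_j = delta^i_j,
# so its rows are the inverse transpose of the covariant rows.
Z_contra = np.linalg.inv(Z_cov).T

# Decompose an arbitrary vector V both ways.
V = np.array([1.0, 3.0])
V_contra = Z_contra @ V   # V^i = V . Z^i
V_cov    = Z_cov @ V      # V_i = V . Z_i

# Both expansions reproduce the same invariant vector V.
assert np.allclose(V_contra @ Z_cov, V)   # V = V^i Z_i
assert np.allclose(V_cov @ Z_contra, V)   # V = V_i Z^i
</syntaxhighlight>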

Covariant vector decomposition

[math]\displaystyle{ \vec{V} = V^i \vec{Z}_i }[/math]

variable | description | type
[math]\displaystyle{ \vec{V} }[/math] | vector | invariant
[math]\displaystyle{ V^i }[/math] | contravariant components (ordered set of scalars) | variant
[math]\displaystyle{ \vec{Z}_i }[/math] | covariant bases (ordered set of vectors) | variant

Contravariant vector decomposition

[math]\displaystyle{ \vec{V} = V_i \vec{Z}^i }[/math]

variable | description | type
[math]\displaystyle{ \vec{V} }[/math] | vector | invariant
[math]\displaystyle{ V_i }[/math] | covariant components (ordered set of scalars) | variant
[math]\displaystyle{ \vec{Z}^i }[/math] | contravariant bases (ordered set of covectors) | variant

Metric tensor

The metric tensor is a tensor with scalar components ([math]\displaystyle{ Z_{ij} }[/math] or [math]\displaystyle{ Z^{ij} }[/math]), represented as a matrix; it is used to raise or lower the index on another tensor object by an operation called contraction, allowing a covariant tensor to be converted to a contravariant tensor, and vice versa.

Example of lowering index using metric tensor:

[math]\displaystyle{ T_i=Z_{ij}T^j }[/math]

Example of raising index using metric tensor:

[math]\displaystyle{ T^i=Z^{ij}T_j }[/math]
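
A minimal numerical sketch of these two operations (the metric values are the polar-coordinate metric at an assumed radius r = 2, and the components are illustrative):

<syntaxhighlight lang="python">
import numpy as np

# Covariant metric of polar coordinates at r = 2: Z_ij = diag(1, r^2).
Z_lo = np.array([[1.0, 0.0],
                 [0.0, 4.0]])
Z_hi = np.linalg.inv(Z_lo)          # contravariant metric Z^ij

T_contra = np.array([3.0, 0.5])     # contravariant components T^j
T_cov = Z_lo @ T_contra             # lowering: T_i = Z_ij T^j -> [3.0, 2.0]
T_back = Z_hi @ T_cov               # raising:  T^i = Z^ij T_j
assert np.allclose(T_back, T_contra)
</syntaxhighlight>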

The metric tensor is defined as:

[math]\displaystyle{ Z_{ij} = \vec{Z}_i \cdot \vec{Z}_j }[/math]

[math]\displaystyle{ Z^{ij} = \vec{Z}^i \cdot \vec{Z}^j }[/math]

This means that if we take every pairing of the basis vectors in a set, dot them against each other, and arrange the results into a square matrix, we obtain a metric tensor. The caveat is which of the two vectors in the pairing is projected onto the other; that is the distinguishing property of the covariant metric tensor in comparison with the contravariant metric tensor.

Two flavors of metric tensors exist: (1) the contravariant metric tensor ([math]\displaystyle{ Z^{ij} }[/math]), and (2) the covariant metric tensor ([math]\displaystyle{ Z_{ij} }[/math]). These two flavors of metric tensor are related by the identity:

[math]\displaystyle{ Z_{ik}Z^{jk} = \delta^j_i }[/math]

For an orthonormal Cartesian coordinate system, the metric tensor is just the Kronecker delta [math]\displaystyle{ \delta_{ij} }[/math] or [math]\displaystyle{ \delta^{ij} }[/math], which is just a tensor equivalent of the identity matrix, and [math]\displaystyle{ \delta_{ij} = \delta^{ij} = \delta^i_j }[/math].
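
Continuing the polar-coordinate sketch from above (same assumed point), both metrics can be built directly from the basis vectors and the identity [math]\displaystyle{ Z_{ik}Z^{jk} = \delta^j_i }[/math] checked numerically:

<syntaxhighlight lang="python">
import numpy as np

# Same assumed point as the decomposition sketch: r = 2, t = pi/6.
r, t = 2.0, np.pi / 6
Z_cov = np.array([[np.cos(t),      np.sin(t)],      # Z_r
                  [-r * np.sin(t), r * np.cos(t)]]) # Z_t
Z_contra = np.linalg.inv(Z_cov).T

Z_lo = Z_cov @ Z_cov.T         # Z_ij = Z_i . Z_j   -> diag(1, r^2)
Z_hi = Z_contra @ Z_contra.T   # Z^ij = Z^i . Z^j   -> diag(1, 1/r^2)

# Verify Z_ik Z^jk = delta^j_i.
assert np.allclose(np.einsum('ik,jk->ij', Z_lo, Z_hi), np.eye(2))
</syntaxhighlight>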

Jacobian

In addition, a tensor can be readily converted from an unbarred ([math]\displaystyle{ x }[/math]) to a barred ([math]\displaystyle{ \bar{x} }[/math]) coordinate system, the two systems having different sets of basis vectors:

[math]\displaystyle{ f(x^1, x^2, \dots, x^n) = f\bigg(x^1(\bar{x}), x^2(\bar{x}), \dots, x^n(\bar{x})\bigg) = \bar{f}(\bar{x}^1, \bar{x}^2, \dots, \bar{x}^n)= \bar{f}\bigg(\bar{x}^1(x), \bar{x}^2(x), \dots, \bar{x}^n(x)\bigg) }[/math]

by use of Jacobian matrix relationships between the barred and unbarred coordinate systems ([math]\displaystyle{ \bar{J}=J^{-1} }[/math]). The Jacobian between the barred and unbarred systems is instrumental in defining the covariant and contravariant basis vectors: for these vectors to exist, they need to satisfy the following relationships relative to the barred and unbarred systems.

Contravariant vectors are required to obey the laws:

[math]\displaystyle{ v^i = \bar{v}^r\frac{\partial x^i(\bar{x})}{\partial \bar{x}^r} }[/math]

[math]\displaystyle{ \bar{v}^i = v^r\frac{\partial \bar{x}^i(x)}{\partial x^r} }[/math]

Covariant vectors are required to obey the laws:

[math]\displaystyle{ v_i = \bar{v}_r\frac{\partial \bar{x}^r(x)}{\partial x^i} }[/math]

[math]\displaystyle{ \bar{v}_i = v_r\frac{\partial x^r(\bar{x})}{\partial \bar{x}^i} }[/math]

There are two flavors of Jacobian matrix:

1. The [math]\displaystyle{ J }[/math] matrix, representing the change from unbarred to barred coordinates. To find [math]\displaystyle{ J }[/math], we take the "barred gradient", i.e. the partial derivatives with respect to [math]\displaystyle{ \bar{x}^i }[/math]:

[math]\displaystyle{ J = \bar{\nabla} f(x(\bar{x})) }[/math]

2. The [math]\displaystyle{ \bar{J} }[/math] matrix, representing the change from barred to unbarred coordinates. To find [math]\displaystyle{ \bar{J} }[/math], we take the "unbarred gradient", i.e. the partial derivatives with respect to [math]\displaystyle{ x^i }[/math]:

[math]\displaystyle{ \bar{J} = \nabla \bar{f}(\bar{x}(x)) }[/math]
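
A sketch of both flavors for a concrete pair of systems (taking the unbarred coordinates to be Cartesian and the barred coordinates to be polar is an illustrative choice, not from the article), including a numerical check of [math]\displaystyle{ \bar{J}=J^{-1} }[/math] and of the contravariant transformation law:

<syntaxhighlight lang="python">
import numpy as np

# Unbarred = Cartesian (x, y), barred = polar (r, t); assumed evaluation point.
r, t = 2.0, np.pi / 6
x, y = r * np.cos(t), r * np.sin(t)

# J: partial derivatives of unbarred coordinates with respect to barred ones.
J = np.array([[np.cos(t), -r * np.sin(t)],   # dx/dr, dx/dt
              [np.sin(t),  r * np.cos(t)]])  # dy/dr, dy/dt

# Jbar: partial derivatives of barred coordinates with respect to unbarred ones.
Jbar = np.array([[x / r,     y / r],         # dr/dx, dr/dy
                 [-y / r**2, x / r**2]])     # dt/dx, dt/dy

assert np.allclose(Jbar, np.linalg.inv(J))   # Jbar = J^{-1}

# Contravariant law: vbar^i = v^r d(xbar^i)/d(x^r), i.e. vbar = Jbar @ v.
v = np.array([1.0, 3.0])    # components in the unbarred (Cartesian) frame
vbar = Jbar @ v             # components in the barred (polar) frame
assert np.allclose(J @ vbar, v)              # and back again
</syntaxhighlight>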

Gradient vector

Tensor calculus provides a generalization to the gradient vector formula from standard calculus that works in all coordinate systems:

[math]\displaystyle{ \nabla F = \nabla_i F \vec{Z}^i }[/math]

Where:

[math]\displaystyle{ \nabla_i F = \frac{\partial F}{\partial Z^i} }[/math]

In contrast, in standard calculus the gradient vector formula depends on the coordinate system in use: the Cartesian gradient formula differs from the polar formula, the spherical formula, and so on, with each coordinate system having its own specific formula. Tensor calculus has a single gradient formula that is equivalent for all coordinate systems; this is made possible by the metric tensor that tensor calculus makes use of.
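
As a closing sketch (the test function [math]\displaystyle{ F = x^2 + y^2 = r^2 }[/math] and the evaluation point are assumptions for illustration), the single tensor formula evaluated in polar coordinates reproduces the familiar Cartesian gradient:

<syntaxhighlight lang="python">
import numpy as np

# Polar coordinates at an assumed point; F = x^2 + y^2 = r^2.
r, t = 2.0, np.pi / 6
Z_cov = np.array([[np.cos(t),      np.sin(t)],
                  [-r * np.sin(t), r * np.cos(t)]])
Z_contra = np.linalg.inv(Z_cov).T    # contravariant basis Z^i

grad_cov = np.array([2.0 * r, 0.0])  # covariant components: (dF/dr, dF/dt)

# nabla F = (nabla_i F) Z^i -- the same expression in every coordinate system.
grad = grad_cov @ Z_contra

x, y = r * np.cos(t), r * np.sin(t)
assert np.allclose(grad, [2 * x, 2 * y])  # agrees with the Cartesian gradient
</syntaxhighlight>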
