# Gradient theorem

The **gradient theorem**, also known as the **fundamental theorem of calculus for line integrals**, says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. The theorem is a generalization of the second fundamental theorem of calculus to any curve in a plane or space (generally *n*-dimensional) rather than just the real line.

If *φ* : *U* ⊆ **R**^{n} → **R** is a differentiable function and *γ* is any continuous curve in *U* which starts at a point **p** and ends at a point **q**, then

[math]\displaystyle{ \int_{\gamma} \nabla\varphi(\mathbf{r})\cdot \mathrm{d}\mathbf{r} = \varphi\left(\mathbf{q}\right) - \varphi\left(\mathbf{p}\right) }[/math]

where ∇*φ* denotes the gradient vector field of *φ*.

The gradient theorem implies that line integrals through gradient fields are path-independent. In physics this theorem is one of the ways of defining a *conservative* force: if *φ* is taken to be a potential, then ∇*φ* is a conservative field. Work done by conservative forces does not depend on the path followed by the object, but only on its end points, as the above equation shows.
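Path independence can be illustrated numerically. The sketch below (our own, not part of the standard presentation; the field *φ*(*x*, *y*) = *x*²*y* and both paths are chosen freely) integrates ∇*φ* along two different curves from **p** to **q** and compares each result with *φ*(**q**) − *φ*(**p**):

```python
# Illustrative check of the gradient theorem and path independence.
# phi(x, y) = x**2 * y is an arbitrary choice of scalar field.
import numpy as np

def phi(x, y):
    return x**2 * y

def grad_phi(x, y):
    return np.array([2 * x * y, x**2])

def line_integral(path, n=20000):
    """Approximate the line integral of grad_phi along path(t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])           # (n, 2) sample points
    vecs = np.array([grad_phi(*pt) for pt in pts])   # field at each sample
    dr = np.diff(pts, axis=0)                        # segment displacements
    mid = 0.5 * (vecs[:-1] + vecs[1:])               # trapezoid-style average
    return float(np.sum(np.einsum('ij,ij->i', mid, dr)))

p, q = np.array([0.0, 0.0]), np.array([1.0, 2.0])
straight = lambda t: p + t * (q - p)                           # straight segment
wiggly = lambda t: p + t * (q - p) + 0.5 * np.array([np.sin(np.pi * t), 0.0])

I1, I2 = line_integral(straight), line_integral(wiggly)
exact = phi(*q) - phi(*p)                                      # = 2.0
assert abs(I1 - exact) < 1e-4 and abs(I2 - exact) < 1e-4
```

Both numerical integrals agree with *φ*(**q**) − *φ*(**p**) even though the two paths differ.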

The gradient theorem also has an interesting converse: any path-independent vector field can be expressed as the gradient of a scalar field. Just like the gradient theorem itself, this converse has many striking consequences and applications in both pure and applied mathematics.

## Proof

If *φ* is a differentiable function from some open subset *U* ⊆ **R**^{n} to **R**, and **r** is a differentiable function from some closed interval [*a*, *b*] to *U* (note that **r** is required to be differentiable at the interval endpoints *a* and *b* as well; to make sense of this, **r** is taken to be defined on an open interval containing [*a*, *b*]), then by the multivariate chain rule, the composite function *φ* ∘ **r** is differentiable on [*a*, *b*]:

[math]\displaystyle{ \frac{\mathrm{d}}{\mathrm{d}t}(\varphi \circ \mathbf{r})(t)=\nabla \varphi(\mathbf{r}(t)) \cdot \mathbf{r}'(t) }[/math]

for all t in [*a*, *b*]. Here the ⋅ denotes the usual inner product.
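This chain-rule identity, which drives the whole proof, can be checked numerically for a concrete pair of functions (our own choices below: *φ*(*x*, *y*) = *x* e^{*y*} and **r**(*t*) = (cos *t*, *t*²)) by comparing a central finite difference of *φ* ∘ **r** against ∇*φ*(**r**(*t*)) · **r**′(*t*):

```python
# Numerical check of d/dt (phi ∘ r)(t) = grad_phi(r(t)) · r'(t)
# for an arbitrary smooth phi and curve r chosen for illustration.
import numpy as np

def phi(v):                       # scalar field on R^2
    return v[0] * np.exp(v[1])

def grad_phi(v):                  # its gradient, computed by hand
    return np.array([np.exp(v[1]), v[0] * np.exp(v[1])])

def r(t):                         # a differentiable curve in R^2
    return np.array([np.cos(t), t**2])

def r_prime(t):
    return np.array([-np.sin(t), 2 * t])

t0, h = 0.7, 1e-6
lhs = (phi(r(t0 + h)) - phi(r(t0 - h))) / (2 * h)   # central difference
rhs = float(grad_phi(r(t0)) @ r_prime(t0))          # chain-rule prediction
assert abs(lhs - rhs) < 1e-6
```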

Now suppose the domain U of φ contains the differentiable curve γ with endpoints **p** and **q**. (This is oriented in the direction from **p** to **q**). If **r** parametrizes γ for t in [*a*, *b*] (i.e., **r** represents γ as a function of t), then

[math]\displaystyle{ \begin{align} \int_{\gamma} \nabla\varphi(\mathbf{r}) \cdot \mathrm{d}\mathbf{r} &=\int_a^b \nabla\varphi(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\mathrm{d}t \\ &=\int_a^b \frac{d}{dt}\varphi(\mathbf{r}(t))\mathrm{d}t =\varphi(\mathbf{r}(b))-\varphi(\mathbf{r}(a))=\varphi\left(\mathbf{q}\right)-\varphi\left(\mathbf{p}\right) , \end{align} }[/math]

where the definition of a line integral is used in the first equality, the above equation is used in the second equality, and the second fundamental theorem of calculus is used in the third equality.^{[1]}

Although the gradient theorem has so far been proved only for differentiable (hence smooth) curves, it also holds for piecewise-smooth curves: such a curve is obtained by joining finitely many differentiable curves, so the result follows by applying the argument above to each differentiable component and summing.^{[2]}

## Examples

### Example 1

Suppose *γ* ⊂ **R**^{2} is the circular arc oriented counterclockwise from (5, 0) to (−4, 3). Using the definition of a line integral,

[math]\displaystyle{ \begin{align} \int_{\gamma} y\, \mathrm{d}x + x\, \mathrm{d}y &= \int_0^{\pi - \tan^{-1}\!\left(\frac{3}{4}\right)} ((5\sin t)(-5 \sin t) + (5 \cos t)(5 \cos t))\, \mathrm{d}t \\ &= \int_0^{\pi - \tan^{-1}\!\left(\frac{3}{4}\right)} 25 \left(-\sin^2 t + \cos^2 t\right) \mathrm{d}t \\ &= \int_0^{\pi - \tan^{-1}\!\left(\frac{3}{4}\right)} 25 \cos(2t) \mathrm{d}t \ =\ \left.\tfrac{25}{2}\sin(2t)\right|_0^{\pi - \tan^{-1}\!\left(\tfrac{3}{4}\right)} \\[.5em] &= \tfrac{25}{2}\sin\left(2\pi - 2\tan^{-1}\!\!\left(\tfrac{3}{4}\right)\right) \\[.5em] &= -\tfrac{25}{2}\sin\left(2\tan^{-1}\!\!\left(\tfrac{3}{4}\right)\right) \ =\ -\frac{25(3/4)}{(3/4)^2 + 1} = -12. \end{align} }[/math]

This result can be obtained much more simply by noticing that the function [math]\displaystyle{ f(x,y)=xy }[/math] has gradient [math]\displaystyle{ \nabla f(x,y)=(y,x) }[/math], so by the gradient theorem:

[math]\displaystyle{ \int_{\gamma} y \,\mathrm{d}x+x \,\mathrm{d}y=\int_{\gamma}\nabla(xy) \cdot (\mathrm{d}x,\mathrm{d}y)\ =\ xy\,|_{(5,0)}^{(-4,3)}=-4 \cdot 3-5 \cdot 0=-12 . }[/math]
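The direct computation can also be reproduced numerically (a sketch we added; the parametrization **r**(*t*) = (5 cos *t*, 5 sin *t*) is the one used in the integral above):

```python
# Numerical evaluation of ∫ y dx + x dy along the arc of Example 1.
import numpy as np

t_end = np.pi - np.arctan(3.0 / 4.0)   # parameter of (-4, 3) on the circle of radius 5
t = np.linspace(0.0, t_end, 200001)
x, y = 5 * np.cos(t), 5 * np.sin(t)
integrand = y * (-5 * np.sin(t)) + x * (5 * np.cos(t))       # y x'(t) + x y'(t) = 25 cos 2t
integral = float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(t)))
assert abs(integral - (-12.0)) < 1e-6

# Gradient-theorem shortcut: with f(x, y) = x*y, the answer is f(-4, 3) - f(5, 0).
assert (-4) * 3 - 5 * 0 == -12
```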

### Example 2

For a more abstract example, suppose *γ* ⊂ **R**^{n} has endpoints **p**, **q**, with orientation from **p** to **q**. For **u** in **R**^{n}, let |**u**| denote the Euclidean norm of **u**. If *α* ≥ 1 is a real number, then

[math]\displaystyle{ \begin{align} \int_{\gamma} |\mathbf{x}|^{\alpha - 1} \mathbf{x} \cdot \mathrm{d}\mathbf{x} &= \frac{1}{\alpha + 1} \int_{\gamma} (\alpha + 1) |\mathbf{x}|^{(\alpha + 1) - 2} \mathbf{x} \cdot \mathrm{d}\mathbf{x} \\ &= \frac{1}{\alpha + 1} \int_{\gamma} \nabla |\mathbf{x}|^{\alpha + 1} \cdot \mathrm{d}\mathbf{x}= \frac{|\mathbf{q}|^{\alpha + 1} - |\mathbf{p}|^{\alpha + 1}}{\alpha + 1} \end{align} }[/math]

Here the final equality follows by the gradient theorem, since the function *f*(**x**) = |**x**|^{α+1} is differentiable on **R**^{n} if *α* ≥ 1.

If *α* < 1 then this equality will still hold in most cases, but caution must be taken if *γ* passes through or encloses the origin, because the integrand vector field |**x**|^{α − 1}**x** will fail to be defined there. However, the case *α* = −1 is somewhat different; in this case, the integrand becomes |**x**|^{−2}**x** = ∇(log |**x**|), so that the final equality becomes log |**q**| − log |**p**|.

Note that if *n* = 1, then this example is simply a slight variant of the familiar power rule from single-variable calculus.
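The closed-form answer of this example can likewise be sanity-checked numerically; in the sketch below (our own choices: *α* = 2, endpoints **p** = (1, 0, 0) and **q** = (1, 2, 2), and a straight path that avoids the origin), the computed line integral is compared with (|**q**|^{α+1} − |**p**|^{α+1})/(α + 1):

```python
# Numerical check of Example 2 in R^3 for a specific alpha and path.
import numpy as np

alpha = 2.0                       # any alpha >= 1 works; this value is our pick
p, q = np.array([1.0, 0.0, 0.0]), np.array([1.0, 2.0, 2.0])

t = np.linspace(0.0, 1.0, 100001)
pts = p[None, :] + t[:, None] * (q - p)[None, :]   # straight path p -> q (avoids the origin)
norms = np.linalg.norm(pts, axis=1)
vals = norms[:, None] ** (alpha - 1) * pts         # the field |x|^(alpha-1) x
dr = np.diff(pts, axis=0)
mid = 0.5 * (vals[:-1] + vals[1:])
integral = float(np.sum(np.einsum('ij,ij->i', mid, dr)))

expected = (np.linalg.norm(q) ** (alpha + 1)
            - np.linalg.norm(p) ** (alpha + 1)) / (alpha + 1)   # = (27 - 1)/3 = 26/3
assert abs(integral - expected) < 1e-4
```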

### Example 3

Suppose there are n point charges arranged in three-dimensional space, and the i-th point charge has charge *Q*_{i} and is located at position **p**_{i} in **R**^{3}. We would like to calculate the work done on a particle of charge q as it travels from a point **a** to a point **b** in **R**^{3}. Using Coulomb's law, we can easily determine that the force on the particle at position **r** will be

[math]\displaystyle{ \mathbf{F}(\mathbf{r}) = kq\sum_{i=1}^n \frac{Q_i(\mathbf{r} - \mathbf{p}_i)}{\left|\mathbf{r} - \mathbf{p}_i\right|^3} }[/math]

Here |**u**| denotes the Euclidean norm of the vector **u** in **R**^{3}, and *k* = 1/(4*πε*_{0}), where *ε*_{0} is the vacuum permittivity.

Let *γ* ⊂ **R**^{3} − {**p**_{1}, ..., **p**_{n}} be an arbitrary differentiable curve from **a** to **b**. Then the work done on the particle is

[math]\displaystyle{ W = \int_{\gamma} \mathbf{F}(\mathbf{r}) \cdot \mathrm{d}\mathbf{r} = \int_{\gamma} \left( kq\sum_{i=1}^n \frac{Q_i(\mathbf{r} - \mathbf{p}_i)}{\left|\mathbf{r} - \mathbf{p}_i\right|^3} \right) \cdot \mathrm{d}\mathbf{r} = kq \sum_{i=1}^n \left( Q_i \int_\gamma \frac{\mathbf{r} - \mathbf{p}_i}{\left|\mathbf{r} - \mathbf{p}_i\right|^3} \cdot \mathrm{d}\mathbf{r} \right) }[/math]

Now for each i, direct computation shows that

[math]\displaystyle{ \frac{\mathbf{r} - \mathbf{p}_i}{\left|\mathbf{r} - \mathbf{p}_i\right|^3} = -\nabla \frac{1}{\left|\mathbf{r} - \mathbf{p}_i\right|}. }[/math]

Thus, continuing from above and using the gradient theorem,

[math]\displaystyle{ W = -kq \sum_{i=1}^n \left( Q_i \int_{\gamma} \nabla \frac{1}{\left|\mathbf{r} - \mathbf{p}_i\right|} \cdot \mathrm{d}\mathbf{r} \right) = kq \sum_{i=1}^n Q_i \left( \frac{1}{\left|\mathbf{a} - \mathbf{p}_i\right|} - \frac{1}{\left|\mathbf{b} - \mathbf{p}_i\right|} \right) }[/math]

We are finished. Of course, we could have easily completed this calculation using the powerful language of electrostatic potential or electrostatic potential energy (with the familiar formulas *W* = −Δ*U* = −*q*Δ*V*). However, we have not yet *defined* potential or potential energy, because the *converse* of the gradient theorem is required to prove that these are well-defined, differentiable functions and that these formulas hold (see below). Thus, we have solved this problem using only Coulomb's Law, the definition of work, and the gradient theorem.
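The work formula just derived can be tested numerically. In the sketch below (our own test data: two arbitrary charges, a straight path from **a** to **b**, and units chosen so that *kq* = 1), the work integral of the Coulomb force is compared against the closed form *W* = *kq* Σ_{i} *Q*_{i}(1/|**a** − **p**_{i}| − 1/|**b** − **p**_{i}|):

```python
# Numerical check of the electrostatic work formula of Example 3,
# with made-up charges and k*q set to 1 for simplicity.
import numpy as np

charges = [(2.0, np.array([0.0, 0.0, 0.0])),      # (Q_i, p_i), chosen freely
           (-1.0, np.array([3.0, 0.0, 0.0]))]
a, b = np.array([1.0, 1.0, 1.0]), np.array([1.0, 2.0, 3.0])

def work_along(path_pts):
    """Trapezoid approximation of ∫ F · dr along the sampled path."""
    vals = np.zeros_like(path_pts)
    for Q, p in charges:                           # Coulomb force, k*q = 1
        d = path_pts - p
        vals += Q * d / np.linalg.norm(d, axis=1)[:, None] ** 3
    dr = np.diff(path_pts, axis=0)
    return float(np.sum(np.einsum('ij,ij->i', 0.5 * (vals[:-1] + vals[1:]), dr)))

t = np.linspace(0.0, 1.0, 100001)
pts = a[None, :] + t[:, None] * (b - a)[None, :]   # straight path a -> b (avoids both charges)
W = work_along(pts)

W_exact = sum(Q * (1 / np.linalg.norm(a - p) - 1 / np.linalg.norm(b - p))
              for Q, p in charges)
assert abs(W - W_exact) < 1e-5
```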

## Converse of the gradient theorem

The gradient theorem states that if the vector field **F** is the gradient of some scalar-valued function (i.e., if **F** is conservative), then **F** is a path-independent vector field (i.e., the integral of **F** over some piecewise-differentiable curve is dependent only on end points). This theorem has a powerful converse:

**Theorem** — If **F** is a path-independent vector field, then **F** is the gradient of some scalar-valued function.^{[3]}

It is straightforward to show that a vector field is path-independent if and only if the integral of the vector field over every closed loop in its domain is zero. Thus the converse can alternatively be stated as follows: If the integral of **F** over every closed loop in the domain of **F** is zero, then **F** is the gradient of some scalar-valued function.
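The closed-loop criterion can be made concrete with two small example fields of our own choosing: the gradient field ∇(*xy*) = (*y*, *x*) integrates to (numerically) zero around the unit circle, while the rotational field (−*y*, *x*), which is not a gradient, does not:

```python
# Closed-loop integrals of a gradient field vs. a non-gradient field.
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 20001)
x, y = np.cos(t), np.sin(t)                       # unit circle, a closed loop
dx, dy = np.diff(x), np.diff(y)
xm, ym = 0.5 * (x[:-1] + x[1:]), 0.5 * (y[:-1] + y[1:])   # segment midpoints

# Gradient field grad(x*y) = (y, x): the loop integral should vanish.
grad_loop = float(np.sum(ym * dx + xm * dy))
# Rotational field (-y, x): the loop integral is twice the enclosed area, 2*pi.
rot_loop = float(np.sum(-ym * dx + xm * dy))

assert abs(grad_loop) < 1e-6
assert abs(rot_loop - 2 * np.pi) < 1e-4
```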

### Proof of the converse

Suppose U is an open, path-connected subset of **R**^{n}, and **F** : *U* → **R**^{n} is a continuous and path-independent vector field. Fix some element **a** of U, and define *f* : *U* → **R** by[math]\displaystyle{ f(\mathbf{x}) := \int_{\gamma[\mathbf{a}, \mathbf{x}]} \mathbf{F}(\mathbf{u}) \cdot \mathrm{d}\mathbf{u} }[/math]Here *γ*[**a**, **x**] is any (differentiable) curve in U originating at **a** and terminating at **x**. We know that *f* is well-defined because **F** is path-independent.

Let **v** be any nonzero vector in **R**^{n}. By the definition of the directional derivative,[math]\displaystyle{ \begin{align}
\frac{\partial f(\mathbf{x})}{\partial \mathbf{v}} &= \lim_{t \to 0} \frac{f(\mathbf{x} + t\mathbf{v}) - f(\mathbf{x})}{t} \\
&= \lim_{t \to 0} \frac{\int_{\gamma[\mathbf{a}, \mathbf{x} + t\mathbf{v}]} \mathbf{F}(\mathbf{u}) \cdot \mathrm{d}\mathbf{u} - \int_{\gamma[\mathbf{a}, \mathbf{x}]} \mathbf{F}(\mathbf{u}) \cdot d\mathbf{u}}{t} \\
&= \lim_{t \to 0} \frac{1}{t} \int_{\gamma[\mathbf{x}, \mathbf{x} + t\mathbf{v}]} \mathbf{F}(\mathbf{u}) \cdot \mathrm{d}\mathbf{u}
\end{align} }[/math]To calculate the integral within the final limit, we must parametrize *γ*[**x**, **x** + *t***v**]. Since **F** is path-independent, *U* is open, and *t* approaches zero, we may take this path to be the straight line segment, parametrized as **u**(*s*) = **x** + *s***v** for 0 ≤ *s* ≤ *t*. Now, since **u**′(*s*) = **v**, the limit becomes[math]\displaystyle{ \lim_{t \to 0} \frac{1}{t} \int_0^t \mathbf{F}(\mathbf{u}(s)) \cdot \mathbf{u}'(s)\, \mathrm{d}s = \frac{\mathrm{d}}{\mathrm{d}t} \int_0^t \mathbf{F}(\mathbf{x} + s\mathbf{v}) \cdot \mathbf{v}\, \mathrm{d}s \bigg|_{t=0} = \mathbf{F}(\mathbf{x}) \cdot \mathbf{v} }[/math]where the first equality follows from the definition of the derivative, using the fact that the integral vanishes at *t* = 0, and the second follows from the first fundamental theorem of calculus. Thus we have a formula for the directional derivative of *f* with respect to an arbitrary nonzero vector **v**:[math]\displaystyle{ \frac{\partial f(\mathbf{x})}{\partial \mathbf{v}} = \partial _ \mathbf{v} f(\mathbf{x}) = D_{\mathbf{v}}f(\mathbf{x}) = \mathbf{F}(\mathbf{x}) \cdot \mathbf{v} , }[/math]where the first two equalities are just different notations for the directional derivative. By the definition of the gradient of a scalar function *f*, this means [math]\displaystyle{ \nabla f(\mathbf{x}) = \mathbf{F}(\mathbf{x}) }[/math]. Thus we have found a scalar-valued function *f* whose gradient is the path-independent vector field **F** (i.e., **F** is a conservative vector field), as desired.^{[3]}
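The construction in this proof can be imitated numerically. In the sketch below (our own example: the conservative field **F**(*x*, *y*) = (2*xy*, *x*²) with reference point **a** = (0, 0)), the potential *f* is built as a line integral from **a** and its finite-difference gradient is compared with **F**:

```python
# Building the potential f(x) = ∫_{a -> x} F · du from the proof of the
# converse, then checking grad f = F by central finite differences.
import numpy as np

def F(v):                                # a conservative field, chosen by us
    return np.array([2 * v[0] * v[1], v[0] ** 2])

a = np.array([0.0, 0.0])                 # fixed reference point

def f(x, n=20001):
    """Potential from the proof: line integral of F from a to x (straight path)."""
    t = np.linspace(0.0, 1.0, n)
    pts = a[None, :] + t[:, None] * (x - a)[None, :]
    vals = np.array([F(p) for p in pts])
    dr = np.diff(pts, axis=0)
    return float(np.sum(np.einsum('ij,ij->i', 0.5 * (vals[:-1] + vals[1:]), dr)))

x0, h = np.array([1.5, -0.5]), 1e-5
grad = np.array([(f(x0 + h * e) - f(x0 - h * e)) / (2 * h)   # finite-difference gradient
                 for e in np.eye(2)])
assert np.allclose(grad, F(x0), atol=1e-5)
```

A straight path suffices here because **F** is path-independent, exactly as the proof argues.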

### Example of the converse principle

To illustrate the power of this converse principle, we cite an example that has significant physical consequences. In classical electromagnetism, the electric force is a path-independent force; i.e. the work done on a particle that has returned to its original position within an electric field is zero (assuming that no changing magnetic fields are present).

Therefore, the above theorem implies that the electric force field **F**_{e} : *S* → **R**^{3} is conservative (here *S* is some open, path-connected subset of **R**^{3} that contains a charge distribution). Following the ideas of the above proof, we can set some reference point **a** in *S*, and define a function *U*_{e} : *S* → **R** by

[math]\displaystyle{ U_e(\mathbf{r}) := -\int_{\gamma[\mathbf{a},\mathbf{r}]} \mathbf{F}_e(\mathbf{u}) \cdot \mathrm{d}\mathbf{u} }[/math]

Using the above proof, we know *U*_{e} is well-defined and differentiable, and **F**_{e} = −∇*U*_{e} (from this formula we can use the gradient theorem to easily derive the well-known formula for calculating work done by conservative forces: *W* = −Δ*U*). This function *U*_{e} is often referred to as the electrostatic potential energy of the system of charges in *S* (with reference to the zero of potential **a**). In many cases, the domain *S* is assumed to be unbounded and the reference point **a** is taken to be "infinity", which can be made rigorous using limiting techniques. This function *U*_{e} is an indispensable tool in the analysis of many physical systems.

## Generalizations

Many of the critical theorems of vector calculus generalize elegantly to statements about the integration of differential forms on manifolds. In the language of differential forms and exterior derivatives, the gradient theorem states that

[math]\displaystyle{ \int_{\partial \gamma} \phi = \int_{\gamma} \mathrm{d}\phi }[/math]

for any 0-form, *ϕ*, defined on some differentiable curve *γ* ⊂ **R**^{n} (here the integral of *ϕ* over the boundary of *γ* is understood to be the evaluation of *ϕ* at the endpoints of *γ*).

Notice the striking similarity between this statement and the generalized Stokes’ theorem, which says that the integral of any compactly supported differential form ω over the boundary of some orientable manifold Ω is equal to the integral of its exterior derivative d*ω* over the whole of Ω, i.e.,

[math]\displaystyle{ \int_{\partial \Omega}\omega=\int_{\Omega}\mathrm{d}\omega }[/math]

This powerful statement is a generalization of the gradient theorem from 1-forms defined on one-dimensional manifolds to differential forms defined on manifolds of arbitrary dimension.

The converse statement of the gradient theorem also has a powerful generalization in terms of differential forms on manifolds. In particular, suppose ω is a form defined on a contractible domain, and the integral of ω over any closed manifold is zero. Then there exists a form ψ such that *ω* = d*ψ*. Thus, on a contractible domain, every closed form is exact. This result is summarized by the Poincaré lemma.

## See also

- State function
- Scalar potential
- Jordan curve theorem
- Differential of a function
- Classical mechanics
- Line integral § Path independence
- Conservative vector field § Path independence

## References

- ↑ Williamson, Richard; Trotter, Hale (2004). *Multivariable Mathematics* (4th ed.). Pearson Education, Inc. p. 374.
- ↑ Stewart, James (2015). "16.3 The Fundamental Theorem for Line Integrals". *Calculus* (8th ed.). Cengage Learning. pp. 1127–1128. ISBN 978-1-285-74062-1.
- ↑ ^{3.0} ^{3.1} Williamson, Richard; Trotter, Hale (2004). *Multivariable Mathematics* (4th ed.). Pearson Education, Inc. p. 410.

Original source: https://en.wikipedia.org/wiki/Gradient theorem.