Absolute convergence


In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series [math]\displaystyle{ \textstyle\sum_{n=0}^\infty a_n }[/math] is said to converge absolutely if [math]\displaystyle{ \textstyle\sum_{n=0}^\infty \left|a_n\right| = L }[/math] for some real number [math]\displaystyle{ \textstyle L. }[/math] Similarly, an improper integral of a function, [math]\displaystyle{ \textstyle\int_0^\infty f(x)\,dx, }[/math] is said to converge absolutely if the integral of the absolute value of the integrand is finite—that is, if [math]\displaystyle{ \textstyle\int_0^\infty |f(x)|dx = L. }[/math]

Absolute convergence is important for the study of infinite series because its definition is strong enough to guarantee properties of finite sums that not all convergent series possess; a convergent series that is not absolutely convergent is called conditionally convergent, while absolutely convergent series behave "nicely". For instance, rearrangements do not change the value of the sum. This is not true for conditionally convergent series: The alternating harmonic series [math]\displaystyle{ 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\cdots }[/math] converges to [math]\displaystyle{ \ln 2, }[/math] while its rearrangement [math]\displaystyle{ 1+\frac{1}{3}-\frac{1}{2}+\frac{1}{5}+\frac{1}{7}-\frac{1}{4}+\cdots }[/math] (in which the repeating pattern of signs is two positive terms followed by one negative term) converges to [math]\displaystyle{ \frac{3}{2}\ln 2. }[/math]
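The two values above can be checked numerically. The following Python sketch (the number of terms and the tolerances are arbitrary choices) sums the alternating harmonic series in its natural order and in the two-positive-one-negative rearrangement:

```python
import math

# Alternating harmonic series in its natural order, summed in
# (positive, negative) pairs: (1 - 1/2) + (1/3 - 1/4) + ...
N = 100_000
natural = sum(1 / (2 * n - 1) - 1 / (2 * n) for n in range(1, N + 1))

# Rearrangement with two positive terms followed by one negative:
# (1 + 1/3 - 1/2) + (1/5 + 1/7 - 1/4) + ...
rearranged = sum(
    1 / (4 * n - 3) + 1 / (4 * n - 1) - 1 / (2 * n) for n in range(1, N + 1)
)

print(natural)     # close to ln 2 ≈ 0.6931
print(rearranged)  # close to (3/2) ln 2 ≈ 1.0397
```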


In finite sums, addition is commutative and associative, so the order in which terms are added does not matter: 1 + 2 + 3 is the same as 3 + 2 + 1. However, this is not true when adding infinitely many numbers, and wrongly assuming that it is can lead to apparent paradoxes. One classic example is the alternating sum

[math]\displaystyle{ S = 1 - 1 + 1 - 1 + 1 - 1 + \cdots }[/math]

whose terms alternate between +1 and -1. What is the value of S? One way to evaluate S is to group the first and second term, the third and fourth, and so on:

[math]\displaystyle{ S_1 = (1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0 + 0 + 0 + \cdots = 0 }[/math]

But another way to evaluate S is to leave the first term alone and group the second and third term, then the fourth and fifth term, and so on:

[math]\displaystyle{ S_2 = 1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + 0 + \cdots = 1 }[/math]

This leads to an apparent paradox: does [math]\displaystyle{ S = 0 }[/math] or [math]\displaystyle{ S = 1 }[/math]?

The answer is that the series [math]\displaystyle{ 1 - 1 + 1 - 1 + \cdots }[/math] does not converge at all, so [math]\displaystyle{ S }[/math] has no value to find in the first place. Because the series diverges, grouping its terms in different ways produces different apparent sums, and [math]\displaystyle{ S_1 }[/math] and [math]\displaystyle{ S_2 }[/math] need not be equal. A series that is absolutely convergent does not have this problem: grouping or rearranging its terms does not change the value of the sum.
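The oscillation can be seen directly from the partial sums; a small Python sketch over the first ten terms:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... oscillate between
# 1 and 0, so the series has no limit.
terms = [(-1) ** n for n in range(10)]  # +1, -1, +1, -1, ...
partials = []
s = 0
for t in terms:
    s += t
    partials.append(s)
print(partials)  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# Grouping in pairs, (1 - 1) + (1 - 1) + ..., gives 0,
s1 = sum(terms[i] + terms[i + 1] for i in range(0, 10, 2))
# while 1 + (-1 + 1) + (-1 + 1) + ... gives 1.
s2 = terms[0] + sum(terms[i] + terms[i + 1] for i in range(1, 9, 2))
print(s1, s2)  # 0 1
```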

Definition for real and complex numbers

A sum of real numbers or complex numbers [math]\displaystyle{ \sum_{n=0}^{\infty} a_n }[/math] is absolutely convergent if the sum of the absolute values of the terms [math]\displaystyle{ \sum_{n=0}^{\infty} |a_n| }[/math] converges.

Sums of more general elements

The same definition can be used for series [math]\displaystyle{ \sum_{n=0}^{\infty} a_n }[/math] whose terms [math]\displaystyle{ a_n }[/math] are not numbers but rather elements of an arbitrary abelian topological group. In that case, instead of using the absolute value, the definition requires the group to have a norm, which is a non-negative real-valued function [math]\displaystyle{ \|\cdot\|: G \to \R_+ }[/math] on an abelian group [math]\displaystyle{ G }[/math] (written additively, with identity element 0) such that:

  1. The norm of the identity element of [math]\displaystyle{ G }[/math] is zero: [math]\displaystyle{ \|0\| = 0. }[/math]
  2. For every [math]\displaystyle{ x \in G, }[/math] [math]\displaystyle{ \|x\| = 0 }[/math] implies [math]\displaystyle{ x = 0. }[/math]
  3. For every [math]\displaystyle{ x \in G, }[/math] [math]\displaystyle{ \|-x\| = \|x\|. }[/math]
  4. For every [math]\displaystyle{ x, y \in G, }[/math] [math]\displaystyle{ \|x+y\| \leq \|x\| + \|y\|. }[/math]

In this case, the function [math]\displaystyle{ d(x,y) = \|x-y\| }[/math] makes [math]\displaystyle{ G }[/math] into a metric space, and hence induces a topology on [math]\displaystyle{ G. }[/math]

Then, a [math]\displaystyle{ G }[/math]-valued series is absolutely convergent if [math]\displaystyle{ \sum_{n=0}^{\infty} \|a_n\| \lt \infty. }[/math]

In particular, these statements apply using the norm [math]\displaystyle{ |x| }[/math] (absolute value) in the space of real numbers or complex numbers.
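As an illustration, the following Python sketch checks partial sums of the series of norms for an R^2-valued series under the Euclidean group norm. The terms a_n = (1/n^2, 1/n^2) and the cutoff are arbitrary illustrative choices, and a finite computation can only suggest, not prove, convergence:

```python
import math

# Heuristic check of absolute convergence for a series with values in
# the abelian group R^2, using the Euclidean norm as the group norm.
# The terms a_n = (1/n**2, 1/n**2) are an arbitrary illustrative choice;
# here sum ||a_n|| = sqrt(2) * pi**2 / 6 exactly.
def euclidean_norm(v):
    return math.hypot(v[0], v[1])

N = 100_000
norm_partial = sum(
    euclidean_norm((1 / n**2, 1 / n**2)) for n in range(1, N + 1)
)
print(norm_partial)  # close to sqrt(2) * pi**2 / 6 ≈ 2.326
```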

In topological vector spaces

If [math]\displaystyle{ X }[/math] is a topological vector space (TVS) and [math]\displaystyle{ \left(x_\alpha\right)_{\alpha \in A} }[/math] is a (possibly uncountable) family in [math]\displaystyle{ X }[/math] then this family is absolutely summable if[1]

  1. [math]\displaystyle{ \left(x_\alpha\right)_{\alpha \in A} }[/math] is summable in [math]\displaystyle{ X }[/math] (that is, if the limit [math]\displaystyle{ \lim_{H \in \mathcal{F}(A)} x_H }[/math] of the net [math]\displaystyle{ \left(x_H\right)_{H \in \mathcal{F}(A)} }[/math] converges in [math]\displaystyle{ X, }[/math] where [math]\displaystyle{ \mathcal{F}(A) }[/math] is the directed set of all finite subsets of [math]\displaystyle{ A }[/math] directed by inclusion [math]\displaystyle{ \subseteq }[/math] and [math]\displaystyle{ x_H := \sum_{i \in H} x_i }[/math]), and
  2. for every continuous seminorm [math]\displaystyle{ p }[/math] on [math]\displaystyle{ X, }[/math] the family [math]\displaystyle{ \left(p \left(x_\alpha\right)\right)_{\alpha \in A} }[/math] is summable in [math]\displaystyle{ \R. }[/math]

If [math]\displaystyle{ X }[/math] is a normable space and if [math]\displaystyle{ \left(x_\alpha\right)_{\alpha \in A} }[/math] is an absolutely summable family in [math]\displaystyle{ X, }[/math] then necessarily all but a countable collection of [math]\displaystyle{ x_\alpha }[/math]'s are 0.

Absolutely summable families play an important role in the theory of nuclear spaces.

Relation to convergence

If [math]\displaystyle{ G }[/math] is complete with respect to the metric [math]\displaystyle{ d, }[/math] then every absolutely convergent series is convergent. The proof is the same as for complex-valued series: use the completeness to derive the Cauchy criterion for convergence—a series is convergent if and only if its tails can be made arbitrarily small in norm—and apply the triangle inequality.

In particular, for series with values in any Banach space, absolute convergence implies convergence. The converse is also true: if absolute convergence implies convergence in a normed space, then the space is a Banach space.

If a series is convergent but not absolutely convergent, it is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series. Many standard tests for convergence and divergence, most notably the ratio test and the root test, actually establish absolute convergence. This is because a power series is absolutely convergent on the interior of its disk of convergence.[lower-alpha 1]
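For example, the ratio test applied to the power series with terms z^n/n gives a radius of convergence of 1, and the series converges absolutely at every interior point. A Python sketch at z = 1/2 (an arbitrary interior point), where the sum of absolute values equals ln 2:

```python
import math

# Ratio test for the power series sum_{n>=1} z**n / n: the coefficient
# ratio |a_{n+1} / a_n| = n / (n + 1) tends to 1, so the radius of
# convergence is 1.  At the interior point z = 1/2 the series converges
# absolutely, with sum of absolute values -ln(1 - 1/2) = ln 2.
z = 0.5
N = 200
abs_sum = sum(abs(z) ** n / n for n in range(1, N + 1))
ratio = N / (N + 1)  # coefficient ratio at n = N, tending to 1
print(abs_sum)  # close to ln 2 ≈ 0.6931
```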

Proof that any absolutely convergent series of complex numbers is convergent

Suppose that [math]\displaystyle{ \sum \left|a_k\right|, a_k \in \Complex }[/math] is convergent. Then equivalently, [math]\displaystyle{ \sum \left[ \operatorname{Re}\left(a_k\right)^2 + \operatorname{Im}\left(a_k\right)^2 \right]^{1/2} }[/math] is convergent, which implies that [math]\displaystyle{ \sum \left|\operatorname{Re}\left(a_k\right)\right| }[/math] and [math]\displaystyle{ \sum\left|\operatorname{Im}\left(a_k\right)\right| }[/math] converge by termwise comparison of non-negative terms. It suffices to show that the convergence of these series implies the convergence of [math]\displaystyle{ \sum \operatorname{Re}\left(a_k\right) }[/math] and [math]\displaystyle{ \sum \operatorname{Im}\left(a_k\right), }[/math] for then, the convergence of [math]\displaystyle{ \sum a_k=\sum \operatorname{Re}\left(a_k\right) + i \sum \operatorname{Im}\left(a_k\right) }[/math] would follow, by the definition of the convergence of complex-valued series.

The preceding discussion shows that we need only prove that convergence of [math]\displaystyle{ \sum \left|a_k\right|, a_k\in\R }[/math] implies the convergence of [math]\displaystyle{ \sum a_k. }[/math]

Let [math]\displaystyle{ \sum \left|a_k\right|, a_k\in\R }[/math] be convergent. Since [math]\displaystyle{ 0 \leq a_k + \left|a_k\right| \leq 2\left|a_k\right|, }[/math] we have [math]\displaystyle{ 0 \leq \sum_{k = 1}^n (a_k + \left|a_k\right|) \leq \sum_{k = 1}^n 2\left|a_k\right|. }[/math] Since [math]\displaystyle{ \sum 2\left|a_k\right| }[/math] is convergent, [math]\displaystyle{ s_n=\sum_{k = 1}^n \left(a_k + \left|a_k\right|\right) }[/math] is a bounded monotonic sequence of partial sums, and [math]\displaystyle{ \sum \left(a_k + \left|a_k\right|\right) }[/math] must also converge. Noting that [math]\displaystyle{ \sum a_k = \sum \left(a_k + \left|a_k\right|\right) - \sum \left|a_k\right| }[/math] is the difference of convergent series, we conclude that it too is a convergent series, as desired.
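The decomposition used in this proof can be illustrated numerically; the sketch below uses the arbitrary absolutely convergent choice a_k = (-1)^k / k^2:

```python
import math

# Numerical illustration of the decomposition in the proof:
#   sum a_k = sum (a_k + |a_k|) - sum |a_k|,
# where both series on the right-hand side have non-negative terms.
# Here a_k = (-1)**k / k**2, an arbitrary absolutely convergent example.
N = 100_000
a = [(-1) ** k / k**2 for k in range(1, N + 1)]
s_plus = sum(x + abs(x) for x in a)  # non-negative terms, bounded partial sums
s_abs = sum(abs(x) for x in a)       # close to pi**2 / 6
s = sum(a)                           # close to -pi**2 / 12
print(s, s_plus - s_abs)             # the two agree up to rounding
```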

Alternative proof using the Cauchy criterion and triangle inequality

By applying the Cauchy criterion for the convergence of a complex series, we can also prove this fact as a simple implication of the triangle inequality.[2] By the Cauchy criterion, [math]\displaystyle{ \sum |a_i| }[/math] converges if and only if for any [math]\displaystyle{ \varepsilon \gt 0, }[/math] there exists [math]\displaystyle{ N }[/math] such that [math]\displaystyle{ \left|\sum_{i=m}^n \left|a_i\right| \right| = \sum_{i=m}^n |a_i| \lt \varepsilon }[/math] for any [math]\displaystyle{ n \gt m \geq N. }[/math] But the triangle inequality implies that [math]\displaystyle{ \big|\sum_{i=m}^n a_i\big| \leq \sum_{i=m}^n |a_i|, }[/math] so that [math]\displaystyle{ \left|\sum_{i=m}^n a_i\right| \lt \varepsilon }[/math] for any [math]\displaystyle{ n \gt m \geq N, }[/math] which is exactly the Cauchy criterion for [math]\displaystyle{ \sum a_i. }[/math]

Proof that any absolutely convergent series in a Banach space is convergent

The above result can be easily generalized to every Banach space [math]\displaystyle{ (X, \|\,\cdot\,\|). }[/math] Let [math]\displaystyle{ \sum x_n }[/math] be an absolutely convergent series in [math]\displaystyle{ X. }[/math] As [math]\displaystyle{ \sum_{k=1}^n\|x_k\| }[/math] is a Cauchy sequence of real numbers, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math] and large enough natural numbers [math]\displaystyle{ m \gt n }[/math] it holds: [math]\displaystyle{ \left| \sum_{k=1}^m \|x_k\| - \sum_{k=1}^n \|x_k\| \right| = \sum_{k=n+1}^m \|x_k\| \lt \varepsilon. }[/math]

By the triangle inequality for the norm [math]\displaystyle{ \|\cdot\|, }[/math] one immediately gets: [math]\displaystyle{ \left\|\sum_{k=1}^m x_k - \sum_{k=1}^n x_k\right\| = \left\|\sum_{k=n+1}^m x_k\right\| \leq \sum_{k=n+1}^m \|x_k\| \lt \varepsilon, }[/math] which means that [math]\displaystyle{ \sum_{k=1}^n x_k }[/math] is a Cauchy sequence in [math]\displaystyle{ X, }[/math] hence the series is convergent in [math]\displaystyle{ X. }[/math][3]

Rearrangements and unconditional convergence

Real and complex numbers

When a series of real or complex numbers is absolutely convergent, any rearrangement or reordering of that series' terms will still converge to the same value. This fact is one reason absolutely convergent series are useful: showing a series is absolutely convergent allows terms to be paired or rearranged in convenient ways without changing the sum's value.

The Riemann rearrangement theorem shows that the converse is also true: every real or complex-valued series whose terms cannot be reordered to give a different value is absolutely convergent.
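The rearrangement construction behind Riemann's theorem can be sketched as a greedy algorithm: add unused positive terms until the partial sum exceeds the target, then unused negative terms until it falls below, and repeat. A Python sketch for the alternating harmonic series (the target 1.0 and the step count are arbitrary choices):

```python
# Riemann's rearrangement theorem in action: greedily reorder the terms
# of the conditionally convergent series sum (-1)**(n+1) / n so that
# the partial sums approach an arbitrary target.
target = 1.0
s = 0.0
p, q = 1, 1  # next unused positive term 1/(2p-1), negative term -1/(2q)
for _ in range(100_000):
    if s <= target:
        s += 1 / (2 * p - 1)
        p += 1
    else:
        s -= 1 / (2 * q)
        q += 1
print(s)  # close to the target 1.0
```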

Series with coefficients in more general space

The term unconditional convergence is used to refer to a series for which any rearrangement of its terms still converges to the same value. If [math]\displaystyle{ G }[/math] is a complete normed abelian group, then every series with values in [math]\displaystyle{ G }[/math] that converges absolutely also converges unconditionally.

Stated more formally:

Theorem —  Let [math]\displaystyle{ G }[/math] be a normed abelian group. Suppose [math]\displaystyle{ \sum_{i=1}^\infty a_i = A \in G, \quad \sum_{i=1}^\infty \|a_i\|\lt \infty. }[/math] If [math]\displaystyle{ \sigma : \N \to \N }[/math] is any permutation, then [math]\displaystyle{ \sum_{i=1}^\infty a_{\sigma(i)}=A. }[/math]

For series with more general coefficients, the converse is more complicated. As stated in the previous section, for real-valued and complex-valued series, unconditional convergence always implies absolute convergence. However, in the more general case of a series with values in any normed abelian group [math]\displaystyle{ G }[/math], the converse does not always hold: there can exist series which are not absolutely convergent, yet unconditionally convergent.

For example, in the Hilbert space [math]\displaystyle{ \ell^2, }[/math] one series which is unconditionally convergent but not absolutely convergent is: [math]\displaystyle{ \sum_{n=1}^\infty \tfrac{1}{n} e_n, }[/math]

where [math]\displaystyle{ \{e_n\}_{n=1}^{\infty} }[/math] is an orthonormal basis. A theorem of A. Dvoretzky and C. A. Rogers asserts that every infinite-dimensional Banach space has an unconditionally convergent series that is not absolutely convergent.[4]
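Taking the space to be l^2 for concreteness, the contrast can be seen numerically: the norms 1/n are not summable, while the distance between partial sums of the series is the square root of a tail of the convergent series of 1/k^2, so the partial sums are Cauchy:

```python
import math

# In l^2 (chosen for concreteness) the series sum (1/n) e_n has terms
# with norm 1/n, so the series of norms is the divergent harmonic
# series; but the distance between partial sums S_m and S_n is
# (sum_{k=n+1}^{m} 1/k**2) ** 0.5, a tail of a convergent series.
N = 1_000_000
norm_sum = sum(1 / n for n in range(1, N + 1))  # grows like ln N
tail = math.sqrt(sum(1 / k**2 for k in range(1001, N + 1)))  # ||S_N - S_1000||
print(norm_sum, tail)  # norm_sum keeps growing, tail is small
```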

Proof of the theorem

For any [math]\displaystyle{ \varepsilon \gt 0, }[/math] we can choose some [math]\displaystyle{ \kappa_\varepsilon, \lambda_\varepsilon \in \N, }[/math] such that: [math]\displaystyle{ \begin{align} \text{ for all } N \gt \kappa_\varepsilon &\quad \sum_{n=N}^\infty \|a_n\| \lt \tfrac{\varepsilon}{2} \\ \text{ for all } N \gt \lambda_\varepsilon &\quad \left\|\sum_{n=1}^N a_n - A\right\| \lt \tfrac{\varepsilon}{2} \end{align} }[/math]

Let [math]\displaystyle{ \begin{align} N_\varepsilon &=\max \left\{\kappa_\varepsilon, \lambda_\varepsilon \right\} \\ M_{\sigma,\varepsilon} &= \max \left\{\sigma^{-1}\left(\left\{ 1, \ldots, N_\varepsilon \right\}\right)\right\} \end{align} }[/math] where [math]\displaystyle{ \sigma^{-1}\left(\left\{1, \ldots, N_\varepsilon\right\}\right) = \left\{\sigma^{-1}(1), \ldots, \sigma^{-1}\left(N_\varepsilon\right)\right\} }[/math] so that [math]\displaystyle{ M_{\sigma,\varepsilon} }[/math] is the smallest natural number such that the list [math]\displaystyle{ a_{\sigma(1)}, \ldots, a_{\sigma\left(M_{\sigma,\varepsilon}\right)} }[/math] includes all of the terms [math]\displaystyle{ a_1, \ldots, a_{N_\varepsilon} }[/math] (and possibly others).

Finally for any integer [math]\displaystyle{ N \gt M_{\sigma,\varepsilon} }[/math] let [math]\displaystyle{ \begin{align} I_{\sigma,\varepsilon} &= \left\{ 1,\ldots,N \right\}\setminus \sigma^{-1}\left(\left \{ 1, \ldots, N_\varepsilon \right \}\right) \\ S_{\sigma,\varepsilon} &= \min \sigma\left(I_{\sigma,\varepsilon}\right) = \min \left\{\sigma(k) \ : \ k \in I_{\sigma,\varepsilon}\right\} \\ L_{\sigma,\varepsilon} &= \max \sigma\left(I_{\sigma,\varepsilon}\right) = \max \left\{\sigma(k) \ : \ k \in I_{\sigma,\varepsilon}\right\} \\ \end{align} }[/math] so that [math]\displaystyle{ \begin{align} \left\|\sum_{i\in I_{\sigma,\varepsilon}} a_{\sigma(i)}\right\| &\leq \sum_{i \in I_{\sigma,\varepsilon}} \left\|a_{\sigma(i)}\right\| \\ &\leq \sum_{j = S_{\sigma,\varepsilon}}^{L_{\sigma,\varepsilon}} \left\|a_j\right\| && \text{ since } I_{\sigma,\varepsilon} \subseteq \left\{S_{\sigma,\varepsilon}, S_{\sigma,\varepsilon} + 1, \ldots, L_{\sigma,\varepsilon}\right\} \\ &\leq \sum_{j = N_\varepsilon + 1}^{\infty} \left\|a_j\right\| && \text{ since } S_{\sigma,\varepsilon} \geq N_{\varepsilon} + 1 \\ &\lt \frac{\varepsilon}{2} \end{align} }[/math] and thus [math]\displaystyle{ \begin{align} \left\|\sum_{i=1}^N a_{\sigma(i)}-A \right\| &= \left\| \sum_{i \in \sigma^{-1}\left(\{ 1,\dots,N_\varepsilon \}\right)} a_{\sigma(i)} - A + \sum_{i\in I_{\sigma,\varepsilon}} a_{\sigma(i)} \right\| \\ &\leq \left\|\sum_{j=1}^{N_\varepsilon} a_j - A \right\| + \left\|\sum_{i\in I_{\sigma,\varepsilon}} a_{\sigma(i)} \right\| \\ &\lt \left\|\sum_{j=1}^{N_\varepsilon} a_j - A \right\| + \frac{\varepsilon}{2}\\ &\lt \varepsilon \end{align} }[/math]

This shows that [math]\displaystyle{ \text{ for all } \varepsilon \gt 0, \text{ there exists } M_{\sigma,\varepsilon}, \text{ for all } N \gt M_{\sigma,\varepsilon} \quad \left\|\sum_{i=1}^N a_{\sigma(i)} - A\right\| \lt \varepsilon, }[/math] that is: [math]\displaystyle{ \sum_{i=1}^\infty a_{\sigma(i)} = A. }[/math]


Products of series

The Cauchy product of two series converges to the product of the sums if at least one of the series converges absolutely. That is, suppose that [math]\displaystyle{ \sum_{n=0}^\infty a_n = A \quad \text{ and } \quad \sum_{n=0}^\infty b_n = B. }[/math]

The Cauchy product is defined as the sum of terms [math]\displaystyle{ c_n }[/math] where: [math]\displaystyle{ c_n = \sum_{k=0}^n a_k b_{n-k}. }[/math]

If either the [math]\displaystyle{ a_n }[/math] or [math]\displaystyle{ b_n }[/math] sum converges absolutely then [math]\displaystyle{ \sum_{n=0}^\infty c_n = A B. }[/math]
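A quick numerical check with two absolutely convergent geometric series (the point x = 1/2 is an arbitrary choice inside the disk of convergence):

```python
# Cauchy product of two absolutely convergent geometric series at
# x = 1/2: a_n = b_n = x**n, so A = B = 1 / (1 - x) = 2 and
# c_n = sum_{k=0}^{n} x**k * x**(n-k) = (n + 1) * x**n.
x = 0.5
N = 100
A = sum(x**n for n in range(N + 1))
B = A
C = sum((n + 1) * x**n for n in range(N + 1))  # partial Cauchy product
print(C)  # close to A * B = 4
```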

Absolute convergence over sets

A generalization of the absolute convergence of a series is the absolute convergence of a sum of a function over a set. We first consider a countable set [math]\displaystyle{ X }[/math] and a function [math]\displaystyle{ f : X \to \R. }[/math] We will give a definition below of the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X, }[/math] written as [math]\displaystyle{ \sum_{x \in X} f(x). }[/math]

First note that because no particular enumeration (or "indexing") of [math]\displaystyle{ X }[/math] has yet been specified, the series [math]\displaystyle{ \sum_{x \in X}f(x) }[/math] cannot be understood by the more basic definition of a series. In fact, for certain examples of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ f, }[/math] the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X }[/math] may not be defined at all, since some indexing may produce a conditionally convergent series.

Therefore we define [math]\displaystyle{ \sum_{x \in X} f(x) }[/math] only in the case where there exists some bijection [math]\displaystyle{ g : \Z^+ \to X }[/math] such that [math]\displaystyle{ \sum_{n=1}^\infty f(g(n)) }[/math] is absolutely convergent. Note that here, "absolutely convergent" uses the more basic definition, applied to an indexed series. In this case, the value of the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X }[/math][5] is defined by [math]\displaystyle{ \sum_{x \in X}f(x) := \sum_{n=1}^\infty f(g(n)). }[/math]

Note that because the series is absolutely convergent, every rearrangement of it corresponds simply to a different choice of bijection [math]\displaystyle{ g. }[/math] Since all of these sums have the same value, the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X }[/math] is well-defined.
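A concrete Python example, in which the set X = Z (all integers), the function f(x) = 2^(-|x|), and the particular bijection are all arbitrary illustrative choices:

```python
# Sum of f over the countable set X = Z, with f(x) = 2**(-|x|).
# The enumeration 0, 1, -1, 2, -2, ... is one choice of bijection
# g: {1, 2, 3, ...} -> Z; absolute convergence makes the value
# independent of that choice.  Exact value: 1 + 2 * sum_{k>=1} 2**-k = 3.
def f(x):
    return 2.0 ** (-abs(x))

def g(n):  # bijection from the positive integers onto Z
    return n // 2 if n % 2 == 0 else -(n // 2)

total = sum(f(g(n)) for n in range(1, 200))
print(total)  # close to 3
```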

Even more generally we may define the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X }[/math] when [math]\displaystyle{ X }[/math] is uncountable. But first we define what it means for the sum to be convergent.

Let [math]\displaystyle{ X }[/math] be any set, countable or uncountable, and [math]\displaystyle{ f : X \to \R }[/math] a function. We say that the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X }[/math] converges absolutely if [math]\displaystyle{ \sup\left\{\sum_{x \in A} |f(x)|: A\subseteq X, A \text{ is finite }\right\} \lt \infty. }[/math]

There is a theorem which states that, if the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X }[/math] is absolutely convergent, then [math]\displaystyle{ f }[/math] takes non-zero values on a set that is at most countable. Therefore, the following is a consistent definition of the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X }[/math] when the sum is absolutely convergent. [math]\displaystyle{ \sum_{x \in X} f(x) := \sum_{x \in X : f(x) \neq 0} f(x). }[/math]

Note that the final series uses the definition of a series over a countable set.

Some authors define an iterated sum [math]\displaystyle{ \sum_{m=1}^\infty \sum_{n=1}^\infty a_{m,n} }[/math] to be absolutely convergent if [math]\displaystyle{ \sum_{m=1}^\infty \sum_{n=1}^\infty |a_{m,n}| \lt \infty. }[/math][6] This is in fact equivalent to the absolute convergence of [math]\displaystyle{ \sum_{(m,n) \in \N \times \N} a_{m,n}. }[/math] That is to say, if the sum of [math]\displaystyle{ f }[/math] over [math]\displaystyle{ X, }[/math] [math]\displaystyle{ \sum_{(m,n) \in \N \times \N} a_{m,n}, }[/math] converges absolutely, as defined above, then the iterated sum [math]\displaystyle{ \sum_{m=1}^\infty \sum_{n=1}^\infty a_{m,n} }[/math] converges absolutely, and vice versa.
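This equivalence can be illustrated with the (arbitrarily chosen) positive terms a_{m,n} = 1/(2^m 3^n), for which the iterated sum factors into (sum of 1/2^m)(sum of 1/3^n) = 1/2, while a diagonal enumeration of the pairs gives the same value:

```python
# Iterated sum versus a sum over pairs for a_{m,n} = 1 / (2**m * 3**n),
# m, n >= 1.  The iterated sum factors:
#   (sum_m 1/2**m) * (sum_n 1/3**n) = 1 * (1/2) = 1/2,
# and since all terms are positive, a diagonal enumeration of N x N
# (grouping pairs with m + n = d) gives the same value.
M = 60
iterated = sum(
    sum(1 / (2**m * 3**n) for n in range(1, M + 1)) for m in range(1, M + 1)
)
diagonal = sum(
    1 / (2**m * 3 ** (d - m)) for d in range(2, M + 2) for m in range(1, d)
)
print(iterated, diagonal)  # both close to 0.5
```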

Absolute convergence of integrals

The integral [math]\displaystyle{ \int_A f(x)\,dx }[/math] of a real or complex-valued function is said to converge absolutely if [math]\displaystyle{ \int_A \left|f(x)\right|\,dx \lt \infty. }[/math] One also says that [math]\displaystyle{ f }[/math] is absolutely integrable. The issue of absolute integrability is intricate and depends on whether the Riemann, Lebesgue, or Kurzweil-Henstock (gauge) integral is considered; for the Riemann integral, it also depends on whether we only consider integrability in its proper sense ([math]\displaystyle{ f }[/math] and [math]\displaystyle{ A }[/math] both bounded), or permit the more general case of improper integrals.

As a standard property of the Riemann integral, when [math]\displaystyle{ A=[a,b] }[/math] is a bounded interval, every continuous function is bounded and (Riemann) integrable, and since [math]\displaystyle{ f }[/math] continuous implies [math]\displaystyle{ |f| }[/math] continuous, every continuous function is absolutely integrable. In fact, since [math]\displaystyle{ g\circ f }[/math] is Riemann integrable on [math]\displaystyle{ [a,b] }[/math] if [math]\displaystyle{ f }[/math] is (properly) integrable and [math]\displaystyle{ g }[/math] is continuous, it follows that [math]\displaystyle{ |f|=|\cdot|\circ f }[/math] is properly Riemann integrable if [math]\displaystyle{ f }[/math] is. However, this implication does not hold in the case of improper integrals. For instance, the function [math]\displaystyle{ f:[1,\infty) \to \R : x \mapsto \frac{\sin x}{x} }[/math] is improperly Riemann integrable on its unbounded domain, but it is not absolutely integrable: [math]\displaystyle{ \int_1^\infty \frac{\sin x}{x}\,dx = \frac{1}{2}\bigl[\pi - 2\,\mathrm{Si}(1)\bigr] \approx 0.62, \text{ but } \int_1^\infty \left|\frac{\sin x}{x}\right| dx = \infty. }[/math] Indeed, more generally, given any series [math]\displaystyle{ \sum_{n=0}^\infty a_n }[/math] one can consider the associated step function [math]\displaystyle{ f_a: [0,\infty) \to \R }[/math] defined by [math]\displaystyle{ f_a([n,n+1)) = a_n. }[/math] Then [math]\displaystyle{ \int_0^\infty f_a \, dx }[/math] converges absolutely, converges conditionally or diverges according to the corresponding behavior of [math]\displaystyle{ \sum_{n=0}^\infty a_n. }[/math]
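The contrast between the convergent improper integral and the divergent absolute integral can be seen numerically. The sketch below uses a simple composite trapezoid rule; the step count and the cutoff X are arbitrary choices, and the computation only approximates the limiting behavior:

```python
import math

# The improper Riemann integral of sin(x)/x on [1, X] stabilizes near
# pi/2 - Si(1) ≈ 0.6247 as X grows, while the integral of |sin(x)|/x
# grows without bound (roughly like (2/pi) * ln X).
def trapezoid(func, a, b, steps):
    # Composite trapezoid rule with `steps` uniform subintervals.
    h = (b - a) / steps
    total = 0.5 * (func(a) + func(b))
    for i in range(1, steps):
        total += func(a + i * h)
    return total * h

X = 2000.0
signed = trapezoid(lambda x: math.sin(x) / x, 1.0, X, 400_000)
absolute = trapezoid(lambda x: abs(math.sin(x)) / x, 1.0, X, 400_000)
print(signed)    # close to 0.62
print(absolute)  # keeps growing as X increases
```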

The situation is different for the Lebesgue integral, which does not handle bounded and unbounded domains of integration separately (see below). The fact that the integral of [math]\displaystyle{ |f| }[/math] is unbounded in the examples above implies that [math]\displaystyle{ f }[/math] is also not integrable in the Lebesgue sense. In fact, in the Lebesgue theory of integration, given that [math]\displaystyle{ f }[/math] is measurable, [math]\displaystyle{ f }[/math] is (Lebesgue) integrable if and only if [math]\displaystyle{ |f| }[/math] is (Lebesgue) integrable. However, the hypothesis that [math]\displaystyle{ f }[/math] is measurable is crucial; it is not generally true that absolutely integrable functions on [math]\displaystyle{ [a,b] }[/math] are integrable (simply because they may fail to be measurable): let [math]\displaystyle{ S \subset [a,b] }[/math] be a nonmeasurable subset and consider [math]\displaystyle{ f = \chi_S - 1/2, }[/math] where [math]\displaystyle{ \chi_S }[/math] is the characteristic function of [math]\displaystyle{ S. }[/math] Then [math]\displaystyle{ f }[/math] is not Lebesgue measurable and thus not integrable, but [math]\displaystyle{ |f| \equiv 1/2 }[/math] is a constant function and clearly integrable.

On the other hand, a function [math]\displaystyle{ f }[/math] may be Kurzweil-Henstock integrable (gauge integrable) while [math]\displaystyle{ |f| }[/math] is not. This includes the case of improperly Riemann integrable functions.

In a general sense, on any measure space [math]\displaystyle{ A, }[/math] the Lebesgue integral of a real-valued function is defined in terms of its positive and negative parts, so the facts:

  1. [math]\displaystyle{ f }[/math] integrable implies [math]\displaystyle{ |f| }[/math] integrable
  2. [math]\displaystyle{ f }[/math] measurable, [math]\displaystyle{ |f| }[/math] integrable implies [math]\displaystyle{ f }[/math] integrable

are essentially built into the definition of the Lebesgue integral. In particular, applying the theory to the counting measure on a set [math]\displaystyle{ S, }[/math] one recovers the notion of unordered summation of series developed by Moore–Smith using (what are now called) nets. When [math]\displaystyle{ S = \N }[/math] is the set of natural numbers, Lebesgue integrability, unordered summability and absolute convergence all coincide.

Finally, all of the above holds for integrals with values in a Banach space. The definition of a Banach-valued Riemann integral is an evident modification of the usual one. For the Lebesgue integral one needs to circumvent the decomposition into positive and negative parts with Daniell's more functional analytic approach, obtaining the Bochner integral.

Notes

  1. Here, the disk of convergence refers to the set of all points whose distance from the center of the series is strictly less than the radius of convergence; the power series converges absolutely at every such point.


References

  1. Schaefer & Wolff 1999, pp. 179–180.
  2. Rudin, Walter (1976). Principles of Mathematical Analysis. New York: McGraw-Hill. pp. 71–72. ISBN 0-07-054235-X. https://archive.org/details/1979RudinW. 
  3. Megginson, Robert E. (1998). An Introduction to Banach Space Theory. Graduate Texts in Mathematics 183. New York: Springer-Verlag. p. 20 (Theorem 1.3.9). ISBN 0-387-98431-3.
  4. Dvoretzky, A.; Rogers, C. A. (1950), "Absolute and unconditional convergence in normed linear spaces", Proc. Natl. Acad. Sci. U.S.A. 36:192–197.
  5. Tao, Terence (2016). Analysis I. New Delhi: Hindustan Book Agency. pp. 188–191. ISBN 978-9380250649.
  6. Strichartz, Robert (2000). The Way of Analysis. Jones & Bartlett Learning. pp. 259–260. ISBN 978-0763714970.
