Cauchy condensation test
In mathematics, the Cauchy condensation test, named after Augustin-Louis Cauchy, is a standard convergence test for infinite series. For a non-increasing sequence [math]\displaystyle{ f(n) }[/math] of non-negative real numbers, the series [math]\displaystyle{ \sum\limits_{n=1}^{\infty} f(n) }[/math] converges if and only if the "condensed" series [math]\displaystyle{ \sum\limits_{n=0}^{\infty} 2^{n} f(2^{n}) }[/math] converges. Moreover, if they converge, the sum of the condensed series is no more than twice as large as the sum of the original.
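The statement is easy to probe numerically. The following Python sketch (the helper names `partial_sum` and `condensed_partial_sum` are ours, chosen for illustration) compares partial sums of the original and condensed series for the convergent example f(n) = 1/n²:

```python
def partial_sum(f, terms):
    """Partial sum of the original series: sum_{n=1}^{terms} f(n)."""
    return sum(f(n) for n in range(1, terms + 1))

def condensed_partial_sum(f, terms):
    """Partial sum of the condensed series: sum_{n=0}^{terms-1} 2^n f(2^n)."""
    return sum(2**n * f(2**n) for n in range(terms))

f = lambda n: 1.0 / n**2            # non-increasing and non-negative

s = partial_sum(f, 100_000)         # approaches pi^2/6, about 1.6449
c = condensed_partial_sum(f, 60)    # condensed terms are 2^{-n}, summing to 2

# The bracketing proved in the next section: s <= c <= 2s.
assert s <= c <= 2 * s
```

Here the condensed series is geometric and sums to exactly 2, comfortably within twice the original sum.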
Estimate
The Cauchy condensation test follows from the stronger estimate, [math]\displaystyle{ \sum_{n=1}^{\infty} f(n) \leq \sum_{n=0}^{\infty} 2^n f(2^n) \leq\ 2\sum_{n=1}^{\infty} f(n), }[/math] which should be understood as an inequality of extended real numbers. The essential thrust of a proof follows, patterned after Oresme's proof of the divergence of the harmonic series.
To see the first inequality, the terms of the original series are rebracketed into runs whose lengths are powers of two, and then each run is bounded above by replacing each term by the largest term in that run. That term is always the first one, since by assumption the terms are non-increasing. [math]\displaystyle{ \begin{array}{rcccccccl}\displaystyle \sum\limits_{n=1}^{\infty} f(n) & = &f(1) & + & f(2) + f(3) & + & f(4) + f(5) + f(6) + f(7) & + & \cdots \\ & = &f(1) & + & \Big(f(2) + f(3)\Big) & + & \Big(f(4) + f(5) + f(6) + f(7)\Big) & + &\cdots \\ & \leq &f(1) & + & \Big(f(2) + f(2)\Big) & + & \Big(f(4) + f(4) + f(4) + f(4)\Big) & + &\cdots \\ & = &f(1) & + & 2 f(2) & + & 4 f(4)& + &\cdots = \sum\limits_{n=0}^{\infty} 2^{n} f(2^{n}) \end{array} }[/math]
To see the second inequality, these two series are again rebracketed into runs of power of two length, but "offset" as shown below, so that the run of [math]\displaystyle{ 2 \sum_{n=1}^{\infty} f(n) }[/math] which begins with [math]\displaystyle{ f(2^{n}) }[/math] lines up with the end of the run of [math]\displaystyle{ \sum_{n=0}^{\infty} 2^{n} f(2^{n}) }[/math] which ends with [math]\displaystyle{ f(2^{n}) }[/math], so that the former stays always "ahead" of the latter. [math]\displaystyle{ \begin{align} \sum_{n=0}^{\infty} 2^{n}f(2^{n}) & = f(1) + \Big(f(2) + f(2)\Big) + \Big(f(4) + f(4) + f(4) +f(4)\Big) + \cdots \\ & = \Big(f(1) + f(2)\Big) + \Big(f(2) + f(4) + f(4) + f(4)\Big) + \cdots \\ & \leq \Big(f(1) + f(1)\Big) + \Big(f(2) + f(2) + f(3) + f(3)\Big) + \cdots = 2 \sum_{n=1}^{\infty} f(n) \end{align} }[/math]
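Both rebracketing arguments also hold for finite prefixes: for any K, the sum of f(n) over n < 2^K is at most the first K condensed terms, which in turn are at most twice the sum of f(n) over n ≤ 2^(K-1). A quick Python check with the harmonic terms f(n) = 1/n (an illustrative choice):

```python
f = lambda n: 1.0 / n   # non-increasing, non-negative
K = 15

left = sum(f(n) for n in range(1, 2**K))                 # sum_{n=1}^{2^K - 1} f(n)
mid = sum(2**j * f(2**j) for j in range(K))              # sum_{j=0}^{K-1} 2^j f(2^j)
right = 2 * sum(f(n) for n in range(1, 2**(K - 1) + 1))  # 2 sum_{n=1}^{2^{K-1}} f(n)

# The two rebracketing bounds from the proof above.
assert left <= mid <= right
```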
Integral comparison
The "condensation" transformation [math]\displaystyle{ f(n) \to 2^{n} f(2^{n}) }[/math] recalls the integral variable substitution [math]\displaystyle{ x \to e^{x} }[/math] yielding [math]\displaystyle{ f(x)\,\mathrm{d}x \to e^{x} f(e^{x})\,\mathrm{d}x }[/math].
Pursuing this idea, the integral test for convergence gives us, in the case of monotone [math]\displaystyle{ f }[/math], that [math]\displaystyle{ \sum\limits_{n=1}^{\infty}f(n) }[/math] converges if and only if [math]\displaystyle{ \displaystyle\int_{1}^{\infty}f(x)\,\mathrm{d}x }[/math] converges. The substitution [math]\displaystyle{ x\to 2^x }[/math] turns this integral into [math]\displaystyle{ \displaystyle \log 2\ \int_{0}^{\infty}\!2^{x}f(2^{x})\,\mathrm{d}x }[/math]. Applying the integral test once more, this time to the condensed series [math]\displaystyle{ \sum\limits_{n=0}^{\infty} 2^{n}f(2^{n}) }[/math], shows that this last integral converges if and only if the condensed series does. Therefore, [math]\displaystyle{ \sum\limits_{n=1}^{\infty} f(n) }[/math] converges if and only if [math]\displaystyle{ \sum\limits_{n=0}^{\infty} 2^{n}f(2^{n}) }[/math] converges.
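The change of variables can be verified numerically. This sketch (the crude `midpoint_integral` helper is ours, not a library routine) checks that for f(x) = 1/x², the integral of f from 1 onward matches log 2 times the integral of 2^x f(2^x) from 0 onward, truncating both at matching finite bounds:

```python
import math

def midpoint_integral(g, a, b, steps=200_000):
    """Crude midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / steps
    return h * sum(g(a + (i + 0.5) * h) for i in range(steps))

f = lambda x: x**-2.0

# Left side: integral of f(x) from 1 to infinity, truncated at 10^4.
lhs = midpoint_integral(f, 1.0, 1e4)

# Right side: after x -> 2^x the same integral becomes
# log(2) * integral of 2^x f(2^x) from 0 to infinity,
# truncated at the matching bound log2(10^4).
g = lambda x: 2**x * f(2**x)
rhs = math.log(2) * midpoint_integral(g, 0.0, math.log2(1e4))

# Both approximate the exact value 1 (up to truncation and quadrature error).
assert abs(lhs - rhs) < 1e-3
```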
Examples
The test can be useful for series where n appears in a denominator of f. For the most basic example of this sort, the harmonic series [math]\displaystyle{ \sum_{n=1}^{\infty} 1/n }[/math] is transformed into the series [math]\displaystyle{ \sum 1 }[/math], which clearly diverges.
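Concretely, condensing the harmonic terms gives 2^n · f(2^n) = 2^n · 2^(−n) = 1 for every n:

```python
f = lambda n: 1.0 / n   # harmonic terms

# Condensed terms 2^n * f(2^n); each is exactly 1, so the condensed
# series -- and with it the harmonic series -- diverges.
condensed_terms = [2**n * f(2**n) for n in range(20)]
assert all(t == 1.0 for t in condensed_terms)
```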
As a more complex example, take [math]\displaystyle{ f(n) := n^{-a} (\log n)^{-b} (\log \log n)^{-c}. }[/math]
Here the series definitely converges for a > 1, and diverges for a < 1. When a = 1, the condensation transformation gives the series [math]\displaystyle{ \sum n^{-b} (\log n)^{-c}. }[/math]
The logarithms "shift to the left". So when a = 1, we have convergence for b > 1, divergence for b < 1. When b = 1, the value of c enters: condensing once more gives convergence for c > 1 and divergence for c ≤ 1.
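The "shift to the left" can be seen term by term. Taking a = 1 and c = 0 for illustration, the condensed term works out exactly to 2^n f(2^n) = (n log 2)^(−b), a constant multiple of the p-series term n^(−b):

```python
import math

b = 2.0
f = lambda n: 1.0 / (n * math.log(n) ** b)   # a = 1, c = 0

for n in range(1, 30):
    term = 2**n * f(2**n)                    # condensed term
    expected = (n * math.log(2)) ** -b       # (log 2)^{-b} * n^{-b}
    assert abs(term - expected) <= 1e-12 * expected
```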
This result readily generalizes: the condensation test, applied repeatedly, can be used to show that for [math]\displaystyle{ k = 1,2,3,\ldots }[/math], the generalized Bertrand series [math]\displaystyle{ \sum_{n\geq N} \frac{1}{n \cdot \log n \cdot \log\log n \cdots \log^{\circ (k-1)} n \cdot(\log^{\circ k} n)^\alpha} \quad\quad (N=\lfloor \exp^{\circ k} (0) \rfloor+1) }[/math] converges for [math]\displaystyle{ \alpha \gt 1 }[/math] and diverges for [math]\displaystyle{ 0 \lt \alpha \leq 1 }[/math].[1] Here [math]\displaystyle{ f^{\circ m} }[/math] denotes the mth iterate of a function [math]\displaystyle{ f }[/math], so that [math]\displaystyle{ f^{\circ m} (x) := \begin{cases} f(f^{\circ(m-1)}(x)), & m=1, 2, 3,\ldots; \\ x, & m = 0. \end{cases} }[/math] The lower limit of the sum, [math]\displaystyle{ N }[/math], was chosen so that all terms of the series are positive. Notably, these series provide examples of infinite sums that converge or diverge arbitrarily slowly. For instance, in the case of [math]\displaystyle{ k = 2 }[/math] and [math]\displaystyle{ \alpha = 1 }[/math], the partial sum exceeds 10 only after [math]\displaystyle{ 10^{10^{100}} }[/math] (a googolplex) terms; yet the series still diverges.
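The slow divergence is easy to witness numerically, even though the googolplex threshold itself is far out of reach. As a rough experiment, the partial sums of the k = 1, α = 1 series grow like log log n:

```python
import math

# Partial sums of sum_{n >= 2} 1/(n log n) grow roughly like log(log(n)).
s = 0.0
for n in range(2, 1_000_001):
    s += 1.0 / (n * math.log(n))

# After a million terms the partial sum is still under 4,
# even though the series diverges.
assert 2.5 < s < 4.0
```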
Schlömilch's generalization
A generalization of the condensation test was given by Oskar Schlömilch.[2] Let u(n) be a strictly increasing sequence of positive integers such that the ratio of successive differences is bounded: there is a positive real number N for which [math]\displaystyle{ {\Delta u(n) \over \Delta u(n{-}1)} \ =\ {u(n{+}1)-u(n) \over u(n)-u(n{-}1)} \ \lt \ N \ \text{ for all } n. }[/math]
Then, provided that [math]\displaystyle{ f(n) }[/math] meets the same preconditions as in Cauchy's convergence test, the convergence of the series [math]\displaystyle{ \sum_{n=1}^{\infty} f(n) }[/math] is equivalent to the convergence of [math]\displaystyle{ \sum_{n=0}^{\infty} {\Delta u(n)}\, f(u(n)) \ =\ \sum_{n=0}^{\infty} \Big(u(n{+}1)-u(n)\Big) f(u(n)). }[/math]
Taking [math]\displaystyle{ u(n) = 2^n }[/math] so that [math]\displaystyle{ \Delta u(n) = u(n{+}1)-u(n) = 2^n }[/math], the Cauchy condensation test emerges as a special case.
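Another admissible choice (ours, for illustration) is u(n) = 3^n, whose ratio of successive differences is constantly 3. It condenses the convergent series ∑ 1/n² into a convergent geometric series:

```python
f = lambda n: 1.0 / n**2   # non-increasing, non-negative
u = lambda n: 3**n         # strictly increasing positive integers

# Ratio of successive differences: (3^{n+1} - 3^n) / (3^n - 3^{n-1}) = 3, bounded.
# Schlomilch-condensed term: (u(n+1) - u(n)) * f(u(n)) = 2 * 3^n * 3^{-2n} = 2 * 3^{-n}.
total = sum((u(n + 1) - u(n)) * f(u(n)) for n in range(40))

# Geometric series: 2 * sum_{n >= 0} 3^{-n} = 3, so the condensed series
# converges, matching the convergence of sum 1/n^2.
assert abs(total - 3.0) < 1e-12
```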
References
- ↑ Rudin, Walter (1976). Principles of Mathematical Analysis. New York: McGraw-Hill. pp. 62–63. ISBN 0-07-054235-X. https://archive.org/details/1979RudinW.
- ↑ Liflyand, Elijah; Tikhonov, Sergey; Zeltse, Maria (2012). Extending tests for convergence of number series. p. 7/28. Via Brandeis University.
- Bonar, Khoury (2006). Real Infinite Series. Mathematical Association of America. ISBN 0-88385-745-6.
External links
Original source: https://en.wikipedia.org/wiki/Cauchy_condensation_test.