Iterated limit
In multivariable calculus, an iterated limit is a limit of a sequence or a limit of a function in the form
- [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} = \lim_{m \to \infty} \left( \lim_{n \to \infty} a_{n,m} \right) }[/math],
- [math]\displaystyle{ \lim_{y \to b} \lim_{x \to a} f(x, y) = \lim_{y \to b} \left( \lim_{x \to a} f(x, y) \right) }[/math],
or other similar forms.
An iterated limit is only defined for an expression whose value depends on at least two variables. To evaluate such a limit, one takes the limiting process as one of the two variables approaches some number, getting an expression whose value depends only on the other variable, and then one takes the limit as the other variable approaches some number.
Types of iterated limits
This section introduces definitions of iterated limits in two variables. These generalize readily to more than two variables.
Iterated limit of sequence
For each [math]\displaystyle{ n, m \in \mathbf{N} }[/math], let [math]\displaystyle{ a_{n,m} \in \mathbf{R} }[/math] be a real double sequence. Then there are two forms of iterated limits, namely
- [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} \qquad \text{and} \qquad \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} }[/math].
For example, let
- [math]\displaystyle{ a_{n,m} = \frac{n}{n+m} }[/math].
Then
- [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} = \lim_{m \to \infty} 1 = 1 }[/math], and
- [math]\displaystyle{ \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} = \lim_{n \to \infty} 0 = 0 }[/math].
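These values follow from the inner limits: for each fixed m and each fixed n respectively,
- [math]\displaystyle{ \lim_{n \to \infty} \frac{n}{n+m} = 1 \qquad \text{and} \qquad \lim_{m \to \infty} \frac{n}{n+m} = 0 }[/math].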
Iterated limit of function
Let [math]\displaystyle{ f: X\times Y \to \mathbf{R} }[/math]. Then there are also two forms of iterated limits, namely
- [math]\displaystyle{ \lim_{y \to b} \lim_{x \to a} f(x, y) \qquad \text{and} \qquad \lim_{x \to a} \lim_{y \to b} f(x, y) }[/math].
For example, let [math]\displaystyle{ f : \mathbf{R}^2\setminus\{(0,0)\} \to \mathbf{R} }[/math] such that
- [math]\displaystyle{ f(x,y) = \frac{x^2}{x^2+y^2} }[/math].
Then
- [math]\displaystyle{ \lim_{y \to 0} \lim_{x \to 0} \frac{x^2}{x^2+y^2} = \lim_{y \to 0} 0 = 0 }[/math], and
- [math]\displaystyle{ \lim_{x \to 0} \lim_{y\to0} \frac{x^2}{x^2+y^2} = \lim_{x \to 0} 1 = 1 }[/math].[1]
The limit(s) for x and/or y can also be taken at infinity, i.e.,
- [math]\displaystyle{ \lim_{y \to \infty} \lim_{x \to \infty} f(x, y) \qquad \text{and} \qquad \lim_{x \to \infty} \lim_{y \to \infty} f(x, y) }[/math].
Iterated limit of sequence of functions
For each [math]\displaystyle{ n \in \mathbf{N} }[/math], let [math]\displaystyle{ f_n : X \to \mathbf{R} }[/math] be a sequence of functions. Then there are two forms of iterated limits, namely
- [math]\displaystyle{ \lim_{n \to \infty} \lim_{x \to a} f_n(x) \qquad \text{and} \qquad \lim_{x \to a} \lim_{n \to \infty} f_n(x) }[/math].
For example, let [math]\displaystyle{ f_n : [0, 1] \to \mathbf{R} }[/math] such that
- [math]\displaystyle{ f_n(x) = x^n }[/math].
Then
- [math]\displaystyle{ \lim_{n \to \infty} \lim_{x \to 1} f_n(x) = \lim_{n \to \infty} 1^n = 1 }[/math], and
- [math]\displaystyle{ \lim_{x \to 1} \lim_{n \to \infty} f_n(x) = \lim_{x \to 1} 0 = 0 }[/math].[2]
The limit in x can also be taken at infinity, i.e.,
- [math]\displaystyle{ \lim_{n \to \infty} \lim_{x \to \infty} f_n(x) \qquad \text{and} \qquad \lim_{x \to \infty} \lim_{n \to \infty} f_n(x) }[/math].
Note that the limit in n is taken discretely, while the limit in x is taken continuously.
Comparison with other limits in multiple variables
This section introduces various definitions of limits in two variables. These generalize readily to more than two variables.
Limit of sequence
For a double sequence [math]\displaystyle{ a_{n,m} \in \mathbf{R} }[/math], there is another definition of limit, commonly referred to as the double limit, denoted by
- [math]\displaystyle{ L = \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math],
which means that for all [math]\displaystyle{ \epsilon \gt 0 }[/math], there exists [math]\displaystyle{ N=N(\epsilon) \in \mathbf{N} }[/math] such that [math]\displaystyle{ n,m \gt N }[/math] implies [math]\displaystyle{ \left| a_{n,m} - L \right| \lt \epsilon }[/math].[3]
The following theorem states the relationship between double limit and iterated limits.
- Theorem 1. If [math]\displaystyle{ \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math] exists and equals L, [math]\displaystyle{ \lim_{n \to \infty}a_{n,m} }[/math] exists for each large m, and [math]\displaystyle{ \lim_{m \to \infty}a_{n,m} }[/math] exists for each large n, then [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} }[/math] and [math]\displaystyle{ \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} }[/math] also exist, and they equal L, i.e.,
- [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} = \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} = \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math].
Proof. By the existence of [math]\displaystyle{ \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math], for any [math]\displaystyle{ \epsilon \gt 0 }[/math] there exists [math]\displaystyle{ N_1=N_1(\epsilon) \in \mathbf{N} }[/math] such that [math]\displaystyle{ n,m \gt N_1 }[/math] implies [math]\displaystyle{ \left| a_{n,m} - L \right|\lt \frac{\epsilon}{2} }[/math].
By hypothesis, there exists [math]\displaystyle{ N_0 \in \mathbf{N} }[/math] such that [math]\displaystyle{ A_{n} = \lim_{m \to \infty}a_{n,m} }[/math] exists for each [math]\displaystyle{ n \gt N_0 }[/math]; for each such n, there exists [math]\displaystyle{ N_2=N_2(\epsilon, n) \in \mathbf{N} }[/math] such that [math]\displaystyle{ m \gt N_2 }[/math] implies [math]\displaystyle{ \left| a_{n,m} - A_{n} \right|\lt \frac{\epsilon}{2} }[/math].
Both of the above statements hold for [math]\displaystyle{ n \gt \max(N_0,N_1) }[/math] and [math]\displaystyle{ m \gt \max(N_1,N_2) }[/math]. Combining them, for any [math]\displaystyle{ \epsilon \gt 0 }[/math] take [math]\displaystyle{ N=N(\epsilon) = \max(N_0,N_1) \in \mathbf{N} }[/math]; then for every [math]\displaystyle{ n \gt N }[/math], choosing any [math]\displaystyle{ m \gt \max(N_1,N_2(\epsilon,n)) }[/math] gives
[math]\displaystyle{ \left | A_{n} - L \right| \le \left | A_{n} - a_{n,m} \right| + \left | a_{n,m} - L \right| \lt \epsilon }[/math],
which proves that [math]\displaystyle{ \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} = \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math]. Applying the same argument to [math]\displaystyle{ B_m = \lim_{n \to \infty}a_{n,m} }[/math] gives [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} = \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} = \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math].
For example, let
- [math]\displaystyle{ a_{n,m} = \frac{1}{n} + \frac{1}{m} }[/math].
Since [math]\displaystyle{ \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} = 0 }[/math], [math]\displaystyle{ \lim_{n \to \infty} a_{n,m} = \frac{1}{m} }[/math], and [math]\displaystyle{ \lim_{m \to \infty} a_{n,m} = \frac{1}{n} }[/math], we have
- [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} = \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} = 0 }[/math].
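The double limit here can also be checked directly from the definition: given [math]\displaystyle{ \epsilon \gt 0 }[/math], one may take, for instance, [math]\displaystyle{ N = \lceil 2/\epsilon \rceil }[/math], so that [math]\displaystyle{ n, m \gt N }[/math] implies
- [math]\displaystyle{ \left| a_{n,m} - 0 \right| = \frac{1}{n} + \frac{1}{m} \lt \frac{2}{N} \le \epsilon }[/math].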
This theorem requires the single limits [math]\displaystyle{ \lim_{n \to \infty} a_{n,m} }[/math] and [math]\displaystyle{ \lim_{m \to \infty} a_{n,m} }[/math] to converge. This condition cannot be dropped. For example, consider
- [math]\displaystyle{ a_{n,m} = (-1)^m\left( \frac{1}{n} + \frac{1}{m} \right) }[/math].
Then we may see that
- [math]\displaystyle{ \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} = \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} = 0 }[/math],
- but [math]\displaystyle{ \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} }[/math] does not exist.
This is because [math]\displaystyle{ \lim_{m \to \infty} a_{n,m} }[/math] does not exist in the first place.
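Concretely, for each fixed n the sequence in m has two distinct subsequential limits,
- [math]\displaystyle{ a_{n,2m} = \frac{1}{n} + \frac{1}{2m} \to \frac{1}{n} \qquad \text{and} \qquad a_{n,2m+1} = -\left(\frac{1}{n} + \frac{1}{2m+1}\right) \to -\frac{1}{n} }[/math],
so [math]\displaystyle{ \lim_{m \to \infty} a_{n,m} }[/math] cannot exist.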
Limit of function
For a two-variable function [math]\displaystyle{ f : X \times Y \to \mathbf{R} }[/math], there are two other types of limits. One is the ordinary limit, denoted by
- [math]\displaystyle{ L = \lim_{(x,y) \to (a, b)} f(x, y) }[/math],
which means that for all [math]\displaystyle{ \epsilon \gt 0 }[/math], there exists [math]\displaystyle{ \delta=\delta(\epsilon) \gt 0 }[/math] such that [math]\displaystyle{ 0 \lt \sqrt{(x-a)^2 + (y-b)^2} \lt \delta }[/math] implies [math]\displaystyle{ \left| f(x,y) - L \right| \lt \epsilon }[/math].[6]
For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b). In this definition, the point (a, b) is excluded from the paths. Therefore, the value of f at the point (a, b), even if it is defined, does not affect the limit.
The other type is the double limit, denoted by
- [math]\displaystyle{ L = \lim_{\begin{smallmatrix} x \to a \\ y \to b \end{smallmatrix}} f(x,y) }[/math],
which means that for all [math]\displaystyle{ \epsilon \gt 0 }[/math], there exists [math]\displaystyle{ \delta=\delta(\epsilon) \gt 0 }[/math] such that [math]\displaystyle{ 0 \lt \left|x - a \right| \lt \delta }[/math] and [math]\displaystyle{ 0 \lt \left|y - b \right| \lt \delta }[/math] imply [math]\displaystyle{ \left| f(x,y) - L \right| \lt \epsilon }[/math].[7]
For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b), except along the lines x=a and y=b. In other words, the value of f along the lines x=a and y=b does not affect the limit. This differs from the ordinary limit, where only the point (a, b) is excluded. In this sense, the ordinary limit is a stronger notion than the double limit:
- Theorem 2. If [math]\displaystyle{ \lim_{(x,y) \to (a,b)} f(x,y) }[/math] exists and equals L, then [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to a \\ y \to b \end{smallmatrix}} f(x, y) }[/math] exists and equals L, i.e.,
- [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to a \\ y \to b \end{smallmatrix}} f(x, y) = \lim_{(x,y) \to (a,b)} f(x,y) }[/math].
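This follows from the observation that [math]\displaystyle{ 0 \lt \left|x-a\right| \lt \delta }[/math] and [math]\displaystyle{ 0 \lt \left|y-b\right| \lt \delta }[/math] imply [math]\displaystyle{ 0 \lt \sqrt{(x-a)^2+(y-b)^2} \lt \sqrt{2}\,\delta }[/math], so the [math]\displaystyle{ \delta }[/math] supplied by the ordinary limit, divided by [math]\displaystyle{ \sqrt{2} }[/math], works in the definition of the double limit.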
Neither of these limits involves first taking one limit and then the other. This contrasts with iterated limits, where the limiting process is taken in the x-direction first and then in the y-direction (or in the reverse order).
The following theorem states the relationship between double limit and iterated limits:
- Theorem 3. If [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to a \\ y \to b \end{smallmatrix}} f(x, y) }[/math] exists and equals L, [math]\displaystyle{ \lim_{x \to a} f(x,y) }[/math] exists for each y near b, and [math]\displaystyle{ \lim_{y \to b} f(x,y) }[/math] exists for each x near a, then [math]\displaystyle{ \lim_{x \to a} \lim_{y \to b} f(x, y) }[/math] and [math]\displaystyle{ \lim_{y \to b} \lim_{x \to a} f(x, y) }[/math] also exist, and they equal L, i.e.,
- [math]\displaystyle{ \lim_{x \to a} \lim_{y \to b} f(x, y) = \lim_{y \to b} \lim_{x \to a} f(x, y) = \lim_{\begin{smallmatrix} x \to a \\ y \to b \end{smallmatrix}} f(x, y) }[/math].
For example, let
- [math]\displaystyle{ f(x,y) = \begin{cases} 1 \quad \text{for} \quad xy \ne 0 \\ 0 \quad \text{for} \quad xy = 0 \end{cases} }[/math].
Since [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to 0 \\ y \to 0 \end{smallmatrix}} f(x, y) = 1 }[/math], [math]\displaystyle{ \lim_{x \to 0} f(x, y) = \begin{cases} 1 \quad \text{for} \quad y \ne 0 \\ 0 \quad \text{for} \quad y = 0 \end{cases} }[/math] and [math]\displaystyle{ \lim_{y \to 0} f(x, y) = \begin{cases} 1 \quad \text{for} \quad x \ne 0 \\ 0 \quad \text{for} \quad x = 0 \end{cases} }[/math], we have
- [math]\displaystyle{ \lim_{x \to 0} \lim_{y \to 0} f(x, y) = \lim_{y \to 0} \lim_{x \to 0} f(x, y) = 1 }[/math].
(Note that in this example, [math]\displaystyle{ \lim_{(x,y) \to (0,0)} f(x,y) }[/math] does not exist.)
This theorem requires the single limits [math]\displaystyle{ \lim_{x \to a} f(x, y) }[/math] and [math]\displaystyle{ \lim_{y \to b} f(x, y) }[/math] to exist. This condition cannot be dropped. For example, consider
- [math]\displaystyle{ f(x, y) = x \sin \left( \frac{1}{y} \right) }[/math].
Then we may see that
- [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to 0 \\ y \to 0 \end{smallmatrix}} f(x, y) = \lim_{y \to 0} \lim_{x \to 0} f(x,y) = 0 }[/math],
- but [math]\displaystyle{ \lim_{x \to 0} \lim_{y \to 0} f(x,y) }[/math] does not exist.
This is because [math]\displaystyle{ \lim_{y \to 0} f(x,y) }[/math] does not exist for x near 0 in the first place.
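The double limit itself is easily verified: whenever [math]\displaystyle{ 0 \lt \left|x\right| \lt \delta }[/math] and [math]\displaystyle{ 0 \lt \left|y\right| \lt \delta }[/math],
- [math]\displaystyle{ \left| x \sin\left(\frac{1}{y}\right) \right| \le \left|x\right| \lt \delta }[/math],
so taking [math]\displaystyle{ \delta = \epsilon }[/math] satisfies the definition.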
Combining Theorem 2 and 3, we have the following corollary:
- Corollary 3.1. If [math]\displaystyle{ \lim_{(x,y) \to (a,b)} f(x, y) }[/math] exists and equals L, [math]\displaystyle{ \lim_{x \to a} f(x,y) }[/math] exists for each y near b, and [math]\displaystyle{ \lim_{y \to b} f(x,y) }[/math] exists for each x near a, then [math]\displaystyle{ \lim_{x \to a} \lim_{y \to b} f(x, y) }[/math] and [math]\displaystyle{ \lim_{y \to b} \lim_{x \to a} f(x, y) }[/math] also exist, and they equal L, i.e.,
- [math]\displaystyle{ \lim_{x \to a} \lim_{y \to b} f(x, y) = \lim_{y \to b} \lim_{x \to a} f(x, y) = \lim_{(x,y) \to (a,b)} f(x, y) }[/math].
Limit at infinity of function
For a two-variable function [math]\displaystyle{ f : X \times Y \to \mathbf{R} }[/math], we may also define the double limit at infinity
- [math]\displaystyle{ L = \lim_{\begin{smallmatrix} x \to \infty \\ y \to \infty \end{smallmatrix}} f(x,y) }[/math],
which means that for all [math]\displaystyle{ \epsilon \gt 0 }[/math], there exists [math]\displaystyle{ M = M(\epsilon) \gt 0 }[/math] such that [math]\displaystyle{ x \gt M }[/math] and [math]\displaystyle{ y \gt M }[/math] imply [math]\displaystyle{ \left| f(x,y) - L \right| \lt \epsilon }[/math].
Similar definitions may be given for limits at negative infinity.
The following theorem states the relationship between double limit at infinity and iterated limits at infinity:
- Theorem 4. If [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to \infty \\ y \to \infty \end{smallmatrix}} f(x, y) }[/math] exists and equals L, [math]\displaystyle{ \lim_{x \to \infty} f(x,y) }[/math] exists for each large y, and [math]\displaystyle{ \lim_{y \to \infty} f(x,y) }[/math] exists for each large x, then [math]\displaystyle{ \lim_{x \to \infty} \lim_{y \to \infty} f(x, y) }[/math] and [math]\displaystyle{ \lim_{y \to \infty} \lim_{x \to \infty} f(x, y) }[/math] also exist, and they equal L, i.e.,
- [math]\displaystyle{ \lim_{x \to \infty} \lim_{y \to \infty} f(x, y) = \lim_{y \to \infty} \lim_{x \to \infty} f(x, y) = \lim_{\begin{smallmatrix} x \to \infty \\ y \to \infty \end{smallmatrix}} f(x, y) }[/math].
For example, let
- [math]\displaystyle{ f(x,y) = \frac{x\sin y}{xy + y} }[/math].
Since [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to \infty \\ y \to \infty \end{smallmatrix}} f(x,y) = 0 }[/math], [math]\displaystyle{ \lim_{x \to \infty}f(x, y) = \frac{\sin y}{y} }[/math] and [math]\displaystyle{ \lim_{y \to \infty} f(x, y) = 0 }[/math], we have
- [math]\displaystyle{ \lim_{y \to \infty} \lim_{x \to \infty} f(x,y) = \lim_{x \to \infty} \lim_{y \to \infty} f(x,y) = 0 }[/math].
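The single limits here follow from the factorization
- [math]\displaystyle{ f(x,y) = \frac{x}{x+1} \cdot \frac{\sin y}{y} }[/math],
and the double limit follows from the bound [math]\displaystyle{ \left|f(x,y)\right| \le \frac{1}{y} }[/math] for [math]\displaystyle{ x, y \gt 0 }[/math].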
Again, this theorem requires the single limits [math]\displaystyle{ \lim_{x \to \infty} f(x, y) }[/math] and [math]\displaystyle{ \lim_{y \to \infty} f(x, y) }[/math] to exist. This condition cannot be dropped. For example, consider
- [math]\displaystyle{ f(x, y) =\frac{\cos x}{y} }[/math].
Then we may see that
- [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to \infty \\ y \to \infty \end{smallmatrix}} f(x, y) = \lim_{x \to \infty} \lim_{y \to \infty} f(x,y) = 0 }[/math],
- but [math]\displaystyle{ \lim_{y \to \infty} \lim_{x \to \infty} f(x,y) }[/math] does not exist.
This is because [math]\displaystyle{ \lim_{x \to \infty} f(x,y) }[/math] does not exist for fixed y in the first place.
Invalid converses of the theorems
The converses of Theorems 1, 3 and 4 do not hold, i.e., the existence of iterated limits, even if they are equal, does not imply the existence of the double limit. A counter-example is
- [math]\displaystyle{ f(x,y) = \frac{xy}{x^2+y^2} }[/math]
near the point (0, 0). On one hand,
- [math]\displaystyle{ \lim_{x \to 0} \lim_{y \to 0} f(x,y) = \lim_{y \to 0} \lim_{x \to 0} f(x,y) = 0 }[/math].
On the other hand, the double limit [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to 0 \\ y \to 0 \end{smallmatrix}} f(x, y) }[/math] does not exist. This can be seen by taking the limit along the path (x, y) = (t, t) → (0,0), which gives
- [math]\displaystyle{ \lim_{t \to 0} f(t,t) = \lim_{t \to 0} \frac{t^2}{t^2+t^2} = \frac{1}{2} }[/math],
and along the path (x, y) = (t, t2) → (0,0), which gives
- [math]\displaystyle{ \lim_{t \to 0} f(t,t^2) = \lim_{t \to 0} \frac{t^3}{t^2+t^4} = 0 }[/math].
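Equivalently, writing [math]\displaystyle{ x = r\cos\theta }[/math] and [math]\displaystyle{ y = r\sin\theta }[/math],
- [math]\displaystyle{ f(x,y) = \cos\theta \sin\theta = \frac{1}{2}\sin 2\theta }[/math],
which is independent of r, so the value of f near the origin depends only on the direction of approach, and no double limit can exist.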
Moore-Osgood theorem for interchanging limits
The examples above show that interchanging limits may or may not give the same result. A sufficient condition for interchanging limits is given by the Moore-Osgood theorem.[8] The key to interchangeability is uniform convergence.
Interchanging limits of sequences
The following theorem allows us to interchange two limits of sequences.
- Theorem 5. If [math]\displaystyle{ \lim_{n \to \infty} a_{n,m} = b_m }[/math] uniformly (in m), and [math]\displaystyle{ \lim_{m \to \infty} a_{n,m} = c_n }[/math] for each large n, then both [math]\displaystyle{ \lim_{m \to \infty} b_m }[/math] and [math]\displaystyle{ \lim_{n \to \infty} c_n }[/math] exist and are equal to the double limit, i.e.,
- [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} a_{n,m} = \lim_{n \to \infty} \lim_{m \to \infty} a_{n,m} = \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math].[3]
- Proof. By the uniform convergence, for any [math]\displaystyle{ \epsilon \gt 0 }[/math] there exists [math]\displaystyle{ N_1(\epsilon)\in\mathbf{N} }[/math] such that for all [math]\displaystyle{ m \in \mathbf{N} }[/math], [math]\displaystyle{ n, k \gt N_1 }[/math] implies [math]\displaystyle{ \left| a_{n,m} - a_{k,m} \right| \lt \frac{\epsilon}{3} }[/math].
- As [math]\displaystyle{ m \to \infty }[/math], we have [math]\displaystyle{ \left|c_{n} - c_{k} \right| \lt \frac{\epsilon}{3} }[/math], which means that [math]\displaystyle{ c_n }[/math] is a Cauchy sequence which converges to a limit [math]\displaystyle{ L }[/math]. In addition, as [math]\displaystyle{ k \to \infty }[/math], we have [math]\displaystyle{ \left|c_n - L\right| \lt \frac{\epsilon}{3} }[/math].
- On the other hand, if we take [math]\displaystyle{ k \to \infty }[/math] first, we have [math]\displaystyle{ \left| a_{n,m} - b_m \right| \lt \frac{\epsilon}{3} }[/math].
- By the pointwise convergence, for any [math]\displaystyle{ \epsilon \gt 0 }[/math] and [math]\displaystyle{ n \gt N_1 }[/math], there exists [math]\displaystyle{ N_2(\epsilon, n) \in \mathbf{N} }[/math] such that [math]\displaystyle{ m \gt N_2 }[/math] implies [math]\displaystyle{ \left| a_{n,m} - c_n \right| \lt \frac{\epsilon}{3} }[/math].
- Then for that fixed [math]\displaystyle{ n }[/math], [math]\displaystyle{ m \gt N_2 }[/math] implies [math]\displaystyle{ \left| b_m - L \right| \le \left| b_m - a_{n,m} \right| + \left| a_{n,m} - c_n \right| + \left| c_n - L \right| \le \epsilon }[/math].
- This proves that [math]\displaystyle{ \lim_{m \to \infty}b_m = L = \lim_{n \to \infty}c_n }[/math].
- Also, by taking [math]\displaystyle{ N = \max\{N_1, N_2\} }[/math], we see that this limit also equals [math]\displaystyle{ \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} a_{n,m} }[/math].
A corollary concerns the interchange of two infinite sums.
- Corollary 5.1. If [math]\displaystyle{ \sum^\infty_{n=1} a_{n,m} }[/math] converges uniformly (in m), and [math]\displaystyle{ \sum^\infty_{m =1} a_{n,m} }[/math] converges for each large n, then [math]\displaystyle{ \sum^\infty_{m=1} \sum^\infty_{n=1} a_{n,m} = \sum^\infty_{n=1} \sum^\infty_{m=1} a_{n,m} }[/math].
- Proof. Direct application of Theorem 5 on [math]\displaystyle{ S_{k,\ell} = \sum_{m=1}^k \sum_{n=1}^\ell a_{n,m} }[/math].
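For instance, with [math]\displaystyle{ a_{n,m} = 2^{-n} 3^{-m} }[/math] for [math]\displaystyle{ n, m \ge 1 }[/math], the series [math]\displaystyle{ \sum_{n=1}^\infty a_{n,m} }[/math] converges uniformly in m by the Weierstrass M-test (since [math]\displaystyle{ 0 \le a_{n,m} \le 2^{-n} }[/math]), and both iterated sums equal
- [math]\displaystyle{ \sum_{m=1}^\infty \sum_{n=1}^\infty 2^{-n} 3^{-m} = \left(\sum_{n=1}^\infty 2^{-n}\right)\left(\sum_{m=1}^\infty 3^{-m}\right) = 1 \cdot \frac{1}{2} = \frac{1}{2} }[/math].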
Interchanging limits of functions
Similar results hold for multivariable functions.
- Theorem 6. If [math]\displaystyle{ \lim_{x \to a} f(x,y) = g(y) }[/math] uniformly (in y) on [math]\displaystyle{ Y \setminus\{b\} }[/math], and [math]\displaystyle{ \lim_{y \to b} f(x,y) = h(x) }[/math] for each x near a, then both [math]\displaystyle{ \lim_{y \to b} g(y) }[/math] and [math]\displaystyle{ \lim_{x \to a} h(x) }[/math] exist and are equal to the double limit, i.e.,
- [math]\displaystyle{ \lim_{y \to b} \lim_{x \to a} f(x,y) = \lim_{x \to a} \lim_{y \to b} f(x,y) = \lim_{\begin{smallmatrix} x \to a \\ y \to b \end{smallmatrix}} f(x,y) }[/math].[9]
- The a and b here can possibly be infinity.
- Proof. By the uniform convergence, for any [math]\displaystyle{ \epsilon \gt 0 }[/math] there exists [math]\displaystyle{ \delta_1(\epsilon) \gt 0 }[/math] such that for all [math]\displaystyle{ y \in Y \setminus \{b\} }[/math], [math]\displaystyle{ 0 \lt \left| x - a \right| \lt \delta_1 }[/math] and [math]\displaystyle{ 0 \lt \left| w - a \right| \lt \delta_1 }[/math] imply [math]\displaystyle{ \left| f(x,y) - f(w,y) \right| \lt \frac{\epsilon}{3} }[/math].
- As [math]\displaystyle{ y \to b }[/math], we have [math]\displaystyle{ \left|h(x) - h(w) \right| \lt \frac{\epsilon}{3} }[/math]. By Cauchy criterion, [math]\displaystyle{ \lim_{x\to a}h(x) }[/math] exists and equals a number [math]\displaystyle{ L }[/math]. In addition, as [math]\displaystyle{ w \to a }[/math], we have [math]\displaystyle{ \left|h(x) - L\right| \lt \frac{\epsilon}{3} }[/math].
- On the other hand, if we take [math]\displaystyle{ w \to a }[/math] first, we have [math]\displaystyle{ \left| f(x,y) - g(y) \right| \lt \frac{\epsilon}{3} }[/math].
- By the existence of the pointwise limit, for any [math]\displaystyle{ \epsilon \gt 0 }[/math] and [math]\displaystyle{ x }[/math] near [math]\displaystyle{ a }[/math], there exists [math]\displaystyle{ \delta_2(\epsilon, x) \gt 0 }[/math] such that [math]\displaystyle{ 0 \lt \left| y - b \right| \lt \delta_2 }[/math] implies [math]\displaystyle{ \left| f(x,y) - h(x) \right| \lt \frac{\epsilon}{3} }[/math].
- Then for that fixed [math]\displaystyle{ x }[/math], [math]\displaystyle{ 0 \lt \left| y - b \right| \lt \delta_2 }[/math] implies [math]\displaystyle{ \left| g(y) - L \right| \le \left| g(y) - f(x,y) \right| + \left| f(x,y) - h(x) \right| + \left| h(x) - L \right| \le \epsilon }[/math].
- This proves that [math]\displaystyle{ \lim_{y \to b}g(y) = L = \lim_{x \to a}h(x) }[/math].
- Also, by taking [math]\displaystyle{ \delta = \min\{\delta_1, \delta_2\} }[/math], we see that this limit also equals [math]\displaystyle{ \lim_{\begin{smallmatrix} x \to a \\ y \to b \end{smallmatrix}} f(x,y) }[/math].
Note that this theorem does not imply the existence of [math]\displaystyle{ \lim_{(x,y)\to(a,b)} f(x,y) }[/math]. A counter-example is [math]\displaystyle{ f(x,y) = \begin{cases} 1 \quad \text{for} \quad xy \ne 0 \\ 0 \quad \text{for} \quad xy = 0 \end{cases} }[/math] near (0,0).[10]
Interchanging limits of sequences of functions
An important variation of the Moore-Osgood theorem applies specifically to sequences of functions.
- Theorem 7. If [math]\displaystyle{ \lim_{n \to \infty} f_n(x) = f(x) }[/math] uniformly (in x) on [math]\displaystyle{ X\setminus\{a\} }[/math], and [math]\displaystyle{ \lim_{x \to a} f_n(x) = L_n }[/math] for each large n, then both [math]\displaystyle{ \lim_{x \to a} f(x) }[/math] and [math]\displaystyle{ \lim_{n \to \infty} L_n }[/math] exist and are equal, i.e.,
- [math]\displaystyle{ \lim_{n \to \infty} \lim_{x \to a} f_n(x) = \lim_{x \to a} \lim_{n \to \infty} f_n(x) }[/math].[11]
- The a here can possibly be infinity.
- Proof. By the uniform convergence, for any [math]\displaystyle{ \epsilon \gt 0 }[/math] there exists [math]\displaystyle{ N(\epsilon)\in\mathbf{N} }[/math] such that for all [math]\displaystyle{ x \in X\setminus\{a\} }[/math], [math]\displaystyle{ n, m \gt N }[/math] implies [math]\displaystyle{ \left| f_n(x) - f_m(x) \right| \lt \frac{\epsilon}{3} }[/math].
- As [math]\displaystyle{ x \to a }[/math], we have [math]\displaystyle{ \left|L_n - L_m \right| \lt \frac{\epsilon}{3} }[/math], which means that [math]\displaystyle{ L_n }[/math] is a Cauchy sequence which converges to a limit [math]\displaystyle{ L }[/math]. In addition, as [math]\displaystyle{ m \to \infty }[/math], we have [math]\displaystyle{ \left|L_n - L\right| \lt \frac{\epsilon}{3} }[/math].
- On the other hand, if we take [math]\displaystyle{ m \to \infty }[/math] first, we have [math]\displaystyle{ \left| f_n(x) - f(x) \right| \lt \frac{\epsilon}{3} }[/math].
- By the existence of the pointwise limit, for any [math]\displaystyle{ \epsilon \gt 0 }[/math] and [math]\displaystyle{ n \gt N }[/math], there exists [math]\displaystyle{ \delta(\epsilon, n) \gt 0 }[/math] such that [math]\displaystyle{ 0 \lt \left| x - a \right| \lt \delta }[/math] implies [math]\displaystyle{ \left| f_n(x) - L_n \right| \lt \frac{\epsilon}{3} }[/math].
- Then for that fixed [math]\displaystyle{ n }[/math], [math]\displaystyle{ 0 \lt \left| x - a \right| \lt \delta }[/math] implies [math]\displaystyle{ \left| f(x) - L \right| \le \left| f(x) - f_n(x) \right| + \left| f_n(x) - L_n \right| + \left| L_n - L \right| \le \epsilon }[/math].
- This proves that [math]\displaystyle{ \lim_{x \to a}f(x) = L = \lim_{n \to \infty}L_n }[/math].
A corollary is the continuity theorem for uniform convergence as follows:
- Corollary 7.1. If [math]\displaystyle{ \lim_{n \to \infty} f_n(x) = f(x) }[/math] uniformly (in x) on [math]\displaystyle{ X }[/math], and each [math]\displaystyle{ f_n }[/math] is continuous at [math]\displaystyle{ x=a \in X }[/math], then [math]\displaystyle{ f }[/math] is also continuous at [math]\displaystyle{ x=a }[/math].
- In other words, the uniform limit of continuous functions is continuous.
- Proof. By Theorem 7, [math]\displaystyle{ \lim_{x\to a}f(x) = \lim_{x\to a} \lim_{n \to \infty} f_n(x) = \lim_{n \to \infty} \lim_{x\to a} f_n(x) = \lim_{n \to \infty} f_n(a) = f(a) }[/math].
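The earlier example [math]\displaystyle{ f_n(x) = x^n }[/math] on [math]\displaystyle{ [0, 1] }[/math] shows that uniformity cannot be dropped: each [math]\displaystyle{ f_n }[/math] is continuous, but the pointwise limit equals 0 on [math]\displaystyle{ [0, 1) }[/math] and 1 at [math]\displaystyle{ x = 1 }[/math], hence is discontinuous there; accordingly, the convergence is not uniform on [math]\displaystyle{ [0, 1] }[/math].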
Another corollary concerns the interchange of a limit and an infinite sum.
- Corollary 7.2. If [math]\displaystyle{ \sum^\infty_{n=0} f_n(x) }[/math] converges uniformly (in x) on [math]\displaystyle{ X \setminus \{a\} }[/math], and [math]\displaystyle{ \lim_{x \to a} f_n(x) }[/math] exists for each large n, then [math]\displaystyle{ \lim_{x \to a} \sum^\infty_{n=0} f_n(x) = \sum^\infty_{n=0} \lim_{x \to a} f_n(x) }[/math].
- Proof. Direct application of Theorem 7 on [math]\displaystyle{ S_k(x) = \sum_{n=0}^k f_n(x) }[/math] near [math]\displaystyle{ x = a }[/math].
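As an illustration, take [math]\displaystyle{ f_n(x) = \frac{x^n}{n!} }[/math] on [math]\displaystyle{ [-1, 1] }[/math]: the series converges uniformly by the Weierstrass M-test (with [math]\displaystyle{ M_n = \frac{1}{n!} }[/math]), so
- [math]\displaystyle{ \lim_{x \to 0} e^x = \lim_{x \to 0} \sum_{n=0}^\infty \frac{x^n}{n!} = \sum_{n=0}^\infty \lim_{x \to 0} \frac{x^n}{n!} = 1 }[/math].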
Applications
Sum of the entries of an infinite matrix
Consider the infinite matrix
- [math]\displaystyle{ \begin{bmatrix} 1 & -1 & 0 & 0 & \cdots \\ 0 & 1 & -1 & 0 & \cdots \\ 0 & 0 & 1 & -1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} }[/math].
Suppose we would like to find the sum of all entries. If we sum it column by column first, we find that the first column gives 1 while all the others give 0, so the sum of all columns is 1. However, if we sum it row by row first, we find that every row gives 0, so the sum of all rows is 0.
The explanation for this paradox is that the vertical sum to infinity and the horizontal sum to infinity are two limiting processes that cannot be interchanged. Let [math]\displaystyle{ S_{n,m} }[/math] be the sum of the entries in the first n rows and the first m columns. Then we have [math]\displaystyle{ \lim_{m \to \infty} \lim_{n \to \infty} S_{n,m} = 1 }[/math], but [math]\displaystyle{ \lim_{n \to \infty} \lim_{m \to \infty} S_{n,m} = 0 }[/math]. In this case, the double limit [math]\displaystyle{ \lim_{\begin{smallmatrix} n \to \infty \\ m \to \infty \end{smallmatrix}} S_{n,m} }[/math] does not exist, and thus this problem is not well-defined.
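A closed form for the partial sums makes this explicit: the entry in row i and column j is 1 if [math]\displaystyle{ j = i }[/math], -1 if [math]\displaystyle{ j = i + 1 }[/math], and 0 otherwise, so
- [math]\displaystyle{ S_{n,m} = \begin{cases} 1 \quad \text{for} \quad n \ge m \\ 0 \quad \text{for} \quad n \lt m \end{cases} }[/math],
from which both iterated limits can be read off; the double limit fails to exist because [math]\displaystyle{ S_{n,n} = 1 }[/math] while [math]\displaystyle{ S_{n,n+1} = 0 }[/math] for every n.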
Integration over unbounded interval
By the integration theorem for uniform convergence, if a sequence of integrable functions [math]\displaystyle{ f_n }[/math] converges uniformly on [math]\displaystyle{ X }[/math], then the limit in n and an integration over a bounded interval [math]\displaystyle{ [a, b] \subseteq X }[/math] can be interchanged:
- [math]\displaystyle{ \lim_{n\to \infty} \int_a^b f_n(x) \mathrm{d}x = \int_a^b \lim_{n\to \infty} f_n(x) \mathrm{d}x }[/math].
However, such a property may fail for an improper integral over an unbounded interval [math]\displaystyle{ [a, \infty) \subseteq X }[/math]. In this case, one may rely on the Moore-Osgood theorem.
Consider [math]\displaystyle{ L = \int_0^\infty \frac{x^2}{e^x - 1} \mathrm{d}x = \lim_{b\to \infty} \int_0^b\frac{x^2}{e^x - 1} \mathrm{d}x }[/math] as an example.
We first expand the integrand as [math]\displaystyle{ \frac{x^2}{e^x - 1} = \frac{x^2 e^{-x}}{1- e^{-x}} = \sum_{k=1}^\infty x^2 e^{-kx} }[/math] for [math]\displaystyle{ x \in (0, \infty) }[/math]. (At x = 0, the integrand is understood as its limiting value 0, which agrees with the series.)
One can prove by calculus that for [math]\displaystyle{ x \in [0, \infty) }[/math] and [math]\displaystyle{ k \ge 1 }[/math], we have [math]\displaystyle{ x^2 e^{-kx} \le \frac{4}{e^2 k^2} }[/math]. By the Weierstrass M-test, [math]\displaystyle{ \sum_{k=1}^\infty x^2 e^{-kx} }[/math] converges uniformly on [math]\displaystyle{ [0, \infty) }[/math].
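The bound can be obtained by elementary calculus: for fixed [math]\displaystyle{ k \ge 1 }[/math], the derivative of [math]\displaystyle{ x^2 e^{-kx} }[/math] is [math]\displaystyle{ (2x - kx^2)e^{-kx} }[/math], which vanishes on [math]\displaystyle{ (0, \infty) }[/math] only at [math]\displaystyle{ x = 2/k }[/math], so
- [math]\displaystyle{ \max_{x \ge 0} x^2 e^{-kx} = \left(\frac{2}{k}\right)^2 e^{-2} = \frac{4}{e^2 k^2} }[/math].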
Then by the integration theorem for uniform convergence, [math]\displaystyle{ L = \lim_{b \to \infty} \int_0^b \sum_{k=1}^\infty x^2 e^{-kx} \mathrm{d}x = \lim_{b \to \infty} \sum_{k=1}^\infty \int_0^b x^2 e^{-kx} \mathrm{d}x }[/math].
To further interchange the limit [math]\displaystyle{ \lim_{b \to \infty} }[/math] with the infinite summation [math]\displaystyle{ \sum_{k=1}^\infty }[/math], the Moore-Osgood theorem requires the infinite series to converge uniformly in b.
Note that [math]\displaystyle{ \int_0^b x^2 e^{-kx}\mathrm{d}x \le \int_0^\infty x^2 e^{-kx} \mathrm{d}x = \frac{2}{k^3} }[/math]. Again by the Weierstrass M-test, [math]\displaystyle{ \sum_{k=1}^\infty \int_0^b x^2 e^{-kx} \mathrm{d}x }[/math] converges uniformly in b on [math]\displaystyle{ [0, \infty) }[/math].
Then by the Moore-Osgood theorem, [math]\displaystyle{ L = \lim_{b \to \infty} \sum_{k=1}^\infty \int_0^b x^2 e^{-kx} \mathrm{d}x = \sum_{k=1}^\infty \lim_{b \to \infty} \int_0^b x^2 e^{-kx} \mathrm{d}x = \sum_{k=1}^\infty \frac{2}{k^3} = 2 \zeta(3) }[/math]. (Here [math]\displaystyle{ \zeta }[/math] is the Riemann zeta function.)
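A quick numerical sanity check of this value is possible; the following is a minimal sketch, assuming NumPy and SciPy are available (the function name integrand is illustrative and not from the source), comparing the improper integral with [math]\displaystyle{ 2\zeta(3) \approx 2.4041138 }[/math].

```python
# Minimal numerical check of the identity above (assumes NumPy and SciPy are installed).
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def integrand(x):
    # x^2 / (e^x - 1); np.expm1(x) computes e^x - 1 accurately for small x,
    # and the integrand is extended continuously by 0 at x = 0.
    return x**2 / np.expm1(x) if x > 0 else 0.0

value, abs_err = quad(integrand, 0, np.inf)  # numerical improper integral
print(value)        # ≈ 2.4041138
print(2 * zeta(3))  # ≈ 2.4041138
```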
Notes
- ↑ One should pay attention to the fact
- [math]\displaystyle{ \lim_{y \to 0} \frac{x^2}{x^2+y^2}=\begin{cases}1 & \text{for } x\neq 0 \\ 0 & \text{for } x=0 \end{cases} }[/math]
- ↑ One should pay attention to the fact
- [math]\displaystyle{ \lim_{n \to \infty} x^n=\begin{cases} 0 & \text{for } x \in [0, 1) \\ 1 & \text{for } x = 1 \end{cases} }[/math].
- ↑ 3.0 3.1 Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity". Mathematical Analysis, Volume I. p. 223. ISBN 9781617386473.
- ↑ Habil, Eissa (2005). "Double Sequences and Double Series" (in en). https://www.researchgate.net/publication/242705642_Double_Sequences_and_Double_Series.
- ↑ Apostol, Tom M. (2002). "Infinite Series and Infinite Products". Mathematical Analysis (2nd ed.). Narosa. pp. 199–200. ISBN 978-8185015668.
- ↑ Stewart, James (2020). "Chapter 14.2 Limits and Continuity". Multivariable Calculus (9th ed.). pp. 952–953. ISBN 9780357042922.
- ↑ Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity". Mathematical Analysis, Volume I. pp. 219–220. ISBN 9781617386473.
- ↑ Taylor, Angus E. (2012). General Theory of Functions and Integration. Dover Books on Mathematics Series. pp. 139–140. ISBN 9780486152141.
- ↑ Kadelburg, Zoran (2005). "Interchanging Two Limits" (in en). https://www.researchgate.net/publication/265227198_Interchanging_two_limits.
- ↑ Gelbaum, Bernard; Olmsted, John (2003). "Chapter 9. Functions of Two Variables.". Counterexamples in Analysis. pp. 118–119. ISBN 0486428753.
- ↑ Loring, Terry. "The Moore-Osgood Theorem on Exchanging Limits" (in en). https://math.unm.edu/~loring/links/analysis_f10/exchange.pdf.
Original source: https://en.wikipedia.org/wiki/Iterated limit.