# Limit of a function


Although the function (sin *x*)/*x* is not defined at zero, as *x* becomes closer and closer to zero, (sin *x*)/*x* becomes arbitrarily close to 1:

| [math]\displaystyle{ x }[/math] | [math]\displaystyle{ \frac{\sin x}{x} }[/math] |
|---|---|
| 1 | 0.841471... |
| 0.1 | 0.998334... |
| 0.01 | 0.999983... |

In mathematics, the **limit of a function** is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input.

Formal definitions, first devised in the early 19th century, are given below. Informally, a function *f* assigns an output *f*(*x*) to every input *x*. We say that the function has a limit *L* at an input *p* if *f*(*x*) gets closer and closer to *L* as *x* moves closer and closer to *p*. More specifically, when *f* is applied to any input *sufficiently* close to *p*, the output value is forced *arbitrarily* close to *L*. On the other hand, if some inputs very close to *p* are taken to outputs that stay a fixed distance apart, then we say the limit *does not exist*.

The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.

## History

Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bolzano who, in 1817, introduced the basics of the epsilon-delta technique to define continuous functions. However, his work was not known during his lifetime.^{[1]}

In his 1821 book *Cours d'analyse*, Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of [math]\displaystyle{ y=f(x) }[/math] by saying that an infinitesimal change in *x* necessarily produces an infinitesimal change in *y*; Grabiner (1983) argues that he used a rigorous epsilon-delta definition in proofs.^{[2]} In 1861, Weierstrass first introduced the epsilon-delta definition of limit in the form it is usually written today.^{[3]} He also introduced the notations **lim** and **lim**_{x→x0}.^{[4]}

The modern notation of placing the arrow below the limit symbol is due to Hardy, who introduced it in his book *A Course of Pure Mathematics* in 1908.^{[5]}

## Motivation

Imagine a person walking over a landscape represented by the graph of *y* = *f*(*x*). Their horizontal position is measured by the value of *x*, much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate *y*. They walk toward the horizontal position given by *x* = *p*. As they get closer and closer to it, they notice that their altitude approaches *L*. If asked about the altitude corresponding to *x* = *p*, they would then answer *L*.

What, then, does it mean to say, their altitude is approaching *L?* It means that their altitude gets nearer and nearer to *L*—except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of *L*. They report back that indeed, they can get within ten vertical meters of *L*, since they note that when they are within fifty horizontal meters of *p*, their altitude is *always* ten meters or less from *L*.

The accuracy goal is then changed: can they get within one vertical meter? Yes. If they are anywhere within seven horizontal meters of *p*, their altitude will always remain within one meter from the target *L*. In summary, to say that the traveler's altitude approaches *L* as their horizontal position approaches *p* is to say that for every target accuracy goal, however small it may be, there is some neighbourhood of *p* within which all altitudes fulfill that accuracy goal.

The initial informal statement can now be explicated:

- The limit of a function *f*(*x*) as *x* approaches *p* is a number *L* with the following property: given any target distance from *L*, there is a distance from *p* within which the values of *f*(*x*) remain within the target distance.

In fact, this explicit statement is quite close to the formal definition of the limit of a function, with values in a topological space.

More specifically, to say that

- [math]\displaystyle{ \lim_{x \to p}f(x) = L }[/math],

is to say that *ƒ*(*x*) can be made as close to *L* as desired, by making *x* close enough, but not equal, to *p*.

The following definitions, known as (*ε*, *δ*)-definitions, are the generally accepted definitions for the limit of a function in various contexts.

## Functions of a single variable

### (*ε*, *δ*)-definition of limit

Suppose [math]\displaystyle{ f: \R \rightarrow \R }[/math] is a function defined on the real line, and let *p* and *L* be real numbers. One says that **the limit of f, as x approaches p, is L**, written

- [math]\displaystyle{ \lim_{x \to p} f(x) = L }[/math],

or alternatively, say that **f(x) tends to L as x tends to p**, written:

- [math]\displaystyle{ f(x) \to L \;\; \text{as} \;\; x \to p }[/math],

if the following property holds:

For every real *ε* > 0, there exists a real *δ* > 0 such that for all real *x*, 0 < |*x* − *p*| < *δ* implies |*f*(*x*) − *L*| < *ε*.^{[6]}

Or, symbolically:

- [math]\displaystyle{ (\forall \varepsilon \gt 0 ) \, (\exists \delta \gt 0) \, (\forall x \in \R) \, (0 \lt |x - p| \lt \delta \implies |f(x) - L| \lt \varepsilon) }[/math].

For example, we may say

- [math]\displaystyle{ \lim_{x \to 2} 4x + 1 = 9 }[/math]

because for every real *ε* > 0, we can take *δ* = *ε*/4, so that for all real *x*, if 0 < |*x* − 2| < *δ*, then |(4*x* + 1) − 9| = 4|*x* − 2| < 4*δ* = *ε*.
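The choice *δ* = *ε*/4 can be probed numerically. The following is an informal sketch, not a proof; the function name `f` and the sampled points are illustrative choices, not part of the definition:

```python
# Numeric sketch (not a proof): probe the choice delta = eps/4 for
# f(x) = 4x + 1, p = 2, L = 9 at sample points with 0 < |x - p| < delta.
def f(x):
    return 4 * x + 1

p, L = 2.0, 9.0
for eps in (1.0, 0.1, 0.001):
    delta = eps / 4
    # sample points in the punctured interval 0 < |x - p| < delta
    samples = [p + t * delta for t in (-0.999, -0.5, -1e-6, 1e-6, 0.5, 0.999)]
    assert all(abs(f(x) - L) < eps for x in samples)
print("delta = eps/4 works on all sampled points")
```

Sampling finitely many points cannot establish the limit, but it shows how the *δ* from the proof controls the output error at every tested input.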

A more general definition applies for functions defined on subsets of the real line. Let (*a*, *b*) be an open interval in [math]\displaystyle{ \R }[/math], and let *p* be a point of (*a*, *b*). Let [math]\displaystyle{ f: S \to \R }[/math] be a real-valued function defined on a set *S* that contains all of (*a*, *b*), except possibly *p* itself. It is then said that the limit of *f* as *x* approaches *p* is *L* if:

- For every real *ε* > 0, there exists a real *δ* > 0 such that for all *x* ∈ (*a*, *b*), 0 < |*x* − *p*| < *δ* implies that |*f*(*x*) − *L*| < *ε*.

Or, symbolically:

- [math]\displaystyle{ (\forall \varepsilon \gt 0 ) \, (\exists \delta \gt 0) \, (\forall x \in (a, b)) \, (0 \lt |x - p| \lt \delta \implies |f(x) - L| \lt \varepsilon) }[/math].

For example, we may say

- [math]\displaystyle{ \lim_{x \to 1} \sqrt{x+3} = 2 }[/math]

because for every real *ε* > 0, we can take *δ* = *ε*, so that for all real *x* ≥ −3, if 0 < |*x* − 1| < *δ*, then |*f*(*x*) − 2| < *ε*. In this example, *S* = [−3, ∞) contains open intervals around the point 1 (for example, the interval (0, 2)).

Here, note that the value of the limit does not depend on *f* being defined at *p*, nor on the value *f*(*p*)—if it is defined. For example,

- [math]\displaystyle{ \lim_{x \to 1} \frac{2x^2 - x - 1}{x-1} = 3 }[/math]

because for every *ε* > 0, we can take *δ* = *ε*/2, so that for all real *x* ≠ 1, if 0 < |*x* − 1| < *δ*, then |*f*(*x*) − 3| < *ε*. Note that here *f*(1) is undefined.
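This example can also be sketched numerically; the code below is illustrative only (the function name `f` is an assumption), showing that the values approach 3 even though evaluation at *x* = 1 fails:

```python
# Numeric sketch: (2x^2 - x - 1)/(x - 1) is undefined at x = 1 (division
# by zero), yet its values approach 3, since the quotient equals 2x + 1
# for every x != 1.
def f(x):
    return (2 * x**2 - x - 1) / (x - 1)

for x in (0.9, 0.99, 1.01, 1.001):
    # matches the text's delta = eps/2: the error is exactly 2|x - 1|
    assert abs(f(x) - 3) <= 2 * abs(x - 1) + 1e-9
print("f(x) -> 3 even though f(1) is undefined")
```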

The letters *ε* and *δ* can be understood as "error" and "distance". In fact, Cauchy used *ε* as an abbreviation for "error" in some of his work,^{[2]} though in his definition of continuity, he used an infinitesimal [math]\displaystyle{ \alpha }[/math] rather than either *ε* or *δ* (see *Cours d'Analyse*). In these terms, the error (*ε*) in the measurement of the value at the limit can be made as small as desired, by reducing the distance (*δ*) to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that *δ* and *ε* represent distances helps suggest these generalizations.

### Existence and one-sided limits

Alternatively, *x* may approach *p* from above (right) or below (left), in which case the limits may be written as

- [math]\displaystyle{ \lim_{x \to p^+}f(x) = L }[/math]

or

- [math]\displaystyle{ \lim_{x \to p^-}f(x) = L }[/math]

respectively. If these limits exist at p and are equal there, then this can be referred to as *the* limit of *f*(*x*) at *p*.^{[7]} If the one-sided limits exist at *p*, but are unequal, then there is no limit at *p* (i.e., the limit at *p* does not exist). If either one-sided limit does not exist at *p*, then the limit at *p* also does not exist.

A formal definition is as follows. The **limit of f as x approaches p from above is L** if:

- For every *ε* > 0, there exists a *δ* > 0 such that whenever 0 < *x* − *p* < *δ*, we have |*f*(*x*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0 ) \, (\exists \delta \gt 0) \, (\forall x \in (a,b))\, (0 \lt x - p \lt \delta \implies |f(x) - L| \lt \varepsilon) }[/math].

The **limit of f as x approaches p from below is L** if:

- For every *ε* > 0, there exists a *δ* > 0 such that whenever 0 < *p* − *x* < *δ*, we have |*f*(*x*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists \delta \gt 0) \, (\forall x \in (a,b)) \, (0 \lt p - x \lt \delta \implies |f(x) - L| \lt \varepsilon) }[/math].

If the limit does not exist, then the oscillation of *f* at *p* is non-zero.

### More general subsets

Apart from open intervals, limits can be defined for functions on arbitrary subsets of **R**, as follows (Bartle and Sherbert): let [math]\displaystyle{ f : S \to \R }[/math] be a real-valued function defined on an arbitrary set [math]\displaystyle{ S \subseteq \R }[/math]. Let *p* be a limit point of *S*—that is, *p* is the limit of some sequence of elements of *S* distinct from *p*. Then we say **the limit of f, as x approaches p from values in S, is L**, written

- [math]\displaystyle{ \lim_{\begin{smallmatrix} x\to p \\ x\in S \end{smallmatrix}} f(x) = L }[/math]

if the following holds:

- For every *ε* > 0, there exists a *δ* > 0 such that for all *x* ∈ *S*, 0 < |*x* − *p*| < *δ* implies that |*f*(*x*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists \delta \gt 0) \,(\forall x \in S)\, (0 \lt |x - p| \lt \delta \implies |f(x) - L| \lt \varepsilon) }[/math].

The condition that *f* be defined on *S* amounts to requiring that *S* be a subset of the domain of *f*. This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking *S* to be an open interval of the form [math]\displaystyle{ (-\infty,a) }[/math]) and right-handed limits (e.g., by taking *S* to be an open interval of the form [math]\displaystyle{ (a,\infty) }[/math]). It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so that the square root function *f*(*x*) = √*x* can have limit 0 as *x* approaches 0 from above:

- [math]\displaystyle{ \lim_{\begin{smallmatrix} x\to 0 \\ x\in [0, \infty) \end{smallmatrix}} \sqrt{x} = 0 }[/math]

since for every *ε* > 0, we may take *δ* = *ε*^{2} such that for all *x* ≥ 0, if 0 < |*x* − 0| < *δ*, then |*f*(*x*) − 0| = √*x* < *ε*.

### Deleted versus non-deleted limits

The definition of limit given here does not depend on how (or whether) *f* is defined at *p*. (Bartle 1967) refers to this as a *deleted limit*, because it excludes the value of *f* at *p*. The corresponding **non-deleted limit** does depend on the value of *f* at *p*, if *p* is in the domain of *f*. Let [math]\displaystyle{ f : S \to \R }[/math] be a real-valued function. **The non-deleted limit of f, as x approaches p, is L** if

- For every *ε* > 0, there exists a *δ* > 0 such that for all *x* ∈ *S*, |*x* − *p*| < *δ* implies |*f*(*x*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists \delta \gt 0) \, (\forall x \in S)\, (|x - p| \lt \delta \implies |f(x) - L| \lt \varepsilon) }[/math].

The definition is the same, except that the neighborhood |*x* − *p*| < *δ* now includes the point *p*, in contrast to the deleted neighborhood 0 < |*x* − *p*| < *δ*. This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they allow one to state the theorem about limits of compositions without any constraints on the functions (other than the existence of their non-deleted limits) (Hubbard 2015).

(Bartle 1967) notes that although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular. For example, (Apostol 1974), (Courant 1924), (Hardy 1921), (Rudin 1964), and (Whittaker and Watson) all take "limit" to mean the deleted limit.

### Examples

#### Non-existence of one-sided limit(s)

The function

- [math]\displaystyle{ f(x)=\begin{cases} \sin\frac{5}{x-1} & \text{ for } x\lt 1 \\ 0 & \text{ for } x=1 \\ \frac{0.1}{x-1}& \text{ for } x\gt 1 \end{cases} }[/math]

has no limit at [math]\displaystyle{ x_0 = 1 }[/math] (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function), but has a limit at every other *x*-coordinate.

The function

- [math]\displaystyle{ f(x)=\begin{cases} 1 & x \text{ rational } \\ 0 & x \text{ irrational } \end{cases} }[/math]

(a.k.a., the Dirichlet function) has no limit at any *x*-coordinate.

#### Non-equality of one-sided limits

The function

- [math]\displaystyle{ f(x)=\begin{cases} 1 & \text{ for } x \lt 0 \\ 2 & \text{ for } x \ge 0 \end{cases} }[/math]

has a limit at every non-zero *x*-coordinate (the limit equals 1 for negative *x* and equals 2 for positive *x*). The limit at *x* = 0 does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2).
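The disagreement of the one-sided limits can be seen numerically. The sketch below is illustrative (the function name `f` is an assumption): sampling the step function on both sides of 0 yields two different limiting values:

```python
# Numeric sketch: the step function from the text has left-hand limit 1
# and right-hand limit 2 at x = 0, so the two-sided limit does not exist.
def f(x):
    return 1 if x < 0 else 2

left  = [f(-(10.0 ** -k)) for k in range(1, 8)]   # x -> 0 from below
right = [f(10.0 ** -k) for k in range(1, 8)]      # x -> 0 from above
assert all(v == 1 for v in left)
assert all(v == 2 for v in right)
print("one-sided limits: left -> 1, right -> 2")
```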

#### Limits at only one point

The functions

- [math]\displaystyle{ f(x)=\begin{cases} x & x \text{ rational } \\ 0 & x \text{ irrational } \end{cases} }[/math]

and

- [math]\displaystyle{ f(x)=\begin{cases} |x| & x \text{ rational } \\ 0 & x \text{ irrational } \end{cases} }[/math]

both have a limit at *x* = 0 and it equals 0.

#### Limits at countably many points

The function

- [math]\displaystyle{ f(x)=\begin{cases} \sin x & x \text{ irrational } \\ 1 & x \text{ rational } \end{cases} }[/math]

has a limit at any *x*-coordinate of the form [math]\displaystyle{ \frac{\pi}{2} + 2n\pi }[/math], where *n* is any integer.

## Limits involving infinity

### Limits at infinity

Let [math]\displaystyle{ f:S \to\mathbb{R} }[/math] be a function defined on [math]\displaystyle{ S\subseteq\mathbb{R} }[/math]. **The limit of f as x approaches infinity is L**, denoted

- [math]\displaystyle{ \lim_{x \to \infty}f(x) = L }[/math],

means that:

- For every *ε* > 0, there exists a *c* > 0 such that whenever *x* > *c*, we have |*f*(*x*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists c \gt 0) \,(\forall x \in S) \,(x \gt c \implies |f(x) - L| \lt \varepsilon) }[/math].

Similarly, **the limit of f as x approaches minus infinity is L**, denoted

- [math]\displaystyle{ \lim_{x \to -\infty}f(x) = L }[/math],

means that:

- For every *ε* > 0, there exists a *c* > 0 such that whenever *x* < −*c*, we have |*f*(*x*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0)\, (\exists c \gt 0) \,(\forall x \in S)\, (x \lt -c \implies |f(x) - L| \lt \varepsilon) }[/math].

For example,

- [math]\displaystyle{ \lim_{x \to \infty} \left(-\frac{3\sin x}{x} + 4\right) = 4 }[/math]

because for every *ε* > 0, we can take *c* = 3/*ε* such that for all real *x*, if *x* > *c*, then |*f*(*x*) − 4| < *ε*.

Another example is that

- [math]\displaystyle{ \lim_{x \to -\infty}e^{x} = 0 }[/math]

because for every *ε* > 0, we can take *c* = max{1, −ln(*ε*)} such that for all real *x*, if *x* < −*c*, then |*f*(*x*) − 0| < *ε*.
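The witness *c* = max{1, −ln(*ε*)} can be checked on sample points. This is a numeric sketch only, not a proof, and the sampled offsets below −*c* are arbitrary choices:

```python
import math

# Numeric sketch of lim_{x -> -inf} e^x = 0: the choice c = max(1, -ln(eps))
# from the text forces e^x < eps for every sampled x < -c.
for eps in (0.5, 1e-3, 1e-9):
    c = max(1.0, -math.log(eps))
    for x in (-c - 0.1, -c - 5.0, -c - 50.0):
        assert abs(math.exp(x) - 0.0) < eps
print("c = max(1, -ln(eps)) works on all sampled points")
```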

### Infinite limits

For a function whose values grow without bound, the function diverges and the usual limit does not exist. However, in this case one may introduce limits with infinite values.

Let [math]\displaystyle{ f:S \to\mathbb{R} }[/math] be a function defined on [math]\displaystyle{ S\subseteq\mathbb{R} }[/math]. The statement **the limit of f as x approaches p is infinity**, denoted

- [math]\displaystyle{ \lim_{x \to p} f(x) = \infty, }[/math]

means that:

- For every *N* > 0, there exists a *δ* > 0 such that whenever 0 < |*x* − *p*| < *δ*, we have *f*(*x*) > *N*.
- [math]\displaystyle{ (\forall N \gt 0)\, (\exists \delta \gt 0)\, (\forall x \in S)\, (0 \lt | x-p | \lt \delta \implies f(x) \gt N) }[/math].

The statement **the limit of f as x approaches p is minus infinity**, denoted

- [math]\displaystyle{ \lim_{x \to p} f(x) = -\infty, }[/math]

means that:

- For every *N* > 0, there exists a *δ* > 0 such that whenever 0 < |*x* − *p*| < *δ*, we have *f*(*x*) < −*N*.
- [math]\displaystyle{ (\forall N \gt 0) \, (\exists \delta \gt 0) \, (\forall x \in S)\, (0 \lt | x-p | \lt \delta \implies f(x) \lt -N) }[/math].

For example,

- [math]\displaystyle{ \lim_{x \to 1} \frac{1}{(x-1)^2} = \infty }[/math]

because for every *N* > 0, we can take *δ* = 1/√*N* such that for all real *x* ≠ 1, if 0 < |*x* − 1| < *δ*, then *f*(*x*) > *N*.

These ideas can be combined in a natural way to produce definitions for different combinations, such as

- [math]\displaystyle{ \lim_{x \to \infty} f(x) = \infty }[/math], or [math]\displaystyle{ \lim_{x \to p^+}f(x) = -\infty }[/math].

For example,

- [math]\displaystyle{ \lim_{x \to 0^+} \ln x = -\infty }[/math]

because for every *N* > 0, we can take *δ* = *e*^{−N} such that for all real *x* > 0, if 0 < *x* − 0 < *δ*, then *f*(*x*) < −*N*.

Limits involving infinity are connected with the concept of asymptotes.

These notions of a limit attempt to provide a metric space interpretation to limits at infinity. In fact, they are consistent with the topological space definition of limit if

- a neighborhood of −∞ is defined to contain an interval [−∞, *c*) for some *c* ∈ **R**,
- a neighborhood of ∞ is defined to contain an interval (*c*, ∞] where *c* ∈ **R**, and
- a neighborhood of *a* ∈ **R** is defined in the normal way in the metric space **R**.

In this case, **R** is a topological space and any function of the form *f*: *X* → *Y* with *X*, *Y* ⊆ **R** is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense.

### Alternative notation

Many authors^{[8]} allow for the projectively extended real line to be used as a way to include infinite values, as well as the extended real line. With this notation, the extended real line is given as **R** ∪ {−∞, +∞} and the projectively extended real line is **R** ∪ {∞}, where a neighborhood of ∞ is a set of the form {*x*: |*x*| > *c*}. The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases. As presented above, a completely rigorous account would need to consider 15 separate cases for each combination of infinities (five directions: −∞, left, central, right, and +∞; three bounds: −∞, finite, or +∞). There are also noteworthy pitfalls. For example, when working with the extended real line, [math]\displaystyle{ x^{-1} }[/math] does not possess a central limit (which is normal):

- [math]\displaystyle{ \lim_{x \to 0^{+}}{1\over x} = +\infty, \lim_{x \to 0^{-}}{1\over x} = -\infty }[/math].

In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so, the central limit *does* exist in that context:

- [math]\displaystyle{ \lim_{x \to 0^{+}}{1\over x} = \lim_{x \to 0^{-}}{1\over x} = \lim_{x \to 0}{1\over x} = \infty }[/math].

In fact, a plethora of conflicting formal systems are in use. In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes. A simple reason has to do with the converse of [math]\displaystyle{ \lim_{x \to 0^{-}}{x^{-1}} = -\infty }[/math]; namely, it is convenient for [math]\displaystyle{ \lim_{x \to -\infty}{x^{-1}} = -0 }[/math] to be considered true. Such zeroes can be seen as an approximation to infinitesimals.

### Limits at infinity for rational functions

There are three basic rules for evaluating limits at infinity for a rational function *f*(*x*) = *p*(*x*)/*q*(*x*), where *p* and *q* are polynomials:

- If the degree of *p* is greater than the degree of *q*, then the limit is positive or negative infinity depending on the signs of the leading coefficients;
- If the degrees of *p* and *q* are equal, the limit is the leading coefficient of *p* divided by the leading coefficient of *q*;
- If the degree of *p* is less than the degree of *q*, the limit is 0.

If the limit at infinity exists, it represents a horizontal asymptote at *y* = *L*. Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions.
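The three degree rules can be sketched as a small helper. The function name `lim_at_inf` and the coefficient-list convention `[a_n, ..., a_0]` are illustrative assumptions, not standard notation:

```python
import math

# Illustrative helper (names and the coefficient-list convention are
# assumptions): apply the three degree rules to p(x)/q(x), where p and q
# are polynomials given as coefficient lists [a_n, ..., a_0].
def lim_at_inf(p, q):
    dp, dq = len(p) - 1, len(q) - 1
    if dp > dq:                      # rule 1: sign from leading coefficients
        return math.copysign(math.inf, p[0] / q[0])
    if dp == dq:                     # rule 2: ratio of leading coefficients
        return p[0] / q[0]
    return 0.0                       # rule 3: denominator dominates

assert lim_at_inf([3, 0, 1], [2, 5, 0]) == 1.5        # (3x^2+1)/(2x^2+5x)
assert lim_at_inf([1, 0], [1, 0, 0]) == 0.0           # x / x^2
assert lim_at_inf([-1, 0, 0], [1, 0]) == -math.inf    # -x^2 / x
```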

## Functions of more than one variable

### Ordinary limits

By noting that |*x* − *p*| represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function [math]\displaystyle{ f : S \times T \to \R }[/math] defined on [math]\displaystyle{ S \times T \subseteq \R^2 }[/math], we define the limit as follows: **the limit of f as (x, y) approaches (p, q) is L**, written

- [math]\displaystyle{ \lim_{(x,y) \to (p, q)} f(x, y) = L }[/math]

if the following condition holds:

- For every *ε* > 0, there exists a *δ* > 0 such that for all *x* in *S* and *y* in *T*, whenever 0 < √(*x* − *p*)^{2} + (*y* − *q*)^{2} < *δ*, we have |*f*(*x*, *y*) − *L*| < *ε*.^{[9]}
- [math]\displaystyle{ (\forall \varepsilon \gt 0)\, (\exists \delta \gt 0)\, (\forall x \in S) \, (\forall y \in T)\, (0 \lt \sqrt{(x-p)^2 + (y-q)^2} \lt \delta \implies |f(x, y) - L| \lt \varepsilon) }[/math].

Here √(*x*−*p*)^{2} + (*y*−*q*)^{2} is the Euclidean distance between (*x*, *y*) and (*p*, *q*). (This can in fact be replaced by any norm ||(*x*, *y*) − (*p*, *q*)||, and be extended to any number of variables.)

For example, we may say

- [math]\displaystyle{ \lim_{(x,y) \to (0, 0)} \frac{x^4}{x^2+y^2} = 0 }[/math]

because for every *ε* > 0, we can take *δ* = √*ε* such that for all real (*x*, *y*) ≠ (0, 0), if 0 < √(*x*−0)^{2} + (*y*−0)^{2} < *δ*, then |*f*(*x*, *y*) − 0| < *ε*.

Similar to the case in single variable, the value of *f* at (*p*, *q*) does not matter in this definition of limit.

For such a multivariable limit to exist, this definition requires that the value of *f* approach *L* along every possible path approaching (*p*, *q*).^{[10]} In the above example, the function

- [math]\displaystyle{ f(x, y) = \frac{x^4}{x^2+y^2} }[/math]

satisfies this condition. This can be seen by considering the polar coordinates (*x*, *y*) = (*r* cos(*θ*), *r* sin(*θ*)) → (0, 0), which gives

- [math]\displaystyle{ \lim_{r \to 0} f(r \cos \theta, r \sin \theta) = \lim_{r \to 0} \frac{r^4 \cos^4 \theta}{r^2} = \lim_{r \to 0} r^2 \cos^4 \theta }[/math].

Here *θ* = *θ*(*r*) is a function of *r* which controls the shape of the path along which *f* approaches (*p*, *q*). Since cos^{4}(*θ*) is bounded between 0 and 1, by the sandwich theorem, this limit tends to 0.

In contrast, the function

- [math]\displaystyle{ f(x, y) = \frac{xy}{x^2 + y^2} }[/math]

does not have a limit at (0, 0). Taking the path (*x*, *y*) = (*t*, 0) → (0, 0), we obtain

- [math]\displaystyle{ \lim_{t \to 0} f(t, 0) = \lim_{t \to 0} \frac{0}{t^2} = 0 }[/math],

while taking the path (*x*, *y*) = (*t*, *t*) → (0, 0), we obtain

- [math]\displaystyle{ \lim_{t \to 0} f(t, t) = \lim_{t \to 0} \frac{t^2}{t^2 + t^2} = \frac{1}{2} }[/math].

Since the two values do not agree, *f* does not tend to a single value as (*x*, *y*) approaches (0, 0).
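The two-path argument can be sketched numerically. This is an informal check only (the function name `f` and the sampled path parameters are illustrative):

```python
# Numeric sketch: xy/(x^2 + y^2) takes different limiting values along
# the paths (t, 0) and (t, t), so it has no limit at (0, 0).
def f(x, y):
    return x * y / (x**2 + y**2)

ts = [10.0 ** -k for k in range(1, 8)]
along_axis = [f(t, 0.0) for t in ts]   # path (x, y) = (t, 0)
along_diag = [f(t, t) for t in ts]     # path (x, y) = (t, t)
assert all(v == 0.0 for v in along_axis)
assert all(abs(v - 0.5) < 1e-12 for v in along_diag)
print("path values: 0 along the axis, 1/2 along the diagonal")
```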

### Multiple limits

Although less commonly used, there is another type of limit for a multivariable function, known as the **multiple limit**. For a two-variable function, this is the **double limit**.^{[11]} Let [math]\displaystyle{ f : S \times T \to \R }[/math] be defined on [math]\displaystyle{ S \times T \subseteq \R^2 }[/math]. We say **the double limit of f as x approaches p and y approaches q is L**, written

- [math]\displaystyle{ \lim_{\begin{smallmatrix} x\to p \\ y\to q \end{smallmatrix}} f(x, y) = L }[/math]

if the following condition holds:

- For every *ε* > 0, there exists a *δ* > 0 such that for all *x* in *S* and *y* in *T*, whenever 0 < |*x* − *p*| < *δ* and 0 < |*y* − *q*| < *δ*, we have |*f*(*x*, *y*) − *L*| < *ε*.^{[11]}
- [math]\displaystyle{ (\forall \varepsilon \gt 0)\, (\exists \delta \gt 0)\, (\forall x \in S) \, (\forall y \in T)\, ( (0 \lt |x-p| \lt \delta) \land (0 \lt |y-q| \lt \delta) \implies |f(x, y) - L| \lt \varepsilon) }[/math].

For such a double limit to exist, this definition requires that the value of *f* approach *L* along every possible path approaching (*p*, *q*), excluding the two lines *x* = *p* and *y* = *q*. As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals *L*, then the multiple limit exists and also equals *L*. The converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example

- [math]\displaystyle{ f(x,y) = \begin{cases} 1 \quad \text{for} \quad xy \ne 0 \\ 0 \quad \text{for} \quad xy = 0 \end{cases} }[/math]

where

- [math]\displaystyle{ \lim_{\begin{smallmatrix} x\to 0 \\ y\to 0 \end{smallmatrix}} f(x, y) = 1 }[/math]

but

- [math]\displaystyle{ \lim_{(x, y) \to (0, 0)} f(x, y) }[/math] does not exist.

If the domain of *f* is restricted to [math]\displaystyle{ (S\setminus\{p\}) \times (T\setminus\{q\}) }[/math], then the two definitions of limits coincide.^{[11]}

### Multiple limits at infinity

The concept of multiple limit can extend to the limit at infinity, in a way similar to that of a single-variable function. For [math]\displaystyle{ f : S \times T \to \R }[/math], we say **the double limit of f as x and y approach infinity is L**, written

- [math]\displaystyle{ \lim_{\begin{smallmatrix} x\to \infty \\ y\to \infty \end{smallmatrix}} f(x, y) = L }[/math]

if the following condition holds:

- For every *ε* > 0, there exists a *c* > 0 such that for all *x* in *S* and *y* in *T*, whenever *x* > *c* and *y* > *c*, we have |*f*(*x*, *y*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0)\, (\exists c\gt 0)\, (\forall x \in S) \, (\forall y \in T)\, ( (x \gt c) \land (y \gt c) \implies |f(x, y) - L| \lt \varepsilon) }[/math].

We say **the double limit of f as x and y approach minus infinity is L**, written

- [math]\displaystyle{ \lim_{\begin{smallmatrix} x\to -\infty \\ y\to -\infty \end{smallmatrix}} f(x, y) = L }[/math]

if the following condition holds:

- For every *ε* > 0, there exists a *c* > 0 such that for all *x* in *S* and *y* in *T*, whenever *x* < −*c* and *y* < −*c*, we have |*f*(*x*, *y*) − *L*| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0)\, (\exists c\gt 0)\, (\forall x \in S) \, (\forall y \in T)\, ( (x \lt -c) \land (y \lt -c) \implies |f(x, y) - L| \lt \varepsilon) }[/math].

### Pointwise limits and uniform limits

Let [math]\displaystyle{ f : S \times T \to \R }[/math]. Instead of taking limit as (*x*, *y*) → (*p*, *q*), we may consider taking the limit of just one variable, say, *x* → *p*, to obtain a single-variable function of *y*, namely [math]\displaystyle{ g : T \to \R }[/math]. In fact, this limiting process can be done in two distinct ways. The first one is called **pointwise limit**. We say **the pointwise limit of f as x approaches p is g**, denoted

- [math]\displaystyle{ \lim_{x\to p}f(x, y) = g(y) }[/math], or
- [math]\displaystyle{ \lim_{x \to p}f(x, y) = g(y) \;\; \text{pointwise} }[/math].

Alternatively, we may say that **f tends to g pointwise as x approaches p**, denoted

- [math]\displaystyle{ f(x, y) \to g(y) \;\; \text{as} \;\; x \to p }[/math], or
- [math]\displaystyle{ f(x, y) \to g(y) \;\; \text{pointwise} \;\; \text{as} \;\; x \to p }[/math].

This limit exists if the following holds:

- For every *ε* > 0 and every fixed *y* in *T*, there exists a *δ*(*ε*, *y*) > 0 such that for all *x* in *S*, whenever 0 < |*x* − *p*| < *δ*, we have |*f*(*x*, *y*) − *g*(*y*)| < *ε*.^{[12]}
- [math]\displaystyle{ (\forall \varepsilon \gt 0)\, (\forall y \in T) \, (\exists \delta\gt 0)\, (\forall x \in S)\, ( 0 \lt |x-p| \lt \delta \implies |f(x, y) - g(y)| \lt \varepsilon) }[/math].

Here, *δ* = *δ*(*ε*, *y*) is a function of both *ε* and *y*. Each *δ* is chosen for a *specific point* of *y*. Hence we say the limit is pointwise in *y*. For example,

- [math]\displaystyle{ f(x, y) = \frac{x}{\cos y} }[/math]

has a pointwise limit of constant zero function

- [math]\displaystyle{ \lim_{x \to 0}f(x, y) = 0(y) \;\; \text{pointwise} }[/math]

because for every fixed *y*, the limit is clearly 0. Note that this argument fails if *y* is not fixed: if *y* is very close to *π*/2, the value of the fraction may deviate from 0.

This leads to another definition of limit, namely the **uniform limit**. We say **the uniform limit of f on T as x approaches p is g**, denoted

- [math]\displaystyle{ \underset{\begin{smallmatrix} x\to p \\ y\in T \end{smallmatrix}}{\mathrm{unif} \lim \;} f(x, y) = g(y) }[/math], or
- [math]\displaystyle{ \lim_{x \to p}f(x, y) = g(y) \;\; \text{uniformly on} \; T }[/math].

Alternatively, we may say that **f tends to g uniformly on T as x approaches p**, denoted

- [math]\displaystyle{ f(x, y) \rightrightarrows g(y) \; \text{on} \; T \;\; \text{as} \;\; x \to p }[/math], or
- [math]\displaystyle{ f(x, y) \to g(y) \;\; \text{uniformly on}\; T \;\; \text{as} \;\; x \to p }[/math].

This limit exists if the following holds:

- For every *ε* > 0, there exists a *δ*(*ε*) > 0 such that for all *x* in *S* and *y* in *T*, whenever 0 < |*x* − *p*| < *δ*, we have |*f*(*x*, *y*) − *g*(*y*)| < *ε*.^{[12]}
- [math]\displaystyle{ (\forall \varepsilon \gt 0) \, (\exists \delta \gt 0)\, (\forall x \in S)\, (\forall y \in T)\, ( 0 \lt |x-p| \lt \delta \implies |f(x, y) - g(y)| \lt \varepsilon) }[/math].

Here, *δ* = *δ*(*ε*) is a function of only *ε* but not *y*. In other words, *δ* is *uniformly applicable* to all *y* in *T*. Hence we say the limit is uniform in *y*. For example,

- [math]\displaystyle{ f(x, y) = x \cos y }[/math]

has a uniform limit of constant zero function

- [math]\displaystyle{ \lim_{x \to 0}f(x, y) = 0(y) \;\; \text{ uniformly on}\; \R }[/math]

because for all real *y*, cos(*y*) is bounded between −1 and 1. Hence no matter how *y* behaves, we may use the sandwich theorem to show that the limit is 0.
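The contrast between the two examples above can be sketched numerically. This is an informal check (the variable names and sampled values of *y* are illustrative choices):

```python
import math

# Numeric sketch (not a proof): x*cos(y) -> 0 uniformly in y, since
# |x*cos(y)| <= |x| for every y, while x/cos(y) -> 0 only pointwise:
# for the same small x, values blow up as y nears pi/2, so no single
# delta works for all y at once.
x = 1e-6
ys = [0.0, 1.0, 1.5, 1.57079]            # the last y is close to pi/2
uniform_vals = [abs(x * math.cos(y)) for y in ys]
pointwise_vals = [abs(x / math.cos(y)) for y in ys]
assert max(uniform_vals) <= abs(x)       # one bound covers every y
assert max(pointwise_vals) > 0.1         # the same x fails near pi/2
```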

### Iterated limits

Let [math]\displaystyle{ f : S \times T \to \R }[/math]. We may consider taking the limit of just one variable, say, *x* → *p*, to obtain a single-variable function of *y*, namely [math]\displaystyle{ g : T \to \R }[/math], and then take limit in the other variable, namely *y* → *q*, to get a number [math]\displaystyle{ L }[/math]. Symbolically,

- [math]\displaystyle{ \lim_{y \to q} \lim_{x \to p} f(x, y) = \lim_{y \to q} g(y) = L }[/math].

This limit is known as the **iterated limit** of the multivariable function.^{[13]} Note that the order of taking limits may affect the result, i.e.,

- [math]\displaystyle{ \lim_{y \to q} \lim_{x \to p} f(x,y) \ne \lim_{x \to p} \lim_{y \to q} f(x, y) }[/math] in general.

A sufficient condition of equality is given by the Moore-Osgood theorem, which requires the limit [math]\displaystyle{ \lim_{x \to p}f(x, y) = g(y) }[/math] to be uniform on *T*.^{[14]}
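The order-dependence can be illustrated with the classic example *f*(*x*, *y*) = (*x* − *y*)/(*x* + *y*), which is not taken from the text above: sending *x* → 0 first gives −*y*/*y* = −1, while sending *y* → 0 first gives *x*/*x* = +1. A numeric sketch:

```python
# Numeric sketch of order-dependence for f(x, y) = (x - y)/(x + y)
# (a standard added illustration): the two iterated limits at (0, 0)
# disagree, so the Moore-Osgood hypothesis of uniformity must fail here.
def f(x, y):
    return (x - y) / (x + y)

x_first = f(1e-9, 1e-4)   # x already tiny relative to y: close to -1
y_first = f(1e-4, 1e-9)   # y already tiny relative to x: close to +1
assert abs(x_first - (-1.0)) < 1e-3
assert abs(y_first - 1.0) < 1e-3
print("iterated limits disagree: -1 vs +1")
```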

## Functions on metric spaces

Suppose *M* and *N* are subsets of metric spaces *A* and *B*, respectively, and *f* : *M* → *N* is defined between *M* and *N*, with *x* ∈ *M*, *p* a limit point of *M*, and *L* ∈ *N*. It is said that **the limit of f as x approaches p is L**, written

- [math]\displaystyle{ \lim_{x \to p}f(x) = L }[/math]

if the following property holds:

- For every *ε* > 0, there exists a *δ* > 0 such that for all points *x* ∈ *M*, 0 < *d*_{A}(*x*, *p*) < *δ* implies *d*_{B}(*f*(*x*), *L*) < *ε*.^{[15]}
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists \delta \gt 0) \,(\forall x \in M) \,(0 \lt d_A(x, p) \lt \delta \implies d_B(f(x), L) \lt \varepsilon) }[/math].

Again, note that *p* need not be in the domain of *f*, nor does *L* need to be in the range of *f*, and even if *f*(*p*) is defined it need not be equal to *L*.

### Euclidean metric

The limit in Euclidean space is a direct generalization of limits to vector-valued functions. For example, we may consider a function [math]\displaystyle{ f:S \times T \to \R^3 }[/math] such that

- [math]\displaystyle{ f(x, y) = (f_1(x, y), f_2(x, y), f_3(x, y) ) }[/math].

Then, under the usual Euclidean metric,

- [math]\displaystyle{ \lim_{(x, y) \to (p, q)} f(x, y) = (L_1, L_2, L_3) }[/math]

if the following holds:

- For every *ε* > 0, there exists a *δ* > 0 such that for all *x* in *S* and *y* in *T*, 0 < √((*x* − *p*)^{2} + (*y* − *q*)^{2}) < *δ* implies √((*f*_{1} − *L*_{1})^{2} + (*f*_{2} − *L*_{2})^{2} + (*f*_{3} − *L*_{3})^{2}) < *ε*.^{[16]}
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists \delta \gt 0) \, (\forall x \in S) \, (\forall y \in T)\, (0 \lt \sqrt{(x-p)^2+(y-q)^2} \lt \delta \implies \sqrt{(f_1-L_1)^2 + (f_2-L_2)^2 + (f_3-L_3)^2} \lt \varepsilon) }[/math].

In this example, the function concerned is a finite-dimensional vector-valued function. In this case, the **limit theorem for vector-valued functions** states that if the limit of each component exists, then the limit of the vector-valued function equals the vector whose components are those limits:^{[16]}

- [math]\displaystyle{ \lim_{(x, y) \to (p, q)} \left(f_1(x, y), f_2(x, y), f_3(x, y)\right) = \left(\lim_{(x, y) \to (p, q)}f_1(x, y), \lim_{(x, y) \to (p, q)}f_2(x, y), \lim_{(x, y) \to (p, q)}f_3(x, y)\right) }[/math].
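A numeric sketch of the componentwise theorem, with illustrative (hypothetical) component functions: the Euclidean distance to the vector of componentwise limits shrinks as the input approaches the limit point.

```python
import math

# Illustrative component functions (not from the text); at (p, q) = (1, 2)
# the componentwise limits are (3, 2, sin 3).
def f(x, y):
    return (x + y, x * y, math.sin(x + y))

target = (3.0, 2.0, math.sin(3.0))

def dist(u, v):
    # Euclidean distance in R^3
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# approach (1, 2) along a shrinking sequence of points
errors = [dist(f(1 + t, 2 - t), target) for t in (0.1, 0.01, 0.001)]
```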

### Manhattan metric

One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider [math]\displaystyle{ f:S \to \R^2 }[/math] such that

- [math]\displaystyle{ f(x) = (f_1(x), f_2(x)) }[/math].

Then, under the Manhattan metric,

- [math]\displaystyle{ \lim_{x \to p} f(x) = (L_1, L_2) }[/math]

if the following holds:

- For every *ε* > 0, there exists a *δ* > 0 such that for all *x* in *S*, 0 < |*x* − *p*| < *δ* implies |*f*_{1} − *L*_{1}| + |*f*_{2} − *L*_{2}| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists \delta \gt 0) \,(\forall x \in S) \,(0 \lt |x - p| \lt \delta \implies |f_1 - L_1| + |f_2 - L_2| \lt \varepsilon) }[/math].

Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies.^{[17]}

### Uniform metric

Finally, we will discuss the limit in a function space, which is infinite-dimensional. Consider a function *f*(*x*, *y*) in the function space [math]\displaystyle{ S \times T \to \R }[/math]. We want to find out how, as *x* approaches *p*, *f*(*x*, *y*) tends to another function *g*(*y*), which is in the function space [math]\displaystyle{ T \to \R }[/math]. The "closeness" in this function space may be measured under the uniform metric.^{[18]} Then, we will say **the uniform limit of *f* on *T* as *x* approaches *p* is *g*** and write

- [math]\displaystyle{ \underset{\begin{smallmatrix} x\to p \\ y\in T \end{smallmatrix}}{\mathrm{unif} \lim \;} f(x, y) = g(y) }[/math], or
- [math]\displaystyle{ \lim_{x \to p}f(x, y) = g(y) \;\; \text{uniformly on} \; T }[/math],

if the following holds:

- For every *ε* > 0, there exists a *δ* > 0 such that for all *x* in *S*, 0 < |*x* − *p*| < *δ* implies sup_{y∈T} |*f*(*x*, *y*) − *g*(*y*)| < *ε*.
- [math]\displaystyle{ (\forall \varepsilon \gt 0 )\, (\exists \delta \gt 0) \,(\forall x \in S) \,(0 \lt |x-p| \lt \delta \implies \sup_{y \in T} | f(x, y) - g(y) | \lt \varepsilon) }[/math].

In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section.

## Functions on topological spaces

Suppose *X* and *Y* are topological spaces with *Y* a Hausdorff space. Let *p* be a limit point of Ω ⊆ *X*, and *L* ∈ *Y*. For a function *f* : Ω → *Y*, we say that the **limit of *f* as *x* approaches *p* is *L***, written

- [math]\displaystyle{ \lim_{x \to p}f(x) = L }[/math],

if the following property holds:

- For every open neighborhood *V* of *L*, there exists an open neighborhood *U* of *p* such that *f*(*U* ∩ Ω − {*p*}) ⊆ *V*.

This last part of the definition can also be phrased "there exists an open punctured neighbourhood *U* of *p* such that *f*(*U*∩Ω) ⊆ *V* ".

Note that the domain of *f* does not need to contain *p*. If it does, then the value of *f* at *p* is irrelevant to the definition of the limit. In particular, if the domain of *f* is *X* − {*p*} (or all of *X*), then the limit of *f* as *x* → *p* exists and is equal to *L* if, for all subsets Ω of *X* with limit point *p*, the limit of the restriction of *f* to Ω exists and is equal to *L*. Sometimes this criterion is used to establish the *non-existence* of the two-sided limit of a function on **R** by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets.

Alternatively, the requirement that *Y* be a Hausdorff space can be relaxed to the assumption that *Y* be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about *the limit* of a function at a point, but rather *a limit* or *the set of limits* at a point.

A function is continuous at a point *p* that is both in its domain and a limit point of its domain if and only if *f*(*p*) is *the* (or, in the general case, *a*) limit of *f*(*x*) as *x* tends to *p*.

There is another type of limit of a function, namely the **sequential limit**. Let *f* : *X* → *Y* be a mapping from a topological space *X* into a Hausdorff space *Y*, *p* ∈ *X* a limit point of *X* and *L* ∈ *Y*. The sequential limit of *f* as *x* tends to *p* is *L* if, for every sequence (*x*_{n}) in *X* − {*p*} that converges to *p*, the sequence *f*(*x*_{n}) converges to *L*.

If *L* is the limit (in the sense above) of *f* as *x* approaches *p*, then it is a sequential limit as well; however, the converse need not hold in general. If in addition *X* is metrizable, then *L* is the sequential limit of *f* as *x* approaches *p* if and only if it is the limit (in the sense above) of *f* as *x* approaches *p*.

## Other characterizations

### In terms of sequences

For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) In this setting:

- [math]\displaystyle{ \lim_{x\to a}f(x)=L }[/math]

if, and only if, for all sequences [math]\displaystyle{ x_n }[/math] (with [math]\displaystyle{ x_n }[/math] not equal to *a* for all *n*) converging to [math]\displaystyle{ a }[/math], the sequence [math]\displaystyle{ f(x_n) }[/math] converges to [math]\displaystyle{ L }[/math]. It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires, and is equivalent to, a weak form of the axiom of choice. Note that defining what it means for a sequence [math]\displaystyle{ x_n }[/math] to converge to [math]\displaystyle{ a }[/math] requires the epsilon–delta method.

As with Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let *f* be a real-valued function with domain *Dm*(*f*). Let *a* be the limit of a sequence of elements of *Dm*(*f*) \ {*a*}. Then the limit (in this sense) of *f* as *x* approaches *a* is *L* if for every sequence [math]\displaystyle{ x_n }[/math] ∈ *Dm*(*f*) \ {*a*} (so that for all *n*, [math]\displaystyle{ x_n }[/math] is not equal to *a*) that converges to *a*, the sequence [math]\displaystyle{ f(x_n) }[/math] converges to [math]\displaystyle{ L }[/math]. This is the same as the definition of a sequential limit in the preceding section, obtained by regarding the subset *Dm*(*f*) of **R** as a metric space with the induced metric.
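The sequential criterion is often used negatively: exhibiting two sequences tending to *a* on which *f* has different limits shows that the limit of *f* at *a* does not exist. A sketch with the classic example *f*(*x*) = sin(1/*x*) at 0:

```python
import math

def f(x):
    return math.sin(1.0 / x)

# Two sequences tending to 0 with different limits of f along them:
n = 10**6
a_n = 1.0 / (n * math.pi)              # f(a_n) = sin(n*pi) = 0
b_n = 1.0 / ((2 * n + 0.5) * math.pi)  # f(b_n) = sin((2n + 1/2)*pi) = 1
# Hence lim_{x -> 0} sin(1/x) does not exist.
```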

### In non-standard calculus

In non-standard calculus the limit of a function is defined by:

- [math]\displaystyle{ \lim_{x\to a}f(x)=L }[/math]

if and only if for all [math]\displaystyle{ x\in \mathbb{R}^* }[/math], [math]\displaystyle{ f^*(x)-L }[/math] is infinitesimal whenever [math]\displaystyle{ x-a }[/math] is infinitesimal. Here [math]\displaystyle{ \mathbb{R}^* }[/math] are the hyperreal numbers and [math]\displaystyle{ f^* }[/math] is the natural extension of *f* to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers.^{[19]} On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the ε-δ method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without ε-δ methods cannot be realized in full.^{[20]}
Bŀaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".^{[21]}

### In terms of nearness

At the 1908 international congress of mathematicians, F. Riesz introduced an alternative way of defining limits and continuity, using a concept called "nearness".^{[22]} A point [math]\displaystyle{ x }[/math] is defined to be near a set [math]\displaystyle{ A\subseteq \mathbb{R} }[/math] if for every [math]\displaystyle{ r\gt 0 }[/math] there is a point [math]\displaystyle{ a\in A }[/math] so that [math]\displaystyle{ |x-a|\lt r }[/math]. In this setting,

- [math]\displaystyle{ \lim_{x\to a} f(x)=L }[/math]

if and only if for all [math]\displaystyle{ A\subseteq \mathbb{R} }[/math], [math]\displaystyle{ L }[/math] is near [math]\displaystyle{ f(A) }[/math] whenever [math]\displaystyle{ a }[/math] is near [math]\displaystyle{ A }[/math]. Here [math]\displaystyle{ f(A) }[/math] is the set [math]\displaystyle{ \{f(x) | x \in A\} }[/math]. This definition can also be extended to metric and topological spaces.

## Relationship to continuity

The notion of the limit of a function is very closely related to the concept of continuity. A function *ƒ* is said to be continuous at *c* if it is both defined at *c* and its value at *c* equals the limit of *f* as *x* approaches *c*:

- [math]\displaystyle{ \lim_{x\to c} f(x) = f(c). }[/math]

(We have here assumed that *c* is a limit point of the domain of *f*.)

## Properties

If a function *f* is real-valued, then the limit of *f* at *p* is *L* if and only if both the right-handed limit and left-handed limit of *f* at *p* exist and are equal to *L*.

The function *f* is continuous at *p* if and only if the limit of *f*(*x*) as *x* approaches *p* exists and is equal to *f*(*p*). If *f* : *M* → *N* is a function between metric spaces *M* and *N*, then it is equivalent that *f* transforms every sequence in *M* which converges towards *p* into a sequence in *N* which converges towards *f*(*p*).

If *N* is a normed vector space, then the limit operation is linear in the following sense: if the limit of *f*(*x*) as *x* approaches *p* is *L* and the limit of *g*(*x*) as *x* approaches *p* is *P*, then the limit of *f*(*x*) + *g*(*x*) as *x* approaches *p* is *L* + *P*. If *a* is a scalar from the base field, then the limit of *af*(*x*) as *x* approaches *p* is *aL*.

If *f* and *g* are real-valued (or complex-valued) functions, then taking the limit of an operation on *f*(*x*) and *g*(*x*) (e.g., [math]\displaystyle{ f+g }[/math], [math]\displaystyle{ f-g }[/math], [math]\displaystyle{ f\times g }[/math], [math]\displaystyle{ f/g }[/math], [math]\displaystyle{ f^g }[/math]) is, under certain conditions, compatible with the corresponding operation on the limits of *f*(*x*) and *g*(*x*). This fact is often called the **algebraic limit theorem**. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values, possibly 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base is positive, or zero while the exponent is positive (and finite).

- [math]\displaystyle{ \begin{matrix} \lim\limits_{x \to p} & (f(x) + g(x)) & = & \lim\limits_{x \to p} f(x) + \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} & (f(x) - g(x)) & = & \lim\limits_{x \to p} f(x) - \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} & (f(x)\cdot g(x)) & = & \lim\limits_{x \to p} f(x) \cdot \lim\limits_{x \to p} g(x) \\ \lim\limits_{x \to p} & (f(x)/g(x)) & = & {\lim\limits_{x \to p} f(x) / \lim\limits_{x \to p} g(x)} \\ \lim\limits_{x \to p} & f(x)^{g(x)} & = & {\lim\limits_{x \to p} f(x) ^ {\lim\limits_{x \to p} g(x)}} \end{matrix} }[/math]

These rules are also valid for one-sided limits, including when *p* is ∞ or −∞. In each rule above, when one of the limits on the right is ∞ or −∞, the limit on the left may sometimes still be determined by the following rules.

- *q* + ∞ = ∞ if *q* ≠ −∞
- *q* × ∞ = ∞ if *q* > 0
- *q* × ∞ = −∞ if *q* < 0
- *q* / ∞ = 0 if *q* ≠ ∞ and *q* ≠ −∞
- ∞^{q} = 0 if *q* < 0
- ∞^{q} = ∞ if *q* > 0
- *q*^{∞} = 0 if 0 < *q* < 1
- *q*^{∞} = ∞ if *q* > 1
- *q*^{−∞} = ∞ if 0 < *q* < 1
- *q*^{−∞} = 0 if *q* > 1

(see also Extended real number line).

In other cases the limit on the left may still exist, although the right-hand side, called an *indeterminate form*, does not allow one to determine the result. This depends on the functions *f* and *g*. These indeterminate forms are:

- 0 / 0
- ±∞ / ±∞
- 0 × ±∞
- ∞ + −∞
- 0^{0}
- ∞^{0}
- 1^{±∞}

See further L'Hôpital's rule below and Indeterminate form.
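That an indeterminate form does not determine the result can be illustrated numerically; a sketch with three limits, each of the form 0/0, yet with three different outcomes:

```python
import math

# Three 0/0 forms at x -> 0 with different limiting behavior:
x = 1e-6
near_one  = math.sin(x) / x          # sin x / x       -> 1
near_zero = (1 - math.cos(x)) / x    # (1 - cos x) / x -> 0
blows_up  = x / x**3                 # x / x^3         -> infinity
```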

### Limits of compositions of functions

In general, from knowing that

- [math]\displaystyle{ \lim_{y \to b} f(y) = c }[/math] and [math]\displaystyle{ \lim_{x \to a} g(x) = b }[/math],

it does *not* follow that [math]\displaystyle{ \lim_{x \to a} f(g(x)) = c }[/math]. However, this "chain rule" does hold if one of the following *additional* conditions holds:

- *f*(*b*) = *c* (that is, *f* is continuous at *b*), or
- *g* does not take the value *b* near *a* (that is, there exists a [math]\displaystyle{ \delta \gt 0 }[/math] such that if [math]\displaystyle{ 0\lt |x-a|\lt \delta }[/math] then [math]\displaystyle{ |g(x)-b|\gt 0 }[/math]).

As an example of this phenomenon, consider the following function that violates both additional restrictions:

- [math]\displaystyle{ f(x)=g(x)=\begin{cases}0 & \text{if } x\neq 0 \\ 1 & \text{if } x=0 \end{cases}. }[/math]

Since *f* has a removable discontinuity at 0,

- [math]\displaystyle{ \lim_{x \to a} f(x) = 0 }[/math] for all [math]\displaystyle{ a }[/math].

Thus, the naïve chain rule would suggest that the limit of *f*(*f*(*x*)) is 0. However, it is the case that

- [math]\displaystyle{ f(f(x))=\begin{cases}1 & \text{if } x\neq 0 \\ 0 & \text{if } x=0 \end{cases} }[/math]

and so

- [math]\displaystyle{ \lim_{x \to a} f(f(x)) = 1 }[/math] for all [math]\displaystyle{ a }[/math].
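The counterexample can be verified directly; a sketch:

```python
# The function from the example: f(x) = 0 for x != 0, and f(0) = 1.
def f(x):
    return 0 if x != 0 else 1

# Near (but not at) 0, f(x) = 0, so f(f(x)) = f(0) = 1:
near = [f(f(x)) for x in (0.1, -0.01, 1e-9)]
at_zero = f(f(0))   # f(f(0)) = f(1) = 0
# So lim_{x->0} f(f(x)) = 1, not the naive value 0.
```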

### Limits of special interest

#### Rational functions

For [math]\displaystyle{ n }[/math] a nonnegative integer and constants [math]\displaystyle{ a_1, a_2, \ldots, a_{n+1} }[/math] and [math]\displaystyle{ b_1, b_2, \ldots, b_{n+1} }[/math] with [math]\displaystyle{ b_1 \neq 0 }[/math],

- [math]\displaystyle{ \lim_{x \to \infty} \frac{a_{1}x^{n}+a_{2}x^{n-1}+\cdots+a_{n+1}}{b_{1}x^{n}+b_{2}x^{n-1}+\cdots+b_{n+1}} = \frac{a_1}{b_1} }[/math]

This can be proven by dividing both the numerator and denominator by [math]\displaystyle{ x^{n} }[/math]. If the numerator is a polynomial of higher degree, the limit does not exist (the ratio diverges to ∞ or −∞). If the denominator is of higher degree, the limit is 0.
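A numeric sketch with an illustrative pair of quadratics (coefficients chosen arbitrarily): the ratio approaches the ratio of leading coefficients, 3/4.

```python
# (3x^2 + 5x + 7) / (4x^2 + 9) -> 3/4 as x -> infinity,
# the ratio of the leading coefficients.
def r(x):
    return (3 * x**2 + 5 * x + 7) / (4 * x**2 + 9)

values = [r(x) for x in (1e2, 1e4, 1e8)]   # approaches 0.75
```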

#### Trigonometric functions

- [math]\displaystyle{ \lim_{x \to 0} \frac{\sin x}{x} = 1 }[/math]
- [math]\displaystyle{ \lim_{x \to 0} \frac{1 - \cos x}{x} = 0 }[/math]

#### Exponential functions

- [math]\displaystyle{ \lim_{x \to 0} (1+x)^{\frac{1}{x}} = \lim_{r \to \infty} \left(1+\frac{1}{r}\right)^{r} = e }[/math]
- [math]\displaystyle{ \lim_{x \to 0} \frac{e^{x}-1}{x} = 1 }[/math]
- [math]\displaystyle{ \lim_{x \to 0} \frac{e^{ax}-1}{bx} = \frac{a}{b} }[/math]
- [math]\displaystyle{ \lim_{x \to 0} \frac{c^{ax}-1}{bx} = \frac{a}{b}\ln c }[/math]
- [math]\displaystyle{ \lim_{x \to 0^+} x^{x} = 1 }[/math]

#### Logarithmic functions

- [math]\displaystyle{ \lim_{x \to 0} \frac{\ln(1+x)}{x} = 1 }[/math]
- [math]\displaystyle{ \lim_{x \to 0} \frac{\ln(1+ax)}{bx} = \frac{a}{b} }[/math]
- [math]\displaystyle{ \lim_{x \to 0} \frac{\log_c(1+ax)}{bx} = \frac{a}{b\ln c} }[/math]
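The special limits in the three subsections above can be spot-checked numerically; a sketch using Python's `math` module (`expm1` and `log1p` avoid catastrophic cancellation near 0):

```python
import math

x = 1e-8
sin_ratio = math.sin(x) / x       # sin x / x     -> 1
e_def     = (1 + x) ** (1 / x)    # (1+x)^(1/x)   -> e
exp_ratio = math.expm1(x) / x     # (e^x - 1)/x   -> 1
log_ratio = math.log1p(x) / x     # ln(1+x)/x     -> 1
```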

### L'Hôpital's rule

This rule uses derivatives to find limits of indeterminate forms 0/0 or ±∞/∞, and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions *f*(*x*) and *g*(*x*), defined over an open interval *I* containing the desired limit point *c*, then if:

- [math]\displaystyle{ \lim_{x \to c}f(x)=\lim_{x \to c}g(x)=0, }[/math] or [math]\displaystyle{ \lim_{x \to c}|f(x)|=\lim_{x \to c}|g(x)| = \infty }[/math], and
- [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math] are differentiable over [math]\displaystyle{ I \setminus \{c\} }[/math], and
- [math]\displaystyle{ g'(x)\neq 0 }[/math] for all [math]\displaystyle{ x \in I \setminus \{c\} }[/math], and
- [math]\displaystyle{ \lim_{x\to c}\frac{f'(x)}{g'(x)} }[/math] exists,

then:

- [math]\displaystyle{ \lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)} }[/math].

Normally, the first condition is the most important one.

For example: [math]\displaystyle{ \lim_{x \to 0} \frac{\sin (2x)}{\sin (3x)} = \lim_{x \to 0} \frac{2 \cos (2x)}{3 \cos (3x)} = \frac{2 \cdot 1}{3 \cdot 1} = \frac{2}{3}. }[/math]
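A numeric sketch of this example: evaluating the quotient close to 0 agrees with the derivative ratio 2/3.

```python
import math

# sin(2x)/sin(3x) near 0 matches the value 2/3 given by L'Hôpital's rule.
x = 1e-7
ratio = math.sin(2 * x) / math.sin(3 * x)
```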

### Summations and integrals

Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit.

A short way to write the limit [math]\displaystyle{ \lim_{n \to \infty} \sum_{i=s}^n f(i) }[/math] is [math]\displaystyle{ \sum_{i=s}^\infty f(i) }[/math]. An important example of limits of sums such as these is the series.

A short way to write the limit [math]\displaystyle{ \lim_{x \to \infty} \int_a^x f(t) \; dt }[/math] is [math]\displaystyle{ \int_a^\infty f(t) \; dt }[/math].

A short way to write the limit [math]\displaystyle{ \lim_{x \to -\infty} \int_x^b f(t) \; dt }[/math] is [math]\displaystyle{ \int_{-\infty}^b f(t) \; dt }[/math].
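A numeric sketch of the first integral shorthand, with the illustrative integrand *e*^{−t} (so that ∫₀^∞ *e*^{−t} d*t* = lim_{x→∞} (1 − *e*^{−x}) = 1), using a crude midpoint-rule approximation:

```python
import math

# Midpoint-rule approximation of the integral of e^{-t} over [0, x].
def integral_to(x, steps=100_000):
    h = x / steps
    return h * sum(math.exp(-(i + 0.5) * h) for i in range(steps))

# The finite integrals 1 - e^{-x} increase toward the improper integral's value, 1.
partials = [integral_to(x) for x in (5.0, 10.0, 20.0)]
```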

## See also

- Big O notation – Notation describing limiting behavior
- L'Hôpital's rule – Mathematical rule for evaluating some limits
- List of limits
- Limit of a sequence – Value to which tends an infinite sequence
- Limit superior and limit inferior – Bounds of a sequence
- Net (mathematics) – A generalization of a sequence of points
- Non-standard calculus
- Squeeze theorem – Method for finding limits in calculus
- Subsequential limit – The limit of some subsequence

## Notes

- ↑ Felscher, Walter (2000), "Bolzano, Cauchy, Epsilon, Delta",
*American Mathematical Monthly***107**(9): 844–862, doi:10.2307/2695743 - ↑
^{2.0}^{2.1}Grabiner, Judith V. (1983), "Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus",*American Mathematical Monthly***90**(3): 185–194, doi:10.2307/2975545, collected in Who Gave You the Epsilon?, ISBN 978-0-88385-569-0 pp. 5–13. Also available at: http://www.maa.org/pubs/Calc_articles/ma002.pdf - ↑ Sinkevich, G. I. (2017). "Historia epsylontyki".
*Antiquitates Mathematicae*(Cornell University)**10**. doi:10.14708/am.v10i0.805. - ↑ Burton, David M. (1997),
*The History of Mathematics: An introduction*(Third ed.), New York: McGraw–Hill, pp. 558–559, ISBN 978-0-07-009465-9 - ↑ Miller, Jeff (1 December 2004),
*Earliest Uses of Symbols of Calculus*, http://jeff560.tripod.com/calculus.html, retrieved 2008-12-18 - ↑ Weisstein, Eric W.. "Epsilon-Delta Definition" (in en). https://mathworld.wolfram.com/Epsilon-DeltaDefinition.html.
- ↑ Weisstein, Eric W.. "Limit" (in en). https://mathworld.wolfram.com/Limit.html.
- ↑ For example, Limit at
*Encyclopedia of Mathematics* - ↑ Stewart, James (2020). "Chapter 14.2 Limits and Continuity".
*Multivariable Calculus*(9th ed.). pp. 952. ISBN 9780357042922. - ↑ Stewart, James (2020). "Chapter 14.2 Limits and Continuity".
*Multivariable Calculus*(9th ed.). pp. 953. ISBN 9780357042922. - ↑
^{11.0}^{11.1}^{11.2}Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity". *Mathematical Analysis, Volume I*. pp. 219–220. ISBN 9781617386473. - ↑
^{12.0}^{12.1}Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity". *Mathematical Analysis, Volume I*. pp. 220. ISBN 9781617386473. - ↑ Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity".
*Mathematical Analysis, Volume I*. pp. 223. ISBN 9781617386473. - ↑ Taylor, Angus E. (2012).
*General Theory of Functions and Integration*. Dover Books on Mathematics Series. p. 139-140. ISBN 9780486152141. - ↑ Rudin, W (1986).
*Principles of mathematical analysis*. McGraw - Hill Book C. pp. 84. OCLC 962920758. http://worldcat.org/oclc/962920758. - ↑
^{16.0}^{16.1}Hartman, Gregory (2019). "The Calculus of Vector-Valued Functions II" (in en). https://math.libretexts.org/Courses/Georgia_State_University_-_Perimeter_College/MATH_2215%3A_Calculus_III/13%3A_Vector-valued_Functions/The_Calculus_of_Vector-Valued_Functions_II. - ↑ Zakon, Elias (2011). "Chapter 4. Function Limits and Continuity".
*Mathematical Analysis, Volume I*. pp. 172. ISBN 9781617386473. - ↑ Rudin, W (1986).
*Principles of mathematical analysis*. McGraw - Hill Book C. pp. 150–151. OCLC 962920758. http://worldcat.org/oclc/962920758. - ↑ Keisler, H. Jerome (2008), "Quantifiers in limits",
*Andrzej Mostowski and foundational studies*, IOS, Amsterdam, pp. 151–170, http://www.math.wisc.edu/~keisler/limquant7.pdf - ↑ Hrbacek, K. (2007), "Stratified Analysis?", in Van Den Berg, I.; Neves, V.,
*The Strength of Nonstandard Analysis*, Springer - ↑ Bŀaszczyk, Piotr (2012), "Ten misconceptions from the history of analysis and their debunking",
*Foundations of Science***18**(1): 43–74, doi:10.1007/s10699-012-9285-8 - ↑ F. Riesz (7 April 1908), "Stetigkeitsbegriff und abstrakte Mengenlehre (The Concept of Continuity and Abstract Set Theory)",
*1908 International Congress of Mathematicians*

## References

- Apostol, Tom M. (1974),
*Mathematical Analysis*(2 ed.), Addison–Wesley, ISBN 0-201-00288-4 - Bartle, Robert (1967),
*The elements of real analysis*, Wiley - Courant, Richard (1924),
*Vorlesungen über Differential- und Integralrechnung*, Springer Verlag - Hardy, G.H. (1921),
*A course in pure mathematics*, Cambridge University Press - Hubbard, John H. (2015),
*Vector calculus, linear algebra, and differential forms: A unified approach*(Fifth ed.), Matrix Editions - Page, Warren; Hersh, Reuben; Selden, Annie et al., eds. (2002), "Media Highlights",
*The College Mathematics***33**(2): 147–154. - Rudin, Walter (1964),
*Principles of mathematical analysis*, McGraw-Hill - Sutherland, W. A. (1975),
*Introduction to Metric and Topological Spaces*, Oxford: Oxford University Press, ISBN 0-19-853161-3 - Sherbert, Robert (2000),
*Introduction to real analysis*, Wiley - Whittaker; Watson (1904),
*A Course of Modern Analysis*, Cambridge University Press

## External links

- MacTutor History of Weierstrass.
- MacTutor History of Bolzano
- Visual Calculus by Lawrence S. Husch, University of Tennessee (2001)

Original source: https://en.wikipedia.org/wiki/Limit of a function.