Real analysis
In mathematics, the branch of real analysis studies the behavior of real numbers, sequences and series of real numbers, and real functions.[1] Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability.
Real analysis is distinguished from complex analysis, which deals with the study of complex numbers and their functions.
Scope
Construction of the real numbers
The theorems of real analysis rely on the properties of the real number system, which must be established. The real number system consists of an uncountable set ([math]\displaystyle{ \mathbb{R} }[/math]), together with two binary operations denoted + and ⋅, and a total order denoted ≤. The operations make the real numbers a field, and, along with the order, an ordered field. The real number system is the unique complete ordered field, in the sense that any other complete ordered field is isomorphic to it. Intuitively, completeness means that there are no 'gaps' (or 'holes') in the real numbers. This property distinguishes the real numbers from other ordered fields (e.g., the rational numbers [math]\displaystyle{ \mathbb{Q} }[/math]) and is critical to the proof of several key properties of functions of the real numbers. The completeness of the reals is often conveniently expressed as the least upper bound property (see below).
Order properties of the real numbers
The real numbers have various lattice-theoretic properties that are absent in the complex numbers. Also, the real numbers form an ordered field, in which sums and products of positive numbers are also positive. Moreover, the ordering of the real numbers is total, and the real numbers have the least upper bound property:
Every nonempty subset of [math]\displaystyle{ \mathbb{R} }[/math] that has an upper bound has a least upper bound that is also a real number.
These order-theoretic properties lead to a number of fundamental results in real analysis, such as the monotone convergence theorem, the intermediate value theorem and the mean value theorem.
However, while the results in real analysis are stated for real numbers, many of these results can be generalized to other mathematical objects. In particular, many ideas in functional analysis and operator theory generalize properties of the real numbers – such generalizations include the theories of Riesz spaces and positive operators. Mathematicians also consider the real and imaginary parts of complex sequences, or the pointwise evaluation of sequences of operators.
Topological properties of the real numbers
Many of the theorems of real analysis are consequences of the topological properties of the real number line. The order properties of the real numbers described above are closely related to these topological properties. As a topological space, the real numbers have a standard topology, which is the order topology induced by the order [math]\displaystyle{ \lt }[/math]. Alternatively, by defining the metric or distance function [math]\displaystyle{ d:\mathbb{R}\times\mathbb{R}\to\mathbb{R}_{\geq 0} }[/math] using the absolute value function as [math]\displaystyle{ d(x, y) = |x - y| }[/math], the real numbers become the prototypical example of a metric space. The topology induced by the metric [math]\displaystyle{ d }[/math] turns out to be identical to the standard topology induced by the order [math]\displaystyle{ \lt }[/math]. Theorems like the intermediate value theorem that are essentially topological in nature can often be proved in the more general setting of metric or topological spaces rather than in [math]\displaystyle{ \mathbb{R} }[/math] only. Such proofs are often shorter or simpler than classical proofs that apply direct methods.
Sequences
A sequence is a function whose domain is a countable, totally ordered set.[2] The domain is usually taken to be the natural numbers,[3] although it is occasionally convenient to also consider bidirectional sequences indexed by the set of all integers, including negative indices.
Of interest in real analysis, a real-valued sequence, here indexed by the natural numbers, is a map [math]\displaystyle{ a : \N \to \R : n \mapsto a_n }[/math]. Each [math]\displaystyle{ a(n) = a_n }[/math] is referred to as a term (or, less commonly, an element) of the sequence. A sequence is rarely denoted explicitly as a function; instead, by convention, it is almost always notated as if it were an ordered ∞-tuple, with individual terms or a general term enclosed in parentheses:[4] [math]\displaystyle{ (a_n) = (a_n)_{n \in \N}=(a_1, a_2, a_3, \dots) . }[/math] A sequence that tends to a limit (i.e., [math]\displaystyle{ \lim_{n \to \infty} a_n }[/math] exists) is said to be convergent; otherwise it is divergent. (See the section on limits and convergence for details.) A real-valued sequence [math]\displaystyle{ (a_n) }[/math] is bounded if there exists [math]\displaystyle{ M\in\R }[/math] such that [math]\displaystyle{ |a_n|\lt M }[/math] for all [math]\displaystyle{ n\in\mathbb{N} }[/math]. A real-valued sequence [math]\displaystyle{ (a_n) }[/math] is monotonically increasing or decreasing if [math]\displaystyle{ a_1 \leq a_2 \leq a_3 \leq \cdots }[/math] or [math]\displaystyle{ a_1 \geq a_2 \geq a_3 \geq \cdots }[/math] holds, respectively. If either holds, the sequence is said to be monotonic. The monotonicity is strict if the chained inequalities still hold with [math]\displaystyle{ \leq }[/math] or [math]\displaystyle{ \geq }[/math] replaced by < or >.
Given a sequence [math]\displaystyle{ (a_n) }[/math], another sequence [math]\displaystyle{ (b_k) }[/math] is a subsequence of [math]\displaystyle{ (a_n) }[/math] if [math]\displaystyle{ b_k=a_{n_k} }[/math] for all positive integers [math]\displaystyle{ k }[/math] and [math]\displaystyle{ (n_k) }[/math] is a strictly increasing sequence of natural numbers.
Limits and convergence
Roughly speaking, a limit is the value that a function or a sequence "approaches" as the input or index approaches some value.[5] (This value can include the symbols [math]\displaystyle{ \pm\infty }[/math] when addressing the behavior of a function or sequence as the variable increases or decreases without bound.) The idea of a limit is fundamental to calculus (and mathematical analysis in general) and its formal definition is used in turn to define notions like continuity, derivatives, and integrals. (In fact, the study of limiting behavior has been used as a characteristic that distinguishes calculus and mathematical analysis from other branches of mathematics.)
The concept of a limit was informally introduced for functions by Newton and Leibniz, at the end of the 17th century, in building infinitesimal calculus. For sequences, the concept was introduced by Cauchy and made rigorous during the 19th century by Bolzano and Weierstrass, who gave the modern ε-δ definition, which follows.
Definition. Let [math]\displaystyle{ f }[/math] be a real-valued function defined on [math]\displaystyle{ E\subset\mathbb{R} }[/math]. We say that [math]\displaystyle{ f(x) }[/math] tends to [math]\displaystyle{ L }[/math] as [math]\displaystyle{ x }[/math] approaches [math]\displaystyle{ x_0 }[/math], or that the limit of [math]\displaystyle{ f(x) }[/math] as [math]\displaystyle{ x }[/math] approaches [math]\displaystyle{ x_0 }[/math] is [math]\displaystyle{ L }[/math] if, for any [math]\displaystyle{ \varepsilon\gt 0 }[/math], there exists [math]\displaystyle{ \delta\gt 0 }[/math] such that for all [math]\displaystyle{ x\in E }[/math], [math]\displaystyle{ 0 \lt |x - x_0| \lt \delta }[/math] implies that [math]\displaystyle{ |f(x) - L| \lt \varepsilon }[/math]. We write this symbolically as [math]\displaystyle{ f(x)\to L\ \ \text{as}\ \ x\to x_0 , }[/math] or as [math]\displaystyle{ \lim_{x\to x_0} f(x) = L . }[/math] Intuitively, this definition can be thought of in the following way: We say that [math]\displaystyle{ f(x)\to L }[/math] as [math]\displaystyle{ x\to x_0 }[/math], when, given any positive number [math]\displaystyle{ \varepsilon }[/math], no matter how small, we can always find a [math]\displaystyle{ \delta }[/math], such that we can guarantee that [math]\displaystyle{ f(x) }[/math] and [math]\displaystyle{ L }[/math] are less than [math]\displaystyle{ \varepsilon }[/math] apart, as long as [math]\displaystyle{ x }[/math] (in the domain of [math]\displaystyle{ f }[/math]) is a real number that is less than [math]\displaystyle{ \delta }[/math] away from [math]\displaystyle{ x_0 }[/math] but distinct from [math]\displaystyle{ x_0 }[/math]. The purpose of the last stipulation, which corresponds to the condition [math]\displaystyle{ 0\lt |x-x_0| }[/math] in the definition, is to ensure that [math]\displaystyle{ \lim_{x \to x_0} f(x)=L }[/math] does not imply anything about the value of [math]\displaystyle{ f(x_0) }[/math] itself. Actually, [math]\displaystyle{ x_0 }[/math] does not even need to be in the domain of [math]\displaystyle{ f }[/math] in order for [math]\displaystyle{ \lim_{x \to x_0} f(x) }[/math] to exist.
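As an informal numerical illustration (not a proof), the following Python sketch probes this definition for the sample function f(x) = (x^2 - 1)/(x - 1), which is undefined at x_0 = 1 yet satisfies f(x) → 2 as x → 1; the choice δ = ε works here only because |f(x) - 2| = |x - 1| for this particular function, and the sampled points and tolerances are illustrative.

```python
# Numerical illustration (not a proof) of the epsilon-delta definition of a limit.
# f(x) = (x^2 - 1)/(x - 1) is undefined at x0 = 1, but f(x) -> 2 as x -> 1.

def f(x):
    return (x**2 - 1) / (x - 1)

x0, L = 1.0, 2.0

for eps in (0.1, 0.01, 0.001):
    delta = eps  # for this particular f, delta = eps works since |f(x) - 2| = |x - 1|
    # sample points with 0 < |x - x0| < delta and check |f(x) - L| < eps
    samples = [x0 + s * delta * t for s in (-1, 1) for t in (0.9, 0.5, 0.1)]
    ok = all(abs(f(x) - L) < eps for x in samples)
    print(f"eps={eps}: delta={delta} keeps sampled |f(x)-L| < eps -> {ok}")
```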
In a slightly different but related context, the concept of a limit applies to the behavior of a sequence [math]\displaystyle{ (a_n) }[/math] when [math]\displaystyle{ n }[/math] becomes large.
Definition. Let [math]\displaystyle{ (a_n) }[/math] be a real-valued sequence. We say that [math]\displaystyle{ (a_n) }[/math] converges to [math]\displaystyle{ a }[/math] if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there exists a natural number [math]\displaystyle{ N }[/math] such that [math]\displaystyle{ n\geq N }[/math] implies that [math]\displaystyle{ |a-a_n| \lt \varepsilon }[/math]. We write this symbolically as [math]\displaystyle{ a_n \to a\ \ \text{as}\ \ n \to \infty , }[/math]or as[math]\displaystyle{ \lim_{n \to \infty} a_n = a ; }[/math] if [math]\displaystyle{ (a_n) }[/math] fails to converge, we say that [math]\displaystyle{ (a_n) }[/math] diverges.
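The following Python sketch is an informal check of this definition for the sequence a_n = 1/n, which converges to 0: for each sampled ε it exhibits an admissible N (a simple choice, not necessarily the smallest) and verifies the next thousand terms; the sampled values of ε and the finite tail checked are illustrative only.

```python
import math

# Illustrative check of the definition of convergence for a_n = 1/n -> 0:
# for each epsilon, exhibit an N such that n >= N implies |a_n - 0| < epsilon.

def a(n):
    return 1.0 / n

for eps in (0.5, 0.1, 0.001):
    N = math.floor(1 / eps) + 2          # one admissible N: n >= N gives 1/n < eps
    tail_ok = all(abs(a(n) - 0.0) < eps for n in range(N, N + 1000))
    print(f"eps={eps}: N={N}, next 1000 terms stay within eps -> {tail_ok}")
```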
Generalizing to a real-valued function of a real variable, a slight modification of this definition (replacement of sequence [math]\displaystyle{ (a_n) }[/math] and term [math]\displaystyle{ a_n }[/math] by function [math]\displaystyle{ f }[/math] and value [math]\displaystyle{ f(x) }[/math] and natural numbers [math]\displaystyle{ N }[/math] and [math]\displaystyle{ n }[/math] by real numbers [math]\displaystyle{ M }[/math] and [math]\displaystyle{ x }[/math], respectively) yields the definition of the limit of [math]\displaystyle{ f(x) }[/math] as [math]\displaystyle{ x }[/math] increases without bound, notated [math]\displaystyle{ \lim_{x \to \infty} f(x) }[/math]. Reversing the inequality [math]\displaystyle{ x\geq M }[/math] to [math]\displaystyle{ x \leq M }[/math] gives the corresponding definition of the limit of [math]\displaystyle{ f(x) }[/math] as [math]\displaystyle{ x }[/math] decreases without bound, [math]\displaystyle{ \lim_{x \to -\infty} f(x) }[/math].
Sometimes, it is useful to conclude that a sequence converges, even though the value to which it converges is unknown or irrelevant. In these cases, the concept of a Cauchy sequence is useful.
Definition. Let [math]\displaystyle{ (a_n) }[/math] be a real-valued sequence. We say that [math]\displaystyle{ (a_n) }[/math] is a Cauchy sequence if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there exists a natural number [math]\displaystyle{ N }[/math] such that [math]\displaystyle{ m,n\geq N }[/math] implies that [math]\displaystyle{ |a_m-a_n| \lt \varepsilon }[/math].
It can be shown that a real-valued sequence is Cauchy if and only if it is convergent. This property of the real numbers is expressed by saying that the real numbers endowed with the standard metric, [math]\displaystyle{ (\R, |\cdot|) }[/math], is a complete metric space. In a general metric space, however, a Cauchy sequence need not converge.
In addition, for real-valued sequences that are monotonic, it can be shown that the sequence is bounded if and only if it is convergent.
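As a rough numerical illustration of the Cauchy criterion (not a proof of convergence or divergence), the Python sketch below compares the partial sums of Σ 1/n², whose tail differences shrink, with the harmonic partial sums, for which the gap between the N-th and 2N-th partial sums never drops below 1/2; the sampled values of N are arbitrary.

```python
# Rough numerical illustration of the Cauchy criterion for partial sums:
# s_n = sum_{k<=n} 1/k^2 looks Cauchy (tail differences shrink),
# while the harmonic partial sums h_n satisfy h_{2N} - h_N >= 1/2 for every N.

def partial_sum(term, n):
    return sum(term(k) for k in range(1, n + 1))

for N in (10, 100, 1000):
    s_N  = partial_sum(lambda k: 1.0 / k**2, N)
    s_2N = partial_sum(lambda k: 1.0 / k**2, 2 * N)
    h_N  = partial_sum(lambda k: 1.0 / k, N)
    h_2N = partial_sum(lambda k: 1.0 / k, 2 * N)
    print(f"N={N}: |s_2N - s_N| = {s_2N - s_N:.6f}, |h_2N - h_N| = {h_2N - h_N:.6f}")
```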
Uniform and pointwise convergence for sequences of functions
In addition to sequences of numbers, one may also speak of sequences of functions on [math]\displaystyle{ E\subset \mathbb{R} }[/math], that is, infinite, ordered families of functions [math]\displaystyle{ f_n:E\to\mathbb{R} }[/math], denoted [math]\displaystyle{ (f_n)_{n=1}^\infty }[/math], and their convergence properties. However, in the case of sequences of functions, there are two kinds of convergence, known as pointwise convergence and uniform convergence, that need to be distinguished.
Roughly speaking, pointwise convergence of functions [math]\displaystyle{ f_n }[/math] to a limiting function [math]\displaystyle{ f:E\to\mathbb{R} }[/math], denoted [math]\displaystyle{ f_n \rightarrow f }[/math], simply means that given any [math]\displaystyle{ x\in E }[/math], [math]\displaystyle{ f_n(x)\to f(x) }[/math] as [math]\displaystyle{ n\to\infty }[/math]. In contrast, uniform convergence is a stronger type of convergence, in the sense that a uniformly convergent sequence of functions also converges pointwise, but not conversely. Uniform convergence requires members of the family of functions, [math]\displaystyle{ f_n }[/math], to fall within some error [math]\displaystyle{ \varepsilon \gt 0 }[/math] of [math]\displaystyle{ f }[/math] for every value of [math]\displaystyle{ x\in E }[/math], whenever [math]\displaystyle{ n\geq N }[/math], for some integer [math]\displaystyle{ N }[/math]. For a family of functions to uniformly converge, sometimes denoted [math]\displaystyle{ f_n\rightrightarrows f }[/math], such a value of [math]\displaystyle{ N }[/math] must exist for any [math]\displaystyle{ \varepsilon\gt 0 }[/math] given, no matter how small. Intuitively, we can visualize this situation by imagining that, for a large enough [math]\displaystyle{ N }[/math], the functions [math]\displaystyle{ f_N, f_{N+1}, f_{N+2},\ldots }[/math] are all confined within a 'tube' of width [math]\displaystyle{ 2\varepsilon }[/math] about [math]\displaystyle{ f }[/math] (that is, between [math]\displaystyle{ f - \varepsilon }[/math] and [math]\displaystyle{ f+\varepsilon }[/math]) for every value in their domain [math]\displaystyle{ E }[/math].
The distinction between pointwise and uniform convergence is important when exchanging the order of two limiting operations (e.g., taking a limit, a derivative, or integral) is desired: in order for the exchange to be well-behaved, many theorems of real analysis call for uniform convergence. For example, a sequence of continuous functions (see below) is guaranteed to converge to a continuous limiting function if the convergence is uniform, while the limiting function may not be continuous if convergence is only pointwise. Karl Weierstrass is generally credited for clearly defining the concept of uniform convergence and fully investigating its implications.
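The standard example f_n(x) = x^n on [0, 1) illustrates the distinction: for each fixed x the values x^n tend to 0, yet the supremum of |f_n| over the interval never tends to 0, so the convergence is pointwise but not uniform. The Python sketch below evaluates f_n at a fixed point and at the moving point 1 - 1/n (whose values approach 1/e); the specific sample points are illustrative only.

```python
# Pointwise vs. uniform convergence for f_n(x) = x**n on [0, 1).
# For each fixed x the values x**n tend to 0, yet evaluating f_n at the
# moving point x_n = 1 - 1/n gives values bounded away from 0, so
# sup_x |f_n(x) - 0| does not tend to 0 and the convergence is not uniform.

def f_n(n, x):
    return x ** n

for n in (1, 10, 100, 1000, 10000):
    pointwise = f_n(n, 0.5)          # fixed x = 0.5: tends to 0
    moving    = f_n(n, 1 - 1 / n)    # x_n = 1 - 1/n: tends to 1/e, not 0
    print(f"n={n}: f_n(0.5)={pointwise:.3e}, f_n(1 - 1/n)={moving:.3f}")
```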
Compactness
Compactness is a concept from general topology that plays an important role in many of the theorems of real analysis. The property of compactness is a generalization of the notion of a set being closed and bounded. (In the context of real analysis, these notions are equivalent: a set in Euclidean space is compact if and only if it is closed and bounded.) Briefly, a closed set contains all of its boundary points, while a set is bounded if there exists a real number such that the distance between any two points of the set is less than that number. In [math]\displaystyle{ \mathbb{R} }[/math], sets that are closed and bounded, and therefore compact, include the empty set, any finite number of points, closed intervals, and their finite unions. However, this list is not exhaustive; for instance, the set [math]\displaystyle{ \{1/n:n\in\mathbb{N}\}\cup \{0\} }[/math] is a compact set; the Cantor ternary set [math]\displaystyle{ \mathcal{C}\subset [0,1] }[/math] is another example of a compact set. On the other hand, the set [math]\displaystyle{ \{1/n:n\in\mathbb{N}\} }[/math] is not compact because it is bounded but not closed, as the boundary point 0 is not a member of the set. The set [math]\displaystyle{ [0,\infty) }[/math] is also not compact because it is closed but not bounded.
For subsets of the real numbers, there are several equivalent definitions of compactness.
Definition. A set [math]\displaystyle{ E\subset\mathbb{R} }[/math] is compact if it is closed and bounded.
This definition also holds for Euclidean space of any finite dimension, [math]\displaystyle{ \mathbb{R}^n }[/math], but it is not valid for metric spaces in general. The equivalence of the definition with the definition of compactness based on subcovers, given later in this section, is known as the Heine-Borel theorem.
A more general definition that applies to all metric spaces uses the notion of a subsequence (see above).
Definition. A set [math]\displaystyle{ E }[/math] in a metric space is compact if every sequence in [math]\displaystyle{ E }[/math] has a convergent subsequence.
This particular property is known as subsequential compactness. In [math]\displaystyle{ \mathbb{R} }[/math], a set is subsequentially compact if and only if it is closed and bounded, making this definition equivalent to the one given above. Subsequential compactness is equivalent to the definition of compactness based on subcovers for metric spaces, but not for topological spaces in general.
The most general definition of compactness relies on the notion of open covers and subcovers, which is applicable to topological spaces (and thus to metric spaces and [math]\displaystyle{ \mathbb{R} }[/math] as special cases). In brief, a collection of open sets [math]\displaystyle{ U_{\alpha} }[/math] is said to be an open cover of set [math]\displaystyle{ X }[/math] if the union of these sets is a superset of [math]\displaystyle{ X }[/math]. This open cover is said to have a finite subcover if a finite subcollection of the [math]\displaystyle{ U_{\alpha} }[/math] could be found that also covers [math]\displaystyle{ X }[/math].
Definition. A set [math]\displaystyle{ X }[/math] in a topological space is compact if every open cover of [math]\displaystyle{ X }[/math] has a finite subcover.
Compact sets are well-behaved with respect to properties like convergence and continuity. For instance, any Cauchy sequence in a compact metric space is convergent. As another example, the image of a compact metric space under a continuous map is also compact.
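As an informal illustration of subsequential compactness, consider the bounded sequence a_n = (-1)^n(1 + 1/n), which lies in [-2, 2] and does not converge; the Bolzano–Weierstrass theorem guarantees that some subsequence converges, and indeed the even-indexed terms do. The Python sketch below displays this; the particular sequence and the sampled indices are illustrative choices.

```python
# Subsequential compactness, illustrated: the bounded sequence
# a_n = (-1)**n * (1 + 1/n) lies in [-2, 2] and does not converge,
# but the subsequence of even-indexed terms converges (to 1),
# as the Bolzano-Weierstrass theorem guarantees some subsequence must.

def a(n):
    return (-1) ** n * (1 + 1 / n)

full = [a(n) for n in range(1, 11)]
even_subseq = [a(2 * k) for k in range(1, 100001, 20000)]   # n_k = 2k, strictly increasing

print("first terms of (a_n):", [f"{x:+.3f}" for x in full])
print("even-indexed subsequence samples:", [f"{x:+.6f}" for x in even_subseq])
```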
Continuity
A function from the set of real numbers to the real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve with no "holes" or "jumps".
There are several ways to make this intuition mathematically rigorous. Several definitions of varying levels of generality can be given. In cases where two or more definitions are applicable, they are readily shown to be equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the first definition given below, [math]\displaystyle{ f:I\to\R }[/math] is a function defined on a non-degenerate interval [math]\displaystyle{ I }[/math] of the set of real numbers as its domain. Some possibilities include [math]\displaystyle{ I=\R }[/math], the whole set of real numbers, an open interval [math]\displaystyle{ I = (a, b) = \{x \in \R \mid a \lt x \lt b \}, }[/math] or a closed interval [math]\displaystyle{ I = [a, b] = \{x \in \R \mid a \leq x \leq b\}. }[/math] Here, [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are distinct real numbers, and we exclude the case of [math]\displaystyle{ I }[/math] being empty or consisting of only one point, in particular.
Definition. If [math]\displaystyle{ I\subset \mathbb{R} }[/math] is a non-degenerate interval, we say that [math]\displaystyle{ f:I \to \R }[/math] is continuous at [math]\displaystyle{ p\in I }[/math] if [math]\displaystyle{ \lim_{x \to p} f(x) = f(p) }[/math]. We say that [math]\displaystyle{ f }[/math] is a continuous map if [math]\displaystyle{ f }[/math] is continuous at every [math]\displaystyle{ p\in I }[/math].
In contrast to the requirements for [math]\displaystyle{ f }[/math] to have a limit at a point [math]\displaystyle{ p }[/math], which do not constrain the behavior of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ p }[/math] itself, the following two conditions, in addition to the existence of [math]\displaystyle{ \lim_{x\to p} f(x) }[/math], must also hold in order for [math]\displaystyle{ f }[/math] to be continuous at [math]\displaystyle{ p }[/math]: (i) [math]\displaystyle{ f }[/math] must be defined at [math]\displaystyle{ p }[/math], i.e., [math]\displaystyle{ p }[/math] is in the domain of [math]\displaystyle{ f }[/math]; and (ii) [math]\displaystyle{ f(x)\to f(p) }[/math] as [math]\displaystyle{ x\to p }[/math]. The definition above actually applies to any domain [math]\displaystyle{ E }[/math] that does not contain an isolated point, or equivalently, [math]\displaystyle{ E }[/math] where every [math]\displaystyle{ p\in E }[/math] is a limit point of [math]\displaystyle{ E }[/math]. A more general definition applying to [math]\displaystyle{ f:X\to\mathbb{R} }[/math] with a general domain [math]\displaystyle{ X\subset \mathbb{R} }[/math] is the following:
Definition. If [math]\displaystyle{ X }[/math] is an arbitrary subset of [math]\displaystyle{ \mathbb{R} }[/math], we say that [math]\displaystyle{ f:X\to\mathbb{R} }[/math] is continuous at [math]\displaystyle{ p\in X }[/math] if, for any [math]\displaystyle{ \varepsilon\gt 0 }[/math], there exists [math]\displaystyle{ \delta\gt 0 }[/math] such that for all [math]\displaystyle{ x\in X }[/math], [math]\displaystyle{ |x-p|\lt \delta }[/math] implies that [math]\displaystyle{ |f(x)-f(p)| \lt \varepsilon }[/math]. We say that [math]\displaystyle{ f }[/math] is a continuous map if [math]\displaystyle{ f }[/math] is continuous at every [math]\displaystyle{ p\in X }[/math].
A consequence of this definition is that [math]\displaystyle{ f }[/math] is trivially continuous at any isolated point [math]\displaystyle{ p\in X }[/math]. This somewhat unintuitive treatment of isolated points is necessary to ensure that our definition of continuity for functions on the real line is consistent with the most general definition of continuity for maps between topological spaces (which includes metric spaces and [math]\displaystyle{ \mathbb{R} }[/math] in particular as special cases). This definition, which extends beyond the scope of our discussion of real analysis, is given below for completeness.
Definition. If [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are topological spaces, we say that [math]\displaystyle{ f:X\to Y }[/math] is continuous at [math]\displaystyle{ p\in X }[/math] if [math]\displaystyle{ f^{-1} (V) }[/math] is a neighborhood of [math]\displaystyle{ p }[/math] in [math]\displaystyle{ X }[/math] for every neighborhood [math]\displaystyle{ V }[/math] of [math]\displaystyle{ f(p) }[/math] in [math]\displaystyle{ Y }[/math]. We say that [math]\displaystyle{ f }[/math] is a continuous map if [math]\displaystyle{ f^{-1}(U) }[/math] is open in [math]\displaystyle{ X }[/math] for every [math]\displaystyle{ U }[/math] open in [math]\displaystyle{ Y }[/math].
(Here, [math]\displaystyle{ f^{-1}(S) }[/math] refers to the preimage of [math]\displaystyle{ S\subset Y }[/math] under [math]\displaystyle{ f }[/math].)
Uniform continuity
Definition. If [math]\displaystyle{ X }[/math] is a subset of the real numbers, we say a function [math]\displaystyle{ f:X\to\mathbb{R} }[/math] is uniformly continuous on [math]\displaystyle{ X }[/math] if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there exists a [math]\displaystyle{ \delta\gt 0 }[/math] such that for all [math]\displaystyle{ x,y\in X }[/math], [math]\displaystyle{ |x-y|\lt \delta }[/math] implies that [math]\displaystyle{ |f(x)-f(y)| \lt \varepsilon }[/math].
Explicitly, when a function is uniformly continuous on [math]\displaystyle{ X }[/math], the choice of [math]\displaystyle{ \delta }[/math] needed to fulfill the definition must work for all of [math]\displaystyle{ X }[/math] for a given [math]\displaystyle{ \varepsilon }[/math]. In contrast, when a function is continuous at every point [math]\displaystyle{ p\in X }[/math] (or said to be continuous on [math]\displaystyle{ X }[/math]), the choice of [math]\displaystyle{ \delta }[/math] may depend on both [math]\displaystyle{ \varepsilon }[/math] and [math]\displaystyle{ p }[/math]. In contrast to simple continuity, uniform continuity is a property of a function that only makes sense with a specified domain; to speak of uniform continuity at a single point [math]\displaystyle{ p }[/math] is meaningless.
On a compact set, it is easily shown that all continuous functions are uniformly continuous. If [math]\displaystyle{ E }[/math] is a bounded noncompact subset of [math]\displaystyle{ \mathbb{R} }[/math], then there exists [math]\displaystyle{ f:E\to\mathbb{R} }[/math] that is continuous but not uniformly continuous. As a simple example, consider [math]\displaystyle{ f:(0,1)\to\mathbb{R} }[/math] defined by [math]\displaystyle{ f(x)=1/x }[/math]. Given any [math]\displaystyle{ \varepsilon \gt 0 }[/math] and any single choice of [math]\displaystyle{ \delta\gt 0 }[/math], we can choose points [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] close to 0 with [math]\displaystyle{ |x-y|\lt \delta }[/math] but [math]\displaystyle{ |f(x)-f(y)| \gt \varepsilon }[/math].
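The following Python sketch makes this failure concrete for f(x) = 1/x: for the pairs x_n = 1/n and y_n = 1/(n + 1), the gap |x_n - y_n| falls below any given δ while |f(x_n) - f(y_n)| = 1 always; the sampled values of δ are arbitrary.

```python
# For f(x) = 1/x on (0, 1): continuity holds at every point, but uniform
# continuity fails.  For the pair x_n = 1/n, y_n = 1/(n+1) the gap
# |x_n - y_n| shrinks below any delta, while |f(x_n) - f(y_n)| = 1 always.

def f(x):
    return 1.0 / x

for delta in (0.1, 0.01, 0.001):
    n = 2
    while 1 / (n * (n + 1)) >= delta:   # find n with |x_n - y_n| < delta
        n += 1
    x, y = 1 / n, 1 / (n + 1)
    print(f"delta={delta}: |x-y|={abs(x - y):.2e}, |f(x)-f(y)|={abs(f(x) - f(y)):.1f}")
```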
Absolute continuity
Definition. Let [math]\displaystyle{ I\subset\mathbb{R} }[/math] be an interval on the real line. A function [math]\displaystyle{ f:I \to \mathbb{R} }[/math] is said to be absolutely continuous on [math]\displaystyle{ I }[/math] if for every positive number [math]\displaystyle{ \varepsilon }[/math], there is a positive number [math]\displaystyle{ \delta }[/math] such that whenever a finite sequence of pairwise disjoint sub-intervals [math]\displaystyle{ (x_1, y_1), (x_2,y_2),\ldots, (x_n,y_n) }[/math] of [math]\displaystyle{ I }[/math] satisfies[6]
- [math]\displaystyle{ \sum_{k=1}^{n} (y_k - x_k) \lt \delta }[/math]
then
- [math]\displaystyle{ \sum_{k=1}^{n} | f(y_k) - f(x_k) | \lt \varepsilon. }[/math]
Absolutely continuous functions are continuous: consider the case n = 1 in this definition. The collection of all absolutely continuous functions on I is denoted AC(I). Absolute continuity is a fundamental concept in the Lebesgue theory of integration, allowing the formulation of a generalized version of the fundamental theorem of calculus that applies to the Lebesgue integral.
Differentiation
The notion of the derivative of a function or differentiability originates from the concept of approximating a function near a given point using the "best" linear approximation. This approximation, if it exists, is unique and is given by the line that is tangent to the function at the given point [math]\displaystyle{ a }[/math], and the slope of the line is the derivative of the function at [math]\displaystyle{ a }[/math].
A function [math]\displaystyle{ f:\mathbb{R}\to\mathbb{R} }[/math] is differentiable at [math]\displaystyle{ a }[/math] if the limit
- [math]\displaystyle{ f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h} }[/math]
exists. This limit is known as the derivative of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ a }[/math], and the function [math]\displaystyle{ f' }[/math], possibly defined on only a subset of [math]\displaystyle{ \mathbb{R} }[/math], is the derivative (or derivative function) of [math]\displaystyle{ f }[/math]. If the derivative exists everywhere, the function is said to be differentiable.
As a simple consequence of the definition, [math]\displaystyle{ f }[/math] is continuous at [math]\displaystyle{ a }[/math] if it is differentiable there. Differentiability is therefore a stronger regularity condition (condition describing the "smoothness" of a function) than continuity, and it is possible for a function to be continuous on the entire real line but not differentiable anywhere (see Weierstrass's nowhere differentiable continuous function). It is possible to discuss the existence of higher-order derivatives as well, by finding the derivative of a derivative function, and so on.
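As an informal numerical illustration of the limit defining the derivative, the Python sketch below evaluates difference quotients of f(x) = x² at a = 3; they approach f′(3) = 6 as h shrinks (the sampled step sizes are arbitrary, and very small h eventually runs into floating-point cancellation).

```python
# The difference quotient (f(a+h) - f(a)) / h for f(x) = x**2 at a = 3
# approaches the derivative f'(3) = 6 as h shrinks (up to floating-point error).

def f(x):
    return x ** 2

a = 3.0
for h in (1.0, 0.1, 0.01, 1e-4, 1e-6):
    quotient = (f(a + h) - f(a)) / h
    print(f"h={h:g}: difference quotient = {quotient:.8f}")
```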
One can classify functions by their differentiability class. The class [math]\displaystyle{ C^0 }[/math] (sometimes [math]\displaystyle{ C^0([a,b]) }[/math] to indicate the interval of applicability) consists of all continuous functions. The class [math]\displaystyle{ C^1 }[/math] consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a [math]\displaystyle{ C^1 }[/math] function is exactly a function whose derivative exists and is of class [math]\displaystyle{ C^0 }[/math]. In general, the classes [math]\displaystyle{ C^k }[/math] can be defined recursively by declaring [math]\displaystyle{ C^0 }[/math] to be the set of all continuous functions and declaring [math]\displaystyle{ C^k }[/math] for any positive integer [math]\displaystyle{ k }[/math] to be the set of all differentiable functions whose derivative is in [math]\displaystyle{ C^{k-1} }[/math]. In particular, [math]\displaystyle{ C^k }[/math] is contained in [math]\displaystyle{ C^{k-1} }[/math] for every [math]\displaystyle{ k }[/math], and there are examples to show that this containment is strict. Class [math]\displaystyle{ C^\infty }[/math] is the intersection of the sets [math]\displaystyle{ C^k }[/math] as [math]\displaystyle{ k }[/math] varies over the non-negative integers, and the members of this class are known as the smooth functions. Class [math]\displaystyle{ C^\omega }[/math] consists of all analytic functions, and is strictly contained in [math]\displaystyle{ C^\infty }[/math] (see bump function for a smooth function that is not analytic).
Series
A series formalizes the imprecise notion of taking the sum of an endless sequence of numbers. The idea that taking the sum of an "infinite" number of terms can lead to a finite result was counterintuitive to the ancient Greeks and led to the formulation of a number of paradoxes by Zeno and other philosophers. The modern notion of assigning a value to a series avoids dealing with the ill-defined notion of adding an "infinite" number of terms. Instead, the finite sum of the first [math]\displaystyle{ n }[/math] terms of the sequence, known as a partial sum, is considered, and the concept of a limit is applied to the sequence of partial sums as [math]\displaystyle{ n }[/math] grows without bound. The series is assigned the value of this limit, if it exists.
Given an (infinite) sequence [math]\displaystyle{ (a_n) }[/math], we can define an associated series as the formal mathematical object [math]\displaystyle{ a_1 + a_2 + a_3 + \cdots = \sum_{n=1}^{\infty} a_n }[/math], sometimes simply written as [math]\displaystyle{ \sum a_n }[/math]. The partial sums of a series [math]\displaystyle{ \sum a_n }[/math] are the numbers [math]\displaystyle{ s_n=\sum_{j=1}^n a_j }[/math]. A series [math]\displaystyle{ \sum a_n }[/math] is said to be convergent if the sequence consisting of its partial sums, [math]\displaystyle{ (s_n) }[/math], is convergent; otherwise it is divergent. The sum of a convergent series is defined as the number [math]\displaystyle{ s = \lim_{n \to \infty} s_n }[/math].
The word "sum" is used here in a metaphorical sense as a shorthand for taking the limit of a sequence of partial sums and should not be interpreted as simply "adding" an infinite number of terms. For instance, in contrast to the behavior of finite sums, rearranging the terms of an infinite series may result in convergence to a different number (see the article on the Riemann rearrangement theorem for further discussion).
An example of a convergent series is a geometric series which forms the basis of one of Zeno's famous paradoxes:
- [math]\displaystyle{ \sum_{n=1}^\infty \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1 . }[/math]
In contrast, the harmonic series has been known since the Middle Ages to be a divergent series:
- [math]\displaystyle{ \sum_{n=1}^\infty \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots = \infty . }[/math]
(Here, "[math]\displaystyle{ =\infty }[/math]" is merely a notational convention to indicate that the partial sums of the series grow without bound.)
A series [math]\displaystyle{ \sum a_n }[/math] is said to converge absolutely if [math]\displaystyle{ \sum |a_n| }[/math] is convergent. A convergent series [math]\displaystyle{ \sum a_n }[/math] for which [math]\displaystyle{ \sum |a_n| }[/math] diverges is said to converge non-absolutely.[7] It is easily shown that absolute convergence of a series implies its convergence. On the other hand, an example of a series that converges non-absolutely is
- [math]\displaystyle{ \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2 . }[/math]
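As an informal numerical illustration, the Python sketch below computes partial sums of this alternating series, which approach ln 2 ≈ 0.693, alongside the partial sums of the absolute values, which grow without bound; the sampled values of n are arbitrary.

```python
import math

# The alternating harmonic series converges (to ln 2), but not absolutely:
# partial sums of the absolute values are the divergent harmonic partial sums.

def alt_partial(n):
    return sum((-1) ** (k - 1) / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(f"n={n}: partial sum = {alt_partial(n):.6f}  (ln 2 = {math.log(2):.6f}), "
          f"sum of |terms| = {sum(1 / k for k in range(1, n + 1)):.2f}")
```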
Taylor series
The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable at a real or complex number a is the power series
- [math]\displaystyle{ f(a) + \frac{f'(a)}{1!} (x-a) + \frac{f''(a)}{2!} (x-a)^2 + \frac{f^{(3)}(a)}{3!} (x-a)^3 + \cdots. }[/math]
which can be written in the more compact sigma notation as
- [math]\displaystyle{ \sum_{n=0} ^ {\infty} \frac {f^{(n)}(a)}{n!} \, (x-a)^{n} }[/math]
where n! denotes the factorial of n and ƒ^(n)(a) denotes the nth derivative of ƒ evaluated at the point a. The derivative of order zero of ƒ is defined to be ƒ itself, and (x − a)^0 and 0! are both defined to be 1. In the case that a = 0, the series is also called a Maclaurin series.
A Taylor series of f about point a may diverge, converge at only the point a, converge for all x such that [math]\displaystyle{ |x-a|\lt R }[/math] (the largest such R for which convergence is guaranteed is called the radius of convergence), or converge on the entire real line. Even a converging Taylor series may converge to a value different from the value of the function at that point. If the Taylor series at a point has a nonzero radius of convergence, and sums to the function in the disc of convergence, then the function is analytic. The analytic functions have many fundamental properties. In particular, an analytic function of a real variable extends naturally to a function of a complex variable. It is in this way that the exponential function, the logarithm, the trigonometric functions and their inverses are extended to functions of a complex variable.
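As an informal numerical illustration, the Python sketch below evaluates the partial sums of the Maclaurin series of the exponential function at x = 1; they approach e as the truncation degree N grows (the sampled degrees are arbitrary).

```python
import math

# Partial sums of the Maclaurin series of exp, sum_{n=0}^N x**n / n!,
# evaluated at x = 1; they approach e = exp(1) as N grows.

def taylor_exp(x, N):
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

for N in (1, 2, 5, 10, 20):
    approx = taylor_exp(1.0, N)
    print(f"N={N:2d}: partial sum = {approx:.12f}, error = {abs(approx - math.e):.2e}")
```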
Fourier series
A Fourier series decomposes a periodic function or periodic signal into a sum of simple oscillating functions, namely sines and cosines (or, equivalently, complex exponentials), possibly infinitely many of them. The study of Fourier series is typically carried out within the branch of mathematical analysis known as Fourier analysis.
Integration
Integration is a formalization of the problem of finding the area bound by a curve and the related problems of determining the length of a curve or volume enclosed by a surface. The basic strategy to solving problems of this type was known to the ancient Greeks and Chinese, and was known as the method of exhaustion. Generally speaking, the desired area is bounded from above and below, respectively, by increasingly accurate circumscribing and inscribing polygonal approximations whose exact areas can be computed. By considering approximations consisting of a larger and larger ("infinite") number of smaller and smaller ("infinitesimal") pieces, the area bound by the curve can be deduced, as the upper and lower bounds defined by the approximations converge around a common value.
The spirit of this basic strategy can easily be seen in the definition of the Riemann integral, in which the integral is said to exist if upper and lower Riemann (or Darboux) sums converge to a common value as thinner and thinner rectangular slices ("refinements") are considered. Though the machinery used to define it is much more elaborate compared to the Riemann integral, the Lebesgue integral was defined with similar basic ideas in mind. Compared to the Riemann integral, the more sophisticated Lebesgue integral allows area (or length, volume, etc.; termed a "measure" in general) to be defined and computed for much more complicated and irregular subsets of Euclidean space, although there still exist "non-measurable" subsets for which an area cannot be assigned.
Riemann integration
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let [math]\displaystyle{ [a,b] }[/math] be a closed interval of the real line; then a tagged partition [math]\displaystyle{ \cal{P} }[/math] of [math]\displaystyle{ [a,b] }[/math] is a finite sequence
- [math]\displaystyle{ a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b . \,\! }[/math]
This partitions the interval [math]\displaystyle{ [a,b] }[/math] into [math]\displaystyle{ n }[/math] sub-intervals [math]\displaystyle{ [x_{i-1},x_i] }[/math] indexed by [math]\displaystyle{ i=1,\ldots, n }[/math], each of which is "tagged" with a distinguished point [math]\displaystyle{ t_i\in[x_{i-1},x_i] }[/math]. For a function [math]\displaystyle{ f }[/math] bounded on [math]\displaystyle{ [a,b] }[/math], we define the Riemann sum of [math]\displaystyle{ f }[/math] with respect to tagged partition [math]\displaystyle{ \cal{P} }[/math] as
- [math]\displaystyle{ \sum_{i=1}^{n} f(t_i) \Delta_i, }[/math]
where [math]\displaystyle{ \Delta_i=x_i-x_{i-1} }[/math] is the width of sub-interval [math]\displaystyle{ i }[/math]. Thus, each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, [math]\displaystyle{ \|\Delta_i\| = \max_{i=1,\ldots, n}\Delta_i }[/math]. We say that the Riemann integral of [math]\displaystyle{ f }[/math] on [math]\displaystyle{ [a,b] }[/math] is [math]\displaystyle{ S }[/math] if for any [math]\displaystyle{ \varepsilon\gt 0 }[/math] there exists [math]\displaystyle{ \delta\gt 0 }[/math] such that, for any tagged partition [math]\displaystyle{ \cal{P} }[/math] with mesh [math]\displaystyle{ \| \Delta_i \| \lt \delta }[/math], we have
- [math]\displaystyle{ \left| S - \sum_{i=1}^{n} f(t_i)\Delta_i \right| \lt \varepsilon. }[/math]
This is sometimes denoted [math]\displaystyle{ \mathcal{R}\int_{a}^b f=S }[/math]. Replacing [math]\displaystyle{ f(t_i) }[/math] by the supremum (respectively, infimum) of [math]\displaystyle{ f }[/math] on each sub-interval yields the upper (respectively, lower) Darboux sum. A function is Darboux integrable if the upper and lower Darboux sums can be made arbitrarily close to each other for a sufficiently small mesh. Although this definition gives the Darboux integral the appearance of being a special case of the Riemann integral, the two are, in fact, equivalent, in the sense that a function is Darboux integrable if and only if it is Riemann integrable, and the values of the integrals are equal. In fact, calculus and real analysis textbooks often conflate the two, introducing the definition of the Darboux integral as that of the Riemann integral, because the definition of the former is slightly easier to apply.
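As an informal numerical illustration, the Python sketch below computes Riemann sums for f(x) = x² on [0, 1] using uniform partitions and midpoint tags (both arbitrary choices); the sums approach the integral value 1/3 as the mesh shrinks.

```python
# Riemann sums for f(x) = x**2 on [0, 1] with a uniform partition into n
# sub-intervals and midpoint tags; they approach the integral 1/3 as the
# mesh 1/n shrinks.  (Uniform partitions and midpoint tags are just one choice.)

def riemann_sum(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

def square(x):
    return x ** 2

for n in (4, 16, 256, 4096):
    s = riemann_sum(square, 0.0, 1.0, n)
    print(f"n={n}: mesh={1 / n:.5f}, Riemann sum = {s:.8f}, |S - 1/3| = {abs(s - 1/3):.2e}")
```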
The fundamental theorem of calculus asserts that integration and differentiation are inverse operations in a certain sense.
Lebesgue integration and measure
Lebesgue integration is a mathematical construction that extends the integral to a larger class of functions; it also extends the domains on which these functions can be defined. The concept of a measure, an abstraction of length, area, or volume, is central to the Lebesgue integral and to probability theory.
Distributions
Distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.
Relation to complex analysis
Real analysis is an area of analysis that studies concepts such as sequences and their limits, continuity, differentiation, integration and sequences of functions. By definition, real analysis focuses on the real numbers, often including positive and negative infinity to form the extended real line. Real analysis is closely related to complex analysis, which studies broadly the same properties of complex numbers. In complex analysis, it is natural to define differentiation via holomorphic functions, which have a number of useful properties, such as repeated differentiability, expressibility as power series, and satisfying the Cauchy integral formula.
In real analysis, it is usually more natural to consider differentiable, smooth, or harmonic functions, which are more widely applicable, but may lack some more powerful properties of holomorphic functions. However, results such as the fundamental theorem of algebra are simpler when expressed in terms of complex numbers.
Techniques from the theory of analytic functions of a complex variable are often used in real analysis – such as evaluation of real integrals by residue calculus.
Important results
Important results include the Bolzano–Weierstrass and Heine–Borel theorems, the intermediate value theorem and mean value theorem, Taylor's theorem, the fundamental theorem of calculus, the Arzelà-Ascoli theorem, the Stone-Weierstrass theorem, Fatou's lemma, and the monotone convergence and dominated convergence theorems.
Various ideas from real analysis can be generalized from the real line to broader or more abstract contexts. These generalizations link real analysis to other disciplines and subdisciplines. For instance, generalization of ideas like continuous functions and compactness from real analysis to metric spaces and topological spaces connects real analysis to the field of general topology, while generalization of finite-dimensional Euclidean spaces to infinite-dimensional analogs led to the concepts of Banach spaces and Hilbert spaces and, more generally, to functional analysis. Georg Cantor's investigation of sets and sequences of real numbers, mappings between them, and the foundational issues of real analysis gave birth to naive set theory. The study of issues of convergence for sequences of functions eventually gave rise to Fourier analysis as a subdiscipline of mathematical analysis. Investigation of the consequences of generalizing differentiability from functions of a real variable to ones of a complex variable gave rise to the concept of holomorphic functions and the inception of complex analysis as another distinct subdiscipline of analysis. On the other hand, the generalization of integration from the Riemann sense to that of Lebesgue led to the formulation of the concept of abstract measure spaces, a fundamental concept in measure theory. Finally, the generalization of integration from the real line to curves and surfaces in higher dimensional space brought about the study of vector calculus, whose further generalization and formalization played an important role in the evolution of the concepts of differential forms and smooth (differentiable) manifolds in differential geometry and other closely related areas of geometry and topology.
See also
- List of real analysis topics
- Time-scale calculus – a unification of real analysis with calculus of finite differences
- Real multivariable function
- Real coordinate space
- Complex analysis
References
- ↑ Tao, Terence (2003). "Lecture notes for MATH 131AH". https://www.math.ucla.edu/~tao/resource/general/131ah.1.03w/week1.pdf.
- ↑ "Sequences intro". https://www.khanacademy.org/math/algebra/x2f8bb11595b61c86:sequences/x2f8bb11595b61c86:introduction-to-arithmetic-sequences/v/explicit-and-recursive-definitions-of-sequences.
- ↑ Gaughan, Edward (2009). "1.1 Sequences and Convergence". Introduction to Analysis. AMS (2009). ISBN 978-0-8218-4787-9.
- ↑ Some authors (e.g., Rudin 1976) use braces instead and write [math]\displaystyle{ \{a_n\} }[/math]. However, this notation conflicts with the usual notation for a set, which, in contrast to a sequence, disregards the order and the multiplicity of its elements.
- ↑ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8. https://archive.org/details/calculusearlytra00stew_1.
- ↑ Royden 1988, Sect. 5.4, page 108; Nielsen 1997, Definition 15.6 on page 251; Athreya & Lahiri 2006, Definitions 4.4.1, 4.4.2 on pages 128,129. The interval I is assumed to be bounded and closed in the former two books but not the latter book.
- ↑ The term unconditional convergence refers to series whose sum does not depend on the order of the terms (i.e., any rearrangement gives the same sum). Convergence is termed conditional otherwise. For series in [math]\displaystyle{ \R^n }[/math], it can be shown that absolute convergence and unconditional convergence are equivalent. Hence, the term "conditional convergence" is often used to mean non-absolute convergence. However, in the general setting of Banach spaces, the terms do not coincide, and there are unconditionally convergent series that do not converge absolutely.
Sources
- Athreya, Krishna B.; Lahiri, Soumendra N. (2006), Measure theory and probability theory, Springer, ISBN 0-387-32903-X
- Nielsen, Ole A. (1997), An introduction to integration and measure theory, Wiley-Interscience, ISBN 0-471-59518-7
- Royden, H.L. (1988), Real Analysis (third ed.), Collier Macmillan, ISBN 0-02-404151-3
Bibliography
- Abbott, Stephen (2001). Understanding Analysis. Undergraduate Texts in Mathematics. New York: Springer-Verlag. ISBN 0-387-95060-5.
- Aliprantis, Charalambos D.; Burkinshaw, Owen (1998). Principles of real analysis (3rd ed.). Academic. ISBN 0-12-050257-7.
- Bartle, Robert G.; Sherbert, Donald R. (2011). Introduction to Real Analysis (4th ed.). New York: John Wiley and Sons. ISBN 978-0-471-43331-6.
- Bressoud, David (2007). A Radical Approach to Real Analysis. MAA. ISBN 978-0-88385-747-2.
- Browder, Andrew (1996). Mathematical Analysis: An Introduction. Undergraduate Texts in Mathematics. New York: Springer-Verlag. ISBN 0-387-94614-4.
- Carothers, Neal L. (2000). Real Analysis. Cambridge: Cambridge University Press. ISBN 978-0521497565. https://archive.org/details/CarothersN.L.RealAnalysisCambridge2000Isbn0521497566416S.
- Dangello, Frank; Seyfried, Michael (1999). Introductory Real Analysis. Brooks Cole. ISBN 978-0-395-95933-6.
- Kolmogorov, A. N.; Fomin, S. V. (1975). Introductory Real Analysis. Translated by Richard A. Silverman. Dover Publications. ISBN 0486612260. https://archive.org/details/introductoryreal00kolm_0. Retrieved 2 April 2013.
- Rudin, Walter (1976). Principles of Mathematical Analysis. Walter Rudin Student Series in Advanced Mathematics (3rd ed.). New York: McGraw–Hill. ISBN 978-0-07-054235-8. https://archive.org/details/PrinciplesOfMathematicalAnalysis.
- Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0-07-054234-1. https://archive.org/details/RudinW.RealAndComplexAnalysis3e1987.
- Spivak, Michael (1994). Calculus (3rd ed.). Houston, Texas: Publish or Perish, Inc.. ISBN 091409890X.
External links
- How We Got From There to Here: A Story of Real Analysis by Robert Rogers and Eugene Boman
- A First Course in Analysis by Donald Yau
- Analysis WebNotes by John Lindsay Orr
- Interactive Real Analysis by Bert G. Wachsmuth
- A First Analysis Course by John O'Connor
- Mathematical Analysis I by Elias Zakon
- Mathematical Analysis II by Elias Zakon
- Trench, William F. (2003). Introduction to Real Analysis. Prentice Hall. ISBN 978-0-13-045786-8. http://ramanujan.math.trinity.edu/wtrench/texts/TRENCH_REAL_ANALYSIS.PDF.
- Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
- Basic Analysis: Introduction to Real Analysis by Jiri Lebl
- Topics in Real and Functional Analysis by Gerald Teschl, University of Vienna.