Basel problem
The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734,[1] and read on 5 December 1735 in The Saint Petersburg Academy of Sciences.[2] Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up more than a century later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem.
The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series: [math]\displaystyle{ \sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots. }[/math]
The sum of the series is approximately equal to 1.644934.[3] The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be [math]\displaystyle{ \pi^2/6 }[/math] and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced an accepted proof in 1741.
The solution to this problem can be used to estimate the probability that two large random numbers are coprime. Two random integers in the range from 1 to [math]\displaystyle{ n }[/math], in the limit as [math]\displaystyle{ n }[/math] goes to infinity, are relatively prime with a probability that approaches [math]\displaystyle{ 6/\pi^2 }[/math], the reciprocal of the solution to the Basel problem.[4]
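Both statements are easy to check numerically. The following sketch (Python standard library only; the bound N and the number of trials are arbitrary illustrative choices) compares a partial sum of the series with π²/6 and estimates the coprimality probability by random sampling.

```python
import math
import random

# Partial sum of 1/n^2 compared against pi^2/6.
partial = sum(1.0 / n**2 for n in range(1, 100_001))
print(partial, math.pi**2 / 6)          # 1.64492..., 1.64493...

# Monte Carlo estimate of the probability that two random integers
# in [1, N] are coprime; the limiting value is 6/pi^2.
N, trials = 10**9, 200_000
hits = sum(math.gcd(random.randint(1, N), random.randint(1, N)) == 1
           for _ in range(trials))
print(hits / trials, 6 / math.pi**2)    # roughly 0.608 vs 0.60792...
```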
Euler's approach
Euler's original derivation of the value [math]\displaystyle{ \pi^2/6 }[/math] essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series.
Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community.
To follow Euler's argument, recall the Taylor series expansion of the sine function [math]\displaystyle{ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots }[/math] Dividing through by [math]\displaystyle{ x }[/math] gives [math]\displaystyle{ \frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots . }[/math]
The Weierstrass factorization theorem shows that the left-hand side is the product of linear factors given by its roots, just as for finite polynomials. Euler assumed this as a heuristic for expanding an infinite degree polynomial in terms of its roots, but in fact it is not always true for general [math]\displaystyle{ P(x) }[/math].[5] This factorization expands the equation into: [math]\displaystyle{ \begin{align} \frac{\sin x}{x} &= \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right) \cdots \\ &= \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots \end{align} }[/math]
If we formally multiply out this product and collect all the x² terms (we are allowed to do so because of Newton's identities), we see by induction that the x² coefficient of sin x/x is [6] [math]\displaystyle{ -\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots \right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}. }[/math]
But from the original infinite series expansion of sin x/x, the coefficient of x² is −1/3! = −1/6. These two coefficients must be equal; thus, [math]\displaystyle{ -\frac{1}{6} = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}. }[/math]
Multiplying both sides of this equation by −π² gives the sum of the reciprocals of the squares of the positive integers. [math]\displaystyle{ \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}. }[/math]
This method of calculating [math]\displaystyle{ \zeta(2) }[/math] is presented in expository fashion most notably in Havil's Gamma book, which covers many zeta-function- and logarithm-related series and integrals, as well as a historical perspective related to the Euler–Mascheroni constant.[7]
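Euler's coefficient comparison can also be checked by machine. The sketch below (assuming NumPy is available; the helper name, the number of factors, and the truncation degree are ad hoc choices) expands a truncated version of the product as a polynomial and reads off its x² and x⁴ coefficients, which approach the Taylor coefficients −1/6 and 1/120 of sin x/x.

```python
import math
import numpy as np
from numpy.polynomial import polynomial as P

def truncated_product_coeffs(n_factors, max_degree=6):
    """Coefficients (ascending powers of x) of prod_{k<=n_factors} (1 - x^2/(k^2 pi^2))."""
    coeffs = np.array([1.0])
    for k in range(1, n_factors + 1):
        factor = np.array([1.0, 0.0, -1.0 / (k**2 * math.pi**2)])
        coeffs = P.polymul(coeffs, factor)[: max_degree + 1]  # drop higher powers
    return coeffs

c = truncated_product_coeffs(10_000)
print(c[2], -1.0 / 6)    # x^2 coefficient of the product vs that of sin(x)/x
print(c[4], 1.0 / 120)   # x^4 coefficient of the product vs that of sin(x)/x
```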
Generalizations of Euler's method using elementary symmetric polynomials
Using formulae obtained from elementary symmetric polynomials,[8] this same approach can be used to enumerate formulae for the even-indexed zeta constants, which have the following known formula in terms of the Bernoulli numbers: [math]\displaystyle{ \zeta(2n) = \frac{(-1)^{n-1} (2\pi)^{2n}}{2 \cdot (2n)!} B_{2n}. }[/math]
For example, let the partial product for [math]\displaystyle{ \sin(x) }[/math] expanded as above be defined by [math]\displaystyle{ \frac{S_n(x)}{x} := \prod\limits_{k=1}^n \left(1 - \frac{x^2}{k^2 \cdot \pi^2}\right) }[/math]. Then using known formulas for elementary symmetric polynomials (a.k.a., Newton's formulas expanded in terms of power sum identities), we can see (for example) that [math]\displaystyle{ \begin{align} \left[x^4\right] \frac{S_n(x)}{x} & = \frac{1}{2\pi^4}\left(\left(H_n^{(2)}\right)^2 - H_n^{(4)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad \frac{1}{2\pi^4}\left(\zeta(2)^2-\zeta(4)\right) \\[4pt] & \qquad \implies \zeta(4) = \frac{\pi^4}{90} = -2\pi^4 \cdot [x^4] \frac{\sin(x)}{x} +\frac{\pi^4}{36} \\[8pt] \left[x^6\right] \frac{S_n(x)}{x} & = -\frac{1}{6\pi^6}\left(\left(H_n^{(2)}\right)^3 - 3H_n^{(2)} H_n^{(4)} + 2H_n^{(6)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad -\frac{1}{6\pi^6}\left(\zeta(2)^3-3\zeta(2)\zeta(4) + 2\zeta(6)\right) \\[4pt] & \qquad \implies \zeta(6) = \frac{\pi^6}{945} = -3\pi^6 \cdot [x^6] \frac{\sin(x)}{x} + \frac{3}{2}\cdot\frac{\pi^2}{6}\cdot\frac{\pi^4}{90} - \frac{\pi^6}{432}, \end{align} }[/math]
and so on for subsequent coefficients of [math]\displaystyle{ [x^{2k}] \frac{S_n(x)}{x} }[/math]. There are other forms of Newton's identities expressing the (finite) power sums [math]\displaystyle{ H_n^{(2k)} }[/math] in terms of the elementary symmetric polynomials, [math]\displaystyle{ e_i \equiv e_i\left(-\frac{\pi^2}{1^2}, -\frac{\pi^2}{2^2}, -\frac{\pi^2}{3^2}, -\frac{\pi^2}{4^2}, \ldots\right), }[/math] but we can go a more direct route to expressing non-recursive formulas for [math]\displaystyle{ \zeta(2k) }[/math] using the method of elementary symmetric polynomials. Namely, we have a recurrence relation between the elementary symmetric polynomials and the power sum polynomials given by [math]\displaystyle{ (-1)^{k}k e_k(x_1,\ldots,x_n) = \sum_{j=1}^k (-1)^{k-j-1} p_j(x_1,\ldots,x_n)e_{k-j}(x_1,\ldots,x_n), }[/math]
which in our situation becomes the limiting convolution (generating function product) identity [math]\displaystyle{ \frac{\pi^{2k}}{2}\cdot \frac{(2k) \cdot (-1)^k}{(2k+1)!} = -[x^{2k}] \left(\frac{\sin(\pi x)}{\pi x} \times \sum_{i \geq 1} \zeta(2i) x^{2i}\right). }[/math]
Then by differentiation and rearrangement of the terms in the previous equation, we obtain that [math]\displaystyle{ \zeta(2k) = [x^{2k}]\frac{1}{2}\left(1-\pi x\cot(\pi x)\right). }[/math]
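This generating function can be verified symbolically. The sketch below (assuming SymPy is available) expands (1 − πx cot(πx))/2 in a power series and reads off ζ(2), ζ(4), ζ(6) and ζ(8).

```python
from sympy import symbols, cot, pi

x = symbols('x')
# Power series of (1 - pi*x*cot(pi*x))/2; the coefficient of x^(2k) is zeta(2k).
gen = ((1 - pi * x * cot(pi * x)) / 2).series(x, 0, 9).removeO()

for k in (1, 2, 3, 4):
    print(2 * k, gen.coeff(x, 2 * k))
# expected coefficients: pi**2/6, pi**4/90, pi**6/945, pi**8/9450
```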
Consequences of Euler's proof
By the above results, we can conclude that [math]\displaystyle{ \zeta(2k) }[/math] is always a rational multiple of [math]\displaystyle{ \pi^{2k} }[/math]. In particular, since [math]\displaystyle{ \pi }[/math] and integer powers of it are transcendental, we can conclude at this point that [math]\displaystyle{ \zeta(2k) }[/math] is irrational, and more precisely, transcendental for all [math]\displaystyle{ k \geq 1 }[/math]. By contrast, the properties of the odd-indexed zeta constants, including Apéry's constant [math]\displaystyle{ \zeta(3) }[/math], are almost completely unknown.
The Riemann zeta function
The Riemann zeta function ζ(s) is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number s with real part greater than 1 by the following formula: [math]\displaystyle{ \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. }[/math]
Taking s = 2, we see that ζ(2) is equal to the sum of the reciprocals of the squares of all positive integers: [math]\displaystyle{ \zeta(2) = \sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6} \approx 1.644934. }[/math]
Convergence can be proven by the integral test, or by the following inequality: [math]\displaystyle{ \begin{align} \sum_{n=1}^N \frac{1}{n^2} & \lt 1 + \sum_{n=2}^N \frac{1}{n(n-1)} \\ & = 1 + \sum_{n=2}^N \left( \frac{1}{n-1} - \frac{1}{n} \right) \\ & = 1 + 1 - \frac{1}{N} \;{\stackrel{N \to \infty}{\longrightarrow}}\; 2. \end{align} }[/math]
This gives us the upper bound 2, and because the infinite sum contains no negative terms, it must converge to a value strictly between 0 and 2. It can be shown that ζ(s) has a simple expression in terms of the Bernoulli numbers whenever s is a positive even integer. With s = 2n:[9] [math]\displaystyle{ \zeta(2n) = \frac{(2\pi)^{2n}(-1)^{n+1}B_{2n}}{2\cdot(2n)!}. }[/math]
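The Bernoulli-number formula can be checked symbolically as well (a sketch assuming SymPy, which evaluates ζ at positive even integers in closed form, so each difference below should simplify to zero).

```python
from sympy import bernoulli, factorial, pi, zeta, simplify

# zeta(2n) = (2*pi)^(2n) * (-1)^(n+1) * B_{2n} / (2 * (2n)!)
for n in range(1, 6):
    rhs = (2 * pi) ** (2 * n) * (-1) ** (n + 1) * bernoulli(2 * n) / (2 * factorial(2 * n))
    print(n, rhs, simplify(rhs - zeta(2 * n)))
```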
A proof using Euler's formula and L'Hôpital's rule
The normalized sinc function [math]\displaystyle{ \text{sinc}(x)=\frac{\sin (\pi x)}{\pi x} }[/math] has a Weierstrass factorization representation as an infinite product: [math]\displaystyle{ \frac{\sin (\pi x)}{\pi x} = \prod_{n=1}^\infty \left(1-\frac{x^2}{n^2}\right). }[/math]
The infinite product is analytic, so taking the natural logarithm of both sides and differentiating yields [math]\displaystyle{ \frac{\pi \cos (\pi x)}{\sin (\pi x)}-\frac{1}{x}=-\sum_{n=1}^\infty \frac{2x}{n^2-x^2} }[/math]
(by uniform convergence, the interchange of the derivative and infinite series is permissible). After dividing the equation by [math]\displaystyle{ 2x }[/math] and regrouping one gets [math]\displaystyle{ \frac{1}{2x^2}-\frac{\pi \cot (\pi x)}{2x}=\sum_{n=1}^\infty \frac{1}{n^2-x^2}. }[/math]
We make a change of variables ([math]\displaystyle{ x=-it }[/math]): [math]\displaystyle{ -\frac{1}{2t^2}+\frac{\pi \cot (-\pi it)}{2it}=\sum_{n=1}^\infty \frac{1}{n^2+t^2}. }[/math]
Euler's formula can be used to deduce that [math]\displaystyle{ \frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2it}\frac{i\left(e^{2\pi t}+1\right)}{e^{2\pi t}-1}=\frac{\pi}{2t}+\frac{\pi}{t\left(e^{2\pi t} - 1\right)}. }[/math] or using the corresponding hyperbolic function: [math]\displaystyle{ \frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2t}{i\cot (\pi i t)}=\frac{\pi}{2t}\coth(\pi t). }[/math]
Then [math]\displaystyle{ \sum_{n=1}^\infty \frac{1}{n^2+t^2}=\frac{\pi \left(te^{2\pi t}+t\right)-e^{2\pi t}+1}{2\left(t^2 e^{2\pi t}-t^2\right)}=-\frac{1}{2t^2} + \frac{\pi}{2t} \coth(\pi t). }[/math]
Now we take the limit as [math]\displaystyle{ t }[/math] approaches zero and use L'Hôpital's rule thrice. By Tannery's theorem applied to [math]\displaystyle{ \lim_{t\to\infty}\sum_{n=1}^\infty 1/(n^2+1/t^2) }[/math], we can interchange the limit and infinite series so that [math]\displaystyle{ \lim_{t\to 0}\sum_{n=1}^\infty 1/(n^2+t^2)=\sum_{n=1}^\infty 1/n^2 }[/math] and by L'Hôpital's rule [math]\displaystyle{ \begin{align}\sum_{n=1}^\infty \frac{1}{n^2}&=\lim_{t\to 0}\frac{\pi}{4}\frac{2\pi te^{2\pi t}-e^{2\pi t}+1}{\pi t^2 e^{2\pi t} + te^{2\pi t}-t}\\[6pt] &=\lim_{t\to 0}\frac{\pi^3 te^{2\pi t}}{2\pi \left(\pi t^2 e^{2\pi t}+2te^{2\pi t} \right)+e^{2\pi t}-1}\\[6pt] &=\lim_{t\to 0}\frac{\pi^2 (2\pi t+1)}{4\pi^2 t^2+12\pi t+6}\\[6pt] &=\frac{\pi^2}{6}.\end{align} }[/math]
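Both the closed form above and the limiting value can be checked numerically. The sketch below (assuming mpmath is available; the series truncation and the sample values of t are arbitrary) compares partial sums of Σ 1/(n² + t²) with −1/(2t²) + (π/2t)coth(πt), and then evaluates the closed form at a small t.

```python
import mpmath as mp

mp.mp.dps = 30  # extra precision for the cancellation at small t

def partial_sum(t, terms=100_000):
    """Truncated sum of 1/(n^2 + t^2)."""
    return mp.fsum(1 / (mp.mpf(n) ** 2 + t ** 2) for n in range(1, terms + 1))

def closed_form(t):
    """-1/(2 t^2) + (pi/(2 t)) * coth(pi t)."""
    return -1 / (2 * t ** 2) + mp.pi / (2 * t) * mp.coth(mp.pi * t)

for t in (mp.mpf('0.5'), mp.mpf('2')):
    print(t, partial_sum(t), closed_form(t))          # agree up to the truncation error

print(closed_form(mp.mpf('1e-6')), mp.pi ** 2 / 6)    # the t -> 0 limit is pi^2/6
```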
A proof using Fourier series
Use Parseval's identity (applied to the function f(x) = x) to obtain [math]\displaystyle{ \sum_{n=-\infty}^\infty |c_n|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi x^2 \, dx, }[/math] where [math]\displaystyle{ \begin{align} c_n &= \frac{1}{2\pi}\int_{-\pi}^\pi x e^{-inx} \, dx \\[4pt] &= \frac{n\pi \cos(n\pi)-\sin(n\pi)}{\pi n^2} i \\[4pt] &= \frac{\cos(n\pi)}{n} i \\[4pt] &= \frac{(-1)^n}{n} i \end{align} }[/math]
for n ≠ 0, and c_0 = 0. Thus, [math]\displaystyle{ |c_n|^2 = \begin{cases} \dfrac{1}{n^2}, & \text{for } n \neq 0, \\ 0, & \text{for } n = 0, \end{cases} }[/math]
and [math]\displaystyle{ \sum_{n=-\infty}^\infty |c_n|^2 = 2\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{2\pi} \int_{-\pi}^\pi x^2 \, dx. }[/math]
Therefore, [math]\displaystyle{ \sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{4\pi}\int_{-\pi}^\pi x^2 \, dx = \frac{\pi^2}{6} }[/math] as required.
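A numerical illustration of this Parseval computation (a sketch assuming NumPy; the grid size and the truncation of the sum are arbitrary) approximates the coefficients c_n by the trapezoidal rule and compares the two sides.

```python
import numpy as np

xs = np.linspace(-np.pi, np.pi, 20001)
dx = xs[1] - xs[0]

def c(n):
    """Trapezoidal approximation of c_n = (1/(2*pi)) * integral of x*exp(-i*n*x)."""
    y = xs * np.exp(-1j * n * xs)
    return np.sum((y[:-1] + y[1:]) / 2) * dx / (2 * np.pi)

for n in (1, 2, 5):
    print(n, c(n), 1j * (-1) ** n / n)   # numerical value vs closed form i*(-1)^n/n

# Parseval: 2 * sum_{n>=1} |c_n|^2 should equal (1/(2*pi)) * integral of x^2 = pi^2/3.
approx = 2 * sum(abs(c(n)) ** 2 for n in range(1, 1000))
print(approx, np.pi ** 2 / 3)            # agreement up to the truncated tail (~0.002)
```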
Another proof using Parseval's identity
Given a complete orthonormal basis in the space [math]\displaystyle{ L^2_{\operatorname{per}}(0, 1) }[/math] of L² periodic functions over [math]\displaystyle{ (0, 1) }[/math] (i.e., the subspace of square-integrable functions which are also periodic), denoted by [math]\displaystyle{ \{e_i\}_{i=-\infty}^{\infty} }[/math], Parseval's identity tells us that [math]\displaystyle{ \|x\|^2 = \sum_{i=-\infty}^{\infty} |\langle e_i, x\rangle|^2, }[/math]
where [math]\displaystyle{ \|x\| := \sqrt{\langle x,x\rangle} }[/math] is defined in terms of the inner product on this Hilbert space given by [math]\displaystyle{ \langle f, g\rangle = \int_0^1 f(x) \overline{g(x)} \, dx,\ f,g \in L^2_{\operatorname{per}}(0, 1). }[/math]
We can consider the orthonormal basis on this space defined by [math]\displaystyle{ e_k \equiv e_k(\vartheta) := \exp(2\pi\imath k \vartheta) }[/math] such that [math]\displaystyle{ \langle e_k,e_j\rangle = \int_0^1 e^{2\pi\imath (k-j) \vartheta} \, d\vartheta = \delta_{k,j} }[/math]. Then if we take [math]\displaystyle{ f(\vartheta) := \vartheta }[/math], we can compute both that [math]\displaystyle{ \begin{align} \|f\|^2 & = \int_0^1 \vartheta^2 \, d\vartheta = \frac{1}{3} \\ \langle f, e_k\rangle & = \int_0^1 \vartheta e^{-2\pi\imath k\vartheta} \, d\vartheta = \Biggl\{\begin{array}{ll} \frac{1}{2}, & k = 0 \\ -\frac{1}{2\pi\imath k} & k \neq 0, \end{array} \end{align} }[/math]
by elementary calculus and integration by parts, respectively. Finally, by Parseval's identity stated in the form above, we obtain that [math]\displaystyle{ \begin{align} \|f\|^2 = \frac{1}{3} & = \sum_{\stackrel{k=-\infty}{k \neq 0}}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} = 2 \sum_{k=1}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} \\ & \implies \frac{\pi^2}{6} = \frac{2 \pi^2}{3} - \frac{\pi^2}{2} = \zeta(2). \end{align} }[/math]
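The same computation can be spot-checked numerically (a sketch assuming mpmath; the chosen values of k and the truncation point are arbitrary).

```python
import mpmath as mp

# <f, e_k> = integral_0^1 t * exp(-2*pi*i*k*t) dt versus the closed form -1/(2*pi*i*k).
for k in (1, 4):
    numeric = mp.quad(lambda t: t * mp.exp(-2j * mp.pi * k * t), [0, 1])
    print(k, numeric, -1 / (2j * mp.pi * k))

# Parseval balance: 1/4 + 2 * sum_{k>=1} 1/(2*pi*k)^2 should equal ||f||^2 = 1/3.
balance = mp.mpf(1) / 4 + 2 * mp.fsum(1 / (2 * mp.pi * k) ** 2 for k in range(1, 100_000))
print(balance, mp.mpf(1) / 3)            # agreement up to the truncated tail
```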
Generalizations and recurrence relations
Note that by considering higher-order powers of [math]\displaystyle{ f_j(\vartheta) := \vartheta^j \in L^2_{\operatorname{per}}(0, 1) }[/math] we can use integration by parts to extend this method to enumerating formulas for [math]\displaystyle{ \zeta(2j) }[/math] when [math]\displaystyle{ j \gt 1 }[/math]. In particular, suppose we let [math]\displaystyle{ I_{j,k} := \int_0^1 \vartheta^j e^{-2\pi\imath k\vartheta} \, d\vartheta, }[/math]
so that integration by parts yields the recurrence relation that [math]\displaystyle{ \begin{align} I_{j,k} & = \begin{cases} \frac{1}{j+1}, & k=0; \\[4pt] -\frac{1}{2\pi\imath \cdot k} + \frac{j}{2\pi\imath \cdot k} I_{j-1,k}, & k \neq 0\end{cases} \\[6pt] & = \begin{cases} \frac{1}{j+1}, & k=0; \\[4pt] -\sum\limits_{m=1}^j \frac{j!}{(j+1-m)!} \cdot \frac{1}{(2\pi\imath \cdot k)^m}, & k \neq 0. \end{cases} \end{align} }[/math]
Then applying Parseval's identity as we did for the first case above, together with the linearity of the inner product, yields [math]\displaystyle{ \begin{align} \|f_j\|^2 = \frac{1}{2j+1} & = 2 \sum_{k \geq 1} I_{j,k} \bar{I}_{j,k} + \frac{1}{(j+1)^2} \\[6pt] & = 2 \sum_{m=1}^j \sum_{r=1}^j \frac{j!^2}{(j+1-m)! (j+1-r)!} \frac{(-1)^r}{\imath^{m+r}} \frac{\zeta(m+r)}{(2\pi)^{m+r}} + \frac{1}{(j+1)^2}. \end{align} }[/math]
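The recurrence and the resulting Parseval balance can be illustrated numerically for a small case such as j = 2, which ties ζ(2) and ζ(4) together (a sketch assuming mpmath; the helper names and the truncation point are ad hoc).

```python
import mpmath as mp

def I_num(deg, k):
    """I_{deg,k} = integral_0^1 t^deg * exp(-2*pi*i*k*t) dt, by numerical quadrature."""
    return mp.quad(lambda t: t**deg * mp.exp(-2j * mp.pi * k * t), [0, 1])

def I_closed(deg, k):
    """Closed form from the recurrence: -sum_{m=1}^{deg} deg!/(deg+1-m)! * (2*pi*i*k)^(-m)."""
    return -mp.fsum(mp.factorial(deg) / mp.factorial(deg + 1 - m) / (2j * mp.pi * k) ** m
                    for m in range(1, deg + 1))

print(I_num(2, 3), I_closed(2, 3))       # the two values agree

# Parseval for f_2(t) = t^2:  1/(2*2+1) = 1/(2+1)^2 + 2 * sum_{k>=1} |I_{2,k}|^2.
s = mp.mpf(1) / 9 + 2 * mp.fsum(abs(I_closed(2, k)) ** 2 for k in range(1, 20_000))
print(s, mp.mpf(1) / 5)                  # agreement up to the truncated tail
```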
Proof using differentiation under the integral sign
It is possible to prove the result using elementary calculus by applying the differentiation under the integral sign technique to an integral due to Freitas:[10] [math]\displaystyle{ I(\alpha) = \int_0^\infty \ln\left(1+\alpha e^{-x}+e^{-2x}\right)dx. }[/math]
While the primitive function of the integrand cannot be expressed in terms of elementary functions, by differentiating with respect to [math]\displaystyle{ \alpha }[/math] we arrive at
[math]\displaystyle{ \frac{dI}{d\alpha} = \int_0^\infty \frac{e^{-x}}{1+\alpha e^{-x}+e^{-2x}}dx, }[/math] which can be integrated by substituting [math]\displaystyle{ u=e^{-x} }[/math] and decomposing into partial fractions. In the range [math]\displaystyle{ -2\leq\alpha\leq 2 }[/math] the definite integral reduces to
[math]\displaystyle{ \frac{dI}{d\alpha} = \frac{2}{\sqrt{4-\alpha^2}}\left[\arctan\left(\frac{\alpha+2}{\sqrt{4-\alpha^2}}\right)-\arctan\left(\frac{\alpha}{\sqrt{4-\alpha^2}}\right)\right]. }[/math]
The expression can be simplified using the arctangent addition formula and integrated with respect to [math]\displaystyle{ \alpha }[/math] by means of trigonometric substitution, resulting in
[math]\displaystyle{ I(\alpha) = -\frac{1}{2}\arccos\left(\frac{\alpha}{2}\right)^2 + c. }[/math]
The integration constant [math]\displaystyle{ c }[/math] can be determined by noticing that two distinct values of [math]\displaystyle{ I(\alpha) }[/math] are related by
[math]\displaystyle{ I(2) = 4I(0), }[/math] because when calculating [math]\displaystyle{ I(2) }[/math] we can factor [math]\displaystyle{ 1+2e^{-x}+e^{-2x} = (1+e^{-x})^2 }[/math] and express it in terms of [math]\displaystyle{ I(0) }[/math] using the logarithm of a power identity and the substitution [math]\displaystyle{ u=x/2 }[/math]. This makes it possible to determine [math]\displaystyle{ c = \frac{\pi^2}{6} }[/math], and it follows that
[math]\displaystyle{ I(-2) = 2\int_0^\infty \ln(1-e^{-x})dx = -\frac{\pi^2}{3}. }[/math]
This final integral can be evaluated by expanding the natural logarithm into its Taylor series:
[math]\displaystyle{ \int_0^\infty \ln(1-e^{-x})dx = - \sum_{n=1}^\infty \int_0^\infty \frac{e^{-nx}}{n}dx = -\sum_{n=1}^\infty\frac{1}{n^2}. }[/math]
The last two identities imply
[math]\displaystyle{ \sum_{n=1}^\infty\frac{1}{n^2} = \frac{\pi^2}{6}. }[/math]
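The closed form for I(α) and the final integral can be spot-checked numerically (a sketch assuming mpmath; the sample values of α are arbitrary points of [−2, 2], and the helper names are ad hoc).

```python
import mpmath as mp

def I(alpha):
    """I(alpha) = integral_0^inf ln(1 + alpha*e^(-x) + e^(-2x)) dx, by quadrature."""
    return mp.quad(lambda x: mp.log(1 + alpha * mp.exp(-x) + mp.exp(-2 * x)), [0, 1, mp.inf])

def closed_form(alpha):
    """-(1/2) * arccos(alpha/2)^2 + pi^2/6."""
    return -mp.acos(alpha / mp.mpf(2)) ** 2 / 2 + mp.pi ** 2 / 6

for a in (-2, 0, 1, 2):
    print(a, I(a), closed_form(a))       # the two columns agree

# The final step: integral_0^inf ln(1 - e^(-x)) dx = -sum 1/n^2 = -pi^2/6.
print(mp.quad(lambda x: mp.log(1 - mp.exp(-x)), [0, 1, mp.inf]), -mp.pi ** 2 / 6)
```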
Cauchy's proof
While most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (until a single limit is taken at the end).
A proof using the residue theorem is also known.
History of this proof
The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of Akiva and Isaak Yaglom "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal Eureka,[11] attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s".[12]
The proof
The main idea behind the proof is to bound the partial (finite) sums [math]\displaystyle{ \sum_{k=1}^m \frac{1}{k^2} = \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2} }[/math] between two expressions, each of which will tend to π²/6 as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities.
Let x be a real number with 0 < x < π/2, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have [math]\displaystyle{ \begin{align} \frac{\cos (nx) + i \sin (nx)}{\sin^n x} &= \frac{(\cos x + i\sin x)^n}{\sin^n x} \\[4pt] &= \left(\frac{\cos x + i \sin x}{\sin x}\right)^n \\[4pt] &= (\cot x + i)^n. \end{align} }[/math]
From the binomial theorem, we have [math]\displaystyle{ \begin{align} (\cot x + i)^n = & {n \choose 0} \cot^n x + {n \choose 1} (\cot^{n - 1} x)i + \cdots + {n \choose {n - 1}} (\cot x)i^{n - 1} + {n \choose n} i^n \\[6pt] = & \Bigg( {n \choose 0} \cot^n x - {n \choose 2} \cot^{n - 2} x \pm \cdots \Bigg) \; + \; i\Bigg( {n \choose 1} \cot^{n-1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg). \end{align} }[/math]
Combining the two equations and equating imaginary parts gives the identity [math]\displaystyle{ \frac{\sin (nx)}{\sin^n x} = \Bigg( {n \choose 1} \cot^{n - 1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg). }[/math]
We take this identity, fix a positive integer m, set n = 2m + 1, and consider x_r = rπ/(2m + 1) for r = 1, 2, ..., m. Then nx_r is a multiple of π and therefore sin(nx_r) = 0. So, [math]\displaystyle{ 0 = {{2m + 1} \choose 1} \cot^{2m} x_r - {{2m + 1} \choose 3} \cot^{2m - 2} x_r \pm \cdots + (-1)^m{{2m + 1} \choose {2m + 1}} }[/math]
for every r = 1, 2, ..., m. The values x_1, x_2, ..., x_m are distinct numbers in the interval 0 < x_r < π/2. Since the function cot² x is one-to-one on this interval, the numbers t_r = cot² x_r are distinct for r = 1, 2, ..., m. By the above equation, these m numbers are the roots of the mth-degree polynomial [math]\displaystyle{ p(t) = {{2m + 1} \choose 1}t^m - {{2m + 1} \choose 3}t^{m - 1} \pm \cdots + (-1)^m{{2m+1} \choose {2m + 1}}. }[/math]
By Vieta's formulas we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that [math]\displaystyle{ \cot ^2 x_1 + \cot ^2 x_2 + \cdots + \cot ^2 x_m = \frac{\binom{2m + 1}3} {\binom{2m + 1}1} = \frac{2m(2m - 1)}6. }[/math]
Substituting the identity csc² x = cot² x + 1, we have [math]\displaystyle{ \csc ^2 x_1 + \csc ^2 x_2 + \cdots + \csc ^2 x_m = \frac{2m(2m - 1)}6 + m = \frac{2m(2m + 2)}6. }[/math]
Now consider the inequality cot² x < 1/x² < csc² x, which holds for 0 < x < π/2 (it follows from sin x < x < tan x on this interval). If we add up these inequalities for each of the numbers x_r = rπ/(2m + 1), and use the two identities above, we get [math]\displaystyle{ \frac{2m(2m - 1)}6 \lt \left(\frac{2m + 1}{\pi} \right)^2 + \left(\frac{2m + 1}{2\pi} \right)^2 + \cdots + \left(\frac{2m + 1}{m \pi} \right)^2 \lt \frac{2m(2m + 2)}6. }[/math]
Multiplying through by (π/(2m + 1))², this becomes [math]\displaystyle{ \frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m - 1}{2m + 1}\right) \lt \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2} \lt \frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m + 2}{2m + 1}\right). }[/math]
As m approaches infinity, the left- and right-hand expressions each approach π²/6, so by the squeeze theorem, [math]\displaystyle{ \zeta(2) = \sum_{k=1}^\infty \frac{1}{k^2} = \lim_{m \to \infty}\left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}\right) = \frac{\pi ^2}{6} }[/math]
and this completes the proof.
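The two closed-form bounds and the squeeze can be illustrated numerically for finite m (a standard-library sketch; the values of m and the helper name are ad hoc).

```python
import math

def cauchy_bounds(m):
    """Return (cot^2 sum, middle term, csc^2 sum) for x_r = r*pi/(2m+1), r = 1..m."""
    n = 2 * m + 1
    cot_sum = sum(1 / math.tan(r * math.pi / n) ** 2 for r in range(1, m + 1))
    csc_sum = sum(1 / math.sin(r * math.pi / n) ** 2 for r in range(1, m + 1))
    middle = sum((n / (r * math.pi)) ** 2 for r in range(1, m + 1))
    return cot_sum, middle, csc_sum

for m in (10, 100, 1000):
    lo, mid, hi = cauchy_bounds(m)
    # The two sums match their closed forms 2m(2m-1)/6 and 2m(2m+2)/6 ...
    print(m, lo, 2 * m * (2 * m - 1) / 6, hi, 2 * m * (2 * m + 2) / 6)
    # ... and, rescaled by (pi/(2m+1))^2, they squeeze the partial sum of 1/r^2.
    scale = (math.pi / (2 * m + 1)) ** 2
    print(m, scale * lo, scale * mid, scale * hi, math.pi ** 2 / 6)
```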
Proof assuming Weil's conjecture on Tamagawa numbers
A proof is also possible assuming Weil's conjecture on Tamagawa numbers.[13] The conjecture asserts for the case of the algebraic group [math]\displaystyle{ SL_2 }[/math] over the rationals that the Tamagawa number of the group is one. That is, the quotient of the special linear group over the rational adeles by the special linear group of the rationals (a set of finite Tamagawa measure, because [math]\displaystyle{ SL_2(\mathbb Q) }[/math] is a lattice in the adeles) has Tamagawa measure 1: [math]\displaystyle{ \tau(SL_2(\mathbb Q)\setminus SL_2(A_{\mathbb Q}))=1. }[/math]
To determine a Tamagawa measure, the group [math]\displaystyle{ SL_2 }[/math] consists of matrices [math]\displaystyle{ \begin{bmatrix}x&y\\z&t\end{bmatrix} }[/math] with [math]\displaystyle{ xt-yz=1 }[/math]. An invariant volume form on the group is [math]\displaystyle{ \omega = \frac1x dx\wedge dy\wedge dz. }[/math]
The measure of the quotient is the product of the measures of [math]\displaystyle{ SL_2(\mathbb Z)\setminus SL_2(\mathbb R) }[/math] corresponding to the infinite place, and the measures of [math]\displaystyle{ SL_2(\mathbb Z_p) }[/math] in each finite place, where [math]\displaystyle{ \mathbb Z_p }[/math] is the p-adic integers.
For the local factors, [math]\displaystyle{ \omega(SL_2(\mathbb Z_p)) = |SL_2(F_p)|\omega(SL_2(\mathbb Z_p,p)) }[/math] where [math]\displaystyle{ F_p }[/math] is the field with [math]\displaystyle{ p }[/math] elements, and [math]\displaystyle{ SL_2(\mathbb Z_p,p) }[/math] is the congruence subgroup modulo [math]\displaystyle{ p }[/math]. Since each of the coordinates [math]\displaystyle{ x,y,z }[/math] maps the latter group onto [math]\displaystyle{ p\mathbb Z_p }[/math] and [math]\displaystyle{ \left|\frac1x\right|_p=1 }[/math], the measure of [math]\displaystyle{ SL_2(\mathbb Z_p,p) }[/math] is [math]\displaystyle{ \mu_p(p\mathbb Z_p)^3=p^{-3} }[/math], where [math]\displaystyle{ \mu_p }[/math] is the normalized Haar measure on [math]\displaystyle{ \mathbb Z_p }[/math]. Also, a standard computation shows that [math]\displaystyle{ |SL_2(F_p)|=p(p^2-1) }[/math]. Putting these together gives [math]\displaystyle{ \omega(SL_2(\mathbb Z_p))=(1-1/p^2) }[/math].
At the infinite place, an integral computation over the fundamental domain of [math]\displaystyle{ SL_2(\mathbb Z) }[/math] shows that [math]\displaystyle{ \omega(SL_2(\mathbb Z)\setminus SL_2(\mathbb R))=\pi^2/6 }[/math], and therefore the Weil conjecture finally gives [math]\displaystyle{ 1 = \frac{\pi^2}6\prod_p \left(1-\frac1{p^2}\right). }[/math] On the right-hand side, we recognize the Euler product for [math]\displaystyle{ 1/\zeta(2) }[/math], and so this gives the solution to the Basel problem.
This approach shows the connection between (hyperbolic) geometry and arithmetic, and can be inverted to give a proof of the Weil conjecture for the special case of [math]\displaystyle{ SL_2 }[/math], contingent on an independent proof that [math]\displaystyle{ \zeta(2)=\pi^2/6 }[/math].
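The Euler product on the right-hand side is easy to evaluate numerically (a standard-library sketch using a simple sieve; the cutoff 10⁶ and the helper name are ad hoc).

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, limit + 1) if sieve[p]]

product = 1.0
for p in primes_up_to(1_000_000):
    product *= 1 - 1 / p**2

# 1 = (pi^2/6) * prod_p (1 - 1/p^2), so the product should approach 6/pi^2.
print(product, 6 / math.pi ** 2)
```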
Other identities
See the special cases of the identities for the Riemann zeta function when [math]\displaystyle{ s = 2. }[/math] Other notable identities and representations of this constant appear in the sections below.
Series representations
The following are series representations of the constant:[14] [math]\displaystyle{ \begin{align} \zeta(2) &= 3 \sum_{k=1}^\infty \frac{1}{k^2 \binom{2k}{k}} \\[6pt] &= \sum_{i=1}^\infty \sum_{j=1}^\infty \frac{(i-1)! (j-1)!}{(i+j)!}. \end{align} }[/math]
There are also BBP-type series expansions for ζ(2).[14]
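Both series are easy to evaluate numerically (a standard-library sketch; math.comb requires Python 3.8 or later, and the truncation points are arbitrary).

```python
import math

# zeta(2) = 3 * sum_{k>=1} 1/(k^2 * C(2k, k)); the central-binomial series converges fast.
s1 = 3 * sum(1 / (k**2 * math.comb(2 * k, k)) for k in range(1, 40))

# zeta(2) = sum_{i,j>=1} (i-1)!(j-1)!/(i+j)!; this double sum converges slowly along
# the i = 1 and j = 1 rows, so this truncation only gives about two correct decimals.
s2 = sum(math.factorial(i - 1) * math.factorial(j - 1) / math.factorial(i + j)
         for i in range(1, 200) for j in range(1, 200))

print(s1, s2, math.pi ** 2 / 6)
```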
Integral representations
The following are integral representations of [math]\displaystyle{ \zeta(2)\text{:} }[/math][15][16][17] [math]\displaystyle{ \begin{align} \zeta(2) & = -\int_0^1 \frac{\log x}{1-x} \, dx \\[6pt] & = \int_0^{\infty} \frac{x}{e^x-1} \, dx \\[6pt] & = \int_0^1 \frac{(\log x)^2}{(1+x)^2} \, dx \\[6pt] & = 2 + 2\int_1^{\infty} \frac{\lfloor x \rfloor -x}{x^3} \, dx \\[6pt] & = \exp\left(2 \int_2^{\infty} \frac{\pi(x)}{x(x^2-1)} \,dx\right) \\[6pt] & = \int_0^1 \int_0^1 \frac{dx \, dy}{1-xy} \\[6pt] & = \frac{4}{3} \int_0^1 \int_0^1 \frac{dx \, dy}{1-(xy)^2} \\[6pt] & = \int_0^1 \int_0^1 \frac{1-x}{1-xy} \, dx \, dy + 1. \end{align} }[/math]
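A few of these representations can be spot-checked numerically (a sketch assuming mpmath; which integrals to test is an arbitrary choice).

```python
import mpmath as mp

target = mp.pi ** 2 / 6

# -integral_0^1 log(x)/(1 - x) dx
print(-mp.quad(lambda x: mp.log(x) / (1 - x), [0, 1]), target)

# integral_0^inf x/(e^x - 1) dx
print(mp.quad(lambda x: x / (mp.exp(x) - 1), [0, mp.inf]), target)

# integral_0^1 (log x)^2 / (1 + x)^2 dx
print(mp.quad(lambda x: mp.log(x) ** 2 / (1 + x) ** 2, [0, 1]), target)
```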
Continued fractions
In van der Poorten's classic article chronicling Apéry's proof of the irrationality of [math]\displaystyle{ \zeta(3) }[/math],[18] the author notes as "a red herring" the similarity of a continued fraction for Apéry's constant, and the following one for the Basel constant: [math]\displaystyle{ \frac{\zeta(2)}{5} = \cfrac{1}{\widetilde{v}_1 + \cfrac{1^4}{\widetilde{v}_2+\cfrac{2^4}{\widetilde{v}_3+\cfrac{3^4}{\widetilde{v}_4+\ddots}}}}, }[/math] where [math]\displaystyle{ \widetilde{v}_n = 11n^2-11n+3 \mapsto \{3,25,69,135,\ldots\} }[/math]. Another continued fraction of a similar form is:[19] [math]\displaystyle{ \frac{\zeta(2)}{2} = \cfrac{1}{v_1 + \cfrac{1^4}{v_2+\cfrac{2^4}{v_3+\cfrac{3^4}{v_4+\ddots}}}}, }[/math] where [math]\displaystyle{ v_n = 2n-1 \mapsto \{1,3,5,7,9,\ldots\} }[/math].
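Both continued fractions are straightforward to evaluate from the bottom up (a standard-library sketch; the helper name and truncation depths are ad hoc, and the second form converges far more slowly than the first).

```python
import math

def eval_cf(b, terms):
    """Evaluate 1/(b(1) + 1^4/(b(2) + 2^4/(b(3) + ...))) from the bottom up."""
    value = float(b(terms))
    for n in range(terms - 1, 0, -1):
        value = b(n) + n**4 / value
    return 1 / value

# First form: b(n) = 11n^2 - 11n + 3 gives zeta(2)/5; a few dozen terms already
# reproduce pi^2/6 to many digits.
print(5 * eval_cf(lambda n: 11 * n * n - 11 * n + 3, 30), math.pi ** 2 / 6)

# Second form: b(n) = 2n - 1 gives zeta(2)/2; watch the slow convergence.
for depth in (100, 10_000, 1_000_000):
    print(depth, 2 * eval_cf(lambda n: 2 * n - 1, depth), math.pi ** 2 / 6)
```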
References
- Weil, André (1983), Number Theory: An Approach Through History, Springer-Verlag, ISBN 0-8176-3141-0.
- Dunham, William (1999), Euler: The Master of Us All, Mathematical Association of America, ISBN 0-88385-328-0, https://archive.org/details/eulermasterofusa0000dunh.
- Derbyshire, John (2003), Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics, Joseph Henry Press, ISBN 0-309-08549-7, https://archive.org/details/primeobsessionbe00derb_0.
- Edwards, Harold M. (2001), Riemann's Zeta Function, Dover, ISBN 0-486-41740-9.
Notes
- ↑ Ayoub, Raymond (1974), "Euler and the zeta function", Amer. Math. Monthly 81 (10): 1067–86, doi:10.2307/2319041, https://www.maa.org/programs/maa-awards/writing-awards/euler-and-the-zeta-function
- ↑ E41 – De summis serierum reciprocarum
- ↑ Sloane, N. J. A., ed. "Sequence A013661". OEIS Foundation. https://oeis.org/A013661.
- ↑ Vandervelde, Sam (2009), "Chapter 9: Sneaky segments", Circle in a Box, MSRI Mathematical Circles Library, Mathematical Sciences Research Institute and American Mathematical Society, pp. 101–106
- ↑ A priori, since the left-hand-side is a polynomial (of infinite degree) we can write it as a product of its roots as [math]\displaystyle{ \begin{align} \sin(x) & = x (x^2-\pi^2)(x^2-4\pi^2)(x^2-9\pi^2) \cdots \\ & = Ax \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots. \end{align} }[/math] Then since we know from elementary calculus that [math]\displaystyle{ \lim_{x \rightarrow 0} \frac{\sin(x)}{x} = 1 }[/math], we conclude that the leading constant must satisfy [math]\displaystyle{ A = 1 }[/math].
- ↑ In particular, letting [math]\displaystyle{ H_n^{(2)} := \sum_{k=1}^n k^{-2} }[/math] denote a generalized second-order harmonic number, we can easily prove by induction that [math]\displaystyle{ [x^2] \prod_{k=1}^{n} \left(1-\frac{x^2}{k^2\pi^2}\right) = -\frac{H_n^{(2)}}{\pi^2} \rightarrow -\frac{\zeta(2)}{\pi^2} }[/math] as [math]\displaystyle{ n \rightarrow \infty }[/math].
- ↑ Havil, J. (2003), Gamma: Exploring Euler's Constant, Princeton, New Jersey: Princeton University Press, pp. 37–42 (Chapter 4), ISBN 0-691-09983-9, https://archive.org/details/gammaexploringeu00havi_882
- ↑ Cf., the formulae for generalized Stirling numbers proved in: Schmidt, M. D. (2018), "Combinatorial Identities for Generalized Stirling Numbers Expanding f-Factorial Functions and the f-Harmonic Numbers", J. Integer Seq. 21 (Article 18.2.7), https://cs.uwaterloo.ca/journals/JIS/VOL21/Schmidt/schmidt18.html
- ↑ Arakawa, Tsuneo; Ibukiyama, Tomoyoshi; Kaneko, Masanobu (2014), Bernoulli Numbers and Zeta Functions, Springer, p. 61, ISBN 978-4-431-54919-2
- ↑ Freitas, F. L. (2023), "Solution of the Basel problem using the Feynman integral trick", arXiv:2312.04608 [math.CA]
- ↑ Ransford, T J (Summer 1982), "An Elementary Proof of [math]\displaystyle{ \sum_{1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6} }[/math]", Eureka 42 (1): 3–4, https://www.archim.org.uk/eureka/archive/Eureka-42.pdf
- ↑ Proofs from THE BOOK (2nd ed.), Springer, 2001, p. 32, ISBN 9783662043158, https://books.google.com/books?id=QETtCAAAQBAJ&pg=PA32; this anecdote is missing from later editions of this book, which replace it with earlier history of the same proof.
- ↑ Vladimir Platonov; Andrei Rapinchuk (1994), Algebraic groups and number theory, Academic Press
- ↑ Weisstein, Eric W. "Riemann Zeta Function ζ(2)". http://mathworld.wolfram.com/RiemannZetaFunctionZeta2.html.
- ↑ Connon, D. F. (2007), "Some series and integrals involving the Riemann zeta function, binomial coefficients and the harmonic numbers (Volume I)", arXiv:0710.4022 [math.HO]
- ↑ Weisstein, Eric W. "Double Integral". http://mathworld.wolfram.com/DoubleIntegral.html.
- ↑ Weisstein, Eric W. "Hadjicostas's Formula". http://mathworld.wolfram.com/HadjicostassFormula.html.
- ↑ van der Poorten, Alfred (1979), "A proof that Euler missed ... Apéry's proof of the irrationality of ζ(3)", The Mathematical Intelligencer 1 (4): 195–203, doi:10.1007/BF03028234, http://www.maths.mq.edu.au/~alf/45.pdf
- ↑ Berndt, Bruce C. (1989), Ramanujan's Notebooks: Part II, Springer-Verlag, p. 150, ISBN 978-0-387-96794-3
External links
- An infinite series of surprises by C. J. Sangwin
- From ζ(2) to Π. The Proof. step-by-step proof
- Remarques sur un beau rapport entre les series des puissances tant directes que reciproques, http://eulerarchive.maa.org//docs/translations/E352.pdf, English translation with notes of Euler's paper by Lucas Willis and Thomas J. Osler
- Ed Sandifer, How Euler did it, http://eulerarchive.maa.org/hedi/HEDI-2003-12.pdf
- James A. Sellers (February 5, 2002), Beyond Mere Convergence, http://www.personal.psu.edu/jxs23/p25.pdf, retrieved 2004-02-27
- Robin Chapman, Evaluating ζ(2) (fourteen proofs)
- Visualization of Euler's factorization of the sine function
- Johan Wästlund (December 8, 2010), Summing inverse squares by Euclidean geometry, http://www.math.chalmers.se/~wastlund/Cosmic.pdf
- Why is pi here? And why is it squared? A geometric answer to the Basel problem on YouTube (animated proof based on the above)
Original source: https://en.wikipedia.org/wiki/Basel_problem.