Frobenius solution to the hypergeometric equation

From HandWiki

In the following we solve the second-order hypergeometric differential equation using the Frobenius method, named after Ferdinand Georg Frobenius. This method assumes that the solution takes the form of a power series, and it is the usual approach for ordinary differential equations with regular singular points. The solution of the hypergeometric differential equation is very important. For instance, Legendre's differential equation can be shown to be a special case of the hypergeometric differential equation. Hence, by solving the hypergeometric differential equation, one may directly compare its solutions to get the solutions of Legendre's differential equation, after making the necessary substitutions. For more details, see the article on the hypergeometric differential equation.

We shall prove that this equation has three singularities, namely at x = 0, x = 1 and x = ∞. However, as these will turn out to be regular singular points, we will be able to assume a solution in the form of a series. Since this is a second-order differential equation, we must have two linearly independent solutions.

The problem, however, is that our assumed solutions may or may not be independent, or, worse, may not even be defined (depending on the values of the parameters of the equation). This is why we shall study the different cases for the parameters and modify our assumed solution accordingly.

The equation

Solve the hypergeometric equation around all singularities:

[math]\displaystyle{ x(1-x)y''+\left\{ \gamma -(1+\alpha +\beta )x \right\}y'-\alpha \beta y=0 }[/math]

Solution around x = 0

Let

[math]\displaystyle{ \begin{align} P_0(x) & = -\alpha \beta, \\ P_1(x) & = \gamma - (1+\alpha +\beta )x, \\ P_2(x) & = x(1-x) \end{align} }[/math]

Then

[math]\displaystyle{ P_2(0) = P_2 (1)=0. }[/math]

Hence, x = 0 and x = 1 are singular points. Let's start with x = 0. To see if it is regular, we study the following limits:

[math]\displaystyle{ \begin{align} \lim_{x \to a} \frac{(x - a) P_1(x)}{P_2(x)} &=\lim_{x \to 0} \frac{(x - 0)(\gamma - (1 + \alpha + \beta)x)}{x(1 - x)}=\lim_{x \to 0} \frac{x(\gamma - (1 + \alpha + \beta)x)}{x(1 - x)}= \gamma \\ \lim_{x \to a} \frac{(x - a)^2 P_0(x)}{P_2(x)} &= \lim_{x \to 0} \frac{(x - 0)^2(-\alpha \beta)}{x(1 - x)} = \lim_{x \to 0} \frac{x^2 (-\alpha \beta)}{x(1 - x)} = 0 \end{align} }[/math]

Hence, both limits exist and x = 0 is a regular singular point. Therefore, we assume the solution takes the form

[math]\displaystyle{ y = \sum_{r=0}^\infty a_r x^{r + c} }[/math]

with a0 ≠ 0. Hence,

[math]\displaystyle{ \begin{align} y' &= \sum_{r = 0}^\infty a_r(r + c) x^{r + c - 1} \\ y'' &= \sum_{r = 0}^\infty a_r(r + c)(r + c - 1) x^{r + c - 2}. \end{align} }[/math]

Substituting these into the hypergeometric equation, we get

[math]\displaystyle{ x \sum_{r = 0}^\infty a_r(r + c)(r + c - 1) x^{r + c - 2} - x^2 \sum_{r = 0}^\infty a_r(r + c)(r + c - 1) x^{r + c - 2} + \gamma \sum_{r = 0}^\infty a_r(r + c) x^{r + c - 1} - (1 + \alpha + \beta) x\sum_{r = 0}^\infty a_r(r + c) x^{r + c - 1} -\alpha \beta \sum_{r = 0}^\infty a_r x^{r + c} = 0 }[/math]

That is,

[math]\displaystyle{ \sum_{r = 0}^\infty a_r(r + c)(r + c - 1) x^{r + c - 1} -\sum_{r = 0}^\infty a_r(r + c)(r + c - 1) x^{r + c}+\gamma \sum_{r = 0}^\infty a_r(r + c) x^{r + c - 1} -(1 + \alpha + \beta) \sum_{r = 0}^\infty a_r(r + c) x^{r + c} -\alpha \beta \sum_{r = 0}^\infty a_r x^{r + c} =0 }[/math]

In order to simplify this equation, we need all powers to be the same, equal to r + c − 1, the smallest power. Hence, we switch the indices as follows:

[math]\displaystyle{ \begin{align} &\sum_{r = 0}^\infty a_r(r + c)(r + c - 1)x^{r + c - 1} -\sum_{r = 1}^\infty a_{r - 1}(r + c - 1)(r + c - 2) x^{r + c - 1} +\gamma \sum_{r = 0}^\infty a_r(r + c) x^{r + c - 1} \\ &\qquad -(1 + \alpha + \beta) \sum_{r = 1}^\infty a_{r - 1}(r + c - 1) x^{r + c - 1}-\alpha \beta \sum_{r = 1}^\infty a_{r - 1} x^{r + c - 1} =0 \end{align} }[/math]

Thus, isolating the first term of the sums starting from 0 we get

[math]\displaystyle{ \begin{align} &a_0 (c(c-1) + \gamma c) x^{c - 1}+ \sum_{r = 1}^\infty a_r(r + c)(r + c - 1) x^{r + c - 1} -\sum_{r = 1}^\infty a_{r - 1}(r + c - 1)(r + c - 2) x^{r + c - 1} \\ &\qquad + \gamma \sum_{r = 1}^\infty a_r(r + c) x^{r + c - 1}-(1 + \alpha + \beta) \sum_{r = 1}^\infty a_{r - 1}(r + c - 1) x^{r + c - 1}-\alpha \beta \sum_{r = 1}^\infty a_{r - 1} x^{r + c - 1}= 0 \end{align} }[/math]

Now, from the linear independence of all powers of x (that is, of the functions 1, x, x², etc.), the coefficient of each power of x must vanish. Hence, from the first term, we have

[math]\displaystyle{ a_{0} (c(c - 1) + \gamma c) = 0 }[/math]

which is the indicial equation. Since a0 ≠ 0, we have

[math]\displaystyle{ c(c - 1 + \gamma) = 0. }[/math]

Hence,

[math]\displaystyle{ c_1 = 0, c_2 = 1 - \gamma }[/math]

Also, from the rest of the terms, we have

[math]\displaystyle{ ((r + c)(r + c - 1) + \gamma(r+c)) a_r + (-(r + c - 1)(r + c - 2) - (1 + \alpha + \beta)(r + c - 1) - \alpha\beta) a_{r - 1}= 0 }[/math]

Hence,

[math]\displaystyle{ \begin{align} a_r &= \frac{(r + c - 1)(r + c - 2) + (1 + \alpha + \beta)(r + c - 1) + \alpha\beta} {(r + c)(r + c - 1) + \gamma(r + c)} a_{r - 1} \\ &= \frac{(r + c -1)(r + c + \alpha + \beta - 1) + \alpha\beta}{(r + c)(r + c + \gamma - 1)} a_{r - 1} \end{align} }[/math]

But

[math]\displaystyle{ \begin{align} (r + c - 1)(r + c + \alpha + \beta - 1) + \alpha\beta &= (r + c - 1)(r + c + \alpha - 1) + (r + c - 1)\beta + \alpha\beta \\ &= (r + c - 1)(r + c + \alpha - 1) + \beta(r + c + \alpha - 1) \end{align} }[/math]

Hence, we get the recurrence relation

[math]\displaystyle{ a_r = \frac{(r + c + \alpha - 1)(r + c + \beta - 1)}{(r + c)(r + c + \gamma - 1)} a_{r - 1}, \text{ for } r \geq 1. }[/math]

Let us now simplify this relation by giving ar in terms of a0 instead of ar−1. From the recurrence relation (note: below, expressions of the form (u)r refer to the Pochhammer symbol), we obtain

[math]\displaystyle{ \begin{align} a_1 &= \frac{(c + \alpha)(c + \beta)}{(c + 1)(c + \gamma)} a_0 \\ a_2 &= \frac{(c + \alpha + 1)(c + \beta + 1)}{(c + 2)(c + \gamma + 1)} a_1 = \frac{(c + \alpha + 1)(c + \alpha)(c + \beta)(c + \beta + 1)}{(c + 2)(c + 1)(c + \gamma)(c + \gamma + 1)} a_0 = \frac{(c + \alpha)_2 (c + \beta)_2}{(c + 1)_2 (c + \gamma)_2} a_0 \\ a_3 &= \frac{(c + \alpha + 2)(c + \beta + 2)}{(c + 3)(c + \gamma + 2)} a_2 = \frac{(c + \alpha)_2 (c + \alpha + 2)(c + \beta )_2 (c + \beta + 2)}{(c + 1)_2 (c + 3)(c + \gamma)_2 (c + \gamma + 2)} a_0 = \frac{(c + \alpha)_3 (c + \beta)_3}{(c + 1)_3 (c + \gamma)_3} a_0 \end{align} }[/math]

As we can see,

[math]\displaystyle{ a_r =\frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r(c + \gamma)_r} a_0, \text{ for } r \geq 0 }[/math]

Hence, our assumed solution takes the form

[math]\displaystyle{ y = a_0 \sum_{r = 0}^\infty \frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r (c + \gamma)_r} x^{r + c}. }[/math]

We are now ready to study the solutions corresponding to the different cases for c1 − c2 = γ − 1 (this reduces to studying the nature of the parameter γ: whether it is an integer or not).

Analysis of the solution in terms of the difference γ − 1 of the two roots

γ not an integer

Then y1 = y|c = 0 and y2 = y|c = 1 − γ. Since

[math]\displaystyle{ y = a_0 \sum_{r = 0}^\infty \frac{(c + \alpha )_r (c + \beta)_r}{(c + 1)_r (c + \gamma)_r} x^{r + c}, }[/math]

we have

[math]\displaystyle{ \begin{align} y_1 &= a_0 \sum_{r = 0}^\infty \frac{(\alpha)_r (\beta)_r}{(1)_r (\gamma)_r} x^r = a_0 \cdot {{}_2 F_1}(\alpha, \beta; \gamma; x) \\ y_2 &= a_0 \sum_{r = 0}^\infty \frac{(\alpha + 1 - \gamma)_r (\beta + 1 - \gamma)_r}{(1 - \gamma + 1)_r (1 - \gamma + \gamma)_r} x^{r + 1 - \gamma} \\ &= a_0 x^{1 - \gamma} \sum_{r = 0}^\infty \frac{(\alpha + 1 - \gamma)_r (\beta + 1 - \gamma)_r}{(1)_r (2 - \gamma)_r} x^r \\ &= a_0 x^{1 - \gamma} {{}_2 F_1}(\alpha - \gamma + 1, \beta - \gamma + 1; 2 - \gamma; x) \end{align} }[/math]

Hence, [math]\displaystyle{ y = A' y_1 + B' y_2. }[/math] Let A′a0 = A and B′a0 = B. Then

[math]\displaystyle{ y = A {{}_2 F_1}(\alpha, \beta; \gamma; x) + B x^{1 - \gamma} {{}_2 F_1}(\alpha - \gamma + 1, \beta - \gamma + 1; 2 - \gamma; x)\, }[/math]

γ = 1

Then y1 = y|c = 0. Since γ = 1, we have

[math]\displaystyle{ y = a_0 \sum_{r = 0}^\infty \frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2} x^{r + c}. }[/math]

Hence,

[math]\displaystyle{ \begin{align} y_1 &= a_0 \sum_{r = 0}^\infty \frac{(\alpha)_r (\beta)_r}{(1)_r (1)_r} x^r = a_0 {{}_2 F_1}(\alpha, \beta; 1; x) \\ y_2 &= \left.\frac{\partial y}{\partial c}\right|_{c = 0}. \end{align} }[/math]

To calculate this derivative, let

[math]\displaystyle{ M_r = \frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2}. }[/math]

Then

[math]\displaystyle{ \ln(M_r) = \ln\left(\frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2}\right)= \ln(c + \alpha)_r + \ln(c + \beta)_r - 2\ln(c + 1)_r }[/math]

But

[math]\displaystyle{ \ln(c + \alpha)_r = \ln\left((c + \alpha)(c + \alpha + 1) \cdots (c + \alpha + r - 1)\right) = \sum_{k = 0}^{r - 1} \ln(c + \alpha + k). }[/math]

Hence,

[math]\displaystyle{ \begin{align} \ln(M_r) &= \sum_{k = 0}^{r - 1} \ln(c + \alpha + k)+ \sum_{k = 0}^{r - 1} \ln(c + \beta + k)- 2 \sum_{k = 0}^{r - 1} \ln(c + 1 + k) \\ &= \sum_{k = 0}^{r - 1} \left(\ln(c + \alpha + k) + \ln(c + \beta + k) -2 \ln(c + 1 + k)\right) \end{align} }[/math]

Differentiating both sides of the equation with respect to c, we get:

[math]\displaystyle{ \frac{1}{M_r} \frac{\partial M_r}{\partial c}= \sum_{k = 0}^{r - 1} \left(\frac{1}{c + \alpha + k} + \frac{1}{c + \beta +k }- \frac{2}{c + 1 + k}\right). }[/math]

Hence,

[math]\displaystyle{ \frac{\partial M_r}{\partial c}= \frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2}\sum_{k=0}^{r-1} \left(\frac{1}{c + \alpha + k} + \frac{1}{c + \beta + k} - \frac{2}{c + 1 + k}\right). }[/math]

Now,

[math]\displaystyle{ y = a_0 x^c \sum_{r = 0}^\infty \frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2} x^r = a_0 x^c \sum_{r = 0}^\infty M_r x^r. }[/math]

Hence,

[math]\displaystyle{ \begin{align} \frac{\partial y}{\partial c} &= a_0 x^c \ln(x) \sum_{r = 0}^\infty \frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2} x^r + a_0 x^c \sum_{r = 0}^\infty \left(\frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2}\left\{\sum_{k = 0}^{r - 1} \left(\frac{1}{c + \alpha + k} + \frac{1}{c + \beta + k}- \frac{2}{c + 1 + k} \right) \right\} \right) x^r \\ &= a_0 x^c \sum_{r = 0}^\infty \frac{(c + \alpha)_r (c + \beta)_r}{(c + 1)_r^2}\left(\ln x + \sum_{k = 0}^{r - 1} \left(\frac{1}{c + \alpha + k}+\frac{1}{c + \beta + k} - \frac{2}{c + 1 + k} \right) \right) x^r. \end{align} }[/math]

For c = 0, we get

[math]\displaystyle{ y_2 = a_0 \sum_{r=0}^\infty \frac{(\alpha)_r (\beta)_r}{(1)_r^2} \left(\ln x + \sum_{k = 0}^{r - 1} \left(\frac{1}{\alpha + k} + \frac{1}{\beta + k}-\frac{2}{1 + k} \right) \right) x^r. }[/math]

Hence, y = C′y1 + D′y2. Let C′a0 = C and D′a0 = D. Then

[math]\displaystyle{ y = C {{}_2 F_1}(\alpha, \beta; 1; x)+ D \sum_{r = 0}^\infty \frac{(\alpha)_r (\beta)_r}{(1)_r^2} \left(\ln(x) + \sum_{k = 0}^{r - 1} \left(\frac{1}{\alpha + k} + \frac{1}{\beta + k} - \frac{2}{1 + k} \right) \right) x^r }[/math]

γ an integer and γ ≠ 1

γ ≤ 0

The value of [math]\displaystyle{ \gamma }[/math] is [math]\displaystyle{ \gamma=0,-1,-2,\cdots }[/math]. To begin with, we shall simplify matters by concentrating on a particular value of [math]\displaystyle{ \gamma }[/math] and generalising the result at a later stage. We shall use the value [math]\displaystyle{ \gamma=-2 }[/math]. The indicial equation has a root at [math]\displaystyle{ c=0 }[/math], and we see from the recurrence relation

[math]\displaystyle{ a_r = \frac{(r + c + \alpha - 1)(r + c + \beta - 1)}{(r + c)(r + c -3)} a_{r - 1}, }[/math]

that when [math]\displaystyle{ r=3 }[/math] the denominator has a factor [math]\displaystyle{ c }[/math] which vanishes when [math]\displaystyle{ c=0 }[/math]. In this case, a solution can be obtained by putting [math]\displaystyle{ a_0= b_0 c }[/math] where [math]\displaystyle{ b_0 }[/math] is a constant.

With this substitution, the coefficients of [math]\displaystyle{ x^r }[/math] vanish when [math]\displaystyle{ c=0 }[/math] and [math]\displaystyle{ r \lt 3 }[/math]. The factor of [math]\displaystyle{ c }[/math] in the denominator of the recurrence relation cancels with that of the numerator when [math]\displaystyle{ r \ge 3 }[/math]. Hence, our solution takes the form

[math]\displaystyle{ y_1= \frac{b_0}{(-2)\times (-1)} \left( \frac{(\alpha)_{3} (\beta)_{3}}{3! \, 0!} x^{3} + \frac{(\alpha)_{4} (\beta)_{4}}{4! \, 1!} x^{4} + \frac{(\alpha)_{5} (\beta)_{5}}{5! \, 2!} x^{5}+\cdots \right) }[/math]

[math]\displaystyle{ =\frac{b_0}{ (-2)_2} \sum_{r=3}^\infty \frac{(\alpha)_r (\beta)_r}{r! (r-3)!} x^r =\frac{b_0}{ (-2)_2} \frac{(\alpha)_3 (\beta)_3}{3!} \sum_{r=3}^\infty \frac{(\alpha+3)_{r-3} (\beta+3)_{r-3}}{(1+3)_{r-3} (r-3)!} x^r. }[/math]

If we start the summation at [math]\displaystyle{ r=0 }[/math] rather than [math]\displaystyle{ r=3 }[/math] we see that

[math]\displaystyle{ y_1=b_0 \frac{(\alpha)_3 (\beta)_3}{(-2)_2 \times 3!} x^3 {_2 F_1} (\alpha+3, \beta+3; (1+3); x). }[/math]

The result (as we have written it) generalises easily. For [math]\displaystyle{ \gamma=1-m }[/math], with [math]\displaystyle{ m=1,2,3,\cdots }[/math] then

[math]\displaystyle{ y_1=b_0 \frac{(\alpha)_m (\beta)_m}{(1-m)_{m-1} \times m!} x^m {_2 F_1} (\alpha+m, \beta+m; (1+m); x). }[/math]

Obviously, if [math]\displaystyle{ \gamma=-2 }[/math], then [math]\displaystyle{ m=3 }[/math]. The expression for [math]\displaystyle{ y_1(x) }[/math] we have just given looks a little inelegant, since we have a multiplicative constant apart from the usual arbitrary multiplicative constant [math]\displaystyle{ b_0 }[/math]. Later, we shall see that we can recast things in such a way that this extra constant never appears.

The other root of the indicial equation is [math]\displaystyle{ c=1-\gamma=3 }[/math], but this gives us (apart from a multiplicative constant) the same result as found using [math]\displaystyle{ c=0 }[/math]. This means we must take the partial derivative (w.r.t. [math]\displaystyle{ c }[/math]) of the usual trial solution in order to find a second independent solution. If we define the linear operator [math]\displaystyle{ L }[/math] as

[math]\displaystyle{ L=x(1-x)\frac{d^2}{d x^2}-(\alpha+\beta+1) x\frac{d}{d x}+\gamma \frac{d}{d x}-\alpha \beta, }[/math]

then since [math]\displaystyle{ \gamma=-2 }[/math] in our case,

[math]\displaystyle{ L \, c \sum_{r=0}^\infty b_r(c) x^{r+c} = b_0 c^2(c-3) x^{c-1}. }[/math]

(We insist that [math]\displaystyle{ b_0 \ne 0 }[/math].) Taking the partial derivative w.r.t [math]\displaystyle{ c }[/math],

[math]\displaystyle{ L \frac{\partial}{\partial c} c \sum_{r=0}^\infty b_r(c) x^{r+c} = b_0 \left(3 c^2-6c + c^2(c-3)\ln x\right) x^{c-1}. }[/math]

Note that we must evaluate the partial derivative at [math]\displaystyle{ c=0 }[/math] (and not at the other root [math]\displaystyle{ c=3 }[/math]). Otherwise the right hand side is non-zero in the above, and we do not have a solution of [math]\displaystyle{ Ly(x)=0 }[/math]. The factor [math]\displaystyle{ c }[/math] is not cancelled for [math]\displaystyle{ r=0,1 }[/math] and [math]\displaystyle{ r=2 }[/math]. This part of the second independent solution is

[math]\displaystyle{ {\bigg [} \frac{\partial}{\partial c} b_0 \bigg ( c + c\frac{(c+\alpha)(c+\beta)}{(c+1) (c-2)} x + c\frac{(c+\alpha)(c+\alpha+1)(c+\beta)(c+\beta+1)}{(c+1)(c+2) (c-2)(c-1)} x^2 {\bigg )} {\bigg ]} {\bigg \vert}_{c=0}. }[/math] [math]\displaystyle{ = b_0 \left ( 1 +\frac{\alpha \beta}{1! \times (-2)} x +\frac{\alpha (\alpha+1) \beta(\beta+1)}{2! \times (-2)\times (-1)}x^2 \right ) = b_0 \sum_{r=0}^{3-1} \frac{(\alpha)_r (\beta)_r}{r! (1-3)_r } x^r . }[/math]

Now we can turn our attention to the terms where the factor [math]\displaystyle{ c }[/math] cancels. First

[math]\displaystyle{ c b_3= \frac{b_0}{(c-1)(c-2)} \cancel{c} \frac{ (c+\alpha)(c+\alpha+1)(c+\alpha+2) (c+\beta)(c+\beta+1)(c+\beta+2) }{\cancel{c}(c+1)(c+2)(c+3)}. }[/math]

After this, the recurrence relations give us

[math]\displaystyle{ c b_4=c b_3(c)\frac{ (c+\alpha+3)(c+\beta+3) }{(c+1)(c+4)}. }[/math]

[math]\displaystyle{ c b_5=c b_3(c) \frac{ (c+\alpha+3)(c+\alpha+4)(c+\beta+3)(c+\beta+4) }{(c+2)(c+1)(c+5)(c+4)}. }[/math]

So, if [math]\displaystyle{ r \ge 3 }[/math] we have

[math]\displaystyle{ c b_r= \frac{b_0}{(c-1)(c-2)} \frac{ (c+\alpha)_r(c+\beta)_r }{(c+1)_{r-3} (c+1)_r}. }[/math]

We need the partial derivatives

[math]\displaystyle{ \frac {\partial c b_3(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_3 (\beta)_3}{0! 3!} {\bigg [} \frac{1}{1}+\frac{1}{2}+ \frac{1}{\alpha}+\frac{1}{\alpha+1}+\frac{1}{\alpha+2} }[/math] [math]\displaystyle{ + \frac{1}{\beta}+\frac{1}{\beta+1}+\frac{1}{\beta+2}-\frac{1}{1}-\frac{1}{2}-\frac{1}{3} {\bigg ]}. }[/math]

Similarly, we can write

[math]\displaystyle{ \frac {\partial c b_4(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_4 (\beta)_4}{1! 4!} {\bigg [} \frac{1}{1}+\frac{1}{2} }[/math] [math]\displaystyle{ +\sum_{k=0}^{k=3}\frac{1}{\alpha+k}+\sum_{k=0}^{k=3}\frac{1}{\beta+k} -\frac{1}{1}-\frac{1}{2}-\frac{1}{3} -\frac{1}{4}-\frac{1}{1} {\bigg ]}, }[/math]

and

[math]\displaystyle{ \frac {\partial c b_5(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_5 (\beta)_5}{2! 5!} {\bigg [} \frac{1}{1}+\frac{1}{2} }[/math] [math]\displaystyle{ +\sum_{k=0}^{k=4}\frac{1}{\alpha+k}+\sum_{k=0}^{k=4}\frac{1}{\beta+k} -\frac{1}{1}-\frac{1}{2}-\frac{1}{3}-\frac{1}{4}-\frac{1}{5} -\frac{1}{1}-\frac{1}{2} {\bigg ]}. }[/math]

It becomes clear that for [math]\displaystyle{ r \ge 3 }[/math]

[math]\displaystyle{ \frac {\partial c b_r(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_r (\beta)_r}{(r-3)!r!} {\bigg [} H_2 +\sum_{k=0}^{k=r-1}\frac{1}{\alpha+k}+\sum_{k=0}^{k=r-1}\frac{1}{\beta+k} -H_r -H_{r-3} {\bigg ]}. }[/math]

Here, [math]\displaystyle{ H_k }[/math] is the [math]\displaystyle{ k }[/math]th partial sum of the harmonic series, and by definition [math]\displaystyle{ H_0=0 }[/math] and [math]\displaystyle{ H_1=1 }[/math].

Putting these together, for the case [math]\displaystyle{ \gamma=-2 }[/math] we have a second solution

[math]\displaystyle{ y_2(x)= \log x \times \frac{b_0}{ (-2)_2} \sum_{r=3}^\infty \frac{(\alpha)_r (\beta)_r}{r! (r-3)!} x^r + b_0 \sum_{r=0}^{3-1} \frac{(\alpha)_r (\beta)_r}{r! (1-3)_r } x^r }[/math]

[math]\displaystyle{ +\frac{b_0}{ (-2)_2} \sum_{r=3}^\infty \frac{(\alpha)_r (\beta)_r}{(r-3)!r!} {\bigg [} H_2 +\sum_{k=0}^{k=r-1}\frac{1}{\alpha+k}+\sum_{k=0}^{k=r-1}\frac{1}{\beta+k} -H_r -H_{r-3} {\bigg ] x^r}. }[/math]

The two independent solutions for [math]\displaystyle{ \gamma=1-m }[/math] (where [math]\displaystyle{ m }[/math] is a positive integer) are then

[math]\displaystyle{ y_1(x)=\frac{1}{(1-m)_{m-1}} \sum_{r=m}^\infty \frac{(\alpha)_r (\beta)_r}{r! (r-m)!} x^r }[/math]

and

[math]\displaystyle{ y_2(x)= \log x \times y_1(x) + \sum_{r=0}^{m-1} \frac{(\alpha)_r (\beta)_r}{r! (1-m)_r } x^r }[/math]

[math]\displaystyle{ +\frac{1}{ (1-m)_{m-1}} \sum_{r=m}^\infty \frac{ (\alpha)_r (\beta)_r}{(r-m)!r!} {\bigg [} H_{m-1}+\sum_{k=0}^{k=r-1}\frac{1}{\alpha+k} +\sum_{k=0}^{k=r-1}\frac{1}{\beta+k} -H_r -H_{r-m} {\bigg ]} x^r. }[/math]

The general solution is, as usual, [math]\displaystyle{ y(x)=A y_1(x)+B y_2(x) }[/math] where [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are arbitrary constants. Now, if the reader consults a "standard solution" for this case, such as that given by Abramowitz and Stegun [1] in §15.5.21 (which we shall write down at the end of the next section), one finds that the [math]\displaystyle{ y_2 }[/math] solution we have found looks somewhat different from the standard solution. In our solution for [math]\displaystyle{ y_2 }[/math], the first term in the infinite series part of [math]\displaystyle{ y_2 }[/math] is a term in [math]\displaystyle{ x^m }[/math]. The first term in the corresponding infinite series in the standard solution is a term in [math]\displaystyle{ x^{m+1} }[/math]. The [math]\displaystyle{ x^m }[/math] term is missing from the standard solution. Nonetheless, the two solutions are entirely equivalent.

The "Standard" Form of the Solution γ ≤ 0

The reason for the apparent discrepancy between the solution given above and the standard solution in Abramowitz and Stegun [1] §15.5.21 is that there are infinitely many ways in which to represent the two independent solutions of the hypergeometric ODE. In the last section, for instance, we replaced [math]\displaystyle{ a_0 }[/math] with [math]\displaystyle{ b_0 c }[/math]. Suppose, though, that we are given some function [math]\displaystyle{ h(c) }[/math] which is continuous and finite everywhere in an arbitrarily small interval about [math]\displaystyle{ c=0 }[/math]. Suppose we are also given

[math]\displaystyle{ h(c) \vert_{c=0} \ne 0, }[/math] and [math]\displaystyle{ \frac{d h}{d c} {\bigg \vert}_{c=0} \ne 0. }[/math]

Then, if instead of replacing [math]\displaystyle{ a_0 }[/math] with [math]\displaystyle{ b_0 c }[/math] we replace [math]\displaystyle{ a_0 }[/math] with [math]\displaystyle{ b_0 h(c) c }[/math], we still find we have a valid solution of the hypergeometric equation. Clearly, we have infinitely many possibilities for [math]\displaystyle{ h(c) }[/math]. There is, however, a "natural choice" for [math]\displaystyle{ h(c) }[/math]. Suppose that [math]\displaystyle{ c b_N(c) =b_0 f(c) }[/math] is the first non-zero term in the first [math]\displaystyle{ y_1(x) }[/math] solution with [math]\displaystyle{ c=0 }[/math]. If we make [math]\displaystyle{ h(c) }[/math] the reciprocal of [math]\displaystyle{ f(c) }[/math], then we won't have a multiplicative constant involved in [math]\displaystyle{ y_1(x) }[/math] as we did in the previous section. From another point of view, we get the same result if we "insist" that [math]\displaystyle{ a_N }[/math] is independent of [math]\displaystyle{ c }[/math], and find [math]\displaystyle{ a_0(c) }[/math] by using the recurrence relations backwards.

For the first [math]\displaystyle{ (c=0) }[/math] solution, the function [math]\displaystyle{ h(c) }[/math] gives us (apart from multiplicative constant) the same [math]\displaystyle{ y_1(x) }[/math] as we would have obtained using [math]\displaystyle{ h(c)=1 }[/math]. Suppose that using [math]\displaystyle{ h(c)=1 }[/math] gives rise to two independent solutions [math]\displaystyle{ y_1(x) }[/math] and [math]\displaystyle{ y_2(x) }[/math]. In the following we shall denote the solutions arrived at given some [math]\displaystyle{ h(c)\ne 1 }[/math] as [math]\displaystyle{ {\tilde y}_1(x) }[/math] and [math]\displaystyle{ {\tilde y}_2(x) }[/math].

The second solution requires us to take the partial derivative w.r.t [math]\displaystyle{ c }[/math], and substituting the usual trial solution gives us

[math]\displaystyle{ L \frac{\partial}{\partial c} \sum_{r=0}^\infty c\, h(c) b_r x^{r+c} = b_0 \left ( \frac{d h}{d c} c^2 (c+\gamma-1)+ 2 c h(c) (c+\gamma-1)+ h(c) c^2 + c^2 h(c)(c+\gamma-1) \ln x \right ) x^{c-1}. }[/math]

The operator [math]\displaystyle{ L }[/math] is the same linear operator discussed in the previous section. That is to say, the hypergeometric ODE is represented as [math]\displaystyle{ Ly(x)=0 }[/math].

Evaluating the left hand side at [math]\displaystyle{ c=0 }[/math] will give us a second independent solution. Note that this second solution [math]\displaystyle{ {\tilde y_2} }[/math] is in fact a linear combination of [math]\displaystyle{ y_1(x) }[/math] and [math]\displaystyle{ y_2(x) }[/math].

Any two independent linear combinations ([math]\displaystyle{ {\tilde y}_1 }[/math] and [math]\displaystyle{ {\tilde y}_2 }[/math]) of [math]\displaystyle{ y_1 }[/math] and [math]\displaystyle{ y_2 }[/math] are independent solutions of [math]\displaystyle{ Ly=0 }[/math].

The general solution can be written as a linear combination of [math]\displaystyle{ {\tilde y}_1 }[/math] and [math]\displaystyle{ {\tilde y}_2 }[/math] just as well as linear combinations of [math]\displaystyle{ y_1 }[/math] and [math]\displaystyle{ y_2 }[/math].


We shall review the special case where [math]\displaystyle{ \gamma=1-3=-2 }[/math] that was considered in the last section. If we "insist" that [math]\displaystyle{ a_3(c)=const. }[/math], then the recurrence relations yield

[math]\displaystyle{ a_2=a_3 \frac{ c (3+c)}{(2+\alpha+c)(2+\beta+c) }, }[/math] [math]\displaystyle{ a_1=a_3 \frac{ c (2+c)(3+c)(c-1)}{(1+\alpha+c)(2+\alpha+c)(1+\beta+c)(2+\beta+c)}, }[/math]

and

[math]\displaystyle{ a_0=a_3 \frac{ c (1+c)(2+c)(3+c)(c-1)(c-2)}{(\alpha+c)_3 (\beta+c)_3}=b_0 c h(c). }[/math]

These three coefficients are all zero at [math]\displaystyle{ c=0 }[/math], as expected. Taking the partial derivative w.r.t. [math]\displaystyle{ c }[/math] brings three terms involving these coefficients into [math]\displaystyle{ y_2(x) }[/math]; we denote the sum of these three terms as [math]\displaystyle{ S_3 }[/math], where

[math]\displaystyle{ S_3=\left [ \frac{\partial }{\partial c} \left (a_0(c) x^c+a_1(c) x^{c+1}+a_2(c) x^{c+2} \right ) \right ]_{c=0}, }[/math] [math]\displaystyle{ =a_3 \left [\frac{3\times 2 \times 1 \times (-2)\times (-1)}{(\alpha)_3 (\beta)_3 }x^{3-3} + \frac{3 \times 2 \times (-1)}{(\alpha+1)(\alpha+2)(\beta+1)(\beta+2)}x^{3-2} + \frac{3 }{(\alpha+2) (\beta+2) }x^{3-1}\right ] . }[/math]

The reader may confirm that we can tidy this up and make it easy to generalise by putting

[math]\displaystyle{ S_3=-a_3 \sum_{r=1}^3 \frac{ (-3)_r (r-1)!}{(1-\alpha-3)_r (1-\beta-3)_r} x^{3-r}. }[/math]

Next we turn to the other coefficients; the recurrence relations yield

[math]\displaystyle{ a_4=a_3 \frac{(3+c+\alpha)(3+c+\beta)}{(4+c)(1+c)}, }[/math] [math]\displaystyle{ a_5=a_3 \frac{(3+c+\alpha)(4+c+\alpha)(3+c+\beta)(4+c+\beta)}{(5+c)(4+c)(1+c)(2+c)}. }[/math]

Setting [math]\displaystyle{ c=0 }[/math] gives us

[math]\displaystyle{ {\tilde y}_1(x)=a_3 x^3 \sum_{r=0}^\infty \frac{(\alpha+3)_r (\beta+3)_r}{(3+1)_r r!} x^r =a_3 x^3 {_2 F_1}(\alpha+3,\beta+3;(1+3);x). }[/math]

This is (apart from the multiplicative constant [math]\displaystyle{ (\alpha)_3 (\beta)_3/((-2)_2 \, 3!) }[/math]) the same as [math]\displaystyle{ y_1(x) }[/math]. Now, to find [math]\displaystyle{ {\tilde y}_2 }[/math] we need the partial derivatives

[math]\displaystyle{ \frac{\partial a_4 }{\partial c}{\bigg \vert}_{c=0}= a_3 {\bigg [} \frac{(3+c+\alpha)(3+c+\beta)}{(4+c)(1+c)} {\bigg (} \frac{1}{\alpha+3+c}+\frac{1}{\beta+3+c}-\frac{1}{4+c}-\frac{1}{1+c} {\bigg )} {\bigg ]}_{c=0} }[/math]

[math]\displaystyle{ = a_3 \frac{(3+\alpha)_1(3+\beta)_1}{(1+3)_1 \times 1} {\bigg (} \frac{1}{\alpha+3}+\frac{1}{\beta+3}-\frac{1}{4}-\frac{1}{1} {\bigg )}. }[/math]

Then

[math]\displaystyle{ \frac{\partial a_5 }{\partial c}{\bigg \vert}_{c=0} = a_3 \frac{(3+\alpha)_2(3+\beta)_2}{(1+3)_2 \times 1 \times 2} {\bigg (} \frac{1}{\alpha+3}+\frac{1}{\alpha+4}+\frac{1}{\beta+3} +\frac{1}{\beta+4}-\frac{1}{4}-\frac{1}{5}-\frac{1}{1} -\frac{1}{2} {\bigg )}. }[/math]

We can rewrite this as

[math]\displaystyle{ \frac{\partial a_5 }{\partial c}{\bigg \vert}_{c=0} =a_3 \frac{(3+\alpha)_2(3+\beta)_2}{(1+3)_2 \times 2!} {\bigg [} \sum_{k=0}^1\left ( \frac{1}{\alpha+3+k}+\frac{1}{\beta+3+k} \right ) +\sum_{k=1}^3 \frac{1}{k}-\sum_{k=1}^5 \frac{1}{k}-\frac{1}{1} -\frac{1}{2} {\bigg ]}. }[/math]

The pattern soon becomes clear, and for [math]\displaystyle{ r=1,2,3,\cdots }[/math]

[math]\displaystyle{ \frac{\partial a_{r+3} }{\partial c}{\bigg \vert}_{c=0} =a_3 \frac{(3+\alpha)_{r}(3+\beta)_r}{(1+3)_r \times r!} {\bigg [} \sum_{k=0}^{r-1} \left ( \frac{1}{\alpha+3+k}+\frac{1}{\beta+3+k} \right ) +\sum_{k=1}^3 \frac{1}{k}-\sum_{k=1}^{r+3} \frac{1}{k}-\sum_{k=1}^r\frac{1}{k} {\bigg ]}. }[/math]

Clearly, for [math]\displaystyle{ r=0 }[/math],

[math]\displaystyle{ \frac{\partial a_{3} }{\partial c}{\bigg \vert}_{c=0} =0. }[/math]

The infinite series part of [math]\displaystyle{ {\tilde y}_2 }[/math] is [math]\displaystyle{ S_\infty }[/math], where

[math]\displaystyle{ S_\infty=x^3 \sum_{r=1}^\infty \frac{\partial a_{r+3} }{\partial c}{\bigg \vert}_{c=0} x^r. }[/math]

Now we can write (disregarding the arbitrary constant) for [math]\displaystyle{ \gamma=1-m }[/math]

[math]\displaystyle{ {\tilde y}_1(x)= x^m {_2 F_1}(\alpha+m,\beta+m;1+m;x) }[/math]

[math]\displaystyle{ {\tilde y}_2(x)={\tilde y}_1(x) \log x -\sum_{r=1}^m \frac{ (-m)_r (r-1)!}{(1-\alpha-m)_r (1-\beta-m)_r} x^{m-r} }[/math] [math]\displaystyle{ +x^m \sum_{r=0}^\infty \frac{(\alpha+m)_{r}(\beta+m)_r}{(1+m)_r \times r!} {\bigg [} \sum_{k=0}^{r-1} \left ( \frac{1}{\alpha+m+k}+\frac{1}{\beta+m+k} \right ) +\sum_{k=1}^m \frac{1}{k}-\sum_{k=1}^{r+m} \frac{1}{k}-\sum_{k=1}^r\frac{1}{k} {\bigg ]} x^r. }[/math]

Some authors prefer to express the finite sums in this last result using the digamma function [math]\displaystyle{ \psi(x) }[/math]. In particular, the following results are used

[math]\displaystyle{ H_n=\psi(n+1)+\gamma_{em}. }[/math] Here, [math]\displaystyle{ \gamma_{em} =0.5772156649\cdots=-\psi(1) }[/math] is the Euler–Mascheroni constant. Also

[math]\displaystyle{ \sum_{k=0}^{n-1} \frac{1}{z+k}=\psi(z+n)-\psi(z). }[/math]

With these results we obtain the form given in Abramowitz and Stegun §15.5.21, namely

[math]\displaystyle{ {\tilde y}_2(x)={\tilde y}_1(x) \log x -\sum_{r=1}^m \frac{ (-m)_r (r-1)!}{(1-\alpha-m)_r (1-\beta-m)_r} x^{m-r} }[/math] [math]\displaystyle{ +x^m \sum_{r=0}^\infty \frac{(\alpha+m)_{r}(\beta+m)_r}{(1+m)_r \times r!} {\bigg [} \psi(\alpha+r+m)-\psi(\alpha+m)+ \psi(\beta+r+m)-\psi(\beta+m) }[/math] [math]\displaystyle{ -\psi(r+1+m)-\psi(r+1)+\psi(1+m)+\psi(1) {\bigg ]} x^r. }[/math]

The "Standard" Form of the Solution γ > 1

In this section, we shall concentrate on the "standard solution", and we shall not replace [math]\displaystyle{ a_0 }[/math] with [math]\displaystyle{ b_0 (c-1+\gamma) }[/math]. We shall put [math]\displaystyle{ \gamma =1+m }[/math] where [math]\displaystyle{ m=1,2,3, \cdots }[/math]. For the root [math]\displaystyle{ c=1-\gamma }[/math] of the indicial equation we had

[math]\displaystyle{ A_r= \left [ A_{r-1}\frac{ (r+\alpha-1+c)(r+\beta-1+c)}{(r+c)(r+c+\gamma-1)} \right ]_{c=1-\gamma} =A_{r-1}\frac{ (r+\alpha-\gamma)(r+\beta-\gamma)}{(r+1-\gamma)(r)}, }[/math]

where [math]\displaystyle{ r\ge 1 }[/math], in which case we are in trouble if [math]\displaystyle{ r=\gamma-1=m }[/math]. For instance, if [math]\displaystyle{ \gamma=4 }[/math], the denominator in the recurrence relations vanishes for [math]\displaystyle{ r=3 }[/math]. We can use exactly the same methods that we have just used for the standard solution in the last section. We shall not (in the instance where [math]\displaystyle{ \gamma=4 }[/math]) replace [math]\displaystyle{ a_0 }[/math] with [math]\displaystyle{ b_0 (c+3) }[/math], as this will not give us the standard form of solution that we are after. Rather, we shall "insist" that [math]\displaystyle{ A_3 =const. }[/math], as we did in the standard solution for [math]\displaystyle{ \gamma=-2 }[/math] in the last section. (Recall that this defined the function [math]\displaystyle{ h(c) }[/math] and that [math]\displaystyle{ a_0 }[/math] will now be replaced with [math]\displaystyle{ b_0 (c+3)h(c) }[/math].) Then we may work out the coefficients of [math]\displaystyle{ x^0 }[/math] to [math]\displaystyle{ x^2 }[/math] as functions of [math]\displaystyle{ c }[/math] using the recurrence relations backwards. There is nothing new to add here, and the reader may use the same methods as used in the last section to find the results of [1] §15.5.18 and §15.5.19; these are

[math]\displaystyle{ y_{1}={_2F_1}(\alpha,\beta;1+m;x), }[/math]

and

[math]\displaystyle{ \begin{align} y_{2}= {}&{_2F_1}(\alpha,\beta;1+m;x) \ln x \\ &+ \sum_{r=1}^\infty \frac{ (\alpha)_r (\beta)_r}{r! \, (1+m)_r} \left [ \psi(\alpha+r)-\psi(\alpha) +\psi(\beta+r)-\psi(\beta) -\psi(m+1+r)+\psi(m+1)-\psi(r+1)+\psi(1) \right ] x^r \\ &- \sum_{k=1}^m \frac{ (k-1)! \, (-m)_k }{ (1-\alpha)_k (1-\beta)_k} x^{-k}. \end{align} }[/math]

Note that the powers of [math]\displaystyle{ x }[/math] in the finite-sum part of [math]\displaystyle{ y_2(x) }[/math] are negative, so this sum diverges as [math]\displaystyle{ x \to 0 }[/math].
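As a sanity check on the first solution, one can verify term by term that the Taylor coefficients of [math]\displaystyle{ {_2F_1}(\alpha,\beta;1+m;x) }[/math] satisfy the relation obtained by substituting the series into the hypergeometric equation. A minimal sketch in exact rational arithmetic (the parameter values are arbitrary choices):

```python
from fractions import Fraction
from math import factorial

def poch(q, n):
    """Pochhammer symbol (q)_n = q(q+1)...(q+n-1)."""
    out = Fraction(1)
    for k in range(n):
        out *= q + k
    return out

# Sample rational parameters with gamma = 1 + m (here m = 2); the values are arbitrary.
alpha, beta, m = Fraction(1, 3), Fraction(2, 5), 2
gamma = 1 + m

# Taylor coefficients of y1 = 2F1(alpha, beta; gamma; x)
a = [poch(alpha, r) * poch(beta, r) / (poch(gamma, r) * factorial(r)) for r in range(30)]

# Substituting y = sum_r a_r x^r into x(1-x)y'' + (gamma - (1+alpha+beta)x)y' - alpha*beta*y = 0
# and collecting the coefficient of x^r gives (r+1)(r+gamma) a_{r+1} = (r+alpha)(r+beta) a_r.
assert all((r + 1) * (r + gamma) * a[r + 1] == (r + alpha) * (r + beta) * a[r]
           for r in range(29))
print("2F1(alpha, beta; 1+m; x) satisfies the hypergeometric equation term by term")
```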

Solution around x = 1

Let us now study the singular point x = 1. To see if it is regular, we study the following limits:

[math]\displaystyle{ \begin{align} \lim_{x \to a} \frac{(x - a) P_1(x)}{P_2(x)} &= \lim_{x \to 1} \frac{(x - 1) (\gamma - (1 + \alpha + \beta)x)}{x(1 - x)} = \lim_{x \to 1} \frac{-(\gamma - (1 + \alpha + \beta)x)}{x} = 1 + \alpha + \beta - \gamma \\ \lim_{x \to a} \frac{(x - a)^2 P_0(x)}{P_2(x)} &= \lim_{x \to 1} \frac{(x - 1)^2 (-\alpha\beta)}{x(1 - x)} = \lim_{x \to 1} \frac{(x - 1) \alpha \beta}{x} = 0 \end{align} }[/math]

Hence, both limits exist and x = 1 is a regular singular point. Now, instead of assuming a solution of the form

[math]\displaystyle{ y = \sum_{r = 0}^\infty a_r (x - 1)^{r + c}, }[/math]

we will try to express the solutions of this case in terms of the solutions for the point x = 0. We proceed as follows: we had the hypergeometric equation

[math]\displaystyle{ x(1 - x)y'' + (\gamma - (1 + \alpha + \beta)x)y' - \alpha\beta y = 0. }[/math]

Let z = 1 − x. Then

[math]\displaystyle{ \begin{align} \frac{dy}{dx} &= \frac{dy}{dz} \times \frac{dz}{dx} = -\frac{dy}{dz} = -y' \\ \frac{d^2 y}{dx^2} &= \frac{d}{dx}\left( \frac{dy}{dx} \right) = \frac{d}{dx}\left( -\frac{dy}{dz} \right) = \frac{d}{dz}\left( -\frac{dy}{dz} \right) \times \frac{dz}{dx} =\frac{d^{2}y}{dz^{2}} = y'' \end{align} }[/math]

Hence, the equation takes the form

[math]\displaystyle{ z(1 - z) y'' + (\alpha + \beta - \gamma + 1 - (1 + \alpha + \beta)z) y' - \alpha\beta y = 0. }[/math]

Since z = 1 − x, the solution of the hypergeometric equation at x = 1 is the same as the solution of this equation at z = 0. But the solution at z = 0 is identical to the solution we obtained for the point x = 0, if we replace each γ by α + β − γ + 1. Hence, to get the solutions, we just make this substitution in the previous results. For x = 0, c1 = 0 and c2 = 1 − γ. Hence, in our case, c1 = 0 while c2 = γ − α − β. Let us now write the solutions; in the following, each z has been replaced by 1 − x.
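This substitution can be spot-checked numerically: [math]\displaystyle{ {_2F_1}(\alpha,\beta;\alpha+\beta-\gamma+1;1-x) }[/math] should satisfy the original hypergeometric equation near x = 1. A sketch using a truncated series (the parameter values are arbitrary, chosen so that γ − α − β is not an integer):

```python
def series_and_derivs(coeffs, z):
    """Evaluate p(z) = sum_r c_r z^r and its first two derivatives."""
    p = sum(c * z**r for r, c in enumerate(coeffs))
    dp = sum(r * c * z**(r - 1) for r, c in enumerate(coeffs) if r >= 1)
    d2p = sum(r * (r - 1) * c * z**(r - 2) for r, c in enumerate(coeffs) if r >= 2)
    return p, dp, d2p

alpha, beta, gamma = 0.3, 0.7, 0.25        # gamma - alpha - beta = -0.75, not an integer
c_new = 1 + alpha + beta - gamma           # parameter after gamma -> alpha + beta - gamma + 1

# Taylor coefficients of 2F1(alpha, beta; c_new; z)
coeffs = [1.0]
for r in range(120):
    coeffs.append(coeffs[-1] * (alpha + r) * (beta + r) / ((1 + r) * (c_new + r)))

x = 0.9
z = 1 - x                                  # series converges quickly for small z
F, dF, d2F = series_and_derivs(coeffs, z)
y, dy, d2y = F, -dF, d2F                   # chain rule for z = 1 - x

# Residual of the ORIGINAL equation x(1-x)y'' + (gamma-(1+alpha+beta)x)y' - alpha*beta*y
residual = x * (1 - x) * d2y + (gamma - (1 + alpha + beta) * x) * dy - alpha * beta * y
print(abs(residual))
assert abs(residual) < 1e-9
```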

Analysis of the solution in terms of the difference γ − α − β of the two roots

To simplify notation, from now on we denote γ − α − β by Δ; therefore γ = Δ + α + β.

Δ not an integer

[math]\displaystyle{ y = A \left \{ {{}_2 F_1}(\alpha, \beta; -\Delta + 1; 1 - x) \right \} + B \left \{(1 - x)^{\Delta} {{}_2 F_1}(\Delta + \beta, \Delta + \alpha; \Delta + 1; 1 - x) \right \} }[/math]

Δ = 0

[math]\displaystyle{ y = C \left \{ {{}_2 F_1}(\alpha, \beta; 1; 1 - x) \right \} + D \left \{\sum_{r = 0}^\infty \frac{(\alpha)_r (\beta)_r}{(1)_r^2} \left(\ln(1 - x) + \sum_{k = 0}^{r - 1} \left(\frac{1}{\alpha + k} + \frac{1}{\beta + k} - \frac{2}{1 + k}\right)\right) (1 - x)^r \right \} }[/math]

Δ is a non-zero integer

Δ > 0

[math]\displaystyle{ \begin{align} y &= E \left \{ \frac{1}{(-\Delta + 1)_{\Delta - 1}} \ \sum_{r = \Delta}^\infty \frac{(\alpha)_r (\beta)_r}{(1)_r (1)_{r - \Delta}} (1 - x)^r \right \} + \\ &\quad + F \left \{(1 - x)^{\Delta} \ \sum_{r = 0}^\infty \frac{(\Delta)(\Delta + \alpha)_r (\Delta + \beta)_r} {(\Delta + 1)_r (1)_r} \left (\ln(1 - x) + \frac{1}{\Delta} +\sum_{k=0}^{r-1} \left(\frac{1}{\Delta + \alpha + k} + \frac{1}{\Delta + \beta + k} - \frac{1}{\Delta + 1 + k} - \frac{1}{1 + k} \right) \right ) (1 - x)^r \right \} \end{align} }[/math]

Δ < 0

[math]\displaystyle{ \begin{align} y &= G \left \{ \frac{(1 - x)^{\Delta}}{(\Delta+1)_{-\Delta - 1}} \ \sum_{r = -\Delta}^\infty \frac{(\Delta + \alpha )_r (\Delta + \beta)_r}{(1)_r (1)_{r + \Delta}} (1 - x)^r \right \} + \\ &\quad + H \left \{ \sum_{r = 0}^\infty \frac{(\Delta)(\Delta + \alpha)_r (\Delta + \beta)_r}{(\Delta + 1)_r (1)_r}\left (\ln(1 - x) - \frac{1}{\Delta} + \sum_{k = 0}^{r - 1} \left(\frac{1}{\alpha + k} + \frac{1}{\beta + k} - \frac{1}{-\Delta + 1 + k} - \frac{1}{1 + k} \right)\right ) (1 - x)^r \right \} \end{align} }[/math]

Solution around infinity

Finally, we study the singularity as x → ∞. Since we can't study this directly, we let x = 1/s. Then the solution of the equation as x → ∞ is identical to the solution of the modified equation when s = 0. We had

[math]\displaystyle{ \begin{align} & x(1-x)y''+\left ( \gamma -(1+\alpha +\beta )x \right ) y'-\alpha \beta y=0 \\ & \frac{dy}{dx}=\frac{dy}{ds}\times \frac{ds}{dx}=-s^2\times \frac{dy}{ds}=-s^2y' \\ & \frac{d^{2}y}{dx^{2}}=\frac{d}{dx}\left( \frac{dy}{dx} \right)=\frac{d}{dx}\left( -s^2 \times \frac{dy}{ds} \right)=\frac{d}{ds}\left( -s^2 \times \frac{dy}{ds} \right)\times \frac{ds}{dx} = \left( (-2s)\times \frac{dy}{ds}+(-s^{2})\frac{d^{2}y}{ds^{2}} \right) \times (-s^{2})=2s^{3}y'+s^{4}y'' \end{align} }[/math]

Hence, the equation takes the new form

[math]\displaystyle{ \frac{1}{s} \left(1 - \frac{1}{s}\right) \left(2 s^3 y' + s^4 y''\right) + \left(\gamma - (1 + \alpha + \beta)\frac{1}{s} \right) (-s^2 y') - \alpha \beta y = 0 }[/math]

which reduces to

[math]\displaystyle{ \left (s^{3}-s^{2} \right )y''+ \left ((2-\gamma )s^2 +(\alpha +\beta -1)s\right )y'-\alpha \beta y = 0. }[/math]
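This reduction is an identity in y, so it can be sanity-checked by applying both operators to any smooth test function: if Y(s) = y(1/s), the original operator evaluated at x = 1/s must agree with the transformed operator at s. A minimal check using y = exp (the test function and parameter values are arbitrary choices):

```python
import math

alpha, beta, gamma = 0.35, 0.6, 0.8
s = 0.4                          # any s other than 0 and 1
x = 1 / s

# Test function y(x) = exp(x); then Y(s) = y(1/s) = exp(1/s), with
# Y'(s) = -exp(1/s)/s^2 and Y''(s) = exp(1/s) * (1/s^4 + 2/s^3).
y = dy = d2y = math.exp(x)
Y = math.exp(1 / s)
dY = -math.exp(1 / s) / s**2
d2Y = math.exp(1 / s) * (1 / s**4 + 2 / s**3)

# Original operator at x = 1/s versus the transformed operator at s
lhs = x * (1 - x) * d2y + (gamma - (1 + alpha + beta) * x) * dy - alpha * beta * y
rhs = (s**3 - s**2) * d2Y + ((2 - gamma) * s**2 + (alpha + beta - 1) * s) * dY - alpha * beta * Y

print(lhs, rhs)
assert abs(lhs - rhs) < 1e-8
```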

Let

[math]\displaystyle{ \begin{align} P_{0}(s) &=-\alpha \beta, \\ P_{1}(s) &= (2-\gamma )s^2+(\alpha +\beta -1)s, \\ P_{2}(s) &= s^3-s^2. \end{align} }[/math]

As we said, we shall only study the solution when s = 0. As we can see, this is a singular point since P2(0) = 0. To see if it is regular,

[math]\displaystyle{ \begin{align} \lim_{s \to a} \frac{(s-a)P_1(s)}{P_2(s)} & =\lim_{s \to 0} \frac{(s-0)((2-\gamma )s^2+(\alpha +\beta -1)s)}{s^3-s^2} \\ &= \lim_{s \to 0} \frac{(2-\gamma )s^{2}+(\alpha +\beta -1)s}{s^2-s} \\ &= \lim_{s \to 0} \frac{(2-\gamma )s+(\alpha +\beta -1)}{s-1}=1-\alpha -\beta. \\ \lim_{s \to a} \frac{(s-a)^2 P_0(s)}{P_2(s)} &=\lim_{s \to 0} \frac{(s-0)^2( -\alpha \beta)}{s^3-s^2}=\lim_{s \to 0} \frac{( -\alpha \beta)}{s-1}=\alpha \beta. \end{align} }[/math]

Hence, both limits exist and s = 0 is a regular singular point. Therefore, we assume the solution takes the form

[math]\displaystyle{ y=\sum_{r=0}^{\infty }{a_{r}s^{r+c}} }[/math]

with a0 ≠ 0. Hence,

[math]\displaystyle{ \begin{align} y'&=\sum\limits_{r=0}^{\infty }{a_{r}(r+c)s^{r+c-1}}\\ y''&=\sum\limits_{r=0}^{\infty }{a_{r}(r+c)(r+c-1)s^{r+c-2}} \end{align} }[/math]

Substituting in the modified hypergeometric equation we get

[math]\displaystyle{ \left (s^3-s^2 \right )y''+ \left ((2-\gamma )s^2+(\alpha +\beta -1)s \right )y' - (\alpha \beta) y=0 }[/math]

And therefore:

[math]\displaystyle{ \left (s^3 - s^2 \right ) \sum_{r=0}^{\infty }{a_{r}(r+c)(r+c-1)s^{r+c-2}} + \left ((2-\gamma )s^2+(\alpha +\beta -1)s \right ) \sum_{r=0}^{\infty }{a_{r}(r+c)s^{r+c-1}}-(\alpha \beta) \sum_{r=0}^{\infty }{a_{r}s^{r+c}}=0 }[/math]

i.e.,

[math]\displaystyle{ \sum_{r=0}^{\infty }{a_{r}(r+c)(r+c-1)s^{r+c+1}}-\sum_{r=0}^{\infty }{a_{r}(r+c)(r+c-1)s^{r+c}} +(2-\gamma )\sum_{r=0}^{\infty }{a_{r}(r+c)s^{r+c+1}}+(\alpha +\beta -1)\sum_{r=0}^{\infty }{a_{r}(r+c)s^{r+c}}-\alpha \beta \sum_{r=0}^{\infty }{a_{r}s^{r+c}}=0. }[/math]

In order to simplify this equation, we need all powers to be the same, equal to r + c, the smallest power. Hence, we switch the indices as follows

[math]\displaystyle{ \begin{align} &\sum_{r=1}^{\infty }{a_{r-1}(r+c-1)(r+c-2)s^{r+c}}-\sum_{r=0}^{\infty }{a_{r}(r+c)(r+c-1)s^{r+c}} +(2-\gamma )\sum_{r=1}^{\infty }{a_{r-1}(r+c-1)s^{r+c}}+ \\ &\qquad \qquad + (\alpha +\beta -1)\sum_{r=0}^{\infty }{a_{r}(r+c)s^{r+c}}-\alpha \beta \sum_{r=0}^{\infty }{a_{r}s^{r+c}}=0 \end{align} }[/math]

Thus, isolating the first term of the sums starting from 0 we get

[math]\displaystyle{ \begin{align} & a_{0}\left ( -(c)(c-1)+(\alpha +\beta -1)(c)-\alpha \beta \right )s^{c}+\sum_{r=1}^{\infty }{a_{r-1}(r+c-1)(r+c-2)s^{r+c}} -\sum_{r=1}^{\infty }{a_{r}(r+c)(r+c-1)s^{r+c}}+\\ & \qquad \qquad + (2-\gamma )\sum_{r=1}^{\infty }{a_{r-1}(r+c-1)s^{r+c}} +(\alpha +\beta -1)\sum_{r=1}^{\infty }{a_{r}(r+c)s^{r+c}}-\alpha \beta \sum_{r=1}^{\infty }{a_{r}s^{r+c}}=0 \end{align} }[/math]

Now, from the linear independence of all powers of s (i.e., of the functions 1, s, s2, ...), the coefficients of sk vanish for all k. Hence, from the first term we have

[math]\displaystyle{ a_{0}\left ( -(c)(c-1)+(\alpha +\beta -1)(c)-\alpha \beta \right )=0 }[/math]

which is the indicial equation. Since a0 ≠ 0, we have

[math]\displaystyle{ (c)(-c+1+\alpha +\beta -1)-\alpha \beta = -(c-\alpha )(c-\beta )=0. }[/math]

Hence, c1 = α and c2 = β.
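The factorization of the indicial polynomial can be confirmed in exact arithmetic; a minimal sketch (the sample parameter values are arbitrary):

```python
from fractions import Fraction

def indicial(c, alpha, beta):
    """-c(c-1) + (alpha+beta-1)c - alpha*beta, the coefficient of s^c."""
    return -c * (c - 1) + (alpha + beta - 1) * c - alpha * beta

alpha, beta = Fraction(3, 4), Fraction(-2, 7)

# The roots are c = alpha and c = beta ...
assert indicial(alpha, alpha, beta) == 0
assert indicial(beta, alpha, beta) == 0

# ... because the polynomial factors as -(c - alpha)(c - beta).
c = Fraction(5, 9)
assert indicial(c, alpha, beta) == -(c - alpha) * (c - beta)
print("indicial roots at infinity are c = alpha and c = beta")
```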

Also, from the rest of the terms we have

[math]\displaystyle{ \left ((r+c-1)(r+c-2)+(2-\gamma )(r+c-1) \right ) a_{r-1} +\left ( -(r+c)(r+c-1)+(\alpha +\beta -1)(r+c)-\alpha \beta \right ) a_r=0 }[/math]

Hence,

[math]\displaystyle{ a_{r}=-\frac{\left ( (r+c-1)(r+c-2)+(2-\gamma )(r+c-1) \right )}{\left ( -(r+c)(r+c-1)+(\alpha +\beta -1)(r+c)-\alpha \beta \right )}a_{r-1} =\frac{\left ((r+c-1)(r+c-\gamma ) \right )}{\left ( (r+c)(r+c-\alpha -\beta )+\alpha \beta \right )}a_{r-1} }[/math]

But

[math]\displaystyle{ \begin{align} (r+c)(r+c-\alpha -\beta )+\alpha \beta &=(r+c-\alpha )(r+c)-\beta (r+c)+\alpha \beta \\ &=(r+c-\alpha )(r+c)-\beta (r+c-\alpha ). \end{align} }[/math]

Hence, we get the recurrence relation

[math]\displaystyle{ a_{r}=\frac{(r+c-1)(r+c-\gamma )}{(r+c-\alpha )(r+c-\beta )}a_{r-1}, \quad \forall r \ge 1 }[/math]

Let's now simplify this relation by giving ar in terms of a0 instead of ar−1. From the recurrence relation,

[math]\displaystyle{ \begin{align} a_1 &=\frac{(c)(c+1-\gamma )}{(c+1-\alpha )(c+1-\beta )}a_{0} \\ a_2 &=\frac{(c+1)(c+2-\gamma )}{(c+2-\alpha )(c+2-\beta )}a_{1}=\frac{(c+1)(c)(c+2-\gamma )(c+1-\gamma )}{(c+2-\alpha )(c+1-\alpha )(c+2-\beta )(c+1-\beta )}a_{0} = \frac{(c)_{2}(c+1-\gamma )_{2}}{(c+1-\alpha )_{2}(c+1-\beta )_{2}}a_{0} \end{align} }[/math]

As we can see,

[math]\displaystyle{ a_{r}=\frac{(c)_r(c+1-\gamma )_r}{(c+1-\alpha )_{r}(c+1-\beta )_{r}}a_{0}\quad \forall r \ge 0 }[/math]

Hence, our assumed solution takes the form

[math]\displaystyle{ y=a_{0}\sum_{r=0}^{\infty} \frac{(c)_{r}(c+1-\gamma )_{r}}{(c+1-\alpha )_{r}(c+1-\beta )_{r}}s^{r+c} }[/math]
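The closed form for the coefficients can be checked against the recurrence relation in exact rational arithmetic; a minimal sketch (the parameter values are arbitrary):

```python
from fractions import Fraction

def poch(q, n):
    """Pochhammer symbol (q)_n = q(q+1)...(q+n-1)."""
    out = Fraction(1)
    for k in range(n):
        out *= q + k
    return out

alpha, beta, gamma, c = Fraction(1, 2), Fraction(5, 3), Fraction(3, 7), Fraction(2, 9)

# a_r from the recurrence a_r = (r+c-1)(r+c-gamma)/((r+c-alpha)(r+c-beta)) a_{r-1}, a_0 = 1
a = [Fraction(1)]
for r in range(1, 12):
    a.append(a[-1] * (r + c - 1) * (r + c - gamma) / ((r + c - alpha) * (r + c - beta)))

# Compare with the closed form a_r = (c)_r (c+1-gamma)_r / ((c+1-alpha)_r (c+1-beta)_r) a_0
for r in range(12):
    closed = (poch(c, r) * poch(c + 1 - gamma, r)
              / (poch(c + 1 - alpha, r) * poch(c + 1 - beta, r)))
    assert a[r] == closed
print("recurrence matches the closed form for a_r")
```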

We are now ready to study the solutions corresponding to the different cases for c1 − c2 = α − β.

Analysis of the solution in terms of the difference α − β of the two roots

α − β not an integer

Then y1 = y|c = α and y2 = y|c = β. Since

[math]\displaystyle{ y=a_0 \sum_{r=0}^{\infty} \frac{(c)_{r}(c+1-\gamma )_{r}}{(c+1-\alpha )_{r}(c+1-\beta )_{r}}s^{r+c}, }[/math]

we have

[math]\displaystyle{ \begin{align} y_1&=a_0 \sum_{r=0}^{\infty} \frac{(\alpha )_r (\alpha +1-\gamma )_r}{(1)_r (\alpha +1-\beta )_r} s^{r+\alpha} =a_0 s^{\alpha} \ {}_2F_1(\alpha, \alpha +1-\gamma ; \alpha +1-\beta ; s) \\ y_2&=a_0 \sum_{r=0}^{\infty} \frac{(\beta )_r (\beta +1-\gamma )_r}{(\beta +1-\alpha )_r (1)_r} s^{r+\beta} =a_0 s^{\beta} \ {}_2F_1(\beta, \beta +1-\gamma ; \beta +1-\alpha ;s) \end{align} }[/math]

Hence, y = A′y1 + B′y2. Let A′a0 = A and B′a0 = B. Then, noting that s = 1/x,

[math]\displaystyle{ y=A \left \{ x^{-\alpha} \ {}_{2}F_{1} \left (\alpha, \alpha +1-\gamma ; \alpha +1-\beta ; x^{-1} \right ) \right \}+B \left \{ x^{-\beta} \ {}_{2}F_{1} \left (\beta, \beta +1-\gamma ; \beta +1-\alpha ; x^{-1} \right ) \right \} }[/math]
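The first basis solution can be verified numerically: summing the series for [math]\displaystyle{ x^{-\alpha}\,{}_2F_1(\alpha,\alpha+1-\gamma;\alpha+1-\beta;x^{-1}) }[/math] and differentiating it term by term, the residual of the original equation should vanish to rounding error. A sketch (the parameter values are arbitrary, chosen so that α − β is not an integer):

```python
alpha, beta, gamma = 0.3, 0.8, 0.6       # alpha - beta = -0.5, not an integer

# b_r = (alpha)_r (alpha+1-gamma)_r / ((1)_r (alpha+1-beta)_r)
b = [1.0]
for r in range(100):
    b.append(b[-1] * (alpha + r) * (alpha + 1 - gamma + r)
             / ((1 + r) * (alpha + 1 - beta + r)))

x = 10.0                                  # series in 1/x converges quickly for large x
# y = sum_r b_r x^(-r-alpha); differentiate term by term.
y = sum(br * x**(-r - alpha) for r, br in enumerate(b))
dy = sum(br * (-r - alpha) * x**(-r - alpha - 1) for r, br in enumerate(b))
d2y = sum(br * (-r - alpha) * (-r - alpha - 1) * x**(-r - alpha - 2) for r, br in enumerate(b))

residual = x * (1 - x) * d2y + (gamma - (1 + alpha + beta) * x) * dy - alpha * beta * y
print(abs(residual))
assert abs(residual) < 1e-8
```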

α − β = 0

Then y1 = y|c = α. Since α = β, we have

[math]\displaystyle{ y=a_{0}\sum_{r=0}^{\infty }{\frac{(c)_{r}(c+1-\gamma )_{r}}{\left( (c+1-\alpha )_{r} \right)^{2}}s^{r+c}} }[/math]

Hence,

[math]\displaystyle{ \begin{align} y_{1} &= a_{0}\sum_{r=0}^{\infty }{\frac{(\alpha )_{r}(\alpha +1-\gamma )_{r}}{(1)_{r}(1)_{r}}s^{r+\alpha } }=a_0 s^{\alpha} \ {}_{2}F_{1}(\alpha, \alpha +1-\gamma ; 1; s) \\ y_{2} &= \left. \frac{\partial y}{\partial c}\right |_{c=\alpha} \end{align} }[/math]

To calculate this derivative, let

[math]\displaystyle{ M_{r}=\frac{(c)_{r}(c+1-\gamma )_{r}}{\left( (c+1-\alpha )_{r} \right)^{2}} }[/math]

Then using the method in the case γ = 1 above, we get

[math]\displaystyle{ \frac{\partial M_r}{\partial c}=\frac{(c)_r(c+1-\gamma)_r}{\left( (c+1-\alpha )_{r} \right)^{2}} \sum_{k=0}^{r-1} \left( \frac{1}{c+k}+\frac{1}{c+1-\gamma +k}-\frac{2}{c+1-\alpha +k} \right) }[/math]
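This logarithmic-derivative formula can be spot-checked against a numerical derivative; a minimal sketch (parameter values arbitrary, chosen away from the poles of the factors):

```python
def M(c, r, alpha, gamma):
    """M_r(c) = (c)_r (c+1-gamma)_r / ((c+1-alpha)_r)^2."""
    out = 1.0
    for k in range(r):
        out *= (c + k) * (c + 1 - gamma + k) / (c + 1 - alpha + k) ** 2
    return out

def dM_formula(c, r, alpha, gamma):
    """dM_r/dc = M_r(c) * sum_k [1/(c+k) + 1/(c+1-gamma+k) - 2/(c+1-alpha+k)]."""
    total = sum(1 / (c + k) + 1 / (c + 1 - gamma + k) - 2 / (c + 1 - alpha + k)
                for k in range(r))
    return M(c, r, alpha, gamma) * total

alpha, gamma, r, c = 0.3, 0.45, 5, 0.9
h = 1e-6
numeric = (M(c + h, r, alpha, gamma) - M(c - h, r, alpha, gamma)) / (2 * h)
print(dM_formula(c, r, alpha, gamma), numeric)
assert abs(dM_formula(c, r, alpha, gamma) - numeric) < 1e-6
```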

Now,

[math]\displaystyle{ \begin{align} y &=a_{0}s^{c}\sum_{r=0}^{\infty } \frac{(c)_{r}(c+1-\gamma )_r}{\left( (c+1-\alpha )_r \right)^{2}}s^r \\ &=a_{0}s^{c}\sum_{r=0}^{\infty }{M_{r}s^{r}} \\ &=a_0 s^c \left (\ln(s)\sum_{r=0}^{\infty} \frac{(c)_{r}(c+1-\gamma )_{r}}{\left( (c+1-\alpha )_r \right)^{2}}s^r + \sum_{r=0}^{\infty } \frac{(c)_{r}(c+1-\gamma )_{r}}{\left( (c+1-\alpha)_r \right)^2}\left\{\sum_{k=0}^{r-1}{\left( \frac{1}{c+k}+\frac{1}{c+1-\gamma +k}-\frac{2}{c+1-\alpha +k} \right)} \right\}s^r \right ) \end{align} }[/math]

Hence,

[math]\displaystyle{ \frac{\partial y}{\partial c}=a_{0}s^{c}\sum_{r=0}^{\infty } \frac{(c)_{r}(c+1-\gamma )_{r}}{\left( (c+1-\alpha )_{r} \right)^{2}} \left( \ln(s) + \sum_{k=0}^{r-1}{\left( \frac{1}{c+k}+\frac{1}{c+1-\gamma +k}-\frac{2}{c+1-\alpha +k} \right)} \right)s^r }[/math]

Therefore:

[math]\displaystyle{ y_2 = \left. \frac{\partial y}{\partial c}\right |_{c=\alpha} = a_0 s^{\alpha }\sum_{r=0}^{\infty } \frac{(\alpha )_{r}(\alpha +1-\gamma )_{r}}{(1)_r (1)_r}\left( \ln(s) +\sum_{k=0}^{r-1} \left( \frac{1}{\alpha +k}+\frac{1}{\alpha +1-\gamma +k}-\frac{2}{1+k} \right) \right)s^r }[/math]

Hence, y = C′y1 + D′y2. Let C′a0 = C and D′a0 = D. Noting that s = 1/x,

[math]\displaystyle{ y=C \left \{ x^{-\alpha} {}_2F_1 \left (\alpha, \alpha +1-\gamma ; 1; x^{-1} \right ) \right \} +D \left \{ x^{-\alpha} \sum_{r=0}^{\infty} \frac{(\alpha )_{r}(\alpha +1-\gamma )_{r}}{(1)_{r} (1)_{r}} \left( \ln \left (x^{-1} \right )+\sum_{k=0}^{r-1} \left( \frac{1}{\alpha +k}+\frac{1}{\alpha +1-\gamma +k}-\frac{2}{1+k} \right) \right) x^{-r} \right \} }[/math]
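Since y2 was obtained as a genuine second solution for the double root, it should satisfy the transformed equation (s³ − s²)y″ + ((2 − γ)s² + (α + β − 1)s)y′ − αβy = 0. A numerical spot-check with a truncated series and finite differences (the parameter values are arbitrary, with α = β):

```python
import math

alpha = beta = 0.5
gamma = 0.3

def y2(s, n_terms=120):
    """Truncated logarithmic solution
    s^alpha * sum_r A_r (ln s + sum_{k<r} [1/(alpha+k) + 1/(alpha+1-gamma+k) - 2/(1+k)]) s^r,
    with A_r = (alpha)_r (alpha+1-gamma)_r / ((1)_r)^2."""
    A, harm, total = 1.0, 0.0, 0.0
    for r in range(n_terms):
        total += A * (math.log(s) + harm) * s**r
        harm += 1 / (alpha + r) + 1 / (alpha + 1 - gamma + r) - 2 / (1 + r)
        A *= (alpha + r) * (alpha + 1 - gamma + r) / (1 + r) ** 2
    return s**alpha * total

s, h = 0.2, 1e-4
d1 = (y2(s + h) - y2(s - h)) / (2 * h)
d2 = (y2(s + h) - 2 * y2(s) + y2(s - h)) / h**2
residual = ((s**3 - s**2) * d2 + ((2 - gamma) * s**2 + (alpha + beta - 1) * s) * d1
            - alpha * beta * y2(s))
print(abs(residual))
assert abs(residual) < 1e-4
```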

α − β an integer and α − β ≠ 0

α − β > 0

From the recurrence relation

[math]\displaystyle{ a_{r}=\frac{(r+c-1)(r+c-\gamma )}{(r+c-\alpha )(r+c-\beta )}a_{r-1} }[/math]

we see that when c = β (the smaller root), the coefficient a_{α−β} → ∞. Hence, we must make the substitution a0 = b0(c − ci), where ci is the root for which our solution is infinite. Hence, we take a0 = b0(c − β), and our assumed solution takes the new form

[math]\displaystyle{ y_{b}=b_{0}\sum_{r=0}^{\infty} \frac{(c-\beta )(c)_{r}(c+1-\gamma )_{r}}{(c+1-\alpha )_{r}(c+1-\beta )_{r}}s^{r+c} }[/math]

Then y1 = yb|c = β. As we can see, all terms before

[math]\displaystyle{ \frac{(c-\beta )(c)_{\alpha -\beta }(c+1-\gamma )_{\alpha -\beta }}{(c+1-\alpha )_{\alpha -\beta }(c+1-\beta )_{\alpha -\beta }}s^{\alpha -\beta +c} }[/math]

vanish because of the c − β in the numerator.

But starting from this term, the c − β in the numerator cancels, because for r ≥ α − β the denominator factor (c + 1 − α)r also contains c − β. To see this, note that

[math]\displaystyle{ (c+1-\alpha )_{\alpha -\beta } =(c+1-\alpha )(c+2-\alpha )\cdots(c-\beta ). }[/math]

Hence, our solution takes the form

[math]\displaystyle{ \begin{align} y_{1} &=b_0 \left( \frac{(\beta )_{\alpha -\beta }(\beta +1-\gamma )_{\alpha -\beta }}{(\beta +1-\alpha )_{\alpha -\beta -1}(1)_{\alpha -\beta }}s^{\alpha }+\frac{(\beta )_{\alpha -\beta +1}(\beta +1-\gamma )_{\alpha -\beta +1}}{(\beta +1-\alpha )_{\alpha -\beta -1}(1)(1)_{\alpha -\beta +1}}s^{\alpha +1}+ \cdots \right) \\ &=\frac{b_0}{(\beta +1-\alpha )_{\alpha -\beta -1}}\sum_{r=\alpha -\beta }^{\infty} \frac{(\beta )_{r}(\beta +1-\gamma )_r }{(1)_r (1)_{r+\beta -\alpha }}s^{r+\beta } \end{align} }[/math]
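When the roots differ by a positive integer m = α − β, the series that survives from y_b at c = β must reproduce a constant multiple of the c = α solution [math]\displaystyle{ s^{\alpha}\,{}_2F_1(\alpha,\alpha+1-\gamma;m+1;s) }[/math]. A quick exact check of this proportionality, coefficient by coefficient (the rational parameter values are arbitrary):

```python
from fractions import Fraction

def poch(q, n):
    """Pochhammer symbol (q)_n = q(q+1)...(q+n-1)."""
    out = Fraction(1)
    for k in range(n):
        out *= q + k
    return out

# alpha - beta = m, a positive integer
beta, m, gamma = Fraction(1, 3), 3, Fraction(2, 7)
alpha = beta + m

def y1_coeff(j):
    """Coefficient of s^(j+alpha) in the surviving part of y_b at c = beta
    (up to an overall constant): set r = m + j in the sum over r >= m."""
    r = m + j
    return (poch(beta, r) * poch(beta + 1 - gamma, r)
            / (poch(Fraction(1), r) * poch(Fraction(1), r - m)))

def ya_coeff(j):
    """Coefficient of s^(j+alpha) in s^alpha 2F1(alpha, alpha+1-gamma; m+1; s)."""
    return (poch(alpha, j) * poch(alpha + 1 - gamma, j)
            / (poch(Fraction(1), j) * poch(m + 1, j)))

ratio = y1_coeff(0) / ya_coeff(0)
for j in range(1, 10):
    assert y1_coeff(j) / ya_coeff(j) == ratio   # constant ratio: same solution
print("y1 from c = beta is proportional to the c = alpha series")
```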

Now,

[math]\displaystyle{ y_2=\left.\frac{\partial y_{b}}{\partial c}\right|_{c=\alpha}. }[/math]

To calculate this derivative, let

[math]\displaystyle{ M_{r}=\frac{(c-\beta )(c)_{r}(c+1-\gamma )_{r}}{(c+1-\alpha )_{r}(c+1-\beta )_{r}}. }[/math]

Then using the method in the case γ = 1 above we get

[math]\displaystyle{ \frac{\partial M_r}{\partial c} = \frac{(c - \beta)(c)_r (c + 1 - \gamma)_r}{(c + 1 - \alpha)_r (c +1-\beta)_r}\left (\frac{1}{c - \beta} + \sum_{k = 0}^{r-1} \left(\frac{1}{c + k} + \frac{1}{c + 1 - \gamma + k} -\frac{1}{c + 1 - \alpha + k} - \frac{1}{c + 1 - \beta + k} \right) \right ) }[/math]

Now,

[math]\displaystyle{ y_b=b_0\sum_{r=0}^{\infty }{\left( \frac{(c-\beta )(c)_{r}(c+1-\gamma )_{r}}{(c+1-\alpha )_{r}(c+1-\beta )_{r}}s^{r+c} \right)}=b_{0}s^c\sum_{r=0}^{\infty }{M_{r}s^{r}} }[/math]

Hence,

[math]\displaystyle{ \begin{align} \frac{\partial y}{\partial c} &= b_0 s^c \ln(s) \sum_{r = 0}^\infty \frac{(c - \beta)(c)_r (c + 1 - \gamma)_r}{(c + 1 - \alpha)_r (c + 1 - \beta)_r} s^r \\ & \quad + b_0 s^c \sum_{r = 0}^\infty \frac{(c - \beta) (c)_r (c + 1 - \gamma)_r}{(c + 1 - \alpha)_r (c + 1 - \beta)_r} \left (\frac{1}{c - \beta} + \sum_{k = 0}^{r - 1} \left(\frac{1}{c + k} + \frac{1}{c + 1 - \gamma + k} - \frac{1}{c + 1 - \alpha + k}- \frac{1}{c + 1 - \beta + k} \right) \right) s^r \end{align} }[/math]

Hence,

[math]\displaystyle{ \frac{\partial y}{\partial c} = b_0 s^c \sum_{r = 0}^\infty \frac{(c - \beta)(c)_r (c + 1 - \gamma)_r}{(c + 1 - \alpha)_r (c + 1 - \beta)_r} \left (\ln(s) + \frac{1}{c - \beta } + \sum_{k = 0}^{r - 1} \left(\frac{1}{c + k} + \frac{1}{c + 1 - \gamma + k} - \frac{1}{c + 1 - \alpha + k} - \frac{1}{c + 1 - \beta + k}\right) \right ) s^{r} }[/math]

At c = α we get y2. Hence, y = E′y1 + F′y2. Let E′b0 = E and F′b0 = F. Noting that s = 1/x, we get

[math]\displaystyle{ \begin{align} y &= E \left \{ \frac{1}{(\beta + 1 - \alpha)_{\alpha - \beta - 1}} \sum_{r = \alpha - \beta}^\infty \frac{(\beta)_r (\beta + 1 - \gamma)_r}{(1)_r (1)_{r + \beta - \alpha}} x^{-r-\beta} \right \} + \\ & \quad+ F \left \{ x^{-\alpha} \sum_{r = 0}^\infty \frac{(\alpha - \beta) (\alpha)_r (\alpha + 1 - \gamma)_r}{(1)_r (\alpha + 1 - \beta)_r} \left (\ln \left (x^{-1} \right ) + \frac{1}{\alpha -\beta } + \sum_{k = 0}^{r - 1} \left(\frac{1}{\alpha + k} + \frac{1}{\alpha + 1 + k - \gamma} -\frac{1}{1 + k}-\frac{1}{\alpha + 1 + k - \beta} \right) \right ) x^{-r} \right \} \end{align} }[/math]

α − β < 0

From the symmetry of the situation here, we see that

[math]\displaystyle{ \begin{align} y &= G \left \{ \frac{1}{(\alpha + 1 - \beta)_{\beta - \alpha - 1}} \sum_{r = \beta - \alpha}^\infty \frac{(\alpha)_r (\alpha + 1 - \gamma)_r}{(1)_r (1)_{r + \alpha - \beta}} x^{-r-\alpha} \right \} + \\ & \quad + H \left \{ x^{-\beta} \sum_{r = 0}^\infty \frac{(\beta - \alpha) (\beta)_r (\beta + 1 - \gamma)_r}{(1)_r (\beta + 1 - \alpha)_r} \left (\ln \left (x^{-1} \right ) + \frac{1}{\beta - \alpha } + \sum_{k = 0}^{r - 1} \left(\frac{1}{\beta + k} + \frac{1}{\beta + 1 + k - \gamma} - \frac{1}{1 + k} - \frac{1}{\beta + 1 + k - \alpha} \right) \right ) x^{-r} \right \} \end{align} }[/math]

References

  1. Abramowitz, Milton; Stegun, Irene A. (1964). Handbook of Mathematical Functions. New York: Dover. ISBN 978-0-48-661272-0. https://archive.org/details/handbookofmathe000abra.
  • Sneddon, Ian (1966). Special Functions of Mathematical Physics and Chemistry. Edinburgh: Oliver and Boyd. ISBN 978-0-05-001334-2.