Method of dominant balance

In mathematics, the method of dominant balance is used to determine the asymptotic behavior of solutions to an ordinary differential equation without fully solving the equation. The process is iterative, in that the result obtained by performing the method once can be used as input when the method is repeated, to obtain as many terms in the asymptotic expansion as desired.[1]

The process goes as follows:

  1. Assume that the asymptotic behavior has the form
    [math]\displaystyle{ y(x) \sim e^{S(x)}. }[/math]
  2. Make an informed guess as to which terms in the ODE might be negligible in the limit of interest.
  3. Drop these terms and solve the resulting simpler ODE.
  4. Check that the solution is consistent with step 2. If this is the case, then one has the controlling factor of the asymptotic behavior; otherwise, one needs to try dropping different terms in step 2 instead.
  5. Repeat the process to higher orders, relying on the above result as the leading term in the solution.

Example: solving a polynomial equation

To solve the equation [math]\displaystyle{ \epsilon x^5 - 16 x + 1 = 0 }[/math] in the limit of small [math]\displaystyle{ \epsilon }[/math], one might try a series expansion of the form [math]\displaystyle{ x = x_0 + \epsilon x_1 + \cdots }[/math].[2] This, however, runs into a difficulty: when [math]\displaystyle{ \epsilon = 0 }[/math] the equation has just one root, [math]\displaystyle{ x = 1/16 }[/math], whereas for nonzero [math]\displaystyle{ \epsilon }[/math] it has five roots. The problem is that four of these roots escape to infinity as [math]\displaystyle{ \epsilon \to 0 }[/math].

This suggests the use of the method of dominant balance. For small [math]\displaystyle{ \epsilon }[/math], the four large roots satisfy [math]\displaystyle{ |\epsilon x^5|, |16 x| \gg 1 }[/math], so the constant term is negligible and the dominant balance is [math]\displaystyle{ \epsilon x^5 - 16 x \approx 0 }[/math], giving [math]\displaystyle{ x \sim \epsilon^{-1/4} }[/math]. Rescaling with [math]\displaystyle{ y = \epsilon^{1/4} x }[/math] (and multiplying through by [math]\displaystyle{ \epsilon^{1/4} }[/math]) turns the equation into [math]\displaystyle{ y^5 - 16 y + \epsilon^{1/4} = 0 }[/math]. At [math]\displaystyle{ \epsilon = 0 }[/math] this has five roots, [math]\displaystyle{ y = 0, 2, 2i, -2, -2i }[/math], and expanding each root as a power series in [math]\displaystyle{ \epsilon^{1/4} }[/math] gives the five series

[math]\displaystyle{ \begin{aligned} & y_1=-2-\frac{1}{64} \epsilon^{\frac{1}{4}}+\frac{5}{16384} \epsilon^{\frac{1}{2}}-\frac{5}{524288} \epsilon^{\frac{3}{4}}+\ldots, \\ & y_2=2-\frac{1}{64} \epsilon^{\frac{1}{4}}-\frac{5}{16384} \epsilon^{\frac{1}{2}}-\frac{5}{524288} \epsilon^{\frac{3}{4}}-\ldots, \\ & y_3=\frac{1}{16} \epsilon^{\frac{1}{4}}+\frac{1}{16777216} \epsilon^{\frac{5}{4}}+\frac{5}{17592186044416} \epsilon^{\frac{9}{4}}+\ldots, \\ & y_4=-2 i-\frac{1}{64} \epsilon^{\frac{1}{4}}-\frac{5 i}{16384} \epsilon^{\frac{1}{2}}+\frac{5}{524288} \epsilon^{\frac{3}{4}}+\ldots, \\ & y_5=2 i-\frac{1}{64} \epsilon^{\frac{1}{4}}+\frac{5 i}{16384} \epsilon^{\frac{1}{2}}+\frac{5}{524288} \epsilon^{\frac{3}{4}}-\ldots \end{aligned} }[/math]

The roots of the original equation are then recovered from [math]\displaystyle{ x = \epsilon^{-1/4} y }[/math].
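As an informal numerical check (a sketch, not part of the source; the variable names are illustrative), the truncated series above can be compared with the exact roots of the rescaled equation for a small value of [math]\displaystyle{ \epsilon }[/math]:

```python
import numpy as np

eps = 1e-8
e4 = eps ** 0.25  # epsilon^(1/4)

# Exact roots of y^5 - 16*y + eps^(1/4) = 0 (coefficients in descending powers of y).
exact = np.roots([1, 0, 0, 0, -16, e4])

# Truncated series from the text: the four roots near ±2, ±2i shift by -eps^(1/4)/64,
# while the root near zero starts at eps^(1/4)/16.
approx = np.array([-2, 2, 2j, -2j], dtype=complex) - e4 / 64
approx = np.append(approx, e4 / 16)

# Each truncated series should sit very close to one of the exact roots.
for s in approx:
    err = np.min(np.abs(exact - s))  # distance to the nearest exact root
    print(f"series value {complex(s):+.6f}   mismatch {err:.2e}")
```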

Example: ordinary differential equation

For arbitrary constants c and a, consider

[math]\displaystyle{ xy''+(c-x)y'-ay=0. }[/math]

This differential equation cannot be solved exactly. However, it is useful to consider how the solutions behave for large x: it turns out that [math]\displaystyle{ y }[/math] behaves like [math]\displaystyle{ e^x\, }[/math] as x → ∞ .

More precisely, the asymptotic statement is [math]\displaystyle{ \log(y)\sim {x} }[/math] rather than [math]\displaystyle{ y\sim e^{x} }[/math]. Since we are interested in the behavior of y in the large-x limit, we change variables to y = exp(S(x)) and re-express the ODE in terms of S(x),

[math]\displaystyle{ xS''+xS'^2+(c-x)S'-a=0,\, }[/math]

or

[math]\displaystyle{ S''+S'^2+\left(\frac{c}{x}-1\right)S'-\frac{a}{x}=0\, }[/math]

where we have used the product rule and chain rule to evaluate the derivatives of y.
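This change of variables can be verified symbolically; the following is a small sketch using sympy (not part of the source):

```python
import sympy as sp

x, a, c = sp.symbols('x a c')
S = sp.Function('S')

# Original ODE with y = exp(S(x)) substituted in.
y = sp.exp(S(x))
original = x * y.diff(x, 2) + (c - x) * y.diff(x) - a * y

# Transformed ODE for S(x), multiplied back by exp(S(x)).
transformed = (x * S(x).diff(x, 2) + x * S(x).diff(x)**2
               + (c - x) * S(x).diff(x) - a) * y

# The difference should expand to zero, confirming the change of variables.
print(sp.expand(original - transformed))  # prints 0
```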

Now suppose first that a solution to this ODE satisfies

[math]\displaystyle{ S'^2\sim S', }[/math]

as x → ∞, so that

[math]\displaystyle{ S'',~\frac{c}{x}S',~\frac{a}{x}=o(S'^2),~o(S')\, }[/math]

as x → ∞. The dominant asymptotic behaviour is then obtained by setting

[math]\displaystyle{ S_0'^2=S_0'. }[/math]

If [math]\displaystyle{ S_0 }[/math] satisfies the above asymptotic conditions, then the assumption is consistent: the terms we dropped are indeed negligible compared with the ones we kept.

[math]\displaystyle{ S_0 }[/math] is not a solution to the ODE for S, but it represents the dominant asymptotic behavior, which is what we are interested in. Check that this choice for [math]\displaystyle{ S_0 }[/math] is consistent,

[math]\displaystyle{ \begin{align} S_0' &= 1 \\ S_0'^2 &= 1 \\ S_0'' &= 0 = o(S_0') \\ \frac{c}{x}S_0' &= \frac{c}{x} = o(S_0') \\ \frac{a}{x} &= o(S_0') \end{align} }[/math]

Everything is indeed consistent.

Thus the dominant asymptotic behaviour of a solution to our ODE has been found,

[math]\displaystyle{ \begin{align} S_0 &\sim x \\ \log(y) &\sim x. \end{align} }[/math]

By convention, the full asymptotic series is written as

[math]\displaystyle{ y\sim Ax^p e^{\lambda x^r}\left(1 + \frac{u_1}{x} + \frac{u_2}{x^2} + \cdots + \frac{u_k}{x^k} + o\left(\frac{1}{x^k}\right)\right), }[/math]

so to obtain even the first term of this series we must take a further step and check whether there is a power of x in front of the exponential.

Proceed by introducing a new subleading dependent variable,

[math]\displaystyle{ S(x)\equiv S_0(x)+C(x)\, }[/math]

and then seek asymptotic solutions for C(x). Substituting into the above ODE for S(x) we find

[math]\displaystyle{ C''+C'^2+C'+\frac{c}{x}C'+\frac{c-a}{x}=0. }[/math]
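As before, this substitution can be checked symbolically; a short sketch using sympy (not from the source):

```python
import sympy as sp

x, a, c = sp.symbols('x a c')
C = sp.Function('C')

# Substitute S(x) = x + C(x) into S'' + S'^2 + (c/x - 1)*S' - a/x = 0.
S = x + C(x)
expanded = sp.expand(S.diff(x, 2) + S.diff(x)**2 + (c/x - 1) * S.diff(x) - a/x)
print(expanded)
# Expected output matches C'' + C'^2 + C' + (c/x)C' + (c - a)/x from the text
# (with c/x and a/x printed as separate terms).
```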

Repeating the same process as before, the dominant balance is now between [math]\displaystyle{ C' }[/math] and [math]\displaystyle{ (c-a)/x }[/math], so that [math]\displaystyle{ C_0' = (a-c)/x }[/math] and hence

[math]\displaystyle{ C_0=\log x^{a-c}. }[/math]

The leading asymptotic behaviour is then

[math]\displaystyle{ y\sim x^{a-c}e^x. }[/math]
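As a rough numerical illustration (a sketch with arbitrarily chosen values of a and c, not taken from the source), one can integrate the ODE directly and divide out the predicted behaviour; the ratio should settle toward a constant as x grows:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, c = 0.7, 1.3  # illustrative parameter values only

# x*y'' + (c - x)*y' - a*y = 0, written as a first-order system for (y, y').
def rhs(x, u):
    y, yp = u
    return [yp, ((x - c) * yp + a * y) / x]

# Generic initial data at x = 1; the exponentially growing solution dominates.
sol = solve_ivp(rhs, (1.0, 40.0), [1.0, 1.0], rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(10.0, 40.0, 4))

# If y ~ K * x**(a - c) * exp(x), this ratio approaches the constant K.
for xv, yv in zip(sol.t, sol.y[0]):
    ratio = yv / (xv ** (a - c) * np.exp(xv))
    print(f"x = {xv:5.1f}   y / (x^(a-c) e^x) = {ratio:.6f}")
```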

References

  1. Bender, C.M.; Orszag, S.A. (1999). Advanced Mathematical Methods for Scientists and Engineers. Springer. pp. 549–568. ISBN 0-387-98931-5. 
  2. "Perturbation methods". Physics 2400 - Mathematical methods for the physical sciences.