Method of undetermined coefficients

Short description: Approach for finding solutions of nonhomogeneous ordinary differential equations

In mathematics, the method of undetermined coefficients is an approach to finding a particular solution to certain nonhomogeneous ordinary differential equations and recurrence relations. It is closely related to the annihilator method, but instead of using a particular kind of differential operator (the annihilator) in order to find the best possible form of the particular solution, an ansatz or 'guess' is made as to the appropriate form, which is then checked by substituting it and its derivatives into the equation. For complex equations, the annihilator method or variation of parameters is less time-consuming to perform.

Undetermined coefficients is not as general a method as variation of parameters, since it only works for differential equations that follow certain forms.[1]

Description of the method

Consider a linear non-homogeneous ordinary differential equation of the form

[math]\displaystyle{ \sum_{i=0}^n c_i y^{(i)} + y^{(n+1)} = g(x) }[/math]
where [math]\displaystyle{ y^{(i)} }[/math] denotes the i-th derivative of [math]\displaystyle{ y }[/math], and [math]\displaystyle{ c_i }[/math] denotes a function of [math]\displaystyle{ x }[/math].

The method of undetermined coefficients provides a straightforward method of obtaining the solution to this ODE when two criteria are met:[2]

  1. [math]\displaystyle{ c_i }[/math] are constants.
  2. g(x) is a constant, a polynomial function, exponential function [math]\displaystyle{ e^{\alpha x} }[/math], sine or cosine functions [math]\displaystyle{ \sin{\beta x} }[/math] or [math]\displaystyle{ \cos{\beta x} }[/math], or finite sums and products of these functions ([math]\displaystyle{ {\alpha} }[/math], [math]\displaystyle{ {\beta} }[/math] constants).

The method consists of finding the general homogeneous solution [math]\displaystyle{ y_c }[/math] for the complementary linear homogeneous differential equation

[math]\displaystyle{ \sum_{i=0}^n c_i y^{(i)} + y^{(n+1)} = 0, }[/math]

and a particular integral [math]\displaystyle{ y_p }[/math] of the linear non-homogeneous ordinary differential equation based on [math]\displaystyle{ g(x) }[/math]. Then the general solution [math]\displaystyle{ y }[/math] to the linear non-homogeneous ordinary differential equation would be

[math]\displaystyle{ y = y_c + y_p. }[/math][3]

If [math]\displaystyle{ g(x) }[/math] is the sum of two functions [math]\displaystyle{ h(x) + w(x) }[/math], let [math]\displaystyle{ y_{p_1} }[/math] be the particular solution based on [math]\displaystyle{ h(x) }[/math] and [math]\displaystyle{ y_{p_2} }[/math] the one based on [math]\displaystyle{ w(x) }[/math]. Then, by the superposition principle, the particular integral [math]\displaystyle{ y_p }[/math] is[3]

[math]\displaystyle{ y_p = y_{p_1} + y_{p_2}. }[/math]
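
As a concrete illustration of this procedure and of the superposition step, the following SymPy sketch (not taken from the cited sources) applies the method to the hypothetical equation [math]\displaystyle{ y'' + 3y' + 2y = e^{3x} + x }[/math]: a guess with undetermined coefficients is made for each term of the right-hand side, the coefficients are solved for, and the sum [math]\displaystyle{ y_{p_1} + y_{p_2} }[/math] is checked against the full equation.

<syntaxhighlight lang="python">
# Minimal sketch of the method for the hypothetical equation
#   y'' + 3y' + 2y = e^(3x) + x   (chosen for illustration only).
import sympy as sp

x = sp.symbols('x')
C, K0, K1 = sp.symbols('C K0 K1')

def L(expr):
    """Apply the left-hand side operator y'' + 3y' + 2y to a trial function."""
    return sp.diff(expr, x, 2) + 3*sp.diff(expr, x) + 2*expr

# Guess for h(x) = e^(3x): C*e^(3x)
yp1_guess = C*sp.exp(3*x)
yp1 = yp1_guess.subs(C, sp.solve(sp.Eq(L(yp1_guess), sp.exp(3*x)), C)[0])  # e^(3x)/20

# Guess for w(x) = x: a general degree-1 polynomial K1*x + K0
yp2_guess = K1*x + K0
residual = sp.expand(L(yp2_guess) - x)
yp2 = yp2_guess.subs(sp.solve([residual.coeff(x, 1), residual.coeff(x, 0)], [K0, K1]))  # x/2 - 3/4

# Superposition: y_p1 + y_p2 solves the full non-homogeneous equation.
yp = yp1 + yp2
assert sp.simplify(L(yp) - (sp.exp(3*x) + x)) == 0
print(yp)
</syntaxhighlight>

The same pattern of writing down a guess, substituting it, and matching coefficients is what the worked examples below carry out by hand.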

Typical forms of the particular integral

In order to find the particular integral, we need to 'guess' its form, with some coefficients left as variables to be solved for. The guess has the same general form as [math]\displaystyle{ g(x) }[/math], since differentiating [math]\displaystyle{ g(x) }[/math] only produces terms of that same family. Below is a table of some typical functions and the solution to guess for them.

Function of x | Form for y
[math]\displaystyle{ k e^{a x} }[/math] | [math]\displaystyle{ C e^{a x} }[/math]
[math]\displaystyle{ k x^n,\; n = 0, 1, 2,\ldots }[/math] | [math]\displaystyle{ \sum_{i=0}^n K_i x^i }[/math]
[math]\displaystyle{ k \cos(a x) \text{ or } k \sin(a x) }[/math] | [math]\displaystyle{ K \cos(a x) + M \sin(a x) }[/math]
[math]\displaystyle{ k e^{a x} \cos(b x) \text{ or } k e^{a x} \sin(b x) }[/math] | [math]\displaystyle{ e^{a x} (K \cos(b x) + M \sin(b x)) }[/math]
[math]\displaystyle{ \left(\sum_{i=0}^n k_i x^i\right) \cos(b x) \text{ or } \left(\sum_{i=0}^n k_i x^i\right) \sin(b x) }[/math] | [math]\displaystyle{ \left(\sum_{i=0}^n Q_i x^i\right) \cos(b x) + \left(\sum_{i=0}^n R_i x^i\right) \sin(b x) }[/math]
[math]\displaystyle{ \left(\sum_{i=0}^n k_i x^i\right) e^{a x} \cos(b x) \text{ or } \left(\sum_{i=0}^n k_i x^i\right) e^{a x} \sin(b x) }[/math] | [math]\displaystyle{ e^{a x} \left(\left(\sum_{i=0}^n Q_i x^i\right) \cos(b x) + \left(\sum_{i=0}^n R_i x^i\right) \sin(b x)\right) }[/math]

If a term in the above particular integral for y appears in the homogeneous solution, it is necessary to multiply by a sufficiently large power of x in order to make the guess linearly independent of the homogeneous solution. If the function of x is a sum of terms in the above table, the particular integral can be guessed using a sum of the corresponding terms for y.[1]
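
The following short SymPy sketch (using the hypothetical equation [math]\displaystyle{ y'' - y = e^x }[/math], not taken from the sources) shows why this adjustment is needed: the naive guess [math]\displaystyle{ A e^x }[/math] is annihilated by the left-hand side because [math]\displaystyle{ e^x }[/math] already solves the homogeneous equation, while the modified guess [math]\displaystyle{ A x e^x }[/math] succeeds.

<syntaxhighlight lang="python">
# Sketch of the resonance adjustment for the hypothetical equation y'' - y = e^x,
# whose homogeneous solution c1*e^x + c2*e^(-x) already contains the naive guess.
import sympy as sp

x, A = sp.symbols('x A')

def L(expr):
    return sp.diff(expr, x, 2) - expr            # left-hand side y'' - y

# The naive guess A*e^x is sent to zero, so no value of A can match e^x:
print(sp.simplify(L(A*sp.exp(x))))               # 0

# Multiplying by x gives a guess that is linearly independent of the homogeneous solution:
guess = A*x*sp.exp(x)
A_val = sp.solve(sp.Eq(L(guess), sp.exp(x)), A)[0]
print(guess.subs(A, A_val))                      # x*exp(x)/2
</syntaxhighlight>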

Examples

Example 1

Find a particular integral of the equation

[math]\displaystyle{ y'' + y = t \cos t. }[/math]

The right side t cos t has the form

[math]\displaystyle{ P_n e^{\alpha t} \cos{\beta t} }[/math]

with n = 1, α = 0, and β = 1.

Since [math]\displaystyle{ \alpha + i\beta = i }[/math] is a simple root of the characteristic equation

[math]\displaystyle{ \lambda^2 + 1 = 0 }[/math]

we should try a particular integral of the form

[math]\displaystyle{ \begin{align} y_p &= t \left [F_1 (t) e^{\alpha t} \cos{\beta t} + G_1 (t) e^{\alpha t} \sin{\beta t} \right ] \\ &= t \left [F_1 (t) \cos t + G_1 (t) \sin t \right ] \\ &= t \left [ \left (A_0 t + A_1 \right ) \cos t + \left (B_0 t + B_1 \right ) \sin t \right ] \\ &= \left (A_0 t^2 + A_1 t \right ) \cos t + \left (B_0 t^2 + B_1 t \right) \sin t. \end{align} }[/math]

Substituting [math]\displaystyle{ y_p }[/math] into the differential equation, we have the identity

[math]\displaystyle{ \begin{align} t \cos t &= y_p'' + y_p \\ &= \left [ \left(A_0 t^2 + A_1 t \right ) \cos t + \left (B_0 t^2 + B_1 t \right ) \sin t \right ]'' + \left[\left(A_0 t^2 + A_1 t \right ) \cos t + \left(B_0 t^2 + B_1 t \right ) \sin t \right ] \\ &= \left [2A_0 \cos t + 2 \left (2A_0 t + A_1 \right )(-\sin t) + \left (A_0 t^2 + A_1 t \right )(-\cos t) + 2B_0 \sin t + 2 \left (2B_0 t + B_1 \right ) \cos t + \left (B_0 t^2 + B_1 t \right )(- \sin t) \right ] \\ &\qquad +\left[\left(A_0 t^2 + A_1 t \right ) \cos t + \left(B_0 t^2 + B_1 t \right ) \sin t \right ] \\ &= [4B_0 t + (2A_0 + 2B_1)] \cos t + [-4A_0 t + (-2A_1 + 2B_0)] \sin t. \end{align} }[/math]

Comparing the coefficients of [math]\displaystyle{ t \cos t }[/math], [math]\displaystyle{ \cos t }[/math], [math]\displaystyle{ t \sin t }[/math], and [math]\displaystyle{ \sin t }[/math] on both sides, we have

[math]\displaystyle{ \begin{cases} 1 = 4B_0\\ 0 = 2A_0 + 2B_1 \\ 0 = -4A_0 \\ 0 = -2A_1 + 2B_0 \end{cases} }[/math]

which has the solution

[math]\displaystyle{ A_0 = 0, \quad A_1 = B_0 = \frac{1}{4}, \quad B_1 = 0. }[/math]

We then have a particular integral

[math]\displaystyle{ y_p = \frac {1} {4} t \cos t + \frac {1}{4} t^2 \sin t. }[/math]
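
As a sanity check (not part of the original derivation), this particular integral can be substituted back into the equation with SymPy:

<syntaxhighlight lang="python">
# Verify that y_p = (1/4) t cos t + (1/4) t^2 sin t satisfies y'' + y = t cos t.
import sympy as sp

t = sp.symbols('t')
yp = sp.Rational(1, 4)*t*sp.cos(t) + sp.Rational(1, 4)*t**2*sp.sin(t)
assert sp.simplify(sp.diff(yp, t, 2) + yp - t*sp.cos(t)) == 0
</syntaxhighlight>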

Example 2

Consider the following linear nonhomogeneous differential equation:

[math]\displaystyle{ \frac{dy}{dx} = y + e^x. }[/math]

Here the nonhomogeneous part ([math]\displaystyle{ e^x }[/math]) is not linearly independent of the general solution of the homogeneous part ([math]\displaystyle{ c_1 e^x }[/math]); as a result, we have to multiply our guess by a sufficiently large power of x to make it linearly independent.

Here our guess becomes:

[math]\displaystyle{ y_p = A x e^x. }[/math]

By substituting this function and its derivative into the differential equation, one can solve for A:

[math]\displaystyle{ \frac{d}{dx} \left( A x e^x \right) = A x e^x + e^x }[/math]
[math]\displaystyle{ A x e^x + A e^x = A x e^x + e^x }[/math]
[math]\displaystyle{ A = 1. }[/math]

So, the general solution to this differential equation is:

[math]\displaystyle{ y = c_1 e^x + xe^x. }[/math]
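
A quick SymPy check (again not from the cited sources) confirms both that the guess [math]\displaystyle{ A x e^x }[/math] forces [math]\displaystyle{ A = 1 }[/math] and that the resulting general solution satisfies the equation:

<syntaxhighlight lang="python">
# Verify Example 2: the guess A*x*e^x forces A = 1, and y = c1*e^x + x*e^x solves y' = y + e^x.
import sympy as sp

x, A, c1 = sp.symbols('x A c1')

guess = A*x*sp.exp(x)
print(sp.solve(sp.Eq(sp.diff(guess, x), guess + sp.exp(x)), A))   # [1]

y = c1*sp.exp(x) + x*sp.exp(x)
assert sp.simplify(sp.diff(y, x) - y - sp.exp(x)) == 0
</syntaxhighlight>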

Example 3

Find the general solution of the equation:

[math]\displaystyle{ \frac{dy}{dt} = t^2 - y }[/math]

[math]\displaystyle{ t^2 }[/math] is a polynomial of degree 2, so we look for a solution using the same form,

[math]\displaystyle{ y_p = A t^2 + B t + C. }[/math]

Plugging this particular function into the original equation yields,

[math]\displaystyle{ 2 A t + B = t^2 - (A t^2 + B t + C), }[/math]
[math]\displaystyle{ 2 A t + B =(1-A)t^2 -Bt -C, }[/math]
[math]\displaystyle{ (A-1)t^2 + (2A+B)t + (B+C) = 0, }[/math]

which gives:

[math]\displaystyle{ A-1 = 0, \quad 2A+B =0, \quad B+C=0. }[/math]

Solving for the constants gives [math]\displaystyle{ A = 1 }[/math], [math]\displaystyle{ B = -2 }[/math], and [math]\displaystyle{ C = 2 }[/math], so the particular integral is:

[math]\displaystyle{ y_p = t^2 - 2 t + 2 }[/math]

The general solution is

[math]\displaystyle{ y = y_p + y_c, }[/math]

where [math]\displaystyle{ y_c }[/math] is the homogeneous solution [math]\displaystyle{ y_c = c_1 e^{-t} }[/math]. Therefore, the general solution is:

[math]\displaystyle{ y= t^2 - 2 t + 2 + c_1 e^{-t} }[/math]
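
The coefficient matching and the final answer of this example can likewise be reproduced with a short SymPy sketch (provided for verification only):

<syntaxhighlight lang="python">
# Verify Example 3: match coefficients of the polynomial guess and check the general solution.
import sympy as sp

t, A, B, C, c1 = sp.symbols('t A B C c1')

guess = A*t**2 + B*t + C
residual = sp.expand(sp.diff(guess, t) - (t**2 - guess))                 # must vanish identically
print(sp.solve([residual.coeff(t, k) for k in range(3)], [A, B, C]))     # {A: 1, B: -2, C: 2}

y = t**2 - 2*t + 2 + c1*sp.exp(-t)
assert sp.simplify(sp.diff(y, t) - (t**2 - y)) == 0
</syntaxhighlight>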

References

  1. Ralph P. Grimaldi (2000). "Nonhomogeneous Recurrence Relations". Section 3.3.3 of Handbook of Discrete and Combinatorial Mathematics. Kenneth H. Rosen, ed. CRC Press. ISBN 0-8493-0149-1.
  2. Zill, Dennis G.; Wright, Warren S. (2014). Advanced Engineering Mathematics. Jones and Bartlett. p. 125. ISBN 978-1-4496-7977-4.
  3. Dennis G. Zill (14 May 2008). A First Course in Differential Equations. Cengage Learning. ISBN 978-0-495-10824-5. https://books.google.com/books?id=BnArjLNjXuYC&q=%22undetermined+coefficients%22.