Ordinary differential equation
In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions.^{[1]} The term ordinary is used in contrast with the term partial differential equation which may be with respect to more than one independent variable.^{[2]}
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form
 [math]\displaystyle{ a_0(x)y +a_1(x)y' + a_2(x)y'' +\cdots +a_n(x)y^{(n)}+b(x)=0, }[/math]
where [math]\displaystyle{ a_0(x) }[/math], ..., [math]\displaystyle{ a_n(x) }[/math] and [math]\displaystyle{ b(x) }[/math] are arbitrary differentiable functions that do not need to be linear, and [math]\displaystyle{ y', \ldots, y^{(n)} }[/math] are the successive derivatives of the unknown function y of the variable x.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with nonlinear equations, they are generally approximated by linear differential equations for an easier solution. The few nonlinear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
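As a concrete sketch of the numerical route (the equation and tolerances are illustrative choices, not from the text): the Riccati-type equation y′ = x² + y² with y(0) = 0 has no elementary closed-form solution, but SciPy can approximate it.

```python
from scipy.integrate import solve_ivp

# y' = x^2 + y^2 with y(0) = 0: no elementary closed form, so we
# approximate numerically. (Illustrative equation; any right-hand
# side is handled the same way.)
sol = solve_ivp(lambda x, y: [x**2 + y[0]**2], (0.0, 1.0), [0.0],
                rtol=1e-9, atol=1e-12, dense_output=True)

# The Taylor series of the solution starts y(x) = x^3/3 + x^7/63 + ...,
# so y(1) should be close to 0.350.
y1 = sol.sol(1.0)[0]
```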
Background
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates),^{[3]} biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion: the relationship between the displacement x and the time t of an object under the force F is given by the differential equation
 [math]\displaystyle{ m \frac{\mathrm{d}^2 x(t)}{\mathrm{d}t^2} = F(x(t))\, }[/math]
which constrains the motion of a particle of constant mass m. In general, F is a function of the position x(t) of the particle at time t. The unknown function x(t) appears on both sides of the differential equation, and is indicated in the notation F(x(t)).^{[4]}^{[5]}^{[6]}^{[7]}
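To make this concrete, here is a minimal numerical sketch assuming a linear spring force F(x) = −kx (a hypothetical choice of F, not specified in the text): the second-order equation m x″ = −kx is rewritten as a first-order system and integrated, and the result matches the known solution x(t) = x₀ cos(√(k/m) t).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Newton's second law m x'' = F(x) with the hypothetical spring force
# F(x) = -k x, whose exact solution is x(t) = x0 * cos(omega * t).
m, k, x0 = 2.0, 8.0, 1.0
omega = np.sqrt(k / m)

def rhs(t, state):
    x, v = state                    # state = (position, velocity)
    return [v, -k * x / m]          # x' = v,  v' = F(x)/m

t = np.linspace(0.0, 5.0, 201)
sol = solve_ivp(rhs, (0.0, 5.0), [x0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)
err = np.max(np.abs(sol.y[0] - x0 * np.cos(omega * t)))
```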
Definitions
In what follows, let y be a dependent variable, x an independent variable, and y = f(x) an unknown function of x. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, Leibniz's notation (dy/dx, d^{2}y/dx^{2}, …, d^{n}y/dx^{n}) is more useful for differentiation and integration, whereas Lagrange's notation (y′, y′′, …, y^{(n)}) is more useful for representing derivatives of any order compactly, and Newton's notation [math]\displaystyle{ (\dot y, \ddot y, \overset{...}{y}) }[/math] is often used in physics for representing derivatives of low order with respect to time.
General definition
Let F be a function of x, y, and derivatives of y. Then an equation of the form
 [math]\displaystyle{ F\left (x,y,y',\ldots, y^{(n-1)} \right )=y^{(n)} }[/math]
is called an explicit ordinary differential equation of order n.^{[8]}^{[9]}
More generally, an implicit ordinary differential equation of order n takes the form:^{[10]}
 [math]\displaystyle{ F\left(x, y, y', y'',\ \ldots,\ y^{(n)}\right) = 0 }[/math]
There are further classifications:
 Autonomous
 A differential equation not depending on x is called autonomous.
 Linear

A differential equation is said to be linear if F can be written as a linear combination of the derivatives of y:
 [math]\displaystyle{ y^{(n)} = \sum_{i=0}^{n-1} a_i(x) y^{(i)} + r(x) }[/math]
 Homogeneous
 If r(x) = 0, the equation is homogeneous, and one "automatic" solution is the trivial solution, y = 0. The solution of a linear homogeneous equation is a complementary function, denoted here by y_{c}.
 Nonhomogeneous (or inhomogeneous)
 If r(x) ≠ 0, the equation is nonhomogeneous. The additional solution to the complementary function is the particular integral, denoted here by y_{p}.
 Nonlinear
 A differential equation that cannot be written in the form of a linear combination.
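These classifications can be checked mechanically; for instance, SymPy's classify_ode reports which categories of solution method match a given equation (a sketch; the two equations chosen here are illustrative).

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' = y is linear (and also separable and autonomous) ...
linear_hints = sp.classify_ode(sp.Eq(y(x).diff(x), y(x)), y(x))
# ... while y' = y**2 is nonlinear, though still separable.
nonlinear_hints = sp.classify_ode(sp.Eq(y(x).diff(x), y(x)**2), y(x))
```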
System of ODEs
A number of coupled differential equations form a system of equations. If y is a vector whose elements are functions, y(x) = [y_{1}(x), y_{2}(x),..., y_{m}(x)], and F is a vector-valued function of y and its derivatives, then
 [math]\displaystyle{ \mathbf{y}^{(n)} = \mathbf{F}\left(x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n-1)} \right) }[/math]
is an explicit system of ordinary differential equations of order n and dimension m. In column vector form:
 [math]\displaystyle{ \begin{pmatrix} y_1^{(n)} \\ y_2^{(n)} \\ \vdots \\ y_m^{(n)} \end{pmatrix} = \begin{pmatrix} f_1 \left (x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n-1)} \right ) \\ f_2 \left (x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n-1)} \right ) \\ \vdots \\ f_m \left (x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n-1)} \right) \end{pmatrix} }[/math]
These are not necessarily linear. The implicit analogue is:
 [math]\displaystyle{ \mathbf{F} \left(x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n)} \right) = \boldsymbol{0} }[/math]
where 0 = (0, 0, ..., 0) is the zero vector. In matrix form
 [math]\displaystyle{ \begin{pmatrix} f_1(x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n)}) \\ f_2(x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n)}) \\ \vdots \\ f_m(x,\mathbf{y},\mathbf{y}',\mathbf{y}'',\ldots, \mathbf{y}^{(n)}) \end{pmatrix}=\begin{pmatrix} 0\\ 0\\ \vdots\\ 0 \end{pmatrix} }[/math]
For a system of the form [math]\displaystyle{ \mathbf{F} \left(x,\mathbf{y},\mathbf{y}'\right) = \boldsymbol{0} }[/math], some sources also require that the Jacobian matrix [math]\displaystyle{ \frac{\partial\mathbf{F}(x,\mathbf{u},\mathbf{v})}{\partial \mathbf{v}} }[/math] be nonsingular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian nonsingularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems.^{[14]}^{[15]}^{[16]} Presumably for additional derivatives, the Hessian matrix and so forth are also assumed nonsingular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as a system of first-order ODEs,^{[17]} which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
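The data behind a phase portrait is simply the vector field sampled on a grid; the undamped pendulum system below is an illustrative choice, and matplotlib's quiver or streamplot would render the arrows.

```python
import numpy as np

# Undamped pendulum as a first-order system (illustrative):
#   theta' = v,   v' = -sin(theta)
theta, v = np.meshgrid(np.linspace(-np.pi, np.pi, 9),
                       np.linspace(-2.0, 2.0, 9))
dtheta = v                 # first component of the vector field
dv = -np.sin(theta)        # second component

# The origin (theta, v) = (0, 0) is an equilibrium: the field vanishes there.
center = (dtheta[4, 4], dv[4, 4])
```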
Solutions
Given a differential equation
 [math]\displaystyle{ F\left(x, y, y', \ldots, y^{(n)} \right) = 0 }[/math]
a function u: I ⊂ R → R, where I is an interval, is called a solution or integral curve for F, if u is n-times differentiable on I, and
 [math]\displaystyle{ F(x,u,u',\ \ldots,\ u^{(n)})=0 \quad x \in I. }[/math]
Given two solutions u: J ⊂ R → R and v: I ⊂ R → R, u is called an extension of v if I ⊂ J and
 [math]\displaystyle{ u(x) = v(x) \quad x \in I.\, }[/math]
A solution that has no extension is called a maximal solution. A solution defined on all of R is called a global solution.
A general solution of an nth-order equation is a solution containing n arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions.^{[18]} A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.^{[19]}
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
Solutions of Finite Duration
For nonlinear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,^{[20]} meaning here that, from its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not covered by the uniqueness theorems for Lipschitz differential equations.
As an example, the equation
 [math]\displaystyle{ y'= -\text{sgn}(y)\sqrt{|y|},\,\,y(0)=1 }[/math]
admits the finite-duration solution
 [math]\displaystyle{ y(x)=\frac{1}{4}\left(1-\frac{x}{2}+\left|1-\frac{x}{2}\right|\right)^2 }[/math]
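A quick numerical sketch of this example: the closed form y(x) = (1/4)(1 − x/2 + |1 − x/2|)² satisfies y(0) = 1, obeys y′ = −√y before the ending time x = 2, and is identically zero afterwards.

```python
import math

# Closed-form finite-duration solution:
#   y(x) = (1/4) * (1 - x/2 + |1 - x/2|)**2
def y(x):
    return 0.25 * (1 - x / 2 + abs(1 - x / 2)) ** 2

# Before the ending time x = 2 the solution obeys y' = -sqrt(y);
# approximate the derivative at x = 1 by a central difference.
h = 1e-6
dy_at_1 = (y(1 + h) - y(1 - h)) / (2 * h)
```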
Theories
Singular solutions
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
Reduction to quadratures
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that complex differential equations require complex numbers. Hence, analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and, if so, what are the characteristic properties.
Fuchsian theory
Two memoirs by Fuchs^{[21]} inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a nonlinear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.
Lie's theory
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has been certified, namely: (1) that it unifies the many ad hoc methods known for solving differential equations, and (2) that it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.^{[22]}
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and nonlinear (partial) differential equations for generating integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
Sturm–Liouville theory
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville Problems (SLP) and are named after J.C.F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering.^{[23]} SLPs are also useful in the analysis of certain partial differential equations.
Existence and uniqueness of solutions
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are
Theorem | Assumption | Conclusion
Peano existence theorem | F continuous | local existence only
Picard–Lindelöf theorem | F Lipschitz continuous | local existence and uniqueness
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (nonlinear) algebraic part alone.^{[24]}
Local existence and uniqueness theorem simplified
The theorem can be stated simply as follows.^{[25]} For the equation and initial value problem: [math]\displaystyle{ y' = F(x,y)\,,\quad y_0 = y(x_0) }[/math] if F and ∂F/∂y are continuous in a closed rectangle [math]\displaystyle{ R = [x_0-a,x_0+a] \times [y_0-b,y_0+b] }[/math] in the xy plane, where a and b are real (symbolically: a, b ∈ R) and × denotes the Cartesian product, square brackets denote closed intervals, then there is an interval [math]\displaystyle{ I = [x_0-h,x_0+h] \subset [x_0-a,x_0+a] }[/math] for some h ∈ R where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on F to be linear, this applies to nonlinear equations that take the form F(x, y), and it can also be applied to systems of equations.
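The construction behind this theorem is Picard iteration: phi_{k+1}(x) = y0 + integral from x0 to x of F(t, phi_k(t)) dt. A minimal SymPy sketch for the illustrative IVP y′ = y, y(0) = 1 shows each iterate adding one more term of the Taylor series of e^x.

```python
import sympy as sp

# Picard iteration for y' = y, y(0) = 1 (illustrative IVP):
#   phi_{k+1}(x) = 1 + Integral_0^x phi_k(t) dt
x, t = sp.symbols('x t')
phi = sp.Integer(1)
for _ in range(4):
    phi = 1 + sp.integrate(phi.subs(x, t), (t, 0, x))

# Four iterations reproduce the degree-4 Taylor polynomial of exp(x).
expected = 1 + x + x**2/2 + x**3/6 + x**4/24
```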
Global uniqueness and maximum domain of solution
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:^{[26]}
For each initial condition (x_{0}, y_{0}) there exists a unique maximum (possibly infinite) open interval
 [math]\displaystyle{ I_{\max} = (x_-, x_+), \quad x_\pm \in \R \cup \{\pm \infty\}, \quad x_0 \in I_{\max} }[/math]
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain [math]\displaystyle{ I_\max }[/math].
In the case that [math]\displaystyle{ x_\pm \neq \pm\infty }[/math], there are exactly two possibilities
 explosion in finite time: [math]\displaystyle{ \limsup_{x \to x_\pm} \|y(x)\| \to \infty }[/math]
 leaves domain of definition: [math]\displaystyle{ \lim_{x \to x_\pm} y(x) \in \partial \bar{\Omega} }[/math]
where Ω is the open set in which F is defined, and [math]\displaystyle{ \partial \bar{\Omega} }[/math] is its boundary.
Note that the maximum domain of the solution
 is always an interval (to have uniqueness)
 may be smaller than [math]\displaystyle{ \R }[/math]
 may depend on the specific choice of (x_{0}, y_{0}).
 Example.
 [math]\displaystyle{ y' = y^2 }[/math]
This means that F(x, y) = y^{2}, which is C^{1} and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all [math]\displaystyle{ \R }[/math] since the solution is
 [math]\displaystyle{ y(x) = \frac{y_0}{(x_0-x)y_0+1} }[/math]
which has maximum domain:
 [math]\displaystyle{ \begin{cases}\R & y_0 = 0 \\[4pt] \left (-\infty, x_0+\frac{1}{y_0} \right ) & y_0 \gt 0 \\[4pt] \left (x_0+\frac{1}{y_0},+\infty \right ) & y_0 \lt 0 \end{cases} }[/math]
This shows clearly that the maximum interval may depend on the initial conditions. The domain of y could be taken as being [math]\displaystyle{ \R \setminus \{ x_0+ 1/y_0 \}, }[/math] but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not [math]\displaystyle{ \R }[/math] because
 [math]\displaystyle{ \lim_{x \to x_\pm} \|y(x)\| \to \infty, }[/math]
which is one of the two possible cases according to the above theorem.
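A short numerical sketch of this example, with the illustrative initial condition y(0) = 2: the denominator of the closed form vanishes at x = x0 + 1/y0 = 1/2, and the solution grows without bound as that endpoint is approached.

```python
# Closed-form solution of y' = y^2:  y(x) = y0 / ((x0 - x) * y0 + 1).
x0, y0 = 0.0, 2.0

def y(x):
    return y0 / ((x0 - x) * y0 + 1)

blowup = x0 + 1 / y0        # right endpoint of the maximal interval

# Just before the endpoint the solution is already enormous.
near_blowup = y(blowup - 1e-6)
```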
Reduction of order
Differential equations can usually be solved more easily if the order of the equation can be reduced.
Reduction to a first-order system
Any explicit differential equation of order n,
 [math]\displaystyle{ F\left(x, y, y', y'',\ \ldots,\ y^{(n-1)}\right) = y^{(n)} }[/math]
can be written as a system of n firstorder differential equations by defining a new family of unknown functions
 [math]\displaystyle{ y_i = y^{(i-1)}.\! }[/math]
for i = 1, 2,..., n. The ndimensional system of firstorder coupled differential equations is then
 [math]\displaystyle{ \begin{array}{rcl} y_1'&=&y_2\\ y_2'&=&y_3\\ &\vdots&\\ y_{n-1}'&=&y_n\\ y_n'&=&F(x,y_1,\ldots,y_n). \end{array} }[/math]
or, more compactly, in vector notation:
 [math]\displaystyle{ \mathbf{y}'=\mathbf{F}(x,\mathbf{y}) }[/math]
where
 [math]\displaystyle{ \mathbf{y}=(y_1,\ldots,y_n),\quad \mathbf{F}(x,y_1,\ldots,y_n)=(y_2,\ldots,y_n,F(x,y_1,\ldots,y_n)). }[/math]
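The reduction can be written as a small generic helper; this sketch (names are illustrative) wraps y″ = F(x, y, y′) into the first-order form above and checks it against the known solution of y″ = −y.

```python
import numpy as np
from scipy.integrate import solve_ivp

def as_first_order(F):
    """Rewrite y'' = F(x, y, y') as the system y1' = y2, y2' = F(x, y1, y2)."""
    def rhs(x, y):
        return [y[1], F(x, y[0], y[1])]
    return rhs

# Illustrative check: y'' = -y, y(0) = 0, y'(0) = 1 has solution sin(x).
rhs = as_first_order(lambda x, y, yp: -y)
xs = np.linspace(0.0, np.pi, 50)
sol = solve_ivp(rhs, (0.0, np.pi), [0.0, 1.0], t_eval=xs,
                rtol=1e-9, atol=1e-12)
err = np.max(np.abs(sol.y[0] - np.sin(xs)))
```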
Summary of exact solutions
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, P(x), Q(x), P(y), Q(y), and M(x,y), N(x,y) are any integrable functions of x and y; b and c are given real constants; and C_{1}, C_{2}, ... are arbitrary constants (complex in general). The differential equations are given in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, λ and ε are dummy variables of integration (the continuum analogues of indices in summation), and the notation ∫^{x} F(λ) dλ just means to integrate F(λ) with respect to λ, then after the integration substitute λ = x, without adding constants (explicitly stated).
Separable equations
Differential equation  Solution method  General solution 

First-order, separable in x and y (general case, see below for special cases)^{[27]}
[math]\displaystyle{ \begin{align} P_1(x)Q_1(y) + P_2(x)Q_2(y)\,\frac{dy}{dx} &= 0 \\ P_1(x)Q_1(y)\,dx + P_2(x)Q_2(y)\,dy &= 0 \end{align} }[/math] 
Separation of variables (divide by P_{2}Q_{1}).  [math]\displaystyle{ \int^x \frac{P_1(\lambda)}{P_2(\lambda)}\,d\lambda + \int^y \frac{Q_2(\lambda)}{Q_1(\lambda)}\,d\lambda = C }[/math] 
First-order, separable in x^{[25]}
[math]\displaystyle{ \begin{align} \frac{dy}{dx} &= F(x) \\ dy &= F(x) \, dx \end{align} }[/math] 
Direct integration.  [math]\displaystyle{ y= \int^x F(\lambda) \, d\lambda + C }[/math] 
First-order, autonomous, separable in y^{[25]}
[math]\displaystyle{ \begin{align} \frac{dy}{dx} &= F(y) \\ dy &= F(y) \, dx \end{align} }[/math] 
Separation of variables (divide by F).  [math]\displaystyle{ x=\int^y \frac{d\lambda}{F(\lambda)} + C }[/math] 
First-order, separable in x and y^{[25]}
[math]\displaystyle{ \begin{align} P(y)\frac{dy}{dx} + Q(x) &= 0 \\ P(y)\,dy + Q(x)\,dx &= 0 \end{align} }[/math] 
Integrate throughout.  [math]\displaystyle{ \int^y P(\lambda)\, d\lambda + \int^x Q(\lambda)\,d\lambda = C }[/math] 
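As a worked instance of separation of variables (an illustrative equation, checked with SymPy): dy/dx = xy separates into dy/y = x dx, giving y = C1·exp(x²/2).

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Separable equation dy/dx = x*y:  dy/y = x dx  =>  y = C1 * exp(x**2/2).
eq = sp.Eq(y(x).diff(x), x * y(x))
sol = sp.dsolve(eq, y(x), hint='separable')

# checkodesol substitutes the solution back into the equation.
ok, residual = sp.checkodesol(eq, sol)
```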
General firstorder equations
Differential equation  Solution method  General solution 

First-order, homogeneous^{[25]}
[math]\displaystyle{ \frac{dy}{dx} = F \left( \frac y x \right ) }[/math] 
Set y = ux, then solve by separation of variables in u and x.  [math]\displaystyle{ \ln (Cx) = \int^{y/x} \frac{d\lambda}{F(\lambda) - \lambda} }[/math] 
First-order, separable^{[27]}
[math]\displaystyle{ \begin{align} yM(xy) + xN(xy)\,\frac{dy}{dx} &= 0 \\ yM(xy)\,dx + xN(xy)\,dy &= 0 \end{align} }[/math] 
Separation of variables (divide by xy). 
[math]\displaystyle{ \ln (Cx) = \int^{xy} \frac{N(\lambda)\,d\lambda}{\lambda [N(\lambda)-M(\lambda)] } }[/math] If N = M, the solution is xy = C. 
Exact differential, first-order^{[25]}
[math]\displaystyle{ \begin{align} M(x,y) + N(x,y)\,\frac{dy}{dx} &= 0 \\ M(x,y)\,dx + N(x,y)\,dy &= 0 \end{align} }[/math] where [math]\displaystyle{ \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} }[/math] 
Integrate throughout.  [math]\displaystyle{ \begin{align}
F(x,y) &= \int^x M(\lambda,y)\,d\lambda + \int^y Y(\lambda)\,d\lambda \\
&= \int^y N(x,\lambda)\,d\lambda + \int^x X(\lambda)\,d\lambda=C
\end{align} }[/math]
where [math]\displaystyle{ Y(y)= N(x,y)-\frac{\partial}{\partial y}\int^x M(\lambda,y)\,d\lambda }[/math] and [math]\displaystyle{ X(x)= M(x,y)-\frac{\partial}{\partial x}\int^y N(x,\lambda)\,d\lambda }[/math] 
Inexact differential, first-order^{[25]}
[math]\displaystyle{ \begin{align} M(x,y) + N(x,y)\,\frac{dy}{dx} &= 0 \\ M(x,y)\,dx + N(x,y)\,dy &= 0 \end{align} }[/math] where [math]\displaystyle{ \frac{\partial M}{\partial y} \neq \frac{\partial N}{\partial x} }[/math] 
Integration factor μ(x, y) satisfying
[math]\displaystyle{ \frac{\partial (\mu M)}{\partial y} = \frac{\partial (\mu N)}{\partial x} }[/math] 
If μ(x, y) can be found in a suitable way, then
[math]\displaystyle{ \begin{align} F(x,y) = &\int^x \mu(\lambda,y)M(\lambda,y)\,d\lambda + \int^y Y(\lambda)\,d\lambda \\ = &\int^y \mu(x,\lambda)N(x,\lambda)\,d\lambda + \int^x X(\lambda)\,d\lambda = C \end{align} }[/math] where [math]\displaystyle{ Y(y)= \mu(x,y)N(x,y)-\frac{\partial}{\partial y}\int^x \mu (\lambda,y)M(\lambda,y)\,d\lambda }[/math] and [math]\displaystyle{ X(x)= \mu(x,y)M(x,y)-\frac{\partial}{\partial x}\int^y \mu (x,\lambda)N(x,\lambda)\,d\lambda }[/math] 
General secondorder equations
Differential equation  Solution method  General solution 

Second-order, autonomous^{[28]}
[math]\displaystyle{ \frac{d^2y}{dx^2} = F(y) }[/math] 
Multiply both sides of equation by 2dy/dx, substitute [math]\displaystyle{ 2 \frac{dy}{dx} \frac{d^2y}{dx^2} = \frac{d}{dx} \left(\frac{dy}{dx}\right)^2 }[/math], then integrate twice.  [math]\displaystyle{ x = \pm \int^y \frac{ d \lambda}{\sqrt{2 \int^\lambda F(\varepsilon) \, d \varepsilon + C_1}} + C_2 }[/math] 
Linear to the nth order equations
Differential equation  Solution method  General solution 

First-order, linear, inhomogeneous, function coefficients^{[25]}
[math]\displaystyle{ \frac{dy}{dx} + P(x)y = Q(x) }[/math] 
Integrating factor: [math]\displaystyle{ e^{\int^x P(\lambda)\,d\lambda}. }[/math] 
[math]\displaystyle{ y = e^{-\int^x P(\lambda) \, d\lambda}\left[\int^x e^{\int^\lambda P(\varepsilon) \, d\varepsilon}Q(\lambda) \, d\lambda +C \right] }[/math] 
Second-order, linear, inhomogeneous, function coefficients
[math]\displaystyle{ \frac{d^2y}{dx^2}+2p(x)\frac{dy}{dx}+\left(p(x)^2+p'(x)\right)y=q(x) }[/math] 
Integrating factor: [math]\displaystyle{ e^{\int^x p(\lambda)\,d\lambda} }[/math]  [math]\displaystyle{ y = e^{-\int^x p(\lambda) \, d\lambda}\left[\int^x\left(\int^\xi e^{\int^\lambda p(\varepsilon) \, d\varepsilon}q(\lambda) \, d\lambda \right)d\xi +C_1x+C_2\right] }[/math] 
Second-order, linear, inhomogeneous, constant coefficients^{[29]}
[math]\displaystyle{ \frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = r(x) }[/math] 
Complementary function y_{c}: assume y_{c} = e^{αx}, substitute and solve polynomial in α, to find the linearly independent functions [math]\displaystyle{ e^{\alpha_j x} }[/math].
Particular integral y_{p}: in general the method of variation of parameters, though for very simple r(x) inspection may work.^{[25]} 
[math]\displaystyle{ y = y_c + y_p }[/math]
If b^{2} > 4c, then [math]\displaystyle{ y_c = C_1e^{-\frac x2\,\left(b + \sqrt{b^2 - 4c}\right)} + C_2e^{-\frac x2\,\left(b - \sqrt{b^2 - 4c}\right)} }[/math] If b^{2} = 4c, then [math]\displaystyle{ y_c = (C_1x + C_2)e^{-\frac{bx}{2}} }[/math] If b^{2} < 4c, then [math]\displaystyle{ y_c = e^{-\frac{bx}{2}} \left[ C_1 \sin\left( x\,\frac{\sqrt{4c-b^2}}{2}\right) + C_2\cos\left( x\,\frac{\sqrt{4c-b^2}}{2}\right)\right] }[/math] 
nth-order, linear, inhomogeneous, constant coefficients^{[29]}
[math]\displaystyle{ \sum_{j=0}^n b_j \frac{d^j y}{dx^j} = r(x) }[/math] 
Complementary function y_{c}: assume y_{c} = e^{αx}, substitute and solve polynomial in α, to find the linearly independent functions [math]\displaystyle{ e^{\alpha_j x} }[/math].
Particular integral y_{p}: in general the method of variation of parameters, though for very simple r(x) inspection may work.^{[25]} 
[math]\displaystyle{ y = y_c + y_p }[/math]
Since α_{j} are the solutions of the polynomial of degree n: [math]\displaystyle{ \prod_{j=1}^n \left( \alpha - \alpha_j \right) = 0 }[/math], then: for α_{j} all different, [math]\displaystyle{ y_c = \sum_{j=1}^n C_j e^{\alpha_j x} }[/math] for each root α_{j} repeated k_{j} times, [math]\displaystyle{ y_c = \sum_{j=1}^n \left( \sum_{\ell=1}^{k_j} C_{j,\ell} x^{\ell-1}\right )e^{\alpha_j x} }[/math] for some α_{j} complex, then setting α_{j} = χ_{j} + iγ_{j}, and using Euler's formula, allows some terms in the previous results to be written in the form [math]\displaystyle{ C_j e^{\alpha_j x} = C_j e^{\chi_j x}\cos(\gamma_j x + \varphi_j) }[/math] where ϕ_{j} is an arbitrary constant (phase shift). 
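A numerical sketch of the constant-coefficient recipe, with illustrative values: for y″ + 3y′ + 2y = 0 the characteristic polynomial α² + 3α + 2 has roots −1 and −2, and each e^{αx} solves the homogeneous equation.

```python
import numpy as np

# Characteristic polynomial of y'' + b y' + c y = 0 with b = 3, c = 2.
b, c = 3.0, 2.0
alphas = np.roots([1.0, b, c])          # roots of alpha^2 + b*alpha + c

# Each exponential e^{alpha x} should satisfy the ODE; the residual is
# (alpha^2 + b*alpha + c) * e^{alpha x}, checked at the sample point x = 0.7.
xs = 0.7
residuals = [(a**2 + b * a + c) * np.exp(a * xs) for a in alphas]
worst = max(abs(r) for r in residuals)
```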
The guessing method
When all other methods for solving an ODE fail, or when we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating that it is correct. To use this method, we guess a solution to the differential equation, and then plug the solution into the differential equation to check whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over and try another guess. For instance, we could guess that the solution to a DE has the form [math]\displaystyle{ y = Ae^{\alpha t} }[/math], since exponentials of this form are very common solutions and, for imaginary α, behave sinusoidally.
In the case of a first-order ODE that is non-homogeneous, we need first to find a solution to the homogeneous portion of the DE (found by solving its characteristic equation), and then find a particular solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the total solution to the ODE, that is:
[math]\displaystyle{ \text{total solution} = \text{homogeneous solution} + \text{particular solution} }[/math]
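A SymPy sketch of the guessing method for the illustrative equation y′ + y = e^{2t}: guessing y_p = A e^{2t} fixes A = 1/3, and adding the homogeneous solution C e^{−t} gives the total solution.

```python
import sympy as sp

t, A, C = sp.symbols('t A C')

# Guess a particular solution y_p = A*exp(2t) for y' + y = exp(2t);
# substituting gives (2A + A) e^{2t} = e^{2t}, so A = 1/3.
guess = A * sp.exp(2 * t)
A_val = sp.solve(sp.Eq(guess.diff(t) + guess, sp.exp(2 * t)), A)[0]

# total solution = homogeneous solution + particular solution
total = C * sp.exp(-t) + A_val * sp.exp(2 * t)
residual = sp.simplify(total.diff(t) + total - sp.exp(2 * t))
```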
Software for ODE solving
 Maxima, an open-source computer algebra system.
 COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
 MATLAB, a technical computing application (MATrix LABoratory)
 GNU Octave, a high-level language, primarily intended for numerical computations.
 Scilab, an open-source application for numerical computation.
 Maple, a proprietary application for symbolic calculations.
 Mathematica, a proprietary application primarily intended for symbolic calculations.
 SymPy, a Python package that can solve ODEs symbolically
 Julia (programming language), a high-level language primarily intended for numerical computations.
 SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
 SciPy, a Python package that includes an ODE integration module.
 Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
 GNU R, an open-source computational environment primarily intended for statistics, which includes packages for ODE solving.
See also
 Boundary value problem
 Examples of differential equations
 Laplace transform applied to differential equations
 List of dynamical systems and differential equations topics
 Matrix differential equation
 Method of undetermined coefficients
 Recurrence relation
Notes
 ↑ Dennis G. Zill (15 March 2012). A First Course in Differential Equations with Modeling Applications. Cengage Learning. ISBN 9781285401102. https://books.google.com/books?id=pasKAAAAQBAJ&q=%22ordinary+differential%22. Retrieved 11 July 2019.
 ↑ "What is the origin of the term "ordinary differential equations"?". Stack Exchange. http://hsm.stackexchange.com/a/5032/1772.
 ↑ Mathematics for Chemists, D.M. Hirst, Macmillan Press, 1976, (No ISBN) SBN: 333181727
 ↑ (Kreyszig 1972)
 ↑ (Simmons 1972)
 ↑ (Halliday Resnick)
 ↑ (Tipler 1991)
 ↑ ^{8.0} ^{8.1} (Harper 1976)
 ↑ (Kreyszig 1972)
 ↑ (Simmons 1972)
 ↑ ^{11.0} ^{11.1} (Kreyszig 1972)
 ↑ (Simmons 1972)
 ↑ (Harper 1976)
 ↑ (Kreyszig 1972)
 ↑ (Ascher 1998)
 ↑ Achim Ilchmann; Timo Reis (2014). Surveys in DifferentialAlgebraic Equations II. Springer. pp. 104–105. ISBN 9783319110509.
 ↑ (Ascher 1998)
 ↑ (Kreyszig 1972)
 ↑ (Kreyszig 1972)
 ↑ Vardia T. Haimo (1985). "Finite Time Differential Equations". 1985 24th IEEE Conference on Decision and Control. pp. 1729–1733. doi:10.1109/CDC.1985.268832. https://ieeexplore.ieee.org/document/4048613.
 ↑ Crelle, 1866, 1868
 ↑ (Lawrence 1999)
 ↑ Logan, J. (2013). Applied mathematics (Fourth ed.).
 ↑ (Ascher 1998)
 ↑ ^{25.0} ^{25.1} ^{25.2} ^{25.3} ^{25.4} ^{25.5} ^{25.6} ^{25.7} ^{25.8} ^{25.9} Elementary Differential Equations and Boundary Value Problems (4th Edition), W.E. Boyce, R.C. Diprima, Wiley International, John Wiley & Sons, 1986, ISBN 0471838241
 ↑ Boscain; Chitour 2011, p. 21
 ↑ ^{27.0} ^{27.1} Mathematical Handbook of Formulas and Tables (3rd edition), S. Lipschutz, M. R. Spiegel, J. Liu, Schaum's Outline Series, 2009, ISBN 9780071548557
 ↑ Further Elementary Analysis, R. Porter, G.Bell & Sons (London), 1978, ISBN 0713515945
 ↑ ^{29.0} ^{29.1} Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 9780521861533
References
 Halliday, David; Resnick, Robert (1977), Physics (3rd ed.), New York: John Wiley & Sons, ISBN 0471717169
 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0134875389
 Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: John Wiley & Sons, ISBN 0471507288, https://archive.org/details/advancedengineer00krey.
 Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1584882972
 Simmons, George F. (1972), Differential Equations with Applications and Historical Notes, New York: McGraw-Hill
 Tipler, Paul A. (1991), Physics for Scientists and Engineers: Extended version (3rd ed.), New York: Worth Publishers, ISBN 0879014326
 Boscain, Ugo; Chitour, Yacine (2011) (in fr), Introduction à l'automatique, http://www.cmapx.polytechnique.fr/~boscain/poly2011.pdf
 Dresner, Lawrence (1999), Applications of Lie's Theory of Ordinary and Partial Differential Equations, Bristol and Philadelphia: Institute of Physics Publishing, ISBN 9780750305303
 Ascher, Uri; Petzold, Linda (1998), Computer Methods for Ordinary Differential Equations and DifferentialAlgebraic Equations, SIAM, ISBN 9781611971392
Bibliography
 Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw-Hill. https://archive.org/details/theoryofordinary00codd.
 Hartman, Philip (2002), Ordinary differential equations, Classics in Applied Mathematics, 38, Philadelphia: Society for Industrial and Applied Mathematics, doi:10.1137/1.9780898719222, ISBN 9780898715101, https://books.google.com/books?id=CENAPMUEpfoC
 W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
 Ince, Edward L. (1944), Ordinary Differential Equations, Dover Publications, New York, ISBN 9780486603490, https://archive.org/details/ordinarydifferen029666mbp
 Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications, ISBN 0486495108
 Ibragimov, Nail H. (1993). CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1–3. Providence: CRC Press. ISBN 0849344883.
 Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 9780821883280. https://www.mat.univie.ac.at/~gerald/ftp/bookode/.
 A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002. ISBN 041527267X
 D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
External links
 Hazewinkel, Michiel, ed. (2001), "Differential equation, ordinary", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 9781556080104, https://www.encyclopediaofmath.org/index.php?title=p/d031910
 EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
 Online Notes / Differential Equations by Paul Dawkins, Lamar University.
 Differential Equations, S.O.S. Mathematics.
 A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
 Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
 Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
 Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
 Solving an ordinary differential equation in WolframAlpha
Original source: https://en.wikipedia.org/wiki/Ordinary differential equation.