Linear time-invariant system
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t) where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). Moreover, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.^{[2]}
Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves.
Overview
The defining properties of any LTI system are linearity and time invariance.
 Linearity means that the relationship between the input [math]\displaystyle{ x(t) }[/math] and the output [math]\displaystyle{ y(t) }[/math], both being regarded as functions, is a linear mapping: If [math]\displaystyle{ a }[/math] is a constant then the system output to [math]\displaystyle{ ax(t) }[/math] is [math]\displaystyle{ ay(t) }[/math]; if [math]\displaystyle{ x'(t) }[/math] is a further input with system output [math]\displaystyle{ y'(t) }[/math] then the output of the system to [math]\displaystyle{ x(t)+x'(t) }[/math] is [math]\displaystyle{ y(t)+y'(t) }[/math], this applying for all choices of [math]\displaystyle{ a }[/math], [math]\displaystyle{ x(t) }[/math], [math]\displaystyle{ x'(t) }[/math]. The latter condition is often referred to as the superposition principle.
 Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input [math]\displaystyle{ x(t) }[/math] is [math]\displaystyle{ y(t) }[/math], then the output due to input [math]\displaystyle{ x(t-T) }[/math] is [math]\displaystyle{ y(t-T) }[/math]. Hence, the system is time invariant because the output does not depend on the particular time the input is applied.
The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system [math]\displaystyle{ y(t) }[/math] is simply the convolution of the input to the system [math]\displaystyle{ x(t) }[/math] with the system's impulse response [math]\displaystyle{ h(t) }[/math]. This is called a continuous-time system. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating in discrete time: [math]\displaystyle{ y_{i} = x_{i} * h_{i} }[/math] where y, x, and h are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral.
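For the discrete-time case, the convolution sum can be verified numerically. A minimal sketch in Python/NumPy (the filter taps and input values here are arbitrary illustrative choices):

```python
import numpy as np

# Hypothetical impulse response of a 3-tap moving-average filter
h = np.array([1/3, 1/3, 1/3])
x = np.array([0.0, 1.0, 2.0, 3.0, 0.0])  # arbitrary input sequence

# y[n] = sum_k x[k] * h[n-k]  (discrete convolution)
y = np.convolve(x, h)

# Same result computed directly from the definition
n_out = len(x) + len(h) - 1
y_def = np.zeros(n_out)
for n in range(n_out):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y_def[n] += x[k] * h[n - k]
```

The full convolution has length `len(x) + len(h) - 1`, matching the support of the sum.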
LTI systems can also be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response (or Z transform in the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain.
For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. That is, if the input to a system is the complex waveform [math]\displaystyle{ A_s e^{st} }[/math] for some complex amplitude [math]\displaystyle{ A_s }[/math] and complex frequency [math]\displaystyle{ s }[/math], the output will be some complex constant times the input, say [math]\displaystyle{ B_s e^{st} }[/math] for some new complex amplitude [math]\displaystyle{ B_s }[/math]. The ratio [math]\displaystyle{ B_s/A_s }[/math] is the transfer function at frequency [math]\displaystyle{ s }[/math].
Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input.
LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits.
Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals.
A linear system that is not timeinvariant can be solved using other approaches such as the Green function method.
Continuous-time systems
Impulse response and convolution
The behavior of a linear, continuous-time, time-invariant system with input signal x(t) and output signal y(t) is described by the convolution integral:^{[3]}
[math]\displaystyle{ y(t) = (x * h)(t) }[/math] [math]\displaystyle{ \mathrel{\stackrel{\mathrm{def}}{=}} \int\limits_{-\infty}^{\infty} x(t - \tau)\cdot h(\tau) \, \mathrm{d}\tau }[/math] [math]\displaystyle{ = \int\limits_{-\infty}^\infty x(\tau)\cdot h(t - \tau) \,\mathrm{d}\tau, }[/math] (using commutativity)
where [math]\displaystyle{ h(t) }[/math] is the system's response to an impulse: [math]\displaystyle{ x(\tau) = \delta(\tau) }[/math]. [math]\displaystyle{ y(t) }[/math] is therefore proportional to a weighted average of the input function [math]\displaystyle{ x(\tau) }[/math]. The weighting function is [math]\displaystyle{ h(-\tau) }[/math], simply shifted by amount [math]\displaystyle{ t }[/math]. As [math]\displaystyle{ t }[/math] changes, the weighting function emphasizes different parts of the input function. When [math]\displaystyle{ h(\tau) }[/math] is zero for all negative [math]\displaystyle{ \tau }[/math], [math]\displaystyle{ y(t) }[/math] depends only on values of [math]\displaystyle{ x }[/math] prior to time [math]\displaystyle{ t }[/math], and the system is said to be causal.
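The convolution integral can be approximated numerically by a Riemann sum. A sketch, assuming the causal impulse response h(t) = e^{−t} for t ≥ 0 and a unit-step input, for which the exact output 1 − e^{−t} is known:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 5, dt)
x = np.ones_like(t)          # unit-step input: x(t) = 1 for t >= 0
h = np.exp(-t)               # causal impulse response h(t) = e^{-t}, t >= 0

# Riemann-sum approximation of y(t) = integral of x(tau) h(t - tau) dtau
y = np.convolve(x, h)[:len(t)] * dt

y_exact = 1 - np.exp(-t)     # known closed form for this input/response pair
err = np.max(np.abs(y - y_exact))
```

The discretization error shrinks with the step size `dt`.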
To understand why the convolution produces the output of an LTI system, let the notation [math]\displaystyle{ \{x(u-\tau);\ u\} }[/math] represent the function [math]\displaystyle{ x(u-\tau) }[/math] with variable [math]\displaystyle{ u }[/math] and constant [math]\displaystyle{ \tau }[/math]. And let the shorter notation [math]\displaystyle{ \{x\} }[/math] represent [math]\displaystyle{ \{x(u);\ u\} }[/math]. Then a continuous-time system transforms an input function, [math]\displaystyle{ \{x\}, }[/math] into an output function, [math]\displaystyle{ \{y\} }[/math]. And in general, every value of the output can depend on every value of the input. This concept is represented by: [math]\displaystyle{ y(t) \mathrel{\stackrel{\text{def}}{=}} O_t\{x\}, }[/math] where [math]\displaystyle{ O_t }[/math] is the transformation operator for time [math]\displaystyle{ t }[/math]. In a typical system, [math]\displaystyle{ y(t) }[/math] depends most heavily on the values of [math]\displaystyle{ x }[/math] that occurred near time [math]\displaystyle{ t }[/math]. Unless the transform itself changes with [math]\displaystyle{ t }[/math], the output function is just constant, and the system is uninteresting.
For a linear system, [math]\displaystyle{ O }[/math] must satisfy Eq.2:

[math]\displaystyle{ O_t\left\{\int\limits_{-\infty}^\infty c_{\tau}\ x_{\tau}(u) \, \mathrm{d}\tau ;\ u\right\} = \int\limits_{-\infty}^\infty c_\tau\ \underbrace{y_\tau(t)}_{O_t\{x_\tau\}} \, \mathrm{d}\tau. }[/math]
(Eq.2)
And the timeinvariance requirement is:

[math]\displaystyle{ \begin{align} O_t\{x(u - \tau);\ u\} &\mathrel{\stackrel{\quad}{=}} y(t - \tau)\\ &\mathrel{\stackrel{\text{def}}{=}} O_{t-\tau}\{x\}.\, \end{align} }[/math]
(Eq.3)
In this notation, we can write the impulse response as [math]\displaystyle{ h(t) \mathrel{\stackrel{\text{def}}{=}} O_t\{\delta(u);\ u\}. }[/math]
Similarly:
[math]\displaystyle{ h(t - \tau) }[/math] [math]\displaystyle{ \mathrel{\stackrel{\text{def}}{=}} O_{t-\tau}\{\delta(u);\ u\} }[/math] [math]\displaystyle{ = O_t\{\delta(u - \tau);\ u\}. }[/math] (using Eq.3)
Substituting this result into the convolution integral: [math]\displaystyle{ \begin{align} (x * h)(t) &= \int_{-\infty}^\infty x(\tau)\cdot h(t - \tau) \,\mathrm{d}\tau \\[4pt] &= \int_{-\infty}^\infty x(\tau)\cdot O_t\{\delta(u-\tau);\ u\} \, \mathrm{d}\tau,\, \end{align} }[/math]
which has the form of the right side of Eq.2 for the case [math]\displaystyle{ c_\tau = x(\tau) }[/math] and [math]\displaystyle{ x_\tau(u) = \delta(u-\tau). }[/math]
Eq.2 then allows this continuation: [math]\displaystyle{ \begin{align} (x * h)(t) &= O_t\left\{\int_{-\infty}^\infty x(\tau)\cdot \delta(u-\tau) \, \mathrm{d}\tau;\ u \right\}\\[4pt] &= O_t\left\{x(u);\ u \right\}\\ &\mathrel{\stackrel{\text{def}}{=}} y(t).\, \end{align} }[/math]
In summary, the input function, [math]\displaystyle{ \{x\} }[/math], can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown above. The system's linearity property allows the system's response to be represented by the corresponding continuum of impulse responses, combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral.
The mathematical operations above have a simple graphical simulation.^{[4]}
Exponentials as eigenfunctions
An eigenfunction is a function for which the output of the operator is a scaled version of the same function. That is, [math]\displaystyle{ \mathcal{H}f = \lambda f, }[/math] where f is the eigenfunction and [math]\displaystyle{ \lambda }[/math] is the eigenvalue, a constant.
The exponential functions [math]\displaystyle{ A e^{s t} }[/math], where [math]\displaystyle{ A, s \in \mathbb{C} }[/math], are eigenfunctions of a linear, time-invariant operator. A simple proof illustrates this concept. Suppose the input is [math]\displaystyle{ x(t) = A e^{s t} }[/math]. The output of the system with impulse response [math]\displaystyle{ h(t) }[/math] is then [math]\displaystyle{ \int_{-\infty}^\infty h(t - \tau) A e^{s \tau}\, \mathrm{d} \tau }[/math] which, by the commutative property of convolution, is equivalent to [math]\displaystyle{ \begin{align} \overbrace{\int_{-\infty}^\infty h(\tau) \, A e^{s (t - \tau)} \, \mathrm{d} \tau}^{\mathcal{H} f} &= \int_{-\infty}^\infty h(\tau) \, A e^{s t} e^{-s \tau} \, \mathrm{d} \tau \\[4pt] &= A e^{s t} \int_{-\infty}^{\infty} h(\tau) \, e^{-s \tau} \, \mathrm{d} \tau \\[4pt] &= \overbrace{\underbrace{A e^{s t}}_{\text{Input}}}^{f} \overbrace{\underbrace{H(s)}_{\text{Scalar}}}^{\lambda}, \\ \end{align} }[/math]
where the scalar [math]\displaystyle{ H(s) \mathrel{\stackrel{\text{def}}{=}} \int_{-\infty}^\infty h(t) e^{-s t} \, \mathrm{d} t }[/math] is dependent only on the parameter s.
So the system's response is a scaled version of the input. In particular, for any [math]\displaystyle{ A, s \in \mathbb{C} }[/math], the system output is the product of the input [math]\displaystyle{ A e^{st} }[/math] and the constant [math]\displaystyle{ H(s) }[/math]. Hence, [math]\displaystyle{ A e^{s t} }[/math] is an eigenfunction of an LTI system, and the corresponding eigenvalue is [math]\displaystyle{ H(s) }[/math].
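This eigenfunction property can be checked numerically by discretizing the integrals. A sketch assuming the causal impulse response h(t) = e^{−2t}, for which H(s) = 1/(s + 2) in closed form:

```python
import numpy as np

dt = 1e-3
tau = np.arange(0, 20, dt)
h = np.exp(-2 * tau)               # assumed causal impulse response h(t) = e^{-2t}
s = 1j * 1.5                       # a complex frequency on the imaginary axis

# H(s) = integral of h(tau) e^{-s tau} dtau; closed form for this h is 1/(s+2)
H_num = np.sum(h * np.exp(-s * tau)) * dt
H_exact = 1 / (s + 2)

# Response to x(t) = e^{st} at a fixed time t0: y(t0) = H(s) * e^{s t0}
t0 = 5.0
y_t0 = np.sum(h * np.exp(s * (t0 - tau))) * dt   # convolution integral at t0
ratio = y_t0 / np.exp(s * t0)                    # should approximate H(s)
```

The output/input ratio is independent of t0, which is exactly the eigenfunction statement.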
Direct proof
It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems.
Let [math]\displaystyle{ v(t) = e^{i \omega t} }[/math] be a complex exponential and [math]\displaystyle{ v_a(t) = e^{i \omega (t+a)} }[/math] a time-shifted version of it.
[math]\displaystyle{ H[v_a](t) = e^{i\omega a} H[v](t) }[/math] by linearity with respect to the constant [math]\displaystyle{ e^{i \omega a} }[/math].
[math]\displaystyle{ H[v_a](t) = H[v](t+a) }[/math] by time invariance of [math]\displaystyle{ H }[/math].
So [math]\displaystyle{ H[v](t+a) = e^{i \omega a} H[v](t) }[/math]. Setting [math]\displaystyle{ t = 0 }[/math] and renaming, we get [math]\displaystyle{ H[v](\tau) = e^{i\omega \tau} H[v](0), }[/math] i.e. a complex exponential input [math]\displaystyle{ e^{i \omega \tau} }[/math] gives a complex exponential output of the same frequency.
Fourier and Laplace transforms
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sided Laplace transform [math]\displaystyle{ H(s) \mathrel{\stackrel{\text{def}}{=}} \mathcal{L}\{h(t)\} \mathrel{\stackrel{\text{def}}{=}} \int_0^\infty h(t) e^{-s t} \, \mathrm{d} t }[/math] is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the form [math]\displaystyle{ e^{j \omega t} }[/math] where [math]\displaystyle{ \omega \in \mathbb{R} }[/math] and [math]\displaystyle{ j \mathrel{\stackrel{\text{def}}{=}} \sqrt{-1} }[/math]). The Fourier transform [math]\displaystyle{ H(j \omega) = \mathcal{F}\{h(t)\} }[/math] gives the eigenvalues for pure complex sinusoids. Both of [math]\displaystyle{ H(s) }[/math] and [math]\displaystyle{ H(j\omega) }[/math] are called the system function, system response, or transfer function.
The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values of t less than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as the bilateral Laplace transform).
The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are not square integrable. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals do not exist.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms exist [math]\displaystyle{ y(t) = (h*x)(t) \mathrel{\stackrel{\text{def}}{=}} \int_{-\infty}^\infty h(t - \tau) x(\tau) \, \mathrm{d} \tau \mathrel{\stackrel{\text{def}}{=}} \mathcal{L}^{-1}\{H(s)X(s)\}. }[/math]
One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequency s = jω, where ω = 2πf, we obtain |H(s)| which is the system gain for frequency f. The relative phase shift between the output and input for that frequency component is likewise given by arg(H(s)).
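As an illustration, |H(s)| and arg(H(s)) can be evaluated at s = jω for a hypothetical first-order low-pass system H(s) = 1/(1 + sRC); the time constant and frequency below are assumed values:

```python
import numpy as np

# Hypothetical first-order RC low-pass filter: H(s) = 1 / (1 + s*RC)
RC = 1.0e-3            # assumed time constant (1 ms)
f = 1000.0             # frequency of interest, Hz
w = 2 * np.pi * f
s = 1j * w             # evaluate on the imaginary axis, s = j*omega

H = 1 / (1 + s * RC)
gain = np.abs(H)       # |H(jw)|: amplitude ratio at frequency f
phase = np.angle(H)    # arg H(jw): phase shift (radians) at frequency f
```

For a first-order low-pass, the gain is 1/√(1 + (ωRC)²) and the phase is −arctan(ωRC).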
Examples
 A simple example of an LTI operator is the derivative.
 [math]\displaystyle{ \frac{\mathrm{d}}{\mathrm{d}t} \left( c_1 x_1(t) + c_2 x_2(t) \right) = c_1 x'_1(t) + c_2 x'_2(t) }[/math] (i.e., it is linear)
 [math]\displaystyle{ \frac{\mathrm{d}}{\mathrm{d}t} x(t-\tau) = x'(t-\tau) }[/math] (i.e., it is time invariant)
When the Laplace transform of the derivative is taken, it transforms to a simple multiplication by the Laplace variable s. [math]\displaystyle{ \mathcal{L}\left\{\frac{\mathrm{d}}{\mathrm{d}t}x(t)\right\} = s X(s) }[/math]
That the derivative has such a simple Laplace transform partly explains the utility of the transform.  Another simple LTI operator is an averaging operator [math]\displaystyle{ \mathcal{A}\left\{x(t)\right\} \mathrel{\stackrel{\text{def}}{=}} \int_{t-a}^{t+a} x(\lambda) \, \mathrm{d} \lambda. }[/math] By the linearity of integration, [math]\displaystyle{ \begin{align} \mathcal{A} \{c_1 x_1(t) + c_2 x_2(t)\} &= \int_{t-a}^{t+a} ( c_1 x_1(\lambda) + c_2 x_2(\lambda)) \, \mathrm{d} \lambda\\ &= c_1 \int_{t-a}^{t+a} x_1(\lambda) \, \mathrm{d} \lambda + c_2 \int_{t-a}^{t+a} x_2(\lambda) \, \mathrm{d} \lambda\\ &= c_1 \mathcal{A}\{x_1(t)\} + c_2 \mathcal{A} \{x_2(t) \}, \end{align} }[/math] it is linear. Additionally, because [math]\displaystyle{ \begin{align} \mathcal{A}\left\{x(t-\tau)\right\} &= \int_{t-a}^{t+a} x(\lambda-\tau) \, \mathrm{d} \lambda\\ &= \int_{(t-\tau)-a}^{(t-\tau)+a} x(\xi) \, \mathrm{d} \xi\\ &= \mathcal{A}\{x\}(t-\tau), \end{align} }[/math] it is time invariant. In fact, [math]\displaystyle{ \mathcal{A} }[/math] can be written as a convolution with the boxcar function [math]\displaystyle{ \Pi(t) }[/math]. That is, [math]\displaystyle{ \mathcal{A}\left\{x(t)\right\} = \int_{-\infty}^\infty \Pi\left(\frac{\lambda-t}{2a}\right) x(\lambda) \, \mathrm{d} \lambda, }[/math] where the boxcar function [math]\displaystyle{ \Pi(t) \mathrel{\stackrel{\text{def}}{=}} \begin{cases} 1 &\text{if } |t| \lt \frac{1}{2},\\ 0 &\text{if } |t| \gt \frac{1}{2}. \end{cases} }[/math]
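The averaging operator can be exercised numerically as a boxcar convolution. A sketch using an assumed window half-width a = 0.5 and a sinusoid whose period equals the window length 2a, which the averaging then annihilates:

```python
import numpy as np

dt = 1e-3
a = 0.5                               # assumed half-width: window length 2a = 1.0
t = np.arange(-5, 5, dt)
x = np.sin(2 * np.pi * t)             # sinusoid with period 1 = window length

# A{x}(t) = integral of x over [t-a, t+a]  ==  convolution with a boxcar of width 2a
box = np.ones(int(round(2 * a / dt)))
y = np.convolve(x, box, mode='same') * dt

# Averaging over exactly one full period annihilates the sinusoid (away from edges)
interior = y[len(box):-len(box)]
```

Note the choice of window: integrating a sinusoid over an integer number of periods gives zero regardless of the window's position.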
Important system properties
Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time; however, this restriction is not present in other cases such as image processing.
Causality
A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is [math]\displaystyle{ h(t) = 0 \quad \forall t \lt 0, }[/math]
where [math]\displaystyle{ h(t) }[/math] is the impulse response. It is not possible in general to determine causality from the two-sided Laplace transform. However, when working in the time domain, one normally uses the one-sided Laplace transform which requires causality.
Stability
A system is bounded-input, bounded-output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying [math]\displaystyle{ \|x(t)\|_{\infty} \lt \infty }[/math]
leads to an output satisfying [math]\displaystyle{ \|y(t)\|_{\infty} \lt \infty }[/math]
(that is, a finite maximum absolute value of [math]\displaystyle{ x(t) }[/math] implies a finite maximum absolute value of [math]\displaystyle{ y(t) }[/math]), then the system is stable. A necessary and sufficient condition is that [math]\displaystyle{ h(t) }[/math], the impulse response, is in L^{1} (has a finite L^{1} norm): [math]\displaystyle{ \|h(t)\|_1 = \int_{-\infty}^\infty |h(t)| \, \mathrm{d}t \lt \infty. }[/math]
In the frequency domain, the region of convergence must contain the imaginary axis [math]\displaystyle{ s = j\omega }[/math].
As an example, the ideal low-pass filter with impulse response equal to a sinc function is not BIBO stable, because the sinc function does not have a finite L^{1} norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero for [math]\displaystyle{ t \lt 0 }[/math] and equal to a sinusoid at the cutoff frequency for [math]\displaystyle{ t \gt 0 }[/math], then the output will be unbounded for all times other than the zero crossings.
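The L^{1} criterion can be illustrated numerically: a decaying exponential has a finite L^{1} norm, while the absolute integral of the sinc keeps growing with the integration window (the window lengths below are arbitrary):

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 50, dt)

h_stable = np.exp(-t)                       # decaying exponential: BIBO stable
l1_stable = np.sum(np.abs(h_stable)) * dt   # approximates the L1 norm, which is 1

# Impulse response of an ideal low-pass filter: not absolutely integrable
h_sinc = np.sinc(t)                         # numpy's sinc is sin(pi t)/(pi t)
l1_sinc = np.sum(np.abs(h_sinc)) * dt       # over [0, 50)
l1_half = np.sum(np.abs(h_sinc[: len(t) // 2])) * dt   # over [0, 25)
```

Doubling the window noticeably increases the sinc's absolute integral (it grows roughly like log T), whereas the exponential's L^{1} norm has already converged.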
Deriving the Solution of Linear Time-Invariant Differential Equations
Given is an explicit linear system of differential equations in the form:
 [math]\displaystyle{ \begin{align} \frac{dx(t)}{dt}=A\,x(t)+b\,u(t), \,x(t=0)=x_{0} \end{align} }[/math]
with the state vector [math]\displaystyle{ x(t)\in \mathbb{R}^{n} }[/math], the system matrix [math]\displaystyle{ A\in\mathbb{R}^{n\times n} }[/math], the input [math]\displaystyle{ u(t)\in \mathbb{R} }[/math], the input vector [math]\displaystyle{ b\in \mathbb{R}^{n} }[/math] and the initial condition [math]\displaystyle{ x_{0} \in \mathbb{R}^{n} }[/math]. The solution consists of a homogeneous and a particular part.
Homogeneous solution
The homogeneous differential equation is obtained by setting the input equal to zero.
 [math]\displaystyle{ \begin{align} \frac{dx(t)}{dt}=A\,x(t),\, x(t=0)=x_{0} \end{align} }[/math]
This solution can now be described using a Taylor series representation:
 [math]\displaystyle{ \begin{align} x(t)=\phi(t)x_{0}=(E+\phi_{1}t+\phi_{2}t^{2}+...+\phi_{n}t^{n}+...)x_0 \end{align} }[/math]
where [math]\displaystyle{ E }[/math] is the unit matrix. Substituting this solution into the above equation, one obtains:
 [math]\displaystyle{ \begin{align} \frac{d}{dt}(\phi(t)x_0)&=A\,\phi(t)\,x_{0}\\ (\phi_{1}+2\,\phi_{2}\,t+...+n\,\phi_{n}\,t^{n-1}+...)x_0&=(A+A\phi_{1}t+A\phi_{2}t^{2}+...+A\phi_{n}t^{n}+...)\,x_0 \end{align} }[/math]
Now the unknown matrices [math]\displaystyle{ \phi_{n} }[/math] can be determined by comparing coefficients:
 [math]\displaystyle{ \begin{align} \phi_{1} &= A \\ \phi_{2}&=\frac{1}{2}A\,\phi_{1}=\frac{1}{2!}A^{2} \\ \phi_{3}&=\frac{1}{3}A\,\phi_{2}=\frac{1}{3!}A^{3} \\ &... \\ \phi_{n}&=\frac{1}{n!}A^{n}. \end{align} }[/math]
The following notation is commonly used for the fundamental matrix [math]\displaystyle{ \phi(t) }[/math]:
 [math]\displaystyle{ \begin{align} \phi(t)=e^{At}=E+At+\frac{1}{2!}A^{2}t^{2}+\frac{1}{3!}A^{3}t^{3}+...+\frac{1}{n!}A^{n}t^{n}+... \end{align} }[/math]
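The truncated Taylor series above can be evaluated directly. A sketch, using A = [[0, 1], [−1, 0]], for which e^{At} is a rotation matrix with a known closed form:

```python
import numpy as np

def expm_series(A, t, n_terms=30):
    """Fundamental matrix phi(t) = e^{At} via the truncated Taylor series."""
    dim = A.shape[0]
    phi = np.eye(dim)                # E, the unit matrix (k = 0 term)
    term = np.eye(dim)
    for k in range(1, n_terms):
        term = term @ A * (t / k)    # builds A^k t^k / k! incrementally
        phi = phi + term
    return phi

# For A = [[0, 1], [-1, 0]] we have A^2 = -E, so e^{At} = E cos t + A sin t
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 1.3
phi = expm_series(A, t)
expected = np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
```

Thirty terms are far more than needed here; in practice one would use a library routine such as `scipy.linalg.expm`, which uses a more robust algorithm than the raw series.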
Particular solution
Assuming [math]\displaystyle{ u(t)\neq0 }[/math] and [math]\displaystyle{ x_{0}=0 }[/math], it follows that:
 [math]\displaystyle{ \begin{align} \frac{d}{dt}x(t)=A\,x(t)+b\,u(t) \end{align} }[/math]
The particular solution is obtained in the form:
 [math]\displaystyle{ \begin{align} x_{p}(t)=\phi(t)\xi(t)=e^{At}\xi(t), \end{align} }[/math]
where [math]\displaystyle{ \xi(t) }[/math] is an unknown function vector with [math]\displaystyle{ \xi(0)=0 }[/math]. From the above two equations follows:
 [math]\displaystyle{ \begin{align} \frac{d}{dt}x_{p}(t)&=A\,x_{p}(t)+b\,u(t)\\ \frac{d\phi(t)}{dt}\,\xi(t)+\phi(t)\,\frac{d\xi(t)}{dt}&=A\,x_{p}(t)+b\,u(t)\\ A\,\phi(t)\xi(t)+\phi(t)\frac{d}{dt}\xi(t)&=A\,x_{p}(t)+b\,u(t)\\ A\,x_{p}(t)+\phi(t)\frac{d}{dt}\xi(t)&=A\,x_{p}(t)+b\,u(t)\\ \end{align} }[/math]
Thus [math]\displaystyle{ \xi(t) }[/math] can be determined:
 [math]\displaystyle{ \begin{align} \frac{d}{dt}\xi(t)=\phi^{-1}(t)bu(t)\\ \end{align} }[/math]
One obtains by integration utilizing the properties of the fundamental matrix:
 [math]\displaystyle{ \begin{align} \phi(t)\xi(t)&=\phi(t)\int_{0}^{t}\phi^{-1}(\tau)bu(\tau)d\tau\\ x_{p}(t)&=\int_{0}^{t}\phi(t-\tau)bu(\tau)d\tau\\ x_{p}(t)&=\int_{0}^{t}e^{A(t-\tau)}bu(\tau)d\tau\\ \end{align} }[/math]
Thus, we finally obtain the solution of a linear timeinvariant differential equation:
 [math]\displaystyle{ \begin{align} x(t)=e^{At}x_{0} + \int_{0}^{t}e^{A(t-\tau)}bu(\tau)d\tau\\ \end{align} }[/math]
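The complete solution can be sanity-checked numerically. A sketch for an assumed diagonal system with a unit-step input, comparing a Riemann-sum evaluation of the convolution integral against the closed form e^{At}x_0 + A^{−1}(e^{At} − E)b, which holds for invertible A and constant u:

```python
import numpy as np

# Assumed diagonal system, so e^{At} has a simple closed form
A = np.diag([-1.0, -2.0])
b = np.array([1.0, 1.0])
x0 = np.array([2.0, 0.0])

def phi(t):
    """Fundamental matrix e^{At} for diagonal A."""
    return np.diag(np.exp(np.diag(A) * t))

t_end = 3.0
dtau = 1e-4
taus = np.arange(0, t_end, dtau)

# x(t) = e^{At} x0 + integral of e^{A(t-tau)} b u(tau) dtau, with unit step u = 1
integral = sum(phi(t_end - tau) @ b * dtau for tau in taus)
x_num = phi(t_end) @ x0 + integral

# Closed form for a unit step and invertible A: x = e^{At}x0 + A^{-1}(e^{At} - E)b
x_exact = phi(t_end) @ x0 + np.linalg.inv(A) @ (phi(t_end) - np.eye(2)) @ b
```

For the first state this reduces to 1 + e^{−t}, which is easy to verify by hand from x₁' = −x₁ + 1.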
Discrete-time systems
Almost everything in continuous-time systems has a counterpart in discrete-time systems.
Discrete-time systems from continuous-time systems
In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to.
In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. If [math]\displaystyle{ x(t) }[/math] is a CT signal, then the sampling circuit used before an analog-to-digital converter will transform it to a DT signal: [math]\displaystyle{ x_n \mathrel{\stackrel{\text{def}}{=}} x(nT) \qquad \forall \, n \in \mathbb{Z}, }[/math] where T is the sampling period. Before sampling, the input signal is normally run through a so-called Nyquist filter which removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency component above the folding frequency (or Nyquist frequency) is aliased to a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency.
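Aliasing can be demonstrated directly: sampled above the folding frequency, a sinusoid becomes indistinguishable from a lower-frequency one at the sample points (the sampling rate and frequencies below are assumed values):

```python
import numpy as np

T = 1.0 / 100.0                    # sampling period: fs = 100 Hz, folding freq 50 Hz
n = np.arange(0, 64)

f_high = 70.0                      # above the folding frequency
f_alias = f_high - 1.0 / T         # -30 Hz: the frequency f_high aliases to

x_high  = np.cos(2 * np.pi * f_high  * n * T)
x_alias = np.cos(2 * np.pi * f_alias * n * T)
# The two sample sequences are identical: the 70 Hz tone is unrecoverable
```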
Impulse response and convolution
Let [math]\displaystyle{ \{x[m - k];\ m\} }[/math] represent the sequence [math]\displaystyle{ \{x[m - k];\text{ for all integer values of } m\}. }[/math]
And let the shorter notation [math]\displaystyle{ \{x\} }[/math] represent [math]\displaystyle{ \{x[m];\ m\}. }[/math]
A discrete system transforms an input sequence, [math]\displaystyle{ \{x\} }[/math] into an output sequence, [math]\displaystyle{ \{y\}. }[/math] In general, every element of the output can depend on every element of the input. Representing the transformation operator by [math]\displaystyle{ O }[/math], we can write: [math]\displaystyle{ y[n] \mathrel{\stackrel{\text{def}}{=}} O_n\{x\}. }[/math]
Note that unless the transform itself changes with n, the output sequence is just constant, and the system is uninteresting. (Thus the subscript, n.) In a typical system, y[n] depends most heavily on the elements of x whose indices are near n.
For the special case of the Kronecker delta function, [math]\displaystyle{ x[m] = \delta[m], }[/math] the output sequence is the impulse response: [math]\displaystyle{ h[n] \mathrel{\stackrel{\text{def}}{=}} O_n\{\delta[m];\ m\}. }[/math]
For a linear system, [math]\displaystyle{ O }[/math] must satisfy:

[math]\displaystyle{ O_n\left\{\sum_{k=-\infty}^{\infty} c_k\cdot x_k[m];\ m\right\} = \sum_{k=-\infty}^{\infty} c_k\cdot O_n\{x_k\}. }[/math]
(Eq.4)
And the timeinvariance requirement is:

[math]\displaystyle{ \begin{align} O_n\{x[m-k];\ m\} &\mathrel{\stackrel{\quad}{=}} y[n-k]\\ &\mathrel{\stackrel{\text{def}}{=}} O_{n-k}\{x\}.\, \end{align} }[/math]
(Eq.5)
In such a system, the impulse response, [math]\displaystyle{ \{h\} }[/math], characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity: [math]\displaystyle{ x[m] \equiv \sum_{k=-\infty}^{\infty} x[k] \cdot \delta[m - k], }[/math]
which expresses [math]\displaystyle{ \{x\} }[/math] in terms of a sum of weighted delta functions.
Therefore: [math]\displaystyle{ \begin{align} y[n] = O_n\{x\} &= O_n\left\{\sum_{k=-\infty}^\infty x[k]\cdot \delta[m-k];\ m \right\}\\ &= \sum_{k=-\infty}^\infty x[k]\cdot O_n\{\delta[m-k];\ m\},\, \end{align} }[/math]
where we have invoked Eq.4 for the case [math]\displaystyle{ c_k = x[k] }[/math] and [math]\displaystyle{ x_k[m] = \delta[m-k] }[/math].
And because of Eq.5, we may write: [math]\displaystyle{ \begin{align} O_n\{\delta[m-k];\ m\} &\mathrel{\stackrel{\quad}{=}} O_{n-k}\{\delta[m];\ m\} \\ &\mathrel{\stackrel{\text{def}}{=}} h[n-k]. \end{align} }[/math]
Therefore:
[math]\displaystyle{ y[n] }[/math] [math]\displaystyle{ = \sum_{k=-\infty}^{\infty} x[k] \cdot h[n - k] }[/math] [math]\displaystyle{ = \sum_{k=-\infty}^{\infty} x[n-k] \cdot h[k], }[/math] (commutativity)
which is the familiar discrete convolution formula. The operator [math]\displaystyle{ O_n }[/math] can therefore be interpreted as proportional to a weighted average of the function x[k]. The weighting function is h[−k], simply shifted by amount n. As n changes, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse at n=0 is a "time" reversed copy of the unshifted weighting function. When h[k] is zero for all negative k, the system is said to be causal.
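These relations can be exercised in code: feeding a Kronecker delta into a hypothetical LTI system recovers h[n], which then predicts the response to any other input via the convolution sum:

```python
import numpy as np

rng = np.random.default_rng(0)

def system(x):
    """A hypothetical causal LTI system: y[n] = x[n] + 0.5 x[n-1] + 0.25 x[n-2]."""
    h = np.array([1.0, 0.5, 0.25])
    return np.convolve(x, h)[:len(x)]

# Feeding in a Kronecker delta recovers the impulse response
delta = np.zeros(8)
delta[0] = 1.0
h_measured = system(delta)

# The measured h then predicts the output for any other input
x = rng.standard_normal(8)
y_direct = system(x)
y_from_h = np.convolve(x, h_measured)[:len(x)]
```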
Exponentials as eigenfunctions
An eigenfunction is a function for which the output of the operator is the same function, scaled by some constant. In symbols, [math]\displaystyle{ \mathcal{H}f = \lambda f , }[/math]
where f is the eigenfunction and [math]\displaystyle{ \lambda }[/math] is the eigenvalue, a constant.
The exponential functions [math]\displaystyle{ z^n = e^{sT n} }[/math], where [math]\displaystyle{ n \in \mathbb{Z} }[/math], are eigenfunctions of a linear, time-invariant operator. [math]\displaystyle{ T \in \mathbb{R} }[/math] is the sampling interval, and [math]\displaystyle{ z = e^{sT}, \ z,s \in \mathbb{C} }[/math]. A simple proof illustrates this concept.
Suppose the input is [math]\displaystyle{ x[n] = z^n }[/math]. The output of the system with impulse response [math]\displaystyle{ h[n] }[/math] is then [math]\displaystyle{ \sum_{m=-\infty}^{\infty} h[n-m] \, z^m }[/math]
which is equivalent to the following by the commutative property of convolution [math]\displaystyle{ \sum_{m=-\infty}^{\infty} h[m] \, z^{(n - m)} = z^n \sum_{m=-\infty}^{\infty} h[m] \, z^{-m} = z^n H(z) }[/math] where [math]\displaystyle{ H(z) \mathrel{\stackrel{\text{def}}{=}} \sum_{m=-\infty}^\infty h[m] z^{-m} }[/math] is dependent only on the parameter z.
So [math]\displaystyle{ z^n }[/math] is an eigenfunction of an LTI system because the system response is the same as the input times the constant [math]\displaystyle{ H(z) }[/math].
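The eigenfunction property is easy to verify for a finite impulse response. A sketch with arbitrary taps h and an arbitrary complex z:

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])          # hypothetical FIR impulse response
z = 0.9 * np.exp(1j * 0.7)               # an arbitrary complex number z

# H(z) = sum of h[m] z^{-m}
H = sum(h[m] * z ** (-m) for m in range(len(h)))

# Apply the system to x[n] = z^n; past the start-up transient (length of h),
# the output equals H(z) * z^n exactly
n = np.arange(0, 40)
x = z ** n
y = np.convolve(x, h)[:len(n)]
ratio = y[10] / x[10]                    # output/input ratio at one sample
```

The ratio is the same at every index beyond the transient, which is the eigenfunction statement in discrete time.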
Z and discrete-time Fourier transforms
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The Z transform [math]\displaystyle{ H(z) = \mathcal{Z}\{h[n]\} = \sum_{n=-\infty}^\infty h[n] z^{-n} }[/math]
is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids; i.e. exponentials of the form [math]\displaystyle{ e^{j \omega n} }[/math], where [math]\displaystyle{ \omega \in \mathbb{R} }[/math]. These can also be written as [math]\displaystyle{ z^n }[/math] with [math]\displaystyle{ z = e^{j \omega} }[/math]. The discrete-time Fourier transform (DTFT) [math]\displaystyle{ H(e^{j \omega}) = \mathcal{F}\{h[n]\} }[/math] gives the eigenvalues of pure sinusoids. Both of [math]\displaystyle{ H(z) }[/math] and [math]\displaystyle{ H(e^{j\omega}) }[/math] are called the system function, system response, or transfer function.
Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for n<0. The discrete-time Fourier series may be used for analyzing periodic signals.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is, [math]\displaystyle{ y[n] = (h*x)[n] = \sum_{m=-\infty}^\infty h[n-m] x[m] = \mathcal{Z}^{-1}\{H(z)X(z)\}. }[/math]
Just as with the Laplace transform transfer function in continuoustime system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior.
Examples
 A simple example of an LTI operator is the delay operator [math]\displaystyle{ D\{x[n]\} \mathrel{\stackrel{\text{def}}{=}} x[n-1] }[/math].
 [math]\displaystyle{ D \left( c_1 \cdot x_1[n] + c_2 \cdot x_2[n] \right) = c_1 \cdot x_1[n-1] + c_2\cdot x_2[n-1] = c_1\cdot Dx_1[n] + c_2\cdot Dx_2[n] }[/math] (i.e., it is linear)
 [math]\displaystyle{ D\{x[n-m]\} = x[n-m-1] = x[(n-1)-m] = D\{x\}[n-m] }[/math] (i.e., it is time invariant)
The Z transform of the delay operator is a simple multiplication by z^{−1}. That is,
[math]\displaystyle{ \mathcal{Z}\left\{Dx[n]\right\} = z^{-1} X(z). }[/math]
 Another simple LTI operator is the averaging operator [math]\displaystyle{ \mathcal{A}\left\{x[n]\right\} \mathrel{\stackrel{\text{def}}{=}} \sum_{k=n-a}^{n+a} x[k]. }[/math] Because of the linearity of sums, [math]\displaystyle{ \begin{align} \mathcal{A}\left\{c_1 x_1[n] + c_2 x_2[n] \right\} &= \sum_{k=n-a}^{n+a} \left( c_1 x_1[k] + c_2 x_2[k] \right)\\ &= c_1 \sum_{k=n-a}^{n+a} x_1[k] + c_2 \sum_{k=n-a}^{n+a} x_2[k]\\ &= c_1 \mathcal{A}\left\{x_1[n] \right\} + c_2 \mathcal{A}\left\{x_2[n] \right\}, \end{align} }[/math] and so it is linear. Because [math]\displaystyle{ \begin{align} \mathcal{A}\left\{x[n-m]\right\} &= \sum_{k=n-a}^{n+a} x[k-m]\\ &= \sum_{k'=(n-m)-a}^{(n-m)+a} x[k']\\ &= \mathcal{A}\left\{x\right\}[n-m], \end{align} }[/math] it is also time invariant.
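Both example operators can be checked numerically. A sketch, representing signals as Python functions of the index n (the sample signals `x1` and `x2` are arbitrary choices for the test):

```python
def delay(x):
    """D{x}[n] = x[n-1]."""
    return lambda n: x(n - 1)

def average(x, a):
    """A{x}[n] = sum_{k=n-a}^{n+a} x[k]."""
    return lambda n: sum(x(k) for k in range(n - a, n + a + 1))

x1 = lambda n: n**2      # arbitrary test signals
x2 = lambda n: 3*n + 1

for n in range(-3, 4):
    # Linearity of D: D{2 x1 + 5 x2} == 2 D{x1} + 5 D{x2}
    lhs = delay(lambda k: 2*x1(k) + 5*x2(k))(n)
    rhs = 2*delay(x1)(n) + 5*delay(x2)(n)
    assert lhs == rhs
    # Time invariance of A: shifting the input by m shifts the output by m.
    m = 2
    assert average(lambda k: x1(k - m), 1)(n) == average(x1, 1)(n - m)
```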
Important system properties
The input-output characteristics of a discrete-time LTI system are completely described by its impulse response [math]\displaystyle{ h[n] }[/math]. Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer function is stable.
Causality
A discrete-time LTI system is causal if the current value of the output depends only on the current and past values of the input.^{[5]} A necessary and sufficient condition for causality is [math]\displaystyle{ h[n] = 0 \ \forall n \lt 0, }[/math] where [math]\displaystyle{ h[n] }[/math] is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique^{[dubious – discuss]}. When a region of convergence is specified, then causality can be determined.
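The condition h[n] = 0 for all n < 0 translates directly into code for a finite-support impulse response. A minimal sketch, representing the response as a dict from index n to h[n] (assumed zero where absent):

```python
def is_causal(h):
    """True iff h[n] = 0 for every n < 0 (finite-support h given as {n: value})."""
    return all(value == 0 for n, value in h.items() if n < 0)

assert is_causal({0: 1.0, 1: 0.5, 2: 0.25})   # support in n >= 0: causal
assert not is_causal({-1: 0.3, 0: 1.0})        # h[-1] != 0: non-causal
```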
Stability
A system is bounded-input, bounded-output stable (BIBO stable) if, for every bounded input, the output is bounded. Mathematically, if [math]\displaystyle{ \|x[n]\|_{\infty} \lt \infty }[/math]
implies that [math]\displaystyle{ \|y[n]\|_{\infty} \lt \infty }[/math]
(that is, if bounded input implies bounded output, in the sense that the maximum absolute values of [math]\displaystyle{ x[n] }[/math] and [math]\displaystyle{ y[n] }[/math] are finite), then the system is stable. A necessary and sufficient condition is that [math]\displaystyle{ h[n] }[/math], the impulse response, satisfies [math]\displaystyle{ \|h[n]\|_1 \mathrel{\stackrel{\text{def}}{=}} \sum_{n=-\infty}^\infty |h[n]| \lt \infty. }[/math]
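The absolute-summability condition can be checked for a concrete impulse response. A minimal sketch, assuming the causal geometric response h[n] = aⁿ for n ≥ 0 with |a| < 1, whose ℓ¹ norm has the closed form 1/(1 − |a|):

```python
def l1_norm(h, terms=10000):
    """Truncated l1 norm: sum of |h[n]| over n = 0 .. terms-1 (causal h assumed)."""
    return sum(abs(h(n)) for n in range(terms))

a = 0.5
h = lambda n: a**n   # example impulse response; |a| < 1, so the system is BIBO stable

# The truncated sum matches the closed form 1/(1 - |a|) = 2 to high accuracy.
assert abs(l1_norm(h) - 1 / (1 - abs(a))) < 1e-9
```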
In the frequency domain, the region of convergence must contain the unit circle (i.e., the locus satisfying [math]\displaystyle{ |z| = 1 }[/math] for complex z).
Notes
See also
 Circulant matrix
 Frequency response
 Impulse response
 System analysis
 Green function
 Signalflow graph
References
 Phillips, C. L.; Parr, J. M.; Riskin, E. A. (2007). Signals, Systems, and Transforms. Prentice Hall. ISBN 9780130412072.
 Hespanha, J. P. (2009). Linear System Theory. Princeton University Press. ISBN 9780691140216.
 Crutchfield, Steve (October 12, 2010), "The Joy of Convolution", Johns Hopkins University, http://www.jhu.edu/signals/convolve/index.html, retrieved November 21, 2010
 Vaidyanathan, P. P.; Chen, T. (May 1995). "Role of anticausal inverses in multirate filter banks — Part I: system theoretic fundamentals". IEEE Trans. Signal Process. 43 (5): 1090. doi:10.1109/78.382395. Bibcode: 1995ITSP...43.1090V. https://authors.library.caltech.edu/6832/1/VAIieeetsp95b.pdf.
Further reading
 Porat, Boaz (1997). A Course in Digital Signal Processing. New York: John Wiley. ISBN 9780471149613.
External links
 ECE 209: Review of Circuits as LTI Systems – Short primer on the mathematical analysis of (electrical) LTI systems.
 ECE 209: Sources of Phase Shift – Gives an intuitive explanation of the source of phase shift in two common electrical LTI systems.
 JHU 520.214 Signals and Systems course notes. An encapsulated course on LTI system theory. Adequate for self teaching.
 LTI system example: RC lowpass filter. Amplitude and phase response.
Original source: https://en.wikipedia.org/wiki/Linear_time-invariant_system.