Minimum phase
In control theory and signal processing, a linear, time-invariant system is said to be minimum-phase if the system and its inverse are causal and stable.[1][2]
The most general causal LTI transfer function can be uniquely factored into a series of an all-pass and a minimum phase system. The system function is then the product of the two parts, and in the time domain the response of the system is the convolution of the two part responses. The difference between a minimum-phase and a general transfer function is that a minimum-phase system has all of the poles and zeros of its transfer function in the left half of the s-plane representation (in discrete time, respectively, inside the unit circle of the z plane). Since inverting a system function leads to poles turning to zeros and conversely, and poles on the right side (s-plane imaginary line) or outside (z-plane unit circle) of the complex plane lead to unstable systems, only the class of minimum-phase systems is closed under inversion. Intuitively, the minimum-phase part of a general causal system implements its amplitude response with minimal group delay, while its all-pass part corrects its phase response alone to correspond with the original system function.
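The all-pass/minimum-phase factorization can be sketched numerically. In the following illustration (using NumPy; the helper name `minimum_phase_part` is ours, not a library API), the zeros of an FIR system lying outside the unit circle are reflected to their conjugate-reciprocal positions, which yields the minimum-phase part; dividing the original system function by it would give the all-pass part:

```python
import numpy as np

def minimum_phase_part(b):
    """Minimum-phase FIR system with the same magnitude response as b.

    Zeros outside the unit circle are reflected to 1/conj(z); scaling
    the gain by |z| for each reflection keeps |H(e^{jw})| unchanged.
    """
    zeros = np.roots(b)
    gain = b[0]
    new_zeros = []
    for z in zeros:
        if abs(z) > 1:
            gain *= abs(z)                  # compensates the reflection
            new_zeros.append(1 / np.conj(z))
        else:
            new_zeros.append(z)
    return np.real_if_close(gain * np.poly(new_zeros))

# Zeros of 1 + 2.5 z^-1 + z^-2 are -2 and -0.5; reflecting -2 to -0.5
# gives the minimum-phase system 2 + 2 z^-1 + 0.5 z^-2.
print(minimum_phase_part([1, 2.5, 1]))
```

Both systems have the same magnitude response (the check at ω = 0 gives 4.5 in either case), but only the second has all zeros inside the unit circle.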
The analysis in terms of poles and zeros is exact only in the case of transfer functions which can be expressed as ratios of polynomials. In the continuous-time case, such systems translate into networks of conventional, idealized LCR elements. In discrete time, they conveniently translate into approximations thereof, using addition, multiplication, and unit delay. It can be shown that in both cases, system functions of rational form with increasing order can be used to efficiently approximate any other system function; thus even system functions lacking a rational form, and so possessing an infinitude of poles and/or zeros, can in practice be implemented as efficiently as any other.
In the context of causal, stable systems, we would in theory be free to choose whether the zeros of the system function are outside of the stable range (to the right in the s-plane, or outside the unit circle in the z-plane) if closure under inversion were not an issue. However, inversion is of great practical importance, just as theoretically perfect factorizations are in their own right. (Cf. the spectral symmetric/antisymmetric decomposition as another important example, leading e.g. to Hilbert transform techniques.) Many physical systems also naturally tend towards minimum-phase response, and sometimes have to be inverted using other physical systems obeying the same constraint.
Insight is given below as to why this system is called minimum-phase, and why the basic idea applies even when the system function cannot be cast into a rational form that could be implemented.
Inverse system
A system [math]\displaystyle{ \mathbb{H} }[/math] is invertible if we can uniquely determine its input from its output. I.e., we can find a system [math]\displaystyle{ \mathbb{H}_\text{inv} }[/math] such that if we apply [math]\displaystyle{ \mathbb{H} }[/math] followed by [math]\displaystyle{ \mathbb{H}_\text{inv} }[/math], we obtain the identity system [math]\displaystyle{ \mathbb{I} }[/math]. (See Inverse matrix for a finite-dimensional analog). That is, [math]\displaystyle{ \mathbb{H}_\text{inv} \mathbb{H} = \mathbb{I}. }[/math]
Suppose that [math]\displaystyle{ \tilde{x} }[/math] is input to system [math]\displaystyle{ \mathbb{H} }[/math] and gives output [math]\displaystyle{ \tilde{y} }[/math]: [math]\displaystyle{ \mathbb{H} \tilde{x} = \tilde{y}. }[/math]
Applying the inverse system [math]\displaystyle{ \mathbb{H}_\text{inv} }[/math] to [math]\displaystyle{ \tilde{y} }[/math] gives [math]\displaystyle{ \mathbb{H}_\text{inv} \tilde{y} = \mathbb{H}_\text{inv} \mathbb{H} \tilde{x} = \mathbb{I} \tilde{x} = \tilde{x}. }[/math]
So we see that the inverse system [math]\displaystyle{ \mathbb{H}_\text{inv} }[/math] allows us to determine uniquely the input [math]\displaystyle{ \tilde{x} }[/math] from the output [math]\displaystyle{ \tilde{y} }[/math].
Discrete-time example
Suppose that the system [math]\displaystyle{ \mathbb{H} }[/math] is a discrete-time, linear, time-invariant (LTI) system described by the impulse response [math]\displaystyle{ h(n) }[/math] for n in Z. Additionally, suppose [math]\displaystyle{ \mathbb{H}_\text{inv} }[/math] has impulse response [math]\displaystyle{ h_\text{inv}(n) }[/math]. The cascade of two LTI systems is a convolution. In this case, the above relation is the following: [math]\displaystyle{ (h_\text{inv} * h)(n) = (h * h_\text{inv})(n) = \sum_{k=-\infty}^\infty h(k) h_\text{inv}(n - k) = \delta(n), }[/math] where [math]\displaystyle{ \delta(n) }[/math] is the Kronecker delta, or the identity system in the discrete-time case. (Changing the order of [math]\displaystyle{ h_\text{inv} }[/math] and [math]\displaystyle{ h }[/math] is allowed because of commutativity of the convolution operation.) Note that this inverse system [math]\displaystyle{ \mathbb{H}_\text{inv} }[/math] need not be unique.
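As a numerical sketch (using NumPy; the particular filter is our own example), take [math]\displaystyle{ h = [1, 0.5] }[/math], a single zero at −0.5 inside the unit circle. Its causal, stable inverse has impulse response [math]\displaystyle{ h_\text{inv}(n) = (-0.5)^n }[/math] for n ≥ 0, and convolving the two recovers the Kronecker delta:

```python
import numpy as np

# h has a single zero at z = -0.5, inside the unit circle, so its
# causal stable inverse is the IIR response h_inv(n) = (-0.5)^n.
h = np.array([1.0, 0.5])
N = 50
h_inv = (-0.5) ** np.arange(N)          # truncated inverse impulse response

# The convolution h * h_inv should approximate the Kronecker delta.
conv = np.convolve(h, h_inv)[:N]
delta = np.zeros(N)
delta[0] = 1.0
print(np.max(np.abs(conv - delta)))     # only truncation/rounding error
```

Each interior term cancels exactly, since h_inv(n) + 0.5 h_inv(n − 1) = (−0.5)^n − (−0.5)^n = 0; only the tail beyond the truncation is lost.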
Minimum-phase system
When we impose the constraints of causality and stability, the inverse system is unique; and the system [math]\displaystyle{ \mathbb{H} }[/math] and its inverse [math]\displaystyle{ \mathbb{H}_\text{inv} }[/math] are called minimum-phase. The causality and stability constraints in the discrete-time case are the following (for time-invariant systems where h is the system's impulse response, and [math]\displaystyle{ \|{\cdot}\|_1 }[/math] is the ℓ1 norm):
Causality
[math]\displaystyle{ h(n) = 0\ \forall n \lt 0 }[/math] and [math]\displaystyle{ h_\text{inv}(n) = 0\ \forall n \lt 0. }[/math]
Stability
[math]\displaystyle{ \sum_{n=-\infty}^\infty |h(n)| = \|h\|_1 \lt \infty }[/math] and [math]\displaystyle{ \sum_{n=-\infty}^\infty |h_\text{inv}(n)| = \|h_\text{inv}\|_1 \lt \infty. }[/math]
See the article on stability for the analogous conditions for the continuous-time case.
Frequency analysis
Discrete-time frequency analysis
Performing frequency analysis for the discrete-time case will provide some insight. The time-domain equation is [math]\displaystyle{ (h * h_\text{inv})(n) = \delta(n). }[/math]
Applying the Z-transform gives the following relation in the z domain: [math]\displaystyle{ H(z) H_\text{inv}(z) = 1. }[/math]
From this relation, we realize that [math]\displaystyle{ H_\text{inv}(z) = \frac{1}{H(z)}. }[/math]
For simplicity, we consider only the case of a rational transfer function H(z). Suppose [math]\displaystyle{ H(z) = \frac{A(z)}{D(z)}, }[/math] where A(z) and D(z) are polynomials in z. Causality and stability imply that the poles – the roots of D(z) – must be strictly inside the unit circle (see stability). We also know that [math]\displaystyle{ H_\text{inv}(z) = \frac{D(z)}{A(z)}, }[/math] so causality and stability for [math]\displaystyle{ H_\text{inv}(z) }[/math] imply that its poles – the roots of A(z) – must be strictly inside the unit circle. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the unit circle.
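This root-location test is straightforward to apply numerically. A sketch using NumPy (the helper is ours; coefficients are given in descending powers, i.e. the usual [math]\displaystyle{ z^{-1} }[/math] form read as a polynomial):

```python
import numpy as np

def is_minimum_phase(a, d):
    """True if all roots of numerator a and denominator d lie strictly
    inside the unit circle (discrete-time minimum-phase test)."""
    zeros, poles = np.roots(a), np.roots(d)
    return bool(np.all(np.abs(zeros) < 1) and np.all(np.abs(poles) < 1))

# H(z) = (1 + 0.5 z^-1) / (1 - 0.3 z^-1): zero at -0.5, pole at 0.3.
print(is_minimum_phase([1, 0.5], [1, -0.3]))   # True
# Moving the zero to -2, outside the unit circle, breaks the property.
print(is_minimum_phase([1, 2.0], [1, -0.3]))   # False
```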
Continuous-time frequency analysis
Analysis for the continuous-time case proceeds in a similar manner, except that we use the Laplace transform for frequency analysis. The time-domain equation is [math]\displaystyle{ (h * h_\text{inv})(t) = \delta(t), }[/math] where [math]\displaystyle{ \delta(t) }[/math] is the Dirac delta function – the identity operator in the continuous-time case because of the sifting property with any signal x(t): [math]\displaystyle{ (\delta * x)(t) = \int_{-\infty}^\infty \delta(t - \tau) x(\tau) \,d\tau = x(t). }[/math]
Applying the Laplace transform gives the following relation in the s-plane: [math]\displaystyle{ H(s) H_\text{inv}(s) = 1, }[/math] from which we realize that [math]\displaystyle{ H_\text{inv}(s) = \frac{1}{H(s)}. }[/math]
Again, for simplicity, we consider only the case of a rational transfer function H(s). Suppose [math]\displaystyle{ H(s) = \frac{A(s)}{D(s)}, }[/math] where A(s) and D(s) are polynomials in s. Causality and stability imply that the poles – the roots of D(s) – must be strictly inside the left-half s-plane (see stability). We also know that [math]\displaystyle{ H_\text{inv}(s) = \frac{D(s)}{A(s)}, }[/math] so causality and stability for [math]\displaystyle{ H_\text{inv}(s) }[/math] imply that its poles – the roots of A(s) – must be strictly inside the left-half s-plane. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the left-half s-plane.
Relationship of magnitude response to phase response
A minimum-phase system, whether discrete-time or continuous-time, has an additional useful property that the natural logarithm of the magnitude of the frequency response (the "gain" measured in nepers, which is proportional to dB) is related to the phase angle of the frequency response (measured in radians) by the Hilbert transform. That is, in the continuous-time case, let [math]\displaystyle{ H(j\omega)\ \stackrel{\text{def}}{=}\ H(s)\Big|_{s=j\omega} }[/math] be the complex frequency response of system H(s). Then, only for a minimum-phase system, the phase response of H(s) is related to the gain by [math]\displaystyle{ \arg[H(j\omega)] = -\mathcal{H}\big\{\log\big(|H(j\omega)|\big)\big\}, }[/math] where [math]\displaystyle{ \mathcal{H} }[/math] denotes the Hilbert transform, and, inversely, [math]\displaystyle{ \log\big(|H(j\omega)|\big) = \log\big(|H(j\infty)|\big) + \mathcal{H}\big\{\arg[H(j\omega)]\big\}. }[/math]
Stated more compactly, let [math]\displaystyle{ H(j\omega) = |H(j\omega)| e^{j\arg[H(j\omega)]}\ \stackrel{\text{def}}{=}\ e^{\alpha(\omega)} e^{j\phi(\omega)} = e^{\alpha(\omega) + j\phi(\omega)}, }[/math] where [math]\displaystyle{ \alpha(\omega) }[/math] and [math]\displaystyle{ \phi(\omega) }[/math] are real functions of a real variable. Then [math]\displaystyle{ \phi(\omega) = -\mathcal{H}\{\alpha(\omega)\} }[/math] and [math]\displaystyle{ \alpha(\omega) = \alpha(\infty) + \mathcal{H}\{\phi(\omega)\}. }[/math]
The Hilbert transform operator is defined to be [math]\displaystyle{ \mathcal{H}\{x(t)\}\ \stackrel{\text{def}}{=}\ \hat{x}(t) = \frac{1}{\pi} \int_{-\infty}^\infty \frac{x(\tau)}{t - \tau} \,d\tau. }[/math]
An equivalent corresponding relationship is also true for discrete-time minimum-phase systems.
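The discrete-time relation can be verified numerically via the real cepstrum, the standard computational form of this Hilbert-transform relationship: the phase of a minimum-phase filter is recovered from log|H| alone. A sketch using NumPy (the filter and FFT length are our own choices):

```python
import numpy as np

b = np.array([2.0, 2.0, 0.5])   # double zero at -0.5: minimum phase
N = 4096
H = np.fft.fft(b, N)
log_mag = np.log(np.abs(H))     # |H| never vanishes here, so log is safe

# Real cepstrum of log|H| (an even sequence, so the ifft is real).
c = np.fft.ifft(log_mag).real

# Fold the cepstrum onto n >= 0; the FFT of the folded sequence is the
# complex log of the minimum-phase system: log|H| + j*arg H.
c_fold = np.concatenate(([c[0]], 2 * c[1:N // 2], [c[N // 2]],
                         np.zeros(N // 2 - 1)))
H_rec = np.exp(np.fft.fft(c_fold))

phase_rec = np.angle(H_rec)
print(np.max(np.abs(phase_rec - np.angle(H))))  # small: phases agree
```

Because b is already minimum-phase, the phase reconstructed from the magnitude alone matches the true phase up to cepstral aliasing, which is negligible at this FFT length.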
Minimum phase in the time domain
For all causal and stable systems that have the same magnitude response, the minimum-phase system has its energy concentrated near the start of the impulse response; that is, it minimizes the following function, which we can think of as the delay of energy in the impulse response: [math]\displaystyle{ \sum_{n=m}^\infty |h(n)|^2 \quad \forall m \in \mathbb{Z}^+. }[/math]
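A small numerical illustration (using NumPy; the filter pair is our own example): h = [2, 1] and h = [1, 2] share the same magnitude response, since their zeros −0.5 and −2 are reflections of each other through the unit circle, but the minimum-phase one accumulates its energy earlier:

```python
import numpy as np

h_min = np.array([2.0, 1.0])   # zero at -0.5: minimum phase
h_max = np.array([1.0, 2.0])   # zero at -2:   maximum phase

# Identical magnitude responses on the unit circle.
w = np.linspace(0, np.pi, 512)
Hm = np.abs(h_min[0] + h_min[1] * np.exp(-1j * w))
Hx = np.abs(h_max[0] + h_max[1] * np.exp(-1j * w))
print(np.allclose(Hm, Hx))          # True

# Running energy sum_{n=0}^{m} |h(n)|^2 (the complement of the tail
# sum above): larger at every m for the minimum-phase system.
print(np.cumsum(h_min**2))          # [4., 5.]
print(np.cumsum(h_max**2))          # [1., 5.]
```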
Minimum phase as minimum group delay
For all causal and stable systems that have the same magnitude response, the minimum phase system has the minimum group delay. The following proof illustrates this idea of minimum group delay.
Suppose we consider one zero [math]\displaystyle{ a }[/math] of the transfer function [math]\displaystyle{ H(z) }[/math]. Let's place this zero [math]\displaystyle{ a }[/math] inside the unit circle ([math]\displaystyle{ \left| a \right| \lt 1 }[/math]) and see how the group delay is affected. [math]\displaystyle{ a = \left| a \right| e^{i \theta_a} \, \text{ where } \, \theta_a = \operatorname{Arg}(a) }[/math]
Since the zero [math]\displaystyle{ a }[/math] contributes the factor [math]\displaystyle{ 1 - a z^{-1} }[/math] to the transfer function, the phase contributed by this term is the following. [math]\displaystyle{ \begin{align} \phi_a \left(\omega \right) &= \operatorname{Arg} \left(1 - a e^{-i \omega} \right)\\ &= \operatorname{Arg} \left(1 - \left| a \right| e^{i \theta_a} e^{-i \omega} \right)\\ &= \operatorname{Arg} \left(1 - \left| a \right| e^{-i (\omega - \theta_a)} \right)\\ &= \operatorname{Arg} \left( \left\{ 1 - \left| a \right| \cos( \omega - \theta_a ) \right\} + i \left\{ \left| a \right| \sin( \omega - \theta_a ) \right\}\right)\\ &= \operatorname{Arg} \left( \left\{ \left| a \right|^{-1} - \cos( \omega - \theta_a ) \right\} + i \left\{ \sin( \omega - \theta_a ) \right\} \right) \end{align} }[/math]
[math]\displaystyle{ \phi_a (\omega) }[/math] contributes the following to the group delay.
[math]\displaystyle{ \begin{align} -\frac{d \phi_a (\omega)}{d \omega} &= \frac{ \sin^2( \omega - \theta_a ) + \cos^2( \omega - \theta_a ) - \left| a \right|^{-1} \cos( \omega - \theta_a ) }{ \sin^2( \omega - \theta_a ) + \cos^2( \omega - \theta_a ) + \left| a \right|^{-2} - 2 \left| a \right|^{-1} \cos( \omega - \theta_a ) } \\ &= \frac{ \left| a \right| - \cos( \omega - \theta_a ) }{ \left| a \right| + \left| a \right|^{-1} - 2 \cos( \omega - \theta_a ) } \end{align} }[/math]
The denominator and [math]\displaystyle{ \theta_a }[/math] are invariant to reflecting the zero [math]\displaystyle{ a }[/math] outside of the unit circle, i.e., replacing [math]\displaystyle{ a }[/math] with [math]\displaystyle{ (a^{-1})^{*} }[/math]. However, reflecting [math]\displaystyle{ a }[/math] outside of the unit circle increases [math]\displaystyle{ \left| a \right| }[/math] in the numerator. Thus, having [math]\displaystyle{ a }[/math] inside the unit circle minimizes the group delay contributed by the factor [math]\displaystyle{ 1 - a z^{-1} }[/math]. We can extend this result to the general case of more than one zero since the phases of the multiplicative factors of the form [math]\displaystyle{ 1 - a_i z^{-1} }[/math] are additive. I.e., for a transfer function with [math]\displaystyle{ N }[/math] zeros, [math]\displaystyle{ \operatorname{Arg}\left( \prod_{i = 1}^N \left( 1 - a_i z^{-1} \right) \right) = \sum_{i = 1}^N \operatorname{Arg}\left( 1 - a_i z^{-1} \right) }[/math]
So, a minimum phase system with all zeros inside the unit circle minimizes the group delay since the group delay of each individual zero is minimized.
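The closed-form group delay derived above can be checked numerically (a sketch using NumPy; the particular zero is our own choice):

```python
import numpy as np

def zero_group_delay(a, w):
    """Group delay contributed by the factor (1 - a z^-1), from the
    closed form (|a| - cos(w - th)) / (|a| + 1/|a| - 2 cos(w - th))."""
    r, th = np.abs(a), np.angle(a)
    return (r - np.cos(w - th)) / (r + 1 / r - 2 * np.cos(w - th))

w = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
a_in = 0.5 * np.exp(1j * 0.3)    # zero inside the unit circle
a_out = 1 / np.conj(a_in)        # its reflection outside

# Reflection leaves the denominator and theta_a unchanged but enlarges
# |a| in the numerator, so the reflected zero always adds more delay.
print(np.all(zero_group_delay(a_out, w) >= zero_group_delay(a_in, w)))  # True
```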
Non-minimum phase
Systems that are causal and stable but whose inverses are causal and unstable are known as non-minimum-phase systems. A given non-minimum-phase system has a greater phase contribution than the minimum-phase system with the equivalent magnitude response.
Maximum phase
A maximum-phase system is the opposite of a minimum-phase system. A causal and stable LTI system is a maximum-phase system if its inverse is causal and unstable. That is,
- The zeros of the discrete-time system are outside the unit circle.
- The zeros of the continuous-time system are in the right half of the complex plane.
Such a system is called a maximum-phase system because it has the maximum group delay of the set of systems that have the same magnitude response. In this set of equal-magnitude-response systems, the maximum phase system will have maximum energy delay.
For example, the two continuous-time LTI systems described by the transfer functions [math]\displaystyle{ \frac{s + 10}{s + 5} \qquad \text{and} \qquad \frac{s - 10}{s + 5} }[/math]
have equivalent magnitude responses; however, the second system contributes a much larger phase shift. Hence, in this set, the second system is the maximum-phase system and the first system is the minimum-phase system. Such non-minimum-phase systems raise well-known stability concerns in control; one proposed remedy is moving the right-half-plane zeros to the left half-plane using the PFCD method.[3]
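This pair is easy to verify numerically (a sketch using NumPy): evaluating both transfer functions on the jω axis shows identical magnitudes but very different accumulated phase.

```python
import numpy as np

w = np.linspace(-100, 100, 2001)
s = 1j * w
H_min = (s + 10) / (s + 5)   # zero at s = -10 (left half-plane)
H_max = (s - 10) / (s + 5)   # zero at s = +10 (right half-plane)

# |jw + 10| = |jw - 10| = sqrt(w^2 + 100), so magnitudes coincide.
print(np.allclose(np.abs(H_min), np.abs(H_max)))   # True

# The non-minimum-phase system sweeps through far more phase.
span_min = np.ptp(np.unwrap(np.angle(H_min)))
span_max = np.ptp(np.unwrap(np.angle(H_max)))
print(span_min < span_max)                         # True
```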
Mixed phase
A mixed-phase system has some of its zeros inside the unit circle and others outside it. Its group delay is therefore neither minimum nor maximum, but lies between the group delays of the equivalent minimum-phase and maximum-phase systems.
For example, the continuous-time LTI system described by the transfer function [math]\displaystyle{ \frac{ (s + 1)(s - 5)(s + 10) }{ (s+2)(s+4)(s+6) } }[/math] is stable and causal; however, it has zeros on both the left- and right-hand sides of the complex plane. Hence, it is a mixed-phase system. To control plants with such transfer functions, methods such as the internal model controller (IMC),[4] the generalized Smith predictor (GSP)[5] and parallel feedforward control with derivative (PFCD)[6] have been proposed.
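Checking the zero locations of this example numerically (a sketch using NumPy):

```python
import numpy as np

# Expand the factored numerator (s + 1)(s - 5)(s + 10) and find its roots.
num = np.poly([-1, 5, -10])
zeros = np.roots(num)
print(sorted(zeros.real))            # -10, -1, 5 (up to rounding)

# Zeros in both half-planes -> mixed phase.
lhp = zeros.real < 0
print(lhp.any() and not lhp.all())   # True
```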
Linear phase
A linear-phase system has constant group delay. Non-trivial linear-phase or nearly linear-phase systems are also mixed-phase.
See also
- All-pass filter – A special non-minimum-phase case.
- Kramers–Kronig relation – Minimum phase system in physics
References
- ↑ Hassibi, Babak; Kailath, Thomas; Sayed, Ali H. (2000). Linear estimation. Englewood Cliffs, N.J: Prentice Hall. pp. 193. ISBN 0-13-022464-2.
- ↑ J. O. Smith III, Introduction to Digital Filters with Audio Applications (September 2007 edition).
- ↑ Noury, K. (2019). "Analytical Statistical Study of Linear Parallel Feedforward Compensators for Nonminimum-Phase Systems". Analytical Statistical Study of Linear Parallel Feedforward Compensators for Nonminimum Phase Systems. doi:10.1115/DSCC2019-9126. ISBN 978-0-7918-5914-8.
- ↑ Morari, Manfred (2002). Robust process control. PTR Prentice Hall. ISBN 0137821530. OCLC 263718708.
- ↑ Ramanathan, S.; Curl, R. L.; Kravaris, C. (1989). "Dynamics and control of quasirational systems" (in en). AIChE Journal 35 (6): 1017–1028. doi:10.1002/aic.690350615. ISSN 1547-5905.
- ↑ Noury, K. (2019). "Class of Stabilizing Parallel Feedforward Compensators for Nonminimum-Phase Systems". Class of Stabilizing Parallel Feedforward Compensators for Nonminimum Phase Systems. doi:10.1115/DSCC2019-9240. ISBN 978-0-7918-5914-8.
Further reading
Original source: https://en.wikipedia.org/wiki/Minimum phase.