Separation principle

In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved by designing an optimal observer for the state of the system, which feeds into an optimal deterministic controller for the system. Thus the problem can be broken into two separate parts, which facilitates the design. The first instance of such a principle is in the setting of deterministic linear systems, namely that if a stable observer and a stable state feedback are designed for a linear time-invariant system (LTI system hereafter), then the combined observer and feedback is stable. The separation principle does not hold in general for nonlinear systems.

Another instance of the separation principle arises in the setting of linear stochastic systems, namely that state estimation (possibly nonlinear), together with an optimal state-feedback controller designed to minimize a quadratic cost, is optimal for the stochastic control problem with output measurements. When process and observation noise are Gaussian, the optimal solution separates into a Kalman filter and a linear-quadratic regulator. This is known as linear-quadratic-Gaussian control. More generally, under suitable conditions and when the noise is a martingale (with possible jumps), a separation principle again applies and is known as the separation principle in stochastic control.[1][2][3][4][5][6]
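
To make the decoupling concrete, the following Python sketch computes the linear-quadratic regulator gain from the control Riccati equation and the steady-state Kalman gain from the filter Riccati equation; the two computations share nothing beyond the plant model. The plant matrices and the cost and noise weights below are illustrative assumptions, not taken from this article.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Illustrative plant:  x' = A x + B u + w,   y = C x + v
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    B = np.array([[0.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])

    Q = np.eye(2)           # state-cost weight (LQR)
    R = np.array([[1.0]])   # input-cost weight (LQR)
    W = 0.1 * np.eye(2)     # process-noise covariance (Kalman filter)
    V = np.array([[0.01]])  # measurement-noise covariance (Kalman filter)

    # Control Riccati equation -> LQR gain K (does not involve W, V)
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    # Filter Riccati equation -> steady-state Kalman gain L (does not involve Q, R)
    S = solve_continuous_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(V)

    print("LQR gain K:", K)
    print("Kalman gain L:", L.ravel())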

The separation principle also holds for high-gain observers used for state estimation of a class of nonlinear systems[7] and for the control of quantum systems.

Proof of separation principle for deterministic LTI systems

Consider a deterministic LTI system:

[math]\displaystyle{ \begin{align} \dot{x}(t) & = A x(t) + B u(t) \\ y(t) & = C x(t) \end{align} }[/math]

where

[math]\displaystyle{ u(t) }[/math] represents the input signal,
[math]\displaystyle{ y(t) }[/math] represents the output signal, and
[math]\displaystyle{ x(t) }[/math] represents the internal state of the system.

We can design an observer, with observer gain matrix [math]\displaystyle{ L }[/math], of the form

[math]\displaystyle{ \dot{\hat{x}} = ( A - L C ) \hat{x} + B u + L y \, }[/math]

and a state-feedback law, with gain matrix [math]\displaystyle{ K }[/math],

[math]\displaystyle{ u(t) = - K \hat{x} \, . }[/math]
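
In practice the gains [math]\displaystyle{ K }[/math] and [math]\displaystyle{ L }[/math] can be chosen by entirely separate procedures, for example by pole placement. A minimal Python sketch, assuming a hypothetical double-integrator plant (the matrices and pole locations are illustrative, not from this article):

    import numpy as np
    from scipy.signal import place_poles

    # Hypothetical double-integrator plant:  x' = A x + B u,  y = C x
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])

    # Feedback gain K: place the eigenvalues of A - B K
    K = place_poles(A, B, [-1.0, -2.0]).gain_matrix

    # Observer gain L: place the eigenvalues of A - L C via the dual problem
    L = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T

    print("K =", K)
    print("L =", L.ravel())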

Define the error e:

[math]\displaystyle{ e = x - \hat{x} \, . }[/math]

Then, subtracting the observer dynamics from the plant dynamics and using [math]\displaystyle{ y = C x }[/math], the estimation error evolves autonomously,

[math]\displaystyle{ \dot{e} = \dot{x} - \dot{\hat{x}} = A x + B u - \left( (A - L C) \hat{x} + B u + L C x \right) = (A - L C) e \, , }[/math]

and the input can be written as
[math]\displaystyle{ u(t) = - K ( x - e ) \, . }[/math]

Now we can write the closed-loop dynamics as

[math]\displaystyle{ \begin{bmatrix} \dot{x} \\ \dot{e} \\ \end{bmatrix} = \begin{bmatrix} A - B K & BK \\ 0 & A - L C \\ \end{bmatrix} \begin{bmatrix} x \\ e \\ \end{bmatrix}. }[/math]

Since this matrix is block upper triangular, its eigenvalues are those of A − BK together with those of A − LC.[8] Thus the state feedback and the observer can be designed independently: the closed loop is stable whenever K and L are chosen so that A − BK and A − LC both have all eigenvalues in the open left half-plane.
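
A quick numerical check of this statement, reusing the assumed double-integrator example and gains from the sketch above: the spectrum of the block upper-triangular closed-loop matrix coincides with the union of the spectra assigned separately to A − BK and A − LC.

    import numpy as np
    from scipy.signal import place_poles

    # Same hypothetical double-integrator plant and gains as in the sketch above
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    C = np.array([[1.0, 0.0]])
    K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
    L = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T

    # Closed-loop matrix in (x, e) coordinates
    Acl = np.block([[A - B @ K,        B @ K],
                    [np.zeros((2, 2)), A - L @ C]])

    eig_cl  = np.sort_complex(np.linalg.eigvals(Acl))
    eig_sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                              np.linalg.eigvals(A - L @ C)]))

    print(eig_cl)                         # approx. [-5, -4, -2, -1]
    print(np.allclose(eig_cl, eig_sep))   # True: union of the two spectra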

References

  1. Karl Johan Astrom (1970). Introduction to Stochastic Control Theory. 58. Academic Press. ISBN 0-486-44531-3. 
  2. Tyrone Duncan and Pravin Varaiya (1971). "On the solutions of a stochastic control system". SIAM J. Control 9 (3): 354–371. doi:10.1137/0309026. 
  3. M.H.A. Davis and P. Varaiya (1972). "Information states for stochastic systems". J. Math. Anal. Applications 37: 384–402. doi:10.1016/0022-247X(72)90281-8. 
  4. Anders Lindquist (1973). "On Feedback Control of Linear Stochastic Systems". SIAM Journal on Control 11 (2): 323–343. doi:10.1137/0311025. 
  5. A. Bensoussan (1992). Stochastic Control of Partially Observable Systems. Cambridge University Press. 
  6. Tryphon T. Georgiou and Anders Lindquist (2013). "The Separation Principle in Stochastic Control, Redux". IEEE Transactions on Automatic Control 58 (10): 2481–2494. doi:10.1109/TAC.2013.2259207. 
  7. Atassi, A.N.; Khalil, H.K. (1998). "A separation principle for the control of a class of nonlinear systems". Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171). 1. IEEE. pp. 855–860. doi:10.1109/cdc.1998.760800. ISBN 0-7803-4394-8.
  8. A proof can be found on math.stackexchange.
  • Brezinski, Claude. Computational Aspects of Linear Control (Numerical Methods and Algorithms). Springer, 2002.