Separation principle in stochastic control


The separation principle is one of the fundamental principles of stochastic control theory, which states that the problems of optimal control and state estimation can be decoupled under certain conditions. In its most basic formulation it deals with a linear stochastic system

[math]\displaystyle{ \begin{align} dx & =A(t)x(t)\,dt+B_1(t)u(t)\,dt+B_2(t)\,dw \\ dy & =C(t)x(t)\,dt +D(t)\,dw \end{align} }[/math]

with a state process [math]\displaystyle{ x }[/math], an output process [math]\displaystyle{ y }[/math] and a control [math]\displaystyle{ u }[/math], where [math]\displaystyle{ w }[/math] is a vector-valued Wiener process, [math]\displaystyle{ x(0) }[/math] is a zero-mean Gaussian random vector independent of [math]\displaystyle{ w }[/math], [math]\displaystyle{ y(0)=0 }[/math], and [math]\displaystyle{ A }[/math], [math]\displaystyle{ B_1 }[/math], [math]\displaystyle{ B_2 }[/math], [math]\displaystyle{ C }[/math], [math]\displaystyle{ D }[/math] are matrix-valued functions which generally are taken to be continuous of bounded variation. Moreover, [math]\displaystyle{ DD' }[/math] is nonsingular on some interval [math]\displaystyle{ [0,T] }[/math]. The problem is to design an output feedback law [math]\displaystyle{ \pi:\, y \mapsto u }[/math] which maps the observed process [math]\displaystyle{ y }[/math] to the control input [math]\displaystyle{ u }[/math] in a nonanticipatory manner so as to minimize the functional

[math]\displaystyle{ J(u) = \mathbb{E}\left\{ \int_0^T x(t)'Q(t)x(t)\,dt+\int_0^Tu(t)'R(t)u(t)\,dt +x(T)'Sx(T)\right\}, }[/math]

where [math]\displaystyle{ \mathbb{E} }[/math] denotes expected value, prime ([math]\displaystyle{ ' }[/math]) denotes transpose, and [math]\displaystyle{ Q }[/math] and [math]\displaystyle{ R }[/math] are continuous matrix functions of bounded variation; [math]\displaystyle{ Q(t) }[/math] is positive semi-definite and [math]\displaystyle{ R(t) }[/math] is positive definite for all [math]\displaystyle{ t }[/math]. Under suitable conditions, which need to be properly stated, the optimal policy [math]\displaystyle{ \pi }[/math] can be chosen in the form

[math]\displaystyle{ u(t)=K(t)\hat x(t), }[/math]

where [math]\displaystyle{ \hat x(t) }[/math] is the linear least-squares estimate of the state vector [math]\displaystyle{ x(t) }[/math] obtained from the Kalman filter

[math]\displaystyle{ d\hat x=A(t)\hat x(t)\,dt+B_1(t)u(t)\,dt +L(t)(dy-C(t)\hat x(t)\,dt),\quad \hat x(0)=0, }[/math]

where [math]\displaystyle{ K }[/math] is the gain of the optimal linear-quadratic regulator obtained by taking [math]\displaystyle{ B_2=D=0 }[/math] and [math]\displaystyle{ x(0) }[/math] deterministic, and where [math]\displaystyle{ L }[/math] is the Kalman gain. There is also a non-Gaussian version of this problem (to be discussed below) where the Wiener process [math]\displaystyle{ w }[/math] is replaced by a more general square-integrable martingale with possible jumps.[1] In this case, the Kalman filter needs to be replaced by a nonlinear filter providing an estimate of the (strict sense) conditional mean

[math]\displaystyle{ \hat{x}(t)= \operatorname E\{ x(t)\mid {\cal Y}_t\}, }[/math]

where

[math]\displaystyle{ {\cal Y}_t:=\sigma\{ y(\tau), \tau\in [0,t]\}, \quad 0\leq t\leq T, }[/math]

is the filtration generated by the output process; i.e., the family of increasing sigma fields representing the data as it is produced.
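
To make this structure concrete, the following minimal numerical sketch simulates a scalar instance of the problem by Euler–Maruyama discretization. All coefficients are hypothetical constants chosen only for illustration, and, for simplicity, the process and observation noises are taken to be independent components of [math]\displaystyle{ w }[/math]. Note how the two gains are computed independently of each other: the control gain from a backward Riccati equation that ignores the noise, the filter gain from a forward Riccati equation that ignores the control.

<syntaxhighlight lang="python">
# Minimal sketch of the separation principle for a scalar LQG problem.
# Hypothetical data: dx = a x dt + b1 u dt + b2 dw1,  dy = c x dt + d dw2,
# cost E[ int_0^T (q x^2 + r u^2) dt + s x(T)^2 ].
import numpy as np

rng = np.random.default_rng(0)
T, dt = 1.0, 1e-3
N = int(T / dt)
a, b1, b2, c, d = -0.5, 1.0, 0.3, 1.0, 0.1
q, r, s = 1.0, 0.1, 1.0

# Control side (noise ignored): backward Riccati equation
# -dP/dt = 2 a P - (b1^2 / r) P^2 + q,  P(T) = s;  K = -(b1 / r) P.
P = np.empty(N + 1); P[N] = s
for k in range(N, 0, -1):
    P[k - 1] = P[k] + (2 * a * P[k] - (b1**2 / r) * P[k]**2 + q) * dt
K = -(b1 / r) * P

# Filter side (control ignored): forward Riccati equation
# dS/dt = 2 a S + b2^2 - (c^2 / d^2) S^2,  S(0) = var x(0);  L = c S / d^2.
S = np.empty(N + 1); S[0] = 1.0
for k in range(N):
    S[k + 1] = S[k] + (2 * a * S[k] + b2**2 - (c**2 / d**2) * S[k]**2) * dt
L = (c / d**2) * S

# Closed loop: u = K xhat, with xhat produced by the Kalman filter.
x, xhat, cost = rng.normal(0.0, 1.0), 0.0, 0.0
for k in range(N):
    u = K[k] * xhat
    dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)
    dy = c * x * dt + d * dw2                # observation increment
    x += (a * x + b1 * u) * dt + b2 * dw1    # state update
    xhat += (a * xhat + b1 * u) * dt + L[k] * (dy - c * xhat * dt)
    cost += (q * x**2 + r * u**2) * dt
cost += s * x**2
print(f"realized cost along one sample path: {cost:.3f}")
</syntaxhighlight>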

In the early literature on the separation principle it was common to allow as admissible controls [math]\displaystyle{ u }[/math] all processes that are adapted to the filtration [math]\displaystyle{ \{{\cal Y}_t, \, 0\leq t\leq T\} }[/math]. This is equivalent to allowing all non-anticipatory Borel functions as feedback laws, which raises the question of existence of a unique solution to the equations of the feedback loop. Moreover, one needs to exclude the possibility that a nonlinear controller extracts more information from the data than what is possible with a linear control law.[2]

Choices of the class of admissible control laws

Linear-quadratic control problems are often solved by a completion-of-squares argument. In our present context we have

[math]\displaystyle{ J(u)=\operatorname{E}\left\{ \int_0^T(u-Kx)'R(u-Kx) \, dt\right\} +\text{terms that do not depend on }u, }[/math]

in which the first term takes the form[3]

[math]\displaystyle{ \begin{align} \operatorname{E}\left\{ \int_0^T(u-Kx)'R(u-Kx)\,dt\right\}=\operatorname{E}\left\{\int_0^T[(u-K\hat{x})'R(u-K\hat{x})+\operatorname{tr}(K'RK\Sigma)] \, dt\right\}, \end{align} }[/math]

where [math]\displaystyle{ \Sigma }[/math] is the covariance matrix

[math]\displaystyle{ \Sigma(t):=\operatorname{E}\{[x(t)-\hat{x}(t)][x(t)-\hat{x}(t)]'\}. }[/math]
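
For completeness, the step behind this identity: assuming [math]\displaystyle{ u }[/math] is adapted to [math]\displaystyle{ \{{\cal Y}_t\} }[/math] and [math]\displaystyle{ \hat{x} }[/math] is the conditional mean of [math]\displaystyle{ x }[/math] given [math]\displaystyle{ {\cal Y}_t }[/math], write [math]\displaystyle{ u-Kx=(u-K\hat{x})-K(x-\hat{x}) }[/math]. The cross term then has zero expectation, since [math]\displaystyle{ u-K\hat{x} }[/math] is [math]\displaystyle{ {\cal Y}_t }[/math]-measurable while [math]\displaystyle{ \operatorname{E}\{x-\hat{x}\mid{\cal Y}_t\}=0 }[/math], and the remaining quadratic term reduces via the trace identity

[math]\displaystyle{ \operatorname{E}\{(x-\hat{x})'K'RK(x-\hat{x})\}=\operatorname{E}\{\operatorname{tr}(K'RK\,(x-\hat{x})(x-\hat{x})')\}=\operatorname{tr}(K'RK\,\Sigma). }[/math]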

The separation principle would now follow immediately if [math]\displaystyle{ \Sigma }[/math] were independent of the control. However, this remains to be established.

The state equation can be integrated to take the form

[math]\displaystyle{ x(t)=x_0(t)+\int_0^t \Phi(t,s)B_1(s)u(s) \, ds, }[/math]

where [math]\displaystyle{ x_0 }[/math] is the state process obtained by setting [math]\displaystyle{ u=0 }[/math] and [math]\displaystyle{ \Phi }[/math] is the transition matrix function. By linearity, [math]\displaystyle{ \hat{x}(t)=\operatorname{E}\{x(t)\mid {\cal Y}_t\} }[/math] equals

[math]\displaystyle{ \hat{x}(t)=\hat{x}_0(t)+\int_0^t \Phi(t,s)B_1(s)u(s)\,ds, }[/math]

where [math]\displaystyle{ \hat{x}_0(t)=\operatorname{E}\{x_0(t)\mid {\cal Y}_t\} }[/math]. Consequently,

[math]\displaystyle{ \Sigma(t)=\operatorname{E}\{[x_0(t)-\hat{x}_0(t)][x_0(t)-\hat{x}_0(t)]'\}, }[/math]

but we need to establish that [math]\displaystyle{ \hat{x}_0 }[/math] does not depend on the control. This would be the case if

[math]\displaystyle{ {\cal Y}_t ={\cal Y}_t^0:=\sigma\{ y_0(\tau), \tau\in [0,t]\}, \quad 0\leq t\leq T, }[/math]

where [math]\displaystyle{ y_0 }[/math] is the output process obtained by setting [math]\displaystyle{ u=0 }[/math]. This issue was discussed in detail by Lindquist.[2] In fact, since the control process [math]\displaystyle{ u }[/math] is in general a nonlinear function of the data and thus non-Gaussian, so is the output process [math]\displaystyle{ y }[/math]. To avoid these problems, one might begin by uncoupling the feedback loop and determining an optimal control process in the class of stochastic processes [math]\displaystyle{ u }[/math] that are adapted to the family [math]\displaystyle{ \{ {\cal Y}_t^0\} }[/math] of sigma fields. This problem, where one optimizes over the class of all control processes adapted to a fixed filtration, is called a stochastic open loop (SOL) problem.[2] It is not uncommon in the literature to assume from the outset that the control is adapted to [math]\displaystyle{ \{ {\mathcal Y}_t^0\} }[/math]; see, e.g., Section 2.3 in Bensoussan,[4] as well as van Handel[5] and Willems.[6]

In Lindquist 1973[2] a procedure was proposed for embedding the class of admissible controls in various SOL classes in a problem-dependent manner and then constructing the corresponding feedback law. The largest class [math]\displaystyle{ \Pi }[/math] of admissible feedback laws [math]\displaystyle{ \pi }[/math] consists of the non-anticipatory functions [math]\displaystyle{ u:=\pi(y) }[/math] such that the feedback equation has a unique solution and the corresponding control process [math]\displaystyle{ u_\pi }[/math] is adapted to [math]\displaystyle{ \{{\mathcal Y}_t^0\} }[/math]. Next, we give a few examples of specific classes of feedback laws that belong to this general class, as well as some other strategies in the literature to overcome the problems described above.

Linear control laws

The admissible class [math]\displaystyle{ \Pi }[/math] of control laws could be restricted to contain only certain linear ones as in Davis.[7] More generally, the linear class

[math]\displaystyle{ ({\mathcal L})\quad u(t)=\bar{u}(t)+\int_0^tF(t,\tau)\,dy, }[/math]

where [math]\displaystyle{ \bar{u} }[/math] is a deterministic function and [math]\displaystyle{ F }[/math] is an [math]\displaystyle{ L_2 }[/math] kernel, ensures that [math]\displaystyle{ \Sigma }[/math] is independent of the control.[8][2] In fact, the Gaussian property will then be preserved, and [math]\displaystyle{ \hat{x} }[/math] will be generated by the Kalman filter. Then the error process [math]\displaystyle{ \tilde{x}:= x-\hat{x} }[/math] is generated by

[math]\displaystyle{ d\tilde{x}=(A-LC)\tilde{x}\,dt +(B_2-LD)\,dw, \quad \tilde{x}(0)=x(0), }[/math]

which is clearly independent of the choice of control, and thus so is [math]\displaystyle{ \Sigma }[/math].
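
This control-independence can also be checked numerically. The sketch below (hypothetical scalar coefficients, a fixed but not necessarily optimal filter gain) runs the loop under two quite different linear feedback gains; because the control cancels in the error dynamics, both runs return the same error variance, and with a common noise seed the error paths themselves agree up to rounding.

<syntaxhighlight lang="python">
# Check that Sigma does not depend on the (linear) control law.
import numpy as np

def error_variance(feedback_gain, n_paths=20000, T=1.0, dt=1e-2, seed=1):
    rng = np.random.default_rng(seed)   # same seed => same noise in both runs
    a, b1, b2, c, d, L = -0.5, 1.0, 0.3, 1.0, 0.1, 0.8   # L: fixed filter gain
    x = rng.normal(0.0, 1.0, n_paths)   # x(0) ~ N(0,1), xhat(0) = 0
    xhat = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        u = feedback_gain * xhat        # a linear feedback law
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dy = c * x * dt + d * dw2
        x = x + (a * x + b1 * u) * dt + b2 * dw1
        xhat = xhat + (a * xhat + b1 * u) * dt + L * (dy - c * xhat * dt)
    return np.var(x - xhat)             # estimate of Sigma(T)

print(error_variance(-1.0))  # two very different gains ...
print(error_variance(-5.0))  # ... same variance: u cancels in x - xhat
</syntaxhighlight>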

Lipschitz-continuous control laws

Wonham proved a separation theorem for controls in the class [math]\displaystyle{ \pi:\, u(t)=\psi(t,\hat{x}(t)) }[/math], even for a more general cost functional than [math]\displaystyle{ J(u) }[/math].[9] However, the proof is far from simple, and there are many technical assumptions. For example, [math]\displaystyle{ C(t) }[/math] must be square and have a determinant bounded away from zero, which is a serious restriction. A later proof by Fleming and Rishel[10] is considerably simpler. They also prove the separation theorem with the quadratic cost functional [math]\displaystyle{ J(u) }[/math] for a class of Lipschitz continuous feedback laws, namely [math]\displaystyle{ u(t)=\phi(t,y) }[/math], where [math]\displaystyle{ \phi:\, [0,T]\times C^n [0,T]\to{\mathbb R}^m }[/math] is a non-anticipatory function of [math]\displaystyle{ y }[/math] that is Lipschitz continuous in this argument. Kushner[11] proposed a more restricted class [math]\displaystyle{ u(t)=\psi(t,\hat{\xi}(t)) }[/math], where the modified state process [math]\displaystyle{ \hat{\xi} }[/math] is given by

[math]\displaystyle{ \hat{\xi}(t)=\operatorname{E}\{ x_0(t)\mid {\mathcal Y}_t^0\}+ \int_0^t \Phi(t,s)B_1(s)u(s)\,ds, }[/math]

leading to the identity [math]\displaystyle{ \hat{x}=\hat{\xi} }[/math].

Imposing delay

If there is a delay in the processing of the observed data so that, for each [math]\displaystyle{ t }[/math], [math]\displaystyle{ u(t) }[/math] is a function of [math]\displaystyle{ y(\tau); \, 0\leq\tau\leq t-\varepsilon }[/math], then [math]\displaystyle{ {\cal Y}_t ={\cal Y}_t^0 }[/math], [math]\displaystyle{ 0\leq t\leq T }[/math], see Example 3 in Georgiou and Lindquist.[1] Consequently, [math]\displaystyle{ \Sigma }[/math] is independent of the control. Nevertheless, the control policy [math]\displaystyle{ \pi }[/math] must be such that the feedback equations have a unique solution.

Since a delay between observation and control action is inherent in any discrete-time formulation, the problem of possibly control-dependent sigma fields does not occur there. However, a procedure used in several textbooks to construct the continuous-time [math]\displaystyle{ \Sigma }[/math] as the limit of finite difference quotients of the discrete-time [math]\displaystyle{ \Sigma }[/math], which does not depend on the control, is circular or at best incomplete; see Remark 4 in Georgiou and Lindquist.[1]

Weak solutions

An approach introduced by Duncan and Varaiya[12] and Davis and Varaiya[13] (see also Section 2.4 in Bensoussan[4]) is based on weak solutions of the stochastic differential equation. Considering such solutions of

[math]\displaystyle{ dx =A(t)x(t)\,dt+B_1(t)u(t)\,dt+B_2(t)\,dw }[/math]

we can change the probability measure (which depends on [math]\displaystyle{ u }[/math]) via a Girsanov transformation so that

[math]\displaystyle{ d\tilde{w}:= B_1(t)u(t)\,dt+B_2(t)\,dw }[/math]

becomes a new Wiener process, which (under the new probability measure) can be assumed to be unaffected by the control. The question of how this could be implemented in an engineering system is left open.
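
For orientation, a sketch of the measure change in the special case [math]\displaystyle{ B_2=I }[/math] (assumed here only to keep the formulas simple): define a new measure [math]\displaystyle{ \tilde{\mathbb P} }[/math] by the exponential martingale

[math]\displaystyle{ \frac{d\tilde{\mathbb P}}{d\mathbb P}=\exp\left(-\int_0^T u(t)'B_1(t)'\,dw-\frac{1}{2}\int_0^T |B_1(t)u(t)|^2\,dt\right). }[/math]

Provided Novikov's condition [math]\displaystyle{ \operatorname{E}\exp\left(\tfrac{1}{2}\int_0^T|B_1(t)u(t)|^2\,dt\right)<\infty }[/math] holds, Girsanov's theorem asserts that [math]\displaystyle{ \tilde{w}(t)=w(t)+\int_0^t B_1(s)u(s)\,ds }[/math] is a standard Wiener process under [math]\displaystyle{ \tilde{\mathbb P} }[/math].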

Nonlinear filtering solutions

Although a nonlinear control law will produce a non-Gaussian state process, it can be shown, using nonlinear filtering theory (Chapter 16.1 in Liptser and Shirayev[14]), that the state process is conditionally Gaussian given the filtration [math]\displaystyle{ \{{\mathcal Y}_t\} }[/math]. This fact can be used to show that [math]\displaystyle{ \hat{x} }[/math] is actually generated by a Kalman filter (see Chapters 11 and 12 in Liptser and Shirayev[14]). However, this requires quite a sophisticated analysis and is restricted to the case where the driving noise [math]\displaystyle{ w }[/math] is a Wiener process.

Additional historical perspective can be found in Mitter.[15]

Issues on feedback in linear stochastic systems

At this point it is suitable to consider a more general class of controlled linear stochastic systems that also covers systems with time delays, namely

[math]\displaystyle{ \begin{align} z(t) & =z_0(t) + \int_0^t G(t,s)u(s)\,ds \\ y(t) & = Hz(t) \end{align} }[/math]

with [math]\displaystyle{ z_0 }[/math] a stochastic vector process which does not depend on the control.[2] The standard stochastic system is then obtained as a special case where [math]\displaystyle{ z=[x',y']' }[/math], [math]\displaystyle{ z_0=[x_0',y_0']' }[/math] and [math]\displaystyle{ H=[0,I] }[/math]. We shall use the short-hand notation

[math]\displaystyle{ z=z_0+g\pi Hz }[/math]

for the feedback system, where

[math]\displaystyle{ g\;:\; (t,u) \mapsto \int_0^t G(t,\tau)u(\tau)\,d\tau }[/math]

is a Volterra operator.
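
For illustration (a computation not spelled out in the sources cited here), in the standard stochastic system the integrated state and output equations give the explicit kernel

[math]\displaystyle{ G(t,s)=\begin{bmatrix}\Phi(t,s)B_1(s)\\ \left(\int_s^t C(\sigma)\Phi(\sigma,s)\,d\sigma\right)B_1(s)\end{bmatrix}, }[/math]

whose first block is the control-to-state response from the variation-of-constants formula above and whose second block is obtained by integrating [math]\displaystyle{ C(t)x(t)\,dt }[/math].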

In this more general formulation the embedding procedure of Lindquist[2] defines the class [math]\displaystyle{ \Pi }[/math] of admissible feedback laws [math]\displaystyle{ \pi }[/math] as the class of non-anticipatory functions [math]\displaystyle{ u:=\pi(y) }[/math] such that the feedback equation [math]\displaystyle{ z=z_0+g\pi Hz }[/math] has a unique solution [math]\displaystyle{ z_\pi }[/math] and [math]\displaystyle{ u=\pi(Hz_\pi) }[/math] is adapted to [math]\displaystyle{ \{{\mathcal Y}_t^0\} }[/math].

In Georgiou and Lindquist[1] a new framework for the separation principle was proposed. This approach considers stochastic systems as well-defined maps between sample paths rather than between stochastic processes and allows us to extend the separation principle to systems driven by martingales with possible jumps. The approach is motivated by engineering thinking where systems and feedback loops process signals, and not stochastic processes per se or transformations of probability measures. Hence the purpose is to create a natural class of admissible control laws that make engineering sense, including those that are nonlinear and discontinuous.

The feedback equation [math]\displaystyle{ z=z_0+g\pi Hz }[/math] has a unique strong solution if there exists a non-anticipating function [math]\displaystyle{ F }[/math] such that [math]\displaystyle{ z=F(z_0) }[/math] satisfies the equation with probability one and all other solutions coincide with [math]\displaystyle{ z }[/math] with probability one. However, in the sample-wise setting more is required, namely that such a unique solution exists and that [math]\displaystyle{ z=z_0+g\pi Hz }[/math] holds for all [math]\displaystyle{ z_0 }[/math], not just almost all. The resulting feedback loop is deterministically well-posed in the sense that the feedback equations admit a unique solution that causally depends on the input for each input sample path.

In this context, a signal is defined to be a sample path of a stochastic process with possible discontinuities. More precisely, signals belong to the Skorohod space [math]\displaystyle{ D }[/math], i.e., the space of functions which are continuous on the right and have a left limit at all points (càdlàg functions). In particular, the space [math]\displaystyle{ C }[/math] of continuous functions is a proper subspace of [math]\displaystyle{ D }[/math]. Hence the response of a typical nonlinear operation that involves thresholding and switching can be modeled as a signal, and the same goes for sample paths of counting processes and other martingales. A system is defined to be a measurable non-anticipatory map [math]\displaystyle{ D\to D }[/math] sending sample paths to sample paths, so that its output at any time [math]\displaystyle{ t }[/math] is a measurable function of past values of the input and time. For example, stochastic differential equations with Lipschitz coefficients driven by a Wiener process induce maps between corresponding path spaces; see page 127 in Rogers and Williams[16] and pages 126–128 in Klebaner.[17] Also, under fairly general conditions (see, e.g., Chapter V in Protter[18]), stochastic differential equations driven by martingales with sample paths in [math]\displaystyle{ D }[/math] have strong solutions that are semimartingales.

Setting [math]\displaystyle{ f(z):=g\pi Hz }[/math], the feedback system [math]\displaystyle{ z=z_0+g\pi Hz }[/math] can be written [math]\displaystyle{ z=z_0+f(z) }[/math], where [math]\displaystyle{ z_0 }[/math] can be interpreted as an input.

Definition. A feedback loop [math]\displaystyle{ z=z_0+f(z) }[/math] is deterministically well-posed if it has a unique solution [math]\displaystyle{ z\in D }[/math] for all inputs [math]\displaystyle{ z_0\in D }[/math] and [math]\displaystyle{ (1-f)^{-1} }[/math] is a system.
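
A discretized sketch shows how deterministic well-posedness can arise. When the Volterra operator is strictly causal, [math]\displaystyle{ z(t_k) }[/math] depends only on control values strictly before [math]\displaystyle{ t_k }[/math], so the feedback equation can be solved sample path by sample path by forward substitution, with no implicit equation to solve. The kernel, feedback law, and choice [math]\displaystyle{ H=I }[/math] below are hypothetical.

<syntaxhighlight lang="python">
# Solving z = z0 + g(pi(H z)) pathwise by forward substitution (H = I).
import numpy as np

N, dt = 200, 0.01
t = np.arange(N) * dt
# Strictly causal (strictly lower-triangular) Volterra kernel G(t, s).
G = np.exp(-np.subtract.outer(t, t)) * (np.subtract.outer(t, t) > 0)

def pi(y):
    return -np.tanh(y)   # a nonlinear feedback law; a discontinuous one also works

z0 = np.sin(3 * t)       # an arbitrary input sample path
z = np.zeros(N)
for k in range(N):
    u = pi(z[:k])                       # control uses strictly past outputs only
    z[k] = z0[k] + dt * (G[k, :k] @ u)  # unique z[k]: nothing implicit to solve
</syntaxhighlight>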

This implies that the processes [math]\displaystyle{ z }[/math] and [math]\displaystyle{ z_0 }[/math] define identical filtrations.[1] Consequently, no new information is created by the loop. However, what we need is that [math]\displaystyle{ {\cal Y}_t ={\cal Y}_t^0 }[/math] for [math]\displaystyle{ 0\leq t\leq T }[/math]. This is ensured by the following lemma (Lemma 8 in Georgiou and Lindquist[1]).

Key Lemma. If the feedback loop [math]\displaystyle{ z=z_0+g\pi Hz }[/math] is deterministically well-posed, [math]\displaystyle{ g\pi }[/math] is a system, and [math]\displaystyle{ H }[/math] is a linear system having a right inverse [math]\displaystyle{ H^{-R} }[/math] that is also a system, then [math]\displaystyle{ (1-Hg\pi)^{-1} }[/math] is a system and [math]\displaystyle{ {\cal Y}_t ={\cal Y}_t^0 }[/math] for [math]\displaystyle{ 0\leq t\leq T }[/math].

The condition on [math]\displaystyle{ H }[/math] in this lemma is clearly satisfied in the standard linear stochastic system, for which [math]\displaystyle{ H=[0,I] }[/math], and hence [math]\displaystyle{ H^{-R}=H' }[/math]. The remaining conditions are collected in the following definition.

Definition. A feedback law [math]\displaystyle{ \pi }[/math] is deterministically well-posed for the system [math]\displaystyle{ z=z_0+g\pi Hz }[/math] if [math]\displaystyle{ g\pi }[/math] is a system and the feedback system [math]\displaystyle{ z=z_0+g\pi Hz }[/math] is deterministically well-posed.

Examples of simple systems that are not deterministically well-posed are given in Remark 12 in Georgiou and Lindquist.[1]

A separation principle for physically realizable control laws

By only considering feedback laws that are deterministically well-posed, all admissible control laws are physically realizable in the engineering sense that they induce a signal that travels through the feedback loop. The proof of the following theorem can be found in Georgiou and Lindquist 2013.[1]

Separation theorem. Given the linear stochastic system

[math]\displaystyle{ \begin{align} dx & =A(t)x(t)\,dt+B_1(t)u(t)\,dt+B_2(t)\,dw \\ dy & =C(t)x(t)\,dt +D(t)\,dw \end{align} }[/math]

where [math]\displaystyle{ w }[/math] is a vector-valued Wiener process, [math]\displaystyle{ x(0) }[/math] is a zero-mean Gaussian random vector independent of [math]\displaystyle{ w }[/math], consider the problem of minimizing the quadratic functional J(u) over the class of all deterministically well-posed feedback laws [math]\displaystyle{ \pi }[/math]. Then the unique optimal control law is given by [math]\displaystyle{ u(t)=K(t)\hat{x}(t) }[/math] where [math]\displaystyle{ K }[/math] is defined as above and [math]\displaystyle{ \hat{x} }[/math] is given by the Kalman filter. More generally, if [math]\displaystyle{ w }[/math] is a square-integrable martingale and [math]\displaystyle{ x(0) }[/math] is an arbitrary zero-mean random vector, [math]\displaystyle{ u(t)=K(t)\hat{x}(t) }[/math], where [math]\displaystyle{ \hat{x}(t)=\operatorname{E}\{x(t)\mid {\cal Y}_t\} }[/math], is the optimal control law provided it is deterministically well-posed.

In the general non-Gaussian case, which may involve counting processes, the Kalman filter needs to be replaced by a nonlinear filter.

A separation principle for delay-differential systems

Stochastic control for time-delay systems was first studied in Lindquist[19][20][8][2] and Brooks,[21] although Brooks relies on the strong assumption that the observation [math]\displaystyle{ y }[/math] is functionally independent of the control [math]\displaystyle{ u }[/math], thus avoiding the key question of feedback.

Consider the delay-differential system[8]

[math]\displaystyle{ \begin{align} dx &=\left(\int_{t-h}^t d_s\,A(t,s)x(s)\right) \,dt + B_1(t)u(t)\,dt+B_2(t)\,dw \\ dy & =\left(\int_{t-h}^t d_s\,C(t,s)x(s)\right) \,dt +D(t)\,dw \end{align} }[/math]

where [math]\displaystyle{ w }[/math] is now a (square-integrable) Gaussian (vector) martingale, and where [math]\displaystyle{ A }[/math] and [math]\displaystyle{ C }[/math] are of bounded variation in the first argument and continuous on the right in the second, [math]\displaystyle{ x(t)=\xi(t) }[/math] is deterministic for [math]\displaystyle{ -h\leq t\leq 0 }[/math], and [math]\displaystyle{ y(0)=0 }[/math]. More precisely, [math]\displaystyle{ A(t,s)=0 }[/math] for [math]\displaystyle{ s\geq t }[/math], [math]\displaystyle{ A(t,s)=A(t,t-h) }[/math] for [math]\displaystyle{ s\leq t-h }[/math], and the total variation of [math]\displaystyle{ s\mapsto A(t,s) }[/math] is bounded by an integrable function in the variable [math]\displaystyle{ t }[/math]; the same holds for [math]\displaystyle{ C }[/math].

We want to determine a control law which minimizes

[math]\displaystyle{ J(u)=\operatorname{E}\left(\int_0^T x(t)'Q(t)x(t)\,d\alpha(t)+\int_0^Tu(t)'R(t)u(t)\,dt\right), }[/math]

where [math]\displaystyle{ d\alpha }[/math] is a positive Stieltjes measure. The solution of the corresponding deterministic problem, obtained by setting [math]\displaystyle{ w=0 }[/math], is given by the feedback law

[math]\displaystyle{ u(t)=\int_{t-h}^t d_\tau \, K(t,\tau)x(\tau), }[/math]

where the deterministic gain [math]\displaystyle{ K }[/math] is given in Lindquist.[8]

The following separation principle for the delay system above can be found in Georgiou and Lindquist 2013[1] and generalizes the corresponding result in Lindquist 1973.[8]

Theorem. There is a unique feedback law [math]\displaystyle{ \pi:\, y\mapsto u }[/math] in the class of deterministically well-posed control laws that minimizes [math]\displaystyle{ J(u) }[/math], and it is given by

[math]\displaystyle{ u(t)=\int_{t-h}^t d_s \, K(t,s)\hat{x}(s\mid t), }[/math]

where [math]\displaystyle{ K }[/math] is the deterministic control gain and [math]\displaystyle{ \hat{x}(s\mid t) := E\{ x(s)\mid {\cal Y}_t\} }[/math] is given by the linear (distributed) filter

[math]\displaystyle{ d\hat{x}(t\mid t) =\int_{t-h}^t d_s \, A(t,s)\hat{x}(s\mid t) \, dt +B_1u\,dt+ X(t,t)\,dv }[/math]

where [math]\displaystyle{ v }[/math] is the innovation process

[math]\displaystyle{ dv=dy - \int_{t-h}^t d_sC(t,s)\hat{x}(s\mid t)\, dt, \quad v(0)=0, }[/math]

and the gain [math]\displaystyle{ X }[/math] is as defined on page 120 in Lindquist.[8]

References

  1. Tryphon T. Georgiou and Anders Lindquist (2013). "The Separation Principle in Stochastic Control, Redux". IEEE Transactions on Automatic Control 58 (10): 2481–2494. doi:10.1109/TAC.2013.2259207.
  2. Anders Lindquist (1973). "On Feedback Control of Linear Stochastic Systems". SIAM Journal on Control 11 (2): 323–343. doi:10.1137/0311025.
  3. Karl Johan Åström (1970). Introduction to Stochastic Control Theory. Academic Press. ISBN 978-0-486-44531-1.
  4. A. Bensoussan (1992). Stochastic Control of Partially Observable Systems. Cambridge University Press.
  5. Ramon van Handel (2007). Stochastic Calculus, Filtering, and Stochastic Control. Unpublished lecture notes. https://web.math.princeton.edu/~rvan/acm217/ACM217.pdf
  6. Jan C. Willems (1978). "Recursive filtering". Statistica Neerlandica 32 (1): 1–39. doi:10.1111/j.1467-9574.1978.tb01382.x.
  7. M.H.A. Davis (1978). Linear Estimation and Stochastic Control. Chapman and Hall.
  8. Anders Lindquist (1973). "Optimal control of linear stochastic systems with applications to time lag systems". Information Sciences 5: 81–126. doi:10.1016/0020-0255(73)90005-4.
  9. W.M. Wonham (1968). "On the separation theorem of stochastic control". SIAM Journal on Control 6 (2): 312–326. doi:10.1137/0306023.
  10. W.H. Fleming and R.W. Rishel (1975). Deterministic and Stochastic Optimal Control. Springer-Verlag.
  11. H. Kushner (1971). Introduction to Stochastic Control. Holt, Rinehart and Winston.
  12. Tyrone Duncan and Pravin Varaiya (1971). "On the solutions of a stochastic control system". SIAM Journal on Control 9 (3): 354–371. doi:10.1137/0309026. https://kuscholarworks.ku.edu/bitstream/1808/16692/1/DuncanTE_Aug1971.pdf
  13. M.H.A. Davis and P. Varaiya (1972). "Information states for stochastic systems". Journal of Mathematical Analysis and Applications 37: 384–402. doi:10.1016/0022-247X(72)90281-8.
  14. R.S. Liptser and A.N. Shirayev (1978). Statistics of Random Processes II: Applications. Springer-Verlag.
  15. S. Mitter (1996). "Filtering and stochastic control: A historical perspective". IEEE Control Systems Magazine 13 (3): 67–76.
  16. L.C.G. Rogers and David Williams (2000). Diffusions, Markov Processes, and Martingales: Volume 2, Itô Calculus. Cambridge University Press.
  17. Fima C. Klebaner (2012). Introduction to Stochastic Calculus with Applications. Imperial College Press.
  18. P.E. Protter (2004). Stochastic Integration and Differential Equations. Springer-Verlag.
  19. Anders Lindquist (1968). "On optimal stochastic control with smoothed information". Information Sciences 1: 55–85. doi:10.1016/0020-0255(68)90007-8.
  20. Anders Lindquist (1969). "An innovations approach to optimal control of linear stochastic systems with time delay". Information Sciences 1 (3): 279–295. doi:10.1016/S0020-0255(69)80014-9.
  21. R. Brooks (1972). "Linear Stochastic Control: An Extended Separation Principle". Journal of Mathematical Analysis and Applications 38 (3): 569–587. doi:10.1016/0022-247X(72)90069-8.