# Kushner equation

In filtering theory the Kushner equation (after Harold Kushner) is an equation for the conditional probability density of the state of a stochastic non-linear dynamical system, given noisy measurements of the state. It therefore provides the solution of the nonlinear filtering problem in estimation theory. The equation is sometimes referred to as the Stratonovich–Kushner (or Kushner–Stratonovich) equation. The correct equation in terms of Itō calculus was first derived by Kushner, although a more heuristic Stratonovich version had already appeared in Stratonovich's work of the late 1950s; the Itō-calculus derivation has also been attributed to Richard Bucy.

## Overview

Assume the state of the system evolves according to

$\displaystyle{ dx = f(x,t) \, dt + \sigma dw }$

and a noisy measurement of the system state is available:

$\displaystyle{ dz = h(x,t) \, dt + \eta dv }$

where w and v are independent Wiener processes. Then the conditional probability density p(x, t) of the state at time t, given the history of measurements z up to time t, satisfies the Kushner equation:

$\displaystyle{ dp(x,t) = L[p(x,t)] \, dt + p(x,t) [h(x,t)-E_t h(x,t) ]^\top \eta^{-\top}\eta^{-1} [dz-E_t h(x,t) \, dt], }$

where $\displaystyle{ L p = -\sum_i \frac{\partial (f_i p)}{\partial x_i} + \frac{1}{2} \sum_{i,j} (\sigma \sigma^\top)_{i,j} \frac{\partial^2 p}{\partial x_i \partial x_j} }$ is the Kolmogorov forward operator, $\displaystyle{ E_t }$ denotes expectation with respect to the density $\displaystyle{ p(x,t) }$, and $\displaystyle{ dp(x,t) = p(x,t + dt) - p(x,t) }$ is the variation of the conditional probability.

The term $\displaystyle{ dz-E_t h(x,t) \, dt }$ is the innovation, i.e., the difference between the actual measurement increment and its expected value.
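As a numerical illustration, the equation can be integrated on a spatial grid for a scalar system. The sketch below (with made-up coefficients `a`, `sig`, `c`, `eta` and an explicit Euler scheme; not a production solver) propagates the density with the forward operator and applies the innovation correction at each step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model: dx = a*x dt + sig dw,  dz = c*x dt + eta dv
a, sig, c, eta = -1.0, 0.5, 1.0, 0.3
dt, n_steps = 1e-3, 1000

# Spatial grid for the conditional density p(x, t), prior N(0, 1)
xs = np.linspace(-4.0, 4.0, 401)
h = xs[1] - xs[0]
p = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
p /= p.sum() * h

x_true = rng.normal()  # hidden state, simulated alongside the filter
for _ in range(n_steps):
    # Simulate the hidden state and the noisy measurement increment
    x_true += a * x_true * dt + sig * np.sqrt(dt) * rng.normal()
    dz = c * x_true * dt + eta * np.sqrt(dt) * rng.normal()

    # Forward operator L[p] = -d(a x p)/dx + (sig^2/2) d^2 p/dx^2
    drift = -np.gradient(a * xs * p, h)
    diff = 0.5 * sig**2 * np.gradient(np.gradient(p, h), h)

    # Innovation correction: p (h - E_t h) / eta^2 * (dz - E_t h dt)
    Eh = c * np.sum(xs * p) * h
    p = p + (drift + diff) * dt + p * (c * xs - Eh) / eta**2 * (dz - Eh * dt)

    p = np.clip(p, 0.0, None)  # guard against small negative values
    p /= p.sum() * h           # renormalize

mean_est = np.sum(xs * p) * h
print(f"true state {x_true:+.3f}, filter mean {mean_est:+.3f}")
```

The explicit scheme is stable here only because the time step is small relative to the grid spacing; the clipping and renormalization compensate for discretization error in the correction term.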

### Kalman–Bucy filter

One can use the Kushner equation to derive the Kalman–Bucy filter for a linear diffusion process. Suppose $\displaystyle{ f(x,t) = A x }$ and $\displaystyle{ h(x,t) = C x }$. The Kushner equation then reads

$\displaystyle{ dp(x,t) = L[p(x,t)] dt + p(x,t) [C x- C \mu(t)]^\top \eta^{-\top}\eta^{-1} [dz-C \mu(t) dt], }$

where $\displaystyle{ \mu(t) }$ is the mean of the conditional probability at time $\displaystyle{ t }$. Multiplying by $\displaystyle{ x }$ and integrating over $\displaystyle{ x }$, we obtain the variation of the mean

$\displaystyle{ d\mu(t) = A \mu(t) dt + \Sigma(t) C^\top \eta^{-\top}\eta^{-1} \left(dz - C\mu(t) dt\right). }$
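To make the integration explicit (a sketch assuming $\displaystyle{ p }$ and its derivatives vanish at infinity), note that $\displaystyle{ \mu(t) = \int x \, p(x,t) \, dx }$. Integration by parts gives

$\displaystyle{ \int x \, L[p(x,t)] \, dx = \int A x \, p(x,t) \, dx = A \mu(t), }$

since the second-derivative (diffusion) term of $\displaystyle{ L }$ integrates to zero against $\displaystyle{ x }$, while the correction term contributes

$\displaystyle{ \int x \, p(x,t) \, [C x - C \mu(t)]^\top dx = \Sigma(t) C^\top, }$

which is the covariance factor appearing in the gain $\displaystyle{ \Sigma(t) C^\top \eta^{-\top}\eta^{-1} }$.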

Likewise, the variation of the variance $\displaystyle{ \Sigma(t) }$ is given by

$\displaystyle{ \frac{d\Sigma(t)}{dt} = A\Sigma(t) + \Sigma(t) A^\top + \sigma \sigma^\top-\Sigma(t) C^\top\eta^{-\top} \eta^{-1} C \Sigma(t). }$

The conditional probability density is then given at every instant by a normal distribution $\displaystyle{ \mathcal{N}(\mu(t),\Sigma(t)) }$.
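A minimal numerical sketch of these two equations for a scalar model (hypothetical coefficients `A`, `C`, `sig`, `eta`; explicit Euler–Maruyama discretization) propagates the mean SDE and the variance ODE side by side, and compares the final variance with the steady state of the Riccati equation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model: dx = A*x dt + sig dw,  dz = C*x dt + eta dv
A, C, sig, eta = -1.0, 1.0, 0.5, 0.3
dt, n_steps = 1e-3, 5000

x = rng.normal()       # hidden state
mu, Sigma = 0.0, 1.0   # filter mean and variance

for _ in range(n_steps):
    # Simulate the hidden state and the measurement increment
    x += A * x * dt + sig * np.sqrt(dt) * rng.normal()
    dz = C * x * dt + eta * np.sqrt(dt) * rng.normal()

    # Kalman-Bucy gain K = Sigma C / eta^2, then Euler steps for mu and Sigma
    K = Sigma * C / eta**2
    mu += A * mu * dt + K * (dz - C * mu * dt)
    Sigma += (2 * A * Sigma + sig**2 - Sigma**2 * C**2 / eta**2) * dt

# Steady state of the Riccati equation: 0 = 2 A S + sig^2 - S^2 C^2 / eta^2
S_inf = (A * eta**2 + np.sqrt(A**2 * eta**4 + sig**2 * eta**2 * C**2)) / C**2
print(f"Sigma(T) = {Sigma:.4f}, Riccati steady state = {S_inf:.4f}")
```

Because the variance equation does not depend on the measurements, $\Sigma(t)$ converges deterministically to the positive root of the algebraic Riccati equation, while the mean tracks the hidden state with the corresponding stationary gain.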