Wiener filter
In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming known stationary signal and noise spectra, and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.
Description
The goal of the Wiener filter is to compute a statistical estimate of an unknown signal using a related signal as an input and filtering that known signal to produce the estimate as an output. For example, the known signal might consist of an unknown signal of interest that has been corrupted by additive noise. The Wiener filter can be used to filter out the noise from the corrupted signal to provide an estimate of the underlying signal of interest. The Wiener filter is based on a statistical approach, and a more statistical account of the theory is given in the minimum mean square error (MMSE) estimator article.
Typical deterministic filters are designed for a desired frequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following:[1]
- Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation
- Requirement: the filter must be physically realizable/causal (this requirement can be dropped, resulting in a non-causal solution)
- Performance criterion: minimum mean-square error (MMSE)
This filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution.
Wiener filter solutions
Let [math]\displaystyle{ s(t+ \alpha ) }[/math] be an unknown signal which must be estimated from a measurement signal [math]\displaystyle{ x(t) }[/math], where [math]\displaystyle{ \alpha }[/math] is a tunable parameter: [math]\displaystyle{ \alpha \gt 0 }[/math] is known as prediction, [math]\displaystyle{ \alpha = 0 }[/math] is known as filtering, and [math]\displaystyle{ \alpha \lt 0 }[/math] is known as smoothing (see the Wiener filtering chapter of [1] for more details).
The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where only input data is used (i.e. the result or output is not fed back into the filter as in the IIR case). The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect; Norman Levinson gave the FIR solution in an appendix of Wiener's book.
Noncausal solution
- [math]\displaystyle{ G(s) = \frac{S_{x,s}(s)}{S_x(s)}e^{\alpha s}, }[/math]
where [math]\displaystyle{ S_{x,s}(s) }[/math] is the cross power spectral density between the observation [math]\displaystyle{ x(t) }[/math] and the desired signal [math]\displaystyle{ s(t) }[/math], and [math]\displaystyle{ S_x(s) }[/math] is the power spectral density of the observation. Provided that [math]\displaystyle{ g(t) }[/math] is optimal, the minimum mean-square error equation reduces to
- [math]\displaystyle{ E(e^2) = R_s(0) - \int_{-\infty}^{\infty} g(\tau)R_{x,s}(\tau + \alpha)\,d\tau, }[/math]
and the solution [math]\displaystyle{ g(t) }[/math] is the inverse two-sided Laplace transform of [math]\displaystyle{ G(s) }[/math].
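Because the noncausal solution only requires forming the ratio of spectral densities, it is straightforward to approximate numerically when the spectra are known. The following Python sketch applies the zero-phase gain [math]\displaystyle{ G = S_{x,s}/S_x }[/math] with [math]\displaystyle{ \alpha = 0 }[/math] in the frequency domain; the AR(1) signal model, the white-noise assumption, and all parameter values are illustrative assumptions, not taken from the text above.

import numpy as np

rng = np.random.default_rng(0)
n, a, sig_n = 4096, 0.9, 1.0  # assumed length, AR(1) pole, and noise level

# Assumed model: AR(1) signal s observed in additive white noise, x = s + noise.
s = np.zeros(n)
for i in range(1, n):
    s[i] = a * s[i - 1] + rng.normal()
x = s + rng.normal(0.0, sig_n, n)

# Model spectra: S_s(f) = 1/|1 - a e^{-j 2 pi f}|^2 and S_n(f) = sig_n^2; since the
# noise is uncorrelated with the signal, S_{x,s} = S_s and S_x = S_s + S_n.
f = np.fft.rfftfreq(n)
S_s = 1.0 / np.abs(1.0 - a * np.exp(-2j * np.pi * f)) ** 2
G = S_s / (S_s + sig_n**2)  # noncausal Wiener gain

# Apply the gain bin by bin (a circular-convolution approximation of the LTI filter).
s_hat = np.fft.irfft(G * np.fft.rfft(x), n=n)
print("MSE before:", np.mean((x - s) ** 2), "after:", np.mean((s_hat - s) ** 2))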
Causal solution
- [math]\displaystyle{ G(s) = \frac{H(s)}{S_x^{+}(s)}, }[/math]
where
- [math]\displaystyle{ H(s) }[/math] consists of the causal part of [math]\displaystyle{ \frac{S_{x,s}(s)}{S_x^{-}(s)}e^{\alpha s} }[/math] (that is, that part of this fraction having a positive time solution under the inverse Laplace transform)
- [math]\displaystyle{ S_x^{+}(s) }[/math] is the causal component of [math]\displaystyle{ S_x(s) }[/math] (i.e., the inverse Laplace transform of [math]\displaystyle{ S_x^{+}(s) }[/math] is non-zero only for [math]\displaystyle{ t \ge 0 }[/math])
- [math]\displaystyle{ S_x^{-}(s) }[/math] is the anti-causal component of [math]\displaystyle{ S_x(s) }[/math] (i.e., the inverse Laplace transform of [math]\displaystyle{ S_x^{-}(s) }[/math] is non-zero only for [math]\displaystyle{ t \lt 0 }[/math])
This general formula is complicated and deserves a more detailed explanation. To write down the solution [math]\displaystyle{ G(s) }[/math] in a specific case, one should follow these steps (a worked example follows the list):[2]
- Start with the spectrum [math]\displaystyle{ S_x(s) }[/math] in rational form and factor it into causal and anti-causal components: [math]\displaystyle{ S_x(s) = S_x^{+}(s) S_x^{-}(s) }[/math] where [math]\displaystyle{ S^{+} }[/math] contains all the zeros and poles in the left half plane (LHP) and [math]\displaystyle{ S^{-} }[/math] contains the zeros and poles in the right half plane (RHP). This is called the Wiener–Hopf factorization.
- Divide [math]\displaystyle{ S_{x,s}(s)e^{\alpha s} }[/math] by [math]\displaystyle{ S_x^{-}(s) }[/math] and write out the result as a partial fraction expansion.
- Select only those terms in this expansion having poles in the LHP. Call these terms [math]\displaystyle{ H(s) }[/math].
- Divide [math]\displaystyle{ H(s) }[/math] by [math]\displaystyle{ S_x^{+}(s) }[/math]. The result is the desired filter transfer function [math]\displaystyle{ G(s) }[/math].
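As a concrete illustration of these steps, take [math]\displaystyle{ \alpha = 0 }[/math] and assume, purely for illustration, a signal with autocorrelation [math]\displaystyle{ R_s(\tau) = e^{-|\tau|} }[/math], hence spectrum [math]\displaystyle{ S_s(s) = \frac{2}{1-s^2} }[/math], observed in unit-intensity white noise that is uncorrelated with the signal, so that [math]\displaystyle{ S_{x,s}(s) = S_s(s) }[/math]. Step 1 factors the observation spectrum as
- [math]\displaystyle{ S_x(s) = \frac{2}{1-s^2} + 1 = \frac{3-s^2}{1-s^2} = \underbrace{\frac{\sqrt{3}+s}{1+s}}_{S_x^{+}(s)} \, \underbrace{\frac{\sqrt{3}-s}{1-s}}_{S_x^{-}(s)} . }[/math]
Steps 2 and 3 give
- [math]\displaystyle{ \frac{S_{x,s}(s)}{S_x^{-}(s)} = \frac{2}{(1+s)(\sqrt{3}-s)} = \frac{\sqrt{3}-1}{1+s} + \frac{\sqrt{3}-1}{\sqrt{3}-s}, \qquad H(s) = \frac{\sqrt{3}-1}{1+s}, }[/math]
since only the first term has its pole in the LHP. Step 4 then yields the causal Wiener filter and its impulse response
- [math]\displaystyle{ G(s) = \frac{H(s)}{S_x^{+}(s)} = \frac{\sqrt{3}-1}{\sqrt{3}+s}, \qquad g(t) = \left(\sqrt{3}-1\right) e^{-\sqrt{3}\,t} \quad \text{for } t \ge 0 . }[/math]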
Finite impulse response Wiener filter for discrete series
The causal finite impulse response (FIR) Wiener filter, instead of being fit to a given data matrix X and output vector Y, finds the optimal tap weights from the statistics of the input and output signals: it populates an input matrix T with estimates of the auto-correlation of the input signal and a vector v with estimates of the cross-correlation between the desired output and the input.
In order to derive the coefficients of the Wiener filter, consider the signal w[n] being fed to a Wiener filter of order (number of past taps) N and with coefficients [math]\displaystyle{ \{a_0, \cdots, a_N\} }[/math]. The output of the filter is denoted x[n], which is given by the expression
- [math]\displaystyle{ x[n] = \sum_{i=0}^N a_i w[n-i] . }[/math]
The residual error is denoted e[n] and is defined as e[n] = x[n] − s[n], the difference between the filter output and the desired signal. The Wiener filter is designed so as to minimize the mean square error (the MMSE criterion), which can be stated concisely as follows:
- [math]\displaystyle{ a_i = \arg \min E \left [e^2[n] \right ], }[/math]
where [math]\displaystyle{ E[\cdot] }[/math] denotes the expectation operator. In the general case, the coefficients [math]\displaystyle{ a_i }[/math] may be complex and may be derived for the case where w[n] and s[n] are complex as well. With a complex signal, the matrix to be solved is a Hermitian Toeplitz matrix, rather than a symmetric Toeplitz matrix. For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as:
- [math]\displaystyle{ \begin{align} E \left [e^2[n] \right ] &= E \left [ (x[n]-s[n])^2 \right ]\\ &= E \left [ x^2[n] \right ] + E \left [s^2[n] \right ] - 2E[x[n]s[n]]\\ &= E \left [ \left ( \sum_{i=0}^N a_i w[n-i] \right)^2\right ] + E \left [s^2[n] \right ] - 2E\left [\sum_{i=0}^N a_i w[n-i]s[n] \right ] \end{align} }[/math]
To find the vector [math]\displaystyle{ [a_0,\, \ldots,\, a_N] }[/math] which minimizes the expression above, calculate its derivative with respect to each [math]\displaystyle{ a_i }[/math]
- [math]\displaystyle{ \begin{align} \frac{\partial}{\partial a_i} E \left [e^2[n] \right ] &= \frac{\partial}{\partial a_i} \left \{ E \left [ \left ( \sum_{j=0}^N a_j w[n-j] \right)^2\right ] + E \left [s^2[n] \right ] - 2E\left [\sum_{j=0}^N a_j w[n-j]s[n] \right ]\right \} \\ &= 2E\left [ \left ( \sum_{j=0}^N a_j w[n-j] \right ) w[n-i] \right ] - 2E [w[n-i]s[n]] \\ &= 2 \left ( \sum_{j=0}^N E [w[n-j]w[n-i] ] a_j \right ) - 2E [ w[n-i]s[n]] \end{align} }[/math]
Assuming that w[n] and s[n] are each stationary and jointly stationary, the sequences [math]\displaystyle{ R_w[m] }[/math] and [math]\displaystyle{ R_{ws}[m] }[/math] known respectively as the autocorrelation of w[n] and the cross-correlation between w[n] and s[n] can be defined as follows:
- [math]\displaystyle{ \begin{align} R_w[m] &= E\{w[n]w[n+m]\} \\ R_{ws}[m] &= E\{w[n]s[n+m]\} \end{align} }[/math]
The derivative of the MSE may therefore be rewritten as:
- [math]\displaystyle{ \frac{\partial}{\partial a_i} E \left [e^2[n] \right ]= 2 \left ( \sum_{j=0}^{N} R_w[j-i] a_j \right ) - 2 R_{ws}[i] \qquad i = 0,\cdots, N. }[/math]
Note that for real [math]\displaystyle{ w[n] }[/math], the autocorrelation is symmetric: [math]\displaystyle{ R_w[j-i] = R_w[i-j] . }[/math] Setting the derivative equal to zero results in:
- [math]\displaystyle{ \sum_{j=0}^N R_w[j-i] a_j = R_{ws}[i] \qquad i = 0,\cdots, N. }[/math]
which can be rewritten (using the above symmetric property) in matrix form
- [math]\displaystyle{ \underbrace{\begin{bmatrix} R_w[0] & R_w[1] & \cdots & R_w[N] \\ R_w[1] & R_w[0] & \cdots & R_w[N-1] \\ \vdots & \vdots & \ddots & \vdots \\ R_w[N] & R_w[N-1] & \cdots & R_w[0] \end{bmatrix}}_{\mathbf{T}} \underbrace{\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{bmatrix}}_{\mathbf{a}} = \underbrace{\begin{bmatrix} R_{ws}[0] \\R_{ws}[1] \\ \vdots \\ R_{ws}[N] \end{bmatrix}}_{\mathbf{v}} }[/math]
These equations are known as the Wiener–Hopf equations. The matrix T appearing in the equation is a symmetric Toeplitz matrix. Under suitable conditions on [math]\displaystyle{ R_w }[/math], these matrices are known to be positive definite and therefore non-singular, yielding a unique solution to the determination of the Wiener filter coefficient vector, [math]\displaystyle{ \mathbf{a} = \mathbf{T}^{-1}\mathbf{v} }[/math]. Furthermore, there exists an efficient algorithm to solve such Wiener–Hopf equations, known as the Levinson–Durbin algorithm, so an explicit inversion of T is not required.
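These equations translate directly into code: estimate the correlation sequences from sample data, then solve the Toeplitz system. The Python sketch below uses SciPy's Levinson-recursion-based solver; the AR(1) signal model and all numerical parameters are illustrative assumptions, not part of the text above.

import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(w, s, order):
    # Estimate the (order + 1) FIR Wiener taps from input w and desired signal s.
    n = len(w)
    # Biased sample estimates of R_w[m] = E{w[n]w[n+m]} and R_ws[m] = E{w[n]s[n+m]}.
    R_w = np.array([np.dot(w[: n - m], w[m:]) / n for m in range(order + 1)])
    R_ws = np.array([np.dot(w[: n - m], s[m:]) / n for m in range(order + 1)])
    # Solve T a = v; solve_toeplitz uses the Levinson recursion, so T is never
    # formed or inverted explicitly.
    return solve_toeplitz(R_w, R_ws)

# Assumed example: denoise an AR(1) signal observed in white noise.
rng = np.random.default_rng(1)
n = 10000
s = np.zeros(n)
for i in range(1, n):
    s[i] = 0.9 * s[i - 1] + rng.normal()
w = s + rng.normal(size=n)              # noisy observation fed to the filter
a = fir_wiener(w, s, order=20)
s_hat = np.convolve(w, a)[:n]           # filter output x[n] = sum_i a_i w[n-i]
print("MSE before:", np.mean((w - s) ** 2), "after:", np.mean((s_hat - s) ** 2))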
In some articles, the cross-correlation function is defined in the opposite way: [math]\displaystyle{ R_{sw}[m] = E\{w[n]s[n+m]\} . }[/math] Then, the [math]\displaystyle{ \mathbf{v} }[/math] vector will contain [math]\displaystyle{ R_{sw}[0] \ldots R_{sw}[N] }[/math]; this is just a difference in notation.
Whichever notation is used, note that for real [math]\displaystyle{ w[n], s[n] }[/math]: [math]\displaystyle{ R_{sw}[k] = R_{ws}[-k] . }[/math]
Relationship to the least squares filter
The realization of the causal Wiener filter closely resembles the solution to the least squares estimate, except in the signal processing domain. The least squares solution for input matrix [math]\displaystyle{ \mathbf{X} }[/math] and output vector [math]\displaystyle{ \mathbf{y} }[/math] is
- [math]\displaystyle{ \boldsymbol{\hat\beta} = (\mathbf{X} ^\mathbf{T}\mathbf{X})^{-1}\mathbf{X}^{\mathbf{T}}\boldsymbol y . }[/math]
The FIR Wiener filter is related to the least mean squares (LMS) filter, but minimizing the error criterion of the latter does not rely on cross-correlations or auto-correlations; its solution converges to the Wiener filter solution, as the sketch below illustrates.
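As a rough numerical check of this convergence, the sketch below runs the standard LMS update, which needs no correlation estimates, alongside the closed-form Wiener solution on the same data; the step size, filter order, and signal model are illustrative assumptions.

import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(2)
n, order, mu = 20000, 4, 0.005  # assumed data length, filter order N, LMS step size
s = np.zeros(n)
for i in range(1, n):
    s[i] = 0.9 * s[i - 1] + rng.normal()
w = s + rng.normal(size=n)

# LMS: a <- a + mu * e[n] * [w[n], ..., w[n-N]], with e[n] = s[n] - (filter output).
a_lms = np.zeros(order + 1)
for k in range(order, n):
    w_vec = w[k - order : k + 1][::-1]
    a_lms += mu * (s[k] - a_lms @ w_vec) * w_vec

# Closed-form Wiener taps from sample correlations, for comparison.
R_w = np.array([np.dot(w[: n - m], w[m:]) / n for m in range(order + 1)])
R_ws = np.array([np.dot(w[: n - m], s[m:]) / n for m in range(order + 1)])
print("LMS taps:   ", np.round(a_lms, 3))
print("Wiener taps:", np.round(solve_toeplitz(R_w, R_ws), 3))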
Complex signals
For complex signals, the derivation of the complex Wiener filter is performed by minimizing [math]\displaystyle{ E \left [|e[n]|^2 \right ] = E \left [e[n]e^*[n] \right ] }[/math]. This involves computing partial derivatives with respect to both the real and imaginary parts of [math]\displaystyle{ a_i }[/math], and requiring them both to be zero.
The resulting Wiener–Hopf equations are:
- [math]\displaystyle{ \sum_{j=0}^N R_w[i-j] a_j^* = R_{ws}[i] \qquad i = 0,\cdots, N, }[/math]
which can be rewritten in matrix form:
- [math]\displaystyle{ \underbrace{\begin{bmatrix} R_w[0] & R_w^*[1] & \cdots & R_w^*[N-1] & R_w^*[N] \\ R_w[1] & R_w[0] & \cdots& R_w^*[N-2] & R_w^*[N-1] \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ R_w[N-1] & R_w[N-2] & \cdots & R_w[0] & R_w^*[1] \\ R_w[N] & R_w[N-1] & \cdots & R_w[1] & R_w[0] \end{bmatrix}}_{\mathbf{T}} \underbrace{\begin{bmatrix} a_0^* \\ a_1^* \\ \vdots \\a_{N-1}^* \\ a_N^* \end{bmatrix}}_{\mathbf{a^*}} = \underbrace{\begin{bmatrix} R_{ws}[0] \\R_{ws}[1] \\ \vdots\\ R_{ws}[N-1] \\ R_{ws}[N] \end{bmatrix}}_{\mathbf{v}} }[/math]
Note here that: [math]\displaystyle{ \begin{align} R_w[-k] &= R_w^*[k] \\ R_{sw}[k] &= R_{ws}^*[-k] \end{align} }[/math]
The Wiener coefficient vector is then computed as: [math]\displaystyle{ \mathbf{a} = {(\mathbf{T}^{-1}\mathbf{v})}^* }[/math]
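Numerically, this system can be assembled and solved directly. A minimal Python sketch, assuming the correlation sequences R_w[0..N] and R_ws[0..N] are already available as complex arrays:

import numpy as np
from scipy.linalg import toeplitz

def complex_wiener(R_w, R_ws):
    # Hermitian Toeplitz T: first column R_w[0..N], first row R_w*[0..N],
    # using the symmetry R_w[-k] = R_w*[k].
    T = toeplitz(R_w, np.conj(R_w))
    # Solve T a* = v, then conjugate: a = (T^{-1} v)*.
    return np.conj(np.linalg.solve(T, R_ws))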
Applications
The Wiener filter has a variety of applications in signal processing, image processing,[3] control systems, and digital communications. These applications generally fall into one of four main categories:
- system identification
- deconvolution
- noise reduction
- signal detection
For example, the Wiener filter can be used in image processing to remove noise from a picture, e.g. with the Mathematica function:
WienerFilter[image,2]
which applies a Wiener filter with a local neighborhood of radius 2 to image.
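A comparable operation is available in Python via SciPy's scipy.signal.wiener, which applies a local adaptive Wiener filter; the synthetic image and window size below are illustrative assumptions:

import numpy as np
from scipy.signal import wiener

# Assumed example: a synthetic gradient "image" corrupted by additive Gaussian noise.
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = wiener(noisy, mysize=5)  # local statistics over an assumed 5x5 window
print("MSE before:", np.mean((noisy - clean) ** 2), "after:", np.mean((denoised - clean) ** 2))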
It is commonly used to denoise audio signals, especially speech, as a preprocessor before speech recognition.
History
The filter was proposed by Norbert Wiener during the 1940s and published in 1949.[4][5] The discrete-time equivalent of Wiener's work was derived independently by Andrey Kolmogorov and published in 1941.[6] Hence the theory is often called the Wiener–Kolmogorov filtering theory (cf. Kriging). The Wiener filter was the first statistically designed filter to be proposed and subsequently gave rise to many others including the Kalman filter.
See also
- Wiener deconvolution
- least mean squares filter
- similarities between Wiener and LMS
- linear prediction
- MMSE estimator
- Kalman filter
- generalized Wiener filter
- matched filter
- information field theory
References
- ↑ Brown, Robert Grover; Hwang, Patrick Y.C. (1996). Introduction to Random Signals and Applied Kalman Filtering (3 ed.). New York: John Wiley & Sons. ISBN 978-0-471-12839-7.
- ↑ Welch, Lloyd R. "Wiener–Hopf Theory". http://csi.usc.edu/PDF/wienerhopf.pdf.
- ↑ Boulfelfel, D.; Rangayyan, R. M.; Hahn, L. J.; Kloiber, R. (1994). "Three-dimensional restoration of single photon emission computed tomography images". IEEE Transactions on Nuclear Science 41 (5): 1746–1754. doi:10.1109/23.317385. Bibcode: 1994ITNS...41.1746B.
- ↑ Wiener, N. (February 1942). "The Interpolation, Extrapolation and Smoothing of Stationary Time Series". Report of the Services 19, Research Project DIC-6037, MIT.
- ↑ Wiener, Norbert (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series. New York: Wiley. ISBN 978-0-262-73005-1.
- ↑ Kolmogorov, A.N. (1941). "Stationary sequences in Hilbert space" (in Russian). Bull. Moscow Univ. 2 (6): 1–40. English translation in: Kailath, T., ed. (1977). Linear Least Squares Estimation. Dowden, Hutchinson & Ross. ISBN 0-87933-098-8.
Further reading
- Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice-Hall, NJ, 2000, ISBN:978-0-13-022464-4.
External links
- Mathematica WienerFilter function
Original source: https://en.wikipedia.org/wiki/Wiener_filter