Wiener deconvolution

Figure: from left, the original image, the blurred image, and the image deblurred using Wiener deconvolution.

In mathematics, Wiener deconvolution is an application of the Wiener filter to the noise problems inherent in deconvolution. It works in the frequency domain, attempting to minimize the impact of deconvolved noise at frequencies which have a poor signal-to-noise ratio.

The Wiener deconvolution method has widespread use in image deconvolution applications, as the frequency spectrum of most visual images is fairly well behaved and may be estimated easily.

Wiener deconvolution is named after Norbert Wiener.

Definition

Given a system:

[math]\displaystyle{ \ y(t) = (h*x)(t) + n(t) }[/math]

where [math]\displaystyle{ * }[/math] denotes convolution and:

  • [math]\displaystyle{ \ x(t) }[/math] is some original signal (unknown) at time [math]\displaystyle{ \ t }[/math].
  • [math]\displaystyle{ \ h(t) }[/math] is the known impulse response of a linear time-invariant system
  • [math]\displaystyle{ \ n(t) }[/math] is some unknown additive noise, independent of [math]\displaystyle{ \ x(t) }[/math]
  • [math]\displaystyle{ \ y(t) }[/math] is our observed signal
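
As a concrete illustration of this observation model, the following sketch simulates [math]\displaystyle{ \ y(t) = (h*x)(t) + n(t) }[/math] in Python with NumPy (the rectangular test signal, the Gaussian blur kernel and the noise level are arbitrary choices made here for illustration, not part of the original description):

  import numpy as np

  rng = np.random.default_rng(0)
  n_samples = 256
  t = np.arange(n_samples)

  # Original (unknown) signal x(t): a simple rectangular pulse.
  x = np.where((t > 80) & (t < 150), 1.0, 0.0)

  # Known impulse response h(t): a Gaussian blur, wrapped so that it is
  # centred at sample 0 (keeps the circular convolution free of a shift).
  d = np.minimum(t, n_samples - t)
  h = np.exp(-0.5 * (d / 4.0) ** 2)
  h /= h.sum()

  # Observed signal y(t) = (h*x)(t) + n(t): circular convolution via the
  # FFT, plus additive white noise that is independent of x.
  sigma = 0.02
  noise = sigma * rng.standard_normal(n_samples)
  y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x))) + noise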

Our goal is to find some [math]\displaystyle{ \ g(t) }[/math] so that we can estimate [math]\displaystyle{ \ x(t) }[/math] as follows:

[math]\displaystyle{ \ \hat{x}(t) = (g*y)(t) }[/math]

where [math]\displaystyle{ \ \hat{x}(t) }[/math] is an estimate of [math]\displaystyle{ \ x(t) }[/math] that minimizes the mean square error

[math]\displaystyle{ \ \epsilon(t) = \mathbb{E} \left| x(t) - \hat{x}(t) \right|^2 }[/math],

with [math]\displaystyle{ \ \mathbb{E} }[/math] denoting the expectation. The Wiener deconvolution filter provides such a [math]\displaystyle{ \ g(t) }[/math]. The filter is most easily described in the frequency domain:

[math]\displaystyle{ \ G(f) = \frac{H^*(f)S(f)}{ |H(f)|^2 S(f) + N(f) } }[/math]

where:

  • [math]\displaystyle{ \ G(f) }[/math] and [math]\displaystyle{ \ H(f) }[/math] are the Fourier transforms of [math]\displaystyle{ \ g(t) }[/math] and [math]\displaystyle{ \ h(t) }[/math],
  • [math]\displaystyle{ \ S(f) = \mathbb{E}|X(f)|^2 }[/math] is the mean power spectral density of the original signal [math]\displaystyle{ \ x(t) }[/math],
  • [math]\displaystyle{ \ N(f) = \mathbb{E}|V(f)|^2 }[/math] is the mean power spectral density of the noise [math]\displaystyle{ \ n(t) }[/math],
  • [math]\displaystyle{ X(f) }[/math], [math]\displaystyle{ Y(f) }[/math], and [math]\displaystyle{ V(f) }[/math] are the Fourier transforms of [math]\displaystyle{ x(t) }[/math], [math]\displaystyle{ y(t) }[/math], and [math]\displaystyle{ n(t) }[/math], respectively,
  • the superscript [math]\displaystyle{ {}^* }[/math] denotes complex conjugation.
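
Written as code, the filter is a single per-frequency expression. A minimal sketch (the function name wiener_filter and its argument names are ours; in practice [math]\displaystyle{ \ S(f) }[/math] and [math]\displaystyle{ \ N(f) }[/math] would be estimates rather than exact quantities):

  import numpy as np

  def wiener_filter(H, S, N):
      """Wiener deconvolution filter G(f) = H*(f) S(f) / (|H(f)|^2 S(f) + N(f)).

      H -- complex frequency response of the blurring system
      S -- (estimated) mean power spectral density of the original signal
      N -- (estimated) mean power spectral density of the noise
      """
      return np.conj(H) * S / (np.abs(H) ** 2 * S + N)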

The filtering operation may be carried out either in the time domain, as above, or in the frequency domain:

[math]\displaystyle{ \ \hat{X}(f) = G(f)Y(f) }[/math]

and then performing an inverse Fourier transform on [math]\displaystyle{ \ \hat{X}(f) }[/math] to obtain [math]\displaystyle{ \ \hat{x}(t) }[/math].

Note that in the case of images, the arguments [math]\displaystyle{ \ t }[/math] and [math]\displaystyle{ \ f }[/math] above become two-dimensional; however, the result is the same.
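
Putting the pieces together, here is a self-contained sketch of frequency-domain Wiener deconvolution applied to the simulated observation from the earlier snippet (the same arbitrary test signal, blur and noise level; the true spectrum of x is used as a stand-in for [math]\displaystyle{ \ S(f) }[/math], which would normally have to be estimated):

  import numpy as np

  rng = np.random.default_rng(0)
  n_samples = 256
  t = np.arange(n_samples)

  # Re-create the blurred, noisy observation from the earlier sketch.
  x = np.where((t > 80) & (t < 150), 1.0, 0.0)
  h = np.exp(-0.5 * (np.minimum(t, n_samples - t) / 4.0) ** 2)
  h /= h.sum()
  sigma = 0.02
  y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x))) + sigma * rng.standard_normal(n_samples)

  # Wiener deconvolution in the frequency domain: X_hat(f) = G(f) Y(f).
  H, Y = np.fft.fft(h), np.fft.fft(y)
  S = np.abs(np.fft.fft(x)) ** 2                  # signal PSD (known here only because x is simulated)
  N = np.full(n_samples, sigma ** 2 * n_samples)  # flat PSD of the white noise
  G = np.conj(H) * S / (np.abs(H) ** 2 * S + N)
  x_hat = np.real(np.fft.ifft(G * Y))             # estimate of the original signal x(t)

For a two-dimensional image the same code applies with numpy.fft.fft2 and numpy.fft.ifft2 in place of the one-dimensional transforms.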

Interpretation

The operation of the Wiener filter becomes apparent when the filter equation above is rewritten:

[math]\displaystyle{ \begin{align} G(f) & = \frac{H^*(f)S(f)}{ |H(f)|^2 S(f) + N(f) } \\ & = \frac{1}{H(f)} \left[ \frac{ |H(f)|^2 S(f) }{ |H(f)|^2 S(f) + N(f) } \right] \\ & = \frac{1}{H(f)} \left[ \frac{ 1 }{ 1 + N(f)/\big(|H(f)|^2 S(f)\big)} \right] \\ & = \frac{1}{H(f)} \left[ \frac{ 1 }{ 1 + 1/(|H(f)|^2 \mathrm{SNR}(f))} \right] \end{align} }[/math]

Here, [math]\displaystyle{ \ 1/H(f) }[/math] is the inverse of the original system, [math]\displaystyle{ \ \mathrm{SNR}(f) = S(f)/N(f) }[/math] is the signal-to-noise ratio, and [math]\displaystyle{ \ |H(f)|^2 \mathrm{SNR}(f) }[/math] is the ratio of the pure filtered signal to noise spectral density. When there is zero noise (i.e. infinite signal-to-noise), the term inside the square brackets equals 1, which means that the Wiener filter is simply the inverse of the system, as we might expect. However, as the noise at certain frequencies increases, the signal-to-noise ratio drops, so the term inside the square brackets also drops. This means that the Wiener filter attenuates frequencies according to their filtered signal-to-noise ratio.
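
The rewritten form is algebraically identical to the filter defined earlier; a quick numerical check of this (with arbitrary per-frequency values chosen here only for the comparison):

  import numpy as np

  rng = np.random.default_rng(1)

  # Arbitrary per-frequency values, just to compare the two expressions.
  H = rng.standard_normal(8) + 1j * rng.standard_normal(8)
  S = rng.uniform(0.5, 2.0, 8)      # signal PSD
  N = rng.uniform(0.01, 0.5, 8)     # noise PSD
  snr = S / N

  direct    = np.conj(H) * S / (np.abs(H) ** 2 * S + N)
  rewritten = (1.0 / H) * (1.0 / (1.0 + 1.0 / (np.abs(H) ** 2 * snr)))

  assert np.allclose(direct, rewritten)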

The Wiener filter equation above requires us to know the spectral content of a typical image, and also that of the noise. Often, we do not have access to these exact quantities, but we may be in a situation where good estimates can be made. For instance, in the case of photographic images, the signal (the original image) typically has strong low frequencies and weak high frequencies, while in many cases the noise content will be relatively flat with frequency.
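
One crude way to form such estimates, offered purely as an illustrative sketch (the 1/f power-law model for the image spectrum, the flat noise model, and the function name estimated_spectra with its parameters are our own choices, not prescribed by the text):

  import numpy as np

  def estimated_spectra(shape, signal_power=1.0, noise_power=1e-3, alpha=2.0):
      """Rough stand-ins for S(f) and N(f) for a photographic image.

      Models the image spectrum as falling off with spatial frequency,
      S(f) ~ 1/|f|**alpha, and the noise spectrum as flat (white noise).
      """
      fy = np.fft.fftfreq(shape[0])[:, None]
      fx = np.fft.fftfreq(shape[1])[None, :]
      f = np.hypot(fx, fy)
      f[0, 0] = f[f > 0].min()          # avoid division by zero at DC
      S = signal_power / f ** alpha
      N = np.full(shape, noise_power)
      return S, N

Such estimates would then stand in for the exact [math]\displaystyle{ \ S(f) }[/math] and [math]\displaystyle{ \ N(f) }[/math] in the filter formula.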

Derivation

As mentioned above, we want to produce an estimate of the original signal that minimizes the mean square error, which may be expressed:

[math]\displaystyle{ \ \epsilon(f) = \mathbb{E} \left| X(f) - \hat{X}(f) \right|^2 }[/math] .

The equivalence to the previous definition of [math]\displaystyle{ \epsilon }[/math] can be derived using the Plancherel theorem or Parseval's theorem for the Fourier transform.

If we substitute in the expression for [math]\displaystyle{ \ \hat{X}(f) }[/math], the above can be rearranged to

[math]\displaystyle{ \begin{align} \epsilon(f) & = \mathbb{E} \left| X(f) - G(f)Y(f) \right|^2 \\ & = \mathbb{E} \left| X(f) - G(f) \left[ H(f)X(f) + V(f) \right] \right|^2 \\ & = \mathbb{E} \big| \left[ 1 - G(f)H(f) \right] X(f) - G(f)V(f) \big|^2 \end{align} }[/math]

If we expand the quadratic, we get the following:

[math]\displaystyle{ \begin{align} \epsilon(f) & = \Big[ 1-G(f)H(f) \Big] \Big[ 1-G(f)H(f) \Big]^*\, \mathbb{E}|X(f)|^2 \\ & {} - \Big[ 1-G(f)H(f) \Big] G^*(f)\, \mathbb{E}\Big\{X(f)V^*(f)\Big\} \\ & {} - G(f) \Big[ 1-G(f)H(f) \Big]^*\, \mathbb{E}\Big\{V(f)X^*(f)\Big\} \\ & {} + G(f) G^*(f)\, \mathbb{E}|V(f)|^2 \end{align} }[/math]

However, we are assuming that the noise is independent of the signal, therefore:

[math]\displaystyle{ \ \mathbb{E}\Big\{X(f)V^*(f)\Big\} = \mathbb{E}\Big\{V(f)X^*(f)\Big\} = 0 }[/math]

Substituting the power spectral densities [math]\displaystyle{ \ S(f) }[/math] and [math]\displaystyle{ \ N(f) }[/math], we have:

[math]\displaystyle{ \epsilon(f) = \Big[ 1-G(f)H(f) \Big]\Big[ 1-G(f)H(f) \Big]^ * S(f) + G(f)G^*(f)N(f) }[/math]

To find the minimum error value, we calculate the Wirtinger derivative with respect to [math]\displaystyle{ \ G(f) }[/math] and set it equal to zero.

[math]\displaystyle{ \ \frac{d\epsilon(f)}{dG(f)} = 0 \Rightarrow G^*(f)N(f) - H(f)\Big[1 - G(f)H(f)\Big]^* S(f) = 0 }[/math]

This final equality can be rearranged to give the Wiener filter: expanding the conjugate and solving for [math]\displaystyle{ \ G^*(f) }[/math] gives

[math]\displaystyle{ \ G^*(f)\Big[ |H(f)|^2 S(f) + N(f) \Big] = H(f)S(f) \quad\Rightarrow\quad G^*(f) = \frac{H(f)S(f)}{ |H(f)|^2 S(f) + N(f) } }[/math]

and, since [math]\displaystyle{ \ S(f) }[/math] and [math]\displaystyle{ \ N(f) }[/math] are real, taking the complex conjugate of both sides yields the Wiener deconvolution filter stated above.
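
As a sanity check on the result, [math]\displaystyle{ \ \epsilon(f) }[/math] can also be minimised numerically at a single frequency: for fixed scalar values of [math]\displaystyle{ \ H(f) }[/math], [math]\displaystyle{ \ S(f) }[/math] and [math]\displaystyle{ \ N(f) }[/math] (chosen arbitrarily below), a brute-force search over complex [math]\displaystyle{ \ G }[/math] agrees with the closed-form Wiener expression:

  import numpy as np

  # Arbitrary scalar values for one frequency bin.
  H, S, N = 0.7 - 0.3j, 2.0, 0.4

  # epsilon(f) = |1 - G H|^2 S + |G|^2 N, evaluated on a grid of complex G.
  re, im = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
  G = re + 1j * im
  eps = np.abs(1 - G * H) ** 2 * S + np.abs(G) ** 2 * N

  G_best = G.flat[np.argmin(eps)]                    # grid minimiser
  G_wiener = np.conj(H) * S / (abs(H) ** 2 * S + N)  # closed-form Wiener filter
  print(G_best, G_wiener)   # the two agree to within the grid spacing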
