Lifting scheme

Figure: a lifting sequence consisting of two steps

The lifting scheme is a technique for both designing wavelets and performing the discrete wavelet transform (DWT). In an implementation, it is often worthwhile to merge these steps and design the wavelet filters while performing the wavelet transform. This is then called the second-generation wavelet transform. The technique was introduced by Wim Sweldens.[1]

The lifting scheme factorizes any discrete wavelet transform with finite filters into a series of elementary convolution operators, so-called lifting steps, which reduces the number of arithmetic operations by nearly a factor of two. Treatment of signal boundaries is also simplified.[2]

The discrete wavelet transform applies several filters separately to the same signal. In contrast, the lifting scheme divides the signal like a zipper and then applies a series of convolution–accumulate operations across the divided signals.

Basics

The simplest version of a forward wavelet transform expressed in the lifting scheme is shown in the figure above. [math]\displaystyle{ P }[/math] denotes the predict step, which will be considered in isolation here. The predict step calculates the wavelet function in the wavelet transform; this is a high-pass filter. The update step calculates the scaling function, which results in a smoother version of the data.
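
As a minimal illustration of these two steps (a sketch added for concreteness, not taken from the references; the function name `haar_lifting_forward` is chosen here for convenience), one level of the unnormalized Haar wavelet can be written as a split followed by one predict and one update step:

```python
def haar_lifting_forward(x):
    """One level of unnormalized Haar lifting: split, predict, update.

    x: sequence with an even number of samples.
    Returns (s, d): smooth (low-pass) and detail (high-pass) coefficients.
    """
    even = list(x[0::2])                      # split ("lazy wavelet transform")
    odd = list(x[1::2])

    # Predict: each odd sample is predicted by its even neighbour;
    # only the prediction error is kept (high-pass / wavelet coefficients).
    d = [o - e for o, e in zip(odd, even)]

    # Update: lift the even samples with half the detail, turning them
    # into local averages (low-pass / scaling coefficients).
    s = [e + di / 2 for e, di in zip(even, d)]
    return s, d


print(haar_lifting_forward([1, 2, 3, 4, 6, 9]))   # ([1.5, 3.5, 7.5], [1, 1, 3])
```

The detail coefficients play the role of the high-pass output and the lifted even samples the role of the low-pass output, matching the predict and update steps described above.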

As mentioned above, the lifting scheme is an alternative technique for performing the DWT using biorthogonal wavelets. In order to perform the DWT using the lifting scheme, the corresponding lifting and scaling steps must be derived from the biorthogonal wavelets. The analysis filters ([math]\displaystyle{ g, h }[/math]) of the particular wavelet are first written as a polyphase matrix

[math]\displaystyle{ P(z) = \begin{bmatrix} h_\text{even}(z) & g_\text{even}(z) \\ h_\text{odd}(z) & g_\text{odd}(z) \end{bmatrix}, }[/math]

where [math]\displaystyle{ \det P(z) = z^{-m} }[/math].

The polyphase matrix is a 2 × 2 matrix containing the analysis low-pass and high-pass filters, each split up into their even and odd polynomial coefficients and normalized. From here the matrix is factored into a series of 2 × 2 upper- and lower-triangular matrices, each with diagonal entries equal to 1. The upper-triangular matrices contain the coefficients for the predict steps, and the lower-triangular matrices contain the coefficients for the update steps. A matrix consisting of all zeros with the exception of the diagonal values may be extracted to derive the scaling-step coefficients. The polyphase matrix is factored into the form

[math]\displaystyle{ P(z) = \begin{bmatrix} 1 & a(1 + z^{-1}) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ b(1 + z) & 1 \end{bmatrix}, }[/math]

where [math]\displaystyle{ a }[/math] is the coefficient for the predict step, and [math]\displaystyle{ b }[/math] is the coefficient for the update step.
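
As a quick symbolic check of this factorization (a sketch using SymPy, not part of the cited derivation), multiplying the predict and update matrices back together confirms that the resulting polyphase matrix has determinant 1, so the corresponding filter bank is invertible:

```python
import sympy as sp

z, a, b = sp.symbols('z a b')

predict = sp.Matrix([[1, a * (1 + z**-1)],
                     [0, 1]])                 # upper triangular: predict step
update = sp.Matrix([[1, 0],
                    [b * (1 + z), 1]])        # lower triangular: update step

P = predict * update                          # polyphase matrix of the lifted bank
print(sp.simplify(P.det()))                   # prints 1: perfect reconstruction
```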

An example of a more complicated extraction having multiple predict and update steps, as well as scaling steps, is shown below; [math]\displaystyle{ a }[/math] is the coefficient for the first predict step, [math]\displaystyle{ b }[/math] is the coefficient for the first update step, [math]\displaystyle{ c }[/math] is the coefficient for the second predict step, [math]\displaystyle{ d }[/math] is the coefficient for the second update step, [math]\displaystyle{ k_1 }[/math] is the odd-sample scaling coefficient, and [math]\displaystyle{ k_2 }[/math] is the even-sample scaling coefficient:

[math]\displaystyle{ P(z) = \begin{bmatrix} 1 & a(1 + z^{-1}) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ b(1 + z) & 1 \end{bmatrix} \begin{bmatrix} 1 & c(1 + z^{-1}) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ d(1 + z) & 1 \end{bmatrix} \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix}. }[/math]

According to matrix theory, any matrix having polynomial entries and a determinant of 1 can be factored as described above. Therefore, every wavelet transform with finite filters can be decomposed into a series of lifting and scaling steps. Daubechies and Sweldens discuss lifting-step extraction in further detail.[3]

CDF 9/7 filter

Main page: Cohen–Daubechies–Feauveau wavelet

To perform the CDF 9/7 transform, a total of four lifting steps are required: two predict and two update steps. The lifting factorization leads to the following sequence of filtering steps.[3]

[math]\displaystyle{ d_l = d_l + a (s_l + s_{l+1}), }[/math]
[math]\displaystyle{ s_l = s_l + b (d_l + d_{l-1}), }[/math]
[math]\displaystyle{ d_l = d_l + c (s_l + s_{l+1}), }[/math]
[math]\displaystyle{ s_l = s_l + d (d_l + d_{l-1}), }[/math]
[math]\displaystyle{ d_l = k_1 d_l, }[/math]
[math]\displaystyle{ s_l = k_2 s_l. }[/math]
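
The following Python sketch is one possible transcription of these six steps; the numerical coefficients are the commonly quoted approximations from the Daubechies–Sweldens factorization (named `alpha`…`delta` and `zeta` here to avoid clashing with the signal names), and the boundary handling and scaling convention are assumptions of this sketch rather than part of any standard.

```python
def cdf97_forward(s, d):
    """One level of CDF 9/7 lifting applied to pre-split samples.

    s: even samples s_l, d: odd samples d_l (lists of equal length n).
    Indices are clamped at the ends; real codecs use symmetric extension.
    """
    # Approximate lifting coefficients (Daubechies & Sweldens, 1998);
    # alpha..delta correspond to a..d in the text, zeta to the scaling.
    alpha = -1.586134342
    beta  = -0.052980118
    gamma =  0.882911076
    delta =  0.443506852
    zeta  =  1.149604398    # normalization conventions differ between sources

    n = len(s)
    at = lambda x, i: x[min(max(i, 0), n - 1)]          # clamped indexing

    d = [d[l] + alpha * (at(s, l) + at(s, l + 1)) for l in range(n)]  # predict 1
    s = [s[l] + beta  * (at(d, l) + at(d, l - 1)) for l in range(n)]  # update 1
    d = [d[l] + gamma * (at(s, l) + at(s, l + 1)) for l in range(n)]  # predict 2
    s = [s[l] + delta * (at(d, l) + at(d, l - 1)) for l in range(n)]  # update 2
    d = [x / zeta for x in d]                                         # k1 = 1/zeta
    s = [x * zeta for x in s]                                         # k2 = zeta
    return s, d
```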

Properties

Perfect reconstruction

Every transform by the lifting scheme can be inverted. Every perfect-reconstruction filter bank can be decomposed into lifting steps by the Euclidean algorithm; that is, "lifting-decomposable filter bank" and "perfect-reconstruction filter bank" denote the same class. Any two perfect-reconstruction filter banks can be transformed into each other by a sequence of lifting steps. More precisely, if [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] are polyphase matrices with the same determinant, then the lifting sequence from [math]\displaystyle{ P }[/math] to [math]\displaystyle{ Q }[/math] is the same as the one from the lazy polyphase matrix [math]\displaystyle{ I }[/math] to [math]\displaystyle{ P^{-1}\cdot Q }[/math].
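
In code, inversion amounts to running the same lifting steps backwards with opposite signs. A small sketch, reusing the hypothetical `haar_lifting_forward` from the Basics section above:

```python
def haar_lifting_inverse(s, d):
    """Invert haar_lifting_forward by undoing the steps in reverse order."""
    even = [si - di / 2 for si, di in zip(s, d)]   # undo the update step
    odd = [di + e for di, e in zip(d, even)]       # undo the predict step
    x = [0] * (2 * len(even))                      # inverse lazy transform (merge)
    x[0::2], x[1::2] = even, odd
    return x


x = [1, 2, 3, 4, 6, 9]
s, d = haar_lifting_forward(x)
assert haar_lifting_inverse(s, d) == x             # perfect reconstruction
```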

Speedup

The lifting factorization speeds up the transform by nearly a factor of two. This is only possible because lifting is restricted to perfect-reconstruction filter banks; in effect, lifting squeezes out the redundancy that perfect reconstruction imposes on the filter pair.

The transformation can be performed immediately in the memory of the input data (in place, in situ) with only constant memory overhead.
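
A sketch of how this looks for the Haar example (an illustration with an assumed function name, not a prescribed API): the input buffer is modified directly, with details stored in the odd slots and smoothed values in the even slots.

```python
def haar_lifting_inplace(x):
    """In-place (in situ) Haar lifting on a single mutable buffer x.

    Afterwards x[0::2] holds the smooth coefficients and x[1::2] the details;
    no auxiliary arrays are allocated.
    """
    for i in range(0, len(x) - 1, 2):
        x[i + 1] -= x[i]            # predict: detail overwrites the odd slot
        x[i] += x[i + 1] / 2        # update: average overwrites the even slot
    return x
```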

Non-linearities

The convolution operations can be replaced by any other operation. For perfect reconstruction only the invertibility of the addition operation is relevant. This way rounding errors in convolution can be tolerated and bit-exact reconstruction is possible. However, the numeric stability may be reduced by the non-linearities. This must be taken into account if the transformed signal is processed further, as in lossy compression. Although every reconstructable filter bank can be expressed in terms of lifting steps, a general description of the lifting steps is not obvious from a description of a wavelet family. However, for instance, for simple cases of the Cohen–Daubechies–Feauveau wavelets, there is an explicit formula for their lifting steps.
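
A standard example of such a non-linear but still invertible modification is the integer-to-integer Haar transform (often called the S transform): the halving in the update step is replaced by an integer floor division. The rounding makes the step non-linear, yet reconstruction is bit exact because the inverse subtracts exactly the same rounded value. A sketch (function names are illustrative):

```python
def int_haar_forward(x):
    """Integer-to-integer Haar lifting (S transform); all outputs are integers."""
    even, odd = list(x[0::2]), list(x[1::2])
    d = [o - e for o, e in zip(odd, even)]             # predict (exact)
    s = [e + (di >> 1) for e, di in zip(even, d)]      # update with floor rounding
    return s, d


def int_haar_inverse(s, d):
    even = [si - (di >> 1) for si, di in zip(s, d)]    # remove the same rounded value
    odd = [di + e for di, e in zip(d, even)]
    x = [0] * (2 * len(even))
    x[0::2], x[1::2] = even, odd
    return x


assert int_haar_inverse(*int_haar_forward([5, 8, 13, 2])) == [5, 8, 13, 2]
```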

Increasing vanishing moments, stability, and regularity

Lifting modifies biorthogonal filters in order to increase the number of vanishing moments of the resulting biorthogonal wavelets, and hopefully their stability and regularity. Increasing the number of vanishing moments decreases the amplitude of wavelet coefficients in regions where the signal is regular, which produces a sparser representation. However, increasing the number of vanishing moments with lifting also increases the wavelet support, which is an adverse effect that increases the number of large coefficients produced by isolated singularities. Each lifting step maintains the filter biorthogonality but provides no control over the Riesz bounds and thus over the stability of the resulting biorthogonal wavelet basis. When a basis is orthogonal, the dual basis is equal to the original basis; having a dual basis that is similar to the original basis is, therefore, an indication of stability. As a result, stability is generally improved when the dual wavelets have as many vanishing moments as the original wavelets and a support of similar size. This is why a lifting procedure also increases the number of vanishing moments of the dual wavelets. It can also improve the regularity of the dual wavelet. A lifting design is computed by adjusting the number of vanishing moments; the stability and regularity of the resulting biorthogonal wavelets are measured a posteriori, hoping for the best. This is the main weakness of this wavelet design procedure.

Generalized lifting

Lifting scheme
Block diagram of the (forward) lifting scheme transform

The generalized lifting scheme was developed by Joel Solé and Philippe Salembier and published in Solé's PhD dissertation.[4] It is based on the classical lifting scheme and generalizes it by removing a restriction hidden in the scheme's structure. The classical lifting scheme has three kinds of operations:

  1. A lazy wavelet transform splits the signal [math]\displaystyle{ f_j[n] }[/math] into two new signals: the odd-samples signal denoted by [math]\displaystyle{ f_j^o[n] }[/math] and the even-samples signal denoted by [math]\displaystyle{ f_j^e[n] }[/math].
  2. A prediction step computes a prediction for the odd samples, based on the even samples (or vice versa). This prediction is subtracted from the odd samples, creating an error signal [math]\displaystyle{ g_{j+1}[n] }[/math].
  3. An update step recalibrates the low-frequency branch with some of the energy removed during subsampling. In classical lifting, this is used to "prepare" the signal for the next prediction step. It uses the detail signal [math]\displaystyle{ g_{j+1}[n] }[/math] to update the even samples [math]\displaystyle{ f_j^e[n] }[/math] (or vice versa). This update is subtracted from the even samples, producing the signal denoted by [math]\displaystyle{ f_{j+1}[n] }[/math].

The scheme is invertible due to its structure. In the receiver, the update step is computed first, with its result added back to the even samples; it is then possible to compute exactly the same prediction to add to the odd samples. In order to recover the original signal, the lazy wavelet transform has to be inverted. The generalized lifting scheme has the same three kinds of operations. However, it drops the addition–subtraction restriction of classical lifting, which has some consequences. For example, the design of all steps must itself guarantee the invertibility of the scheme, which is no longer automatic once the addition–subtraction restriction is dropped.

Definition

Generalized lifting scheme.
Block diagram of the (forward) generalized lifting scheme transform

The generalized lifting scheme is a dyadic transform that follows these rules:

  1. Deinterleaves the input into a stream of even-numbered samples and another stream of odd-numbered samples. This is sometimes referred to as a lazy wavelet transform.
  2. Computes a prediction mapping. This step tries to predict the odd samples taking into account the even ones (or vice versa). For each context there is a mapping from the space of the samples in [math]\displaystyle{ f_j^o[n] }[/math] to the space of the samples in [math]\displaystyle{ g_{j+1}[n] }[/math]. The samples (from [math]\displaystyle{ f_j^e[n] }[/math]) chosen to be the reference for [math]\displaystyle{ f_j^o[n] }[/math] are called the context. It can be expressed as
    [math]\displaystyle{ g_{j+1}[n] = P(f_j^o[n];f_j^e[n]). }[/math]
  3. Computes an update mapping. This step tries to update the even samples taking into account the odd predicted samples. It would be a kind of preparation for the next prediction step, if any. It could be expressed as
    [math]\displaystyle{ f_{j+1}[n] = U(f_j^e[n];g_{j+1}[n]). }[/math]

These mappings cannot be arbitrary functions. In order to guarantee the invertibility of the scheme itself, all mappings involved in the transform must be invertible. If the mappings map between finite sets (discrete, bounded-valued signals), this condition is equivalent to requiring that the mappings be injective (one-to-one). Moreover, if a mapping goes from one set to a set of the same cardinality, it must be bijective.

In the generalized lifting scheme the addition/subtraction restriction is avoided by including this step in the mapping. In this way the classical lifting scheme is generalized.
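
As a toy illustration (an assumption-laden sketch, not taken from the cited dissertation), the prediction mapping below is non-additive but bijective in the odd sample for every fixed even-sample context, which is exactly the condition needed for invertibility; the update mapping is omitted for brevity.

```python
def predict_map(o, e):
    """Bijective mapping of an 8-bit odd sample o for a fixed even context e.

    Any mapping that is a bijection in o for every fixed context is admissible;
    here a non-linear prediction of o is subtracted modulo 256.
    """
    return (o - (e * e // 8)) % 256


def predict_map_inverse(g, e):
    return (g + (e * e // 8)) % 256


def generalized_forward(x):
    even, odd = list(x[0::2]), list(x[1::2])
    g = [predict_map(o, e) for o, e in zip(odd, even)]   # generalized predict step
    return even, g                                       # update step omitted here


def generalized_inverse(f, g):
    even = list(f)
    odd = [predict_map_inverse(gi, e) for gi, e in zip(g, even)]
    x = [0] * (2 * len(even))
    x[0::2], x[1::2] = even, odd
    return x


sample = [12, 200, 45, 90]                               # 8-bit sample values
assert generalized_inverse(*generalized_forward(sample)) == sample
```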

Design

Some designs have been developed for the prediction-step mapping. The update-step design has not been considered as thoroughly, because it remains unclear how exactly the update step is most useful. The main application of this technique is image compression; several approaches are described in the literature.[5][6][7][8]

Applications

Lifting has been applied, among other things, to integer-to-integer transforms such as the integer fast Fourier transform,[9] to the construction of wavelets optimally matched to a given pattern,[10] to edge-avoiding wavelets for image processing,[11] and to the red–black wavelet transform for two-dimensional data.[12]

See also

  • The Feistel scheme in cryptology uses much the same idea of dividing data and alternating function application with addition. Both the Feistel scheme and the lifting scheme use this structure for symmetric encoding and decoding; see the sketch below.
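
To make the structural analogy concrete (a minimal sketch; the round function `f` is arbitrary and the names are illustrative), one Feistel round modifies one half of the data by a function of the other half, just like a lifting step, and is therefore invertible no matter what `f` is:

```python
def feistel_round(left, right, f):
    """One Feistel round: like a lifting step, only one half is modified,
    by a function of the untouched half, so the round is always invertible."""
    return right, left ^ f(right)


def feistel_round_inverse(left, right, f):
    return right ^ f(left), left


f = lambda v: (v * 2654435761) & 0xFFFFFFFF        # any round function works
assert feistel_round_inverse(*feistel_round(7, 42, f), f) == (7, 42)
```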

References

  1. Sweldens, Wim (1997). "The Lifting Scheme: A Construction of Second Generation Wavelets". SIAM Journal on Mathematical Analysis 29 (2): 511–546. doi:10.1137/S0036141095289051. https://cm-bell-labs.github.io/who/wim/papers/lift2.pdf. 
  2. Mallat, Stéphane (2009). A Wavelet Tour of Signal Processing. Academic Press. ISBN 978-0-12-374370-1. https://wavelet-tour.github.io/. 
  3. 3.0 3.1 Daubechies, Ingrid; Sweldens, Wim (1998). "Factoring Wavelet Transforms into Lifting Steps". Journal of Fourier Analysis and Applications 4 (3): 247–269. doi:10.1007/BF02476026. https://9p.io/who/wim/papers/factor/factor.pdf. 
  4. Solé, Joel. Optimization and Generalization of Lifting Schemes: Application to Lossless Image Compression (Ph.D. dissertation).
  5. Rolon, J. C.; Salembier, P. (Nov 7–9, 2007). "Generalized Lifting for Sparse Image Representation and Coding". https://www.researchgate.net/publication/228347925. 
  6. Rolon, J. C.; Salembier, P.; Alameda, X. (Oct 12–15, 2008). "Image Compression with Generalized Lifting and partial knowledge of the signal pdf". https://www.academia.edu/download/40211740/cRolon08.pdf. 
  7. Rolon, J. C.; Ortega, A.; Salembier, P.. "Modeling of Contours in Wavelet Domain for Generalized Lifting Image Compression". https://upcommons.upc.edu/bitstream/handle/2117/9008/modelingcontours.pdf. 
  8. Rolon, J. C.; Mendonça, E.; Salembier, P.. "Generalized Lifting With Adaptive Local pdf estimation for Image Coding". https://upcommons.upc.edu/bitstream/handle/2117/8835/GeneralizedLifting.pdf. 
  9. Oraintara, Soontorn; Chen, Ying-Jui; Nguyen, Truong Q. (2002). "Integer Fast Fourier Transform". IEEE Transactions on Signal Processing 50 (3): 607–618. doi:10.1109/78.984749. Bibcode: 2002ITSP...50..607O. http://www-ee.uta.edu/msp/pub/Journaintfft.pdf. 
  10. Thielemann, Henning (2004). "Optimally matched wavelets". Proceedings in Applied Mathematics and Mechanics 4: 586–587. doi:10.1002/pamm.200410274. 
  11. Fattal, Raanan (2009). "Edge-Avoiding Wavelets and their Applications". ACM Transactions on Graphics 28 (3): 1–10. doi:10.1145/1531326.1531328. http://www.cs.huji.ac.il/~raananf/projects/eaw/. 
  12. Uytterhoeven, Geert; Bultheel, Adhemar (1998). "The Red-Black Wavelet Transform". Signal Processing Symposium (IEEE Benelux). pp. 191–194. http://nalag.cs.kuleuven.be/papers/ade/redblack/. 
