 Example Worksheet - Wavelet Transforms - Maple Help


Wavelets and Applications

Introduction

Wavelets are powerful tools that can be used in signal processing and data compression. Wavelet transforms are an excellent alternative to Fourier transforms in many situations. In Fourier analysis, a signal is decomposed into periodic components; in wavelet analysis, a signal is decomposed into components localized in both the time and frequency domains. Thus, wavelet transforms are ideal when signals are not periodic.

The Theory

In Fourier analysis, the family of complex exponentials ${\left\{{e}^{2\mathrm{\pi }inx}\right\}}_{n\in \mathbb{Z}}$ is used as an orthonormal basis of ${L}^{2}\left[0,1\right]$.  In wavelet analysis, a father wavelet $\mathrm{φ}$ and a mother wavelet $\mathrm{ψ}$ are chosen such that the translates and dyadic dilates:

${\mathrm{\phi }}_{k}\left(x\right)=\mathrm{\phi }\left(x-k\right),\phantom{\rule[-0.0ex]{1.0em}{0.0ex}}{\mathrm{\psi }}_{j,k}\left(x\right)={2}^{\frac{j}{2}}\mathrm{\psi }\left({2}^{j}x-k\right)$

form an orthonormal basis of ${L}^{2}\left[0,1\right]$. In theory, $\mathrm{\phi }$ is chosen to satisfy the conditions of a multiresolution analysis (MRA), and then $\mathrm{\psi }$ is determined from $\mathrm{\phi }$ and the MRA. In practice, $\mathrm{\phi }$ and $\mathrm{\psi }$ are assumed to satisfy the following functional equations, and the coefficients are computed according to the desired properties of the MRA.

$\mathrm{\phi }\left(x\right)=\sum _{n=-\infty }^{\infty }\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}{h}_{n}\mathrm{\phi }\left(2x-n\right)$    (1)

$\mathrm{\psi }\left(x\right)=\sum _{n=-\infty }^{\infty }\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}{g}_{n}\mathrm{\phi }\left(2x-n\right)$    (2)

In fact, often $\mathrm{\psi }$ and $\mathrm{\phi }$ cannot be determined symbolically, and are defined solely in terms of these coefficients. In such cases, the Cascades Algorithm can be used to obtain numerical approximations.
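The cascade algorithm can be sketched in a few lines of Python. This is not the worksheet's Maple code: the helper name `cascade`, the iteration count, and the choice of the Daubechies 4-tap refinement coefficients (normalized so that the $h_n$ sum to 2, as equation (1) requires) are illustrative.

```python
# Cascade algorithm sketch: approximate the scaling function phi on a
# dyadic grid from its refinement coefficients h_n alone.
# Refinement equation: phi(x) = sum_n h_n * phi(2x - n), with sum(h) == 2.
import math

sqrt3 = math.sqrt(3.0)
h = [(1 + sqrt3) / 4, (3 + sqrt3) / 4, (3 - sqrt3) / 4, (1 - sqrt3) / 4]
assert abs(sum(h) - 2.0) < 1e-12  # needed so that phi integrates to 1

def cascade(h, iterations=8):
    """Return approximate samples of phi at spacing 2**-iterations."""
    phi = [1.0]  # crude initial guess: a single unit mass
    for _ in range(iterations):
        # upsample by 2, then convolve with h (one refinement step)
        up = [0.0] * (2 * len(phi) - 1)
        up[::2] = phi
        phi = [sum(h[k] * up[i - k] for k in range(len(h))
                   if 0 <= i - k < len(up))
               for i in range(len(up) + len(h) - 1)]
    return phi

samples = cascade(h)
# each refinement step doubles the total mass, so sum(samples) * 2**-8
# approximates the integral of phi, which is 1
```

Each iteration halves the grid spacing, so after a handful of steps the samples trace out the (continuous but nowhere-smooth) Daubechies scaling function.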

The fact that $\mathrm{\psi }$ and $\mathrm{\phi }$ must be orthogonal reduces to the following numerical conditions on the ${h}_{n}$ and ${g}_{n}$ (holding for all integers $k$), when $\mathrm{\phi }$ has norm 1.

$\sum _{n=-\infty }^{\infty }\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}{h}_{n}{h}_{n+2k}={\mathrm{\delta }}_{0,k}$    (3)

$\sum _{n=-\infty }^{\infty }\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}{g}_{n}{g}_{n+2k}={\mathrm{\delta }}_{0,k}$    (4)

$\sum _{n=-\infty }^{\infty }\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}{h}_{n}{g}_{n+2k}=0$    (5)
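Conditions (3) to (5) can be checked numerically for a concrete filter. The sketch below (Python, not the worksheet's Maple code) uses the orthonormal-normalized Daubechies 4-tap filter, for which the conditions hold exactly as stated, and builds the $g_n$ the standard way: reverse the $h_n$ and negate every second term.

```python
# Numerical check of the orthogonality conditions (3)-(5) for the
# Daubechies 4-tap filter (orthonormal normalization, sum of h_n**2 == 1).
import math

s = math.sqrt(3.0)
r = 4 * math.sqrt(2.0)
h = [(1 + s) / r, (3 + s) / r, (3 - s) / r, (1 - s) / r]
g = [(-1) ** n * c for n, c in enumerate(reversed(h))]  # reversed, alternating signs

def corr(a, b, k):
    """sum_n a[n] * b[n + 2k], treating out-of-range entries as zero."""
    return sum(a[n] * b[n + 2 * k] for n in range(len(a))
               if 0 <= n + 2 * k < len(b))

for k in (-1, 0, 1):
    delta = 1.0 if k == 0 else 0.0
    assert abs(corr(h, h, k) - delta) < 1e-12   # condition (3)
    assert abs(corr(g, g, k) - delta) < 1e-12   # condition (4)
    assert abs(corr(h, g, k)) < 1e-12           # condition (5)
```

For finite filters the sums are finite, so shifts beyond the filter length are trivially zero; only the small range of $k$ tested above is nontrivial.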

Usually, the ${h}_{n}$ are computed first, and then the ${g}_{n}$ are determined by reversing the ${h}_{n}$ and negating every second term.  The ${h}_{n}$ and ${g}_{n}$ are also known as the scaling and wavelet coefficients, or the low pass and high pass filters, respectively.

Z Transforms

When the Fourier transform of equation (1) is computed, $\mathrm{Φ}\left(w\right)=\left(\sum _{n=-\mathrm{∞}}^{\mathrm{∞}}\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}\frac{{h}_{n}{e}^{-inw}}{2}\right)\mathrm{Φ}\left(\frac{w}{2}\right)$ is obtained. This is the motivation for defining:

${m}_{0}\left(x\right)=\sum _{n=-\mathrm{∞}}^{\mathrm{∞}}\phantom{\rule[-0.0ex]{5.0px}{0.0ex}}\frac{{h}_{n}{x}^{n}}{2}$

This is known as the Z transform associated with $\mathrm{\phi }$. In this context (evaluating ${m}_{0}$ at $x={e}^{-iw}$, and noting that $\overline{{m}_{0}\left(w\right)}={m}_{0}\left(-w\right)$ for real coefficients), the orthogonality conditions reduce to:

${m}_{0}\left(w\right){m}_{0}\left(-w\right)+{m}_{0}\left(w+\mathrm{\pi }\right){m}_{0}\left(-w-\mathrm{\pi }\right)=1$

The generation of wavelets is often phrased in this language.

Orthogonal and Biorthogonal Wavelets

Technically, the above discussion applies only to orthogonal wavelets. In a variant of this theory, different bases of ${L}^{2}\left[0,1\right]$ are used for the analysis and synthesis of signals. Such pairs of bases are generated by pairs of biorthogonal wavelets. Note that orthogonal wavelets can be viewed as biorthogonal wavelets for which the analysis and synthesis processes coincide. However, biorthogonal wavelets are generally not orthogonal; their main advantage is that the orthogonality conditions are relaxed, allowing more smoothness conditions to be imposed. Some authors consider wavelets that are neither orthogonal nor biorthogonal, but such wavelets are not discussed here.

Wavelet Generation

This section provides examples showing how some wavelets are generated by Maple. It is not intended as a complete guide to the generation of these wavelets, nor as a thorough discussion of their theory; it is a simplified outline of how Maple generates these wavelets.

Symlets

Symlets are a variant of the Daubechies wavelets. In fact, they are also called Daubechies least asymmetric wavelets. They have the same vanishing moments as the Daubechies wavelets, and the same size, but they have minimal phase. A complex valued function $f$ is said to have linear phase if there are real numbers $a$ and $b$ such that $f\left(w\right)=±\left|f\left(w\right)\right|{e}^{i\left(a+bw\right)}$. It is known that there are no compactly supported (that is, finite length) orthogonal wavelets with linear phase. The Symlets were designed by Ingrid Daubechies to have phase as close as possible to linear.

The generation of Symlets is very similar to the generation of the Daubechies wavelets. To generate the Symlet of size 14 (the size used in the computation below), start by finding the roots of the polynomial $P$ that is used in the generation of the Daubechies wavelet. Then transform these roots to get roots of the Laurent polynomial $P\left(Z\left(X\right)\right)$, where $Z\left(X\right)=\frac{2-X-{X}^{-1}}{2}$. The plot demonstrates that these roots come in groups of conjugates and reciprocals, so you can restrict your attention to those with norm less than or equal to 1 and nonzero imaginary part.
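The root transformation used in the Maple code below can be checked independently: each root $t$ of $P$ becomes the two roots of the quadratic ${X}^{2}+\left(2t-2\right)X+1=0$, namely $1-t\pm \sqrt{{t}^{2}-2t}$, and these are reciprocals of each other because their product is the constant term 1. A short Python check (the sample value of $t$ is arbitrary):

```python
# Check that Z(X) = (2 - X - 1/X)/2 sends the reciprocal pair of roots
# of X**2 + (2t - 2)X + 1 back to t.
import cmath

t = 0.3 + 0.7j  # an arbitrary sample root; any nonzero complex t works
r1 = 1 - t + cmath.sqrt(t * t - 2 * t)
r2 = 1 - t - cmath.sqrt(t * t - 2 * t)
assert abs(r1 * r2 - 1) < 1e-12                 # reciprocal pair
assert abs((2 - r1 - 1 / r1) / 2 - t) < 1e-12   # Z(r1) == t
```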

 > $A≔7:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}P≔\mathrm{add}\left(\mathrm{binomial}\left(A+k-1,A-1\right){2}^{-k}{X}^{k},k=0..A-1\right);$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{sols}≔\left[\mathrm{RootFinding}:-\mathrm{Analytic}\left(P,X,\mathrm{re}=-100..100,\mathrm{im}=-100..100\right)\right]:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$T:=\mathrm{map}\left(t→\mathrm{op}\left(\left[1-t+{\left(-2t+{t}^{2}\right)}^{\frac{1}{2}},1-t-{\left(-2t+{t}^{2}\right)}^{\frac{1}{2}}\right]\right),\mathrm{sols}\right):$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{Tplot}≔\left[\mathrm{seq}\left(\left[\mathrm{ℜ}\left({T}_{i}\right),\mathrm{ℑ}\left({T}_{i}\right)\right],i=1..\mathrm{nops}\left(T\right)\right)\right]:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{plots}\left[\mathrm{listplot}\right]\left(\mathrm{Tplot},\mathrm{style}=\mathrm{point}\right);$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}S≔\mathrm{select}\left(z→\left|z\right|<1\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{and}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}0\le \mathrm{ℑ}\left(z\right),T\right):$
${P}{:=}{1}{+}\frac{{7}}{{2}}{}{X}{+}{7}{}{{X}}^{{2}}{+}\frac{{21}}{{2}}{}{{X}}^{{3}}{+}\frac{{105}}{{8}}{}{{X}}^{{4}}{+}\frac{{231}}{{16}}{}{{X}}^{{5}}{+}\frac{{231}}{{16}}{}{{X}}^{{6}}$

Now perform spectral factorization on $P\left(Z\left(X\right)\right)$. This means that you want to find a $p\left(X\right)$ such that $p\left(X\right)\phantom{\rule[-0.0ex]{2.0px}{0.0ex}}p\left({X}^{-1}\right)=P\left(Z\left(X\right)\right)$. This is done by using the Fejér-Riesz algorithm. For each root $\mathrm{\lambda }$ of $P\left(X\right)$, you simply have to assign one of the roots of $Z\left(X\right)-\mathrm{\lambda }$ to $p\left(X\right)$. Because of the properties of $P$, this is sufficient. This is where the Symlets differ from the Daubechies wavelets. In the generation of the Daubechies wavelets, you can simply pick the root inside the unit circle of the complex plane. This choice of spectral factorization has maximal phase. To compute the choice of spectral factorization with minimal phase, you have to compute all spectral factorizations, and pick the one whose nonlinear phase component has the smallest L1 norm.

 >
 > 

Note that $i$ and $n$ should be thought of as binary numbers whose digits encode the factorization being used.

To save time, you can fix the first choice. This means that you range over only half of all possible factorizations, but this is enough. Within the loop below, save a list of the phases, $\mathrm{PhaseList}$, so that you can graph them later. Remember that minimal phase is defined by minimum L1 norm.

 >

With this done, you can now graph all of the phases that were considered, with the minimal phase displayed with a bold line.

 > $\mathrm{plots}\left[\mathrm{display}\right]\left(\mathrm{plot}\left(\left[\mathrm{seq}\left(\mathrm{PhaseList}\left[i\right],i=0..{2}^{\mathrm{nops}\left(S\right)-1}-1\right)\right],w=0..\mathrm{π}\right),\mathrm{plot}\left(\mathrm{PhaseList}\left[\mathrm{minphaseat}\right],w=0..\mathrm{π},\mathrm{thickness}=5\right)\right)$

Now construct the spectral factorization with the minimal-phase choice computed above, and extract the normalized scaling (low pass) coefficients.

 > 
 >
 $\left[{0.002681814559}{,}{-}{0.001047384898}{,}{-}{0.01263630337}{,}{0.03051551326}{,}{0.06789269364}{,}{-}{0.04955283511}{,}{0.01744125457}{,}{0.5361019159}{,}{0.7677643179}{,}{0.2886296316}{,}{-}{0.1400472406}{,}{-}{0.1078082374}{,}{0.004010244846}{,}{0.01026817669}\right]$ (3.1.1)

Verify this against the Maple function that computes wavelets.

 > $\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{convert}\left(h,\mathrm{list}\right)$
 $\left[{0.00268181456826015}{,}{-}{0.00104738488867974}{,}{-}{0.0126363034032406}{,}{0.0305155131658779}{,}{0.0678926935012206}{,}{-}{0.0495528349370428}{,}{0.0174412550868357}{,}{0.536101917090569}{,}{0.767764317004883}{,}{0.288629631750648}{,}{-}{0.140047240442934}{,}{-}{0.107808237703290}{,}{0.00401024487152240}{,}{0.0102681767084648}\right]$ (3.1.2)

The Discrete Wavelet Transform

The wavelet transform can be accomplished for discrete signals by using an algorithm known as the (fast) discrete wavelet transform. Recall the coefficients ${h}_{n}$ and ${g}_{n}$ from equations (1) to (5). The low pass filter, $\mathrm{w2}$, is the ${h}_{n}$, and the high pass filter, $\mathrm{w1}$, is the ${g}_{n}$ (in vector form). In almost all useful cases, these are finite. The size of $\mathrm{w1}$ and $\mathrm{w2}$ is called the filter length. If $\mathrm{w1}$ has $n$ elements, $\mathrm{w1}$ is often called an $n$-tap wavelet.

The discrete wavelet transform produces two outputs, each half the size of the input. The first output is the high detail coefficients, and the second is the low detail coefficients. They are computed by convolving $\mathrm{w1}$ and $\mathrm{w2}$, respectively, against the input data, and then downsampling (throwing away every second term).

The low detail coefficients are then recursively processed. It is not obvious, but this in fact computes the wavelet coefficients (the coefficients of each function in the orthonormal basis of wavelets).
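The convolve-and-downsample step can be sketched directly in Python. This is not Maple's implementation: the helper name `dwt_step` is hypothetical, the filters are the Haar pair (the simplest orthogonal wavelet), and periodic end conditions are assumed.

```python
# One level of the discrete wavelet transform as convolve-and-downsample,
# with periodic end conditions.  w1 is the high pass filter (g_n) and
# w2 the low pass filter (h_n); the input length must be even.
import math

r = 1 / math.sqrt(2)
w2 = [r, r]    # low pass (h_n), Haar
w1 = [r, -r]   # high pass (g_n), Haar

def dwt_step(data, w1, w2):
    n = len(data)
    high = [sum(w1[k] * data[(2 * i + k) % n] for k in range(len(w1)))
            for i in range(n // 2)]
    low = [sum(w2[k] * data[(2 * i + k) % n] for k in range(len(w2)))
           for i in range(n // 2)]
    return high, low

high, low = dwt_step([4.0, 6.0, 10.0, 12.0, 8.0, 6.0], w1, w2)
# For Haar, low holds scaled pairwise sums and high pairwise differences;
# recursing on `low` yields the full wavelet transform.
```

Because the transform is orthogonal, the energy (sum of squares) of `high` and `low` together equals the energy of the input.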

The discrete wavelet transform is a linear transformation on the input signal. If the high and low pass filters are:

${w}_{1}=\left[{w}_{11},{w}_{12},{w}_{13},{w}_{14}\right]$

${w}_{2}=\left[{w}_{21},{w}_{22},{w}_{23},{w}_{24}\right]$

then this linear transform on a signal of length 6 can be viewed as multiplication by the Matrix:

$\left[\begin{array}{cccccc}{w}_{11}& {w}_{12}& {w}_{13}& {w}_{14}& 0& 0\\ {w}_{21}& {w}_{22}& {w}_{23}& {w}_{24}& 0& 0\\ 0& 0& {w}_{11}& {w}_{12}& {w}_{13}& {w}_{14}\\ 0& 0& {w}_{21}& {w}_{22}& {w}_{23}& {w}_{24}\\ {w}_{13}& {w}_{14}& 0& 0& {w}_{11}& {w}_{12}\\ {w}_{23}& {w}_{24}& 0& 0& {w}_{21}& {w}_{22}\end{array}\right]$

The result of the multiplication of this Matrix by the signal is an interlacing of the high and low pass coefficients.
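The orthogonality of this Matrix can be verified numerically. The sketch below (Python, not Maple) builds the 6x6 Matrix above from a concrete 4-tap filter pair, the orthonormal Daubechies coefficients, with $g_n$ obtained by reversing $h_n$ and alternating signs, and checks that $M{M}^{T}$ is the identity.

```python
# Build the 6x6 periodic transform matrix from a 4-tap filter pair and
# check that it is orthogonal (M * M^T == identity).
import math

s = math.sqrt(3.0)
d = 4 * math.sqrt(2.0)
w2 = [(1 + s) / d, (3 + s) / d, (3 - s) / d, (1 - s) / d]   # low pass h_n
w1 = [(-1) ** n * c for n, c in enumerate(reversed(w2))]    # high pass g_n

n = 6
M = []
for shift in range(0, n, 2):           # rows come in (high, low) pairs
    for f in (w1, w2):
        row = [0.0] * n
        for k, c in enumerate(f):
            row[(shift + k) % n] += c  # periodic wrap-around, as in the
        M.append(row)                  # last two rows of the matrix above

# M * M^T should be the identity
for i in range(n):
    for j in range(n):
        dot = sum(M[i][k] * M[j][k] for k in range(n))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```

The check succeeds precisely because the filter pair satisfies conditions (3) to (5); with arbitrary filters the rows would not be orthonormal.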

The orthogonality conditions on ${w}_{ij}$ (recall that ${w}_{1}$ and ${w}_{2}$ are the wavelet and scaling coefficients from equations (3) to (5)) are equivalent to the Matrix being orthogonal! This makes the discrete wavelet transform an orthogonal linear transformation, and hence very easy to invert.

End Conditions

In the above convolutions, the filter mask "falls off" the end of the data. To maintain the orthogonality of the discrete wavelet transform (and the resulting easy invertibility), the data must be assumed to be periodic (so that the filter wraps around, as in the last rows of the Matrix above). However, two other common alternatives exist. The data can be padded with zeros, or the data can be reflected to generate extra data. In both cases, the transform is usually not invertible with orthogonal or biorthogonal wavelets, but can sometimes be modified to maintain easy invertibility. Such modifications are not discussed here.

For example, given the signal [1,2,3,4,5,6], the following are the periodic, zeros, and reflection end conditions for extending the data.

- Periodic: $\left[1,2,3,4,5,6,1,2,3,...\right]$
- Zeros: $\left[1,2,3,4,5,6,0,0,0,...\right]$
- Reflection: $\left[1,2,3,4,5,6,5,4,3,...\right]$

Maple Functions for Wavelets

All of Maple's functions for wavelets are part of the SignalProcessing and DiscreteTransforms packages.

The SignalProcessing commands are:
- DWT
- InverseDWT

The DiscreteTransforms package commands are:

- WaveletCoefficients
- WaveletPlot

See the corresponding help pages for basic information and examples. Examples from the DiscreteTransforms package are described below. For more examples from the SignalProcessing package, see the SignalProcessing examples page.

Examples

This is a quick example to explore the Daubechies length 4 wavelet.

 > $\mathrm{with}\left(\mathrm{DiscreteTransforms}\right);$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{ImageTools}\right):$
 $\left[{\mathrm{DiscreteWaveletTransform}}{,}{\mathrm{FourierTransform}}{,}{\mathrm{InverseDiscreteWaveletTransform}}{,}{\mathrm{InverseFourierTransform}}{,}{\mathrm{WaveletCoefficients}}{,}{\mathrm{WaveletPlot}}\right]$ (5.1.1)
Plot the Daubechies mother and father wavelets by using the WaveletPlot command. WaveletPlot uses a numerical algorithm called Cascades to approximate and plot functions satisfying equations (1) and (2). The Daubechies length 4 mother and father wavelets have no explicit closed-form definition.

 > $\mathrm{WaveletPlot}\left(\mathrm{ExW1},\mathrm{ExW2}\right)$

The WaveletCoefficients command respects the Digits setting. In this case, if you increase the setting of Digits, you can correctly identify the symbolic expressions of the wavelet coefficients.

 > $\mathrm{Digits}≔20:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{ExW1},\mathrm{ExW2}≔\mathrm{WaveletCoefficients}\left(\mathrm{Daubechies},4\right);$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{map}\left(\mathrm{identify},\mathrm{ExW1}\right);$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{Digits}≔10:$

And of course, you can use the Daubechies 4 wavelet to transform data.

 > $\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{ExT}≔\mathrm{DiscreteWaveletTransform}\left(\mathrm{ExV},\mathrm{Daubechies},4\right);$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{InverseDiscreteWaveletTransform}\left(\mathrm{ExT},\mathrm{Daubechies},4\right)$

You can also transform an image by using the ImageTools package.

 > $\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{ExImg}≔\mathrm{Matrix}\left(\mathrm{ToGrayscale}\left(\mathrm{PreImg}\right)\right):$
 > $\mathrm{Embed}\left(\mathrm{Create}\left(\mathrm{Show}\right)\right)$

Sample Applications

Wavelets are of growing importance in a number of diverse fields, including seismology, underwater acoustics, computer vision, and signal processing. The largest application seems to be in image compression; below are three sample applications that illustrate the capabilities of Maple's new wavelet functions.

Procedures and Initialization

First, define functions to transform (and inverse transform) a grayscale image. Discrete wavelet transforms of images are done by transforming first one dimension, and then the other. This process generates four outputs, the high-high, high-low, low-high, and low-low coefficients. The low-low coefficients can then be transformed recursively.
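The row-then-column scheme can be sketched in Python. This is a stand-in for the elided Maple procedures: the names `dwt1d` and `dwt2d` are hypothetical, the filters are Haar, and only one transform level is shown.

```python
# One level of a 2-D discrete wavelet transform: transform every row,
# then every column of the result.  With the low half of each output
# stored first, the four quadrants of the result are the low-low,
# low-high, high-low, and high-high subbands.
import math

r = 1 / math.sqrt(2)

def dwt1d(data):
    n = len(data)
    low = [(data[2 * i] + data[(2 * i + 1) % n]) * r for i in range(n // 2)]
    high = [(data[2 * i] - data[(2 * i + 1) % n]) * r for i in range(n // 2)]
    return low + high  # low half | high half

def dwt2d(img):
    rows = [dwt1d(row) for row in img]          # transform each row
    cols = [dwt1d(list(c)) for c in zip(*rows)]  # then each column
    return [list(row) for row in zip(*cols)]     # transpose back

out = dwt2d([[1.0, 2.0], [3.0, 4.0]])
# out[0][0] is the single low-low coefficient: twice the image mean
```

Recursing on the low-low quadrant gives the multi-level image transform used in the compression example below.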

Also define a small procedure to count the zeros in a Matrix, returning the result as a percentage of the number of elements in the Matrix, and two procedures needed for signal denoising.

 > $\mathrm{restart};\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{DiscreteTransforms}\right):$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{ImageTools}\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$
 >
 >
 >
 >
 >
Image Compression

The discrete wavelet transform is the analytical core of both the FBI fingerprint compression standard and the JPEG 2000 image compression standard. The success of wavelets in the FBI's case is nothing short of stunning: fingerprints are stored in a small fraction of the space while maintaining all of their distinguishing features.

The power of wavelets in this area comes from the zero and near zero values that appear in images transformed by using wavelets. In lossy compression, these values can be set to zero, making the data much easier to compress. The examples below demonstrate the power of this method without actually performing any compression.

 > $\mathrm{Embed}\left(\mathrm{Create}\left(\mathrm{img}\right)\right)$

Now set the level of transform to be used, the percentage of zeros desired, and the wavelet to be used.

 > $\mathrm{imglevels}:=2:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{imgper}≔85:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$\mathrm{imgWLname}≔\mathrm{Daubechies}:$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{imgWLparam}≔20:$

Now you can transform and threshold the data. In the thresholding step, the smallest (in absolute value) $\mathrm{imgper}$ percent of the values in the transformed data are set to zero. As a check, output the percentage of zeros in $\mathrm{imgR}$, the thresholded, transformed data.
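Choosing the threshold that zeros a requested percentage of the coefficients amounts to taking a quantile of the absolute values. A Python sketch (the helper name `threshold_for` is hypothetical, standing in for the worksheet's elided $\mathrm{imgthresh}$ computation):

```python
# Pick the threshold t such that roughly `percent` percent of the values
# satisfy |v| <= t, by sorting the magnitudes and reading off a quantile.
def threshold_for(values, percent):
    mags = sorted(abs(v) for v in values)
    cut = int(len(mags) * percent / 100)
    return mags[cut - 1] if cut > 0 else -1.0

data = [0.1, -0.02, 3.5, 0.04, -1.2, 0.3, 0.01, 2.2, -0.05, 0.2]
t = threshold_for(data, 60)
kept = [0 if abs(v) <= t else v for v in data]
zeros = 100 * kept.count(0) / len(kept)  # about 60 percent are now zero
```

Ties at the threshold value can push the achieved percentage slightly above the request, which is why the worksheet reports the actual percentage as a check.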

 > $\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{imgR}:=\mathrm{Matrix}\left(\mathrm{map}\left(z→\mathrm{if}\left(\left|z\right|\le \mathrm{imgthresh},0,z\right),\mathrm{imgT}\right),\mathrm{datatype}=\mathrm{float}\left[8\right]\right):$
 > $\mathrm{count0}\left(\mathrm{imgR}\right)$
 ${85}$ (6.2.1)
 > $\mathrm{Embed}\left(\mathrm{Create}\left(\mathrm{imgT}\right)\right)$

And now you can invert the transformation, using the thresholded data.

 > 
 > $\mathrm{Embed}\left(\mathrm{Create}\left(\mathrm{Side}\right)\right)$

The quality of the new image is amazing, given that it was constructed from data that was $\mathrm{imgper}$ percent zero!

In addition to visually judging the quality of the reconstructed image, you can compute the average error (relative to the original image), view a black and white Matrix representing high error pixels, and use the ImageTools[Quality] command.

 > $\frac{\mathrm{LinearAlgebra}:-\mathrm{Norm}\left(\mathrm{imgNew}-\mathrm{img},\mathrm{∞}\right)}{\mathrm{rtable_num_elems}\left(\mathrm{img}\right)};\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$\mathrm{Er}:=\mathrm{Matrix}\left(\mathrm{LinearAlgebra}:-\mathrm{Dimensions}\left(\mathrm{img}\right),\left(i,j\right)→\mathrm{if}\left(0.05<\left|\mathrm{imgNew}\left[i,j\right]-\mathrm{img}\left[i,j\right]\right|,1,0\right)\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}$$\mathrm{Quality}\left(\mathrm{img},\mathrm{imgNew}\right)$
 ${0.0002205079906}$
 ${0.00288222335046411059}$ (6.2.2)
 > $\mathrm{Embed}\left(\mathrm{Create}\left(\mathrm{Er}\right)\right)$

Given the ability to transform images to produce lots of zeros without significantly affecting image quality, the applications of wavelets in image compression are obvious.

Signal Denoising

First create a signal, $\mathrm{Signal}$, and a noisy version, $\mathrm{NS}$.

 > $\mathrm{Signal}:=\mathrm{Vector}\left(128,i\to \mathrm{evalf}\left(\mathrm{sin}\left(\frac{4i\mathrm{π}}{128}\right)+\frac{\mathrm{sin}\left(\frac{12i\mathrm{π}}{128}\right)}{3}\right),\mathrm{datatype}=\mathrm{float}\left[8\right]\right):$$\mathrm{NS}:=\mathrm{Vector}\left(128,i→{\mathrm{Signal}}_{i}+\frac{\mathrm{rand}\left(\right)}{5000000000000},\mathrm{datatype}=\mathrm{float}\left[8\right]\right):$$\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{plots}\left[\mathrm{listplot}\right]\left(\mathrm{NS}\right)$

Now transform the data, threshold it, and perform the inverse transform.
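The whole denoising pipeline fits in a short Python sketch. This is not the worksheet's Maple code: the names `haar` and `inv_haar` are hypothetical, only a single transform level is used (the worksheet uses $\mathrm{SDlevels}=3$), and the noise model is uniform rather than Maple's `rand`.

```python
# Denoising in one picture: forward Haar transform, hard-threshold the
# detail (high pass) coefficients, inverse transform.
import math, random

r = 1 / math.sqrt(2)

def haar(data):
    n = len(data)
    low = [(data[2 * i] + data[2 * i + 1]) * r for i in range(n // 2)]
    high = [(data[2 * i] - data[2 * i + 1]) * r for i in range(n // 2)]
    return low, high

def inv_haar(low, high):
    out = []
    for l, h in zip(low, high):
        out += [(l + h) * r, (l - h) * r]
    return out

random.seed(1)
clean = [math.sin(4 * math.pi * i / 128) for i in range(128)]
noisy = [c + random.uniform(-0.1, 0.1) for c in clean]

low, high = haar(noisy)
high = [0.0 if abs(h) <= 0.25 else h for h in high]  # hard threshold
denoised = inv_haar(low, high)
# the thresholded reconstruction typically tracks the clean signal
# more closely than the noisy input does
err_noisy = max(abs(a - b) for a, b in zip(noisy, clean))
err_den = max(abs(a - b) for a, b in zip(denoised, clean))
```

With a zero threshold the round trip is exact, since the Haar transform is orthogonal; the threshold trades a little signal distortion for the removal of small, noise-dominated coefficients.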

 > $\mathrm{SDlevels}≔3:$$\mathrm{SDthresh}≔0.25:$$\mathrm{plots}\left[\mathrm{listplot}\right]\left(\mathrm{SDT}\right)$
 > $\mathrm{SDR}:=\mathrm{Vector}\left(\mathrm{map}\left(z→\mathrm{if}\left(\left|z\right|\le \mathrm{SDthresh},0,z\right),\mathrm{SDT}\right),\mathrm{datatype}=\mathrm{float}\left[8\right]\right):$$\mathrm{plots}\left[\mathrm{listplot}\right]\left(\mathrm{SDR}\right)$
 > $\mathrm{SDNew}:=\mathrm{InverseVectorDWT}\left(\mathrm{SDR},\mathrm{SDlevels},\mathrm{SDw1},\mathrm{SDw2}\right):$$\mathrm{plots}\left[\mathrm{listplot}\right]\left(\mathrm{SDNew}\right)$

Amazing!

Matrix-Vector Multiplication

In the situation where a fixed Matrix $M$ must be multiplied by many Vectors $V$, the discrete wavelet transform can sometimes be used to speed up the calculations. The algorithm presented below preprocesses $M$ by using wavelets to exploit patterns. If $M$ consists of random values, there will be no speed up. By thresholding, this algorithm tries to reduce the number of floating-point multiplications required, but does not represent any decrease in algorithmic complexity.

Note that orthogonal wavelets must be used in this algorithm. This application exploits the fact that, for orthogonal wavelets, the discrete wavelet transform is an orthogonal linear transformation. (See the section Discrete Wavelet Transform for a brief comment on the orthogonality of the discrete wavelet transform.)
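The identity behind the trick is $\left(M{Q}^{T}\right)\left(Qv\right)=M\left({Q}^{T}Q\right)v=Mv$ for any orthogonal $Q$. A minimal Python check (not the worksheet's elided Maple code; the 2x2 Haar matrix stands in for the full DWT):

```python
# Preprocessing M with an orthogonal transform Q leaves M.v unchanged:
# (M Q^T)(Q v) = M (Q^T Q) v = M v.
import math

r = 1 / math.sqrt(2)
Q = [[r, r], [r, -r]]  # 2x2 Haar matrix; orthogonal (and symmetric)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

Qt = [[Q[j][i] for j in range(2)] for i in range(2)]

M = [[2.0, 5.0], [7.0, 1.0]]
v = [3.0, 4.0]

preprocessed = matmul(M, Qt)   # done once, up front
tv = matvec(Q, v)              # cheap per-vector transform
assert all(abs(a - b) < 1e-12
           for a, b in zip(matvec(preprocessed, tv), matvec(M, v)))
```

The payoff comes when thresholding makes `preprocessed` sparse: zero entries skip multiplications, while the product $Mv$ is (approximately) preserved.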

 > $\mathrm{TV}≔\mathrm{Vector}\left[\mathrm{column}\right]\left(\mathrm{LinearAlgebra}:-\mathrm{RandomVector}\left(200\right),\mathrm{datatype}=\mathrm{float}\left[8\right]\right):$

Because the discrete wavelet transform is an orthogonal linear transformation, you can simultaneously transform the rows of the Matrix M and the whole Vector $V$, while leaving their inner product unchanged. This is related to the fact that, for Vectors $x$ and $y$ and an orthogonal Matrix $A$,

$\left(A.x\right).\left(A.y\right)={x}^{T}\left({A}^{T}A\right)y=x.y$

The fact that the transformation does not affect $M.V$ can be verified numerically.

 > $\mathrm{LinearAlgebra}:-\mathrm{Norm}\left(\mathrm{TM1}.\mathrm{TV1}-\mathrm{TM}.\mathrm{TV}\right)$
 ${1.36424205265939236}{}{{10}}^{{-12}}$ (6.4.1)

It is also possible to transform the columns of $M$, but then the result has to be inverse transformed.

 > $\mathrm{LinearAlgebra}:-\mathrm{Norm}\left(\mathrm{TRes}-\mathrm{TM}.\mathrm{TV}\right)$
 ${1.25055521493777633}{}{{10}}^{{-12}}$ (6.4.2)
 > 

This means that you can apply the same transform-and-threshold techniques that were used for images to simplify Matrix-Vector multiplication.

References

Introductory books

 • Aboufadel, Edward and Schlicker, Steven. Discovering Wavelets. Wiley-Interscience, 1999.

Intermediate books; note that these books use Fourier transforms extensively

 • Mallat, Stephane. A Wavelet Tour of Signal Processing. Academic Press, 1999.
 • Hernandez, Eugenio and Weiss, Guido. A First Course on Wavelets. CRC Press, 1996.

The definitive book on wavelets from an expert standpoint

 • Daubechies, Ingrid. Ten Lectures on Wavelets. SIAM, 1992.