8  Fourier Transform

The Fourier transform converts a signal from the time domain to the frequency domain. In this section our main concern is one-dimensional signals, but the concepts extend to multiple dimensions.

Before we can start defining the Fourier Series we need to extend our notion of vector spaces to function spaces. This is done with Hilbert spaces. The computational rules follow the same principle as in Definition 1.1; what we want to investigate is the inner product.

Definition 8.1 (Hilbert inner product) The Hilbert inner product of two functions \(f(x)\) and \(g(x)\) is defined for \(x \in [a, b]\) as: \[ \langle f(x), g(x)\rangle = \int_a^b f(x) \overline{g}(x)\, \mathrm{d}x, \] where \(\overline{g}(x)\) denotes the complex conjugate.

At first this looks strange but it is closely related to our already known Definition 1.3.

As a first step, if we move from real to complex vector spaces the transpose is replaced by the conjugate transpose or Hermitian transpose; in notation the \(^\mathbf{T}\) becomes \(^\mathbf{H}\).

Now consider a discrete version of \(f\) and \(g\) at regular intervals \(\Delta x = \frac{b-a}{n-1}\) where \[ f_i = f(x_i) = f(a + (i - 1) \Delta x),\quad i = 1, \ldots, n, \] and the same for \(g_i\); accordingly \(x_1 = a + 0 \Delta x = a\) and \(x_n = a + (n - 1)\Delta x = b\).

The inner product is then \[ \langle f, g \rangle = \langle\left[\begin{array}{c}f_1 \\ f_2 \\ \vdots \\ f_n \end{array}\right], \left[\begin{array}{c}g_1 \\ g_2 \\ \vdots \\ g_n \end{array}\right] \rangle = \sum_{i=1}^n f_i \overline{g}_i = \sum_{i=1}^n f(x_i)\overline{g}(x_i). \]

As this sum grows with increasing \(n\) we should normalize it by the factor \(\Delta x\).

\[ \frac{b-a}{n-1} \langle f, g \rangle = \sum_{i=1}^n f(x_i)\overline{g}(x_i) \Delta x. \] If we now let \(n\to \infty\) we get \(\Delta x \to 0\) and the sum turns into the integral.
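We can observe this convergence numerically. A minimal sketch (the choice \(f(x) = g(x) = \sin(x)\) on \([0, 2\pi]\) is illustrative; the exact value of the inner product is \(\pi\)):

import numpy as np

# Riemann-sum approximation of the Hilbert inner product on [a, b]
a, b, n = 0, 2 * np.pi, 10_000
x = np.linspace(a, b, n)
dx = (b - a) / (n - 1)

f = np.sin(x)
g = np.sin(x)
approx = dx * np.sum(f * np.conj(g))
print(approx, np.pi)   # the sum approaches pi as n grows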

Definition 8.2 (Hilbert two norm) The Hilbert two norm of the function \(f(x)\) is defined for \(x \in [a, b]\) as: \[ \|f\|_2 = \sqrt{\langle f(x), f(x)\rangle} = \left(\int_a^b f(x) \overline{f}(x)\,\mathrm{d}x\right)^{\frac12}, \] where \(\overline{f}(x)\) denotes the complex conjugate.

The set of all functions with bounded norm defines the Hilbert space \(L^2(a, b)\), i.e. the set of all square integrable functions, where the integral is understood in the Lebesgue sense.

Just as projections in vector spaces are defined via the inner product, the same holds here.

Definition 8.3 (Periodic function) We call a function \(f\,:\, \mathbb{R} \to \mathbb{R}\) periodic with a period of \(L>0\), \(L\)-periodic for short, if \[ f(t + L) = f(t),\quad\forall t \in \mathbb{R}. \]

The following holds true for \(L\)-periodic functions:

  1. If \(L\) is a period then \(nL\) for \(n = 1, 2, 3, \ldots\) is a period as well.
  2. If \(f\) and \(g\) are \(L\)-periodic, then \(\alpha f + \beta g\) is \(L\)-periodic, for \(\alpha, \beta \in \mathbb{C}\).
  3. If \(f\) is \(L\)-periodic it follows that \(\forall a\in\mathbb{R}\) \[ \int_a^{a+L}f(t)\,\mathrm{d}t = \int_0^{L}f(t)\,\mathrm{d}t \] (a numerical check follows this list).
  4. If \(f\) is \(L\)-periodic then \(F(t)=f(\frac{t}{\omega})\) with \(\omega = \frac{2\pi}{L}\) is \(2\pi\)-periodic.
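Property 3 is easy to check numerically. A small sketch (the test function and the shift \(a\) are arbitrary choices):

import numpy as np

# Property 3 for the 2*pi-periodic function f(t) = sin(t) + 1:
# the integral over [a, a + L] equals the integral over [0, L].
L = 2 * np.pi
a = 1.234                         # arbitrary shift
f = lambda t: np.sin(t) + 1
t = np.linspace(0, L, 100_001)

print(np.trapezoid(f(t), t))      # integral over [0, L], approx 2*pi
print(np.trapezoid(f(a + t), t))  # integral over [a, a + L], same value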

The Fourier series is nothing other than the projection of a periodic function, whose period matches the domain \([a, b]\), onto the orthogonal basis defined by the sine and cosine functions.

8.1 Fourier Series

In Fourier analysis the first result is stated for a periodic and piecewise smooth function \(f(t)\).

Definition 8.4 (Fourier Series) For an \(L\)-periodic function \(f(t)\) we can write \[ f(t) = \frac{a_0}{2} + \sum_{k=1}^\infty \left(a_k \cos\left(\omega k t \right)+ b_k \sin\left(\omega k t\right)\right), \tag{8.1}\] with \(\omega = \frac{2\pi}{L}\) and \[ \begin{aligned} a_k &= \frac{2}{L}\int_0^L f(t) \cos\left(\omega k t \right)\, \mathrm{d}t,\\ b_k &= \frac{2}{L}\int_0^L f(t) \sin\left(\omega k t \right)\, \mathrm{d}t, \end{aligned} \] where we can view the last two equations as the projection onto the orthogonal basis \(\{\cos(\omega k t), \sin(\omega k t)\}_{k=0}^\infty\), i.e. \[ \begin{aligned} a_k &= \frac{1}{\|\cos\left(\omega k t\right)\|_2^2} \langle f(t), \cos\left(\omega k t\right)\rangle, \\ b_k &= \frac{1}{\|\sin\left(\omega k t\right)\|_2^2} \langle f(t), \sin\left(\omega k t\right)\rangle. \end{aligned} \]

If we perform a partial reconstruction by truncating the series at \(M\) we get \[ \hat{f}_M(t) = \frac{a_0}{2} + \sum_{k=1}^M \left(a_k \cos\left(\omega k t \right)+ b_k \sin\left(\omega k t \right)\right). \]

With the help of Euler’s formula: \[ \mathrm{e}^{\mathrm{i} k t} = \cos(k t) + \mathrm{i} \sin(k t) \tag{8.2}\] we can rewrite Equation 8.1 as \[ f(t) = \sum_{k=-\infty}^\infty c_k \mathrm{e}^{\mathrm{i}\omega k t} \] with \[ c_k = \frac{1}{L} \int_0^L f(t) \mathrm{e}^{-\mathrm{i}\omega k t}\, \mathrm{d}t \] and for \(n=1, 2, 3, \ldots\) \[ c_0 = \tfrac12 a_0, \quad c_n = \tfrac12 (a_n - \mathrm{i} b_n), \quad c_{-n} = \tfrac12 (a_n + \mathrm{i} b_n). \]

Note

If \(f(t)\) is real valued then \(c_k = \overline{c}_{-k}\).
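These relations are easy to verify numerically. A sketch with an arbitrary real test signal and simple Riemann sums:

import numpy as np

# Check c_n = (a_n - i b_n) / 2 and c_{-n} = conj(c_n) for n = 3
L = 2 * np.pi
w = 2 * np.pi / L
N = 4096
t = np.linspace(0, L, N, endpoint=False)
dt = L / N
f = np.cos(3 * t) + 0.5 * np.sin(5 * t)       # real test signal

n = 3
a_n = 2 / L * np.sum(f * np.cos(w * n * t)) * dt
b_n = 2 / L * np.sum(f * np.sin(w * n * t)) * dt
c_n = 1 / L * np.sum(f * np.exp(-1j * w * n * t)) * dt
c_mn = 1 / L * np.sum(f * np.exp(1j * w * n * t)) * dt

print(c_n, 0.5 * (a_n - 1j * b_n))            # agree
print(c_mn, np.conj(c_n))                     # agree, as f is real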

Example 8.1 (Fourier Series of Hat functions) We test the Fourier Series with two different hat functions. The first represents a triangle with constant slope up and down, the second a rectangle with infinite slope in the corners.

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

# Parameters
L = 2 * np.pi
M = 5
M2 = 50
N = 512
# Hat functions
fun = lambda t, L: 0 if abs(t) > L / 4 else (1 - np.sign(t) * t * 4 / L)
fun2 = lambda t, L: 0 if abs(t) > L / 4 else 1

# t
t = np.linspace(-L/2, L/2, N, endpoint=False)
dt = t[1] - t[0]
w = np.pi * 2 / L

f = np.fromiter(map(lambda t: fun(t, L), t), t.dtype)
f2 = np.fromiter(map(lambda t: fun2(t, L), t), t.dtype)

# Necessary functions
scalarproduct = lambda f, g, dt: dt * np.vecdot(f, g)
a_coeff = lambda n, f: 2 / L * scalarproduct(f, np.cos(w * n * t), dt)
b_coeff = lambda n, f: 2 / L * scalarproduct(f, np.sin(w * n * t), dt)

# f_hat_0
f_hat = np.zeros((M + 1, N))
f_hat[0, :] = 1/2 * a_coeff(0, f)
f2_hat = np.zeros((M2 + 1, N))
f2_hat[0, :] = 1/2 * a_coeff(0, f2)

# Computation of the approximation
a = np.zeros(M)
b = np.zeros(M)
for i in range(M):
    a[i] = a_coeff(i + 1, f)
    b[i] = b_coeff(i + 1, f)
    f_hat[i + 1, :] = f_hat[i, :] + \
        a[i] * np.cos(w * (i + 1) * t) + \
        b[i] * np.sin(w * (i + 1) * t)

for i in range(M2):
    f2_hat[i + 1, :] = f2_hat[i, :] + \
        a_coeff(i + 1, f2) * np.cos(w * (i + 1) * t) + \
        b_coeff(i + 1, f2) * np.sin(w * (i + 1) * t)

# Figures
plt.figure(0)
plt.plot(t, f, label=r"$f$")
plt.plot(t, f_hat[-1, :], label=rf"$\hat{{f}}_{{{M}}}$")
plt.xticks([])
plt.legend()
plt.gca().set_aspect(1.5)

plt.figure(1)
plt.plot(t, f_hat[0, :], label=rf"$a_{0}$")
for i in range(M):
    plt.plot(t, a[i] * np.cos(w * (i+1) * t),
             label=rf"$a_{i+1}\cos({i+1}\omega t)$")
plt.legend(ncol=np.ceil((M + 1) / 2), bbox_to_anchor=(1, -0.1))
plt.xticks([])
plt.gca().set_aspect(1.5)

plt.figure(2)
plt.plot(t, f2, label=r"$f$")
plt.plot(t, f2_hat[7, :], label=r"$\hat{f}_7$")
plt.plot(t, f2_hat[20, :], label=r"$\hat{f}_{20}$")
plt.plot(t, f2_hat[50, :], label=r"$\hat{f}_{50}$")
plt.xlabel(r"$x$")
plt.legend()
plt.gca().set_aspect(1.5)

plt.show()
Not including the endpoint in np.linspace is important, as the endpoint coincides with the start point due to the periodicity of the function.
(a) Sawtooth function and the reconstruction with 5 modes
(b) Modes of the reconstruction
(c) Step function and the reconstruction with various numbers of modes
Figure 8.1: Fourier series approximation of two hat functions.

Exercise 8.1 (Self implementation Example 8.1) Implement the code yourself by filling out the missing sections:

Code fragment for implementation.
import numpy as np
import matplotlib.pyplot as plt

# Parameters
L = 2 * np.pi   # Interval
M = 5           # Modes for the first function
M2 = 50         # Modes for the second function
N = 512         # Interpolation points
# Hat functions
fun = lambda x, L: # tooth
fun2 = lambda x, L: # step

# x and y for the functions
t = # points for the evaluation ()
dt = t[1] - t[0]
w = np.pi * 2 / L

f = np.fromiter(map(lambda t: fun(t, L), t), t.dtype)
f2 = np.fromiter(map(lambda t: fun2(t, L), t), t.dtype)

# Necessary functions
scalarproduct = lambda f, g, dt: # see definition in the notes
a_coeff = lambda n, f: # see definition in the notes
b_coeff = lambda n, f: # see definition in the notes

# f_hat_0
f_hat = np.zeros((M + 1, N))
f_hat[0, :] = # a_0 for f
f2_hat = np.zeros((M2 + 1, N))
f2_hat[0, :] = # a_0 for f2

# Computation of the approximation
a = np.zeros(M)
b = np.zeros(M)
for i in range(M):
    a[i] = a_coeff(i + 1, f)
    b[i] = b_coeff(i + 1, f)
    f_hat[i + 1, :] = f_hat[i, :] + \
        a[i] * np.cos(w * (i + 1) * t) + \
        b[i] * np.sin(w * (i + 1) * t)

for i in range(M2):
    f2_hat[i + 1, :] = f2_hat[i, :] + \
        a_coeff(i + 1, f2) * np.cos(w * (i + 1) * t) + \
        b_coeff(i + 1, f2) * np.sin(w * (i + 1) * t)

# Figures
plt.figure(0)
plt.plot(t, f, label=r"$f$")
plt.plot(t, f_hat[-1, :], label=rf"$\hat{{f}}_{{{M}}}$")
plt.xticks([])
plt.legend()
plt.gca().set_aspect(1.5)

plt.figure(1)
plt.plot(t, f_hat[0, :], label=rf"$a_{0}$")
for i in range(M):
    plt.plot(t, a[i] * np.cos(w * (i+1) * t),
             label=rf"$a_{i+1}\cos({i+1}\omega t)$")
plt.legend(ncol=np.ceil((M + 1)/2), bbox_to_anchor=(1, -0.1))
plt.xticks([])
plt.gca().set_aspect(1.5)

plt.figure(2)
plt.plot(t, f2, label=r"$f$")
plt.plot(t, f2_hat[7, :], label=r"$\hat{f}_7$")
plt.plot(t, f2_hat[20, :], label=r"$\hat{f}_{20}$")
plt.plot(t, f2_hat[50, :], label=r"$\hat{f}_{50}$")
plt.xlabel(r"$x$")
plt.legend()
plt.gca().set_aspect(1.5)

plt.show()
Note

The phenomenon that the truncated Fourier series oscillates in Figure 8.1 (c) due to the discontinuity of the function is called the Gibbs phenomenon.

8.2 Fourier Transform

The Fourier Series is defined for \(L\)-periodic functions. The Fourier transform extends this to non-periodic functions by letting the domain extend to \(\pm\infty\).

Let us start off with the series representation we already know: \[ f(t) = \sum_{k=-\infty}^\infty c_k \mathrm{e}^{\mathrm{i} \omega_k t} \] with the coefficients \[ c_k = \frac{1}{2L} \int_{-L}^{L} f(t) \mathrm{e}^{-\mathrm{i} \omega_k t}\, \mathrm{d}t, \] and \(\omega_k=\frac{k\pi}{L} = k\Delta \omega\).

If we now perform the transition \(L \to \infty\), we get \(\Delta\omega \to 0\) and move from discrete frequencies to a continuous set of frequencies. This results in \[ f(t) = \lim_{\Delta \omega \to 0} \sum_{k=-\infty}^\infty \frac{\Delta \omega}{2\pi} \int_{-\tfrac{\pi}{\Delta \omega}}^{\tfrac{\pi}{\Delta \omega}} f(\xi) \mathrm{e}^{-\mathrm{i} k \Delta \omega \xi}\, \mathrm{d}\xi\,\, \mathrm{e}^{\mathrm{i} k \Delta\omega t}, \] which is a Riemann sum; in the limit it becomes an integral over \(\omega\) and the inner integral becomes the Fourier transform of our function.

Definition 8.5 (Fourier Transform) A function \(f\,:\, \mathbb{R} \to \mathbb{R}\) is called Fourier-transformable if \[ \hat{f}(\omega) = \mathcal{F}\{f(t)\} = \int_{-\infty}^{\infty} f(t)\mathrm{e}^{-\mathrm{i} \omega t}\, \mathrm{d}t \] exists for all \(\omega\in\mathbb{R}\). In this case we call \(\hat{f}(\omega) \equiv \mathcal{F}\{f(t)\}\) the Fourier transform of \(f(t)\).

The inverse Fourier transform is defined as \[ \mathcal{F}^{-1}\{\hat{f}(\omega)\} = \frac{1}{2 \pi}\int_{-\infty}^{\infty} \hat{f}(\omega)\mathrm{e}^{\mathrm{i} \omega t}\, \mathrm{d}\omega. \]

Note

The pair \((f, \hat{f})\) is often called the Fourier transform pair.

The two integrals converge as long as both functions are Lebesgue integrable, i.e.  \[ \int_{-\infty}^\infty|f(t)|\, \mathrm{d}t < \infty \quad\text{and}\quad \int_{-\infty}^\infty|\hat{f}(\omega)|\, \mathrm{d}\omega < \infty,\] or \(f, \hat{f} \in L^1(-\infty, \infty)\).

As could be expected, the Fourier transform has properties that lead to computational advantages.

For two functions \(f, g \in L^1(-\infty, \infty)\) and \(\alpha, \beta\in\mathbb{C}\) the following properties hold (a numerical check of properties 9 and 10 follows the list):

  1. Linearity \[ \mathcal{F}\{\alpha f(t) + \beta g(t)\} = \alpha \mathcal{F}\{f(t)\} + \beta \mathcal{F}\{g(t)\} = \alpha \hat{f}(\omega)+ \beta \hat{g}(\omega), \] and \[ \mathcal{F}^{-1}\{\alpha \hat{f}(\omega) + \beta \hat{g}(\omega)\} = \alpha \mathcal{F}^{-1}\{\hat{f}(\omega)\} + \beta \mathcal{F}^{-1}\{\hat{g}(\omega)\} = \alpha f(t) + \beta g(t). \]

  2. Conjugation \[ \mathcal{F}\{\overline{f(t)}\} = \overline{\hat{f}(-\omega)}. \]

  3. Scaling, for \(\alpha \neq 0\) \[ \mathcal{F}\{f(\alpha t)\} = \frac{1}{|\alpha|}\hat{f}\left(\frac{\omega}{\alpha}\right). \]

  4. Drift in time, for \(a\in\mathbb{R}\) \[ \mathcal{F}\{f(t - a)\} = \mathrm{e}^{-\mathrm{i}\omega a}\hat{f}(\omega). \]

  5. Drift in frequency, for \(a\in\mathbb{R}\) \[ \mathcal{F}\{\mathrm{e}^{\mathrm{i} a t} f(t)\} = \hat{f}(\omega - a). \]

  6. If \(f\) is even or odd, then \(\hat{f}\) is even or odd, respectively.

  7. Derivative in time \[ \mathcal{F}\{\partial_t f(t)\} = \mathrm{i} \omega \hat{f}(\omega). \] We can prove this with integration by parts: \[ \begin{aligned} \mathcal{F}\left\{\frac{d}{d\,t}f(t)\right\} &= \int_{-\infty}^\infty f'(t)\mathrm{e}^{-\mathrm{i}\omega t}\, \mathrm{d}t \\ &= \left[f(t)\mathrm{e}^{-\mathrm{i}\omega t}\right]_{-\infty}^\infty - \int_{-\infty}^\infty -\mathrm{i} \omega f(t)\mathrm{e}^{-\mathrm{i}\omega t}\, \mathrm{d}t \\ &= \mathrm{i} \omega \int_{-\infty}^\infty f(t)\mathrm{e}^{-\mathrm{i}\omega t}\, \mathrm{d}t \\ &= \mathrm{i} \omega \mathcal{F}\{f(t)\}, \end{aligned} \] where the boundary term vanishes since \(f(t) \to 0\) for \(t \to \pm\infty\). For higher derivatives we get \[ \mathcal{F}\{\partial_t^n f(t)\} = \mathrm{i}^n \omega^n \hat{f}(\omega). \]

  8. Derivative in frequency \[ \mathcal{F}\{t^n f(t)\} = \mathrm{i}^n \partial_\omega^n\hat{f}(\omega) \]

  9. The convolution of two functions is defined as \[ (f \ast g)(t) = \int_{-\infty}^{\infty}f(t - \xi) g(\xi)\, \mathrm{d}\xi, \] and for the Fourier transform \[ \mathcal{F}\{(f \ast g)(t)\} = \hat{f} \cdot \hat{g}. \]

  10. Parseval’s Theorem \[ \|f\|_2^2 = \int_{-\infty}^{\infty}|f(t)|^2\, \mathrm{d}t = \frac{1}{2 \pi}\int_{-\infty}^{\infty}|\hat{f}(\omega)|^2\, \mathrm{d}\omega \] stating that the Fourier Transform preserves the 2-norm up to a scaling factor.
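The discrete analogues of properties 9 and 10 can be checked directly with NumPy's FFT. A minimal sketch (the random test vectors are illustrative); note that the DFT realizes a circular convolution and its Parseval identity carries a factor \(\tfrac1N\):

import numpy as np

# Property 9 (discrete): the DFT maps circular convolution to
# element-wise multiplication.
rng = np.random.default_rng(0)
N = 256
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# circular convolution computed directly ...
conv = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(N)])
# ... and via the convolution theorem
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
print(np.allclose(conv, conv_fft))                  # True

# Property 10 (discrete Parseval): sum |f_j|^2 = 1/N sum |fhat_k|^2
print(np.sum(np.abs(f)**2), np.sum(np.abs(np.fft.fft(f))**2) / N)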

8.3 Discrete Fourier Transform

The Discrete Fourier Transform (DFT) is a way of approximating the Fourier transform on discrete vectors of data; essentially it is nothing else than sampling the function and using numerical integration.

Definition 8.6 (Discrete Fourier Transform) For equally spaced values \(t_k = k\Delta t\), \(k\in\mathbb{Z}\), \(\Delta t>0\), let \(f_k=f(t_k)\) denote the function evaluations. If the function is periodic with \(L=N\Delta t\) then the discrete Fourier transform is given as \[ \hat{f}_k = \sum_{j=0}^{N-1}f_j\, \mathrm{e}^{-\mathrm{i} j k\tfrac{2 \pi}{N}}, \tag{8.3}\] and its inverse (iDFT) as \[ f_k = \frac{1}{N}\sum_{j=0}^{N-1}\hat{f}_j\, \mathrm{e}^{\mathrm{i} j k\tfrac{2 \pi}{N}}. \]

As we can see, the DFT is a linear operator and therefore it can be written as a matrix vector product \[ \left[ \begin{array}{c} \hat{f}_0 \\ \hat{f}_1 \\ \hat{f}_2 \\ \vdots \\ \hat{f}_{N-1} \end{array} \right] = \left[ \begin{array}{ccccc} 1 & 1 & 1 & \dots & 1 \\ 1 & \omega_N & \omega_N^2 & \dots & \omega_N^{N-1} \\ 1 & \omega_N^2 & \omega_N^4 & \dots & \omega_N^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & \omega_N^{N-1} & \omega_N^{2(N-1)} & \dots & \omega_N^{(N-1)^2 } \\ \end{array} \right] \left[ \begin{array}{c} f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_{N-1} \end{array} \right] \tag{8.4}\] with \(\omega_N = \exp({-\mathrm{i} \tfrac{2 \pi}{N}})\).

Note

The matrix of the DFT is a Vandermonde matrix; scaled by \(\tfrac{1}{\sqrt{N}}\) it is unitary.

As the properties of the Fourier transform transfer to the DFT, we get the same nice properties for sampled signals.

The downside of the DFT is that it does not scale well for large \(N\) as the matrix-vector multiplication is \(\mathcal{O}(N^2)\) and becomes slow.

Tip

Code for the computation of the DFT matrix.

Show the code.
import numpy as np
import matplotlib.pyplot as plt

# Parameters
N = 256
w = np.exp(-1j * 2 * np.pi / N )

J, K = np.meshgrid(np.arange(N), np.arange(N))
DFT = np.power(w, J*K)
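With the matrix at hand we can do a quick sanity check against NumPy's built-in FFT (continuing with DFT and N from above; the random test vector is illustrative):

rng = np.random.default_rng(6020)
f = rng.standard_normal(N)
print(np.allclose(DFT @ f, np.fft.fft(f)))   # True up to round-off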

8.4 Fast Fourier Transform

In 1965, James W. Cooley (IBM) and John W. Tukey (Princeton) developed the so-called fast Fourier transform (FFT) that scales with \(\mathcal{O}(N \log(N))\), which becomes almost linear for large enough \(N\), see Cooley and Tukey (1965).

Note

To give an idea of what this change means and why this algorithm was a game changer: audio is most of the time sampled at 44.1kHz, i.e. 44 100 samples per second. For a 10s audio clip the vector \(f\) has length \(N = 4.41 \times 10^5\). The DFT computation (without generating the matrix) results in approximately \(2\times 10^{11}\) multiplications. The FFT on the other hand requires about \(6\times 10^6\), leading to a speed-up of about \(30\,000\).

How this influenced our world can be seen from its use in our daily communication networks.

(Compare Brunton and Kutz 2022, 65–66)

Note

We should note that Cooley and Tukey were not the first to propose an FFT, but they provided the formulation used today. Gauss had already formulated an FFT 150 years earlier, in 1805, for orbital approximations. Apparently he did the necessary computations in his head and needed a fast algorithm, so he developed the FFT. Gauss being Gauss, he did not see this as something important and it was not published until 1866 in his compiled notes, Gauß (1866).

The main idea of the FFT is to exploit symmetries in the Fourier transform and to relate the \(N\)-dimensional DFT to a lower dimensional DFT by reordering the coefficients.

Definition 8.7 (Fast Fourier Transform) If we assume that \(N = 2^n\), i.e. a power of \(2\), in particular \(N=2M\), and \(F_N\) denotes the matrix of Equation 8.4 for dimension \(N\), we have \(\hat{f} = F_N f\) and \(f = \tfrac1N \overline{F_N} \hat{f}\). If we further split \(f\) in the even and odd indices as \[e = [f_0, f_2, \ldots, f_{N-2}]^{\mathsf{T}}\in \mathbb{C}^{M}\] and \[o = [f_1, f_3, \ldots, f_{N-1}]^{\mathsf{T}}\in \mathbb{C}^{M}\] then with Equation 8.3 we get \[ \begin{aligned} \hat{f}_k &= \sum_{j=0}^{N-1}f_j\, \omega^{j k} = \sum_{j=0}^{M-1}f_{2j}\, \omega^{(2j) k} + \sum_{j=0}^{M-1}f_{2j+1}\, \omega^{(2j+1) k} \\ &= \sum_{j=0}^{M-1}e_j\, (\omega^2)^{j k} + \omega^k\sum_{j=0}^{M-1}o_{j}\, (\omega^2)^{j k}. \end{aligned} \] If we then split \(\hat{f}\) in an upper and lower part \[u = [\hat{f}_0, \hat{f}_1, \ldots, \hat{f}_{M-1}]^{\mathsf{T}}\in \mathbb{C}^{M}\] and \[l = [\hat{f}_{M}, \hat{f}_{M+1}, \ldots, \hat{f}_{N-1}]^{\mathsf{T}}\in \mathbb{C}^{M}\] and use the property \(\omega^{k+M} = \omega^k \omega^M = - \omega^k\) we get

\[ \begin{aligned} u_k &= \sum_{j=0}^{M-1}e_j\, (\omega^2)^{j k} + \omega^k\sum_{j=0}^{M-1}o_{j}\, (\omega^2)^{j k},\\ l_k &= \sum_{j=0}^{M-1}e_j\, (\omega^2)^{j k} - \omega^k\sum_{j=0}^{M-1}o_{j}\, (\omega^2)^{j k}. \end{aligned} \] This results in the more visual matrix representation \[ \hat{f}= F_N f = \left[ \begin{array}{cc} I_M & D_M \\ I_M & -D_M \end{array} \right] \left[ \begin{array}{cc} F_M & 0 \\ 0 & F_M \end{array} \right] \left[ \begin{array}{cc} f_{even}\\ f_{odd} \end{array} \right], \] for \(I_M\) being the identity matrix in dimension \(M\) and \[ D_M = \left[ \begin{array}{ccccc} 1 & 0 & 0 & \dots & 0 \\ 0 & \omega & 0 & \dots & 0 \\ 0 & 0 & \omega^2 & \dots & 0 \\ \vdots & \vdots & \ddots & \ddots &\vdots\\ 0 & 0 & 0 & \dots & \omega^{(M-1)} \\ \end{array} \right]. \] Now repeat this \(n\) times.
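The splitting above translates almost line by line into a recursive implementation. A minimal sketch for \(N\) a power of two (for illustration only, not an optimized FFT):

import numpy as np

def fft_rec(f):
    # recursive radix-2 FFT following the even/odd splitting above
    N = len(f)
    if N == 1:
        return f.astype(complex)
    u = fft_rec(f[0::2])                              # DFT of even part
    v = fft_rec(f[1::2])                              # DFT of odd part
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # diagonal of D_M
    return np.concatenate([u + w * v, u - w * v])

f = np.random.default_rng(0).standard_normal(16)
print(np.allclose(fft_rec(f), np.fft.fft(f)))         # True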

Note

If \(N\) is not a power of \(2\) padding is used to make the size fit by extending the vector with zeros.

Note

The original FFT paper (Cooley and Tukey 1965) uses bit reversal and similar techniques to boost performance even more. It can even be implemented to allow for in-place computation to save storage.

\[ \begin{array}{|c|c|c|c|c|} \hline & \text{bit flip} & M = 1 & M = 2 & M = 4 \\ \hline c_0 & \xi_0 = c_0\ (c_{eee}) & \eta_0 = \xi_0 + w^0 \xi_1 & \zeta_0 = \eta_0 + w^0 \eta_2 & y_0 = \zeta_0 + w^0 \zeta_4 \\ c_1 & \xi_1 = c_4\ (c_{eeo}) & \eta_1 = \xi_0 + w^4 \xi_1 & \zeta_1 = \eta_0 + w^2 \eta_3 & y_1 = \zeta_0 + w^1 \zeta_5 \\ c_2 & \xi_2 = c_2\ (c_{eoe}) & \eta_2 = \xi_2 + w^0 \xi_3 & \zeta_2 = \eta_1 + w^4 \eta_2 & y_2 = \zeta_1 + w^2 \zeta_6 \\ c_3 & \xi_3 = c_6\ (c_{eoo}) & \eta_3 = \xi_2 + w^4 \xi_3 & \zeta_3 = \eta_1 + w^6 \eta_3 & y_3 = \zeta_1 + w^3 \zeta_7 \\ c_4 & \xi_4 = c_1\ (c_{oee}) & \eta_4 = \xi_4 + w^0 \xi_5 & \zeta_4 = \eta_4 + w^0 \eta_6 & y_4 = \zeta_2 + w^0 \zeta_4 \\ c_5 & \xi_5 = c_5\ (c_{oeo}) & \eta_5 = \xi_4 + w^4 \xi_5 & \zeta_5 = \eta_4 + w^2 \eta_7 & y_5 = \zeta_2 + w^1 \zeta_5 \\ c_6 & \xi_6 = c_3\ (c_{ooe}) & \eta_6 = \xi_6 + w^0 \xi_7 & \zeta_6 = \eta_5 + w^4 \eta_6 & y_6 = \zeta_3 + w^2 \zeta_6 \\ c_7 & \xi_7 = c_7\ (c_{ooo}) & \eta_7 = \xi_6 + w^4 \xi_7 & \zeta_7 = \eta_5 + w^6 \eta_7 & y_7 = \zeta_3 + w^3 \zeta_7 \\ \hline \end{array} \]

In the above table \(o\) stands for odd and \(e\) for even.

(Compare Meyberg and Vachenauer 1992, 331)

8.4.1 Examples for the FFT in action

In order to give an idea how FFT works in an application we follow the examples given in (Brunton and Kutz 2022, 66–76).

Example 8.2 (FFT for de-noising) For a signal consisting of two main frequencies \(f_1 = 50\) and \(f_2=120\) we construct a signal \[ f(t) = \sin(2\pi f_1 t) + \sin(2\pi f_2 t) \] and add some Gaussian white noise via np.random.randn.

We compute the FFT of the two signals and their power spectral density (PSD), i.e.  \[ PSD_k=\frac1N |\hat{f}_k|^2. \] We use the PSD as a filter and remove all frequencies with \(PSD < 100\) from our reconstruction, which removes the noise from the signal.

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

np.random.seed(6020)
# Parameters
N = 1024
a, b = 0, 1/4
t = np.linspace(a, b, N, endpoint=False)
dt = t[1] - t[0]
f1 = 50
f2 = 120
fun = lambda t: np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

f_clean = fun(t)
f_noise = fun(t) + 2.5 * np.random.randn(len(t))           # Add some noise

fhat_noise = np.fft.fft(f_noise)
fhat_clean = np.fft.fft(f_clean)

PSD_noise = np.abs(fhat_noise)**2 / N
PSD_clean = np.abs(fhat_clean)**2 / N

freq = (1 / (dt * N)) * np.arange(N)
L = np.arange(1, np.floor(N/4), dtype='int')

# Apply filter in spectral space
filter = PSD_noise > 100
PSDclean = PSD_noise * filter
fhat_filtered = filter * fhat_noise
f_filtered = np.fft.ifft(fhat_filtered)

# Figures
plt.figure(0)
plt.plot(t, f_noise, ":", label=r"Noisy")
plt.plot(t, f_clean, label=r"Clean")
plt.xlabel("Time [s]")
plt.ylabel(r"$f$")
plt.xlim(t[0], t[-1])
plt.ylim(-5, 5)
plt.legend(loc=1)
plt.gca().set_aspect(5e-3)

plt.figure(1)
plt.plot(freq[L], PSD_noise[L], label=r"Noisy")
plt.plot(freq[L], PSD_clean[L], label=r"Clean")
plt.xlabel("Frequency [Hz]")
plt.ylabel("PSD")
plt.xlim(0, int(freq[L[-1] + 1]))
plt.legend(loc=1)
plt.gca().set_aspect(1)

plt.figure(3)
plt.plot(t, np.real(f_filtered), label=r"Noisy")
plt.plot(t, f_clean, label=r"Clean")
plt.xlabel("Time [s]")
plt.ylabel(r"$f$")
plt.xlim(t[0], t[-1])
plt.ylim(-5, 5)
plt.legend(loc=1)
plt.gca().set_aspect(5e-3)
plt.show()
(a) Original clean signal and noisy signal.
(b) Scaled squared norm of the Fourier coefficients (PSD), only parts are shown.
(c) Original signal and de-noised signal.
Figure 8.2: Signal noise filter with FFT.

As can be seen in the Figure 8.2 (c), the reconstruction is not exact. This is due to the fact that the reconstructed frequencies are not matched exactly plus we have some multiples that show up as well. In particular:

    Frequency      PSD
0        52.0  145.259
1       120.0  230.273
2      3976.0  230.273
3      4044.0  145.259

Note: For Figure 8.2 (c) we discarded the imaginary part of the reconstruction.

Exercise 8.2 (Self implementation Example 8.2) Implement the code yourself by filling out the missing sections:

Code fragment for implementation.
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

np.random.seed(6020)
# Parameters
N = 1024
a, b = 0, 1/4
t = # specify the sample steps for t
dt = t[1] - t[0]
f1 = 50
f2 = 120
fun = lambda t: # define the function as described in the example

f_clean = fun(t)
f_noise = fun(t) + 2.5 * np.random.randn(len(t))

fhat_noise = # transform the noisy signal with fft
fhat_clean = # transform the clean signal with fft

PSD_noise = # Compute the PSD for the noisy signal 
PSD_clean = # Compute the PSD for the clean signal 

freq = (1 / (dt * N)) * np.arange(N)
L = np.arange(1, np.floor(N/4), dtype='int')

# Apply filter in spectral space
filter = # create the filter for the PSD values greater than 100
PSDclean = # set everything not part of the filter to zero
fhat_filtered = # apply the filter to the transformed function
f_filtered = # apply the inverse fft

# Figures
plt.figure(0)
plt.plot(t, f_noise, ":", label=r"Noisy")
plt.plot(t, f_clean, label=r"Clean")
plt.xlabel("Time [s]")
plt.ylabel(r"$f$")
plt.xlim(t[0], t[-1])
plt.ylim(-5, 5)
plt.legend(loc=1)
plt.gca().set_aspect(5e-3)

plt.figure(1)
plt.plot(freq[L], PSD_noise[L], label=r"Noisy")
plt.plot(freq[L], PSD_clean[L], label=r"Clean")
plt.xlabel("Frequency [Hz]")
plt.ylabel("PSD")
plt.xlim(0, int(freq[L[-1] + 1]))
plt.legend(loc=1)
plt.gca().set_aspect(1)

plt.figure(3)
plt.plot(t, np.real(f_filtered), label=r"Noisy")
plt.plot(t, f_clean, label=r"Clean")
plt.xlabel("Time [s]")
plt.ylabel(r"$f$")
plt.xlim(t[0], t[-1])
plt.ylim(-5, 5)
plt.legend(loc=1)
plt.gca().set_aspect(5e-3)
plt.show()

Exercise 8.3 (DFT vs. FFT) Implement Example 8.2 with DFT and FFT such that you can evaluate the runtime and create a plot showing the different runtime as well as check if the two produce the same result.

For the Fourier transform we stated that differentiation becomes multiplication, \(\mathcal{F}\{\partial_t f\}=\mathrm{i}\omega\mathcal{F}\{f\}\). Similarly, we can compute the numerical derivative of a sampled function by multiplying each entry of the transformed vector by \(\mathrm{i}\kappa_k\) with \(\kappa_k=\tfrac{2\pi k}{L}\), where \(L\) is the length of the domain. \(\kappa_k\) is called the discrete wavenumber associated with the component \(k\).

Let us explore this with the example stated in (Brunton and Kutz 2022, 68–69).

Example 8.3 (Spectral derivative) We compute the so called spectral derivative for the function \[ \begin{aligned} f(t) &= \cos(t) \exp\left(-\frac{t^2}{25}\right) \\ \partial_t f(t) &= -\sin(t) \exp\left(-\frac{t^2}{25}\right) - \frac{2}{25}t f(t) \end{aligned} \]

In order to provide something to compare our results to we also compute the forward Euler finite-differences for the derivative \[ \partial_t f(t_k) \approx \frac{f(t_{k+1}) - f(t_k)}{t_{k+1} - t_k}. \]

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

# Parameters
N = 128
a, b = -15, 15
L = b - a

fun = lambda t: np.cos(t) * np.exp(-np.power(t, 2) / 25)
dfun = lambda t: -(np.sin(t) * np.exp(-np.power(t, 2) / 25) + \
                 (2 / 25) * t * fun(t))

def fD(N, fun, dfun, a, b):
    t = np.linspace(a, b, N, endpoint=False)
    dt = t[1] - t[0]
    f = fun(t)
    df_DD = np.diff(f) / dt
    df_DD = np.append(df_DD, (f[0] - f[-1]) / dt)  # periodic wrap-around
    return df_DD, np.linalg.norm(df_DD - dfun(t)) / np.linalg.norm(df_DD)

def spD(N, fun, dfun, a, b):
    t = np.linspace(a, b, N, endpoint=False)
    f = fun(t)
    fhat = np.fft.fft(f)
    kappa = np.fft.fftfreq(N, (b - a) / (N * 2 * np.pi))
    df_hat = kappa * fhat * (1j)
    df_r = np.fft.ifft(df_hat).real
    return df_r, np.linalg.norm(df_r - dfun(t)) / np.linalg.norm(df_r)


# Finite differences
df_fD, e = fD(N, fun, dfun, a, b)
# Spectral derivative
df_spD, e = spD(N, fun, dfun, a, b)

# Figures
t = np.linspace(a, b, N, endpoint=False)
plt.figure(0)
plt.plot(t, dfun(t), label="Exact")
plt.plot(t, df_fD, "-.", label="Finite Differences")
plt.plot(t, df_spD, "--", label="Spectral")
plt.xlabel("t")
plt.ylabel(r"$\partial_t f$")
plt.legend(loc=1)
plt.gca().set_aspect(5)

plt.figure(1)
n = 19
M = range(3, n)
e_spD = np.ones(len(M))
e_fD = np.ones(len(M))
for i, j in enumerate(M):
    _, e_fD[i] = fD(2**j, fun, dfun, a, b)
    _, e_spD[i] = spD(2**j, fun, dfun, a, b)

plt.loglog(np.pow(2, M), e_fD, label="Finite differences")
plt.loglog(np.pow(2, M), e_spD, label="Spectral derivative")
plt.grid()
plt.xlabel("N")
plt.ylabel("Relative Error")
plt.legend(loc=1)
plt.gca().set_aspect(2.5e-1)

fun_saw = lambda t, L: 0 if abs(t) > L / 4 else (1 - np.sign(t) * t * 4 / L)
a, b = -np.pi, np.pi
L = b - a
fun2 = lambda t: np.fromiter(map(lambda t: fun_saw(t, L), t), t.dtype)
N = 2**10

t = np.linspace(a, b, N, endpoint=False)

plt.figure(2)
df_spD, _ = spD(N, fun2, fun2, a, b)
df_fD, _ = fD(N, fun2, fun2, a, b)
plt.plot(t, fun2(t), label=r"Function $f$")
plt.plot(t, df_fD, "-.", label="Finite derivative")
plt.plot(t, df_spD, "--", label="Spectral derivative")
plt.xlabel("t")
plt.ylabel(r"$f, \partial_t f$")
plt.xlim(-2, 2)
plt.legend(loc=1)
plt.gca().set_aspect(0.5)
(a) Computation of the derivative with different methods (Exact and spectral are almost the same).
(b) Accuracy of the methods for computing the derivative.
(c) Gibbs phenomenon for the spectral derivative for discontinuous functions.
Figure 8.3: Computing the derivative of a function.

As can be seen in Figure 8.3 (b) we can reduce the error of both methods by increasing \(N\). Nevertheless, the spectral method is more accurate and converges faster.

In Section 3.1.2 we have already seen how the eigendecomposition can be used to compute the solution of ordinary differential equations by changing the basis and transforming the equation into a basis that can be handled in a straightforward manner. The same is true with the Fourier transformation.

As mentioned before, Fourier introduced it to solve the heat equation and we will do the same.

Important

Above we have always transformed the time component of a function into the frequency domain via FFT. In the next example we have the time evolution of a signal and therefore the function depends on time and space. We apply the FFT to the space component to compute the derivative in frequency domain w.r.t. the transformed variable.

This automatically results in periodic boundary conditions for the following examples, see especially Example 8.5.

Example 8.4 (Heat Equation) The heat equation in 1D is given by \[ \dot{u}(t, x) = \alpha^2 \partial_x^2 u(t, x), \] for \(u(t, x)\) as the temperature distribution in space (\(x\)) and time (\(t\)). By applying the Fourier transform we get \(\mathcal{F}\{u(t,x)\}=\hat{u}(t, \omega)\) and \[ \dot{\hat{u}}(t, \omega) = - \alpha^2\omega^2\hat{u}(t, \omega). \] This transforms the partial differential equation into an ordinary differential equation that we can solve for each fixed frequency \(\omega\) as \[ \hat{u}(t, \omega) = \mathrm{e}^{-\alpha^2\omega^2 t} \hat{u}(0, \omega), \] where \(\hat{u}(0, \omega)\) is nothing else than the Fourier transform of the initial temperature distribution at time \(t=0\). To compute the inverse Fourier transform we can make use of the convolution property introduced above.

\[ \begin{aligned} u(t, x) &= \mathcal{F}^{-1}\{\hat{u}(t, \omega)\} = \mathcal{F}^{-1}\{\mathrm{e}^{-\alpha^2\omega^2 t}\} \ast u(0, x) \\ &= \frac{1}{2 \alpha \sqrt{\pi t}} \mathrm{e}^{-\tfrac{x^2}{4\alpha^2t}} \ast u(0, x). \end{aligned} \]

For the numerical computation it is more convenient to stay in the frequency domain and use \(\kappa\) as before.

The majority of the following code is from (Brunton and Kutz 2022, Code 2.5 and Code 2.6).

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

alpha = 1
a, b = -50, 50
L = b - a
N = 1000
x = np.linspace(a, b, N, endpoint=False)
dx = x[1] - x[0]

# Define discrete wavenumbers
#kappa = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kappa = np.fft.fftfreq(N, (b - a) / (N * 2 * np.pi))

# Initial condition
u0 = np.zeros_like(x)
u0[np.abs(x) < 10] = 1
u0hat = np.fft.fft(u0)

# Simulate in Fourier frequency domain
dt = 0.001
T = np.arange(0, 10, dt)

fun = lambda t, x: - alpha**2 * (np.power(kappa, 2)) * x
euler = lambda y, dt, t, fun: y + dt * fun(t, y)

uhat = np.zeros((len(T), len(u0hat)), dtype="complex")
uhat[0, :] = u0hat
for i, t in enumerate(T[1:]):
    uhat[i + 1, :] = euler(uhat[i, :], dt, t, fun)

u = np.zeros_like(uhat)

for k in range(len(T)):
    u[k, :] = np.fft.ifft(uhat[k, :])

u = u.real    

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
#ax.view_init(elev=45, azim=-20, roll=0)
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$t$")
ax.set_zlabel(r"$u(t, x)$")
ax.set_zticks([])

u_plot = u[0:-1:int(1 / dt), :]
for j in range(u_plot.shape[0]):
    ys = j * np.ones(u_plot.shape[1])
    ax.plot(x, ys, u_plot[j, :])
    
# Image plot
plt.figure()
plt.imshow(np.flipud(u[0:-1:100]), aspect=8)
plt.xlabel(r"$x$")
plt.ylabel(r"$t \to$")
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.show()
(a) Evolution of the heat equation in time.
(b) x-t diagram of the heat equation.
Figure 8.4: Simulation of the heat equation in 1D.

Exercise 8.4 (Self implementation Example 8.4) Implement the code yourself by filling out the missing sections:

import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

alpha = 1
a, b = -50, 50
L = b - a
N = 1000
x = np.linspace(a, b, N, endpoint=False)
dx = x[1] - x[0]

# Define discrete wavenumbers
kappa = np.fft.fftfreq(N, (b - a) / (N * 2 * np.pi))

# Initial condition
u0 = np.zeros_like(x)
u0[np.abs(x) < 10] = 1
u0hat = np.fft.fft(u0)

# Simulate in Fourier frequency domain
dt = 0.001
T = np.arange(0, 10, dt)

fun = lambda t, x: #right hand side for the euler function
euler = lambda y, dt, t, fun: # euler step

uhat = np.zeros((len(T), len(u0hat)), dtype="complex")
uhat[0, :] = u0hat
for i, t in enumerate(T[1:]):
    uhat[i + 1, :] = euler(uhat[i, :], dt, t, fun)

u = np.zeros_like(uhat)

for k in range(len(T)):
    u[k, :] = np.fft.ifft(uhat[k,:])

u = u.real    

# Waterfall plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$t$")
ax.set_zlabel(r"$u(t, x)$")
ax.set_zticks([])

u_plot = u[0:-1:int(1 / dt), :]
for j in range(u_plot.shape[0]):
    ys = j * np.ones(u_plot.shape[1])
    ax.plot(x, ys, u_plot[j, :])
    
# Image plot
plt.figure()
plt.imshow(np.flipud(u[0:-1:100]), aspect=8)
plt.xlabel(r"$x$")
plt.ylabel(r"$t \to$")
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.show()

Next, we look at the transport equation.

Example 8.5 (Transport or advection) The transport equation in 1D is given by \[ \dot{u} = - c \partial_x u. \] It is called the transport equation, as the initial value simply propagates through the domain to the right with speed \(c\), i.e. \(u(t, x) = u(0, x -c t)\). The solution is computed by simply exchanging the right hand side in the above code.

Note: In this example, we see the periodic boundary conditions implicitly provided by the space discretisation via Fourier transform.

The majority of the following code is from (Brunton and Kutz 2022, Code 2.5 and Code 2.6).

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
%config InlineBackend.figure_formats = ["svg"]

c = 2
a, b = -20, 20
L = b - a
N = 1000
x = np.linspace(a, b, N, endpoint=False)
dx = x[1] - x[0]

# Define discrete wavenumbers
#kappa = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kappa = np.fft.fftfreq(N, (b - a) / (N * 2 * np.pi))

# Initial condition
u0 = 1/np.cosh(x)
u0hat = np.fft.fft(u0)

# SciPy's odeint function doesn't play well with complex numbers,
# so we recast the state u0hat from an N-element complex vector 
# to a 2N-element real vector
u0hat_ri = np.concatenate((u0hat.real, u0hat.imag))

# Simulate in Fourier frequency domain
dt = 0.025
T = np.arange(0, 600*dt, dt)

def rhsWave(uhat_ri,t,kappa,c):
    uhat = uhat_ri[:N] + (1j) * uhat_ri[N:]
    d_uhat = -c * (1j ) * kappa * uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag)).astype('float64')
    return d_uhat_ri

uhat_ri = odeint(rhsWave, u0hat_ri, T, args=(kappa, c))

uhat = uhat_ri[:, :N] + (1j) * uhat_ri[:, N:]

u = np.zeros_like(uhat)

for k in range(len(T)):
    u[k, :] = np.fft.ifft(uhat[k, :])

u = u.real    

# Waterfall plot
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$t$")
ax.set_zlabel(r"$u(t, x)$")
ax.set_zticks([])
u_plot = u[0:-1:60, :]
for j in range(u_plot.shape[0]):
    ys = j * np.ones(u_plot.shape[1])
    ax.plot(x, ys, u_plot[j, :])
    
# Image plot
plt.figure()
plt.imshow(np.flipud(u), aspect=1.5)
plt.xlabel(r"$x$")
plt.ylabel(r"$t \to$")
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.show()
(a) Evolution of the transport equation in time.
(b) x-t diagram of the transport equation.
Figure 8.5: Simulation of the transport equation in 1D.

To increase complexity from these simple equations we move to the nonlinear Burgers’ equation, which combines the two equations above and is an example that produces shock waves in fluids.

Example 8.6 (Burgers’ equation) Burgers’ equation in 1D is given by \[ \dot{u} + u \partial_x u = \nu \partial_x^2 u. \] The equation consists of a nonlinear convection part \(u \partial_x u\) as well as diffusion. The convection is such that larger amplitudes travel faster, which causes a shock wave to form.

Regarding the solution of this equation, the interesting part is that we need to map back and forth between Fourier space and physical space in order to apply the nonlinearity.

This example is also very useful to test the capabilities of a solver: by decreasing the diffusion factor further and further we approach an infinitely steep shock, and in theory the wave can break like an ocean wave.

The majority of the following code is from (Brunton and Kutz 2022, Code 2.5 and Code 2.6).

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
%config InlineBackend.figure_formats = ["svg"]

nu = 0.001
a, b = -10, 10
L = b - a
N = 1000
x = np.linspace(a, b, N, endpoint=False)
dx = x[1] - x[0]

# Define discrete wavenumbers
#kappa = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kappa = np.fft.fftfreq(N, (b - a) / (N * 2 * np.pi))

# Initial condition
u0 = 1/np.cosh(x)

# Simulate in Fourier frequency domain
dt = 0.025
T = np.arange(0,100*dt,dt)

def rhsBurgers(u, t, kappa, nu):
    uhat = np.fft.fft(u)
    d_uhat = (1j) * kappa * uhat
    dd_uhat = -np.power(kappa, 2) * uhat
    d_u = np.fft.ifft(d_uhat)
    dd_u = np.fft.ifft(dd_uhat)
    du_dt = -u * d_u + nu * dd_u
    return du_dt.real

u = odeint(rhsBurgers, u0, T, args=(kappa, nu))

# Waterfall plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$t$")
ax.set_zlabel(r"$u(t, x)$")
ax.set_zticks([])

u_plot = u[0:-1:10, :]
for j in range(u_plot.shape[0]):
    ys = j * np.ones(u_plot.shape[1])
    ax.plot(x, ys, u_plot[j, :])
    
# Image plot
plt.figure()
plt.imshow(np.flipud(u), aspect=8)
plt.xlabel(r"$x$")
plt.ylabel(r"$t \to$")
plt.gca().axes.get_xaxis().set_ticks([])
plt.gca().axes.get_yaxis().set_ticks([])
plt.show()
(a) Evolution of Burgers’ equation in time.
(b) x-t diagram of Burgers’ equation.
Figure 8.6: Time integration of Burgers’ equation

The following example is taken from (Meyberg and Vachenauer 1992).

Example 8.7 (Grounded Emitter Circuit) We can use the FFT to compute some basic properties of a grounded (common) emitter circuit, Figure 8.7, and how the signal propagates through this circuit, Figure 8.8.

Figure 8.7: Electric circuit
Show the raw data for the example
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

frequ = 2 * np.pi * 50
f = lambda t: 0.8 * np.sin(t * frequ)
diod = lambda t: (np.exp(1.2 + t) - 1)

t = np.linspace(0, 2 * np.pi / frequ, 1024, endpoint=False)
x = np.linspace(np.min(f(t)) * 1.3, np.max(f(t)) * 1.1, 1024)

ic = lambda t: diod(f(t))

k = np.arange(0, 16) * 1 / 16

# The provided figures were used to create the illustration
if False:
    plt.figure()
    plt.plot(f(t), t)
    plt.plot([np.min(t),np.max(t)], [0,0], "gray")
    plt.axis("off")
    plt.savefig("sin.svg", transparent=True)
    plt.figure()
    plt.plot(x, diod(x))
    z = np.array([0, np.min(f(t)), np.max(f(t))])
    plt.plot(z, diod(z), "bx")
    plt.axis("off")
    plt.xlim([-2,5])
    plt.gcf().patch.set_visible(False)
    plt.savefig("diode.svg", transparent=True)

    plt.figure()
    plt.plot(t, ic(t))
    plt.plot(2*np.pi*k/frequ, ic(2*np.pi*k/frequ), "ro")
    plt.axis("off")
    plt.savefig("ic.svg", transparent=True, bbox_inches ="tight")

y = ic(2 * np.pi * k / frequ)  # sample one full period (the red dots above)
yhat = np.fft.fft(y)
# Correction factor 1/N turns the FFT output into the coefficients c_i
factor = 1 / 16
yy = factor * yhat

ic_mean = np.mean(ic(np.linspace(0, 1/50, 2**20)))
c0 = yy[0].real
effective_value = np.linalg.norm(yy[1:])
harmonic_distortion = np.linalg.norm(yy[2:-1]) / np.linalg.norm(yy[1:])  # harmonics i = 2..14

The transfer characteristic of the transistor can be approximated by a diode and is illustrated in the \(u_{BE}\) vs. \(i_C\) diagram on the top left (orange signal in Figure 8.8). The exciting function \(u\) on the bottom left is a plain sine wave with \(50\,\mathrm{Hz}\).

All together we get the equation \[ i_c = \mathrm{e}^{1.2 + 0.8 \sin(2 \pi 50 t)} - 1 \] for the current in \([\mathrm{mA}]\) running through the transistor.

Figure 8.8: Signal transition for the electric circuit shown above

We can see that the signal is boosted depending on the characteristic curve of the diode, i.e. the positive part of the sine is amplified more than the negative part. If we compute the fast Fourier transform, as illustrated with the red discretisation points of the curve on the right hand side, we can interpret the FFT coefficients \(c_i\) as follows:

  1. the direct current component:

\[ c_0 = 2.873 \]

  2. the effective value of the alternating part, from Parseval’s Theorem:

\[ \overline{i_c} = \sqrt{\sum_{i=1}^{15} |c_i|^2}= 2.071 \]

  3. the total harmonic distortion (how much the signal is deformed):

\[ THD = \frac{\sqrt{\sum_{i=2}^{14} |c_i|^2}}{\sqrt{\sum_{i=1}^{15} |c_i|^2}}=0.193 \]

This concludes our example with the FFT.

8.5 Gabor Transform

The Fourier transform can only be computed once all values of a signal are sampled. It provides spectral information over the entire signal, but not when in time certain frequencies occur, i.e. no temporal information: the time component is integrated out and is no longer present in the transformed signal. Hence the Fourier transform can only characterize periodic and stationary signals.

If both are needed, in order to generate a spectrogram (frequency plotted versus time), the Gabor transform brings remedy. Technically it is the Fourier transform of the signal masked by a sliding window \[ \mathcal{G}\{f\}(t, \omega) = G_f(t, \omega) = \int_{-\infty}^\infty f(\tau) \mathrm{e}^{-\mathrm{i}\omega\tau}\overline{g}(\tau - t)\, \mathrm{d}\tau. \] For the so-called kernel \(g(t)\) a Gaussian bell curve is often used \[ g(t - \tau) = \mathrm{e}^{-\tfrac{(t-\tau)^2}{a^2}}. \]

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

np.random.seed(6020)
# Parameters
N = 1024
a, b = 0, 1/4
t = np.linspace(a, b, N, endpoint=False)
dt = t[1] - t[0]
f1 = 50
f2 = 120
fun = lambda t: np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

gauss = lambda x, a: np.exp(-np.pow(x, 2) / a**2)

f_signal = fun(t) + np.random.randn(len(t))
g_window = gauss(t - 0.125, 0.025) * 4.8   # scaled window for the plot

# Figures
plt.figure(0)
plt.plot(t, f_signal, label="Signal")
plt.plot(t, g_window, label=r"$g(t-\tau)$")
#plt.xlabel("Time [s]")
#plt.ylabel(r"$f$")
plt.xticks([0.1, 0.125, 0.15], [r"$\tau - a$", r"$\tau$", r"$\tau + a$"])
plt.xlim(t[0], t[-1])
plt.ylim(-5, 5)
plt.yticks([])
plt.legend(loc=1)
plt.gca().set_aspect(5e-3)

plt.show()
Figure 8.9: Signal with Gaussian kernel

Figure 8.9 illustrates the sliding time window with the spread \(a\) and the center \(\tau\). The inverse is given as \[ f(t) = \mathcal{G}^{-1}\{G_f(t, \omega)\} = \frac{1}{2 \pi \|g\|^2} \int_{-\infty}^\infty\int_{-\infty}^\infty G_f(\tau, \omega) \mathrm{e}^{\mathrm{i}\omega t}\,\overline{g}(t - \tau)\, \mathrm{d}\omega\, \mathrm{d}\tau. \]

As with the Fourier transform the application of the Gabor transform is usually done in its discrete form.

Definition 8.8 (Discrete Gabor Transform) For discrete time \(\tau = k \Delta t\) and frequencies \(\nu = j \Delta \omega\) the discretized kernel function has the form \[ g_{j,k} = \mathrm{e}^{{\mathrm{i}2 \pi j\Delta\omega t}} g(t- k\Delta t) \] and the discrete Gabor transform \[ \mathcal{G}\{f\}_{j,k} = \langle f, g_{j,k} \rangle = \int_{-\infty}^\infty f(\tau)\overline{g}_{j,k}(\tau)\, \mathrm{d}\tau. \]
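A minimal sketch of Definition 8.8: the signal is masked by a sampled Gaussian window at a grid of centers \(\tau = k\Delta t\) and each windowed segment is transformed with the FFT (the toy signal, window width and hop size are illustrative choices; plt.specgram used below does essentially this with a different window):

import numpy as np

def gabor(f, t, a=0.05, hop=32):
    # discrete Gabor transform: one FFT per window position tau
    dt = t[1] - t[0]
    centers = t[::hop]                            # window centers tau
    G = np.empty((len(centers), len(t)), dtype=complex)
    for row, tau in enumerate(centers):
        window = np.exp(-((t - tau) / a) ** 2)    # Gaussian kernel g
        G[row, :] = np.fft.fft(f * window) * dt
    return centers, G

# toy signal: 50 Hz in the first half, 120 Hz in the second
t = np.linspace(0, 1, 1024, endpoint=False)
f = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))
taus, G = gabor(f, t)
# |G[k, j]| indicates how strong frequency j is around time taus[k]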

The following example is taken from (Brunton and Kutz 2022, 77–78).

Example 8.8 (Gabor transform for a quadratic chirp) As an example we use an oscillating cosine function where the frequency of the oscillation increases as a quadratic function in time: \[ f(t) = \cos(2\pi\, t\, h(t)), \quad \text{with}\quad h(t) = h_0 + (h_1 - h_0)\frac{t^2}{3 t_1^2}, \] where the frequency shifts from \(h_0\) to \(h_1\) between \(t=0\) and \(t=t_1\).

The majority of the following code is from (Brunton and Kutz 2022, Code 2.9).

Show the code for the figure
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ["svg"]

N = 2048
a, b = 0, 2
t = np.linspace(a, b, N, endpoint=False)
dt = t[1] - t[0]
h0 = 50
h1 = 250
t1 = 2
x = np.cos(2 * np.pi * t * (h0 + (h1 - h0) * np.power(t,2) / (3 * t1**2)))

plt.figure(0)
N = len(x)
freq = (1 / (dt * N)) * np.arange(N)
PSD_noise = np.abs(np.fft.fft(x))**2 / N
plt.plot(freq[:N//2], PSD_noise[:N//2])
plt.xlabel("Frequency [Hz]")
plt.gca().set_aspect(10)

plt.figure(1)
plt.specgram(x, NFFT=128, Fs=1/dt, noverlap=120, cmap='jet')
plt.colorbar()
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.show()
(a) Power spectral density of the signal.
(b) Spectrogram.
Figure 8.10: Quadratic chirp signal

We can see a clear peak at 50Hz but no information about time is given, whereas in the spectrogram we see how the frequency progresses in time.

Note

In time-frequency analysis we see Heisenberg’s uncertainty principle at work: we cannot attain high resolution simultaneously in both the time and the frequency domain. In the time domain we have perfect resolution but no information about the frequencies; in the Fourier domain the time information is not present.

The spectrogram resolves both, but with reduced resolution. This effect manifests differently at different locations of the chirp signal: at the left, where the frequency is almost constant, the band is narrow, whereas at the right it becomes wider due to the increased blur in the horizontal direction. The product of the uncertainties of these quantities has a minimal value; in this context the time and frequency domain respectively represent the extremes.

The Fourier transform always assumes a harmonic nature in the signal it is applied to. Therefore, it is best suited for e.g. music, rotating machines, or vibrations. For other signals, especially those with discontinuous parts (Gibbs phenomenon), there are more adequate basis functions. We look at these in Chapter 10, but first we look at the Laplace transform in Chapter 9.