Fourier Spectral methods

AMath 585, Winter Quarter 2020 at the University of Washington. Developed by R.J. LeVeque and distributed under the BSD license. You are free to modify and use as you please, with attribution.

These notebooks are all available on Github.


This notebook illustrates how to use the Discrete Fourier Transform to compute a Fourier series interpolant of specified data, how to approximate $u''(x)$ from given discrete values of a function $u(x)$ on $0 \leq x \leq 2\pi$, and how this can be used to solve a boundary value problem with periodic boundary conditions.

In [1]:
%matplotlib inline
In [2]:
from pylab import *

The numpy FFT module

First we illustrate use of the numpy.fft functions. See the documentation for more information.

See also the handout fourier.pdf for more about the notation used in the discrete Fourier transform, and how this transform relates to other Fourier transforms.

In [3]:
from numpy import fft

We provide data u[j] at grid points $x_j = jh$ for $j=0,~1,~\ldots,~N-1$ with $h=2\pi/N$.

Transform $\sin(kx)$ and check that only one set of coefficients is nonzero:

In [4]:
N = 16
x = linspace(0, 2*pi*(1-1./N), N)

k = 3.
u = sin(k*x)
u_hat = fft.fft(u)

for k in range(N):
    print('k = %3i:   u_hat[k] = %05f %+05fi ' % (k,u_hat[k].real,u_hat[k].imag))
k =   0:   u_hat[k] = 0.000000 +0.000000i 
k =   1:   u_hat[k] = 0.000000 +0.000000i 
k =   2:   u_hat[k] = 0.000000 +0.000000i 
k =   3:   u_hat[k] = -0.000000 -8.000000i 
k =   4:   u_hat[k] = 0.000000 -0.000000i 
k =   5:   u_hat[k] = 0.000000 -0.000000i 
k =   6:   u_hat[k] = 0.000000 -0.000000i 
k =   7:   u_hat[k] = 0.000000 -0.000000i 
k =   8:   u_hat[k] = 0.000000 +0.000000i 
k =   9:   u_hat[k] = 0.000000 +0.000000i 
k =  10:   u_hat[k] = 0.000000 +0.000000i 
k =  11:   u_hat[k] = 0.000000 +0.000000i 
k =  12:   u_hat[k] = 0.000000 +0.000000i 
k =  13:   u_hat[k] = -0.000000 +8.000000i 
k =  14:   u_hat[k] = 0.000000 -0.000000i 
k =  15:   u_hat[k] = 0.000000 -0.000000i 

Note that for $k=3$ and $N=16$, we find $\hat u_3 = -8i$ and $\hat u_{13} = 8i$.

In general $$ e^{i(N-k)x_j} = e^{iNjh}e^{-ikjh} = e^{-ikx_j} $$ since $h = 2\pi/N$ and hence $$ e^{iNjh} = \left(e^{2\pi i}\right)^j = 1. $$ So the coefficient for $N-k = 13$ in the example above can also be viewed as the coefficient of $e^{-3ix}$, and the discrete Fourier transform above tells us that $N\sin(3x)$ is interpolated by the function $$ -8ie^{3ix} + 8ie^{-3ix} = -8i\left(2i\sin(3x)\right) = 16 \sin(3x). $$ This is what we expect, since the function we chose agrees with its interpolant in this case.
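
As a quick sanity check (an illustrative snippet, not part of the original notebook), we can verify this aliasing identity numerically at the grid points for $k=3$ and $N=16$:

In [ ]:
# e^{i(N-k)x_j} should equal e^{-i k x_j} at every grid point x_j:
print('Max |exp(13i*x_j) - exp(-3i*x_j)| = %g' % abs(exp(13j*x) - exp(-3j*x)).max())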

In general if the original data is real-valued then we expect u_hat[N-k] to be the complex conjugate of u_hat[k].
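
A small check of this symmetry for the transform computed above (illustrative, not in the original notebook):

In [ ]:
# For real-valued data, u_hat[N-k] should equal the complex conjugate of u_hat[k]:
sym_err = 0.
for k in range(1, N):
    sym_err = maximum(sym_err, abs(u_hat[N-k] - conj(u_hat[k])))
print('Max deviation from conjugate symmetry: %g' % sym_err)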

Inverse transform

If we apply the ifft to u_hat, we should recover the original u values:

In [5]:
v = fft.ifft(u_hat)
for k in range(N):
    print('k = %3i:   v[k] = %.5f %+.5fi   Original u[k] = %5f' \
          % (k, v[k].real, v[k].imag, u[k]))
k =   0:   v[k] = 0.00000 +0.00000i   Original u[k] = 0.000000
k =   1:   v[k] = 0.92388 -0.00000i   Original u[k] = 0.923880
k =   2:   v[k] = 0.70711 +0.00000i   Original u[k] = 0.707107
k =   3:   v[k] = -0.38268 -0.00000i   Original u[k] = -0.382683
k =   4:   v[k] = -1.00000 +0.00000i   Original u[k] = -1.000000
k =   5:   v[k] = -0.38268 -0.00000i   Original u[k] = -0.382683
k =   6:   v[k] = 0.70711 +0.00000i   Original u[k] = 0.707107
k =   7:   v[k] = 0.92388 +0.00000i   Original u[k] = 0.923880
k =   8:   v[k] = 0.00000 +0.00000i   Original u[k] = 0.000000
k =   9:   v[k] = -0.92388 -0.00000i   Original u[k] = -0.923880
k =  10:   v[k] = -0.70711 +0.00000i   Original u[k] = -0.707107
k =  11:   v[k] = 0.38268 -0.00000i   Original u[k] = 0.382683
k =  12:   v[k] = 1.00000 +0.00000i   Original u[k] = 1.000000
k =  13:   v[k] = 0.38268 +0.00000i   Original u[k] = 0.382683
k =  14:   v[k] = -0.70711 +0.00000i   Original u[k] = -0.707107
k =  15:   v[k] = -0.92388 +0.00000i   Original u[k] = -0.923880

Note that the imaginary part of v is zero (to machine precision if you print more digits).
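
One way to confirm this without printing more digits (a small illustrative check, not in the original notebook):

In [ ]:
# The imaginary part of v should be at the level of rounding error:
print('Max |imag(v)| = %g' % abs(v.imag).max())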

Fourier interpolation

First consider the function obtained from the Fourier transform as an interpolant of the data, $$ \tilde u(x) = \frac{1}{N} \sum_{k=0}^{N-1} e^{ikx} \hat u_k. $$

If the data comes from a function that is smooth (including at the boundaries, after extending periodically), then a very accurate interpolant can be obtained with few points.

Consider $u(x) = \exp(\cos(x))$, which is $2\pi$-periodic and whose periodic extension is $C^\infty$ (infinitely differentiable, with continuous derivatives of all orders).

In [6]:
u_fcn = lambda x: exp(cos(x))

N = 8
N2 = int(N/2)
x = linspace(0, 2*pi*(1-1./N), N)
u = u_fcn(x)

u_hat = fft.fft(u)

# i * wave number vector (fft ordering):
ik = 1j*hstack((range(0,N2+1), range(-N2+1,0)));   

def u_tilde(x):
    u_tilde = u_hat[0] * ones(x.shape, dtype=complex)
    for k in range(1,N):
        u_tilde += exp(ik[k]*x) * u_hat[k]
    u_tilde = u_tilde / N
    return u_tilde.real
        

xfine = linspace(0,2*pi,1000)
ufine = u_fcn(xfine)
plot(xfine, ufine, 'b', label='original u(x)')

u_tilde_fine = u_tilde(xfine)

u = u_fcn(x)
plot(x, u, 'bo', label='data points u_j')

plot(xfine, u_tilde_fine, 'r-', label='interpolant u tilde')
legend()
title('Original function u(x) and interpolant at %i points' % N);

The fit above looks very good even with only 8 points!
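
We can quantify this (an illustrative check, not in the original notebook) by measuring the maximum difference between the interpolant and $u(x)$ on the fine grid:

In [ ]:
# Maximum error of the N-point Fourier interpolant on the fine grid:
print('Max interpolation error with N = %i points: %g' % (N, abs(u_tilde_fine - ufine).max()))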

It would not look so good if the periodic extension of $u(x)$ were not $C^\infty$.

Discontinuous $u(x)$

Consider for example $u(x) = x$, which is $C^\infty$ in the interval but the periodic extension is discontinuous, so it is not even $C^0$ (continuous).

In [7]:
u_fcn = lambda x: x

N = 16
N2 = int(N/2)
x = linspace(0, 2*pi*(1-1./N), N)
u = u_fcn(x)

u_hat = fft.fft(u)

# i * wave number vector (fft ordering):
ik = 1j*hstack((range(0,N2+1), range(-N2+1,0)));   

def u_tilde(x):
    u_tilde = u_hat[0] * ones(x.shape, dtype=complex)
    for k in range(1,N):
        u_tilde += exp(ik[k]*x) * u_hat[k]
    u_tilde = u_tilde / N
    return u_tilde.real
        

xfine = linspace(0,2*pi,1000)
ufine = u_fcn(xfine)
plot(xfine, ufine, 'b', label='original u(x)')

u_tilde_fine = u_tilde(xfine)

u = u_fcn(x)
plot(x, u, 'bo', label='data points u_j')

plot(xfine, u_tilde_fine, 'r-', label='interpolant u tilde')
legend()
title('Original function u(x) and interpolant at %i points' % N);

In this case the Fourier interpolant is a $C^\infty$ periodic function, so we observe the Gibbs phenomenon near the point of discontinuity of $u(x)$ (the boundary).

Note: Differentiating this interpolant would not give good approximations to the derivative, at least near the boundary, no matter how many points we used.
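
A rough check of this claim (illustrative, and ignoring the special treatment of the Nyquist mode that is usual for first derivatives): multiply u_hat by $ik$ and transform back, then compare to the true derivative $u'(x) = 1$:

In [ ]:
# Spectral differentiation of the discontinuous case; the result oscillates
# wildly near the boundary, so the max error is large:
ux_spec = real(fft.ifft(ik * u_hat))
print("Max error in spectral derivative of u(x) = x: %g" % abs(ux_spec - 1.).max())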

If we compute more Fourier coefficients, we see they only decay like $1/k$ for a discontinuous function:

In [8]:
N = 512
N2 = int(N/2)
x = linspace(0, 2*pi*(1-1./N), N)
u = u_fcn(x)

k = range(1,N2)
u_hat = fft.fft(u)

figure(figsize=(8,4))
u_hat_mag = abs(u_hat) / sqrt(N)
loglog(k, u_hat_mag[k], 'b-o', label='FFT')
for p in range(1,4):
    kpower = 1./array(k)**p
    loglog(k, kpower, label='1/k^%i' % p)
legend()
xlabel('wavenumber k')
ylabel('abs(u_hat[k])');

Actually the decay flattens out for larger $k$ because we are looking at the discrete Fourier coefficients rather than the coefficients of the continuous Fourier series, but we won't go into that.

If the periodic extension were $C^0$ then the initial decay would be like $1/k^2$, and more generally if it is $C^\ell$ (i.e. $\ell$ derivatives exist everywhere and are continuous, even at the boundary when extended periodically) then the decay would start out like $1/k^{\ell + 2}$.
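
As an illustration of the $C^0$ case (an assumed example, not from the original notebook), consider $u(x) = |x - \pi|$: its periodic extension is continuous but has corners, so its coefficients should start decaying like $1/k^2$. Only the odd wavenumbers are nonzero for this triangle wave, so we plot those:

In [ ]:
# FFT of the triangle wave |x - pi| on the same 512-point grid as above:
u_hat_c0 = fft.fft(abs(x - pi))
kodd = array(range(1, N2, 2))
figure(figsize=(8,4))
loglog(kodd, abs(u_hat_c0[kodd])/sqrt(N), 'b-o', label='FFT of |x-pi|')
loglog(kodd, 1./kodd**2., 'g', label='1/k^2')
legend()
xlabel('wavenumber k')
ylabel('abs(u_hat[k])');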

Smooth periodic function

Again consider $u(x) = \exp(\cos(x))$, which is $2\pi$-periodic and whose periodic extension is $C^\infty$ (infinitely differentiable, with continuous derivatives of all orders). In this case the Fourier coefficients decay faster than any power $1/k^p$ and approximations of derivatives have "spectral accuracy": using $N$ points, we expect the error to decay faster than $1/N^p$ for any power $p$. (Recall that a finite difference method generally has order of accuracy $p$ for some small $p$, e.g. $p=2$ for the 3-point centered approximation to $u''(x)$.)

In [9]:
u_fcn = lambda x: exp(cos(x))

Observe the rapid decay of u_hat[k] with k in the plot below. We only plot the first N/2 wavenumbers since for a real-valued function we expect u_hat[N-k] to be the complex conjugate of u_hat[k].

In [10]:
N = 64
N2 = int(N/2)
x = linspace(0, 2*pi*(1-1./N), N)
u = u_fcn(x)

k = range(1,N2)
u_hat = fft.fft(u)

figure(figsize=(8,4))
loglog(k, abs(u_hat[k]), 'b-o', label='FFT')
for p in range(1,4):
    kpower = 1./array(k)**p
    loglog(k, kpower, label='1/k^%i' % p)
legend()
xlabel('wavenumber k')
ylabel('abs(u_hat[k])');

The fact that the slope continues to get more negative in this log-log plot means that we have spectral accuracy. Note that the function can be represented almost to machine accuracy using only the modes up to about $k=15$; beyond that, rounding error dominates.
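
One way to make this concrete (an illustrative check, not in the original notebook) is to zero out all modes with wavenumber above 15 and see how well the truncated transform still reproduces the data:

In [ ]:
# Keep only the modes with |wavenumber| <= 15 (indices 0..15 and N-15..N-1):
u_hat_trunc = u_hat.copy()
u_hat_trunc[16:N-15] = 0.
u_trunc = real(fft.ifft(u_hat_trunc))
print('Max error from keeping only |k| <= 15 modes: %g' % abs(u_trunc - u).max())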

Likewise, very good approximations to the derivative can be achieved with small values of N, and we expect full precision by about N = 30 (so N/2 = 15):

Spectral differentiation

Consider the problem of approximating $u''(x_j)$ at a set of $N$ points when given function values $U_j = u(x_j)$. If $u(x)$ is assumed to be $2\pi$-periodic then it is appropriate to use a spectral Fourier method. Again we use grid points $x_j = jh$ for $j=0,~1,~\ldots,~N-1$ with $h=2\pi/N$.

In [11]:
u_fcn = lambda x: exp(cos(x))
uxx_fcn = lambda x: (sin(x)**2 - cos(x)) * exp(cos(x))
In [12]:
# Discrete points:
N = 30  # should be even
h = 2*pi/N
x = linspace(0, 2*pi-h, N)

xfine = linspace(0,2*pi,1000)
ufine = u_fcn(xfine)
plot(xfine, ufine, 'b')

u = u_fcn(x)
plot(x, u, 'bo')
title('Original function u(x) and %i points' % N);
In [13]:
N2 = int(N/2)

# i * wave number vector (fft ordering):
ik = 1j*hstack((range(0,N2+1), range(-N2+1,0)))  
ik2 = ik*ik;          # multiplication factor for second derivative

# FFT of u at discrete points:
u_hat = fft.fft(u)

# multiply by ik2 to get approximate DFT of u'' at these points:
v_hat = ik2 * u_hat 

# Inverse FFT to get v with v[j] approximating u''(x[j]):
v = real(fft.ifft(v_hat))     
# imaginary part should be at machine precision level

error = v - uxx_fcn(x)
print("Max-norm of error in u'' at points is: %g" % norm(error,inf))

uxxfine = uxx_fcn(xfine)
plot(xfine, uxxfine, 'b')

u = u_fcn(x)
plot(x, v, 'ro')
title("True function u''(x) and approximation at %i points" % N);
Max-norm of error in u'' at points is: 9.52571e-14
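
To see the spectral convergence claimed earlier, we can repeat this computation for several values of N (an illustrative sweep, not part of the original notebook):

In [ ]:
# Repeat the FFT / multiply-by-(ik)^2 / IFFT procedure for increasing N:
for Ntest in [4, 8, 12, 16, 20, 24, 30]:
    N2t = int(Ntest/2)
    xt = linspace(0, 2*pi*(1-1./Ntest), Ntest)
    ik2t = (1j*hstack((range(0,N2t+1), range(-N2t+1,0))))**2
    vt = real(fft.ifft(ik2t * fft.fft(u_fcn(xt))))
    print("N = %3i:  max error in u'' = %g" % (Ntest, abs(vt - uxx_fcn(xt)).max()))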

Solving a BVP

Now let's try solving the boundary value problem $u''(x) = f(x)$ with periodic boundary conditions, $u(0)=u(2\pi)$. For $f(x)$ we will use the second derivative of the function $u(x) = \exp(\cos(x))$ used above. This $f$ has the property $\int_0^{2\pi} f(x)\, dx = 0$, which is required for a solution to exist. The solution is not unique, and indeed we will see that the solution computed by the Fourier spectral method below is shifted by a constant from the $u(x)$ we started with (because it is computed with the additional constraint that its mean is zero).

In [14]:
u_fcn = lambda x: exp(cos(x))
uxx_fcn = lambda x: (sin(x)**2 - cos(x)) * exp(cos(x))

# Discrete points:
N = 30  # should be even
h = 2*pi/N
x = linspace(0, 2*pi-h, N)

f = uxx_fcn(x)
print('Sum of f[j] = ', f.sum())

N2 = int(N/2)
ik = 1j*hstack((range(0,N2+1), range(-N2+1,0)));   # i * wave number vector (fft ordering)
ik2 = ik*ik;          # multiplication factor for second derivative
ik2[0] = 1.    # but set the factor for k=0 to 1 so we don't divide by zero below!

# FFT of f at discrete points:
f_hat = fft.fft(f)

# divide by ik2 to get approximate DFT of u at these points:
v_hat = f_hat / ik2

# Expect f_hat[0] = 0 and hence v_hat[0] = 0 as well:
print('v_hat[0] = %.5f %+.5fi (should be 0)' % (v_hat[0].real, v_hat[0].imag))

# Inverse FFT to get v with v[j] approximating solution:
v = real(fft.ifft(v_hat))     
# imaginary part should be at machine precision level

ufine = u_fcn(xfine) # true solution
plot(xfine, ufine, 'b')

plot(x, v, 'ro')
title("True function u(x) and approximation at %i points" % N);
Sum of f[j] =  7.549516567451064e-15
v_hat[0] = 0.00000 +0.00000i (should be 0)

The approximation we computed has the property that v_hat[0] = 0 and so the mean of the data values v[k] is zero, whereas the "true solution" we specified does not have mean 0. But recall the solution to this BVP is only unique up to a constant, and if we shift the "true solution" to have mean 0, then they agree to machine precision:

In [15]:
u = u_fcn(x)
umean = u.mean()
print('Mean of u[j] is %g' % umean)
print('Shifting u to have mean 0')
ushifted = u - umean
plot(xfine, ufine - umean, 'b')
plot(x, v, 'ro')
title("Shifted function u(x) and approximation at %i points" % N);
err = abs(v - ushifted).max()
print('Max error between v and shifted u: %g' % err)
Mean of u[j] is 1.26607
Shifting u to have mean 0
Max error between v and shifted u: 1.05471e-15