AMath 585, Winter Quarter 2020 at the University of Washington. Developed by R.J. LeVeque and distributed under the BSD license. You are free to modify and use as you please, with attribution.
These notebooks are all available on Github.
This notebook explains and illustrates some of the key ideas used in multigrid methods, following Section 4.6 of the text.
%matplotlib inline
from pylab import *
use_widgets = False
if use_widgets:
    from ipywidgets import interact
    import ipywidgets as widgets

    def animate_figs(figs):
        show_frame = lambda frameno: display(figs[frameno])
        interact(show_frame, frameno=widgets.IntSlider(min=0,max=len(figs)-1, value=0))
else:
    from jsanimate_figs import animate_figs
For illustration we will solve $u''(x) = f(x)$ in $0\leq x \leq 1$ with Dirichlet boundary conditions.
We choose a true solution $u(x)$ that is more interesting than in previous examples, the same function used in the text as (4.83), $u(x) = 1 + 12x - 10x^2 + \frac{1}{2}\sin(\phi(x))$, with $\phi(x) = 20 \pi x^3$. Note that this oscillates with higher frequency (shorter wavelength) as $x$ increases, since $\phi(x)$ grows like $x^3$. We choose $f(x) = u''(x)$ to give this solution.
phi_fcn = lambda x: 20.0 * pi * x**3 # phi(x)
phip_fcn = lambda x: 60.0 * pi * x**2 # phi'(x)
phipp_fcn = lambda x: 120.0 * pi * x # phi''(x)
f_fcn = lambda x: -20.0 + 0.5 * (phipp_fcn(x) * cos(phi_fcn(x)) \
                                 - (phip_fcn(x))**2 * sin(phi_fcn(x)))
utrue_fcn = lambda x: 1.0 + 12.0 * x - 10.0 * x**2 + 0.5 * sin(phi_fcn(x))
alpha = utrue_fcn(0.)
beta = utrue_fcn(1.)
print('Boundary values: alpha = %g, beta = %g' % (alpha,beta))
For this test problem the local truncation error does not vanish the way it did in previous examples. So the true solution of the linear system is not equal to the true solution of the ODE evaluated at the grid points. In order to observe how the error (in the linear system) behaves as we iterate, we thus need to compute the true solution of the linear system.
We do this using code adapted from the notebook BVP1.html, but simplified to just take the right-hand side as input.
def solve_tridiag_GE(rhs):
    """
    Solve the tridiagonal system A*uint = rhs
    """
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    m = len(rhs)
    h = 1./float(m+1)    # h = delta x

    # set up m by m matrix A:
    # note that sparse.diags is analogous to the matlab spdiags function
    em = ones(m)
    em1 = ones(m-1)
    A = sparse.diags([em1, -2*em, em1], [-1, 0, 1],
                     shape=(m,m), format='csc')
    A = A / h**2

    # solve system for m interior points:
    uint = spsolve(A, rhs)
    return uint
m = 255
h = 1./float(m+1)
x_GE = linspace(0,1,m+2)
xint = x_GE[1:-1]
rhs = f_fcn(xint)
rhs[0] = rhs[0] - alpha / h**2
rhs[-1] = rhs[-1] - beta / h**2
uint_GE = solve_tridiag_GE(rhs)
u_GE = hstack([alpha, uint_GE, beta])
utrue = utrue_fcn(x_GE)
error = u_GE - utrue
figure(figsize=(12,6))
xfine = linspace(0,1,1001)
ufine = utrue_fcn(xfine)
plot(xfine, ufine, 'r')
plot(x_GE,u_GE,'bo')
title('Approximate solution with m = %i points, max error = %g' \
      % (m, abs(error).max()))
grid(True);
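As a quick check of the claim above that the local truncation error does not vanish for this choice of $u(x)$, we can apply the centered difference operator directly to the true solution at the grid points (a small sketch, not part of the original notebook):

# local truncation error tau_i = (u(x_{i-1}) - 2u(x_i) + u(x_{i+1}))/h^2 - f(x_i),
# evaluated with the true solution; it is O(h^2) but not zero, so u_GE differs
# from utrue at the grid points.
tau = (utrue[:-2] - 2*utrue[1:-1] + utrue[2:]) / h**2 - f_fcn(xint)
print('max |tau| = %g' % abs(tau).max())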
This iterate_bvp function takes iterations with the iterative method defined by the argument update_u, which should be a function that takes a single step of some iterative method. This allows us to easily test different methods by passing in a different update_u function, without repeating all the other set-up, as in the notebook IterativeMethods.html. The arguments are changed a bit from solve_bvp in that notebook: we now pass in a vector U0 of initial conditions so we can illustrate the behavior with different choices. Note also that this version has been modified to compute the error in the iterate U relative to u_GE as computed with Gauss Elimination.

As in IterativeMethods.html, this is set up to solve the system in which the tridiagonal matrix has been augmented by a row of the identity matrix at the top and bottom, with right-hand side values given by the Dirichlet boundary conditions, i.e., the first and last equations in the system are U[0] = alpha and U[m+1] = beta. This is convenient for plotting purposes, and also if you want to try modifying the problem to use Neumann BCs at one of the boundaries, for example.
def iterate_bvp(f_fcn, U0, update_u, maxiter, kplot):
    m = len(U0) - 2
    h = 1./(m+1)
    x = linspace(0,1,m+2)
    alpha = U0[0]
    beta = U0[-1]

    # Compute discrete solution by solving with Gauss Elimination:
    f = f_fcn(x)
    rhs = f[1:-1]   # restrict to interior points
    rhs[0] = rhs[0] - alpha / h**2
    rhs[-1] = rhs[-1] - beta / h**2
    uint_GE = solve_tridiag_GE(rhs)
    u_GE = hstack([alpha, uint_GE, beta])

    # Set up for iterative method:
    F = f_fcn(x)
    U = U0.copy()   # current iterate
    max_error = abs(U-u_GE).max()
    errors = [max_error]

    figs = []   # for the list of figures we generate

    for k in range(1,maxiter+1):
        U = update_u(U, F, h)   # take one iteration
        max_error = abs(U-u_GE).max()
        errors.append(max_error)

        if mod(k,kplot)==0 or k==maxiter:
            # every kplot iterations create a plot:
            fig = figure(figsize=(12,10))
            subplot(2,1,1)
            plot(x,U0,'r-o', label='initial guess')
            plot(x,u_GE,'k-', label='true solution')
            plot(x,U,'bo-', label='iteration k = %i' % k)
            legend()
            grid(True)
            xlim(0,1)
            ylim(0,6)
            title('After %i iterations, max error = %.2e' \
                  % (k, max_error))

            subplot(2,1,2)
            plot(x,U-u_GE,'bo-', label='error at iteration k = %i' % k)
            legend()
            grid(True)
            xlim(0,1)
            ylim(-4,1)
            figs.append(fig)
            close(fig)

    errors = array(errors)   # convert list to numpy array
    return errors, figs
This function does one update step for Jacobi. Note that the way we have defined the system to include the boundary values, Jacobi does not change the first or last element of U.
def update_u_Jacobi(U, F, h):
    """
    Input: Current iterate U^[k-1]
    Output: Next iterate U^[k]
    """
    m = len(U) - 2
    Uprev = U.copy()   # save current iterate so it's not overwritten
    for i in range(1,m+1):
        U[i] = 0.5*(Uprev[i-1] + Uprev[i+1] - h**2 * F[i])
    return U
First plot every iteration, starting with a linear initial guess:
m = 255
h = 1./(m+1)
U0 = linspace(alpha, beta, m+2)
errors_Jacobi, figs = iterate_bvp(f_fcn, U0, update_u=update_u_Jacobi,
                                  maxiter=30, kplot=1)
animate_figs(figs)
The bottom plot now shows the error. Note that the error starts out oscillatory but becomes smoother, with the more highly oscillatory part smoothing out most quickly.
Here it is over more iterations:
errors_Jacobi, figs = iterate_bvp(f_fcn, U0, update_u=update_u_Jacobi,
                                  maxiter=1500, kplot=50)
animate_figs(figs)
The high-frequency part of error is smoothed more quickly.
Recall that if our iterative method is $U^{[k+1]} = GU^{[k]} + c$ as in (4.10), then the error satisfies $e^{[k+1]} = Ge^{[k]}$ and so expanding the error in eigenvectors of $G$ we can relate the decay of different eigencomponents to the eigenvalues of $G$, and recall that for this system and Jacobi, the $p$th eigenvector has $j$th component $\sin(p\pi jh) = \sin(p\pi x_j)$.
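To see this decomposition concretely, here is a small sketch (not part of the original notebook) of a function that expands a grid function, such as the error, in these sine eigenvectors; the helper name sine_coefficients is made up for illustration:

def sine_coefficients(e_int, m):
    """
    Coefficients c_p of a grid function (interior values e_int) in the basis of
    eigenvectors v_p with components sin(p*pi*j*h), using the discrete
    orthogonality sum_j sin(p*pi*j*h)*sin(q*pi*j*h) = (m+1)/2 when p = q.
    """
    h = 1./(m+1)
    jh = arange(1,m+1) * h
    c = zeros(m)
    for p in range(1,m+1):
        c[p-1] = dot(e_int, sin(p*pi*jh)) / ((m+1)/2.)
    return c

Plotting abs(sine_coefficients(U - u_GE, m)) after a few Jacobi iterations would show the coefficients with $p$ near $m/2$ shrinking fastest, while those near $p=1$ and $p=m$ persist, consistent with the eigenvalues discussed below.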
Once the error is smooth, we can represent it well on a coarser grid, with fewer points. To take advantage of this we need an equation satisfied by the error itself, which comes from the residual:
If we are trying to solve $Au^* = f$ for exact solution $u^*$, and the residual for some approximate solution $u$ is defined as $r = f - Au$, then we can compute the residual $r^{[k]} = f - AU^{[k]}$ for the iterate in any step. By subtracting $Au^* = f$ from $AU^{[k]} = f - r^{[k]}$ we also see that $Ae^{[k]} = -r^{[k]}$, which is (4.91) in the book with slightly different notation.
So from the residual $r^{[k]}$ we can approximate the error $e^{[k]}$ by solving a linear system that involves $A$.
Using this, we can expand the algorithm above: iterate a few times on the original grid to smooth the error, compute the residual, transfer it to a coarser grid, approximately solve the error equation there, and then interpolate the resulting estimate of the error back to the fine grid to correct the current iterate.
Note that solving this error equation on the coarser grid has two big advantages: the system is smaller, so each iteration (or a direct solve) is much cheaper, and error components that looked smooth on the fine grid appear more oscillatory relative to the coarse grid, so the smoother damps them more effectively there.
Another key point is that to approximate the error on a coarser grid, we don't need to solve the system exactly on that grid -- after iterating a few times we will have smoothed out the error in this system and can transfer to an even coarser grid, with fewer grid points and fewer points per wavelength in some components of the remaining error.
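To make the coarse-grid correction concrete, here is a minimal sketch of a single fine-to-coarse-to-fine correction step, assuming (as in this notebook) that m is odd so that every other grid point defines the coarse grid. The helper names residual and two_grid_correction, and the choices of simple injection for restriction and linear interpolation back to the fine grid, are for illustration only, not taken from the text:

def residual(U, F, h):
    """
    Residual r = F - A*U at the interior points, where A*U is the centered
    second difference of U divided by h**2; boundary entries are left as zero.
    """
    m = len(U) - 2
    r = zeros(m+2)
    for i in range(1,m+1):
        r[i] = F[i] - (U[i-1] - 2.*U[i] + U[i+1]) / h**2
    return r

def two_grid_correction(U, F, h):
    """
    One coarse-grid correction: restrict the residual to every other point,
    solve the error equation exactly on the coarse grid, interpolate the
    error back to the fine grid, and correct U.
    """
    m = len(U) - 2
    r = residual(U, F, h)
    rc = r[2:-1:2]                   # restrict by injection: every other interior point
    ec_int = solve_tridiag_GE(rc)    # coarse-grid solve; here e approximates u* - U
    ec = hstack([0., ec_int, 0.])    # the error is zero at the boundaries
    e = zeros(m+2)
    e[::2] = ec                      # coarse values at coincident fine points
    e[1:-1:2] = 0.5*(ec[:-1] + ec[1:])   # linear interpolation in between
    return U + e                     # corrected fine-grid approximation

Note that here the error is defined with the opposite sign of (4.91): solving $A_c e = r$ gives $e \approx u^* - U$, so the correction is U + e. A few sweeps of update_u_Jacobi followed by two_grid_correction is typically far more effective at reducing the smooth part of the error than the same number of Jacobi sweeps alone.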
It seems like Jacobi has done a good job of smoothing all the high-frequency components, but in fact the example above is a bit misleading, because the error happens to be deficient in the highest frequencies possible on this grid (e.g. the sawtooth mode is the highest, corresponding to the eigenvector of $G$ with components $\sin(m\pi jh)$).
Add a component of this eigenvector to our initial guess U0:
m = 255
x = linspace(0, 1, m+2)
U0 = linspace(alpha, beta, m+2) # initial guess
U0 = U0 + 0.5*sin(m*pi*x)
figure(figsize=(12,4))
plot(x,U0,'r-')
grid(True);
errors_Jacobi, figs = iterate_bvp(f_fcn, U0, update_u=update_u_Jacobi,
                                  maxiter=30, kplot=1)
animate_figs(figs)
The eigenvectors of the iteration matrix $G$ for Jacobi are the same as those of the original tridiagonal matrix $A$ and are $u_j^p = \sin(\pi p x_j)$, while the corresponding eigenvalues are $\gamma_p = \cos(p\pi h)$, for $p=1,~2,~\ldots, ~m$.
As discussed in the text (p. 105), the most highly oscillatory mode corresponds to $p=m$ and note that $\gamma_m = -\gamma_1 \approx -1 + \frac 1 2 \pi^2 h^2$, so this mode decays just as slowly as the lowest mode $p=1$.
The modes that decay the most rapidly are for $p\approx m/2$. For any mode $p$ with $m/4 \leq p \leq 3m/4$ we have $|\gamma_p| \leq 1/\sqrt{2} \approx 0.7$ and $|\gamma_p|^{20} < 10^{-3}$.
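As a quick numerical check of these claims (not part of the original notebook):

# check the Jacobi eigenvalues for the middle range of wave numbers:
h = 1./(m+1)
p = arange(1,m+1)
gammap = cos(p*pi*h)        # eigenvalues gamma_p of G for Jacobi
mid_range = (p >= m/4.) & (p <= 3.*m/4.)
print('max |gamma_p| for m/4 <= p <= 3m/4: %.4f' % abs(gammap[mid_range]).max())
print('that value to the 20th power: %.2e' % abs(gammap[mid_range]).max()**20)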
For multigrid we would prefer that all of the high wave numbers decay rapidly, which suggests using underrelaxed Jacobi,

$$
U^{[k+1]} = (1-\omega)U^{[k]} + \omega\left(GU^{[k]} + c\right).
$$

Then the iteration matrix is $(1-\omega)I + \omega G$ and has eigenvalues $\gamma_p = (1-\omega) + \omega \cos(p\pi h)$.

If we choose $\omega$ to minimize $\max_{m/2 \leq p \leq m} |\gamma_p|$, we find that $\omega = 2/3$ is best, and then $|\gamma_p| \leq 1/3$ for all $p \geq m/2$.
p = linspace(1,m,m)
for omega in [0.5, 2/3., 0.75, 1.]:
    gammap = (1-omega) + omega*cos(p*pi/(m+1))
    plot(p,abs(gammap),label='omega = %.4f' % omega)

plot(p, 1/3. * ones(p.shape), 'k--')
legend()
title('abs(gamma_p) vs. p')
grid();
Here is an update function for underrelaxed Jacobi:
def update_u_underJacobi(U, F, h):
    """
    Input: Current iterate U^[k-1]
    Output: Next iterate U^[k]
    """
    m = len(U) - 2
    Uprev = U.copy()   # save current iterate so it's not overwritten
    UJac = U.copy()    # for Jacobi value, all overwritten except boundary values
    for i in range(1,m+1):
        UJac[i] = 0.5*(Uprev[i-1] + Uprev[i+1] - h**2 * F[i])
    omega = 2./3.
    U = (1-omega)*Uprev + omega*UJac
    return U
Testing this out, we see that the sawtooth mode in the error is now damped out within the first few iterations:
m = 255
h = 1./(m+1)
errors_underJacobi, figs = iterate_bvp(f_fcn, U0, update_u=update_u_underJacobi,
                                       maxiter=30, kplot=1)
animate_figs(figs)
Note that in only 4 iterations, for example, all components of the error for $p > m/2$ have been reduced by a factor of at least $(1/3)^4 \approx 0.012$. At this point one might sample the residual at every other point and switch to solving $Ae = r$ on that coarser grid. Again only a few iterations are needed to damp all the higher frequencies, which now means all $p > m/4$ relative to the original system size $m$. Since convergence then starts to slow down on this grid, repeat by subsampling every other point again to get a new system of size roughly $m/4$. Continue this process until the system is small enough to solve by Gauss Elimination, and then work your way back up, correcting each solution as you go.
This is a very brief description of the V-cycle approach that is shown on page 109.
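To show how the pieces above fit together, here is a compact recursive sketch of a V-cycle for this 1D problem. It is not the code from the text: it assumes m is chosen (as here) so that repeatedly dropping every other point gives valid coarser grids, it reuses the residual helper and the injection/interpolation choices from the two-grid sketch above, and it uses underrelaxed Jacobi as the smoother.

def vcycle(U, F, h, nu=3, m_coarsest=3):
    """
    One V-cycle for A*U = F on a grid with m = len(U)-2 interior points,
    with nu smoothing sweeps before and after the coarse-grid correction.
    """
    m = len(U) - 2
    if m <= m_coarsest:
        # coarsest grid: solve the small tridiagonal system directly
        rhs = F[1:-1].copy()
        rhs[0] = rhs[0] - U[0] / h**2
        rhs[-1] = rhs[-1] - U[-1] / h**2
        return hstack([U[0], solve_tridiag_GE(rhs), U[-1]])

    for k in range(nu):
        U = update_u_underJacobi(U, F, h)    # pre-smoothing
    r = residual(U, F, h)
    rc = r[2:-1:2]                           # restrict residual to the coarse grid
    Ec0 = zeros((m-1)//2 + 2)                # coarse-grid error, zero boundary values
    Ec = vcycle(Ec0, hstack([0., rc, 0.]), 2*h, nu, m_coarsest)   # recurse on A_c*e = r
    e = zeros(m+2)
    e[::2] = Ec                              # inject coarse error at coincident points
    e[1:-1:2] = 0.5*(Ec[:-1] + Ec[1:])       # linear interpolation in between
    U = U + e                                # coarse-grid correction
    for k in range(nu):
        U = update_u_underJacobi(U, F, h)    # post-smoothing
    return U

A hypothetical usage, starting from the same oscillatory initial guess U0 as above:

x = linspace(0,1,m+2)
F = f_fcn(x)
U = U0.copy()
for k in range(5):
    U = vcycle(U, F, 1./(m+1))
    print('V-cycle %i: max error = %.2e' % (k+1, abs(U - u_GE).max()))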
For more information, see for example:
W. L. Briggs, V. Emden Henson, and S. F. McCormick, A Multigrid Tutorial, Second Edition, SIAM, 2000. ebook.