Conjugate-Gradient Method in 2D with variable coefficients

AMath 585, Winter Quarter 2020 at the University of Washington. Developed by R.J. LeVeque and distributed under the BSD license. You are free to modify and use as you please, with attribution.

These notebooks are all available on GitHub.


This notebook illustrates an implementation of the Conjugate-Gradient method on a two-dimensional Poisson problem with variable coefficients.

We solve the elliptic equation $(\kappa u_x)_x + (\kappa u_y)_y = f(x,y)$ where $\kappa(x,y)$ is a specified function satisfying $\kappa(x,y) > 0$ at all points in the domain.

We use Dirichlet boundary conditions and the method of manufactured solutions to choose a problem where the truncation error is zero so that the exact solution of the linear system is the solution of the PDE evaluated at the grid points.

Note that this notebook is set up to solve one particular problem on an mx by my grid, with the problem data hardwired. If you want to change kappa, f (and the true solution), the grid resolution, or the domain, make sure to re-execute all cells below the ones where these are defined.

In [1]:
%matplotlib inline
In [2]:
from pylab import *

Animating the iteration

To use widgets, set use_widgets = True. For javascript animation, set it to False.

In [3]:
use_widgets = False

if use_widgets:
    from ipywidgets import interact
    import ipywidgets as widgets

    def animate_figs(figs):
        show_frame = lambda frameno: display(figs[frameno])
        interact(show_frame, frameno=widgets.IntSlider(min=0,max=len(figs)-1, value=0))
else:
    from jsanimate_figs import animate_figs
In [4]:
def inner_product(V,W):
    """
    Given two grid functions V and W on an mx by my grid,
    return their inner product when viewed as vectors of unknowns.
    """
    return sum(V*W)
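
Note that this is equivalent to flattening each grid function into a vector and taking the ordinary dot product, regardless of how the unknowns are ordered. A quick illustrative check (not part of the original notebook):

In [ ]:
# sanity check (illustrative): the grid inner product agrees with the
# dot product of the flattened vectors
Vtest = rand(3,2)
Wtest = rand(3,2)
print(allclose(inner_product(Vtest,Wtest), dot(Vtest.flatten(), Wtest.flatten())))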

Define the domain, grid, and true solution

In [5]:
ax = 0.
bx = 2.
mx = 19
dx = (bx-ax)/(mx+1.)
x_full = linspace(ax, bx, mx+2)   # includes boundary points

ay = 0.
by = 1.
my = 9
dy = (by-ay)/(my+1.)
y_full = linspace(ay, by, my+2)   # includes boundary points

X_full, Y_full = meshgrid(x_full, y_full, indexing='ij')
X = X_full[1:-1, 1:-1]   # interior points only
Y = Y_full[1:-1, 1:-1]

Define kappa and the true solution

Recall that we need $\kappa(x,y) > 0$ everywhere in order for the PDE to be elliptic and well-posed. Here we take the simple function $\kappa(x,y) = 1 + x + y$. We also use the same exact solution $u(x,y) = x^2 + y^2 - 3$ as in ConjugateGradient2D.html. You can check that the truncation error is again identically zero for this variable-coefficient problem, so we expect the exact solution of the linear system to agree with the exact solution of the PDE at the grid points.

We now calculate $(\kappa u_x)_x + (\kappa u_y)_y$: since $u_x = 2x$ and $u_y = 2y$, the product rule gives $(\kappa u_x)_x + (\kappa u_y)_y = (2 + 4x + 2y) + (2 + 2x + 4y) = 6(x+y) + 4$, so this is the right-hand side function $f(x,y)$ we should use in order to have the desired solution.
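
This computation is easy to verify symbolically, e.g. with sympy; here is a minimal sketch (sympy is not used elsewhere in this notebook):

In [ ]:
# symbolic check of the right-hand side (illustrative sketch):
from sympy import symbols, diff, expand
x, y = symbols('x y')
kap = 1 + x + y
u = x**2 + y**2 - 3
f = diff(kap*diff(u,x), x) + diff(kap*diff(u,y), y)
print(expand(f))   # should print 6*x + 6*y + 4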

In [6]:
kappa = lambda X,Y: 1 + X + Y
utrue_fcn = lambda X,Y: X**2 + Y**2 - 3.
f_fcn = lambda X,Y: 6*(X + Y) + 4

Utrue_full = utrue_fcn(X_full, Y_full)
Utrue = Utrue_full[1:-1, 1:-1]

A function to plot the approximate and true solution

As in ConjugateGradient2D.html.

In [7]:
def make_plot(U,k,rnorm,enorm):
    levels = linspace(-4,4,17)
    cmap = get_cmap('bwr')  # blue-white-red
    vmin = -4; vmax = 4 # symmetric so white is in center
    fig = figure(figsize=(10,6))
    subplot(2,1,1)
    U_full = Utrue_full.copy()
    U_full[1:-1, 1:-1] = U
    contourf(X_full,Y_full,U_full,levels,
             cmap=cmap,vmin=vmin,vmax=vmax)
    axis('scaled')
    colorbar(shrink=0.8)
    contour(X_full,Y_full,Utrue_full,levels,colors='k')
    title('iteration %3i:   2-norm(r) = %.2e,   max-norm(E) = %.2e\n' \
          % (k,rnorm,enorm) \
          + 'Approximate solution and contours of true solution')

    subplot(2,1,2)
    contourf(X_full,Y_full,Utrue_full,levels,
             cmap=cmap,vmin=vmin,vmax=vmax)
    axis('scaled')
    colorbar(shrink=0.8)
    contour(X_full,Y_full,Utrue_full,levels,colors='k')
    title('True solution')
    tight_layout()  # make room for titles on subplots
    close(fig)
    return fig

Define kappa function and evaluate at midpoints

The difference stencil is chosen so that the resulting matrix $A$ is symmetric, as discussed in section 2.15 of the text in 1D. This can be extended to 2D by using it separately for the $(\kappa u_x)_x$ and $(\kappa u_y)_y$ terms, and so the $(i,j)$ element of the grid function representing $Au$ is:

\begin{align*}
\frac{1}{\Delta x^2} &\left(\kappa_{i+1/2,j}(U_{i+1,j} - U_{i,j}) - \kappa_{i-1/2,j}(U_{i,j} - U_{i-1,j})\right) \\
{}+ \frac{1}{\Delta y^2} &\left(\kappa_{i,j+1/2}(U_{i,j+1} - U_{i,j}) - \kappa_{i,j-1/2}(U_{i,j} - U_{i,j-1})\right)
\end{align*}

To clarify the implementation we define 4 different arrays below, with, for example, kappa_imh_j holding the values $\kappa_{i-1/2,j} = \kappa(x_i - \Delta x/2, y_j)$ and kappa_iph_j holding the values $\kappa_{i+1/2,j} = \kappa(x_i + \Delta x/2, y_j)$. These two arrays hold mostly the same values, just shifted, with values included near one $x$ boundary and not the other. So for a large problem, when trying to minimize storage and computation time, a single slightly larger array could hold all these values (see the sketch after the next cell). But here we opt for clarity.

In [8]:
X_imh_j = X - dx/2.
Y_imh_j = Y
kappa_imh_j = kappa(X_imh_j, Y_imh_j)

X_iph_j = X + dx/2.
Y_iph_j = Y
kappa_iph_j = kappa(X_iph_j, Y_iph_j)

X_i_jmh = X
Y_i_jmh = Y - dy/2.
kappa_i_jmh = kappa(X_i_jmh, Y_i_jmh)

X_i_jph = X
Y_i_jph = Y + dy/2.
kappa_i_jph = kappa(X_i_jph, Y_i_jph)
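
As noted above, a single slightly larger array could hold all the values in kappa_imh_j and kappa_iph_j. A minimal sketch of that storage-saving alternative (illustrative, not needed for the rest of the notebook; the same idea applies in $y$):

In [ ]:
# one array of kappa at all mx+1 x-midpoints; the two arrays above
# are then overlapping slices of it (illustrative sketch):
x_mid = linspace(ax+dx/2., bx-dx/2., mx+1)
X_mid, Y_mid = meshgrid(x_mid, y_full[1:-1], indexing='ij')
kappa_mid_x = kappa(X_mid, Y_mid)                 # shape (mx+1, my)
print(allclose(kappa_mid_x[:-1,:], kappa_imh_j))  # kappa_{i-1/2,j}
print(allclose(kappa_mid_x[1:,:],  kappa_iph_j))  # kappa_{i+1/2,j}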
In [9]:
def matvec(V):
    """
    Given a grid function V on an mx by my grid,
    return B = A*V, where A is the variable-coefficient 5-point operator.
    """

    # pad V with zeros around border, needed for computing centered differences:
    V_full = zeros((mx+2, my+2))
    V_full[1:-1, 1:-1] = V

    # set array B of shape (mx,my) at interior points (same shape as V).
    # Note that slicing with [1:-1] corresponds to interior points,
    # [:-2] corresponds to points to the left (or below), and
    # [2:] to points to the right (or above):

    B =   (kappa_iph_j*(V_full[2:, 1:-1] - V_full[1:-1, 1:-1]) \
         - kappa_imh_j*(V_full[1:-1, 1:-1] - V_full[:-2, 1:-1])) / dx**2 \
        + (kappa_i_jph*(V_full[1:-1, 2:] - V_full[1:-1, 1:-1]) \
         - kappa_i_jmh*(V_full[1:-1, 1:-1] - V_full[1:-1, :-2])) / dy**2

    return B
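
C-G requires the matrix $A$ to be symmetric (and definite), so it is worth confirming that this stencil really defines a symmetric operator, i.e. that the inner product of A*V with W equals that of V with A*W for any grid functions V and W. A quick randomized check (illustrative, not part of the original notebook):

In [ ]:
# randomized symmetry check: for symmetric A, <A*V, W> = <V, A*W>
Vtest = rand(mx, my)
Wtest = rand(mx, my)
print(allclose(inner_product(matvec(Vtest), Wtest),
               inner_product(Vtest, matvec(Wtest))))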

The Conjugate-Gradient algorithm

The next cell implements the C-G algorithm. It is almost identical to the corresponding cell in ConjugateGradient2D.html, except for the way the Dirichlet boundary conditions are incorporated into the right-hand side F: the coefficient that multiplies the boundary values in the new 5-point stencil for variable coefficients is no longer just 1/dx**2 or 1/dy**2, but now includes kappa values.

In [10]:
maxiter = 100
kplot = 10
verbose = False
tol = 1e-14  # stop if the residual falls below tol

print('Solving on %i by %i grid with %i unknowns' \
      % (mx,my,mx*my))
print('    using tol = %.2e with maxiter = %i' % (tol,maxiter))

# right-hand side:
F_full = f_fcn(X_full, Y_full)
F = F_full[1:-1, 1:-1]  # at interior points

# adjust for Dirichlet BCs:
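# at i=1 the stencil involves the known boundary value U(0,j) times
# kappa_{1/2,j}/dx**2, so that term moves to the right-hand side;
# similarly at i=mx and at the two y boundaries: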
F[0,:] = F[0,:] - kappa_imh_j[0,:]*Utrue_full[0, 1:-1] / dx**2
F[-1,:] = F[-1,:] - kappa_iph_j[-1,:]*Utrue_full[-1, 1:-1] / dx**2
F[:,0] = F[:,0] - kappa_i_jmh[:,0]*Utrue_full[1:-1, 0] / dy**2
F[:,-1] = F[:,-1] - kappa_i_jph[:,-1]*Utrue_full[1:-1, -1] / dy**2


# initial guess:
U0_full = 0*X_full
U0 = U0_full[1:-1, 1:-1]  # interior points

U = U0.copy() # current iterate
r = F - matvec(U)  # initial residual
p = r.copy()  # initial direction

k = 0
enorm = abs(U-Utrue).max()
errors = [enorm]
rTr_k = inner_product(r,r)
rnorm = sqrt(rTr_k)

if verbose:
    print('iteration %3i:   2-norm(r) = %.2e,   max-norm(E) = %.2e' \
          % (k,rnorm,enorm))
    
figs = []  # for the list of figures we generate

fig = make_plot(U,k,rnorm,enorm) # plot initial guess
figs.append(fig)

rTr_km = rTr_k  # in general will hold r^T * r at iteration k-1

for k in range(1,maxiter+1):
    w = matvec(p)   # the only matrix-vector multiply
    a = rTr_km / inner_product(p,w)  # alpha_{k-1} in CG algorithm
    U = U + a*p
    r = r - a*w

    enorm = abs(U-Utrue).max()
    errors.append(enorm)
    
    rTr_k = inner_product(r,r)
    rnorm = sqrt(rTr_k)

    if mod(k,kplot)==0 or k==maxiter:
        # every kplot iterations create a plot:
        fig = make_plot(U,k,rnorm,enorm)
        figs.append(fig)
        
    if verbose:
        print('iteration %3i:   2-norm(r) = %.2e,   max-norm(E) = %.2e' \
              % (k,rnorm,enorm))
    
    # check for convergence:
    if rnorm < tol:
        print('Satisfied rnorm < tol after %i iterations' % k)
        break
        
    # determine next search direction:
    b = rTr_k / rTr_km   # beta_{k-1} in CG algorithm
    rTr_km = rTr_k       # for next iteration
    p = r + b*p           # next search direction

print('Stopped after %i iterations with 2-norm(r) = %.2e' \
   % (k,rnorm))
Solving on 19 by 9 grid with 171 unknowns
    using tol = 1.00e-14 with maxiter = 100
Satisfied rnorm < tol after 88 iterations
Stopped after 88 iterations with 2-norm(r) = 8.37e-15
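
Since the truncation error is identically zero for this manufactured solution, the converged U should agree with Utrue to roughly machine precision. A quick check (assuming the cell above has just been run):

In [ ]:
# final max-norm error, expected to be near machine epsilon:
print('max-norm of final error: %.2e' % abs(U-Utrue).max())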
In [11]:
# plot the max-norm of the error at each iteration:
semilogy(range(1,len(errors)+1), errors, 'b-x')
grid(True)
xlabel('iteration')
ylabel('max-norm of error')
In [12]:
animate_figs(figs)
Out[12]:
In [ ]: