AMath 585, Winter Quarter 2020 at the University of Washington. Developed by R.J. LeVeque and distributed under the BSD license. You are free to modify and use as you please, with attribution.
These notebooks are all available on GitHub.
Solve the nonlinear BVP $$ \epsilon u''(x) + u(x)(u'(x) - 1) = f(x) $$ with Dirichlet boundary conditions.
In this notebook we illustrate with a large value of $\epsilon$ and a "manufactured solution" to test that the code is working. Later we will look at the singular perturbation version with $\epsilon$ very small.
%matplotlib inline
from pylab import *
There are a few debugging statements left in the code below from when I was getting this working, and an illustration at the end of the notebook of how print statements can be used to aid in debugging a complex routine like this.
One of the hardest things about getting this code right is the array indexing, and making sure that you understand how Numpy array slicing works. See the notebook Numpy_array_hints.html for some tips.
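For reference, the code below discretizes the BVP with centered differences. At each interior grid point $x_j$ this gives the nonlinear algebraic equation $$ G_j(U) = \epsilon \frac{U_{j-1} - 2U_j + U_{j+1}}{h^2} + U_j\left(\frac{U_{j+1} - U_{j-1}}{2h} - 1\right) - f(x_j) = 0, $$ and the boundary conditions contribute $G_0(U) = U_0 - \alpha$ and $G_{m+1}(U) = U_{m+1} - \beta$. Each Newton step then solves $$ J(U^{[k]})\,\delta^{[k]} = G(U^{[k]}), \qquad U^{[k+1]} = U^{[k]} - \delta^{[k]}, $$ where $J$ is the Jacobian matrix of $G$.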
def solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func, max_iter=10,
plot_iterates=True, debug=False):
"""
Solve the 2-point BVP with Dirichlet BCs
Input:
epsilon > 0 coefficient of u''
f is a function defining the right hand side,
ainfo = (ax, alpha) defines the Dirichlet boundary condition u(ax) = alpha,
binfo = (bx, beta) defines the Dirichlet boundary condition u(bx) = beta,
m is the number of (equally spaced) interior grid points to use.
        u0_func = function to evaluate for the initial guess
max_iter = maximum number of iterations of Newton
plot_iterates: if set to True, plot the approximate solution each iteration
debug: if set to True, print some things out including the matrix at each
iteration, so generally use this only for small m.
Returns:
x = array of grid points (including boundaries, so of length m+2)
u = array of approximate solution at these points.
"""
from scipy import sparse
from scipy.sparse.linalg import spsolve
ax, alpha = ainfo
bx, beta = binfo
h = (bx-ax)/float(m+1) # h = delta x
x = linspace(ax,bx,m+2) # note x[0]=ax, x[m+1]=bx
if debug:
print('+++ h = %g, m+2 = %i' % (h,m+2))
print('+++ x = ',x)
# convergence tolerances:
tol_delta = 1e-12
tol_Gk = 1e-12
# set up m by m matrix A for the u''(x) term,
# which is always needed as part of the Jacobian
A_diag = ones(m+2)
A_offdiag = ones(m+1)
A = sparse.diags([A_offdiag, -2*A_diag, A_offdiag], [-1, 0, 1],
shape=(m+2,m+2), format='csc')
A = epsilon * A / h**2
# modify first and last row for Dirichlet BCs:
A[0,0] = 1.
A[0,1] = 0.
A[m+1,m] = 0.
A[m+1,m+1] = 1.
# initial guess for Newton iteration:
Uk = u0_func(x) # of length m+2
if debug: print('+++ Initial Uk = ', Uk)
if plot_iterates:
# make a plot showing how the solution evolves:
fig = figure(figsize=(8,6))
        ax = axes()  # note: this rebinds the name ax; the boundary point ax is not needed below
grid(True)
title('Approximate solution while iterating')
# Newton iteration:
for k in range(max_iter):
if plot_iterates:
plot(x,Uk,label='k = %i' % k)
U = Uk.copy() # use in slicing below so Uk not changed
        # Jacobian matrix will be A from above plus nonlinear part N:
N_subdiag = -U[1:m+2]
N_subdiag[m] = 0.
N_diag = zeros(m+2)
N_diag[1:m+1] = U[2:m+2] - U[0:m] - 2*h
N_superdiag = U[0:m+1]
N_superdiag[0] = 0.
N = sparse.diags([N_subdiag, N_diag, N_superdiag], [-1, 0, 1],
shape=(m+2,m+2), format='csc')
N = N / (2*h)
Jk = A + N
if debug: print('+++ after forming Jk, Uk = ', Uk)
if debug: print('+++ Jk = \n', Jk.toarray())
# Use Uk below, since U got changed above.
Gk = zeros(m+2)
if debug: print('+++ Uk[0] = %g, alpha = %g' % (Uk[0], alpha))
Gk[0] = Uk[0] - alpha
Gk[m+1] = Uk[m+1] - beta
Gk[1:m+1] = epsilon/h**2 * (Uk[0:m] - 2*Uk[1:m+1] + Uk[2:m+2]) \
+ Uk[1:m+1] * ((Uk[2:m+2] - Uk[0:m])/(2*h) -1.) \
- f(x[1:m+1])
# solve linear system:
if debug: print('+++ Uk = ',Uk)
if debug: print('+++ Gk = ',Gk)
delta = spsolve(Jk, Gk)
Uk = Uk - delta
if debug: print('+++ delta = ',delta)
norm_delta = norm(delta, inf)
norm_Gk = norm(Gk, inf)
print('Iteration k = %i: norm(Gk) = %.2e, norm(delta) = %.2e' \
% (k, norm_Gk, norm_delta))
if (norm_delta < tol_delta) or (norm_Gk < tol_Gk):
print('Declared convergence after %i iterations' % k)
break
if k==(max_iter-1):
print('Reached max_iter, possible nonconvergence')
if plot_iterates:
legend()
return x,Uk
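As a quick aside, the sparse tridiagonal setup above can be checked in isolation. Here is a minimal sketch mirroring the lines in the solver, with illustrative values m = 3, h = 1, epsilon = 1 chosen only so the entries are easy to read:
from scipy import sparse
m = 3; h = 1.; epsilon = 1.   # illustrative values only
A = sparse.diags([ones(m+1), -2*ones(m+2), ones(m+1)], [-1, 0, 1],
                 shape=(m+2,m+2), format='csc')
A = epsilon * A / h**2
# first and last rows become identity rows for the Dirichlet BCs:
A[0,0] = 1.;  A[0,1] = 0.
A[m+1,m] = 0.;  A[m+1,m+1] = 1.
print(A.toarray())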
We choose our desired solution $u(x)$ and then set $f(x)$ and $\alpha, \beta$ accordingly.
Since the truncation error depends only on $u'''(x)$, $u''''(x)$, and higher-order derivatives, first try $u(x) = 3 + 4x^2$, a quadratic, so the truncation error (and hence the global error) should be 0, i.e. we expect the solution of the nonlinear system to be equal to the true solution evaluated at each grid point. This tests that the Newton iteration is working.
Note that we chose a function whose boundary values are not both zero, and a value $\epsilon \neq 1$, to help catch bugs in how these are specified.
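If you'd rather not differentiate by hand when manufacturing solutions, f(x) can be generated symbolically. A minimal sketch using sympy (assuming it is installed; sympy is not used anywhere else in this notebook):
import sympy
xs = sympy.symbols('x')
us = 3 + 4*xs**2                                           # the manufactured solution
fs = sympy.expand(2*us.diff(xs,2) + us*(us.diff(xs) - 1))  # epsilon = 2
print(fs)   # should agree with the lambda f defined below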
utrue = lambda x: 3 + 4*x**2 # values below set based on this desired solution
epsilon = 2.
f = lambda x: 8*epsilon + (3 + 4*x**2)*(8*x-1.)
ax = 0.; alpha = 3.; ainfo = (ax, alpha)
bx = 1.; beta = 7.; binfo = (bx, beta)
xfine = linspace(ax, bx, 1001)
ufine = utrue(xfine)
plot(xfine, ufine, 'b')
m = 49
u0_func = lambda x: 3 + 4*x**2
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)
# add true solution to plot
plot(x, utrue(x), 'k+')
If we don't know the true solution, one possible initial guess is simply the linear function connecting the two boundary values. We know this will be close to correct very near the boundaries, at least.
m = 49
u0_func = lambda x: 3 + 4*x
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)
# add true solution to plot
plot(x, utrue(x), 'k+')
Note the quadratic convergence of Newton's method, as expected.
If you purposely introduce an error in specifying the Jacobian matrix, you will see this rate deteriorate. For example, if you change the line in the code from
N_diag[1:m+1] = U[2:m+2] - U[0:m] - 2*h
to
N_diag[1:m+1] = U[2:m+2] - U[0:m]
(forgetting the $-1$ in the $u(x)(u'(x)-1)$ term of the ODE, as I initially did), and rerun the cell above, you will see only linear convergence.
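One way to quantify the convergence rate rather than eyeballing the printout: quadratic convergence means $\|\delta^{[k]}\| \approx C \|\delta^{[k-1]}\|^2$, so the ratios computed below should settle down to roughly a constant, while for merely linear convergence they grow as the deltas shrink. A small sketch; pass in the norm(delta) values printed by the iteration above:
def check_quadratic(deltas):
    # deltas: successive values of norm(delta) copied from the printout
    for k in range(1, len(deltas)):
        print('k = %i: norm(delta_k) / norm(delta_km1)**2 = %.2e' \
              % (k, deltas[k] / deltas[k-1]**2))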
Also let's try starting with an initial guess that is farther from correct, and in particular that does not even satisfy the boundary conditions. Note from the plots below that in one iteration Newton has corrected these, since the two equations specifying the BCs are decoupled from the others and are both linear equations.
(Note: If you try an initial condition a lot farther away from the true solution, you might see non-convergence.)
m = 49
u0_func = lambda x: cos(3*pi*x)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)
# add true solution to plot
plot(x, utrue(x), 'k+')
We chose the manufactured solution so that the global error should be zero, no matter how coarse our grid is. Check that it is:
error = norm(u-utrue(x), inf)
print('Max-norm of error is %g' % error)
Now that we think Newton's method is converging properly, let's try a problem where the true solution is less smooth and so we expect a non-zero global error, but hopefully second-order accurate as we refine the grid.
utrue = lambda x: sin(10*x+2) # values below set based on this desired solution
epsilon = 2.
f = lambda x: -100*epsilon*sin(10*x+2) + sin(10*x+2) * (10*cos(10*x+2) - 1)
ax = 0.; alpha = utrue(ax); ainfo = (ax, alpha)
bx = 2.; beta = utrue(bx); binfo = (bx, beta)
xfine = linspace(ax, bx, 1001)
ufine = utrue(xfine)
plot(xfine, ufine, 'b')
m = 49
u0_func = lambda x: alpha + x * (beta-alpha) / (bx-ax)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)
# add true solution to plot
plot(x, utrue(x), 'k+')
error = norm(u-utrue(x), inf)
print('Max-norm of error is %g' % error)
# values of m+1:
mp1_vals = array([50, 100, 200, 400, 1000, 10000, 100000, 1000000])
h_vals = (bx - ax) / mp1_vals # corresponding h values
errors = []
for jtest in range(len(mp1_vals)):
m = mp1_vals[jtest] - 1
print('Solving with m = %i' % m)
h = h_vals[jtest]
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m,
u0_func, plot_iterates=False)
x_true = linspace(ax, bx, m+2)
u_true = utrue(x_true)
error_max = abs(u - u_true).max()
errors.append(error_max)
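We can also estimate the observed order of accuracy directly from successive grids: if $E \approx C h^p$, then $p \approx \log(E_1/E_2) / \log(h_1/h_2)$ for each pair of grids. A quick check using the arrays just computed:
errors = array(errors)
# estimated order p from E ~ C*h**p on each pair of successive grids:
p_vals = log(errors[:-1] / errors[1:]) / log(h_vals[:-1] / h_vals[1:])
print('Estimated orders of accuracy: ', p_vals)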
Note that with a million grid points rounding error may be starting to affect the convergence, but even on this grid we continue to see the expected second-order behavior in the plot:
loglog(h_vals, errors, 'bx-', label='Observed errors')
grid(True)
xlabel('h = Delta x')
ylabel('max norm of error')
eref = h_vals**2
loglog(h_vals, eref, 'r-', label='Reference line of slope 2')
legend(loc='lower right')
To check if the matrix is being built properly, you might want to print things out for a small value of m and with special values of the input. For example, the cell below checks the nonlinear part of the Jacobian (since epsilon = 0) and the interval is chosen so that h = 1. Note that we also just take one iteration (max_iter = 1).
epsilon = 0.
f = lambda x: zeros(x.shape)
ax = 0.; alpha = 0.; ainfo = (ax, alpha)
bx = 5.; beta = 0.; binfo = (bx, beta)
m = 4
u0_func = lambda x: x
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func,
max_iter=1, plot_iterates=False, debug=True)
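For this special input the printed Jacobian can be checked by hand: with $\epsilon = 0$, $h = 1$, and initial guess $U_j = x_j$, interior row $j$ of Jk should contain $-U_j/(2h)$ on the subdiagonal, $(U_{j+1} - U_{j-1} - 2h)/(2h) = 0$ on the diagonal, and $U_j/(2h)$ on the superdiagonal, while the first and last rows are identity rows from the BCs. A minimal sketch that rebuilds this expected matrix for comparison with the debug output:
m = 4; h = 1.
U = linspace(0., 5., m+2)          # the initial guess u0_func(x) = x on this grid
Jk_check = zeros((m+2,m+2))
Jk_check[0,0] = 1.                 # BC row: G_0 = U_0 - alpha
Jk_check[m+1,m+1] = 1.             # BC row: G_{m+1} = U_{m+1} - beta
for j in range(1, m+1):
    Jk_check[j,j-1] = -U[j] / (2*h)                    # dG_j/dU_{j-1} (epsilon = 0)
    Jk_check[j,j]   = (U[j+1] - U[j-1] - 2*h) / (2*h)  # dG_j/dU_j
    Jk_check[j,j+1] = U[j] / (2*h)                     # dG_j/dU_{j+1}
print(Jk_check)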