BVP1 - Example of solving a 2-point boundary value problem

AMath 585, Winter Quarter 2020 at the University of Washington. Developed by R.J. LeVeque and distributed under the BSD license. You are free to modify and use as you please, with attribution.

These notebooks are all available on GitHub.


In this notebook we set up and solve a simple two-point boundary value problem $u''(x) = f(x)$ on an interval $a \leq x \leq b$ with Dirichlet boundary conditions. The centered second-order accurate finite difference approximation is used, leading to a tridiagonal system of equations.
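With grid points $x_j = a + jh$ for $j = 0, 1, \ldots, m+1$, where $h = (b-a)/(m+1)$, the scheme at each interior point is $$ \frac{U_{j-1} - 2U_j + U_{j+1}}{h^2} = f(x_j), \qquad j = 1, \ldots, m, $$ with $U_0 = \alpha$ and $U_{m+1} = \beta$ given by the boundary conditions, so the $m$ interior values $U_1, \ldots, U_m$ are the unknowns of a tridiagonal linear system.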

The generation of a manufactured solution and a test of convergence are also included.

In [1]:
%matplotlib inline
In [2]:
from pylab import *
In [3]:
def solve_bvp1(f, ainfo, binfo, m):
    """
    Solve the 2-point BVP with Dirichlet BCs
    Input:
        f is a function defining the right hand side,
        ainfo = (ax, alpha) defines the Dirichlet boundary condition u(ax) = alpha,
        binfo = (bx, beta) defines the Dirichlet boundary condition u(bx) = beta,
        m is the number of (equally spaced) interior grid points to use.
    
    Returns:
        x = array of grid points (including boundaries, so of length m+2)
        u = array of approximate solution at these points.
    """
    from scipy import sparse
    from scipy.sparse.linalg import spsolve
    
    ax, alpha = ainfo
    bx, beta = binfo
    
    h = (bx-ax)/float(m+1)    # h = delta x
    x = linspace(ax,bx,m+2)   # note x[0]=ax, x[m+1]=bx
    
    # set up m by m matrix A:
    # note that sparse.diags is analogous to the matlab spdiags function
    em = ones(m)
    em1 = ones(m-1)
    A = sparse.diags([em1, -2*em, em1], [-1, 0, 1], shape=(m,m))
    A = A / h**2

    # right hand side, evaluated at all grid points:
    b = f(x)
    # modify the equations at the first and last interior points
    # to account for the known boundary values:
    b[1] = b[1] - alpha / h**2
    b[m] = b[m] - beta / h**2
    rhs = b[1:-1]   # keep only the m interior points
    
    # solve system for m interior points:
    uint = spsolve(A, rhs)
    
    # augment with boundary values to form u of length m+2:
    u = hstack([alpha, uint, beta])
    
    return x,u
In [4]:
f = lambda x: 12*x**2
ax = 0.; alpha = 3.; ainfo = (ax, alpha)
bx = 1.; beta = 2.;   binfo = (bx, beta)
utrue = lambda x: x**4 + 3. - 2*x
In [5]:
xfine = linspace(ax, bx, 1001)
ufine = utrue(xfine)
plot(xfine, ufine, 'b')
Out[5]:
[<matplotlib.lines.Line2D at 0x118f7e048>]
In [6]:
m = 9
x,u = solve_bvp1(f, ainfo, binfo, m)

figure(figsize=(12,6))
h = (bx - ax) / (m+1)
subplot(121)
plot(xfine, ufine, 'b')
plot(x, u, 'rx')
grid(True)
title('Solution with h = %.3e' % h)

subplot(122)
E = u - utrue(x)
plot(x, E, 'rx-')
ylim(-0.003, 0.003)
grid(True)
Emax = abs(E).max()
title('Maximum Error = %.3e' % Emax)
/Users/rjl/miniconda/envs/geo/lib/python3.6/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:133: SparseEfficiencyWarning: spsolve requires A be CSC or CSR matrix format
  SparseEfficiencyWarning)
Out[6]:
Text(0.5, 1.0, 'Maximum Error = 2.500e-03')

Warnings:

The warning printed by scipy can be ignored. We used sparse.diags to define the sparse matrix, which returns it in diagonal (DIA) format; later we will talk about compressed sparse row (CSR) and compressed sparse column (CSC) storage. The following cell can be used to suppress printing the warning every time we call the solve_bvp1 function:

In [7]:
import logging
logging.captureWarnings(True)
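
Alternatively, the warning can be avoided at its source: sparse.diags accepts a format argument, so the matrix setup inside solve_bvp1 could build A directly in CSR form. A minimal variant of the two lines defining A above:

# build A directly in compressed sparse row (CSR) format, the
# format spsolve expects, so no conversion warning is raised:
A = sparse.diags([em1, -2*em, em1], [-1, 0, 1], shape=(m,m),
                 format='csr')
A = A / h**2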

Manufactured solution

For the simple ODE $u''(x) = f(x)$ one can easily solve the equation for reasonable choices of $f(x)$, so it is relatively easy to generate exact solutions to use as test problems. For many differential equations this is not the case, in particular for most equations for which you need to develop a numerical method.

The method of "manufactured solutions" is an approach to generating test problems. In the case of our simple problem it consists of first choosing $u(x)$ and the interval on which we want this function to be the solution, and then computing $f(x)$ and the boundary conditions so that it is in fact the solution. We will see more interesting cases later.

As an example, suppose we want to test the solve_bvp function on a problem with a more oscillatory solution. We might choose $u(x) = x\sin(\pi x)$ on $2.5 \leq x \leq 10$ as our target solution. Then we can easily compute that $$ f(x) = 2\pi\cos(\pi x) - x\pi^2\sin(\pi x) $$ and $u(2.5) = 2.5, ~u(10) = 0$. This is our test problem.
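
This differentiation is easy to do by hand, but it can also be verified symbolically. A quick sanity check using sympy (assuming it is available):

# symbolic check that u''(x) matches the f(x) derived above:
import sympy
xs = sympy.symbols('x')
us = xs * sympy.sin(sympy.pi * xs)     # the target solution u(x)
fs = sympy.diff(us, xs, 2)             # its second derivative
print(fs)   # should agree with 2*pi*cos(pi*x) - pi**2*x*sin(pi*x)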

In [8]:
utrue = lambda x: x*sin(pi*x)
f = lambda x: 2*pi*cos(pi*x) - x*pi**2 * sin(pi*x)
ax = 2.5; alpha = utrue(ax); ainfo = (ax, alpha)
bx = 10.; beta = utrue(bx);   binfo = (bx, beta)
In [9]:
xfine = linspace(ax, bx, 1001)
ufine = utrue(xfine)
plot(xfine, ufine, 'b')
Out[9]:
[<matplotlib.lines.Line2D at 0xa1a5adfd0>]

This problem requires more grid points than the previous one to obtain a decent solution. Try m=99 and the error should go down by roughly a factor of 4 (a quick check of this appears below).

In [10]:
m = 49
x,u = solve_bvp1(f, ainfo, binfo, m)

figure(figsize=(12,6))
h = (bx - ax) / (m+1)
subplot(121)
plot(xfine, ufine, 'b')
plot(x, u, 'rx')
grid(True)
title('Solution with h = %.3e' % h)

subplot(122)
E = u - utrue(x)
plot(x, E, 'rx-')
ylim(-0.2, 0.2)
grid(True)
Emax = abs(E).max()
title('Maximum Error = %.3e' % Emax)
Out[10]:
Text(0.5, 1.0, 'Maximum Error = 1.699e-01')
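
A quick check of the factor-of-4 claim made above:

# halving h (m+1 going from 50 to 100) should reduce the error
# by roughly a factor of 4 for a second-order method:
for m_test in [49, 99]:
    x_test, u_test = solve_bvp1(f, ainfo, binfo, m_test)
    E_test = abs(u_test - utrue(x_test)).max()
    print('m = %3i:  max error = %.3e' % (m_test, E_test))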

Note that the error has a similar shape to the solution. This is because the local truncation error is proportional to the fourth derivative $u''''(x)$, which for this oscillatory solution is itself roughly proportional to $u(x)$.

Test the order of accuracy

Solve the problem for several different grid resolutions. Since $h = (b-a)/(m+1)$ we choose values of $m+1$: first increasing by a factor of 2 for each test, and then some much larger values increasing by factors of 10, to illustrate that second-order accuracy is observed until rounding errors become noticeable. Note that solving a tridiagonal system is extremely fast, so even with 1,000,000 grid points this one-dimensional BVP is solved very quickly.

In [11]:
# values of m+1:
mp1_vals = array([50, 100, 200, 400, 1000, 10000, 100000, 1000000])
h_vals = (bx - ax) / mp1_vals   # corresponding h values

errors = []
print('\n    h                 error ')
for jtest in range(len(mp1_vals)):
    m = mp1_vals[jtest] - 1
    h = h_vals[jtest]
    x,u = solve_bvp1(f, ainfo, binfo, m)
    
    x_true = linspace(ax, bx, m+2)
    u_true = utrue(x_true)
    error_max = abs(u - u_true).max()
    errors.append(error_max)
    print('%10.8f   %20.16f' % (h,error_max))
    h                 error 
0.15000000     0.1698751340730631
0.07500000     0.0421345887912761
0.03750000     0.0105288717988046
0.01875000     0.0026337561280698
0.00750000     0.0004213760984619
0.00075000     0.0000042136637735
0.00007500     0.0000000424133244
0.00000750     0.0000000281237238

In the rows of the table where $h$ decreases by a factor of 10 you can see the error decrease by a factor of about 100, as expected for a second-order method, except in the last row, where rounding errors are affecting the solution.

Below we make a log-log plot of these errors, and also show a "reference line" of slope 2. Since $E \approx Ch^2$ implies $\log E \approx \log C + 2\log h$, the errors for a second-order accurate method should fall on a line of slope 2 in these coordinates. The constant $C$ in the error expression $Ch^2 + {\cal O}(h^4)$ depends on higher-order derivatives of the true solution (which bound the local truncation error) and on a bound for the norm of the inverse of the tridiagonal matrix (which relates the local and global errors).

In [12]:
loglog(h_vals, errors, 'bx-', label='Observed errors')
grid(True)
xlabel('h = Delta x')
ylabel('max norm of error')

eref = h_vals**2
loglog(h_vals, eref, 'r-', label='Reference line of slope 2')
legend(loc='lower right')
Out[12]:
<matplotlib.legend.Legend at 0xa1a615f28>
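
The observed order of accuracy can also be estimated directly from consecutive rows of the table: if $E \approx Ch^p$ then $p \approx \log(E_1/E_2) / \log(h_1/h_2)$. A short computation using the values gathered above:

# estimate the observed order p from successive grid refinements:
err_arr = array(errors)
p_vals = log(err_arr[:-1] / err_arr[1:]) / log(h_vals[:-1] / h_vals[1:])
for j, p in enumerate(p_vals):
    print('h: %10.2e -> %10.2e   observed order: %5.2f'
          % (h_vals[j], h_vals[j+1], p))
# p should be close to 2 until rounding errors dominate on the finest grids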