A nonlinear BVP

AMath 585, Winter Quarter 2020 at the University of Washington. Developed by R.J. LeVeque and distributed under the BSD license. You are free to modify and use as you please, with attribution.

These notebooks are all available on Github.


Solve the nonlinear BVP $$ \epsilon u''(x) + u(x)(u'(x) - 1) = f(x) $$ with Dirichlet boundary conditions.

In this notebook we illustrate with a large value of $\epsilon$ and a "manufactured solution" to test that the code is working. Later we will look at the singular perturbation version with $\epsilon$ very small.
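
For reference, the discretization and Newton iteration implemented in the code below are the standard ones: on the uniform grid $x_i = a + ih$ with $h = (b-a)/(m+1)$, centered differences give the nonlinear system

$$ G_i(U) = \frac{\epsilon}{h^2}\bigl(U_{i-1} - 2U_i + U_{i+1}\bigr) + U_i\left(\frac{U_{i+1}-U_{i-1}}{2h} - 1\right) - f(x_i) = 0, \qquad i = 1,\ldots,m, $$

with $G_0(U) = U_0 - \alpha$ and $G_{m+1}(U) = U_{m+1} - \beta$ enforcing the Dirichlet conditions. Each Newton iteration solves

$$ J(U^{[k]})\,\delta^{[k]} = G(U^{[k]}), \qquad U^{[k+1]} = U^{[k]} - \delta^{[k]}, $$

where the tridiagonal Jacobian is split as $J = A + N$: $A$ holds the constant contributions of the $\epsilon u''$ term, and $N$, which must be rebuilt each iteration, holds the contributions of the nonlinear term, so that the interior entries of $J$ are

$$ \frac{\partial G_i}{\partial U_{i\pm 1}} = \frac{\epsilon}{h^2} \pm \frac{U_i}{2h}, \qquad \frac{\partial G_i}{\partial U_i} = -\frac{2\epsilon}{h^2} + \frac{U_{i+1}-U_{i-1}}{2h} - 1. $$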

In [1]:
%matplotlib inline
In [2]:
from pylab import *

Debugging

There are a few debugging statements left in the code below from when I was getting this working, and an illustration below of how print statements can be used to aid in debugging a complex routine like this.

One of the hardest things about getting this code right is the array indexing, and making sure that you understand how Numpy array slicing works. See the notebook Numpy_array_hints.html for some tips.
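
For example, here is a small illustration (added for this writeup, not part of the original debugging session) of the view-versus-copy behavior that motivates the `Uk.copy()` call in the solver below:

In [ ]:
# Basic slices of a numpy array are views, so modifying the slice
# also modifies the original array; .copy() gives an independent array.
U = linspace(0., 5., 6)
V = U[0:3]          # a view into U
V[0] = 99.
print(U)            # U[0] has changed to 99 as well
W = U[3:6].copy()   # an independent copy
W[0] = -1.
print(U)            # U is unchanged by this assignment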

Solve the BVP

In [3]:
def solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func, max_iter=10, 
                        plot_iterates=True, debug=False):
    """
    Solve the 2-point BVP with Dirichlet BCs
    Input:
        epsilon > 0 coefficient of u''
        f is a function defining the right hand side,
        ainfo = (ax, alpha) defines the Dirichlet boundary condition u(ax) = alpha,
        binfo = (bx, beta) defines the Dirichlet boundary condition u(bx) = beta,
        m is the number of (equally spaced) interior grid points to use.
        u0_func = function to evaluate for the initial guess
        max_iter = maximum number of iterations of Newton
        plot_iterates: if set to True, plot the approximate solution each iteration
        debug: if set to True, print some things out including the matrix at each
            iteration, so generally use this only for small m.
    
    Returns:
        x = array of grid points (including boundaries, so of length m+2)
        u = array of approximate solution at these points.
    """
    
    from scipy import sparse
    from scipy.sparse.linalg import spsolve
        
    ax, alpha = ainfo
    bx, beta = binfo
    
    h = (bx-ax)/float(m+1)    # h = delta x
    x = linspace(ax,bx,m+2)   # note x[0]=ax, x[m+1]=bx
    if debug: 
        print('+++ h = %g, m+2 = %i' % (h,m+2))
        print('+++ x = ',x)
    
    # convergence tolerances:
    tol_delta = 1e-12
    tol_Gk = 1e-12
    
    # set up the (m+2) by (m+2) matrix A for the u''(x) term,
    # which is always needed as part of the Jacobian
    A_diag = ones(m+2)
    A_offdiag = ones(m+1)
    A = sparse.diags([A_offdiag, -2*A_diag, A_offdiag], [-1, 0, 1], 
                     shape=(m+2,m+2), format='csc')
    A = epsilon * A / h**2
    
    # modify first and last row for Dirichlet BCs:
    A[0,0] = 1.
    A[0,1] = 0.
    A[m+1,m] = 0.
    A[m+1,m+1] = 1.
    
    # initial guess for Newton iteration:
    Uk = u0_func(x)  # of length m+2
    if debug: print('+++ Initial Uk = ', Uk)
    
    if plot_iterates:
        # make a plot showing how the solution evolves:
        fig = figure(figsize=(8,6))
        ax1 = axes()  # use a new name so we don't clobber ax, the left endpoint
        grid(True)
        title('Approximate solution while iterating')
    
    # Newton iteration:
    for k in range(max_iter):
        
        if plot_iterates:
            plot(x,Uk,label='k = %i' % k)
        
        U = Uk.copy()  # use in slicing below so Uk not changed

        # Jacobian matrix will be A from above plus nonlinear part N:
        
        N_subdiag = -U[1:m+2]
        N_subdiag[m] = 0.
        N_diag = zeros(m+2)
        N_diag[1:m+1] = U[2:m+2] - U[0:m] - 2*h
        N_superdiag = U[0:m+1]   # note: a view of U, so the next line also sets U[0] = 0
        N_superdiag[0] = 0.      # (this is why Uk was copied into U above)
        N = sparse.diags([N_subdiag, N_diag, N_superdiag], [-1, 0, 1], 
                     shape=(m+2,m+2), format='csc')
        N = N / (2*h)
        
        Jk = A + N
        if debug: print('+++ after forming Jk, Uk = ', Uk)
        if debug: print('+++ Jk = \n', Jk.toarray())
        
        # Use Uk below, since U got changed above.
        Gk = zeros(m+2)
        if debug: print('+++ Uk[0] = %g, alpha = %g' % (Uk[0], alpha))
        Gk[0] = Uk[0] - alpha
        Gk[m+1] = Uk[m+1] - beta
        Gk[1:m+1] = epsilon/h**2 * (Uk[0:m] - 2*Uk[1:m+1] + Uk[2:m+2]) \
                    + Uk[1:m+1] * ((Uk[2:m+2] - Uk[0:m])/(2*h) -1.) \
                    - f(x[1:m+1])
        
        # solve linear system:
        if debug: print('+++ Uk = ',Uk)
        if debug: print('+++ Gk = ',Gk)
        delta = spsolve(Jk, Gk)
        Uk = Uk - delta
        if debug: print('+++ delta = ',delta)
        norm_delta = norm(delta, inf)
        norm_Gk = norm(Gk, inf)
        print('Iteration k = %i: norm(Gk) = %.2e, norm(delta) = %.2e' \
               % (k, norm_Gk, norm_delta))
        if (norm_delta < tol_delta) or (norm_Gk < tol_Gk):
            print('Declared convergence after %i iterations' % k)
            break
        if k==(max_iter-1):
            print('Reached max_iter, possible nonconvergence')
    
    if plot_iterates:
        legend()
        
    return x,Uk

Test on a manufactured solution

We choose our desired solution $u(x)$ and then set $f(x)$ and $\alpha, \beta$ accordingly.

Since the truncation error depends only on $u'''(x)$, $u''''(x)$, and higher order derivatives, first try $u(x) = 3 + 4x^2$ so the truncation error (and hence the global error) should be 0, i.e. we expect the solution of the nonlinear system to be equal to the true solution evaluated at each grid point. This tests that the Newton iteration is working.

Note that we chose a function for which the boundary values are not simply 0, and a value $\epsilon \neq 1$, to help catch bugs in how these are specified.

In [4]:
utrue = lambda x: 3 + 4*x**2  # values below set based on this desired solution
epsilon = 2.
f = lambda x: 8*epsilon + (3 + 4*x**2)*(8*x-1.)
ax = 0.; alpha = 3.; ainfo = (ax, alpha)
bx = 1.; beta = 7.;   binfo = (bx, beta)
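
As a sanity check on the manufactured right-hand side, one could verify symbolically that the $f$ defined above satisfies $f = \epsilon u'' + u(u'-1)$ for $u(x) = 3 + 4x^2$ (a sketch assuming sympy is available; not part of the original notebook):

In [ ]:
# Optional symbolic check: f above should equal epsilon*u'' + u*(u' - 1)
# for the manufactured solution u(x) = 3 + 4x**2 with epsilon = 2.
import sympy
xs = sympy.symbols('x')
eps_val = 2
u_sym = 3 + 4*xs**2
f_sym = eps_val*sympy.diff(u_sym, xs, 2) + u_sym*(sympy.diff(u_sym, xs) - 1)
f_code = 8*eps_val + (3 + 4*xs**2)*(8*xs - 1)
print(sympy.simplify(f_sym - f_code))   # should print 0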
In [5]:
xfine = linspace(ax, bx, 1001)
ufine = utrue(xfine)
plot(xfine, ufine, 'b')
Out[5]:
[<matplotlib.lines.Line2D at 0x11fbc13c8>]

Test it works if we start with the true solution as our initial guess:

In [6]:
m = 49
u0_func = lambda x: 3 + 4*x**2
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)

# add true solution to plot
plot(x, utrue(x), 'k+')
Iteration k = 0: norm(Gk) = 7.13e-12, norm(delta) = 8.68e-16
Declared convergence after 0 iterations
Out[6]:
[<matplotlib.lines.Line2D at 0x11fe7f0f0>]

Try a different initial guess

If we don't know the true solution, one possible initial guess is simply the linear function that connects the two boundary conditions. We know this will be close to correct very near the boundaries at least.
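
In general this linear initial guess is

$$ u^{[0]}(x) = \alpha + \frac{x - a}{b - a}\,(\beta - \alpha), $$

which for $a = 0$, $b = 1$, $\alpha = 3$, $\beta = 7$ reduces to $3 + 4x$, as used in the next cell.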

In [7]:
m = 49
u0_func = lambda x: 3 + 4*x
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)

# add true solution to plot
plot(x, utrue(x), 'k+')
Iteration k = 0: norm(Gk) = 4.20e+01, norm(delta) = 9.87e-01
Iteration k = 1: norm(Gk) = 1.56e+00, norm(delta) = 2.51e-02
Iteration k = 2: norm(Gk) = 1.65e-03, norm(delta) = 1.43e-05
Iteration k = 3: norm(Gk) = 7.57e-10, norm(delta) = 4.83e-12
Iteration k = 4: norm(Gk) = 6.25e-12, norm(delta) = 5.32e-15
Declared convergence after 4 iterations
Out[7]:
[<matplotlib.lines.Line2D at 0x11fe7f240>]

Note the quadratic convergence of Newton's method, as expected.
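
One quick way to confirm this (added here as an illustration) is to check that norm(delta) is roughly squared from one iteration to the next, using the values printed above:

In [ ]:
# Rough check of quadratic convergence, with norm(delta) values copied from
# the iteration above: for Newton we expect e_{k+1} ~ C * e_k**2, so the
# ratios norm(delta_{k+1}) / norm(delta_k)**2 should be roughly constant.
deltas = array([9.87e-1, 2.51e-2, 1.43e-5, 4.83e-12])
print(deltas[1:] / deltas[:-1]**2)   # roughly constant, about 0.02 to 0.03 here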

If the Jacobian is wrong:

If you purposely introduce an error in specifying the Jacobian matrix, you would see this quadratic convergence deteriorate. For example if you change the line in the code from

N_diag[1:m+1] = U[2:m+2] - U[0:m] - 2*h

to

N_diag[1:m+1] = U[2:m+2] - U[0:m]

(forgetting the $-1$ in the $u(x)(u'(x)-1)$ term of the ODE, as I initially did), and rerun the cell above, you will see only linear convergence.

Different initial conditions

Also let's try starting with an initial guess that is farther from correct, and in particular that does not even satisfy the boundary conditions. Note from the plots below that in one iteration Newton has corrected these, since the two equations specifying the BCs are decoupled from the others and are both linear equations.
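
Specifically, the first row of the Jacobian is just $[1, 0, \ldots, 0]$ with residual $G_0 = U_0 - \alpha$, so the first Newton step gives $\delta_0 = U_0^{[0]} - \alpha$ and hence $U_0^{[1]} = U_0^{[0]} - \delta_0 = \alpha$ exactly (and similarly $U_{m+1}^{[1]} = \beta$), no matter what initial guess is used.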

(Note: If you try an initial condition a lot farther away from the true solution, you might see non-convergence.)

In [8]:
m = 49
u0_func = lambda x: cos(3*pi*x)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)

# add true solution to plot
plot(x, utrue(x), 'k+')
Iteration k = 0: norm(Gk) = 2.15e+02, norm(delta) = 8.00e+00
Iteration k = 1: norm(Gk) = 1.47e+02, norm(delta) = 1.26e+00
Iteration k = 2: norm(Gk) = 5.14e+00, norm(delta) = 3.06e-02
Iteration k = 3: norm(Gk) = 2.62e-03, norm(delta) = 3.04e-05
Iteration k = 4: norm(Gk) = 2.49e-09, norm(delta) = 2.22e-11
Iteration k = 5: norm(Gk) = 7.10e-12, norm(delta) = 4.91e-16
Declared convergence after 5 iterations
Out[8]:
[<matplotlib.lines.Line2D at 0xa213b2a58>]

Global error:

We chose the manufactured solution so that the global error should be zero, no matter how coarse our grid is. Check that it is:

In [9]:
error = norm(u-utrue(x), inf)
print('Max-norm of error is %g' % error)
Max-norm of error is 8.88178e-16

Test convergence

Now that we think Newton's method is converging properly, let's try a problem where the higher derivatives of the true solution do not vanish, so we expect a non-zero global error, but hopefully one that is second-order accurate as we refine the grid.

In [10]:
utrue = lambda x: sin(10*x+2)  # values below set based on this desired solution
epsilon = 2.
f = lambda x: -100*epsilon*sin(10*x+2) + sin(10*x+2) * (10*cos(10*x+2) - 1)
ax = 0.; alpha = utrue(ax); ainfo = (ax, alpha)
bx = 2.; beta = utrue(bx);   binfo = (bx, beta)
In [11]:
xfine = linspace(ax, bx, 1001)
ufine = utrue(xfine)
plot(xfine, ufine, 'b')
Out[11]:
[<matplotlib.lines.Line2D at 0xa2153e978>]
In [12]:
m = 49
u0_func = lambda x: alpha + x * (beta-alpha) / (bx-ax)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)

# add true solution to plot
plot(x, utrue(x), 'k+')

error = norm(u-utrue(x), inf)
print('Max-norm of error is %g' % error)
Iteration k = 0: norm(Gk) = 2.01e+02, norm(delta) = 1.76e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.23e-02
Iteration k = 2: norm(Gk) = 2.78e-02, norm(delta) = 1.92e-04
Iteration k = 3: norm(Gk) = 1.04e-07, norm(delta) = 1.34e-09
Iteration k = 4: norm(Gk) = 3.69e-13, norm(delta) = 3.85e-15
Declared convergence after 4 iterations
Max-norm of error is 0.0231569
In [13]:
# values of m+1:
mp1_vals = array([50, 100, 200, 400, 1000, 10000, 100000, 1000000])
h_vals = (bx - ax) / mp1_vals   # corresponding h values

errors = []
for jtest in range(len(mp1_vals)):
    m = mp1_vals[jtest] - 1
    print('Solving with m = %i' % m)
    h = h_vals[jtest]
    x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, 
                              u0_func, plot_iterates=False)
    
    x_true = linspace(ax, bx, m+2)
    u_true = utrue(x_true)
    error_max = abs(u - u_true).max()
    errors.append(error_max)
Solving with m = 49
Iteration k = 0: norm(Gk) = 2.01e+02, norm(delta) = 1.76e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.23e-02
Iteration k = 2: norm(Gk) = 2.78e-02, norm(delta) = 1.92e-04
Iteration k = 3: norm(Gk) = 1.04e-07, norm(delta) = 1.34e-09
Iteration k = 4: norm(Gk) = 3.69e-13, norm(delta) = 3.85e-15
Declared convergence after 4 iterations
Solving with m = 99
Iteration k = 0: norm(Gk) = 2.02e+02, norm(delta) = 1.76e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.11e-02
Iteration k = 2: norm(Gk) = 2.76e-02, norm(delta) = 1.90e-04
Iteration k = 3: norm(Gk) = 1.04e-07, norm(delta) = 1.28e-09
Iteration k = 4: norm(Gk) = 1.71e-12, norm(delta) = 2.89e-15
Declared convergence after 4 iterations
Solving with m = 199
Iteration k = 0: norm(Gk) = 2.02e+02, norm(delta) = 1.76e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.08e-02
Iteration k = 2: norm(Gk) = 2.78e-02, norm(delta) = 1.90e-04
Iteration k = 3: norm(Gk) = 1.04e-07, norm(delta) = 1.26e-09
Iteration k = 4: norm(Gk) = 4.97e-12, norm(delta) = 9.90e-15
Declared convergence after 4 iterations
Solving with m = 399
Iteration k = 0: norm(Gk) = 2.02e+02, norm(delta) = 1.75e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.08e-02
Iteration k = 2: norm(Gk) = 2.78e-02, norm(delta) = 1.89e-04
Iteration k = 3: norm(Gk) = 1.04e-07, norm(delta) = 1.26e-09
Iteration k = 4: norm(Gk) = 3.14e-11, norm(delta) = 2.75e-14
Declared convergence after 4 iterations
Solving with m = 999
Iteration k = 0: norm(Gk) = 2.02e+02, norm(delta) = 1.75e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.08e-02
Iteration k = 2: norm(Gk) = 2.79e-02, norm(delta) = 1.89e-04
Iteration k = 3: norm(Gk) = 1.04e-07, norm(delta) = 1.26e-09
Iteration k = 4: norm(Gk) = 1.35e-10, norm(delta) = 7.60e-14
Declared convergence after 4 iterations
Solving with m = 9999
Iteration k = 0: norm(Gk) = 2.02e+02, norm(delta) = 1.75e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.08e-02
Iteration k = 2: norm(Gk) = 2.79e-02, norm(delta) = 1.89e-04
Iteration k = 3: norm(Gk) = 1.07e-07, norm(delta) = 1.26e-09
Iteration k = 4: norm(Gk) = 2.25e-08, norm(delta) = 3.39e-13
Declared convergence after 4 iterations
Solving with m = 99999
Iteration k = 0: norm(Gk) = 2.02e+02, norm(delta) = 1.75e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.08e-02
Iteration k = 2: norm(Gk) = 2.79e-02, norm(delta) = 1.89e-04
Iteration k = 3: norm(Gk) = 1.24e-06, norm(delta) = 1.25e-09
Iteration k = 4: norm(Gk) = 1.74e-06, norm(delta) = 7.77e-12
Iteration k = 5: norm(Gk) = 1.79e-06, norm(delta) = 1.25e-11
Iteration k = 6: norm(Gk) = 1.59e-06, norm(delta) = 1.55e-11
Iteration k = 7: norm(Gk) = 2.28e-06, norm(delta) = 7.20e-12
Iteration k = 8: norm(Gk) = 1.70e-06, norm(delta) = 3.60e-12
Iteration k = 9: norm(Gk) = 1.15e-06, norm(delta) = 3.02e-12
Reached max_iter, possible nonconvergence
Solving with m = 999999
Iteration k = 0: norm(Gk) = 2.02e+02, norm(delta) = 1.75e+00
Iteration k = 1: norm(Gk) = 1.10e+01, norm(delta) = 8.08e-02
Iteration k = 2: norm(Gk) = 2.80e-02, norm(delta) = 1.89e-04
Iteration k = 3: norm(Gk) = 1.28e-04, norm(delta) = 1.22e-09
Iteration k = 4: norm(Gk) = 1.28e-04, norm(delta) = 4.97e-11
Iteration k = 5: norm(Gk) = 1.83e-04, norm(delta) = 6.95e-11
Iteration k = 6: norm(Gk) = 1.50e-04, norm(delta) = 3.78e-11
Iteration k = 7: norm(Gk) = 1.28e-04, norm(delta) = 3.90e-11
Iteration k = 8: norm(Gk) = 1.10e-04, norm(delta) = 1.95e-11
Iteration k = 9: norm(Gk) = 1.28e-04, norm(delta) = 3.07e-11
Reached max_iter, possible nonconvergence

Note that with a million grid points the rounding error may be starting to affect convergence of the Newton iteration, but even for this grid we continue to see the expected behavior of the error when plotted:
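
A rough estimate (added here) of where this rounding-error floor comes from: each entry of $U$ carries rounding error of order $10^{-16}$, which the term $\epsilon(U_{i-1} - 2U_i + U_{i+1})/h^2$ in $G_k$ amplifies by $\epsilon/h^2 = 2/(2\times 10^{-6})^2 = 5\times 10^{11}$, so contributions of order $5\times 10^{-5}$ to $\|G_k\|_\infty$ are unavoidable on the finest grid, consistent with the stagnation near $10^{-4}$ seen above.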

In [14]:
loglog(h_vals, errors, 'bx-', label='Observed errors')
grid(True)
xlabel('h = Delta x')
ylabel('max norm of error')

eref = h_vals**2
loglog(h_vals, eref, 'r-', label='Reference line of slope 2')
legend(loc='lower right')
Out[14]:
<matplotlib.legend.Legend at 0xa214fb8d0>
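
As an additional check (a sketch added here, not in the original notebook), the observed order of accuracy can be estimated by a least-squares fit of log(error) against log(h), using only the coarser grids on which Newton converged to full precision:

In [ ]:
# Estimate the order p in error ~ C*h**p from the first few grids,
# where the Newton iteration converged cleanly.
p, logC = polyfit(log(h_vals[:5]), log(array(errors[:5])), 1)
print('Estimated order of accuracy: p = %.2f' % p)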

Debugging

To check if the matrix is being built properly, you might want to print things out for a small value of m and with special values of the input. For example, the cell below checks the nonlinear part of the Jacobian (since epsilon = 0) and the interval is chosen so that h=1. Note that we also just take one iteration (max_iter = 1).

In [15]:
epsilon = 0.
f = lambda x: zeros(x.shape)
ax = 0.; alpha = 0.; ainfo = (ax, alpha)
bx = 5.; beta = 0.;   binfo = (bx, beta)

m = 4
u0_func = lambda x: x
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func, 
                          max_iter=1, plot_iterates=False, debug=True)
+++ h = 1, m+2 = 6
+++ x =  [0. 1. 2. 3. 4. 5.]
+++ Initial Uk =  [0. 1. 2. 3. 4. 5.]
+++ after forming Jk, Uk =  [0. 1. 2. 3. 4. 5.]
+++ Jk = 
 [[ 1.   0.   0.   0.   0.   0. ]
 [-0.5  0.   0.5  0.   0.   0. ]
 [ 0.  -1.   0.   1.   0.   0. ]
 [ 0.   0.  -1.5  0.   1.5  0. ]
 [ 0.   0.   0.  -2.   0.   2. ]
 [ 0.   0.   0.   0.   0.   1. ]]
+++ Uk[0] = 0, alpha = 0
+++ Uk =  [0. 1. 2. 3. 4. 5.]
+++ Gk =  [0. 0. 0. 0. 0. 5.]
+++ delta =  [ 0.  5. -0.  5.  0.  5.]
Iteration k = 0: norm(Gk) = 5.00e+00, norm(delta) = 5.00e+00
Reached max_iter, possible nonconvergence
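
A quick hand check of this output: with $\epsilon = 0$, $h = 1$, and $U_i = x_i = i$, each interior row $i$ of the Jacobian should contain

$$ \left[-\frac{U_i}{2h},\ \ \frac{U_{i+1} - U_{i-1} - 2h}{2h},\ \ \frac{U_i}{2h}\right] = \left[-\frac{i}{2},\ 0,\ \frac{i}{2}\right], $$

and the interior residuals should be $G_i = U_i\bigl((U_{i+1} - U_{i-1})/(2h) - 1\bigr) = i\,(1-1) = 0$, with $G_0 = U_0 - \alpha = 0$ and $G_5 = U_5 - \beta = 5$. This agrees with the matrix and vectors printed above.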