BVP layer -- Solve a singular perturbation problem

AMath 585, Winter Quarter 2020 at the University of Washington. Developed by R.J. LeVeque and distributed under the BSD license. You are free to modify and use as you please, with attribution.

These notebooks are all available on GitHub.


Solve the nonlinear BVP $$ \epsilon u''(x) + u(x)(u'(x) - 1) = f(x) $$ with Dirichlet boundary conditions.

In this notebook we consider the case where $\epsilon > 0$ is very small, the singular perturbation problem discussed in Section 2.17 of the textbook.

Continuation in both $\epsilon$ and the size of the system is illustrated as well.

In [1]:
%matplotlib inline
In [2]:
from pylab import *
In [3]:
from scipy.interpolate import interp1d  # used for continuation

The BVP_nonlinear.py module contains the function solve_bvp_nonlinear that is illustrated in the notebook BVP_nonlinear.html.
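The module itself isn't reproduced here, but its core is a Newton iteration on the centered-difference discretization of the ODE. A minimal sketch of such a solver (hypothetical name `solve_bvp_sketch`; the real `solve_bvp_nonlinear` also prints residual norms and plots iterates):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def solve_bvp_sketch(epsilon, f, ainfo, binfo, m, u0_func, max_iter=10, tol=1e-10):
    """Newton iteration for eps*u'' + u*(u'-1) = f with Dirichlet BCs (sketch)."""
    ax, alpha = ainfo
    bx, beta = binfo
    h = (bx - ax) / (m + 1)
    x = np.linspace(ax, bx, m + 2)
    u = np.array(u0_func(x), dtype=float)
    u[0], u[-1] = alpha, beta          # impose the boundary conditions
    for k in range(max_iter):
        # residual G_i = eps*(u_{i-1} - 2u_i + u_{i+1})/h^2
        #              + u_i*((u_{i+1} - u_{i-1})/(2h) - 1) - f(x_i)
        G = (epsilon*(u[:-2] - 2*u[1:-1] + u[2:])/h**2
             + u[1:-1]*((u[2:] - u[:-2])/(2*h) - 1.0) - f(x[1:-1]))
        if np.linalg.norm(G, np.inf) < tol:
            break
        # tridiagonal Jacobian with respect to the interior unknowns u_1..u_m
        sub = epsilon/h**2 - u[2:-1]/(2*h)                    # dG_i/du_{i-1}
        dia = -2*epsilon/h**2 + (u[2:] - u[:-2])/(2*h) - 1.0  # dG_i/du_i
        sup = epsilon/h**2 + u[1:-2]/(2*h)                    # dG_i/du_{i+1}
        J = diags([sub, dia, sup], [-1, 0, 1], format='csc')
        u[1:-1] -= spsolve(J, G)       # Newton update: solve J*delta = G
    return x, u
```

This is only meant to show the shape of the algorithm; the iteration counts and norms printed below come from the module's own implementation.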

In [4]:
from BVP_nonlinear import solve_bvp_nonlinear
In [5]:
ax = 0.; alpha = -1.; ainfo = (ax, alpha)
bx = 1.; beta = 1.5;   binfo = (bx, beta)
f = lambda x: zeros(x.shape) # the zero function

xbar = 0.5*(ax+bx-beta-alpha)
print('For small epsilon we expect a layer near xbar = %.2f' % xbar)
For small epsilon we expect a layer near xbar = 0.25
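Section 2.17 derives an approximate solution with a tanh-shaped interior layer at this `xbar`. A sketch of that approximation, under the assumption (from matching the outer solutions $u \approx x - a + \alpha$ and $u \approx x - b + \beta$) that the layer strength is $w_0 = \frac{1}{2}\big((\beta-\alpha) - (b-a)\big)$:

```python
import numpy as np

ax, alpha = 0.0, -1.0
bx, beta = 1.0, 1.5
xbar = 0.5*(ax + bx - beta - alpha)        # layer location, as computed above
w0 = 0.5*((beta - alpha) - (bx - ax))      # layer strength from matching outer solutions

def u_approx(x, epsilon):
    # outer solutions x - xbar -/+ w0, glued together by a tanh layer of width O(epsilon)
    return x - xbar + w0*np.tanh(w0*(x - xbar)/(2*epsilon))
```

For small epsilon this satisfies the boundary conditions to within exponentially small terms, and it gives a useful check on the computed solutions below.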
In [6]:
epsilon = 0.1
m = 49
u0_func = lambda x: alpha + x * (beta-alpha) / (bx-ax)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)
Iteration k = 0: norm(Gk) = 2.17e+00, norm(delta) = 6.80e-01
Iteration k = 1: norm(Gk) = 1.12e+00, norm(delta) = 1.37e-01
Iteration k = 2: norm(Gk) = 9.45e-02, norm(delta) = 4.62e-03
Iteration k = 3: norm(Gk) = 1.46e-04, norm(delta) = 5.23e-06
Iteration k = 4: norm(Gk) = 1.58e-10, norm(delta) = 5.71e-12
Iteration k = 5: norm(Gk) = 9.46e-14, norm(delta) = 2.11e-15
Declared convergence after 5 iterations

For smaller epsilon we see slower convergence and some wonky-looking iterates, but the final solution looks smooth, with a layer where we expect it:

In [7]:
epsilon = 0.01
m = 199
u0_func = lambda x: alpha + x * (beta-alpha) / (bx-ax)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, u0_func)

figure()
plot(x,u,'rx-')
title('Final approximate solution')
Iteration k = 0: norm(Gk) = 2.23e+00, norm(delta) = 1.84e+00
Iteration k = 1: norm(Gk) = 2.43e+01, norm(delta) = 2.40e+00
Iteration k = 2: norm(Gk) = 1.23e+02, norm(delta) = 2.02e+00
Iteration k = 3: norm(Gk) = 1.58e+02, norm(delta) = 1.18e+00
Iteration k = 4: norm(Gk) = 6.27e+01, norm(delta) = 5.40e-01
Iteration k = 5: norm(Gk) = 1.39e+01, norm(delta) = 1.34e-01
Iteration k = 6: norm(Gk) = 8.30e-01, norm(delta) = 7.77e-03
Iteration k = 7: norm(Gk) = 3.16e-03, norm(delta) = 2.53e-05
Iteration k = 8: norm(Gk) = 3.15e-08, norm(delta) = 2.55e-10
Iteration k = 9: norm(Gk) = 1.06e-13, norm(delta) = 3.07e-15
Declared convergence after 9 iterations
Out[7]:
Text(0.5, 1.0, 'Final approximate solution')

Note that we have many grid points in the layer. If we tried this value of epsilon with fewer grid points, the final solution wouldn't look so nice, and/or Newton's method might not converge.

Here we see that with only m = 49 points it doesn't converge when the starting guess is the linear function:

In [8]:
epsilon = 0.01
m = 49
u0_func = lambda x: alpha + x * (beta-alpha) / (bx-ax)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m, 
                          u0_func, max_iter=15)
Iteration k = 0: norm(Gk) = 2.17e+00, norm(delta) = 1.85e+00
Iteration k = 1: norm(Gk) = 2.38e+01, norm(delta) = 2.82e+00
Iteration k = 2: norm(Gk) = 1.12e+02, norm(delta) = 3.63e+00
Iteration k = 3: norm(Gk) = 4.56e+02, norm(delta) = 5.73e+00
Iteration k = 4: norm(Gk) = 4.73e+02, norm(delta) = 9.50e+00
Iteration k = 5: norm(Gk) = 1.46e+03, norm(delta) = 1.34e+01
Iteration k = 6: norm(Gk) = 9.02e+02, norm(delta) = 1.25e+01
Iteration k = 7: norm(Gk) = 2.50e+03, norm(delta) = 2.29e+01
Iteration k = 8: norm(Gk) = 8.03e+03, norm(delta) = 2.47e+01
Iteration k = 9: norm(Gk) = 1.78e+03, norm(delta) = 5.10e+01
Iteration k = 10: norm(Gk) = 2.26e+04, norm(delta) = 3.48e+01
Iteration k = 11: norm(Gk) = 4.97e+03, norm(delta) = 2.17e+01
Iteration k = 12: norm(Gk) = 1.90e+03, norm(delta) = 5.02e+01
Iteration k = 13: norm(Gk) = 6.20e+04, norm(delta) = 2.40e+01
Iteration k = 14: norm(Gk) = 1.48e+04, norm(delta) = 1.10e+01
Reached max_iter, possible nonconvergence

Continuation

Continuation in epsilon

To begin, let's keep m = 49 fixed and try to get a better initial guess by first solving the problem with a larger value, epsilon = 0.05. We save that solution so we can use it as an initial guess for a smaller value of epsilon.
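As an aside, the same continuation idea can be sketched with SciPy's `scipy.integrate.solve_bvp` as a stand-in solver (a collocation method with its own adaptive mesh, not the Newton finite-difference solver used in this notebook):

```python
import numpy as np
from scipy.integrate import solve_bvp

ax, alpha = 0.0, -1.0
bx, beta = 1.0, 1.5

def make_rhs(eps):
    # first-order system for eps*u'' + u*(u' - 1) = 0:
    #   y0' = y1,   y1' = -y0*(y1 - 1)/eps
    def rhs(x, y):
        return np.vstack([y[1], -y[0]*(y[1] - 1.0)/eps])
    return rhs

def bc(ya, yb):
    return np.array([ya[0] - alpha, yb[0] - beta])

# linear initial guess on a coarse mesh
x = np.linspace(ax, bx, 51)
y = np.vstack([alpha + x*(beta - alpha)/(bx - ax),
               np.full_like(x, (beta - alpha)/(bx - ax))])

# continuation: each converged solution (on its adapted mesh) seeds the next solve
for eps in [0.05, 0.01, 0.001]:
    sol = solve_bvp(make_rhs(eps), bc, x, y, max_nodes=100000)
    x, y = sol.x, sol.y
```

In this notebook, though, we carry out the continuation by hand with our own solver, which makes the mechanics explicit.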

In [9]:
epsilon = 0.05
m = 49
u0_func = lambda x: alpha + x * (beta-alpha) / (bx-ax)
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m,  u0_func)
u_05 = u
Iteration k = 0: norm(Gk) = 2.17e+00, norm(delta) = 1.02e+00
Iteration k = 1: norm(Gk) = 3.36e+00, norm(delta) = 4.30e-01
Iteration k = 2: norm(Gk) = 1.28e+00, norm(delta) = 7.34e-02
Iteration k = 3: norm(Gk) = 5.34e-02, norm(delta) = 1.64e-03
Iteration k = 4: norm(Gk) = 3.37e-05, norm(delta) = 8.94e-07
Iteration k = 5: norm(Gk) = 7.36e-12, norm(delta) = 3.14e-13
Declared convergence after 5 iterations

Now we can reduce epsilon and use the result just computed as the initial guess instead of a linear function.

Note that u0_func is a function that just returns this vector regardless of what x is passed in, so this wouldn't work if we wanted to increase m (which we will do later).
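To see the pitfall concretely (with made-up stand-in data, not the saved solution above):

```python
import numpy as np

u_saved = np.linspace(-1.0, 1.5, 51)   # stand-in for a solution saved on the m = 49 grid
u0_func = lambda x: u_saved            # ignores its argument entirely

x_fine = np.linspace(0.0, 1.0, 101)    # grid for m = 99
u0 = u0_func(x_fine)                   # still only 51 values: wrong length for this grid
```

Interpolation, introduced below under "Continuation in m", is the fix.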

In [10]:
epsilon = 0.01
m = 49
u0_func = lambda x: u_05
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m,  u0_func)
u_01 = u

figure()
plot(x,u,'rx-')
title('Final approximate solution')
Iteration k = 0: norm(Gk) = 1.91e+00, norm(delta) = 6.59e-01
Iteration k = 1: norm(Gk) = 6.50e+00, norm(delta) = 2.00e-01
Iteration k = 2: norm(Gk) = 6.06e-01, norm(delta) = 1.86e-02
Iteration k = 3: norm(Gk) = 4.01e-03, norm(delta) = 1.08e-04
Iteration k = 4: norm(Gk) = 1.17e-07, norm(delta) = 2.08e-09
Iteration k = 5: norm(Gk) = 1.19e-14, norm(delta) = 3.80e-16
Declared convergence after 5 iterations
Out[10]:
Text(0.5, 1.0, 'Final approximate solution')

Now Newton's method converges and we get something reasonable, even though there are only a few points in the interior layer.

We saved the solution above, which we can now use as initial guess for an even smaller epsilon:

In [11]:
epsilon = 0.007
m = 49
u0_func = lambda x: u_01
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m,  u0_func, max_iter=20)

figure()
plot(x,u,'rx-')
title('Final approximate solution')
Iteration k = 0: norm(Gk) = 2.89e+00, norm(delta) = 3.00e-01
Iteration k = 1: norm(Gk) = 2.89e+00, norm(delta) = 9.06e-01
Iteration k = 2: norm(Gk) = 1.86e+01, norm(delta) = 4.80e-01
Iteration k = 3: norm(Gk) = 5.31e+00, norm(delta) = 3.06e-01
Iteration k = 4: norm(Gk) = 2.19e+00, norm(delta) = 6.64e-01
Iteration k = 5: norm(Gk) = 1.04e+01, norm(delta) = 3.71e-01
Iteration k = 6: norm(Gk) = 3.26e+00, norm(delta) = 3.10e-01
Iteration k = 7: norm(Gk) = 2.27e+00, norm(delta) = 5.46e-01
Iteration k = 8: norm(Gk) = 7.05e+00, norm(delta) = 3.26e-01
Iteration k = 9: norm(Gk) = 2.51e+00, norm(delta) = 4.00e-01
Iteration k = 10: norm(Gk) = 3.79e+00, norm(delta) = 2.99e-01
Iteration k = 11: norm(Gk) = 2.11e+00, norm(delta) = 1.43e+00
Iteration k = 12: norm(Gk) = 4.80e+01, norm(delta) = 7.29e-01
Iteration k = 13: norm(Gk) = 1.26e+01, norm(delta) = 3.99e-01
Iteration k = 14: norm(Gk) = 3.77e+00, norm(delta) = 2.99e-01
Iteration k = 15: norm(Gk) = 2.12e+00, norm(delta) = 1.35e+00
Iteration k = 16: norm(Gk) = 4.34e+01, norm(delta) = 6.94e-01
Iteration k = 17: norm(Gk) = 1.14e+01, norm(delta) = 3.84e-01
Iteration k = 18: norm(Gk) = 3.49e+00, norm(delta) = 3.03e-01
Iteration k = 19: norm(Gk) = 2.18e+00, norm(delta) = 7.65e-01
Reached max_iter, possible nonconvergence
Out[11]:
Text(0.5, 1.0, 'Final approximate solution')

Continuation in m (refining the grid)

Newton's method didn't converge in the test just done, and clearly we don't have enough grid points in the layer for this value of epsilon, so we want to increase m. To do so we can define a function based on one of our previously converged solutions, pass it in as u0_func, and evaluate it on a new, finer grid.

We use the scipy.interpolate.interp1d function, imported at the top of this notebook, to define a piecewise-linear function that interpolates the values (x[i], u_01[i]):

In [12]:
u_01_func = interp1d(x,u_01,'linear')
In [13]:
epsilon = 0.007
m = 99
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m,  u0_func=u_01_func)

figure()
plot(x,u,'rx-')
title('Final approximate solution')
Iteration k = 0: norm(Gk) = 8.99e+00, norm(delta) = 1.16e-01
Iteration k = 1: norm(Gk) = 3.23e-01, norm(delta) = 3.42e-03
Iteration k = 2: norm(Gk) = 4.81e-04, norm(delta) = 4.31e-06
Iteration k = 3: norm(Gk) = 6.38e-10, norm(delta) = 4.96e-12
Iteration k = 4: norm(Gk) = 2.95e-14, norm(delta) = 2.80e-15
Declared convergence after 4 iterations
Out[13]:
Text(0.5, 1.0, 'Final approximate solution')

With this approach we can go to smaller epsilon, on a suitably fine grid:

In [14]:
epsilon = 0.001
m = 499
x,u = solve_bvp_nonlinear(epsilon, f, ainfo, binfo, m,  u0_func=u_01_func)

figure()
plot(x,u,'rx-')
title('Final approximate solution')
Iteration k = 0: norm(Gk) = 1.13e+01, norm(delta) = 9.60e-01
Iteration k = 1: norm(Gk) = 1.00e+02, norm(delta) = 3.76e-01
Iteration k = 2: norm(Gk) = 2.41e+01, norm(delta) = 7.96e-02
Iteration k = 3: norm(Gk) = 1.10e+00, norm(delta) = 3.18e-03
Iteration k = 4: norm(Gk) = 1.98e-03, norm(delta) = 4.34e-06
Iteration k = 5: norm(Gk) = 4.44e-09, norm(delta) = 9.61e-12
Iteration k = 6: norm(Gk) = 1.31e-13, norm(delta) = 3.25e-15
Declared convergence after 6 iterations
Out[14]:
Text(0.5, 1.0, 'Final approximate solution')