Variable time step ODE solver

AMath 586, Spring Quarter 2019 at the University of Washington. For other notebooks, see Index.html or the Index of all notebooks on Github.

This notebook gives a simplified implementation of a variable time step ODE solver based on the embedded Runge-Kutta pair of orders 1 and 2 given by (5.42) in the textbook.

In [1]:
%matplotlib inline
In [2]:
from pylab import *

Solve the ODE

$$ u'(t) = \lambda(u - g(t)) + g'(t), \qquad u(0) = \eta $$

where $g(t)$ is a given function. The exact solution is

$$u(t) = \exp(\lambda t)(\eta - g(0)) + g(t).$$

In ScalarStiffness.html we used $g(t) = \cos(t)$, but here we modify this function $g(t)$ to also include a sharply peaked Gaussian $\exp(-\gamma (t-1)^2)$ where small time steps are needed to resolve the solution. We can also choose $\lambda$ very negative to make the problem stiff, or take $\lambda$ near 0 to make it easier to solve with an explicit method.

In [3]:
gamma = 500.

def g(t):
    return cos(t) + exp(-gamma*(t-1.)**2)

def gprime(t):
    return -sin(t) - 2*gamma*(t-1.)*exp(-gamma*(t-1.)**2)
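
A hand-coded derivative like gprime is easy to get wrong, so as a quick sanity check (not part of the original notebook) we can compare it against a centered finite difference; the maximum discrepancy should be tiny:

```python
import numpy as np

gamma = 500.

def g(t):
    return np.cos(t) + np.exp(-gamma*(t-1.)**2)

def gprime(t):
    return -np.sin(t) - 2*gamma*(t-1.)*np.exp(-gamma*(t-1.)**2)

# centered difference approximation to g'(t), second-order accurate in h:
h = 1e-6
t = np.linspace(0, np.pi, 200)
gprime_fd = (g(t+h) - g(t-h)) / (2*h)
print(np.max(np.abs(gprime_fd - gprime(t))))  # small: O(h^2) times the scale of g'''
```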

Plot of $g(t)$:

In [4]:
tfine = linspace(0,pi,2000)
plot(tfine, g(tfine))
title('Function $g(t)$');

Implement a variable time step method

This is a simple implementation of the embedded Runge-Kutta method given by (5.42) in the textbook, with the time step $k_{n+1}$ chosen from the estimate of the local error in forward Euler obtained from the previous step,

$$ |{\cal L}^n| \approx \frac 1 2 k_n^2 |u''(t_n)| \approx |\hat U^{n+1} - U^{n+1}|. $$

For a given tolerance $\epsilon$ (called tol in the code) and final time $T$ (called tfinal), we want the error in the next step to satisfy $|{\cal L}^{n+1}| \approx \frac 1 2 k_{n+1}^2 |u''(t_n)| \leq k_{n+1}\epsilon/T$. Using the estimate $|u''(t_n)| \approx 2|{\cal L}^n|/k_n^2$ from the previous step, this gives

$$k_{n+1} = \frac{k_n^2 \epsilon}{T |{\cal L}^n|}.$$

Then we hope $|{\cal L}^{n+1}| \leq k_{n+1} \epsilon/T$ and that the local errors accumulate linearly so that the final error at time $T$ satisfies

$$ |E(T)| \leq \sum_{n=0}^{N-1} k_n \epsilon/T = \epsilon, \qquad \text{since}\quad \sum_{n=0}^{N-1} k_n = T. $$

In case $u''(t_n)$ is near zero this formula might give too large a time step, so we also impose the restriction $k_{n+1} \leq 2k_n$.
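
To make the update concrete, the following stand-alone sketch (with illustrative values, not from the textbook) performs one step of the embedded pair for the simple test problem $u' = \cos t$ and computes the proposed next step from the formulas above:

```python
import numpy as np

def f(u, t):
    return np.cos(t)          # simple test ODE: u'(t) = cos(t)

tol, tfinal = 1e-3, 3.        # epsilon and T in the formulas above
tn, kn = 0.5, 0.01            # current time and step (illustrative)
Un = np.sin(tn)               # exact value, so only the local error enters

F1 = f(Un, tn)
Y2 = Un + 0.5*kn*F1                   # half step to the midpoint
Unp = Un + kn*f(Y2, tn + 0.5*kn)      # 2nd-order (explicit midpoint) step
Unp_hat = Un + kn*F1                  # 1st-order (forward Euler) step

Ln = abs(Unp_hat - Unp)               # estimate of Euler's local error
knp = kn**2 * tol/(tfinal*Ln)         # next step from the k_{n+1} formula
knp = min(knp, 2*kn)                  # growth restriction

# Ln should be close to (1/2) k_n^2 |u''(t_n)| = (1/2) k_n^2 sin(t_n):
print(Ln, 0.5*kn**2*np.sin(tn), knp)
```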

In [5]:
def test_variable_dt_method(tol=1e-3, lam=0):
    u0 = 0.
    
    def f(u,t):
        return lam*(u - g(t)) + gprime(t)

    def utrue(t):
        return exp(lam*(t)) * (u0 - g(0)) + g(t)

    tfinal = 3
    tfine = linspace(0,tfinal,2000)
    ufine = utrue(tfine)  # reference exact solution
    
    kn = 1e-5 # initial time step
    maxsteps = 1000000 # to avoid infinite loop
    tn = 0.
    Un = u0
    
    # start accumulating lists (append in each time step)
    times = [tn]
    u = [Un]
    timesteps = [kn]

    for n in range(maxsteps):
        tnp = tn + kn
        F1 = f(Un,tn)
        Y2 = Un + 0.5*kn*F1
        Unp = Un + kn*f(Y2, tn + 0.5*kn)
        Unp_hat = Un + kn*F1
        
        # Estimate of 1-step error in Euler:
        Ln = abs(Unp_hat - Unp)
        # New time step based on wanting 1-step error <= tol/tfinal.
        # Guard against Ln == 0 (e.g. if u'' is essentially zero), which
        # would otherwise cause a division by zero:
        if Ln > 0:
            knp = kn**2 * tol/(tfinal*Ln)
        else:
            knp = 2*kn
        # But don't let it grow too fast:
        knp = min(knp, 2*kn)
        
        #Unp = Unp_hat # to use Euler step rather than 2nd order
        
        u.append(Unp)
        times.append(tnp)
        timesteps.append(knp)
        Un = Unp
        tn = tnp
        kn = knp
        
        # check if done:
        if tn > tfinal:
            break

    if (tn < tfinal):
        print('*** Warning, failed to reach tfinal')
        
    # convert lists into numpy arrays:
    times = array(times)
    u = array(u)
    timesteps = array(timesteps)
    
    print('Stopping at t=%.3f, after %i timesteps' % (tn, len(u)))
    print('Maximum k = %g,  lambda*(max k) = %g' \
          % (timesteps.max(), lam*timesteps.max()))
    
    figure(figsize=(11,4))
    subplot(121)
    plot(tfine,ufine,'k')
    plot(times,u)
    title('Solution')
    subplot(122)
    errs = abs(u - utrue(times))
    semilogy([0,tfinal], [tol,tol], 'g--', label='tolerance')
    semilogy(times,errs,'r',label='abs(error)')
    semilogy(times,timesteps,'k',label='timestep')
    grid(True)
    ylim(1e-12, 1)
    legend(loc='lower right')
    title('tol = %g, max error = %g' % (tol,errs.max()));

First test on a mild problem with $\lambda = -1$. Try changing tol and note that the solution is generally more accurate than requested.

In [6]:
test_variable_dt_method(tol=1e-2, lam=-1)
Stopping at t=3.004, after 11800 timesteps
Maximum k = 0.205249,  lambda*(max k) = -0.205249

Try a stiffer equation. Note that the time step is automatically adjusted so that $k\lambda$ stays close to the boundary of the stability region (which intersects the negative real axis at $k\lambda = -2$):
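
The claim about the stability region can be checked directly: applying one step of the second-order method above to $u' = \lambda u$ gives $U^{n+1} = R(k\lambda)U^n$ with stability function $R(z) = 1 + z + z^2/2$, and $|R(z)| \leq 1$ on the real axis exactly for $-2 \leq z \leq 0$. A quick numerical check (a sketch, not in the original notebook):

```python
import numpy as np

def R(z):
    # stability function of the 2nd-order (explicit midpoint) method:
    # one step applied to u' = lam*u with z = k*lam gives U_{n+1} = R(z)*U_n
    return 1 + z + z**2/2

z = np.linspace(-3, 0, 601)
stable = np.abs(R(z)) <= 1
print(z[stable].min())    # leftmost stable point on the real axis, close to -2
```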

In [7]:
test_variable_dt_method(tol=1e-1, lam=-100)
Stopping at t=3.010, after 2850 timesteps
Maximum k = 0.091318,  lambda*(max k) = -9.1318