AMath 586, Spring Quarter 2019 at the University of Washington. For other notebooks, see Index.html or the Index of all notebooks on Github.
This notebook gives a brief introduction to a class of ODE solvers called exponential integrators, focusing on the simplest Exponential Time Differencing (ETD) methods.
%matplotlib inline
from pylab import *
from scipy.linalg import expm, solve
The basic idea is to write the ODE $u'(t) = f(u(t))$ as $u'(t) = Au(t) + g(u(t))$, where the matrix $A$ captures much of the stiffness of the problem and the eigenvalues of the remaining Jacobian $g'(u)$ stay closer to the origin. Such a splitting exists naturally for some problems, or one can linearize about the solution in each time step.
Then we use the fact that over a single time step the true solution satisfies
$$ u(t_{n+1}) = \exp(Ak) u(t_n) + \int_{t_n}^{t_{n+1}} \exp(A(t_{n+1}-\tau)) g(u(\tau)) \, d\tau. $$If $g$ were a function of $t$ alone this would be Duhamel's principle. More generally it is sometimes called "variation of parameters," and it no longer gives an explicit solution since the unknown $u(\tau)$ appears in the integral. But we can approximate the integral numerically using $U^n$ and either past values $U^j$ or a multi-stage approach going forward, giving rise to many possible numerical methods.
The simplest arises from approximating $g(u(\tau)) \approx g(U^n)$ over the time interval, yielding
$$ U^{n+1} = \exp(Ak) U^n + (\exp(Ak) - I) A^{-1} g(U^n). $$This is exact if $g(u)$ is constant (in particular if $g \equiv 0$), while as $A \rightarrow 0$ it approaches the forward Euler method on $u' = g(u)$.
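A quick numerical check (a sketch, not part of the original notebook; the $2\times 2$ system is made up for illustration): when $g$ is a constant vector the formula above is exact, so one step of size $k$ must agree with two steps of size $k/2$ up to rounding error.

```python
from numpy import array, eye, allclose
from scipy.linalg import expm, solve

# a small test system u' = A u + b with constant b, so g(u) = b
A = array([[-5., 1.], [0., -2.]])
b = array([1., 3.])
u0 = array([2., -1.])

def etd1_step(u, k):
    # U^{n+1} = exp(Ak) U^n + (exp(Ak) - I) A^{-1} g(U^n), with g = b here
    eAk = expm(A*k)
    return eAk.dot(u) + (eAk - eye(2)).dot(solve(A, b))

k = 0.5
u_one = etd1_step(u0, k)                    # one step of size k
u_two = etd1_step(etd1_step(u0, k/2), k/2)  # two steps of size k/2

# both are exact for constant g, so they agree to rounding error
print(allclose(u_one, u_two))
```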
We now replace the test problem with $u' = cu + \lambda u$ where $c$ corresponds to $A$ and is handled exactly while $\lambda$ represents an eigenvalue of $g(u)$. Applying the method gives
\begin{align*} U^{n+1} &= e^{ck}U^n + \frac 1 c \left(e^{ck}-1\right)\lambda U^n\\ &= \left( e^y + \frac 1 y \left(e^{y}-1\right) z\right) U^n \end{align*}or $U^{n+1} = R(z,y) U^n$ where $y = ck$ and $z=\lambda k$. Since $c$ and $\lambda$ could both be complex, the stability region now lives in four-dimensional space.
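As a sketch (the values of $c$ and $\lambda$ are chosen arbitrarily for illustration), we can confirm on the scalar test problem that the method is first-order accurate: halving $k$ should roughly halve the error against the exact solution $e^{(c+\lambda)t}$.

```python
from numpy import exp

c, lam = -10., -1.
tfinal = 1.

def etd1_scalar(k, nsteps):
    # scalar ETD Euler: U^{n+1} = e^{ck} U^n + (e^{ck}-1)/c * lam * U^n
    U = 1.0  # u(0) = 1
    for n in range(nsteps):
        U = exp(c*k)*U + (exp(c*k) - 1)/c * lam*U
    return U

errs = []
for nsteps in (100, 200, 400):
    k = tfinal/nsteps
    errs.append(abs(etd1_scalar(k, nsteps) - exp((c + lam)*tfinal)))

# consecutive error ratios should approach 2 for a first-order method
print([errs[i]/errs[i+1] for i in range(2)])
```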
Below we show a typical stability region obtained by fixing $y$ at some negative value and then asking what values of $z$ are allowed, which is natural in the study of stiff problems.
def plotS(R, axisbox = [-10, 10, -10, 10], npts=500):
    """
    Compute |R(z)| over a fine grid on the region specified by axisbox
    and do a contour plot with contourf (filled contours)
    to show the region of absolute stability.
    """
    xa, xb, ya, yb = axisbox
    x = linspace(xa,xb,npts)
    y = linspace(ya,yb,npts)
    X,Y = meshgrid(x,y)
    Z = X + 1j*Y
    Rval = R(Z)
    Rabs = abs(Rval)
    # plot interior, exterior, as green and white:
    levels = [-1e9,1,1e9]
    CS1 = contourf(X, Y, Rabs, levels, colors = ('g', 'w'))
    # plot boundary as a black curve:
    CS2 = contour(X, Y, Rabs, [1,], colors = ('k',), linewidths = (2,))
    title('Region of absolute stability')
    grid(True)
    plot([xa,xb],[0,0],'k')  # x-axis
    plot([0,0],[ya,yb],'k')  # y-axis
    axis('scaled')  # scale x and y same so that circles are circular
    axis(axisbox)   # set limits
# fix y = ck at a large negative value and plot the stability region
# in the z = lambda*k plane:
ck = -100.
R = lambda z: exp(ck) + (exp(ck)-1)/ck * z
rr = -ck + 2
plotS(R, axisbox = [-rr,rr, -rr,rr])
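To see what the plot shows: for large negative $y$ we have $e^y \approx 0$, so $R(z,y) \approx -z/y$ and the stability region is approximately the disk $|z| \leq |y|$. A quick spot-check for $y = -100$ (a sketch, not part of the original notebook):

```python
from numpy import exp

def R(z, y):
    # amplification factor R(z,y) = e^y + (e^y - 1) z / y
    return exp(y) + (exp(y) - 1)/y * z

y = -100.
print(abs(R(-99., y)))   # just inside the disk: |R| < 1
print(abs(R(-101., y)))  # just outside the disk: |R| > 1
print(abs(R(99j, y)))    # on the imaginary axis, inside: |R| < 1
```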
Consider the reactions
\begin{align*} &A \stackrel{K_{1}}{\rightarrow} B\\ &B+C \stackrel{K_{2}}{\rightarrow} D\\ &D \stackrel{K_{3}}{\rightarrow} A\\ \end{align*}which gives the nonlinear system
\begin{align*} u_0'(t) &= -K_1u_0(t) + K_3 u_3(t)\\ u_1'(t) &= K_1u_0(t) - K_2 u_1(t)u_2(t)\\ u_2'(t) &= - K_2 u_1(t)u_2(t)\\ u_3'(t) &= K_2 u_1(t)u_2(t) - K_3 u_3(t)\\ \end{align*}If $K_2$ is small relative to $K_1$ and/or $K_3$, then it may be possible to capture the stiff part with a linear system of equations.
K1 = 100.
K2 = 0.5
K3 = 5.
def f(u):
    f0 = -K1*u[0] + K3*u[3]
    f1 = K1*u[0] - K2*u[1]*u[2]
    f2 = -K2*u[1]*u[2]
    f3 = K2*u[1]*u[2] - K3*u[3]
    return array((f0,f1,f2,f3))
First we apply Forward Euler to the full system to see when stability breaks down...
t0 = 0.
tfinal = 2.
eta = array((1,2,3,4))
def euler(nsteps):
    t = linspace(t0, tfinal, nsteps+1)
    dt = t[1] - t[0]
    U = empty((4,nsteps+1))  # array for computed solution
    U[:,0] = eta
    for n in range(nsteps):
        U[:,n+1] = U[:,n] + dt * f(U[:,n])
    figure(figsize=(12,5))
    plot(t, U[0,:],'k', label='u0')
    plot(t, U[1,:],'b', label='u1')
    plot(t, U[2,:],'r', label='u2')
    plot(t, U[3,:],'c', label='u3')
    xlim(t0,tfinal)
    legend()  # uses the labels from the plot commands
    title('%i steps, dt = %7.4f' % (nsteps, dt))
With 300 timesteps we get a smooth solution:
euler(300)
Since $K_1 = 100$ is the largest rate, we expect the eigenvalues of the Jacobian to be on the order of $-100$. Forward Euler is only stable if $k\lambda \geq -2$, so we expect to need a time step around $k \approx 2/100$ for stability. To get up to time 2 we thus expect to need at least 100 time steps. Indeed this is at the limit of stability; with fewer steps the solution grows exponentially:
euler(99)
Note the oscillation since $1 + k\lambda \approx -1$. Even with 150 time steps $1 + k\lambda < 0$ and there's an initial oscillation:
euler(150)
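The eigenvalue estimate can be checked directly. Below is a sketch (not part of the original notebook) using an analytic Jacobian derived by hand from $f$, with the problem data re-declared so the cell stands alone:

```python
from numpy import array
from numpy.linalg import eigvals

K1, K2, K3 = 100., 0.5, 5.     # rate constants from above
eta = array((1., 2., 3., 4.))  # initial data from above

def jacobian(u):
    # analytic Jacobian of f, derived by hand from the rate equations
    return array([
        [-K1,       0.,       0.,  K3],
        [ K1, -K2*u[2], -K2*u[1],  0.],
        [ 0., -K2*u[2], -K2*u[1],  0.],
        [ 0.,  K2*u[2],  K2*u[1], -K3]])

lam = eigvals(jacobian(eta))
print(sorted(lam.real))  # the fastest mode is close to -K1 = -100
```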
We define the $A$ matrix by pulling out the constant part of the Jacobian, and then adding the identity matrix to make it nonsingular, since we need to use $A^{-1}$ in this integrator. (Maybe there's a better splitting?)
We then define $g(u) = f(u) - Au$ to be what's left over, which is still a nonlinear function.
A = array([[-K1,0,0,K3], [K1,0,0,0], [0,0,0,0], [0,0,0,-K3]]) + eye(4)
def g(u):
    return f(u) - A.dot(u)
Note that A.dot(u) or equivalently dot(A,u) gives the matrix-vector product. This is also used in the loop below.
We also use expm and solve from scipy.linalg for the matrix exponential and to solve a linear system: solve(A,g) gives $A^{-1}g$.
Also note that expAk_I = expm(A*dt) - eye(4) is computed before the time stepping loop, so the matrix exponential is only computed once. The updating formula has been rewritten in the algebraically equivalent form
$$ U^{n+1} = U^n + \left(e^{Ak} - I\right)\left(U^n + A^{-1} g(U^n)\right) $$so that there is only one linear system solve and one matrix-vector multiply each time through the loop.
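As a sanity check (a sketch, with the problem data re-declared so it stands alone), the rewritten update agrees with the original formula $\exp(Ak)U^n + (\exp(Ak)-I)A^{-1}g(U^n)$ to rounding error:

```python
from numpy import array, eye, dot, allclose
from scipy.linalg import expm, solve

K1, K2, K3 = 100., 0.5, 5.
A = array([[-K1,0,0,K3], [K1,0,0,0], [0,0,0,0], [0,0,0,-K3]]) + eye(4)

def f(u):
    return array((-K1*u[0] + K3*u[3],
                  K1*u[0] - K2*u[1]*u[2],
                  -K2*u[1]*u[2],
                  K2*u[1]*u[2] - K3*u[3]))

def g(u):
    return f(u) - A.dot(u)

U = array((1., 2., 3., 4.))
dt = 0.01
expAk = expm(A*dt)
Ainv_g = solve(A, g(U))

U_orig = expAk.dot(U) + (expAk - eye(4)).dot(Ainv_g)  # original formula
U_new  = U + dot(expAk - eye(4), U + Ainv_g)          # rewritten form
print(allclose(U_orig, U_new))
```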
t0 = 0.
tfinal = 2.
eta = array((1,2,3,4))
def exp_euler(nsteps):
    t = linspace(t0, tfinal, nsteps+1)
    dt = t[1] - t[0]
    U = empty((4,nsteps+1))  # array for computed solution
    U[:,0] = eta
    expAk_I = expm(A*dt) - eye(4)
    for n in range(nsteps):
        Ainv_g = solve(A,g(U[:,n]))
        U[:,n+1] = U[:,n] + dot(expAk_I, U[:,n] + Ainv_g)
    figure(figsize=(12,5))
    plot(t, U[0,:],'k', label='u0')
    plot(t, U[1,:],'b', label='u1')
    plot(t, U[2,:],'r', label='u2')
    plot(t, U[3,:],'c', label='u3')
    xlim(t0,tfinal)
    legend()  # uses the labels from the plot commands
    title('%i steps, dt = %7.4f' % (nsteps, dt))
Now we get nicer results with 150 steps:
exp_euler(150)
And the method stays stable even with only 10 steps (though not very accurate at this point):
exp_euler(10)
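To quantify the accuracy, here is a sketch (not part of the original notebook) comparing against a tight-tolerance reference computed with scipy.integrate.solve_ivp; the problem data is re-declared so the cell stands alone:

```python
from numpy import array, eye, dot
from scipy.linalg import expm, solve
from scipy.integrate import solve_ivp

K1, K2, K3 = 100., 0.5, 5.
eta = array((1., 2., 3., 4.))
t0, tfinal = 0., 2.
A = array([[-K1,0,0,K3], [K1,0,0,0], [0,0,0,0], [0,0,0,-K3]]) + eye(4)

def f(u):
    return array((-K1*u[0] + K3*u[3],
                  K1*u[0] - K2*u[1]*u[2],
                  -K2*u[1]*u[2],
                  K2*u[1]*u[2] - K3*u[3]))

def g(u):
    return f(u) - A.dot(u)

def exp_euler_final(nsteps):
    # same update as exp_euler above, returning only U at tfinal
    dt = (tfinal - t0)/nsteps
    expAk_I = expm(A*dt) - eye(4)
    U = eta.copy()
    for n in range(nsteps):
        U = U + dot(expAk_I, U + solve(A, g(U)))
    return U

# tight-tolerance reference solution at tfinal
ref = solve_ivp(lambda t, u: f(u), (t0, tfinal), eta,
                rtol=1e-10, atol=1e-12, method='LSODA').y[:, -1]

errs = [abs(exp_euler_final(n) - ref).max() for n in (100, 200, 400)]
print(errs)  # first-order: errors shrink roughly in proportion to dt
```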