III. Numerical Methods for Ordinary Differential Equations


Ordinary differential equations are at the heart of our perception of the physical universe. For this reason, the numerical solution of such equations is one of the oldest and most successful areas of numerical computation.

It would be very nice if discrete models provided calculated solutions to differential equations (ordinary and partial) exactly, but of course they do not. In fact, in general they could not, even in principle, since the solution depends on an infinite amount of initial data. Instead, the best we can hope for is that the errors introduced by discretization will be small when those initial data are reasonably well-behaved.

In this chapter, we discuss some simple numerical methods applicable to first-order ordinary differential equations in normal form, subject to a prescribed initial condition:

\[ y' = f(x,y), \qquad y(x_0 ) = y_0 . \]
We always assume that the slope function f(x,y) is smooth enough that the given initial value problem has a unique solution, and that it is differentiable, so that all formulas make sense. Since this is an introductory text, we do not discuss discretization errors in great detail---that is the subject of another course. A user should be aware that the magnitude of global errors may depend on the behavior of local errors in ways that ordinary analysis of discretization and rounding errors cannot predict. Most of this chapter is devoted to finite difference methods that have proved their robustness and usefulness, and that can show the user how numerical methods work. The codes presented also have limited applications because we mostly deal with uniform meshes/grids, which are not efficient in practice. We do not discuss the adaptive Runge--Kutta--Fehlberg algorithms of higher order that are now used in most applications, nor spline methods. Instead, we present some series approximations, including the Adomian Decomposition Method (ADM), as an introduction to practical applications of recurrences.

The finite difference approximations have a more complicated "physics" than the equations they are designed to simulate. In general, finite recurrences have more properties than their continuous counterparts. However, this irony is no paradox; the finite differences are used not because the numbers they generate have simple properties, but because those numbers are simple to compute.

Numerical methods for the solution of ordinary differential equations may be put into two categories---numerical integration (e.g., predictor-corrector) methods and Runge--Kutta methods. The advantages of the latter are that they are self-starting and easy to program for digital computers, but neither of these reasons is very compelling when library subroutines can be written to handle systems of ordinary differential equations. Thus, the greater accuracy and the error-estimating ability of predictor-corrector methods make them desirable for systems of any complexity. However, when predictor-corrector methods are used, Runge--Kutta methods still find application in starting the computation and in changing the interval of integration.
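To make the predictor-corrector idea concrete, here is a minimal sketch of Heun's method, the simplest such pairing: an Euler step predicts a trial value, and the trapezoidal rule corrects it. This is an illustration written for this discussion, not a library routine.

```python
import numpy as np

def heun(f, x0, y0, h, n_steps):
    """Heun's predictor-corrector scheme on a uniform mesh x_n = x0 + n*h."""
    x = x0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for n in range(n_steps):
        slope = f(x[n], y[n])
        predictor = y[n] + h * slope                                   # Euler predictor
        y[n + 1] = y[n] + 0.5 * h * (slope + f(x[n + 1], predictor))   # trapezoidal corrector
    return x, y

# Test problem: y' = y, y(0) = 1, whose exact solution at x = 1 is e.
x, y = heun(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(y[-1])  # close to e = 2.71828...
```

Because the corrector reuses the predicted value, the scheme is second-order accurate while remaining explicit and self-starting.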

In this chapter, we discuss only discretization methods, which have the common feature that they do not attempt to approximate the actual solution y = φ(x) over a continuous range of the independent variable. Instead, approximate values are sought only on a set of discrete points \( x_0 , x_1 , x_2 , \ldots . \) This set is usually called a grid or mesh. For simplicity, we assume that these mesh points are equidistant: \( x_n = a+ n\,h , \quad n=0,1,2,\ldots . \) The quantity h is then called the stepsize, stepwidth, or simply the step of the method. Generally speaking, a discrete variable method for solving a differential equation is an algorithm that, for each lattice point \( x_n , \) furnishes a number \( y_n \) which is to be regarded as an approximation to the true value \( \varphi (x_n ) \) of the actual solution at that point.
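The simplest discrete variable method is Euler's method, which advances from one mesh point to the next along the tangent line: \( y_{n+1} = y_n + h\, f(x_n , y_n ) . \) A minimal sketch on the uniform mesh just described:

```python
import numpy as np

def euler(f, x0, y0, h, n_steps):
    """Forward Euler on the uniform mesh x_n = x0 + n*h.

    Returns arrays (x, y) with y[n] approximating phi(x_n).
    """
    x = x0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for n in range(n_steps):
        y[n + 1] = y[n] + h * f(x[n], y[n])   # follow the tangent line one step
    return x, y

# Test problem: y' = y, y(0) = 1, whose exact solution is e^x.
x, y = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(y[-1])  # underestimates e = 2.71828...
```

Halving h roughly halves the error at a fixed point, reflecting the method's first-order accuracy.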

Python's SciPy library has a dedicated function, scipy.integrate.solve_ivp, to solve the initial value problem:

\[ \frac{{\text d} y}{{\text d}t} = f (t, y) , \qquad y\left( t_0 \right) = y_0 . \]
Basic exponential decay showing automatically chosen time points.
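A runnable sketch of this example, following the pattern of the SciPy documentation, with \( f(t,y) = -\tfrac12 y \) applied to three initial values at once:

```python
import numpy as np
from scipy.integrate import solve_ivp

def exponential_decay(t, y):
    # dy/dt = -0.5 y, applied componentwise to all three initial values
    return -0.5 * y

sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8])
print(sol.t)          # time points chosen automatically by the solver
print(sol.y.shape)    # (3, len(sol.t)): one row per solution component
```

With no extra arguments, the solver (RK45 by default) places the output points wherever its adaptive step-size control happens to land.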
Specifying points where the solution is desired.
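When the solution is wanted at particular points, the t_eval argument restricts the reported output to exactly those points (same decay problem as above):

```python
import numpy as np
from scipy.integrate import solve_ivp

def exponential_decay(t, y):
    return -0.5 * y

# t_eval restricts the reported output to the requested time points
sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8],
                t_eval=[0, 1, 2, 4, 10])
print(sol.t)   # exactly the requested points: 0, 1, 2, 4, 10
```

Internally the solver still chooses its own steps; values at the requested points are obtained by dense-output interpolation, so t_eval does not degrade accuracy.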
Cannon fired upward, with a terminal event upon impact. The terminal and direction fields of an event are set by attaching attributes to the event function (monkey patching). Here y[0] is position and y[1] is velocity. The projectile starts at position 0 with velocity +10. Note that the integration never reaches t=100 because the event is terminal.
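A sketch of this example; the constant downward acceleration of 0.5 is an illustrative choice (with it, the projectile lands at t = 40):

```python
import numpy as np
from scipy.integrate import solve_ivp

def upward_cannon(t, y):
    # y[0] = position, y[1] = velocity; constant downward acceleration
    return [y[1], -0.5]

def hit_ground(t, y):
    return y[0]              # event fires when position crosses zero

hit_ground.terminal = True   # stop the integration at the event
hit_ground.direction = -1    # trigger only on downward crossings

sol = solve_ivp(upward_cannon, [0, 100], [0, 10], events=hit_ground)
print(sol.t_events[0])       # impact time, located by root finding
print(sol.t[-1])             # integration stopped well before t = 100
```

Without direction = -1, the event would also fire at launch, where the position passes through zero going upward.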
  1. J. R. Dormand, P. J. Prince, "A family of embedded Runge--Kutta formulae," Journal of Computational and Applied Mathematics, Vol. 6, No. 1, pp. 19--26, 1980.
  2. L. F. Shampine, "Some Practical Runge--Kutta Formulas," Mathematics of Computation, Vol. 46, No. 173, pp. 135--150, 1986.
  3. P. Bogacki, L. F. Shampine, "A 3(2) Pair of Runge--Kutta Formulas," Applied Mathematics Letters, Vol. 2, No. 4, pp. 321--325, 1989. https://doi.org/10.1016/0893-9659(89)90079-7
  4. E. Hairer, G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer. doi: 10.1007/978-3-662-09947-6
  5. Backward differentiation formula on Wikipedia.
  6. L. F. Shampine, M. W. Reichelt, "The MATLAB ODE Suite," SIAM Journal on Scientific Computing, Vol. 18, No. 1, pp. 1--22, 1997.
  7. A. C. Hindmarsh, "ODEPACK, A Systematized Collection of ODE Solvers," IMACS Transactions on Scientific Computation, Vol. 1, pp. 55--64, 1983.
  8. L. Petzold, "Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations," SIAM Journal on Scientific and Statistical Computing, Vol. 4, No. 1, pp. 136--148, 1983.
  9. Stiff equation on Wikipedia.