Ordinary differential equations are at the heart of our perception of the physical universe. For this reason, their numerical solution is one of the oldest and most successful areas of numerical computation.
It would be very nice if discrete models provided calculated solutions to differential (ordinary and partial) equations exactly, but of course they do not. In fact, in general they could not, even in principle, since the solution depends on an infinite amount of initial data. Instead, the best we can hope for is that the errors introduced by discretization will be small when those initial data are reasonably well-behaved.
In this chapter, we discuss some simple numerical methods applicable to a first order ordinary differential equation in normal form subject to a prescribed initial condition:
\[
y' = f(x,y) , \qquad y(x_0 ) = y_0 .
\]
The finite difference approximations have a more complicated "physics" than the equations they are designed to simulate. In general, finite recurrences have more properties than their continuous counterparts. However, this irony is no paradox; the finite differences are used not because the numbers they generate have simple properties, but because those numbers are simple to compute.
Numerical methods for the solution of ordinary differential equations may be put in two categories---numerical integration (e.g., predictor-corrector) methods and Runge--Kutta methods. The advantages of the latter are that they are self-starting and easy to program for digital computers but neither of these reasons is very compelling when library subroutines can be written to handle systems of ordinary differential equations. Thus, the greater accuracy and the error-estimating ability of predictor-corrector methods make them desirable for systems of any complexity. However, when predictor-corrector methods are used, Runge--Kutta methods still find application in starting the computation and in changing the interval of integration.
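To make this division concrete, here is a minimal sketch (not taken from the text) of a second-order Adams--Bashforth predictor paired with an Adams--Moulton (trapezoidal) corrector, with a single Runge--Kutta step used to start the computation, exactly as described above. The function names rk4_step and abm2_solve and the choice of the second-order pair are illustrative assumptions.

```python
import numpy as np

def rk4_step(f, x, y, h):
    # One classical fourth-order Runge--Kutta step (self-starting).
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def abm2_solve(f, x0, y0, h, n):
    # Second-order Adams--Bashforth predictor / Adams--Moulton corrector.
    # A single Runge--Kutta step supplies the extra starting value y_1.
    xs = x0 + h*np.arange(n + 1)
    ys = np.empty(n + 1)
    ys[0] = y0
    ys[1] = rk4_step(f, xs[0], ys[0], h)
    for k in range(1, n):
        # Predictor: y* = y_k + h/2 * (3 f_k - f_{k-1})
        yp = ys[k] + h*(3*f(xs[k], ys[k]) - f(xs[k-1], ys[k-1]))/2
        # Corrector: y_{k+1} = y_k + h/2 * (f(x_{k+1}, y*) + f_k);
        # the gap between yp and ys[k+1] gives a cheap local error estimate.
        ys[k+1] = ys[k] + h*(f(xs[k+1], yp) + f(xs[k], ys[k]))/2
    return xs, ys

# Example: y' = -y, y(0) = 1, integrated over [0, 1] with step h = 0.1.
xs, ys = abm2_solve(lambda x, y: -y, 0.0, 1.0, 0.1, 10)
print(ys[-1], np.exp(-1.0))   # approximate vs. exact value at x = 1
```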
In this chapter, we discuss only discretization methods that have the common feature that they do not attempt to approximate the actual solution y = φ(x) over a continuous range of the independent variable. Instead, approximate values are sought only on a set of discrete points \( x_0 , x_1 , x_2 , \ldots . \) This set is usually called a grid or mesh. For simplicity, we assume that these mesh points are equidistant: \( x_n = a+ n\,h , \quad n=0,1,2,\ldots . \) The quantity h is then called the stepsize, stepwidth, or simply the step of the method. Generally speaking, a discrete variable method for solving a differential equation consists of an algorithm which, corresponding to each lattice point \( x_n , \) furnishes a number \( y_n \) that is to be regarded as an approximation to the true value \( \varphi (x_n ) \) of the actual solution at the point \( x_n . \)
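As a concrete illustration of such a discrete variable method, the following sketch applies the simplest rule, Euler's method \( y_{n+1} = y_n + h\,f(x_n , y_n ) , \) on the equidistant mesh \( x_n = a + n\,h . \) The function name euler and the test problem are illustrative choices, not taken from the text.

```python
import numpy as np

def euler(f, a, b, y0, h):
    """Euler's rule on the equidistant mesh x_n = a + n*h over [a, b]."""
    xs = np.arange(a, b + h/2, h)        # mesh points x_0, x_1, x_2, ...
    ys = np.empty_like(xs)
    ys[0] = y0                           # prescribed initial condition
    for n in range(len(xs) - 1):
        # y_{n+1} is regarded as an approximation to phi(x_{n+1})
        ys[n+1] = ys[n] + h*f(xs[n], ys[n])
    return xs, ys

# Example: y' = y, y(0) = 1, whose exact solution is phi(x) = exp(x).
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 1.0, 0.1)
print(ys[-1], np.exp(1.0))               # approximate vs. exact value at x = 1
```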
Python's SciPy library provides a dedicated routine, scipy.integrate.solve_ivp, for solving the initial value problem.
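A minimal usage sketch follows; the right-hand side, the integration interval, and the output mesh are illustrative choices rather than prescriptions from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative initial value problem: y' = -2*x*y, y(0) = 1,
# whose exact solution is phi(x) = exp(-x**2).
def f(x, y):
    return -2*x*y

sol = solve_ivp(f, t_span=(0.0, 2.0), y0=[1.0],       # interval and initial condition
                t_eval=np.linspace(0.0, 2.0, 21))     # mesh points where output is requested
print(sol.t)       # the mesh x_0, x_1, ..., x_20
print(sol.y[0])    # the computed approximations y_n
```

By default solve_ivp uses an explicit Runge--Kutta pair (method='RK45') with automatic step-size control, so the mesh passed via t_eval only selects where the computed solution is reported, not where the internal steps are taken.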