Preface
This is a tutorial made solely for educational purposes; it was designed for students taking Applied Math 0330. It is primarily for students who have little or no experience with Mathematica and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.
Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in regular font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts for learning how to use Mathematica, and you may distribute and refer to this tutorial as long as it is credited appropriately.
Multistep Methods
The methods of Euler, Heun, Runge--Kutta, and Taylor are called single-step methods because they use only the information from one previous point to compute the successive point; that is, only the initial point (x0, y0) is used to compute (x1, y1) and, in general, yk is needed to compute yk+1. After several points have been found, it is feasible to use several prior points in the calculation. Methods that use information at more than the last mesh point are referred to as multistep methods. In this section, we discuss two types of multistep methods: Adams methods and backward differentiation methods. For simplicity, we assume throughout our exposition that the step length h is constant.

Multistep methods are not self-starting, because several initial points must first be generated, usually by a single-step method. A desirable feature of a multistep method is that the local truncation error can be determined and a correction term can be included, which improves the accuracy of the answer at each step. It is also possible to determine whether the step size is small enough to obtain an accurate value for yn+1, yet large enough to avoid unnecessary and time-consuming calculations. Using the combination of a predictor and corrector requires only two slope function evaluations per step, compared with the six evaluations required by RKF45. The predictor-corrector forms are among the most efficient known integration methods in terms of speed and accuracy. As a class, the multistep methods are among the best, but the choice of the best individual predictor-corrector pair varies with the application.
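As an illustration of the predictor-corrector idea, the following sketch pairs the second order Adams--Bashforth predictor with the trapezoidal (Adams--Moulton) corrector, so each step costs exactly two new slope evaluations. The model problem y' = x + y, y(0) = 1 and all symbol names here are our own assumptions, not part of the course text.

```mathematica
(* Predictor-corrector sketch: AB2 predictor, one trapezoidal correction.
   Assumed model problem: y' = x + y, y(0) = 1. *)
f[x_, y_] := x + y;
h = 0.1; x0 = 0.; y0 = 1.; steps = 10;

(* multistep methods are not self-starting: take one Heun step for y1 *)
k1 = f[x0, y0]; k2 = f[x0 + h, y0 + h k1];
x1 = x0 + h; y1 = y0 + h (k1 + k2)/2;

xs = {x0, x1}; ys = {y0, y1}; fs = {k1, f[x1, y1]};
Do[
  xn = Last[xs]; yn = Last[ys];
  pred = yn + h (3 fs[[-1]] - fs[[-2]])/2;        (* AB2 predictor  *)
  corr = yn + h (fs[[-1]] + f[xn + h, pred])/2;   (* AM2 corrector  *)
  AppendTo[xs, xn + h]; AppendTo[ys, corr];
  AppendTo[fs, f[xn + h, corr]],                  (* reused next step *)
  {steps - 1}];
{Last[xs], Last[ys]}
```

Only the evaluations at the predicted and corrected points are new in each pass of the loop; the slope at the previous mesh point is recalled from the list fs.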
I. Adams Method
To integrate the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 , \) on the mesh interval \( \left[ x_n , x_{n+1} \right] , \) we rewrite the problem in integral form:
\[
y \left( x_{n+1} \right) = y \left( x_n \right) + \int_{x_n}^{x_{n+1}} f \left( t , y(t) \right) \,{\text d}t .
\]
Since the integrand is not known on \( \left[ x_n , x_{n+1} \right] , \) it is replaced by an interpolating polynomial \( P_k (t) \) fitted to previously computed slope values.
Francis Bashforth (1819--1912), English mathematician and Anglican priest, was a classmate of J.C. Adams at Cambridge. He was particularly interested in ballistics and invented the Bashforth chronograph for measuring the velocity of artillery projectiles.
The polynomial Pk(t) of degree k contains k+1 coefficients that are determined from previously calculated data points. For example, suppose that we wish to use a first degree polynomial \( P_1 (t) = at+b . \) Then we need only the two data points \( (x_n , y_n ) \quad\mbox{and} \quad (x_{n-1} , y_{n-1}) . \) For P1 to be an approximation to the slope function, we require that
\[
P_1 (x_n ) = f \left( x_n , y_n \right) \qquad\mbox{and}\qquad P_1 (x_{n-1} ) = f \left( x_{n-1} , y_{n-1} \right) .
\]
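Carrying this computation through (a worked sketch, writing \( f_n = f(x_n , y_n ) \)): solving \( P_1 (x_n ) = f_n \) and \( P_1 (x_{n-1}) = f_{n-1} \) for the coefficients gives
\[
a = \frac{f_n - f_{n-1}}{h} , \qquad b = f_n - a\, x_n ,
\]
and integrating \( P_1 \) over the mesh interval produces the second order Adams--Bashforth formula
\[
y_{n+1} = y_n + \int_{x_n}^{x_{n+1}} P_1 (t)\, {\text d}t = y_n + \frac{h}{2} \left( 3 f_n - f_{n-1} \right) .
\]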
Actually, the predictor formulas are based on Newton's backward difference interpolation formula:
\[
P_k \left( x_n + s\,h \right) = f_n + s\,\nabla f_n + \frac{s(s+1)}{2!}\,\nabla^2 f_n + \cdots + \frac{s(s+1) \cdots (s+k-1)}{k!}\,\nabla^k f_n ,
\]
where \( \nabla f_n = f_n - f_{n-1} \) denotes the backward difference and \( f_n = f \left( x_n , y_n \right) . \)
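Mathematica can recover the Adams--Bashforth coefficients symbolically. The short sketch below (our own illustration, with assumed symbol names) interpolates four previous slope values and integrates the interpolant over one step ahead, which reproduces the fourth order Adams--Bashforth increment.

```mathematica
(* Fit a cubic through the slope data (x - k h, f[n-k]), k = 0..3,
   then integrate it over the next step [x, x + h].  The result is
   equivalent to h (55 f[n] - 59 f[n-1] + 37 f[n-2] - 9 f[n-3])/24,
   the increment of the fourth order Adams-Bashforth predictor. *)
data = Table[{x - k h, Subscript[f, n - k]}, {k, 0, 3}];
p[t_] = InterpolatingPolynomial[data, t];
Expand[Simplify[Integrate[p[t], {t, x, x + h}]]]
```

Changing the range of k gives the weights of the lower or higher order predictors in the same way.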
II. Backward Differentiation Formula
Another type of multistep method uses a polynomial Pk(t) of degree k to approximate the actual solution \( y = \phi (x) \) of the initial value problem under consideration, rather than its derivative \( y' = \phi' (x) \) as in the Adams methods. We then differentiate Pk(t) and set \( P'_k (x_{n+1}) \) equal to \( f \left( x_{n+1} , y_{n+1} \right) \) to obtain an implicit formula for yn+1. These are called backward differentiation formulas. These methods became widely used in the 1970s because of the work of C. William Gear (born 1935 in London, UK) on so-called stiff differential equations, whose solutions are very difficult to approximate by the methods discussed so far. The British-American mathematician C.W. Gear is well known for his contributions to numerical analysis and computer science.
The simplest case uses a first degree polynomial \( P_1 (t) = at + b . \) The coefficients are chosen to match the computed value \( y_n \) and the unknown value \( y_{n+1} . \) Hence, a and b must satisfy
\[
a\, x_n + b = y_n , \qquad a\, x_{n+1} + b = y_{n+1} .
\]
Since \( P'_1 (x_{n+1}) = a = \left( y_{n+1} - y_n \right) / h , \) setting this derivative equal to \( f \left( x_{n+1} , y_{n+1} \right) \) yields the first order backward differentiation formula, which is the backward Euler method:
\[
y_{n+1} = y_n + h\, f \left( x_{n+1} , y_{n+1} \right) .
\]
By using higher degree polynomials and correspondingly more data points, we can obtain backward differentiation formulas of any order. The second order backward differentiation formula is
\[
y_{n+1} = \frac{1}{3} \left( 4 y_n - y_{n-1} + 2 h\, f \left( x_{n+1} , y_{n+1} \right) \right) .
\]
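A minimal backward Euler (first order BDF) sketch follows, assuming the model stiff problem y' = -50 (y - cos x), y(0) = 0; the implicit equation at each step is solved with FindRoot, seeded with the previous value. The problem and all names are our illustrative assumptions.

```mathematica
(* Backward Euler: solve y[n+1] == y[n] + h f(x[n+1], y[n+1]) each step.
   Assumed mildly stiff model problem: y' = -50 (y - Cos[x]), y(0) = 0. *)
f[x_, y_] := -50 (y - Cos[x]);
h = 0.05; x0 = 0.; y0 = 0.; steps = 20;
sol = NestList[
   Module[{xn = #[[1]], yn = #[[2]], ynew},
     ynew = y /. FindRoot[y == yn + h f[xn + h, y], {y, yn}];
     {xn + h, ynew}] &,
   {x0, y0}, steps];
Last[sol]
```

Because the implicit equation is solved at every step, the method remains stable for this stiff problem at step sizes where the explicit Euler method would blow up.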