Runge--Kutta Methods


This tutorial serves as an introduction to the computer algebra system Maple, created by Maplesoft©. It is made solely for educational purposes. This tutorial assumes that you have a working knowledge of basic Maple functionality. It is recommended to switch to the Maple Classic interface so that all code has the same appearance regardless of the version used. To achieve this, follow these steps:

Usually, the same algorithm can be implemented in many different ways to accomplish the same task. As a result, this tutorial may introduce several different commands (or sets of commands) that do the same thing. It is always good to be aware of these alternate pathways; however, you are welcome to pick whichever approach you like best.

Finally, the commands in this tutorial are all written in red text (as Maple input), while Maple output appears in blue, in the 2D output style. You can copy and paste all commands into Maple, change the parameters, and run them. You, as the user, are free to use the mw files and scripts for your own needs, and you have the right to distribute this tutorial and refer to it, as long as the tutorial is credited appropriately. The tutorial accompanies the textbook Applied Differential Equations. The Primary Course by Vladimir Dobrushkin, CRC Press, 2015; http://www.crcpress.com/product/isbn/9781439851043.


Runge--Kutta Methods

Both previously discussed rules (the improved Euler rule and the modified Euler rule) are particular cases of a family of numerical methods known as Runge--Kutta methods. The Runge--Kutta methods are a family of implicit and explicit iterative methods used to obtain approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl David Tolmé Runge (1856--1927) and Martin Wilhelm Kutta (1867--1944).

Besides being a co-developer of the Runge--Kutta method, Kutta is also remembered for the Zhukovsky--Kutta aerofoil, the Kutta--Zhukovsky theorem, and the Kutta condition in aerodynamics. Runge's son Wilhelm was an early developer of radar, and his daughter Nerina (Nina) married the German-American mathematician Richard Courant (1888--1972), who established one of America's most prestigious institutes of applied mathematics and was a co-developer of the finite element method.

The idea of Runge--Kutta methods is to take successive (weighted) Euler steps to approximate a Taylor series. In this way function evaluations (and not derivatives) are used. For example, consider the one-step formulation of the midpoint method.

\[ \begin{eqnarray*} k_1 &=& f(x_n , y_n ) , \\ k_2 &=& f \left( x_n + \frac{h}{2} , y_n + \frac{1}{2}\,hk_1 \right) , \\ y_{n+1} &=& y_n + h\,k_2 . \end{eqnarray*} \]
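As a quick illustration, the following is a minimal Maple sketch of this one-step midpoint rule. It is only a sketch: the procedure name midpoint, the test problem \( y' = y , \ y(0) = 1 , \) and the step size h = 0.1 are assumptions made for this example and are not prescribed by the text.

midpoint := proc(f, x0, y0, h, N)
  local x, y, k1, k2, n, pts;
  x := x0;  y := y0;  pts := [[x, y]];
  for n from 1 to N do
    k1 := f(x, y);                   # slope at the current point
    k2 := f(x + h/2, y + h*k1/2);    # slope at the midpoint
    y := y + h*k2;                   # advance with the midpoint slope
    x := x + h;
    pts := [op(pts), [x, y]];
  end do;
  return pts;
end proc:

midpoint((x, y) -> y, 0, 1.0, 0.1, 10);    # assumed test problem y' = y, y(0) = 1 on [0, 1]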

Extending the above approach, repeated function evaluations can be used to obtain higher-order methods. Let us denote the Runge--Kutta approximation to the solution of the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 \) at the mesh point \( x_{n+1} = x_n + h \) by \( y_{n+1} . \) Then

\[ y_{n+1} = y_n + h \left( b_1 k_1 + b_2 k_2 + \cdots + b_m k_m \right) , \]
where
\[ \begin{split} k_1 &= f \left( x_n , y_n \right) , \\ k_2 &= f\left( x_n + c_2 h, y_n + h\,a_{21} k_1 \right) , \\ k_{3} &= f\left( x_n + c_3 h , y_n + h\,a_{31} k_1 + h\,a_{32} k_2 \right) , \\ & \vdots \\ k_m &= f\left( x_n + c_m h , y_n + h\,a_{m1} k_1 + h\,a_{m2} k_2 + \cdots + h\,a_{m,m-1} k_{m-1} \right) . \end{split} \]
To specify a particular method, one needs to provide the integer m (the number of stages), the coefficients \( a_{i,j} , \) the weights \( b_i , \) and the nodes \( c_i . \) The lower triangular matrix \( {\bf A} = [a_{i,j}] \) is called the Runge--Kutta matrix. These data are usually arranged in a mnemonic device known as a Butcher tableau (after the New Zealand mathematician John Charles Butcher, born in 1933):
\[ \begin{array}{c|ccccc} 0&0&0& \cdots & 0 & 0 \\ c_2 & a_{2,1} & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_m & a_{m,1} & a_{m,2} & \cdots & a_{m, m-1} & 0 \\ \hline & b_1 & b_2 & \cdots & b_{m-1} & b_m \end{array} \]
where \( m \) is the number of stages. The Runge--Kutta method is consistent if
\[ c_i = \sum_{j=1}^m a_{i,j} , \qquad i=1,2,\ldots , m .
\]
These conditions effectively determine the points in time at which the function is sampled and are a particularly useful device in the derivation of high-order Runge--Kutta methods. The coefficients of the method are free parameters that are chosen to satisfy a Taylor series expansion through some order in the time step \( h . \) In practice, other conditions such as stability can also constrain the coefficients. The minimum number of stages m required for an explicit m-stage Runge--Kutta method to have order p is, in general, an open problem. Some known values are
\[ \begin{array}{c|cccccccc} p &1&2&3&4&5&6&7&8 \\ \hline \min m & 1&2&3&4&6&7&9&11 \end{array} \]

Explicit Runge--Kutta methods are a special case where the matrix A is strictly lower triangular:

\[ a_{i,j} =0 \quad \mbox{for } j\ge i, \qquad i, j=1,2,\ldots , m .
\]
The row-sum conditions can be visualized as summing across the rows of the table. Notice that a consequence of explicitness is \( c_1 =0 ,\) so that the function is sampled at the beginning of the current integration step.
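As a small illustration of the row-sum condition, the following Maple sketch checks whether a given tableau is consistent; the procedure name consistent and the argument conventions (the Runge--Kutta matrix A passed as a list of rows, the nodes c as a list) are assumptions made for this example.

consistent := proc(A, c)
  local i, j, ok;
  ok := true;
  for i from 1 to nops(c) do
    # the i-th node must equal the sum of the i-th row of the Runge--Kutta matrix
    if c[i] <> add(A[i][j], j = 1 .. nops(A[i])) then ok := false; end if;
  end do;
  return ok;
end proc:

consistent([[0, 0], [1/2, 0]], [0, 1/2]);    # returns true for this two-stage tableau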

Example: The Butcher tableau for the explicit midpoint method is given by:

\[ \begin{array}{c|cc} 0&0&0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \hline & 0 & 1 \end{array} \]
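The tableau above can be turned directly into a solver. The following Maple sketch implements a general explicit Runge--Kutta stepper driven by a Butcher tableau and applies it with the midpoint tableau; the procedure name rk_explicit, the argument conventions, and the test problem \( y' = y \) are assumptions made for this example.

rk_explicit := proc(f, x0, y0, h, N, A, b, c)
  local m, x, y, n, i, j, k, pts;
  m := nops(b);                        # number of stages
  x := x0;  y := y0;  pts := [[x, y]];
  for n from 1 to N do
    k := Array(1 .. m);
    for i from 1 to m do
      # each stage uses only previously computed stages (A is strictly lower triangular)
      k[i] := f(x + c[i]*h, y + h*add(A[i][j]*k[j], j = 1 .. i - 1));
    end do;
    y := y + h*add(b[j]*k[j], j = 1 .. m);
    x := x + h;
    pts := [op(pts), [x, y]];
  end do;
  return pts;
end proc:

A := [[0, 0], [1/2, 0]]:  b := [0, 1]:  c := [0, 1/2]:    # the explicit midpoint tableau
rk_explicit((x, y) -> y, 0, 1.0, 0.1, 10, A, b, c);       # assumed test problem y' = y, y(0) = 1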

Euler's method is sometimes called the first-order Runge--Kutta method, and Heun's method the second-order one. In the following sections, we consider a family of Runge--Kutta methods.

Explicit Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small. The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form

\[ y_{n+1} = y_n + h \sum_{i=1}^m b_i k_i , \]
where
\[ k_i = f\left( x_n + c_i h , y_n + h \sum_{j=1}^m a_{i,j} k_j \right) , \qquad i =1,2,\ldots , m . \]
The difference from an explicit method is that, in an explicit method, the sum over j only goes up to i-1. This also shows up in the Butcher tableau: the coefficient matrix \( {\bf A} = [ a_{i,j}] \) of an explicit method is strictly lower triangular. In an implicit method, the sum over j goes up to m and the coefficient matrix is, in general, full, yielding a Butcher tableau of the form
\[ \begin{array}{c|ccccc} c_1&a_{1,1}&a_{1,2}& \cdots & a_{1,m-1} & a_{1,m} \\ c_2 & a_{2,1} & a_{2,2} & \cdots & a_{2,m-1} & a_{2,m} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_m & a_{m,1} & a_{m,2} & \cdots & a_{m, m-1} & a_{m,m} \\ \hline & b_1 & b_2 & \cdots & b_{m-1} & b_m \end{array} \quad = \quad \begin{array}{c|c} {\bf c} & {\bf A} \\ \hline & {\bf b}^{\mathrm T} \end{array} . \]
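To make this structure concrete, here is a minimal Maple sketch of a single step of an implicit Runge--Kutta method in which the coupled stage equations are solved numerically with fsolve; the procedure name irk_step and the argument conventions are assumptions made for this example, and a production code would typically use a Newton iteration with a carefully chosen starting guess instead.

irk_step := proc(f, x, y, h, A, b, c)
  local m, i, j, k, eqs, sol;
  m := nops(b);
  # the stage values k[1], ..., k[m] satisfy a coupled (generally nonlinear) system
  eqs := { seq(k[i] = f(x + c[i]*h, y + h*add(A[i][j]*k[j], j = 1 .. m)), i = 1 .. m) };
  sol := fsolve(eqs, { seq(k[i], i = 1 .. m) });    # may need a starting guess for hard problems
  return y + h*add(b[j]*eval(k[j], sol), j = 1 .. m);
end proc:

irk_step((x, y) -> -5*y, 0, 1.0, 0.1, [[1/2]], [1], [1/2]);   # one step of the one-stage implicit midpoint rule for the assumed test problem y' = -5*y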

Example: The simplest example of an implicit Runge–Kutta method is the backward Euler method:

\[ y_{n+1} = y_n + h\, f\left( x_n +h , y_{n+1} \right) , \qquad n=0,1,\ldots . \]
The Butcher tableau for this is simply:
\[ \begin{array}{c|c} 1 & 1 \\ \hline & 1 \end{array} . \]
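A minimal Maple sketch of the backward Euler method follows, with the implicit equation at each step solved numerically by fsolve; the procedure name backward_euler, the test problem \( y' = -5y , \ y(0) = 1 , \) and the use of the previous value as the starting guess are assumptions made for this example.

backward_euler := proc(f, x0, y0, h, N)
  local x, y, z, ynew, n, pts;
  x := x0;  y := y0;  pts := [[x, y]];
  for n from 1 to N do
    # solve ynew = y + h*f(x + h, ynew) for ynew, starting the search at the current y
    ynew := fsolve(z = y + h*f(x + h, z), z = y);
    x := x + h;
    y := ynew;
    pts := [op(pts), [x, y]];
  end do;
  return pts;
end proc:

backward_euler((x, y) -> -5*y, 0, 1.0, 0.1, 10);    # assumed stiff-flavored test problem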