Preface


The methods named after Carl Runge and Wilhelm Kutta (the letter "u" in both last names is pronounced as in the word "rule") are designed to imitate the Taylor series method without requiring analytic differentiation of the original slope function. Recall the Taylor series in two variables:

\[ f(x+h, y+k) = \sum_{i\ge 0} \frac{1}{i!} \left( h\,\frac{\partial}{\partial x} + k\,\frac{\partial}{\partial y} \right)^i f(x,y) . \]


Runge--Kutta Methods


[Portraits of Martin Wilhelm Kutta and Carl Runge]

In an 1895 paper, the German mathematician Carl David Tolmé Runge (1856--1927) extended Euler's approximation method to a more elaborate scheme capable of greater accuracy. He explored three main schemes, now called the midpoint method, the Heun method, and the trapezoid rule. Runge was very sporting, a fit and active man even as he grew older, and on his 70th birthday he entertained his grandchildren by doing handstands. Runge's son Wilhelm was an early developer of radar, and his daughter Nerina (Nina) married the German American mathematician Richard Courant (1888--1972), who established one of America’s most prestigious institutes of applied mathematics and was one of the originators of the finite element method.

His immediate successors, the German mathematicians Karl Heun (1859--1929) and Martin Wilhelm Kutta (1867--1944), laid down the foundations for a family of numerical methods now known as the Runge--Kutta methods. In 1900, Heun analyzed several numerical schemes and introduced, among others, the third-order method used for the approximate solution of ordinary differential equations. The paper by Kutta, which appeared in 1901, took the analysis of Runge--Kutta methods as far as order 5. He gave a complete classification of fourth-order methods and introduced the famous method now known as the classical Runge--Kutta method. Besides being a co-developer of the Runge--Kutta method, Kutta is also remembered for the Zhukovsky--Kutta aerofoil, the Kutta--Zhukovsky theorem, and the Kutta condition in aerodynamics.

The first phase in the history of Runge--Kutta methods ended with the work of the Finnish mathematician Evert Johannes Nyström (1895--1960), a former student of Lindelöf and a professor of applied mathematics at the Helsinki University of Technology. He took the analysis of fifth-order methods to its completion but, more importantly, he extended the use of Runge--Kutta methods to systems of second-order differential equations. Such systems arise in dynamical problems and can often be solved more efficiently when posed in their original form rather than converted to equivalent first-order systems. Further work on Runge--Kutta methods was carried out by Professor Anton Huta (born in 1915) of the University of Bratislava, who took the analysis as far as order 6. The general theory emerged from the work of S. Gill, R.H. Merson, and John Charles Butcher (born in 1933) of the University of Auckland. A history of the development of Runge--Kutta methods can be found in
Butcher, J.C., A history of Runge--Kutta methods, Applied Numerical Mathematics, 20 (1996), 247--260.

Both previously discussed rules (improved Euler and modified Euler) are particular cases of a family of implicit and explicit iterative numerical methods known as the Runge--Kutta methods. The idea of Runge--Kutta methods is to take successive (weighted) Euler steps to approximate a Taylor series. In this way function evaluations (and not derivatives) are used.

For example, consider the one-step formulation of the midpoint method used to find a numerical solution to the initial value problem \( y' = f(x,y), \quad y(x_0 ) = y_0 . \)

\[ \begin{eqnarray*} k_1 &=& f(x_n , y_n ) , \\ k_2 &=& f \left(x_n + \frac{h}{2} , y_n + \frac{1}{2}\,h\,k_1 \right) , \\ y_{n+1} &=& y_n + h\,k_2 . \end{eqnarray*} \]
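As a quick illustration, here is a minimal Mathematica sketch of the midpoint rule applied to the test problem \( y' = y , \ y(0) = 1 \) (the function name midpointStep and the test problem are our own choices for illustration, not part of the method):

midpointStep[f_, {x_, y_}, h_] := Module[{k1, k2},
  k1 = f[x, y];                  (* slope at the left endpoint *)
  k2 = f[x + h/2, y + h k1/2];   (* slope at the midpoint *)
  {x + h, y + h k2}]

f[x_, y_] := y;   (* test problem y' = y, exact solution E^x *)
steps = NestList[midpointStep[f, #, 0.1] &, {0., 1.}, 10];
Last[steps]              (* approximation to {1, y(1)} *)
E - Last[steps][[2]]     (* global error at x = 1, of size O(h^2) *)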

Extending the above approach, repeated function evaluation can be used to obtain higher-order methods. Let us denote the Runge--Kutta approximation to the solution of the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 \) at the mesh point \( x_{n+1} = x_n +h \) by \( y_{n+1} . \) Then

\[ y_{n+1} = y_n + h \left( b_1 k_1 + b_2 k_2 + \cdots + b_m k_m \right) , \]
where
\[ \begin{split} k_1 &= f \left( x_n , y_n \right) , \\ k_2 &= f\left(x_n + c_2 h, y_n + h\,a_{21} k_1 \right) , \\ k_{3} &= f\left( x_n + c_3 h , y_n + h\,a_{31} k_1 + h\,a_{32} k_2 \right) , \\ & \vdots \\ k_m &= f\left( x_n + c_m h , y_n + h\,a_{m1} k_1 + h\,a_{m2} k_2 + \cdots + h\,a_{m,m-1} k_{m-1} \right) . \end{split} \]
To specify a particular method, one needs to provide the integer m (the number of stages), the coefficients \( a_{i,j} , \) the weights \( b_i , \) and the nodes \( c_i . \) The lower triangular matrix \( {\bf A} = [a_{i,j}] \) is called the Runge--Kutta matrix. These data are usually arranged in a mnemonic device known as a Butcher tableau (after the New Zealand mathematician John Charles Butcher):
\[ \begin{array}{c|ccccc} 0&0&0& \cdots & 0 & 0 \\ c_2 & a_{2,1} & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_m & a_{m,1} & a_{m,2} & \cdots & a_{m, m-1} & 0 \\ \hline & b_1 & b_2 & \cdots & b_{m-1} & b_m \end{array} \]
where \( m \) is the number of stages. The Runge--Kutta method is consistent if
\[ c_i = \sum_{j=1}^m a_{i,j} , \qquad i=1,2,\ldots , m.
\]
These conditions effectively determine the points in time at which the function is sampled and are a particularly useful device in the derivation of high-order Runge--Kutta methods. The coefficients of the method are free parameters that are chosen to satisfy a Taylor series expansion through some order in the time step \( h . \) In practice, other conditions such as stability can also constrain the coefficients. The minimum number of stages m required for an explicit m-stage Runge--Kutta method to have order p is, in general, an open problem. Some known values are
\[ \begin{array}{c|cccccccc} p &1&2&3&4&5&6&7&8 \\ \hline \min m & 1&2&3&4&6 & 7&9&11 \end{array} \]
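For instance, for a two-stage explicit method ( \( m=2 \) ), matching the Taylor expansion of the exact solution through \( h^2 \) yields the order conditions
\[ b_1 + b_2 = 1 , \qquad b_2 c_2 = \frac{1}{2} , \]
which both the midpoint method \( \left( b_1 =0, \ b_2 =1, \ c_2 = \frac{1}{2} \right) \) and Heun's method \( \left( b_1 = b_2 = \frac{1}{2} , \ c_2 =1 \right) \) satisfy.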

Explicit Runge--Kutta methods are a special case where the matrix A is strictly lower triangular:

\[ a_{i,j} =0 \quad \mbox{for } j\ge i, \qquad i, j =1,2,\ldots , m .
\]
The row-sum conditions can be visualized as summing across the rows of the table. Notice that a consequence of explicitness is \( c_1 =0 ,\) so that the function is sampled at the beginning of the current integration step.

Example: The Butcher tableau for the explicit midpoint method is given by:

\[ \begin{array}{c|cc} 0&0&0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \hline & 0 & 1 \end{array} \]
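In fact, a Butcher tableau is all the data needed to implement a method. The following minimal Mathematica sketch (the function name explicitRKStep is our own) performs one explicit Runge--Kutta step driven by an arbitrary tableau \( \{ {\bf A}, {\bf b}, {\bf c} \} \) and then reproduces the midpoint step above:

explicitRKStep[f_, {A_, b_, c_}, {x_, y_}, h_] := Module[{m = Length[b], k},
  k = ConstantArray[0, m];
  (* each stage uses only the previously computed stages k1, ..., k(i-1) *)
  Do[k[[i]] = f[x + c[[i]] h, y + h Sum[A[[i, j]] k[[j]], {j, 1, i - 1}]],
    {i, 1, m}];
  {x + h, y + h b . k}]   (* y_{n+1} = y_n + h (b1 k1 + ... + bm km) *)

A = {{0, 0}, {1/2, 0}};  b = {0, 1};  c = {0, 1/2};   (* midpoint tableau *)
explicitRKStep[Function[{x, y}, y], {A, b, c}, {0., 1.}, 0.1]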

Euler's method is sometimes called the first-order Runge--Kutta method, and Heun's method the second-order one. In the following sections, we consider a family of Runge--Kutta methods.

Explicit Runge--Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small. The instability of explicit Runge--Kutta methods motivates the development of implicit methods. An implicit Runge--Kutta method has the form

\[ y_{n+1} = y_n + h \sum_{i=1}^m b_i k_i , \]
where
\[ k_i = f\left( x_n + c_i h , y_n + h \sum_{j=1}^m a_{i,j} k_j \right) , \qquad i =1,2,\ldots , m . \]
The difference from an explicit method is that here the sum over j runs all the way to m, whereas in an explicit method it stops at \( i-1 . \) This also shows up in the Butcher tableau: the coefficient matrix \( {\bf A} = [ a_{i,j}] \) of an explicit method is strictly lower triangular, while that of an implicit method is not, yielding a Butcher tableau of the form
\[ \begin{array}{c|ccccc} c_1&a_{1,1}&a_{1,2}& \cdots & a_{1,m-1} & a_{1,m} \\ c_2 & a_{2,1} & a_{2,2} & \cdots & a_{2,m-1} & a_{2,m} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_m & a_{m,1} & a_{m,2} & \cdots & a_{m, m-1} & a_{m,m} \\ \hline & b_1 & b_2 & \cdots & b_{m-1} & b_m \end{array} \quad = \quad \begin{array}{c|c} {\bf c} & {\bf A} \\ \hline & {\bf b}^{\mathrm T} \end{array} . \]

Example: The simplest example of an implicit Runge--Kutta method is the backward Euler method:

\[ y_{n+1} = y_n + h\, f\left( x_n +h , y_{n+1} \right) , \qquad n=0,1,\ldots . \]
The Butcher tableau for this is simply:
\[ \begin{array}{c|c} 1 & 1 \\ \hline & 1 \end{array} . \]
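Because \( y_{n+1} \) appears on both sides, every step requires solving a (generally nonlinear) equation. Here is a minimal Mathematica sketch using FindRoot (the function name backwardEulerStep and the stiff test problem are our own illustrations):

backwardEulerStep[f_, {x_, y_}, h_] := Module[{z},
  (* solve z == y + h f(x+h, z) for the new value z = y_{n+1} *)
  {x + h, z /. FindRoot[z == y + h f[x + h, z], {z, y}]}]

fStiff[x_, y_] := -50 (y - Cos[x]);   (* a mildly stiff test problem *)
NestList[backwardEulerStep[fStiff, #, 0.1] &, {0., 0.}, 5]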


V. Method Comparison


Sometimes you may be interested in finding out which methods NDSolve is using.
Here you can see the coefficients of the default 2(1) embedded pair:

NDSolve`EmbeddedExplicitRungeKuttaCoefficients[2, Infinity]
Out[9]=
{{{1}, {1/2, 1/2}}, {1/2, 1/2, 0}, {1, 1}, {-(1/2), 2/3, -(1/6)}}

You also may want to compare some of the different methods to see how
they perform for a specific problem.
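For instance, one way to compare methods is to solve the same initial value problem with different Method settings and check the error against a known exact solution (the test equation \( y' = -2xy , \ y(0)=1 , \) with exact solution \( e^{-x^2} , \) is our own illustration):

exact[x_] := Exp[-x^2];
sol1 = NDSolve[{y'[x] == -2 x y[x], y[0] == 1}, y, {x, 0, 2},
   Method -> "ExplicitRungeKutta"];
sol2 = NDSolve[{y'[x] == -2 x y[x], y[0] == 1}, y, {x, 0, 2},
   Method -> "ImplicitRungeKutta"];
(* absolute errors at the right endpoint *)
{Abs[(y[2] /. First[sol1]) - exact[2]],
 Abs[(y[2] /. First[sol2]) - exact[2]]}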

Implicit Runge--Kutta methods have a number of desirable properties.

The Gauss--Legendre methods, for example, are self-adjoint, meaning
that they provide the same solution when integrating forward or
backward in time.

More details can be found in the Wolfram Language tutorial: http://reference.wolfram.com/language/tutorial/NDSolveImplicitRungeKutta.html


A generic framework for implicit Runge--Kutta methods has been implemented. The focus so far has been on methods with interesting geometric properties, and the framework currently covers the following schemes:

"ImplicitRungeKuttaGaussCoefficients"
"ImplicitRungeKuttaLobattoIIIACoefficients"
"ImplicitRungeKuttaLobattoIIIBCoefficients"
"ImplicitRungeKuttaLobattoIIICCoefficients"
"ImplicitRungeKuttaRadauIACoefficients"
"ImplicitRungeKuttaRadauIIACoefficients"

The derivation of the method coefficients can be carried out to arbitrary order and arbitrary precision.
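For example, by analogy with the embedded-pair call shown earlier (we assume the first argument is the method order and the second the precision, with Infinity requesting exact values), the coefficients of the order-4 Gauss scheme can be generated as follows:

(* exact Butcher data of the order-4 Gauss implicit Runge-Kutta method *)
NDSolve`ImplicitRungeKuttaGaussCoefficients[4, Infinity]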


