R TUTORIAL for the First Course. Part 3: Runge--Kutta of order 4

Prof. Vladimir A. Dobrushkin

This tutorial contains many R scripts. You, as the user, are free to use all of the code for your own needs, and you have the right to distribute this tutorial and refer to it as long as it is credited appropriately. Any comments and/or contributions to this tutorial are welcome; you can send your remarks to

Email Vladimir Dobrushkin

Runge--Kutta Methods of order 4

Each Runge--Kutta method is derived from an appropriate Taylor method in such a way that the final global error is of order \( O(h^m) \), so it is of order m. These methods can be constructed for any order m. The trade-off between performing several slope evaluations at each step and the accuracy gained makes the Runge--Kutta method of order 4 the most popular.

As usual, we consider an initial value problem \( y' = f(x,y) , \quad y(x_0 )= y_0 \) and seek approximate values \( y_k \) to the actual solution \( y= \phi (x) \) at mesh points \( x_k = x_0 + k\,h , \quad k=0,1,2,\ldots . \) The fourth-order Runge--Kutta method computes \( y_{k+1} \) as follows

\[ y_{k+1} = y_k + h\,b_1 \,k_1 + h\,b_2 \,k_2 + h\,b_3 \,k_3 + h\,b_4 \,k_4 , \]
where the coefficients are calculated as
\begin{align*} k_1 &= f(x_k , y_k ) , \\ k_2 &= f \left( x_k + c_2 h , y_k + a_{2,1} k_1 \right) , \\ k_3 &= f \left( x_k + c_3 h, y_k + a_{3,1} k_1 + a_{3,2} k_2 \right) , \\ k_4 &= f \left( x_k + c_4 h , y_k + a_{4,1} k_1 + a_{4,2} k_2 + a_{4,3} k_3 \right) . \end{align*}
By matching coefficients with those of the Taylor method of order 4 so that the local truncation error is of order \( O( h^5 ) , \) the founders of the method were able to obtain the following system of equations:
\begin{align*} c_2 &= a_{2,1} , \\ c_3 &= a_{3,1} + a_{3,2} , \\ c_4 &= a_{4,1} + a_{4,2} + a_{4,3} , \\ 1 &= b_1 + b_2 + b_3 + b_4 , \\ \frac{1}{2} &= b_2 c_2 + b_3 c_3 + b_4 c_4 , \\ \frac{1}{3} &= b_2 c_2^2 + b_3 c_3^2 + b_4 c_4^2 , \\ \frac{1}{4} &= b_2 c_2^3 + b_3 c_3^3 + b_4 c_4^3 , \\ \frac{1}{6} &= b_3 c_2 a_{3,2} + b_4 \left( c_2 a_{4,2} + c_3 a_{4,3} \right) , \\ \frac{1}{8} &= b_3 c_3 c_2 a_{3,2} + b_4 c_4 \left( c_2 a_{4,2} + c_3 a_{4,3} \right) , \\ \frac{1}{12} &= b_3 c_2^2 a_{3,2} + b_4 \left( c_2^2 a_{4,2} + c_3^2 a_{4,3} \right) , \\ \frac{1}{24} &= b_4 c_2 a_{3,2} a_{4,3} . \end{align*}
The system involves 11 equations in 13 unknowns, so two of them can be chosen arbitrarily. We present some of the most useful choices for these coefficients below.
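The classical fourth-order coefficients (presented in the next subsection) satisfy this system. As a quick numerical sketch, the following R code checks a few of the standard order conditions for that choice; the variable names are ours:

```r
## coefficients of the classical fourth-order Runge--Kutta method
b  <- c(1/6, 1/3, 1/3, 1/6)
c2 <- 1/2; c3 <- 1/2; c4 <- 1
a21 <- 1/2
a31 <- 0;  a32 <- 1/2
a41 <- 0;  a42 <- 0;  a43 <- 1

## verify a sample of the order conditions
stopifnot(
  isTRUE(all.equal(c2, a21)),                              # c_2 = a_{2,1}
  isTRUE(all.equal(sum(b), 1)),                            # weights sum to 1
  isTRUE(all.equal(b[2]*c2 + b[3]*c3 + b[4]*c4, 1/2)),     # second-order condition
  isTRUE(all.equal(b[2]*c2^2 + b[3]*c3^2 + b[4]*c4^2, 1/3)),
  isTRUE(all.equal(b[4]*c2*a32*a43, 1/24))                 # fourth-order condition
)
```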

By far the most often used is the classical fourth-order Runge-Kutta formula, which has a certain sleekness of organization about it:

\begin{equation} \label{E35.10} y_{n+1} = y_n + \frac{h}{6}\, ( k_1 + 2 k_2 + 2 k_3 + k_4 ), \end{equation}
where
\[ \begin{array}{l} k_1 = f_n = f(x_n , y_n ), \\ k_2 = f\left( x_n + \frac{h}{2}, y_n + \frac{h}{2} k_1 \right) , \\ k_3 =f\left( x_n + \frac{h}{2}, y_n + \frac{h}{2} k_2 \right) , \\ k_4 = f( x_n + h, y_n + h k_3 ). \end{array} \]
The fourth-order Runge-Kutta method requires four evaluations of the right-hand side per step h. This will be superior to the midpoint method if at least twice as large a step is possible. Generally speaking, high order does not always mean high accuracy. Predictor-corrector methods can be very much more efficient for problems where very high accuracy is a requirement.
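The classical scheme above can be coded directly in a few lines of plain R. The following is a minimal sketch (the name rk4_step is ours, not part of any package):

```r
## one step of the classical fourth-order Runge--Kutta method
rk4_step <- function(f, x, y, h) {
  k1 <- f(x, y)
  k2 <- f(x + h/2, y + h/2 * k1)
  k3 <- f(x + h/2, y + h/2 * k2)
  k4 <- f(x + h,   y + h   * k3)
  y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
}

## example: y' = y, y(0) = 1, exact solution e^x
f <- function(x, y) y
y <- 1; x <- 0; h <- 0.1
for (i in 1:10) { y <- rk4_step(f, x, y, h); x <- x + h }
## y now approximates e = 2.71828... to about six significant digits
```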

The Runge-Kutta method iterates the x-values by simply adding a fixed step size h at each iteration. The y-iteration formula is far more interesting: it is a weighted average of four slope estimates. We see that k1 and k4 are given a weight of 1/6 in the weighted average, whereas k2 and k3 are weighted 1/3, or twice as heavily as k1 and k4. (As usual with a weighted average, the sum of the weights 1/6, 1/3, 1/3 and 1/6 is 1.) So what are these ki values that are being used in the weighted average?

  • k1 is simply Euler's prediction.
  • k2 is evaluated halfway across the prediction interval and gives us an estimate of the slope of the solution curve at this halfway point.
  • k3 has a formula which is quite similar to that of k2 except that where k1 used to be, there is now a k2. Essentially, the f-value here is yet another estimate of the slope of the solution at the "midpoint" of the prediction interval. This time, however, the y-value of the midpoint is not based on Euler's prediction, but on the y-jump predicted already with k2.
  • k4 evaluates f at \( x_n + h , \) which is at the extreme right of the prediction interval. The y-value coupled with this, \( y_n + h\, k_3 , \) is an estimate of the y-value at the right end of the interval, based on the y-jump just predicted by k3.


These methods are available in R through the deSolve package: its ode() solver accepts a method argument built with rkMethod(). In the calls below, x is the vector of initial state values, times the vector of output times, func the model (derivative) function, and parms its parameters. A minimal setup, for example:

library(deSolve)

## example problem: y' = y, y(0) = 1
func  <- function(t, y, parms) list(y)
x     <- c(y = 1)
times <- seq(0, 1, by = 0.1)
parms <- NULL

## classical fourth-order Runge--Kutta
ode(x, times, func, parms, method = rkMethod("rk4"))

## Dormand--Prince 4(5) pair
ode(x, times, func, parms, method = "ode45")

## disable polynomial interpolation (dense output)
## and fall back to linear approximation
ode(x, times, func, parms, method = rkMethod("rk45dp7", d = NULL))

## define and use a new Runge--Kutta method
ode(x, times, func, parms,
  method = rkMethod(ID = "midpoint",
    varstep = FALSE,
    A       = c(0, 1/2),
    b1      = c(0, 1),
    c       = c(0, 1/2),
    stage   = 2,
    Qerr    = 1
  )
)

Kutta's method of order four (also known as the 3/8 rule):

\begin{equation} \label{E35.11} y_{n+1} = y_n + \frac{h}{8}\, ( k_1 + 3 k_2 + 3 k_3 + k_4 ), %\eqno{(3.12)} \end{equation}
where
\[ \begin{array}{l} k_1 = f_n = f(x_n , y_n ), \\ k_2 = f\left( x_n + \frac{h}{3}\, , y_n + \frac{h}{3}\, k_1 \right) , \\ k_3 =f\left( x_n + \frac{2h}{3}\, , y_n - \frac{h}{3}\, k_1 + h k_2 \right) , \\ k_4 = f( x_n + h, y_n + h k_1 - h k_2 + h k_3 ). \end{array} \]
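These formulas transcribe directly into a short R function; this is a sketch with a name of our choosing:

```r
## one step of Kutta's 3/8 rule (fourth order)
kutta38_step <- function(f, x, y, h) {
  k1 <- f(x, y)
  k2 <- f(x + h/3,   y + h/3 * k1)
  k3 <- f(x + 2*h/3, y - h/3 * k1 + h * k2)
  k4 <- f(x + h,     y + h*k1 - h*k2 + h*k3)
  y + h/8 * (k1 + 3*k2 + 3*k3 + k4)
}

## example: one step of y' = y from y(0) = 1
y1 <- kutta38_step(function(x, y) y, 0, 1, 0.1)
```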

Gill's method:

\begin{equation} \label{E35.12} y_{n+1} = y_n + \frac{h}{6} \left[ k_1 + 2 \left( 1 - \frac{1}{\sqrt{2}} \right) k_2 + 2 \left( 1 + \frac{1}{\sqrt{2}} \right) k_3 + k_4 \right] , \end{equation}
where
\[ \begin{array}{l} k_1 = f(x_n , y_n ), \\ k_2 = f\left( x_n + \frac{h}{2}\, , y_n + \frac{h}{2}\, k_1 \right) , \\ k_3 =f\left( x_n + \frac{h}{2}\, , y_n - \left( \frac{1}{2} - \frac{1}{\sqrt{2}} \right) h k_1 + \left( 1 - \frac{1}{\sqrt{2}} \right) h k_2 \right) , \\ k_4 = f( x_n + h, y_n - \frac{1}{\sqrt{2}} h k_2 + \left( 1 + \frac{1}{\sqrt{2}} \right) h k_3 ). \end{array} \]
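Gill's method can likewise be sketched in plain R (the function name is arbitrary); note that the two square-root weights reappear in both the stages and the final average:

```r
## one step of Gill's fourth-order method
gill_step <- function(f, x, y, h) {
  s  <- 1/sqrt(2)
  k1 <- f(x, y)
  k2 <- f(x + h/2, y + h/2 * k1)
  k3 <- f(x + h/2, y - (1/2 - s)*h*k1 + (1 - s)*h*k2)
  k4 <- f(x + h,   y - s*h*k2 + (1 + s)*h*k3)
  y + h/6 * (k1 + 2*(1 - s)*k2 + 2*(1 + s)*k3 + k4)
}

## example: one step of y' = y from y(0) = 1
y1 <- gill_step(function(x, y) y, 0, 1, 0.1)
```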

Fifth order Runge--Kutta algorithm

Here is the fifth order Runge--Kutta method proposed by Butcher:

\begin{equation} \label{E36.butcher} y_{k+1} = y_k + \frac{h}{90} \left( 7k_1 + 32 k_3 + 12 k_4 + 32 k_5 + 7 k_6 \right) , \qquad k=0,1,\ldots , \end{equation}
where
\begin{align*} k_1 &= f(x_k , y_k ), \\ k_2 &= f \left( x_k + h/4 , y_k + h\,k_1 /4 \right) , \\ k_3 &= f \left( x_k + h/4 , y_k + k_1 \,h/8 + k_2 \,h/8 \right), \\ k_4 &= f \left( x_k + h/2 , y_k - k_2 \,h/2 + k_3 \,h \right) , \\ k_5 &= f \left( x_k + 3\,h/4 , y_k + 3\, k_1 \, h/16 +9 k_4 \, h/16 \right) , \\ k_6 &= f \left( x_k + h , y_k - 3\,k_1 \, h/7 + 2\,k_2 \, h/7 + 12\, k_3 \, h/7 -12\, k_4 \, h/7 + 8\, k_5 \, h/7 \right) . \end{align*}
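Butcher's scheme, with its six stages, follows the same pattern; the sketch below uses a function name of our choosing:

```r
## one step of Butcher's fifth-order Runge--Kutta method
butcher5_step <- function(f, x, y, h) {
  k1 <- f(x,         y)
  k2 <- f(x + h/4,   y + h*k1/4)
  k3 <- f(x + h/4,   y + h*k1/8 + h*k2/8)
  k4 <- f(x + h/2,   y - h*k2/2 + h*k3)
  k5 <- f(x + 3*h/4, y + 3*h*k1/16 + 9*h*k4/16)
  k6 <- f(x + h,     y - 3*h*k1/7 + 2*h*k2/7 + 12*h*k3/7 - 12*h*k4/7 + 8*h*k5/7)
  y + h/90 * (7*k1 + 32*k3 + 12*k4 + 32*k5 + 7*k6)
}

## example: one step of y' = y from y(0) = 1
y1 <- butcher5_step(function(x, y) y, 0, 1, 0.1)
```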

Runge--Kutta--Fehlberg Method (RKF45)

One way to guarantee accuracy in the solution of an initial value problem is to solve the problem twice using step sizes h and h/2 and compare answers at the mesh points corresponding to the larger step size. However, this requires a significant amount of computation for the smaller step size and must be repeated if it is determined that the agreement is not good enough.

The Runge--Kutta--Fehlberg method (denoted RKF45), or Fehlberg method, was developed by the German mathematician Erwin Fehlberg (1911--1990) in 1969 and is based on the large class of Runge--Kutta methods. The novelty of Fehlberg's method is that it is an embedded method from the Runge--Kutta family: it has a procedure to determine whether the proper step size h is being used. At each step, two different approximations to the solution are made and compared. If the two answers are in close agreement, the approximation is accepted. If the two answers do not agree to a specified accuracy, the step size is reduced. If the answers agree to more significant digits than required, the step size is increased. The RKF45 method has order \( O( h^4 ) \) with an error estimator of order \( O( h^5 ) . \)

Butcher tableau for Fehlberg's 4(5) method

\[ \begin{array}{c|cccccc} 0&& \\ \frac{1}{4} & \frac{1}{4} \\ \frac{3}{8} & \frac{3}{32} & \frac{9}{32} \\ \frac{12}{13} & \frac{1932}{2197} & - \frac{7200}{2197} & \frac{7296}{2197} \\ 1 & \frac{439}{216} & -8 & \frac{3680}{513} & - \frac{845}{4104} \\ \frac{1}{2} & -\frac{8}{27} & 2 & - \frac{3544}{2565} & \frac{1859}{4104} & - \frac{11}{40} \\ \hline \mbox{order } O(h^4 ) & \frac{25}{216} & 0 & \frac{1408}{2565} & \frac{2197}{4104} & - \frac{1}{5} & 0 \\ \mbox{order } O(h^5 ) & \frac{16}{135} & 0 & \frac{6656}{12825} & \frac{28561}{56430} & - \frac{9}{50} & \frac{2}{55} \end{array} . \]
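The tableau translates directly into an embedded step in R. The following sketch (function and variable names are ours) returns both the fourth-order value y and the fifth-order value z from a single step:

```r
## one embedded RKF45 step: returns the O(h^4) value y and the O(h^5) value z
rkf45_step <- function(f, x, y, h) {
  k1 <- f(x,           y)
  k2 <- f(x + h/4,     y + h*k1/4)
  k3 <- f(x + 3*h/8,   y + h*(3*k1/32 + 9*k2/32))
  k4 <- f(x + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
  k5 <- f(x + h,       y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
  k6 <- f(x + h/2,     y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565 +
                              1859*k4/4104 - 11*k5/40))
  y4 <- y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
  z5 <- y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55)
  list(y = y4, z = z5)
}

## example: one step of y' = y from y(0) = 1
res <- rkf45_step(function(x, y) y, 0, 1, 0.1)
```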
The optimal step size can be determined by multiplying the current step size h by a scalar s. The scalar s is
\[ s = \left( \frac{\mbox{Tol} \,h}{2 \, |z_{k+1} - y_{k+1} |} \right)^{1/4} \approx 0.84 \left( \frac{\mbox{Tol} \,h}{|z_{k+1} - y_{k+1} |} \right)^{1/4} , \]
where \( z_{k+1} \) is the approximation based on the fifth-order RK method and \( y_{k+1} \) is the approximation based on the fourth-order RK method.
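In R the step-size update is one line of arithmetic; the sample values of Tol, h, and the embedded difference below are hypothetical, chosen only to illustrate the formula:

```r
Tol <- 1e-5    # desired local error tolerance (sample value)
h   <- 0.1     # current step size (sample value)
err <- 1e-7    # |z_{k+1} - y_{k+1}| from the embedded pair (sample value)

s     <- (Tol * h / (2 * err))^(1/4)  # step-size scalar
h_new <- s * h                        # step size suggested for the next step
```

With these sample numbers s = 5^{1/4}, so the step would be enlarged by about 50%.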