Preface


This section is devoted to the fourth-order Runge--Kutta algorithm for solving first-order differential equations subject to a prescribed initial condition.


Runge-Kutta of order 4


The fourth-order formula, known as the Runge--Kutta formula, has been used extensively to obtain approximate solutions of differential equations of first, second, and higher orders. The original idea for such formulas seems to be due to C. Runge. The idea was developed further for first-order equations by W. Kutta and for second-order equations by E. J. Nyström. The extension to order n was made by R. Zurmühl.

Each Runge--Kutta method is derived from an appropriate Taylor method in such a way that the final global error is of order \( O( h^m ) , \) so the method is of order m. Such methods can be constructed for any order m. All Runge--Kutta algorithms of the same order m are equivalent from the numerical analysis point of view because they have the same accuracy. However, these methods may produce slightly different numerical answers due to round-off errors, because they use different numbers of arithmetic operations. The trade-off between performing several slope evaluations per step and obtaining good accuracy makes the Runge--Kutta method of order 4 the most popular.

As usual, we consider an initial value problem \( y' = f(x,y) , \quad y(x_0 )= y_0 \) and seek approximate values yn of the actual solution \( y= \phi (x) \) at the mesh points \( x_n = x_0 + n\,h , \quad n=0,1,2,\ldots . \) The fourth-order Runge--Kutta method computes yn+1 as follows:

\[ y_{n+1} = y_n + h\,b_1 \,k_1 + h\,b_2 \,k_2 + h\,b_3 \,k_3 + h\,b_4 \,k_4 , \]
where coefficients are calculated as
\begin{align*} k_1 &= f(x_n , y_n ) , \\ k_2 &= f \left( x_n + c_2 h , y_n + h\, a_{2,1} k_1 \right) , \\ k_3 &= f \left( x_n + c_3 h, y_n + h \left( a_{3,1} k_1 + a_{3,2} k_2 \right) \right) , \\ k_4 &= f \left( x_n + c_4 h , y_n + h \left( a_{4,1} k_1 + a_{4,2} k_2 + a_{4,3} k_3 \right) \right) . \end{align*}
By matching coefficients with those of the Taylor method of order 4 so that the local truncation error is of order \( O( h^5 ) , \) the founders of the method were able to obtain the following system of equations:
\begin{align*} c_2 &= a_{2,1} , \\ c_3 &= a_{3,1} + a_{3,2} , \\ c_4 &= a_{4,1} + a_{4,2} + a_{4,3} , \\ 1 &= b_1 + b_2 + b_3 + b_4 , \\ \frac{1}{2} &= b_2 c_2 + b_3 c_3 + b_4 c_4 , \\ \frac{1}{3} &= b_2 c_2^2 + b_3 c_3^2 + b_4 c_4^2 , \\ \frac{1}{4} &= b_2 c_2^3 + b_3 c_3^3 + b_4 c_4^3 , \\ \frac{1}{6} &= b_3 c_2 a_{3,2} + b_4 \left( c_2 a_{4,2} + c_3 a_{4,3} \right) , \\ \frac{1}{8} &= b_3 c_3 c_2 a_{3,2} + b_4 c_4 \left( c_2 a_{4,2} + c_3 a_{4,3} \right) , \\ \frac{1}{12} &= b_3 c_2^2 a_{3,2} + b_4 \left( c_2^2 a_{4,2} + c_3^2 a_{4,3} \right) , \\ \frac{1}{24} &= b_4 c_2 a_{3,2} a_{4,3} . \end{align*}
The system involves 11 equations in 13 unknowns, so two of the unknowns can be chosen arbitrarily. We now present the most useful choices for these coefficients.
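Although this tutorial works in Mathematica, the order conditions above are easy to verify independently. Here is a short Python check (an assumed cross-check, not part of the tutorial) that the classical choice c2 = c3 = 1/2, c4 = 1, b = (1/6, 1/3, 1/3, 1/6) satisfies every equation exactly:

```python
from fractions import Fraction as F

# Coefficients of the classical RK4 scheme, one particular solution
# of the eleven-equation system above.
b1, b2, b3, b4 = F(1, 6), F(1, 3), F(1, 3), F(1, 6)
c2, c3, c4 = F(1, 2), F(1, 2), F(1)
a21, a31, a32 = F(1, 2), F(0), F(1, 2)
a41, a42, a43 = F(0), F(0), F(1)

# Verify all eleven order conditions with exact rational arithmetic.
assert c2 == a21 and c3 == a31 + a32 and c4 == a41 + a42 + a43
assert b1 + b2 + b3 + b4 == 1
assert b2*c2 + b3*c3 + b4*c4 == F(1, 2)
assert b2*c2**2 + b3*c3**2 + b4*c4**2 == F(1, 3)
assert b2*c2**3 + b3*c3**3 + b4*c4**3 == F(1, 4)
assert b3*c2*a32 + b4*(c2*a42 + c3*a43) == F(1, 6)
assert b3*c3*c2*a32 + b4*c4*(c2*a42 + c3*a43) == F(1, 8)
assert b3*c2**2*a32 + b4*(c2**2*a42 + c3**2*a43) == F(1, 12)
assert b4*c2*a32*a43 == F(1, 24)
print("classical RK4 satisfies all eleven order conditions")
```

Using Fraction rather than floating point makes the check exact, so any misprinted coefficient would trip an assertion.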

 

I. Classical Runge--Kutta of order 4


By far the most often used is the classical fourth-order Runge--Kutta formula, which has a certain sleekness of organization about it:

\begin{equation} \label{E35.10} y_{n+1} = y_n + \frac{h}{6}\, ( k_1 + 2 k_2 + 2 k_3 + k_4 ), \end{equation}
where
\[ \begin{array}{l} k_1 = f_n = f(x_n , y_n ), \\ k_2 = f\left( x_n + \frac{h}{2}, y_n + \frac{h}{2} k_1 \right) , \\ k_3 =f\left( x_n + \frac{h}{2}, y_n + \frac{h}{2} k_2 \right) , \\ k_4 = f( x_n + h, y_n + h k_3 ). \end{array} \]
The fourth-order Runge-Kutta method requires four evaluations of the right-hand side per step h. This will be superior to the midpoint method if at least twice as large a step is possible. Generally speaking, high order does not always mean high accuracy. Predictor-corrector methods can be very much more efficient for problems where very high accuracy is a requirement.

The Runge--Kutta method iterates the x-values by simply adding a fixed step size h at each iteration. The y-iteration formula is far more interesting: it is a weighted average of four slope estimates. We see that k1 and k4 are given a weight of 1/6 in the weighted average, whereas k2 and k3 are weighted 1/3, or twice as heavily as k1 and k4. (As usual with a weighted average, the sum of the weights 1/6, 1/3, 1/3, and 1/6 is 1.) So what are these ki values that are being used in the weighted average?

Here is a pseudocode to implement the classical Runge--Kutta method of order 4:

procedure RK4(f,t,x,h,n)
integer j, n;            real k1, k2, k3, k4, h, t, t0, x
external function f
output 0, t, x
t0 ← t
for j = 1 to n
   k1 ← h∗f(t, x)
   k2 ← h∗f(t + h/2, x + k1/2)
   k3 ← h∗f(t + h/2, x + k2/2)
   k4 ← h∗f(t + h, x + k3)
   x ← x + (k1 + 2k2 + 2k3 + k4)/6
   t ← t0 + j∗h
   output j, t, x
end for
end procedure RK4
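The pseudocode above translates almost line for line into other languages. Here is a Python sketch (the function name rk4 and the test problem x' = x are our choices for illustration, not part of the tutorial):

```python
import math

def rk4(f, t0, x0, h, n):
    """Classical RK4 for x' = f(t, x): take n steps of size h from t0."""
    t, x = t0, x0
    for j in range(n):
        k1 = h * f(t, x)
        k2 = h * f(t + h/2, x + k1/2)
        k3 = h * f(t + h/2, x + k2/2)
        k4 = h * f(t + h, x + k3)
        x += (k1 + 2*k2 + 2*k3 + k4) / 6
        t = t0 + (j + 1) * h   # t = t0 + j*h, as in the pseudocode, limits round-off
    return t, x

# Test on x' = x, x(0) = 1, whose exact solution gives x(1) = e.
t, x = rk4(lambda t, x: x, 0.0, 1.0, 0.1, 10)
print(x, abs(x - math.e))      # global error is about 2e-6 for h = 0.1
```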
Here is a one-step iteration code:
oneRK4[args_] := Module[{f, y0, t0, k1, k2, k3, k4, h, y1, t1, k12, k22, k32, k42},
{f, y0, t0, k1, k2, k3, k4, h} = args;
y1 = y0 + ((k1 + 2*k2 + 2*k3 + k4)/6.0);
t1 = t0 + h;
k12 = h*f[y1, t1];                       (* slopes for the next step are *)
k22 = h*f[y1 + k12*0.5, t1 + h*0.5];     (* evaluated at the new point (y1, t1) *)
k32 = h*f[y1 + k22*0.5, t1 + h*0.5];
k42 = h*f[y1 + k32, t1 + h];
{f, y1, t1, k12, k22, k32, k42, h}];
This subroutine oneRK4 can be used to collect the data into a list and then plot it, as the following example illustrates.

Example: Let us consider the differential equation with oscillating slope function:

\[ \dot{y} = {\text d}y/{\text d}t = y\,\cos (t+y) . \]
Before starting calculation, it is convenient to set up the rules:
SetOptions[EvaluationNotebook[], Background -> White]
You might have multiple notebooks, one for input and many for evaluation. EvaluationNotebook[] gives you the evaluating notebook i.e. the one running the code.
list = {};
Clear[f];
f[y_, t_] := y*Cos[t + y];
h = 0.01; t0 = 0.0; y0 = 1;
t1 = t0 + h;
k1 = h*f[y0, t0];
k2 = h*f[y0 + k1*0.5, t0 + h*0.5];
k3 = h*f[y0 + k2*0.5, t0 + h*0.5];
k4 = h*f[y0 + k3, t0 + h];
n = 3000;
NestList[(AppendTo[list, {#, oneRK4[#]}]; oneRK4[#]) &, {f, y0, t0, k1, k2, k3, k4, h}, n];
list
Then we collect data and plot.
xy = Table[{list[[i]][[1]][[3]], list[[i]][[1]][[2]]}, {i, 1, Length@list}];
ListLinePlot[xy, PlotRange -> All, PlotLabel->"solution to y'=y*cos(t+y)", PlotStyle->{Thickness[0.01]}]
You may want to compare with the standard Mathematica output provided by NDSolve:
s = NDSolve[{y'[x] == y[x] Cos[x + y[x]], y[0] == 1}, y, {x, 0, 30}];
Plot[Evaluate[y[x] /. s], {x, 0, 30}, PlotRange -> All]
   ■
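For readers who want an independent check outside Mathematica, here is a Python sketch of the same computation; since no closed-form solution is available, step halving is used in place of NDSolve as the reference (the helper name rk4 is our choice):

```python
import math

def rk4(f, t0, y0, h, n):
    """Classical RK4 stepper for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(n):
        k1 = h * f(t, y)
        k2 = h * f(t + h/2, y + k1/2)
        k3 = h * f(t + h/2, y + k2/2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

f = lambda t, y: y * math.cos(t + y)       # the oscillating slope function
y_coarse = rk4(f, 0.0, 1.0, 0.01, 500)     # integrate to t = 5 with h = 0.01
y_fine   = rk4(f, 0.0, 1.0, 0.005, 1000)   # same interval with h = 0.005
print(y_coarse, abs(y_coarse - y_fine))    # the two runs agree closely
```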
Example: We revisit our example \( y' = x^2 - y^2 , \quad y(1) = 1 , \) and solve it on [1, 2] with step size h = 0.1 in the following way:
Clear[y]
f[x_, y_] := x^2 - y^2           (* this function is chosen as an example *)
y[0] = 1; h = 0.1;               (* y[n] approximates the solution at x = 1 + n*h *)
Do[k1 = h f[1 + h n, y[n]];
k2 = h f[1 + h (n + .5), y[n] + k1 / 2];
k3 = h f[1 + h (n + .5), y[n] + k2 / 2];
k4 = h f[1 + h (n + 1), y[n] + k3];
y[n + 1] = y[n] + 1 / 6 * (k1 + 2 k2 + 2 k3 + k4),
{n, 0, 9}]
y[10]

Other implementations of the classical Runge--Kutta method follow (in the first one the step size h is specified, while in the second one the number of steps is given):

RK4[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, xhalf, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, steps}, steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
xhalf = xold + h/2;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold + k2*h/2};
k4 = f /. {x -> xnew, y -> yold + k3*h};
ynew = yold + (h/6)*(k1 + 2*k2 + 2*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]

Another version when the number of steps is specified.

RK4s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, xhalf, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, h}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
xhalf = xold + h/2;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold + k2*h/2};
k4 = f /. {x -> xnew, y -> yold + k3*h};
ynew = yold + (h/6)*(k1 + 2*k2 + 2*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
Here is another version of the classical RK4 algorithm:
RK4th[n_, x0_, y0_, xf_, f_] := Module[{Y, h, i, k1, k2, k3, k4},
h = (xf - x0)/n;
Y = y0;
For[i = 0, i < n, i++,
k1 = f[x0 + i*h, Y];
k2 = f[x0 + i*h + 0.5*h, Y + 0.5*k1*h];
k3 = f[x0 + i*h + 0.5*h, Y + 0.5*k2*h];
k4 = f[x0 + i*h + h, Y + k3*h];
Y = Y + 1/6*(k1 + 2*k2 + 2*k3 + k4)*h]; Y]
The above procedure estimates the solution of an ordinary differential equation at the point xf, where
n = number of steps
x0 = boundary condition for x
y0 = boundary condition for y
xf = value of x at which y is desired
f = differential equation in the form dy/dx = f(x,y)

The following loop calculates these quantities:
EV = exact value of the solution at xf (assumed known)
AV = approximate value of the ODE solution
Et = true error
et = absolute relative true error percentage
Ea = approximate error
ea = absolute relative approximate error percentage
sig = least number of significant digits correct in approximation

nth=7;
Do[
Nn[i]=2^i;
H[i]=(xf-x0)/Nn[i];
AV[i]=RK4th[2^i,x0,y0,xf,f];
Et[i]=EV-AV[i];
et[i]=Abs[(Et[i]/EV)]*100.0;
If[i>0,
Ea[i]=AV[i]-AV[i-1];
ea[i]=Abs[Ea[i]/AV[i]]*100.0;
sig[i]=Floor[(2-Log[10,ea[i]/0.5])];
If[sig[i] < 0, sig[i] = 0];
] ,{i,0,nth}];
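The point of the loop above is that for a fourth-order method the true error should shrink by a factor of about 2^4 = 16 each time the number of steps is doubled. Here is a small Python check of this convergence rate, using y' = y with exact value e at x = 1 as an assumed test problem (not the one in the tutorial's loop):

```python
import math

def rk4(f, x0, y0, xf, n):
    """Classical RK4 with n steps from x0 to xf."""
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

# Halving h should divide the true error by roughly 2**4 = 16.
e1 = abs(rk4(lambda x, y: y, 0.0, 1.0, 1.0, 8) - math.e)    # n = 8
e2 = abs(rk4(lambda x, y: y, 0.0, 1.0, 1.0, 16) - math.e)   # n = 16
print(e1 / e2)   # ratio close to 16
```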

Example: Consider the IVP: \( y' = 1/(3x - 2y + 1) , \quad y(0) = 0 .\)

f[x_, y_] = 1/(3*x - 2*y + 1)
RK4s[f[x, y], {0, 1}, 0, 10]         (* 10 is the number of steps *)
Out[3]= {1., 0.614281}
f[x_, y_] = 1/(3*x - 2*y + 1)
RK4[f[x, y], {0, 1}, 0, 0.1]         (* h=0.1 is the step size *)
Out[5]= {1., 0.614281}

Here is a slightly different version of the above code:

RK4[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
k1 = N[f /. {x -> xold, y -> yold}];
k2 = N[f /. {x -> xold + h/2, y -> yold + h*k1/2}];
k3 = N[f /. {x -> xold + h/2, y -> yold + k2*h/2}];
k4 = N[f /. {x -> xold + h, y -> yold + k3*h}];
ynew = N[yold + h*(k1 + 2*k2 + 2*k3 + k4)/6];
sollist = Append[sollist, {xnew, ynew}];
xold = xnew; yold = ynew, {steps}];
Return[sollist]]

Example: We apply the standard Runge--Kutta method of order 4 to find approximate values of the solution to the IVP \( y' = x^2 - y^2 , \quad y(1) = 1 . \)

f[x_, y_] = x^2 - y^2
RK4s[f[x, y], {1, 2}, 1, 10]
Out[22]= {2., 1.70189}
Here is yet another version, applied to the IVP \( y' = 1/y^3 - 2\,x^2 , \quad y(0) = 1 \) on the interval [0, 2]:

Clear[x,y,a,b,n,h,k1,k2,k3,k4,k,i];
y[1] = 1; h =0.1;
a = 0; b = 2; n = Floor[(b-a)/h];
Do[x[i] = 0.0 + (i-1)*h, {i,1,n+1}]

f[x_,y_] = 1/y^3 - 2*x^2;
Do[{k1 = f[x[i],y[i]],
k2 = f[x[i]+h/2, y[i]+h*k1/2],
k3 = f[x[i]+h/2, y[i]+h*k2/2],
k4 = f[x[i]+h, y[i]+h*k3],
k = (k1+2*k2+2*k3+k4)/6;
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]

The same one-step idea applies to other Runge--Kutta schemes; for instance, a one-step Heun (second-order Runge--Kutta) helper can be coded as follows:
SetOptions[EvaluationNotebook[], Background -> White]
oneHeun[args_] :=
Module[{f, y0, t0, k1, k2, h, y1, t1, k12, k22},
{f, y0, t0, k1, k2, h} = args;
y1 = y0 + ((k1 + k2)/2.0);
t1 = t0 + h;
k12 = h*f[y1, t1];
k22 = h*f[y1 + k12, t1 + h];
{f, y1, t1, k12, k22, h} ];
Next, we prepare the output in several different forms. We collect references to the possible outputs into an association: Mathematica delimits an association with <| and |>, and whatever appears between these two marks is a sequence of key -> value rules.
HeunNETLIST[args_, n_, syms_] :=
Module[{list, f, y0, t0, k1, k2, h, xy, tmp},
list = {};
{f, y0, t0, k1, k2, h} = args;
NestList[(AppendTo[list, {#, oneHeun[#]}]; oneHeun[#]) &, {f, y0, t0, k1, k2, h}, n];
(* this command executes the loop *)

xy = Table[{list[[i]][[1]][[3]], list[[i]][[1]][[2]]}, {i, 1, Length@list}];
<|
"list" -> list,
"poly line" -> xy,
"plot" -> ListLinePlot[xy, PlotRange -> All, AxesLabel -> syms, PlotStyle -> Thick],
"table" -> (Dataset@Table[tmp = Drop[list[[i]][[1]], 1];
<|
syms[[1]] -> tmp[[2]], syms[[2]] -> tmp[[1]],
ToString[Subscript["k", 1], TraditionalForm] -> tmp[[3]],
ToString[Subscript["k", 2], TraditionalForm] -> tmp[[4]]
|>,
{i, 1, Length@list}])
|>
];

Example: We consider the initial value problem with oscillating slope function:

\[ \dot{y} = y\,\cos \left( t + y \right) , \qquad y(0) = 1 , \]
where \( \dot{y} = {\text d}y/{\text d}t \) is the derivative of y with respect to time variable t.    ■

 

II. Kutta Algorithm of order 4


Kutta's algorithm (1901) of order four:

\begin{equation} \label{E35.11} y_{n+1} = y_n + \frac{h}{8}\, ( k_1 + 3 k_2 + 3 k_3 + k_4 ), \end{equation}
where
\[ \begin{array}{l} k_1 = f_n = f(x_n , y_n ), \\ k_2 = f\left( x_n + \frac{h}{3}\, , y_n + \frac{h}{3}\, k_1 \right) , \\ k_3 =f\left( x_n + \frac{2h}{3}\, , y_n - \frac{h}{3}\, k_1 + h k_2 \right) , \\ k_4 = f( x_n + h, y_n + h k_1 - h k_2 + h k_3 ). \end{array} \]
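As a cross-check on the formulas above, here is a short Python sketch of Kutta's 3/8-type rule (the function name kutta38 is our choice), applied to the tutorial's test problem:

```python
def kutta38(f, x0, y0, xf, n):
    """Kutta's fourth-order rule (1901) with weights h/8 * (1, 3, 3, 1)."""
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/3, y + h*k1/3)
        k3 = f(x + 2*h/3, y - h*k1/3 + h*k2)
        k4 = f(x + h, y + h*(k1 - k2 + k3))
        y += h * (k1 + 3*k2 + 3*k3 + k4) / 8
        x += h
    return y

# The tutorial's test problem y' = x^2 - y^2, y(1) = 1 on [1, 2].
y = kutta38(lambda x, y: x*x - y*y, 1.0, 1.0, 2.0, 10)
print(y)   # about 1.7019, matching the classical RK4 result above
```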
kutta4s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, h}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xold + h/3, y -> yold + k1*h/3};
k3 = f /. {x -> xold + 2*h/3, y -> yold - h*k1/3 + k2*h};
k4 = f /. {x -> xnew, y -> yold + h*(k1 - k2 + k3)};
ynew = yold + (h/8)*(k1 + 3*k2 + 3*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew; yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
If one wants to see the whole list of intermediate values calculated by this algorithm, then replace the last line with the following:
Return[sollist]]

Example: Consider the IVP: \( y' = x^2 - y^2 , \quad y(1) = 1 .\)

f[x_, y_] = x^2 - y^2
kutta4s[f[x, y], {1, 2}, 1, 10]
Out[24]= {2., 1.7019}

Here is a slightly different version of Kutta's algorithm:

Kutta4[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
  Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   k1 = N[f[xold, yold]];
   k2 = N[f[xold + h/3, yold + h*k1/3]];
   k3 = N[f[xold + h*2/3, yold - h*k1/3 + k2*h]];
   k4 = N[f[xold + h, yold + k3*h - k2*h + h*k1]];
   ynew = N[yold + h*(k1 + 3*k2 + 3*k3 + k4)/8];
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]
Now the same algorithm, but with the step size specified:
kutta4[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, steps}, steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xold + h/3, y -> yold + k1*h/3};
k3 = f /. {x -> xold + 2*h/3, y -> yold - h*k1/3 + k2*h};
k4 = f /. {x -> xnew, y -> yold + h*(k1 - k2 + k3)};
ynew = yold + (h/8)*(k1 + 3*k2 + 3*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
If one wants to see the whole list of intermediate values calculated by this algorithm, then replace the last line with the following:
Return[sollist]]

We demonstrate other implementations of Kutta's algorithm using Mathematica. For simplicity, we reconsider the IVP: \( y' = x^2 - y^2 , \quad y(1) =1 .\)

f[x_, y_] = x^2 - y^2
kutta4[f[x, y], {1, 2}, 1, 0.1]
Out[26]= {2., 1.7019}

Now we apply the version of RK4 defined above whose last line returns the whole solution list:

RK4[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, 10]

Out[3]= {{0, 0}, {0.1, 0.0950252}, {0.2, 0.180361}, {0.3, 0.256689}, {0.4,
0.32492}, {0.5, 0.386033}, {0.6, 0.440966}, {0.7, 0.49057}, {0.8,
0.535585}, {0.9, 0.576644}, {1., 0.614281}}

Another version, applied to the IVP \( y' = 1/y^3 - 2\,x^2 , \quad y(0) = 1 \):

Clear[x,y,a,b,n,h,k1,k2,k3,k4,k,i];
y[1] = 1; h =0.1;
a = 0; b = 2; n = Floor[(b-a)/h];
Do[x[i] = 0.0 + (i-1)*h, {i,1,n+1}]

f[x_,y_] = 1/y^3 - 2*x^2;
Do[{k1 = f[x[i],y[i]],
k2 = f[x[i]+h/3, y[i]+h*k1/3],
k3 = f[x[i]+h*2/3, y[i]+h*k2-h*k1/3],
k4 = f[x[i]+h, y[i]+h*k3+h*k1-h*k2],
k = (k1+3*k2+3*k3+k4)/8;
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]

 

III. Gill's Algorithm of order 4


Gill's method:

\begin{equation} \label{E35.12} y_{n+1} = y_n + \frac{h}{6} \left[ k_1 + 2 \left( 1 - \frac{1}{\sqrt{2}} \right) k_2 + 2 \left( 1 + \frac{1}{\sqrt{2}} \right) k_3 + k_4 \right] , \end{equation}
where
\[ \begin{array}{l} k_1 = f(x_n , y_n ), \\ k_2 = f\left( x_n + \frac{h}{2}\, , y_n + \frac{h}{2}\, k_1 \right) , \\ k_3 =f\left( x_n + \frac{h}{2}\, , y_n - \left( \frac{1}{2} - \frac{1}{\sqrt{2}} \right) h k_1 + \left( 1 - \frac{1}{\sqrt{2}} \right) h k_2 \right) , \\ k_4 = f( x_n + h, y_n - \frac{1}{\sqrt{2}} h k_2 + \left( 1 + \frac{1}{\sqrt{2}} \right) h k_3 ). \end{array} \]
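Before the Mathematica codes, here is a compact Python sketch of Gill's method (function name gill4 is ours) run on the same test problem used throughout this section:

```python
import math

def gill4(f, x0, y0, xf, n):
    """Gill's fourth-order method; r2 = 1/sqrt(2) enters the weights."""
    h = (xf - x0) / n
    r2 = 1.0 / math.sqrt(2.0)
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y - h*k1*(0.5 - r2) + h*k2*(1 - r2))
        k4 = f(x + h, y - h*k2*r2 + h*k3*(1 + r2))
        y += h * (k1 + 2*(1 - r2)*k2 + 2*(1 + r2)*k3 + k4) / 6
        x += h
    return y

# Again y' = x^2 - y^2, y(1) = 1 on [1, 2] with 10 steps.
y = gill4(lambda x, y: x*x - y*y, 1.0, 1.0, 2.0, 10)
print(y)   # about 1.70189, in agreement with the classical RK4 value
```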
gill4[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, x, y, steps,
r2, xnew, ynew, xhalf, k1, k2, k3, k4},
steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
xhalf = xold + h/2; r2 = 1/Sqrt[2];
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold - h*k1*(1/2 - r2) + k2*h*(1 - r2)};
k4 = f /. {x -> xnew, y -> yold - r2*h*k2 + h*k3*(1 + r2)};
ynew = yold + (h/6)*(k1 + 2*(1 - r2)*k2 + 2*(1 + r2)*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
f[x_, y_] = x^2 - y^2
gill4[f[x, y], {1, 2}, 1, 0.1]
Out[28]= {2., 1.70189}

Another version:

gill4s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, x, y, h, r2, xnew, ynew, xhalf, k1, k2, k3, k4},
h = N[(xn - x0)/steps];
Do[xnew = xold + h; xhalf = xold + h/2;
r2 = 1/Sqrt[2]; k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold - h*k1*(1/2 - r2) + k2*h*(1 - r2)};
k4 = f /. {x -> xnew, y -> yold - r2*h*k2 + h*k3*(1 + r2)};
ynew = yold + (h/6)*(k1 + 2*(1 - r2)*k2 + 2*(1 + r2)*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]

Then we execute the code by revisiting our previous example: \( y' = x^2 - y^2 , \quad y(1) = 1 .\)

f[x_, y_] = x^2 - y^2
gill4s[f[x, y], {1, 2}, 1, 10]

Out[4]= {2., 1.70189}

 

Gill4[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
  Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   k1 = N[f[xold, yold]];
   k2 = N[f[xold + h/2, yold + h*k1/2]];
   k3 = N[f[xold + h/2, yold - h*k1*(1/2 - 1/Sqrt[2]) + k2*h*(1 - 1/Sqrt[2])]];
   k4 = N[f[xold + h, yold - k2*h/Sqrt[2] + h*k3*(1 + 1/Sqrt[2])]];
   ynew = N[yold + h*(k1 + 2*(1 - 1/Sqrt[2])*k2 + 2*(1 + 1/Sqrt[2])*k3 + k4)/6];
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]

Example: Application of Gill's method to the initial value problem \( y' = 1 - x + 4y , \quad y(0) = 1 \) can be accomplished as follows.

f[x_, y_] = 1 - x + 4 y
Out[2]= 1 - x + 4 y
Gill4[f, {x, 0, 1}, {y, 1}, 10]
Out[4]= {{0, 1}, {0.1, 1.60893}, {0.2, 2.50501}, {0.3, 3.82941}, {0.4, 5.79279}, {0.5, 8.70932}, {0.6, 13.0477}, {0.7, 19.5071}, {0.8, 29.1306}, {0.9, 43.474}, {1., 64.8581}}

Another version, applied to the IVP \( y' = 1/y^3 - 2\,x^2 , \quad y(0) = 1 \):

Clear[x,y,a,b,n,h,k1,k2,k3,k4,k,i];
y[1] = 1; h =0.1;
a = 0; b = 2; n = Floor[(b-a)/h];
Do[x[i] = 0.0 + (i-1)*h, {i,1,n+1}]

f[x_,y_] = 1/y^3 - 2*x^2;
ap = N[1+1/Sqrt[2]];
am = N[1-1/Sqrt[2]];
amm = N[1/2-1/Sqrt[2]];
Do[{k1 = N[f[x[i],y[i]]],
k2 = N[f[x[i]+h/2, y[i]+h*k1/2]],
k3 = N[f[x[i]+h/2, y[i]+h*k2*am-h*k1*amm]],
k4 = N[f[x[i]+h, y[i]-h*k2/Sqrt[2]+h*k3*ap]],
k = N[(k1+2*k2*am+2*k3*ap+k4)/6];
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]

 

IV. Fifth order Butcher's method


Here is the fifth order Runge--Kutta method proposed by Butcher:

\begin{equation} \label{E36.butcher} y_{k+1} = y_k + \frac{h}{90} \left( 7k_1 + 32 k_3 + 12 k_4 + 32 k_5 + 7 k_6 \right) , \qquad k=0,1,\ldots , \end{equation}
where
\[ \begin{align*} k_1 &= f(x_k , y_k ), \qquad &k_2 = f \left( x_k + h/4 , y_k + hk_1 /4 \right) , \\ k_3 &= f \left( x_k + h/4 , y_k + k_1 h/8 + k_2 h/8 \right), \qquad &k_4 = f \left( x_k + h/2 , y_k - k_2 h/2 + k_3 h \right) , \\ k_5 &= f \left( x_k + 3h/4 , y_k + 3k_1 h/16 +9 k_4 h/16 \right) , \qquad &k_6 = f \left( x_k + h , y_k - 3k_1 h/7 + 2k_2 h/7 + 12k_3 h/7 -12k_4 h/7 + 8k_5 h/7 \right) . \end{align*} \]
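The six-stage formula above can be sketched directly in Python as a cross-check (the function name butcher5 and the linear test problem are our choices; the exact value used in the comment comes from the DSolve check later in this section):

```python
def butcher5(f, x0, y0, xf, n):
    """Butcher's fifth-order Runge--Kutta method (six stages)."""
    h = (xf - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/4, y + h*k1/4)
        k3 = f(x + h/4, y + h*k1/8 + h*k2/8)
        k4 = f(x + h/2, y - h*k2/2 + h*k3)
        k5 = f(x + 3*h/4, y + 3*h*k1/16 + 9*h*k4/16)
        k6 = f(x + h, y + h*(-3*k1 + 2*k2 + 12*k3 - 12*k4 + 8*k5)/7)
        y += h * (7*k1 + 32*k3 + 12*k4 + 32*k5 + 7*k6) / 90
        x += h
    return y

# y' = 1 - x + 4y, y(0) = 1 has the exact value y(1) = (19 e^4 + 1)/16 ≈ 64.8978.
y = butcher5(lambda x, y: 1 - x + 4*y, 0.0, 1.0, 1.0, 10)
print(y)   # about 64.898
```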
We implement the above algorithm in Mathematica, applying it to the IVP \( y' = 1/y^3 - 2\,x^2 , \quad y(0) = 1 \) used earlier:
Clear[x,y,a,b,n,h,k1,k2,k3,k4,k5,k6,k,i,ap,am,amm];
y[1] = 1; h =0.1;
a = 0; b = 2; n = Floor[(b-a)/h];
Do[x[i] = 0.0 + (i-1)*h, {i,1,n+1}]

f[x_,y_] = 1/y^3 - 2*x^2;
Do[{k1 = N[f[x[i],y[i]]],
k2 = N[f[x[i]+h/4, y[i]+h*k1/4]],
k3 = N[f[x[i]+h/4, y[i]+h*k1/8+h*k2/8]],
k4 = N[f[x[i]+h/2, y[i]+h*k3-k2*h/2]],
k5 = N[f[x[i]+h*3/4, y[i]+3*h*k1/16+9*h*k4/16]],
k6 = N[f[x[i]+h, y[i]-3*h*k1/7+2*h*k2/7 +12*k3*h/7-12*h*k4/7 +8*k5*h/7]],
k = N[(7*k1+32*k3+12*k4+32*k5 +7*k6)/90];
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]

Another version of Butcher's method:

RK5[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
  Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   (* Print["h= ", h, " xold= ", xold, " yold= ", yold]; *)
   k1 = N[f[xold, yold]];
   k2 = N[f[xold + h/4, yold + h*k1/4]];
   k3 = N[f[xold + h/4, yold + h*k1/8 + k2*h/8]];
   k4 = N[f[xold + h/2, yold - k2*h/2 + k3*h]];
   k5 = N[f[xold + 3*h/4, yold + 3*k1*h/16 + 9*k4*h/16]];
   k6 = N[
     f[xnew, yold + 2*k2*h/7 - 3*k1*h/7 + 12*k3*h/7 - 12*k4*h/7 +
       8*k5*h/7]];
   ynew = N[yold + h*(7*k1 + 32*k3 + 12*k4 + 32*k5 + 7*k6)/90];
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]

Example: Now we present an example of using this algorithm. Let \( f(x,y) = 1 - x + 4y .\) Applying RK5 to solve numerically the initial value problem \( y' = f(x,y) , \quad y(0) = 1 \) yields

f[x_, y_] = 1 - x + 4 y
Out[4]= 1 - x + 4 y
solution = RK5[f, {x, 0, 1}, {y, 1}, 10]
Out[5]= {{0, 1}, {0.1, 1.60904}, {0.2, 2.50533}, {0.3, 3.83014}, {0.4, 5.79423}, {0.5, 8.71201}, {0.6, 13.0525}, {0.7, 19.5156}, {0.8, 29.1449}, {0.9, 43.498}, {1., 64.898}}
Now we check our answer. First, we solve the initial value problem explicitly because the given differential equation is linear:
s = Flatten[DSolve[{y'[x] == 1 - x + 4*y[x], y[0] == 1}, y[x], x]]
{y[x] -> 1/16 (-3 + 19 E^(4 x) + 4 x)}
f[x_] = Flatten[y[x] /. s]
1/16 (-3 + 19 E^(4 x) + 4 x)
N[f[1]]
64.8978

 

V. Runge--Kutta--Fehlberg Method (RKF45)


One way to guarantee accuracy in the solution of an initial value problem is to solve the problem twice, using step sizes h and h/2, and compare answers at the mesh points corresponding to the larger step size. However, this requires a significant amount of computation for the smaller step size, and the computation must be repeated if it is determined that the agreement is not good enough.

The Runge--Kutta--Fehlberg method (denoted RKF45), or Fehlberg method, was developed by the German mathematician Erwin Fehlberg (1911--1990) in a 1969 NASA report. The novelty of Fehlberg's method is that it is an embedded method from the Runge--Kutta family: it includes a procedure to determine whether the proper step size h is being used. At each step, two different approximations for the solution are made and compared. If the two answers are in close agreement, the approximation is accepted. If the two answers do not agree to a specified accuracy, the step size is reduced. If the answers agree to more significant digits than required, the step size is increased. The RKF45 method is of order \( O( h^4 ) \) with an error estimator of order \( O( h^5 ) . \)

Initially, Erwin Fehlberg introduced a so-called six-stage Runge--Kutta method that requires six function evaluations per step. These function values can be combined with one set of coefficients to produce a fifth-order accurate approximation and with another set of coefficients to produce an independent fourth-order accurate approximation. Comparing these two approximations provides an error estimate and the resulting step-size control. Notice that it takes six stages to get fifth order: it is not possible to get fifth order with only five function evaluations per step. The new ode45 introduced in MATLAB in the late 1990s is based on an algorithm of Dormand and Prince (DOPRI). It uses six stages, employs the FSAL strategy, provides fourth- and fifth-order formulas, and has local extrapolation and a companion interpolant. The new ode45 is so effective that the interpolant is called up to produce more finely spaced output at no additional cost measured in function evaluations. The codes for the two routines ode23 and ode45 are very similar.

Butcher tableau for Fehlberg's 4(5) method

\[ \begin{array}{c|cccccc} 0&& \\ \frac{1}{4} & \frac{1}{4} \\ \frac{3}{8} & \frac{3}{32} & \frac{9}{32} \\ \frac{12}{13} & \frac{1932}{2197} & - \frac{7200}{2197} & \frac{7296}{2197} \\ 1 & \frac{439}{216} & -8 & \frac{3680}{513} & - \frac{845}{4104} \\ \frac{1}{2} & -\frac{8}{27} & 2 & - \frac{3544}{2565} & \frac{1859}{4104} & - \frac{11}{40} \\ \hline \mbox{order } O(h^4 ) & \frac{25}{216} & 0 & \frac{1408}{2565} & \frac{2197}{4104} & - \frac{1}{5} & 0 \\ \mbox{order } O(h^5 ) & \frac{16}{135} & 0 & \frac{6656}{12825} & \frac{28561}{56430} & - \frac{9}{50} & \frac{2}{55} \end{array} . \]
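The tableau can be read off into code directly. Here is a Python sketch of a single Fehlberg step that returns both approximations and the built-in error estimate (the function name rkf45_step is our choice):

```python
import math

def rkf45_step(f, t, x, h):
    """One Fehlberg 4(5) step: returns (x4, x5, err) with err = |x5 - x4|."""
    k1 = h * f(t, x)
    k2 = h * f(t + h/4, x + k1/4)
    k3 = h * f(t + 3*h/8, x + 3*k1/32 + 9*k2/32)
    k4 = h * f(t + 12*h/13, x + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
    k5 = h * f(t + h, x + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
    k6 = h * f(t + h/2, x - 8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40)
    x4 = x + 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5
    x5 = x + 16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55
    return x4, x5, abs(x5 - x4)

# For x' = x, x(0) = 1, one step of size h = 0.1 should land near e^0.1,
# and the error estimate should be tiny.
x4, x5, err = rkf45_step(lambda t, x: x, 0.0, 1.0, 0.1)
print(x5, abs(x5 - math.exp(0.1)), err)
```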

Here is a pseudocode for the Runge--Kutta--Fehlberg method RK45:

procedure RK45(f,t,x,h,ε)
real ε, k1, k2, k3, k4, k5, k6, h, t, x, x4
external function f
real c20 ← 0.25, c21 ← 0.25
real c30 ← 0.375, c31 ← 0.09375, c32 ← 0.28125
real c40 ← 12./13., c41 ← 1932./2197.
real c42 ← -7200./2197., c43 ← 7296./2197.
real c51 ← 439./216., c52 ← -8.
real c53 ← 3680./513., c54 ← -845./4104.
real c60 ← 0.5, c61 ← -8./27., c62 ← 2.
real c63 ← -3544./2565., c64 ← 1859./4104.
real c65 ← -0.275
real a1 ← 25./216., a2 ← 0., a3 ← 1408./2565.
real a4 ← 2197./4104., a5 ← -0.2
real b1 ← 16./135., b2 ← 0., b3 ← 6656./12825.
real b4 ← 28561./56430., b5 ← -0.18
real b6 ← 2./55.
k1 ← h∗f(t, x)
k2 ← h∗f(t + h∗c20, x + c21∗k1)
k3 ← h∗f(t + h∗c30, x + c31∗k1 + c32∗k2)
k4 ← h∗f(t + h∗c40, x + c41∗k1 + c42∗k2 + c43∗k3)
k5 ← h∗f(t + h, x + c51∗k1 + c52∗k2 + c53∗k3 + c54∗k4)
k6 ← h∗f(t + h∗c60, x + c61∗k1 + c62∗k2 + c63∗k3 + c64∗k4 + c65∗k5)
x4 ← x + a1∗k1 + a3∗k3 + a4∗k4 + a5∗k5
x ← x + b1∗k1 + b3∗k3 + b4∗k4 + b5∗k5 + b6∗k6
t ← t + h
ε ← |x - x4|
end procedure RK45

We now describe a simple adaptive procedure. In the RK45 procedure, the fourth- and fifth-order approximations for x(t+h), say x4 and x5, are computed from six function evaluations, and the error estimate ε = |x5 - x4| is known. From user-specified bounds on the allowable error estimate, the step size h is doubled or halved as needed to keep ε within these bounds. A range for the allowable step size h is also specified by the user. Clearly, the user must set the bounds carefully so that the adaptive procedure does not get caught in a loop, trying repeatedly to halve and double the step size from the same point to meet error bounds that are too restrictive for the given differential equation. The adaptive process includes the following steps.

  1. Given a step size h and an initial value x(t), the RK45 routine computes the value x(t+h) and an error estimate ε.
  2. If εmin < ε < εmax, then the step size h is not changed and the next step is taken by repeating step 1 with initial value x(t+h).
  3. If ε < εmin, then h is replaced by 2h, provided that |2h| ≤ hmax.
  4. If ε > εmax, then h is replaced by h/2, provided that |h/2| ≥ hmin.
  5. If hmin ≤ |h| ≤ hmax, then the step is repeated by returning to step 1 with x(t) and the new h value.
The corresponding algorithm is presented below under the name RK45adaptive. In the parameter list of the pseudocode, f is the function f(t,x) for the differential equation \( x' = f(t,x) , \) variables t and x contain the initial values, h is the initial step size, t0 is the initial value of the independent variable t, tb is the final value for t, itmax is the maximum number of steps to be taken in going from a = t0 to b = tb, εmin and εmax are lower and upper bounds on the allowable error estimate ε, hmin and hmax are bounds on the step size h, and iflag is an error flag that returns one of the following values.
iflag meaning
0 Successful march from t0 to tb
1 Maximum number of iterations reached
procedure RK45adaptive(f, t, x, h, tb, itmax, εmin, εmax, hmin, hmax, iflag)
integer iflag, itmax, k;    external function f
real ε, εmin, εmax, d, h, hmin, hmax, t, tb, x, xsave, tsave
real  δ ← ½×10-5
output   0, h, t, x
iflag ← 1
k ← 0
while   k < itmax
     k ← k + 1
     if |h| < hmin then   h ← sign(h)hmin
     if |h| > hmax then   h ← sign(h)hmax
     d ← |tb - t|
     if d ≤ |h| then
          iflag ← 0
          if d ≤ δ ⋅ max{ |tb|, |t|}   then   exit loop
          h ← sign(h)d
     end if
     xsave ← x
     tsave ← t
     call RK45(f, t, x, h, ε)
     output k, h, t, x, ε
     if iflag == 0  then   exit loop
     if ε < εmin   then   h ← 2h
     if ε > εmax   then
          h ← h/2
          x ← xsave
          t ← tsave
          k ← k - 1
     end if
end while
end procedure RK45adaptive

The optimal step size can be determined by multiplying the current step size h by a scalar s, where

\[ s = \left( \frac{\mbox{Tol} \,h}{2 \, |z_{k+1} - y_{k+1} |} \right)^{1/4} \approx 0.84 \left( \frac{\mbox{Tol} \,h}{|z_{k+1} - y_{k+1} |} \right)^{1/4} , \]
where zk+1 is the approximation based on the fifth-order RK method and yk+1 is the approximation based on the fourth-order RK method.
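The formula is one line of code; here is a hypothetical helper (the name step_scale and the sample numbers are our choices) together with a check that the factor 0.84 is just (1/2)^(1/4) rounded:

```python
def step_scale(tol, h, z, y):
    """Scale factor for the optimal step: s = (tol*h / (2|z - y|))**(1/4)."""
    return (tol * h / (2.0 * abs(z - y))) ** 0.25

# With tol = 1e-5, h = 0.1 and an error estimate |z - y| of about 1e-7,
# the exact form and the 0.84-approximation agree closely.
s_exact = step_scale(1e-5, 0.1, 1.0000001, 1.0)
s_approx = 0.84 * (1e-5 * 0.1 / 1e-7) ** 0.25
print(s_exact, s_approx)   # s > 1 here, so the step size may be increased
```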

Other Runge--Kutta methods could be found on
https://en.wikipedia.org/wiki/List_of_Runge-Kutta_methods

Some other popular 4/5 versions of Runge--Kutta methods:

 

 

  1. Huta, A., Une amélioration de la méthode de Runge--Kutta--Nyström pour la résolution numérique des équations différentielles du premier ordre, Acta Fac. Nat. Univ. Comenian. Math., 1956, Vol. 1, pp. 201--224. MR 20 #3635
  2. Huta, A., Contribution à la formule de sixième ordre dans la méthode de Runge--Kutta--Nyström, Acta Fac. Nat. Univ. Comenian. Math., 1957, Vol. 2, pp. 21--24. (Slovak and Russian summaries) MR 20 #3636
  3. Kutta, W., Beitrag zur näherungsweisen Integration totaler Differentialgleichungen, Zeitschrift für angewandte Mathematik und Physik, 1901, Vol. 46, pp. 435--453.
  4. Nyström, E.J., Über die numerische Integration von Differentialgleichungen, Acta Societatis Scientiarum Fennicae, 1925, Vol. 50, No. 13, pp. 1--55.
  5. Runge, C., Über die numerische Auflösung von Differentialgleichungen, Mathematische Annalen, 1895, Vol. 46, Issue 2, pp. 167--178.
  6. Zurmühl, R., Runge-Kutta-Verfahren zur numerischen Integration von Differentialgleichungen n-ter Ordnung, ZAMM -- Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik, 1948, Vol. 28, Issue 6, pp. 173--182. MR 10, 21

 
