Preface


To be added


Runge--Kutta of order 2


The second-order Runge--Kutta method (denoted RK2) matches the accuracy of the Taylor series method of order 2. Although this method is not as good as the RK4 method, its derivation illustrates all the steps and principles involved. We consider the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 , \) which is assumed to have a unique solution. Using a uniform grid of points \( x_n = x_0 + n\, h \) with fixed step size h, we write down the Taylor series formula for y(x+h):
\[ y(x+h) = y(x) + h\,y' (x) + \frac{h^2}{2}\, y'' (x) + C\,h^3 + \cdots , \]
where C is a constant expressed through the third derivative of y(x) and the other terms in the series involve powers of h greater than 3.

The derivatives y' and y'' must be expressed in terms of f(x,y) and its partial derivatives according to the given differential equation \( y' = f(x,y) . \) So we have

\begin{align*} y' (x) &= f(x,y) , \\ y'' (x) &= f_x (x,y) + f(x,y)\,f_y (x,y) , \end{align*}
where \( f_x = \partial f/\partial x \quad \mbox{and} \quad f_y = \partial f/\partial y . \) Instead of evaluating these derivatives, the Runge--Kutta approach uses two evaluations of the slope function, and we seek an approximation of the form
\[ y(x+h) \approx y(x) + h\,b_1 f_0 + h\,b_2 f_1 , \]
where
\[ \begin{split} f_0 &= f(x,y) , \\ f_1 &= f(x+c_2 h , y + a_{2,1} h\, f_0 ) . \end{split} \]
Expanding \( f_1 \) in a Taylor series about the point \( (x,y) \) and comparing the resulting Runge--Kutta approximation of order 2 with the Taylor polynomial approximation,
\begin{align*} y(x+h ) &= y(x) + h\,f(x,y) + \frac{h^2}{2} \left[ f_x (x,y) + f(x,y)\,f_y (x,y) \right] + \cdots , \\ y(x+h ) &\approx y(x) + \left( b_1 + b_2 \right) h\,f(x,y) + b_2 c_2 h^2 f_x (x,y) + b_2 a_{2,1} h^2 f_y (x,y)\,f(x,y) + \cdots , \end{align*}
we derive the following conclusions:
\begin{align*} h\,f(x,y) &= \left( b_1 + b_2 \right) h\,f(x,y) \qquad \Longrightarrow \qquad &1=b_1 + b_2 , \\ \frac{h^2}{2} \,f_x (x,y) &= h^2 b_2 c_2 f_x (x,y) \qquad \Longrightarrow \qquad & \frac{1}{2} = b_2 c_2 , \\ \frac{h^2}{2} \,f_y (x,y)\, f(x,y) &= h^2 b_2 a_{2,1} f_y (x,y)\, f(x,y) \qquad \Longrightarrow \qquad & \frac{1}{2} = b_2 a_{2,1} . \end{align*}
Hence, if we require that the coefficients of the RK2 method satisfy the relations
\[ b_1 + b_2 =1 , \quad b_2 c_2 = \frac{1}{2} , \quad b_2 a_{2,1} = \frac{1}{2} , \]
then the RK2 method has the same order of accuracy as Taylor's method, namely, 2.
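These order conditions can also be verified symbolically in Mathematica. The following sketch uses our own auxiliary symbols: u stands for the current value of y, and dTot is the total derivative along solutions of \( y' = f(x,y) : \)

ClearAll[f, u, x, h, b1, b2, c2, a21];
dTot[g_] := D[g, x] + f[x, u]*D[g, u]; (* total derivative along solutions of y' = f *)
exact = u + h*f[x, u] + (h^2/2)*dTot[f[x, u]]; (* Taylor polynomial of the exact solution *)
step = u + h*(b1*f[x, u] + b2*f[x + c2*h, u + a21*h*f[x, u]]); (* one RK2 step *)
diff = Normal[Series[step - exact, {h, 0, 2}]];
Collect[diff, h, Simplify]
Simplify[diff /. {b1 -> 1/2, b2 -> 1/2, c2 -> 1, a21 -> 1}] (* 0 for Heun's choice *)

The coefficient of h vanishes exactly when b1 + b2 = 1, and the coefficient of h^2 vanishes when b2 c2 = b2 a21 = 1/2, in agreement with the relations above.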

Since there are only three equations in four unknowns, the system is underdetermined, and we may choose one of the coefficients freely. Several special choices have been studied in the literature; we mention some of them.

If we choose \( b_1 = 1/2 , \) this yields \( b_2 = 1/2 , \ c_2 = 1 , \) and \( a_{2,1} = 1 . \) So we get Heun's method:

\begin{equation} \label{EqHeun} \begin{split} k_1 &= f\left( x_n , y_n \right) , \\ k_2 &= f\left( x_n + h , y_n + h\,k_1 \right) , \\ y_{n+1} &= y_n + \frac{h}{2} \left( k_1 + k_2 \right) , \qquad n=0,1,2,\ldots . \end{split} \end{equation}

If we choose \( b_1 = 0 , \) this yields \( b_2 = 1 , \ c_2 = 1/2 , \) and \( a_{2,1} = 1/2 . \) So we get the modified Euler method (also known as the explicit midpoint method):

\begin{equation} \label{EqmEuler} \begin{split} k_1 &= f\left( x_n , y_n \right) , \\ y_{n+1} &= y_n + h\, f\left( x_n + \frac{h}{2} , y_n + \frac{h}{2}\,k_1 \right) , \qquad n=0,1,2,\ldots . \end{split} \end{equation}

Example: The Butcher table for the explicit midpoint method is given by:

\[ \begin{array}{c|cc} 0&0&0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \hline & 0 & 1 \end{array} \]
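The order-2 conditions derived above can be checked directly from any such tableau. Here is a quick sketch for the midpoint tableau, with A denoting the matrix of stage coefficients, b the weights, and c the nodes:

A = {{0, 0}, {1/2, 0}}; b = {0, 1}; c = {0, 1/2};
{Total[b] == 1, b . c == 1/2, A . {1, 1} == c} (* all True: consistency and order 2 *)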
One step of the RK2 (Heun) method can be coded as follows; the list {f, y, t, k1, k2, h} serves as the state that is passed from one step to the next:
oneRK2[args_] :=
 Module[{f, y0, t0, k1, k2, h, y1, t1, k12, k22},
  {f, y0, t0, k1, k2, h} = args;
  y1 = y0 + (k1 + k2)/2.0; (* advance y using the current h-scaled slopes *)
  t1 = t0 + h;
  k12 = h*f[y1, t1]; (* slopes for the next step, evaluated at the new point *)
  k22 = h*f[y1 + k12, t1 + h];
  {f, y1, t1, k12, k22, h}];
Next, we prepare the output in different forms (what computer scientists call "preprocessing"). The possible outputs are collected into a Mathematica association, which is opened with <| and closed with |>; whatever appears between these two marks is a list of key/value references.
HeunNETLIST[args_, n_, syms_] :=
 Module[{list, xy, tmp},
  (* iterate the one-step routine n times, keeping every intermediate state *)
  list = NestList[oneRK2, args, n];
  (* each state has the form {f, y, t, k1, k2, h}; extract the {t, y} pairs *)
  xy = {#[[3]], #[[2]]} & /@ list;
  <|
   "list" -> list,
   "poly line" -> xy,
   "plot" -> ListLinePlot[xy, PlotRange -> All, AxesLabel -> syms, PlotStyle -> Thick],
   "table" -> Dataset@Table[tmp = Rest[list[[i]]];
      <|
       syms[[1]] -> tmp[[2]], syms[[2]] -> tmp[[1]],
       ToString[Subscript["k", 1], TraditionalForm] -> tmp[[3]],
       ToString[Subscript["k", 2], TraditionalForm] -> tmp[[4]]
      |>,
      {i, 1, Length@list}]
  |>];

Example: We consider the initial value problem with an oscillating slope function:

\[ \dot{y} = y\,\cos \left( t + y \right) , \qquad y(0) = 1 , \]
where \( \dot{y} = {\text d}y/{\text d}t \) is the derivative of y with respect to the time variable t.
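A possible driver for the routines above is sketched below; the step size h = 0.1 and the 50 steps are our arbitrary choices. Note the f[y, t] argument order expected by oneRK2, and that the h-scaled slopes k1, k2 must be seeded at the initial point:

f[y_, t_] := y*Cos[t + y]; (* slope function of this example *)
h = 0.1; t0 = 0.0; y0 = 1.0;
k1 = h*f[y0, t0]; (* seed the h-scaled slopes at the initial point *)
k2 = h*f[y0 + k1, t0 + h];
res = HeunNETLIST[{f, y0, t0, k1, k2, h}, 50, {"t", "y"}];
res["plot"]    ■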

 

I. Second Order Optimal Runge-Kutta Approximation


Ralston's method is a second-order method with two stages and a minimum local error bound; its Mathematica realization is presented below. With the step size denoted by h, the method reads \begin{equation} \label{E35.opt} y_{n+1} =y_n + \frac{h}{4}\,f(x_n , y_n ) + \frac{3h}{4} \,f\left( x_n + \frac{2h}{3} , y_n + \frac{2h}{3} \,f(x_n , y_n ) \right) , \quad n=0,1,2,\ldots . \end{equation} Its Butcher tableau is

\[ \begin{array}{c|cc} 0&0&0 \\ \frac{2}{3} & \frac{2}{3} & 0 \\ \hline & \frac{1}{4} & \frac{3}{4} \end{array} . \]

RK2op[f_, {x0_, xn_}, y0_, h_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, steps, xnew, ynew, xthird, fn, k1},
  steps = Round[(xn - x0)/h];
  Do[xnew = xold + h;
   xthird = xold + 2*h/3;
   fn = f[xold, yold];
   k1 = f[xthird, yold + fn*2*h/3];
   ynew = yold + h*fn/4 + 3*h*k1/4;
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist[[steps + 1]]]]

f[x_, y_] = x + y
RK2op[f, {1, 2}, 1, 0.1]
Out[4]= {2., 5.14224}
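For reference, the exact solution of this test problem, \( y' = x+y , \ y(1) =1 , \) is \( y = 3\,e^{x-1} - x - 1 \) (check by substitution), so the accuracy of the computed value can be assessed directly; the helper yex below is ours:

yex[x_] = 3*Exp[x - 1] - x - 1; (* exact solution of y' = x + y, y(1) = 1 *)
yex[2.] (* 5.15485 *)

The second-order approximation 5.14224 is therefore in error by about 0.013.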

The same algorithm when the number of steps is specified:

RK2ops[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, x, y, n, h,
xthird, fn, ynew, xnew, k1}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
xthird = xold + 2*h/3;
fn = f[xold, yold];
k1 = f[xthird, yold + fn*2*h/3];
ynew = yold + h*fn/4 + 3*h*k1/4;
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist]]
f[x_, y_] = x + y
RK2ops[f, {1, 2}, 1, 10]
Out[6]= {{1, 1}, {1.1, 1.215}, {1.2, 1.46308}, {1.3, 1.7477}, {1.4,
2.07271}, {1.5, 2.44234}, {1.6, 2.86129}, {1.7, 3.33472}, {1.8,
3.86837}, {1.9, 4.46855}, {2., 5.14224}}

If only the final value is needed, then the Return command should be replaced with

Return[sollist[[steps + 1]]]]

 

II. Implicit Runge-Kutta of order 2


An implicit Runge--Kutta scheme with two stages is \begin{equation} \label{E36.implicit} y_{n+1} = y_n + \frac{h}{2}\left( k_1 + k_2 \right) , \end{equation} where \begin{align*} k_1 &= f\left( x_n + \frac{\sqrt{3} -1}{2\sqrt{3}}\,h , \ y_n + \frac{h}{4}\,k_1 + \frac{\sqrt{3} -2}{4\sqrt{3}}\,h\,k_2\right) , \\ k_2 &= f\left( x_n + \frac{\sqrt{3} +1}{2\sqrt{3}}\,h , \ y_n + \frac{\sqrt{3} +2}{4\sqrt{3}}\,h\,k_1 + \frac{h}{4}\,k_2\right) . \end{align*} Since the slopes k_1 and k_2 appear on both sides of their defining equations, every step requires solving a (generally nonlinear) system of two equations.
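These are precisely the coefficients of the two-stage Gauss--Legendre scheme; in tableau form,
\[ \begin{array}{c|cc} \frac{1}{2} - \frac{\sqrt{3}}{6} & \frac{1}{4} & \frac{1}{4} - \frac{\sqrt{3}}{6} \\ \frac{1}{2} + \frac{\sqrt{3}}{6} & \frac{1}{4} + \frac{\sqrt{3}}{6} & \frac{1}{4} \\ \hline & \frac{1}{2} & \frac{1}{2} \end{array} . \]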

First, we present the code when the step size h is given; at every step the two stage equations are solved numerically with FindRoot, using the explicit slope as the starting guess:

secondimp[f_, {x_, x0_, xn_}, {y_, y0_}, h_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, steps, xnew, ynew, fold, sol, f1, f2},
  steps = Round[(xn - x0)/h];
  Do[xnew = xold + h;
   fold = f /. {x -> xold, y -> yold}; (* explicit slope, used as the starting guess *)
   (* solve the two implicit stage equations for the slopes f1 and f2 *)
   sol = FindRoot[{
      f1 == (f /. {x -> xold + (Sqrt[3] - 1)/(2*Sqrt[3])*h,
          y -> yold + h/4*f1 + (Sqrt[3] - 2)/(4*Sqrt[3])*h*f2}),
      f2 == (f /. {x -> xold + (Sqrt[3] + 1)/(2*Sqrt[3])*h,
          y -> yold + (Sqrt[3] + 2)/(4*Sqrt[3])*h*f1 + h/4*f2})},
     {{f1, fold}, {f2, fold}}];
   ynew = yold + (h/2)*(sol[[1, 2]] + sol[[2, 2]]);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]
secondimp[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, 0.1]
Out[2]= {{0, 0}, {0.1, 0.0950239}, {0.2, 0.180358}, {0.3, 0.256686}, {0.4, 0.324916}, {0.5, 0.386028}, {0.6, 0.440961}, {0.7, 0.490565}, {0.8, 0.53558}, {0.9, 0.576638}, {1., 0.614275}}

Then the same algorithm when the number of steps is specified:

secondimps[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h, xnew, ynew, fold, sol, f1, f2},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   fold = f /. {x -> xold, y -> yold}; (* explicit slope, used as the starting guess *)
   (* solve the two implicit stage equations for the slopes f1 and f2 *)
   sol = FindRoot[{
      f1 == (f /. {x -> xold + (Sqrt[3] - 1)/(2*Sqrt[3])*h,
          y -> yold + h/4*f1 + (Sqrt[3] - 2)/(4*Sqrt[3])*h*f2}),
      f2 == (f /. {x -> xold + (Sqrt[3] + 1)/(2*Sqrt[3])*h,
          y -> yold + (Sqrt[3] + 2)/(4*Sqrt[3])*h*f1 + h/4*f2})},
     {{f1, fold}, {f2, fold}}];
   ynew = yold + (h/2)*(sol[[1, 2]] + sol[[2, 2]]);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]
secondimps[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, 10]
Out[5]= {{0, 0}, {0.1, 0.0950239}, {0.2, 0.180358}, {0.3, 0.256686}, {0.4,
0.324916}, {0.5, 0.386028}, {0.6, 0.440961}, {0.7, 0.490565}, {0.8,
0.53558}, {0.9, 0.576638}, {1., 0.614275}}

Using the Mathematica command NDSolve, we check the value obtained at x = 1:

s = NDSolve[{y'[x] == 1/(3*x - 2*y[x] + 1), y[0] == 0}, y, {x, 0, 1}]
Out[4]= {{y -> InterpolatingFunction[{{0., 1.}}, <>]}}
y[1.0] /. s
Out[5]= {0.614275}

in agreement with the value produced by secondimp and secondimps above.


 

 

III. Ralston's method


The method was proposed by Anthony Ralston (born in 1930 in New York City), who received his Ph.D. from the Massachusetts Institute of Technology in 1956 under the direction of Eric Reissner. For approximating the solution of the initial value problem
\[ y' = f \left( x, y \right) , \qquad y(x_0 ) = y_0 , \]
Ralston's second-order method gives
\[ y_{i+1} = y_i + \frac{h}{3} \left( k_1 + 2\,k_2 \right) , \qquad x_i = x_0 + i\,h, \]
where
\[ \begin{split} k_1 &= f\left( x_i , y_i \right) , \\ k_2 &= f\left( x_i + \frac{3}{4}\, h , y_i +\frac{3}{4}\, h \, k_1 \right) . \end{split} \]
The Butcher tableau for Ralston's algorithm is
\[ \begin{array}{c|cc} 0&0&0 \\ \frac{3}{4} & \frac{3}{4} & 0 \\ \hline & \frac{1}{3} & \frac{2}{3} \end{array} . \]

The second-order Ralston approximation can be written as follows:

\begin{equation*} %\label{E35.ralston} y_{n+1} =y_n + \frac{h}{3}\,f(x_n , y_n ) + \frac{2h}{3} \,f\left( x_n + \frac{3h}{4} , y_n + \frac{3h}{4} \,f(x_n , y_n ) \right) , \quad n=0,1,2,\ldots . \end{equation*}
When the number of steps is specified, the method can be coded as follows:

Ralston2s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, x, y, n, h, xnew,
ynew, xthird, k1, fn}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
xthird = xold + 3*h/4;
fn = f[xold, yold];
k1 = f[xthird, yold + fn*3*h/4];
ynew = yold + h*fn/3 + 2*h*k1/3;
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
f[x_, y_] = x + y
Ralston2s[f, {1, 2}, 1, 10]
Out[10]= {2., 5.14224}

The same algorithm when a fixed step size (denoted by h) is specified:

Ralston2[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, x, y, n, steps,
xnew, ynew, xthird, k1, fn}, steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
xthird = xold + 3*h/4;
fn = f[xold, yold];
k1 = f[xthird, yold + fn*3*h/4];
ynew = yold + h*fn/3 + 2*h*k1/3;
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
f[x_, y_] = x + y
Ralston2[f, {1, 2}, 1, 0.1]
Out[12]= {2., 5.14224}

Actually, the Heun, midpoint, and Ralston methods can be united into one algorithm based on a single parameter, which we denote by a2; it is the weight b2 of the second slope. Select a value for a2 between 0 and 1 according to
- Heun's method: a2 = 1/2
- midpoint method: a2 = 1
- Ralston's method: a2 = 2/3

Then one Mathematica module serves all three methods:

RK2nd[n_, x0_, y0_, xf_, f_, a2_] := Module[{Y, h, i, a1, p1, q11, k1, k2},
  h = (xf - x0)/n;
  Y = y0;
  a1 = 1 - a2; (* weight of the first slope k1 *)
  p1 = 1/(2*a2); (* c2, the fraction of the step at which k2 is evaluated *)
  q11 = p1;
  For[i = 0, i < n, i++,
   k1 = f[x0 + i*h, Y];
   k2 = f[x0 + i*h + p1*h, Y + q11*k1*h];
   Y = Y + (a1*k1 + a2*k2)*h];
  Y]
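For instance, taking a2 = 3/4 gives the weights 1/4 and 3/4 with c2 = 2/3, that is, the optimal method of Section I; the following call should therefore reproduce the value found there (a sketch reusing the earlier test problem):

f[x_, y_] = x + y;
RK2nd[10, 1., 1., 2., f, 3/4] (* ≈ 5.14224, the RK2op value above *)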

Example: Consider the initial value problem for the Riccati equation

\[ y' = y^2 - 4\,x^2 , \qquad y(0) = -1. \]
Mathematica is capable of finding its solution explicitly (expressed through modified Bessel functions):
sol=DSolve[{y'[x] == y[x]^2 - 4*x^2 , y[0]==-1}, y,x]
\[ y(x) = 2x\,\frac{4\,K_{3/4} (x^2) \, \Gamma^2 (3/4) - \pi \sqrt{2} \left( 2\,\Gamma^2 (3/4) + \pi \right) I_{-3/4} (x^2)}{4\,K_{1/4} (x^2) \,\Gamma (3/4) + \pi \sqrt{2} \left( 2\,\Gamma^2 (3/4) + \pi \right) I_{1/4} (x^2) } . \]
To analyze Runge--Kutta methods, we use Mathematica and type
f[x_, y_] = y^2 - 4*x^2 (* slope function *)
x0=0.0; (* starting point *)
y0=-1.0; (* initial condition *)
xf=1.0; (* final value *)
n=128; (* maximum number of steps *)
a2=2/3 (* Ralston's Method *)
The module RK2nd above estimates the solution of the ordinary differential equation at the point xf. Its arguments are:
n = number of steps
x0 = initial value of x
y0 = initial condition for y
xf = value of x at which y is desired
f = slope function in the form dy/dx = f(x,y)
a2 = constant identifying the particular numerical method

First, we determine the true numerical value at the final point:

EV = (y /. First[sol])[xf]
Then we plot the exact solution:
plot = Plot[Evaluate[y[x] /. sol], {x, x0, xf}, PlotStyle -> {{Thickness[0.006], Orange}}]
Figure: the explicit solution on the interval [x0, xf].

The following loop calculates these quantities:
AV = approximate value of the ODE solution using the second-order RK method (by calling the module RK2nd)
Et = true error
et = absolute relative true error percentage
Ea = approximate error
ea = absolute relative approximate error percentage
sig = least number of significant digits correct in the approximation

nth = 7;
Do[
Nn[i] = 2^i;
H[i] = (xf - x0)/Nn[i];
AV[i] = RK2nd[2^i, x0, y0, xf, f, a2];
Et[i] = EV - AV[i];
et[i] = Abs[(Et[i]/EV)]*100.0;
If[i > 0,
Ea[i] = AV[i] - AV[i - 1];
ea[i] = Abs[Ea[i]/AV[i]]*100.0;
sig[i] = Floor[(2 - Log[10, ea[i]/0.5])];
If[sig[i] < 0, sig[i] = 0]; ] , {i, 0, nth}];
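Since the method is second order, halving the step size should reduce the true error by roughly a factor of four. The observed order of accuracy can be estimated from the Et values just computed; the entries of the following table should approach 2:

Table[Log[2, Abs[Et[i - 1]/Et[i]]], {i, 1, nth}] (* observed order of accuracy *)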
data = Table[{H[i], AV[i]}, {i, 0, nth}];
plot2 = ListPlot[data, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[1, 0, 0]}, PlotLabel ->
"Approximate value of the solution of the ODE\nat x = xf as a \ function of step size"]
data = Table[{Nn[i], AV[i]}, {i, 0, nth}];
plot3 = ListPlot[data, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[1, 0, 1]}, PlotLabel ->
"Approximate value of the solution of the ODE\nat x = xf as a \ function of number of steps"]
data = Table[{Nn[i], Et[i]}, {i, 0, nth}];
plot4 = ListPlot[data,
AxesOrigin -> {0, 0}, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0, 0.7, 1]},
PlotLabel -> "True error as a function of number of steps"]
data = Table[{Nn[i], et[i]}, {i, 0, nth}];
plot5 = ListPlot[data,
AxesOrigin -> {0, 0}, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0.5, 0.5, 1]}, PlotLabel ->
"Absolute relative true error percentage \nas a function of number \ of steps"]
data = Table[{Nn[i], Ea[i]}, {i, 1, nth}];
plot6 = ListPlot[data, AxesOrigin -> {0, 0}, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0, 1, 0]},
PlotLabel -> "Approximate error \nas a function of number of steps"]
data = Table[{Nn[i], ea[i]}, {i, 1, nth}];
plot7 = ListPlot[data, AxesOrigin -> {0, 0}, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0, 0.3, 0.7]}, PlotLabel ->
"Absolute relative approximate error percentage \nas a function of \ number of steps"]
data = Table[{Nn[i], sig[i]}, {i, 1, nth}];
plot8 = ListPlot[data, AxesOrigin -> {0, 0}, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0.80, 0.52, 0.25]}, PlotLabel ->
"Least correct significant digits \nas a function of number of \ steps"]

 

IV. Generic second-order method


There is a family of Runge--Kutta methods of order 2, parameterized by a number α from the interval (0,1] and given by the formula
\[ y_{n+1} = y_n + h \left( 1 - \frac{1}{2\alpha} \right) f(x_n , y_n ) + \frac{h}{2\alpha} \, f \left( x_n + \alpha \,h , y_n + \alpha \,h \,f(x_n , y_n ) \right) , \]
with Butcher tableau
\[ \begin{array}{c|cc} 0&0&0 \\ \alpha & \alpha & 0 \\ \hline & \left( 1 - \frac{1}{2\alpha} \right) & \frac{1}{2\alpha} \end{array} . \]
Correspondingly,
\[ y_{i+1} = y_i + h \left( b_1 k_1 + b_2 k_2 \right) , \qquad b_1 + b_2 =1, \quad b_2 = \frac{1}{2a} , \]
where
\[ \begin{split} k_1 &= f\left( x_i , y_i \right) , \\ k_2 &= f\left( x_i + a\,h , y_i +a \, k_1 h \right) . \end{split} \]

This family includes all the explicit second-order Runge--Kutta methods considered above: \( a= 1 \) gives Heun's algorithm, \( a= 1/2 \) the explicit midpoint method, \( a= 2/3 \) the optimal method of Section I, and \( a = 3/4 \) the version of Ralston's algorithm presented in Section III.
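A compact one-step sketch of this generic family (the helper rk2Step and its argument pattern are ours, not standard):

rk2Step[f_, {x_, y_}, h_, a_] := Module[{k1 = f[x, y]},
  y + h*(1 - 1/(2*a))*k1 + (h/(2*a))*f[x + a*h, y + a*h*k1]]
rk2Step[#1 + #2 &, {1., 1.}, 0.1, 1] (* one Heun step for y' = x + y: 1.215, matching the first step produced by RK2ops above *)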

 

 
