Preface
This is a tutorial made solely for the purpose of education, designed for students taking Applied Math 0330. It is primarily for students who have little or no experience with Mathematica and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.
Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in a regular font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts for learning how to use the Mathematica program, and you may distribute and refer to this tutorial as long as it is credited appropriately.
Return to computing page for the second course APMA0340
Return to Mathematica tutorial for the first course APMA0330
Return to Mathematica tutorial for the second course APMA0340
Return to the main page for the course APMA0330
Return to the main page for the course APMA0340
Return to Part III of the course APMA0330
Runge-Kutta of order 4
Each Runge--Kutta method is derived from an appropriate Taylor method in such a way that the final global error is of order \( O(h^m) \), so it is of order m. These methods can be constructed for any order m. The trade-off of performing several slope evaluations at each step in exchange for good accuracy makes the Runge--Kutta method of order 4 the most popular.
As usual, we consider an initial value problem \( y' = f(x,y) , \quad y(x_0 )= y_0 \) and seek approximate values yn to the actual solution \( y= \phi (x) \) at mesh points \( x_n = x_0 + n\,h , \quad n=0,1,2,\ldots . \) The fourth-order Runge--Kutta method is based on computing yn+1 as follows:
I. Classical RK4
The most often used is the classical fourth-order Runge--Kutta formula, which has a certain sleekness of organization about it:
The Runge--Kutta method iterates the x-values by simply adding a fixed step size h at each iteration. The y-iteration formula is far more interesting: it is a weighted average of four slope estimates. The values k1 and k4 are given a weight of 1/6 in this average, whereas k2 and k3 are weighted 1/3, twice as heavily as k1 and k4. (As usual with a weighted average, the weights 1/6, 1/3, 1/3, and 1/6 sum to 1.) So what are these ki values that are being used in the weighted average?
- k1 is simply Euler's prediction.
- k2 is evaluated halfway across the prediction interval and gives us an estimate of the slope of the solution curve at this halfway point.
- k3 has a formula which is quite similar to that of k2 except that where k1 used to be, there is now a k2. Essentially, the f-value here is yet another estimate of the slope of the solution at the "midpoint" of the prediction interval. This time, however, the y-value of the midpoint is not based on Euler's prediction, but on the y-jump predicted already with k2.
- k4 evaluates f at \( x_n + h , \) the extreme right of the prediction interval. The y-value coupled with it, \( y_n + k_3 , \) is an estimate of the solution at the right end of the interval, based on the y-jump just predicted by k3.
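The four stages above translate almost line by line into any language. As a cross-check outside Mathematica, here is a minimal Python sketch of one RK4 step and a fixed-step driver (the names rk4_step and rk4 are our own choices, and the test equation \( y' = y \) is used only because its exact solution \( e^x \) is known):

```python
import math

def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(x, y)                    # slope at the left end (Euler's prediction)
    k2 = f(x + h/2, y + h*k1/2)     # slope at the midpoint, using k1
    k3 = f(x + h/2, y + h*k2/2)     # slope at the midpoint again, using k2
    k4 = f(x + h, y + h*k3)         # slope at the right end, using k3
    # weighted average with weights 1/6, 1/3, 1/3, 1/6
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def rk4(f, x0, y0, xn, steps):
    """Advance the solution from x0 to xn in a fixed number of RK4 steps."""
    h = (xn - x0)/steps
    x, y = x0, y0
    for _ in range(steps):
        y = rk4_step(f, x, y, h)
        x += h
    return y

# y' = y, y(0) = 1 has the exact solution e^x; ten steps of h = 0.1
approx = rk4(lambda x, y: y, 0.0, 1.0, 1.0, 10)
```

With h = 0.1 the result already agrees with \( e \) to about six decimal places, illustrating the fourth-order accuracy.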
Example. We revisit our example \( f(x,y) = x^2 - y^2 , \quad y(1) = 1 , \) and solve it in the following way:
f[x_, y_] := x^2 - y^2 (* this function is chosen as an example *)
h = 0.1; (* the step size must be assigned before the loop runs *)
y[0] = 1; (* the initial condition y(1) = 1; mesh points are x = 1 + h n *)
Do[k1 = h f[1 + h n, y[n]];
 k2 = h f[1 + h (n + .5), y[n] + k1 / 2];
 k3 = h f[1 + h (n + .5), y[n] + k2 / 2];
 k4 = h f[1 + h (n + 1), y[n] + k3];
 y[n + 1] = y[n] + 1 / 6 * (k1 + 2 k2 + 2 k3 + k4),
 {n, 0, 9}]
y[10] (* the approximation to y(2) *)
Here are other implementations of the classical Runge--Kutta method (the first is used when the step size h is specified, and the second when the number of steps is given):
RK4[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, xhalf, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, steps}, steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
xhalf = xold + h/2;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold + k2*h/2};
k4 = f /. {x -> xnew, y -> yold + k3*h};
ynew = yold + (h/6)*(k1 + 2*k2 + 2*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
Another version, used when the number of steps is specified:
RK4s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, xhalf, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, h}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
xhalf = xold + h/2;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold + k2*h/2};
k4 = f /. {x -> xnew, y -> yold + k3*h};
ynew = yold + (h/6)*(k1 + 2*k2 + 2*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
RK4th[n_, x0_, y0_, xf_, f_] := Module[{h, Y, i, k1, k2, k3, k4},
 h = (xf - x0)/n;
 Y = y0;
 For[i = 0, i < n, i++,
  k1 = f[x0 + i*h, Y];
  k2 = f[x0 + i*h + 0.5*h, Y + 0.5*k1*h];
  k3 = f[x0 + i*h + 0.5*h, Y + 0.5*k2*h];
  k4 = f[x0 + i*h + h, Y + k3*h];
  Y = Y + 1/6*(k1 + 2*k2 + 2*k3 + k4)*h]; Y]
n = number of steps
x0 = boundary condition for x
y0 = boundary condition for y
xf = value of x at which y is desired
f = differential equation in the form dy/dx = f(x,y)
The following loop calculates:
AV = approximate value of the ODE solution
Et = true error
et = absolute relative true error percentage
Ea = approximate error
ea = absolute relative approximate error percentage
sig = least number of significant digits correct in approximation
(* EV denotes the exact value of the solution at xf, and nth the number of mesh refinements; both must be assigned beforehand *)
Do[
Nn[i]=2^i;
H[i]=(xf-x0)/Nn[i];
AV[i]=RK4th[2^i,x0,y0,xf,f];
Et[i]=EV-AV[i];
et[i]=Abs[(Et[i]/EV)]*100.0;
If[i>0,
Ea[i]=AV[i]-AV[i-1];
ea[i]=Abs[Ea[i]/AV[i]]*100.0;
sig[i]=Floor[(2-Log[10,ea[i]/0.5])];
If[sig[i] < 0, sig[i] = 0];
] ,{i,0,nth}];
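The significant-digit formula in the loop above can be checked by hand: an absolute relative approximate error of at most \( 0.5 \times 10^{2-m} \) percent guarantees at least m correct significant digits. A small Python version of this estimate (the function name correct_sig_digits is our own):

```python
import math

def correct_sig_digits(ea_percent):
    """Least number of significant digits guaranteed correct, given the
    absolute relative approximate error |e_a| in percent:
    |e_a| <= 0.5 * 10^(2 - m)  =>  m = floor(2 - log10(|e_a| / 0.5))."""
    m = math.floor(2 - math.log10(ea_percent / 0.5))
    return max(m, 0)   # the digit count cannot be negative

# an approximate error of 0.004 % guarantees at least 4 correct digits
digits = correct_sig_digits(0.004)
```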
Example. Consider the IVP: \( y' = 1/(3x - 2y + 1) , \quad y(0) =1 .\)
f[x_, y_] := 1/(3*x - 2*y + 1);
RK4s[f[x, y], {0, 1}, 1, 10] (* 10 is the number of steps *)
RK4[f[x, y], {0, 1}, 1, 0.1] (* h = 0.1 is the step size *)
Here is a slightly different version of the above code; it expects the function name itself (with f defined via f[x_, y_] := ...) rather than the expression f[x, y]:
RK4s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
k1 = N[f[xold, yold]];
k2 = N[f[xold + h/2, yold + h*k1/2]];
k3 = N[f[xold + h/2, yold + k2*h/2]];
k4 = N[f[xold + h, yold + k3*h]];
ynew = N[yold + h*(k1 + 2*k2 + 2*k3 + k4)/6];
sollist = Append[sollist, {xnew, ynew}];
xold = xnew; yold = ynew, {steps}];
Return[sollist]]
Example. We apply this numerical method to find approximate values of the solution to the IVP \( y' = x^2 - y^2 , \quad y(1) = 1 . \)
f[x_, y_] := x^2 - y^2;
RK4s[f, {1, 2}, 1, 10] (* here the function name itself is passed *)
f[x_, y_] := x^2 - y^2;
y[1] = 1; h = 0.1; (* initial condition y(1) = 1 *)
a = 1; b = 2; n = Floor[(b - a)/h];
Do[x[i] = a + (i - 1)*h, {i, 1, n + 1}]
Do[{k1 = f[x[i],y[i]],
k2 = f[x[i]+h/2, y[i]+h*k1/2],
k3 = f[x[i]+h/2, y[i]+h*k2/2],
k4 = f[x[i]+h, y[i]+h*k3],
k = (k1+2*k2+2*k3+k4)/6;
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]
II. Kutta Algorithm RK4
Kutta's algorithm (1901) of order four:
kutta4s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, h}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xold + h/3, y -> yold + k1*h/3};
k3 = f /. {x -> xold + 2*h/3, y -> yold - h*k1/3 + k2*h};
k4 = f /. {x -> xnew, y -> yold + h*(k1 - k2 + k3)};
ynew = yold + (h/8)*(k1 + 3*k2 + 3*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew; yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
Example. Consider the IVP: \( y' = x^2 - y^2 , \quad y(1) =1 .\)
kutta4s[f[x, y], {1, 2}, 1, 10]
Here is a slightly different version of Kutta's algorithm:
kutta4f[f_, {x0_, xn_}, y0_, steps_] := (* kutta4f is a name chosen here for this function-call variant *)
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
h = N[(xn - x0)/steps];
Do[xnew = xold + h;
k1 = N[f[xold, yold]];
k2 = N[f[xold + h/3, yold + h*k1/3]];
k3 = N[f[xold + h*2/3, yold - h*k1/3 + k2*h]];
k4 = N[f[xold + h, yold + k3*h - k2*h + h*k1]];
ynew = N[yold + h*(k1 + 3*k2 + 3*k3 + k4)/8];
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist]]
kutta4[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, xnew, ynew, k1, k2, k3, k4,
sollist = {{x0, y0}}, x, y, n, steps}, steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xold + h/3, y -> yold + k1*h/3};
k3 = f /. {x -> xold + 2*h/3, y -> yold - h*k1/3 + k2*h};
k4 = f /. {x -> xnew, y -> yold + h*(k1 - k2 + k3)};
ynew = yold + (h/8)*(k1 + 3*k2 + 3*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
Example. Consider the IVP: \( y' = x^2 - y^2 , \quad y(1) =1 .\)
kutta4[f[x, y], {1, 2}, 1, 0.1]
Now we apply the same code with the last line modified to Return[sollist], so that the whole list of mesh points and approximations is returned; the tail of the resulting output is
0.32492}, {0.5, 0.386033}, {0.6, 0.440966}, {0.7, 0.49057}, {0.8,
0.535585}, {0.9, 0.576644}, {1., 0.614281}}
Another version:
f[x_, y_] := x^2 - y^2;
y[1] = 1; h = 0.1; (* initial condition y(1) = 1 *)
a = 1; b = 2; n = Floor[(b - a)/h];
Do[x[i] = a + (i - 1)*h, {i, 1, n + 1}]
Do[{k1 = f[x[i],y[i]],
k2 = f[x[i]+h/3, y[i]+h*k1/3],
k3 = f[x[i]+h*2/3, y[i]+h*k2-h*k1/3],
k4 = f[x[i]+h, y[i]+h*k3+h*k1-h*k2],
k = (k1+3*k2+3*k3+k4)/8;
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]
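Kutta's three-eighths rule differs from the classical scheme only in its nodes and weights. For comparison outside Mathematica, here is a minimal Python sketch of one step (the function name kutta38_step is our own), tried on the test equation \( y' = y ,\ y(0) = 1 \) with known solution \( e^x \):

```python
import math

def kutta38_step(f, x, y, h):
    """One step of Kutta's 1901 "three-eighths" rule, also of order four."""
    k1 = f(x, y)
    k2 = f(x + h/3, y + h*k1/3)
    k3 = f(x + 2*h/3, y + h*(-k1/3 + k2))     # note the negative k1 term
    k4 = f(x + h, y + h*(k1 - k2 + k3))
    # weights 1/8, 3/8, 3/8, 1/8
    return y + h*(k1 + 3*k2 + 3*k3 + k4)/8

# integrate y' = y, y(0) = 1 over [0, 1] with h = 0.1
f = lambda x, y: y
x, y = 0.0, 1.0
for _ in range(10):
    y = kutta38_step(f, x, y, 0.1)
    x += 0.1
```

On this linear test problem the result matches the classical RK4 value, as both methods share the same fourth-order accuracy.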
III. Gill's Algorithm RK4
Gill's method:
gill4[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, x, y, steps,
r2, xnew, ynew, xhalf, k1, k2, k3, k4},
steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
xhalf = xold + h/2; r2 = 1/Sqrt[2];
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold - h*k1*(1/2 - r2) + k2*h*(1 - r2)};
k4 = f /. {x -> xnew, y -> yold - r2*h*k2 + h*k3*(1 + r2)};
ynew = yold + (h/6)*(k1 + 2*(1 - r2)*k2 + 2*(1 + r2)*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
gill4[f[x, y], {1, 2}, 1, 0.1]
Another version:
gill4s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, x, y, h, r2, xnew, ynew, xhalf, k1, k2, k3, k4},
h = N[(xn - x0)/steps];
Do[xnew = xold + h; xhalf = xold + h/2;
r2 = 1/Sqrt[2]; k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xhalf, y -> yold - h*k1*(1/2 - r2) + k2*h*(1 - r2)};
k4 = f /. {x -> xnew, y -> yold - r2*h*k2 + h*k3*(1 + r2)};
ynew = yold + (h/6)*(k1 + 2*(1 - r2)*k2 + 2*(1 + r2)*k3 + k4);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
Then we execute the code by revisiting our previous example: \( y' = x^2 - y^2 , \quad y(1) =1 .\)
gill4s[f[x, y], {1, 2}, 1, 10]
gill4f[f_, {x0_, xn_}, y0_, steps_] := (* gill4f is a name chosen here for this function-call variant *)
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
 h = N[(xn - x0)/steps];
 Do[xnew = xold + h;
  k1 = N[f[xold, yold]];
  k2 = N[f[xold + h/2, yold + h*k1/2]];
  k3 = N[f[xold + h/2, yold - h*k1*(1/2 - 1/Sqrt[2]) + k2*h*(1 - 1/Sqrt[2])]];
  k4 = N[f[xold + h, yold - k2*h/Sqrt[2] + h*k3*(1 + 1/Sqrt[2])]];
  ynew = N[yold + h*(k1 + 2*(1 - 1/Sqrt[2])*k2 + 2*(1 + 1/Sqrt[2])*k3 + k4)/6];
  sollist = Append[sollist, {xnew, ynew}];
  xold = xnew;
  yold = ynew, {steps}];
 Return[sollist]]
Example. Application of the method to the initial value problem \( y' = 1 -x +4y , \quad y(0) =1.\)
Another version:
y[1] = 1; h =0.1;
a = 0; b = 2; n = Floor[(b-a)/h];
Do[x[i] = 0.0 + (i-1)*h, {i,1,n+1}]
f[x_,y_] = 1/y^3 - 2*x^2;
ap = N[1+1/Sqrt[2]];
am = N[1-1/Sqrt[2]];
amm = N[1/2-1/Sqrt[2]];
Do[{k1 = N[f[x[i],y[i]]],
k2 = N[f[x[i]+h/2, y[i]+h*k1/2]],
k3 = N[f[x[i]+h/2, y[i]+h*k2*am-h*k1*amm]],
k4 = N[f[x[i]+h, y[i]-h*k2/Sqrt[2]+h*k3*ap]],
k = N[(k1+2*k2*am+2*k3*ap+k4)/6];
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]
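The Gill coefficients above, with their characteristic \( 1/\sqrt{2} \) terms, can be written compactly outside Mathematica as well. Here is a minimal Python sketch (the function name gill_step is our own), again tried on \( y' = y ,\ y(0) = 1 \):

```python
import math

R2 = 1/math.sqrt(2)   # the constant 1/sqrt(2) appearing in Gill's coefficients

def gill_step(f, x, y, h):
    """One step of Gill's fourth-order Runge-Kutta variant."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*(-(1/2 - R2)*k1 + (1 - R2)*k2))
    k4 = f(x + h, y + h*(-R2*k2 + (1 + R2)*k3))
    # weights 1, 2(1 - 1/sqrt(2)), 2(1 + 1/sqrt(2)), 1, divided by 6
    return y + h*(k1 + 2*(1 - R2)*k2 + 2*(1 + R2)*k3 + k4)/6

# integrate y' = y, y(0) = 1 over [0, 1] with h = 0.1
f = lambda x, y: y
x, y = 0.0, 1.0
for _ in range(10):
    y = gill_step(f, x, y, 0.1)
    x += 0.1
```

Although its intermediate stages differ from the classical scheme, Gill's method is also fourth order, so the numerical result agrees with the classical RK4 value on this test problem.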
IV. Fifth order Butcher's method
Here is the fifth order Runge--Kutta method proposed by Butcher:
y[1] = 1; h =0.1;
a = 0; b = 2; n = Floor[(b-a)/h];
Do[x[i] = 0.0 + (i-1)*h, {i,1,n+1}]
f[x_,y_] = 1/y^3 - 2*x^2;
Do[{k1 = N[f[x[i],y[i]]],
k2 = N[f[x[i]+h/4, y[i]+h*k1/4]],
k3 = N[f[x[i]+h/4, y[i]+h*k1/8+h*k2/8]],
k4 = N[f[x[i]+h/2, y[i]+h*k3-k2*h/2]],
k5 = N[f[x[i]+h*3/4, y[i]+3*h*k1/16+9*h*k4/16]],
k6 = N[f[x[i]+h, y[i]-3*h*k1/7+2*h*k2/7 +12*k3*h/7-12*h*k4/7 +8*k5*h/7]],
k = N[(7*k1+32*k3+12*k4+32*k5 +7*k6)/90];
y[i+1] = y[i] + k*h}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]
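For comparison outside Mathematica, here is a Python sketch of one step of Butcher's fifth-order scheme (the function name butcher5_step is our own), using the same six stages and weights as the code above:

```python
import math

def butcher5_step(f, x, y, h):
    """One step of Butcher's fifth-order Runge-Kutta method (six stages)."""
    k1 = f(x, y)
    k2 = f(x + h/4, y + h*k1/4)
    k3 = f(x + h/4, y + h*(k1 + k2)/8)
    k4 = f(x + h/2, y + h*(-k2/2 + k3))
    k5 = f(x + 3*h/4, y + h*(3*k1 + 9*k4)/16)
    k6 = f(x + h, y + h*(-3*k1 + 2*k2 + 12*k3 - 12*k4 + 8*k5)/7)
    # weights 7, 0, 32, 12, 32, 7, divided by 90
    return y + h*(7*k1 + 32*k3 + 12*k4 + 32*k5 + 7*k6)/90

# integrate y' = y, y(0) = 1 over [0, 1] with h = 0.1
f = lambda x, y: y
x, y = 0.0, 1.0
for _ in range(10):
    y = butcher5_step(f, x, y, 0.1)
    x += 0.1
```

With the same step size h = 0.1, the fifth-order method is noticeably more accurate than any of the fourth-order schemes above.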
Another version of Butcher's method:
RK5[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
h = N[(xn - x0)/steps];
Do[xnew = xold + h;
(* Print["h= ", h, " xold= ", xold, " yold= ", yold]; *)
k1 = N[f[xold, yold]];
k2 = N[f[xold + h/4, yold + h*k1/4]];
k3 = N[f[xold + h/4, yold + h*k1/8 + k2*h/8]];
k4 = N[f[xold + h/2, yold - k2*h/2 + k3*h]];
k5 = N[f[xold + 3*h/4, yold + 3*k1*h/16 + 9*k4*h/16]];
k6 = N[
f[xnew, yold + 2*k2*h/7 - 3*k1*h/7 + 12*k3*h/7 - 12*k4*h/7 +
8*k5*h/7]];
ynew = N[yold + h*(7*k1 + 32*k3 + 12*k4 + 32*k5 + 7*k6)/90];
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist]]
Example. Now we present an example of the use of this algorithm. Let \( f(x,y) = 1 -x +4y .\) Applying RK5 to the initial value problem \( y' = f(x,y) , \quad y(0) =1 \) yields
V. Runge--Kutta--Fehlberg Method (RKF45)
One way to guarantee accuracy in the solution of an initial value problem is to solve the problem twice, using step sizes h and h/2, and compare answers at the mesh points corresponding to the larger step size. However, this requires a significant amount of computation for the smaller step size, and the computation must be repeated if it is determined that the agreement is not good enough.
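This step-doubling strategy is easy to prototype. Here is a minimal Python sketch (the names rk4_step and step_doubling are our own); since the error of a fourth-order method scales like \( h^5 \) per step, the discrepancy between one full step and two half steps overestimates the error of the finer result by roughly a factor of \( 2^4 - 1 = 15 \):

```python
import math

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def step_doubling(f, x, y, h):
    """Take one step of size h and two steps of size h/2; return the
    finer result together with an estimate of its local error."""
    y_big = rk4_step(f, x, y, h)
    y_mid = rk4_step(f, x, y, h/2)
    y_fine = rk4_step(f, x + h/2, y_mid, h/2)
    return y_fine, abs(y_fine - y_big)/15

# try it on y' = y, y(0) = 1, one step of size 0.1
y_fine, err_est = step_doubling(lambda x, y: y, 0.0, 1.0, 0.1)
```

Comparing err_est with the true error \( |y_{\text{fine}} - e^{0.1}| \) shows that the estimate is of the right size, at the cost of eleven function evaluations per accepted step.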
The Runge--Kutta--Fehlberg method (denoted RKF45), or Fehlberg method, was developed by the German mathematician Erwin Fehlberg (1911--1990) in a 1969 NASA report. The novelty of Fehlberg's method is that it is an embedded method from the Runge--Kutta family: it has a procedure to determine whether the proper step size h is being used. At each step, two different approximations for the solution are made and compared. If the two answers are in close agreement, the approximation is accepted. If the two answers do not agree to a specified accuracy, the step size is reduced; if the answers agree to more significant digits than required, the step size is increased. The RKF45 method is of order \( O( h^4 ) \) with an error estimator of order \( O( h^5 ) . \)
Initially, Erwin Fehlberg introduced a so-called six-stage Runge--Kutta method that requires six function evaluations per step. These function values can be combined with one set of coefficients to produce a fifth-order accurate approximation, and with another set of coefficients to produce an independent fourth-order accurate approximation. Comparing these two approximations provides an error estimate and the resulting step-size control. Notice that it takes six stages to get fifth order: it is not possible to get fifth order with only five function evaluations per step.
MATLAB's newer ode45 routine, introduced in the late 1990s, is based on an algorithm of Dormand and Prince (DOPRI). It uses six stages, employs the FSAL (first-same-as-last) strategy, provides fourth- and fifth-order formulas, and has local extrapolation and a companion interpolant. The new ode45 is so effective that the interpolant is called up to produce more finely spaced output at no additional cost measured in function evaluations. The codes for the two routines ode23 and ode45 are very similar.
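As a sketch of the embedded idea, the following Python function computes both the fourth- and fifth-order Fehlberg results from the same six stages and returns their difference as the local error estimate. The coefficients below are the standard Fehlberg values, transcribed here for illustration; verify them against a reference before serious use:

```python
import math

def rkf45_step(f, x, y, h):
    """One Runge-Kutta-Fehlberg step: six function evaluations combined
    with two sets of weights give a fourth-order result y4 and an
    independent fifth-order result y5; |y5 - y4| is the local error
    estimate that drives the step-size control."""
    k1 = f(x, y)
    k2 = f(x + h/4, y + h*k1/4)
    k3 = f(x + 3*h/8, y + h*(3*k1 + 9*k2)/32)
    k4 = f(x + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
    k5 = f(x + h, y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
    k6 = f(x + h/2, y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                           + 1859*k4/4104 - 11*k5/40))
    y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
    y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                - 9*k5/50 + 2*k6/55)
    return y4, abs(y5 - y4)

# one trial step on y' = y, y(0) = 1
y4, err_est = rkf45_step(lambda x, y: y, 0.0, 1.0, 0.1)
```

An adaptive driver would accept the step when err_est is below the tolerance and otherwise shrink h, typically by the factor \( (\mathrm{tol}/\mathrm{err})^{1/5} \) with a safety margin.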
Butcher tableau for Fehlberg's 4(5) method
Other Runge--Kutta methods can be found at
https://en.wikipedia.org/wiki/List_of_Runge-Kutta_methods
Some other popular 4/5 versions of Runge--Kutta methods:
- The Dormand--Prince 4/5 pair: Dormand, J.R. and Prince, P.J., "A family of embedded Runge--Kutta formulae," Journal of Computational and Applied Mathematics, 6 (1): 19--26, 1980, doi:10.1016/0771-050X(80)90013-3; see also Dormand, J.R., Numerical Methods for Differential Equations: A Computational Approach, CRC Press, Boca Raton, 1996, pp. 82--84, ISBN 0-8493-9433-3.
- The Cash--Karp pair, proposed by Professor Jeff R. Cash of Imperial College London and Alan H. Karp of the IBM Scientific Center: Cash, J.R. and Karp, A.H., "A variable order Runge-Kutta method for initial value problems with rapidly varying right-hand sides," ACM Transactions on Mathematical Software, 16: 201--222, 1990, doi:10.1145/79505.79507.
- The Tsitouras 4/5 pair (http://users.ntua.gr/tsitoura/RK54_new_v2.pdf).
Fixed Point Iteration
Bracketing Methods
Secant Methods
Euler's Methods
Heun Method
Runge-Kutta Methods
Runge-Kutta Methods of order 2
Runge-Kutta Methods of order 3
Runge-Kutta Methods of order 4
Polynomial Approximations
Error Estimates
Adomian Decomposition Method
Modified Decomposition Method
Multistep Methods
Multistep Methods of order 3
Multistep Methods of order 4
Milne Method
Hamming Method
Return to Mathematica page
Return to the main page (APMA0330)
Return to the Part 1 (Plotting)
Return to the Part 2 (First Order ODEs)
Return to the Part 3 (Numerical Methods)
Return to the Part 4 (Second and Higher Order ODEs)
Return to the Part 5 (Series and Recurrences)
Return to the Part 6 (Laplace Transform)
Return to the Part 7 (Boundary Value Problems)