Preface


This tutorial was made solely for the purpose of education, and it was designed for students taking Applied Math 0330. It is primarily intended for students who have little or no experience with Mathematica and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear the variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in regular font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to adapt the scripts to your needs for learning how to use Mathematica, and you may distribute and cite this tutorial as long as it is credited appropriately.


Runge--Kutta 3


We consider the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 \) that is assumed to have a unique solution. As usual, we use a uniform grid of points \( x_n = x_0 + n\, h , \quad n= 0,1,2,\ldots , \) with fixed step size h to approximate the actual solution \( y= \phi (x) . \)

We start with the following third-order algorithm, credited to Kutta:

\begin{equation} \label{E35.9} y_{n+1} = y_n + \frac{h}{6}\, ( k_1 + 4 k_2 + k_3 ), \end{equation} where \[ \begin{array}{l} k_1 = f( x_n , y_n ) , \\ k_2 = f( x_n + h/2 , y_n + k_1 h/2 ) , \\ k_3 = f( x_n +h , y_n + 2h k_2 - h k_1 ) . \end{array} \]
kutta3s[f_, {x0_, xn_}, y0_, steps_] :=
 Block[{xold = x0, yold = y0, xhalf, xnew, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, x, y, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   xhalf = xold + h/2;
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
   k3 = f /. {x -> xnew, y -> yold + 2*h*k2 - h*k1};
   ynew = yold + (h/6)*(k1 + 4*k2 + k3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]

f[x_, y_] = x*x - y*y
Out[2]= x^2 - y^2
kutta3s[f[x, y], {1, 2}, 1, 10]
Out[3]= {{1, 1}, {1.1, 1.00964}, {1.2, 1.03746}, {1.3, 1.08173}, {1.4,
1.14076}, {1.5, 1.21277}, {1.6, 1.29588}, {1.7, 1.38818}, {1.8,
1.48777}, {1.9, 1.59285}, {2., 1.70178}}
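
Since the equation \( y' = x^2 - y^2 \) has no elementary closed-form solution, one way to gauge the accuracy of the table above is to compare its last entry with a high-precision numerical solution. The short check below is only a sketch; it relies on the built-in NDSolveValue (available in Mathematica 9 and later) and is not part of the Runge--Kutta routine itself.

(* reference solution from the built-in solver *)
exact = NDSolveValue[{y'[x] == x^2 - y[x]^2, y[1] == 1}, y, {x, 1, 2}];
(* value produced by kutta3s at the right endpoint x = 2 *)
approx = kutta3s[f[x, y], {1, 2}, 1, 10][[-1, 2]];
{approx, exact[2], Abs[approx - exact[2]]}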

If you want to see only the last value (or any other particular value), use the following modification, which takes the step size h instead of the number of steps:

kutta3[f_, {x0_, xn_}, y0_, h_] :=
 Block[{xold = x0, yold = y0, xnew, xhalf, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, x, y, steps},
  steps = Round[(xn - x0)/h];
  Do[xnew = xold + h;
   xhalf = xold + h/2;
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
   k3 = f /. {x -> xnew, y -> yold + 2*h*k2 - h*k1};
   ynew = yold + (h/6)*(k1 + 4*k2 + k3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist[[steps + 1]]]]
kutta3[f[x, y], {1, 2}, 1, 0.1]
Out[3]= {2., 1.70178}

 

 

The Nyström Algorithm


The Nyström third-order algorithm reads

\begin{equation} \label{E35.nystrom} y_{n+1} =y_n + \frac{h}{8}\,(2k_1 + 3k_2 + 3k_3 ), \qquad n=0,1,2,\ldots , \end{equation} where \[ k_1 = f(x_n , y_n ), \quad k_2 = f\left( x_n + \frac{2h}{3} , y_n + \frac{2h}{3}\,k_1 \right) , \quad k_3 = f\left( x_n + \frac{2h}{3} , y_n + \frac{2h}{3}\,k_2 \right) . \] The following implementation takes the number of steps and returns only the final approximation:
nystrom3s[f_, {x0_, xn_}, y0_, steps_] :=
 Block[{xold = x0, yold = y0, xnew, xthird, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, x, y, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   xthird = xold + 2*h/3;
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xthird, y -> yold + 2*h*k1/3};
   k3 = f /. {x -> xthird, y -> yold + 2*h*k2/3};
   ynew = yold + (h/8)*(2*k1 + 3*k2 + 3*k3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist[[steps + 1]]]]
nystrom3s[f[x, y], {1, 2}, 1, 10]
Out[16]= {2., 1.7018}


The same algorithm when the step size is specified (again, only the final value is returned):

nystrom3[f_, {x0_, xn_}, y0_, h_] :=
 Block[{xold = x0, yold = y0, xnew, xthird, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, x, y, steps},
  steps = Round[(xn - x0)/h];
  Do[xnew = xold + h;
   xthird = xold + 2*h/3;
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xthird, y -> yold + 2*h*k1/3};
   k3 = f /. {x -> xthird, y -> yold + 2*h*k2/3};
   ynew = yold + (h/8)*(2*k1 + 3*k2 + 3*k3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist[[steps + 1]]]]
nystrom3[f[x, y], {1, 2}, 1, 0.1]
Out[16]= {2., 1.7018}

Example. We apply the Nyström method to the initial value problem \( y' = 1/(3x - 2y + 1) , \quad y(0) = 0 \) on the interval [0, 1] with step size 0.1.
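
The call below passes the variable names explicitly, as in the bogashamp routine further down, and expects the whole list of mesh values rather than just the final one. The definitions above do not cover this argument pattern, so here is a minimal sketch of such a variant (the name nystrom and its signature are chosen only to match the call):

nystrom[f_, {x_, x0_, xn_}, {y_, y0_}, stepsize_] :=
 Block[{xold = x0, yold = y0, xthird, xnew, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, h, steps},
  h = N[stepsize]; steps = Round[(xn - x0)/h];
  Do[xnew = xold + h;
   xthird = xold + 2*h/3;  (* both middle stages use x_n + 2h/3 *)
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xthird, y -> yold + 2*h*k1/3};
   k3 = f /. {x -> xthird, y -> yold + 2*h*k2/3};
   ynew = yold + (h/8)*(2*k1 + 3*k2 + 3*k3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  sollist]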

Solution = nystrom[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, .1]
Out[18]= {{0, 0}, {0.1, 0.09504}, {0.2, 0.180388}, {0.3, 0.256727}, {0.4,
0.324968}, {0.5, 0.386087}, {0.6, 0.441026}, {0.7, 0.490635}, {0.8,
0.535654}, {0.9, 0.576716}, {1., 0.614356}}

 

The Nearly Optimal Algorithm


The nearly optimal third-order algorithm (Ralston's method) is:

\begin{equation} \label{E35.optimal} y_{n+1} =y_n + \frac{h}{9} \,(2k_1 + 3k_2 + 4k_3 ), \qquad n=0,1,2,\ldots , \end{equation} in which the stages are computed according to \[ k_1 = f(x_n , y_n ), \quad k_2 = f \left( x_n + \frac{h}{2} , y_n + \frac{h}{2}\,k_1 \right) , \quad k_3 = f \left( x_n + \frac{3h}{4} , y_n + \frac{3h}{4}\,k_2 \right) ; \]
opt3s[f_, {x0_, xn_}, y0_, steps_] :=
Block[{xold = x0, yold = y0, xnew, xhalf, ynew, k1, k2, k3,
sollist = {{x0, y0}}, x, y, n, h}, h = N[(xn - x0)/steps];
Do[xnew = xold + h;
xhalf = xold + h/2;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xold + 3*h/4, y -> yold + k2*h*3/4};
ynew = yold + (h/9)*(2*k1 + 3*k2 + 4*k3);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
opt3s[f[x, y], {1, 2}, 1, 10]
Out[14]= {2., 1.7018}


The same algorithm when the step size is specified:

opt3[f_, {x0_, xn_}, y0_, h_] :=
Block[{xold = x0, yold = y0, xnew, xhalf, ynew, k1, k2, k3,
sollist = {{x0, y0}}, x, y, n, steps}, steps = Round[(xn - x0)/h];
Do[xnew = xold + h;
xhalf = xold + h/2;
k1 = f /. {x -> xold, y -> yold};
k2 = f /. {x -> xhalf, y -> yold + k1*h/2};
k3 = f /. {x -> xold + 3*h/4, y -> yold + k2*h*3/4};
ynew = yold + (h/9)*(2*k1 + 3*k2 + 4*k3);
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {steps}];
Return[sollist[[steps + 1]]]]
opt3[f[x, y], {1, 2}, 1, 0.1]
Out[14]= {2., 1.7018}
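
As with any third-order scheme, halving the step size should reduce the error at the endpoint by roughly a factor of \( 2^3 = 8 . \) The short experiment below illustrates this for the nearly optimal method; it is a sketch that again uses the built-in NDSolveValue for a reference value, and the specific error numbers depend on the problem.

(* reference value of the solution at x = 2 *)
ref = NDSolveValue[{y'[x] == x^2 - y[x]^2, y[1] == 1}, y[2], {x, 1, 2}];
(* endpoint errors for 10, 20, and 40 steps *)
errors = Table[Abs[Last[opt3s[f[x, y], {1, 2}, 1, n]] - ref], {n, {10, 20, 40}}];
{errors, Most[errors]/Rest[errors]}  (* the ratios should be close to 8 *)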

 

The Heun Algorithm


Heun's algorithm (1900):

\begin{equation} \label{E35.heun} y_{n+1} =y_n + \frac{h}{4} \,(k_1 + 3k_3 ), \qquad n=0,1,2,\ldots , \end{equation}
where
\[ k_1 = f(x_n , y_n ), \ k_2 = f\left( x_n + \frac{h}{3} , y_n + \frac{h}{3}\,k_1 \right) , \ k_3 = f\left( x_n + \frac{2h}{3} , y_n + \frac{2h}{3}\,k_2 \right) . \]
heun3s[f_, {x0_, xn_}, y0_, steps_] :=
 Block[{xold = x0, yold = y0, xnew, xthird, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, x, y, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   xthird = xold + h/3;
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xthird, y -> yold + k1*h/3};
   k3 = f /. {x -> xold + 2*h/3, y -> yold + 2*h*k2/3};
   ynew = yold + (h/4)*(k1 + 3*k3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist[[steps + 1]]]]
heun3s[f[x, y], {1, 2}, 1, 10]
Out[18]= {2., 1.70181}


heun3[f_, {x0_, xn_}, y0_, h_] :=
 Block[{xold = x0, yold = y0, xnew, xthird, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, x, y, steps},
  steps = Round[(xn - x0)/h];
  Do[xnew = xold + h;
   xthird = xold + h/3;
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xthird, y -> yold + k1*h/3};
   k3 = f /. {x -> xold + 2*h/3, y -> yold + 2*h*k2/3};
   ynew = yold + (h/4)*(k1 + 3*k3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist[[steps + 1]]]]
heun3[f[x, y], {1, 2}, 1, 0.1]
Out[18]= {2., 1.70181}
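
The next two calls pass the variable names explicitly and expect the full list of approximations, which the definitions above do not provide. They assume additional definitions with the bogashamp-style argument pattern; since this pattern differs from the one above, Mathematica keeps both. A minimal sketch of such an overload for heun3s is given below; heun3 can be extended in the same way, with the step size in place of the number of steps.

heun3s[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, xthird, xnew, ynew, k1, k2, k3,
   sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   xthird = xold + h/3;
   k1 = f /. {x -> xold, y -> yold};
   k2 = f /. {x -> xthird, y -> yold + h*k1/3};
   k3 = f /. {x -> xold + 2*h/3, y -> yold + 2*h*k2/3};
   ynew = yold + (h/4)*(k1 + 3*k3);  (* k2 enters only through k3 *)
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  sollist]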

heun3s[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, 10]
Out[21]= {{0, 0}, {0.1, 0.0950301}, {0.2, 0.180369}, {0.3, 0.256699}, {0.4,
0.324932}, {0.5, 0.386046}, {0.6, 0.440981}, {0.7, 0.490586}, {0.8,
0.535602}, {0.9, 0.576662}, {1., 0.6143}}

Solution = heun3[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, .1]
Out[24]= {{0, 0}, {0.1, 0.0950301}, {0.2, 0.180369}, {0.3, 0.256699}, {0.4,
0.324932}, {0.5, 0.386046}, {0.6, 0.440981}, {0.7, 0.490586}, {0.8,
0.535602}, {0.9, 0.576662}, {1., 0.6143}}

 

Bogacki and Shampine Approximation


The Bogacki--Shampine method is a method for the numerical solution of ordinary differential equations that was proposed by Professor Lawrence F. Shampine (Southern Methodist University, Dallas, TX) and his former student Przemyslaw Bogacki in 1989. P. Bogacki is now on the faculty of Old Dominion University (Norfolk, VA). The Bogacki--Shampine method is a Runge--Kutta method of order three with four stages; because the last stage of one step coincides with the first stage of the next (the "first same as last" property), it requires only about three function evaluations per step. It has an embedded second-order method that can be used to implement adaptive step size control, similar to RKF45. The Bogacki--Shampine method is implemented in the ode23 function in MATLAB (Shampine & Reichelt 1997). For the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 , \) the Bogacki--Shampine algorithm for determining approximate values \( y_n \) of the actual solution at the mesh points \( x_n = x_0 + n\, h , \quad n=0,1,2,\ldots , \) is as follows:
\begin{align*} k_1 &= f\left( x_n , y_n \right) , \\ k_2 &= f\left( x_n+ \frac{h}{2} , y_n + \frac{h}{2}\, k_1 \right) , \\ k_3 &= f\left( x_n+ \frac{3\,h}{4} , y_n + \frac{3\,h}{4}\, k_2 \right) , \\ y_{n+1} &= y_n + \frac{2}{9}\, h\, k_1 + \frac{1}{3}\,h\,k_2 + \frac{4}{9}\, h\, k_3 , \\ k_4 &= f\left( x_n +h , y_{n+1} \right) , \\ z_{n+1} &= y_n + \frac{7}{24}\,h \,k_1 + \frac{1}{4}\,h\, k_2 + \frac{1}{3}\, h\,k_3 + \frac{1}{8}\, h\, k_4 . \end{align*}
Here \( z_{n+1} \) is a second-order approximation to the exact solution, so the difference \( |y_{n+1} - z_{n+1}| \) provides an estimate of the local error. The third-order formula for \( y_{n+1} \) is due to Anthony Ralston (1965). Ralston was born in 1930 in New York City and received his Ph.D. from the Massachusetts Institute of Technology in 1956 under the direction of Eric Reissner.
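
The bogashamp routine given later in this section uses only the third-order value \( y_{n+1} . \) For step size control one also needs \( z_{n+1} \) and the resulting error estimate. A minimal sketch of a single Bogacki--Shampine step returning both values is shown below; the name bs23step and its argument pattern are chosen here for illustration and are not part of the tutorial's routines.

bs23step[f_, {x_, xc_}, {y_, yc_}, h_] :=
 Block[{k1, k2, k3, k4, ynew, znew},
  k1 = f /. {x -> xc, y -> yc};
  k2 = f /. {x -> xc + h/2, y -> yc + h*k1/2};
  k3 = f /. {x -> xc + 3*h/4, y -> yc + 3*h*k2/4};
  ynew = yc + h*(2*k1/9 + k2/3 + 4*k3/9);        (* third-order value *)
  k4 = f /. {x -> xc + h, y -> ynew};            (* reusable as k1 of the next step *)
  znew = yc + h*(7*k1/24 + k2/4 + k3/3 + k4/8);  (* embedded second-order value *)
  {ynew, znew, Abs[ynew - znew]}]

For instance, bs23step[x^2 - y^2, {x, 1}, {y, 1}, 0.1] returns the two approximations at x = 1.1 together with the estimated local error; an adaptive solver would accept or reject the step and adjust h based on this estimate.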

The Butcher tableau for the Bogacki--Shampine method is:
\[ \begin{array}{c|cccc} 0 & & & & \\ \frac{1}{2} & \frac{1}{2} & & & \\ \frac{3}{4} & 0 & \frac{3}{4} & & \\ 1 & \frac{2}{9} & \frac{1}{3} & \frac{4}{9} & \\ \hline & \frac{2}{9} & \frac{1}{3} & \frac{4}{9} & 0 \\ & \frac{7}{24} & \frac{1}{4} & \frac{1}{3} & \frac{1}{8} \end{array} . \]

References:

Bogacki, Przemyslaw; Shampine, Lawrence F. (1989), "A 3(2) pair of Runge–Kutta formulas", Applied Mathematics Letters, 2 (4): 321--325, ISSN 0893-9659, doi:10.1016/0893-9659(89)90079-7.
Ralston, Anthony (1965), A First Course in Numerical Analysis, New York: McGraw-Hill.
Shampine, Lawrence F.; Reichelt, Mark W. (1997), "The MATLAB ODE Suite", SIAM Journal on Scientific Computing, 18 (1): 1--22, ISSN 1064-8275, doi:10.1137/S1064827594276424.
The following routine implements the third-order part of the pair (the value \( y_{n+1} \)) with a prescribed number of steps; the embedded second-order estimate is not computed:

bogashamp[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, xnew, ynew, fold, f2, f3,
   sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   fold = f /. {x -> xold, y -> yold};
   f2 = f /. {x -> xold + h/2, y -> yold + fold*h/2};
   f3 = f /. {x -> xold + 3*h/4, y -> yold + 3*h*f2/4};
   ynew = yold + (h/9)*(2*fold + 3*f2 + 4*f3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]
bogashamp[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, 10]
Out[27]= {{0, 0}, {0.1, 0.095039}, {0.2, 0.180386}, {0.3, 0.256724}, {0.4, 0.324963}, {0.5, 0.386082}, {0.6, 0.441021}, {0.7, 0.490629}, {0.8, 0.535647}, {0.9, 0.576709}, {1., 0.614349}}

The same routine can be rewritten so that the last argument is the step size rather than the number of steps; since the argument pattern is equivalent to the one above, this definition supersedes the previous one:

bogashamp[f_, {x_, x0_, xn_}, {y_, y0_}, stepsize_] :=
 Block[{xold = x0, yold = y0, xnew, ynew, fold, f2, f3,
   sollist = {{x0, y0}}, h},
  h = N[stepsize];
  Do[xnew = xold + h;
   fold = f /. {x -> xold, y -> yold};
   f2 = f /. {x -> xold + h/2, y -> yold + fold*h/2};
   f3 = f /. {x -> xold + 3*h/4, y -> yold + 3*h*f2/4};
   ynew = yold + (h/9)*(2*fold + 3*f2 + 4*f3);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {Round[(xn - x0)/stepsize]}];
  Return[sollist]]

bogashamp[1/(3*x - 2*y + 1), {x, 0, 1}, {y, 0}, 0.1]

Out[29]= {{0, 0}, {0.1, 0.095039}, {0.2, 0.180386}, {0.3, 0.256724}, {0.4,
0.324963}, {0.5, 0.386082}, {0.6, 0.441021}, {0.7, 0.490629}, {0.8,
0.535647}, {0.9, 0.576709}, {1., 0.614349}}

 

 

 
