Preface


This tutorial was made solely for educational purposes, and it is designed for students taking Applied Math 0330. It is intended primarily for students who have little or no prior experience with Mathematica and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and you may distribute and refer to this tutorial as long as it is credited appropriately.


Heun's Methods


Since for many problems the Euler rule requires a very small step size to produce sufficiently accurate results, many efforts have been devoted to the development of more efficient methods. Our next step in this direction is Heun's method, named after the German mathematician Karl Heun (1859--1929), who made significant contributions to the development of the Runge--Kutta methods. Actually, the algorithm named after Heun was discovered by Carl Runge.

I. Trapezoid Rule


Suppose that we want to solve the initial value problem for the first order differential equation

\[ y' = f(x,y), \quad y(x_0 ) = y_0\qquad \Longleftrightarrow \qquad y(x) = y(x_0 ) + \int_{x_0}^x f\left( s, y(s) \right) {\text d}s , \]
which is equivalent to the integral equation on the right. To find a numerical solution, we pick a uniform set of grid points \( x_n = x_0 + h\, n , \quad n=0,1,2,\ldots , \) where h is a step size, kept fixed for simplicity. As usual, we denote by \( y_n \) the approximate values of the unknown function at the mesh points, \( y_n \approx \phi (x_n ) , \quad n=0,1,2,\ldots , \) where φ is the actual solution of the given initial value problem (IVP for short).

The concept of the trapezoid rule in numerical ODE methods is similar to the trapezoidal rule for Riemann sums. The trapezoid rule is generally more accurate than the Euler approximations: it advances the solution by adding half the step size h times the sum of the slopes at the current point and at the next point. Therefore the formula is as follows:

\[ y_{n+1} = y_n + \frac{h}{2} [ f( x_n , y_n ) + f( x_{n+1} , y_{n+1} ) ] . \]
This is an implicit method: the value \( y_{n+1} \) appears on both sides of the equation, so to actually calculate it we have to solve an equation that will usually be nonlinear. One possible method for solving this equation is Newton's method. We can use the Euler rule to get a fairly good estimate for the solution, which can serve as the initial guess for Newton's iteration.
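
One way to carry out this computation in Mathematica is to hand the nonlinear equation at each step to FindRoot (which uses a Newton-type iteration by default), starting from the Euler prediction as the initial guess. Below is a minimal sketch for the test problem \( y' = x^2 + y^2 , \quad y(0) = 1 \) used later on this page; the step size h = 0.1, the number of steps, and the auxiliary symbol z are illustrative choices, and the integration is stopped at x = 0.5 because the solution of this particular IVP blows up before x = 1.

(* implicit trapezoid rule: FindRoot solves the nonlinear equation at each step,
   with the Euler prediction serving as the initial guess for Newton's iteration *)
Clear[y, f, z];
f[x_, y_] := x^2 + y^2;
h = 0.1;
y[0] = 1;
Do[guess = y[n] + h*f[h*n, y[n]];        (* Euler predictor = initial guess *)
 y[n + 1] = z /. FindRoot[z == y[n] + (h/2)*(f[h*n, y[n]] + f[h*(n + 1), z]), {z, guess}],
 {n, 0, 4}]
y[5]      (* trapezoid-rule approximation at x = 0.5 *)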

 

II. Heun's Formula / Improved Euler Method


The improved Euler method, also known as the Heun formula or the average slope method, gives a more accurate approximation than the Euler rule and provides an explicit formula for computing \( y_{n+1} \). The basic idea is to correct some of the error of the original Euler method. The formula is similar to that of the trapezoid rule, except that the unknown value \( y_{n+1} \) on the right-hand side is replaced by its Euler prediction \( y_n + h\, f(x_n , y_n ) \).

The improved Euler formula, or the average slope method, is commonly referred to as the Heun method (although it was discovered by C. Runge):

\[ y_{n+1} = y_n + \frac{h}{2} [ f( x_n , y_n ) + f( x_{n+1} , y_n + h f( x_n , y_n ) ) ], \quad n=0,1,2,\ldots . \]
Since it is actually the simplest version of predictor-corrector method, the recurrence can be written as
\[ \begin{split} p_{n+1} &= y_n + h\, f \left( x_n , y_n \right) , \\ y_{n+1} &= y_n + \frac{h}{2} [ f( x_n , y_n ) + f( x_{n+1} , p_{n+1} ) ], \quad n=0,1,2,\ldots . \end{split} \]
Therefore, one version of the Heun method in Mathematica is as follows:
Clear[y]
f[x_, y_] := x^2 + y^2
y[0] = 1;                              (* initial condition *)
h = 0.1;                               (* step size *)
Do[k1 = h*f[h*n, y[n]];                (* increment from the slope at the current point *)
 k2 = h*f[h*(n + 1), y[n] + k1];       (* increment at the Euler-predicted point *)
 y[n + 1] = y[n] + 0.5*(k1 + k2),      (* average of the two increments *)
 {n, 0, 9}]
y[10]

The main difference here is that instead of using y[n+1] = y[n] + h*f(x[n], y[n]), we use

y[n+1] = y[n] + (h/2)*(f(x[n], y[n]) + f(x[n+1], y[n] + h*f(x[n], y[n]))),

i.e., we are averaging the slopes, with the unknown value y[n+1] replaced by its Euler prediction. To package this procedure as a reusable routine (used in the same way as the euler function defined earlier), input the following:
heun[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h, xnew, ynew, k1, k2},
  h = N[(xn - x0)/steps];                          (* fixed step size *)
  Do[xnew = xold + h;
   k1 = h*(f /. {x -> xold, y -> yold});           (* increment from the current slope *)
   k2 = h*(f /. {x -> xold + h, y -> yold + k1});  (* increment at the predicted point *)
   ynew = yold + 0.5*(k1 + k2);                    (* average of the two increments *)
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]

Instead of calculating the intermediate values k1 and k2, one can define ynew directly:

heun[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h, xnew, ynew, fold},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   fold = f /. {x -> xold, y -> yold};            (* slope at the current point *)
   ynew = yold + (h/2)*(fold + (f /. {x -> xold + h, y -> yold + h*fold}));
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]

The function heun is basically used in the same way as the function euler. Then to solve the above problem, do the following:

solution = heun[ x^2 + y^2 , {x, 0, 1}, {y, 1}, 10]

Then read the answer off the table.
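
The output is a list of {x, y} pairs; for instance, the last entry holds the approximation at the right endpoint x = 1, and the whole table can be plotted. The display commands below are just one convenient way to inspect the result (they are not part of the routine itself):

Last[solution]                                        (* approximate {x, y} pair at x = 1 *)
ListPlot[solution, Joined -> True, AxesLabel -> {"x", "y"}]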

Another option:

f[x_, y_] := Exp[2*x - y]
h = 0.1
K = Round[1.2/h]        (* number of steps needed to reach x = 1.2; Round avoids round-off error in the division *)
y[0] = 1                      (* initial condition *)

Then implement Heun's rule:

Do[y[n + 1] =
   y[n] + (h/2)*f[n*h, y[n]] + (h/2)*f[h*(n + 1), y[n] + h*f[n*h, y[n]]],
  {n, 0, K - 1}]
y[K]
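
To gauge the accuracy of this run, one can compare it with the closed-form solution that DSolve provides for this separable equation. A fresh symbol z is used below (an illustrative choice) so that the table of values already stored in y is not disturbed:

(* exact solution of y' = Exp[2x - y], y(0) = 1, evaluated at x = 1.2 *)
exact = DSolve[{z'[x] == Exp[2*x - z[x]], z[0] == 1}, z[x], x];
z[x] /. exact /. x -> 1.2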

 

Example:

Solve the IVP: \( y'= 1/(3x-2y+1),\quad y(0)=0 \)

First, we find its solution using Mathematica's command DSolve:

f[x_, y_] := 1/(3 x - 2 y + 1)
DSolve[{y'[x] == f[x, y[x]], y[0] == 0}, y[x], x]
Out[2]= {{y[x] -> 1/6 (1 + 9 x - 2 ProductLog[1/2 E^(1/2 + (9 x)/2)])}}

To evaluate the exact solution at x=1.0, we type

a := DSolve[{y'[x] == f[x, y[x]], y[0] == 0}, y[x], x]
y[x] /. a /. x -> 1.0
Out[4]= {0.614275}

Then we use Heun's numerical method with step size h=0.1:

y[0] = 0; x[0] = 0; h = 0.1;
Do[k1 = h f[n*h, y[n]]; k2 = h f[h*(n + 1), y[n] + k1];
y[n + 1] = y[n] + 0.5*(k1 + k2), {n, 0, 9}]
y[10]
Out[7]= 0.617265
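
A quick error check at x = 1, using the exact value 0.614275 obtained above with DSolve:

Abs[y[10] - 0.614275]      (* absolute error of Heun's approximation at x = 1 *)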

Now we demonstrate the use of the While command:

x[0] = 0; y[0] = 1; Nsteps = 20;
h = 0.1;
i = 1;
While[i < Nsteps + 1, x[i] = x[i - 1] + h; i++];      (* fill in the grid points *)
k = 1;
While[k < Nsteps + 1,
 y[k] = y[k - 1] + (h/2)*
    Function[{u, v}, f[u, v] + f[u + h, v + h*f[u, v]]][x[k - 1], y[k - 1]];
 k++];                                                (* Heun step at every grid point *)
Array[y, Nsteps]     (* to see all Nsteps values *)
y[Nsteps]            (* to see the last value *)
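
To visualize the points produced by the While loops, the computed grid values can be gathered into a list and plotted (an optional display step, not part of the algorithm itself):

ListPlot[Table[{x[k], y[k]}, {k, 0, Nsteps}], Joined -> True, AxesLabel -> {"x", "y"}]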

Example:

Consider the initial value problem for the integro-differential equation
\[ \dot{y} = 2.3\, y -0.01\,y^2 -0.1\,y\, \int_0^t y(\tau )\,{\text d}\tau , \qquad y(0) =50. \]
First, we recall the value \( y_1 \) found previously with the Euler rule, which we now use as a predictor:
\[ p_1 = 50 +2.3\,h\,50 -0.01\,h \,50^2 - 0.1\,h^2 \,50^2 . \]
Now we correct this value using the trapezoid rule:
\[ y_1 = 50 + \frac{h}{2} \left[ 2.3\left( 50 + p_1 \right) -0.01\left( 50^2 + p_1^2 \right) \right] -0.1 \, \int_0^h \,{\text d}\tau \,\int_{\tau}^h {\text d}s \, y(s)\,y(\tau ) . \]
We again approximate the double integral on the right-hand side with the trapezoid rule:
\[ T_1 (h) = \int_0^h \,{\text d}\tau \,\int_{\tau}^h {\text d}s \, y(s)\,y(\tau ) \approx \frac{h}{2} \, \int_0^h {\text d}s \, y(s)\,y(0 ) \approx \frac{h^2}{4} \, y(0) \left[ p_1 + y(0) \right] = 12.5\,h^2 \left[ 50 + p_1 \right] . \]
Therefore,
\[ y_1 = 50 + \frac{h}{2} \left[ 2.3\left( 50 + p_1 \right) -0.01\left( 50^2 + p_1^2 \right) \right] - 1.25\,h^2 \left[ 50 + p_1 \right] . \]
The general case of Heun's method yields
\[ \begin{split} p_{k+1} &= y_k + 2.3\,h\,y_k -0.01\,h\,y^2_k -0.1 \, y_k \,h\, T_k (h) ,\qquad k=1,2,\ldots ; \qquad t_{k+1} = t_k +h , \\ y_{k+1} &= y_k + \frac{h}{2} \left[ 2.3\left( y_k + p_{k+1} \right) -0.01\left( y^2_k + p_{k+1}^2 \right) \right] -0.1\,T_{k+1} (h) , \\ &= y_k + \frac{h}{2} \, 2.3 \left( y_k + p_{k+1} \right) - \frac{h}{2} \, 0.01 \left( y_k^2 + p_{k+1}^2 \right) - \frac{h}{20} \left[ y_k\, \int_0^{t_k} y(\tau )\,{\text d}\tau + p_{k+1}\, \int_0^{t_{k+1}} y(\tau )\,{\text d}\tau \right] \end{split} \]
where
\[ T_{k} (h) = \int_0^{t_k} {\text d} s\, \int_s^{t_k} {\text d} \tau\, y(s)\,y(\tau ) = \int_0^{t_k} {\text d} \tau \, \int_0^{\tau} {\text d} s\,y(s)\,y(\tau ) , \quad k=1,2,\ldots . \]
We break the integral into sum
\[ T_{k+1} (h) = T_k (h) + \int_{t_k}^{t_{k+1}} {\text d} \tau \, \int_0^{\tau} {\text d} s\,y(s)\,y(\tau ) = T_k (h) + \int_{t_k}^{t_{k+1}} {\text d} s \, \int_s^{t_{k+1}} {\text d} \tau\,y(s)\,y(\tau ) + \int_{t_k}^{t_{k+1}} {\text d} \tau \, \int_0^{t_k} {\text d} s\,y(s)\,y(\tau ) . \]
Using again the trapezoid rule, we get
\begin{align*} T_{k+1} (h) &= T_k (h) + \frac{h}{2}\, y_k \,\int_{t_k}^{t_{k+1}} {\text d} \tau \,y( \tau ) + \frac{h}{2} \left[ p_{k+1} \int_0^{t_{k}} {\text d}s \,y(s) + y_k \int_0^{t_{k}} {\text d}s \,y(s) \right] \\ &= T_k (h) + \frac{h^2}{4} \,y_k \left[ p_{k+1} + y_k \right] + \frac{h}{2} \left( p_{k+1} + y_k \right) \int_0^{t_{k}} {\text d}s \,y(s) . \end{align*}
The integral on the right-hand side can be computed recursively:
\[ J_k = \int_0^{t_{k}} {\text d}s \,y(s) = \int_0^{t_{k-1}} {\text d}s \,y(s) + \int_{t_{k-1}}^{t_{k}} {\text d}s \,y(s) = J_{k-1} + \int_{t_{k-1}}^{t_{k}} {\text d}s \,y(s) \approx J_{k-1} + \frac{h}{2} \left( y_k + y_{k-1} \right) , \]
with \( J_1 = \frac{h}{2} \left( y_1 + 50 \right) . \) Now we ask Mathematica to do calculations.
h = 0.01
y[0] = 50
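
One way to carry out the iteration is sketched below. It follows the predictor-corrector structure above, but keeps track only of the single integral \( J_k = \int_0^{t_k} y(\tau )\,{\text d}\tau \) through the recursion just derived, so the bookkeeping is slightly simpler than the formulas written with \( T_k \). The slope function g, the symbol J, and the number of steps nSteps are illustrative choices.

g[y_, j_] := 2.3*y - 0.01*y^2 - 0.1*y*j;   (* right-hand side; j holds the accumulated integral *)
J[0] = 0;                                  (* no accumulated integral at t = 0 *)
nSteps = 100;                              (* integrate up to t = nSteps*h = 1 *)
Do[p = y[k] + h*g[y[k], J[k]];                            (* Euler predictor *)
 Jp = J[k] + (h/2)*(y[k] + p);                            (* predicted integral at t_{k+1} *)
 y[k + 1] = y[k] + (h/2)*(g[y[k], J[k]] + g[p, Jp]);      (* trapezoid corrector *)
 J[k + 1] = J[k] + (h/2)*(y[k] + y[k + 1]),               (* update the memory integral *)
 {k, 0, nSteps - 1}]
y[nSteps]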

 

As a computational experiment, modify the improved Euler algorithm: use the corrector value as the next prediction, and only after this second iteration apply the trapezoid rule.

The "improved" predictor-corrector algorithm:

\[ \begin{split} k_1 &= f(x_k , y_k ) , \\ k_2 &= f(x_{k+1} , y_k + h\,k_1 ) , \\ k_3 &= y_k + \frac{h}{2} \left( k_1 + k_2 \right) , \\ y_{k+1} &= y_k + \frac{h}{2} \left[ k_1 + f \left( x_{k+1} , k_3 \right) \right] , \end{split} \]
is realized using the following Mathematica script:

heunit[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
  Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h, xnew, ynew, k1, k2, k3},
   h = N[(xn - x0)/steps];
   Do[xnew = xold + h;
    k1 = f[xold, yold];                       (* slope at the current point *)
    k2 = f[xold + h, yold + k1*h];            (* slope at the Euler-predicted point *)
    k3 = yold + 0.5*(k1 + k2)*h;              (* Heun (trapezoid) corrector value *)
    ynew = yold + 0.5*(k1 + f[xnew, k3])*h;   (* second correction *)
    sollist = Append[sollist, {xnew, ynew}];
    xold = xnew;
    yold = ynew, {steps}];
   Return[sollist]]

Then we apply this code to numerically solve the initial value problem \( y' = (1+x)\,\sqrt{y} , \quad y(0) =1 \) with step size h=0.1:
f[x_, y_] := (1 + x)*Sqrt[y]
heunit[f, {x, 0, 2}, {y, 1}, 20]
We get the approximate value 9.00778, while the regular Heun method gives 8.99148.
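
For comparison, the regular heun routine defined earlier (which takes the slope as an expression rather than a function) can be run on the same problem; the last entry of its output should reproduce the value 8.99148 quoted above:

Last[heun[(1 + x)*Sqrt[y], {x, 0, 2}, {y, 1}, 20]]    (* endpoint value from the regular Heun method *)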

 

III. Modified Euler Method / Midpoint Method


The modified Euler method is also called the midpoint approximation. Instead of advancing with the slope evaluated at the current point, this method attempts to obtain a more accurate approximation by re-evaluating the slope at the midpoint of each step: the x value of the midpoint is the current x plus h/2, and its y value is the current y plus h/2 times the slope at the current point.

The midpoint method can be shown to have a local truncation error of order \( O(h^3) \) and a global error of order \( O(h^2) \), so it is second-order accurate. The midpoint method is implemented in NDSolve as "ExplicitMidpoint":

NDSolve[{y'[t] == t^2 - y[t], y[0] == 1}, y[t], {t, 0, 2}, Method -> "ExplicitMidpoint", "StartingStepSize" -> 1/10]

Modified Euler formula or explicit midpoint rule or midpoint Euler algorithm:

\begin{equation} y_{n+1} = y_n +h f\left( x_n + \frac{h}{2}\ , \ y_n + \frac{h}{2} \,f( x_n , y_n ) \right) , \qquad n=0,1,2,\ldots . \end{equation}
Therefore, the Mathematica syntax is as follows:
y[n+1] = y[n]+ h f[x[n]+h/2,y[n] + (h/2)*f[x[n],y[n]]]

Another option:

f[x_, y_] := Exp[2*x - y]
h = 0.1
K = Round[1.2/h]        (* number of steps needed to reach x = 1.2; Round avoids round-off error in the division *)
y[0] = 1                      (* initial condition *)

Then implement modified Euler method/ midpoint rule:

Do[y[k + 1] = y[k] + h*f[k*h + h/2, y[k] + (h/2)*f[k*h, y[k]]], {k, 0, K - 1}]
y[K]

Let’s take all of the approximations and the exact values to compare the accuracy of the approximation methods:

x values | Exact       | Euler       | Backwards Euler | Trapezoid | Improved Euler | Modified Euler
0.1      | 1.049370088 | 1.050000000 | 1.057930422     | 1.0493676 | 1.049390244    | 1.049382716
0.2      | 1.097463112 | 1.098780488 | 1.118582822     | 1.0974587 | 1.097594738    | 1.097488615
0.3      | 1.144258727 | 1.146316720 | 1.182399701     | 1.1442530 | 1.144322927    | 1.144297185
0.4      | 1.189743990 | 1.192590526 | 1.249960866     | 1.1897374 | 1.189831648    | 1.189795330
0.5      | 1.233913263 | 1.237590400 | 1.322052861     | 1.2339064 | 1.234025039    | 1.233977276
0.6      | 1.276767932 | 1.281311430 | 1.399792164     | 1.2767613 | 1.276904264    | 1.276844291
0.7      | 1.318315972 | 1.323755068 | 1.484864962     | 1.3183102 | 1.318477088    | 1.318404257
0.8      | 1.358571394 | 1.364928769 | 1.580059507     | 1.3585670 | 1.358757326    | 1.358671110
0.9      | 1.397553600 | 1.404845524 | 1.690720431     | 1.3975511 | 1.397764204    | 1.397664201
1.0      | 1.435286691 | 1.443523310 | 1.830688225     | 1.4352865 | 1.435521666    | 1.435407596
Accuracy | N/A         | 99.4261%    | 72.4513%        | 99.9999%  | 99.9836%       | 99.9916%

 

 

 
