# Preface

This tutorial was made solely for the purpose of education and was designed for students taking Applied Math 0330. It is intended primarily for students who have little or no experience with *Mathematica* and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font, while *Mathematica* output is in normal font. This means that you can copy and paste all commands into *Mathematica*, change the parameters, and run them. You, as the user, are free to use the scripts to learn the *Mathematica* program, and you have the right to distribute and refer to this tutorial as long as it is credited appropriately.


# Heun's Methods

Since for many problems the Euler rule requires a very small step size to produce sufficiently accurate results, much effort has been devoted to the development of more efficient methods. Our next step in this direction is Heun's method, named after the German mathematician Karl Heun (1859--1929), who made significant contributions to the development of Runge--Kutta methods. The algorithm that bears Heun's name was actually discovered by Carl Runge.

## I. Trapezoid Rule

Suppose that we want to solve the initial value problem for the first order differential equation

\[ y' = f(x,y) , \qquad y(x_0 ) = y_0 , \]

on the mesh points \( x_n = x_0 + n\,h , \) where *h* is a step size, chosen fixed for simplicity. As usual, we denote by *y*_{n} the approximate values of the unknown function at the mesh points, \( y_n \approx \phi (x_n ) , \quad n=0,1,2,\ldots , \) where φ is the actual solution of the given initial value problem (IVP for short).

The Trapezoid Rule in numerical methods for ODEs is analogous to the trapezoidal rule for Riemann sums. It is generally more accurate than the Euler approximations: it advances the solution by summing the slope function at the current and next points and multiplying by half the step size *h*:

\[ y_{n+1} = y_n + \frac{h}{2} \left[ f(x_n , y_n ) + f(x_{n+1} , y_{n+1} ) \right] . \]

Note that *y*_{n+1} appears on both sides of the equation, so to actually calculate it we have to solve an equation that will usually be nonlinear. One possible method for solving this equation is Newton's method. We can use the Euler rule to get a fairly good estimate of the solution, which can serve as the initial guess for Newton's method.
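As a sketch of this idea, one implicit trapezoid step can be carried out with *Mathematica*'s built-in **FindRoot** (rather than a hand-coded Newton iteration), using an Euler step as the initial guess; the slope function, step size, and initial data below are illustrative assumptions, not part of a specific problem in this tutorial:

```mathematica
f[x_, y_] := x^2 + y^2;   (* illustrative slope function *)
h = 0.1; x0 = 0; y0 = 1;  (* illustrative step size and initial data *)
guess = y0 + h f[x0, y0]; (* Euler predictor used as the initial guess *)
(* solve  y1 == y0 + (h/2) (f(x0, y0) + f(x0 + h, y1))  for y1 *)
y1 = y /. FindRoot[y == y0 + (h/2) (f[x0, y0] + f[x0 + h, y]), {y, guess}]
```

FindRoot performs the nonlinear solve at each step; for a stiff or strongly nonlinear right-hand side, a good predictor such as the Euler estimate keeps the iteration from wandering off to a wrong root.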

## II. Heun's Formula / Improved Euler Method

The Improved Euler method, also known as the Heun formula or the average slope method, gives a more
accurate approximation than the Euler rule and provides an explicit formula for computing *y*_{n+1}.
The basic idea is to correct some of the error of the original Euler method. Its formula is similar to
that of the trapezoid rule, except that the implicit value *y*_{n+1} on the right-hand side is replaced
by the Euler predictor *y*_{n} + *h* f(*x*_{n}, *y*_{n}).

The Improved Euler formula, or average slope method, is commonly referred to as the Heun method (although it was discovered by C. Runge):

\[ \begin{align*} k_1 &= f(x_n , y_n ) , \\ k_2 &= f(x_{n+1} , y_n + h\,k_1 ) , \\ y_{n+1} &= y_n + \frac{h}{2} \left( k_1 + k_2 \right) . \end{align*} \]

Its implementation in *Mathematica* is as follows:

```mathematica
Clear[y];
f[x_, y_] := x^2 + y^2
y[0] = 1;
h = 0.1;
Do[k1 = h f[h n, y[n]];
 k2 = h f[h (n + 1), y[n] + k1];
 y[n + 1] = y[n] + 0.5 (k1 + k2), {n, 0, 9}]
y[10]
```

The main difference here is that instead of using y[n+1]=y[n]+h*f(x[n],y[n]), we use

```mathematica
(* the defining head is an assumption, restored to match the calling pattern used later in this tutorial *)
heun[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   k1 = h*(f /. {x -> xold, y -> yold});
   k2 = h*(f /. {x -> xold + h, y -> yold + k1});
   ynew = yold + 0.5*(k1 + k2);
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]
```

Instead of calculating the intermediate values k1 and k2, one can define ynew directly:

```mathematica
(* the name heun2 and the defining head are assumptions; the body is the original *)
heun2[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   fold = f /. {x -> xold, y -> yold};
   ynew = yold + (h/2)*(fold + (f /. {x -> xold + h, y -> yold + h*fold}));
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]
```

The function **heun** is basically used in the same way as the function **euler**. Then to solve the
above problem, do the following:
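The original call was not preserved here. A plausible invocation, assuming a defining head of the form heun[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] and the slope x^2 + y^2 with y(0) = 1 used above, would be:

```mathematica
heun[x^2 + y^2, {x, 0, 1}, {y, 1}, 10]  (* ten Heun steps of size 0.1 on [0, 1] *)
```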

Then read the answer off the table.

Another option:

```mathematica
h = 0.1;
K = IntegerPart[1.2/h]  (* if the value at x = 1.2 is needed *)
y[0] = 1;               (* initial condition *)
```

Then implement Heun's rule:

```mathematica
Do[y[n + 1] =
   y[n] + (h/2)*f[n*h, y[n]] + (h/2)*
     f[h*(n + 1), y[n] + h*f[n*h, y[n]]], {n, 0, K - 1}]  (* last assignment defines y[K] *)
y[K]
```

### Example:

Solve the IVP: \( y'= 1/(3x-2y+1),\quad y(0)=0 . \) First, we define the slope function and find the solution using *Mathematica*'s command **DSolve**:

```mathematica
f[x_, y_] := 1/(3*x - 2*y + 1);
a = DSolve[{y'[x] == f[x, y[x]], y[0] == 0}, y[x], x]
```

To evaluate the exact solution at x = 1.0, we type:

```mathematica
y[x] /. a /. x -> 1.0
```

Then we use Heun's numerical method with step size h = 0.1:

```mathematica
h = 0.1; y[0] = 0;
Do[k1 = h f[n*h, y[n]]; k2 = h f[h*(n + 1), y[n] + k1];
 y[n + 1] = y[n] + 0.5*(k1 + k2), {n, 0, 9}]
y[10]
```

Now we demonstrate the use of the **While** command (Nsteps and the starting values x[0], y[0] were not set in the original and are supplied here to match the example above):

```mathematica
Nsteps = 10; h = 0.1;
x[0] = 0; y[0] = 0;
i = 1;
While[i < Nsteps + 1, x[i] = x[i - 1] + h; i++];
k = 1;
While[k < Nsteps + 1,
 y[k] = y[k - 1] + (h/2)*
    Function[{u, v}, f[u, v] + f[u + h, v + h*f[u, v]]][x[k - 1], y[k - 1]];
 k++];
Array[y, Nsteps]  (* to see all Nsteps values *)
y[Nsteps]         (* to see the last value *)
```

### Example:

Consider an initial value problem for an integro-differential equation with initial value y(0) = 50. As a predictor we use the value *y*_{1} found previously with the Euler rule, and we use *Mathematica* to do the calculations:

```mathematica
y[0] = 50
```

Make a computational experiment: in the improved Euler algorithm, use the corrector value as the next prediction, and only after the second iteration apply the trapezoid rule.

The "improved" predictor-corrector algorithm:

\[ \begin{align*}
k_1 &= f(x_k , y_k ) , \\
k_2 &= f(x_{k+1} , y_k + h\,k_1 ) , \\
k_3 &= y_k + \frac{h}{2} \left( k_1 + k_2 \right) , \\
y_{k+1} &= y_k + \frac{h}{2} \left( k_1 + f(x_{k+1} , k_3 ) \right) .
\end{align*} \]

Note that k₃ is a predicted *y* value, not a slope, so the corrector evaluates f at (x_{k+1}, k₃), matching the script below.

The corresponding *Mathematica* script (the defining head is restored from the call below):

```mathematica
heunit[f_, {x_, x0_, xn_}, {y_, y0_}, steps_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
  h = N[(xn - x0)/steps];
  Do[xnew = xold + h;
   k1 = f[xold, yold];
   k2 = f[xold + h, yold + k1*h];
   k3 = yold + 0.5*(k1 + k2)*h;
   ynew = yold + 0.5*(k1 + f[xnew, k3])*h;
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew;
   yold = ynew, {steps}];
  Return[sollist]]
```

Then we apply this code to numerically solve the initial value problem \( y' = (1+x)\,\sqrt{y} , \quad y(0) =1 \) with step size h=0.1:

```mathematica
f[x_, y_] := (1 + x)*Sqrt[y];
heunit[f, {x, 0, 2}, {y, 1}, 20]
```

## III. Modified Euler Method / Midpoint Method

The Modified Euler method is also called the midpoint approximation. Instead of using the slope at the current point for the whole step, this method reevaluates the slope halfway across the interval, which generally gives a more accurate approximation. In words: the next value is the current *y* plus *h* times the slope function evaluated at the midpoint, that is, at *x*_{n} + *h*/2 and at the *y* value predicted there by a half Euler step, *y*_{n} + (*h*/2) f(*x*_{n}, *y*_{n}).

The midpoint method can be shown to have a local truncation error of order *h*³, so it is second-order accurate.
The midpoint method is implemented in NDSolve as "ExplicitMidpoint":
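As an illustrative sketch, the built-in method can be invoked with a fixed step; the IVP below reuses \( y' = (1+x)\sqrt{y},\ y(0)=1 \) from earlier in this tutorial, and the step size 0.1 is an assumption for demonstration:

```mathematica
(* fixed-step explicit midpoint rule via NDSolve *)
sol = NDSolve[{y'[x] == (1 + x)*Sqrt[y[x]], y[0] == 1}, y, {x, 0, 2},
   Method -> {"FixedStep", Method -> "ExplicitMidpoint"},
   StartingStepSize -> 0.1];
y[1] /. First[sol]  (* approximate value at x = 1 *)
```

Wrapping "ExplicitMidpoint" in the "FixedStep" controller keeps the step size constant, so the result can be compared directly with the hand-coded loop below.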

The Modified Euler formula, also known as the explicit midpoint rule or midpoint Euler algorithm, reads

\[ y_{n+1} = y_n + h\, f \left( x_n + \frac{h}{2} , \ y_n + \frac{h}{2}\, f (x_n , y_n ) \right) . \]

In *Mathematica*:

```mathematica
h = 0.1;
K = IntegerPart[1.2/h]  (* if the value at x = 1.2 is needed *)
y[0] = 1;               (* initial condition *)
```

Then implement the modified Euler method / midpoint rule (the loop is reconstructed from the verbal description above):

```mathematica
Do[y[n + 1] =
   y[n] + h*f[n*h + h/2, y[n] + (h/2)*f[n*h, y[n]]], {n, 0, K - 1}]
y[K]
```

Let’s take all of the approximations and the exact values to compare the accuracy of the approximation methods:

| x values | Exact | Euler | Backwards Euler | Trapezoid | Improved Euler | Modified Euler |
|---|---|---|---|---|---|---|
| 0.1 | 1.049370088 | 1.050000000 | 1.057930422 | 1.0493676 | 1.049390244 | 1.049382716 |
| 0.2 | 1.097463112 | 1.098780488 | 1.118582822 | 1.0974587 | 1.097594738 | 1.097488615 |
| 0.3 | 1.144258727 | 1.146316720 | 1.182399701 | 1.1442530 | 1.144322927 | 1.144297185 |
| 0.4 | 1.189743990 | 1.192590526 | 1.249960866 | 1.1897374 | 1.189831648 | 1.189795330 |
| 0.5 | 1.233913263 | 1.237590400 | 1.322052861 | 1.2339064 | 1.234025039 | 1.233977276 |
| 0.6 | 1.276767932 | 1.281311430 | 1.399792164 | 1.2767613 | 1.276904264 | 1.276844291 |
| 0.7 | 1.318315972 | 1.323755068 | 1.484864962 | 1.3183102 | 1.318477088 | 1.318404257 |
| 0.8 | 1.358571394 | 1.364928769 | 1.580059507 | 1.3585670 | 1.358757326 | 1.358671110 |
| 0.9 | 1.397553600 | 1.404845524 | 1.690720431 | 1.3975511 | 1.397764204 | 1.397664201 |
| 1.0 | 1.435286691 | 1.443523310 | 1.830688225 | 1.4352865 | 1.435521666 | 1.435407596 |
| Accuracy | N/A | 99.4261% | 72.4513% | 99.9999% | 99.9836% | 99.9916% |
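The accuracy figures in the last row can be read as the relative agreement with the exact value at x = 1.0; a sketch of that computation (the helper name `accuracy` is hypothetical, and the numbers are taken from the table):

```mathematica
accuracy[approx_, exact_] := 100*(1 - Abs[approx - exact]/exact);
accuracy[1.443523310, 1.435286691]  (* Euler column at x = 1.0 *)
```

The result agrees with the 99.4261% reported for the Euler column.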
