Preface


This is a tutorial made solely for the purpose of education and it was designed for students taking Applied Math 0330. It is primarily for students who have very little experience or have never used Mathematica before and would like to learn more of the basics for this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters and run them. You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and have the right to distribute this tutorial and refer to this tutorial as long as this tutorial is accredited appropriately.

Return to computing page for the first course APMA0330
Return to computing page for the second course APMA0340
Return to Mathematica tutorial for the first course APMA0330
Return to Mathematica tutorial for the second course APMA0340
Return to the main page for the course APMA0330
Return to the main page for the course APMA0340
Return to Part III of the course APMA0330

Euler's Methods


We start with the first numerical method for solving initial value problems, which bears Euler's name. Leonhard Euler was born in 1707 in Basel, Switzerland, and died in 1783 in Saint Petersburg, Russia. In 1738, he became almost blind in his right eye. Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history. He is also widely considered to be the most prolific mathematician of all time. He spent most of his adult life in Saint Petersburg, Russia, except for about 20 years in Berlin, then the capital of Prussia.

In 1768, Leonhard Euler (St. Petersburg, Russia) introduced a numerical method, now called the Euler method or the tangent line method, for numerically solving the initial value problem:

\[ y' = f(x,y), \qquad y(x_0 ) = y_0 , \]
where f(x,y) is the given slope (rate) function and \( (x_0 , y_0 ) \) is a prescribed point in the plane. Euler's method, or rule, is a very basic algorithm for generating a numerical solution to the initial value problem for a first-order differential equation. The solution it produces is returned to the user in the form of a list of points. The user can then do whatever one likes with this output, such as create a graph, or utilize the point estimates for other purposes. Euler's rule serves to illustrate the concepts involved in more advanced methods. Its practical use is limited because of the larger error that accumulates as the process proceeds, so it requires a smaller step size. However, it is important to study because its error analysis is easier to understand.

To start, we need mesh or grid points, that is, the set of discrete points of the independent variable at which we find approximate solutions. In other words, we will find approximate values of the unknown solution at these mesh points. For simplicity, we use a uniform grid with fixed step length h; in practical applications, however, the step is almost never constant. For convenience, we subdivide the interval of interest [a,b] (where usually \( a= x_0 \) is the starting point) into m equal subintervals and select the mesh/grid points

\[ x_n = a + n\,h \qquad\mbox{for} \quad n=0,1,2,\ldots , m; \quad\mbox{where} \quad h = \frac{b-a}{m} . \]
The initial value problem under consideration is assumed to have a unique solution \( y= \phi (x) \) on the interval of interest [a,b], and its approximations at the grid points will be denoted by \( y_n \), so we wish that \( y_n \approx \phi (x_n ) , \quad n=1,2, \ldots . \)

There are three main approaches (we do not discuss others in this section) to deriving the Euler rule: use a finite difference approximation to the derivative, transfer the initial value problem to a Volterra integral equation and then truncate it, or apply a Taylor series.

  1. We start with the finite difference method. If we approximate the derivative on the left-hand side of the differential equation y' = f(x,y) by the finite difference

    \[ y' (x_n ) \approx \frac{y_{n+1} - y_n}{h} \]
    on the small subinterval \( [x_{n} , x_{n+1} ] , \) we arrive at Euler's rule when the right-hand side is evaluated at x = xn:
    \begin{equation*} y_{n+1} = y_n + (x_{n+1} - x_n ) f( x_n , y_n ) \qquad \mbox{or} \qquad y_{n+1} = y_n + h f_n , \end{equation*}
    where the following notations are used: \( h=x_{n+1} - x_n \) is the step length (which is assumed to be constant for simplicity), \( f_n = f( x_n , y_n ) \) is the value of slope function at mesh point, and \( y_n \) denotes the approximate value of the actual solution \( y=\phi (x) \) at the point \( x_n \) (\( n=1,2,\ldots \) ).
  2. Integrating the identity y'(x) = f(x,y(x)) between xn and xn+1, we obtain
    \[ y(x_{n+1}) - y(x_n ) = \int_{x_n}^{x_{n+1}} f(t, y(t))\,{\text d} t . \]
    Approximating the integral by the left rectangular (very crude) rule for numerical integration (length of interval times value of integrand at left end point) and identifying \( y(x_n ) \) with yn, we again obtain the Euler formula.

  3. We may assume the possibility of expanding the solution in a Taylor series around the point xn:
    \[ y(x_{n+1}) = y(x_n +h) = y(x_{n}) + h\, f(x_n, y(x_n )) + \frac{1}{2}\, h^2 \,y'' (x_n ) + \cdots . \]
    The Euler formula is the result of truncating this series after the linear term in h.
Each of these interpretations points the way to a class of generalizations of Euler's method that are discussed later. It is interesting to note that the generalization indicated by the first approach, which seems the most straightforward, has proved to be the least fruitful of the three.

When the slope function is evaluated at the right end point, we arrive at the so-called backward Euler rule:
\[ y_{n+1} = y_n + h\, f(x_{n+1}, y_{n+1}) , \qquad y_0 = y(0), \qquad n=0,1,2,\ldots . \]

However, this is not the only possible approximation; one may also consider the central difference:
\[ y' (x_n ) \approx \frac{y_{n+1} - y_{n-1}}{2h} , \]
which usually gives a more accurate approximation. Using the central difference, we get the two-step approximation:
\[ y_{n+1} = y_{n-1} + 2h\, f\left( x_n , y_n \right) , \qquad n=1,2,\ldots , \]
which requires two starting points to solve this second-order recurrence. One can use the standard Euler rule to determine y1: \( y_1 = y_0 + h\, f\left( x_0 , y_0 \right) . \) However, this numerical algorithm may be unstable, and we may observe a numerical solution that differs from the true solution. Such a solution is called a "ghost solution." This phenomenon is seeded by roundoff and truncation errors, which the recurrence then amplifies.

To avoid the instability and prevent the appearance of ghost solutions, many refined numerical procedures for integrating ordinary differential equations have been proposed. In practical calculations, the central difference scheme is used when evaluations of the slope function at intermediate points (as required, for instance, by Runge--Kutta methods) are not permitted. We mention two remedies that remove ghost solutions.

Consider the following mixed difference scheme, parameterized by μ with 0 ≤ μ ≤ 1, for the first-order differential equation:
\[ \left( 1 - \mu \right) \frac{y_{n+1} - y_{n-1}}{2h} + \mu\,\frac{y_{n+1} - y_n}{h} = f\left( x_n , y_n \right) , \qquad n=1,2,\ldots . \]
If we set μ = 1, we recover Euler's rule; when μ = 0, the scheme is nothing but the central difference scheme. This algorithm does not produce ghost solutions when μ > h/2.
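Solving the scheme for \( y_{n+1} \) gives \( y_{n+1} = \left[ 2h\,f( x_n , y_n ) + 2\mu\,y_n + (1-\mu ) \,y_{n-1} \right] / (1+ \mu ) . \) To see the threshold μ > h/2 at work numerically, here is a short Python sketch (ours, not part of the tutorial) that applies the scheme to the linear test equation y' = -y, y(0) = 1, with a single forward Euler step supplying y1:

```python
def mixed_scheme(f, y0, h, mu, steps):
    """Mixed scheme (1-mu)*(y[n+1]-y[n-1])/(2h) + mu*(y[n+1]-y[n])/h = f(x_n, y_n),
    solved for y[n+1]; a forward Euler step supplies y[1]."""
    ys = [y0, y0 + h * f(0.0, y0)]
    for n in range(1, steps):
        f_n = f(n * h, ys[n])
        ys.append((2 * h * f_n + 2 * mu * ys[n] + (1 - mu) * ys[n - 1]) / (1 + mu))
    return ys

decay = lambda x, y: -y                           # test equation y' = -y, true solution e^{-x}
ghost = mixed_scheme(decay, 1.0, 0.1, 0.0, 300)   # mu = 0: pure central difference
cured = mixed_scheme(decay, 1.0, 0.1, 0.5, 300)   # mu = 0.5 > h/2: ghost suppressed
```

With μ = 0 the recurrence has a parasitic root of magnitude about 1.105, so the iterates eventually blow up; with μ = 0.5 > h/2 both roots lie inside the unit circle and the iterates decay to zero like the true solution.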

Let φ be a real-valued function on \( \mathbb{R} \) that satisfies the conditions:

\[ \phi (h) = h + O\left( h^2 \right) \qquad\mbox{and} \qquad 0 < \phi(h) < 1 \quad\mbox{for all positive $h$}. \]
There is a variety of functions φ that satisfy the above conditions, e.g., \( \phi (h) = 1 - e^{-h} \quad \Longrightarrow \quad \phi (hq)/q = \left( 1- e^{-hq} \right) /q . \) Replacing the step size h in a difference scheme by such a function φ(h) leads to stable nonstandard schemes.

Example: Consider the initial value problem for the logistic equation

\[ \frac{{\text d}y}{{\text d}t} = y \left( 1-y \right) , \qquad y(0) = y_0 =0.5. \]
This problem has a unique solution
\[ y(t) = \frac{C\, e^t}{C\,e^t +1} , \qquad \mbox{where} \qquad C= \frac{y_0}{1-y_0} . \]
Using the following Mathematica code
y[0] = 0.5
h = 0.1
y[1] = y[0] + h*y[0]*(1 - y[0])
Do[y[n] = y[n - 2] + 2*h*y[n - 1]*(1 - y[n - 1]), {n, 2, 500}]
ListPlot[Table[{n*h, y[n]}, {n, 0, 500}]]
we observe the instability: the computed points first approach the equilibrium y = 1 and then oscillate about it with growing amplitude.

Another approach is based on transforming the given IVP \( y' = f(x,y) , \quad y(x_0 ) = y_0 \) into the equivalent integral equation

\[ y(x_{n+1}) = y(x_n ) + \int_{x_n}^{x_{n+1}} f(s, y(s))\,{\text d} s , \qquad n=0,1,2,\ldots . \]
Applying the left Riemann approximation, we again arrive at Euler's rule.

Example: We demonstrate how the Euler method works on a linear equation, for which all calculations become transparent. Note that Mathematica attempts to solve the ODE with its DSolve function. A solution may or may not be obtainable this way. When a closed-form solution cannot be determined, Euler's method provides an alternative for approximating the solution numerically. Consider the initial value problem:

\[ y' = x^2 y - 1.5 \,y , \qquad y(0) =1. \]
First, we do preprocessing by plugging in the numerical values:
f[x_, y_] := y*x^2 - 1.5*y; (* slope function *)
x0 := 0 (* starting point in x *)
y0 := 1 (* starting value for y *)
xf=2.0; (* Value of x at which y is desired *)
h = (xf - x0)/5.0 (* step size in x *)
We attempt to find an approximation at x=2 with a step size of 0.4. Then we find the explicit solution of the IVP formulated above using a standard Mathematica command and plot it:
soln = DSolve[{f[x, y[x]] == y'[x], y[x0] == y0}, y, x]
plot = Plot[Evaluate[y[x] /. soln], {x, x0, xf},
PlotStyle -> {{Thickness[0.01], RGBColor[1, 0, 0]}}]
Now we implement Euler's algorithm:
X[0] = x0; Y[0] = y0;
X[1] = X[0] + h
Y[1] = Y[0] + f[X[0], Y[0]]*h
plot1 = ListPlot[{{X[0], Y[0]}, {X[1], Y[1]}}, Joined -> True,
PlotStyle -> {Thickness[0.005], RGBColor[0, 0, 1]}, DisplayFunction -> Identity]
The procedure is repeated using the next x value (x[2] = x[1] + h) and the y value approximation in the previous step.
X[2] = X[1] + h
Y[2] = Y[1] + f[X[1], Y[1]]*h
plot2 = ListPlot[{{X[0], Y[0]}, {X[1], Y[1]}, {X[2], Y[2]}}, Joined -> True,
PlotStyle -> {Thickness[0.005], RGBColor[0, 0, 1]}, DisplayFunction -> Identity]
The procedure on the third step is repeated using the next x value (x[3] = x[2] + h) and the y value approximation in the previous step.
X[3] = X[2] + h
Y[3] = Y[2] + f[X[2], Y[2]]*h
plot3 = ListPlot[{{X[0], Y[0]}, {X[1], Y[1]}, {X[2], Y[2]}, {X[3], Y[3]}}, Joined -> True,
PlotStyle -> {Thickness[0.005], RGBColor[0, 0, 1]}, DisplayFunction -> Identity]
On the fourth step, we have
X[4] = X[3] + h
Y[4] = Y[3] + f[X[3], Y[3]]*h
plot4 = ListPlot[{{X[0], Y[0]}, {X[1], Y[1]}, {X[2], Y[2]}, {X[3], Y[3]}, {X[4], Y[4]}}, Joined -> True,
PlotStyle -> {Thickness[0.005], RGBColor[0, 0, 1]}, DisplayFunction -> Identity]
Finally, we plot Euler's approximations along with the explicit solution:
Show[plot, plot4, PlotRange -> Automatic]
Euler's approximation to the explicit solution.
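For readers who want to check the arithmetic by hand, here is a Python transcription (ours) of these five Euler steps for y' = x²y - 1.5y, y(0) = 1, with h = 0.4:

```python
f = lambda x, y: y * x**2 - 1.5 * y    # slope function from the example
x, y, h = 0.0, 1.0, 0.4                # initial point (0, 1), step size 0.4
points = [(x, y)]
for _ in range(5):                     # five steps carry x from 0 to 2
    y = y + h * f(x, y)
    x = x + h
    points.append((x, y))
```

The first step gives y(0.4) ≈ 1 + 0.4·(-1.5) = 0.4, matching the hand computation of Y[1] above.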

In the above codes, a special option DisplayFunction was used. You can either remove this option or replace it with the standard setting DisplayFunction -> $DisplayFunction. All Mathematica graphics functions, such as Show and Plot, have an option DisplayFunction, which specifies how the Mathematica graphics and sound objects they produce should actually be displayed. The setting you give for DisplayFunction is automatically applied to each graphics object that is produced.

DisplayFunction -> $DisplayFunction default setting
DisplayFunction -> Identity generate no display
DisplayFunction -> f apply f to graphics objects to produce display

Within the Mathematica kernel, graphics are always represented by graphics objects involving graphics primitives. When you actually render graphics, however, they must be converted to a lower-level form which can be processed by a Mathematica front end, such as a notebook interface, or by other external programs. The standard low-level form that Mathematica uses for graphics is PostScript. The Mathematica function Display takes any Mathematica graphics object, and converts it into a block of PostScript code. It can then send this code to a file, an external program, or in general any output stream.

We combine all steps into a subroutine:
EulerMethod[n_,x0_,y0_,xf_,f_]:=Module[{Y,h,i},
h=(xf-x0)/n;
Y=y0;
For[i=0,i<n,i++,
Y=Y+f[x0+i*h,Y]*h]; Y]
In the above code,
n = number of steps
x0 = initial value of x
y0 = initial value of y
xf = final value of x at which y is desired
f = slope function for the differential equation dy/dx = f(x,y)
nth = 7;
EV = (y /. First[soln])[xf];    (* exact value at x = xf, from the DSolve solution found above *)
Do[
Nn[i]=2^i;
H[i]=(xf-x0)/Nn[i];
AV[i]=EulerMethod[2^i,x0,y0,xf,f];
Et[i]=EV-AV[i];
et[i]=Abs[(Et[i]/EV)]*100.0;
If[i>0,
Ea[i]=AV[i]-AV[i-1];
ea[i]=Abs[Ea[i]/AV[i]]*100.0;
sig[i]=Floor[(2-Log[10,ea[i]/0.5])];
If[sig[i]<0,sig[i]=0];
] ,{i,0,nth}];
This loop calculates the following
AV = approximate value of the ODE solution using Euler's Method by calling the module EulerMethod
Et = true error
et = absolute relative true error percentage
Ea = approximate error
ea = absolute relative approximate error percentage
sig = least number of significant digits correct in approximation

The following code uses Euler's Method to calculate intermediate step values for the purpose of displaying the solution over the range specified. The number of steps used is the maximum value, n (in this example, n=128).

n=128;
X[0]=x0;
Y[0]=y0;
h=(xf-x0)/n;
For[i=1,i<=n,i++,
X[i]=x0+i*h;
Y[i]=Y[i-1]+f[X[i-1],Y[i-1]]*h; ];
data = Table[{X[i], Y[i]}, {i, 0, n}];
plot2 = ListPlot[data, Joined -> True,
PlotStyle -> {Thickness[0.005], RGBColor[0, 0, 1]}];
Finally, we plot the exact solution along with Euler's approximation:
Show[plot, plot2, PlotLabel -> "Exact and Approximate Solution"]
Explicit solution (red) and its Euler's approximation (blue).

Now we examine how the approximate value at the final point (denoted by xf) depends on the step size:

data = Table[{H[i], AV[i]}, {i, 0, nth}];
plot3 = ListPlot[data, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0.5, 0.5, 0]}, PlotLabel ->
"Approximate value of the solution of the ODE\nat x = xf as a function of step size"]
Dependence of the value at the final point with step size.
data = Table[{Nn[i], AV[i]}, {i, 0, nth}];
plot4 = ListPlot[data, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0, 0.5, 0.5]}, PlotLabel -> "Approximate value of the solution of the ODE\nat x = xf as a function of number of steps"]
Dependence of the value at the final point with the number of steps.
We plot the dependence of the errors on the number of steps.
soln = DSolve[{f[x, y[x]] == y'[x], y[x0] == y0}, y, x]
EV = (y /. First[soln])[xf]
data = Table[{Nn[i], Et[i]}, {i, 0, nth}]; plot5 = ListPlot[data, Joined -> True,
PlotStyle -> {Thickness[0.006], RGBColor[0.5, 0, 0.5]}, PlotLabel ->
"True error as a function of number of steps"]
True error as a function of number of steps.
data = Table[{Nn[i], et[i]}, {i, 0, nth}];
plot6 = ListPlot[data, AxesOrigin -> {0, 0},
Joined -> True, PlotStyle -> {Thickness[0.006], RGBColor[0.3, 0.3, 0.4]},
PlotLabel -> "Absolute relative true error percentage \n as a function of number of steps"]
Absolute relative true error percentage.
data = Table[{Nn[i], Ea[i]}, {i, 1, nth}];
plot7 = ListPlot[data, AxesOrigin -> {0, 0},
Joined -> True, PlotStyle -> {Thickness[0.006], RGBColor[0.7, 0.3, 0]},
PlotLabel -> "Approximate error \nas a function of number of steps"]
Approximate error.
data = Table[{Nn[i], ea[i]}, {i, 1, nth}];
plot9 = ListPlot[data, AxesOrigin -> {0, 0},
Joined -> True, PlotStyle -> {Thickness[0.006], RGBColor[0, 0.3, 0.7]},
PlotLabel -> "Absolute relative approximate error percentage\nas a function of number of steps"]
Absolute relative approximate error.

Example: To demonstrate the latter approach, we consider the initial value problem for the integro-differential equation

\[ \dot{y} = 2.3\, y -0.01\,y^2 -0.1\,y\, \int_0^t y(\tau )\,{\text d}\tau , \qquad y(0) =50. \]
Choosing the uniform grid \( t_k = k\,h , \quad k=0,1,2,\ldots , m; \) we integrate both sides to obtain the integral equation
\[ y(t ) = 50 + 2.3\,\int_0^t y(s)\,{\text d}s -0.01\,\int_0^t y^2 (s) \,{\text d}s -0.1\,\int_0^t y(s)\,{\text d}s \int_0^s y(\tau )\,{\text d}\tau \]
for each mesh point t = tk. Since the double integral can be written as
\[ \int_0^t y(s)\,{\text d}s \int_0^s y(\tau )\,{\text d}\tau = \int_0^t \,{\text d}\tau \,\int_{\tau}^t {\text d}s \, y(s)\,y(\tau ) , \]
the application of the left rectangular rule yields, for the first mesh point t1 = h:
\[ y(t_1 ) \approx 50 + 2.3\,h\,y(0) -0.01\,h\,y^2 (0) -0.1 \, h\, \int_0^{t_1} {\text d}s \, y(s)\,y(0 ) \approx 50 +2.3\,h\,50 -0.01\,h \,50^2 - 0.1\,h^2 \,50^2 . \]
So we take the right-hand side as the approximate value y1.

For the general step in Euler's rule, we have

\[ y_{k+1} = y_k + 2.3\,h\,y_k -0.01\,h\,y^2_k -0.1 \, y_k \,h \int_0^{t_k} {\text d}s \, y(s) , \qquad k=1,2,\ldots . \]
If the trapezoidal rule is used to approximate the integral, then this expression becomes
\[ y_{k+1} = y_k + 2.3\,h\,y_k -0.01\,h\,y^2_k -0.1 \, y_k \,h\, T_k (h) ,\qquad k=1,2,\ldots ; \]
where \( T_0 (h) = h^2 \,50^2 \) and
\[ T_{k} (h) = T_{k-1} (h) + \frac{h}{2} \left( y_k + y_{k-1} \right) ,\qquad k=1,2,\ldots . \]
Finally, we ask Mathematica to perform all calculations:
h := 0.001
T[0] := h^2 *50^2
y[0] := 50
y[1] := y[0] + 2.3*h*50 - 0.01*h*50^2 - 0.1*h^2*50^2
Do[ T[k] = T[k - 1] + h*(y[k] + y[k - 1])/2;
y[k + 1] = y[k] + 2.3*h*y[k] - 0.01*h*(y[k])^2 - 0.1*y[k]*h*T[k], {k, 1, 2000}]
ListPlot[Table[{k*h, y[k]}, {k, 1, 2000}]]
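The same recurrence is straightforward to transcribe into Python for checking (names ours); it follows the formulas above, with h = 0.001 and the trapezoidal update for T_k:

```python
h = 0.001
y = [50.0]
y.append(y[0] + 2.3*h*50 - 0.01*h*50**2 - 0.1*h**2*50**2)  # y[1] from the left-rectangle step
T = h**2 * 50**2                                           # T_0(h)
for k in range(1, 2001):
    T = T + h * (y[k] + y[k - 1]) / 2                      # trapezoidal accumulation of the integral
    y.append(y[k] + 2.3*h*y[k] - 0.01*h*y[k]**2 - 0.1*y[k]*h*T)
```

The iterates first grow (the 2.3y term dominates) and then decay as the accumulated integral takes over.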

There are several ways to implement the Euler numerical method for solving initial value problems. We demonstrate its implementations in a series of codes. To start, define the initial point and then the slope function:

Clear[x0,y0,x,y,f]
{x0, y0} = {0, 1}
f[x_, y_] = x^2 + y^2          (* slope function f(x,y) = x^2 + y^2 was chosen for concreteness *)

Next, define the step size:

h = 0.1
Now we define a single step of the Euler method:
euler[{x_, y_}] := {x + h, y + h*f[x, y]}
Create the table of approximations using Euler's rule:
eilist = NestList[euler, {x0, y0}, 10]

Plot with some options:

plp = ListPlot[eilist]

or
ListPlot[eilist, Joined -> True]
or
ListPlot[eilist, Joined -> True, Mesh -> All]
or
ListPlot[eilist, Filling -> Axis]

Another way is to make a loop.

Clear[y]
y[0] = 1; h = 0.01; f[x_, y_] = x^2 + y^2;
Do[y[n + 1] = y[n] + h*f[n*h, y[n]], {n, 0, 99}]
y[10]

First of all, it is always important to clear all previous assignment to all the variables that we are going to use, so we have to type: Clear[y]

The basic structure of the loop is:
Do[some expression with n, {n, starting number, end number}]
or with an optional increment, denoted by k:
Do[some expression with n, {n, starting number, end number, k}]

The function of this Do loop is to repeat the expression, with n taking values from “starting number” to “ending number,” and therefore to repeat the expression (1 + (ending number) - (starting number)) many times. For our example we want to iterate 100 steps, so n goes from 0, 1, 2, ..., up to 99. There is just one technical issue: the index n must be an integer. Therefore we let y[n] denote the approximate value of y(x) at x = n*h. This way every index is an integer, and we have our nice Do loop.
Finally, type y[10], which is actually y(0.1) (since x = n*h = 10 × 0.01), press Shift+Return, and we have our answer.

First, we start with the output of our program, which is perhaps the most important part of the program design. You'll notice that in the program description we are told that the "solution that it produces will be returned to the user in the form of a list of points. The user can then do whatever one likes with this output, such as create a graph, or utilize the point estimates for other purposes."

Next, we must decide upon the information that must be passed to the program for it to be able to achieve this. In programming jargon, these values are often referred to as parameters. So what parameters would an Euler program need to know in order to start solving the problem numerically? Quite a few should come to mind:

The slope function f(x, y)
The initial x-value, x0
The initial y-value, y0
The final x-value
The number of subdivisions to be used when chopping the solution interval into individual steps, or equivalently the step size

To code these parameters into the program, we need to decide on actual variable names for each of them. Let's choose variable names as follows:

f, for the slope function f(x, y)
x0,      for the initial x-value, x0
y0,      for the initial y-value, y0
xn,      for the final x-value
Steps, for the number of subdivisions
h,        for step size

There are many ways to achieve this goal. However, it is natural to make the new subroutine look similar to built-in Mathematica commands. So our program might look something like this:

euler[f,{x,x0,xn},{y,y0},Steps]

In order to use Euler's method to generate a numerical solution to an initial value problem of the form:

\[ y' = f(x, y) , \qquad y(x_0 ) = y_0 . \]

We have to decide on what interval, starting at the initial point x0, we wish to find the solution. We chop this interval into small subdivisions of length h, called the step size. Then, using the initial condition \( (x_0,y_0) \) as our starting point, we generate the rest of the solution with the iterative formulas:

\[ \begin{split} x_{n+1} = x_n + h , \\ y_{n+1} = y_n + h f(x_n, y_n) \end{split} \]
to find the coordinates of the points in our numerical solution. The algorithm is terminated when the right end of the desired interval has been reached.


If you would like to use a built-in Euler's method, there is no standalone command for it (although NDSolve provides it as the "ExplicitEuler" method, as shown at the end of this section). However, we can define it ourselves. Simply copy the following commands line by line:

euler[f_, {x_, x0_, xn_}, {y_, y0_}, Steps_] :=
Block[{ xold = x0, yold = y0, sollist = {{x0, y0}}, h },
h = N[(xn - x0) /Steps];            (* or h = (xn - x0)/Steps//N; *)
Do[ xnew = xold + h;
ynew = yold + h * (f /. {x -> xold, y -> yold});
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew,
{Steps}
];
Return[sollist]
]

Now we have our euler function:   euler[f(x,y), {x,x0,x1},{y,y0},steps]
This script solves the differential equation y’=f(x,y), subject to the initial condition y(x0)=y0, and generates all values between x0 and x1. The number of steps for Euler’s method is specified with steps.
To solve, for instance, the IVP \( y' = 1/(3x-2y+1), \quad y(0) = 1 \) on [0, 0.4] with four steps, we input:

euler[1/(3*x - 2*y + 1), {x, 0, 0.4}, {y, 1}, 4]
Out[2]= {{0, 1}, {0.1, 0.9}, {0.2, 0.7}, {0.3, 1.2}, {0.4, 1.}}
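For comparison, here is a Python analogue (ours) of the euler subroutine; it follows the same loop and reproduces the table above:

```python
def euler(f, x0, xn, y0, steps):
    """Forward Euler: returns the list of (x, y) points, like the Mathematica euler."""
    h = (xn - x0) / steps
    sollist = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + h, y + h * f(x, y)
        sollist.append((x, y))
    return sollist

table = euler(lambda x, y: 1 / (3*x - 2*y + 1), 0.0, 0.4, 1.0, 4)
```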

Example: Consider the initial value problem \( y' = x+y, \quad y(0) =1. \)

euler[y+x, {x, 0, 1}, {y, 1}, 10]
Out[3]= {{0, 1}, {0.1, 1.1}, {0.2, 1.22}, {0.3, 1.362}, {0.4, 1.5282}, {0.5,
1.72102}, {0.6, 1.94312}, {0.7, 2.19743}, {0.8, 2.48718}, {0.9, 2.8159}, {1., 3.18748}}

As you can see, the output is really a big table of values, and you can just read off the last entry to get y(1). If only the final value is needed, then the Return command should be replaced with
Return[sollist[[Steps + 1]]]

We can plot the output as follows

ListPlot[euler[y + x, {x, 0, 1}, {y, 1}, 10]]

Or we can plot it as

aa = euler[y + x, {x, 0, 1}, {y, 1}, 10]
Out[4]= {{0, 1}, {0.1, 1.1}, {0.2, 1.22}, {0.3, 1.362}, {0.4, 1.5282}, {0.5,
1.72102}, {0.6, 1.94312}, {0.7, 2.19743}, {0.8, 2.48718}, {0.9,
2.8159}, {1., 3.18748}}
ListPlot[aa, AxesLabel -> {"x", "y"}, PlotStyle -> {PointSize[0.015]}]


The following code, which uses a slightly different programming paradigm, implements Euler's method for a system of differential equations:

euler[F_, a_, Y0_, b_, n_] :=
Module[{t, Y, h = (b - a)/n //N, i},
t[0] = a; Y[0] = Y0;
Do[
t[i] = a + i h;
Y[i] = Y[i - 1] + h F[t[i - 1], Y[i - 1]],
{i, 1, n}
];
Table[{t[i], Y[i]}, {i, 0, n}]
]

And the usage message is:

euler::usage = "euler[F, t0, Y0, b, n] gives the numerical solution to {Y' == F[t, Y], Y[t0] == Y0} over the interval\n
[t0, b] by the n-step Euler's method. The result is in the form of a table of {t, Y} pairs."

Note that this function converts the increment h to numeric form with N. You can readily change that to keep h exact, or accomplish the corresponding thing by giving an approximate number for one of the endpoints.
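Since F and Y0 may be vector-valued, the same code integrates systems. As a cross-check outside Mathematica, here is a Python sketch (ours, plain lists, no external packages) applied to the system Y' = (y2, -y1) with Y(0) = (1, 0), whose exact solution is (cos t, -sin t):

```python
def euler_system(F, t0, Y0, b, n):
    """n-step Euler for the vector IVP Y' = F(t, Y), Y(t0) = Y0; returns (t, Y) pairs."""
    h = (b - t0) / n
    t, Y = t0, list(Y0)
    out = [(t, list(Y))]
    for _ in range(n):
        dY = F(t, Y)
        Y = [Y[j] + h * dY[j] for j in range(len(Y))]   # componentwise Euler step
        t = t + h
        out.append((t, list(Y)))
    return out

# Y' = (y2, -y1): exact solution (cos t, -sin t) for Y(0) = (1, 0)
pairs = euler_system(lambda t, Y: [Y[1], -Y[0]], 0.0, [1.0, 0.0], 1.0, 1000)
```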

Next we plot the points by joining them with a curve:

a = ListPlot[euler[f, 0, 1, 3, 30], Joined -> True]

Another way without writing a subroutine:

f[x_, y_] := y^2 - 3*x^2;
x0 = 0;
y0 = 1;
xend = 1.1;
steps = 5;
h = (xend - x0)/steps // N;
x = x0;
y = y0;
eulerlist = {{x, y}};
For[i = 1, i <= steps, i++,
 y = f[x, y]*h + y;
 x = x + h;
 eulerlist = Append[eulerlist, {x, y}]]
Print[eulerlist]

The results can also be visualized by connecting the points:

a = ListPlot[eulerlist, Joined -> True, Epilog -> {PointSize[0.02], Map[Point, eulerlist]}];
s = NDSolve[{u'[t] == f[t, u[t]], u[0] == 1}, u[t], {t, 0, 1.1}];
b = Plot[Evaluate[u[x] /. s], {x, 0, 1.1}, PlotStyle -> RGBColor[1, 0, 0]];
Show[a,b]

Next, we demonstrate an application of the Function command:

EulerODE[f_ /; Head[f] == Function, {t0_, y0_}, t1_, n_] :=
Module[{h = (t1 - t0)/n // N, tt, yy},
tt[0] = t0; yy[0] = y0;
tt[k_ /; 0 < k <= n] := tt[k] = tt[k - 1] + h;
yy[k_ /; 0 < k <= n] :=
yy[k] = yy[k - 1] + h f[tt[k - 1], yy[k - 1]];
Table[{tt[k], yy[k]}, {k, 0, n}]
];

ty = EulerODE[Function[{t, y}, y^2/t/2 + y^2/t^1.5], {1, 1}, 2, 100];
Plot[Interpolation[ty][t], {t, 1, 2}]
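EulerODE caches each value via the yy[k_] := yy[k] = … idiom, so every yy[k] is computed only once. The same memoized-recursion pattern can be written in Python with functools.lru_cache; a minimal sketch under our naming, applied to y' = y, where n-step Euler gives exactly (1 + h)^n:

```python
from functools import lru_cache

def euler_memo(f, t0, y0, t1, n):
    """Recursive Euler with caching, mirroring EulerODE's yy[k_] := yy[k] = ... idiom."""
    h = (t1 - t0) / n

    @lru_cache(maxsize=None)
    def y(k):
        if k == 0:
            return y0
        return y(k - 1) + h * f(t0 + (k - 1) * h, y(k - 1))   # cached, so computed once

    return [(t0 + k * h, y(k)) for k in range(n + 1)]

ty = euler_memo(lambda t, y: y, 0.0, 1.0, 1.0, 100)   # y' = y: Euler gives (1 + h)^n
```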

Here is another streamlined version of the Euler method, which we demonstrate in the following

Example: Consider the initial value problem \( y' = y^2 - 3\, x^2 , \quad y(0) =-1. \)

Clear[ x,y,h,i]
f[x_, y_] := y^2 - 3*x^2
x[1] = 0;     y[1] = -1;
h = 0.25; K = IntegerPart[2.5/h]
Do[ { x[i+1] = x[i]+h,
          y[i+1] = y[i] +h*f[x[i],y[i]] } , {i,1,K+1} ]

Now we can plot our results, making sure we only refer to values of x and y that we have defined:

Clear[pairs]
pairs = Table[{x[i], y[i]}, {i, 1, K+1}]
plot5 = ListPlot[pairs, Joined -> True, PlotStyle -> {Thickness[0.005], RGBColor[0, 1, 0]}]
Out[23]= {{0, -1}, {0.25, -0.75}, {0.5, -0.65625}, {0.75, -0.736084}, {1., -1.0225}, {1.25, -1.51113}, {1.5, -2.11213}, {1.75, -2.68436}, {2., -3.17979}, {2.25, -3.65202}, {2.5, -4.11458}}
Finally, we plot Euler approximations along with the actual solution:
soln = DSolve[{f[x, y[x]] == y'[x], y[0] == -1}, y, x]
plot1 = Plot[Evaluate[y[x] /. soln], {x, 0, 2.5}, PlotStyle -> {{Thickness[0.008], RGBColor[1, 0, 0]}}];
Show[plot1, plot5]

Now we are going to repeat the problem, but using Mathematica's list format instead. The idea is still the same: we set a new x value, compute the corresponding y value, and save the two quantities in two lists. In a list we simply store values; in the previously presented code, we were actually storing rules, which are more complicated to store and evaluate. We start out with just one value in each list (the initial conditions). Then we use the Append command to add the next pair of values to the lists.

Clear[ x,y,h,i]
x = {0.0};
y = {2.0};
h = 0.1;
Do[ x = Append[x, x[[i]] + h];
    y = Append[y, y[[i]] + h*f[x[[i]], y[[i]]]],
  {i, 1, 20}]

Now we can plot our results:

Clear[ pairs ]
pairs = Table[ {x[[i]], y[[i]] }, {i,1,21} ]
ListPlot [ pairs, Joined -> True ]

Euler's method is only first-order accurate: the local (per-step) error is of order \( h^2 , \) so the global error is of order h.
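First-order accuracy is easy to observe numerically: halving h should roughly halve the global error. A Python check (ours) on y' = -y, y(0) = 1, whose exact value at x = 1 is e⁻¹:

```python
def euler_final(f, x0, y0, xn, steps):
    """Forward Euler, returning only the final value at x = xn."""
    h = (xn - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + h, y + h * f(x, y)
    return y

exact = 0.36787944117144233                    # e**-1
e100 = abs(euler_final(lambda x, y: -y, 0.0, 1.0, 1.0, 100) - exact)
e200 = abs(euler_final(lambda x, y: -y, 0.0, 1.0, 1.0, 200) - exact)
ratio = e100 / e200                            # close to 2 for a first-order method
```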
Euler's method is implemented in NDSolve as ExplicitEuler:

NDSolve[{y'[t] == t^2 - y[t], y[0] == 1}, y[t], {t, 0, 2},
Method -> "ExplicitEuler", "StartingStepSize" -> 1/10]

 

 

II. Backward Euler Method


 

Backward Euler formula:
\[ y_{n+1} = y_n + (x_{n+1} - x_n ) f(x_{n+1}, y_{n+1}) \qquad \mbox{or} \qquad y_{n+1} = y_n + h\,f_{n+1} , \]
where h is the step size (which is assumed to be fixed, for simplicity) and \( f_{n+1} = f \left( x_{n+1}, y_{n+1} \right) . \)

Example: Consider the following initial value problem:

\[ y' = y^3 - 3\,t , \qquad y(0) =1 . \]
Here is the Mathematica code that solves this problem:
y[0] = 1.;        (* initial condition *)
h = 0.1;          (* step size *)
t[0] = 0.;        (* starting value of the independent variable *)
M = Round[0.5/h];       (* number of steps to reach the final point, in our case t = 0.5    *)
toler = h;      (* define the tolerance *)
Do[
  t[n + 1] = t[n] + h;
  eqn = (z == y[n] + h (z^3 - 3 t[n + 1]) );
  ans = z /. NSolve[eqn, z, Reals];
  indlist = {};
  toler = h;
 While[ Length[indlist] == 0, 
  toler = toler*2.;
  indlist = Flatten[Position[Map[(Abs[y[n] - #] < toler) &, ans], True]];
  ];
  ind = indlist[[1]];
  y[n + 1] = ans[[ind]];
  , {n, 0, M}]

Then we plot the solution:
ListPlot[Table[{t[n], y[n]}, {n, 0, M}], PlotStyle->PointSize[0.025]]
y[M]
t[M]
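If one prefers to avoid NSolve, the implicit equation \( z = y_n + h\left( z^3 - 3t_{n+1} \right) \) can be solved at each step by Newton's method started from y_n, which also selects the root nearest the previous value, much as the tolerance loop above does. A Python sketch (ours; we take a smaller step h = 0.02 so that the real root near y_n always exists and Newton stays on its branch):

```python
def backward_euler(y0, h, M):
    """Backward Euler for y' = y**3 - 3*t: at each step solve
    z = y_n + h*(z**3 - 3*t_{n+1}) by Newton's method, starting from y_n."""
    ts, ys = [0.0], [y0]
    for n in range(M):
        t_next = ts[-1] + h
        z = ys[-1]                         # Newton start = previous value
        for _ in range(50):
            g  = z - ys[-1] - h * (z**3 - 3 * t_next)
            dg = 1.0 - 3.0 * h * z**2
            step = g / dg
            z -= step
            if abs(step) < 1e-12:
                break
        ts.append(t_next)
        ys.append(z)
    return ts, ys

ts, ys = backward_euler(1.0, 0.02, 10)     # integrate to t = 0.2
```

On this range y³ > 3t, so the computed values increase monotonically, as the differential equation predicts.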

NSolve has to spend time computing all roots of the equation, which can be computationally expensive. FindRoot performs a fast search for a single root, so it is quick even for complicated equations. If you care about all possible roots, or if you have no clue where the roots may be, FindRoot is a poor choice. If you only care about a single root and have a rough idea of where it might be, though, FindRoot will find it quickly.

 

 
