# Preface

This is a tutorial made solely for the purpose of education and it was designed for students taking Applied Math 0330. It is primarily for students who have very little experience or have never used Mathematica before and would like to learn more of the basics for this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters and run them. You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and have the right to distribute this tutorial and refer to this tutorial as long as this tutorial is accredited appropriately.

The most important single result in numerical computation is Taylor's theorem, which we state below.

Theorem: Let f(x) have n+1 continuous derivatives on the closed interval [a,b] for some positive integer n, and let $$x, x_0 \in [a,b] .$$ Then

$f(x) = p_n (x) + R_n (x)$
for
$p_n (x) = f(x_0 ) + f' (x_0 ) \,(x-x_0 ) + \cdots + \frac{f^{(n)} (x_0 )}{n!} \, (x- x_0 )^n = \sum_{k=0}^n \frac{(x- x_0 )^k}{k!} \, f^{(k)} (x_0 )$
and
$R_n (x) = \frac{1}{n!} \int_{x_0}^x (x-t)^n f^{(n+1)} (t)\, {\text d}t = \frac{(x- x_0 )^{n+1}}{(n+1)!} \,f^{(n+1)} (\xi_x ) ,$
where ξx is a point between x and x0. ■
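Although this tutorial works in Mathematica, the theorem is easy to illustrate numerically in any language. Here is a short Python sketch (our own check, not part of the course materials) for f(x) = eˣ about x0 = 0: the actual remainder at x = 1 must stay below the bound obtained from the Lagrange form, since |f⁽ⁿ⁺¹⁾(ξ)| ≤ e on [0, 1].

```python
import math

# Taylor's theorem for f(x) = exp(x) about x0 = 0, evaluated at x = 1:
# f(1) = p_n(1) + R_n(1), with |R_n(1)| <= e * 1^(n+1) / (n+1)!.
x, n = 1.0, 5
p_n = sum(x**k / math.factorial(k) for k in range(n + 1))  # Taylor polynomial p_n(1)
remainder = math.exp(x) - p_n                              # actual R_n(1)
bound = math.exp(1.0) * x**(n + 1) / math.factorial(n + 1) # Lagrange remainder bound
print(remainder, bound)   # remainder is positive and below the bound
```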

Brook Taylor (1685--1731) was educated at St. John's College of Cambridge University (England), from which he graduated in 1709. He wrote two very significant books, ‘Methodus incrementorum directa et inversa’ and ‘Linear Perspective,’ both published in 1715. He invented integration by parts and the series now known as ‘Taylor's expansion.’ Taylor's theorem traces back to a letter he wrote to Machin, in which he describes how the idea originated. It appears, however, that he did not fully appreciate its importance, and he certainly did not supply a proof of Taylor's theorem; a version of the result had already been discovered and published in 1694 by Johann Bernoulli (1667--1748). Taylor acknowledged that his work was based on that of Newton and Kepler, but Johann Bernoulli claimed priority in discovering both of Taylor's main results (Taylor's expansion and integration by parts).

# Polynomial Approximations

The Taylor series method is of general applicability, and it is a standard to which we compare the accuracy of the various other numerical methods for solving an initial value problem for ordinary differential equations:
$y' = f(x,y ) , \quad y(x_0 ) = y_0 .$
In what follows, it will be assumed that the given slope function f(x,y) has as many derivatives in a neighborhood of the initial point as needed. For simplicity, we consider the set of uniformly distributed grid points starting from x0: $$x_n = x_0 + n\,h , \quad n=0,1,2,\ldots .$$

Theorem: Suppose that a function y(t) has m+1 continuous derivatives on the interval [a,b] containing the grid points $$\{ x_n \}_{n\ge 0} .$$ Assume that y(t) has a Taylor series expansion of order m about a fixed value $$x_n \in [a,b] :$$

$y(x_n +h) = y(x_n ) + h\, \Phi_m \left( x_n, y(x_n )\right) + O\left( h^{m+1} \right) ,$
where the increment function is
$\Phi_m \left( x_n, y(x_n )\right) = \sum_{j=1}^m \frac{y^{(j)} (x_n )}{j!} \, h^{j-1}$
and
$y^{(k)} (x) = \left( \frac{\partial}{\partial x} + f\,\frac{\partial}{\partial y} \right)^{k-1}\, f\left( x, y(x) \right) , \qquad k=1,2,3,\ldots .$
In particular,
\begin{align*} y' (x) &= f(x,y(x)) , \\ y'' (x) &= f_x + f_y \, y' = f_x + f\, f_y , \\ y^{(3)} (x)&= f_{xx} + 2\, f_{xy} y' + f_y y'' + f_{yy} \left( y' \right)^2 \\ &= f_{xx} + 2\,f_{xy}\,f + f_{yy} f^2 + f_y \left( f_x + f\, f_y \right) , \\ y^{(4)} (x)&= f_{xxx} + 3\, f_{xxy} y' +3\, f_{xyy} \left( y' \right)^2 +3\, f_{xy} y'' + 3\, f_{yy} y' y'' + f_{yyy} \left( y' \right)^3 + f_y y''' \\ &= f_{xxx} + 3\, f_{xxy} \,f +3\, f_{xyy} \, f^2 +3\, f_{xy} \left( f_x + f\, f_y \right) + 3\, f_{yy}\, f \left( f_x + f\, f_y \right) + f_{yyy}\, f^3 + f_y \, y''' . \qquad ■ \end{align*}
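The chain-rule identity y'' = fₓ + f·f_y can be verified numerically for any concrete slope function. Below is a small Python sketch (our own check; the test problem y' = xy with exact solution y = exp(x²/2) is our choice, not from the tutorial) comparing the formula against a central difference of y'(x):

```python
import math

# Verify y'' = f_x + f*f_y for f(x, y) = x*y along the exact solution
# of y' = x*y, y(0) = 1, which is y(x) = exp(x^2/2).
def f(x, y): return x * y

x0 = 0.7
y0 = math.exp(x0**2 / 2)              # y(x0) on the exact solution
fx, fy = y0, x0                       # partial derivatives of f = x*y
analytic = fx + f(x0, y0) * fy        # y'' per the chain-rule formula

# independent check: central difference of y'(x) = f(x, y(x))
eps = 1e-6
def yprime(x): return f(x, math.exp(x**2 / 2))
numeric = (yprime(x0 + eps) - yprime(x0 - eps)) / (2 * eps)

print(analytic, numeric)   # the two values agree to many digits
```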

The approximate numerical solution to the initial value problem $$y' = f(x,y ) , \quad y(x_0 ) = y_0$$ over the interval [a,b] is derived by using the Taylor series expansion on each subinterval $$[ x_n , x_{n+1}] .$$ The general step for Taylor's method of order m is

$y_{n+1} = y_n + h\,y' (x_n ) + \frac{h^2}{2} \,y'' (x_n ) + \cdots + \frac{h^m}{m!} \, y^{(m)} (x_n ).$
The Taylor method of order m has local truncation error of the order $$O\left( h^{m+1} \right)$$ and hence a final global error of the order $$O\left( h^{m} \right) .$$ In principle, m can be chosen as large as necessary to make this error as small as desired. In practice, however, we usually compute two sets of approximations using step sizes h and h/2 and compare the results.
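The step-halving comparison in the last sentence can be sketched in a few lines. The following Python snippet (our own illustration; the test problem y' = y, y(0) = 1 with exact value e is our choice) applies the first order Taylor (Euler) method with h and h/2 and shows that halving the step roughly halves the global error, as expected for a first order method:

```python
import math

# Euler (first order Taylor) method for y' = y, y(0) = 1; exact y(1) = e.
def euler(h, steps):
    y = 1.0
    for _ in range(steps):
        y += h * y            # y_{n+1} = y_n + h*f(x_n, y_n) with f = y
    return y

y_h  = euler(0.1, 10)         # approximate y(1) with step h = 0.1
y_h2 = euler(0.05, 20)        # same endpoint with step h/2 = 0.05
exact = math.e
print(abs(y_h - exact), abs(y_h2 - exact))  # second error is about half the first
```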

## II. Second Order Polynomial Approximation

Since the first order Taylor series approximation is identical with Euler’s method, we start with the second order one:

$y_{n+1} = y_n + h\,f (x_n , y_n ) + \frac{h^2}{2} \left[ f_x (x_n , y_n ) + f(x_n , y_n )\, f_y (x_n , y_n ) \right] = y_n + h\, \Phi_2 (h) ,$
where the increment function Φ2 adds the second order correction term to the Euler (first order Taylor) increment. We show how it works in the following examples.

Example: Consider the initial value problem $$y'=1/(2x-3y+5), \quad y(0)=1 ,$$ and let us determine y(1).

To achieve this, we pick a fixed step size h = 0.1, which means that we need 10 iterations to reach the endpoint x = 1.

Clear[y]
h:=0.1;
f1[x_, y_] := 1 / (2 x - 3 y + 5)
fx[x_, y_] := D[f1[a, b], a] /. {a -> x, b -> y}
fy[x_, y_] := D[f1[a, b], b] /. {a -> x, b -> y}
f2[x_, y_] := fx[x, y] + fy[x, y] * f1[x, y]
y[0] = 1;
Do[y[n + 1] = y[n] + h * f1[h n, y[n]] + 1 / 2 * h^2 * f2[h n, y[n]], {n, 0, 9}]
y[10]
Out[29]= 1.43529

Each step of the iteration requires only a handful of arithmetic operations (a few additions, subtractions, multiplications, and one division), but over the 10 steps of the sequence this amounts to a couple of hundred operations in total, which would be tedious work to do by hand. The approximation is y10 = 1.435290196. Fortunately, this is not a concern, since Mathematica does all the work.
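The iteration is language-agnostic, so it can also be replicated outside Mathematica. Here is a minimal Python sketch (our own cross-check, with the partial derivatives of f = 1/(2x − 3y + 5) worked out by hand) reproducing the ten second order Taylor steps:

```python
# Second order Taylor method for y' = 1/(2x - 3y + 5), y(0) = 1.
def f(x, y):
    return 1.0 / (2*x - 3*y + 5)

def y2(x, y):                 # y'' = f_x + f*f_y, computed analytically
    d = 2*x - 3*y + 5
    fx = -2.0 / d**2
    fy = 3.0 / d**2
    return fx + f(x, y) * fy

h, y = 0.1, 1.0
for n in range(10):
    x = n * h
    y = y + h * f(x, y) + (h**2 / 2) * y2(x, y)

print(round(y, 5))   # 1.43529, matching the Mathematica run
```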

To check the answer, we write a subroutine to evaluate numerically the initial value problem for arbitrary slope function, initial conditions, and step size:

taylor2[f_, {x_, x0_, xn_}, {y_, y0_}, stepsize_] :=
 Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
  h = N[stepsize];
  Do[xnew = xold + h;
   fold = f /. {x -> xold, y -> yold};
   \[Phi]2 = fold + (h/2)*((D[f, x] /. {x -> xold, y -> yold}) +
       (D[f, y] /. {x -> xold, y -> yold})*fold);
   ynew = yold + h*\[Phi]2;
   sollist = Append[sollist, {xnew, ynew}];
   xold = xnew; yold = ynew, {(xn - x0)/stepsize}];
  Return[sollist]]

Now we apply this subroutine to our slope function:

taylor2[1/(2*x - 3*y + 5), {x, 0, 1}, {y, 1}, .1] // Flatten
Out[8]= {0, 1, 0.1, 1.04938, 0.2, 1.09747, 0.3, 1.14427, 0.4, 1.18976, 0.5, 1.23393, 0.6, 1.27678, 0.7, 1.31833, 0.8, 1.35858, 0.9, 1.39756, 1., 1.43529}

Using the Mathematica command NDSolve, we check the obtained value:

sol = NDSolve[{y'[x] == 1/(2*x - 3*y[x] + 5), y[0] == 1}, y[x], {x, 0, 2}]
Out[5]= {{y[x] -> InterpolatingFunction[{{0., 2.}}, <>][x]}}
z = y[x] /. sol /. x -> 1.0
Out[6]= {1.43529}

Example: Let us take a look at the initial value problem $$y'=(x^2 +y^2) \sin (y), \quad y(0)=1 ,$$ for which it is impossible to find the actual solution in closed form. First we find two derivatives:
\begin{align*} y' &= f(x,y) = \left( x^2 + y^2 \right) \sin (y) , \\ y'' &= 2\left( x + y\, y' \right) \sin (y) + \left( x^2 + y^2 \right) y' \, \cos (y) . \end{align*}
We check the formulas with Mathematica:

f[x_, y_] = (x^2 + y^2)*Sin[y]
f2[x_, y_] = D[f[x, y], x] + D[f[x, y], y]*f[x, y]

Then we ask Mathematica to do all the calculations:

y[0] = 1; x[0] = 0; h = 0.1;
n = Floor[(1 - 0)/h]
Solution = taylor2[(x^2 + y^2)*Sin[y], {x, 0, 1}, {y, 1}, .1] // Flatten
Out[4]= {0, 1, 0.1, 1.0935, 0.2, 1.21486, 0.3, 1.37875, 0.4, 1.60729, 0.5, 1.92811, 0.6, 2.34585, 0.7, 2.75325, 0.8, 2.97922, 0.9, 3.06953, 1., 3.10788}

We compare the second order Taylor series approximation y10 = 3.10788 with the value obtained with the aid of NDSolve (one needs to clear the variables before invoking NDSolve):

sol = NDSolve[{y'[x] == (x^2 + y[x]^2)*Sin[y[x]], y[0] == 1}, y[x], {x, 0, 2}]
z = y[x] /. sol /. x -> 1.0
Out[2]= {3.11958}

So we see that Taylor's approximation gives two correct significant digits. Now we use another approach involving the NestList command:

f[x_, y_] = (x^2 + y^2)*Sin[y]
f2[x_, y_] = D[f[x, y], x] + D[f[x, y], y]*f[x, y]
h = 0.1
euler2[{x_, y_}] = {x + h, N[y + h*f[x, y] + (h^2/2)*f2[x, y]]}
Expand[euler2[{x, y}]]
NestList[euler2, {0, 1}, 10] // Flatten
Out[12]= {0, 1, 0.1, 1.0935, 0.2, 1.21486, 0.3, 1.37875, 0.4, 1.60729, 0.5, 1.92811, 0.6, 2.34585, 0.7, 2.75325, 0.8, 2.97922, 0.9, 3.06953, 1., 3.10788}

Here is the direct application of the polynomial approximation with explicit loops:

f[x_, y_] = (x^2 + y^2)*Sin[y]
f1[x_, y_] = D[f[x, y], x] + f[x, y]*D[f[x, y], y]
y[1] = 1; h = 0.1;
a = 0; b = 1; n = Floor[(b - a)/h];
Do[x[i] = 0.0 + (i - 1)*h, {i, 1, n + 1}]
Do[{k1 = f1[x[i], y[i]],
  y[i + 1] = y[i] + h*f[x[i], y[i]] + h*h/2*k1}, {i, 1, n}]
Do[Print[x[i], " ", y[i]], {i, 1, n + 1}]

## III. Third Order Polynomial Approximation

The third order Taylor approximation adds the third order correction term to the equation for the second order expansion:

$y_{n+1} = y_n + h\,f (x_n , y_n ) + \frac{h^2}{2} \, y'' (x_n ) + \frac{h^3}{3!}\, y''' (x_n ) = y_n + h\, \Phi_3 (h) ,$
where the increment function Φ3 adds just one term to Φ2. We will show how it works in two examples that were considered previously.

Example: We reconsider the IVP: $$y'=1/(2x-3y+5), \quad y(0)=1 .$$ To use the third order polynomial approximation, we choose a uniform grid with constant step size h = 0.1.

Here is the syntax for the third order Taylor approximation (which executes slowly, because the symbolic derivatives are re-evaluated at every step):

Clear[y]
f1[x_, y_] := 1 / (2 x - 3 y + 5)
fx[x_, y_] := D[f1[a, b], a] /. {a -> x, b -> y}
fy[x_, y_] := D[f1[a, b], b] /. {a -> x, b -> y}
f2[x_, y_] := fx[x, y] + fy[x, y] * f1[x, y]
fxx[x_, y_] := D[fx[a, b], a] /. {a -> x, b -> y}
fxy[x_, y_] := D[fx[a, b], b] /. {a -> x, b -> y}
fyy[x_, y_] := D[fy[a, b], b] /. {a -> x, b -> y}
f3[x_, y_] := fxx[x, y] + 2 * fxy[x, y] * f1[x, y] +
fyy[x, y] * (f1[x, y])^2 + fx[x, y] * fy[x, y] + (fy[x, y])^2 * f1[x, y]
y[0] = 1; h = 0.1;
Do[y[n + 1] = y[n] + h * f1[h n, y[n]] +
1 / 2 * h^2 * f2[h n, y[n]] + 1 / 6 * h^3 * f3[h n, y[n]], {n, 0, 9}]
y[10]
Out[41]= 1.43528
The approximation for this value is y10=1.435283295 while the true value is 1.4352866048707.

The total number of numerical operations here grows to about 420. It is obvious at this point why using mathematical software such as Mathematica is the preferred approach for such problems.
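As before, the same third order iteration can be reproduced outside Mathematica. In the Python sketch below (our own cross-check), the partial derivatives of f = 1/d with d = 2x − 3y + 5 are worked out by hand: fₓ = −2/d², f_y = 3/d², fₓₓ = 8/d³, fₓ_y = −12/d³, f_yy = 18/d³.

```python
# Third order Taylor method for y' = 1/(2x - 3y + 5), y(0) = 1.
def step3(x, y, h):
    d = 2*x - 3*y + 5
    f = 1/d
    fx, fy = -2/d**2, 3/d**2
    fxx, fxy, fyy = 8/d**3, -12/d**3, 18/d**3
    d2 = fx + f*fy                                    # y''
    d3 = fxx + 2*fxy*f + fyy*f**2 + fx*fy + fy**2*f   # y'''
    return y + h*f + h**2/2*d2 + h**3/6*d3

y, h = 1.0, 0.1
for n in range(10):
    y = step3(n*h, y, h)
print(round(y, 6))   # ~1.435283, in line with the Mathematica value
```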

Example: We reconsider the IVP: $$y'=(x^2 +y^2) \sin (y), \quad y(0)=1 .$$ To use the third order polynomial approximation, we choose a uniform grid with constant step size h = 0.1.

taylor3[f_, {x_, x0_, xn_}, {y_, y0_}, stepsize_] :=
Block[{xold = x0, yold = y0, sollist = {{x0, y0}}, h},
h = N[stepsize];
Do[xnew = xold + h;
fold = f /. {x -> xold, y -> yold};
\[Phi]2 = fold + (h/2)*((D[f, x] /. {x -> xold, y -> yold}) +
    (D[f, y] /. {x -> xold, y -> yold})*fold);
\[Alpha] = (D[f, {x, 2}] /. {x -> xold, y -> yold}) +
   2*(D[f, x, y] /. {x -> xold, y -> yold})*fold +
   (D[f, {y, 2}] /. {x -> xold, y -> yold})*fold^2 +
   (D[f, x] /. {x -> xold, y -> yold})*(D[f, y] /. {x -> xold, y -> yold}) +
   (D[f, y] /. {x -> xold, y -> yold})^2*fold;
ynew = yold + h*\[Phi]2 + h^3/3!*\[Alpha];
sollist = Append[sollist, {xnew, ynew}];
xold = xnew;
yold = ynew, {(xn - x0)/stepsize}];
Return[sollist]]

Solution = taylor3[(x^2 + y^2)*Sin[y], {x, 0, 1}, {y, 1}, .1]
Out[8]= {{0, 1}, {0.1, 1.09483}, {0.2, 1.21862}, {0.3, 1.38695}, {0.4, 1.62303}, {0.5, 1.95295},
{0.6, 2.36595}, {0.7, 2.73957}, {0.8, 2.9658}, {0.9, 3.07578}, {1., 3.12003}}
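The same third order iteration can be checked independently in Python (our own code, with all partial derivatives of f(x, y) = (x² + y²) sin y written out by hand: fₓ = 2x sin y, f_y = 2y sin y + (x² + y²) cos y, fₓₓ = 2 sin y, fₓ_y = 2x cos y, f_yy = 2 sin y + 4y cos y − (x² + y²) sin y):

```python
import math

# Third order Taylor method for y' = (x^2 + y^2) sin(y), y(0) = 1.
def step3(x, y, h):
    s, c = math.sin(y), math.cos(y)
    q = x*x + y*y
    f = q * s
    fx, fy = 2*x*s, 2*y*s + q*c
    fxx, fxy = 2*s, 2*x*c
    fyy = 2*s + 4*y*c - q*s
    d2 = fx + f*fy                                    # y''
    d3 = fxx + 2*fxy*f + fyy*f**2 + fx*fy + fy**2*f   # y'''
    return y + h*f + h**2/2*d2 + h**3/6*d3

y, h = 1.0, 0.1
for n in range(10):
    y = step3(n*h, y, h)
print(round(y, 5))   # close to the tabulated value 3.12003
```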

Another example of code (with the same output):

f[x_,y_]=(x^2 + y^2)*Sin[y]
f1[x_,y_]=D[f[x,y],x]+f[x,y]*D[f[x,y],y]
f2[x_,y_]=D[f[x,y],x,x]+2*f[x,y]*D[f[x,y],x,y] + D[f[x,y],y,y]*f[x,y]*f[x,y] + D[f[x,y],x]*D[f[x,y],y] +f[x,y]*(D[f[x,y],y])^2
y[1] = 1; h =0.1;
a = 0; b = 1; n = Floor[(b-a)/h];
Do[x[i] = 0.0 + (i-1)*h, {i,1,n+1}]

Do[{k1=f1[x[i],y[i]],
k2=f2[x[i],y[i]],
y[i+1] = y[i] + h*f[x[i],y[i]] +h*h/2*k1 + k2/6*h^3}, {i,1,n}]
Do[Print[x[i], " ", y[i]],{i,1,n+1}]