The Taylor series method is of general applicability, and it is the standard against which the accuracy of other numerical methods is compared. It can be devised to have any specified degree of accuracy under some general assumptions. The single most important result in numerical computation is Taylor's theorem, which we now state.

Theorem (Taylor): Let f(x) have n+1 continuous derivatives on the closed interval [a,b] for some positive integer n, and let \( x, x_0 \in [a,b] . \) Then

\[ f(x) = p_n (x) + R_n (x) , \]
where
\[ p_n (x) = f(x_0 ) + f' (x_0 ) \,(x-x_0 ) + \cdots + \frac{f^{(n)} (x_0 )}{n!} \, (x- x_0 )^n = \sum_{k=0}^n \frac{(x- x_0 )^k}{k!} \, f^{(k)} (x_0 ) \]
and
\[ R_n (x) = \frac{1}{n!} \int_{x_0}^x (x-t)^n f^{(n+1)} (t)\, {\text d}t = \frac{(x- x_0 )^{n+1}}{(n+1)!} \,f^{(n+1)} (\xi_x ) , \]
where ξx is a point between x0 and x. ■
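To make the theorem concrete, here is a short Python sketch (our illustration, not part of the original text) that evaluates the Taylor polynomial pn of f(x) = ex about x0 = 0 and checks the actual error against the Lagrange remainder bound; the helper names `taylor_poly` and `exp_derivs` are ours.

```python
import math

def taylor_poly(n, x, x0, derivs):
    """Evaluate p_n(x) = sum_{k=0}^n f^(k)(x0) (x - x0)^k / k!.
    derivs(k, x0) must return the k-th derivative of f at x0."""
    return sum(derivs(k, x0) * (x - x0) ** k / math.factorial(k)
               for k in range(n + 1))

# For f(x) = e^x every derivative is e^x, so derivs(k, x0) = e^{x0}.
exp_derivs = lambda k, x0: math.exp(x0)

p5 = taylor_poly(5, 1.0, 0.0, exp_derivs)   # degree-5 approximation of e
err = abs(math.exp(1.0) - p5)
# Lagrange remainder bound at x = 1: |R_5(1)| <= max f^(6) / 6! = e / 720,
# since f^(6)(xi) = e^xi <= e on [0, 1].
bound = math.e / math.factorial(6)
```

Here the computed error is safely below the remainder bound, as the theorem guarantees.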

Brook Taylor (1685--1731) was educated at St. John's College, Cambridge (England), from which he graduated in 1709. He wrote two very significant books, ‘Methodus incrementorum directa et inversa’ and ‘Linear Perspective,’ both published in 1715. The former contains integration by parts and the series expansion now named after him. In a letter to Machin, Taylor described how the idea originated. It appears, however, that he did not appreciate its importance, and he certainly did not bother with a proof of Taylor's theorem, which had been discovered and published in 1694 by Johann Bernoulli (1667--1748). Taylor acknowledged that his work was based on that of Newton and Kepler; Johann Bernoulli, for his part, claimed priority in discovering both main results (the Taylor expansion and integration by parts).


Polynomial Approximations


The Taylor series method is of general applicability, and it is a standard against which we compare the accuracy of other numerical methods for solving an initial value problem for ordinary differential equations:
\[ y' = f(x,y ) , \quad y(x_0 ) = y_0 . \]
A numerical method for solving the above initial value problem starts with choosing a set of mesh points
\[ \left( x_0 , y_0 \right) , \left( x_1 , y_1 \right) , \ldots , \left( x_N , y_N \right) , \]
where \( x_n = x_0 + n\,h , \) n = 0, 1, …, N, are uniformly spaced grid points with step size h starting from x0. In what follows, it will be assumed that the given slope function f(x,y) has as many derivatives in a neighborhood of the initial point as needed.

Theorem: Suppose that a function y(x) is a solution of the initial value problem \( y' = f(x,y), \ y(x_0 ) = y_0 \) and that it has m+1 continuous derivatives on the interval [a,b] containing the grid points \( \{ x_n \}_{n\ge 0} . \) Assume that y(x) has a Taylor series expansion of order m about a fixed grid point \( x_n \in [a,b] : \)

\[ y(x_n +h) = y(x_n ) + h\, \Phi_m \left( x_n, y(x_n )\right) + O\left( h^{m+1} \right) , \]
where the increment function is
\[ \Phi_m \left( x_n, y(x_n )\right) = \sum_{j=1}^m \frac{y^{(j)} (x_n )}{j!} \, h^{j-1} \]
and
\[ y^{(k)} (x) = \left( \frac{\partial}{\partial x} + f\,\frac{\partial}{\partial y} \right)^{k-1}\, f\left( x, y(x) \right) , \qquad k=1,2,3,\ldots . \]
In particular,
\begin{align*} y' (x) &= f(x,y(x)) , \\ y'' (x) &= f_x + f_y \, y' = f_x + f\, f_y , \\ y^{(3)} (x)&= f_{xx} + 2\, f_{xy} y' + f_y y'' + f_{yy} \left( y' \right)^2 \\ &= f_{xx} + 2\,f_{xy}\,f + f_{yy} f^2 + f_y \left( f_x + f\, f_y \right) , \\ y^{(4)} (x)&= f_{xxx} + 3\, f_{xxy} y' +3\, f_{xyy} \left( y' \right)^2 +3\, f_{xy} y'' +3\, f_{yy} y' y'' + f_{yyy} \left( y' \right)^3 + f_y y''' \\ &= f_{xxx} + 3\, f_{xxy} \,f +3\, f_{xyy} f^2 +3\, f_{xy} \left( f_x + f\, f_y \right) +3\, f_{yy}\, f \left( f_x + f\, f_y \right) + f_{yyy}\, f^3 + f_y \, y''' . \end{align*}
Then the approximate numerical solution at the mesh points \( \left( x_n , y_n \right) , \ n=0,1,2,\ldots , N, \) to the initial value problem \( y' = f(x,y), \ y(x_0 ) = y_0 \) can be evaluated according to the formula:
\[ y_{n+1} = y_n + h \,y' (x_n ) + \frac{h^2}{2!}\, y'' (x_n ) + \cdots + \frac{h^m}{m!} \,y^{(m)} (x_n ) , \]
where h is a step size. ■
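As a quick sanity check on the derivative recursion above, the following Python sketch (our illustration, with a test problem of our choosing) compares y'' = fx + f fy against a central-difference derivative of y'(x) = f(x, y(x)) along the known solution of y' = x + y, y(0) = 1.

```python
import math

# Hypothetical test problem (our choice): y' = x + y, y(0) = 1,
# whose exact solution is y(x) = 2*e^x - x - 1.
y  = lambda x: 2.0 * math.exp(x) - x - 1.0
f  = lambda x, yy: x + yy        # slope function f(x, y)
fx = lambda x, yy: 1.0           # partial derivative f_x
fy = lambda x, yy: 1.0           # partial derivative f_y

x = 0.3
# y''(x) from the recursion y'' = f_x + f*f_y, on the exact solution:
y2_formula = fx(x, y(x)) + f(x, y(x)) * fy(x, y(x))

# y''(x) obtained by central-differencing y'(x) = f(x, y(x)):
h = 1e-5
y2_numeric = (f(x + h, y(x + h)) - f(x - h, y(x - h))) / (2.0 * h)
```

Both values agree with the analytic second derivative 2ex of the exact solution.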

The approximate numerical solution to the initial value problem \( y' = f(x,y ) , \quad y(x_0 ) = y_0 \) over the interval [a,b] is derived by using the Taylor series expansion on each subinterval \( [ x_n , x_{n+1}] . \) The general step for Taylor's method of order m is

\[ y_{n+1} = y_n + h\,y' (x_n ) + \frac{h^2}{2} \,y'' (x_n ) + \cdots + \frac{h^m}{m!} \, y^{(m)} (x_n ). \]
The Taylor method of order m has the property that the final global error is of order \( O\left( h^{m} \right) ; \) the local truncation error committed in a single step is \( O\left( h^{m+1} \right) . \) It is theoretically possible to choose m as large as necessary to make this error as small as desired. However, in practice, we usually compute two sets of approximations using step sizes h and h/2 and compare the results.
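The h versus h/2 comparison can be demonstrated with the order-1 Taylor method (Euler's rule), whose global error is O(h): halving the step size should roughly halve the error. A minimal Python sketch, assuming the test problem y' = y, y(0) = 1 on [0, 1]:

```python
import math

def taylor1(f, x0, y0, h, n_steps):
    """Order-1 Taylor (Euler) method: y_{k+1} = y_k + h*f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: y            # y' = y, exact solution y(x) = e^x
err_h  = abs(math.e - taylor1(f, 0.0, 1.0, 0.1,  10))   # step h
err_h2 = abs(math.e - taylor1(f, 0.0, 1.0, 0.05, 20))   # step h/2
ratio = err_h / err_h2        # close to 2 for a first-order method
```

For an order-m method the analogous ratio approaches 2m as h shrinks, which is the basis of the step-halving accuracy check.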

Theorem (Precision of Taylor's method of order m): Suppose that the solution to the initial value problem \( y' = f(x,y ) , \quad y(x_0 ) = y_0 \) possesses m+1 continuous derivatives. Let \( \left( x_n , y_n \right) , \ n=0,1,2,\ldots , N, \) be the sequence of approximations generated by Taylor's method of order m. Then

\[ \begin{split} \left\vert e_n \right\vert &= \left\vert y(x_n ) - y_n \right\vert = O \left( h^{m} \right) , \\ \left\vert \varepsilon_{n+1} \right\vert &= \left\vert y(x_{n+1} ) - y(x_n ) - h\, \Phi_m \left( x_n, y(x_n )\right) \right\vert = O \left( h^{m+1} \right) . \end{split} \]
The first quantity is the global error and the second is the local truncation error.
In particular, the final global error at the end interval will satisfy
\[ \left\vert y(b) - y_N \right\vert = O \left( h^{m} \right) . \qquad \blacksquare \]

 

Second Order Polynomial Approximations


Since the first order Taylor series approximation is identical with Euler’s method, we start with the second order one:

\[ y_{n+1} = y_n + h\,f (x_n , y_n ) + \frac{h^2}{2} \left[ f_x (x_n , y_n ) + f(x_n , y_n )\, f_y (x_n , y_n ) \right] = y_n + h\, \Phi_2 \left( x_n , y_n \right) , \]
where the increment function Φ2 augments Euler's rule with the second-order term \( \frac{h}{2} \left[ f_x + f\, f_y \right] \) evaluated at the current point. We show how it works in the following examples.
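A minimal Python sketch of the second-order step, assuming the user supplies f along with its partial derivatives fx and fy; the test problem y' = x + y, y(0) = 1 (exact solution y = 2ex − x − 1) is our choice:

```python
import math

def taylor2(f, fx, fy, x0, y0, h, n_steps):
    """Second-order Taylor step:
    y_{n+1} = y_n + h*f + (h^2/2)*(f_x + f*f_y), all at (x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n_steps):
        fv = f(x, y)
        y += h * fv + (h * h / 2.0) * (fx(x, y) + fv * fy(x, y))
        x += h
    return y

# Test problem: y' = x + y, y(0) = 1, exact solution y = 2*e^x - x - 1
f  = lambda x, y: x + y
fx = lambda x, y: 1.0   # partial f / partial x
fy = lambda x, y: 1.0   # partial f / partial y

approx = taylor2(f, fx, fy, 0.0, 1.0, 0.1, 10)   # integrate to x = 1
exact = 2.0 * math.e - 2.0
```

With h = 0.1 the error at x = 1 is already below 10−2, consistent with an O(h2) global error.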

 

Third Order Polynomial Approximations


The third-order Taylor approximation adds a third-order term to the equation for the second-order expansion:

\[ y_{n+1} = y_n + h\,f (x_n , y_n ) + \frac{h^2}{2}\, y'' (x_n ) + \frac{h^3}{3!}\, y''' (x_n ) = y_n + h\, \Phi_3 \left( x_n , y_n \right) , \]
where the increment function Φ3 adds just one term to Φ2. We will show how it works in two examples that were considered previously.
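The third-order step can be sketched in Python as follows; `derivs` is a hypothetical user-supplied function returning (y', y'', y''') at a point, and y' = y serves as the test problem since every total derivative of its solution equals y:

```python
import math

def taylor3(derivs, x0, y0, h, n_steps):
    """Third-order Taylor step using user-supplied y', y'', y''':
    y_{n+1} = y_n + h*y' + h^2/2 * y'' + h^3/6 * y'''."""
    x, y = x0, y0
    for _ in range(n_steps):
        d1, d2, d3 = derivs(x, y)
        # Horner-style evaluation of the Taylor increment
        y += h * (d1 + h * (d2 / 2.0 + h * d3 / 6.0))
        x += h
    return y

# For y' = y all three derivatives equal y, so derivs returns (y, y, y).
approx = taylor3(lambda x, y: (y, y, y), 0.0, 1.0, 0.1, 10)
err = abs(math.e - approx)   # third-order method: error is O(h^3)
```

The error at x = 1 drops to roughly 10−4, two orders of magnitude better than Euler's method at the same step size.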

 

Fourth Order Polynomial Approximations


 

Taylor's method of order 4: to approximate the solution of the initial value problem \( y' = f(x,y), \ y(a ) = y_0 \) over the interval [a,b], evaluate the derivatives y'', y''', and y'''' and use the degree-4 Taylor polynomial at each step.

function T4=taylor(df,a,b,ya,m)
% Input -- df=[y',y'',y''',y''''] entered as a string 'df'
%          where y' = f(x,y) 
%       -- a and b are the left and right end points   
%       -- ya is the initial condition y(a) 
%       -- m is the number of steps 
% Output-- T4=[X',Y'] where X is the vector of abscissas and 
%          Y is the vector of ordinates 
h=(b-a)/m; 
Y=zeros(1,m+1);
X=a:h:b;
Y(1)=ya;
for j=1:m
    D=feval(df,X(j),Y(j));
    Y(j+1)=Y(j)+h*(D(1)+h*(D(2)/2+h*(D(3)/6+h*(D(4)/24))));
end
T4=[X',Y'];
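For readers without MATLAB, here is a rough Python analogue of the routine above (a sketch following the same conventions: `df(x, y)` returns the list [y', y'', y''', y''''] at the point (x, y)):

```python
import math

def taylor4(df, a, b, ya, m):
    """Fourth-order Taylor method; mirrors the MATLAB listing above.
    df(x, y) returns [y', y'', y''', y''''] at (x, y)."""
    h = (b - a) / m
    X = [a + j * h for j in range(m + 1)]   # equally spaced abscissas
    Y = [ya] * (m + 1)                      # ordinates, Y[0] = y(a)
    for j in range(m):
        d1, d2, d3, d4 = df(X[j], Y[j])
        # Horner form of h*y' + h^2/2*y'' + h^3/6*y''' + h^4/24*y''''
        Y[j + 1] = Y[j] + h * (d1 + h * (d2 / 2 + h * (d3 / 6 + h * d4 / 24)))
    return list(zip(X, Y))

# Test problem: y' = y, so all four derivatives equal y; exact y = e^x.
T4 = taylor4(lambda x, y: [y, y, y, y], 0.0, 1.0, 1.0, 10)
err = abs(math.e - T4[-1][1])   # fourth-order method: error is O(h^4)
```

With h = 0.1 the error at x = 1 is on the order of 10−6, illustrating the rapid gain in accuracy as the order m increases.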