Historically, Picard's iteration scheme was the first method for solving nonlinear differential equations analytically, and it was discussed in the first part of the course (see introductory section xv, Picard). In this section, we extend this procedure to systems of first order differential equations written in normal form \( \dot{\bf x} = {\bf f}(t, {\bf x}) . \) Although this method is rarely used for actual evaluations due to its slow convergence and the obstacles to performing explicit integration, it is very important for understanding the material. Working with Picard's iterations and their refinements also helps to develop computational skills. This section gives some motivating examples of transforming differential equations with complicated slope functions into systems of ODEs with polynomial driving terms that are suitable for Picard's iteration.

Picard's iteration for a single differential equation \( {\text d}x/{\text d}t = f(t,x) \) was considered in detail in the first tutorial. Therefore, our main interest here is to apply Picard's iteration to systems of first order ordinary differential equations written in normal form (which means that the derivative is isolated):

\[ \begin{cases} \dot{x}_1 \equiv {\text d}x_1 /{\text d}t &= f_1 (t, x_1 , x_2 , \ldots , x_n ) , \\ \dot{x}_2 \equiv {\text d}x_2 /{\text d}t &= f_2 (t, x_1 , x_2 , \ldots , x_n ) , \\ \quad \vdots & \quad \vdots \\ \dot{x}_n \equiv {\text d}x_n /{\text d}t &= f_n (t, x_1 , x_2 , \ldots , x_n ) . \end{cases} \]
This explicit notation is neither compact nor informative. Hence, we introduce n-dimensional vectors of unknown variables and slope functions in order to rewrite the above system compactly. The system of differential equations then takes the vector form
\[ \frac{{\text d} {\bf x}}{{\text d}t} = {\bf f}(t, {\bf x} ) , \]
where
\[ {\bf x} (t) = \begin{bmatrix} x_1 (t) \\ x_2 (t) \\ \vdots \\ x_n (t) \end{bmatrix} , \qquad {\bf f} (t, x_1 , x_2 , \ldots , x_n ) = \begin{bmatrix} f_1 (t, x_1 , x_2 , \ldots , x_n ) \\ f_2 (t, x_1 , x_2 , \ldots , x_n ) \\ \vdots \\ f_n (t, x_1 , x_2 , \ldots , x_n ) \end{bmatrix} \]
are n-dimensional column vectors. Note that the preceding system of equations contains n dependent variables \( x_1 (t), x_2 (t) , \ldots , x_n (t) , \) while the independent variable is denoted by t, which may be associated with time. In engineering and physics, it is customary to follow Isaac Newton and denote a derivative with respect to the time variable t by a dot: \( \dot{\bf x} = {\text d}{\bf x} / {\text d} t. \) When the input function f satisfies some general conditions (usually Lipschitz continuity in the dependent variable \( {\bf x} \)), Picard's iteration converges to a solution of the corresponding Volterra integral equation in some small neighborhood of the initial point. Thus, Picard's iteration is an essential tool in proving the existence of solutions to initial value problems.
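To make the notation concrete, consider the planar system
\[ \begin{cases} \dot{x}_1 = x_2 , \\ \dot{x}_2 = -x_1 + t , \end{cases} \qquad\text{which takes the vector form}\qquad \dot{\bf x} = {\bf f}(t, {\bf x}) \quad\text{with}\quad {\bf x}(t) = \begin{bmatrix} x_1 (t) \\ x_2 (t) \end{bmatrix} , \quad {\bf f}(t, {\bf x}) = \begin{bmatrix} x_2 \\ -x_1 + t \end{bmatrix} . \]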

Note that Picard's iteration is not suitable for numerical calculations. The reason is not only slow convergence: mostly it is that, in general, the explicit integration needed to obtain the next iterate cannot be performed. In the case of a polynomial input function, the integration can be performed explicitly (especially with the aid of a computer algebra system), but the number of terms snowballs from one iterate to the next. While we know that the resulting series eventually converges to the true solution, its range of convergence is too small to justify keeping many iterations. In another section, we show how to bypass this obstacle.

Charles Émile Picard

If the initial position of the vector \( {\bf x} (t) \) is known, we obtain an initial value problem:

\begin{equation} \label{EqPicard.1} \frac{{\text d} {\bf x}}{{\text d}t} = {\bf f}(t, {\bf x} ) , \qquad {\bf x} (t_0 ) = {\bf x}_0 , \end{equation}
where \( {\bf x}_0 \) is an initial column vector. Many brilliant mathematicians participated, more than 100 years ago, in proving the existence of a solution to the given initial value problem. Their proofs were based on what is now called Picard's iteration, named after the French mathematician Charles Émile Picard (1856--1941), whose theories did much to advance research in analysis, algebraic geometry, and mechanics.

The Picard procedure is actually a practical extension of the Banach fixed point theorem, which is applicable to continuous contractive maps. Since any differential equation involves the unbounded derivative operator, the fixed point theorem is not suitable for it directly. To bypass this obstacle, Picard suggested applying the (bounded) inverse operator \( L^{-1} \) to the first derivative \( \texttt{D} . \) Recall that the inverse \( \texttt{D}^{-1} , \) known in the mathematical literature as the antiderivative, is not an operator because it assigns infinitely many outputs to every input function. To restrict its output to a single one, we consider the derivative operator on the set of functions satisfying a specified initial condition \( {\bf x}(t_0 ) = {\bf x}_0 \) (this set becomes a vector space only when the differential equation and the initial condition are both homogeneous). We denote the derivative operator on this set of functions by L, and its inverse is a bounded integral operator. The first step in deriving Picard's iterations is to rewrite the given initial value problem in the equivalent form (this is true when the slope function f is Lipschitz continuous) of a Volterra integral equation of the second kind:

\begin{equation} \label{EqPicard.2} {\bf x} ( t ) ={\bf x}_0 + \int_{t_0}^t {\bf f} (s, {\bf x} (s) ) \,{\text d}s . \end{equation}
This integral equation is obtained upon integration of both sides of the differential equation \( {\text d}{\bf x} / {\text d} t = {\bf f}(t, {\bf x}), \) subject to the initial condition.
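For example, the scalar problem \( \dot{x} = x^2 , \ x(0) = 1 \) is recast as
\[ x(t) = 1 + \int_0^t x^2 (s)\,{\text d}s , \]
so solving the differential equation is traded for solving an integral equation in which the initial condition is already built in.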

The equivalence follows from the Fundamental Theorem of Calculus. It suffices to find a continuous function \( {\bf x}(t) \) that satisfies the integral equation within the interval \( t_0 -h < t < t_0 +h \) for some small value \( h , \) since the right-hand side (the integral) will then be continuously differentiable in \( t . \)

Assume that the vector function \( {\bf f} (t, {\bf x}) \) satisfies the Lipschitz condition in x:

\[ \| {\bf f}(t, {\bf x}) - {\bf f}(t, {\bf y}) \| \le L\,\| {\bf x} - {\bf y} \| , \]
where the constant L is called the Lipschitz constant. Then the initial value problem \eqref{EqPicard.1} has a unique solution in some neighborhood of the initial position.
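For instance, the quadratic slope function \( f(t, x) = x^2 \) is Lipschitz continuous on every bounded set but not on the whole line, since
\[ \left\vert x^2 - y^2 \right\vert = \left\vert x + y \right\vert \cdot \left\vert x - y \right\vert \le 2M\, \left\vert x - y \right\vert \qquad \text{for} \quad |x|, |y| \le M , \]
and the factor \( |x+y| \) is unbounded otherwise. This is why the theorem guarantees a solution only in some neighborhood of the initial point.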

Now we apply a technique known as Picard iteration to construct the required solution:

\begin{equation} \label{EqPicard.3} {\bf x}_{m+1} ( t ) ={\bf x}_0 + \int_{t_0}^t {\bf f} (s, {\bf x}_m (s) ) \,{\text d}s , \qquad m=0,1,2,\ldots . \end{equation}
The initial approximation is chosen to be the initial value (a constant function): \( {\bf x}_0 (t) \equiv {\bf x}_0 . \) (The sign ≡ indicates that the two sides are identically equal, so this function is constant.) When the input function f(t, x) satisfies some (sufficient) conditions, it can be shown that this iterative sequence converges to the true solution uniformly on the interval \( [t_0 -h, t_0 +h ] \) when h is small enough. Necessary and sufficient conditions for the existence of a solution to a vector initial value problem are still unknown.
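The scheme \eqref{EqPicard.3} is easy to try on a computer algebra system when the slope function is polynomial. The following is a minimal sketch, assuming the Python library SymPy; the helper picard_system and the test system are our own illustration, not part of the tutorial.

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard_system(f, x0, iterations):
    """Iterate x_{m+1}(t) = x0 + int_0^t f(s, x_m(s)) ds componentwise."""
    x = [sp.sympify(c) for c in x0]                  # x_0(t) = x_0, a constant vector
    for _ in range(iterations):
        rhs = f(s, [comp.subs(t, s) for comp in x])  # slope evaluated along x_m(s)
        x = [sp.expand(sp.sympify(c) + sp.integrate(g, (s, 0, t)))
             for c, g in zip(x0, rhs)]
    return x

# Test system: dx1/dt = x2, dx2/dt = -x1, x1(0) = 0, x2(0) = 1,
# whose exact solution is (sin t, cos t).
f = lambda tt, x: [x[1], -x[0]]
print(picard_system(f, [0, 1], 4))
# [t - t**3/6, 1 - t**2/2 + t**4/24] -- Maclaurin partial sums of sin t and cos t
```

Each pass performs one explicit integration per component, and the output polynomials reproduce the Maclaurin expansion of the true solution term by term.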

Of course, we can differentiate the recursive integral relation to obtain a sequence of initial value problems

\begin{equation} \label{EqPicard.4} \frac{{\text d}{\bf x}_{m+1} ( t )}{{\text d}t} = {\bf f} (t, {\bf x}_m (t)) , \qquad {\bf x}_{m+1} (t_0 ) = {\bf x}_0 , \qquad m=0,1,2,\ldots , \end{equation}
which is equivalent to the original recurrence \eqref{EqPicard.3}. However, the recurrence relation \eqref{EqPicard.4} involves the derivative operator \( \texttt{D} = {\text d}/{\text d}t . \) Since \( \texttt{D} \) is an unbounded operator, it is not possible to prove convergence of the iteration procedure \eqref{EqPicard.4} directly. On the other hand, the convergence of \eqref{EqPicard.3}, which involves a bounded integral operator, can be established directly (see Tutorial I).


Picard's iteration for higher order differential equations


Now we turn our attention to applying Picard's iteration procedure to solving initial value problems for single (linear or nonlinear) differential equations of higher order:
\begin{equation} \label{EqPicard.5} x^{(n)} (t) = f\left( t, x, x' , \ldots , x^{(n-1)} (t) \right) , \qquad x(0) = x_0 , \ x' (0) = x_1 , \ \ldots , \ x^{(n-1)} (0) = x_{n-1} , \end{equation}
where \( x^{(n)} (t) = \frac{{\text d}^n x}{{\text d}t^n} = \texttt{D}^n x(t) \) stands for the n-th derivative. There are usually two options to achieve this. First, we can transform this single differential equation into an equivalent system of first order ODEs by introducing new dependent variables:
\begin{equation} \label{EqPicard.6} y_1 (t) = x(t) , \quad y_2 (t) = x' (t) = y'_1 (t) , \quad \ldots , \quad y_{n} (t) = y'_{n-1} (t) = x^{(n-1)} (t) . \end{equation}
Rewriting this system of equations in the standard vector form \( \dot{\bf y} = {\bf f}(t, {\bf y}) , \) we can apply Picard's iteration to the column vector y, as in the sketch below.
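Continuing the SymPy sketch from above (the helper picard_system and the toy problem are again our own assumptions), the reduction \eqref{EqPicard.6} turns, say, \( x'' = -x + t , \ x(0) = 1, \ x' (0) = 0 \) into a planar system that the vector iteration handles directly.

```python
# Reduction via y1 = x, y2 = x':  y1' = y2,  y2' = -y1 + t.
f = lambda tt, y: [y[1], -y[0] + tt]
y_approx = picard_system(f, [1, 0], 5)   # reuses the helper defined above
print(y_approx[0])   # iterate approximating x(t); exact solution is t + cos t - sin t
```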

Another approach is based on an extension of Picard's idea to higher order equations. It rests on the observation that the initial value problem for the n-th order derivative

\begin{equation} \label{EqPicard.7} \texttt{D}^n x(t) = f(t, x, x' , \ldots , x^{(n-1)}) , \quad x(0) = x_0 , \ x' (0) = x_1 , \ \ldots , \ x^{(n-1)} (0) = x_{n-1} , \end{equation}
where \( \texttt{D} = \frac{{\text d}}{{\text d}t} \) is the derivative operator, is equivalent to the integral equation (whose continuous solution is unique)
\begin{equation} \label{EqPicard.8} x(t) = \frac{1}{(n-1)!} \int_0^t \left( t-s \right)^{n-1} f(s, x(s) , \ldots )\,{\text d}s + \sum_{k=0}^{n-1} \frac{x_k}{k!}\, t^k . \end{equation}
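Indeed, one differentiation of the integral term with the Leibniz rule kills the boundary contribution (the kernel vanishes at s = t) and lowers the power of the kernel by one,
\[ \texttt{D} \left[ \frac{1}{(n-1)!} \int_0^t \left( t-s \right)^{n-1} f(s, x(s) , \ldots )\,{\text d}s \right] = \frac{1}{(n-2)!} \int_0^t \left( t-s \right)^{n-2} f(s, x(s) , \ldots )\,{\text d}s , \]
so n differentiations reproduce \( f\left( t, x, x' , \ldots , x^{(n-1)} \right) , \) while the polynomial tail in \eqref{EqPicard.8} supplies the prescribed initial values.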
Upon denoting by \( \phi_m (t) \) the m-th Picard iterate, we write the Picard iteration scheme:
\begin{equation} \label{EqPicard.9} \phi_{m+1} (t) = \frac{1}{(n-1)!} \int_0^t \left( t-s \right)^{n-1} f\left( s, \phi_m (s) , \ldots \right)\,{\text d}s + \sum_{k=0}^{n-1} \frac{a_{k}}{k!}\, t^k , \qquad m=0, 1, 2, \ldots , \end{equation}
where the coefficients \( a_k = x_k \) are the initial values of x and its first \( n-1 \) derivatives at t = 0, so that every iterate satisfies the initial conditions in \eqref{EqPicard.5}. Since the convergence of the successive approximations to the true solution is often slow, it can be beneficial to choose the initial approximation wisely. Both approaches are demonstrated in a series of examples.
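A minimal sketch of the scheme \eqref{EqPicard.9}, again assuming SymPy; for simplicity the slope function is taken to depend only on t and x (no lower order derivatives), and the test equation is our own choice.

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard_higher_order(f, inits, iterations):
    """Iterate phi_{m+1}(t) = (1/(n-1)!) int_0^t (t-s)^(n-1) f(s, phi_m(s)) ds
    plus the Taylor polynomial of the initial data, with n = len(inits)."""
    n = len(inits)
    tail = sum(sp.sympify(a) * t**k / sp.factorial(k)
               for k, a in enumerate(inits))   # sum_k a_k t^k / k!
    phi = tail                                 # phi_0 carries the initial data
    for _ in range(iterations):
        integrand = (t - s)**(n - 1) * f(s, phi.subs(t, s))
        phi = sp.expand(sp.integrate(integrand, (s, 0, t)) / sp.factorial(n - 1)
                        + tail)
    return phi

# Test equation: x'' = -x, x(0) = 0, x'(0) = 1, with solution sin t.
print(picard_higher_order(lambda tt, x: -x, [0, 1], 3))
# t - t**3/6 + t**5/120 - t**7/5040 -- a Maclaurin partial sum of sin t
```

The kernel \( (t-s)^{n-1} /(n-1)! \) collapses an n-fold repeated integral into a single one (Cauchy's formula for repeated integration), so each pass of this scheme requires only one explicit integration.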
