Preface


The method of Picard iterations was the first technique used to prove the existence of solutions to initial value problems for Ordinary Differential Equations (ODEs). It is not practical because every iteration repeats the same calculations, slowing down the overall process. Each iteration improves the accuracy of the approximation, unless a stationary point has been reached (when an iterate coincides with the previous one). Practical applications are usually restricted to polynomial slope functions; otherwise, the required integrations become impossible to carry out explicitly. Thus, Picard's iterations are used mostly for theoretical purposes, such as proving the existence of solutions to an initial value problem.

In this section, we discuss the so-called Picard's iteration method, which was originally used to prove the existence of a solution to an initial value problem (see the corresponding section in this tutorial). Although this method is of limited practical interest and is rarely used for the actual determination of a solution to an initial value problem, some known improvements to the procedure do make it practical. Picard's iteration is important for understanding the material because it leads to an equivalent integral formulation that is useful in establishing many numerical methods. Picard's iteration also helps develop algorithmic thinking when the user implements it on a computer.


Picard Iterations


Figure: Charles Émile Picard.

Consider the initial value problem

\[ \frac{{\text d}y}{{\text d}x} = f(x,y), \qquad y(x_0 ) = y_0 . \]
The derivative operator \( \texttt{D} = {\text d}/{\text d}x \) is an unbounded operator because it can map a bounded function into a function of arbitrarily large magnitude. For example, the bounded and smooth function sin(ωt) is mapped by the derivative operator into ωcos(ωt), whose amplitude exceeds any given number for an appropriate choice of ω. This example shows that a rapidly oscillating function may have a very large derivative, so its finite difference approximation will be problematic. The same effect is observed when the solution of a differential equation becomes unbounded at some point. Therefore, a practical approximation of such functions may not be accurate unless we know that the solution is a smoothly varying function. As an alternative, we try to reformulate the initial value problem as an integral equation.
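This unboundedness is easy to see symbolically; a one-line Mathematica check (an illustration only, with a frequency parameter w) shows that differentiation multiplies the amplitude by the frequency:

D[Sin[w t], t]   (* w Cos[w t]: the amplitude grows without bound as w increases *)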

The inverse of the derivative operator \( \texttt{D} \) is not an operator in the usual mathematical sense because it assigns to every function infinitely many outputs, which are collectively called an «antiderivative». In order to single out one output, we consider the differential operator on a special set of functions that have a prescribed value at one point x = x0, so y(x0) = y0. We change the notation \( \texttt{D} \) on such a set of functions (which is not a linear space) to L. If we rewrite our initial value problem in the operator form \( L[y] = f , \) and then apply the inverse L-1 to both sides, we reduce the given problem to the fixed point problem

\[ y = L^{-1} \left[ f(x,y) \right] \]
for a bounded operator L-1. Banach's or Brouwer's fixed point theorem assures us that for some bounded operators (such as L-1f with a Lipschitz slope function f) the iteration \( y_{n+1} = L^{-1} \left[ f(x,y_n ) \right] , \ y_0 = y(x_0 ) , \) converges to a true solution. So Picard's iteration is actually an extension and implementation of this contraction theorem. Banach's fixed point theorem formulates easy-to-check assumptions under which the existence and uniqueness of a fixed point are guaranteed. Moreover, it gives the order of convergence of the approximating sequence and provides error estimates that indicate when to stop the iteration after finitely many steps.
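As a minimal illustration of the contraction principle (an aside, using only built-in functions; the map cos is contractive near its fixed point), iterating x ↦ cos x converges to the unique root of x = cos x:

FixedPoint[Cos, 1.0]   (* 0.739085..., the unique solution of x == Cos[x] *)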

If we integrate both sides of the differential equation \( y' = f(x,y) \) with respect to x from x0 to x (that is, apply the inverse operator L-1 of the derivative operator restricted to the set of functions with the specified initial condition), we get the following integral equation

Figure: the function sin(100t).
\[ y(x) = y(x_0 ) + \int_{x_0}^x f\left( t, y(t) \right) {\text d} t . \]
The right-hand side of the (Volterra) integral equation above is actually the inverse operator generated by the initial value problem. Therefore, the two problems, the initial value problem and its integral reformulation, are equivalent in the sense that every solution of one of them is also a solution of its counterpart, and vice versa. Although this integral equation is very hard (if not impossible) to solve, it has some advantages over the initial value problem. The main feature of the integral equation is that it contains integration---a bounded operator. This should be almost obvious to everyone who took an introductory calculus course: the integral of a positive function is the area under the curve, which cannot be infinite for any smooth and bounded function on a finite interval.

Now we deal with the fixed point problem that is more computationally friendly because we can approximate an integral with any accuracy we want. The Picard iterative process consists of constructing a sequence of functions { ϕn } that will get closer and closer to the desired solution. This is how the process works:

\[ \mbox{Set} \quad \phi_0 \equiv y_0 \qquad \mbox{for all $x$}. \]
Then define ϕn+1 by the recursive formula
\[ \phi_{n+1} (x) = y_0 + \int_{x_0}^x f\left( t, \phi_n (t) \right) {\text d} t, \qquad n=0,1,2,\ldots . \]
This recurrence is equivalent to the sequence of initial value problems:
\[ \frac{{\text d}\phi_{n+1} (x)}{{\text d}x} = f\left( x, \phi_n (x) \right) , \qquad \phi_{n+1} (x_0 ) = y_0 , \qquad n=0,1,2,\ldots . \]
Then the limit function \( \phi (x) = \lim_{n\to \infty} \phi_n (x) \) gives the solution of the original initial value problem. However, a practical implementation of Picard's iteration is possible only when the slope function is a polynomial; otherwise, explicit integration is most likely impossible to achieve, even for a very sophisticated computer algebra system. We will show in the second part how to bypass this obstacle.
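Here is a minimal Mathematica sketch of this recursion; it assumes a polynomial slope function f (given as a pure function of two arguments) so that every integral evaluates in closed form, and it uses the global symbol x as the independent variable:

picard[f_, x0_, y0_, n_] :=
  Nest[y0 + Integrate[f[t, # /. x -> t], {t, x0, x}] &, y0, n]

picard[#1 + #2 &, 0, 0, 4]   (* iterates for y' = x + y, y(0) = 0 *)

The fourth iterate is x²/2 + x³/6 + x⁴/24 + x⁵/120, the beginning of the Maclaurin series of the true solution eˣ − x − 1.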
Theorem: Suppose that the slope function f satisfies the hypotheses of the existence and uniqueness theorem (Picard--Lindelöf) in some rectangular domain \( \displaystyle R = [x_0 -a, x_0 +a]\times [y_0 -b , y_0 +b ] : \)
  • The slope function is bounded: \( M = \max_{(x,y) \in R} \, |f(x,y)| . \)
  • The slope function is Lipschitz continuous in y: \( |f(x,y) - f(x,z) | \le L\,|y-z| \) for all (x,y) and (x,z) in R.
Then the successive approximation
\[ \phi_n (x) = y_0 + \int_{x_0}^x f \left( t, \phi_{n-1} (t) \right) {\text d}t , \qquad \phi_0 = y_0 , \]
converges to the unique solution of the initial value problem \( y' = f(x,y) , \ y(x_0 ) = y_0 , \) in the interval \( x_0 - h \le x \le x_0 +h , \) where \( \displaystyle h = \min \left\{ a, \frac{b}{M} \right\} . \)    ⧫

If we choose the initial approximation \( \phi_0 (x) \equiv y_0 , \) then at least the initial condition \( y( x_0 ) = y_0 \) is satisfied. The next approximation is obtained by substituting the initial approximation into the right-hand side of the integral equation:
\[ \phi_1 (x) = y_0 + \int_{x_0}^x f \left( t, \phi_{0} (t) \right) {\text d}t = y_0 + \int_{x_0}^x f \left( t, y_{0} \right) {\text d}t . \]
Similarly, ϕ2(x) is obtained from ϕ1(x):
\[ \phi_2 (x) = y_0 + \int_{x_0}^x f \left( t, \phi_{1} (t) \right) {\text d}t . \]
In general, we have
\[ \phi_{n+1} (x) = y_0 + \int_{x_0}^x f \left( t, \phi_{n} (t) \right) {\text d}t . \]
In this manner, we generate the sequence of functions \( \displaystyle \left\{ \phi_n (x) \right\} = \left\{ \phi_0 , \phi_1 (x), \phi_2 (x) , \ldots \right\} . \) Each member of this sequence satisfies the initial condition, but in general none satisfies the differential equation. If for some index m, we find that \( \displaystyle \phi_{m+1} (x) = \phi_m (x) , \) then ϕm(x) is a fixed point of the integral equation and it is a solution of the differential equation as well. Hence, the sequence is terminated at this point.
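For instance (a toy illustration with the picard sketch defined above), when the slope function does not depend on y, the very first iterate is already a fixed point:

picard[2 #1 &, 0, 0, 1]   (* x^2 for y' = 2x, y(0) = 0 *)
picard[2 #1 &, 0, 0, 2]   (* x^2 again, so the iteration terminates *)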

First, we verify that all members of the sequence are well defined, that is, every iterate stays within the rectangle R where f is defined. This follows from the inequality:

\[ \left\vert \phi_{n+1} (x) - y_0 \right\vert \le \int_{x_0}^x \left\vert f\left( t, \phi_n (t) \right) \right\vert {\text d}t \le M\,|x-x_0 | \le b \]
for \( \displaystyle \left\vert x - x_0 \right\vert \le b/M, \) where \( \displaystyle M = \max_{(x,y)\in R} \left\vert f(x,y) \right\vert . \)

To show that the sequence converges, we represent the limit function as an infinite series

\[ \phi (x) = \lim_{n\to \infty} \phi_n (x) = \phi_0 + \sum_{n\ge 1} \left[ \phi_n (x) - \phi_{n-1} (x) \right] . \]
The convergence of the sequence \( \displaystyle \left\{ \phi_n (x) \right\} \) is established by showing that the preceding infinite series converges. To do this, we estimate the magnitude of the general term \( \displaystyle \left\vert \phi_n (x) - \phi_{n-1} (x) \right\vert . \)

We proceed step by step, and consider the first term

\[ \left\vert \phi_1 (x) - \phi_0 \right\vert = \left\vert \int_{x_0}^x f(t, y_0 ) \,{\text d}t \right\vert \le \int_{x_0}^x \left\vert f(t, y_0 ) \right\vert {\text d}t \le M \left\vert x - x_0 \right\vert . \]
For the second term, we have
\[ \left\vert \phi_2 (x) - \phi_1 (x) \right\vert \le \int_{x_0}^x \left\vert f(t, \phi_1 (t) ) - f(t, \phi_0 ) \right\vert {\text d}t \le L \int_{x_0}^x M \left( t - x_0 \right) {\text d}t = \frac{1}{2}\,ML\left( x- x_0 \right)^2 , \]
where L is the Lipschitz constant for the function f(x,y), that is, \( \displaystyle \left\vert f(x, y_1 ) - f(x, y_2 ) \right\vert \le L \left\vert y_1 - y_2 \right\vert . \) For the n-th term, we have
\begin{align*} \left\vert \phi_n (x) - \phi_{n-1} (x) \right\vert &\le \int_{x_0}^x \left\vert f \left( t , \phi_{n-1} (t) \right) - f \left( t, \phi_{n-2} (t) \right) \right\vert {\text d}t \\ &\le L \int_{x_0}^x \left\vert \phi_{n-1} (t) - \phi_{n-2} (t) \right\vert {\text d}t \le L \int_{x_0}^x \frac{M\,L^{n-2} \left( t- x_0 \right)^{n-1}}{(n-1)!} \, {\text d}t \\ &= \frac{M\,L^{n-1} \left\vert x - x_0 \right\vert^n}{n!} \le \frac{M\,L^{n-1} h^n}{n!} , \qquad h= \max \left\vert x - x_0 \right\vert . \end{align*}
Substituting these estimates into the finite sum
\[ \phi_n (x) = \phi_0 + \sum_{k=1}^n \left[ \phi_k (x) - \phi_{k-1} (x) \right] , \]
and applying the triangle inequality, we obtain
\begin{align*} \left\vert \phi_n (x) - \phi_{0} \right\vert &\le M \left\vert x - x_0 \right\vert + \frac{M\,L \left\vert x-x_0 \right\vert^2}{2} + \cdots + \frac{M\,L^{n-1} \left\vert x-x_0 \right\vert^n}{n!} . \end{align*}
When n approaches infinity, the sum
\[ M \left\vert x - x_0 \right\vert + \frac{M\,L \left\vert x-x_0 \right\vert^2}{2} + \cdots + \frac{M\,L^{n-1} \left\vert x-x_0 \right\vert^n}{n!} \]
approaches \( \displaystyle \frac{M}{L} \left[ e^{L \left\vert x - x_0 \right\vert} -1 \right] . \) Therefore, we have
\[ \left\vert \phi_n (x) - \phi_{0} \right\vert \le \frac{M}{L} \left[ e^{L \left\vert x - x_0 \right\vert} -1 \right] \]
for any n. Hence, by the Weierstrass M-test, the series \( \displaystyle \phi (x) = \lim_{n\to \infty} \phi_n (x) = \phi_0 + \sum_{n\ge 1} \left[ \phi_n (x) - \phi_{n-1} (x) \right] \) converges absolutely and uniformly on the interval \( \displaystyle \left\vert x - x_0 \right\vert \le h . \) Then it follows that the limit function \( \displaystyle \phi (x) = \lim_{n\to \infty} \phi_n (x) \) of the sequence \( \displaystyle \left\{ \phi_n (x) \right\} \) is a continuous function on the interval \( \displaystyle \left\vert x - x_0 \right\vert \le h . \)
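The closed form of the majorant sum can be confirmed with Mathematica (a quick check with symbolic M, L, and h):

Sum[M L^(n - 1) h^n/n!, {n, 1, Infinity}]   (* (E^(h L) - 1) M/L *)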

Next we prove that the limit function is a solution of the given initial value problem. Allowing n to approach infinity on both sides of

\[ \phi_{n+1} (x) = \phi_0 + \int_{x_0}^x f \left( t, \phi_n (t) \right) {\text d}t , \]
we get
\[ \phi (x) = \lim_{n\to\infty} \,\phi_{n+1} (x) = \phi_0 + \lim_{n \to \infty} \int_{x_0}^x f \left( t, \phi_n (t) \right) {\text d}t . \]
Since the slope function satisfies the Lipschitz condition
\[ \left\vert f \left( t, \phi_n (t) \right) - f \left( t, \phi_{n-1} (t) \right) \right\vert \le L \left\vert \phi_n (t) - \phi_{n-1} (t) \right\vert \]
on the interval \( \displaystyle \left\vert t - x_0 \right\vert \le h , \) it follows that the sequence \( \displaystyle f \left( t, \phi_n (t) \right) \) also converges uniformly to \( \displaystyle f \left( t, \phi (t) \right) \) on this interval. So we can interchange integration with the limit operation on the right-hand side to obtain
\[ \phi (x) = y_0 + \int_{x_0}^x \lim_{n\to +\infty} \, f \left( t, \phi_n (t) \right) {\text d}t = y_0 + \int_{x_0}^x f \left( t, \phi (t) \right) {\text d}t . \]
Thus, the limit function is a solution of the Volterra integral equation that is equivalent to the given initial value problem. Note that the limit function automatically satisfies the initial condition y(x0) = y0. Also, differentiating both sides of the latter with respect to x and noting that the right-hand side is a differentiable function of the upper limit, we find
\[ \phi' (x) = f(x, \phi (x)) . \]
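This last differentiation step can be checked symbolically for a generic slope function f and limit function ϕ (a one-line check of the fundamental theorem of calculus):

D[y0 + Integrate[f[t, phi[t]], {t, x0, x}], x]   (* f[x, phi[x]] *)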

 

More details on the existence and uniqueness of solutions to the initial value problem can be found in the following textbooks:
  1. Birkhoff, G. and Rota, G.-C., Ordinary Differential Equations, 4th ed., John Wiley and Sons, New York, NY, 1989, Chapter 1, p. 23. https://doi.org/10.1137/1005043
  2. Boyce, W.E. and DiPrima, R.C., Elementary Differential Equations and Boundary Value Problems, 11th ed., Wiley, New York; Section 2.8.
  3. Dobrushkin, V.A., Applied Differential Equations. The Primary Course, CRC Press, Boca Raton, FL, 2015, Section 1.6.
  4. Edwards, C.H. and Penney, D.E., Differential Equations and Boundary Value Problems: Computing and Modeling, 5th ed., Prentice Hall, Upper Saddle River, NJ, 2014, Appendix A.1.
  5. Goode, S.W. and Annin, S.A., Differential Equations and Linear Algebra, 4th ed., Prentice Hall, Upper Saddle River, NJ, 2016, Appendix 4.
  6. Hille, E., Lectures on Ordinary Differential Equations, Addison-Wesley, Reading, MA, 1969, pp. 32--41.
  7. Nagle, R.K., Saff, E.B., and Snider, A.D., Fundamentals of Differential Equations and Boundary Value Problems, 8th ed., Addison-Wesley, Boston, MA, 2019, Chapter 13, Sections 1 and 2.
  8. Wouk, A., On the Cauchy--Picard method, The American Mathematical Monthly, 1963, Vol. 70, No. 2, pp. 158--162.
   ▣

Although this approach is most often connected with the names of Charles Émile Picard, Giuseppe Peano, Ernst Lindelöf, Rudolf Lipschitz, and Augustin Cauchy, it was first published by the French mathematician Joseph Liouville in 1838 for solving second order homogeneous linear equations. About fifty years later, Picard developed a much more general form, which placed the concept on a rigorous mathematical foundation, and used this approach to solve boundary value problems described by second order ordinary differential equations.

Note that Picard's iteration procedure, when it can be carried out, provides an explicit solution to the initial value problem. The method is impractical mostly for two reasons: finding the next iterate may be impossible, and, even when it works, Picard's iteration contains many repeated calculations. Therefore, it can only occasionally be used for approximations. Moreover, the interval of existence (x0 − h, x0 + h) guaranteed for the initial value problem is in most cases much smaller than the actual interval of validity. Some improvements in this direction are known (for instance, the S.M. Lozinsky theorem), but we are far from a complete determination of the validity interval based on Picard's iteration procedure. Also, there are a number of counterexamples in which the Picard iteration is guaranteed not to converge, even if the starting function is arbitrarily close to the real solution.

Here are some versions of Picard's iteration procedure for Mathematica:

For[{n = 1, y[0][x_] = 1}, n < 4, n++,
 y[n][x_] = 1 + Integrate[y[n - 1][t]^2 + t^2, {t, 0, x}];  (* the slope x^2 + y^2 is taken as an example *)
 Print[{n, y[n][t]}]]
or
Clear[picardSeries, iterate];
iterate[initial_, flow_, psi_, n_, t_] :=
  initial + Expand@Apply[Subtract,
     Assuming[{Subscript[t, n + 1] > 0},
       Integrate[flow[Subscript[t, n + 1], psi], Subscript[t, n + 1]]] /.
      {{Subscript[t, n + 1] -> Subscript[t, n]}, {Subscript[t, n + 1] -> 0}}]

picardSeries[initialVector_, flow_, n_, var_] :=
  Module[{time},
   Fold[iterate[initialVector, flow, #1, #2, time] &,
     initialVector, Reverse[Range[n] - 1]] /. {Subscript[time, 0] -> var}]
The iteration step is called iterate, and it keeps track of the iteration order n so that it can assign a separate integration variable name at each step. As additional tricks to speed things up, I avoid the automatic simplifications for definite integrals by doing the integral as an indefinite one first, then using Subtract to apply the integration limits.

Instead of a generic simplification, I add Expand to the result in order to help the subsequent integration step recognize how to split up the integral over the current result. The last argument in iterate is the name that I intend to use as the base variable name for the integration, but it will get a subscript n attached to it.

Then iterate is called by picardSeries, using increasing subscripts starting with the outermost integration. This is implemented using Fold with a Reverse range of indices. The first input is the initial condition, followed by the function defining the flow (specifically, defined below as flow[t_, f_]). After the order n, I also specify the name of the independent variable var as the last argument.

Here is the example applied to the IVP: \( y' = y^2 + x^2 , \ y(0) =1 . \)

flow[t_, f_] := t^2 + f^2
picardSeries[1, flow, 5, x]
Other options:
Clear[h]
h = Function[{t}, 1 + Integrate[#^2 + x^2, {x, 0, t}]][x] & ;
NestList[h, 1, 3]
or
Clear[y, h]
h = Function[{t}, 1 + Integrate[#^2 + x^2, {x, 0, t}]][x] & ;
y[0][x_] = 1
y[n_][x_] := y[n][x] = h[y[n - 1][x]]
or
h = (y[#2][x_] = Function[{t}, 1 + Integrate[#1^2 + x^2, {x, 0, t}]][x]) &;
FoldList[h, 1, Range[2]]

Example: Consider the initial value problem

\[ y' = \sqrt[3]{y} , \qquad y(0) =0. \]
The corresponding Picard's iterations are
\[ \phi_{m+1} (x) = \int_0^x \sqrt[3]{\phi_m (s)} \,{\text d} s , \qquad \phi_0 = 0, \quad m=0,1,2,\ldots . \]
Since the initial iterate is identically zero, all subsequent iterates are identically zero as well. So Picard's iteration provides only the trivial solution---identically zero. On the other hand, the function
\[ y = \phi (x) = \left( \frac{2}{3}\, x \right)^{3/2} , \qquad x \ge 0 , \]
is another solution, which is a singular solution.
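A quick symbolic check (for x > 0) confirms that this function satisfies the equation:

Clear[y];
y[x_] = (2 x/3)^(3/2);
Simplify[y'[x] == y[x]^(1/3), x > 0]   (* True *)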

We conclude that continuity of the slope function is insufficient to guarantee uniqueness of a solution---it is sufficient only for existence.

Let us consider another slope function

\[ f(x,y) = \begin{cases} 0, & \ x=0, \quad -\infty < y < \infty , \\ 2x, & \ 0 < x \le 1, \quad -\infty < y < 0, \\ 2x - \frac{4y}{x} , & \ 0 < x \le 1, \quad 0 \le y \le x^2 , \\ -2x, & \ 0 < x \le 1, \quad x^2 < y < +\infty , \end{cases} \]
defined for 0 ≤ x ≤ 1, −∞ < y < ∞.

It is clear that the slope function is continuous and bounded by the constant 2. Using the initial approximation ϕ0 = 0, we obtain the following alternating iterates on the interval [0, 1]:

\[ \phi_{2k-1} (x) = x^2 , \qquad \phi_{2k} (x) = - x^2 , \qquad k=1,2,\ldots . \]
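These two iterates are easy to verify with Mathematica; here is a sketch for 0 < x ≤ 1, where ϕ0 = 0 and ϕ1 = x² both fall into the strip 0 ≤ y ≤ x² with slope 2x − 4y/x:

phi1 = Integrate[2 t - 4*0/t, {t, 0, x}]      (* x^2 *)
phi2 = Integrate[2 t - 4 t^2/t, {t, 0, x}]    (* -x^2 *)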
Therefore, the sequence of Picard's iterations has two cluster values for each x ≠ 0, and it does not converge; moreover, neither of the functions x² and −x² is a solution of the given differential equation.    ■

Example: Consider the initial value problem for the linear equation

\[ y' = x \left( 3 - 2y \right) , \qquad y(0) =1. \]
Since the given equation is separable, we have
\[ \frac{{\text d}y}{3-2y} = x\,{\text d}x \qquad \Longrightarrow \qquad -\frac{1}{2}\,\ln | 3 - 2y | = \frac{x^2}{2} + c . \]
Then, using the initial condition y(0) = 1 to determine the constant, the explicit solution follows immediately:
\[ y(x) = \frac{3}{2} - \frac{1}{2}\, e^{-x^2} , \]
which guarantees the validity interval to be the whole real axis.

Expanding the explicit solution into a Maclaurin series, we obtain

\[ y(x) = 1 + \frac{x^2}{2} - \frac{x^4}{4} + \frac{x^6}{12} - \frac{x^8}{2\cdot 4!} + \frac{x^{10}}{2\cdot 5!} + \cdots . \]
Indeed, Mathematica confirms
Series[3/2 - 1/2 *Exp[-x^2] , {x, 0, 10}]
1 + x^2/2 - x^4/4 + x^6/12 - x^8/48 + x^10/240 + O[x]^11
Now we perform Picard's iterations:
\begin{align*} \phi_0 (x) &= 1, \\ \phi_1 (x) &= 1 + \int_0^x t\left( 3-2\,\phi_0 (t) \right) {\text d}t = 1 + \int_0^x t\,{\text d}t = 1 + \frac{x^2}{2} , \\ \phi_2 (x) &= 1 + \int_0^x t\left( 3-2\,\phi_1 (t) \right) {\text d}t = 1 + \frac{x^2}{2} - \frac{x^4}{4} , \\ \phi_3 (x) &= 1 + \int_0^x t\left( 3-2\,\phi_2 (t) \right) {\text d}t = 1 + \frac{x^2}{2} - \frac{x^4}{4} + \frac{x^6}{12} , \end{align*}
and so on. We see that the leading terms stabilize with each iteration, and the iterates approach the expansion of the explicit solution. We check with Mathematica:
h = Function[{t}, 1 + Integrate[x*(3 - 2*#), {x, 0, t}]][x] &; NestList[h, 1, 3]
which confirms our conclusion!    ■

Example: Consider the initial value problem for the Riccati equation

\[ y' = x^2 + y^2 , \qquad y(0) =1. \]
First, we find its series solution.
Clear[seriesDSolve];
seriesDSolve[ode_, y_, iter : {x_, x0_, n_}, ics_: {}] :=
Module[{dorder, coeff},
dorder = Max[0, Cases[ode, Derivative[m_][y][x] :> m, Infinity]];
(coeff = Fold[
Flatten[{#1,
Solve[#2 == 0 /. #1,
Union@Cases[#2 /. #1, Derivative[m_][y][x0] /; m >= dorder,
Infinity]]}] &,
ics,
Table[
SeriesCoefficient[ode /. Equal -> Subtract, {x, x0, k}],
{k, 0, Max[n - dorder, 0]}]
];
Series[y[x], iter] /. coeff) /; dorder > 0 ]
seriesDSolve[y'[x] == x^2 + (y[x])^2, y, {x, 0, 15}, {y[0] -> 1}]
1 + x + x^2 + (4 x^3)/3 + (7 x^4)/6 + (6 x^5)/5 + (37 x^6)/30 + (404 x^7)/315 + (369 x^8)/280 + (428 x^9)/315 + (1961 x^10)/1400 + (75092 x^11)/51975 + (1238759 x^12)/831600 + (9884 x^13)/6435 + (17121817 x^14)/10810800 + (115860952 x^15)/70945875 + O[x]^16
So we have
\begin{align*} y(x) &= 1 + x + x^2 + \frac{4}{3}\,x^3 + \frac{7}{6}\, x^4 + \frac{6}{5}\,x^5 + \frac{37}{30}\, x^6 + \frac{404}{315}\, x^7 + \frac{369}{280} \, x^8 + \frac{428}{315}\,x^9 + \frac{1961}{1400}\,x^{10} \\ & \quad + \frac{75092}{51975}\,x^{11} + \frac{1238759}{831600}\, x^{12} + \frac{9884}{6435}\,x^{13} + \frac{17121817}{10810800}\, x^{14} + \frac{115860952}{70945875}\, x^{15} + \cdots . \end{align*}
Incidentally, this procedure can be automated very easily with a computer algebra system. Here's how to do so with Mathematica:
For[{n = 1, y[0][x_] = 1}, n < 4, n++,
y[n][x_] = 1 + Integrate[y[n - 1][t]^2 + t^2, {t, 0, x}];
Print[{n, y[n][t]}]]
{1, 1 + t + t^3/3}
{2, 1 + t + t^2 + (2 t^3)/3 + t^4/6 + (2 t^5)/15 + t^7/63}
{3, 1 + t + t^2 + (4 t^3)/3 + (5 t^4)/6 + (8 t^5)/15 + (29 t^6)/90 + (47 t^7)/315 + (41 t^8)/630 + (299 t^9)/11340 + (4 t^10)/525 + (184 t^11)/51975 + t^12/2268 + (4 t^13)/12285 + t^15/59535}
So
\begin{align*} \phi_0 &= 1, \\ \phi_1 (x) &= 1 + x + \frac{x^3}{3} , \\ \phi_2 (x) &= 1 + x + x^2 + \frac{2\,x^3}{3} + \frac{1}{6}\, x^4 + \frac{2}{15}\, x^5 + \frac{x^7}{63} , \\ \phi_3 (x) &= 1 + x + x^2 + \frac{4}{3}\, x^3 + \frac{5}{6}\, x^4 + \frac{8}{15}\, x^5 + \frac{29}{90}\, x^6 + \frac{47}{315}\, x^7 \\ &\quad + \frac{41}{630}\, x^8 + \frac{299}{11340}\, x^9 + \frac{4}{525}\, x^{10} + \frac{184}{51975}\, x^{11} + \frac{1}{2268}\,x^{12} + \frac{4}{12285}\,x^{13} + \frac{x^{15}}{59535} . \end{align*}
If you want intermediate results, too, here is a function that keeps them:
picardList[initialVector_, flow_, n_, var_] :=
 Module[{time},
  FoldList[iterate[initialVector, flow, #1, #2, time] &,
    initialVector, Reverse[Range[n] - 1]] /. {Subscript[time, _] -> var}]

picardList[1, flow, 4, x] // TableForm
These coefficients exhibit no obvious pattern, so we cannot estimate the radius of convergence of the series from them. Now we try to find an explicit solution:
DSolve[{y'[x] == (y [x])^2 + x^2, y[0] == 1}, y[x], x]
{{y[x] -> (-x^2 BesselJ[-(3/4), x^2/2] Gamma[1/4] + x^2 BesselJ[-(5/4), x^2/2] Gamma[3/4] + BesselJ[-(1/4), x^2/2] Gamma[3/4] - x^2 BesselJ[3/4, x^2/2] Gamma[3/4])/(x (BesselJ[1/4, x^2/2] Gamma[1/4] - 2 BesselJ[-(1/4), x^2/2] Gamma[3/4]))}}
Therefore, its solution is expressed through Bessel functions:
\[ y(x) = \dfrac{-x^2 J_{-3/4} (x^2 /2)\,\Gamma (1/4) + x^2 J_{-5/4} (x^2 /2)\,\Gamma (3/4) + J_{-1/4} (x^2 /2)\,\Gamma (3/4) -x^2 J_{3/4} (x^2 /2)\,\Gamma (3/4)}{x \, J_{1/4} (x^2 /2)\,\Gamma (1/4) -2\,x\,J_{-1/4} (x^2 /2)\,\Gamma (3/4)} . \]
Obviously, the solution blows up at the points where the denominator is zero.
FindRoot[(BesselJ[1/4, x^2/2] Gamma[1/4] - 2 BesselJ[-(1/4), x^2/2] Gamma[3/4]) == 0, {x, 1}]
{x -> 0.969811}
So we expect the solution to exist on the interval [0, 0.969811). Plotting the fourth Picard iteration together with the explicit solution, we obtain the following.
y[x_] = (-x^2 BesselJ[-(3/4), x^2/2] Gamma[1/4] +
    x^2 BesselJ[-(5/4), x^2/2] Gamma[3/4] +
    BesselJ[-(1/4), x^2/2] Gamma[3/4] -
    x^2 BesselJ[3/4, x^2/2] Gamma[3/4])/
   (x (BesselJ[1/4, x^2/2] Gamma[1/4] - 2 BesselJ[-(1/4), x^2/2] Gamma[3/4]))
phi4[x_] =
  1 + x + x^2 + (4 x^3)/3 + (7 x^4)/6 + (6 x^5)/5 + (107 x^6)/90 +
   (362 x^7)/315 + (2683 x^8)/2520 + (2689 x^9)/2835 + (92179 x^10)/113400 +
   (6673 x^11)/9900 + (201503 x^12)/374220 + (612853 x^13)/1474200 +
   (3774677 x^14)/12162150 + (127144841 x^15)/567567000
Plot[{y[x], phi4[x]}, {x, 0, 1}, PlotStyle -> Thick]
Figure: the fourth Picard iteration and the explicit solution.
The plot shows a good approximation on the interval [0, 0.7], approximately. Picard's theorem guarantees the existence of the solution on the interval \( [0, 1/\sqrt{2}) \approx [0, 0.707) , \) which falls well short of the root 0.969811, where the denominator vanishes.    ■

Example: Consider the initial value problem for the Riccati equation

\[ y' = x^2 - y^2 , \qquad y(0) =1. \]
First, we find its explicit solution
DSolve[{y'[x] == -(y [x])^2 + x^2, y[0] == 1}, y[x], x]
which gives a complex-valued answer. So we look for another representation:
\[ y(x) = x\,\frac{I_{-3/4} \left( x^2 /2 \right) \pi \left( \sqrt{2} + \pi / \Gamma^2 (3/4) \right) - 2 \,K_{3/4} \left( x^2 /2\right) }{\pi \left( \sqrt{2} + \pi / \Gamma^2 (3/4) \right) I_{1/4} \left( x^2 /2 \right) + 2\,K_{1/4} \left( x^2 /2 \right)} . \]
Actually, all we need to know about the true solution is that it exists for all x, along with its Maclaurin power series expansion:
\[ y(x) = 1-x + x^2 - \frac{2}{3}\,x^3 + \frac{5}{6}\,x^4 - \frac{4}{5}\, x^5 + \frac{23}{30}\,x^6 - \frac{236}{315}\, x^7 + \frac{201}{280}\, x^8 - \frac{218}{315}\,x^9 + \frac{2803}{4200}\,x^{10} - \frac{33412}{51975}\,x^{11} + \cdots . \]
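This expansion can be reproduced with the subroutine seriesDSolve defined in the previous example:

seriesDSolve[y'[x] == x^2 - (y[x])^2, y, {x, 0, 11}, {y[0] -> 1}]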
Now we find Picard's iterations. Of course, we first use Mathematica:
For[{n = 1, y[0][x_] = 1}, n < 8, n++,
y[n][x_] = 1 + Integrate[-y[n - 1][t]^2 + t^2, {t, 0, x}];
Print[{n, y[n][t]}]]
\begin{align*} \phi_0 &= 1, \\ \phi_1 (x) &= 1 + \int_0^x \left( t^2 -1^2 \right) {\text d}t = 1 - x + \frac{x^3}{3} , \\ \phi_2 (x) &= 1 + \int_0^x \left( t^2 -\phi_1^2 (t) \right) {\text d}t = 1 -x + x^2 - \frac{x^4}{6} + \frac{2\,x^5}{15} - \frac{x^7}{63} , \\ \phi_3 (x) &= 1 + \int_0^x \left( t^2 -\phi_2^2 (t) \right) {\text d}t = 1 - x + x^2 - \frac{2x^3}{3} + \frac{x^4}{2} - \frac{2x^5}{15} - \frac{x^6}{10} + \cdots \\ \phi_4 (x) &= 1 + \int_0^x \left( t^2 -\phi_3^2 (t) \right) {\text d}t = 1 - x + x^2 - \frac{2x^3}{3} + \frac{5\,x^4}{6} - \frac{2\,x^5}{3} + \frac{13\,x^6}{30} - \cdots . \end{align*}
There are some discrepancies in the higher-order coefficients; however, Picard's theorem guarantees that they eventually disappear and that the series stabilizes.    ■

Example: In practice, the Picard iteration scheme can be implemented only when the slope function is a polynomial, because polynomials are essentially the only functions whose iterated compositions we can integrate explicitly. The following example shows how an equation containing a radical prevents the explicit integrations required by Picard's iteration. Consider the initial value problem for the autonomous equation

\[ y' = \sqrt{1/4 + y^2} , \qquad y(0) =3. \]
Its solution exists for all x and is given by
\[ y(x) = \frac{1}{2}\,\sinh \left( x + \mbox{arcsinh}(6) \right) . \]
Mathematica confirms:
DSolve[{y'[x] == Sqrt[1/4 + (y[x])^2], y[0] == 3}, y[x], x]
Out[1]= {{y[x] -> 1/2 Sinh[x + ArcSinh[6]]}}
Next, we find its series solution (using the previously defined subroutine seriesDSolve):
seriesDSolve[ y'[x] == Sqrt[1/4 + (y[x])^2], y, {x, 0, 15}, {y[0] -> 3}]
3 + (Sqrt[37] x)/2 + (3 x^2)/2 + (Sqrt[37] x^3)/12 + x^4/8 + (Sqrt[37] x^5)/240 + x^6/240 + (Sqrt[37] x^7)/10080 + x^8/13440 + (Sqrt[37] x^9)/725760 + x^10/1209600 + (Sqrt[37] x^11)/79833600 + x^12/159667200 + (Sqrt[37] x^13)/12454041600 + x^14/29059430400 + (Sqrt[37] x^15)/2615348736000 + O[x]^16
Now we try to use Picard's iterations:
\begin{align*} \phi_0 &= 3, \\ \phi_1 (x) &= 3 + \int_0^x \sqrt{1/4 + 3^2}\,{\text d}t = 3 + \frac{\sqrt{37}}{2}\, x , \\ \phi_2 (x) &= 3 + \int_0^x \sqrt{1/4 + \left( 3 + \frac{\sqrt{37}}{2}\, t \right)^2}\,{\text d}t . \end{align*}
Mathematica failed to evaluate the integral in the second Picard iteration in closed form. However, Maple can handle this integral, providing the answer:
\[ \phi_2 (x) = \frac{3}{2} - \frac{\sqrt{37}\,\ln \left( 6+\sqrt{37} \right)}{148} + \frac{3\sqrt{37}}{74}\,\sqrt{37+37x^2+12\sqrt{37}\,x} + \frac{x}{4}\,\sqrt{37+37x^2+12\sqrt{37}\,x} + \frac{\sqrt{37}}{148}\,\mbox{arcsinh} \left( 6 + \sqrt{37}\,x \right) . \]
So we conclude that Picard's procedure is not practical because we cannot find the next iteration.    ■

Example: Consider the IVP:

\[ y' = y^2 , \qquad y(0) = 1. \]
Since it is an autonomous equation, we find its solution by separation of variables
\[ \frac{{\text d}y}{y^2} = {\text d}x \qquad \Longrightarrow \qquad y(x) = \frac{1}{1-x} . \]
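Mathematica's DSolve recovers the same solution (a quick check):

DSolve[{y'[x] == (y[x])^2, y[0] == 1}, y[x], x]   (* y[x] -> 1/(1 - x) *)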
So the solution exists on any interval not containing x = 1. Picard's theorem provides the estimated interval of existence (−h, h), where on the rectangle \( R = [-a, a] \times [1-b, 1+b] \) we have \( M = \max y^2 = (1+b)^2 , \) so that
\[ h = \max_{a, b} \,\min \left\{ a, \frac{b}{M} \right\} = \max_{b>0} \frac{b}{(1+b)^2} = \frac{1}{4} , \]
because \( b/(1+b)^2 \) attains its maximum value 1/4 at b = 1 (and we may take any a ≥ 1/4).
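The inner maximization is easy to confirm with Mathematica:

Maximize[{b/(1 + b)^2, b > 0}, b]   (* {1/4, {b -> 1}} *)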

If we consider another initial condition

\[ y' = y^2 , \qquad y(1) = -1; \]
its solution becomes
\[ y = - \frac{1}{x} . \]
This solution does not exist at x = 0, although the slope function f(y) = y² is continuous everywhere.    ■

 

  1. Bai, X., Modified Chebyshev--Picard iteration methods for solution of initial value and boundary value problems, Dissertation, Texas A&M University, 2010.
  2. Bai, X. and Junkins, J., Modified Chebyshev--Picard iteration methods for solution of initial value problems, The Journal of the Astronautical Sciences, 2011, Vol. 59, pp. 327--351.
  3. Bai, X. and Junkins, J., Modified Chebyshev--Picard iteration methods for solution of boundary value problems, The Journal of the Astronautical Sciences, 2011, Vol. 58, pp. 615--642.
  4. Fabrey, J., Picard's theorem, The American Mathematical Monthly, 1972, Vol. 79, No. 9, pp. 1020--1023.
  5. Lozinski, S.M., On a class of linear operators, Doklady Akademii Nauk SSSR, 1948, Vol. 61, pp. 193--196. (Russian)
  6. Robin, W.A., Solving differential equations using modified Picard iteration, International Journal of Mathematical Education in Science and Technology, 2010, Vol. 41, No. 5, pp. 649--665; doi: 10.1080/00207391003675182
  7. Tisdell, C.C., Improved pedagogy for linear differential equations by reconsidering how we measure the size of solutions, International Journal of Mathematical Education in Science and Technology, 2017, Vol. 48, No. 7, pp. 1087--1095; https://doi.org/10.1080/0020739X.2017.1298856
  8. Tisdell, C.C., On Picard's iteration method to solve differential equations and a pedagogical space for otherness, International Journal of Mathematical Education in Science and Technology, 2018; https://doi.org/10.1080/0020739X.2018.1507051
  9. Youssef, I.K., Picard iteration algorithm combined with Gauss--Seidel technique for initial value problems, Applied Mathematics and Computation, 2007, Vol. 190, pp. 345--355.
