
Preface


The fundamental importance of existence theorems for initial value problems (IVPs for short) hardly needs justification. Historically, proofs of these theorems fall into two categories. The first employs the Cauchy--Lipschitz method, based on constructing Euler polygon approximations (Tonelli sequences). This approach guarantees only the existence of solutions, not their uniqueness. The Euler--Cauchy--Lipschitz method can hardly be called constructive, and it serves mostly as a theoretical justification.

The second kind of proof is based on transforming the given initial value problem (which involves an unbounded differential operator) into an equivalent integral equation, which allows one to reformulate the IVP in terms of a bounded (integral) operator. The integral equation is then solved by successive approximations, credited to É. Picard. One proves that these successive approximations converge to a true solution and that this solution is unique. Although the practical applicability of Picard's iteration scheme is limited, it provides a constructive proof of the required solution to the IVP. This is the main reason why the iteration method is so popular.

Many generalizations of these two approaches (identified with the names of Peano and Picard, respectively) are now known. However, they involve more sophisticated tools, namely fixed point theorems in infinite-dimensional function spaces, so we do not pursue this direction.

Existence of solutions to ODEs


An ordinary differential equation may have no solution, a unique solution, or infinitely many solutions. For instance, \( y'^2 + y^2 + 4 =0, \quad y(0) =3 \) has no (real) solution. The ODE \( y' = x^2 +y, \quad y(0) =-2 \) has the unique solution \( y = -2-2x-x^2 , \) whereas the ODE \(x\,y' = y-2 , \quad y(0) =2 \) has infinitely many solutions y = 2 + αx, where α is an arbitrary real number.

Giuseppe Peano.
Theorem (Peano, 1886): Let Ω be an open subset of ℝ × ℝ and let f : Ω → ℝ be a continuous bounded function. Then the initial value problem
\begin{equation} \label{EqExist.1} y' = f(x,y), \qquad y(x_0 ) = y_0 , \end{equation}
with (x0, y0) ∈ Ω has a local solution y : (𝑎, b) → ℝ, where (𝑎, b) is a neighborhood of x0.

 

This theorem was proved in 1886 by the Italian mathematician Giuseppe Peano (1858--1932). He employed the Euler--Cauchy--Lipschitz polygon method. The idea of this method is to consider a suitably chosen Euler polygon as an approximate solution of IVP \eqref{EqExist.1} and to obtain a solution of IVP \eqref{EqExist.1} as the limit of a sequence of such approximations. A contemporary proof is presented in the Coddington and Levinson book; it is based on reduction to an integral equation, which is then solved by an elegant method due to Tonelli. Peano's theorem does not ensure that solutions are uniquely determined by their initial values.

Giuseppe Peano was a founder of symbolic logic whose interests centered on the foundations of mathematics and on the development of a formal logical language. In 1890, Peano founded the journal Rivista di Matematica, which published its first issue in January 1891. In 1891, Peano started the Formulario Project. It was to be an "Encyclopedia of Mathematics", containing all known formulae and theorems of mathematical science, written in a standard notation invented by Peano. In 1888, Peano published the book Geometrical Calculus, which begins with a chapter on mathematical logic.

In addition to his teaching at the University of Turin, Peano began lecturing at the Military Academy in Turin in 1886. The following year he discovered, and published, a method for solving systems of linear differential equations using successive approximations. Émile Picard had independently developed this method and had credited the German mathematician Hermann Schwarz (1843--1921) with its discovery.

It turns out that continuity of the slope function is not sufficient for existence and uniqueness of solutions to ODEs. We need a stronger condition.

A function f from S ⊂ ℝm into ℝn is Lipschitz continuous at xS (also called locally Lipschitz continuous at x) if there exists a positive constant C such that
\begin{equation} \label{EqExis.2} \| f(y) - f(x) \| \leqslant C\,\| y-x \| \end{equation}
for all yS in some neighborhood of x.

A function f from S ⊂ ℝm into ℝn is called a Lipschitz function if there exists a positive constant C such that inequality \eqref{EqExis.2} holds for all x, yS. The smallest such constant is sometimes called the (best) Lipschitz constant, or the dilation or dilatation.

For functions on a closed bounded interval, the following implications hold:
\[ \mbox{continuously differentiable} \quad \Longrightarrow \quad \mbox{Lipschitz continuous} \quad \Longrightarrow \quad \mbox{absolutely continuous} \quad \Longrightarrow \quad \mbox{uniformly continuous}. \]

In the one-dimensional case, geometric intuition tells us that if the inequality \( |f(y) - f(x)| \leqslant C\,|y-x| \) holds for all x, y ∈ [𝑎, b], then the graph of f lies wedged between the line of slope C through the point (𝑎, f(𝑎)) and the line of slope −C through the same point (and likewise through every other point of the graph). Informally, the function cannot change too quickly.
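A short Mathematica illustration of this wedge (our own sketch; the function f(x) = sin 2x, the point 𝑎 = 1, and the constant C = 2 are arbitrary choices, valid since |f'| ≤ 2):
Plot[{Sin[2 x], Sin[2] + 2 (x - 1), Sin[2] - 2 (x - 1)}, {x, 0, 2}, PlotStyle -> {Thick, Dashed, Dashed}, Epilog -> {PointSize[Medium], Point[{1, Sin[2]}]}]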

Lemma 1: Suppose that f(x) is a real-valued function defined on an interval [𝑎, b] ⊂ ℝ of nonzero length. If its derivative f' = df/dx is bounded on the closed interval [𝑎, b], then f is a Lipschitz function on this interval.
Suppose that \( | f' (x) | \leqslant M \) for all x ∈ [𝑎, b]. Then for x and y in [𝑎, b], the Mean Value Theorem yields
\[ f(y) - f(x) = f' (\xi ) \left( y-x \right) \]
for some ξ between x and y. Hence
\[ | f(y) - f(x) | \leqslant M \,|y-x| . \]
Thus, inequality \eqref{EqExis.2} holds with C = M.
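Lemma 1 is easy to test numerically. Here is a minimal sketch (the cubic and the interval are our arbitrary choices): take C = max |f'| over the interval and check inequality \eqref{EqExis.2} on a sample of random pairs.
f[x_] := x^3 - x;   (* |f'(x)| = |3 x^2 - 1| <= 11 on [-2, 2] *)
m = NMaxValue[{Abs[f'[x]], -2 <= x <= 2}, x];
pairs = RandomReal[{-2, 2}, {1000, 2}];
And @@ Apply[Abs[f[#1] - f[#2]] <= (m + 10^-8) Abs[#1 - #2] &, pairs, {1}]
(* True; the tiny slack guards against roundoff in m *)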

Example 1: We start with a familiar function: f(x) = |x|. This function is not differentiable at the origin, and hence not differentiable on any interval containing the origin. Nevertheless, f(x) is a Lipschitz function on any finite interval, with Lipschitz constant equal to 1.

The function f = tan x is differentiable at each point of the open interval (−π/2, π/2) and, therefore, Lipschitz continuous at each x ∈ (−π/2, π/2). However, its derivative is unbounded on the open interval, so there is no constant C for which inequality \eqref{EqExis.2} holds on all of it. Hence tan(x) is not a Lipschitz function on (−π/2, π/2), but it is a Lipschitz function on any closed interval [𝑎, b] ⊂ (−π/2, π/2).

The function

\[ f(x) = \begin{cases} x^2 \sin \left( 1/x^2 \right) , & \quad \mbox{if} \quad x \ne 0 , \\ 0, & \quad \mbox{if} \quad x =0 , \end{cases} \]
is differentiable everywhere (with f'(0) = 0), but its derivative, \( 2x\,\sin \left( 1/x^2 \right) - (2/x)\cos \left( 1/x^2 \right) , \) is unbounded near the origin, so f is not Lipschitz continuous on any interval containing 0. The function becomes arbitrarily steep as x approaches 0. However, it is uniformly continuous.

Similarly, the function

\[ f(x) = \begin{cases} e^{-1/x^2} , & \quad \mbox{if} \quad x \ne 0 , \\ 0, & \quad \mbox{if} \quad x =0 , \end{cases} \]
has derivatives of all orders at every point, including the origin, and it is locally Lipschitz continuous.

The square root function \( f(x) = \sqrt{|x|} \) is not Lipschitz continuous at the origin: the quotient \( |f(y) - f(0)| / |y - 0| = 1/\sqrt{|y|} \) is unbounded as y → 0.
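A quick numerical confirmation (ours): the difference quotient of \( \sqrt{|x|} \) at the origin grows without bound.
Table[Sqrt[Abs[x]]/Abs[x] /. x -> 10.0^-k, {k, 1, 6}]
(* {3.16228, 10., 31.6228, 100., 316.228, 1000.} *)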

All proofs of the following existence theorems are based on reduction of the initial value problem

\[ y' = f(x,y), \qquad y(x_0 ) = y_0 \tag{1} \]
to the Volterra integral equation
\begin{equation} \label{EqExis.3} y(x) = y(x_0 ) + \int_{x_0}^x f(s, y(s))\,{\text d} s . \end{equation}
The main reason for this transformation is to replace an initial value problem involving the unbounded derivative operator by a problem involving a bounded (integral) operator. However, the Volterra integral operator on the right-hand side of Eq.\eqref{EqExis.3} is not defined for all y, so its domain has to be determined with care.
Lemma 2: Let f(x, y) be a real-valued continuous function of the variables x and y. Then any solution of the initial value problem (1) is a solution of the integral equation \eqref{EqExis.3}, and conversely.
If y(x) satisfies the integral equation \eqref{EqExis.3}, then this function satisfies the initial condition y(x0) = y0. Differentiating both sides of the integral equation and using the Fundamental Theorem of Calculus, we conclude that y(x) is a solution of the given initial value problem (1).

Conversely, the Fundamental Theorem of Calculus shows that \( \displaystyle y(x) = y(x_0 ) + \int_{x_0}^x y' (s)\,{\text d}s \) for every continuously differentiable function y(x). If y(x) satisfies the differential equation \eqref{EqExist.1}, then \( \displaystyle y(x) = y(x_0 ) + \int_{x_0}^x f(s, y(s))\,{\text d}s \); if, in addition, y(x0) = y0, the integral equation \eqref{EqExis.3} is obtained.

Corollary 1: The initial value problem (1) has a solution if and only if the integral equation \eqref{EqExis.3} has a fixed point in the class of continuous functions ℭ[x0, h] for some h > x0.

Theorem 2: Let f(x,y) be continuous for all (x,y) in the open rectangle \( R= \left\{ (x,y)\,:\, |x-x_0 | < a, \quad |y- y_0 | < b \,\right\} \) and Lipschitz continuous in y, with constant L independent of x. Then there exists a unique solution to the initial value problem
\[ y' = f(x,y) , \qquad y(x_0 ) = y_0 \]
on some interval |xx0| ≤ h, h > 0. Moreover, if z(x) is the solution to the same problem with the initial condition \( z(x_0 ) = z_0 , \) then
\[ \left\vert y(x) - z(x) \right\vert \le e^{L|x- x_0 |} \,|y_0 - z_0 | . \qquad ⧫ \]
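The last estimate is easy to observe numerically; here is a hedged sketch (the equation y' = y is our own choice, with L = 1), comparing two solutions started at y0 = 1 and z0 = 1.1. For this linear equation the bound is attained, since y(x) − z(x) = (y0z0)ex.
{ys, zs} = NDSolveValue[{u'[x] == u[x], u[0] == #}, u, {x, 0, 2}] & /@ {1, 1.1};
{Abs[ys[2] - zs[2]], Exp[2]*Abs[1 - 1.1]}
(* both ≈ 0.738906 *)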

Theorem (Picard): Suppose that f(x,y) and \( \frac{\partial f}{\partial y} \) are continuous functions defined in some rectangular region
\[ R = \left\{ (x,y)\,: \,| x_0 - x | \le \delta , \quad |y_0 - y | \le \epsilon \right\} \]
containing the point \( \left( x_0 , y_0 \right) . \) If these functions are bounded in R:
\[ \left\vert f(x,y) \right\vert \le M \qquad \mbox{and}\qquad \left\vert \frac{\partial f}{\partial y} \right\vert \le K \]
for some positive constants M and K, then the initial value problem
\[ y' = f(x,y) , \qquad y(x_0 ) = y_0 \]
has a unique solution in the interval
\[ \left[ x_0 -h , x_0 +h \right] , \qquad \mbox{where} \qquad h \le \min \left\{ \delta , \frac{\epsilon}{M} \right\} . \qquad ⧫ \]

Theorem (Picard--Lindelöf): Suppose that the slope function f(x,y) is defined and continuous in the rectangular domain
\[ R = \left\{ (x,y)\,:\, \left\vert x- x_0 \right\vert \le a, \quad \left\vert y- y_0 \right\vert \le b\right\} , \qquad\mbox{for some positive constants }a, b. \]
We also assume that f(x,y) satisfies the following conditions.
  • The slope function is bounded: \( M = \max_{(x,y) \in R} \, |f(x,y)| . \)
  • The slope function is Lipschitz continuous in y: \( |f(x,y) - f(x,z) | \le L\,|y-z| \) for all (x,y), (x,z) ∈ R.
Then for
\[ h = \min \left\{ a, \ \frac{b}{M}, \ \frac{1}{2L} \right\} \]
the initial value problem
\[ y' = f(x,y), \qquad y(x_0 ) = y_0 \]
has a unique solution y = φ(x) for \( |x-x_0 | < h . \)    ⧫
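To see how h is computed in practice, here is a minimal sketch (our own) for the slope function f(x,y) = x² + y² on the square |x| ≤ 1, |y| ≤ 1 around (x0, y0) = (0, 0); for this f, the maximum of |∂f/∂y| on the rectangle serves as a Lipschitz constant L.
With[{a = 1, b = 1},
 m = NMaxValue[{x^2 + y^2, Abs[x] <= a && Abs[y] <= b}, {x, y}];  (* M = 2 *)
 l = NMaxValue[{Abs[2 y], Abs[y] <= b}, y];                       (* L = 2 *)
 Min[a, b/m, 1/(2 l)]]
(* 0.25 *)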
Emile Picard.
The theorem above is usually referred to as Picard's theorem (or sometimes the Picard--Lindelöf theorem, or the method of successive approximations), named after Émile Picard (1858--1941), who proved this result using an iteration procedure. Ernst Leonard Lindelöf (1870--1946) was a Finnish mathematician, and Charles Émile Picard was a French mathematician. Rudolf Otto Sigismund Lipschitz (1832--1903) was a German mathematician who gave his name to the Lipschitz continuity condition. Picard's mathematical papers, textbooks, and many popular writings exhibit an extraordinary range of interests, as well as an impressive mastery of the mathematics of his time. In addition to his theoretical work, Picard made contributions to applied mathematics, including the theories of telegraphy and elasticity. Picard's popular writings include biographies of many leading French mathematicians, including his father-in-law, Charles Hermite.

Any differential operator is an unbounded operator. This prevents its direct use in well-known fixed point theorems. Fortunately, there is a way to bypass this obstacle by transforming the initial value problem into an integral equation of the second kind. Since integration is a bounded operation, the method of successive approximations can then be applied.

Since the initial value problem \( y' = f(x,y) , \quad y(x_0 ) = y_0 \) is equivalent to the Volterra integral equation (provided that the slope function f(x,y) and the solution y(x) are continuous)

\[ y(x) = y_0 + \int_{x_0}^x f(s,y(s))\, {\text d}s , \]
this problem has a unique solution, which is the limit of the sequence of functions \( \left\{ \phi_n (x) \right\}_{n\ge 0} \) satisfying the recurrence:
\begin{equation} \label{EqExis.4} \phi_{n+1} (x) = y_0 + \int_{x_0}^x f(s,\phi_n (s))\, {\text d}s , \qquad n=0,1,2,\ldots ; \quad \phi_0 = y_0 . \end{equation}

Note that the integral recurrence \eqref{EqExis.4} is equivalent to the sequence of initial value problems

\[ \phi'_{n+1} (x) = f \left( x, \phi_n (x) \right) , \qquad \phi_{n+1} (x_0 ) = y_0 . \]
Solving each of these problems is an exercise in calculus because the right-hand side is a known function. However, it is hard to prove convergence of the iterates in this form because these initial value problems involve the unbounded derivative operator. So we stick with the Volterra equation \eqref{EqExis.4} in order to prove convergence; a short computational sketch of recurrence \eqref{EqExis.4} is given below.
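Here is that sketch: a minimal Mathematica implementation of recurrence \eqref{EqExis.4} (the helper name picardIterate is ours; it assumes a slope function that Mathematica can integrate symbolically).
picardIterate[f_, {x0_, y0_}, n_, x_] := Module[{s},
  Nest[Function[phi, y0 + Integrate[f[s, phi /. x -> s], {s, x0, x}]], y0, n]]
(* three iterations for y' = x^2 + y^2, y(0) = 0 *)
picardIterate[#1^2 + #2^2 &, {0, 0}, 3, x]
(* x^3/3 + x^7/63 + (2 x^11)/2079 + x^15/59535 *)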
Proof: Starting with the initial constant function \( \phi_0 = y_0 , \) successive terms are defined recursively:
\[ \phi_{n+1} (x) = y_0 + \int_{x_0}^x f(s,\phi_n (s))\, {\text d}s , \qquad n=0,1,2,\ldots . \]
We represent the limit function as a telescopic sum:
\[ \phi (x) = \lim_{n\to\infty} \phi_n (x) = \phi_0 + \sum_{k\ge 1} \left( \phi_k (x) - \phi_{k-1} (x) \right) . \tag{Picard.1} \]
The general term of the series above can be estimated as
\[ \left\vert \phi_k (x) - \phi_{k-1} (x) \right\vert = \left\vert \int_{x_0}^x \left[ f \left( t, \phi_{k-1} (t) \right) - f \left( t, \phi_{k-2} (t) \right) \right] {\text d}t \right\vert \le \int_{x_0}^x \left\vert f \left( t, \phi_{k-1} (t) \right) - f \left( t, \phi_{k-2} (t) \right) \right\vert {\text d}t \]
By the Lipschitz condition,
\[ \left\vert f \left( t, \phi_{k-1} (t) \right) - f \left( t, \phi_{k-2} (t) \right) \right\vert \le L \left\vert \phi_{k-1} (t) - \phi_{k-2} (t) \right\vert . \]
We are going to prove by induction that
\[ \left\vert \phi_{k} (x) - \phi_{k-1} (x) \right\vert \le \frac{M\,L^{k-1}}{k!} \left\vert x- x_0 \right\vert^k , \qquad k= 1,2,\ldots , \]
where \( M = \max_{(x,y) \in R} |f(x,y)| . \) For k = 1, it is true because
\[ \left\vert \phi_{1} (x) - \phi_{0} (x) \right\vert = \left\vert \int_{x_0}^x f \left( s, y_0 \right) {\text d}s \right\vert \le M \left\vert x- x_0 \right\vert . \]

Let us make the induction hypothesis that this holds for n−1 in place of n. Then we have

\[ \left\vert \phi_n (x) - \phi_{n-1} (x) \right\vert = \left\vert \int_{x_0}^x \left[ f\left( s, \phi_{n-1} (s) \right) - f\left( s, \phi_{n-2} (s) \right) \right] {\text d}s \right\vert \le L \int_{x_0}^x \left\vert \phi_{n-1} (s) - \phi_{n-2} (s) \right\vert {\text d}s . \tag{Picard.2} \]
According to the induction hypothesis, it follows that:
\[ \left\vert \phi_{n-1} (s) - \phi_{n-2} (s) \right\vert \le \frac{M\,L^{n-2}}{(n-1)!} \left\vert s - x_0 \right\vert^{n-1} . \]
Substituting this inequality into integral (Picard.2), we get
\[ L \int_{x_0}^x \left\vert \phi_{n-1} (s) - \phi_{n-2} (s) \right\vert {\text d}s \le L \, \frac{M\,L^{n-2}}{(n-1)!} \int_{x_0}^x \left( s - x_0 \right)^{n-1} {\text d}s = \frac{M\,L^{n-1}}{n!} \left\vert x- x_0 \right\vert^n . \]
Hence,
\[ \left\vert \phi_n (x) - \phi_{n-1} (x) \right\vert \le \frac{M\,L^{n-1}}{n!} \left\vert x- x_0 \right\vert^n . \tag{Picard.3} \]

Next we show that the sequence of partial sums of series (Picard.1) converges uniformly for \( |x - x_0| \le h . \) Recall that the Weierstrass M-test assures us that if every function fn(x) is bounded, \( \left\vert f_n (x) \right\vert \le M_n , \) for all n ≥ 1 and all x in some interval A, and \( \sum_n M_n \) converges, then the series \( \sum_n f_n (x) \) converges absolutely and uniformly on A. Applying estimate (Picard.3) term by term, we obtain

\[ \left\vert \phi (x) \right\vert \le \left\vert \phi_0 (x) \right\vert + \sum_{k\ge 1} \left\vert \phi_{k} (x) - \phi_{k-1} (x) \right\vert \le |y_0| + \sum_{k\ge 1} \frac{M\,L^{k-1}}{k!} \left\vert x - x_0 \right\vert^k = |y_0| + \frac{M}{L} \left( e^{L |x - x_0 |} - 1 \right) . \]
Therefore, the sequence of Picard iterates converges uniformly on the finite interval [x0 - h, x0 + h].

The last step is to prove that the limit function φ(x) satisfies the differential equation \( y' = f(x,y) \) and the initial condition y(x0) = y0. This follows immediately from the equivalence of the initial value problem and the corresponding Volterra integral equation. Note that the latter statement is valid only for continuous functions, so the requirement that the slope function f(x,y) be continuous is essential.   ▣

Corollary 2: The continuous dependence of solutions on the initial conditions holds whenever the slope function f satisfies a global Lipschitz condition.    ⧫
Corollary 3: If the solution y(x) of the initial value problem \( y' = f(x,y), \ y(x_0 )= y_0 \) has an a priori bound M, i.e., \( |y(x)| \le M \) whenever y(x) exists, then the solution exists for all \( x \in \mathbb{R} . \)    ⧫

Notes: There are two main drawbacks in applications of Picard's iteration scheme. If the differential equation is not autonomous (so the slope function depends on both the independent and the dependent variable), then Picard's iteration produces noise terms at every stage. Their contribution is small, and the next iteration corrects them, but new noise terms appear.

Generally speaking, Picard's iteration method is practical only for differential equations with polynomial slope functions; otherwise the integrations cannot be performed explicitly. In tutorial II, it will be shown how this obstacle can be bypassed in many practical cases. Another approach was demonstrated by Lloyd Nicholas Trefethen (born 30 August 1955 in Boston, Massachusetts), who suggested first approximating the slope function by Chebyshev polynomials and then applying Picard's iteration to these expansions. This numerical method is implemented in the Chebfun package---a clear demonstration of how fruitful a combination of analytical tools and computational techniques can be.

Example 2: Consider the initial value problem for the Riccati equation

\[ y' = x^2 + y^2, \quad y(0)=0 , \tag{1.1} \]
which has a unique solution \( y = \phi (x) \) expressed via Bessel functions
\[ \phi (x) = x \, \dfrac{Y_{-3/4} \left( \frac{x^2}{2} \right) - J_{-3/4} \left( \frac{x^2}{2} \right)}{J_{1/4} \left( \frac{x^2}{2} \right) - Y_{1/4} \left( \frac{x^2}{2} \right)} . \tag{1.2} \]
You don't need to know how to solve the Riccati equation because its polynomial approximation can be obtained with a one-line Mathematica command:
AsymptoticDSolveValue[{y'[x] == x^2 + (y[x])^2, y[0] == 0}, y[x], {x, 0, 20}]
x^3/3 + x^7/63 + (2 x^11)/2079 + (13 x^15)/218295 + (46 x^19)/12442815
So we get an approximation of order 20:
\[ y(x) = \frac{x^3}{3} + \frac{x^7}{63} + \frac{2\,x^{11}}{2079} + \frac{13\,x^{15}}{218 295} + \frac{46\,x^{19}}{12442815} + \cdots . \tag{1.3} \]
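As a sanity check (ours), the truncated series indeed satisfies the Riccati equation up to the order of truncation:
p = x^3/3 + x^7/63 + (2 x^11)/2079 + (13 x^15)/218295 + (46 x^19)/12442815;
Series[D[p, x] - x^2 - p^2, {x, 0, 19}]
(* O[x]^20 *)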
First, we transform the given initial value problem (1.1) into the Volterra integral equation
\[ y(x) = \int_0^x \left( y^2 (s) + s^2 \right) {\text d} s = \frac{x^3}{3} + \int_0^x y^2 (s)\,{\text d} s . \tag{1.4} \]
We solve this integral equation by Picard's iteration. Upon setting the initial iterate to zero, ϕ0 = 0, we obtain the sequence
\[ \phi_{m+1} (x) = \frac{x^3}{3} + \int_0^x \phi^2_m (s)\,{\text d} s , \qquad m=0,1,2,\ldots . \tag{1.5} \]
The first iteration follows immediately: ϕ1 = x³/3. The second one is obtained by integrating Eq.(1.5) for m = 1:
\[ \phi_{2} (x) = \frac{x^3}{3} + \int_0^x \phi_1^2 (s)\,{\text d} s = \frac{x^3}{3} + \frac{x^7}{63} . \]
Both terms agree with the expansion (1.3). When determining the next Picard iterates, we put the correct terms in bold font so you can observe the noise terms.
\[ \phi_{3} (x) = \frac{x^3}{3} + \int_0^x \phi_2^2 (s)\,{\text d} s = {\bf \frac{x^3}{3} + \frac{x^7}{63} + \frac{2\,x^{11}}{2079} } + \frac{x^{15}}{59535} . \]
Integrate[(x^3 /3 + x^7 /63)^2 , x]
x^7/63 + (2 x^11)/2079 + x^15/59535
The formula for ϕ3(x) contains three correct terms, but the coefficient of the x^15 term is incorrect---a noise term (compare with 13x^15/218295 in (1.3)). The next iteration gives
\[ \phi_{4} (x) = \frac{x^3}{3} + \int_0^x \phi_3^2 (s)\,{\text d} s = {\bf \frac{x^3}{3} + \frac{x^7}{63} + \frac{2\,x^{11}}{2079} + \frac{13\,x^{15}}{218295} } + \frac{82\,x^{19}}{37328445} + \cdots . \]
Integrate[(x^3 /3 + x^7 /63 + 2*x^(11)/2079 + x^(15) /59535)^2 , x]
x^7/63 + (2 x^11)/2079 + (13 x^15)/218295 + (82 x^19)/37328445 + (662 x^23)/10438212015 + (4 x^27)/3341878155 + x^31/109876902975
Therefore, we observe that each iteration adds new correct terms along with some noise terms; the following iteration corrects the noise but produces noise of its own at higher order. Moreover, the number of terms snowballs---this method cannot be considered a practical one.    ■

Example 3: Consider the initial value problem for the first order differential equation

\[ \frac{{\text d}y}{{\text d}x} = \sqrt{\frac{1}{4} + 2\,y} , \qquad y(0) = 1. \tag{2.1} \]
We apply Picard's iteration directly by solving the fixed point problem
\[ \phi_{n+1} (x) = 1 + \int_0^x \sqrt{\frac{1}{4} + 2\,\phi_n (t)}\,{\text d}t , \qquad n=0,1,2,\ldots . \tag{2.2} \]
If we start with the initial condition ϕ0 = 1, we almost immediately run into a problem of explicit integration. Indeed, we calculate the first few terms:
\begin{align*} \phi_0 &= 1 , \\ \phi_1 (x) &= 1 +\int_0^x \sqrt{\frac{1}{4} + 2}\,{\text d}t = 1 + \frac{3}{2}\, x , \\ \phi_2 (x) &= 1 +\int_0^x \sqrt{\frac{1}{4} + 2\,\phi_1 (t)}\,{\text d}t = 1+ \int_0^x \sqrt{\frac{1}{4} + 2 +3\,t}\,{\text d}t = \frac{x}{\sqrt{3}}\, \sqrt{3 + 4\,x} + \frac{1}{4} \left( 1 + \sqrt{9 + 12\,x} \right) , \end{align*}
because Mathematica helps:
Simplify[1 + Assuming[x > 0, Integrate[Sqrt[9/4 + 3*t], {t, 0, x}]]]
x Sqrt[1 + (4 x)/3] + 1/4 (1 + Sqrt[9 + 12 x])
Mathematica can attempt the next iteration (note that, per the recurrence (2.2), the integrand is \( \sqrt{\frac{1}{4} + 2\,\phi_2 (t)} \)):
Simplify[1 + Assuming[x > 0, Integrate[Sqrt[1/4 + 2 (t Sqrt[1 + (4 t)/3] + 1/4 (1 + Sqrt[9 + 12 t]))], {t, 0, x}]]]
but explicit integration soon breaks down.
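Incidentally (an observation of ours, not needed for the iteration), this particular IVP is separable, with the elementary solution y = (x² + 3x + 2)/2 against which the iterates ϕn can be compared:
DSolve[{y'[x] == Sqrt[1/4 + 2 y[x]], y[0] == 1}, y[x], x]
(* {{y[x] -> (2 + 3 x + x^2)/2}}, up to the form of the output *)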

 

This example illustrates that Picard's scheme is often impractical because the iteration cannot be performed indefinitely---the iterates eventually fail to be explicitly integrable. To carry Picard's method through many steps, the slope function should essentially be a polynomial; otherwise the results of integration cannot, in general, be expressed through elementary functions.
   ■

Several improvements and generalizations of Picard's existence theorem are known. The following one is credited to the Russian mathematician Sergei Lozinsky (1914--1985). One substantial advantage of Lozinsky's statement compared to classical Picard iteration is that it yields a wider domain of existence.

Sergey Lozinsky.

Theorem (Lozinsky): Let f(x,y) be a real-valued continuous function in some domain Ω ⊂ ℝ² and let M(x) be a nonnegative continuous function on some finite interval \( I : x_0 \le x \le x_1 . \) Let \( |f(x,y)| \le M(x) \) for xI and (x,y) ∈ Ω. Suppose that the closed domain Q, defined by the inequalities

\[ x_0 \le x \le x_1 , \qquad \left\vert y - y_0 \right\vert \le \int_{x_0}^x M(u)\,{\text d}u , \]
is a subset of Ω and there exists a nonnegative integrable function k(x), xI, such that
\[ \left\vert f(x, y_1 ) - f(x, y_2 ) \right\vert \le k(x) \left\vert y_1 - y_2 \right\vert , \qquad x_0 \le x \le x_1 , \quad (x,y_1 ), \ (x,y_2 ) \in Q . \]
Then the formula
\[ \phi_{n+1} (x) = y_0 + \int_{x_0}^x f \left( s, \phi_n (s) \right) {\text d}s \]
defines the sequence of functions { φn(x) } that converges to a unique solution of the initial value problem \( y' = f(x,y), \ y(x_0 ) = y_0 \) provided that all points (x, φn(x)) are included in Q when x0xx1. Moreover,
\[ \left\vert y(x) - \phi_{n} (x) \right\vert \le \frac{1}{n!} \int_{x_0}^x M(u)\,{\text d}u \left[ \int_{x_0}^x k(u)\,{\text d}u \right]^n . \qquad ⧫ \]

The theorem above was proved by the Russian mathematician Sergey Mikhailovich Lozinskii/Lozinsky (1914--1985), a student of V.I. Smirnov and S.N. Bernstein. Sergey Lozinsky made many important contributions to error estimation methods for various types of approximate solutions of ordinary differential equations. During 1941--1942 he was on active military duty in the defense of Leningrad (now St. Petersburg).

Example 4: Consider the initial value problem for the Riccati equation

\[ y' = x^2 + y^2, \quad y(0)=0 , \]
which has a unique solution \( y = \phi (x) \) expressed via Bessel functions
\[ \phi (x) = x \, \dfrac{Y_{-3/4} \left( \frac{x^2}{2} \right) - J_{-3/4} \left( \frac{x^2}{2} \right)}{J_{1/4} \left( \frac{x^2}{2} \right) - Y_{1/4} \left( \frac{x^2}{2} \right)} . \]
This function blows up at \( x \approx 2.003147359 \) --- the first positive root of the transcendental equation \( J_{1/4} \left( \frac{x^2}{2} \right) = Y_{1/4} \left( \frac{x^2}{2} \right) . \)
Plot[{BesselJ[1/4, x^2 /2], BesselY[1/4, x^2 /2]}, {x, 0.5, 8}, PlotStyle -> Thick]
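The blow-up location can be confirmed numerically (a check of ours):
FindRoot[BesselJ[1/4, x^2/2] == BesselY[1/4, x^2/2], {x, 2}]
(* {x -> 2.00315} *)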
Actually, Picard's theorem guarantees a unique solution only within the interval \( \left[ 0, 2^{-1/2} \right] \approx [0,0.707107 ] \) (a short optimization reproducing this radius appears after the iterates below). To find the solution, we use Picard's iteration procedure:
\begin{align*} \phi_0 (x) &= 0 , \\ \phi_1 (x) &= \int_0^x \left( x^2 + 0^2 \right) {\text d}x = \frac{x^3}{3} , \\ \phi_2 (x) &= \int_0^x \left( x^2 + \left( \frac{x^3}{3} \right)^2 \right) {\text d}x = \frac{x^3}{3} + \frac{x^7}{63} , \\ \phi_3 (x) &= \int_0^x \left( x^2 + \left( \phi_2 (x) \right)^2 \right) {\text d}x = \frac{x^3}{3} + \frac{x^7}{63} + \frac{2\,x^{11}}{2079} + \frac{x^{15}}{59535} , \\ \phi_4 (x) &= \int_0^x \left( x^2 + \left( \phi_3 (x) \right)^2 \right) {\text d}x \\ &= \frac{x^3}{3} + \frac{x^7}{63} + \frac{2\,x^{11}}{2079} + \frac{13\,x^{15}}{218,295} + \frac{82\,x^{19}}{37,328,445} + \frac{662\,x^{23}}{10,438,212,015} + \cdots , \end{align*}
and so on.
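Here is the promised sketch (ours) of where the radius 2^{-1/2} comes from: over all rectangles |x| ≤ 𝑎, |y| ≤ b about the origin, the bound is M = max (x² + y²) = 𝑎² + b², and we maximize h = min{𝑎, b/M}.
NMaximize[{Min[a, b/(a^2 + b^2)], 0 < a < 5 && 0 < b < 5}, {a, b}]
(* {0.707107, {a -> 0.707107, b -> 0.707107}} *)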

We can extend Picard's domain of existence [0, 0.707] with the aid of Lozinsky's theorem. Let us consider the domain Ω defined by the inequalities (containing positive numbers x1 and A to be determined)

\[ 0 \le x \le x_1 , \qquad 0 \le |y| \le A\,x^3 . \tag{3.1} \]
Then
\[ M(x) = \max_{(x,y) \in \Omega} \left( x^2 + y^2 \right) = x^2 + A^2 x^6 , \qquad 0 \le x \le x_1 , \]
and the set Q is defined by inequalities
\[ 0 \le x \le x_1 , \qquad 0 \le |y| \le \frac{x^3}{3} + A^2 \frac{x^7}{7} . \]
In order to have Q ⊂ Ω, the following inequality must hold:
\[ \frac{1}{3} + A^2 \frac{x_1^4}{7} \le A . \]
This is equivalent to the inequality \( A^2 \frac{x_1^4}{7} - A + \frac{1}{3} \le 0 , \) whose boundary equation has the two roots:
\[ A = \frac{1 \pm \sqrt{1 - \frac{4\,x_1^4}{21}}}{2\,x_1^4 /7} . \]
Hence, the quadratic equation has two real roots when x1 satisfies the inequality
\[ x_1 \le \sqrt[4]{\frac{21}{4}} \approx 1.513700052 . \tag{3.2} \]
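The critical value in (3.2) is easy to check:
N[(21/4)^(1/4), 10]
(* 1.513700052 *)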
Therefore, Lozinsky's theorem gives a larger interval of existence (more than double) than Picard's theorem.

Example 5: Consider the initial value problem for the Riccati equation

\[ y' = x^2 - y^2, \quad y(0)=0 . \tag{4.1} \]
The slope function x² −y² satisfies
\[ \left\vert x^2 - y^2 \right\vert \le \begin{cases} x^2 , & \quad\mbox{if} \quad x^2 \ge y^2 , \\ y^2 , & \quad\mbox{if} \quad y^2 \ge x^2 . \end{cases} \]
For the function x² −y² in the rectangle |x| ≤ 𝑎, |y| ≤ b, the Picard theorem guarantees the existence of a unique solution within the interval |x| ≤ h, where
\[ h = \min \left\{ a, \ \frac{b}{\max \left\{ a^2 , b^2 \right\}} \right\} \le 1 . \]
Upon plotting the corresponding direction field, we see that the solution of the given initial value problem exists for all x, whereas Picard's theorem assures us only of existence on a finite interval:
dfield = VectorPlot[{1, x^2 - y^2}, {x, -5, 5}, {y, -5, 5}, Axes -> True, VectorScale -> {Small, Automatic, None}, AxesLabel -> {"x", "dydx=x^2-y^2"}]
To extend the interval of existence, we apply Lozinsky's theorem. First, we consider the function x² −y² in the domain Ω bounded by the inequalities \( 0 \le x \le x_p^{\ast} \) and \( |y| \le A\,x^p , \) where A and p are some positive constants and \( x_p^{\ast} \) will be determined shortly. Then
\[ \left\vert x^2 - y^2 \right\vert \le M(x) = \max_{(x,y) \in \Omega} \left\vert x^2 - y^2 \right\vert = \begin{cases} x^2 , & \quad \mbox{if}\quad x^2 \ge \left( A\,x^p \right)^2 , \\ \left( A\,x^p \right)^2 = A^2 x^{2p} , & \quad \mbox{if}\quad \left( A\,x^p \right)^2 \ge x^2 . \end{cases} \]
Now we define the domain Q by inequalities
\[ 0 \le x \le x_p^{\ast} , \quad |y| \le \int_0^x M(u)\,{\text d} u, \]
where
\[ \int_0^x M(u)\,{\text d}u = \int_0^{c} u^2 \,{\text d}u + \int_{c}^x A^2 u^{2p} \,{\text d}u = \frac{c^3}{3} + \frac{A^2}{2p+1} \left( x^{2p+1} - c^{2p+1} \right) \qquad \mbox{for } x \ge c = A^{1/(1-p)} . \]
In order to guarantee the inclusion Q ⊂ Ω, the inequality \( \int_0^x M(u)\,{\text d}u \le A\,x^p \) should hold. It is valid in an interval \( \epsilon < x < x_p^{\ast} , \) where \( x_p^{\ast} \) is the root of the equation \( \int_0^x M(u)\,{\text d}u = A\,x^p \) and ϵ is a small number. When A → +0 and p → 1+0, the root \( x_p^{\ast} \) can be made arbitrarily large. For instance, when A = 0.001 and p = 1.001, the root is \( x_p^{\ast} \approx 54.69 . \) Therefore, the given IVP has a solution on the whole line −∞ < x < ∞.    ■
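Numerical evidence for this conclusion (ours): integrating the IVP far to the right shows the solution tracking the line y ≈ x rather than blowing up.
sol = NDSolveValue[{y'[x] == x^2 - y[x]^2, y[0] == 0}, y, {x, 0, 50}];
{sol[10], sol[50]}
(* ≈ {9.95, 49.99} *)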
  1. Bownds, J.M. and Diaz, J.B., Euler--Cauchy polygons and the local existence of solutions to abstract ordinary differential equations, Funkcialaj Ekvacioj, 1972, Vol. 15, pp. 193--207.
  2. Coddington, E.A. and Levinson, N., Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955.
  3. Gardner, C., Another elementary proof of Peano's existence theorem, The American Mathematical Monthly, 1976.
  4. Hsu, S.-B., Ordinary Differential Equations with Applications, Second edition, World Scientific, 2013.
  5. Kennedy, H.C., Is there an elementary proof of Peano's existence theorem for first order differential equations? The American Mathematical Monthly, 1969, Vol. 76, No. 9, pp. 1043--1045.
  6. Murray, F.J. and Miller, K.S., Existence Theorems for Ordinary Differential Equations, Dover Publications, 2007.
  7. Peano, G., Sull'integrabilità delle equazioni differenziali del primo ordine, Atti Accad. Sci. Torino, 1886, Vol. 21, pp. 437--445.
  8. Petrovski, I.G., Ordinary Differential Equations, Dover, New York, 1973.
  9. Teschl, G., Ordinary Differential Equations and Dynamical Systems, American Mathematical Society, Providence, 2012. ISBN 978-0-8218-8328-0.
  10. Walter, J., On elementary proofs of Peano's existence theorems.

 
