Consider the initial value problem

\[ y' = f(x, y), \qquad y(x_0) = y_0 . \]
The inverse of the derivative operator D is not an operator in the usual mathematical sense, because it assigns to every function infinitely many outputs, collectively called the antiderivative. To single out one output, we consider the differential operator on the special set of functions that take a prescribed value at a point x = x₀, so y(x₀) = y₀. We denote by L the restriction of D to this set of functions (which is not a linear space). Rewriting the initial value problem in operator form as L[y] = f and applying the inverse L⁻¹ to both sides reduces the given problem to a fixed point problem.
If we integrate both sides of the differential equation y′ = f(x, y) with respect to x from x₀ to x (that is, apply the inverse operator L⁻¹, where L is the unbounded derivative operator restricted to the set of functions satisfying the initial condition), we get the integral equation

\[ y(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\, dt . \]
We now deal with a fixed point problem, which is computationally friendlier because an integral can be approximated to any accuracy we want. The Picard iterative process constructs a sequence of functions { φₙ } that get closer and closer to the desired solution. This is how the process works:

\[ \phi_0(x) = y_0, \qquad \phi_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, \phi_n(t))\, dt, \quad n = 0, 1, 2, \ldots \]
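The Picard process just described can be sketched in plain Python rather than MATLAB. The test problem y′ = y, y(0) = 1 and the grid are illustrative choices, not from the text; for that problem the k-th Picard iterate is exactly the degree-k Taylor polynomial of eˣ, which makes the sketch easy to check.

```python
import math

def picard(f, x0, y0, xs, n_iter):
    """Picard iterates of y' = f(x, y), y(x0) = y0, computed on the grid xs.
    The integral is approximated with the trapezoidal rule; this is a
    plain-Python sketch, not a production solver."""
    phi = [y0] * len(xs)                  # phi_0(x) = y0
    for _ in range(n_iter):
        g = [f(x, p) for x, p in zip(xs, phi)]
        new = [y0]                        # integral from x0 is zero at x0
        for i in range(1, len(xs)):
            step = 0.5 * (g[i] + g[i - 1]) * (xs[i] - xs[i - 1])
            new.append(new[-1] + step)    # phi_{k+1} = y0 + cumulative integral
        phi = new
    return phi

xs = [i / 2000 for i in range(2001)]      # grid on [0, 1]
phi4 = picard(lambda x, y: y, 0.0, 1.0, xs, 4)

# For y' = y the 4th iterate should match the degree-4 Taylor sum of e^x.
taylor4 = sum(1.0 / math.factorial(j) for j in range(5))
```

The only numerical error here comes from the trapezoidal quadrature; refining the grid drives `phi4` toward the exact iterate.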
First, we verify that all members of the sequence exist. Suppose f is continuous and bounded by M on the rectangle |x − x₀| ≤ a, |y − y₀| ≤ b, and set h = min(a, b/M). Then for |x − x₀| ≤ h every iterate stays inside the rectangle, which follows from the inequality

\[ |\phi_{n+1}(x) - y_0| \le \int_{x_0}^{x} |f(t, \phi_n(t))|\, dt \le M\,|x - x_0| \le M h \le b . \]
To show that the sequence converges, we represent the limit function as the telescoping series

\[ \phi(x) = \phi_0(x) + \sum_{k=1}^{\infty} \bigl( \phi_k(x) - \phi_{k-1}(x) \bigr), \]

whose partial sums are exactly the iterates φₙ.
We go step by step, starting with the first term:

\[ |\phi_1(x) - \phi_0(x)| = \left| \int_{x_0}^{x} f(t, y_0)\, dt \right| \le M\,|x - x_0| . \]

If f satisfies a Lipschitz condition in y with constant K, induction then gives

\[ |\phi_{k+1}(x) - \phi_k(x)| \le \frac{M K^{k}\, |x - x_0|^{k+1}}{(k+1)!}, \]

so the telescoping series converges uniformly on |x − x₀| ≤ h by comparison with the exponential series.
Next we prove that the limit function is a solution of the given initial value problem. Letting n approach infinity on both sides of

\[ \phi_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, \phi_n(t))\, dt \]

and passing the limit under the integral sign (justified by the uniform convergence), we conclude that the limit satisfies the integral equation, and therefore the initial value problem.
More details on the existence and uniqueness of solutions to initial value problems can be found in the following textbooks.
Although this approach is most often connected with the names of Charles Émile Picard, Giuseppe Peano, Ernst Lindelöf, Rudolf Lipschitz, and Augustin-Louis Cauchy, it was first published by the French mathematician Joseph Liouville in 1838 for solving second-order homogeneous linear equations. About fifty years later, Picard developed a much more general form, which placed the concept on a rigorous mathematical foundation, and used this approach to solve boundary value problems described by second-order ordinary differential equations.
Note that Picard's iteration procedure, when it can be carried out, provides an explicit solution to the initial value problem. The method is rarely practical, mostly for two reasons: computing the next iterate in closed form may be impossible, and, even when it works, successive iterates largely repeat earlier computations. It is therefore useful, at best, for approximations. Moreover, the interval of existence (−h, h) guaranteed for the initial value problem is in most cases much smaller than the actual interval of validity. Some improvements in this direction are known (for instance, S. M. Lozinskii's theorem), but a complete determination of the interval of validity from Picard's iteration remains out of reach. There are also counterexamples in which Picard iteration is guaranteed not to converge, even if the starting function is arbitrarily close to the actual solution.
Here is a version of Picard's iteration procedure in MATLAB, using the Chebfun package:
Expanding the explicit solution into a Maclaurin series, we obtain
d = 8; t = chebfun('t',[0 d]); u0 = 1;
L = chebop(0,d); L.op = @(t,u) diff(u) - sin(u); L.lbc = u0;
uexact = L\sin(t);
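For readers without Chebfun, the reference solution of u′ = sin(u) + sin(t), u(0) = 1 on [0, 8] can be approximated by a classical fourth-order Runge-Kutta sketch in plain Python; the step count below is an arbitrary choice, not from the text.

```python
import math

def rk4(f, t0, u0, t_end, n_steps):
    """Classical fourth-order Runge-Kutta for u' = f(t, u), u(t0) = u0.
    Returns the approximate value of u at t_end."""
    h = (t_end - t0) / n_steps
    t, u = t0, u0
    for _ in range(n_steps):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + h / 2, u + h / 2 * k2)
        k4 = f(t + h, u + h * k3)
        u += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

f = lambda t, u: math.sin(u) + math.sin(t)
u_end = rk4(f, 0.0, 1.0, 8.0, 4000)
```

This is the "march once with a small step and a higher-order formula" strategy discussed below, in contrast to Picard's repeated sweeps across the whole interval.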
This first plot shows iterates k=0,…,4, with the exact solution in red.
u = u0 + 0*t;
f = @(u,t) sin(u) + sin(t);
LW = 'linewidth'; FS = 'fontsize'; IN = 'interpret'; LT = 'latex';
hold off
ss = @(k) ['$k = ' int2str(k) '$'];
for k = 0:4
plot(u,'b',LW,1.6), hold on, ylim([-3 10])
text(1.015*d,u(end),ss(k),IN,LT)
u = u0 + cumsum(f(u,t));
end
plot(uexact,'r',LW,1.6), xlabel('t',FS,10), ylabel('u',FS,10)
title('Picard iterates $k = 0,\dots,4$',FS,12,IN,LT)
A second plot shows k=5,…,9.
hold off
for k = 5:9
plot(u,'b',LW,1.6), hold on, ylim([0 7])
text(1.015*d,u(end),ss(k),IN,LT)
u = u0 + cumsum(f(u,t));
end
plot(uexact,'r',LW,1.6), xlabel('t',FS,10), ylabel('u',FS,10)
title('Picard iterates $k = 5,\dots,9$',FS,12,IN,LT)
A third plot shows k=10,…,14.
hold off
for k = 10:14
plot(u,'b',LW,1.6), hold on, ylim([1 6])
text(1.015*d,u(end),ss(k),IN,LT)
u = u0 + cumsum(f(u,t));
end
plot(uexact,'r',LW,1.6), xlabel('t',FS,10), ylabel('u',FS,10)
title('Picard iterates $k = 10,\dots,14$',FS,12,IN,LT)
These plots show vividly the kind of convergence one can expect from a
Picard iteration: starting at the initial condition, sweeping slowly
across the domain. There is a numerical method based on this idea,
called waveform relaxation, but one can see immediately from the
pictures that it is unlikely to be efficient when carried out over
long time intervals. Instead, standard numerical methods just march
once rather than many times from left to right, but they march with a
small discrete time step and a discrete formula of higher order.
To see the convergence quantitatively, it is interesting to plot the errors of iterates 0,…,4 as functions of t on a log-log plot. The zeroth iterate has accuracy O(t), the first has accuracy O(t²), and so on:
u = u0 + 0*t;
ss = @(k) ['$k = ' int2str(k) '$'];
tt = logspace(-2,log10(8),600); hold off
for k = 0:4
errtt = abs(u(tt)-uexact(tt));
loglog(tt,errtt,'k',LW,.7), hold on
text(8.7,errtt(end),ss(k),IN,LT)
u = u0 + cumsum(f(u,t));
end
xlabel('t',FS,10), ylabel('error',FS,10)
axis([1e-2 8 1e-16 1e3])
title('Errors of iterates $0,\dots,4$',FS,12,IN,LT)
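The O(t^{k+1}) rates can be checked without Chebfun on the toy problem y′ = y, y(0) = 1 (an illustrative substitute for the equation above): there the k-th Picard iterate is the degree-k Taylor partial sum of eᵗ, so its error behaves like t^{k+1}/(k+1)! for small t, and halving t should reduce the error by a factor of about 2^{k+1}.

```python
import math

def picard_iterate_value(t, k):
    """k-th Picard iterate of y' = y, y(0) = 1, evaluated at t.
    For this problem the iterates are the Taylor partial sums of e^t."""
    return sum(t**j / math.factorial(j) for j in range(k + 1))

# Observed convergence order: log2(error at t=0.1 / error at t=0.05)
# should be close to k + 1 for the k-th iterate.
orders = []
for k in range(5):
    e1 = abs(math.exp(0.10) - picard_iterate_value(0.10, k))
    e2 = abs(math.exp(0.05) - picard_iterate_value(0.05, k))
    orders.append(math.log2(e1 / e2))
```

The slopes measured this way mirror the slopes of the straight lines in the log-log error plot above.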
■