Fixed Point Iteration


This tutorial serves as an introduction to the computer algebra system Maple, created by MapleSoft ©. It is made solely for the purpose of education. This tutorial assumes that you have a working knowledge of basic Maple functionality, including how to define variables and carry out algebraic manipulations. If you are completely new to Maple, then please go to the introductory tutorial for APMA0330.

Usually, the same algorithm can be implemented in many different ways to accomplish the same task. As a result, this tutorial may introduce several different commands (or sets of commands) to do the same thing. It is always good to be aware of these alternate pathways; however, you are welcome to use whichever approach you like best.

Finally, the commands in this tutorial are all written in red text (as Maple input), while Maple output is in blue and presented in 2D output style. You can copy and paste all commands into Maple, change the parameters, and run them. You, as the user, are free to adapt the mw files and scripts to your needs, and you have the right to distribute this tutorial and refer to it as long as it is credited appropriately. The tutorial accompanies the textbook Applied Differential Equations. The Primary Course by Vladimir Dobrushkin, CRC Press, 2015; http://www.crcpress.com/product/isbn/9781439851043.


Fixed Point Iterations


Iteration is a fundamental principle in computer science. As the name suggests, it is a process that is repeated until an answer is achieved or stopped. In this section, we study the process of iteration using repeated substitution.

More specifically, given a function g defined on the real numbers with real values and given a point x0 in the domain of g, the fixed point iteration is

\[ x_{i+1} = g(x_i ) \quad i =0, 1, 2, \ldots , \]
which gives rise to the sequence \( \left\{ x_i \right\}_{i\ge 0} . \) If this sequence converges to a point x, then one can prove that the obtained x is a fixed point of g, namely,
\[ x = g(x ) . \]
The iteration process is expressed by restart:
x[n+1]:=g(x[n]); # n = 0,1,2,...

with a selected initial value x[0] chosen in a neighbourhood of the fixed point x.
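For illustration, here is a minimal Maple sketch of such an iteration loop; the procedure name FixedPointIteration, the tolerance tol, and the iteration cap N are assumptions made for this sketch only.

FixedPointIteration := proc(g, x0, tol, N)
  local x, xnew, k;
  x := evalf(x0);
  for k from 1 to N do
    xnew := evalf(g(x));           # one substitution step x -> g(x)
    if abs(xnew - x) < tol then    # stop once successive iterates agree
      return xnew;
    fi;
    x := xnew;
  od;
  return x;                        # last iterate if the tolerance was not met
end proc:

For instance, FixedPointIteration(t -> cos(t), 1.0, 1.0e-8, 100); returns an approximation of the fixed point of the cosine function.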

Example. Consider the convergent iteration

\[ p_0 = 0.5 \qquad \mbox{and} \qquad p_{k+1} = e^{-2p_k} \quad \mbox{for} \quad k=0,1,2,\ldots . \]
The first 10 terms are obtained by the calculations
\begin{align*} p_1 &= e^{-1} \approx 0.367879 , \\ p_2 &= e^{-2p_1} \approx 0.479142 , \\ p_3 &= e^{-2p_2} \approx 0.383551 , \\ \vdots & \qquad \vdots \\ p_9 &= e^{-2p_8} \approx 0.409676 , \\ p_{10} &= e^{-2p_9} \approx 0.440717 . \end{align*}
The sequence is converging, and Maple reveals that
\[ \lim_{k\to \infty} p_k = 0.426302751 \ldots . \]
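For illustration, these iterates can be reproduced in Maple with a short loop (the sequence name p and the use of fsolve to find the limit are just convenient choices for this sketch):

p[0] := 0.5:
for k from 0 to 9 do
  p[k+1] := evalf(exp(-2*p[k]));   # next term of the sequence
od;
fsolve(x = exp(-2*x), x);          # the limit solves x = exp(-2x)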

Example. Consider \( g(x) = 10/ (x^3 -1) \) and the fixed point iterative scheme \( x_{i+1} = 10/ (x^3_i -1) ,\) with the initial guess x0 = 2.

If we repeat the same procedure, we will be surprised to find that the iteration does not converge: the iterates keep oscillating indefinitely instead of approaching a fixed point.
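A quick illustrative Maple experiment (ten iterations is an arbitrary choice) shows this behaviour:

g := t -> 10/(t^3 - 1):
x[0] := 2.0:
for i from 0 to 9 do
  x[i+1] := evalf(g(x[i]));   # the iterates soon oscillate between two values instead of converging
od;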

Theorem (Banach): Assume that the function g is continuous on the interval [a,b], maps [a,b] into itself, and satisfies the Lipschitz condition \( |g(x) - g(y)| \le L\,|x-y| \) for all \( x, y \in [a,b] \) with some constant \( L < 1 . \) Then g has a unique fixed point P in [a,b], and the iteration \( x_{i+1} = g(x_i ) \) converges to P for every starting point \( x_0 \in [a,b] . \) ■

Stefan Banach (born 1892 in Kraków, Austria-Hungary; died 31 August 1945 in Lviv, Ukrainian SSR, Soviet Union) was a Polish mathematician who is generally considered one of the world's most important and influential 20th-century mathematicians. He was one of the founders of modern functional analysis and an original member of the Lwów School of Mathematics. His major work was the 1932 book, Théorie des opérations linéaires (Theory of Linear Operations), the first monograph on the general theory of functional analysis.
Rudolf Otto Sigismund Lipschitz (1832--1903) was a German mathematician who made contributions to mathematical analysis (where he gave his name to the Lipschitz continuity condition) and differential geometry, as well as number theory, algebras with involution and classical mechanics.

Theorem: Assume that the function g and its derivative are continuous on the interval [a,b]. Suppose that \( g(x) \in [a,b] \) for all \( x \in [a,b] , \) that \( |g' (x)| \le L < 1 \) for all \( x \in [a,b] , \) and that the initial approximation x0 also belongs to the interval [a,b]. Then the iteration \( x_{i+1} = g(x_i ) \) converges to the unique fixed point P of g in [a,b]. ■

In practice, it is often difficult to check the condition \( g([a,b]) \subset [a,b] \) given in the previous theorem. We now present a variant of it.

Theorem: Let P be a fixed point of g(x), that is, \( P= g(P) . \) Suppose g(x) is differentiable on \( \left[ P- \varepsilon , P+\varepsilon \right] \quad\mbox{for some} \quad \varepsilon > 0 \) and g(x) satisfies the condition \( |g' (x) | \le L < 1 \) for all \( x \in \left[ P - \varepsilon , P+\varepsilon \right] . \) Then the sequence \( x_{i+1} = g(x_i ) , \) with starting point \( x_0 \in \left[ P- \varepsilon , P+\varepsilon \right] , \) converges to P. ■

Remark: If g is invertible then P is a fixed point of g if and only if P is a fixed point of \( g^{-1} . \)

Remark: The above theorems provide only sufficient conditions. It is possible for a function to violate one or more of the hypotheses, yet still have a (possibly unique) fixed point. ■

The Banach theorem allows one to find the necessary number of iterations for a given error "epsilon." The a-priori error estimate

\[ |x_k - p | \le \frac{L^k}{1-L} \left\vert x_1 - x_0 \right\vert \]
leads to the formula
\[ \frac{1}{\ln L} \, \ln \left( \frac{(1-L)\,\varepsilon}{|x_1 - x_0 |} \right) \le \mbox{iterations}(\varepsilon ), \]
and there is also the a-posteriori error estimate (where p is the fixed point):
\[ |x_k - p |\le \frac{L}{1-L} \left\vert x_k - x_{k-1} \right\vert . \]

Example. The function \( g(x) = 2x\left( 1-x \right) \) is continuous on the whole real line \( (-\infty , \infty ) , \) yet it violates the remaining hypotheses of the theorem: it clearly does not map the interval \( [0.5, \infty ) \) into itself, and its derivative grows without bound, \( |g' (x)| \to + \infty , \) for large |x|. Nevertheless, g(x) has fixed points at x = 0 and x = 1/2.

Example. Consider the equation

\[ x = 1 + 0.4\, \sin x , \qquad \mbox{with} \quad g(x) = 1 + 0.4\, \sin x . \]
Note that g(x) is a continuous function everywhere and \( 0.6 \le g(x) \le 1.4 \) for any \( x \in \mathbb{R} . \) Its derivative satisfies \( \left\vert g' (x) \right\vert = \left\vert 0.4\,\cos x \right\vert \le 0.4 < 1 . \) Therefore, we can apply the theorem and conclude that the fixed point iteration
\[ x_{k+1} = 1 + 0.4\, \sin x_k , \qquad k=0,1,2,\ldots \]
will converge for an arbitrary initial point from the interval [0.6 , 1.4]. Using the Maple command "fsolve" we immediately arrive at the solution: Digits:=4;
p:=fsolve(1-x+0.4*sin(x)=0);
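For comparison, a direct fixed-point loop converges to the same value that fsolve returns; in this sketch the starting value 1.0 and the fifteen steps are arbitrary choices.

g := t -> 1 + 0.4*sin(t):
x[0] := 1.0:
for k from 0 to 14 do
  x[k+1] := evalf(g(x[k]));   # the iterates approach the fixed point of g
od;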

Example. Consider a similar equation

\[ x = 1 + 2\, \sin x , \qquad \mbox{with} \quad g(x) = 1 + 2\, \sin x . \]
Since \( -1 \le g(x) \le 3 , \) we are looking for a fixed point from this interval, [-1,3]. Since for its derivative we have
\[ g'(x) = 2\, \cos x \qquad \Longrightarrow \qquad \max_{x\in [-1,3]} \,\left\vert g' (x) \right\vert =2 > 1, \]
we do not expect convergence of the fixed point iteration \( x_{k+1} = 1 + 2\, \sin x_k . \)
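A brief illustrative check in Maple (with an arbitrary starting value) confirms that the iterates wander instead of settling:

g := t -> 1 + 2*sin(t):
x[0] := 1.0:
for k from 0 to 9 do
  x[k+1] := evalf(g(x[k]));   # successive values keep jumping around inside [-1, 3]
od;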

Example. Our next example is concerned with finding the solution of \( x = \frac{1}{2} \left( \sin x + \cos x \right) . \) Using the Maple command "fsolve" we immediately arrive at the solution: p:=fsolve(sin(x) + cos(x) =2*x);

\[ p := 0.7048120020 \]
One can read the fixed point from the following Figure: alias(H=Heaviside,th=thickness,sc=scaling,co=color):
p[1]:=plot({x,(sin(x)+cos(x))/2},x=0..1, sc=constrained,th=3,co=black):
p[2]:=plot({1,H(x-1),-H(x-1.001),0.7048*H(x-0.7048), 0.7048*H(x-0.7058)},x=0..1.001,co=black, title="Fixed Point at x = 0.7048"):
p[3]:=plot([[0.7048,0.7048]],style=point,symbol=circle, symbolsize=30):
plots[display](seq(p[k],k=1..3));

The fixed point iteration is given as follows: g(x):=(sin(x)+cos(x))/2;
x[0]:=0; x[1]:=evalf(subs(x=0,g(x)),2);

\[ g(x) := \frac{1}{2}\,\sin (x) + \frac{1}{2}\,\cos (x) \]
\[ x_0 := 0 \]
\[ x_1 := 0.50 \]
for i from 2 to 10 do x[i]:=evalf(subs(x=x[i-1],g(x))) od;

\begin{align*} x_2 &:= 0.6785040503 \\ x_3 &:= 0.7030708012 \\ x_4 &:= 0.7047118221 \\ x_5 &:= 0.7048062961 \\ x_6 &:= 0.7048116773 \\ x_7 &:= 0.7048119834 \\ x_8 &:= 0.7048120009 \\ x_9 &:= 0.7048120019 \\ x_{10} &:= 0.7048120020 \end{align*}
We see that the 10th iterate x[10] agrees with the Maple solution obtained above.

The following steps are concerned with both the a-priori and the a-posteriori error estimate. At first we determine the Lipschitz constant from the absolute derivative of the function g(x): AbsoluteDerivative:=abs(Diff(g(xi),xi))=abs(diff(g(x),x));

\[ \mbox{AbsoluteDerivative}:= \left\vert \frac{{\text d}}{{\text d}\xi} \, g(\xi ) \right\vert = \left\vert \frac{1}{2}\, \cos (x) - \frac{1}{2}\,\sin (x) \right\vert \]
p[1]:=plot(abs(diff(g(x),x)),x=0..1,sc=constrained, th=3,co=black):
p[2]:=plot({H(x),H(x-1),-H(x-1.001)},x=0..1.001,co=black, title="Absolute Derivative of g(x)"):
plots[display](p[1],p[2]);

The greatest value of the absolute derivative in the neighbourhood of the expected fixed point, here in the interval [0, 1], may be taken as the Lipschitz constant: L:=evalf(subs(x=0,abs(diff(g(x),x))),2);

\[ L := 0.50 \]
Assuming " i " iterations, then the a-priori error estimate is given by: for i from 1 by 3 to 10 do
APrioriEstimate[i]:=evalf((L^i/(1-L))*abs(x[1]-x[0]),5) od;

\begin{align*} \mbox{APrioriEstimate}_1 &:= 0.50000 \\ \mbox{APrioriEstimate}_4 &:= 0.062500 \\ \mbox{APrioriEstimate}_7 &:= 0.0078125 \\ \mbox{APrioriEstimate}_{10} &:= 0.00097655 \end{align*}
The necessary number of iterations for a given error " epsilon " can be calculated according to the formula
\[ \frac{1}{\ln L} \, \ln \left( \frac{(1-L)\,\varepsilon}{|x_1 - x_0 |} \right) \le \mbox{iterations}(\varepsilon ), \]
iterations[epsilon]:= evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),3);

\[ \mbox{iterations}_{\varepsilon} = -1.44\,\ln \left( 1.00\,\varepsilon \right) \]
for i from 1 to 5 do iterations[epsilon=10^(-i)]:= evalf(ln((1-L)*10^(-i)/abs(x[1]-x[0]))/ln(L),2) od;

\begin{align*} \mbox{iterations}_{\varepsilon = 1/10} &:= 3.3 \\ \mbox{iterations}_{\varepsilon = 1/100} &:= 6.7 \\ \mbox{iterations}_{\varepsilon = 1/1000} &:= 10. \\ \mbox{iterations}_{\varepsilon = 1/10000} &:= 13. \\ \mbox{iterations}_{\varepsilon = 1/100000} &:= 17. \end{align*}
The a-posteriori error estimate after the 10th iteration is given by: PosterioriEstimate[k]:=(L/(1-L))*abs(x[k]-x[k-1]);

\[ \mbox{PosterioriEstimate} = \frac{L\, | x_k - x_{k-1} |}{1-L} \]
x[k]:=x[10]; x[k-1]:=x[9];

\[ \begin{split} x_k &:= 0.7048120020 \\ x_{k-1} &:= 0.7048120019 \end{split} \]
PosterioriEstimate[10]:= subs({k=10,L=L},PosterioriEstimate[k]);

\[ \mbox{PosterioriEstimate}_{10} := 0.1000000000 \,10^{-9} \]
The number of iterations predicted by the a-priori formula for the above a-posteriori error is given by: APrioriIterations[evalf(0.1*10^(-9),2)]:= evalf(subs(epsilon=0.1*10^(-9),-1.44*ln(1.00*epsilon)),3);

\[ \mbox{APrioriIterations}_{0.10\,10^{-9}} := 33.1 \]

 

II. Aitken's Algorithm


This algorithm was proposed by one of New Zealand's greatest mathematicians, Alexander Craig "Alec" Aitken (1895--1967). It is based on the following result:

Theorem (Aitken's Acceleration): Assume that the sequence \( \{ p_n \}_{n\ge 0} \) converges linearly to the limit p  and that \( p_n \ne p \) for all \( n \ge 0. \) If there exists a real number A < 1 such that  

\[ \lim_{n \to \infty} \, \frac{p- p_{n+1}}{p- p_n} =A, \]
then the sequence \( \{ q_n \}_{n\ge 0} \) defined by  
\[ q_n = p_n - \frac{\left( \Delta p_n \right)^2}{\Delta^2 p_n} = p_n - \frac{\left( p_{n+1} - p_n \right)^2}{p_{n+2} - 2p_{n+1} + p_n} \]
converges to p faster than \( \{ p_n \}_{n\ge 0} \) in the sense that \( \lim_{n \to \infty} \, \left\vert \frac{p - q_n}{p- p_n} \right\vert =0 . \) ■ 
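As an illustration, Aitken's formula can be applied in Maple to the linearly convergent sequence \( p_{k+1} = e^{-2p_k} \) from the first example; the names p and q below simply match the symbols used in the theorem.

p[0] := 0.5:
for n from 0 to 11 do
  p[n+1] := evalf(exp(-2*p[n]));   # linearly convergent sequence
od:
for n from 0 to 9 do
  q[n] := p[n] - (p[n+1] - p[n])^2/(p[n+2] - 2*p[n+1] + p[n]);   # Aitken's extrapolation
od;

The accelerated values q[n] approach the limit 0.426302751... noticeably faster than the original iterates p[n].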

 

III. Steffensen's Acceleration


Johan Frederik Steffensen (1873--1961) was a Danish mathematician, statistician, and actuary who did research in the fields of calculus of finite differences and interpolation. He was professor of actuarial science at the University of Copenhagen from 1923 to 1943. Steffensen's inequality and Steffensen's iterative numerical method are named after him.

When Aitken's process is combined with a fixed point iteration, such as the one underlying Newton's method, the result is called Steffensen's acceleration.  Starting with p0, two steps of Newton's method are used to compute \( p_1 = p_0 - \frac{f(p_0 )}{f' (p_0 )} \) and \( p_2 = p_1 - \frac{f(p_1 )}{f' (p_1 )} , \) then Aitken's process is used to compute \( q_0 = p_0 -  \frac{\left( \Delta p_0 \right)^2}{\Delta^2 p_0} . \)  To continue the iteration, set \( p_0 = q_0 \) and repeat the previous steps. In terms of the iteration function g (for Newton's method, \( g(x) = x - f(x)/f' (x) \)), this means that we have a fixed-point iteration:

\[ p_0 , \qquad p_1 = g(p_0 ), \qquad p_2 = g(p_1 ). \]
Then we compute
\[ q_0 = p_0 -  \frac{\left( \Delta p_0 \right)^2}{\Delta^2 p_0} = p_0 - \frac{\left( p_1 - p_0 \right)^2}{p_2 - 2p_1 +p_0} . \]
At this point, we "restart" the fixed point iteration from \( q_0 , \) i.e.,
\[ p_3 = q_0 , \qquad p_4 = g(p_3 ), \qquad p_5 = g(p_4 ), \]
and compute
\[ q_3 = p_3 -  \frac{\left( \Delta p_3 \right)^2}{\Delta^2 p_3} = p_3 - \frac{\left( p_4 - p_3 \right)^2}{p_5 - 2p_4 +p_3} . \]
If at some point \( \Delta^2 p_n =0 \) (which appears in the denominator), then we stop and select the current value of pn+2 as our approximate answer.

Steffensen's acceleration is used to quickly find a solution of the fixed-point equation x = g(x) given an initial approximation p0. It is assumed that both g(x) and its derivative are continuous, \( | g' (x) | < 1, \) and that ordinary fixed-point iteration converges slowly (linearly) to p.
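A compact Maple sketch of this procedure might look as follows; the procedure name Steffensen, the tolerance tol, and the iteration cap N are illustrative assumptions.

Steffensen := proc(g, p0, tol, N)
  local p, p1, p2, q, k, d2;
  p := evalf(p0);
  for k from 1 to N do
    p1 := evalf(g(p));             # two ordinary fixed-point steps
    p2 := evalf(g(p1));
    d2 := p2 - 2*p1 + p;           # second difference Delta^2 p
    if d2 = 0 then return p2 fi;   # stop when the denominator vanishes
    q := p - (p1 - p)^2/d2;        # Aitken's extrapolation
    if abs(q - p) < tol then return q fi;
    p := q;                        # restart the fixed-point iteration from q
  od;
  return p;
end proc:

For example, Steffensen(t -> (sin(t)+cos(t))/2, 0.0, 1.0e-10, 20); recovers the fixed point found in the example above.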
