# Preface

This tutorial was made solely for educational purposes and was designed
for students taking Applied Math 0330. It is primarily for students who
have very little experience with, or have never used,
*Mathematica* before and would like to learn the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font,
while *Mathematica* output is in normal font. This means that you can
copy and paste all commands into *Mathematica*, change the parameters, and
run them. You, as the user, are free to use the scripts to learn the *Mathematica* program, and you may
distribute this tutorial and refer to it as long as
it is credited appropriately.


# Fixed Point Iteration

Iteration is a fundamental principle in computer science. As the name suggests, it is a process that is repeated until an answer is achieved or the process is stopped. In this section, we study the process of iteration using repeated substitution.

More specifically, given a function *g* defined on the real numbers with real values and given a point *x*_{0} in the domain of *g*, the fixed point iteration is

\[ x_{n+1} = g(x_n ) , \qquad n = 0, 1, 2, \ldots . \]

If this sequence converges to a limit *x*, then one can prove that the obtained *x* is a fixed point of *g*, namely, \( x = g(x) . \)

*Mathematica* provides the dedicated commands **FixedPoint** and **FixedPointList** to achieve this. We also present alternative approaches; for instance, one can use the **NSolve** command, as the following example shows.

Let *f(x)* be the following piecewise function, whose fixed points can be found with the **NSolve** command; the solutions are returned as conditional expressions:

{{x -> ConditionalExpression[3.14159 + 12.5664 C[1], (C[1] \[Element] Integers && C[1] >= 0) || (C[1] \[Element] Integers && C[1] <= -1.)]}}

{{x -> ConditionalExpression[2/(π + 4 π C[1]), (C[1] \[Element] Integers && C[1] >= 0) || (C[1] \[Element] Integers && C[1] <= -1)]}}

One way to force *Mathematica* to use a given precision is to use **Block** and make $MinPrecision equal to $MaxPrecision. So you can obtain your result as:

result1 = Block[{$MinPrecision = 10, $MaxPrecision = 10}, FixedPointList[N[1/2 Sqrt[10 - #^3] &, 10], 1.5`10]]

{Length[result1], result1[[-1]] // InputForm}

*Mathematica* reveals that the iterates \( p_k = e^{-2 p_{k-1}} \) settle down on a fixed point:

Do[Print["k= ", k, " " , "p= " , p[k] = Exp[-2*p[k - 1]]], {k, 1, 10}]

The **FindRoot** command confirms the limit:

FindRoot[p == Exp[-2*p], {p, 0.5}]

Now choose the starting point *x*_{0} = 2. If you repeat the same procedure, you will be surprised to see that the iteration goes into an infinite loop without converging.
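The same repeated-substitution idea can be sketched outside *Mathematica*. Below is a minimal Python version of the iteration \( p_k = e^{-2 p_{k-1}} \) from the loop above (the tolerance and iteration cap are arbitrary choices):

```python
import math

def fixed_point(g, p0, tol=1e-10, max_iter=200):
    """Iterate p = g(p) until successive iterates agree to within tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("iteration did not converge")

# The map p -> exp(-2 p) has |g'| < 1 near its fixed point,
# so plain iteration converges (slowly, with oscillation).
p = fixed_point(lambda t: math.exp(-2 * t), 0.5)
print(p)                  # the fixed point of p = exp(-2 p)
print(math.exp(-2 * p))   # nearly the same value, confirming p = g(p)
```

Because \( |g'| \approx 0.85 \) at the fixed point, plain iteration needs well over a hundred steps to reach the tolerance; the acceleration schemes below address exactly this slowness.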

**Theorem (Banach)**: Assume that the function *g* is continuous on the interval [a,b].

- If the range of the mapping *y = g(x)* satisfies \( y \in [a,b] \) for all \( x \in [a,b] , \) then *g* has a fixed point in [a,b].
- Furthermore, suppose that the derivative *g'(x)* is defined over (a,b) and that a positive constant (called a Lipschitz constant) *L* < 1 exists with \( |g' (x) | \le L < 1 \) for all \( x \in (a,b) ; \) then *g* has a unique fixed point *P* in [a,b]. ■

Rudolf Otto Sigismund Lipschitz (1832--1903) was a German mathematician who made contributions to mathematical analysis (where he gave his name to the Lipschitz continuity condition) and differential geometry, as well as number theory, algebras with involution and classical mechanics.

**Theorem**: Assume that the function *g* and its derivative are continuous on the interval [a,b]. Suppose that \( g(x) \in [a,b] \) for all \( x \in [a,b] , \) and the initial approximation *x*_{0} also belongs to the interval [a,b].

- If \( |g' (x) | \le K < 1 \) for all \( x \in (a,b) , \) then the iteration \( x_{i+1} = g(x_i ) \) will converge to the unique fixed point \( P \in [a,b] . \) In this case, *P* is said to be an attractive fixed point: \( P= g(P) . \)
- If \( |g' (x) | > 1 \) for all \( x \in (a,b) , \) then the iteration \( x_{i+1} = g(x_i ) \) will not converge to *P*. In this case, *P* is said to be a repelling fixed point and the iteration exhibits local divergence. ■
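The attractive/repelling dichotomy is easy to check numerically. The following Python sketch uses two illustrative maps that are not taken from the text: \( g(x) = \cos x , \) with \( |g'| < 1 \) near its fixed point, and \( g(x) = x^2 , \) whose fixed point \( P = 1 \) has \( g'(1) = 2 > 1 \):

```python
import math

def iterate(g, x, n):
    """Apply the fixed point iteration x -> g(x) a total of n times."""
    for _ in range(n):
        x = g(x)
    return x

# Attractive: g(x) = cos x has |g'(P)| = |sin P| < 1 at P ≈ 0.739,
# so the iterates settle on the fixed point.
p = iterate(math.cos, 1.0, 100)
print(abs(p - math.cos(p)))  # tiny residual: p is (numerically) a fixed point

# Repelling: g(x) = x**2 has fixed point P = 1 with g'(1) = 2 > 1;
# any start slightly off 1 is pushed away from P.
print(iterate(lambda t: t * t, 1.001, 12))  # grows far away from 1
print(iterate(lambda t: t * t, 0.999, 12))  # collapses toward the other fixed point 0
```

Note that 0 is also a fixed point of \( x^2 \), and an attractive one (\( g'(0) = 0 \)), which is why the second repelled trajectory is captured by it.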

**Theorem**: Let *P* be a fixed point of *g(x)*, that is, \( P= g(P) . \) Suppose *g(x)* is differentiable on \( \left[ P- \varepsilon , P+\varepsilon \right] \) for some \( \varepsilon > 0 \) and that *g(x)* satisfies the condition \( |g' (x) | \le K < 1 \) for all \( x \in \left[ P - \varepsilon , P+\varepsilon \right] . \) Then the sequence \( x_{i+1} = g(x_i ) , \) with starting point \( x_0 \in \left[ P- \varepsilon , P+\varepsilon \right] , \) converges to *P*. ■

**Remark:** If *g* is invertible, then *P* is a fixed point of *g* if and only if *P* is a fixed point of *g*^{-1}.

** Remark:** The above theorems provide only sufficient conditions. It is possible for a function to violate one or more of the
hypotheses, yet still have a (possibly unique) fixed point. ■

The Banach theorem allows one to find the necessary number of iterations for a given error "epsilon." It can be calculated by the following formula (**a-priori error estimate**)

\[ \left\vert x_n - p \right\vert \le \frac{L^n}{1-L} \left\vert x_1 - x_0 \right\vert , \]

and by the **a-posteriori error estimate** (where *p* is a fixed point):

\[ \left\vert x_n - p \right\vert \le \frac{L}{1-L} \left\vert x_n - x_{n-1} \right\vert . \]
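Using the standard contraction bound \( |x_n - p| \le \frac{L^n}{1-L}\,|x_1 - x_0| \) from the Banach theorem, one can solve for the smallest adequate *n*. Here is a small Python sketch; the values of L, the first two iterates, and ε are made up for illustration:

```python
import math

def apriori_iterations(L, x0, x1, eps):
    """Smallest n with L**n / (1 - L) * |x1 - x0| <= eps (a-priori bound)."""
    assert 0 < L < 1
    n = math.log(eps * (1 - L) / abs(x1 - x0)) / math.log(L)
    return math.ceil(n)

# Made-up numbers: Lipschitz constant L = 0.4, first two iterates 0.5 and 0.7,
# target accuracy eps = 1e-8.
n = apriori_iterations(0.4, 0.5, 0.7, 1e-8)
print(n)

# Verify: the a-priori bound at n iterations is below eps, at n - 1 it is not.
bound = lambda k: 0.4**k / 0.6 * 0.2
print(bound(n) <= 1e-8 < bound(n - 1))  # True
```

The a-posteriori estimate is usually sharper in practice, since it uses the actually observed gap between consecutive iterates rather than the worst-case geometric decay.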

For instance, a function *g(x)* may clearly fail to map the interval \( [0.5, \infty ) \) into itself, with derivative \( |g' (x)| \to + \infty \) for large *x*, and yet have fixed points, say at *x = 0* and *x = 1/2*.

On the other hand, the function \( g(x) = 1 + 0.4 \sin x \) is continuous everywhere and \( 0.6 \le g(x) \le 1.4 \) for any \( x \in \mathbb{R} . \) Its derivative satisfies \( \left\vert g' (x) \right\vert = \left\vert 0.4\,\cos x \right\vert \le 0.4 < 1 . \) Therefore, we can apply the theorem and conclude that the fixed point iteration converges to the unique fixed point of this function.

## II. Aitken's Algorithm

Sometimes we can accelerate or improve the convergence of an algorithm with very little additional effort, simply by using the output of the algorithm to estimate some of the uncomputable quantities. One such acceleration was proposed by A.C. Aitken. Alexander Craig "Alec" Aitken was born in 1895 in Dunedin, Otago, New Zealand and died in 1967 in Edinburgh, Scotland, where he had lived since 1925. Aitken had an incredible memory (he knew π to 2000 places) and could instantly multiply, divide, and take roots of large numbers. He played the violin and composed music to a very high standard.

Suppose that we have an iterative process that generates a sequence of numbers \( \{ x_n \}_{n\ge 0} \) that converges to α. We generate a new sequence \( \{ q_n \}_{n\ge 0} \) according to

\[ q_n = x_n - \frac{\left( x_{n+1} - x_n \right)^2}{x_{n+2} - 2\, x_{n+1} + x_n} = x_n - \frac{\left( \Delta x_n \right)^2}{\Delta^2 x_n} , \qquad n = 0, 1, 2, \ldots . \]

**Theorem (Aitken's Acceleration):** Assume that the sequence \( \{ p_n \}_{n\ge 0} \) converges linearly to the limit *p* and that \( p_n \ne p \) for all \( n \ge 0. \) If there exists a real number *A* with \( |A| < 1 \) such that

\[ \lim_{n \to \infty} \, \frac{p_{n+1} - p}{p_n - p} = A , \]

then the sequence \( \{ q_n \}_{n\ge 0} \) converges to *p* faster than \( \{ p_n \}_{n\ge 0} \) in the sense that \( \lim_{n \to \infty} \, \left\vert \frac{p - q_n}{p- p_n} \right\vert =0 . \) ■

This algorithm was proposed by one of New Zealand's greatest mathematicians, Alexander Craig "Alec" Aitken (1895--1967). When it is applied to determine a fixed point in the equation \( x=g(x) , \) it consists of the following stages:

- select *x*_{0};
- calculate
\[ x_1 = g(x_0 ) , \qquad x_2 = g(x_1 ) ; \]
- calculate
\[ x_3 = x_2 + \frac{\gamma_2}{1- \gamma_2} \left( x_2 - x_1 \right) , \qquad \mbox{where} \quad \gamma_2 = \frac{x_2 - x_1}{x_1 - x_0} ; \]
- calculate
\[ x_4 = g(x_3 ) , \qquad x_5 = g(x_4 ) ; \]
- calculate *x*_{6} as the extrapolate of \( \left\{ x_3 , x_4 , x_5 \right\} . \) Continue this procedure, ad infinitum. ■

We assume that *g* is continuously differentiable, so according to the Mean Value Theorem there exists
ξ_{n-1} between α (which is the root of \( \alpha = g(\alpha ) \) ) and
*x*_{n-1} such that

\[ x_n - \alpha = g( x_{n-1} ) - g( \alpha ) = g' (\xi_{n-1} ) \left( x_{n-1} - \alpha \right) . \]

Since we are assuming that \( x_n \,\to\, \alpha , \) we also know that \( g' (\xi_n ) \, \to \,g' (\alpha ) ; \) thus, we can denote \( \gamma_n \approx g' (\xi_{n-1} ) \approx g' (\alpha ) . \) Using this notation, we get

\[ \alpha \approx \frac{x_n - \gamma_n x_{n-1}}{1 - \gamma_n} = x_n + \frac{\gamma_n}{1- \gamma_n} \left( x_n - x_{n-1} \right) . \]

Starting with *x*_{0} = 0, we compute the first ordinary iterates *x*_{1} and *x*_{2}. Once *x*_{2} is available, we can begin to compute the accelerated values *q*_{n}:

```
Aitken Extrapolation, Version 1

x1 = g(x0)
x2 = g(x1)
for k=1 to n do
   if (dabs(x1 - x0) > 1.d-20) then
      gamma = (x2 - x1)/(x1 - x0)
   else
      gamma = 0.0d0
   endif
   q = x2 + gamma*(x2 - x1)/(1 - gamma)
   if (dabs(q - x2) < error) then
      alpha = q
      stop
   endif
   x = g(x2)
   x0 = x1
   x1 = x2
   x2 = x
enddo
```

Since division by zero---or by a very small number---is possible in the algorithm's computation of γ, we put in a
conditional test: only if the denominator is greater than 10^{-20} do we carry through the division.
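The pseudocode above translates almost line for line into Python. This sketch applies it to the illustrative map \( g(x) = \cos x \) (not an example from the text), whose ordinary iteration converges only linearly:

```python
import math

def aitken(g, x0, tol=1e-12, max_iter=100):
    """Aitken extrapolation of the fixed point iteration x = g(x)."""
    x1 = g(x0)
    x2 = g(x1)
    for _ in range(max_iter):
        # Guard against division by a tiny denominator, as in the pseudocode.
        if abs(x1 - x0) > 1e-20:
            gamma = (x2 - x1) / (x1 - x0)
        else:
            gamma = 0.0
        q = x2 + gamma * (x2 - x1) / (1 - gamma)
        if abs(q - x2) < tol:
            return q
        # Slide the three-point window forward by one ordinary iterate.
        x0, x1, x2 = x1, x2, g(x2)
    raise RuntimeError("iteration did not converge")

alpha = aitken(math.cos, 1.0)
print(alpha)   # ~0.7390851, the root of x = cos x
```

The extrapolated values *q* home in on the fixed point well before the ordinary iterates *x*_{k} do, which is exactly the acceleration the theorem promises.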

## III. Steffensen's Acceleration

Johan Frederik Steffensen (1873--1961) was a Danish mathematician, statistician, and actuary who did research in the fields of calculus of finite differences and interpolation. He was professor of actuarial science at the University of Copenhagen from 1923 to 1943. Steffensen's inequality and Steffensen's iterative numerical method are named after him.

When Aitken's process is combined with the fixed point iteration in Newton's method, the result is called Steffensen's acceleration. Starting with *p*_{0}, two steps of Newton's method are used to compute \( p_1 = p_0 - \frac{f(p_0 )}{f' (p_0 )} \) and \( p_2 = p_1 - \frac{f(p_1 )}{f' (p_1 )} , \) then Aitken's process is used to compute \( q_0 = p_0 - \frac{\left( \Delta p_0 \right)^2}{\Delta^2 p_0} . \) To continue the iteration, set \( p_0 = q_0 \) and repeat the previous steps. This means that we have a fixed-point iteration; if at some stage \( \Delta^2 p_n = 0 \) (the denominator of Aitken's formula vanishes), we stop and take *p*_{n+2} as our approximate answer.

Steffensen's acceleration is used to quickly find a solution of the fixed-point equation *x = g(x)* given an initial approximation *p*_{0}. It is assumed that both *g(x)* and its derivative are continuous, \( | g' (x) | < 1, \) and that ordinary fixed-point iteration converges slowly (linearly) to *p*.

Now we present the pseudocode of the algorithm that provides faster convergence. Note that we check again for division by small numbers before computing γ.

```
Steffensen's acceleration

for k=1 to n do
   x1 = g(x0)
   x2 = g(x1)
   if (dabs(x1 - x0) > 1.d-20) then
      gamma = (x2 - x1)/(x1 - x0)
   else
      gamma = 0.0d0
   endif
   x0 = x2 + gamma*(x2 - x1)/(1 - gamma)
   if (dabs(x0 - x2) < error) then
      alpha = x0
      stop
   endif
enddo
```
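A Python rendering of this pseudocode, tried on the earlier map \( g(x) = e^{-2x} \) (the tolerance is an arbitrary choice):

```python
import math

def steffensen(g, x0, tol=1e-12, max_iter=50):
    """Steffensen's acceleration: two ordinary iterates, then an Aitken update."""
    for _ in range(max_iter):
        x1 = g(x0)
        x2 = g(x1)
        # Same small-denominator guard as in the pseudocode.
        if abs(x1 - x0) > 1e-20:
            gamma = (x2 - x1) / (x1 - x0)
        else:
            gamma = 0.0
        x_new = x2 + gamma * (x2 - x1) / (1 - gamma)
        if abs(x_new - x2) < tol:
            return x_new
        x0 = x_new   # restart the basic iteration from the accelerated value
    raise RuntimeError("iteration did not converge")

p = steffensen(lambda t: math.exp(-2 * t), 0.5)
print(p)   # root of p = exp(-2 p), about 0.42630
```

Unlike Aitken's version, which merely post-processes the ordinary iterates, Steffensen restarts the iteration from each accelerated value; this is what upgrades the linear convergence of plain iteration to (locally) quadratic convergence.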

## IV. Wegstein's Method

To solve the fixed point problem \( g(x) =x , \) the following algorithm, presented in

J.H. Wegstein, Accelerating convergence of iterative processes, Communications of the ACM, **1**, 9--13, 1958,

can be used:

\[ x^{k+1} = q\, x^k + \left( 1 - q \right) g \left( x^k \right) , \qquad k = 1, 2, \ldots , \]

where the behavior of the iteration depends on the value of the parameter *q*.

| Value of *q* | Behavior |
| --- | --- |
| *q* < 0 | Convergence is monotone |
| 0 < *q* < 0.5 | Convergence is oscillatory |
| 0.5 < *q* < 1 | Convergence |
| *q* > 1 | Divergence |

The iteration requires two starting values *x*^{0} and *x*^{1}, and a poor start may cause the process to fail, converge to another root, or, at best, add an unnecessary number of iterations.
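As a rough sketch, the relaxed update \( x^{k+1} = q\,x^k + (1-q)\,g(x^k) \) with a fixed *q* can be coded in a few lines (Wegstein's full scheme re-estimates *q* from successive iterates; holding it fixed is a simplification made here). The map \( g(x) = e^{-2x} \) and the choice *q* = 0.3, from the oscillatory range of the table, are illustrative:

```python
import math

def wegstein_fixed_q(g, x0, q, tol=1e-12, max_iter=200):
    """Relaxed fixed point iteration x_{k+1} = q*x_k + (1 - q)*g(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = q * x + (1 - q) * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Illustrative run; any fixed point of the relaxed map (with q != 1)
# is also a fixed point of g itself.
p = wegstein_fixed_q(lambda t: math.exp(-2 * t), 0.5, 0.3)
print(abs(p - math.exp(-2 * p)))  # tiny residual: p solves x = g(x)
```

Blending the previous iterate into the update shrinks the effective derivative of the iteration map, which is why a well-chosen *q* converges in far fewer steps than plain substitution on the same function.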
