
Preface


This section discusses some practical algorithms for finding a point p that satisfies an equation of the form p = g(p) for some continuous function g(x). Existence of a solution to this equation is guaranteed by a fixed point theorem, which has numerous generalizations. Such theorems have many applications in mathematics and numerical analysis; for instance, Picard's iteration and the Adomian decomposition method are based on a fixed point theorem.

Fixed Point Iteration


Iteration is a fundamental principle in computer science. As the name suggests, it is a process that is repeated until an answer is found or a stopping criterion is met. In this section, we study the process of iteration using repeated substitution.
A fixed point of a function g(x) is a real number p such that p = g(p).

More specifically, given a function g defined on the real numbers with real values and given a point x0 in the domain of g, the fixed point (also called Picard's) iteration is

\[ x_{i+1} = g(x_i ) \quad i =0, 1, 2, \ldots , \]
which gives rise to the sequence \( \left\{ x_i \right\}_{i\ge 0} . \) If this sequence converges to a point x and g is continuous, then one can prove that the limit x is a fixed point of g, namely,
\[ x = g(x ) . \]

One of the most important features of iterative methods is their convergence rate, defined by the order of convergence. Let { xn } be a sequence converging to α and let εn = xn - α.

If there exists a real number p and a positive constant Cp such that
\[ \lim_{n\to \infty} \,\frac{\left\vert \varepsilon_{n+1} \right\vert}{\left\vert \varepsilon_{n} \right\vert^p} = C_p , \]
then p is called the order of convergence of the sequence { xn } to the root α of the fixed point equation x = g(x), and Cp is the asymptotic error constant.
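When the limit α is known, the order can be estimated numerically from three consecutive errors, since \( \vert\varepsilon_{n+1}\vert \approx C_p \left\vert \varepsilon_n \right\vert^p \) implies \( p \approx \ln \left( \vert\varepsilon_{n+1}\vert / \vert\varepsilon_{n}\vert \right) / \ln \left( \vert\varepsilon_{n}\vert / \vert\varepsilon_{n-1}\vert \right) . \) Here is a minimal sketch in Mathematica (the helper name orderEstimate is our own):
(* estimate the order of convergence from a list of iterates xs with known limit alpha *)
orderEstimate[xs_List, alpha_] := Module[{err = Abs[xs - alpha]},
  Table[Log[err[[n + 1]]/err[[n]]]/Log[err[[n]]/err[[n - 1]]], {n, 2, Length[err] - 1}]]
For instance, applying it to the Newton iterates NestList[(# + 2/#)/2 &, 1.5, 3] with alpha = Sqrt[2.] produces estimates close to 2, the expected quadratic order of Newton's method.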
When one wants to apply a function repeatedly until the result stops changing, Mathematica provides the dedicated commands FixedPoint and FixedPointList. We present an alternative approach below.
fixedp[g_, x0_, n_] := Module[{x1},
  x1 = g[x0]; Print[x1];                  (* first iterate *)
  Do[x1 = g[x1]; Print[x1], {k, 1, n}]]   (* n further iterates *)
One can also use the NSolve command, as the following example shows.

Example 1: Suppose we want to find all the fixed points of \( f(x)=x\, \sin (1/x) . \) Since it has infinitely many fixed points, in theory there would be infinitely many outputs. Let f(x) be the following piecewise function:

f[x_] := Piecewise[{{x Sin [1/x], -1 <= x < 0 || 0 < x <= 1}}, 0]
Then we apply NSolve command:
NSolve[x == f[x], x]
{{x -> 0}, {x -> ConditionalExpression[2./(
3.14159 + 12.5664 C[1]), (C[1] \[Element] Integers &&
C[1] >= 0) || (C[1] \[Element] Integers && C[1] <= -1.)]}}
As we see, NSolve provides several answers. You can even get a nice analytical answer out of Solve:
Solve[x == f[x], x]
{{x -> 0}, {x -> ConditionalExpression[
2/(π + 4 π C[1]), (C[1] \[Element] Integers &&
C[1] >= 0) || (C[1] \[Element] Integers && C[1] <= -1)]}}
   ■

Example 2: Suppose we need to solve the polynomial equation \( x^3 + 4\,x^2 -10 =0 , \) which we rewrite as \( 4\, x^2 = 10 - x^3 . \) Taking the square root, we reduce our problem to the fixed point problem \( x = \frac{1}{2}\, \sqrt{10 - x^3} . \) One way to force Mathematica to use a given precision is to use Block and set $MinPrecision equal to $MaxPrecision. So you can obtain your result as:

Block[{$MinPrecision = 10, $MaxPrecision = 10},
 FixedPointList[N[1/2 Sqrt[10 - #^3], 10] &, 1.5`10]]
Mathematical operations on numbers of a given precision cannot guarantee that the output maintains the precision of the input. In general, precision will decrease; the amount of decrease depends on the operations, and some operations cause precision to drop significantly. If you want a specific precision in the final result, then the working precision should be higher than the desired output precision. Frequently, the working precision is set to twice the desired output precision to allow for a significant loss of precision during complex calculations. Sometimes machine precision numbers will perform well.
result1 = FixedPointList[1/2 Sqrt[10 - #^3] &, 1.5];
{Length[result1], result1[[-1]] // InputForm}
{53, InputForm[1.3652300134140969`]}
   ■

Example 3: Consider the convergent iteration

\[ p_0 = 0.5 \qquad \mbox{and} \qquad p_{k+1} = e^{-2p_k} \quad \mbox{for} \quad k=0,1,2,\ldots . \]
The first 10 terms are obtained by the calculations
\begin{align*} p_1 &= e^{-1} \approx 0.367879 , \\ p_2 &= e^{-2*p_1} \approx 0.479142 , \\ p_3 &= e^{-2*p_2} \approx 0.383551 , \\ \vdots & \qquad \vdots \\ p_9 &= e^{-2*p_8} \approx 0.409676 , \\ p_{10} &= e^{-2*p_9} \approx 0.440717 . \\ \end{align*}
The sequence is converging, and Mathematica reveals that
\[ \lim_{k\to \infty} p_k = 0.426302751 \ldots . \]
p[0] = 0.5
Do[Print["k= ", k, " " , "p= " , p[k] = Exp[-2*p[k - 1]]], {k, 1, 10}]
FindRoot[p == Exp[-2*p], {p, 0.5}]
   ■

Example 4: Consider \( g(x) = 10/ (x^3 -1) \) and the fixed point iterative scheme \( x_{i+1} = 10/ (x^3_i -1) ,\) with the initial guess x0 = 2.

If you repeat the same procedure here, you will be surprised to find that the iteration falls into an endless loop without ever converging.
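A minimal experiment with the built-in NestList illustrates the behavior (the number of steps, 12, is our choice):
g[x_] := 10/(x^3 - 1);
NestList[g, 2.0, 12]
After a few steps the iterates settle into a two-cycle, alternating between values near -10.0 and -0.00999, so the sequence approaches no fixed point.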

Theorem (Banach): Assume that the function g is continuous on the interval [a,b].
  • If the range of the mapping y = g(x) satisfies \( y \in [a,b] \) for all \( x \in [a,b] , \) then g has a fixed point in [a,b].
  • Furthermore, suppose that the derivative g'( x) is defined over (a,b) and that a positive constant (called a Lipschitz constant) L < 1 exists with \( |g' (x) | \le L < 1 \) for all \( x \in (a,b) ; \) then g has a unique fixed point P in [a,b]. ■
Stefan Banach (was born in 1892 in Kraków, Austria-Hungary and died 31 August 1945 in Lvov, Ukrainian SSR, Soviet Union) was a Polish mathematician who is generally considered one of the world's most important and influential 20th-century mathematicians. He was one of the founders of modern functional analysis, and an original member of the Lwów School of Mathematics. His major work was the 1932 book, Théorie des opérations linéaires (Theory of Linear Operations), the first monograph on the general theory of functional analysis.
Rudolf Otto Sigismund Lipschitz (1832--1903) was a German mathematician who made contributions to mathematical analysis (where he gave his name to the Lipschitz continuity condition) and differential geometry, as well as number theory, algebras with involution and classical mechanics.
Theorem: Assume that the function g and its derivative are continuous on the interval [a,b]. Suppose that \( g(x) \in [a,b] \) for all \( x \in [a,b] , \) and the initial approximation x0 also belongs to the interval [a,b].
  • If \( |g' (x) | \le K < 1 \) for all \( x \in (a,b) , \) then the iteration \( x_{i+1} = g(x_i ) \) will converge to the unique fixed point \( P \in [a,b] . \) In this case, P is said to be an attractive fixed point: \( P= g(P) . \)
  • If \( |g' (x) | > 1 \) for all \( x \in (a,b) , \) then the iteration \( x_{i+1} = g(x_i ) \) will not converge to P. In this case, P is said to be a repelling fixed point and the iteration exhibits local divergence. ■
In practice, it is often difficult to check the condition \( g([a,b]) \subset [a,b] \) given in the previous theorem. We now present a variant of it.
Theorem: Let P be a fixed point of g(x), that is, \( P= g(P) . \) Suppose g(x) is differentiable on \( \left[ P- \varepsilon , P+\varepsilon \right] \quad\mbox{for some} \quad \varepsilon > 0 \) and g(x) satisfies the condition \( |g' (x) | \le K < 1 \) for all \( x \in \left[ P - \varepsilon , P+\varepsilon \right] . \) Then the sequence \( x_{i+1} = g(x_i ) , \) with starting point \( x_0 \in \left[ P- \varepsilon , P+\varepsilon \right] , \) converges to P.

Remark: If g is invertible, then P is a fixed point of g if and only if P is a fixed point of \( g^{-1} . \)

Remark: The above theorems provide only sufficient conditions. It is possible for a function to violate one or more of the hypotheses, yet still have a (possibly unique) fixed point. ■

The Banach theorem allows one to find the number of iterations necessary to achieve a given error ε. It can be calculated from the following formula (a priori error estimate)

\[ \frac{1}{\ln L} \, \ln \left( \frac{(1-L)\,\varepsilon}{|x_0 - x_1 |} \right) \le \mbox{iterations}(\varepsilon ), \]
and from the a posteriori error estimate (where p is the fixed point):
\[ |x_k - p |\le \frac{L}{1-L} \left\vert x_k - x_{k-1} \right\vert . \]
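The a priori estimate is easy to turn into a one-line helper (the name iterationsNeeded is our own):
(* smallest k guaranteed by the a priori bound L^k/(1-L) |x1 - x0| <= eps *)
iterationsNeeded[L_, x0_, x1_, eps_] := Ceiling[Log[(1 - L) eps/Abs[x1 - x0]]/Log[L]]
For the iteration of Example 6 below (L = 0.4, x0 = 1), iterationsNeeded[0.4, 1.0, 1.0 + 0.4 Sin[1.0], 10.^-6] yields 15, so at most fifteen iterations are needed to achieve an error of 10^-6.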

 

Example 5: The function \( g(x) = 2x\left( 1-x \right) \) is continuous on the whole real line \( (-\infty , \infty ) , \) yet it violates the hypotheses of the theorem on, for instance, the interval \( [0.5, \infty ) : \) g(x) clearly does not map this interval into itself; moreover, its derivative \( |g' (x)| \to + \infty \) for large x. However, g(x) still has fixed points at x = 0 and x = ½.    ■

Example 6: Consider the equation

\[ x = 1 + 0.4\, \sin x , \qquad \mbox{with} \quad g(x) = 1 + 0.4\, \sin x . \]
Note that g(x) is a continuous functions everywhere and \( 0.6 \le g(x) \le 1.4 \) for any \( x \in \mathbb{R} . \) Its derivative \( \left\vert g' (x) \right\vert = \left\vert 0.4\,\cos x \right\vert \le 0.4 < 1 . \) Therefore, we can apply the theorem and conclude that the fixed point iteration
\[ x_{k+1} = 1 + 0.4\, \sin x_k , \qquad k=0,1,2,\ldots \]
will converge for an arbitrary initial point from the interval [0.6, 1.4].
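Here is a minimal numerical check with the built-in FixedPointList (the stopping tolerance in SameTest and the cap of 100 iterations are our choices):
FixedPointList[1 + 0.4 Sin[#] &, 1.0, 100, SameTest -> (Abs[#1 - #2] < 10^-8 &)]
The iterates approach the unique fixed point P ≈ 1.39376.    ■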

Example 7: Consider a similar equation

\[ x = 1 + 2\, \sin x , \qquad \mbox{with} \quad g(x) = 1 + 2\, \sin x . \]
Since \( -1 \le g(x) \le 3 , \) we look for a fixed point in the interval [-1,3]. For its derivative we have
\[ g'(x) = 2\, \cos x \qquad \Longrightarrow \qquad \max_{x\in [-1,3]} \, \left\vert g' (x) \right\vert =2 > 1, \]
we do not expect convergence of the fixed point iteration \( x_{k+1} = 1 + 2\, \sin x_k . \)    ■

 

I. Aitken's Algorithm


Alexander Aitken.

Sometimes we can accelerate or improve the convergence of an algorithm with very little additional effort, simply by using the output of the algorithm to estimate some of the uncomputable quantities. One such acceleration was proposed by A. Aitken. Alexander Craig "Alec" Aitken was born in 1895 in Dunedin, Otago, New Zealand and died in 1967 in Edinburgh, Scotland, where he had lived since 1925. Aitken had an incredible memory (he knew π to 2000 places) and could instantly multiply, divide and take roots of large numbers. He played the violin and composed music to a very high standard.

Suppose that we have an iterative process that generates a sequence of numbers \( \{ x_n \}_{n\ge 0} \) that converges to α. We generate a new sequence \( \{ q_n \}_{n\ge 0} \) according to

\[ q_n = x_n - \frac{\left( x_{n+1} - x_n \right)^2}{x_{n+2} -2\, x_{n+1} + x_n} = x_n - \frac{\left( \Delta x_n \right)^2}{\Delta^2 x_n} , \qquad n=0,1,2,\ldots , \]
where \( \Delta x_n = x_{n+1} - x_n \) is the forward difference. Correspondingly, the backward difference is
\[ \nabla f(n) = f(n) - f(n-1). \]
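For a sequence that has already been computed and stored in a list, Aitken's transform is a one-liner (the name aitken is our own; the output is two entries shorter than the input because three consecutive terms are needed):
aitken[p_List] := Table[
  p[[n]] - (p[[n + 1]] - p[[n]])^2/(p[[n + 2]] - 2 p[[n + 1]] + p[[n]]), {n, 1, Length[p] - 2}]
Applied to a linearly convergent list of iterates (for example, one produced by NestList), the transformed terms are typically much closer to the limit than the originals.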
Theorem (Aitken's Acceleration): Assume that the sequence \( \{ p_n \}_{n\ge 0} \) converges linearly to the limit p  and that \( p_n \ne p \) for all \( n \ge 0. \) If there exists a real number A with |A| < 1 such that
\[ \lim_{n \to \infty} \, \frac{p- p_{n+1}}{p- p_n} =A, \]
then the sequence \( \{ q_n \}_{n\ge 0} \) defined by  
\[ q_n = p_n - \frac{\left( \Delta p_n \right)^2}{\Delta^2 p_n} = p_n - \frac{\left( p_{n+1} - p_n \right)^2}{p_{n+2} - 2p_{n+1} + p_n} \]
converges to p faster than \( \{ p_n \}_{n\ge 0} \) in the sense that \( \lim_{n \to \infty} \, \left\vert \frac{p - q_n}{p- p_n} \right\vert =0 . \)    ▣ 

This algorithm was proposed by one of New Zealand's greatest mathematicians, Alexander Craig "Alec" Aitken (1895--1967). When it is applied to determine a fixed point of the equation \( x=g(x) , \) it consists of the following stages:

We assume that g is continuously differentiable, so according to Mean Value Theorem there exists ξn-1 between α (which is the root of \( \alpha = g(\alpha ) \) ) and xn-1 such that

\[ \alpha - x_n = g(\alpha ) - g(x_{n-1}) = g' (\xi_{n-1} )(\alpha - x_{n-1}) . \]
Now consider
\[ \alpha - x_n = \left( \alpha - x_{n-1} \right) + \left( x_{n-1} - x_n \right) = \frac{1}{g' (\xi_{n-1})} \,(\alpha - x_n ) + \left( x_{n-1} - x_n \right) , \]
so that solving for α implies
\[ \alpha = x_n + \frac{g' (\xi_{n-1} )}{1- g' (\xi_{n-1} )} \left( x_n - x_{n-1} \right) . \]

Since we are assuming that \( x_n \,\to\, \alpha , \) we also know that \( g' (\xi_n ) \, \to \,g' (\alpha ) ; \) thus, we can denote \( \gamma_n \approx g' (\xi_{n-1} ) \approx g' (\alpha ) . \) Using this notation, we get

\[ \alpha = x_n + \frac{\gamma_n}{1- \gamma_n} \left( x_n - x_{n-1} \right) , \]
which we use to define a new sequence of approximations:
\[ q_n = x_n + \frac{\gamma_n}{1- \gamma_n} \left( x_n - x_{n-1} \right) , \qquad \gamma_n = \frac{x_{n-1} - x_n}{x_{n-2} - x_{n-1}} . \]

Example 8: Consider \( g(x) = \frac{1}{3}\, e^{-x} \) and solve the equation \( g(x) = x \) using the iteration

\[ x_n = g(x_{n-1}) , \qquad n = 1,2,\ldots . \]
Taking x0 = 0, we compute the first iterates as follows (note that the code below starts its indexing from x[0] = 1/3, which is the text's x1):
\begin{align*} x_1 &= g(x_0 ) = \frac{1}{3}\, e^0 = \frac{1}{3} , \\ x_2 &= g(x_1 ) = \frac{1}{3}\, e^{-1/3} \approx 0.238844 , \\ x_3 &= g(x_2 ) = \frac{1}{3}\, e^{-x_2} \approx 0.262513 , \\ x_4 &= g(x_3 ) = \frac{1}{3}\, e^{-x_3} \approx 0.256372 . \end{align*}
FindRoot[x == Exp[-x]/3, {x, 0}]
{x -> 0.257628}
x[0] = 1/3.0
0.333333
x[1] = Exp[-x[0]]/3
0.238844
x[2] = Exp[-x[1]]/3
0.262513
x[3] = Exp[-x[2]]/3
0.256372
x[4] = Exp[-x[3]]/3
0.257951
Now starting with x2, we can begin to compute the accelerated values qn:
\[ q_n = x_n + \frac{\gamma_n}{1- \gamma_n} \left( x_n - x_{n-1} \right) , \qquad \mbox{where} \quad \gamma_n = \frac{x_{n-1} - x_n}{x_{n-2} - x_{n-1}} . \]
Proceeding in this fashion, we get
\begin{align*} q_2 &= x_2 + \frac{\gamma_2}{1- \gamma_2} \left( x_2 - x_{1} \right) \approx 0.257771 , \\ q_3 &= x_3 + \frac{\gamma_3}{1- \gamma_3} \left( x_3 - x_{2} \right) \approx 0.257637 . \end{align*}
gamma[2] = (x[1] - x[2])/(x[0] - x[1])
-0.250492
q[2] = x[2] + gamma[2]*(x[2] - x[1])/(1 - gamma[2])
0.257771
gamma[3] = (x[2] - x[3])/(x[1] - x[2])
-0.25943
q[3] = x[3] + gamma[3]*(x[3] - x[2])/(1 - gamma[3])
0.257637
Now we present Version 1 of Aitken extrapolation, which does not take as much advantage of the acceleration as is possible.

Aitken Extrapolation, Version 1
x1 = g(x0)
x2 = g(x1)
for k=1 to n do
  if (dabs(x1 - x0) > 1.d-20) then
    gamma = (x2 - x1)/(x1 - x0)
  else
    gamma = 0.0d0
  endif
  q = x2 + gamma*(x2 - x1)/(1 - gamma)
  if (dabs(q - x2) < error) then
    alpha = q
    stop
  endif
  x = g(x2)
  x0 = x1
  x1 = x2
  x2 = x
enddo
Since division by zero---or by a very small number---is possible in the algorithm's computation of γ, we put in a conditional test: only if the denominator is greater than \( 10^{-20} \) do we carry through the division.
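The same algorithm translates directly into Mathematica; the following sketch (the name aitkenFixedPoint and the tolerance handling are our own) keeps the guard against a vanishing denominator:
aitkenFixedPoint[g_, x0_, tol_, nmax_] :=
 Module[{x = N[x0], x1, x2, gamma, q},
  x1 = g[x]; x2 = g[x1];
  Catch[
   Do[
    gamma = If[Abs[x1 - x] > 10.^-20, (x2 - x1)/(x1 - x), 0.];
    q = x2 + gamma (x2 - x1)/(1 - gamma);   (* accelerated value *)
    If[Abs[q - x2] < tol, Throw[q]];
    {x, x1, x2} = {x1, x2, g[x2]},          (* shift the three-point window *)
    {nmax}];
   q]]
For instance, aitkenFixedPoint[Exp[-#]/3 &, 0., 10.^-8, 100] returns a value near the fixed point 0.257628 of Example 8.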

Example 9: Repeat the previous example according to version 1.    ■

 

II. Steffensen's Acceleration


Johan Steffensen.

Johan Frederik Steffensen (1873--1961) was a Danish mathematician, statistician, and actuary who did research in the fields of calculus of finite differences and interpolation. He was a professor of actuarial science at the University of Copenhagen from 1923 to 1943. Steffensen's inequality and Steffensen's iterative numerical method are named after him.

When Aitken's Δ² process is combined with the fixed point iteration, the result is called Steffensen's acceleration. Starting with p0, two steps of the iteration procedure are performed:

\[ p_0 , \qquad p_1 = g(p_0 ), \qquad p_2 = g(p_1 ). \]
Then we compute
\[ q_0 = p_0 -  \frac{\left( \Delta p_0 \right)^2}{\Delta^2 p_0} = p_0 - \frac{\left( p_1 - p_0 \right)^2}{p_2 - 2\,p_1 +p_0} . \]
At this point, we ``restart'' the fixed point iteration with \( p_0 = q_0 , \) i.e.,
\[ p_3 = q_0 = p_0 - \frac{\left( p_1 - p_0 \right)^2}{p_2 - 2\,p_1 +p_0} , \qquad p_4 = g(p_3 ), \qquad p_5 = g(p_4 ). \]
and compute
\[ q_3 = p_3 -  \frac{\left( \Delta p_3 \right)^2}{\Delta^2 p_3} = p_3 - \frac{\left( p_4 - p_3 \right)^2}{p_5 - 2p_4 +p_3} . \]
If at some point \( \Delta^2 p_n =0 \) (which appears in the denominator), then we stop and select the current value of pn+2 as our approximate answer.

Steffensen's acceleration is used to quickly find a solution of the fixed-point equation x = g(x) given an initial approximation p0. It is assumed that both g(x) and its derivative are continuous, \( | g' (x) | < 1, \) and that ordinary fixed-point iteration converges slowly (linearly) to p.

steffensen[g_, x0_, n_] :=
 Module[{x1, x2, q0}, q0 = N[x0];
  Do[x1 = g[q0]; x2 = g[x1];                  (* two ordinary fixed point steps *)
   q0 = q0 - (x1 - q0)^2 /(x2 - 2*x1 + q0);   (* Aitken's formula; restart from the accelerated value *)
   Print[q0], {k, 1, n}]]

Example 10: Consider the transcendental function

\[ g(x) = e^x\,\cos x + \ln \left( x^2 +1 \right) . \]
      First, we find the fixed point by solving the equation x = g(x)
FindRoot[Cos[x]*Exp[x] + Log[x^2 + 1] - x, {x, 1}]
{x -> 1.4989770002379905}
Then we plot g(x) together with the line y = x:
Plot[{Cos[x]*Exp[x] + Log[x^2 + 1], x}, {x, 0, 1.9},
PlotStyle -> {Thickness[0.01]}, PlotLabel ->
Style["Transcendent function", FontSize -> 16, Background -> Cyan]]
Fixed point of the transcendental function.

Now we proceed with the iterations:
g[x_] = Cos[x]*Exp[x] + Log[x^2 + 1];
x0 = 1.1; x1 = g[x0]; x2 = g[x1];
q0 = x0 - (x1 - x0)^2 /(x2 - 2*x1 + x0)
1.2784
Second iteration gives
x0 = q0; x1 = g[x0]; x2 = g[x1];
q0 = x0 - (x1 - x0)^2 /(x2 - 2*x1 + x0)
0.25248
The third iteration yields
x0 = q0; x1 = g[x0]; x2 = g[x1];
q0 = x0 - (x1 - x0)^2 /(x2 - 2*x1 + x0)
1.63301
The next iteration gives
1.56931
so the iterates fluctuate around the true value. Now we start with a better guess, but the result is similar.
steffensen[g, 1.3, 10]
1.41687 1.10225 -0.436183 1.38304 0.898461 -0.268983 1.42889
Now we try another guess:
steffensen[g, 1.5, 10]
Almost immediately, we get a good approximation but later it diverges.    ■

Example 11: We consider a similar function

\[ g(x) = \cos x\,e^{-x} + \ln \left( x^2 +1 \right). \]
      First, we find the fixed point by solving the equation x = g(x)
FindRoot[Cos[x]*Exp[-x] + Log[x^2 + 1] - x, {x, 1}]
{x -> 0.811837760929652}
FixedPoint[Cos[#]*Exp[-#] + Log[#^2 + 1] &, 1.1]
0.811838
Then we plot g(x) together with the line y = x:
Plot[{Cos[x]*Exp[-x] + Log[x^2 + 1], x}, {x, 0, 1.9},
PlotStyle -> {Thickness[0.01]}, PlotLabel ->
Style["Transcendent function", FontSize -> 16, Background -> Cyan]]
Fixed point of the transcendental function.

We apply Steffensen's algorithm starting relatively far away from the true fixed point:
g[x_] = Cos[x]*Exp[-x] + Log[x^2 + 1];
steffensen[g, 1.5, 10]
which provides a good approximation to the fixed point after the ninth iteration.    ■

 

III. Wegstein's Method


To solve the fixed point problem \( g(x) =x , \) the following algorithm, presented by J.H. Wegstein in 1958, can be used:
\[ x_{k+1} = \frac{x_{k-1} g(x_k ) - x_k g(x_{k-1})}{g(x_k ) + x_{k-1} -x_k - g(x_{k-1})} , \quad k=1,2,\ldots . \]
To some extent, Wegstein's method is a version of a predictor-corrector method because it can be broken into two stages:
\[ \begin{split} p^{(n+1)} = g \left( x^{(n)} \right) , \quad x^{(n+1)} = q\, x^{(n)} + \left( 1-q \right) p^{(n+1)} , \\ q= \frac{b}{b-1} , \quad b= \frac{x^{(n)} - p^{(n+1)}}{x^{(n-1)} - x^{(n)}} , \end{split} \]
for \( n=1,2,\ldots . \) Wegstein (1958) found that the iteration converges differently depending on the value of q:
  • q < 0 : convergence is monotone
  • 0 < q < 0.5 : convergence is oscillatory
  • 0.5 < q < 1 : convergence
  • q > 1 : divergence
Wegstein's method depends on the two first guesses x0 and x1; a poor start may cause the process to fail, converge to another root, or, at best, add an unnecessary number of iterations.
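A short sketch of the method in Mathematica (the name wegstein is our own; it implements the recurrence displayed above and stops when two successive iterates agree to the tolerance tol):
wegstein[g_, x0_, x1_, tol_, nmax_] :=
 Module[{xm = N[x0], xn = N[x1], xnew},
  Catch[
   Do[
    xnew = (xm g[xn] - xn g[xm])/(g[xn] + xm - xn - g[xm]); (* Wegstein update *)
    If[Abs[xnew - xn] < tol, Throw[xnew]];
    {xm, xn} = {xn, xnew},
    {nmax}];
   xn]]
For instance, wegstein[1 + 0.4 Sin[#] &, 0.5, 1.0, 10.^-10, 50] returns the fixed point ≈ 1.39376 of Example 6.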

Example 12: Repeat the previous example according to Wegstein's method.    ■

 

IV. Krasnosel’skii--Mann iteration algorithm (K-M)


     
   Mark Krasnosel'skii            William Robert Mann

There are several iteration techniques for approximating fixed points of various classes of equations. Picard's iteration, the Mann iteration, and the Krasnosel'skii iteration are the most used of these methods. Mark Alexandrovich Krasnosel'skii (1920--1997) was a Soviet and Russian mathematician renowned for his work on nonlinear functional analysis and its applications. William Robert Mann (1920--2006) was a mathematician from Chapel Hill, North Carolina, USA. The K--M algorithm is used to solve a fixed point equation

\[ T\,x = x , \]
where T is a self-mapping of a closed subset. The K--M algorithm generates a sequence { xn } according to the recursive formula
\[ x_{n+1} = \left( 1 - \alpha_n \right) x_n + \alpha_n T\,x_n , \qquad n=0,1,2,\ldots , \]
where αn is a sequence in the interval (0,1) and the initial guess x0 is chosen arbitrarily. Obviously, for the special case αn = 1, the Krasnosel'skii--Mann iterative scheme reduces to Picard's method.
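A minimal sketch of the K--M scheme in Mathematica (the name kmIterate and the particular choice αn = 1/(n+2) are our own, for illustration only):
(* Krasnosel'skii--Mann iteration with alpha_n = 1/(n+2) *)
kmIterate[T_, x0_, nmax_] :=
 FoldList[Function[{x, n}, With[{a = 1./(n + 2)}, (1 - a) x + a T[x]]], N[x0], Range[0, nmax - 1]]
For example, the last entries of kmIterate[Cos, 1.0, 50] drift toward the fixed point of the cosine function (≈ 0.739085); the averaging makes the scheme applicable to nonexpansive mappings, at the price of slower convergence than pure Picard iteration.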

Example 13: Repeat the previous example according to the Krasnosel'skii--Mann algorithm.    ■

 

V. The Adomian decomposition method


George Adomian

The Adomian decomposition method (ADM) is a technique for solving nonlinear equations. The crucial step of this method is the decomposition of the nonlinear term into the so-called Adomian polynomials.

Consider the nonlinear equation f(x) = 0, which can be transformed into

\[ x = g(x) , \]
where g(x) is a nonlinear function, which is assumed to have as many derivatives as needed. Let x = c be an estimate of a root of the above equation x = g(x). By making the shift u = x - c, we transfer the given fixed point problem to the following one:
\[ u + c = g( u+c) \qquad\Longrightarrow \qquad u = g( u+c) - c . \]
The above equation is better suited to the Adomian decomposition method (ADM for short) because u has a small value, so u0 = 0 will serve as its initial approximation. Then we represent u as a series that is assumed to converge:
\[ u = \sum_{i\ge 0} u_i = \sum_{i\ge 1} u_i . \]
Hence, the initial approximation is zero: u0 = 0, which corresponds to x0 = c. The nonlinear term g(u+c) can be also decomposed into the Adomian polynomials, which leads to the equation
\[ u = \sum_{i\ge 1} u_i = \sum_{i\ge 0} A_i -c , \]
where the Ai's are the Adomian polynomials:
\[ A_n \left( u_0 , u_1 , \ldots , u_n \right) = \frac{1}{n!} \left[ \frac{{\text d}^n}{{\text d}\lambda^n} \,\,g\left( c+ \sum_{j\ge 0} \lambda^j u_j \right) \right]_{\lambda = 0} , \qquad n=0,1,2,\ldots . \]
The first few of these polynomials can be generated as follows:
\begin{align*} A_0 &= g\left( x_0 \right) , \\ A_1 &= x_1 g' \left( x_0 \right) , \\ A_2 &= x_2 g' \left( x_0 \right) + \frac{x_1^2}{2}\,g'' \left( x_0 \right) , \\ A_3 &= x_3 g' \left( x_0 \right) + x_1 x_2 g'' (x_0 ) + \frac{x_1^3}{3!}\,g''' \left( x_0 \right) , \\ A_4 &= x_4 g' \left( x_0 \right) + \left( \frac{x_2^2}{2!} + x_1 x_3 \right) g'' \left( x_0 \right) + \frac{1}{2!}\,x_1^2 x_2 \,g''' \left( x_0 \right) + \frac{x_1^4}{4!} \,g^{(4)} \left( x_0 \right) , \\ A_5 &= x_5 g' \left( x_0 \right) + \left( x_1 x_4 + x_2 x_3 \right) g'' \left( x_0 \right) + \frac{1}{2!}\left( x_1^2 x_3 + x_1 x_2^2 \right) g''' \left( x_0 \right) + \frac{x_1^3 x_2}{3!} \,g^{(4)} (x_0 ) + \frac{x_1^5}{5!} \, g^{(5)} \left( x_0 \right) . \end{align*}
The following Mathematica code was prepared by Jun-Sheng Duan:
Ado[M_]:= Module[{c,n,k,j,der},Table[c[n,k],{n,1,M},{k,1,n}];
A[0]=f[u[0]];
For[n = 1,n<=M,n++,c[n,1]=u[n];
For[k = 2,k<=n,k++,
c[n,k]=Expand[1/n*Sum[(j + 1)*u[j + 1]* c[n-1-j,k-1],{j,0,n-k}]]];
der = Table[D[f[u[0]],{u[0],k}],{k,1,M}];
A[n]=Take[der,n].Table[c[n,k],{k,1,n}]];
Table[A[n],{n,0,M}]]
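For instance, with f and u left undefined, the call
Ado[3]
returns the generic polynomials \( f(u_0), \ u_1 f' (u_0), \ u_2 f' (u_0) + \frac{u_1^2}{2}\, f'' (u_0), \ldots , \) in agreement with the expressions displayed above.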
Now we calculate each term in the sum for x recursively:
\begin{align*} u_0 &= 0 \qquad \Longrightarrow \qquad x_0 = c , \\ u_1 &= A_0 \left( u_0 \right) = g(c), \\ u_2 &= A_1 \left( u_0 , u_1 \right) = u_1 g' (c) = g(c)\,g' (c), \\ u_3 &= A_2 = u_2 g' \left( c \right) + \frac{u_1^2}{2}\,g'' \left( c \right) \\ \vdots & \qquad \vdots \\ u_{n+1} &= A_n \left( u_0 , u_1 , \ldots , u_n \right) . \end{align*}

Since un+1 = An, we represent the required solution of the fixed point problem x = g(x) as

\[ x = c + \sum_{n\ge 1} u_n = c + g(c) + g(c)\,g' (c) + g(c) \left[ g' (c) \right]^2 + \frac{1}{2}\,g^2 (c)\, g'' (c) + \cdots . \]
The five-term approximation becomes
\begin{align*} x &= c + g(c) \left[ 1 + g' (c) + \left[ g' (c) \right]^2 + \left[ g' (c) \right]^3 + \left[ g' (c) \right]^4 \right] + \frac{1}{2}\,g^2 (c)\, g'' (c) \left[ 1 + 3\,g' (c) + 6 \left[ g' (c) \right]^2 \right] \\ & \quad + \frac{1}{2}\, g^3 (c) \left[ g'' (c) \right]^2 + \frac{g^3 (c)}{6}\, g''' (c) \left[ 1 + 4\, g' (c) \right] + \frac{g^4 (c)}{24} \, g^{(4)} \left( c \right) , \end{align*}
in agreement with the symbolic computation below.
u1 = g; u2 = g*g1; u3 = u2*g1 + u1^2 /2 *g2;
u4 = u3*g1 + u1*u2*g2 + u1^3/6 *g3;
u5 = u4*g1 + (u2^2/2 + u1*u3)*g2 + 1/2*u1^2*u2*g3 + u1^4/4! *g4;
s = FullSimplify[u1 + u2 + u3 + u4 + u5];
Expand[s/g]
1 + g1 + g1^2 + g1^3 + g1^4 + (g g2)/2 + (3 g g1 g2)/2 + 3 g g1^2 g2 + (g^2 g2^2)/2 + (g^2 g3)/6 + 2/3 g^2 g1 g3 + (g^3 g4)/24
Depending on the initial approximation c, we can calculate the five-term Adomian approximation with the following Mathematica code:
AdoFixed5[c_, g_] := Module[{g0, g1, g2, g3, g4},
  g0 = N[g[c]];                                                 (* g(c) *)
  {g1, g2, g3, g4} = Table[N[Derivative[k][g][c]], {k, 1, 4}];  (* derivatives of g at c *)
  c + g0*(1 + g1 + g1^2 + g1^3 + g1^4) +
   g0^2*g2/2*(1 + 3*g1 + 6*g1^2) + g0^3*g2^2/2 +
   g0^3*g3/6*(1 + 4*g1) + g0^4*g4/24]

The ADM does not guarantee the existence and uniqueness of solutions, but it can be safely applied in cases when a fixed point theorem holds. This will be demonstrated in the following examples.

Example 14: Let us calculate the first few Adomian polynomials for \( g(x) = 1/x = x^{-1} : \)

Series[1/(Sum[Subscript[x, n] lambda^n, {n, 0, 11}] ), {lambda, 0, 10}]
\begin{align*} A_0 &= \frac{1}{x_0} , \\ A_1 &= - \frac{x_1}{x_0^2} , \\ A_2 &= \frac{1}{x_0^3} \left( x_1^2 - x_0 x_2 \right) , \\ A_3 &= \frac{1}{x_0^4} \left( -x_1^3 + 2\,x_0 x_1 x_2 - x_0^2 x_3 \right) , \\ A_4 &= \frac{x_1^4}{x_0^5} - 3\,\frac{x_1^2 x_2}{x_0^4} + \frac{x_2^2}{x_0^3} + 2\,\frac{x_1 x_3}{x_0^3} - \frac{x_4}{x_0^2} , \\ A_5 &= - \frac{x_1^5}{x_0^6} + 4\,\frac{x_1^3 x_2}{x_0^5} - 3\,\frac{x_1^2 x_3}{x_0^4} - 3\,\frac{x_1 x_2^2}{x_0^4} + 2\, \frac{x_2 x_3}{x_0^3} + 2\, \frac{x_1 x_4}{x_0^3} - \frac{x_5}{x_0^2} . \end{align*}
   ■

Example 15: Consider the simplest algebraic equation---quadratic equation

\[ a\, x^2 +b\, x +c =0, \qquad b\ne 0, \]
where a, b, and c are some given constants. The case b = 0 can be easily included in our consideration by adding and subtracting x. Expressing x from it, we obtain
\[ x = - \frac{c}{b} - \frac{a}{b} \, x^2 = \alpha + \beta\,x^2 \qquad\mbox{with} \quad \alpha = - \frac{c}{b} , \ \beta = - \frac{a}{b} . \]
It is convenient to consider alongside a numerical example and show all calculations for it. Therefore, we consider the equation
\[ x^2 -x -6 =0 , \]
whose solutions are x = 3 and x = -2. We rewrite it in the form
\[ x = -6 + x^2 \qquad \Longrightarrow \qquad \alpha =-6, \quad \beta = 1 . \]
We decompose x as an infinite series
\[ x = x_0 + x_1 + x_2 + \cdots = \sum_{n\ge 0} x_n . \]
Our next step is to express the nonlinear term x² as a series over Adomian's polynomials. More specifically, we can write the quadratic equation in the operator form
\[ L\,x = N\left[ x \right] + g , \]
where L x = x, N[x] = -ax²/b = βx² = x², and g = -c/b = α = -6. We decompose the nonlinear term into the Adomian polynomials:
\[ N\left[ x \right] = - \frac{a}{b} \, x^2 = \beta\,x^2 = \beta \left( A_0 + A_1 + A_2 + \cdots \right) = \beta \sum_{n\ge 0} A_n . \]
Substituting both series into the quadratic equation, we get
\[ x = \sum_{n\ge 0} x_n = - \frac{c}{b} - \frac{a}{b} \sum_{n\ge 0} A_n = \alpha + \beta \sum_{n\ge 0} A_n . \]
The various components xn of the solution x can be easily determined by using the recursive relation
\[ \begin{split} x_0 &= - \frac{c}{b} = \alpha , \qquad \Longrightarrow \qquad x_0 = -6, \\ x_{n+1} &= \beta \, A_n , \qquad n=0,1,2,\ldots . \end{split} \]
Using the explicit expression for N, we find first few Adomian polynomials:
\begin{align*} A_0 &= x_0^2 = \frac{c^2}{b^2} \qquad \Longrightarrow \qquad A_0 = 6^2 = 36 = x_1 , \\ A_1 &= 2\,x_0 x_1 = -12 \, x_1 \qquad \Longrightarrow \qquad A_1 = -12 \cdot 36 = -432 = x_2 , \\ A_2 &= x_1^2 + 2\,x_0 x_2 \qquad \Longrightarrow \qquad A_2 = 6480 = x_3 , \\ A_3 &= 2\, x_1 x_2 + 2\,x_0 x_3 \qquad \Longrightarrow \qquad A_3 = x_4 = -108864, \\ A_4 &= x_2^2 + 2\,x_1 x_3 + 2\, x_0 x_4 \qquad \Longrightarrow \qquad A_4 = x_5 = 1959552 , \\ A_5 &= 2\,x_0 x_5 + 2\,x_1 x_4 + 2\,x_2 x_3 \qquad \Longrightarrow \qquad A_5 = x_6 = -36951552 , \\ A_6 &= x_3^2 + 2\,x_0 x_6 + 2\,x_1 x_5 + 2\,x_2 x_4 \qquad \Longrightarrow \qquad A_6 = x_7 = 720555264 , \\ A_7 &= 2\,x_0 x_7 + 2\,x_1 x_6 + 2\,x_2 x_5 + 2\,x_3 x_4 \qquad \Longrightarrow \qquad A_7 = x_8 = -14411105280 , \\ A_8 &= x_4^2 + 2\,x_0 x_8 + 2\,x_1 x_7 + 2\, x_2 x_6 + 2\,x_3 x_5 , \\ \vdots & \end{align*}
Therefore,
\begin{align*} x_0 &= - \frac{c}{b} = \alpha \qquad \Longrightarrow \qquad x_0 = -6, \\ x_1 &= \beta\, A_0 = \beta\,x_0^2 = \beta\,\alpha^2 \qquad \Longrightarrow \qquad x_1 = 6^2 = 36 , \\ x_2 &= \beta \, A_1 = 2\,\beta \,x_0 x_1 = 2\,\beta^2 \alpha^3 \qquad \Longrightarrow \qquad x_2 = -2\cdot 6^3 , \\ x_3 &= \beta \,A_2 = \beta \left( x_1^2 + 2\, x_0 x_2 \right) = 5\, \beta^3 \alpha^4 \qquad \Longrightarrow \qquad x_3 = 5\cdot 6^4 , \\ x_4 &= \beta\, A_3 = \beta \left( 2\,x_0 x_3 + 2\,x_1 x_2 \right) = 14\,\beta^4 \alpha^5 \qquad \Longrightarrow \qquad x_4 = - 14\cdot 6^5 , \\ x_5 &= \beta \, A_4 = 42\,\beta^5 \alpha^6 \qquad \Longrightarrow \qquad x_5 = 42\cdot 6^6 , \\ x_6 &= \beta \, A_5 = 132\,\beta^6 \alpha^7 \qquad \Longrightarrow \qquad x_6 = -132\cdot 6^7 , \\ x_7 &= \beta \, A_6 = 429\,\beta^7 \alpha^8 \qquad \Longrightarrow \qquad x_7 = 429\cdot 6^8 , \\ \vdots & \end{align*}
The coefficients 1, 1, 2, 5, 14, 42, 132, 429, … appearing here are the Catalan numbers.
We recognize that in our case the series \( x= x_0 + x_1 + x_2 + \cdots \) becomes the Maclaurin expansion of the function
\[ x = \frac{1}{2\beta} - \frac{1}{2\beta} \,\sqrt{1 - 4\,\alpha\beta} = \alpha \left( 1 + z + 2\,z^2 + 5\, z^3 + 14\, z^4 + 42\, z^5 + \cdots \right) , \qquad z= \alpha\beta , \]
because \( \sqrt{1 - 4z} = 1 - 2z - 2z^2 - 4z^3 - 10\, z^4 -28\, z^5 - \cdots . \)
The series converges when \( \left\vert 4\alpha\beta \right\vert <1 , \) which is equivalent to \( \displaystyle \left\vert 4\left( -\frac{c}{b} \right) \left( -\frac{a}{b} \right) \right\vert <1 . \) So the series converges when \( b^2 > 4 \left\vert a\,c \right\vert . \) This condition is violated in our case, and the series diverges.

We rewrite our algebraic equation as \( x= g(x) = x^2 -6 . \) Then the derivative, \( g' (x) = 2x , \) at the two roots satisfies \( |g' (3) | = 6 > 1 \) and \( |g' (-2) | = 4 > 1 , \) so our mapping is not a contraction near either root (a condition, discovered by Cherruault, required for convergence of the ADM). To overcome this misfortune, we add 4x to both sides of the given equation

\[ x^2 -x -6 + 4x = 4x \qquad \Longleftrightarrow \qquad x = g(x) = \frac{1}{4}\, x^2 + \frac{3}{4}\, x - \frac{3}{2} . \]
Now we choose x0 = -3/2, and expand the remaining function into Adomian series
\[ \frac{1}{4}\, x^2 + \frac{3}{4}\, x = \frac{x}{4} \left( x+3 \right) = \sum_{n\ge 0} A_n , \qquad \mbox{with} \quad A_0 = f(x_0 ) = - \frac{9}{16} , \]
where \( f(x) = x\left( x+3 \right) /4 . \) Substituting the series into our quadratic equation, we obtain
\[ x_{n+1} = A_n , \qquad n=0,1,2,\ldots . \]
Now we calculate the Adomian polynomials using derivatives
\[ f(x) = \frac{x}{4} \left( x+3 \right) \qquad \Longrightarrow \qquad f'(x) = \frac{2x+3}{4} , \quad f'' (x) = \frac{1}{2} . \]
Therefore,
\begin{align*} A_0 &= f(x_0 ) = - \frac{9}{16} \approx - 0.5625, \\ A_1 &= f' (x_0 )\,x_1 = 0 , \\ A_2 &= \frac{1}{4} \, x_1^2 , \\ A_3 &= \frac{1}{2}\, x_1 x_2 , \\ A_4 &= \frac{1}{4} \left( 2\,x_1 x_3 + x_2^2 \right) , \\ A_5 &= \frac{1}{2} \left( x_1 x_4 + x_2 x_3 \right) , \\ A_6 &= \frac{1}{4} \left( 2\,x_1 x_5 + 2\, x_2 x_4 + x_3^2 \right) , \\ A_7 &= \frac{1}{2} \left( x_1 x_6 + x_2 x_5 + x_3 x_4 \right) , \\ A_8 &= \frac{1}{4} \left( 2\, x_1 x_7 + 2\,x_2 x_6 + 2\,x_3 x_5 + x_4^2 \right) , \end{align*}
and so on. Then we determine
\begin{align*} x_0 &= - \frac{3}{2} , \\ x_1 &= - \frac{9}{16} = - \frac{3^2}{2^4} \approx - 0.5625, \\ x_2 &= 0 , \\ x_3 &= \frac{1}{4} \,\frac{9^2}{16^2} = \frac{3^4}{2^{10}} \approx 0.0791016 , \\ x_4 &= 0, \\ x_5 &= - \frac{1}{4} \, 2\,\frac{9}{16} \, \frac{1}{4} \,\frac{9^2}{16^2} = - \frac{729}{32768} \qquad \Longrightarrow \qquad x_3 + x_5 = \frac{1863}{32768} \approx 0.0568542, \\ x_6 &= 0 , \end{align*}
and so on. Adding the first terms, we get approximations
\[ z_2 = x_0 + x_1 + x_2 = - \frac{33}{16} \approx - 2.0625 \]
and
\[ z_5 = x_0 + x_1 + x_2 + x_3 + x_4 + x_5 = - \frac{65721}{32768} \approx -2.00565 , \]
which we expect to be approximations to the root -2.    ■

Example 16: Consider the transcendental equation

\[ x\,\cos x - e^{x}\, \sin x +x = 0, \]
which we transform into a fixed point equation:
\[ x= g(x) , \qquad\mbox{where} \quad g(x) = e^{x}\, \sin x - x\,\cos x . \]
First, we find its nontrivial (≠ 0) root with the standard Mathematica command:
FindRoot[Exp[x]*Sin[x] - x*Cos[x] - x == 0, {x, 0.5}]
{x -> 0.656321}
So the root is 0.6563205569359923. Let x = c be an estimated root of the above equation x = g(x). By making the shift u = x - c, we transfer the given fixed point problem to the following one:
\[ u + c = g( u+c) \qquad\Longrightarrow \qquad u = g( u+c) - c . \]
Assuming that the root of the above equation is represented by a convergent series
\[ u= \sum_{i\ge 1} u_i , \]
we decompose function g(u+c) into Adomian series:
\[ g(u+c) = e^{u+c}\, \sin (u+c) - (u+c)\,\cos (u+c) = \sum_{k\ge 0} A_k , \]
where u0 = 0 and
\begin{align*} u_1 &= A_0 = g(c) , \\ u_2 &= A_1 = u_1 g'(c) = g(c)\,g'(c) , \\ u_3 &= A_2 = u_2 g' (c) + \frac{u_1^2}{2}\,g'' (c) = g(c) \left\{ \left[ g' (c) \right]^2 + \frac{1}{2}\, g(c)\,g'' (c) \right\} , \\ u_4 &= A_3 = \frac{1}{6} \left( -5\,x_1^3 -12\,x_1 x_2 + 6\, x_3 \right) , \\ A_4 &= x_4 -\frac{5}{2}\,x_1^2 x_2 - x_2^2 -2\,x_1 x_3 , \\ A_5 &= x_5 -2\,x_1 x_4 - 2\,x_2 x_3 - \frac{5}{2}\,x_1^2 x_3 - \frac{5}{2}\,x_1 x_2^2 + \frac{3}{40}\,x_1^5 . \end{align*}
Series[(c + Sum[Subscript[u, n] lambda^n, {n, 1, 6}])*Cos[c + Sum[Subscript[u, n] lambda^n, {n, 1, 6}]] - Exp[c + Sum[Subscript[u, n] lambda^n, {n, 1, 6}]]*Sin[c + Sum[Subscript[u, n] lambda^n, {n, 1, 6}]], {lambda, 0, 6}]
For c = 0, every term vanishes because g(0) = 0, so the scheme merely reproduces the obvious root x = 0. Now we try another initial approximation, say x0 = 1/2 = 0.5. We solve the equation
\[ x = e^{x}\,\sin (x) - x\,\cos (x) -1/2 + 1/2 = g(x) + 1/2, \]
by representing x as the convergent series:
\[ x = \sum_{i\ge 0} x_i = 1/2 + \sum_{i\ge 1} x_i \]
The nonlinear function \( g(x) = x\,\cos (x) - e^{x}\,\sin (x) -1/2 \) is decomposed into Adomian series:
\[ g(x) = x\,\cos (x) - e^{x}\,\sin (x) -1/2 = \sum_{k\ge 0} A_k . \]
We again ask Mathematica for help:
Series[(1/2 + Sum[Subscript[x, n] lambda^n, {n, 1, 6}])*Cos[1/2 + Sum[Subscript[x, n] lambda^n, {n, 1, 6}]] - Exp[1/2 + Sum[Subscript[x, n] lambda^n, {n, 1, 6}]]*Sin[1/2 + Sum[Subscript[x, n] lambda^n, {n, 1, 6}]], {lambda, 0, 6}]
This gives
\begin{align*} x_0 &= 1/2 , \\ x_1 &= A_0 = 3\,\cos \left( \frac{1}{2} \right) - e^{1/2} \sin \left( \frac{1}{2} \right) -1/2, \\ x_2 &= A_1 = x_1 \left[ \cos \left( \frac{1}{2} \right) - e^{1/2} \cos \left( \frac{1}{2} \right) - 3\,\sin \left( \frac{1}{2} \right) - e^{1/2} \sin \left( \frac{1}{2} \right) \right] , \\ x_3 &= A_2 = x_1^2 \left[ - \sin \left( \frac{1}{2} \right) - e^{1/2} \cos \left( \frac{1}{2} \right) - \frac{3}{2}\,\cos \left( \frac{1}{2} \right)\right] + x_2 \left[ \cos \left( \frac{1}{2} \right) - e^{1/2} \cos \left( \frac{1}{2} \right) - e^{1/2} \sin \left( \frac{1}{2} \right) - 3\,\sin \left( \frac{1}{2} \right) \right] , \end{align*}
and so on. We represent these components in floating point representation:
\begin{align*} x_0 &= 0.5 , \\ x_1 &= 1.34231 , \\ x_2 &= -3.75581 , \\ x_3 &= 4.66619 , \\ x_4 &= 18.6289 . \\ \end{align*}
Adding the first five terms, we would hope to obtain an approximation to the root:
\[ x \approx x_0 + x_1 + x_2 + x_3 + x_4 = 21.3816, \]
which is clearly wrong. This indicates that our initial approximation x0 = 0.5 is not suitable for the ADM. Now we try another initial approximation: x0 = 0.6. Computations show:
g[x_] = Exp[x]*Sin[x] - x*Cos[x] - 0.6;
x0 = 0.6;
x1 = g[x0]
-0.0663557
x2 = x1*(D[g[x], x] /. x -> x0)
-0.135774
x3 = x2*(D[g[x], x] /. x -> x0) + x1^2/2*(D[g[x], x, x] /. x -> x0)
-0.267617
x4 = x3*(D[g[x], x] /. x -> x0) + x1*x2*(D[g[x], x, x] /. x -> x0) + x1^3/6*(D[g[x], x, x, x] /. x -> x0)
-0.506002
x5 = x4*(D[g[x], x] /. x -> x0) + (x2^2/2 + x1*x3)*(D[g[x], x, x] /. x -> x0) + 1/2*x1^2 *x2*(D[g[x], {x, 3}] /. x -> x0) + x1^4/24*(D[g[x], {x, 4}] /. x -> x0)
-0.911373
Adding these components, we get
p5 = x0 + x1 + x2 + x3 + x4 + x5
which again gives a wrong answer. Indeed, at the fixed point the derivative satisfies \( |g' (0.65632)| \approx 2.3 > 1 , \) so the fixed point is repelling. Therefore, we conclude that this fixed point problem does not satisfy the conditions of the fixed point theorem, and the ADM diverges no matter what initial guess is made.    ■

Example 17: Consider the transcendental equation

\[ 3\,x\,\cos x - e^{-x}\, \sin x +x = 0, \]
which we transform into a fixed point equation:
\[ x= g(x) , \qquad\mbox{where} \quad g(x) = e^{-x}\, \sin x - 3\,x\,\cos x . \]
By plotting these functions, g(x) and x, we estimate its fixed point to be around 1.8:
Plot[{Exp[-x]*Sin[x] - 3*x*Cos[x], x}, {x, 0, 2}, PlotStyle -> Thick]
Mathematica easily finds its root to be 1.883605826148771.
FindRoot[Exp[-x]*Sin[x] - 3*x*Cos[x] - x == 0, {x, 2}]
{x -> 1.88361}
Root of the given equation.
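Note, however, that this fixed point is repelling: evaluating the derivative of g,
g[x_] = Exp[-x]*Sin[x] - 3*x*Cos[x];
g'[1.883605826148771]
gives a value of roughly 6.1, so \( |g' (P)| > 1 \) at the fixed point. Exactly as in the previous example, neither the fixed point iteration nor the ADM series can be expected to converge here, and a root solver such as FindRoot remains the appropriate tool.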
   ■

 

  1. Ezzati, R., and Azadegan, E., A two-parameter family of second-order iterative methods for solving non-linear equations, Mathematical Sciences, 2008, Vol. 2, No. 2, pp. 151--158.
  2. Gautschi, W., Alexander M. Ostrowski (1893--1986): His life, work, and students, 2010.
  3. Hajjah, A., Imran, M., and Gamal, M.D.H., A two-step iterative method free from derivative for solving nonlinear equations, Applied Mathematical Sciences, 2014, Vol. 8, No. 161, pp. 8021--8027. http://dx.doi.org/10.12988/ams.2014.49710
  4. Krasnosel’skii, M.A., Two remarks on the method of successive approximations, Uspekhi Matematicheskikh Nauk, 1955, vol. 10, pp. 123--127.
  5. Mann, W.R., Mean value methods in iteration, Proceedings of the American Mathematical Society, 1953, vol.4, pp. 506--510.
  6. McNamee, J.M., Numerical Methods for Roots of Polynomials, Part I, Elsevier, Amsterdam, 2007.
  7. Ortega, J.M., Rheinboldt, W.C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
  8. Ostrowski, A.M., Solutions of Equations and System of Equations, Academic Press, New York, 1960.
  9. Petković, M.S., Neta, B., Petković, L.D., Džunić, J., Multipoint methods for solving nonlinear equations: A survey, Applied Mathematics and Computation, 2014, Vol. 226, pp. 635--660.
  10. Traub, J.F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
  11. Wegstein, J.H., Accelerating convergence of iterative processes, Communications of the ACM, 1958, Vol. 1, pp. 9--13.

 

 
