Preface


This section compares some of the iterative methods discussed in the previous sections.

Comparison analysis


In practice, nonlinear equations can rarely be solved analytically, so iterative methods are generally employed. Two classes of iteration methods for solving nonlinear equations and systems of nonlinear equations are known: one-point and multipoint schemes. One-point methods can attain high order only by using higher derivatives of the function, which is expensive from a computational point of view. Multipoint methods, on the other hand, allow the user to reuse information that has already been computed. This approach leads to very efficient root-finding methods, which explains the recent increase of interest in the study of multipoint root-finding methods.

Kung--Traub conjecture: Multipoint iterative methods without memory costing n + 1 function evaluations per iteration have order of convergence at most \( 2^n \).

Although the order of convergence is an important feature of an iteration, it is not the only goal in constructing root-finding algorithms. Another objective is to determine the root with minimal computational cost. When the derivative of the function is complicated, iteration methods without derivatives are preferable. Multipoint methods that attain the bound of the Kung--Traub conjecture are usually called optimal methods, and they are naturally of particular interest. In order to understand the efficiency of numerical procedures, we present examples of applications of some famous iteration methods along with optimal one-point methods (Newton's and Steffensen's iterations) and the two-point Ostrowski method.

Let f be a real single-valued function of a real variable. If f(α) = 0, then α is said to be a zero of f or, equivalently, a root of the equation f(x) = 0. To find this root, we consider an iteration process

\[ x_{n+1} = \phi \left( x_n \right) , \qquad n=0,1,2,\ldots , \]
where xn is an approximation to the zero α of the equation f(x) = 0 and ϕ is an iteration function. The iterative method starts with an initial guess x0, and at every step only the last computed approximation is used; in this case, we call the method a one-point method. The function ϕ may depend on derivatives of f in order to increase the order: in fact, to obtain a one-point method of order r, one has to use all derivatives of f up to order r − 1. The best-known one-point iterative method of order 2 is Newton's method (also called the Newton--Raphson method), named after Isaac Newton and Joseph Raphson. If the iteration function ϕ depends on more than one previous approximation, we get a multipoint method; the simplest example is the secant method.
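For illustration only, here is a minimal Mathematica sketch of such a one-point iteration, using Newton's method; the test function f(x) = x^2 - 2 and the starting value x0 = 1 are assumptions chosen purely for this example.

f[x_] := x^2 - 2;                  (* sample function; its positive zero is Sqrt[2] *)
newtonStep[x_] := x - f[x]/f'[x];  (* iteration function ϕ for Newton's method *)
NestList[newtonStep, N[1, 30], 6]  (* the first six iterates starting from x0 = 1 *)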

Another type of iteration function is obtained by using expressions that depend only on values of the function f(x). An example is Steffensen's method

\[ x_{n+1} = x_n - \frac{f^2 (x_n )}{f\left( x_n + f\left( x_n \right) \right) - f\left( x_n \right)} , \qquad n=0,1,2,\ldots . \]
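The same idea can be tried in Mathematica; the sketch below is only illustrative and reuses the assumed test function f(x) = x^2 - 2 and starting value x0 = 1.

f[x_] := x^2 - 2;                                        (* sample function *)
steffensenStep[x_] := x - f[x]^2/(f[x + f[x]] - f[x]);   (* uses only values of f, no derivative *)
NestList[steffensenStep, N[1, 30], 6]                    (* first six Steffensen iterates from x0 = 1 *)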
We test the iterative methods on several examples of finding a root of the equation f(x) = 0, where f is an algebraic polynomial or a transcendental function. First, we recall the main formulas used so far; a Mathematica sketch implementing a few of them is given after the list.
  1. The secant method (order of convergence is the golden ratio \( \phi = \left( 1 + \sqrt{5} \right)/2 \approx 1.618 \) ):
    \[ x_{k+1} = x_k - \frac{f(x_k ) \left( x_k - x_{k-1} \right)}{f (x_k ) - f(x_{k-1} )} , \qquad k=1,2,\ldots . \]
  2. Newton--Raphson Method (of order two):
    \[ x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )}, \quad f' (x_k ) \ne 0, \qquad k=0,1,2,\ldots . \]
  3. Newton--secant method (of order three):
    \[ z_k = x_k - \frac{f(x_k )}{f' (x_k )}, \qquad x_{k+1} = z_k - \frac{f(z_k )\left( x_k - z_k \right)}{f (x_k ) - f (z_k )}, \qquad k=0,1,2,\ldots . \]
  4. The double-Newton method with fourth-order convergence:
    \[ z_k = x_k - \frac{f(x_k )}{f' (x_k )}, \qquad x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )} - \frac{f(z_k )}{f' (z_k )} , \qquad k=0,1,2,\ldots . \]
  5. The Chebyshev method (of order three):
    \[ x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )} - \frac{\left[ f(x_k ) \right]^2 f'' (x_k )}{2 \left[ f'(x_k ) \right]^3} , \quad f' (x_k ) \ne 0, \qquad k=0,1,2,\ldots . \]
  6. The BSC (Basto--Semiao--Calheiros) method (of order three):
    \[ x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )} - \frac{\left[ f(x_k ) \right]^2 f'' (x_k )}{2 \left[ f'(x_k ) \right]^3 - 2\,f(x_k )\,f' (x_k )\,f'' (x_k )} , \qquad k=0,1,2,\ldots . \]
  7. Halley’s method (of order three):
    \[ x_{k+1} = x_k - \frac{2\,f(x_k )\,f'(x_k )}{2 \left[ f'(x_k ) \right]^2 - f (x_k )\,f'' (x_k )} , \qquad k=0,1,2,\ldots . \]
  8. Ostrowski’s method (of order three):
    \[ x_{k+1} = x_k - \frac{f(x_k )}{\sqrt{\left[ f' (x_k ) \right]^2 - f(x_k )\,f''(x_k )}} , \qquad k=0,1,2,\ldots . \]
  9. A family of iterative formulae having third order convergence:
    \[ x_{k+1} = x_k - \left( 1 + \frac{1}{2}\,\frac{L_f (x_k )}{1 - \beta\,L_f (x_k )} \right) \frac{f(x_k )}{f' (x_k )} , \quad L_f (x_k ) = \frac{f'' (x_k )\,f(x_k )}{\left[ f' (x_k )\right]^2} , \qquad k=0,1,2,\ldots . \]
    This family includes Chebyshev's method (β = 0), Halley's method (β = ½), and the super-Halley method (β = 1).
  10. Traub--Ostrowski’s method (of order four):
    \[ z_k = x_k - \frac{f(x_k )}{f' (x_k )}, \qquad x_{k+1} = x_k - \frac{f(z_k )- f(x_k )}{2\, f (z_k ) - f(x_k )} \cdot \frac{f(x_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots . \]
  11. Classical two-point optimal Ostrowski’s method (of order four):
    \[ z_k = x_k - \frac{f(x_k )}{f' (x_k )}, \qquad x_{k+1} = z_k - \frac{f(x_k )}{f (x_k ) - 2\, f(z_k )} \cdot \frac{f(z_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots . \]
  12. Laguerre’s method (of order three):
    \[ x_{k+1} = x_k - \frac{\nu\, f(x_k )}{f' (x_k ) + \sqrt{\left( \nu -1 \right)^2 \left[ f' (x_k ) \right]^2 - \nu \left( \nu -1 \right) f(x_k )\, f'' (x_k )}} , \qquad k=0,1,2,\ldots ; \]
    where ν ≠ 0,1.
  13. King’s method (of order four):
    \[ z_k = x_k - \frac{f(x_k )}{f' (x_k )}, \qquad x_{k+1} = z_k - \frac{f(x_k ) + 3\, f(z_k )}{f(x_k ) + f(z_k )} \cdot \frac{f(z_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots . \]
  14. Jarratt’s method (of order four):
    \[ z_k = x_k - \frac{f(x_k )}{f' (x_k )}, \qquad x_{k+1} = x_k - \left[ 1 - \frac{3}{2} \cdot \frac{f' (z_k ) - f' (x_k )}{3\,f' (z_k ) - f' (x_k )} \right] \frac{f(x_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots . \]
    or
    \[ x_{k+1} = x_k - J_f (x_k )\,\frac{f(x_k )}{f' (x_k )} , \quad J_f (x_k ) = \frac{3\,f' (y_k ) + f' (x_k )}{6\,f' (y_k ) -2\,f' (x_k )}, \quad y_k = x_k - \frac{2}{3}\,\frac{f(x_k )}{f' (x_k )} , \quad k =0,1,\ldots . \]
  15. Fifth-order method:
    \[ y_k = x_k - \frac{f(x_k )}{f' (x_k )} , \quad x_{k+1} = y_k - \frac{f' (y_k ) + 3\, f' (x_k )}{5\,f' (y_k ) - f' (x_k )} \cdot \frac{f (y_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots . \]
  16. Sixth-order method developed by Parhi and Gupta:
    \[ y_k = x_k - \frac{f(x_k )}{f' (x_k )} , \quad z_k = x_k - \frac{2\,f(x_k )}{f' (x_k ) + f' (y_k )} , \quad x_{k+1} = z_k - \frac{f(z_k )}{f' (x_k )} \,\frac{f' (x_k ) + f' (y_k )}{3\,f' (y_k ) - f' (x_k )} , \qquad k=0,1,2,\ldots . \]
  17. Sixth-order variant of Jarratt's method:
    \begin{align*} J_f (x_k ) &= \frac{3\,f' (y_k ) + f' (x_k )}{6\,f' (y_k ) -2\,f' (x_k )}, \quad y_k = x_k - \frac{2}{3}\,\frac{f(x_k )}{f' (x_k )} , \\ z_k &= x_k - J_f (x_k )\, \frac{f(x_k )}{f' (x_k )} , \\ x_{k+1} &= z_k - \frac{f(z_k )}{f' (z_k )} , \qquad k=0,1,2,\ldots . \end{align*}
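As mentioned before the list, the following Mathematica sketch implements a few of the formulas above (Halley's method, the classical two-point Ostrowski method, and Jarratt's method) as iteration functions. The test function f(x) = x^3 - 2x - 5 and the starting value x0 = 2 are illustrative assumptions only.

f[x_] := x^3 - 2 x - 5;    (* sample polynomial; its real root is near 2.0945515 *)

(* Halley's method, item 7 (order three) *)
halleyStep[x_] := x - 2 f[x] f'[x]/(2 f'[x]^2 - f[x] f''[x]);

(* classical two-point Ostrowski method, item 11 (order four) *)
ostrowskiStep[x_] := Module[{z}, z = x - f[x]/f'[x];
   z - f[x]/(f[x] - 2 f[z]) * f[z]/f'[x]];

(* Jarratt's method, item 14 (order four) *)
jarrattStep[x_] := Module[{y}, y = x - 2/3 f[x]/f'[x];
   x - (3 f'[y] + f'[x])/(6 f'[y] - 2 f'[x]) * f[x]/f'[x]];

NestList[halleyStep, N[2, 64], 5]      (* iterates of Halley's method *)
NestList[ostrowskiStep, N[2, 64], 5]   (* iterates of Ostrowski's method *)
NestList[jarrattStep, N[2, 64], 5]     (* iterates of Jarratt's method *)

Each NestList call returns the successive approximations, so the number of correct digits gained per step can be compared directly.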

All computations were done using Mathematica with 64-digit floating-point arithmetic. We accept an approximate solution rather than the exact root, depending on the precision \( \epsilon = 10^{-15} \) of the computer. We use the following stopping criteria for the computations: (i)   \( \left\vert x_{k+1} - x_{k} \right\vert < \epsilon , \)    (ii)   \( \left\vert f \left( x_{k+1} \right) \right\vert < \epsilon ; \) when the stopping criterion is satisfied, xk+1 is taken as the computed root.
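The stopping test itself can be coded as a small driver. The sketch below is a minimal illustration under stated assumptions: the tolerance ϵ = 10^-15 from above, an assumed test function f(x) = x^3 - 2x - 5, Newton's method as the iteration function, and 64-digit arithmetic; it reports the accepted approximation together with the number of iterations used.

f[t_] := t^3 - 2 t - 5;      (* illustrative test function only *)
step[t_] := t - f[t]/f'[t];  (* Newton's method; any of the iteration functions above could be substituted *)
eps = 10^-15;                (* tolerance from the stopping criteria *)
xk = N[2, 64];               (* initial guess carried with 64 significant digits *)
count = 0;
While[xNew = step[xk]; count++;
  Abs[xNew - xk] >= eps || Abs[f[xNew]] >= eps,  (* keep iterating while (i) or (ii) fails *)
  xk = xNew];
{xNew, count}                (* accepted approximation and number of iterations *)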

The comparison was done on the following examples of functions:

 

  1. Chun, C., Ham, Y.M., Some fourth-order modifications of Newton's method, Applied Mathematics and Computation, 2008, Vol. 197, pp. 654--658.
  2. Forsythe, G.E., Malcolm, M.A., and Moler, C.B., Computer Methods for Mathematical Computations, Prentice-Hall, Englewood Cliffs, NJ, 1977.
  3. Jarratt, P., Some fourth order multipoint iterative methods for solving equations, Mathematics of Computation, 1966, Vol. 20, No. 95, pp. 434--437.
  4. King, R.F., A family of fourth order methods for nonlinear equations, SIAM Journal on Numerical Analysis, 1973, Vol. 10, Issue 5, pp. 876--879. https://doi.org/10.1137/0710072
  5. Kou, J., Li, Y., The improvements of Chebyshev--Halley methods with fifth-order convergence, Applied Mathematics and Computation, 2007, Vol. 188, Issue 1, pp. 143--147. https://doi.org/10.1016/j.amc.2006.09.097
  6. Kung, H.T., Traub, J.F., Optimal order of one-point and multipoint iteration, Journal of the ACM, 1974, Vol. 21, No. 4, pp. 643--651.
  7. Neta, B., Numerical Methods for the Solution of Equations, California, 1983.
  8. Ostrowski, A.M., Solution of Equations in Euclidean and Banach Spaces, Academic Press, New York, 1973.
  9. Parhi, S.K., Gupta, D.K., A sixth order method for nonlinear equations, Applied Mathematics and Computation, 2008, Vol. 203, pp. 50--55.
  10. Petković, M.S., Neta, B., Petković, L.D., Džunić, J., Multipoint methods for solving nonlinear equations: A survey, Applied Mathematics and Computation, 2014, Vol. 226, pp. 635--660.
  11. Sharma, J.R., Arora, H., Efficient Ostrowski-like methods of optimal eighth and sixteenth order convergence and their dynamics, Afrika Matematika, 2019, Vol. 30, pp. 921--941. https://doi.org/10.1007/s13370-019-00691-2
  12. Wang, X., Kou, J., Li, Y., A variant of Jarratt method with sixth-order convergence, Applied Mathematics and Computation, 2008, Vol. 204, pp. 14--19.
  13. Zhang, Q., Li, J., Huang, F., Optimal Steffensen-type families for solving nonlinear equations, Applied Mathematics and Computation, 2011, Vol. 217, Issue 23, pp. 9592--9597. https://doi.org/10.1016/j.amc.2011.04.035
  14. Varona, J.L., Graphic and numerical comparison between iterative methods, The Mathematical Intelligencer, 2002, Vol. 24, pp. 37--46.

 
