Preface
This tutorial was made solely for educational purposes and was designed for students taking Applied Math 0330. It is primarily intended for students who have little or no experience with Mathematica and would like to learn more of the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel.
Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts to learn the Mathematica program, and you may distribute and cite this tutorial as long as it is credited appropriately.
I. Secant Method
In this section, we consider the problem of finding a root of the equation \( f(x) =0 \) for a sufficiently smooth function f(x). If this equation has a solution, it is called a zero (or null) of the function f. Every root-finding problem can be transformed into any number of fixed-point problems. Consider the following example.
Example: Let \( f(x) = x^5 -5x+3 \) and let us try to find its nulls, so we need to solve the equation \( x^5 -5x+3 =0 . \) Mathematica provides two real positive roots:
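To see them, we can ask for all real solutions (a minimal check; NSolve over the reals is our choice of command):
NSolve[x^5 - 5 x + 3 == 0, x, Reals]   (* three real roots; the positive ones are near 0.618 and 1.276 *)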
Let us add x to both sides of the equation: \( x^5 -5x+3+x =x . \) Expressing x, we derive another fixed-point formula \( x_{k+1} = x_k^5 - 4\, x_k + 3 . \)
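Care is needed when choosing the iteration function: for \( x_{k+1} = x_k^5 - 4\, x_k + 3 \) the iterates run away from both positive roots, because the derivative of the iteration function exceeds 1 in absolute value there. A rearrangement that does converge to the smaller positive root is \( x = \left( x^5 + 3 \right) /5 \) (a sketch; the starting value 0.5 is our choice):
g[x_] := (x^5 + 3)/5;       (* contractive near the smaller positive root *)
FixedPointList[g, 0.5, 20]  (* the iterates approach 0.618033... *)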
The formula involved in the secant method is very close to the one used in regula falsi: \( x_{k+1} = x_k - f(x_k )\, \frac{x_k - x_{k-1}}{f(x_k ) - f(x_{k-1})} , \qquad k=1,2,\ldots . \)
Example: Let us find the positive square root of 6. To start the secant method, we need to pick the first two approximations, which we choose by obvious bracketing: \( x_0 =2, \quad x_1 =3 . \)
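The same computation in a short loop (a sketch; the function definition and the number of iterations are our choices):
f[x_] := x^2 - 6;                                (* its positive null is Sqrt[6] = 2.449489... *)
x0 = 2.0; x1 = 3.0;
Do[x2 = x1 - f[x1]*(x1 - x0)/(f[x1] - f[x0]);    (* secant step *)
 Print["k= ", k, "  x= ", x2];
 {x0, x1} = {x1, x2}, {k, 1, 6}]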
Example: We consider the function \( f(x) = e^x\, \cos x - x\, \sin x \) that has one root within the interval [0,3]. We define the range of x:
f[x_] := Exp[x]*Cos[x] - x*Sin[x];
xa = 0; xb = 3;
curve = Plot[f[x], {x, xa, xb},
 PlotLabel -> "Function on given interval", PlotStyle -> {FontSize -> 11, Blue}]

xguess1 = 1.0; xguess2 = 1.5;   (* two initial guesses near the root; values chosen for illustration *)
mini = -0.2; maxi = 0.4;        (* vertical extent of the marker lines *)
Show[curve, Graphics[Line[{{xguess2, maxi}, {xguess2, mini}}]],
 Graphics[Line[{{xguess1, maxi}, {xguess1, mini}}]], Axes -> True,
 PlotLabel -> "Function with upper and lower guesses",
 PlotStyle -> {FontSize -> 10}, PlotRange -> {{0.7, 2}, {-0.2, 0.4}}]
m = (f[xguess2] - f[xguess1])/(xguess2 - xguess1);   (* slope of the secant line *)
x1 = xguess2 - f[xguess2]/m;                          (* secant estimate of the root *)
secantline[x_] = m*x + f[xguess2] - m*xguess2;
sline = Plot[secantline[x], {x, xa, x1}, PlotStyle -> {Orange}]
Show[Graphics[Line[{{xguess2, maxi}, {xguess2, mini}}]],
Graphics[Line[{{x1, maxi}, {x1, mini}}]], curve,
Graphics[Line[{{xguess1, maxi}, {xguess1, mini}}]], sline,
Axes -> True, PlotLabel -> "Function with upper and lower guesses and estimated root",
PlotStyle -> {FontSize -> 10}, PlotRange -> {{0.5, 2}, {-0.25, 0.9}}]

We make the second iteration.
m = (f[x1] - f[xguess2])/(x1 - xguess2);   (* updated secant slope *)
x2 = x1 - f[x1]/m;                          (* improved estimate of the root *)
secantline[x_] = m*x + f[x1] - m*x1;
sline = Plot[secantline[x], {x, 1, 1.3}, PlotStyle -> {Orange}]
Show[Graphics[Line[{{x1, maxi}, {x1, mini}}]], curve,
 Graphics[Line[{{x2, maxi}, {x2, mini}}]], sline,
 Axes -> True, PlotLabel -> "Second iteration and estimated root",
 PlotStyle -> {FontSize -> 10}, PlotRange -> {{0.5, 2}, {-0.25, 0.9}}]

II. Newton’s Method or Newton--Raphson Method
The following iterative method, named after the English mathematicians Isaac Newton (1642--1726) and Joseph Raphson (1648--1715), is used for solving the equation f(x) = 0: \( x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )} , \qquad k=0,1,2,\ldots . \)
Newton's method can be realized with the aid of the FixedPoint command:
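For instance, applied to the quintic from the beginning of this section (a sketch; the starting value 0.5 and the SameTest tolerance are our choices):
f[x_] := x^5 - 5*x + 3;
FixedPoint[(# - f[#]/f'[#]) &, 0.5,
 SameTest -> (Abs[#1 - #2] < 10^-12 &)]   (* Newton iteration as a fixed-point map; converges to the root near 0.618 *)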
Example: Suppose we want to find a square root of 4.5. First, we apply NewtonZero command:
When Newton's method is applied to find a square root of a positive number A, we get the Babylonian formula \( x_{k+1} = \frac{1}{2} \left( x_k + \frac{A}{x_k} \right) , \qquad k=0,1,2,\ldots . \)
The Babylonians had an accurate and simple method for finding the square roots of numbers. This method is also known as Heron's method, after the Greek mathematician who lived in the first century AD. Indian mathematicians also used a similar method as early as 800 BC. The Babylonians are credited with having first invented this square root method, possibly as early as 1900 BC.
Example: We use Newton's method to find the positive square root of 6. Starting with the first of the two initial approximations chosen above, we get
p[0] = 2.0;
Do[Print["k= ", k, " " , "p= " , p[k] = (p[k - 1] + 6/p[k - 1])/2], {k, 1, 5}]
Theorem: Let f be a twice continuously differentiable function on the interval [a,b] with \( p \in \left( a, b \right) \quad\mbox{and} \quad f(p) =0 . \) Suppose that \( f' (p) \ne 0. \) Then there exists a positive number δ such that, for any \( p_0 \in \left[ p - \delta , p+\delta \right] , \) the sequence \( \left\{ p_n \right\} \) generated by Newton's algorithm converges to p. ■
Sometimes Newton's method does not converge; the above theorem guarantees that δ exists under certain conditions, but it could be very small. Newton's method is a good way of approximating solutions, but applying it requires some intelligence: you must beware of getting an unexpected result or no result at all. So why would Newton's method fail? The derivative may be zero at the root; the tangent line at one of the iterates may be horizontal (for instance, when an iterate xn lands on a local minimum or maximum of f, the next iterate is undefined); the function may fail to be continuously differentiable; or you may have chosen a bad starting point, one that lies outside the range of guaranteed convergence. It may also happen that an initial guess close to one root sends the Newton--Raphson iteration to a location several roots away. Depending on the context, each one of these failures may be more or less likely. Degenerate roots (those where the derivative is 0) are "rare" in general, and we do not consider this case.
Finally, there is a chance that Newton's method will cycle back and forth between two values and never converge at all. This failure is illustrated in the figure below.
a1 = {Arrowheads[Medium], Arrow[{{2.5, 6.875}, {3.5, 4}}]}  (* tangent step from x = 2.5 to x = 3.5 *)
a2 = {Arrowheads[Medium], Arrow[{{3.5, 7.125}, {2.5, 4}}]}  (* tangent step from x = 3.5 back to x = 2.5 *)
b1 = Graphics[{Dashed, Red, a1}]
b2 = Graphics[{Dashed, Blue, a2}]
f[x_] = x^3 - 5*x^2 + 3.05*x + 15
graph = Plot[f[x], {x, 0.5, 4}, PlotRange -> {4, 16}, PlotStyle -> Thick]
v1 = {Arrowheads[Medium], Arrow[{{2.5, 4}, {2.5, 6.875}}]}
v2 = {Arrowheads[Medium], Arrow[{{3.5, 4}, {3.5, 7.125}}]}
b11 = Graphics[{Dashed, Red, v1}]
b22 = Graphics[{Dashed, Blue, v2}]
Show[graph, b1, b2, b11, b22]

Example: Consider the function \( f(x) = x\,e^{-x^2} \) that obviously has a root at x = 0. However, if your starting point exceeds the root of the equation \( f'(x) = 0 , \) which is \( x_0 = 1/\sqrt{2} \approx 0.707107 , \) then Newton's algorithm diverges.
D[x*Exp[-x^2], x]
FindRoot[Exp[-x^2] (1 - 2 x^2) == 0, {x, 1}]
Plot[x*Exp[-x^2], {x, -3, 3}, PlotStyle -> Thick, PlotLabel -> x*Exp[-x^2]]
p[0] = 0.8
Do[Print["k= ", k, " " , "p= " , p[k] = p[k - 1] - p[k - 1]/(1 - 2*p[k - 1]^2)], {k, 1, 15}]

Example: Consider the cubic function \( f(x) = x^3 - 0.926\,x^2 + 0.0371\,x + 0.043 . \) First we plot the function, and then define the range of x over which we seek its null.
f[x_] = x^3 - 0.926*x^2 + 0.0371*x + 0.043;
Plot[f[x], {x, 0, 1}, PlotStyle -> {Thick, Blue}]
xbegin = 0; xend = 0.5;
curve = Plot[f[x], {x, xbegin, xend}, PlotLabel ->
"Function on given interval", PlotStyle -> {Thick, FontSize -> 11}]
step = (xend - xbegin)/10;
maxi = mini = f[xbegin];   (* scan for the extreme values of f on the interval *)
Do[ If[f[i] > maxi, maxi = f[i]];
 If[f[i] < mini, mini = f[i]], {i, xbegin, xend, step}];
tot = maxi - mini
mini = mini - 0.1*tot; maxi = maxi + 0.1*tot;
x0 = 0.1;                                (* initial guess, chosen for illustration *)
x1 = x0 - f[x0]/f'[x0];                  (* Newton estimate of the root *)
tanline[x_] = f[x0] + f'[x0]*(x - x0);   (* tangent line at x0 *)
tline = Plot[tanline[x], {x, xbegin, xend}, PlotStyle -> {Thick, Orange}]
Show[Graphics[Line[{{x0, maxi}, {x0, mini}}]], curve,
 Graphics[Line[{{x1, maxi}, {x1, mini}}]], tline, Axes -> True,
 PlotLabel -> "Function with upper and lower guesses and estimated root", PlotStyle -> {FontSize -> 10}]
s = x /. FindRoot[f[x] == 0, {x, 0.3}];   (* accurate root for comparison *)
NumberForm[N[s], 20]
Example: We reconsider the function \( f(x) = e^x\, \cos x - x\, \sin x \) that has one root within the interval [0,3]. We define the range of x:
f[x_] := Exp[x]*Cos[x] - x*Sin[x];
xa = 0; xb = 3; step = (xb - xa)/10;
x[0] = xa; maxi = mini = f[xa];
Do[ x[i] = x[i - 1] + step;
 If[f[x[i]] > maxi, maxi = f[x[i]]];
 If[f[x[i]] < mini, mini = f[x[i]]];
 Print["i= ", i, ", x[i]= ", x[i], ", f[x[i]]= ", N[f[x[i]]]], {i, 1, 10}]
g[x_] := f'[x]
s = FindRoot[f[x] == 0, {x, 1.2}];   (* reference root of high accuracy *)
actual = x /. s
NumberForm[N[actual], 20]
nmaximum = 10; x0 = 1.0;   (* iteration count and starting point, chosen for illustration *)
xr[0] = x0;
For[i = 1, i <= nmaximum, i++,
 xr[i] = xr[i - 1] - f[xr[i - 1]]/g[xr[i - 1]]]
For[i = 1, i <= nmaximum, i++, Et[i] = Abs[actual - xr[i]]]
For[i = 1, i <= nmaximum, i++, et[i] = Abs[Et[i]/actual*100]]
For[i = 1, i <= nmaximum, i++,
If[i <= 1, Ea[i] = Abs[xr[i] - x0], Ea[i] = Abs[xr[i] - xr[i - 1]]]]
For[i = 1, i <= nmaximum, i++, ea[i] = Abs[Ea[i]/xr[i]*100]]
For[i = 1, i <= nmaximum, i++,
If[(ea[i] >= 5) || (i <= 1), sigdig[i] = 0,
sigdig[i] = Floor[(2 - Log[10, Abs[ea[i]/0.5]])]]]
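To inspect the accuracy measures side by side, they can be tabulated (a small display sketch; the column headings are our own):
TableForm[
 Table[{i, xr[i], Et[i], et[i], Ea[i], ea[i], sigdig[i]}, {i, 1, nmaximum}],
 TableHeadings -> {None, {"i", "xr", "Et", "et, %", "Ea", "ea, %", "digits"}}]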
xrplot = Table[xr[i], {i, 1, nmaximum}];   (* collect the iterates *)
ListPlot[xrplot, Joined -> True, PlotRange -> All,
 AxesOrigin -> {1, Min[xrplot]},
 PlotLabel -> "Estimated root as a function of number of iterations"]

Example: This example shows that Newton's method may converge slowly due to an inflection point occurring in the vicinity of the root. Let us consider the function \( f(x) = (x-0.5)^3 . \) Since x = 0.5 is a triple root, the convergence here is only linear. First we define the function and its derivative:
f[x_] = (x - 0.5)^3;
g[x_] := f'[x]
xbegin = -2; xend = 1;   (* plotting window, chosen for illustration *)
curve = Plot[f[x], {x, xbegin, xend},
 PlotLabel -> "Function on given interval", PlotStyle -> {FontSize -> 11, Blue}]

x0 = -1.0;                             (* initial guess *)
x1 = x0 - f[x0]/g[x0];                 (* first Newton iterate *)
tanline[x_] = f[x0] + g[x0]*(x - x0);  (* tangent line at x0 *)
tline = Plot[tanline[x], {x, -1.5, 0.0}, PlotStyle -> {Orange}];
Show[curve, tline, Graphics[Line[{{x0, 2}, {x0, -15}}]],
 Graphics[Line[{{x1, 2}, {x1, -15}}]], Axes -> True, PlotRange -> All,
 PlotLabel -> "Function with upper and lower guesses and estimated root", PlotStyle -> {FontSize -> 10}]

x2 = x1 - f[x1]/g[x1];                 (* second Newton iterate *)
tanline[x_] = f[x1] + g[x1]*(x - x1);  (* tangent line at x1 *)
tline = Plot[tanline[x], {x, -1, 1}, PlotStyle -> {Orange}];
Show[Graphics[Line[{{x1, 0.6}, {x1, -1}}]], curve,
 Graphics[Line[{{x2, 0.6}, {x2, -1}}]], tline, Axes -> True,
 PlotLabel -> "Second iteration and estimated root", PlotStyle -> {FontSize -> 10}, PlotRange -> {{-1, 1}, {-1, 0.7}}]

III. Chebyshev Iteration Scheme
The Chebyshev iteration is an iterative method for determining a root of the equation \( f(x) =0. \) The method is named after the Russian mathematician Pafnuty Chebyshev (1821--1894), who discovered it in 1838 as a student project (but it was not published until 1951). The scheme reads \( x_{k+1} = x_k - \frac{f(x_k )}{f' (x_k )} - \frac{\left( f(x_k ) \right)^2 f'' (x_k )}{2 \left( f' (x_k ) \right)^3} , \qquad k=0,1,2,\ldots . \)
Example: Let us reconsider the problem of determining the positive square root of 6 using the Chebyshev iteration scheme:
p[0] = 2.0;
Do[Print["k= ", k, " " , "p= " , p[k] = (p[k - 1] + 6/p[k - 1])/ 2 - (p[k - 1]^2 - 6)^2/(8*p[k - 1]^3)], {k, 1, 5}]
IV. Halley's Method
The English astronomer, geophysicist, mathematician, meteorologist, and physicist Edmond (also spelled Edmund) Halley (1656--1741) discovered the following iteration algorithm: \( x_{k+1} = x_k - \frac{2\, f(x_k )\, f' (x_k )}{2 \left( f' (x_k ) \right)^2 - f(x_k )\, f'' (x_k )} , \qquad k=0,1,2,\ldots . \)
Example: Let us compute a few iterations for the determination of the square root of 6:
p[0] = 2.0;
Do[Print["k= ", k, " " , "p= " , p[k] = p[k - 1] - 2*p[k - 1]*(p[k - 1]^2 - 6)/(3*p[k - 1]^2 + 6)], {k, 1, 5}]
V. Dekker's Method
Brent's method is a root-finding algorithm combining the bisection method, the secant method, and inverse quadratic interpolation. It has the reliability of bisection, but it can be as quick as some of the less reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation when possible, but it falls back to the more robust bisection method when necessary. Brent's method was introduced in 1973 by Richard Peirce Brent (born in 1946, Melbourne), an Australian mathematician and computer scientist; he is an emeritus professor at the Australian National University and a conjoint professor at the University of Newcastle (Australia). Brent's method is based on an earlier algorithm proposed in 1969 by Theodorus Jozef Dekker (born in 1927), a Dutch mathematician who earned his Ph.D. from the University of Amsterdam in 1958. Consequently, the method is also known as the Brent--Dekker method.
The idea to combine the bisection method with the secant method goes back to Dekker (1969). Suppose that we want to solve the equation f(x) = 0. As with the bisection method, we need to initialize Dekker's method with two points, say a0 and b0, such that \( f \left( a_0 \right) \quad\mbox{and} \quad f \left( b_0 \right) \) have opposite signs. If f is continuous on \( \left[ a_0 , b_0 \right] , \) the intermediate value theorem guarantees the existence of a solution between a0 and b0. Three points are involved in every iteration:
- bk is the current iterate, i.e., the current guess for the root of f;
- ak is the "contrapoint," i.e., a point such that \( f \left( a_k \right) \quad\mbox{and} \quad f \left( b_k \right) \) have opposite signs, so the interval \( \left[ a_k , b_k \right] \) contains the solution. Furthermore, \( \left\vert f \left( b_k \right) \right\vert \) should be less than or equal to \( \left\vert f \left( a_k \right) \right\vert , \) so that bk is a better guess for the unknown solution than ak.
- bk-1 is the previous iterate (for the first iteration, we set bk-1 = a0).
Two candidate values for the next iterate bk+1 are computed: the secant value \( s = b_k - f(b_k )\, \frac{b_k - b_{k-1}}{f(b_k ) - f(b_{k-1})} \) and the midpoint \( m = \left( a_k + b_k \right) /2 . \) If s lies strictly between bk and m, it becomes the next iterate bk+1; otherwise, the midpoint is used. Then, the value of the new contrapoint is chosen such that \( f \left( a_{k+1} \right) \quad\mbox{and} \quad f \left( b_{k+1} \right) \) have opposite signs. If \( f \left( a_k \right) \quad\mbox{and} \quad f \left( b_{k+1} \right) \) have opposite signs, then the contrapoint remains the same: ak+1 = ak. Otherwise, \( f \left( b_{k+1} \right) \quad\mbox{and} \quad f \left( b_{k} \right) \) have opposite signs, so the new contrapoint becomes ak+1 = bk.
Finally, if \( \left\vert f \left( a_{k+1} \right) \right\vert < \left\vert f \left( b_{k+1} \right) \right\vert , \) then ak+1 is probably a better guess for the solution than bk+1, and hence the values of ak+1 and bk+1 are exchanged.
This ends the description of a single iteration of Dekker's method. ■
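The description above translates almost line by line into code. Below is a minimal sketch of Dekker's iteration (the function name dekker, the tolerance, and the iteration cap are our own illustrative choices):
dekker[f_, a0_, b0_, tol_: 10^-10, maxit_: 60] :=
 Module[{a = N[a0], b = N[b0], bOld, s, m, k = 0},
  If[Abs[f[a]] < Abs[f[b]], {a, b} = {b, a}];   (* make b the better guess *)
  bOld = a;                                     (* previous iterate b_{k-1} *)
  While[Abs[f[b]] > tol && k < maxit,
   m = (a + b)/2;                               (* bisection candidate *)
   s = If[f[b] != f[bOld],
     b - f[b] (b - bOld)/(f[b] - f[bOld]),      (* secant candidate *)
     m];
   If[!(Min[b, m] < s < Max[b, m]), s = m];     (* fall back to the midpoint *)
   bOld = b; b = s;
   If[f[a] f[b] > 0, a = bOld];                 (* choose the new contrapoint *)
   If[Abs[f[a]] < Abs[f[b]], {a, b} = {b, a}];  (* keep b the better guess *)
   k++];
  b]
dekker[#^2 - 6 &, 2, 3]   (* converges to Sqrt[6] = 2.449489... *)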
Dekker's method performs well if the function f is reasonably well-behaved. However, there are circumstances in which every iteration employs the secant method, but the iterates bk converge very slowly (in particular, \( \left\vert b_k - b_{k-1} \right\vert \) may be arbitrarily small). Dekker's method requires far more iterations than the bisection method in this case.
VI. Brent's Method
Brent (1973) proposed a small modification to avoid this problem: he inserted additional tests that must be satisfied before the result of the secant method (or of interpolation) is accepted as the next iterate. Therefore, Brent's method is a hybrid method that combines the reliability of a bracketing method with the speed of open methods.
Two inequalities must be simultaneously satisfied. Given a specific numerical tolerance δ, if the previous step used the bisection method, the inequality \( \left\vert \delta \right\vert < \left\vert b_k - b_{k-1} \right\vert \) must hold to perform interpolation; otherwise, the bisection method is performed and its result is used for the next iteration.
If the previous step performed interpolation, then the inequality \( \left\vert \delta \right\vert < \left\vert b_{k-1} - b_{k-2} \right\vert \) is used instead.
Also, if the previous step used the bisection method, the inequality \( \left\vert s - b_k \right\vert < \frac{1}{2} \left\vert b_k - b_{k-1} \right\vert \) must hold; otherwise, the bisection method is performed. If the previous step performed interpolation, then the inequality \( \left\vert s - b_k \right\vert < \frac{1}{2} \left\vert b_{k-1} - b_{k-2} \right\vert \) is used instead.
Brent proved that his method requires at most \( N^2 \) iterations, where N denotes the number of iterations for the bisection method. If the function f is well-behaved, then Brent's method will usually proceed by either inverse quadratic or linear interpolation, in which case it converges superlinearly.
Furthermore, Brent's method uses inverse quadratic interpolation instead of linear interpolation (as used by the secant method) whenever \( f(b_k ), \ f(a_k ) , \mbox{ and } f(b_{k-1}) \) are distinct, which slightly increases the efficiency. As a consequence, the condition for accepting s (the value proposed by either linear interpolation or inverse quadratic interpolation) has to be changed: s has to lie between \( \left( 3 a_k + b_k \right) /4 \) and bk.
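Mathematica's built-in FindRoot provides Brent's method for bracketed problems; to our understanding, it is selected with Method -> "Brent" when the two starting values bracket the root. A one-line check on the running square-root example (the bracket [2, 3] is our choice):
FindRoot[x^2 - 6 == 0, {x, 2, 3}, Method -> "Brent"]   (* the root is Sqrt[6] = 2.449489... *)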