
# Jacobi's scheme

The simplest iterative method for solving systems of linear equations is known as Jacobi's method. Carl Gustav Jacob Jacobi (1804--1851) was a German mathematician who made fundamental contributions to elliptic functions, dynamics, differential equations, determinants, and number theory. Although much of his work was in applied mathematics, Jacobi believed in the importance of doing mathematics for its own sake. A fine teacher, he held positions at the Universities of Berlin and Königsberg (now Kaliningrad) and was one of the most famous mathematicians in Europe.

In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The method was discovered by Carl Gustav Jacob Jacobi in 1845.

The idea of this iteration method is to extract the main diagonal of the matrix of the given vector equation

${\bf A}\,{\bf x} = {\bf b} ,$
and consider the rest of matrix A as perturbation:
${\bf A} = \Lambda + {\bf M} .$
Here A is the given square n × n matrix, Λ is the diagonal matrix built from A (it is assumed that the diagonal entries of A do not vanish), and M is the matrix obtained from A by annihilating its diagonal entries. So
$\Lambda = \begin{bmatrix} a_{1,1} & 0 & 0 & \cdots & 0 \\ 0 & a_{2,2} & 0 & \cdots & 0 \\ 0 & 0 & a_{3,3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a_{n,n} \end{bmatrix} , \qquad {\bf M} = \begin{bmatrix} 0 & a_{1,2} & a_{1,3} & \cdots & a_{1,n} \\ a_{2,1} & 0 & a_{2,3} & \cdots & a_{2,n} \\ a_{3,1} & a_{3,2} & 0 & \cdots & a_{3,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \cdots & 0 \end{bmatrix} .$
Then applying the inverse of the diagonal matrix, we get
${\bf x} = \Lambda^{-1} {\bf b} - \Lambda^{-1} {\bf M}\,{\bf x} = {\bf d} + {\bf B}\,{\bf x} ,$
where
${\bf d} = \Lambda^{-1} {\bf b} = \begin{bmatrix} \frac{b_1}{a_{1,1}} \\ \frac{b_2}{a_{2,2}} \\ \vdots \\ \frac{b_n}{a_{n,n}} \end{bmatrix} , \qquad {\bf B} = - \Lambda^{-1} {\bf M} = - \begin{bmatrix} 0 & \frac{a_{1,2}}{a_{1,1}} & \frac{a_{1,3}}{a_{1,1}} & \cdots & \frac{a_{1,n}}{a_{1,1}} \\ \frac{a_{2,1}}{a_{2,2}} & 0 & \frac{a_{2,3}}{a_{2,2}} & \cdots & \frac{a_{2,n}}{a_{2,2}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{a_{n,1}}{a_{n,n}} & \frac{a_{n,2}}{a_{n,n}} & \frac{a_{n,3}}{a_{n,n}} & \cdots & 0 \end{bmatrix} .$
Since the new equation x = d + B x is of the form x = f(x), it is suitable for an iteration procedure: x(k+1) = d + B x(k). Thus, Jacobi's method uses the diagonal part of matrix A to convert the given system of equations into fixed point form.
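The fixed-point form above translates directly into code. Here is a minimal Python sketch (our illustration, not the text's Mathematica module) that forms d and B from the splitting A = Λ + M and iterates x(k+1) = d + B x(k):

```python
# Illustrative sketch: Jacobi iteration in fixed-point form
# x^(k+1) = d + B x^(k), with d = Lambda^{-1} b and B = -Lambda^{-1} M.

def jacobi_fixed_point(A, b, iterations):
    """Return x^(iterations), starting from the zero vector."""
    n = len(A)
    d = [b[i] / A[i][i] for i in range(n)]                      # Lambda^{-1} b
    B = [[0.0 if j == i else -A[i][j] / A[i][i] for j in range(n)]
         for i in range(n)]                                     # -Lambda^{-1} M
    x = [0.0] * n
    for _ in range(iterations):
        # build the entire new vector from the old one
        x = [d[i] + sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x
```

For a diagonally dominant matrix the iterates approach the solution of A x = b.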

Here is the algorithm for Jacobi iteration method:


Input: initial guess x(0) to the solution, (diagonally dominant) matrix A, right-hand side
vector b, convergence criterion
Output: solution when convergence is reached
Comments: pseudocode based on the element-based formula above

k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + aij xj(k)
            end
        end
        xi(k+1) = (bi − σ) / aii
    end
    increment k
end
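The pseudocode above can be rendered as a short runnable Python function (a sketch: the element-based update follows the pseudocode, while the concrete convergence test, the max-norm change between sweeps, is our own choice):

```python
# Runnable rendering of the Jacobi pseudocode; sigma accumulates the
# off-diagonal terms, and iteration stops once successive iterates agree
# to within tol in the max norm (our choice of stopping criterion).

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_new = []
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - sigma) / A[i][i])
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x
```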

jacMeth[mat_List, b_List, n_: 5, fp_: True] :=
 Module[{dia, low, up, len, its},
  If[n >= 5,
   dia = DiagonalMatrix[Diagonal[mat]];
   low = LowerTriangularize[mat, -1];
   up = UpperTriangularize[mat, 1];
   len = ConstantArray[0, Length[Diagonal[mat]]];
   its = If[fp == True,
     Prepend[Take[FoldList[N[LinearSolve[dia, -(low + up) . # + b]][[All, 1]] &, len, Range[n]], -6], ""],
     Prepend[Take[FoldList[LinearSolve[dia, -(low + up) . # + b][[All, 1]] &, len, Range[n]], -6], ""]];
   Grid[{{"Matrix", "A", "\[CapitalLambda]", "L", "U", "b", "x0"},
     Take[MatrixForm[#] & /@ {"", mat, dia, low, up, b, len}, -7],
     Join[{"Iterations"}, {"0"}, Take[Range[0, n], -5]], its}, Frame -> All],
   "Must have n \[GreaterEqual] 5"]];

Example 1: Let us consider the system
$\begin{split} 6 x_1 - 5 x_2 &= 7 , \\ 4 x_1 - 7 x_2 &= 1 . \end{split}$
We rewrite the given system of equations in vector form A x = b, where ${\bf A} = \begin{bmatrix} 6 & -5 \\ 4 & -7 \end{bmatrix} , \qquad {\bf b} = \begin{bmatrix} 7 \\ 1 \end{bmatrix} .$ Then we extract the diagonal matrix from A: ${\bf A} = \Lambda + {\bf M}, \qquad \Lambda = \begin{bmatrix} 6 & \phantom{-}0 \\ 0 & -7 \end{bmatrix}, \quad {\bf M} = \begin{bmatrix} 0 & -5 \\ 4 & \phantom{-}0 \end{bmatrix}$

Jacobi's method begins with solving the first equation for x₁ and the second equation for x₂, to obtain

$\begin{split} x_1 &= \frac{5 x_2 + 7}{6} , \\ x_2 &= \frac{4 x_1 -1}{7} . \end{split} \tag{1.1}$
We rewrite system (1.1) in vector form:
${\bf x} = \Lambda^{-1} {\bf b} - \Lambda^{-1}{\bf M}\,{\bf x} , \qquad \Lambda^{-1} = \begin{bmatrix} \frac{1}{6} &0 \\ 0 & -\frac{1}{7} \end{bmatrix} , \quad \Lambda^{-1}{\bf M} = \begin{bmatrix} 0 & -\frac{5}{6} \\ -\frac{4}{7} & 0 \end{bmatrix} .$
We now need an initial approximation to the solution. It turns out that it does not matter what this initial approximation is, so we might as well take x₁ = 0, x₂ = 0. We use these values in Equations (1.1) to get new values of x₁ and x₂:
$\begin{split} x_1 &= \frac{5\cdot 0 + 7}{6} = \frac{7}{6} \approx 1.16667, \\ x_2 &= \frac{4 \cdot 0 -1}{7} = -\frac{1}{7} \approx -0.142857. \end{split}$
We repeat this process (using the old values of x₁ and x₂ to get the new values of x₁ and x₂), producing the sequence of approximations given in the following table.

| n | 0 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| x₁ | 0 | $\frac{7}{6} \approx 1.16667$ | $\frac{22}{21} \approx 1.04762$ | $\frac{101}{63} \approx 1.60317$ | $\frac{682}{441} \approx 1.54649$ | $\frac{2396}{1323} \approx 1.81104$ |
| x₂ | 0 | $-\frac{1}{7} \approx -0.142857$ | $\frac{11}{21} \approx 0.52381$ | $\frac{67}{147} \approx 0.455782$ | $\frac{341}{441} \approx 0.773243$ | $\frac{2287}{3087} \approx 0.740849$ |

The successive vectors $$\displaystyle \quad \begin{bmatrix} x_1^{(k)} \\ x_2^{(k)} \end{bmatrix} \quad$$ are called iterates. For instance, when k = 5, the fifth iterate is $$\displaystyle \quad \begin{bmatrix} 1.81104 \\ 0.740849 \end{bmatrix} .$$
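The exact fractions in the table can be reproduced with Python's standard-library Fraction type, an illustrative stand-in for Mathematica's exact arithmetic (this is our sketch, not the text's code):

```python
# Reproduce the exact (rational) Jacobi iterates, starting from zero.
from fractions import Fraction

def jacobi_exact_iterates(A, b, k):
    """Return the list of iterates x^(0), ..., x^(k) as exact rationals."""
    n = len(A)
    x = [Fraction(0)] * n
    history = [list(x)]
    for _ in range(k):
        # every component is computed from the previous iterate x
        x = [(Fraction(b[i]) - sum(Fraction(A[i][j]) * x[j]
                                   for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        history.append(list(x))
    return history
```

For the system of Example 1 this reproduces, for instance, the fifth iterate (2396/1323, 2287/3087).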

Here is Mathematica code to produce the result shown above. It begins with a module, jacMeth, which is also used in the other examples.

In the code above, the parameter n_ sets the number of iterations (the display always shows the last 5 iterations); fp_ selects floating point, the default being True. If you pass False as the last argument, you get an exact solution, which produces fractions when the input coefficients are from ℚ.

Clear[A, M, B, b, diag, d, \[CapitalLambda], x, l, u, x1, x2, x3];

Now we apply the subroutine jacMeth to our problem.

jacMeth[({ {6, -5}, {4, -7} }), ({ {7}, {1} }), 5, False]

| Matrix | A | Λ | L | U | b | x0 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 6 & -5 \\ 4 & -7 \end{pmatrix}$ | $\begin{pmatrix} 6 & 0 \\ 0 & -7 \end{pmatrix}$ | $\begin{pmatrix} 0&0 \\ 4&0 \end{pmatrix}$ | $\begin{pmatrix} 0&-5 \\ 0&0 \end{pmatrix}$ | $\begin{pmatrix} 7 \\ 1 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ |

| Iterations | 0 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} \frac{7}{6} & -\frac{1}{7} \end{pmatrix}$ | $\begin{pmatrix} \frac{22}{21} & \frac{11}{21} \end{pmatrix}$ | $\begin{pmatrix} \frac{101}{63} & \frac{67}{147} \end{pmatrix}$ | $\begin{pmatrix} \frac{682}{441} & \frac{341}{441} \end{pmatrix}$ | $\begin{pmatrix} \frac{2396}{1323} & \frac{2287}{3087} \end{pmatrix}$ |

Each iteration may be plotted for the separate elements of the solution vector:

jacMethPlt[mat_List, b_List, n_] :=
 ListPlot[Transpose[
   FoldList[N[LinearSolve[DiagonalMatrix[Diagonal[mat]],
        -(LowerTriangularize[mat, -1] + UpperTriangularize[mat, 1]) . # + b]][[All, 1]] &,
    ConstantArray[0, Length[Diagonal[mat]]], Range[n]]],
  Axes -> False, Frame -> {True, True, False, False},
  FrameLabel -> {"Iteration", "Value"},
  PlotLegends -> PointLegend[Table[Subscript[x, i], {i, n}],
    LegendMarkerSize -> 10, LegendFunction -> Frame]]

jacMethPlt[({ {6, -5}, {4, -7} }), ({ {7}, {1} }), 10]

Although exact calculations involve rational numbers, their use in actual computations is impractical because the number of bits required to maintain exactness grows rapidly with the number of iterations. Therefore, we next demonstrate Jacobi's iterative method with floating-point arithmetic. First, we repeat the exact calculations, making 50 iterative steps.
jacMeth[({ {6, -5}, {4, -7} }), ({ {7}, {1} }), 50, False] // Timing

It took less than a second for Mathematica to execute the code with 50 steps. We do not present the full output because it consists of huge rational numbers not suitable for human eyes. For instance,
$x_1^{(50)} = \frac{2272544311845448533481444917041002}{1136272165922724266740722458520501} .$
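The blow-up of exact rational arithmetic can be observed outside Mathematica as well. This sketch (ours, not the text's session) repeats 50 sweeps with Python's Fraction and counts the decimal digits of the numerator of x₁:

```python
# Exact Jacobi sweeps with rationals: the value of x1 approaches 2, yet
# its exact representation accumulates dozens of digits.
from fractions import Fraction

def jacobi_rational(A, b, k):
    n = len(A)
    x = [Fraction(0)] * n
    for _ in range(k):
        x = [(Fraction(b[i]) - sum(Fraction(A[i][j]) * x[j]
                                   for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

x = jacobi_rational([[6, -5], [4, -7]], [7, 1], 50)
digits = len(str(x[0].numerator))  # dozens of digits despite x1 being close to 2
```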
Last[Prepend[ FoldList[LinearSolve[d, -(l + u) . # + b] &, {0, 0}, Range[50]], {"x1", "x2"}]] // Timing
N[%]

Expressing the rational numbers above as floating-point reals, we get the same result in a form more suitable for human eyes. So we repeat the calculations using floating-point representation.

jacMeth[({ {6, -5}, {4, -7} }), ({ {7}, {1} }), 50, True] // Timing

Finally, we obtain the same result in floating point, also in less than a second.

| Matrix | A | Λ | L | U | b | x0 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 6 & -5 \\ 4 & -7 \end{pmatrix}$ | $\begin{pmatrix} 6 & 0 \\ 0 & -7 \end{pmatrix}$ | $\begin{pmatrix} 0&0 \\ 4&0 \end{pmatrix}$ | $\begin{pmatrix} 0&-5 \\ 0&0 \end{pmatrix}$ | $\begin{pmatrix} 7 \\ 1 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ |

| Iterations | 0 | 46 | 47 | 48 | 49 | 50 |
|---|---|---|---|---|---|---|
| | (2., 1.) | (2., 1.) | (2., 1.) | (2., 1.) | (2., 1.) | (2., 1.) |

Here are the component parts of the code above, producing the same output in steps.

A = ({ {6, -5}, {4, -7} });
d = DiagonalMatrix[Diagonal[A]];
l = LowerTriangularize[A, -1];
u = UpperTriangularize[A, 1];
b = ({ {7}, {1} });
x[1] = ({ {0}, {0} });
Grid[{{"A", "d", "l", "u", "b", "x0"}, MatrixForm[#] & /@ {A, d, l, u, b, x[1]}}, Frame -> All]

The starting point does not matter for the final destination of Jacobi's iteration scheme. Let's test that with calculations using different starting points: random floating-point numbers and integers. Below, the function is applied to a list of starting points.

Last[Prepend[ FoldList[LinearSolve[d, -(l + u) . # + b] &, {#1, #2}, Range[50]], {"x1", "x2"}]] &(*starting point->*)[0., 0.][[All, 1]]
{2., 1.}
stPts = {{0., .5, .25, RandomReal[], RandomInteger[{2, 29}]}, {0., .5, .25, N[RandomReal[]], N[RandomInteger[{31, 49}]]}};
varSt = MapThread[ Last[Prepend[ FoldList[LinearSolve[d, -(l + u) . # + b] &, {#1, #2}, Range[50]], {"x1", "x2"}]] &,(*starting point->*)stPts][[All, All, 1]];
TableForm[Transpose[{stPts[[1]], varSt[[All, 1]], stPts[[2]], varSt[[All, 2]]}], TableHeadings -> {None, {"Starting\npoint", Subscript[x, 1], "Starting\npoint", Subscript[x, 2]}}]

| Starting point | x₁ | Starting point | x₂ |
|---|---|---|---|
| 0 | 2. | 0 | 1. |
| 0.5 | 2. | 0.5 | 1. |
| 0.25 | 2. | 0.25 | 1. |
| 0.0013514 | 2. | 0.742626 | 1. |
| 10 | 2. | 16 | 1. |
End of Example 1
Example 2: Let us consider the system
$\begin{split} 5 x_1 - 6 x_2 &= 4 \\ 7 x_1 - 4 x_2 &= 10 . \end{split}$
We rewrite the given system of equations in vector form A x = b, where ${\bf A} = \begin{bmatrix} 5 & -6 \\ 7 & -4 \end{bmatrix} , \qquad {\bf b} = \begin{bmatrix} 4 \\ 10 \end{bmatrix} .$ Then we extract the diagonal matrix from A: ${\bf A} = \Lambda + {\bf M}, \qquad \Lambda = \begin{bmatrix} 5 & \phantom{-}0 \\ 0 & -4 \end{bmatrix}, \quad {\bf M} = \begin{bmatrix} 0 & -6 \\ 7 & \phantom{-}0 \end{bmatrix}$ Applying the inverse of the diagonal matrix, we obtain
${\bf x} = \Lambda^{-1} {\bf b} - \Lambda^{-1} {\bf M}\, {\bf x} , \qquad \Lambda^{-1} = \begin{bmatrix} \frac{1}{5} & 0 \\ 0 & -\frac{1}{4} \end{bmatrix} , \quad \Lambda^{-1} {\bf M} = \begin{bmatrix} 0 & - \frac{6}{5} \\ - \frac{7}{4} & 0 \end{bmatrix} .$
The eigenvalues of the matrix −Λ⁻¹M are λ₁ ≈ 1.44914 and λ₂ ≈ −1.44914. Since the magnitudes of these eigenvalues exceed 1, we conclude that Jacobi's iteration scheme diverges for this matrix.
A = {{0, 6/5}, {7/4, 0}};
N[Eigenvalues[A]]
{-1.44914, 1.44914}
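As a cross-check (an illustrative Python sketch, not part of the text's session), the spectral radius of the 2×2 iteration matrix is sqrt(6/5 · 7/4), and running a few Jacobi sweeps shows the iterates blowing up:

```python
# For B = [[0, 6/5], [7/4, 0]] the eigenvalues are +-sqrt(6/5 * 7/4),
# so the spectral radius exceeds 1 and the sweeps must diverge.
import math

rho = math.sqrt((6 / 5) * (7 / 4))  # spectral radius of the iteration matrix

def jacobi_sweep(A, b, x):
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

x = [0.0, 0.0]
for _ in range(20):
    x = jacobi_sweep([[5.0, -6.0], [7.0, -4.0]], [4.0, 10.0], x)
# the components of x grow roughly like rho**k
```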
Solving for x₁ and x₂, we get
$\begin{split} x_1 &= \frac{6 x_2 + 4}{5} , \\ x_2 &= \frac{7 x_1 -10}{4} . \end{split} \tag{2.1}$
Initial approximation can be chosen x₁ = 0, x₂ = 0. We use these values in Equations (2.1) to get new values of x₁ and x₂:
$\begin{split} x_1 &= \frac{6\cdot 0 + 4}{5} = \frac{4}{5} = 0.8, \\ x_2 &= \frac{7 \cdot 0 -10}{4} = -\frac{10}{4} = -2.5. \end{split}$
We repeat this process (using the old values of x₁ and x₂ to get the new values of x₁ and x₂), producing the sequence of approximations given in the following table.

| n | 0 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| x₁ | 0 | $\frac{4}{5} = 0.8$ | $-\frac{11}{5} = -2.2$ | $-\frac{13}{25} = -0.52$ | $-\frac{341}{50} = -6.82$ | $-\frac{823}{250} = -3.292$ |
| x₂ | 0 | $-\frac{5}{2} = -2.5$ | $-\frac{11}{10} = -1.1$ | $-\frac{127}{20} = -6.35$ | $-\frac{341}{100} = -3.41$ | $-\frac{2887}{200} = -14.435$ |

Looking at these iterates, it is hard to believe that the given system has the exact solution x₁ = 2 and x₂ = 1, which Mathematica confirms:

A = {{5, -6}, {7, -4}};
b = {4, 10};
LinearSolve[A, b]
{2, 1}

Here are the last five runs:

| Matrix | A | Λ | L | U | b | x0 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 5 & -6 \\ 7 & -4 \end{pmatrix}$ | $\begin{pmatrix} 5 & 0 \\ 0 & -4 \end{pmatrix}$ | $\begin{pmatrix} 0&0 \\ 7&0 \end{pmatrix}$ | $\begin{pmatrix} 0&-6 \\ 0&0 \end{pmatrix}$ | $\begin{pmatrix} 4 \\ 10 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ |

| Iterations | 48 | 49 | 50 | 51 | 52 | 53 |
|---|---|---|---|---|---|---|
| | $(-1.08216 \times 10^8,\; -5.41082 \times 10^7)$ | $(-6.49298 \times 10^7,\; -1.89379 \times 10^8)$ | $(-2.27254 \times 10^8,\; -3.97695 \times 10^8)$ | $(-4.77234 \times 10^8,\; -2.38617 \times 10^8)$ | $(-2.86341 \times 10^8,\; -8.3516 \times 10^8)$ | $(-2.86341 \times 10^8,\; -8.3516 \times 10^8)$ |
End of Example 2
Unlike direct methods, which reach the theoretical solution in a finite number of operations, iterative methods provide approximations: under suitable conditions, they converge to the exact solution of the system of linear equations only in the limit of infinitely many steps.

For the Jacobi method to be applicable, matrix A must have no zeros on its main diagonal. It is also convenient to organize the unknown values x1, x2, … , xn into a column vector, denoted by x.

To start an iteration method, you need an initial guess vector

${\bf x}^{(0)} = \begin{pmatrix} x_1^{(0)} \\ x_2^{(0)} \\ \vdots \\ x_n^{(0)} \end{pmatrix} .$
Since convergence of Jacobi's scheme does not depend on your initial choice x(0), it could be any vector, for instance, zero. At the (k + 1)st iteration x(k+1) = d + Bx(k), the vector x(k+1) is calculated by
$$\label{EqJacobi.7} {\bf x}^{(k+1)}_j = \frac{1}{a_{j,j}} \left( b_j - \sum_{i\ne j} a_{j,i} x_i^{(k)} \right) , \qquad j=1,2,\ldots , n.$$
The vector x(k) is used in calculating x(k+1). Consequently, these two vectors must be stored separately.
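The need for separate storage can be demonstrated directly. In this sketch (ours), overwriting the vector during the sweep silently changes the method, since later components already see the freshly updated earlier ones (this in-place sweep is in fact the Gauss--Seidel update):

```python
# Jacobi requires two vectors: the new iterate is built entirely from the
# old one. Overwriting in place yields a Gauss-Seidel-style sweep instead.

def jacobi_step(A, b, x):
    n = len(A)
    # reads only the old vector x
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def overwriting_step(A, b, x):
    n = len(A)
    x = list(x)
    for i in range(n):
        # component i sees the already-updated components 0..i-1
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x
```

Starting Example 1 from zero, the two variants already disagree in the second component after one sweep.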

If the diagonal elements of matrix A are much larger than the off-diagonal elements, the entries of B should all be small and the Jacobi iteration should converge. We introduce a very important definition that reflects diagonally dominant property of a matrix.

A square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in a row is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the n×n matrix A = [𝑎ij] is diagonally dominant if $\left\vert a_{ii} \right\vert \ge \sum_{j\ne i} \left\vert a_{ij} \right\vert \qquad \forall i.$
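This definition is easy to check programmatically. A small Python checker (our sketch; the strict variant is the hypothesis used by Theorem 2 below):

```python
# Row-wise test of (strict) diagonal dominance: |a_ii| must be at least
# (or, strictly, greater than) the sum of the other entries' magnitudes.

def is_diagonally_dominant(A, strict=False):
    n = len(A)
    for i in range(n):
        off = sum(abs(A[i][j]) for j in range(n) if j != i)
        if abs(A[i][i]) < off or (strict and abs(A[i][i]) == off):
            return False
    return True
```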
If A is strictly diagonally dominant, the matrix B = −Λ⁻¹M of the Jacobi iteration has the property
$\sum_{j=1}^n \left\vert b_{i,j} \right\vert = \sum_{j\ne i} \frac{|a_{i,j}|}{|a_{i,i}|} < 1 \qquad \mbox{for} \quad i=1,2,\ldots ,n.$
Thus,
$\| {\bf B} \|_{\infty} = \| \Lambda^{-1} {\bf M} \|_{\infty} = \max_{1 \le i \le n} \left( \sum_{j=1}^n \left\vert b_{i,j} \right\vert \right) = \max_{1 \le i \le n} \left( \sum_{j\ne i} \left\vert \frac{a_{i,j}}{a_{i,i}} \right\vert \right) < 1 .$
It follows, then, that if A is strictly diagonally dominant, the Jacobi iteration converges to the solution of A x = b.
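The bound just derived can be evaluated directly from A. An illustrative sketch (ours) computing the infinity norm of B = −Λ⁻¹M row by row:

```python
# ||B||_inf for the Jacobi iteration matrix: the largest row sum of
# off-diagonal magnitudes divided by the diagonal magnitude.

def jacobi_iteration_norm(A):
    n = len(A)
    return max(sum(abs(A[i][j]) for j in range(n) if j != i) / abs(A[i][i])
               for i in range(n))
```

For the matrix of Example 1 this gives 5/6 < 1 (convergence), while for the matrix of Example 2 it gives 7/4 > 1, consistent with the observed divergence.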

Theorem 2: If a system of n linear equations in n variables has a strictly diagonally dominant coefficient matrix, then it has a unique solution and both the Jacobi and the Gauss--Seidel method converge to it.

Although this theorem does not strictly require a separate proof, since it is a special (finite-dimensional) case of a more general fixed-point theorem, we present its proof.

For j = 1, 2, … , n, let $\alpha_j = \sum_{i=1}^{j-1} \left\vert a_{j,i} \right\vert , \qquad \beta_j = \sum_{i=j+1}^{n} \left\vert a_{j,i} \right\vert \qquad\mbox{and} \qquad M_j = \frac{\beta_j}{| a_{j,j} | - \alpha_j} .$ Since A is strictly diagonally dominant, it follows that $\left\vert a_{j,j} \right\vert > \alpha_j + \beta_j , \qquad j=1,2,\ldots , n.$ Consequently, Mj < 1 for j = 1, 2, … , n. Thus, $M = \max_{1 \le j \le n} M_j < 1.$

We prove the Gauss--Seidel case (the Jacobi case follows from the norm bound established above). Write A = Λ + L + U, where Λ is the diagonal part of A, L is its strictly lower triangular part, and U is its strictly upper triangular part; set S = Λ + L, so that the Gauss--Seidel iteration matrix is B = −S⁻¹U. We will show that $\| {\bf B} \|_{\infty} = \max_{{\bf x} \ne 0} \frac{\| {\bf B}\,{\bf x} \|_{\infty}}{\| {\bf x} \|_{\infty}} \le M < 1 .$

Let x be a nonzero vector in ℝn and let y = B x. Choose k so that $\| {\bf y} \|_{\infty} = \max_{1 \le i \le n} \, | y_i | = | y_k | .$ It follows from the definition of B that ${\bf y} = {\bf B} \,{\bf x} = -{\bf S}^{-1} {\bf U}\, {\bf x} ,$ so that S y = −U x; that is, Λ y = −(L y + U x), and hence ${\bf y} = -\Lambda^{-1} \left( {\bf L}\,{\bf y} + {\bf U}\, {\bf x} \right) .$ Comparing the kth coordinates of each side, we see that $y_k = -\frac{1}{a_{k,k}} \left( \sum_{i=1}^{k-1} a_{k,i} y_i + \sum_{i=k+1}^n a_{k,i} x_i \right) ,$ and hence $\| {\bf y} \|_{\infty} = \left\vert y_k \right\vert \le \frac{1}{| a_{k,k} |} \left( \alpha_k \| {\bf y} \|_{\infty} + \beta_k \| {\bf x} \|_{\infty} \right) .$ Rearranging gives $\left( | a_{k,k} | - \alpha_k \right) \| {\bf y} \|_{\infty} \le \beta_k \| {\bf x} \|_{\infty} ,$ so $\frac{\| {\bf B}\,{\bf x} \|_{\infty}}{\| {\bf x} \|_{\infty}} = \frac{\| {\bf y} \|_{\infty}}{\| {\bf x} \|_{\infty}} \le M_k \le M .$ Therefore $\| {\bf B} \|_{\infty} = \max_{{\bf x} \ne 0} \frac{\| {\bf B}\,{\bf x} \|_{\infty}}{\| {\bf x} \|_{\infty}} \le M < 1 ,$ and hence the iteration converges to the solution of A x = b.
Example 3: We consider the system of linear equations $\begin{split} 5\,x_1 - 4\, x_2 + x_3 &= 5, \\ 5\,x_1 + 7\, x_2 -4\, x_3 &= 21, \\ 3\, x_1 - 7\, x_2 + 8\, x_3 &= -9 , \end{split} \tag{3.1}$ with a matrix that is not diagonally dominant: ${\bf A} = \begin{bmatrix} 5 & -4 & \phantom{-}1 \\ 5 & \phantom{-}7 & -4 \\ 3 & -7 & \phantom{-}8 \end{bmatrix} .$ Indeed, $5 = |-4| +1, \qquad 7 < 5 + |-4| = 9, \qquad 8 < 3 + |-7| = 10 .$ We can check that the given system has the unique solution x₁ = 2, x₂ = 1, x₃ = −1.

A = {{5, -4, 1}, {5, 7, -4}, {3, -7, 8}};
b = {5, 21, -9};
LinearSolve[A, b]
{2, 1, -1}

However, the code above returns a nested list if you define vector b as a column vector. To obtain a flat solution vector in that case, use the following modification.

b = ({ {5}, {21}, {-9} });
LinearSolve[A, b][[All,1]]

Upon extracting the diagonal matrix from A, we split the given system (3.1) into the following form: $\left( \Lambda + {\bf M} \right) {\bf x} = {\bf b} \qquad \iff \qquad {\bf x} = \Lambda^{-1} {\bf b} - \Lambda^{-1} {\bf M} \,{\bf x} ,$ where ${\bf d} = \Lambda^{-1} {\bf b} = \begin{bmatrix} \frac{1}{5} & 0 & 0 \\ 0 & \frac{1}{7} & 0 \\ 0 & 0 & \frac{1}{8} \end{bmatrix} \begin{pmatrix} \phantom{-}5 \\ 21 \\ -9 \end{pmatrix} = \begin{bmatrix} 1 \\ 3 \\ - \frac{9}{8} \end{bmatrix} , \qquad {\bf B} = - \Lambda^{-1} {\bf M} = \begin{bmatrix} 0 & \frac{4}{5} & - \frac{1}{5} \\ - \frac{5}{7} & 0 & \frac{4}{7} \\ - \frac{3}{8} & \frac{7}{8} & 0 \end{bmatrix} .$ Matrix B has one real eigenvalue λ₁ ≈ -0.362726, and two complex conjugate eigenvalues λ₂ ≈ 0.181363 + 0.308393j, λ₃ ≈ 0.181363 - 0.308393j, where j is the imaginary unit of the complex plane ℂ, so j² = −1.

B= {{0, 4/5, -1/5},{-5/7,0,4/7},{-3/8,7/8,0}};
Eigenvalues[B]
{Root[13 - # + 280 #^3 &, 1], Root[13 - # + 280 #^3 &, 3], Root[13 - # + 280 #^3 &, 2]}
Since all eigenvalues have absolute value less than 1, we expect Jacobi's method to be convergent.
Sqrt[0.181363^2 + 0.308393^2]
0.357769
The code above shows that the absolute value of complex number λ = x + jy, with x = 0.181363 and y = 0.308393, is $$\displaystyle |\lambda | = | x + {\bf j}\,y | = \sqrt{x^2 + y^2} \approx 0.357769 < 1 .$$
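Since every eigenvalue of B has magnitude below 1, plain floating-point Jacobi sweeps should settle on the solution (2, 1, −1). A quick Python check (our sketch, not the text's session):

```python
# Floating-point Jacobi sweeps for the system of Example 3, starting
# from the zero vector; the spectral radius of B is well below 1, so the
# iterates converge rapidly.

def jacobi_solve(A, b, k):
    n = len(A)
    x = [0.0] * n
    for _ in range(k):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

x = jacobi_solve([[5.0, -4.0, 1.0], [5.0, 7.0, -4.0], [3.0, -7.0, 8.0]],
                 [5.0, 21.0, -9.0], 100)
```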

Although we know the exact solution, x₁ = 2, x₂ = 1, x₃ = −1, of the given system of linear equations (3.1), convergence of Jacobi's iteration scheme does not depend on the initial guess. So we start with zero: ${\bf x}^{(0)} = \left[ 0, 0, 0 \right] .$

\[CapitalLambda] = DiagonalMatrix[Diagonal[A]]; MatrixForm[%]
$$\displaystyle \begin{pmatrix} 5 &0&0 \\ 0&7&0 \\ 0&0&8 \end{pmatrix}$$
M = A - \[CapitalLambda]; MatrixForm[%]
$$\displaystyle \begin{pmatrix} 0 &-4&1 \\ 5&0&-4 \\ 3&-7&0 \end{pmatrix}$$
Inverse[\[CapitalLambda]] . b[[All, 1]]
{1, 3, -(9/8)}
B = -Inverse[\[CapitalLambda]] . M

Using the general formula $ {\bf x}^{(k+1)} = {\bf d} + {\bf B}\,{\bf x}^{(k)} , \qquad k=0,1,2,\ldots , \qquad {\bf d} = \begin{bmatrix} 1 \\ 3 \\ - \frac{9}{8} \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} 0 & \frac{4}{5} & - \frac{1}{5} \\ - \frac{5}{7} & 0 & \frac{4}{7} \\ - \frac{3}{8} & \frac{7}{8} & 0 \end{bmatrix} , \tag{3.2}$ we obtain the first approximation: ${\bf x}^{(1)} = {\bf d} + {\bf B}\,{\bf x}^{(0)} = {\bf d} = \begin{bmatrix} 1 \\ 3 \\ - \frac{9}{8} \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ - 1.125 \end{bmatrix} .$ To find the second approximand, we ask Mathematica for help:
d = {1,3,-9/8};
B= {{0, 4/5, -1/5},{-5/7,0,4/7},{-3/8,7/8,0}};
x2 = d + B.d
{29/8, 23/14, 9/8}
or
N[%]
{3.625, 1.64286, 1.125}
${\bf x}^{(2)} = {\bf d} + {\bf B}\, {\bf d} = \begin{bmatrix} \frac{29}{8} \\ \frac{23}{14} \\ \frac{9}{8} \end{bmatrix} = \begin{bmatrix} 3.625 \\ 1.64286 \\ 1.125 \end{bmatrix} .$ The next approximand is
d = {1,3,-9/8};
B= {{0, 4/5, -1/5},{-5/7,0,4/7},{-3/8,7/8,0}};
x3 = d + B.x2
{117/56, 59/56, -(67/64)}
N[%]
{2.08929, 1.05357, -1.04688}
${\bf x}^{(3)} = {\bf d} + {\bf B}\, {\bf x}^{(2)} = \begin{bmatrix} \frac{117}{56} \\ \frac{59}{56} \\ -\frac{67}{64} \end{bmatrix} = \begin{bmatrix} 2.08929 \\ 1.05357 \\ -1.04688 \end{bmatrix} .$ Now it is time to organize our calculations in a loop.

jacMeth[({ {5, -4, 1}, {5, 7, -4}, {3, -7, 8} }), ({ {5}, {21}, {-9} })]
| Matrix | A | Λ | L | U | b | x0 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 5 & -4 & 1 \\ 5 & 7 & -4 \\ 3 & -7 & 8 \end{pmatrix}$ | $\begin{pmatrix} 5 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 8 \end{pmatrix}$ | $\begin{pmatrix} 0&0&0 \\ 5&0&0 \\ 3&-7&0 \end{pmatrix}$ | $\begin{pmatrix} 0&-4&1 \\ 0&0&-4 \\ 0&0&0 \end{pmatrix}$ | $\begin{pmatrix} 5 \\ 21 \\ -9 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$ |

| Iterations | 0 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| | $\left( 0, 0, 0 \right)$ | $\left( 1, 3, -\frac{9}{8} \right)$ | $\left( \frac{29}{8}, \frac{23}{14}, \frac{9}{8} \right)$ | $\left( \frac{117}{56}, \frac{59}{56}, -\frac{67}{64} \right)$ | $\left( \frac{4597}{2240}, \frac{713}{784}, -\frac{221}{224} \right)$ | $\left( \frac{15091}{7840}, \frac{3043}{3136}, -\frac{2813}{2560} \right)$ |

We repeat calculations with floating point output.

jacMeth[({ {5, -4, 1}, {5, 7, -4}, {3, -7, 8} }), ({ {5}, {21}, {-9} }), 5]
| Matrix | A | Λ | L | U | b | x0 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 5 & -4 & 1 \\ 5 & 7 & -4 \\ 3 & -7 & 8 \end{pmatrix}$ | $\begin{pmatrix} 5 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 8 \end{pmatrix}$ | $\begin{pmatrix} 0&0&0 \\ 5&0&0 \\ 3&-7&0 \end{pmatrix}$ | $\begin{pmatrix} 0&-4&1 \\ 0&0&-4 \\ 0&0&0 \end{pmatrix}$ | $\begin{pmatrix} 5 \\ 21 \\ -9 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$ |

| Iterations | 0 | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| | (0, 0, 0) | (1., 3., −1.125) | (3.625, 1.64286, 1.125) | (2.08929, 1.05357, −1.04688) | (2.05223, 0.909439, −0.986607) | (1.92487, 0.970344, −1.09883) |

We repeat calculations with 20 runs, displaying only the initial and last five:

jacMeth[({ {5, -4, 1}, {5, 7, -4}, {3, -7, 8} }), ({ {5}, {21}, {-9} }), 20]
| Matrix | A | Λ | L | U | b | x0 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 5 & -4 & 1 \\ 5 & 7 & -4 \\ 3 & -7 & 8 \end{pmatrix}$ | $\begin{pmatrix} 5 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 8 \end{pmatrix}$ | $\begin{pmatrix} 0&0&0 \\ 5&0&0 \\ 3&-7&0 \end{pmatrix}$ | $\begin{pmatrix} 0&-4&1 \\ 0&0&-4 \\ 0&0&0 \end{pmatrix}$ | $\begin{pmatrix} 5 \\ 21 \\ -9 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$ |

| Iterations | 0 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|
| | (0, 0, 0) | (2., 1., −1.) | (2., 1., −1.) | (2., 1., −1.) | (2., 1., −1.) | (2., 1., −1.) |
End of Example 3

Theorem 3: If the Jacobi or the Gauss-Seidel method converges for a system of n linear equations in n unknowns, then it must converge to the solution of the system.

The idea of the proof is quite simple: to apply an iterative method, we first transformed the linear algebra problem A x = b into a fixed point problem x = f(x), and only then applied the iterative method to the latter. To prove Theorem 3, we go backward: from the fixed point equation to the linear algebra one.

Convergence means that "as iterations increase, the values of the iterates get closer and closer to a limiting value." This means that the x(k) converge to some vector r. As usual, we denote by c the true solution of the system A x = b.

We must show that r = c. Recall that Jacobi's iterates satisfy the equation $\Lambda\,{\bf x}^{(k+1)} = {\bf b} + \left( \Lambda - {\bf A} \right) {\bf x}^{(k)} , \qquad k=0,1,2,\ldots . \tag{P3.1}$ The Gauss--Seidel method leads to a similar equation where A is further decomposed into upper and lower triangular parts. Applying the limit to both sides of Eq. (P3.1), we obtain $\Lambda\,{\bf r} = {\bf b} + \left( \Lambda - {\bf A} \right) {\bf r} \qquad \iff \qquad {\bf 0} = {\bf b} - {\bf A}\,{\bf r} .$ Therefore, the vector $$\displaystyle {\bf r} = \lim_{k\to \infty} {\bf x}^{(k)}$$ satisfies the equation A x = b, so it must be equal to c.
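Theorem 3 can be illustrated numerically: iterate Jacobi on the system of Example 3 above and confirm that the limiting vector r satisfies A r = b by inspecting the residual b − A r (a sketch of ours, not the text's code):

```python
# Run enough Jacobi sweeps that the iterates stop changing, then check
# that the limit is a genuine solution of A x = b via the residual.

def jacobi_limit(A, b, k):
    n = len(A)
    x = [0.0] * n
    for _ in range(k):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[5.0, -4.0, 1.0], [5.0, 7.0, -4.0], [3.0, -7.0, 8.0]]
b = [5.0, 21.0, -9.0]
r = jacobi_limit(A, b, 200)
residual = [b[i] - sum(A[i][j] * r[j] for j in range(3)) for i in range(3)]
```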

Example 4: Let us consider the system of four equations with four unknowns:
$\begin{split} 9 x_1 - 8 x_2 + 5 x_3 - 4 x_4 &= 9 , \\ x_1 + 8 x_2 -5 x_3 + 3 x_4 &= 7 , \\ -2 x_1 -4 x_2 + 7 x_3 -6 x_4 &= -11, \\ 2 x_1 -4 x_2 -5 x_3 + 6 x_4 &= 9 . \end{split} \tag{4.1}$
The matrix of this system ${\bf A} = \begin{bmatrix} \phantom{-}9 & -8 & \phantom{-}5 & -4 \\ \phantom{-}1 & \phantom{-}8 & -5 & \phantom{-}3 \\ -2 & -4 & \phantom{-}7 & -6 \\ \phantom{-}2 & -4 & -5 & \phantom{-}6 \end{bmatrix} \tag{4.2}$ is singular of rank 3:
A = {{9, -8, 5, -4}, {1, 8, -5, 3}, {-2, -4, 7, -6}, {2, -4, -5, 6}};
MatrixRank[A]
3
Although we know one of its solutions, x₁ = 2, x₂ = 1, x₃ = 3, x₄ = 4, we demonstrate the iterative method, namely Jacobi's. Since the nullspace of A is spanned by the vector [12, 31, 124, 120], the general solution of Eq. (4.1) depends on a real parameter t ∈ ℝ: ${\bf x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 3 \\ 4 \end{bmatrix} + t \begin{bmatrix} 12 \\ 31 \\ 124 \\ 120 \end{bmatrix} \in \mathbb{R}^{4\times 1} .$
A = {{9, -8, 5, -4}, {1, 8, -5, 3}, {-2, -4, 7, -6}, {2, -4, -5, 6}};
NullSpace[A]
{{12, 31, 124, 120}}
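The nullspace claim is easy to verify by hand or in a few lines of Python (our check, not part of the text's session):

```python
# Direct verification that A annihilates the reported nullspace vector.
A = [[9, -8, 5, -4], [1, 8, -5, 3], [-2, -4, 7, -6], [2, -4, -5, 6]]
v = [12, 31, 124, 120]
Av = [sum(A[i][j] * v[j] for j in range(4)) for i in range(4)]
# every component of A v is exactly zero
```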
Upon extracting diagonal matrix from A, we split it into sum of two matrices ${\bf A} = \Lambda + {\bf M} , \qquad \Lambda = \begin{bmatrix} 9 & 0 & 0 & 0 \\ 0 & 8 & 0 & 0 \\ 0 & 0 & 7 & 0 \\ 0 & 0 & 0 & 6 \end{bmatrix} , \quad {\bf M} = \begin{bmatrix} \phantom{-}0 & -8 & \phantom{-}5 & -4 \\ \phantom{-}1 & \phantom{-}0 & -5 & \phantom{-}3 \\ -2 & -4 & \phantom{-}0 & -6 \\ \phantom{-}2 & -4 & -5 & \phantom{-}0 \end{bmatrix} .$ Then the given system of equations (4.2) can be rewritten in equivalent form ${\bf x} = \Lambda^{-1} {\bf b} - \Lambda^{-1} {\bf M}\,{\bf x} = {\bf d} + {\bf B}\,{\bf x} ,$ where ${\bf d} = \Lambda^{-1} {\bf b} = \begin{bmatrix} 1 \\ \frac{7}{8} \\ -\frac{11}{7} \\ \frac{3}{2} \end{bmatrix} , \quad {\bf B} = - \Lambda^{-1} {\bf M} = \begin{bmatrix} 0 & \frac{8}{9} & - \frac{5}{9} & \frac{4}{9} \\ -\frac{1}{8} & 0 & \frac{5}{8} & - \frac{3}{8} \\ \frac{2}{7} & \frac{4}{7} & 0 & \frac{6}{7} \\ - \frac{1}{3} &\frac{2}{3} & \frac{5}{6} & 0\end{bmatrix} .$
A = {{9, -8, 5, -4}, {1, 8, -5, 3}, {-2, -4, 7, -6}, {2, -4, -5, 6}};
diag = {{9, 0, 0, 0}, {0, 8, 0, 0}, {0, 0, 7, 0}, {0, 0, 0, 6}};
M = A - diag;
Inverse[diag] . {9, 7, -11, 9}
{1, 7/8, -(11/7), 3/2}
B = -Inverse[diag] . M
{{0, 8/9, -(5/9), 4/9}, {-(1/8), 0, 5/8, -(3/8)}, {2/7, 4/7, 0, 6/7}, {-(1/3), 2/3, 5/6, 0}}
The Jacobi scheme generates approximants according to the formula ${\bf x}^{(k+1)} = {\bf d} + {\bf B}\,{\bf x}^{(k)} , \qquad k=0, 1, 2, \ldots . \tag{4.3}$ Eigenvalues of matrix B can be determined with Mathematica
N[Eigenvalues[B]]
{1., -0.576902 + 0.664232 I, -0.576902 - 0.664232 I, 0.153804}
Since one of the eigenvalues is λ₁ = 1, we do not expect convergence of Jacobi's iteration procedure. The other eigenvalues, λ₂ = 0.153804, λ₃ = −0.576902 + 0.664232j, and λ₄ = −0.576902 − 0.664232j, have magnitudes less than 1.
Sqrt[0.5769020858621225^2 + 0.6642324328522395^2]
0.879784
Starting with the zero vector x(0) = 0, we obtain the first approximant x(1) = d. The second and third approximations become ${\bf x}^{(2)} = {\bf d} + {\bf B}\,{\bf x}^{(1)} = {\bf d} + {\bf B}\,{\bf d} = \begin{pmatrix} \frac{209}{63} \\ -\frac{89}{112} \\ \frac{1}{2} \\ \frac{37}{84} \end{pmatrix}$ and ${\bf x}^{(3)} = {\bf d} + {\bf B}\,{\bf x}^{(2)} = {\bf d} + {\bf B} \left( {\bf d} + {\bf B}\,{\bf d} \right) = {\bf d} + {\bf B}\,{\bf d} + {\bf B}^2 {\bf d} = \begin{pmatrix} \frac{40}{189} \\ \frac{175}{288} \\ -\frac{1235}{1764} \\ \frac{425}{1512} \end{pmatrix} .$
diag = {{9, 0, 0, 0}, {0, 8, 0, 0}, {0, 0, 7, 0}, {0, 0, 0, 6}};
M = A - diag ;
d = {1, 7/8, -(11/7), 3/2}; B = -Inverse[diag] . M ;
x2 = d + B . d
{209/63, -(89/112), 1/2, 37/84}
x3 = d + B . d + B . B . d
{40/189, 175/288, -(1235/1764), 425/1512}
Now it is time to organize our calculations in a loop.

| n | 0 | 1 | 2 | 3 | 4 | 5 | 10 |
|---|---|---|---|---|---|---|---|
| x₁ | 0 | $1$ | $\frac{209}{63} \approx 3.31746$ | $\frac{40}{189} \approx 0.21164$ | $\approx 2.054$ | $\approx 2.34034$ | $\approx 2.31208$ |
| x₂ | 0 | $\frac{7}{8}$ | $-\frac{89}{112} \approx -0.794643$ | $\frac{175}{288} \approx 0.607639$ | $\approx 0.305567$ | $\approx -0.427675$ | $\approx -0.128371$ |
| x₃ | 0 | $-\frac{11}{7}$ | $\frac{1}{2} = 0.5$ | $-\frac{1235}{1764} \approx -0.700113$ | $\approx -0.922808$ | $\approx 0.262426$ | $\approx -0.257853$ |
| x₄ | 0 | $\frac{3}{2}$ | $\frac{37}{84} \approx 0.440476$ | $\frac{425}{1512} \approx 0.281085$ | $\approx 1.25112$ | $\approx 0.250038$ | $\approx 0.729597$ |
The numerical values in the table above show no sign of convergence to the particular solution x = (2, 1, 3, 4) for this system of equations.

We now perform a numerical experiment to check whether Jacobi's iteration depends on the initial guess, which we take closer to the true values: x(0) = [1.5, 0.5, 2.5, 3.5]. Using iteration (4.3), we calculate Jacobi's approximants.

| n | 0 | 1 | 2 | 3 | 4 | 5 | 10 |
|---|---|---|---|---|---|---|---|
| x₁ | 1.5 | 1.61111 | 2.16138 | 1.84245 | 1.84634 | 2.09821 | 1.99034 |
| x₂ | 0.5 | 0.9375 | 0.731647 | 0.810433 | 0.906534 | 0.738926 | 0.815846 |
| x₃ | 2.5 | 2.14286 | 2.35317 | 2.35591 | 2.18519 | 2.37932 | 2.2872 |
| x₄ | 3.5 | 3.41667 | 3.37368 | 3.22828 | 3.3894 | 3.3099 | 3.35227 |

Mathematica confirms these calculations.

Last[Prepend[ FoldList[N[d + B . #] &, {0, 0, 0, 0}, Range[5]], {"x1", "x2", "x3", "x4"}]]
{2.34034, -0.427675, 0.262426, 0.250038}

TableForm[ Transpose[ Join[Prepend[ FoldList[N[d + B . #] &, {1.5, 0.5, 2.5, 3.5}, Range[5]], {"x1", "x2", "x3", "x4"}], {Last[ Prepend[FoldList[N[d + B . #] &, {1.5, 0.5, 2.5, 3.5}, Range[10]], {"x1", "x2", "x3", "x4"}]]}]], TableHeadings -> {None, {"n", "0", "1", "2", "3", "4", "5", "10"}}]

This table of iterative values shows that the first coordinate oscillates around the true value x₁ = 2, but we cannot claim convergence of the other coordinates, except possibly x₂ = 1.

jacMeth[({{9, -8, 5, -4}, {1, 8, -5, 3}, {-2, -4, 7, -6}, {2, -4, -5, 6}} ), ({ {9}, {7}, {-11}, {9} }), 50]
| Matrix | A | Λ | L | U | b | x0 |
|---|---|---|---|---|---|---|
| | $\begin{pmatrix} 9 & -8 & 5 & -4 \\ 1 & 8 & -5 & 3 \\ -2 & -4 & 7 & -6 \\ 2 & -4 & -5 & 6 \end{pmatrix}$ | $\begin{pmatrix} 9 & 0 & 0 & 0 \\ 0 & 8 & 0 & 0 \\ 0 & 0 & 7 & 0 \\ 0 & 0 & 0 & 6 \end{pmatrix}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ -2 & -4 & 0 & 0 \\ 2 & -4 & -5 & 0 \end{pmatrix}$ | $\begin{pmatrix} 0 & -8 & 5 & -4 \\ 0 & 0 & -5 & 3 \\ 0 & 0 & 0 & -6 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ | $\begin{pmatrix} 9 \\ 7 \\ -11 \\ 9 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$ |

| Iterations | 0 | 46 | 47 | 48 | 49 | 50 |
|---|---|---|---|---|---|---|
| | (0, 0, 0, 0) | (1.67103, 0.131896, −0.455655, 0.651516) | (1.65994, 0.137018, −0.46018, 0.651207) | (1.66687, 0.135692, −0.460686, 0.654547) | (1.66746, 0.133257, −0.4566, 0.650932) | (1.66142, 0.137093, −0.460923, 0.652518) |

jacMethPlt[A, b, 50]
End of Example 4

Input: square matrix A and vector b
n is the size of A and vector b
while precision condition not satisfied do
    for i = 1 : n do
        s = 0
        for j = 1 : n do
            if j ≠ i then
                s = s + 𝑎i,j xj
            endif
        endfor
        yi = (bi − s)/𝑎i,i
    endfor
    x = y
endwhile

