
# Successive overrelaxation or SOR method

The process of correcting an equation by modifying one unknown is sometimes called relaxation. The Gauss–Seidel algorithm performs successive relaxations by moving from equation to equation, modifying one unknown at a time. In many cases, convergence can be accelerated substantially by overrelaxation, which involves a predefined factor ω > 0.

In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. It was devised simultaneously by David M. Young Jr. (1923--2008) and by Stanley P. Frankel (1919--1978) in 1950 for the purpose of automatically solving linear systems on digital computers.

When the SOR method is applied to the traditional fixed point equation (3), it generates a sequence of approximations according to the formula

${\bf x}^{(k+1)}_{SOR} = \left( 1 - \omega \right) {\bf x}^{(k)}_{SOR} + \omega\,f\left( {\bf x}^{(k)}_{SOR} \right) , \qquad k=0,1,2,\ldots ,$
where values of the overrelaxation factor ω > 1 are used to speed up convergence of a slow-converging process, while values of ω < 1 are often used to help establish convergence of a diverging iterative process or to speed up the convergence of an overshooting process. With ω = 1, we recover the Gauss–Seidel method. The optimal choice of ω never exceeds 2; it is often in the neighborhood of 1.9.
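The effect of the relaxation factor is already visible for a scalar fixed point problem. The sketch below is illustrative (the contraction f(x) = 0.9x + 1 is our own choice, not taken from the text); note that for a scalar linear map the best factor 1/(1 − f′) may be large, whereas for SOR applied to linear systems ω is confined to (0, 2), as discussed further below.

```python
# Relaxed fixed-point iteration x_{k+1} = (1 - w) x_k + w f(x_k).
# Illustrative example: f(x) = 0.9 x + 1 has fixed point x* = 10,
# and the plain iteration (w = 1) converges slowly, with rate 0.9.

def relaxed_iteration(f, x0, w, num_steps):
    x = x0
    for _ in range(num_steps):
        x = (1 - w) * x + w * f(x)
    return x

f = lambda x: 0.9 * x + 1

plain = relaxed_iteration(f, 0.0, 1.0, 50)   # still about 0.05 away from 10
# for this scalar map the optimal factor is 1/(1 - 0.9) = 10,
# which lands on the fixed point in a single step
fast = relaxed_iteration(f, 0.0, 10.0, 1)
```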

To describe the application of overrelaxation to solving the system of linear equations (1) or (2), we follow the Gauss–Seidel approach and decompose the coefficient matrix A into a sum of three matrices:

${\bf A} = \Lambda + {\bf L} + {\bf U} ,$
where Λ is the diagonal matrix extracted from A, L is the strictly lower triangular part of A, and U is the strictly upper triangular part of A:
$\Lambda = \begin{bmatrix} a_{1,1} & 0 & 0 &\cdots & 0 \\ 0 & a_{2,2} & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a_{n,n}\end{bmatrix} , \qquad {\bf L} = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 \\ a_{2,1} & 0 & 0 & \cdots & 0 \\ a_{3,1} & a_{3,2} & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \cdots & 0 \end{bmatrix} , \qquad {\bf U} = \begin{bmatrix} 0 & a_{1,2} & a_{1,3} & \cdots & a_{1,n} \\ 0 & 0 & a_{2,3} & \cdots & a_{2,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} .$
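This splitting is straightforward to form numerically; a short numpy sketch (the 3×3 sample matrix is the one from Example 8 below):

```python
import numpy as np

# Split A into diagonal, strictly lower, and strictly upper parts.
A = np.array([[5., -4.,  1.],
              [5.,  7., -4.],
              [3., -7.,  8.]])

Lam = np.diag(np.diag(A))   # diagonal matrix Lambda
L   = np.tril(A, k=-1)      # strictly lower triangular part
U   = np.triu(A, k=1)       # strictly upper triangular part
# Lam + L + U reproduces A exactly
```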
More specifically, if x(k) is the solution at step k and xGS(k+1) is the update at step k + 1 according to the Gauss–Seidel method, the update formula of the SOR method is:
${\bf x}_{SOR}^{(k+1)} = \omega\,{\bf x}_{GS}^{(k+1)} + \left( 1 - \omega \right) {\bf x}^{(k)} ,$
where ω is a real parameter to be set. Obviously, if ω = 1, the SOR algorithm reduces to the Gauss–Seidel method. The explicit update formula of the SOR method can be obtained from that of the Gauss–Seidel method simply by adding the contribution due to x(k):
\begin{align*} x_1^{(k+1)} &= \frac{\omega}{a_{1,1}} \left( b_1 - \sum_{j=2}^n a_{1,j} x_j^{(k)} \right) + \left( 1 - \omega \right) x_1^{(k)} , \\ x_2^{(k+1)} &= \frac{\omega}{a_{2,2}} \left( b_2 - \sum_{j=3}^n a_{2,j} x_j^{(k)} - a_{2,1} x_1^{(k+1)}\right) + \left( 1 - \omega \right) x_2^{(k)} , \\ \cdots & \qquad \cdots \\ x_i^{(k+1)} &= \frac{\omega}{a_{i,i}} \left( b_i - \sum_{j=i+1}^n a_{i,j} x_j^{(k)} - \sum_{j=1}^{i-1} a_{i,j} x_j^{(k+1)}\right) + \left( 1 - \omega \right) x_i^{(k)} , \tag{9} \\ \cdots & \qquad \cdots \\ x_n^{(k+1)} &= \frac{\omega}{a_{n,n}} \left( b_n - \sum_{j=1}^{n-1} a_{n,j} x_j^{(k+1)}\right) + \left( 1 - \omega \right) x_n^{(k)} . \end{align*}


Inputs: A, b, ω
Output: φ

Choose an initial guess φ to the solution
repeat until convergence
    for i from 1 until n do
        set σ to 0
        for j from 1 until n do
            if j ≠ i then
                set σ to σ + 𝑎i,j φj
            end if
        end (j-loop)
        set φi to (1 − ω)φi + ω(bi − σ) / 𝑎i,i
    end (i-loop)
    check if convergence is reached
end (repeat)
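The pseudocode above translates almost line by line into Python. This is a sketch, with the stopping test (change in φ below a tolerance) chosen as one reasonable option, and a small symmetric positive definite system as illustrative data:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=500):
    """SOR following the pseudocode: phi_i is overwritten in place,
    so later rows in the same sweep already use updated components."""
    n = len(b)
    phi = np.zeros(n)                     # initial guess
    for _ in range(max_iter):
        phi_old = phi.copy()
        for i in range(n):
            sigma = sum(A[i, j] * phi[j] for j in range(n) if j != i)
            phi[i] = (1 - omega) * phi[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(phi - phi_old) < tol:
            break
    return phi

# illustrative symmetric positive definite system
A = np.array([[4., 1.],
              [1., 3.]])
b = np.array([1., 2.])
x = sor(A, b, omega=1.25)
```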

The same algorithm can be written with a while loop:

input A and b
input ω
n is the size of A
while precision conditions do
    for i = 1 : n do
        s = 0
        for j = 1 : n do
            if j ≠ i then
                s = s + 𝑎i,j xj
            end if
        end for
        xi = ω (bi − s) / 𝑎i,i + (1 − ω) xi
    end for
end while

MATLAB implementation:

function [x,r,k]=RM(A,b,Omega,kmax,eps1)
% kmax: maximum number of iterations
% eps1: given tolerance
% Omega: relaxation parameter
% stopping criteria: norm(r)/norm(b)<=eps1 or k>=kmax
% Output: x the kth iterate, r the kth residual,
%         k the number of iterations
% initial guess is x=0, so r=b
n=length(b); r=b; x=zeros(n,1); nb=norm(b);
E=tril(A,-1); D=diag(diag(A));
M=(1/Omega)*D+E;    % SOR splitting matrix M = D/Omega + E
k=0; e=1;
while e>eps1 && k<kmax
    dx=M\r;         % lower triangular correction solve
    x=x+dx;         % saxpy operation
    r=r-A*dx;       % gaxpy operation
    e=norm(r)/nb;
    k=k+1;
end
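For readers without MATLAB, the same residual-update form of SOR (x ← x + M⁻¹r with splitting matrix M = Λ/ω + L) can be sketched in Python; the test system is illustrative:

```python
import numpy as np

def sor_residual(A, b, omega, kmax=200, tol=1e-10):
    """Residual form of SOR: repeatedly solve M dx = r and update
    x <- x + dx, r <- r - A dx, with M = D/omega + E (E strictly lower)."""
    x = np.zeros(len(b))
    r = b.copy()
    M = np.diag(np.diag(A)) / omega + np.tril(A, -1)
    nb = np.linalg.norm(b)
    k = 0
    while np.linalg.norm(r) / nb > tol and k < kmax:
        dx = np.linalg.solve(M, r)   # M is lower triangular
        x = x + dx
        r = r - A @ dx
        k += 1
    return x, r, k

A = np.array([[4., 1.],
              [1., 3.]])
b = np.array([1., 2.])
x, r, k = sor_residual(A, b, omega=1.25)
```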


Theorem 4 (Kahan, 1958): A necessary condition for the SOR method to converge is ω ∈ (0, 2).
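The necessity of the condition has a short standard argument. Writing the SOR iteration in matrix form with the splitting A = Λ + L + U gives the iteration matrix
${\bf L}_{\omega} = \left( \Lambda + \omega {\bf L} \right)^{-1} \left[ \left( 1 - \omega \right) \Lambda - \omega {\bf U} \right] .$
Both factors are triangular, so
$\det {\bf L}_{\omega} = \frac{\det\left[ \left( 1 - \omega \right) \Lambda - \omega {\bf U} \right]}{\det\left( \Lambda + \omega {\bf L} \right)} = \frac{\left( 1 - \omega \right)^n \det \Lambda}{\det \Lambda} = \left( 1 - \omega \right)^n .$
Since the determinant equals the product of the eigenvalues, the spectral radius satisfies ρ(Lω) ≥ |1 − ω|, and convergence requires ρ(Lω) < 1; hence |1 − ω| < 1, that is, ω ∈ (0, 2).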

Example 8: Let us reconsider the system from Example 3: $\begin{split} 5\,x_1 - 4\, x_2 + x_3 &= 5, \\ 5\,x_1 + 7\, x_2 -4\, x_3 &= 21, \\ 3\, x_1 - 7\, x_2 + 8\, x_3 &= -9 , \end{split} \tag{3.1}$ whose matrix is not diagonally dominant: ${\bf A} = \begin{bmatrix} 5 & -4 & \phantom{-}1 \\ 5 & \phantom{-}7 & -4 \\ 3 & -7 & \phantom{-}8 \end{bmatrix} .$
The SOR scheme yields \begin{align*} x_1^{(k+1)} &= \frac{\omega}{5} \left( 5 + 4\,x_2^{(k)} - x_3^{(k)} \right) + \left( 1 - \omega \right) x_1^{(k)} , \\ x_2^{(k+1)} &= \frac{\omega}{7} \left( 21 + 4\, x_3^{(k)} - 5 \, x_1^{(k+1)} \right) + \left( 1 - \omega \right) x_2^{(k)} , \\ x_3^{(k+1)} &= \frac{\omega}{8} \left( -9 - 3\,x_1^{(k+1)} +7 \, x_2^{(k+1)} \right) + \left( 1 - \omega \right) x_3^{(k)} . \end{align*} Now it is the right time to organize our calculations in a loop.
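These three update formulas organize directly into a loop; a Python sketch (the value ω = 1.1 is an illustrative choice):

```python
import numpy as np

def sor_sweep(x, omega):
    """One SOR sweep for system (3.1), written exactly as the
    three update formulas above (components updated in order)."""
    x1, x2, x3 = x
    x1 = omega / 5 * (5 + 4 * x2 - x3) + (1 - omega) * x1
    x2 = omega / 7 * (21 + 4 * x3 - 5 * x1) + (1 - omega) * x2
    x3 = omega / 8 * (-9 - 3 * x1 + 7 * x2) + (1 - omega) * x3
    return np.array([x1, x2, x3])

x = np.zeros(3)
for _ in range(300):
    x = sor_sweep(x, omega=1.1)
# x approaches the exact solution (2, 1, -1)
```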

[Table: successive approximations x₁, x₂, x₃ for n = 0, 1, …, 5]
End of Example 8
Example 9: Let us reconsider the system of four equations with four unknowns:
$\begin{split} 9 x_1 - 8 x_2 + 5 x_3 - 4 x_4 &= 9 , \\ x_1 + 8 x_2 - 5 x_3 + 3 x_4 &= 7 , \\ -2 x_1 - 4 x_2 + 7 x_3 - 6 x_4 &= -11 , \\ 2 x_1 - 4 x_2 - 5 x_3 + 6 x_4 &= 9 . \end{split}$
The matrix of this system ${\bf A} = \begin{bmatrix} \phantom{-}9 & -8 & \phantom{-}5 & -4 \\ \phantom{-}1 & \phantom{-}8 & -5 & \phantom{-}3 \\ -2 & -4 & \phantom{-}7 & -6 \\ \phantom{-}2 & -4 & -5 & \phantom{-}6 \end{bmatrix} \tag{4.2}$ is singular, of rank 3.
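The singularity claim is easy to verify numerically; in fact the rows satisfy R₃ + 0.8 R₂ + 0.6 R₄ = 0, and numpy confirms the rank:

```python
import numpy as np

A = np.array([[ 9., -8.,  5., -4.],
              [ 1.,  8., -5.,  3.],
              [-2., -4.,  7., -6.],
              [ 2., -4., -5.,  6.]])

rank = np.linalg.matrix_rank(A)   # 3: one linear dependence among the rows
det = np.linalg.det(A)            # zero up to rounding
```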
The SOR scheme yields \begin{align*} x_1^{(k+1)} &= \frac{\omega}{9} \left( 9 + 8\,x_2^{(k)} -5\, x_3^{(k)} + 4\, x_4^{(k)} \right) + \left( 1 - \omega \right) x_1^{(k)} , \\ x_2^{(k+1)} &= \frac{\omega}{8} \left( 7 + 5\, x_3^{(k)} - 3\, x_4^{(k)} - x_1^{(k+1)} \right) + \left( 1 - \omega \right) x_2^{(k)} , \\ x_3^{(k+1)} &= \frac{\omega}{7} \left( -11 + 6\,x_4^{(k)} +2\, x_1^{(k+1)} +4 \, x_2^{(k+1)} \right) + \left( 1 - \omega \right) x_3^{(k)} , \\ x_4^{(k+1)} &= \frac{\omega}{6} \left( 9 - 2\,x_1^{(k+1)} +4 \, x_2^{(k+1)} + 5\,x_3^{(k+1)} \right) + \left( 1 - \omega \right) x_4^{(k)} . \end{align*} Now it is the right time to organize our calculations in a loop.

[Table: successive approximations x₁, x₂, x₃, x₄ for n = 0, 1, …, 5]
End of Example 9
Example 10: We demonstrate the performance of three iterative methods for solving the following system of equations: ${\bf A}\,{\bf x} = {\bf b} , \tag{10.1}$ where ${\bf A} = \begin{bmatrix} \phantom{-}4 & -1 & 2 \\ -2 & \phantom{-}4 & 5 \\ \phantom{-}1 & \phantom{-}2 & 5 \end{bmatrix}, \qquad {\bf b} = \begin{pmatrix} -2 \\ -4 \\ -5 \end{pmatrix} .$ This system has the true solution x = [1, 2, −2]. We start with Jacobi's scheme by extracting the diagonal matrix from A: $\Lambda = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{bmatrix} , \qquad {\bf M} = \begin{bmatrix} \phantom{-}0 & -1 & 2 \\ -2 & \phantom{-}0 & 5 \\ \phantom{-}1 & \phantom{-}2 & 0 \end{bmatrix} .$ Although matrix A of the given system is not diagonally dominant, Jacobi's scheme converges because all eigenvalues of the iteration matrix ${\bf B} = \Lambda^{-1}{\bf M}$ are less than 1 in absolute value.
A = {{4, -1, 2}, {-2, 4, 5}, {1, 2, 5}};
diag = DiagonalMatrix[{4, 4, 5}];
B = Inverse[diag] . (A - diag)
{{0, -(1/4), 1/2}, {-(1/2), 0, 5/4}, {1/5, 2/5, 0}}
N[Eigenvalues[B]]
{-0.946897, 0.702665, 0.244232}
We write Jacobi's iterations in matrix form: ${\bf x}^{(k+1)} = {\bf d} - {\bf B}\,{\bf x}^{(k)} , \qquad k=0,1,2,\ldots ;$ where ${\bf d} = \Lambda^{-1} {\bf b} = \begin{pmatrix} -\frac{1}{2} \\ -1 \\ -1 \end{pmatrix} , \qquad {\bf B} = \Lambda^{-1} {\bf M} = \begin{bmatrix} \phantom{-}0 & -\frac{1}{4} & \frac{1}{2} \\ -\frac{1}{2} & \phantom{-}0 & \frac{5}{4} \\ \phantom{-}\frac{1}{5} & \phantom{-}\frac{2}{5} & 0 \end{bmatrix} .$
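In this matrix form, Jacobi's method and its convergence check take only a few lines; a numpy sketch:

```python
import numpy as np

A = np.array([[ 4., -1., 2.],
              [-2.,  4., 5.],
              [ 1.,  2., 5.]])
b = np.array([-2., -4., -5.])

Lam = np.diag(np.diag(A))
M = A - Lam                              # off-diagonal part
B = np.linalg.solve(Lam, M)              # B = Lambda^{-1} M, as above

rho = max(abs(np.linalg.eigvals(B)))     # spectral radius, below 1

x = np.zeros(3)
for _ in range(500):                     # Jacobi: x <- Lambda^{-1} (b - M x)
    x = np.linalg.solve(Lam, b - M @ x)
# x approaches the true solution (1, 2, -2)
```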

[Table: Jacobi approximations x₁, x₂, x₃ for n = 0, 1, …, 5]
Application of the Gauss--Seidel method to system (10.1) of equations gives \begin{align*} x_1^{(k+1)} &= \frac{1}{4} \left( -2 + x_2^{(k)} -2\, x_3^{(k)} \right) , \\ x_2^{(k+1)} &= \frac{1}{4} \left( -4 + 2\,x_1^{(k+1)} -5\, x_3^{(k)} \right) , \\ x_3^{(k+1)} &= \frac{1}{5} \left( -5 - x_1^{(k+1)} -2\,x_2^{(k+1)} \right) . \end{align*}

[Table: Gauss–Seidel approximations x₁, x₂, x₃ for n = 0, 1, …, 5]
The SOR scheme yields \begin{align*} x_1^{(k+1)} &= \frac{\omega}{4} \left( -2 + x_2^{(k)} -2\, x_3^{(k)} \right) + \left( 1 - \omega \right) x_1^{(k)} , \\ x_2^{(k+1)} &= \frac{\omega}{4} \left( -4 - 5\, x_3^{(k)} + 2 \, x_1^{(k+1)} \right) + \left( 1 - \omega \right) x_2^{(k)} , \\ x_3^{(k+1)} &= \frac{\omega}{5} \left( -5 - x_1^{(k+1)} -2 \, x_2^{(k+1)} \right) + \left( 1 - \omega \right) x_3^{(k)} . \end{align*}
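The Gauss–Seidel and SOR sweeps for system (10.1) differ only in the factor ω, so one routine serves both; a sketch (ω = 1 recovers plain Gauss–Seidel, and other values such as 1.9 can be tried the same way):

```python
import numpy as np

def sweep(x, omega):
    """One Gauss-Seidel/SOR sweep for system (10.1);
    omega = 1 gives plain Gauss-Seidel."""
    x = x.copy()
    x[0] = omega / 4 * (-2 + x[1] - 2 * x[2]) + (1 - omega) * x[0]
    x[1] = omega / 4 * (-4 + 2 * x[0] - 5 * x[2]) + (1 - omega) * x[1]
    x[2] = omega / 5 * (-5 - x[0] - 2 * x[1]) + (1 - omega) * x[2]
    return x

x = np.zeros(3)
for _ in range(300):
    x = sweep(x, omega=1.0)   # plain Gauss-Seidel
# x approaches the true solution (1, 2, -2)
```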

[Table: SOR approximations x₁, x₂, x₃ for n = 0, 1, …, 5]
When ω = 1.9, we have:

[Table: SOR approximations x₁, x₂, x₃ for n = 0, 1, …, 5, with ω = 1.9]
End of Example 10
