MATLAB TUTORIAL for the First Course. Part VII: Tridiagonal linear systems.

Prof. Vladimir A. Dobrushkin

This tutorial contains many MATLAB scripts.
You, as the user, are free to use all of the codes for your needs, and you may distribute this tutorial and refer to it as long as it is credited appropriately. Any comments and/or contributions to this tutorial are welcome; you can send your remarks to

Email Vladimir Dobrushkin

Tridiagonal linear systems

  The solution of linear systems of equations is one of the most important areas of computational mathematics. We are not going to present this topic in detail---it deserves a special course. Instead, we consider one particular and very important case, when the coefficient matrix is tridiagonal:

\[ {\bf A}\,{\bf x} = {\bf b} , \]
where x is an unknown n-vector, b is a given n-vector, and A is a tridiagonal \( n \times n \) matrix
\[ {\bf A} = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0 \\ a_{21} & a_{22} & a_{23} & \cdots & 0 \\ 0 & a_{32} & a_{33} & \cdots & 0 \\ \vdots& \vdots & \vdots & \ddots & a_{n-1,n} \\ 0&0&0& a_{n, n-1} & a_{n,n} \end{bmatrix} . \]
Before proceeding with the algorithm, let us make a notational simplification. Instead of writing the system in the traditional matrix notation and indexing style, we will use \( l_i , \) \( d_i , \) and \( u_i \) to denote the lower-diagonal, diagonal, and upper-diagonal elements, respectively:
\begin{split} l_i &= a_{i,i-1} , \qquad 2 \le i \le n , \\ d_i &= a_{i,i} , \qquad 1 \le i \le n , \\ u_i &= a_{i, i+1} , \qquad 1 \le i \le n-1 , \end{split}
where we adopt the convention that \( l_1 = 0 \) and \( u_n = 0 . \) Under this notation, the augmented matrix corresponding to the system is
\[ \left[ {\bf A} \, \vert \, {\bf b} \right] = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0 & | & b_1 \\ a_{21} & a_{22} & a_{23} & \cdots & 0 & | & b_2 \\ 0 & a_{32} & a_{33} & \cdots & 0 & | & b_3 \\ \vdots& \vdots & \vdots & \ddots & a_{n-1,n} & | & b_{n-1} \\ 0&0&0& a_{n, n-1} & a_{n,n} & | & b_n \end{bmatrix} . \]
Then we can store the entire problem using just the four vectors for l, d, u, and b, instead of the entire n-by-n matrix, which is mostly zeroes anyway.
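In MATLAB, these three bands can be extracted from a full matrix (or reassembled into one) with the built-in diag function; the short sketch below, using a small matrix invented purely for illustration, shows the correspondence:

```matlab
% A small tridiagonal matrix (entries chosen only for illustration)
A = [4 1 0 0;
     2 5 1 0;
     0 2 6 1;
     0 0 2 7];
d = diag(A)';      % main diagonal:  [4 5 6 7]
l = diag(A,-1)';   % lower diagonal: [2 2 2]
u = diag(A,+1)';   % upper diagonal: [1 1 1]
% The full matrix can be rebuilt from the three bands:
B = diag(d) + diag(l,-1) + diag(u,+1);   % B is equal to A
```

For large n this reduces the storage from \( n^2 \) entries to about \( 3n . \)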

 This system of algebraic equations can be solved using the standard Gaussian elimination procedure, whose first stage reduces the problem to an upper triangular form; this stage is usually referred to as forward elimination (FE). Once it is completed, the second stage, called backward substitution (BS), finds the actual solution. Therefore, this algorithm is usually called FEBS, or, in computational jargon, "progonka." In engineering, the FEBS method is associated with Llewellyn H. Thomas of Bell Laboratories, who solved a simple Poisson problem (see the example below) using this method in 1946. Historically, the prominent Soviet mathematician Israel Moiseevich Gelfand (1913--2009) discovered the FEBS algorithm in 1933, while still a sophomore. He refused to associate his name with FEBS because, in his opinion, it was a very simple application of Gaussian elimination; instead, he suggested the label "progonka" for the FEBS algorithm.

  Using the elimination procedure, the augmented matrix is reduced to an equivalent upper triangular form:

\[ \left[ {\bf T} \, \vert \, {\bf g} \right] = \begin{bmatrix} \delta_1 & u_{1} & 0 & \cdots & 0 & | & g_1 \\ 0 & \delta_{2} & u_{2} & \cdots & 0 & | & g_2 \\ 0 & 0 & \delta_{3} & \cdots & 0 & | & g_3 \\ \vdots& \vdots & \vdots & \ddots & u_{n-1} & | & g_{n-1} \\ 0&0&0& 0 & \delta_{n} & | & g_n \end{bmatrix} , \]
where
\[ \delta_1 = d_1 , \quad \delta_2 = d_2 - u_1 (l_2 /\delta_1 ), \quad \delta_3 = d_3 - u_2 (l_3 /\delta_2 ), \quad \cdots , \quad \delta_k = d_k - u_{k-1} \left( l_k / \delta_{k-1} \right) , \quad 2\le k \le n, \]
provided that \( \delta_{k-1} \ne 0 \) at every step.
Similarly,
\[ g_1 = b_1 , \quad g_2 = b_2 - g_1 (l_2 /\delta_1 ), \quad g_3 = b_3 - g_2 (l_3 /\delta_2 ), \quad \cdots , \quad g_k = b_k - g_{k-1} \left( l_k / \delta_{k-1} \right) , \quad 2\le k \le n. \]
The augmented matrix \( \left[ {\bf T} \, \vert \, {\bf g} \right] \) is row equivalent to the original augmented matrix \( \left[ {\bf A} \, \vert \, {\bf b} \right] , \) meaning that we can progress from one to the other using elementary row operations. Thus, the two augmented matrices represent systems with exactly the same solution sets. Moreover, the solution is now easy to obtain since we can solve the last equation \( \delta_n x_n = g_n \) to get \( x_n = g_n / \delta_n , \) and then use this value in the previous equation to get \( x_{n-1} , \) and so on. Again, carrying out the backward substitution stage requires the assumption that each \( \delta_k \ne 0, \quad 1 \le k \le n . \)
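To see these formulas in action, consider the small \( 3 \times 3 \) system (entries chosen purely for illustration)
\[ \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} . \]
Forward elimination gives
\[ \delta_1 = 2, \quad \delta_2 = 2 - 1\cdot \tfrac12 = \tfrac32 , \quad \delta_3 = 2 - 1 \cdot \tfrac23 = \tfrac43 , \qquad g_1 = 1, \quad g_2 = 2 - 1 \cdot \tfrac12 = \tfrac32 , \quad g_3 = 3 - \tfrac32 \cdot \tfrac23 = 2 , \]
and backward substitution then yields
\[ x_3 = \frac{2}{4/3} = \frac32 , \quad x_2 = \left( \tfrac32 - 1 \cdot \tfrac32 \right) / \tfrac32 = 0 , \quad x_1 = \left( 1 - 1 \cdot 0 \right) / 2 = \tfrac12 , \]
which indeed satisfies all three original equations.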

function x=FEBS(u,d,l,b)
% FEBS (progonka) algorithm for the tridiagonal linear system Ax = b.
% l, d, and u hold the lower-diagonal, diagonal,
% and upper-diagonal elements, respectively:
% d is an n-vector, l and u are (n-1)-vectors,
% b is the n-vector of the right-hand side.
n = numel(b);
x = zeros(size(b));   % preallocate the solution vector
l = [NaN,l];          % pad so that l(i) = a_{i,i-1}, i = 2,...,n
u = [u,NaN];          % pad so that u(i) = a_{i,i+1}, i = 1,...,n-1
% Forward elimination: d and b are overwritten by delta and g
for i = 2:n
    if d(i-1) == 0
        error('Zero pivot encountered: delta(%d) = 0',i-1)
    end
    d(i) = d(i) - u(i-1)*l(i)/d(i-1);
    b(i) = b(i) - b(i-1)*l(i)/d(i-1);
end
% Backward substitution
if d(n) == 0
    error('Zero pivot encountered: delta(%d) = 0',n)
end
x(n) = b(n)/d(n);
for i = n-1:-1:1
    x(i) = (b(i) - u(i)*x(i+1))/d(i);
end
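A simple way to check the routine above is to compare its output with MATLAB's backslash operator on a small diagonally dominant system; the following script is a sketch of such a test, with the test data generated at random:

```matlab
n = 6;
l = rand(1,n-1);          % lower diagonal
u = rand(1,n-1);          % upper diagonal
d = 4 + rand(1,n);        % main diagonal, dominant by construction
b = rand(1,n);
A = diag(d) + diag(l,-1) + diag(u,+1);   % assemble the full matrix
x = FEBS(u,d,l,b);
norm(A*x.' - b.')         % residual; should be near machine epsilon
```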

Theorem: If the tridiagonal matrix A is strictly diagonally dominant, that is, \( |d_i | > |l_i | + |u_i | , \quad 1 \le i \le n \) (with the convention \( l_1 = u_n = 0 \)), then the FEBS algorithm will succeed in producing the correct solution to the original linear system, within the limitations of rounding error. ■
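The hypothesis of the theorem is easy to verify directly from the three stored vectors; a minimal sketch, with the convention \( l_1 = u_n = 0 \) implemented by zero padding:

```matlab
% Strict diagonal dominance test for a tridiagonal matrix stored as
% (n-1)-vectors l, u and an n-vector d, as in the FEBS routine above.
isDominant = @(l,d,u) all( abs(d) > abs([0,l]) + abs([u,0]) );
isDominant([-1 -1], [2.01 2.01 2.01], [-1 -1])   % logical 1 (true)
isDominant([-1 -1], [2 2 2], [-1 -1])            % logical 0 (not strict)
```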

 

Example: Consider the Dirichlet problem on the interval [0,1]:

\[ -u'' + u = f(x), \qquad x\in (0,1), \qquad u(0) = u(1) =0. \]
Here f is a known function, and we seek u that satisfies the differential equation and the boundary conditions at x=0 and x=1.

We divide the interval [0,1] into n equal subintervals \( [ x_{k-1} , x_k ] , \) according to

\[ 0 = x_0 < x_1 < x_2 < \cdots < x_{n-1} < x_n =1, \]
with \( h = x_k - x_{k-1} \) (therefore, \( x_k = kh \) ), and let \( U (x) \) denote the approximation to u(x). We can use Taylor approximations:
\[ u'' (x) = \frac{u(x-h) -2\, u(x) + u(x+h)}{h^2} - \frac{h^2}{12} \, u^{(4)} (\eta ) , \]
for some \( \eta \in [x-h, x+h] . \) We drop the remainder term and replace u with U, which yields
\[ -\frac{U(x-h) -2\, U(x) + U(x+h)}{h^2} + U(x) = f(x) . \]
This holds at any interior point \( x \) for which \( x \pm h \in [0,1] . \) To get a practical computational problem, we impose the latter only at our grid points; i.e., we seek the n-1 values \( U_k = U (x_k ) , \quad 1 \le k \le n-1, \) that satisfy the recurrence:
\[ -U_{k-1} + \left( 2 + h^2 \right) U_k - U_{k+1} = h^2 f(x_k) , \qquad 1 \le k \le n-1, \]
where the boundary conditions supply \( U_0 = U_n = 0 . \)

This is a tridiagonal system of linear equations. Written out in matrix-vector form, we have
\[ \begin{bmatrix} 2 + h^2 & -1 & 0 & 0 & 0 & \cdots & 0 \\ -1 & 2+h^2 & -1 & 0 & 0& \cdots & 0 \\ 0 & -1 & 2 + h^2 & -1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0&0& \cdots & -1 & 2+ h^2 \end{bmatrix} \, \begin{bmatrix} U_1 \\ U_2 \\ \vdots \\ U_{n-2} \\ U_{n-1} \end{bmatrix} = h^2 \begin{bmatrix} f(x_1 ) \\ f(x_2 ) \\ \vdots \\ f(x_{n-2} ) \\ f(x_{n-1} ) \end{bmatrix} . \]
Moreover, this system is strictly diagonally dominant, since \( 2 + h^2 > |-1| + |-1| = 2 , \) so we can apply the FEBS algorithm to produce the solution.

Using the input function \( f(x) = 100 \left( x - 0.55 \right)^2 , \) we solve the boundary value problem.

n=500;                            % number of subintervals
h=1/n;                            % mesh size
t=linspace(0,1,n+1);              % grid points x_0, ..., x_n
u=-ones(1,n-2);                   % upper diagonal
l=u;                              % lower diagonal (same entries)
b=100*(t(2:end-1)-0.55).^2*h^2;   % right-hand side h^2*f(x_k) at interior points
d=(2+h^2)*ones(1,n-1);            % main diagonal
z=FEBS(u,d,l,b);                  % interior values U_1, ..., U_{n-1}
plot(t,[0,z,0],'LineWidth',2);    % append the boundary values U_0 = U_n = 0
grid
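The same discrete problem can also be solved with MATLAB's sparse machinery, which provides an independent check of the FEBS result; this sketch reuses n, h, b, and z from the script above:

```matlab
% Assemble the (n-1)-by-(n-1) tridiagonal matrix sparsely and solve with backslash
e = ones(n-1,1);
A = spdiags([-e, (2+h^2)*e, -e], -1:1, n-1, n-1);
w = A \ b.';          % column vector of interior values U_1, ..., U_{n-1}
max(abs(w.' - z))     % should be of the order of machine epsilon
```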