MATLAB TUTORIAL for the First Course. Part IV: Finite Difference Schemes

Prof. Vladimir A. Dobrushkin

This tutorial contains many MATLAB scripts.
You, as the user, are free to use all of the codes for your needs, and you may distribute this tutorial and refer to it as long as it is credited appropriately. Any comments and/or contributions to this tutorial are welcome; you can send your remarks to

Email Vladimir Dobrushkin

Finite Difference Schemes

Methods involving difference quotient approximations for derivatives can be used for solving certain second-order boundary value problems. Consider the Dirichlet boundary value problem for the linear differential equation

\[ x'' (t) = p(t)\,x' (t) + q(t)\,x(t) + r(t) \qquad \mbox{subject to} \quad x(a) = \alpha, \quad x(b) = \beta \]
over interval [a,b]. Form a partition of [a,b] using the uniform mesh points \( a=t_0 < t_1 < \cdots < t_m =b , \) where \( t_j = a+ jh, \quad j=0,1,2,\ldots , m \) with step length \( h =(b-a)/m . \) We use the central difference formulas to approximate derivatives
\[ \begin{split} x' (t_j) &= \frac{x(t_{j+1}) - x(t_{j-1})}{2h} + O \left( h^2 \right) , \\ x'' (t_j) &= \frac{x(t_{j+1}) - 2\, x(t_j ) + x(t_{j-1})}{h^2} + O \left( h^2 \right) . \end{split} \]
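As a quick numerical sanity check (a Python/NumPy sketch, not part of the tutorial's MATLAB code), the errors of both central-difference formulas should fall by roughly a factor of four each time h is halved, confirming the O(h²) accuracy:

```python
import numpy as np

# Check the O(h^2) accuracy of the central-difference formulas on x(t) = sin(t)
# at t = 1; the exact derivatives are cos(1) and -sin(1).
t = 1.0
for h in (0.1, 0.05, 0.025):
    d1 = (np.sin(t + h) - np.sin(t - h)) / (2 * h)               # approximates x'(t)
    d2 = (np.sin(t + h) - 2 * np.sin(t) + np.sin(t - h)) / h**2  # approximates x''(t)
    print(h, abs(d1 - np.cos(t)), abs(d2 + np.sin(t)))
```

Each printed error shrinks by about a factor of 4 per halving of h, as expected of a second-order formula.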
To start the derivation, we substitute these central difference formulas into the given differential equation at a typical interior mesh point \( t_j . \) This yields
\[ \frac{x(t_{j+1}) - 2\, x(t_j ) + x(t_{j-1})}{h^2} + O \left( h^2 \right) = p(t_j ) \left( \frac{x(t_{j+1}) - x(t_{j-1})}{2h} + O \left( h^2 \right) \right) + q(t_j )\, x(t_j ) + r(t_j ) . \]
For simplicity, we introduce the notation \( p_j = p(t_j ) , \quad q_j = q(t_j ) , \quad r_j = r(t_j ) . \) Next, we drop the \( O \left( h^2 \right) \) terms and replace each value \( x(t_j ) \) with its approximation \( x_j \) to obtain
\[ \frac{x_{j+1} - 2\, x_j + x_{j-1}}{h^2} = p_j \, \frac{x_{j+1} - x_{j-1}}{2h} + q_j x_j + r_j . \]
This difference equation is used to compute numerical approximations to the given differential equation. This is carried out by multiplying each side by \( h^2 , \) collecting the terms involving \( x_{j-1} , \) \( x_j , \) and \( x_{j+1} , \) and arranging them in a system of linear equations:
\[ \left( - \frac{h}{2} \, p_j -1 \right) x_{j-1} + \left( 2 + h^2 q_j \right) x_j + \left( \frac{h}{2} \, p_j -1 \right) x_{j+1} = -h^2 r_j , \]
for \( j=1,2,\ldots , m-1 , \) where \( x_0 = \alpha \) and \( x_m = \beta . \) This system has the familiar tridiagonal form, which is more visible when displayed with matrix notation:
\[ \begin{bmatrix} 2+h^2 q_1 & \frac{h}{2}\,p_1 -1 & & & \\ -\frac{h}{2}\, p_2 -1 & 2 + h^2 q_2 & \frac{h}{2}\,p_2 -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -\frac{h}{2}\, p_{m-2} -1 & 2 + h^2 q_{m-2} & \frac{h}{2}\, p_{m-2} -1 \\ & & & -\frac{h}{2}\, p_{m-1} -1 & 2 + h^2 q_{m-1} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{m-2} \\ x_{m-1} \end{bmatrix} = \begin{bmatrix} e_0 -h^2 r_1 \\ -h^2 r_2 \\ \vdots \\ -h^2 r_{m-2} \\ e_m - h^2 r_{m-1} \end{bmatrix} , \]
where
\[ e_0 = \left( \frac{h}{2}\, p_1 +1 \right) \alpha \qquad\mbox{and} \qquad e_m = \left( 1- \frac{h}{2} \, p_{m-1} \right) \beta . \]

There is a special scheme, called progonka, or forward elimination and back substitution (FEBS), for solving algebraic systems with tridiagonal matrices. This scheme, which is widely used in numerical simulations, was first discovered by the prominent Soviet mathematician Israel Moiseevich Gel'fand (1913--2009) in his student work. He personally never claimed credit for the discovery because he regarded it as just an easy (from his point of view) application of Gaussian elimination. In engineering, the FEBS method is sometimes associated with Llewellyn H. Thomas of Bell Laboratories, who used it in 1946.

To solve the linear system of equations \( {\bf A} \,{\bf x} = {\bf b} , \) with tridiagonal matrix A, use the following MATLAB code:

function x=trisys(A,D,C,b)
% Input:     A is the subdiagonal of the coefficient matrix
%            D is the main diagonal of the coefficient matrix 
%            C is the superdiagonal of the coefficient matrix 
%            b is the constant vector of the linear system 
% Output:    x is the solution vector
n=length(b);
% forward elimination: reduce the system to upper bidiagonal form
for k=2:n 
    mult=A(k-1)/D(k-1); 
    D(k)=D(k)-mult*C(k-1); 
    b(k)=b(k)-mult*b(k-1); 
end 
% back substitution
x=zeros(1,n);
x(n)=b(n)/D(n);
for k=n-1:-1:1
    x(k)=(b(k)-C(k)*x(k+1))/D(k); 
end
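For readers working outside MATLAB, here is a Python/NumPy sketch of the same FEBS (Thomas) algorithm; the name trisys and the argument order mirror the MATLAB code above:

```python
import numpy as np

def trisys(A, D, C, b):
    """Solve a tridiagonal system by forward elimination and back
    substitution (progonka / Thomas algorithm).

    A : subdiagonal (length n-1),   D : main diagonal (length n),
    C : superdiagonal (length n-1), b : right-hand side (length n).
    """
    D = np.array(D, dtype=float)   # work on copies so the caller's arrays survive
    b = np.array(b, dtype=float)
    n = len(b)
    # forward elimination: eliminate the subdiagonal entries
    for k in range(1, n):
        mult = A[k - 1] / D[k - 1]
        D[k] -= mult * C[k - 1]
        b[k] -= mult * b[k - 1]
    # back substitution
    x = np.empty(n)
    x[n - 1] = b[n - 1] / D[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = (b[k] - C[k] * x[k + 1]) / D[k]
    return x
```

Like the MATLAB version, this sketch assumes no pivoting is needed, which holds for the diagonally dominant tridiagonal matrices produced by the scheme above.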

Now we are ready to solve numerically the boundary value problem
\[ x'' (t) = p(t)\,x' (t) + q(t)\,x(t) + r(t) \qquad \mbox{subject to} \quad x(a) = \alpha, \quad x(b) = \beta \]
over interval [a,b] by using the finite-difference scheme of order O(h2).

Remark: The uniform mesh is \( a= t_1 < t_2 < \cdots < t_{m+1} =b \) and the solution points are \( \left\{ (t_j , x_j ) \right\}_{j=1}^{m+1} .\)

function F=finddiff(p,q,r,a,b,alpha,beta,m)
% Input:    p, q, r are the coefficient functions 
%           input as strings: 'p', 'q', 'r'
%           a and b are the left and right end points 
%           alpha=x(a) and beta=x(b)
%           m is the number of steps 
% Output:   F=[t',x'] where t' is the (m+1)x1 vector of abscissas
%           and x' is the (m+1)x1 vector of ordinates
% initialize vectors and h
t=zeros(1,m+1);
x=zeros(1,m-1);
va=zeros(1,m-2);
vb=zeros(1,m-1); 
vc=zeros(1,m-2);
vd=zeros(1,m-1);
h=(b-a)/m;
% calculate the constant vector b in Ax=b 
vt=a+h:h:a+h*(m-1);
vb=-h^2*feval(r,vt);
vb(1)=vb(1)+(1+h/2*feval(p,vt(1)))*alpha;
vb(m-1)=vb(m-1)+(1-h/2*feval(p,vt(m-1)))*beta; 
% calculate the main diagonal of A in A*x = b
vd=2+h^2*feval(q,vt);
% calculate the subdiagonal of A in A*x = b 
vta=vt(1,2:m-1);
va=-1-h/2*feval(p,vta);
% calculate the superdiagonal of A in A*x = b
vtc=vt(1,1:m-2);
vc=-1+h/2*feval(p,vtc);
% solve A*x = b using trisys
x=trisys(va,vd,vc,vb);
t=[a,vt,b];
x=[alpha,x,beta];
F=[t',x'];