Preface


This section discusses a simplified version of the Adomian decomposition method, the first concept of which was proposed by Randolph Rach in 1989 and crystallized later in a paper published with his colleagues G. Adomian and R.E. Meyers. That is why this technique is frequently referred to as the Rach--Adomian--Meyers modified decomposition method (MDM for short). Initially, this method was applied to power series expansions, based on the nonlinear transformation of series by the Adomian--Rach theorem. Similar to the Runge--Kutta methods, the MDM can be implemented as a one-step method for the numerical integration of differential equations. For polynomials or power series, it offers advantages in speed and accuracy of calculation because at each step the Adomian decomposition method allows one to perform explicit evaluations.

Return to computing page for the first course APMA0330
Return to computing page for the second course APMA0340
Return to Mathematica tutorial for the second course APMA0340
Return to the main page for the course APMA0330
Return to the main page for the course APMA0340
Return to Part VII of the course APMA0330



Preface


This tutorial was made solely for the purpose of education, and it was designed for students taking Applied Math 0330. It is primarily for students who have little or no experience with Mathematica or programming and would like to learn more of the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel. The Mathematica commands in this tutorial are all written in bold black font, while Mathematica output is in normal font.

Finally, you can copy and paste all commands into your Mathematica notebook, change the parameters, and run them because the tutorial is distributed under the terms of the GNU General Public License (GPL). You, as the user, are free to use the scripts for your needs to learn Mathematica, and you have the right to distribute and refer to this tutorial as long as it is credited appropriately. The tutorial accompanies the textbook Applied Differential Equations. The Primary Course by Vladimir Dobrushkin, CRC Press, 2015; http://www.crcpress.com/product/isbn/9781439851043


Tridiagonal linear systems


  The solution of linear systems of equations is one of the most important areas of computational mathematics. We are not going to present this topic in detail---it deserves a special course. Instead, we consider one particular and very important case: when the coefficient matrix is tridiagonal:

\[ {\bf A}\,{\bf x} = {\bf b} , \]
where x is an unknown n-vector, b is a given n-vector, and A is a tridiagonal \( n \times n \) matrix
\[ {\bf A} = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0 \\ a_{21} & a_{22} & a_{23} & \cdots & 0 \\ 0 & a_{32} & a_{33} & \cdots & 0 \\ \vdots& \vdots & \vdots & \ddots & a_{n-1,n} \\ 0&0&0& a_{n, n-1} & a_{n,n} \end{bmatrix} . \]
Before proceeding with the algorithm, let us make a notational simplification. Instead of writing the system in the traditional matrix notation and indexing style, we will use \( l_i , \) \( d_i , \) and \( u_i \) to denote the lower-diagonal, diagonal, and upper-diagonal elements:
\[ \begin{split} l_i &= a_{i,i-1} , \qquad 2 \le i \le n , \\ d_i &= a_{i,i} , \qquad 1 \le i \le n , \\ u_i &= a_{i, i+1} , \qquad 1 \le i \le n-1 , \end{split} \]
where we adopt the convention that \( l_1 = 0 \) and \( u_n = 0 . \) Under this notation, the augmented matrix corresponding to the system is
\[ \left[ {\bf A} \, \vert \, {\bf b} \right] = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0 & | & b_1 \\ a_{21} & a_{22} & a_{23} & \cdots & 0 & | & b_2 \\ 0 & a_{32} & a_{33} & \cdots & 0 & | & b_3 \\ \vdots& \vdots & \vdots & \ddots & a_{n-1,n} & | & b_{n-1} \\ 0&0&0& a_{n, n-1} & a_{n,n} & | & b_n \end{bmatrix} . \]
Then we can store the entire problem using just the four vectors for l, d, u, and b, instead of the entire n-by-n matrix, which is mostly zeroes anyway.
Israel Gelfand

 This system of algebraic equations can be solved using the standard Gaussian elimination procedure, which reduces the problem to an upper triangular form. This stage is usually referred to as forward elimination (FE). Once it is completed, the second stage, called backward substitution (BS), recovers the actual solution. Therefore, this algorithm is usually called FEBS or, in computational jargon, "progonka." In engineering, the FEBS method is associated with the British scientist Llewellyn H. Thomas of Bell Laboratories, who solved a simple Poisson problem (see the following example) using this method in 1946. Historically, the prominent Soviet mathematician Israel Moiseevich Gelfand (1913--2009) discovered the FEBS algorithm in 1933, while still a sophomore in college. He personally refused to associate his name with FEBS because, in his opinion, it was a very simple application of Gaussian elimination. Instead, he suggested calling the FEBS algorithm "progonka (прогонка)," and this slang term is widely accepted.

  Using the elimination procedure, the augmented matrix is reduced to an equivalent upper triangular form:

\[ \left[ {\bf T} \, \vert \, {\bf g} \right] = \begin{bmatrix} \delta_1 & u_{1} & 0 & \cdots & 0 & | & g_1 \\ 0 & \delta_{2} & u_{2} & \cdots & 0 & | & g_2 \\ 0 & 0 & \delta_{3} & \cdots & 0 & | & g_3 \\ \vdots& \vdots & \vdots & \ddots & u_{n-1} & | & g_{n-1} \\ 0&0&0& 0 & \delta_{n} & | & g_n \end{bmatrix} , \]
where
\[ \delta_1 = d_1 , \quad \delta_2 = d_2 - u_1 (l_2 /\delta_1 ), \quad \delta_3 = d_3 - u_2 (l_3 /\delta_2 ), \quad \cdots , \quad \delta_k = d_k - u_{k-1} \left( l_k / \delta_{k-1} \right) , \quad 2\le k \le n. \]
Similarly,
\[ g_1 = b_1 , \quad g_2 = b_2 - g_1 (l_2 /\delta_1 ), \quad g_3 = b_3 - g_2 (l_3 /\delta_2 ), \quad \cdots , \quad g_k = b_k - g_{k-1} \left( l_k / \delta_{k-1} \right) , \quad 2\le k \le n. \]
The augmented matrix \( \left[ {\bf T} \, \vert \, {\bf g} \right] \) is row equivalent to the original augmented matrix \( \left[ {\bf A} \, \vert \, {\bf b} \right] , \) meaning that we can progress from one to the other using elementary row operations. Thus, the two augmented matrices represent systems with exactly the same solution sets. Moreover, the solution is now easy to obtain since we can solve the last equation \( \delta_n x_n = g_n \) to get \( x_n = g_n / \delta_n , \) and then use this value in the previous equation to get \( x_{n-1} , \) and so on. Carrying out the elimination and backward substitution stages requires the assumption that each \( \delta_k \ne 0, \quad 1 \le k \le n . \)
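As a concrete illustration of the elimination formulas (the numbers here are chosen purely for demonstration), consider the symmetric tridiagonal system
\[ \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} . \]
The elimination stage gives
\[ \delta_1 = 2, \quad \delta_2 = 2 - (-1) \left( -1/2 \right) = \tfrac{3}{2} , \quad \delta_3 = 2 - (-1) \left( -1 \big/ \tfrac{3}{2} \right) = \tfrac{4}{3} , \]
and
\[ g_1 = 1, \quad g_2 = 0 - 1 \cdot \left( -1/2 \right) = \tfrac{1}{2} , \quad g_3 = 1 - \tfrac{1}{2} \left( -1 \big/ \tfrac{3}{2} \right) = \tfrac{4}{3} . \]
Backward substitution then yields \( x_3 = g_3 / \delta_3 = 1 , \) followed by \( x_2 = \left( g_2 - u_2 x_3 \right) / \delta_2 = 1 \) and \( x_1 = \left( g_1 - u_1 x_2 \right) / \delta_1 = 1 , \) which is easily verified against the original system.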

 

FEBS (progonka) algorithm
% elimination stage
for i = 2 to n
    d(i) = d(i) - u(i-1)*l(i)/d(i-1)
    b(i) = b(i) - b(i-1)*l(i)/d(i-1)
endfor

% backward substitution
x(n) = b(n)/d(n)
for i = n-1 downto 1
    x(i) = (b(i) - u(i)*x(i+1))/d(i)
endfor
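The pseudocode above can be sketched as a short Python function (a minimal illustration, not from the tutorial itself; the function name and argument order are chosen here for convenience):

```python
def thomas(l, d, u, b):
    """Solve a tridiagonal system by FEBS ('progonka').

    l, d, u hold the lower-diagonal, diagonal, and upper-diagonal
    entries; by the convention above, l[0] and u[-1] are unused
    (they correspond to l_1 = 0 and u_n = 0).
    """
    n = len(d)
    d, b = d[:], b[:]          # work on copies, keep the inputs intact
    # forward elimination: reduce to upper bidiagonal form
    for i in range(1, n):
        m = l[i] / d[i - 1]
        d[i] -= u[i - 1] * m
        b[i] -= b[i - 1] * m
    # backward substitution
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - u[i] * x[i + 1]) / d[i]
    return x
```

Both loops touch each unknown once, so the whole solve costs O(n) operations and O(n) storage, compared with O(n³) work for general Gaussian elimination on the full matrix.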

Theorem: If the tridiagonal matrix A is diagonally dominant (\( |d_i| > |l_i | + |u_i | > 0, \quad 1 \le i \le n \)), then the FEBS algorithm will succeed in producing the correct solution to the original linear system, within the limitations of rounding error. ■

 

Example: Consider the Dirichlet problem on the interval [0,1]:

\[ -u'' + u = f(x), \qquad x\in (0,1), \qquad u(0) = u(1) =0. \]
Here f is a known function, and we seek u that satisfies the differential equation and the boundary conditions at x=0 and x=1.

We divide the interval [0,1] into n equal subintervals \( [ x_{k-1} , x_k ] , \) according to

\[ 0 = x_0 < x_1 < x_2 < \cdots < x_{n-1} < x_n =1, \]
with \( h = x_k - x_{k-1} \) (therefore, \( x_k = kh \) ), and let \( U (x) \) denote the approximation to u(x). We can use Taylor approximations:
\[ u'' (x) = \frac{u(x-h) -2\, u(x) + u(x+h)}{h^2} - \frac{h^2}{12} \, u^{(4)} (\eta ) , \]
for some \( \eta \in [x-h, x+h] . \) We drop the remainder term and replace u with U, which yields
\[ -\frac{U(x-h) -2\, U(x) + U(x+h)}{h^2} + U(x) = f(x) . \]
This holds for all \( x\in [0,1] . \) To get a practical computational problem, we will impose the latter only on our grid points; i.e., we seek the n-1 values \( U_k = U (x_k ) , \quad 1 \le k \le n-1, \) that satisfy the recurrence:
\[ -U_{k-1} + \left( 2 + h^2 \right) U_k - U_{k+1} = h^2 f(x_k) , \qquad 1 \le k \le n-1. \]

This is a tridiagonal system of linear equations. Written out in matrix-vector form, we have
\[ \begin{bmatrix} 2 + h^2 & -1 & 0 & 0 & 0 & \cdots & 0 \\ -1 & 2+h^2 & -1 & 0 & 0& \cdots & 0 \\ 0 & -1 & 2 + h^2 & -1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0&0& \cdots & -1 & 2+ h^2 \end{bmatrix} \, \begin{bmatrix} U_1 \\ U_2 \\ \vdots \\ U_{n-2} \\ U_{n-1} \end{bmatrix} = h^2 \begin{bmatrix} f(x_1 ) \\ f(x_2 ) \\ \vdots \\ f(x_{n-2} ) \\ f(x_{n-1} ) \end{bmatrix} . \]
Moreover, this system is diagonally dominant, since \( 2 + h^2 > |-1| + |-1| = 2 , \) so we can apply the FEBS algorithm to produce the solution.

   ■
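The whole example can be sketched in Python (an illustration written for this tutorial, not the author's code; the choice f(x) = x is mine, made because the boundary value problem then has the closed-form solution u(x) = x − sinh(x)/sinh(1), against which the finite-difference answer can be checked):

```python
import math

def thomas(l, d, u, b):
    # FEBS: forward elimination followed by backward substitution
    n = len(d)
    d, b = d[:], b[:]
    for i in range(1, n):
        m = l[i] / d[i - 1]
        d[i] -= u[i - 1] * m
        b[i] -= b[i - 1] * m
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - u[i] * x[i + 1]) / d[i]
    return x

# Dirichlet problem -u'' + u = f on (0,1), u(0) = u(1) = 0,
# with f(x) = x; the exact solution is u(x) = x - sinh(x)/sinh(1).
n = 100
h = 1.0 / n
xs = [k * h for k in range(1, n)]     # interior grid points x_1, ..., x_{n-1}
d = [2.0 + h**2] * (n - 1)            # diagonal entries
l = [0.0] + [-1.0] * (n - 2)          # lower diagonal (l_1 = 0 by convention)
u = [-1.0] * (n - 2) + [0.0]          # upper diagonal (last entry unused)
b = [h**2 * x for x in xs]            # right-hand side h^2 f(x_k)

U = thomas(l, d, u, b)

exact = [x - math.sinh(x) / math.sinh(1.0) for x in xs]
err = max(abs(a - e) for a, e in zip(U, exact))
```

Since the truncation error of the central difference is O(h²), halving h should reduce the maximum error `err` by roughly a factor of four.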

 

 
