PLU Factorization

So far, we have tried to represent a square nonsingular matrix A as a product of a lower-triangular matrix L and an upper triangular matrix U: \( {\bf A} = {\bf L}\,{\bf U} .\) When this is possible, we say that A has an LU-decomposition (or factorization). It turns out that this factorization (when it exists) is not unique. If L has 1's on its diagonal, the factorization is called a Doolittle factorization. If U has 1's on its diagonal, it is called a Crout factorization. When \( {\bf U} = {\bf L}^{\ast} \) (which requires A to be Hermitian positive definite), it is called a Cholesky decomposition. Moreover, Gaussian elimination in its pure form may be unstable.

While LU-decomposition is a useful computational tool, it does not work for every matrix. Consider even the simple example with the matrix

\[ {\bf A} = \begin{bmatrix} 0&2 \\ 3 & 0 \end{bmatrix} . \]
Since \( a_{1,1} = 0 , \) any factorization \( {\bf A} = {\bf L}\,{\bf U} \) would force \( l_{1,1} u_{1,1} = 0 , \) so either L or U would be singular, contradicting the nonsingularity of A; hence no LU-decomposition exists.
Another cause of instability in Gaussian elimination is the appearance of relatively small pivots. Gaussian elimination is guaranteed to succeed in exact arithmetic if row or column interchanges are used to avoid zero pivots. But when the LU factorization is computed numerically, this is not enough: a calculation that would be undefined because of division by zero will suffer numerical difficulties when the division is instead by a nonzero number that is relatively small. For instance, let
\[ {\bf A} = \begin{bmatrix} 10^{-22}&2 \\ 1 & 3 \end{bmatrix} . \]
When computing the factors L and U, the process does not fail in this case because there is no division by zero.
\[ {\bf L} = \begin{bmatrix} 1&0 \\ 10^{22} & 1 \end{bmatrix} , \qquad {\bf U} = \begin{bmatrix} 10^{-22}&2 \\ 0 & 3 - 2\cdot 10^{22} \end{bmatrix} . \]
When these computations are performed in floating point arithmetic, the number \( 3 - 2\cdot 10^{22} \) is not represented exactly but is rounded to the nearest floating point number, which we will say is \( -2\cdot 10^{22} . \) This means that our matrices are now floating point matrices L' and U':
\[ {\bf L}' = \begin{bmatrix} 1&0 \\ 10^{22} & 1 \end{bmatrix} , \qquad {\bf U}' = \begin{bmatrix} 10^{-22}&2 \\ 0 & - 2\cdot 10^{22} \end{bmatrix} . \]
When we compute L'U', we get
\[ {\bf L}' {\bf U}' = \begin{bmatrix} 10^{-22}&2 \\ 1 & 0 \end{bmatrix} . \]
The answer should equal A, but obviously it does not: the 3 in position (2,2) of A has become 0. Consequently, when trying to solve a system such as \( {\bf A} \, {\bf x} = {\bf b} \) using this LU factorization, the factors L' and U' would not give a correct answer. The computation is not backward stable: L' and U' are the exact factors of a matrix that differs substantially from A.
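This catastrophe is easy to reproduce. The following NumPy sketch (ours, for illustration; it assumes IEEE double precision) carries out the single elimination step in floating point and shows the 3 being lost:

import numpy as np

A = np.array([[1.0e-22, 2.0],
              [1.0,     3.0]])

# One step of Gaussian elimination without pivoting:
l21 = A[1, 0] / A[0, 0]                 # multiplier 1e22 -- huge
Lp = np.array([[1.0, 0.0],
               [l21, 1.0]])
Up = np.array([[A[0, 0], A[0, 1]],
               [0.0, A[1, 1] - l21 * A[0, 1]]])   # 3 - 2e22 rounds to -2e22

print(Lp @ Up)   # [[1e-22, 2.], [1., 0.]] -- the 3 in position (2,2) is lost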

Recall that an \( n \times n \) permutation matrix P is a matrix with precisely one entry equal to 1 in each row and each column, and all other entries equal to 0. The rows of P are a permutation of the rows of the identity matrix. Since not every matrix has an LU decomposition, we try to find a permutation matrix P such that PA has an LU factorization: \( {\bf P}\,{\bf A} = {\bf L}\,{\bf U} , \) where L and U are again lower and upper triangular matrices. In this case, we say that A has a PLU factorization. It is also referred to as the LU factorization with partial pivoting (LUP), which uses row permutations only. An LU factorization with full pivoting involves both row and column permutations, \( {\bf P}\,{\bf A}\, {\bf Q} = {\bf L}\,{\bf U} , \) where L, U, and P are defined as before, and Q is a permutation matrix that reorders the columns of A.

For a permutation matrix P, the product PA is a new matrix whose rows consist of the rows of A rearranged in the new order. Note that a product of permutation matrices is again a permutation matrix. Every permutation matrix is an orthogonal matrix: \( {\bf P}^{-1} = {\bf P}^{\mathrm T} . \)

Example. Let P be the permutation matrix that interchanges rows 1 and 2 and also interchanges rows 3 and 4:

\[ {\bf P}\, {\bf A} = \begin{bmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{bmatrix} \begin{bmatrix} a_{1,1}&a_{1,2}&a_{1,3}&a_{1,4} \\ a_{2,1}&a_{2,2}&a_{2,3}&a_{2,4} \\ a_{3,1}&a_{3,2}&a_{3,3}&a_{3,4} \\ a_{4,1}&a_{4,2}&a_{4,3}&a_{4,4} \end{bmatrix} = \begin{bmatrix} a_{2,1}&a_{2,2}&a_{2,3}&a_{2,4} \\ a_{1,1}&a_{1,2}&a_{1,3}&a_{1,4} \\ a_{4,1}&a_{4,2}&a_{4,3}&a_{4,4} \\ a_{3,1}&a_{3,2}&a_{3,3}&a_{3,4} \end{bmatrix} . \]
However,
\[ {\bf A} {\bf P} = \begin{bmatrix} a_{1,1}&a_{1,2}&a_{1,3}&a_{1,4} \\ a_{2,1}&a_{2,2}&a_{2,3}&a_{2,4} \\ a_{3,1}&a_{3,2}&a_{3,3}&a_{3,4} \\ a_{4,1}&a_{4,2}&a_{4,3}&a_{4,4} \end{bmatrix} \begin{bmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{bmatrix} = \begin{bmatrix} a_{1,2}&a_{1,1}&a_{1,4}&a_{1,3} \\ a_{2,2}&a_{2,1}&a_{2,4}&a_{2,3} \\ a_{3,2}&a_{3,1}&a_{3,4}&a_{3,3} \\ a_{4,2}&a_{4,1}&a_{4,4}&a_{4,3} \end{bmatrix} . \]
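Both products are easy to check numerically. A minimal NumPy sketch (the 4-by-4 test matrix is arbitrary), which also confirms \( {\bf P}^{\mathrm T} {\bf P} = {\bf I} : \)

import numpy as np

P = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
A = np.arange(1, 17).reshape(4, 4)   # any 4x4 matrix will do

print(P @ A)      # rows 1<->2 and 3<->4 of A interchanged
print(A @ P)      # columns 1<->2 and 3<->4 of A interchanged
print(P.T @ P)    # the identity: P^{-1} = P^T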

Example. Consider the 3-by-4 matrix \( {\bf A} = \begin{bmatrix} 1&2&-3&1 \\ 2&4&0&7 \\ -1&3&2&0 \end{bmatrix} , \) for which a pure LU-factorization does not exist. Multiplying A on the left by the two elimination matrices

\[ {\bf E}_{31} (1) = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} , \quad {\bf E}_{21} (-2) = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 0&0&1 \end{bmatrix} , \]
we obtain
\[ {\bf E}_{21} (-2) \,{\bf E}_{31} (1)\, {\bf A} = \begin{bmatrix} 1&2&-3&1 \\ 0&0&6&5 \\ 0&5&-1&1 \end{bmatrix} . \]
To interchange the last two rows, we multiply by the matrix \( {\bf P} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} . \) This yields
\[ {\bf P} \,{\bf E}_{21} (-2) \,{\bf E}_{31} (1)\, {\bf A} = \begin{bmatrix} 1&2&-3&1 \\ 0&5&-1&1 \\ 0&0&6&5 \end{bmatrix} = {\bf U} . \]
From this equation, we find the inverse matrix
\[ \left( {\bf P} \,{\bf E}_{21} (-2) \,{\bf E}_{31} (1)\right)^{-1} = \begin{bmatrix} 1&0&0 \\ 2&0&1 \\ -1&1&0 \end{bmatrix} , \]
which is not a triangular matrix. Therefore, we obtain the PLU decomposition:
\[ {\bf A} = {\bf P} \, {\bf L} \, {\bf U} = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \, \begin{bmatrix} 1&0&0 \\ -1 &1&0 \\ 2 &0&1 \end{bmatrix} \, \begin{bmatrix} 1&2&-3&1 \\ 0&5&-1&1 \\ 0&0&6&5 \end{bmatrix} . \]
Note that \( {\bf P}^2 = {\bf I} , \) the identity matrix, and \( {\bf P}\, {\bf A} = {\bf L}\,{\bf U} . \)
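A quick NumPy check of this factorization (matrices copied from above):

import numpy as np

P = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
L = np.array([[1, 0, 0], [-1, 1, 0], [2, 0, 1]])
U = np.array([[1, 2, -3, 1], [0, 5, -1, 1], [0, 0, 6, 5]])
A = np.array([[1, 2, -3, 1], [2, 4, 0, 7], [-1, 3, 2, 0]])

print(np.array_equal(P @ L @ U, A))   # True: A = P L U
print(np.array_equal(L @ U, P @ A))   # True: P A = L U, since P^2 = I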

Theorem: If A is a nonsingular matrix, then there exists a permutation matrix P so that PA has an LU-factorization. ■

Pivoting for LU factorization is the process of systematically selecting pivots for Gaussian elimination during the LU-decomposition of a matrix. The LU factorization is closely related to Gaussian elimination, which is unstable in its pure form. To guarantee that the elimination process runs to completion, we must ensure that there is a nonzero pivot at every step. This is why pivoting is needed when computing LU decompositions. But pivoting can do more than just ensure that Gaussian elimination completes: with the right pivoting strategy we can reduce roundoff errors during computation and make the algorithm backward stable. Depending on the matrix A, some LU decompositions become numerically unstable if relatively small pivots are used, because relatively small pivots behave much like zeros during Gaussian elimination. By pivoting we greatly reduce this instability, ensuring that relatively large entries are used as pivot elements. This prevents large factors from appearing in the computed L and U, which reduces roundoff errors during computation.

The goal of partial pivoting is to use a permutation matrix to place the largest (in magnitude) entry of the first column of the matrix at the top of that column. For an \( n \times n \) matrix A, we scan the n rows of the first column for the largest value. At step k of the elimination, the pivot we choose is the largest of the \( n-k+1 \) entries on and below the diagonal of column k, which costs \( O(n-k) \) comparisons at each step of the elimination. So for an n-by-n matrix, there is a total of \( O(n^2 ) \) comparisons. Once located, this entry is then moved into the pivot position \( A_{k,k} \) on the diagonal of the matrix.
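In practice one rarely codes partial pivoting by hand: SciPy's scipy.linalg.lu computes it directly. Note that SciPy's convention returns P with A = P L U, so its P is the transpose of the permutation in \( {\bf P}\,{\bf A} = {\bf L}\,{\bf U} . \) A minimal sketch, using the matrix of the next example:

import numpy as np
from scipy.linalg import lu

A = np.array([[2.0,   4.0, 2.0],
              [4.0, -10.0, 2.0],
              [1.0,   2.0, 4.0]])

P, L, U = lu(A)                      # SciPy's convention: A = P @ L @ U
print(np.allclose(P @ L @ U, A))     # True
print(np.allclose(P.T @ A, L @ U))   # True: P^T A = L U is the row-pivoted form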

Example. Use PLU factorization to solve the linear system

\[ \begin{bmatrix} 2&4&2 \\ 4&-10&2 \\ 1&2&4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 5 \\ -8 \\ 13 \end{bmatrix} . \]
We want to use a permutation matrix to place the largest entry of the first column into the position (1,1). For this matrix, this means we want 4 to be the first entry of the first column. So we use the permutation matrix
\[ {\bf P}_1 = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf P}_1 {\bf A} = \begin{bmatrix} 4&-10&2 \\ 2&4&2 \\ 1&2&4 \end{bmatrix} . \]
Then we use the matrix \( {\bf M}_1 \) to eliminate the bottom two entries of the first column:
\[ {\bf M}_1 = \begin{bmatrix} 1&0&0 \\ -2/4&1&0 \\ -1/4&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf M}_1 {\bf P}_1 {\bf A} = \begin{bmatrix} 4&-10&2 \\ 0&9&1 \\ 0&9/2&7/2 \end{bmatrix} . \]
For the latter matrix we do not need pivoting because the entry 9 in position (2,2) is already the largest in magnitude in its subcolumn (\( 9 > 9/2 \)). So we just multiply by
\[ {\bf M}_2 = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&-1/2&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf M}_2 {\bf M}_1 {\bf P}_1 {\bf A} = \begin{bmatrix} 4&-10&2 \\ 0&9&1 \\ 0&0&3 \end{bmatrix} . \]
Since the inverse of \( {\bf M}_2 {\bf M}_1 \) is the required lower triangular matrix
\[ {\bf M}_2 {\bf M}_1 = \begin{bmatrix} 1&0&0 \\ -1/2&1&0 \\ 0&-1/2&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf L} = \left( {\bf M}_2 {\bf M}_1 \right)^{-1} = \begin{bmatrix} 1&0&0 \\ 1/2&1&0 \\ 1/4&1/2&1 \end{bmatrix} , \]
we have
\[ {\bf A} = \begin{bmatrix} 2&4&2 \\ 4&-10&2 \\ 1&2&4 \end{bmatrix} , \quad {\bf L} = \begin{bmatrix} 1&0&0 \\ 1/2&1&0 \\ 1/4&1/2&1 \end{bmatrix} , \quad {\bf U} = \begin{bmatrix} 4&-10&2 \\ 0&9&1 \\ 0&0&3 \end{bmatrix} , \quad {\bf P} = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} , \]
so that
\[ {\bf L}{\bf U} = \begin{bmatrix} 4&-10&2 \\ 2&4&2 \\ 1&2&4 \end{bmatrix} , \quad {\bf P}{\bf A} = \begin{bmatrix} 4&-10&2 \\ 2&4&2 \\ 1&2&4 \end{bmatrix} . \]
Forward substitution \( {\bf L}\,{\bf y} = {\bf P}\,{\bf b} = (-8, 5, 13)^{\mathrm T} \) gives \( {\bf y} = \left( -8, \, 9, \, 21/2 \right)^{\mathrm T} , \) and back substitution \( {\bf U}\,{\bf x} = {\bf y} \) yields the solution
\[ {\bf x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -20/9 \\ 11/18 \\ 7/2 \end{bmatrix} . \]
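The same solution can be obtained with SciPy's lu_factor/lu_solve pair, which performs exactly this PLU factorization followed by forward and back substitution. A minimal sketch:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0,   4.0, 2.0],
              [4.0, -10.0, 2.0],
              [1.0,   2.0, 4.0]])
b = np.array([5.0, -8.0, 13.0])

lu_piv = lu_factor(A)            # PLU factorization with partial pivoting
x = lu_solve(lu_piv, b)          # forward and back substitution
print(x)                         # approx [-2.2222, 0.6111, 3.5] = (-20/9, 11/18, 7/2)
print(np.allclose(A @ x, b))     # True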

The algorithm for PLU-factorization is based on the following formula for the entries of the upper triangular matrix \( {\bf U} = \left[ u_{i,j} \right] : \)

\[ u_{i,j} = a_{i,j} - \sum_{k=1}^{i-1} u_{k,j} l_{i,k} , \]
with a similar formula for \( {\bf L} = \left[ l_{i,j} \right] . \) To keep the algorithm numerically stable when \( |u_{j,j}| \) is small, a pivoting matrix P is used to reorder the rows of A so that the largest element of each column of A is shifted onto the diagonal. The formula for the entries of L follows:
\[ l_{i,j} = \frac{1}{u_{j,j}} \left( a_{i,j} - \sum_{k=1}^{j-1} u_{k,j} l_{i,k} \right) . \]

Step 1: If \( n=1 , \) return \( {\bf P} =1, \quad {\bf L} = 1, \quad {\bf U} = a_{1,1} . \)

Step 2: Find a non-zero element \( a_{i,1} \) in the first column of A. (If there is none, the matrix is singular and the system is not solvable.)

Step 3: Define a permutation matrix \( {\bf Q}_{n \times n} \) that interchanges rows 1 and i of A.

Step 4: Let \( {\bf B} = {\bf Q} {\bf A} . \) Note that \( b_{1,1} = a_{i,1} \ne 0. \)

Step 5: Write B as \( {\bf B} = \begin{bmatrix} b_{1,1} & {\bf r}^{\mathrm T} \\ {\bf c} & {\bf C}_1 \end{bmatrix} . \)

Step 6: Define \( {\bf B}_1 = {\bf C}_1 - {\bf c} \cdot {\bf r}^{\mathrm T} / b_{1,1} .\) Recursively compute \( {\bf B}_1 = {\bf P}_1 {\bf L}_1 {\bf U}_1 . \)

Step 7:

\[ {\bf P} = {\bf Q}^{\mathrm T} \begin{bmatrix} 1&0 \\ 0& {\bf P}_1 \end{bmatrix} , \qquad {\bf L} = \begin{bmatrix} 1&0 \\ {\bf P}_1^{\mathrm T} {\bf c}/ b_{1,1} & {\bf L}_1 \\ \end{bmatrix} , \qquad {\bf U} = \begin{bmatrix} b_{1,1}&{\bf r}^{\mathrm T} \\ 0& {\bf U}_1 \end{bmatrix} . \quad ■ \]
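The recursion in Steps 1–7 translates almost line by line into code. Here is a minimal NumPy sketch (ours, written for clarity rather than efficiency; unlike Step 2, it picks the largest-magnitude entry of the first column rather than merely a nonzero one):

import numpy as np

def plu(A):
    """Recursive PLU sketch returning P, L, U with A = P @ L @ U."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:                                    # Step 1
        return np.eye(1), np.eye(1), A.copy()
    i = int(np.argmax(np.abs(A[:, 0])))           # Step 2 (largest pivot)
    Q = np.eye(n)
    Q[[0, i]] = Q[[i, 0]]                         # Step 3: interchange rows 1 and i
    B = Q @ A                                     # Step 4
    b11, r = B[0, 0], B[0, 1:]                    # Step 5: block partition of B
    c, C1 = B[1:, 0], B[1:, 1:]
    B1 = C1 - np.outer(c, r) / b11                # Step 6: Schur complement
    P1, L1, U1 = plu(B1)
    P = Q.T @ np.block([[np.eye(1), np.zeros((1, n - 1))],   # Step 7
                        [np.zeros((n - 1, 1)), P1]])
    L = np.block([[np.eye(1), np.zeros((1, n - 1))],
                  [(P1.T @ c / b11).reshape(-1, 1), L1]])
    U = np.block([[np.full((1, 1), b11), r.reshape(1, -1)],
                  [np.zeros((n - 1, 1)), U1]])
    return P, L, U

A = np.array([[0.0, 2.0], [3.0, 0.0]])
P, L, U = plu(A)
print(np.allclose(P @ L @ U, A))   # True, even though A has no plain LU factorization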
When employing the complete pivoting strategy, we scan for the largest value in the entire submatrix \( {\bf A}_{k:n,k:n} , \) where k is the index of the elimination step, instead of just the next subcolumn. This requires \( O \left( (n-k)^2 \right) \) comparisons at every step to find the pivot, so the total cost becomes \( O \left( n^3 \right) \) comparisons for an n-by-n matrix. For an \( n \times n \) matrix A, the first step is to scan n rows and n columns for the largest value. Once located, this entry is permuted into the next diagonal pivot position of the matrix; in the first step, it is permuted into the (1,1) position of A. We interchange rows exactly as in partial pivoting, by multiplying A on the left by a permutation matrix P. Then, to interchange columns, we multiply A on the right by another permutation matrix Q. The product PAQ interchanges rows and columns so that the largest entry of the matrix lands in the (1,1) position. With complete pivoting, the general equation for L is the same as for partial pivoting, but the equation for U is slightly different. Complete pivoting is theoretically the most stable strategy, and it can be used even when a matrix is singular, but it is much slower than partial pivoting.
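The search for the complete pivot at step k is just an argmax over the trailing submatrix. A one-step NumPy sketch (the helper name complete_pivot is ours, for illustration):

import numpy as np

def complete_pivot(A, k):
    """Indices (i, j) of the largest-magnitude entry of A[k:, k:]."""
    S = np.abs(A[k:, k:])
    i, j = np.unravel_index(np.argmax(S), S.shape)
    return k + int(i), k + int(j)

A = np.array([[2.0, 3.0, 4.0],
              [4.0, 7.0, 5.0],
              [4.0, 9.0, 5.0]])
print(complete_pivot(A, 0))   # (2, 1): the entry 9, as in the example below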

Example. Consider the matrix A:

\[ {\bf A} = \begin{bmatrix} 2&3&4 \\ 4&7&5 \\ 4&9&5 \end{bmatrix} . \]
In this case, we want to use two different permutation matrices to place the largest entry of the entire matrix into the position \( A_{1,1} .\) For the given matrix, this means we want 9 to be the first entry of the first column. To achieve this we multiply A on the left by the permutation matrix \( {\bf P}_1 \) and multiply on the right by the permutation matrix \( {\bf Q}_1 .\)
\[ {\bf P}_1 = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} , \quad {\bf Q}_1 = \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix} , \quad {\bf P}_1 {\bf A} {\bf Q}_1 = \begin{bmatrix} 9&4&5 \\ 7&4&5 \\ 3&2&4 \end{bmatrix} . \]
Then, in computing the LU factorization, the matrix \( {\bf M}_1 \) is used to eliminate the bottom two entries of the first column:
\[ {\bf M}_1 = \begin{bmatrix} 1&0&0 \\ -7/9&1&0 \\ -1/3&0&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf M}_1 {\bf P}_1 {\bf A} {\bf Q}_1 = \begin{bmatrix} 9&4&5 \\ 0&8/9&10/9 \\ 0&2/3&7/3 \end{bmatrix} . \]
Then, the permutation matrices \( {\bf P}_2 \) and \( {\bf Q}_2 \) are used to move the largest entry of the submatrix \( {\bf A}_{2:3,2:3} \) into the position \( A_{2,2} : \)
\[ {\bf P}_2 = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = {\bf Q}_2 \qquad \Longrightarrow \qquad {\bf P}_2 {\bf M}_1 {\bf P}_1 {\bf A} {\bf Q}_1 {\bf Q}_2 = \begin{bmatrix} 9&5&4 \\ 0&7/3&2/3 \\ 0&10/9&8/9 \end{bmatrix} . \]
As we did before, we use matrix multiplication to eliminate the entry below the diagonal in column 2:
\[ {\bf M}_2 = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&-10/21&1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf M}_2 {\bf P}_2 {\bf M}_1 {\bf P}_1 {\bf A} {\bf Q}_1 {\bf Q}_2 = \begin{bmatrix} 9&5&4 \\ 0&7/3&2/3 \\ 0&0&4/7 \end{bmatrix} . \]
This upper triangular matrix product is the factor U. The factor L comes from the inverses of the elimination matrices, after the row interchange has been pushed through \( {\bf M}_1 \): writing \( \tilde{\bf M}_1 = {\bf P}_2 {\bf M}_1 {\bf P}_2^{\mathrm T} , \) we have \( {\bf M}_2 {\bf P}_2 {\bf M}_1 {\bf P}_1 = {\bf M}_2 \tilde{\bf M}_1 {\bf P}_2 {\bf P}_1 , \) and therefore
\[ {\bf L} = \left( {\bf M}_2 \tilde{\bf M}_1 \right)^{-1} = \begin{bmatrix} 1&0&0 \\ 1/3&1&0 \\ 7/9&10/21&1 \end{bmatrix} . \]
To check, we can show that \( {\bf P} {\bf A} {\bf Q} = {\bf L} {\bf U} , \) where \( {\bf P} = {\bf P}_2 {\bf P}_1 \) and \( {\bf Q} = {\bf Q}_1 {\bf Q}_2 . \)
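A quick NumPy check of this factorization (matrices copied from above):

import numpy as np

A  = np.array([[2, 3, 4], [4, 7, 5], [4, 9, 5]])
P1 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
Q1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
P2 = Q2 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
L  = np.array([[1, 0, 0], [1/3, 1, 0], [7/9, 10/21, 1]])
U  = np.array([[9, 5, 4], [0, 7/3, 2/3], [0, 0, 4/7]])

P, Q = P2 @ P1, Q1 @ Q2
print(np.allclose(P @ A @ Q, L @ U))   # True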

Rook pivoting is a third alternative pivoting strategy. For this strategy, in the kth step of the elimination, where \( 1 \le k \le n-1 \) for an n-by-n matrix A, we scan the submatrix \( {\bf A}_{k:n,k:n} \) for entries that are the largest in magnitude in both their row and their column. Once we have found the potential pivots, we are free to choose any one of them: each identified pivot is large enough that the end factors differ very little, no matter which is chosen. When a computer uses rook pivoting, it searches only for the first entry that is the largest in its respective row and column, because any of the candidates would be an equally effective pivot. This leaves some freedom in choosing the pivot we want to use.
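A sketch of the rook search itself, in NumPy (ours; it scans exhaustively and returns the first entry that is maximal in both its row and its column, as described above):

import numpy as np

def rook_pivot(S):
    """Return (i, j) of the first entry that is largest in magnitude in
    both its row and its column (a valid rook pivot)."""
    T = np.abs(np.asarray(S, dtype=float))
    m, n = T.shape
    for i in range(m):
        for j in range(n):
            if T[i, j] >= T[i, :].max() and T[i, j] >= T[:, j].max():
                return i, j

A = np.array([[1.0, 4.0, 7.0],
              [7.0, 8.0, 2.0],
              [9.0, 5.0, 1.0]])
print(rook_pivot(A))   # (0, 2): the entry 7 of the example below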

Example. Consider the matrix A:

\[ {\bf A} = \begin{bmatrix} 1&4&7 \\ 7&8&2 \\ 9&5&1 \end{bmatrix} . \]
When using the partial pivoting technique, we would identify 8 as our first pivot element. When using complete pivoting, we would use 9 as our first pivot element. With rook pivoting we identify the entries that are the largest in both their row and their column; following this logic, 8, 9, and 7 are all potential pivots. In this example we will choose 7 as the first pivot. Since we are choosing 7 as the first pivot element, we multiply A on the right by the permutation matrix \( {\bf Q}_1 \) to move 7 into position (1,1). We would also multiply A on the left by \( {\bf P}_1 , \) but here \( {\bf P}_1 \) is simply the identity matrix, so it has no effect on A.
\[ {\bf Q}_1 = \begin{bmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{bmatrix} \qquad\Longrightarrow \qquad {\bf P}_1 {\bf A} {\bf Q}_1 = \begin{bmatrix} 7&4&1 \\ 2&8&7 \\ 1&5&9 \end{bmatrix} . \]
Then we eliminate the entries in rows 2 and 3 of column 1 via the matrix
\[ {\bf M}_1 = \begin{bmatrix} 1&0&0 \\ -2/7&1&0 \\ -1/7&0&1 \end{bmatrix} \qquad\Longrightarrow \qquad {\bf M}_1 {\bf P}_1 {\bf A} {\bf Q}_1 = \begin{bmatrix} 7&4&1 \\ 0&48/7&47/7 \\ 0&31/7&62/7 \end{bmatrix} . \]
If we were employing partial pivoting, our pivot would be 48/7, and if we were using complete pivoting, our pivot would be 62/7. The rook pivoting strategy allows us to choose either 48/7 or 62/7 as the next pivot. In this situation, we do not need to permute the matrix \( {\bf M}_1 {\bf P}_1 {\bf A}\,{\bf Q}_1 \) to continue with the factorization, because one of the potential pivots is already in pivot position. To keep things interesting, we will permute the matrix anyway and choose 62/7 as the next pivot. To achieve this we multiply by the matrix \( {\bf P}_2 \) on the left and by the matrix \( {\bf Q}_2 \) on the right.
\[ {\bf P}_2 = \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} = {\bf Q}_2 \qquad\Longrightarrow \qquad {\bf P}_2 {\bf M}_1 {\bf P}_1 {\bf A} {\bf Q}_1 {\bf Q}_2 = \begin{bmatrix} 7&1&4 \\ 0&62/7&31/7 \\ 0&47/7&48/7 \end{bmatrix} . \]
At this point, we want to eliminate the last entry of the second column to complete the factorization. We achieve this with the matrix \( {\bf M}_2 :\)
\[ {\bf M}_2 = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&-47/62&1 \end{bmatrix} \qquad\Longrightarrow \qquad {\bf M}_2 {\bf P}_2 {\bf M}_1 {\bf P}_1 {\bf A} {\bf Q}_1 {\bf Q}_2 = \begin{bmatrix} 7&1&4 \\ 0&62/7&31/7 \\ 0&0&7/2 \end{bmatrix} . \]
After the elimination, the matrix is upper triangular, and we recognize it as the factor U. The factor L is again \( \left( {\bf M}_2 \tilde{\bf M}_1 \right)^{-1} \) with \( \tilde{\bf M}_1 = {\bf P}_2 {\bf M}_1 {\bf P}_2^{\mathrm T} \) (here \( {\bf P}_1 = {\bf I} \)). You can use Sage to check that PAQ = LU, where \( {\bf P} = {\bf P}_2 {\bf P}_1 \) and \( {\bf Q} = {\bf Q}_1 {\bf Q}_2 : \)
\begin{align*} {\bf P\,A\,Q} &= \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} \begin{bmatrix} 1&4&7 \\ 7&8&2 \\ 9&5&1 \end{bmatrix} \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} \\ &= \begin{bmatrix} 1&0&0 \\ 1/7&1&0 \\ 2/7&47/62&1 \end{bmatrix} \begin{bmatrix} 7&1&4 \\ 0&62/7&31/7 \\ 0&0&7/2 \end{bmatrix} = {\bf L\,U} . \end{align*}

In general, each elimination step can itself be encoded by a matrix: for the first column, we consider the matrix
\[ {\bf E}_1 = {\bf I} - \begin{bmatrix} 0&0&0&\cdots &0 \\ k_{21}&0&0&\cdots &0 \\ k_{31}&0&0&\cdots &0 \\ \vdots&0&0&\cdots &0 \\ k_{n1}&0&0&\cdots &0 \end{bmatrix} , \]
where \( k_{i1} = a_{i1} / a_{11} . \) Multiplying A on the left by \( {\bf E}_1 \) annihilates every entry of the first column below the pivot \( a_{11} . \)
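A minimal NumPy sketch of building \( {\bf E}_1 \) (ours; it assumes \( a_{11} \ne 0 \)):

import numpy as np

def elimination_matrix(A):
    """E_1 = I minus the column of multipliers k_{i1} = a_{i1}/a_{11};
    E_1 @ A zeroes out the first column of A below the pivot."""
    n = A.shape[0]
    E = np.eye(n)
    E[1:, 0] = -A[1:, 0] / A[0, 0]
    return E

A = np.array([[2.0, 4.0, 2.0], [4.0, -10.0, 2.0], [1.0, 2.0, 4.0]])
print(elimination_matrix(A) @ A)   # first column becomes (2, 0, 0)^T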

  1. Using row exchanges and elementary matrices, find PLU-factorizations of the following matrices. \[ \mbox{(a)} \quad \begin{bmatrix} 2&1&1&0 \\ 4&3&3&1 \\ 8&7&9&5 \\ 6&7&9&8 \end{bmatrix} , \qquad \mbox{(b)} \quad \begin{bmatrix} 2&1&-1&8 \\ 6&-9&12&8 \\ 4&-5&6&7 \\ 2&-3&4&3 \end{bmatrix} ; \] \[ \mbox{(c)} \quad \begin{bmatrix} 2&1&3&3 \\ 6&1&2&3 \\ 6&5&7&9 \\ 4&1&0&1 \end{bmatrix} , \qquad \mbox{(d)} \quad \begin{bmatrix} 1&2&3&4 \\ 4&2&-2&0 \\ 4&11&5&10 \\ 2&-2&3&6 \end{bmatrix} . \]
