Preface
This tutorial was made solely for the purpose of education, and it was designed for students taking Applied Math 0340. It is primarily for students who have some experience using Mathematica. If you have never used Mathematica before and would like to learn more of the basics of this computer algebra system, it is strongly recommended that you look at the APMA 0330 tutorial. As a friendly reminder, don't forget to clear the variables in use and/or the kernel.
Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in regular font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to adapt the scripts to your needs while learning how to use the Mathematica program, and you have the right to distribute and refer to this tutorial as long as it is credited appropriately.
RREF
In the previous section, we demonstrated how the Gaussian elimination procedure is applied to a matrix, row by row, in order to obtain a row echelon form. Specifically, a matrix is in row echelon form if
- all nonzero rows (rows with at least one nonzero element) are above any rows of all zeroes (all zero rows, if any, belong at the bottom of the matrix), and
- the leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it (some texts add the condition that the leading coefficient must be 1, but this is not required).
Unlike the row echelon form, the reduced row echelon form of a matrix is unique and does not depend on the algorithm used to compute it. It is obtained by applying the Gauss-Jordan elimination procedure. A matrix is in reduced row echelon form (also called row canonical form) if it satisfies the following conditions:
- It is in row echelon form.
- Every leading coefficient is 1 and is the only nonzero entry in its column.
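In Sage, the reduced row echelon form and its pivot columns can be computed directly with the matrix methods .rref() and .pivots(); here is a minimal sketch using a small illustrative matrix:
sage: M = matrix(QQ, [[1, 2, 1, 4], [2, 4, 3, 5], [1, 2, 2, 1]])   # illustrative matrix
sage: M.rref()      # reduced row echelon form (row canonical form)
[ 1  2  0  7]
[ 0  0  1 -3]
[ 0  0  0  0]
sage: M.pivots()    # columns that contain the leading 1's
(0, 2)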
The Gaussian elimination algorithm can be generalized to block matrices. Suppose that an \( (m+n) \times (m+n) \) matrix M is partitioned into 2-by-2 blocks \( {\bf M} = \begin{bmatrix} {\bf A}_{m\times m} & {\bf B}_{m\times n} \\ {\bf C}_{n\times m} & {\bf D}_{n\times n} \end{bmatrix} .\) In ordinary Gaussian elimination (the case m = 1), A is just a nonzero entry in the upper left corner (a 1-by-1 matrix) and C is a column vector (an \( (m+n-1) \times 1 \) matrix). To eliminate C, we multiply the first block row by \( {\bf C}\,{\bf A}^{-1} \) and subtract it from the block row below.
Now we illustrate the Gaussian elimination algorithm when an \( (m+n) \times (m+n) \) matrix M is partitioned into blocks \( {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} ,\) where A is an \( m \times m \) nonsingular matrix, B is an \( m \times n \) matrix, C is an \( n \times m \) matrix, and D is an \( n \times n \) square matrix. Multiplying M on the left by a "triangular" block matrix, we obtain the block triangular form displayed below.
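Explicitly, with \( {\bf I}_m \) and \( {\bf I}_n \) denoting identity matrices of the indicated sizes,
\[
\begin{bmatrix} {\bf I}_m & {\bf 0} \\ -{\bf C}\,{\bf A}^{-1} & {\bf I}_n \end{bmatrix}
\begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} =
\begin{bmatrix} {\bf A} & {\bf B} \\ {\bf 0} & {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} \end{bmatrix} .
\]
The matrix \( {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} \) in the lower right block is known as the Schur complement of A in M.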
Recall the difference between row echelon form (also called Gauss form) and reduced row echelon form (Gauss--Jordan form). Matrices such as the following are in row echelon form but not in reduced row echelon form:
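\[
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 5 & 6 \\ 0 & 0 & 7 \end{bmatrix} , \qquad
\begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix} .
\]
In both matrices every leading entry lies strictly to the right of the leading entry of the row above it, so they are in row echelon form; however, the first has leading entries different from 1, and in the second the last pivot has nonzero entries above it, so neither matrix is in reduced row echelon form.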
Here is an example of using the row reduction procedure in Sage:
sage: A=matrix(QQ,3,3,[2,-4,1,4,-8,7,-2,4,-3])
sage: b=matrix(QQ,3,1,[4,2,5])
sage: B=A.augment(b) # construct augmented matrix
sage: B.add_multiple_of_row(0,0,-1/2);B    # row 0 <- row 0 - (1/2)*row 0, i.e. scale row 0 by 1/2
[ 1 -2 1/2 2]
[ 4 -8 7 2]
[-2 4 -3 5]
sage: B.add_multiple_of_row(1,0,-4);B      # row 1 <- row 1 - 4*row 0
[ 1 -2 1/2 2]
[ 0 0 5 -6]
[-2 4 -3 5]
sage: B.add_multiple_of_row(2,0,2);B       # row 2 <- row 2 + 2*row 0
[ 1 -2 1/2 2]
[ 0 0 5 -6]
[ 0 0 -2 9]
sage: B.add_multiple_of_row(1,1,-4/5);B    # scale row 1 by 1/5
[ 1 -2 1/2 2]
[ 0 0 1 -6/5]
[ 0 0 -2 9]
sage: B.add_multiple_of_row(2,1,2);B       # row 2 <- row 2 + 2*row 1
[ 1 -2 1/2 2]
[ 0 0 1 -6/5]
[ 0 0 0 33/5]
Therefore, we have obtained the Gauss (row echelon) form of the augmented matrix. To reach the Gauss--Jordan (reduced row echelon) form, we would need to repeat a similar procedure, working upward from the last pivot; this can be achieved with a standard Sage command:
sage: B.echelon_form()
[ 1 -2 0 0]
[ 0 0 1 0]
[ 0 0 0 1]
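Note that the last column of the reduced augmented matrix contains a pivot, so this particular system is inconsistent. The same conclusion can be read off from the pivot list; the following extra command (using Sage's standard .pivots() method) is a small addition to the session above:
sage: B.pivots()
(0, 2, 3)
Column 3 is the last column of the augmented matrix, so the final row corresponds to the impossible equation \( 0 = 1 .\)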
We demonstrate the Gauss--Jordan procedure in the following example.
Example. Solve the linear vector equation \( {\bf A}\,{\bf x} = {\bf b} , \) where
\[
{\bf A} = \begin{bmatrix} -4 & 1 & 1 \\ -1 & 4 & 2 \\ 2 & 2 & -3 \end{bmatrix} , \qquad
{\bf b} = \begin{bmatrix} 6 \\ -1 \\ -20 \end{bmatrix} .
\]
We first illustrate Sage's elementary row and column interchange commands on A, and then solve the system.
sage: A=matrix(QQ,3,3,[[-4,1,1],[-1,4,2],[2,2,-3]]);A
[ -4 1 1]
[ -1 4 2]
[ 2 2 -3]
sage: A.swap_rows(0,1);A
[ -1 4 2]
[ -4 1 1]
[ 2 2 -3]
sage: A.swap_columns(0,1);A
[ 4 -1 2]
[ 1 -4 1]
[ 2 2 -3]
sage: A=matrix(QQ,3,3,[[-4,1,1],[-1,4,2],[2,2,-3]])  # restore A: the swaps above changed it in place (a column swap also permutes the unknowns)
sage: b=vector([6,-1,-20])
sage: B=A.augment(b);B
[ -4 1 1 6]
[ -1 4 2 -1]
[ 2 2 -3 -20]
sage: B.rref()
[ 1 0 0 -61/55]
[ 0 1 0 -144/55]
[ 0 0 1 46/11]
So the last column of the reduced matrix gives the required vector \( {\bf x} = [\, -61/55 , \ -144/55 , \ 46/11 \,]^{\mathrm T} .\)
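As a quick sanity check, the same solution can be obtained and verified with Sage's built-in .solve_right() method:
sage: x = A.solve_right(b)    # solve A x = b directly
sage: x
(-61/55, -144/55, 46/11)
sage: A*x == b
True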
Another way to say this: a system of linear equations in n variables is consistent if and only if column n + 1 (the last column of the augmented matrix) is not a pivot column. Sage has the matrix method .pivots() to easily identify the pivot columns of the augmented matrix. Consider the following example.
sage: coeff = matrix(QQ, [[ 2, 1, 7, -7],
... [-3, 4, -5, -6],
... [ 1, 1, 4, -5]])
sage: const = vector(QQ, [2, 3, 2])
sage: aug = coeff.augment(const)
sage: aug.rref()
[ 1  0  3 -2  0]
[ 0  1  1 -3  0]
[ 0  0  0  0  1]
sage: aug.pivots()
(0, 1, 4)
We can look at the reduced row-echelon form of the augmented matrix and see a leading one in the final column, so we know the system is inconsistent. However, we could just as easily not form the reduced row-echelon form and just look at the list of pivot columns computed by aug.pivots(). Since aug has 5 columns, the final column is numbered 4, which is present in the list of pivot columns, as expected.
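If this test is needed repeatedly, it can be wrapped in a small helper. The sketch below defines a hypothetical convenience function is_consistent (it is not a built-in Sage command) based on the same pivot criterion:
sage: def is_consistent(coeff, const):
...       aug = coeff.augment(const)                    # augmented matrix [coeff | const]
...       return (aug.ncols() - 1) not in aug.pivots()  # consistent iff the last column is not a pivot
sage: is_consistent(coeff, const)
False
For the system above it returns False, in agreement with the pivot list (0, 1, 4).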
Example. We demonstrate Gauss--Jordan elimination by solving the system with the following augmented matrix
Example. What conditions must \( b_1 , \) \( b_2 , \) and \( b_3 \) satisfy in order for the system
The augmented matrix corresponding to the symmetric coefficient matrix is
where \( b_1 \) and \( b_3 \) are arbitrary. The vectors \( [\, 1, -4 , 0 \, ]^{\mathrm T} ,\) \( [\, 0 , 2, 1 \,]^{\mathrm T} \) constitute a basis of the column space of the coefficient matrix. ■