Preface
This is a tutorial made solely for the purpose of education, and it was designed for students taking Applied Math 0340. It is primarily for students who have some experience using Mathematica. If you have never used Mathematica before and would like to learn more of the basics of this computer algebra system, we strongly recommend looking at the APMA 0330 tutorial. As a friendly reminder, don't forget to clear variables in use and/or the kernel.
Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and you have the right to distribute and refer to this tutorial as long as it is credited appropriately.
Elementary Row Operations
Our objective is to find an efficient method of finding the solutions of systems of linear equations. There are three operations that we can perform on the equations of a linear system without altering the set of solutions:
- multiply both sides of an equation by a non-zero constant;
- interchange two equations;
- add a multiple of one equation to another.
Since the coefficient matrix A together with the given constant vector b contains all the information we need, we unite them into one matrix, called the augmented matrix:
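In Mathematica, an augmented matrix can be assembled by appending b to A as an extra column. The matrix and vector below are illustrative stand-ins, not the system used later in this section:

```mathematica
A = {{2, 1, 1}, {4, 1, 0}, {1, 1, 1}};  (* illustrative coefficient matrix *)
b = {5, 6, 3};                          (* illustrative constant vector *)
M = Join[A, Transpose[{b}], 2];         (* augmented matrix [A | b] *)
M // MatrixForm
```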
Elimination produces an upper triangular system, called row echelon form for Gauss elimination and reduced row echelon form for the Gauss--Jordan algorithm. Gauss elimination introduces zeroes below the pivots, while the Gauss--Jordan algorithm contains an additional phase in which it introduces zeroes above the pivots. Since this extra phase involves roughly 50% more operations, most computer algorithms are based on Gaussian elimination.
Suppose M = [A | b] is the augmented matrix of a linear system A x = b. We can solve the linear system by performing elementary row operations on M. In Mathematica, these row operations can be implemented with the following functions.
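A sketch of such helper functions (the names are illustrative, not built-ins; each returns a new matrix rather than modifying m in place):

```mathematica
(* illustrative helpers for the three elementary row operations on a matrix m *)
swapRows[m_, i_, j_] := Permute[m, Cycles[{{i, j}}]];           (* interchange rows i and j *)
scaleRow[m_, i_, c_] := ReplacePart[m, i -> c*m[[i]]];          (* multiply row i by c *)
addMultipleOfRow[m_, n_, j_, a_] :=
  ReplacePart[m, n -> m[[n]] + a*m[[j]]];                       (* row n += a*(row j) *)
```

For example, addMultipleOfRow[m, 3, 2, -5] replaces row 3 of m with (row 3) − 5*(row 2).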
Example. Consider the system of linear equations
From a computational point of view, the pivot should be the largest element (in absolute value). If we look at the first column, the largest entry is in the second row. To move it into the first row and choose 4 as the pivot, we need to swap the first two rows. It is convenient, but not necessary, to have 1 in this position, which we call the pivot.
Another option is to choose the 2 in the first row as the pivot, because it is not zero. We want to start with a 1 in the first position of the first column. To achieve this, we can either divide each entry in the first row by 2 or switch the first and third rows, because the last row already has a 1 in the first position. If we stick with the former decision, we begin by scaling the first row by 1/2.
The first argument is the index of the row to be replaced, the second argument is the row to form a multiple of, and the final argument is the scale factor. Thus, M.add_multiple_of_row(n,m,A) would replace row n with (row n)+A*(row m).
Since the last entry in the first column is already zero, we can move on to the second column. Our first step is to get a 1 in the second position of the second column. Normally we would do this by scaling the row, but in this case we can swap the second and third rows.
Now we want to eliminate the 5 below the 1. This is done by multiplying the second row by −5 and adding it to the third row.
To get a 1 as the last entry in the third column, we can scale the third row by −1/2.
At this point, the matrix is in echelon form (well, having the 1's down the diagonal of the matrix is not required, but it does make our work easier). All that remains to find the solution is to put the matrix in reduced echelon form which requires that we replace all entries in the first three columns that are not on the main diagonal (where the 1's are) with zeros. We will start with the third column and work our way up and to the left. Remember that this is an augmented matrix and we are going to ignore the right-most column; it just "goes along for the ride."
Therefore, our solution is complete. We see that the solution to our linear system is
\( x_1 =2, \ x_2 = -2, \ \mbox{and} \ x_3 = 3. \) ■
There is an easy way to check your work, or to carry out these steps in the future. First, let's reload the matrix M.
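One quick check is to row-reduce the augmented matrix in a single step with the built-in RowReduce. The entries below form an illustrative system consistent with the stated solution, not necessarily the matrix used above:

```mathematica
M = {{2, 1, 1, 5}, {4, 1, 0, 6}, {1, 1, 1, 3}};  (* illustrative augmented matrix *)
RowReduce[M] // MatrixForm
(* the last column of the reduced row echelon form gives x1 = 2, x2 = -2, x3 = 3 *)
```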
Theorem: Let A be an \( n \times n \) matrix. If B is the matrix that results from switching one row of A with another row, then \( \det {\bf A} = - \det {\bf B} . \)
Theorem: Let A be an \( n \times n \) matrix. If B is the matrix that results from adding or subtracting a multiple of one row of A to another row, then \( \det {\bf A} = \det {\bf B} . \) (The same is true for column operations.)
Theorem: Let A be an \( n \times n \) matrix that is upper triangular, lower triangular, or diagonal. Then its determinant is the product of its diagonal entries. ■
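These facts are easy to spot-check in Mathematica; the matrices below are arbitrary illustrations:

```mathematica
A = {{2, 1, 1}, {4, 1, 0}, {1, 1, 1}};
B = A[[{2, 1, 3}]];                      (* swap rows 1 and 2 *)
{Det[A], Det[B]}                         (* {1, -1}: opposite signs *)
A2 = A; A2[[3]] += 5 A[[1]];             (* add 5*(row 1) to row 3 *)
Det[A2] == Det[A]                        (* True: determinant unchanged *)
U = {{2, 7, 1}, {0, 3, 5}, {0, 0, 4}};   (* upper triangular *)
Det[U] == 2*3*4                          (* True: product of the diagonal *)
```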
Example. We illustrate how to find an inverse matrix using row operations. Consider the matrix
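The idea is to row-reduce the block matrix [A | I]: when the left block becomes the identity, the right block is the inverse. A sketch with an illustrative matrix (the example's actual matrix appears above):

```mathematica
A = {{2, 1, 1}, {4, 1, 0}, {1, 1, 1}};  (* illustrative invertible matrix *)
aug = Join[A, IdentityMatrix[3], 2];    (* the block matrix [A | I] *)
inv = RowReduce[aug][[All, 4 ;;]];      (* right block after row reduction *)
inv == Inverse[A]                       (* True *)
```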
Naïve Gaussian Elimination Method
The naïve Gaussian Elimination method includes two stages:
- Forward Elimination of Unknowns: unknowns are eliminated column by column, starting with the first equation, until the system is reduced to an upper triangular one whose last equation contains a single unknown.
- Back Substitution: starting from the last equation, each of the unknowns is found in turn.
We demonstrate how the method works in the following example.
Example: Consider four equations with four unknowns. First, we enter data.
n = 4;
A = Table[1 + 10^i*j^2 + j*Floor[2.5^(i + j)], {i, 0, n - 1}, {j, 0, n - 1}];
A // MatrixForm
(* forward elimination on the augmented matrix Aug = [A | b] *)
For[k = 1, k <= n - 1, k++,
 For[i = k + 1, i <= n, i++,
  multiplier = N[Aug[[i, k]]/Aug[[k, k]]];
  (* subtract multiplier*(row k) from row i to zero out the entry below the pivot *)
  For[j = k, j <= n + 1, j++,
   Aug[[i, j]] = Aug[[i, j]] - multiplier*Aug[[k, j]]]];
 Print["Step=", k, " ", Aug // MatrixForm]; Print[" "]]
Step=1 \( \left( \begin{array}{ccccc} 1& 4& 17& 55& 137.04 \\ 0.& 13.& 54.& 153.& 154.57 \\ 0.& 112.& 462.& 1137.& 309.14 \\ 0.& 1036.& 4178.& 9678.& 463.71 \end{array} \right) \)
Step=2 \( \left( \begin{array}{ccccc} 1& 4& 17& 55& 137.04 \\ 0.& 13.& 54.& 153.& 154.57 \\ 0.& -1.42109 \times 10^{-14} & -3.23077& -181.154& -1022.54\\ 0.& 0.& -125.385& -2514.92& -11854.3 \end{array} \right) \)
Step=3 \( \left( \begin{array}{ccccc} 1& 4& 17& 55& 137.04 \\ 0.& 13.& 54.& 153.& 154.57 \\ 0.& -1.42109\times 10^{-14} & -3.23077& -181.154& -1022.54 \\ 0.& 0.& 0.& 4515.57& 27830. \end{array} \right) \)
First, we define the vector x:
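A back-substitution sketch consistent with the forward-elimination loop above (it assumes Aug, now upper triangular, and n are still in the kernel):

```mathematica
(* back substitution: solve the upper triangular system stored in Aug = [U | c] *)
x = Table[0, {n}];
x[[n]] = Aug[[n, n + 1]]/Aug[[n, n]];          (* last equation has one unknown *)
For[i = n - 1, i >= 1, i--,
 x[[i]] = (Aug[[i, n + 1]] -
     Sum[Aug[[i, j]]*x[[j]], {j, i + 1, n}])/Aug[[i, i]]];
x
```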
Are there any pitfalls of the Naïve Gauss elimination method?
Yes, there are two pitfalls of the Naïve Gauss elimination method.
Division by zero: division by zero can occur during any of the n − 1 steps of forward elimination, whenever a pivot element happens to be zero.
Round-off error: the Naïve Gauss elimination method is prone to round-off errors. This is especially true when there are large numbers of equations, as errors propagate from step to step. Also, subtraction of nearly equal numbers can cause a large loss of significance. ■
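A minimal illustration of the first pitfall: if the pivot entry is zero, the multiplier is undefined, while swapping rows first (partial pivoting) repairs the computation:

```mathematica
Aug = {{0., 1., 1.}, {2., 1., 3.}};      (* zero pivot in position (1,1) *)
Aug[[2, 1]]/Aug[[1, 1]]                  (* division by zero: ComplexInfinity *)
RowReduce[{{2., 1., 3.}, {0., 1., 1.}}]  (* with the rows swapped, elimination succeeds *)
```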