Reduced Row Echelon Form (RREF)

[Portraits: Carl Friedrich Gauss and Wilhelm Jordan]

In this section, we discuss an algorithm for reducing any matrix, whether or not it is viewed as the augmented matrix of a linear system, to a simple form that can be used for further analysis and computation. This form is known as the reduced row echelon form, usually abbreviated rref. The algorithm that uses row operations to bring a matrix to this special form is usually called Gauss--Jordan elimination. It consists of two stages: a forward phase (usually referred to as Gaussian elimination), in which the matrix is transformed into an upper triangular or "steplike" matrix (see the previous section), and a backward phase, in which zeroes are introduced above each pivot using row operations. When a linear system of equations is solved numerically, Gaussian elimination is nearly always what is used. The second stage is not essential for solving linear systems of equations because the Gauss--Jordan method is inefficient for practical calculations: Gaussian elimination finishes with backward substitution, whereas the Gauss--Jordan algorithm performs backward elimination, which involves roughly 50% more operations than Gaussian elimination. Gauss--Jordan elimination is an essential algorithm for other problems, such as matrix equations, but for linear systems of equations it is of purely theoretical interest.
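The standard leading-order operation counts make the 50% figure precise: for a dense system of n equations in n unknowns,
\[ \mbox{Gaussian elimination with back substitution: } \tfrac{2}{3}\,n^3 + O(n^2) \mbox{ flops}, \qquad \mbox{Gauss--Jordan elimination: } n^3 + O(n^2) \mbox{ flops}. \]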

The basic idea of backward elimination was popularized by the German engineer Wilhelm Jordan in his book on geodesy (the science of measuring the Earth's shape) entitled Handbuch der Vermessungskunde, published in 1888. However, the method also appeared in an article by Clasen published in the same year; Jordan and Clasen probably discovered Gauss--Jordan elimination independently. Johann Carl Friedrich Gauss (1777--1855) was a German mathematician and physicist who made significant contributions to many fields of mathematics and science. Wilhelm Jordan (1842--1899) was a German geodesist who conducted surveys in Germany and Africa and founded the German geodesy journal.

Since the first stage of the Gauss--Jordan algorithm was discussed in the previous section, it is assumed that the reader is familiar with it. For further exposition, we need some new definitions along with the previously discussed terminology. The aim of Gaussian elimination is to reduce a full system of linear equations in n unknowns to triangular form using elementary row operations, thereby reducing a problem that we can't solve to one that we can. Mathematica has a special command, RowReduce, for the Gauss--Jordan algorithm. The only unfortunate aspect of using RowReduce is that it does not actually give the solution to the original system: you must still write out the solution yourself from the rref matrix. On the other hand, the Solve command gives the actual solution to the system and lets you specify which variables to solve for; in that case, you should know the dimension of the solution set in advance.
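For instance, on a two-equation system of our own (x + y = 3, x - y = 1), RowReduce returns only the reduced augmented matrix, while Solve returns the solution itself:
RowReduce[{{1, 1, 3}, {1, -1, 1}}]
{{1, 0, 2}, {0, 1, 1}}      (* the solution x = 2, y = 1 must be read off by hand *)
Solve[{x + y == 3, x - y == 1}, {x, y}]
{{x -> 2, y -> 1}}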

There are three types of elementary row operations that do not change the solution set:
  • Swapping two rows,
  • Multiplying a row by a nonzero number,
  • Adding a multiple of one row to another row.
Two matrices are row-equivalent if one can be obtained from the other by a sequence of row operations.
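In Mathematica, the three operations are easily expressed with part assignment; the matrix here is an arbitrary illustration of ours, and the final line confirms that row operations do not change the reduced row echelon form:
m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
m[[{1, 2}]] = m[[{2, 1}]];       (* operation 1: swap rows 1 and 2 *)
m[[3]] = 5 m[[3]];               (* operation 2: multiply row 3 by the nonzero number 5 *)
m[[2]] = m[[2]] - 4 m[[1]];      (* operation 3: add -4 times row 1 to row 2 *)
RowReduce[m] == RowReduce[{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}]
True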

Theorem: Suppose that two matrices are row-equivalent augmented matrices. Then the systems of linear equations that they represent are equivalent systems.

A matrix is in reduced row echelon form (rref) if it meets all of the following conditions:
  1. If there is a row (called a zero row) where every entry is zero, then this row lies below every row that contains a nonzero entry.
  2. The first nonzero entry of a nonzero row is a 1. It is called a leading 1, its location in the matrix is referred to as a pivot position, and a column that contains a pivot position is called a pivot column.
  3. The leading 1 of a nonzero row occurs strictly to the right of the leading 1 of every row above it, so the leading 1's form a staircase pattern.
  4. Any column in which a leading 1 appears has zeroes in every other entry.
A matrix that satisfies conditions 1--3, but not necessarily 4, is said to be in (row) echelon form.
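Because the reduced row echelon form is unique (see the theorem below), a convenient test of ours is that a matrix is in rref exactly when RowReduce leaves it unchanged:
RowReduce[{{1, 0, 3}, {0, 1, 5}}] == {{1, 0, 3}, {0, 1, 5}}
True       (* already in rref *)
RowReduce[{{0, 1}, {1, 0}}] == {{0, 1}, {1, 0}}
False      (* condition 3 fails: the rows must be swapped *)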

Theorem: Every matrix has a unique reduced row echelon form.

Suppose that A is an m × n matrix that is row equivalent to two m × n matrices B and C, both in reduced row-echelon form. We need to show that B = C.

If B and C are both row-equivalent to A, then they are row-equivalent to each other. Repeated row operations on a matrix combine the rows with each other using operations that are linear, and are identical in each column. A key observation for this proof is that each individual row of B is linearly related to the rows of C. This relationship is different for each row of B, but once we fix a row, the relationship is the same across columns. More precisely, there are scalars δij, i,j = 1,2, ... , m, such that

\[ \left[ {\bf B} \right]_{ij} = \sum_{k=1}^m \delta_{ik} \left[{\bf C}\right]_{kj} . \]
So an entry of row i of matrix B (in column j) is a linear function of the entries of all the rows of matrix C that are also in column j, and the scalars δik depend on which row of B we are considering (the subscript i on δik) but are the same for every column (there is no dependence on j in δik).

We now repeatedly exploit the fact that B and C are in reduced row-echelon form. Recall that a pivot column is all zeroes except for a single 1. More carefully, if R is a matrix in reduced row-echelon form and dl is the index of a pivot column, then [R]k,dl = 1 precisely when k = l and is otherwise zero. Notice also that any entry of R that is both below the entry in row l and to the left of column dl is also zero (with "below" and "left" understood to include equality). In other words, look at examples of matrices in reduced row-echelon form, choose a leading 1, and box it: the rest of its column is zeros, and the lower-left "quadrant" of the matrix that begins there is entirely zeros.

Assuming no relationship about the forms of B and C, let B have r nonzero rows and denote its pivot columns by D = {d1, d2, ... , dr}. For C, let p denote the number of nonzero rows and denote its pivot columns by P = {e1, e2, ... , ep}. There are four steps in the proof, and the first three are about showing that B and C have the same number of pivot columns, in the same places. In other words, the "e" symbols are a necessary fiction.

First Step. Suppose that d1 < e1. Then

\begin{eqnarray*} 1 &=& \left[ {\bf B} \right]_{1, d_1} = \sum_{k=1}^m \delta_{1,k} \left[ {\bf C} \right]_{k, d_1} \\ &=& \sum_{k=1}^m \delta_{1,k} (0) \qquad d_1 < e_1 , \\ &=& 0 . \end{eqnarray*}
The entries of C are all zero in these positions, since they lie to the left of and below the leading 1 in row 1 and column e1 of C. This is a contradiction, so we know that d1 ≥ e1. By an entirely similar argument, reversing the roles of B and C, we obtain d1 ≤ e1. Together this means that d1 = e1.

Second Step. Suppose that we have determined that d1 = e1, d2 = e2, ... , du = eu. Let us now show that du+1 = eu+1. Working towards a contradiction, suppose that du+1 < eu+1. For 1 ≤ l ≤ u,

\begin{eqnarray*} 0 &=& \left[ {\bf B} \right]_{u+1, d_{\ell}} = \sum_{k=1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \sum_{k=1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, e_{\ell}} \\ &=& \delta_{u+1, \ell} \left[ {\bf C} \right]_{\ell , e_{\ell}} + \sum_{k=1, k \ne \ell}^m \, \delta_{u+1,k} \left[ {\bf C} \right]_{k , e_{\ell}} \\ &=& \delta_{u+1, \ell} (1) + \sum_{k=1, k \ne \ell}^m \, \delta_{u+1,k} (0) \\ &=& \delta_{u+1, \ell} . \end{eqnarray*}
Now
\begin{eqnarray*} 1 &=& \left[ {\bf B} \right]_{u+1, d_{u+1}} \\ &=& \sum_{k=1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} \\ &=& \sum_{k=1}^u \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} + \sum_{k=u+1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} \\ &=& \sum_{k=1}^u \left( 0 \right) \left[ {\bf C} \right]_{k, d_{u+1}} + \sum_{k=u+1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} \\ &=& \sum_{k=u+1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} = \sum_{k=u+1}^m \delta_{u+1,k} (0) \\ &=&0 . \end{eqnarray*}
This contradiction shows that du+1 ≥ eu+1. By an entirely similar argument, reversing the roles of B and C, we could conclude that du+1 ≤ eu+1, and therefore du+1 = eu+1.

Third Step. Now we establish that r = p. Suppose that p < r. By the arguments above, we know that d1 = e1, d2 = e2, ... , dp = ep. For 1 ≤ l ≤ p < r,

\begin{eqnarray*} 0 &=& \left[ {\bf B} \right]_{r, d_{\ell}} = \sum_{k=1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} + \sum_{k = p+1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} + \sum_{k = p+1}^m \delta_{r,k} \left( 0 \right) \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} = \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, e_{\ell}} \\ &=& \delta_{r,\ell} \left[ {\bf C} \right]_{\ell , e_{\ell}} + \sum_{k=1, k \ne \ell}^p \delta_{r,k} \left[ {\bf C} \right]_{k , e_{\ell}} \\ &=& \delta_{r,\ell} \left( 1 \right) + \sum_{k=1, k \ne \ell}^p \delta_{r,k} \left( 0 \right) \\ &=& \delta_{r,\ell} . \end{eqnarray*}
Now examine the entries of row r of B,
\begin{eqnarray*} \left[ {\bf B} \right]_{r, j} &=& \sum_{k = 1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, j} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, j} + \sum_{k=p+1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, j} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, j} + \sum_{k=p+1}^m \delta_{r,k} \left( 0 \right) \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, j} \\ &=& \sum_{k=1}^p \left( 0 \right) \left[ {\bf C} \right]_{k, j} \\ &=& 0 . \end{eqnarray*}
So row r is a totally zero row, contradicting the fact that it should be the bottommost nonzero row of B. Hence, r ≤ p. By an entirely similar argument, reversing the roles of B and C, we would conclude that p ≤ r, and therefore r = p. Thus, combining the first three steps, we can say that D = P. In other words, B and C have the same pivot columns, in the same locations.

Fourth Step. In this final step, we will not argue by contradiction. Our intent is to determine the values of the δij. Notice that we can use the values of the di interchangeably for B and C. Here we go: for each i,

\begin{eqnarray*} 1 &=& \left[ {\bf B} \right]_{i, d_i} = \sum_{k=1}^m \delta_{i,k} \left[ {\bf C} \right]_{k, d_i} \\ &=& \delta_{i,i}\left[ {\bf C} \right]_{i, d_i} + \sum_{k=1, k\ne i}^m \, \delta_{i,k}\left[ {\bf C} \right]_{k, d_i} \\ &=& \delta_{i,i}\left( 1 \right) + \sum_{k=1, k\ne i}^m \, \delta_{i,k}\left[ {\bf C} \right]_{k, d_i} \\ &=& \delta_{i,i}\left( 1 \right) + \sum_{k=1, k\ne i}^m \, \delta_{i,k}\left( 0 \right) \\ &=& \delta_{i,i} , \end{eqnarray*}
and for l ≠ i
\begin{eqnarray*} 0 &=& \left[ {\bf B} \right]_{i, d_{\ell}} = \sum_{k=1}^m \delta_{i,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \delta_{i,\ell} \left[ {\bf C} \right]_{\ell , d_{\ell}} + \sum_{k=1, k\ne \ell}^m \, \delta_{i,k}\left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \delta_{i,\ell} \left( 1 \right) + \sum_{k=1, k\ne \ell}^m \, \delta_{i,k} \left( 0 \right) \\ &=& \delta_{i,\ell} . \end{eqnarray*}
Finally, having determined the values of the δij, we can show that B = C. For 1 ≤ i ≤ m and any column index j,
\begin{eqnarray*} \left[ {\bf B} \right]_{i, j} &=& \sum_{k=1}^m \delta_{i,k} \left[ {\bf C} \right]_{k, j} \\ &=& \delta_{i,i} \left[ {\bf C} \right]_{i, j} + \sum_{k=1, k\ne i}^m \delta_{i,k} \left[ {\bf C} \right]_{k, j} \\ &=& \left( 1 \right) \left[ {\bf C} \right]_{i j} + \sum_{k=1, k\ne i}^m \left( 0 \right) \left[ {\bf C} \right]_{k, j} \\ &=& \left[ {\bf C} \right]_{i j} . \end{eqnarray*}
So B and C have equal values in every entry, and so are the same matrix. ■

The following pseudocode summarizes the complete Gauss--Jordan reduction; it returns the rank r together with the rref of A:

input m, n and A
r := 0
for j := 1 to n
   i := r+1
   while i <= m and A[i,j] == 0
       i := i+1
   if i <= m
       r := r+1
       swap rows i and r of A (row op 1)
       scale A[r,j] to a leading 1 (row op 2)
       for k := 1 to m, k <> r
           make A[k,j] zero (row op 3, employing row r)
output r and A
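For concreteness, here is a direct Mathematica transcription of this pseudocode. It is a sketch of ours (rrefSketch is not a built-in command), and in practice RowReduce should be preferred:
rrefSketch[A0_?MatrixQ] := Module[{A = A0, m, n, r = 0, i},
  {m, n} = Dimensions[A];
  Do[
    i = r + 1;
    While[i <= m && A[[i, j]] == 0, i++];   (* search column j below row r for a nonzero entry *)
    If[i <= m,
      r++;
      A[[{r, i}]] = A[[{i, r}]];            (* row op 1: swap rows i and r *)
      A[[r]] = A[[r]]/A[[r, j]];            (* row op 2: scale to a leading 1 *)
      Do[If[k != r, A[[k]] -= A[[k, j]] A[[r]]], {k, m}]],   (* row op 3: clear column j *)
    {j, n}];
  {r, A}]
For example, rrefSketch[{{0, 2, 4}, {1, 1, 1}}] returns {2, {{1, 0, -1}, {0, 1, 2}}}, the rank together with the rref, in agreement with RowReduce.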
Matlab code:

function sol = GaussJ(Ab)
% Naive Gauss--Jordan elimination on an augmented matrix Ab.
% No rows are swapped, so every pivot Ab(i,i) must be nonzero.
[m, n] = size(Ab);
for i = 1:m
    Ab(i,:) = Ab(i,:)/Ab(i,i);               % scale row i so the pivot becomes 1
    for j = 1:m
        if j == i; continue; end
        Ab(j,:) = Ab(j,:) - Ab(j,i)*Ab(i,:); % annihilate entry (j,i)
    end
end
sol = Ab;
Mathematica code without swapping rows:
GaussJordan[A0_, n_] :=
 Module[{A = A0}, Print[MatrixForm[A]];
  For[p = 1, p <= n, p++,
   A[[p]] = A[[p]]/A[[p, p]];               (* scale the pivot row *)
   For[i = 1, i <= n, i++,
    If[i != p,
     A[[i]] = A[[i]] - A[[i, p]] A[[p]]]];  (* clear column p *)
   Print[MatrixForm[A]];];]
Mathematica code with swapping rows:
GaussJordan[A0_, n_] :=
 Module[{A = A0}, Print[MatrixForm[A]];
  For[p = 1, p <= n, p++,
   For[k = p + 1, k <= n, k++,
    If[Abs[A[[k, p]]] > Abs[A[[p, p]]],
     A[[{p, k}]] = A[[{k, p}]];];];         (* swap rows p and k to improve the pivot *)
   A[[p]] = A[[p]]/A[[p, p]];
   For[i = 1, i <= n, i++,
    If[i != p,
     A[[i]] = A[[i]] - A[[i, p]] A[[p]]]];
   Print[MatrixForm[A]];];
  Return[A];]
Another way is to use the following standard Mathematica one-liner, whose patterns check that A is a (rectangular) matrix and that b is a vector:
GaussianElimination[A_?MatrixQ, b_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{A, b}]]
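A quick check of ours, on a system with the unique solution x = 3, y = 2 (note that this one-liner extracts the last column of the rref, so it is meaningful only when the system has a unique solution):
GaussianElimination[{{1, 0}, {0, 2}}, {3, 4}]
{3, 2}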
Example: In the following matrices, any real or complex number can be substituted for each *.
\[ \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} , \quad \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&* \\ 0&0&0&0 \end{bmatrix} , \quad \begin{bmatrix} 1&*&0&* \\ 0&0&1&* \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&1&*&0&0&0&*&*&0&* \\ 0&0&0&1&0&0&*&*&0&* \\ 0&0&0&0&1&0&*&*&0&* \\ 0&0&0&0&0&1&*&*&0&* \\ 0&0&0&0&0&0&0&0&1&* \end{bmatrix} . \]
Example: Solve by Gauss-Jordan elimination:
\[ \begin{split} x_1 + 3\,x_2 -2\,x_3 +2\,x_5 &=-3 , \\ 2\,x_1 + 6\,x_2 -5\,x_3 -2\,x_4 + 4\,x_5 -3\,x_6 &=3, \\ 5\,x_3 + 10\,x_4 + x_6 &= -3, \\ 2\,x_1 + 6\,x_2 + 8\, x_4 + 4\, x_5 + x_6 &= -9 . \end{split} \]
The augmented matrix for the system becomes
\[ \begin{bmatrix} 1&3&-2&0&2&0&-3 \\ 2&6&-5&-2&4&-3&3 \\ 0&0&5&10&0&1&-3 \\ 2&6&0&8&4&1&-9 \end{bmatrix} . \]
A = {{1, 3, -2, 0, 2, 0}, {2, 6, -5, -2, 4, -3}, {0, 0, 5, 10, 0, 1}, {2, 6, 0, 8, 4, 1}}
b = {-3, 3, -3, -9}
A = Insert[A // Transpose, b, 7] // Transpose
Adding -2 times the first row to the second and fourth rows gives
A[[2]] = A[[2]] - 2*A[[1]]
A[[4]] = A[[4]] - 2*A[[1]]
A // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&-1&-2&0&-3&9 \\ 0&0&5&10&0&1&-3 \\ 0&0&4&8&0&1&-3 \end{pmatrix} \)
Adding 5 times the second row to the third row, and then 4 times the second row to the last one, gives
A[[3]] = A[[3]] + 5*A[[2]]
A[[4]] = A[[4]] + 4*A[[2]]
A // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&-1&-2&0&-3&9 \\ 0&0&0&0&0&-14&42 \\ 0&0&0&0&0&-11&33 \end{pmatrix} \)
Multiply the second row by -1:
A = A*{1, -1, 1, 1}
Divide the third row by -14 and the last row by -11; this yields
A = A*{1, 1, -1/14, -1/11}
A // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&1&2&0&3&-9 \\ 0&0&0&0&0&1&-3 \\ 0&0&0&0&0&1&-3 \end{pmatrix} \)
Subtract the third row from the fourth one:
A[[4]] = A[[4]] - A[[3]]
A // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&1&2&0&3&-9 \\ 0&0&0&0&0&1&-3 \\ 0&0&0&0&0&0&0 \end{pmatrix} \)
Multiplying the third row by -3 and adding to the second row gives
A[[2]] = A[[2]] - 3*A[[3]]
A // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&1&2&0&0&0 \\ 0&0&0&0&0&1&-3 \\ 0&0&0&0&0&0&0 \end{pmatrix} \)
Multiplying the second row by 2 and adding to the first row gives
A[[1]] = A[[1]] + 2*A[[2]]
A // MatrixForm
\[ \begin{pmatrix} \color{red}{1}&3&0&4&2&0&-3 \\ 0&0&\color{red}{1}&2&0&0&0 \\ 0&0&0&0&0&\color{red}{1}&-3 \\ 0&0&0&0&0&0&0 \end{pmatrix} . \]
As we see, the given matrix A is reduced to the rref form that contains three pivots in columns 1, 3, and 6. Therefore, the corresponding system of linear equations has three leading variables x1, x3, and x6, and three free variables x2, x4, x5. Solving for the leading variables (marked in red), we obtain
\[ \begin{split} x_1 &= -3-3\,x_2 -4\,x_4 -2\,x_5 , \\ x_3 &= -2\,x_4 , \\ x_6 &= -3 , \end{split} \]
where x2, x4, and x5 are free variables. Now we use Mathematica to convert the augmented matrix into its rref form:
B = {{1, 3, -2, 0, 2, 0, -3}, {2, 6, -5, -2, 4, -3, 3}, {0, 0, 5, 10, 0, 1, -3}, {2, 6, 0, 8, 4, 1, -9}}
RowReduce[B]
{{1, 3, 0, 4, 2, 0, -3}, {0, 0, 1, 2, 0, 0, 0}, {0, 0, 0, 0, 0, 1, -3}, {0, 0, 0, 0, 0, 0, 0}}
If we swap rows 1 and 3 of B and then use the RowReduce command, the output will be the same.
BM = {B[[3]], B[[2]], B[[1]], B[[4]]}
RowReduce[BM]
{{1, 3, 0, 4, 2, 0, -3}, {0, 0, 1, 2, 0, 0, 0}, {0, 0, 0, 0, 0, 1, -3}, {0, 0, 0, 0, 0, 0, 0}}
It is clear that the rref matrix is independent of the order of the equations or, equivalently, of the rows in the augmented matrix associated with the linear system. Now we solve the given system of linear equations using the standard Solve command:
Solve[{x1 + 3*x2 - 2*x3 + 2*x5 == -3, 2*x1 + 6*x2 - 5*x3 - 2*x4 + 4*x5 - 3*x6 == 3, 5*x3 + 10*x4 + x6 == -3, 2*x1 + 6*x2 + 8*x4 + 4*x5 + x6 == -9}, {x1, x3, x6}]
{{x1 -> -3 - 3 x2 - 4 x4 - 2 x5, x3 -> -2 x4, x6 -> -3}}
However, if the chosen variables include a free one (here x4, while the leading variable x6 is omitted), the output will be empty:
Solve[{x1 + 3*x2 - 2*x3 + 2*x5 == -3, 2*x1 + 6*x2 - 5*x3 - 2*x4 + 4*x5 - 3*x6 == 3, 5*x3 + 10*x4 + x6 == -3, 2*x1 + 6*x2 + 8*x4 + 4*x5 + x6 == -9}, {x1, x3, x4}]
{}
In this situation, Solve cannot express a solution in the chosen variables. ■
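When the split into leading and free variables is not known in advance, the standard Reduce command describes the complete solution set; using it here is our suggestion rather than part of the original exposition:
Reduce[{x1 + 3*x2 - 2*x3 + 2*x5 == -3, 2*x1 + 6*x2 - 5*x3 - 2*x4 + 4*x5 - 3*x6 == 3, 5*x3 + 10*x4 + x6 == -3, 2*x1 + 6*x2 + 8*x4 + 4*x5 + x6 == -9}, {x1, x2, x3, x4, x5, x6}]
The exact shape of the output depends on the variable ordering, but it is logically equivalent to the solution found above: x1 == -3 - 3 x2 - 4 x4 - 2 x5 && x3 == -2 x4 && x6 == -3, with x2, x4, and x5 arbitrary.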

A system of linear equations is said to be homogeneous if the constant terms are all zeroes; that is, the system has the form

\[ \begin{split} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0, \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0, \\ \quad\vdots \qquad \vdots\qquad & \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0. \end{split} \]
Every homogeneous system of linear equations is consistent because all such systems have a trivial solution: x1 = x2 = ... = xn = 0. If there are other solutions, they are called nontrivial solutions. So there are only two possibilities for a homogeneous system of equations: either the trivial solution is the only one, or there are infinitely many solutions in addition to the trivial one.
Example: Use Gauss-Jordan elimination to solve the homogeneous linear system:
\[ \begin{split} x_1 -2\,x_3 +2\,x_4 &=0 , \\ 2\,x_1 - x_2 - x_3 +3\,x_4 &=0, \\ 3\,x_1 + 5\,x_2 -4\, x_3 + x_4 &= 0, \\ x_1 - x_2 + x_3 + x_4 &=0 . \end{split} \]
The augmented matrix for the system becomes
\[ \begin{bmatrix} 1&0&-2&2&0 \\ 2&-1&-1&3&0 \\ 3&5&-4&1&0 \\ 1&-1&1&1&0 \end{bmatrix} . \]
A = {{1, 0, -2, 2}, {2, -1, -1, 3}, {3, 5, -4, 1}, {1, -1, 1, 1}}
b = {0, 0, 0, 0}
B = Insert[A // Transpose, b, 5] // Transpose
We apply the standard Mathematica command to obtain its rref form:
RowReduce[B]
{{1, 0, 0, 14/17, 0}, {0, 1, 0, -(13/17), 0}, {0, 0, 1, -(10/17), 0}, {0, 0, 0, 0, 0}}
% // MatrixForm
\( \begin{pmatrix} 1&0&0&\frac{14}{17}&0 \\ 0&1&0&-\frac{13}{17}&0 \\ 0&0&1&-\frac{10}{17}&0 \\ 0&0&0&0&0 \end{pmatrix} \)
The corresponding system of equations is
\[ \begin{split} x_1 + \frac{14}{17}\, x_4 &=0 , \\ x_2 - \frac{13}{17}\, x_4 &=0 , \\ x_3 - \frac{10}{17}\, x_4 &=0 . \end{split} \]
If we assign the free variable x4 an arbitrary value, say x4 = 17t, we can express the solution set parametrically as
\[ x_1 = - 14\,t , \qquad x_2 = 13\,t, \qquad x_3 = 10\, t , \qquad x_4 = 17\,t . \]
Note that the trivial solution results when t = 0. ■
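Since the system is homogeneous, its solution set is precisely the null space of the coefficient matrix, so the standard NullSpace command provides a cross-check (ours):
NullSpace[{{1, 0, -2, 2}, {2, -1, -1, 3}, {3, 5, -4, 1}, {1, -1, 1, 1}}]
The output is a single basis vector proportional to {-14, 13, 10, 17}, confirming the one-parameter family of solutions found above.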
Example: Consider the system of linear equations:
\begin{align*} x_1 + 2\,x_2 - x_3 &= 6 , \\ 2\,x_1 - x_2 + 3\, x_3 &= -3 , \\ 3\,x_1 -3\, x_2 - 4\, x_3 &= 1 . \end{align*}
We write the given system in augmented matrix form:
\[ {\bf A} = \begin{bmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4&1 \end{bmatrix} . \]
The first elimination step is to eliminate the elements a21 = 2 and a31 = 3 in the original matrix A = [aij] by subtracting the multiples m21 = 2 and m31 = 3 of row 1 from rows 2 and 3, respectively, which gives
\[ {\bf A} = \begin{bmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4&1 \end{bmatrix}_{-2\times R_1 + R_2 \mapsto R_2} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 3&-3&-4&1 \end{bmatrix}_{-3\times R_1 + R_3 \mapsto R_3} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&-9&-1&-17 \end{bmatrix} . \]
The second elimination step is to eliminate the element a32 = -9 by subtracting the second row times (9/5) from the third row:
\[ {\bf A} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&-9&-1&-17 \end{bmatrix}_{-(9/5)\times R_2 + R_3 \mapsto R_3} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&0&-10&10 \end{bmatrix} . \]
This completes the forward elimination stage required by the Gaussian elimination. To obtain the reduced row echelon form, we first need to divide the second row and the third row by -5 and -10, respectively, in order to see leading 1's on the main diagonal:
\[ {\bf A} = \begin{bmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4&1 \end{bmatrix} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix} . \]
The first elimination step in the backward stage is to eliminate the element a12 = 2 in the first row by subtracting 2 times the second row from the first one:
\[ {\bf A} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix}_{-2\times R_2 + R_1 \mapsto R_1} \sim \begin{bmatrix} 1&0&1&0 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix}. \]
The second elimination step in the backward stage is to eliminate the elements a13 = 1 and a23 = -1. So we add the last row to the second one and subtract it from the first row. This yields
\[ {\bf A} \sim \begin{bmatrix} 1&0&1&0 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix}_{R_3 + R_2 \mapsto R_2} \sim \begin{bmatrix} 1&0&1&0 \\ 0&1&0&2 \\ 0&0&1&-1 \end{bmatrix}_{-R_3 + R_1 \mapsto R_1} \sim \begin{bmatrix} \color{red}{1}&0&0&1 \\ 0&\color{red}{1}&0&2 \\ 0&0&\color{red}{1}&-1 \end{bmatrix} . \]
We check with Mathematica:
M = {{1, 2, -1, 6}, {2, -1, 3, -3}, {3, -3, -4, 1}}
GaussJordan[M, 3]
\( \begin{pmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4& 1 \end{pmatrix} \)
\( \begin{pmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&-9&-1& -17 \end{pmatrix} \)
\( \begin{pmatrix} 1&0&1&0 \\ 0&1&-1&3 \\ 0&0&-10& 10 \end{pmatrix} \)
\( \begin{pmatrix} 1&0&0&1 \\ 0&1&0&2 \\ 0&0&1& -1 \end{pmatrix} \)
Or we can use the standard Mathematica command:
A = {{1, 2, -1}, {2, -1, 3}, {3, -3, -4}}
b = {6, -3, 1}
LinearSolve[A, b]
{1, 2, -1}
The same output is obtained with the reduced row echelon form:
GaussianElimination[A_?MatrixQ, b_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{A, b}]]
A = {{1, 2, -1}, {2, -1, 3}, {3, -3, -4}}
b = {6, -3, 1}
GaussianElimination[A, b]
{1, 2, -1}
This completes the Gauss-Jordan elimination procedure. Obviously, the original set of equations has been transformed to a diagonal form. Now expressing the set in algebraic form leads to
\[ x_1 =1, \quad x_2 =2 , \quad x_3 =-1 , \]
which is the required solution of the given system. ■
Example: Consider the system of linear equations
\begin{align*} x_1 + 2\,x_2 + x_3 - x_4 &= 5 , \\ 3\,x_1 + 2\,x_2 + 4\,x_3 + 4\,x_4 &= -17, \\ 4\, x_1 + 4\,x_2 + 3\,x_3 + 4\, x_4 &= -2, \\ 2\,x_1 + x_3 + 5\, x_4 &= -10 . \end{align*}
This system can be represented by the coefficient matrix A and right-hand side vector b, as follows:
\[ {\bf A} = \begin{bmatrix} 1&2&1&-1 \\ 3&2&4&4 \\ 4&4&3&4 \\ 2&0&1&5 \end{bmatrix} , \quad {\bf b} = \begin{bmatrix} 5 \\ -17 \\ -2 \\ -10 \end{bmatrix}. \]
To perform row operations to reduce this system to upper triangular form, we define the augmented matrix
\[ {\bf B} = \left[ {\bf A} \ {\bf b} \right] = \begin{bmatrix} 1&2&1&-1&5 \\ 3&2&4&4&-17 \\ 4&4&3&4&-2 \\ 2&0&1&5&-10 \end{bmatrix}. \]
A = {{1, 2, 1, -1}, {3, 2, 4, 4}, {4, 4, 3, 4}, {2, 0, 1, 5}}
b = {5, -17, -2, -10}
B = Insert[A // Transpose, b, 5] // Transpose
Of course, we can use the standard Mathematica command to determine its rref form:
RowReduce[B]
% // MatrixForm
{{1, 0, 0, 0, 3}, {0, 1, 0, 0, 3}, {0, 0, 1, 0, -6}, {0, 0, 0, 1, -2}}
\( \begin{pmatrix} \color{red}{1}&0&0&0&3 \\ 0&\color{red}{1}&0&0&3 \\ 0&0&\color{red}{1}&0&-6 \\ 0&0&0&\color{red}{1}&-2 \end{pmatrix} \)
All three methods used above (Solve, Inverse, and RowReduce) agree on the final answer:
Solve[{x1 + 2 x2 + x3 - x4 == 5, 3 x1 + 2 x2 + 4 x3 + 4 x4 == -17, 4 x1 + 4 x2 + 3 x3 + 4 x4 == -2, 2 x1 + x3 + 5 x4 == -10}, {x1, x2, x3, x4}]
{{x1 -> 3, x2 -> 3, x3 -> -6, x4 -> -2}}
Inverse[A].b
{3, 3, -6, -2}
We can obtain the answer with one Mathematica command:
A = {{1, 2, 1, -1}, {3, 2, 4, 4}, {4, 4, 3, 4}, {2, 0, 1, 5}}
b = {{5}, {-17}, {-2}, {-10}}
RowReduce[Join[A, b, 2]] // MatrixForm
We first define B(1) to be the original augmented matrix. Then, we denote by B(2) the result of the first elementary row operation, which entails subtracting 3 times the first row from the second in order to eliminate x1 from the second equation:
\[ {\bf B}^{(2)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 4&4&3&4&-2 \\ 2&0&1&5&-10 \end{bmatrix}. \]
Next, we eliminate x1 from the third equation by subtracting 4 times the first row from the third:
\[ {\bf B}^{(3)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&-4&-1&8&-22 \\ 2&0&1&5&-10 \end{bmatrix}. \]
Then, we complete the elimination of x1 by subtracting 2 times the first row from the fourth:
\[ {\bf B}^{(4)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&-4&-1&8&-22 \\ 0&-4&-1&7&-20 \end{bmatrix}. \]
We now need to eliminate x2 from the third and fourth equations. This is accomplished by subtracting the second row from the third, which yields
\[ {\bf B}^{(5)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&0&-2&1&10 \\ 0&-4&-1&7&-20 \end{bmatrix} , \]
and the fourth, which yields
\[ {\bf B}^{(6)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&0&-2&1&10 \\ 0&0&-2&0&12 \end{bmatrix}. \]
Finally, we subtract the third row from the fourth to obtain the augmented matrix of an upper-triangular system,
\[ {\bf B}^{(7)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&0&-2&1&10 \\ 0&0&0&-1&2 \end{bmatrix}. \]
Back substitution now gives x4 = -2; then -2x3 + x4 = 10 yields x3 = -6, next -4x2 + x3 + 7x4 = -32 yields x2 = 3, and finally x1 = 5 - 2x2 - x3 + x4 = 3, in agreement with the rref computed above. ■
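The back substitution can also be scripted; backSub below is a small sketch of ours (not a built-in command) for an upper-triangular augmented matrix:
backSub[U_?MatrixQ] := Module[{n = Length[U], x},
  x = ConstantArray[0, n];
  Do[x[[i]] = (U[[i, -1]] - Sum[U[[i, j]] x[[j]], {j, i + 1, n}])/U[[i, i]],
    {i, n, 1, -1}];                  (* work upward from the last equation *)
  x]
backSub[{{1, 2, 1, -1, 5}, {0, -4, 1, 7, -32}, {0, 0, -2, 1, 10}, {0, 0, 0, -1, 2}}]
{3, 3, -6, -2}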
Exercises:
  1. Determine which matrices are in reduced echelon form and which others are only in echelon form. \[ (a)\ \begin{bmatrix} 1&0&0&0 \\ 0&1&0&3 \\0&0&1&6 \end{bmatrix}, \qquad (b) \ \begin{bmatrix} 1&0&1&2&0&4 \\ 0&1&-2&2&0&3 \\ 0&0&0&0&1&5 \end{bmatrix}, \qquad (c) \ \begin{bmatrix} 1&0&1&0 \\ 0&1&1&0 \\ 0&0&0&1 \end{bmatrix}, \] \[ (d)\ \begin{bmatrix} 1&0&0&0 \\ 0&1&1&0 \\ 0&0&0&0 \\ 0&0&0&1 \end{bmatrix}, \qquad (e) \ \begin{bmatrix} 1&2&0&1&4 \\ 0&2&0&3&3 \\ 0&0&0&2&2 \\ 0&0&0&0&1 \end{bmatrix} , \qquad (f) \ \begin{bmatrix} 1&1&0&0 \\ 0&1&1&0 \\ 0&0&1&1 \end{bmatrix} . \]
  2. Consider the two 3×4 matrices below \[ {\bf B} = \begin{bmatrix} 1&3&-2&2 \\ -1&-2&-1&-1 \\ -1&-5&8&-3 \end{bmatrix}, \quad {\bf C} = \begin{bmatrix} 1&2&1&2 \\ 1&1&4&0 \\ -1&-1&-4&1 \end{bmatrix} . \] Row-reduce each matrix and determine that the reduced row-echelon forms of B and C are identical. From this, argue that B and C are row-equivalent.
  3. Perform Gauss--Jordan elimination on the following systems of equations, but without partial pivoting (without swapping rows).
    (a) \( \begin{align*} 2\,x + 3\,y + 4\,z + 2\,u &= 5, \\ x+y+z &=1 , \\ 2\,x+2\,y + 3\,z + u &=4 , \\ 6\,x + 9\,y + 12\,z + 6\,u &= 15 ; \end{align*} \) (b) \( \begin{align*} 4\,x + 6\,y + z - 7\,u &= 8, \\ x+y+z+u &=3 , \\ 2\,x+2\,y + 2\,z -3\, u &=5 , \\ -x + y + z + u &= 6 ; \end{align*} \)
    (c) \( \begin{align*} -2\,x + 6\,y + 3\,z &= 4, \\ x-3\,y+z &= 6 , \\ 2\,x+5\,y + 6\,z &=1 ; \end{align*} \) (d) \( \begin{align*} 2\,x + 3\,y - 5\,z &= 7, \\ 3\,x +2\,y+7\,z &= 8 , \\ 4\,x+6\,y + 2\,z &=1 ; \end{align*} \)
    (e) \( \begin{align*} -2\,x + 4\,y - 8\,z -3\,u + v &= 2, \\ 3\,x +4\,y - 7\,z -u -5\,v &= 1 , \\ 5\,x+4\,z + u+v &=-1 ; \end{align*} \) (f) \( \begin{align*} -2\,x + 4\,y - 8\,z &= 2, \\ 3\,x +4\,y -7\,z &= 1 , \\ 5\,x + z &=-1 , \\ -3\,x + 4\,y + 3\,z &= 2. \end{align*} \)
  4. For the following problems, use Gauss-Jordan Elimination to put the matrices in RREF.
    \[ (a) \quad A= \begin{bmatrix} 4 & 4 & 1 & 24\\ 2 & -4 & 1 & 0\\ 5 & -4 & -5 & 12\\ \end{bmatrix} ; \qquad\quad (b) \quad A= \begin{bmatrix} 2 & 3 & -1 & 24\\ 1 & 1 & 4 & 13\\ 3 & -2 & 1 & -7\\ \end{bmatrix} \]
  5. Perform Gauss--Jordan elimination on the following matrices:
    \[ (a) \quad \begin{bmatrix} 2&-3&6 \\ 4&-2&1 \end{bmatrix} , \quad (b) \quad \begin{bmatrix} 2&-3&6&1 \\ 8&-4&2&3 \\ 7&-3&2&5 \end{bmatrix} , \quad (c) \quad \begin{bmatrix} 2&-3&4&-5&-10 \\ 1&1&-1&1&4 \\ 3&5&-9&7&24 \\ 2&2&-2&3&9 \end{bmatrix} . \]
  6. Suppose that the prices of three goods sold by a firm are interrelated by the linear system
    \[ 5\,x_1 - 2\,x_2 - 2\, x_3 =5, \quad -2\,x_1 + 5\,x_2 -2\,x_3 =5, \quad -2\,x_1 -2\, x_2 + 5\,x_3 =5 , \]
    where x1, x2, and x3 denote prices in dollars of goods 1, 2, and 3, respectively. Find the prices that solve the system using the Gauss-Jordan elimination method.
  7. If matrix A represents an augmented matrix system of equations of the form \( ax_{1}+bx_{2}+cx_{3}=d, \) find the values of \( x_{1}, x_{2} , \) and x3.
    \[ (a) \quad \begin{bmatrix} -1 & -5 & -5 & 22\\ 4 & -5 & 4 & 19\\ 1 & 5 & -1 & -10\\ \end{bmatrix} , \qquad\quad (b) \quad A= \begin{bmatrix} 1 & 2 & 2 & 2\\ 1 & 1 & 1 & 3\\ 1 & 0 & 2 & 4\\ \end{bmatrix} \]

  1. Althoen, Steven C., and McLaughlin, Renate (1987), Gauss--Jordan reduction: a brief history, The American Mathematical Monthly, Mathematical Association of America, 94, No. 2: 130--142, ISSN 0002-9890
  2. Gauss--Jordan Elimination Method
  3. G. Peters and J.H. Wilkinson, On the stability of Gauss--Jordan elimination with pivoting, National Physical Laboratory, Teddington, Middlesex, England.
  4. Beezer, Robert A., Reduced Row-Echelon Form
  5. Gaussian Elimination, a collection of codes.
  6. Jie Ma and Yongshu Li, Gauss--Jordan elimination method for computing all types of generalized inverses related to the {1}-inverse, Journal of Computational and Applied Mathematics, Volume 321, September 2017, Pages 26-43. https://doi.org/10.1016/j.cam.2017.02.010
  7. Program for Gauss--Jordan Elimination Method
  8. Module for Gauss--Jordan Elimination