Various modifications have been made to the method of elimination; we present one of them, known as the Gauss--Jordan method. It was the German geodesist Wilhelm Jordan (1842--1899), and not the French mathematician Camille Jordan (1838--1922), who introduced the Gauss--Jordan method of solving systems of linear equations. Camille Jordan is credited with the Jordan normal form, a well-known topic in linear algebra.

   
RREF application:

We apply row operations to reduce the matrix B to its reduced row echelon form. To achieve this, we use the following subroutine written by Nasser M. Abbasi (https://12000.org/my_notes/rref/index.htm); it calls the companion routine displayREF, which performs the forward phase and comes from the same source.

displayRREF[Ain_?(MatrixQ[#]&), displayMat_:True, normalizePivot_:True] := Module[
  {multiplier, j, i, pivotRow, pivotCol, nRows, nCols, p, tmp, startAgain, A = Ain, n, m,
   pivotsFound, keepEliminating, nIter, entry, debug = False},
  {A, pivotsFound} = displayREF[Ain, displayMat, normalizePivot];   (* forward phase *)
  If[debug, Print["Completed forward"]];
  If[debug, Print["pivotsFound", pivotsFound]];
  (*If[displayMat, Print@makeNiceMatrix[A, {n, m}]];*)
  Print[">>>>>>Starting backward elimination phase. The pivots are ", pivotsFound];
  Do[
    pivotRow = First@entry; pivotCol = Last@entry;
    If[debug, Print["pivotRow= ", pivotRow]];
    If[debug, Print[" pivotCol= ", pivotCol]];
    If[pivotRow > 1,
      Do[
        If[A[[i, pivotCol]] =!= 0,
          Print["Zeroing out element A(", i, ",", pivotCol, ") using row(", i, ")=row(", i,
            ")-A(", i, ",", pivotCol, ")*row(", pivotRow, ")"];
          A[[i, ;;]] = A[[i, ;;]] - A[[i, pivotCol]]*A[[pivotRow, ;;]];
          A = Simplify[A];
          If[displayMat, Print@makeNiceMatrix[A, {pivotRow, pivotCol}]]
        ],
        {i, pivotRow - 1, 1, -1}
      ]
    ],
    {entry, pivotsFound}
  ];
  {A, pivotsFound}
];
(*----------------------------------------------------------*)
makeSolutionSpecialCase[A_?(MatrixQ[#]&), b_?(VectorQ[#]&), pivotCols_List] := Module[
  {nRows, nCols, nLeadingVariables, nFreeVariables, n, m, k, variables = {}, eq, freeVariables, sol = {}},
  Print["Pivot columns are ", MatrixForm[pivotCols]];
  ClearAll[x, t]; (*not made local, to prevent $ from showing in print*)
  {nRows, nCols} = Dimensions[A];
  nLeadingVariables = Length[pivotCols];
  nFreeVariables = nCols - nLeadingVariables;
  Print["There are ", nLeadingVariables, " leading variables and ", nFreeVariables, " free variables. These are "];
  Array[t, nFreeVariables]; Array[x, nCols];
  m = 0; k = 0;
  Do[
    If[Not[MemberQ[pivotCols, n]],
      m++;
      Print[x[n], " is a free variable. Let ", x[n], "=", t[m]];
      AppendTo[variables, t[m]];
      AppendTo[sol, 0],
      Print[x[n], " is a leading variable"];
      AppendTo[variables, x[n]];
      AppendTo[sol, b[[++k]]]
    ],
    {n, 1, nCols}
  ];
  freeVariables = (t[#] & /@ Range[nFreeVariables]);
  Print["Hence the system after RREF is the following>>>>>>"];
  Print[MatrixForm[A . variables], "=", MatrixForm[b]];
  Print["There is a different solution for each value of the free variables."];
  Print["Setting free variables ", freeVariables, " to zero gives"];
  variables = variables /. ((t[#] -> 0) & /@ Range[nFreeVariables]);
  Print[MatrixForm[A . variables], "=", MatrixForm[b]];
  Print["Therefore the final solution is "];
  Print[MatrixForm[x[#] & /@ Range[nCols]], "=", MatrixForm[sol]]
]
makeNiceMatrix[mat_?MatrixQ, pivot_List] := Module[{g, nRow, nCol},
  {nRow, nCol} = Dimensions[mat];
  g = Grid[mat, Frame -> {None, None, {pivot -> True}}];
  MatrixForm[{{g}}]
]
(*thanks to http://mathematica.stackexchange.com/questions/60613/how-to-add-a-vertical-line-to-a-matrix*)
(*makes a dashed line inside a matrix*)
Format[matWithDiv[n_, opts : OptionsPattern[Grid]][m_?MatrixQ]] := MatrixForm[{{Grid[m, opts, Dividers -> {n -> {Red, Dashed}}]}}];
   ■
End of Mathematica script

Reduced Row Echelon Form

Amazingly, there is a fairly simple procedure, called the Gauss--Jordan algorithm, that finds all solutions of any n × m linear system, no matter how big m and n are. Admittedly, it can require many steps. In such cases humans get tired and bored, and tend to make mistakes. Computers, however, have none of those problems, and can solve systems with millions of equations and millions of unknowns.

Elimination produces an upper triangular system, called row echelon form for Gaussian elimination and reduced row echelon form for the Gauss--Jordan algorithm. Gaussian elimination introduces zeroes below the pivots, while the Gauss--Jordan algorithm contains an additional phase in which it introduces zeroes above the pivots. For an n × n system this extra phase raises the cost from roughly n³/3 to roughly n³/2 multiplications, about 50% more work, so most computer algorithms are based on Gaussian elimination.

[Portraits: Carl Friedrich Gauss and Wilhelm Jordan]

In this section, we discuss the algorithm for reducing any matrix, whether or not it is viewed as an augmented matrix of a linear system, to a simple form that can be used for further analysis and/or computation. This form is known as the reduced row echelon form, usually abbreviated as rref. The algorithm that uses row operations to put a matrix into this special form is usually called Gauss--Jordan elimination; it consists of two stages: a forward phase (usually referred to as Gaussian elimination) in which the matrix is transformed into a kind of upper triangular or "steplike" matrix (see the previous section), and a backward phase in which zeroes are introduced above the pivot in each column containing one. When a linear system of equations is solved numerically, it nearly always uses Gaussian elimination. The second stage is not essential for solving linear systems of equations because the Gauss--Jordan method is inefficient for practical calculations: Gaussian elimination finishes with backward substitution, while the Gauss--Jordan algorithm performs backward elimination, which involves roughly 50% more operations. Gauss--Jordan elimination is an essential algorithm for other problems, such as matrix equations and computing inverses, but for linear systems of equations it is of mainly theoretical interest.

The basic idea of backward elimination was popularized by the German engineer Wilhelm Jordan in his book on geodesy (the science of measuring the Earth's shape) entitled Handbuch der Vermessungskunde, published in 1888. However, the method also appeared in an article by Clasen published in the same year; Jordan and Clasen probably discovered Gauss--Jordan elimination independently. Johann Carl Friedrich Gauss (1777--1855) was a German mathematician and physicist who made significant contributions to many fields of mathematics and science. Wilhelm Jordan (1842--1899) was a German geodesist who conducted surveys in Germany and Africa and founded the German geodesy journal.

Since the first stage of the Gauss--Jordan algorithm was discussed in the previous section, it is assumed that the reader is familiar with it. For further exposition, we need some new definitions along with the previously discussed terminology. The aim of Gaussian elimination is to reduce a full system of linear equations in n unknowns to triangular form using elementary row operations, thereby reducing a problem that we can't solve to one that we can. Mathematica has a special command, RowReduce, for the Gauss--Jordan algorithm. The only unfortunate aspect of using RowReduce is that it does not actually give the solution to the original system: you must still read the solution off the rref matrix yourself. On the other hand, the Solve command gives the actual solution to the system, and you can specify which variables to solve for; in that case, you should know in advance the dimension of the solution set.
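For instance, for the toy system x + y = 2, 2 x + y = 3, the two commands report the same answer in different forms:
RowReduce[{{1, 1, 2}, {2, 1, 3}}]
{{1, 0, 1}, {0, 1, 1}}
Solve[{x + y == 2, 2 x + y == 3}, {x, y}]
{{x -> 1, y -> 1}}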

There are three types of elementary row operations that do not change the solution set:
  • Swapping two rows,
  • Multiplying a row by a nonzero number,
  • Adding a multiple of one row to another row.
Two matrices are row-equivalent if one can be obtained from the other by a sequence of row operations.
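In Mathematica the three operations act directly on the rows of a matrix; here is a minimal sketch on a 2×3 matrix:
A = {{2, 4, 6}, {1, 1, 1}};
A[[{1, 2}]] = A[[{2, 1}]];  (* swap rows 1 and 2 *)
A[[2]] = A[[2]]/2;          (* multiply row 2 by the nonzero number 1/2 *)
A[[2]] = A[[2]] - A[[1]];   (* add (-1) times row 1 to row 2 *)
A
{{1, 1, 1}, {0, 1, 2}}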

[Figure: schematic of a matrix in row echelon form with pivots p1, p2, p3, …, pr forming a staircase; all entries below the staircase are zero, and the number of pivots equals the rank r.]

Theorem 1: Suppose that two matrices are row-equivalent augmented matrices. Then the systems of linear equations that they represent are equivalent systems.

A matrix is in reduced row echelon form (rref) if it meets all of the following conditions:
  1. If there is a row (called a zero row) where every entry is zero, then this row lies below every row that contains a nonzero entry.
  2. The first nonzero entry of a nonzero row is a 1. This is called a leading 1, and its location in the matrix is referred to as the pivot position. A column that contains a pivot position is called a pivot column.
  3. The leading 1 of each nonzero row lies in a column strictly to the right of the leading 1 of the row above it.
  4. Any column in which a leading 1 appears has zeroes in every other entry.

Theorem 2: Every matrix has a unique reduced row echelon form.

Suppose that A is an m×n matrix that is row equivalent to two m×n matrices B and C, both in reduced row-echelon form. We need to show that B = C.

If B and C are both row-equivalent to A, then they are row-equivalent to each other. Repeated row operations combine the rows of a matrix linearly, and they act identically in each column. The key observation for this proof is that each individual row of B is a linear combination of the rows of C; the combination is different for each row of B, but once we fix a row, it is the same across all columns. More precisely, there are scalars δij, i, j = 1, 2, ... , m, such that

\[ \left[ {\bf B} \right]_{ij} = \sum_{k=1}^m \delta_{ik} \left[{\bf C}\right]_{kj} . \]
So an entry of row i of matrix B (in column j) is a linear function of the entries of all the rows of matrix C that are also in column j, and the scalars (δik) depend on which row of B we are considering (the i subscript on δik), but are the same for every column (no dependence on j in δik).

We now repeatedly exploit the fact that B and C are in reduced row-echelon form. Recall that a pivot column is all zeroes, except for a single one. More carefully, if R is a matrix in reduced row-echelon form and d is the index of a pivot column whose leading 1 lies in row ℓ, then [R]kd = 1 precisely when k = ℓ and is otherwise zero. Notice also that any entry of R that is both below the entry in row ℓ and to the left of column d is also zero (with "below" and "left" understood to include equality). In other words, look at examples of matrices in reduced row-echelon form and choose a leading 1 (with a box around it): the rest of its column is zeros, and the lower-left "quadrant" of the matrix that begins there is entirely zeros.

Assuming no relationship between the forms of B and C, let B have r nonzero rows and denote its pivot columns by D = { d1, d2, ... , dr }. For C, let p denote the number of nonzero rows and denote its pivot columns by P = { e1, e2, ... , ep }. There are four steps in the proof, and the first three show that B and C have the same number of pivot columns, in the same places; in other words, the “e” symbols are a necessary fiction.

First Step. Suppose that d1 < e1. Then

\begin{eqnarray*} 1 &=& \left[ {\bf B} \right]_{1, d_1} = \sum_{k=1}^m \delta_{1,k} \left[ {\bf C} \right]_{k, d_1} \\ &=& \sum_{k=1}^m \delta_{1,k} (0) \qquad d_1 < e_1 , \\ &=& 0 . \end{eqnarray*}
The entries of C in column d1 are all zero, since they lie to the left of and below the leading 1 in row 1 and column e1 of C. This is a contradiction, so we know that d1 ≥ e1. By the same argument with the roles of B and C reversed, we also get d1 ≤ e1. Together this means that d1 = e1.

Second Step. Suppose that we have determined that d1 = e1, d2 = e2, ... , du = eu. Let us now show that du+1 = eu+1. Working towards a contradiction, suppose that du+1 < eu+1. For 1 ≤ ℓ ≤ u,

\begin{eqnarray*} 0 &=& \left[ {\bf B} \right]_{u+1, d_{\ell}} = \sum_{k=1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \sum_{k=1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, e_{\ell}} \\ &=& \delta_{u+1, \ell} \left[ {\bf C} \right]_{\ell , e_{\ell}} + \sum_{k=1, k \ne \ell}^m \, \delta_{u+1,k} \left[ {\bf C} \right]_{k , e_{\ell}} \\ &=& \delta_{u+1, \ell} (1) + \sum_{k=1, k \ne \ell}^m \, \delta_{u+1,k} (0) \\ &=& \delta_{u+1, \ell} . \end{eqnarray*}
Now
\begin{eqnarray*} 1 &=& \left[ {\bf B} \right]_{u+1, d_{u+1}} \\ &=& \sum_{k=1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} \\ &=& \sum_{k=1}^u \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} + \sum_{k=u+1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} \\ &=& \sum_{k=1}^u \left( 0 \right) \left[ {\bf C} \right]_{k, d_{u+1}} + \sum_{k=u+1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} \\ &=& \sum_{k=u+1}^m \delta_{u+1,k} \left[ {\bf C} \right]_{k, d_{u+1}} = \sum_{k=u+1}^m \delta_{u+1,k} (0) \\ &=&0 . \end{eqnarray*}
This contradiction shows that du+1 ≥ eu+1. By an entirely similar argument, we could conclude that du+1 ≤ eu+1, and therefore du+1 = eu+1.

Third Step. Now we establish that r = p. Suppose that p < r. By the arguments above, we know that d1 = e1, d2 = e2, ... , dp = ep. Consider row r of B, which is nonzero. For 1 ≤ ℓ ≤ p,

\begin{eqnarray*} 0 &=& \left[ {\bf B} \right]_{r, d_{\ell}} = \sum_{k=1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} + \sum_{k = p+1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, d_{\ell}} + \sum_{k = p+1}^m \delta_{r,k} \left( 0 \right) \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, e_{\ell}} \\ &=& \delta_{r,\ell} \left[ {\bf C} \right]_{\ell , e_{\ell}} + \sum_{k=1, k \ne \ell}^p \delta_{r,k} \left[ {\bf C} \right]_{k , e_{\ell}} \\ &=& \delta_{r,\ell} \left( 1 \right) + \sum_{k=1, k \ne \ell}^p \delta_{r,k} \left( 0 \right) \\ &=& \delta_{r,\ell} . \end{eqnarray*}
Now examine the entries of row r of B,
\begin{eqnarray*} \left[ {\bf B} \right]_{r, j} &=& \sum_{k = 1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, j} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, j} + \sum_{k=p+1}^m \delta_{r,k} \left[ {\bf C} \right]_{k, j} \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, j} + \sum_{k=p+1}^m \delta_{r,k} \left( 0 \right) \\ &=& \sum_{k=1}^p \delta_{r,k} \left[ {\bf C} \right]_{k, j} \\ &=& \sum_{k=1}^p \left( 0 \right) \left[ {\bf C} \right]_{k, j} \\ &=& 0 . \end{eqnarray*}
So row r is a totally zero row, contradicting the fact that it is the bottommost nonzero row of B. Hence, p ≥ r. By an entirely similar argument, reversing the roles of B and C, we would conclude that p ≤ r, and therefore p = r. Thus, combining the first three steps, we can say that D = P. In other words, B and C have the same pivot columns, in the same locations.

Fourth Step. In this final step, we will not argue by contradiction. Our intent is to determine the values of the δij. Notice that we can use the values of the di interchangeably for B and C. Here we go,

\begin{eqnarray*} 1 &=& \left[ {\bf B} \right]_{i, d_i} = \sum_{k=1}^m \delta_{i,k} \left[ {\bf C} \right]_{k, d_i} \\ &=& \delta_{i,i}\left[ {\bf C} \right]_{i, d_i} + \sum_{k=1, k\ne i}^m \, \delta_{i,k}\left[ {\bf C} \right]_{k, d_i} \\ &=& \delta_{i,i}\left( 1 \right) + \sum_{k=1, k\ne i}^m \, \delta_{i,k}\left[ {\bf C} \right]_{k, d_i} \\ &=& \delta_{i,i}\left( 1 \right) + \sum_{k=1, k\ne i}^m \, \delta_{i,k}\left( 0 \right) \\ &=& \delta_{i,i} , \end{eqnarray*}
and for ℓ ≠ i
\begin{eqnarray*} 0 &=& \left[ {\bf B} \right]_{i, d_{\ell}} = \sum_{k=1}^m \delta_{i,k} \left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \delta_{i,\ell} \left[ {\bf C} \right]_{\ell , d_{\ell}} + \sum_{k=1, k\ne \ell}^m \, \delta_{i,k}\left[ {\bf C} \right]_{k, d_{\ell}} \\ &=& \delta_{i,\ell} \left( 1 \right) + \sum_{k=1, k\ne \ell}^m \, \delta_{i,k} \left( 0 \right) \\ &=& \delta_{i,\ell} . \end{eqnarray*}
Finally, having determined the values of the δij, we can show that B = C. For 1 ≤ i ≤ m and 1 ≤ j ≤ n,
\begin{eqnarray*} \left[ {\bf B} \right]_{i, j} &=& \sum_{k=1}^m \delta_{i,k} \left[ {\bf C} \right]_{k, j} \\ &=& \delta_{i,i} \left[ {\bf C} \right]_{i, j} + \sum_{k=1, k\ne i}^m \delta_{i,k} \left[ {\bf C} \right]_{k, j} \\ &=& \left( 1 \right) \left[ {\bf C} \right]_{i j} + \sum_{k=1, k\ne i}^m \left( 0 \right) \left[ {\bf C} \right]_{k, j} \\ &=& \left[ {\bf C} \right]_{i j} . \end{eqnarray*}
So B and C have equal values in every entry, and so are the same matrix. ■
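Theorem 2 yields a handy test: since the rref is unique and row reduction leaves an rref matrix unchanged, a matrix is in reduced row echelon form exactly when RowReduce returns it as is. A one-line predicate (the name rrefQ is ours):
rrefQ[m_?MatrixQ] := RowReduce[m] === m
rrefQ[{{1, 0, 3}, {0, 1, 5}}]
True
rrefQ[{{1, 1, 3}, {0, 1, 5}}] (* echelon form, but not reduced *)
False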

input m, n and A
r := 0
for j := 1 to n
   i := r+1
   while i <= m and A[i,j] == 0
       i := i+1
   if i < m+1
       r := r+1
       swap rows i and r of A (row op 1)
       scale A[r,j] to a leading 1 (row op 2)
       for k := 1 to m, k <> r
           make A[k,j] zero (row op 3, employing row r)
output r and A	
Matlab code:

function sol = GaussJ(Ab)
% Gauss--Jordan elimination on the augmented matrix Ab (no pivoting)
[m, n] = size(Ab);
for i = 1:m
    Ab(i,:) = Ab(i,:)/Ab(i,i);               % normalize the pivot row
    for j = 1:m
        if j == i
            continue;
        end
        Ab(j,:) = Ab(j,:) - Ab(j,i)*Ab(i,:); % zero out column i in row j
    end
end
sol = Ab;
Mathematica code without swapping rows:
GaussJordan[A0_, n_] :=
Module[{A = A0},
  Print[MatrixForm[A]];
  For[p = 1, p <= n, p++,
    A[[p]] = A[[p]]/A[[p, p]];                (* normalize the pivot row *)
    For[i = 1, i <= n, i++,
      If[i != p,
        A[[i]] = A[[i]] - A[[i, p]] A[[p]]]]; (* eliminate column p from the other rows *)
    Print[MatrixForm[A]];];]
Mathematica code with swapping rows:
GaussJordan[A0_, n_] :=
Module[{A = A0},
  Print[MatrixForm[A]];
  For[p = 1, p <= n, p++,
    For[k = p + 1, k <= n, k++,
      If[Abs[A[[k, p]]] > Abs[A[[p, p]]],
        {A[[p]], A[[k]]} = {A[[k]], A[[p]]}]]; (* partial pivoting: swap in the row with the larger pivot candidate *)
    A[[p]] = A[[p]]/A[[p, p]];
    For[i = 1, i <= n, i++,
      If[i != p,
        A[[i]] = A[[i]] - A[[i, p]] A[[p]]]];
    Print[MatrixForm[A]];];
  Return[A];]
Another way is to use standard Mathematica code that checks whether A is a rectangular matrix and b is a vector:
GaussianElimination[A_?MatrixQ, b_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{A, b}]]
Example 1: Each of the following matrices is in reduced row echelon form; any real or complex number can be substituted for each entry marked *.
\[ \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} , \quad \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&* \\ 0&0&0&0 \end{bmatrix} , \quad \begin{bmatrix} 1&*&0&* \\ 0&0&1&* \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix} , \quad \begin{bmatrix} 0&1&*&0&0&0&*&*&0&* \\ 0&0&0&1&0&0&*&*&0&* \\ 0&0&0&0&1&0&*&*&0&* \\ 0&0&0&0&0&1&*&*&0&* \\ 0&0&0&0&0&0&0&0&1&* \end{bmatrix} . \]
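The first of these is the 4×4 identity matrix; it passes the rrefQ test defined above:
rrefQ[IdentityMatrix[4]]
True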
End of Example 1
Example 2: Solve by Gauss-Jordan elimination:
\[ \begin{split} x_1 + 3\,x_2 -2\,x_3 +2\,x_5 &=-3 , \\ 2\,x_1 + 6\,x_2 -5\,x_3 -2\,x_4 + 4\,x_5 -3\,x_6 &=3, \\ 5\,x_3 + 10\,x_4 + x_6 &= -3, \\ 2\,x_1 + 6\,x_2 + 8\, x_4 + 4\, x_5 + x_6 &= -9 . \end{split} \]
The augmented matrix for the system becomes
\[ \begin{bmatrix} 1&3&-2&0&2&0&-3 \\ 2&6&-5&-2&4&-3&3 \\ 0&0&5&10&0&1&-3 \\ 2&6&0&8&4&1&-9 \end{bmatrix} . \]
A = {{1, 3, -2, 0, 2, 0}, {2, 6, -5, -2, 4, -3}, {0, 0, 5, 10, 0, 1}, {2, 6, 0, 8, 4, 1}}
b = {-3, 3, -3, -9}
B=Insert[A // Transpose, b, 7] // Transpose
Adding -2 times the first row to the second and fourth rows gives
B[[2]] = B[[2]] - 2*B[[1]]
B[[4]] = B[[4]] - 2*B[[1]]
B // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&-1&-2&0&-3&9 \\ 0&0&5&10&0&1&-3 \\ 0&0&4&8&0&1&-3 \end{pmatrix} \)
Multiplying the second row by 5 and then adding to the third row, and then multiplying the second row by 4 and adding to the last one gives
B[[3]] = B[[3]] + 5*B[[2]]
B[[4]] = B[[4]] + 4*B[[2]]
B // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&-1&-2&0&-3&9 \\ 0&0&0&0&0&-14&42 \\ 0&0&0&0&0&-11&33 \end{pmatrix} \)
Multiply the second row by -1:
B = B*{1, -1, 1, 1}
Divide the third row by -14 and the last row by -11; this yields
B = B*{1, 1, -1/14, -1/11}
B // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&1&2&0&3&-9 \\ 0&0&0&0&0&1&-3 \\ 0&0&0&0&0&1&-3 \end{pmatrix} \)
Subtract the third row from the fourth one:
B[[4]] = B[[4]] - B[[3]]
B // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&1&2&0&3&-9 \\ 0&0&0&0&0&1&-3 \\ 0&0&0&0&0&0&0 \end{pmatrix} \)
Multiplying the third row by -3 and adding to the second row gives
B[[2]] = B[[2]] - 3*B[[3]]
B // MatrixForm
\( \begin{pmatrix} 1&3&-2&0&2&0&-3 \\ 0&0&1&2&0&0&0 \\ 0&0&0&0&0&1&-3 \\ 0&0&0&0&0&0&0 \end{pmatrix} \)
Multiplying the second row by 2 and adding to the first row gives
B[[1]] = B[[1]] + 2*B[[2]]
B // MatrixForm
\[ \begin{pmatrix} \color{red}{1}&3&0&4&2&0&-3 \\ 0&0&\color{red}{1}&2&0&0&0 \\ 0&0&0&0&0&\color{red}{1}&-3 \\ 0&0&0&0&0&0&0 \end{pmatrix} . \]
As we see, the given matrix A is reduced to the rref form that contains three pivots in columns 1, 3, and 6. Therefore, the corresponding system of linear equations has three leading variables x1, x3, and x6, and three free variables x2, x4, x5. Solving for the leading variables (marked in red), we obtain
\[ \begin{split} x_1 &= -3-3\,x_2 -4\,x_4 -2\,x_5 , \\ x_3 &= -2\,x_4 , \\ x_6 &= -3 , \end{split} \]
where x2, x4, and x5 are free variables. Now we use Mathematica to convert the coefficient matrix into its rref form:
A = {{1, 3, -2, 0, 2, 0}, {2, 6, -5, -2, 4, -3}, {0, 0, 5, 10, 0, 1}, {2, 6, 0, 8, 4, 1}}
RowReduce[A]
{{1, 3, 0, 4, 2, 0}, {0, 0, 1, 2, 0, 0}, {0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0}}
If we swap rows 1 and 3 of A, and then use the RowReduce command, the output will be the same.
AM = {A[[3]], A[[2]], A[[1]], A[[4]]}
RowReduce[AM]
{{1, 3, 0, 4, 2, 0}, {0, 0, 1, 2, 0, 0}, {0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0}}
It is clear that the rref matrix is independent of the order of the equations, or equivalently, of the order of the rows in the augmented matrix associated with the linear system. Now we solve the given system of linear equations using the standard Solve command:
Solve[{x1 + 3*x2 - 2*x3 + 2*x5 == -3, 2*x1 + 6*x2 - 5*x3 - 2*x4 + 4*x5 - 3*x6 == 3, 5*x3 + 10*x4 + x6 == -3, 2*x1 + 6*x2 + 8*x4 + 4*x5 + x6 == -9}, {x1, x3, x6}]
{{x1 -> -3 - 3 x2 - 4 x4 - 2 x5, x3 -> -2 x4, x6 -> -3}}
However, if we ask Solve for a set of variables that includes a free one while omitting a leading variable (here x6), the output is empty:
Solve[{x1 + 3*x2 - 2*x3 + 2*x5 == -3, 2*x1 + 6*x2 - 5*x3 - 2*x4 + 4*x5 - 3*x6 == 3, 5*x3 + 10*x4 + x6 == -3, 2*x1 + 6*x2 + 8*x4 + 4*x5 + x6 == -9}, {x1, x3, x4}]
{}
In this situation, Solve cannot express the chosen variables in terms of the remaining ones.
End of Example 2

A system of linear equations is said to be homogeneous if the constant terms are all zeroes; that is, the system has the form

\[ \begin{split} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0, \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0, \\ \quad\vdots \qquad \vdots\qquad & \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0. \end{split} \]
Every homogeneous system of linear equations is consistent because all such systems have a trivial solution: x1 = x2 = ... = xn = 0. If there are other solutions, they are called nontrivial solutions. So there are only two possibilities for homogeneous systems of equations:
  • The system has only the trivial solution.
  • The system has infinitely many solutions in addition to the trivial solution.
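Mathematica's built-in NullSpace command distinguishes the two cases at a glance; an empty list means that only the trivial solution exists:
NullSpace[{{1, 0}, {0, 1}}]
{}
NullSpace[{{1, 2}, {2, 4}}]
{{-2, 1}}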
Example 3: Use Gauss-Jordan elimination to solve the homogeneous linear system:
\[ \begin{split} x_1 -2\,x_3 +2\,x_4 &=0 , \\ 2\,x_1 - x_2 - x_3 +3\,x_4 &=0, \\ 3\,x_1 + 5\,x_2 -4\, x_3 + x_4 &= 0, \\ x_1 - x_2 + x_3 + x_4 &=0 . \end{split} \]
The augmented matrix for the system becomes
\[ \begin{bmatrix} 1&0&-2&2&0 \\ 2&-1&-1&3&0 \\ 3&5&-4&1&0 \\ 1&-1&1&1&0 \end{bmatrix} . \]
A = {{1, 0, -2, 2}, {2, -1, -1, 3}, {3, 5, -4, 1}, {1, -1, 1, 1}}
b = {0, 0, 0, 0}
B = Insert[A // Transpose, b, 5] // Transpose
We apply the standard Mathematica command to obtain its rref form:
RowReduce[B]
{{1, 0, 0, 14/17, 0}, {0, 1, 0, -(13/17), 0}, {0, 0, 1, -(10/17), 0}, {0, 0, 0, 0, 0}}
% // MatrixForm
\( \begin{pmatrix} 1&0&0&\frac{14}{17}&0 \\ 0&1&0&-\frac{13}{17}&0 \\ 0&0&1&-\frac{10}{17}&0 \\ 0&0&0&0&0 \end{pmatrix} \)
The corresponding system of equations is
\[ \begin{split} x_1 + \frac{14}{17}\, x_4 &=0 , \\ x_2 - \frac{13}{17}\, x_4 &=0 , \\ x_3 - \frac{10}{17}\, x_4 &=0 . \end{split} \]
If we assign the free variable x4 an arbitrary value, say 17t, we can express the solution set parametrically as
\[ x_1 = - 14\,t , \qquad x_2 = 13\,t, \qquad x_3 = 10\, t , \qquad x_4 = 17\,t . \]
Note that the trivial solution results when t = 0.
End of Example 3
Example 4: Consider the system of linear equations:
\begin{align*} x_1 + 2\,x_2 - x_3 &= 6 , \\ 2\,x_1 - x_2 + 3\, x_3 &= -3 , \\ 3\,x_1 -3\, x_2 - 4\, x_3 &= 1 . \end{align*}
We write the given system in augmented matrix form:
\[ {\bf A} = \begin{bmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4&1 \end{bmatrix} . \]
The first elimination step is to eliminate the elements a21 = 2 and a31 = 3 in the original matrix A = [aij] by subtracting the multiples m21 = 2 and m31 = 3 of row 1 from rows 2 and 3, respectively, which gives
\[ {\bf A} = \begin{bmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4&1 \end{bmatrix}_{-2\times R_1 + R_2 \mapsto R_2} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 3&-3&-4&1 \end{bmatrix}_{-3\times R_1 + R_3 \mapsto R_3} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&-9&-1&-17 \end{bmatrix} . \]
The second elimination step is to eliminate the element a32 = -9 by subtracting the second row times (9/5) from the third row:
\[ {\bf A} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&-9&-1&-17 \end{bmatrix}_{-(9/5)\times R_2 + R_3 \mapsto R_3} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&0&-10&10 \end{bmatrix} . \]
This completes the forward elimination stage required by the Gaussian elimination. To obtain the reduced row echelon form, we first need to divide the second row and the third row by -5 and -10, respectively, in order to see leading 1's on the main diagonal:
\[ {\bf A} = \begin{bmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4&1 \end{bmatrix} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix} . \]
The first elimination step in the backward stage is to eliminate the element a12 = 2 in the first row by subtracting 2 times the second row from the first one:
\[ {\bf A} \sim \begin{bmatrix} 1&2&-1&6 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix}_{-2\times R_2 + R_1 \mapsto R_1} \sim \begin{bmatrix} 1&0&1&0 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix}. \]
The second elimination step in the backward stage is to eliminate the elements a13 = 1 and a23 = -1. So we add the last row to the second one and subtract it from the first row. This yields
\[ {\bf A} \sim \begin{bmatrix} 1&0&1&0 \\ 0&1&-1&3 \\ 0&0&1&-1 \end{bmatrix}_{R_3 + R_2 \mapsto R_2} \sim \begin{bmatrix} 1&0&1&0 \\ 0&1&0&2 \\ 0&0&1&-1 \end{bmatrix}_{-R_3 + R_1 \mapsto R_1} \sim \begin{bmatrix} \color{red}{1}&0&0&1 \\ 0&\color{red}{1}&0&2 \\ 0&0&\color{red}{1}&-1 \end{bmatrix} . \]
We check with Mathematica:
M = {{1, 2, -1, 6}, {2, -1, 3, -3}, {3, -3, -4, 1}}
GaussJordan[M, 3]
\( \begin{pmatrix} 1&2&-1&6 \\ 2&-1&3&-3 \\ 3&-3&-4& 1 \end{pmatrix} \)
\( \begin{pmatrix} 1&2&-1&6 \\ 0&-5&5&-15 \\ 0&-9&-1& -17 \end{pmatrix} \)
\( \begin{pmatrix} 1&0&1&0 \\ 0&1&-1&3 \\ 0&0&-10& 10 \end{pmatrix} \)
\( \begin{pmatrix} 1&0&0&1 \\ 0&1&0&2 \\ 0&0&1& -1 \end{pmatrix} \)
Or we can use the standard Mathematica command:
A = {{1, 2, -1}, {2, -1, 3}, {3, -3, -4}}
b = {6, -3, 1}
LinearSolve[A, b]
{1, 2, -1}
The same output is obtained with the reduced row echelon form:
GaussianElimination[A_?MatrixQ, b_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{A, b}]]
A = {{1, 2, -1}, {2, -1, 3}, {3, -3, -4}}
b = {6, -3, 1}
GaussianElimination[A, b]
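{1, 2, -1}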
This completes the Gauss-Jordan elimination procedure. Obviously, the original set of equations has been transformed to a diagonal form. Now expressing the set in algebraic form leads to
\[ x_1 =1, \quad x_2 =2 , \quad x_3 =-1 , \]
which is the required solution of the given system.
End of Example 4
Example 5: Consider the system of linear equations
\begin{align*} x_1 + 2\,x_2 + x_3 - x_4 &= 5 , \\ 3\,x_1 + 2\,x_2 + 4\,x_3 + 4\,x_4 &= -17, \\ 4\, x_1 + 4\,x_2 + 3\,x_3 + 4\, x_4 &= -2, \\ 2\,x_1 + x_3 + 5\, x_4 &= -10 . \end{align*}
This system can be represented by the coefficient matrix A and right-hand side vector b, as follows:
\[ {\bf A} = \begin{pmatrix} 1&2&1&-1 \\ 3&2&4&4 \\ 4&4&3&4 \\ 2&0&1&5 \end{pmatrix} , \quad {\bf b} = \begin{bmatrix} 5 \\ -17 \\ -2 \\ -10 \end{bmatrix}. \]
To perform row operations to reduce this system to upper triangular form, we define the augmented matrix
\[ {\bf B} = \left[ {\bf A} \ {\bf b} \right] = \begin{bmatrix} 1&2&1&-1&5 \\ 3&2&4&4&-17 \\ 4&4&3&4&-2 \\ 2&0&1&5&-10 \end{bmatrix}. \]
A = {{1, 2, 1, -1}, {3, 2, 4, 4}, {4, 4, 3, 4}, {2, 0, 1, 5}}
b = {5, -17, -2, -10}
B = Insert[A // Transpose, b, 5] // Transpose
Of course, we can use the standard Mathematica command to determine its rref form:
RowReduce[B]
% // MatrixForm
{{1, 0, 0, 0, 3}, {0, 1, 0, 0, 3}, {0, 0, 1, 0, -6}, {0, 0, 0, 1, -2}}
\( \begin{pmatrix} \color{red}{1}&0&0&0&3 \\ 0&\color{red}{1}&0&0&3 \\ 0&0&\color{red}{1}&0&-6 \\ 0&0&0&\color{red}{1}&-2 \end{pmatrix} \)
We can also solve the system with Solve or with the inverse matrix:
Solve[{x1 + 2 x2 + x3 - x4 == 5, 3 x1 + 2 x2 + 4 x3 + 4 x4 == -17, 4 x1 + 4 x2 + 3 x3 + 4 x4 == -2, 2 x1 + x3 + 5 x4 == -10}, {x1, x2, x3, x4}]
{{x1 -> 3, x2 -> 3, x3 -> -6, x4 -> -2}}
Inverse[A] . b
{3, 3, -6, -2}
All three methods (Solve, Inverse, and RowReduce) agree on the final answer. We can obtain the answer with one Mathematica command:
A = {{1, 2, 1, -1}, {3, 2, 4, 4}, {4, 4, 3, 4}, {2, 0, 1, 5}}
b = {{5}, {-17}, {-2}, {-10}}
RowReduce[Join[A, b, 2]] // MatrixForm
We first define B(1) to be the original augmented matrix. Then, we denote by B(2) the result of the first elementary row operation, which entails subtracting 3 times the first row from the second in order to eliminate x1 from the second equation:
\[ {\bf B}^{(2)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 4&4&3&4&-2 \\ 2&0&1&5&-10 \end{bmatrix}. \]
Next, we eliminate x1 from the third equation by subtracting 4 times the first row from the third:
\[ {\bf B}^{(3)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&-4&-1&8&-22 \\ 2&0&1&5&-10 \end{bmatrix}. \]
Then, we complete the elimination of x1 by subtracting 2 times the first row from the fourth:
\[ {\bf B}^{(4)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&-4&-1&8&-22 \\ 0&-4&-1&7&-20 \end{bmatrix}. \]
We now need to eliminate x2 from the third and fourth equations. This is accomplished by subtracting the second row from the third, which yields
\[ {\bf B}^{(5)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&0&-2&1&10 \\ 0&-4&-1&7&-20 \end{bmatrix}, \]
and the fourth, which yields
\[ {\bf B}^{(6)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&0&-2&1&10 \\ 0&0&-2&0&12 \end{bmatrix}. \]
Finally, we subtract the third row from the fourth to obtain the augmented matrix of an upper-triangular system,
\[ {\bf B}^{(7)} = \begin{bmatrix} 1&2&1&-1&5 \\ 0&-4&1&7&-32 \\ 0&0&-2&1&10 \\ 0&0&0&-1&2 \end{bmatrix}. \]
Back substitution now gives x4 = -2, x3 = -6, x2 = 3, and x1 = 3, in agreement with the rref computed above.
End of Example 5

Naïve Gaussian Elimination Method

The naïve Gaussian Elimination method includes two stages:

  • Forward Elimination of Unknowns: the first unknown is eliminated from every equation below the first, then the second unknown is eliminated from every equation below the second, and so on, until the system is reduced to an upper triangular form in which the last equation contains only one unknown.
  • Back Substitution: starting from the last equation, each of the unknowns is found.

We demonstrate how the method works in the following example.

Example 6: Consider four equations with four unknowns. First, we enter the data.

n = 4 (* number of equations *)
A = Table[1 + 10^i*j^2 + j*Floor[2.5^(i + j)], {i, 0, n - 1}, {j, 0, n - 1}]; A // MatrixForm
\[ {\bf A} = \left[ \begin{array}{cccc} 1& 4& 17& 55 \\ 1& 17& 71& 208 \\ 1& 116& 479& 1192 \\ 1& 1040& 4195& 9733\\ \end{array} \right] . \]
Then we define the right-hand side vector:
b = Table[137.04 + 154.57*i, {i, 0, n - 1}]; b // MatrixForm
\[ {\bf b} = \left[ \begin{array}{ccc} 137.04\\ 291.61 \\ 446.18\\ 600.75 \\ \end{array} \right] . \]
Remember that we write the vector b in column form, but Mathematica keeps its entries as a list. Next, we build the augmented matrix from A and b:
Aug = Transpose[Append[Transpose[A], b]]; Aug // MatrixForm
Now we perform forward elimination that consists of (n-1) steps. In each step k, the coefficient of the kth unknown will be zeroed from every subsequent equation that follows the kth row. For example, in step 2 (i.e., k = 2), the coefficient of x2 will be zeroed from rows 3..n. With each step that is conducted, a new matrix is generated until the coefficient matrix is transformed to an upper triangular matrix. The following procedure calculates the upper triangular matrix produced for each step k.
Print["Start", Aug // MatrixForm]; Print[" "]
For[k = 1, k <= n - 1, k++,
For[i = k + 1, i <= n, i++,
multiplier = N[Aug[[i, k]]/Aug[[k, k]]];
For[j = k, j <= n + 1, j++,
Aug[[i, j]] = Aug[[i, j]] - multiplier*Aug[[k, j]]]];
Print["Step=", k, " ", Aug // MatrixForm]; Print[" "]]
Start \( \left( \begin{array}{cccc} 1 &4 &17 &55 &137.04 \\ 1& 17& 71& 208& 291.61 \\ 1 &116 &479 &1192 &446.18 \\ 1 &1040 &4195 &9733 &600.75 \end{array} \right) \)
Step=1 \( \left( \begin{array}{cccc} 1& 4& 17& 55& 137.04 \\ 0.& 13.& 54. &153. &154.57 \\ 0.& 112.& 462.& 1137.& 309.14 \\ 0.& 1036.& 4178.& 9678.& 463.71 \end{array} \right) \)
Step=2 \( \left( \begin{array}{cccc} 1& 4& 17& 55& 137.04 \\ 0.& 13.& 54.& 153.& 154.57 \\ 0.& -1.42109 \times 10^{-14} & -3.23077& -181.154& -1022.54\\ 0.& 0.& -125.385& -2514.92& -11854.3 \end{array} \right) \)
Step=3 \( \left( \begin{array}{cccc} 1& 4& 17& 55& 137.04 \\ 0.& 13.& 54.& 153.& 154.57 \\ 0.& -1.42109\times 10^{-14} & -3.23077& -181.154& -1022.54 \\ 0.& 0.& 0.& 4515.57& 27830. \end{array} \right) \)
The new upper triangular coefficient matrix, A1, can be extracted from the final augmented matrix Aug:
A1 = Take[Aug, {1, n}, {1, n}]; MatrixForm[A1]
\( \left( \begin{array}{cccc} 1& 4& 17& 55 \\ 0.& 13.& 54.& 153. \\ 0.& -1.42109\times 10^{-14} & -3.23077& -181.154 \\ 0.& 0.& 0.& 4515.57 \end{array} \right) \)
Notice that the final row, n, has only one unknown to be solved for. The new right-hand side array/vector, b1, is
b1 = Take[Aug, {1, n}, {n + 1, n + 1}]; MatrixForm[b1]
\( \left( \begin{array}{cccc} 137.04 \\ 154.57 \\ -1022.54 \\ 27830. \end{array} \right) \)
This is the end of the forward elimination stage. The new upper triangular coefficient matrix A1 and right hand side array b1 permit solving for the solution vector using backward substitution. It begins with solving the last equation as it has only one unknown:
\[ x_n = \frac{{\mbox rhs}_n}{a_{n,n}} = \frac{27830}{4515.57} \approx 6.16311 . \]
The remaining equations can be solved by using the following formula:
\[ x_i = \frac{1}{a_{i,i}} \left[ b_i - \sum_{j=i+1}^n a_{i,j} x_j \right] . \]

First, we define the vector x:

X = Array[x, {n, 1}];
Solving for the nth equation as it has only one unknown:
X[[n]] = b1[[n]]/A1[[n, n]]
Solving for the remaining n-1 unknowns working backwards from the (n-1)th equation to the first equation:
Do[summ = Sum[A1[[i, j]]*X[[j]], {j, i + 1, n}]; X[[i]] = (b1[[i]] - summ)/A1[[i, i]], {i, n - 1, 1, -1}]
The solution vector X is
X // MatrixForm
\( \left( \begin{array}{cccc} 51.8315 \\ 60.1233 \\ -29.0739 \\ 6.16311 \end{array} \right) \)
Using Mathematica's built-in tools, the solution is given below.
exactsoln = LinearSolve[A, b]; MatrixForm[exactsoln]
which gives the same solution.
End of Example 6

Are there any pitfalls of the Naïve Gauss elimination method?

Yes, there are two pitfalls of the Naïve Gauss elimination method.
Division by zero: it is possible for division by zero to occur at any of the n - 1 steps of forward elimination, since a pivot element may be zero and the naïve method never swaps rows.

Round-off error: the naïve Gauss elimination method is prone to round-off errors. This matters when there are large numbers of equations, because errors propagate. Also, subtraction of nearly equal numbers may cause a large loss of significant digits. ■
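A minimal numerical illustration of both pitfalls (our own toy 2×2 system, whose true solution is x₁ ≈ 1, x₂ ≈ 1):
(* zero pivot: with A = {{0., 1.}, {1., 1.}}, the very first multiplier A[[2,1]]/A[[1,1]] divides by zero *)
A = {{1.*^-20, 1.}, {1., 1.}}; b = {1., 2.};  (* tiny, but nonzero, pivot *)
m = A[[2, 1]]/A[[1, 1]];             (* huge multiplier 10^20 *)
row2 = A[[2]] - m*A[[1]];            (* {0., -1.*10^20}: the original 1 is lost to round-off *)
x2 = (b[[2]] - m*b[[1]])/row2[[2]]   (* 1. *)
x1 = (b[[1]] - x2)/A[[1, 1]]         (* 0. instead of the true value near 1 *)
Swapping the two rows first (partial pivoting) cures both problems.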

Algorithm: Homogeneous Equations

Most of the work required to solve an arbitrary system in reduced row echelon form arises in solving the related system we get by setting all right-hand sides to zero.
A linear system is called homogeneous if the right-hand side of each equation is zero. Otherwise, the system is inhomogeneous. Given an inhomogeneous system A x = b, we call A x = 0 its homogeneous version. Both systems have the same coefficient matrix A, but their augmented matrices differ in the last column.
Observe that a homogeneous system A x = 0 always has at least one solution, namely the trivial one x = 0.
Suppose the coefficient matrix of a system is in reduced row echelon form. If column j contains a pivot, we call it a pivot column, and we call xj a pivot variable. Otherwise we say the column, or the variable, is free.
Each column of the coefficient matrix, and each variable, is one or the other: pivot or free.
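In Mathematica, the pivot columns can be read off with a short helper (the name pivotColumns is ours; it uses the fact that in the rref the leading 1 is the first entry of its row equal to 1):
pivotColumns[A_?MatrixQ] := Flatten[FirstPosition[#, 1] & /@ DeleteCases[RowReduce[A], {0 ..}]]
pivotColumns[{{1, 3, -2, 0, 2, 0}, {2, 6, -5, -2, 4, -3}, {0, 0, 5, 10, 0, 1}, {2, 6, 0, 8, 4, 1}}]
{1, 3, 6}
(the coefficient matrix of Example 2, whose leading variables were indeed x1, x3, and x6).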

We now give a simple five-step procedure for finding all solutions to a homogeneous system in reduced row echelon form. After listing the steps, we illustrate with some typical examples.

  1. Identify the free columns—let’s say there are k of them—and delete (or cross out) the rows of zeros.
  2. Write down k blank generating vectors h₁, h₂, … , hk in 𝔽n×1. (Write them vertically to make Step 4 a bit easier.)
  3. Set all free variables equal to 1 or 0: For each j = 1, 2, … , k, select hj, and set the jth free variable in it equal to 1. Set all other free variables in hj equal to zero.
  4. Fill in the pivot variables. For each j = 1, 2, … , k, identify the jth free column in the matrix. Set the pivot variables of hj equal to the opposites of the entries in that column, in order of their appearance, top to bottom.
  5. Express the complete solution set as all linear combinations of the hj’s: \[ {\bf x}_p = {\bf H}\,{\bf s} = s_1 {\bf h}_1 + s_2 {\bf h}_2 + \cdots + s_k {\bf h}_k , \] where H = [h₁, h₂, … , hk] is the matrix having the hi’s as columns, while allowing s = (s₁, s₂, … , sk) to be any vector in 𝔽k.
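The five steps are easy to automate. Below is a minimal sketch (the helper name homogeneousGenerators is ours; the argument R must already be in reduced row echelon form), whose columns can be checked against the built-in NullSpace command:
homogeneousGenerators[R_?MatrixQ] := Module[{n = Last[Dimensions[R]], rows, pivots, free, H},
  rows = DeleteCases[R, {0 ..}];                    (* Step 1: drop the zero rows *)
  pivots = Flatten[FirstPosition[#, 1] & /@ rows];  (* pivot columns *)
  free = Complement[Range[n], pivots];              (* free columns; k = Length[free] *)
  H = ConstantArray[0, {n, Length[free]}];          (* Step 2: k blank generators *)
  MapIndexed[(H[[#1, First[#2]]] = 1) &, free];     (* Step 3: jth free variable = 1 *)
  Do[H[[pivots[[i]], j]] = -rows[[i, free[[j]]]],   (* Step 4: opposites of the free-column entries *)
    {j, Length[free]}, {i, Length[pivots]}];
  H]                                                (* Step 5: every solution is H . s *)
Applied to the matrix B of Example 7 below, it returns exactly the matrix H constructed there by hand.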
   
Example 7: Let us consider the homogeneous system with coefficient matrix \[ {\bf B} = \begin{bmatrix} 0&1&2&0&3 \\ 0&0&0&1&4 \\ 0&0&0&0&0 \end{bmatrix} . \] The matrix is in reduced row echelon form, so we can apply the algorithm.

Step 1: Identify the free variables and delete the rows of zeros.

Columns 2 and 4 have pivots, so x₁, x₃, and x₅ are free. Three free variables means k = 3. There is one row of zeros, so we discard it. Our matrix now looks like this: \[ \begin{bmatrix} 0&1&2&0&3 \\ 0&0&0&1&4 \end{bmatrix} . \]

Step 2: Set up k “blank” vertical generating vectors h₁, h₂, h₃ ∈ ℝ5. So we get three generators with blank entries, entries represented here by p’s (for pivot variables) and f’s (free variables): \[ {\bf h}_1 = \left[ \begin{array}{c} f \\ p \\ f \\ p \\ f \end{array} \right] , \qquad {\bf h}_2 = \left[ \begin{array}{c} f \\ p \\ f \\ p \\ f \end{array} \right] , \qquad {\bf h}_3 = \left[ \begin{array}{c} f \\ p \\ f \\ p \\ f \end{array} \right] . \]

Step 3: Fill in the free variables. Taking each generating vector h₁, h₂, h₃ in turn, set the jth free variable equal to 1 and all other free variables equal to zero. That is, the first free variable (x₁) in h₁ is a 1, the second free variable (x₃) in h₂ is a 1, the third free variable (x₅) in h₃ is a 1. All other free variables in all three generators are zeros. The pivot variables x₂ and x₄ remain undetermined. This fills in the entries for x₁, x₃, and x₅ in each generator: \[ {\bf h}_1 = \left[ \begin{array}{c} 1 \\ p \\ 0 \\ p \\ 0 \end{array} \right] , \qquad {\bf h}_2 = \left[ \begin{array}{c} 0 \\ p \\ 1 \\ p \\ 0 \end{array} \right] , \qquad {\bf h}_3 = \left[ \begin{array}{c} 0 \\ p \\ 0 \\ p \\ 1 \end{array} \right] . \]

Step 4: Set the pivot variables of each h₁, h₂, h₃ equal to the opposites of the values in the jth free variable column of the non-zero rows of the matrix—in order of their appearance, top to bottom.

We start with j = 1: The first free variable is x₁, so we take the entries in the two remaining rows of column 1. In this example both those entries are zero. We then use their opposites (still zero) to set the pivot variables x₂ and x₄ of h₁. We thus get \[ {\bf h}_1 = \left[ \begin{array}{c} \phantom{-}1 \\ -0 \\ \phantom{-}0 \\ -0 \\ \phantom{-}0 \end{array} \right] . \] (The minus signs are irrelevant here since −0 = 0, but they do show which entries we just filled in.) It’s easy to check that h1 now solves the homogeneous system—just dot it with each row of the coefficient matrix: We get zero every time.

Now set j = 2. The second free variable is x₃, so we take the entries in the two rows of column three: 2 and 0 . We then put minus 2 and minus 0 in the pivot positions of the second generator to obtain \[ {\bf h}_2 = \left[ \begin{array}{c} \phantom{-}0 \\ -2 \\ \phantom{-}1 \\ -0 \\ \phantom{-}0 \end{array} \right] \]

Finally we complete h₃. The third free variable is x₅, so we take the entries in column 5, namely 3 and 4, flip their signs to get −3 and −4, and put these in the pivot positions to have \[ {\bf h}_3 = \left[ \begin{array}{c} 0 \\ -3 \\ 0 \\ -4 \\ 1 \end{array} \right] . \]

Step 5: Express the homogeneous solution as all linear combinations of the generating vectors: \[ {\bf x}_h = c_1 {\bf h}_1 + c_2 {\bf h}_2 + c_3 {\bf h}_3 = {\bf H}\,{\bf c}, \qquad c_1 , c_2 , c_3 \in \mathbb{R} . \] Here, the full homogeneous solution set is expressed via matrix \[ {\bf H} = \begin{bmatrix} 1& 0& 0 \\ 0& -2& -3 \\ 0& 1& 0 \\ 0& 0& -4 \\ 0& 0& 1 \end{bmatrix} . \] We check with Mathematica

B = {{0, 1, 2, 0, 3}, {0, 0, 0, 1, 4}, {0, 0, 0, 0, 0}};
H = {{1, 0, 0}, {0, -2, -3}, {0, 1, 0}, {0, 0, -4}, {0, 0, 1}};
B . H
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
   ■
End of Example 7
   

Algorithm: Inhomogeneous Equations

Finding a particular solution to a system in reduced row echelon form is far simpler than finding the full homogeneous solution. Two quick steps are all it takes: To produce a particular solution xp of a system in reduced row echelon form, write down a blank vector xp ∈ 𝔽n×1, and then fill it in by
  • setting all free variables equal to zero;
  • setting each pivot variable equal to the rightmost entry in the row of that pivot.
Theorem 3: A pivot in the last column of the RREF of the augmented matrix means the system has no solution.
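This recipe, together with the consistency test of Theorem 3, fits in a few lines of Mathematica (the helper name particularSolution is ours; the argument must be an augmented matrix already in rref):
particularSolution[Baug_?MatrixQ] := Module[{R = DeleteCases[Baug, {0 ..}], n, pivots, xp},
  n = Last[Dimensions[Baug]] - 1;                     (* number of unknowns *)
  pivots = Flatten[FirstPosition[#, 1] & /@ R];       (* leading 1 of each nonzero row *)
  If[MemberQ[pivots, n + 1], Return["inconsistent"]]; (* Theorem 3: pivot in the last column *)
  xp = ConstantArray[0, n];                           (* free variables set to zero *)
  Do[xp[[pivots[[i]]]] = R[[i, -1]], {i, Length[R]}]; (* pivot variable = rightmost entry *)
  xp]
particularSolution[{{1, 2, 0, 0, 3}, {0, 0, 0, 1, 4}}]
{3, 0, 0, 4}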
We demonstrate with some examples.    
Example 8: Let us find a particular solution of the system with this augmented matrix: \[ {\bf B} = \left[ \begin{array}{cccc|c} 1&2&0&0&3 \\ 0&0&0&1&4 \end{array} \right] . \] The matrix B does have reduced row echelon form (the procedure doesn’t apply otherwise!), and there are four variables, so we write down a blank vector xp ∈ 𝔽4×1: \[ {\bf x}_p^{\mathrm T} = \left[ \underline{ }, \ \underline{ }, \ \underline{ }, \ \underline{ } \right] . \] The free variables are x2 and x3, and the first instruction has us set those equal to zero: \[ {\bf x}_p^{\mathrm T} = \left[ \underline{ }, \ 0, \ 0, \ \underline{ } \right] . \] We fill in the pivot slots using the last column of the augmented matrix. We find the x₁ pivot in row 1, where the last entry is 3, so we set x₁ = 3. We find the x₄ pivot in row 2, where the last entry is 4, so we set x₄ = 4, and that’s it: we have a particular solution \[ {\bf x}_p^{\mathrm T} = \left[ 3, \ 0, \ 0, \ 4 \right] . \] We check the answer with Mathematica:
B = {{1, 2, 0, 0}, {0, 0, 0, 1}};
B . {3, 0, 0, 4}
{3, 4}
   ■
End of Example 8
   
Example 9: Let us consider the system with augmented matrix \[ {\bf B} = \left[ \begin{array}{cccc|c} 1&0&0&2&3 \\ 0&1&0&4&5 \\ 0&0&1&6&7 \end{array} \right] . \] The only free variable is x₄, so we put a zero in that slot. The right-hand sides of the pivot rows are 3, 5, and 7, so we set the corresponding pivot variables equal to those values, and we’re done: \[ {\bf x}_p^{\mathrm T} = \left[ 3,\ 5, \ 7,\ 0 \right] . \] Again, we check with Mathematica:
B = {{1, 0, 0, 2}, {0, 1, 0, 4}, {0, 0, 1, 6}};
B . {3, 5, 7, 0}
{3, 5, 7}
   ■
End of Example 9

 

  1. Determine which matrices are in reduced echelon form and which others are only in echelon form. \[ (a)\ \begin{bmatrix} 1&0&0&0 \\ 0&1&0&3 \\0&0&1&6 \end{bmatrix}, \qquad (b) \ \begin{bmatrix} 1&0&1&2&0&4 \\ 0&1&-2&2&0&3 \\ 0&0&0&0&1&5 \end{bmatrix}, \qquad (c) \ \begin{bmatrix} 1&0&1&0 \\ 0&1&1&0 \\ 0&0&0&1 \end{bmatrix}, \] \[ (d)\ \begin{bmatrix} 1&0&0&0 \\ 0&1&1&0 \\ 0&0&0&0 \\ 0&0&0&1 \end{bmatrix}, \qquad (e) \ \begin{bmatrix} 1&2&0&1&4 \\ 0&2&0&3&3 \\ 0&0&0&2&2 \\ 0&0&0&0&1 \end{bmatrix} , \qquad (f) \ \begin{bmatrix} 1&1&0&0 \\ 0&1&1&0 \\ 0&0&1&1 \end{bmatrix} . \]
  2. Consider the two 3×4 matrices below \[ {\bf B} = \begin{bmatrix} 1&3&-2&2 \\ -1&-2&-1&-1 \\ -1&-5&8&-3 \end{bmatrix}, \quad {\bf C} = \begin{bmatrix} 1&2&1&2 \\ 1&1&4&0 \\ -1&-1&-4&1 \end{bmatrix} . \] Row-reduce each matrix and determine that the reduced row-echelon forms of B and C are identical. From this argue that B and C are row-equivalent.
  3. Perform Gauss--Jordan elimination on the following systems of equations, but without partial pivoting (without swapping rows).
    (a) \( \begin{align*} 2\,x + 3\,y + 4\,z + 2\,u &= 5, \\ x+y+z &=1 , \\ 2\,x+2\,y + 3\,z + u &=4 , \\ 6\,x + 9\,y + 12\,z + 6\,u &= 15 ; \end{align*} \) (b) \( \begin{align*} 4\,x + 6\,y + z - 7\,u &= 8, \\ x+y+z+u &=3 , \\ 2\,x+2\,y + 2\,z -3\, u &=5 , \\ -x + y + z + u &= 6 ; \end{align*} \)
    (c) \( \begin{align*} -2\,x + 6\,y + 3\,z &= 4, \\ x-3\,y+z &= 6 , \\ 2\,x+5\,y + 6\,z &=1 ; \end{align*} \) (d) \( \begin{align*} 2\,x + 3\,y - 5\,z &= 7, \\ 3\,x +2\,y+7\,z &= 8 , \\ 4\,x+6\,y + 2\,z &=1 ; \end{align*} \)
    (e) \( \begin{align*} -2\,x + 4\,y - 8\,z -3\,u + v &= 2, \\ 3\,x +4\,y - 7\,z -u -5\,v &= 1 , \\ 5\,x+4\,z + u+v &=-1 ; \end{align*} \) (f) \( \begin{align*} -2\,x + 4\,y - 8\,z &= 2, \\ 3\,x +4\,y -7\,z &= 1 , \\ 5\,x + z &=-1 , \\ -3\,x + 4\,y + 3\,z &= 2. \end{align*} \)
  4. For the following problems, use Gauss-Jordan Elimination to put the matrices in RREF.
    \[ (a) \quad A= \begin{bmatrix} 4 & 4 & 1 & 24\\ 2 & -4 & 1 & 0\\ 5 & -4 & -5 & 12\\ \end{bmatrix} ; \qquad\quad (b) \quad A= \begin{bmatrix} 2 & 3 & -1 & 24\\ 1 & 1 & 4 & 13\\ 3 & -2 & 1 & -7\\ \end{bmatrix} \]
  5. Perform Gauss--Jordan elimination on the following matrices:
    \[ (a) \quad \begin{bmatrix} 2&-3&6 \\ 4&-2&1 \end{bmatrix} , \quad (b) \quad \begin{bmatrix} 2&-3&6&1 \\ 8&-4&2&3 \\ 7&-3&2&5 \end{bmatrix} , \quad (c) \quad \begin{bmatrix} 2&-3&4&-5&-10 \\ 1&1&-1&1&4 \\ 3&5&-9&7&24 \\ 2&2&-2&3&9 \end{bmatrix} . \]
  6. Suppose that the prices of three goods sold by a firm are interrelated by the linear system
    \[ 5\,x_1 - 2\,x_2 - 2\, x_3 =5, \quad -2\,x_1 + 5\,x_2 -2\,x_3 =5, \quad -2\,x_1 -2\, x_2 + 5\,x_3 =5 , \]
    where x1, x2, and x3 denote prices in dollars of goods 1, 2, and 3, respectively. Find the prices that solve the system using the Gauss-Jordan elimination method.
  7. If matrix A below represents the augmented matrix of a system of equations of the form \( ax_{1}+bx_{2}+cx_{3}=d, \) find the values of \( x_{1}, x_{2}, \) and x3.
    \[ (a) \quad \begin{bmatrix} -1 & -5 & -5 & 22\\ 4 & -5 & 4 & 19\\ 1 & 5 & -1 & -10\\ \end{bmatrix} , \qquad\quad (b) \quad A= \begin{bmatrix} 1 & 2 & 2 & 2\\ 1 & 1 & 1 & 3\\ 1 & 0 & 2 & 4\\ \end{bmatrix} \]
  8. What 2 × 2 matrix A in RREF yields
    1. The single homogeneous system of equations generator h = (−3, 1)?
    2. The single homogeneous generator h = (0, 1)?
  9. What 3 × 3 matrix A in RREF yields
    1. The single homogeneous system of equations generator h = (3, 2, 1)?
    2. These two homogeneous generators: h₁ = (1, 3, 0), and h₂ = (0, −2, 2)?
  10. What 2 × 4 matrix A in RREF yields
    1. These two homogeneous generators: h₁ = (3, 1, 0, 0), and h₂ = (−3, 0, 1, 0)?
    2. These two homogeneous generators: h₁ = (1, 0, 0, 1), and h₂ = (4, 0, 1, 0)?

  1. Althoen, Steven C., and McLaughlin, Renate (1987), Gauss--Jordan reduction: a brief history, The American Mathematical Monthly, Mathematical Association of America, 94, No. 2: 130–142, ISSN 0002-9890
  2. Gauss--Jordan Elimination Method
  3. G. Peters and J.H. Wilkinson, On the stability of Gauss--Jordan elimination with pivoting, National Physical Laboratory Teddington, Middlesex, England.
  4. Beezer, Robert A., Reduced Row-Echelon Form
  5. Gaussian Elimination, a collection of codes.
  6. Jie Ma and Yongshu Li, Gauss--Jordan elimination method for computing all types of generalized inverses related to the {1}-inverse, Journal of Computational and Applied Mathematics, Volume 321, September 2017, Pages 26-43. https://doi.org/10.1016/j.cam.2017.02.010
  7. Program for Gauss--Jordan Elimination Method
  8. Module for Gauss--Jordan Elimination
  9. Thomas Yuster, The Reduced Row Echelon Form of a Matrix is Unique: A Simple Proof, Mathematics Magazine, Vol. 57 (1984), no. 2 (March), 93–94. MR1572501