Linear Systems of Algebraic Equations

This page presents some topics from linear algebra needed to construct solutions of systems of linear algebraic equations, together with some applications. We use matrices and vectors as the essential tools for obtaining and expressing these solutions.

Sage solves linear systems

A system of \( m \) equations in \( n \) unknowns \( x_1 , \ x_2 , \ \ldots , \ x_n \) is a set of \( m \) equations of the form
\[ \begin{cases} a_{11} \,x_1 + a_{12}\, x_2 + \cdots + a_{1n}\, x_n &= b_1 , \\ a_{21} \,x_1 + a_{22}\, x_2 + \cdots + a_{2n}\, x_n &= b_2 , \\ \qquad \vdots & \qquad \vdots \\ a_{m1} \,x_1 + a_{m2}\, x_2 + \cdots + a_{mn}\, x_n &= b_m . \end{cases} \]
The numbers \( a_{ij} \) are known as the coefficients of the system. The matrix \( {\bf A} = [\,a_{ij}\,] , \) whose \( (i,\, j) \) entry is the coefficient \( a_{ij} \) of the system, is called the coefficient matrix and is denoted by
\[ {\bf A} = \left[ \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{array} \right] \qquad\mbox{or} \qquad {\bf A} = \left( \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{array} \right) . \]
Let \( {\bf x} =\left[ x_1 , x_2 , \ldots , x_n \right]^{\mathrm T} \) be the vector of unknowns. Then the product \( {\bf A}\,{\bf x} \) of the \( m \times n \) coefficient matrix A and the \( n \times 1 \) column vector x is an \( m \times 1 \) matrix
\[ \left[ \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{array} \right] \, \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{1,1} x_1 + a_{1,2} x_2 + \cdots + a_{1,n} x_n \\ a_{2,1} x_1 + a_{2,2} x_2 + \cdots + a_{2,n} x_n \\ \vdots \\ a_{m,1} x_1 + a_{m,2} x_2 + \cdots + a_{m,n} x_n \end{bmatrix} , \]

whose entries are the left-hand sides of our system of linear equations.
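For instance, in Sage (a small sketch with a hypothetical \( 3 \times 2 \) coefficient matrix), multiplying A by a symbolic vector of unknowns reproduces exactly these left-hand sides:

sage: x1, x2 = var('x1 x2')
sage: A = Matrix([[1, 2], [3, 4], [5, 6]])    # hypothetical 3 x 2 coefficient matrix
sage: A * vector([x1, x2])                    # entries are the left-hand sides
(x1 + 2*x2, 3*x1 + 4*x2, 5*x1 + 6*x2)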

If we define another column vector \( {\bf b} = \left[ b_1 , b_2 , \ldots , b_m \right]^{\mathrm T} \) whose components are the right-hand sides \( b_{i} ,\) the system is equivalent to the vector equation

\[ {\bf A} \, {\bf x} = {\bf b} . \]

We say that \( s_1 , \ s_2 , \ \ldots , \ s_n \) is a solution of the system if all \( m \) equations hold true when

\[ x_1 = s_1 , \ x_2 = s_2 , \ \ldots , \ x_n = s_n . \]
Sometimes a system of linear equations is known as a set of simultaneous equations; such terminology emphasises that a solution is an assignment of values to each of the \( n \) unknowns such that every equation holds under this assignment. It is also referred to simply as a linear system.

Sage has built-in commands that will solve a linear system of equations, given a coefficient matrix and a vector of constants. We need to learn some more theory before we can entirely understand these commands, but we can begin to explore their use. For now, consider these methods experimental, and do not let them replace row-reducing augmented matrices:

\[ {\bf A}\, {\bf x} = {\bf b} . \]
The command A.solve_right(b) will provide information about solutions to the linear system \( {\bf A}\,{\bf x} = {\bf b} \) of equations with coefficient matrix A and vector of constants b. The reason for the “right” (and the corresponding command named with “left”) will have to wait for a while. For now, it is generally correct in this course to use the “right” variant of any Sage linear algebra command that has both “left” and “right” variants. We apply the .solve_right() command in the examples below, including a system with no solutions.

An important use of Schur complements is the partitioned solution of linear systems; indeed, this is their most important application in the Finite Element Method (FEM for short), in connection with superelement analysis. Suppose that we have the linear system \( {\bf M}\,{\bf z} = {\bf r} , \) in which the given coefficient matrix M is partitioned into 2-by-2 blocks: \( {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} . \) Suppose that the given right-hand side vector is partitioned conformably as \( {\bf r} = [{\bf p}\ {\bf q}]^T , \) while \( {\bf z} = [{\bf x}\ {\bf y}]^T \) is unknown. The system is equivalent to the pair of vector equations:

\[ \begin{split} {\bf A}\, {\bf x} + {\bf B}\, {\bf y} &= {\bf p} , \\ {\bf C}\, {\bf x} + {\bf D}\, {\bf y} &= {\bf q} . \end{split} \]
If D is nonsingular, we can solve the second equation for \( {\bf y} = {\bf D}^{-1} \left( {\bf q} - {\bf C}\,{\bf x} \right) ; \) substituting this into the first equation and solving for x yields
\[ {\bf x} = \left( {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} \right)^{-1} \left( {\bf p} - {\bf B} \,{\bf D}^{-1} {\bf q} \right) = \left( {\bf M} / {\bf D} \right)^{-1} \left( {\bf p} - {\bf B} \,{\bf D}^{-1} {\bf q} \right) , \]
where \( {\bf M} / {\bf D} = {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} \) is the Schur complement. Similarly, if A is nonsingular, eliminating x and solving for y yields
\[ {\bf y} = \left( {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} \right)^{-1} \left( {\bf q} - {\bf C} \,{\bf A}^{-1} {\bf p} \right) = \left( {\bf M} / {\bf A} \right)^{-1} \left( {\bf q} - {\bf C} \,{\bf A}^{-1} {\bf p} \right) , \]
where \( {\bf M} / {\bf A} = {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} . \)
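These block formulas are easy to test numerically in Sage. The following is a minimal sketch with small hypothetical blocks (any choice with D and \( {\bf M} / {\bf D} \) invertible would do), checked against a direct solve of \( {\bf M}\,{\bf z} = {\bf r} : \)

sage: A = Matrix(QQ, [[4, 1], [1, 3]]);  B = identity_matrix(QQ, 2)
sage: C = Matrix(QQ, [[0, 1], [1, 0]]);  D = 2*identity_matrix(QQ, 2)
sage: p = vector(QQ, [1, 2]);  q = vector(QQ, [3, 4])
sage: x = (A - B*D.inverse()*C).solve_right(p - B*D.inverse()*q)  # x = (M/D)^{-1} (p - B D^{-1} q)
sage: y = D.solve_right(q - C*x)                                  # then y = D^{-1} (q - C x)
sage: M = block_matrix(QQ, [[A, B], [C, D]])
sage: M.solve_right(vector(QQ, list(p) + list(q))) == vector(QQ, list(x) + list(y))
True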

In general, we say that a linear system of algebraic equations is consistent if it has at least one solution and inconsistent if it has no solution. Thus, a consistent linear system has either one solution or infinitely many solutions---there are no other possibilities.

Theorem: Let A be an \( m \times n \) matrix.

  • (Overdetermined case) If m > n, then the linear system \( {\bf A}\,{\bf x} = {\bf b} \) is inconsistent for at least one vector b in \( \mathbb{R}^m . \)
  • (Underdetermined case) If m < n, then for each vector b in \( \mathbb{R}^m \) the linear system \( {\bf A}\,{\bf x} = {\bf b} \) is either inconsistent or has infinitely many solutions.

Theorem: If A is an \( m \times n \) matrix with \( m < n, \) then \( {\bf A}\,{\bf x} = {\bf 0} \) has infinitely many solutions.
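For example, in Sage (with a hypothetical \( 2 \times 3 \) matrix), the nontrivial solutions of \( {\bf A}\,{\bf x} = {\bf 0} \) form the right kernel:

sage: A = Matrix(QQ, [[1, 2, 3], [4, 5, 6]])   # m = 2 < n = 3
sage: A.right_kernel().basis()                 # every multiple of this vector solves A x = 0
[(1, -2, 1)]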

Theorem: A system of linear equations \( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if b is in the column space of A.

Theorem: A linear system of algebraic equations \( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if the vector b is orthogonal to every solution y of the adjoint homogeneous equation \( {\bf A}^{\mathrm T} \,{\bf y} = {\bf 0} \) or \( {\bf y}^T {\bf A} = {\bf 0} . \)
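This criterion can be verified in Sage; here is a small sketch with a hypothetical rank-one matrix:

sage: A = Matrix(QQ, [[1, 2], [2, 4]])   # rank 1; column space spanned by (1, 2)
sage: y = A.left_kernel().basis()[0];  y
(1, -1/2)
sage: y * vector(QQ, [1, 2])             # b = (1, 2): orthogonal, so A x = b is consistent
0
sage: y * vector(QQ, [1, 0])             # b = (1, 0): not orthogonal, so no solution
1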

Theorem: The general solution x of a consistent linear system of algebraic equations \( {\bf A}\,{\bf x} = {\bf b} \) can be expressed as the sum of a particular solution x0 of that system and the general solution of the corresponding homogeneous system

\[ {\bf x} = {\bf x}_0 + c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_k {\bf v}_k . \]
Here \( \{ {\bf v}_1 , \ldots , {\bf v}_k \} \) is a basis for the kernel (= null space) of A, and \( c_1 , \ldots , c_k \) are arbitrary constants. ■
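Both ingredients are available in Sage: solve_right returns a particular solution and right_kernel describes the homogeneous solutions. A minimal sketch with a hypothetical system:

sage: A = Matrix(QQ, [[1, 1, 1], [1, 2, 3]]);  b = vector(QQ, [6, 14])
sage: x0 = A.solve_right(b);  x0               # a particular solution
(-2, 8, 0)
sage: A.right_kernel().basis()                 # general solution: x0 + c*(1, -2, 1)
[(1, -2, 1)]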

Sage uses standard commands to solve linear systems of algebraic equations:

\[ {\bf A} \, {\bf x} = {\bf b} , \qquad {\bf A} = \left[ \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{array} \right] , \quad {\bf x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} , \qquad {\bf b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} . \]

Here b is a given vector and the column vector x is to be determined. An augmented matrix is a matrix obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of the given matrices. We associate with the given system of linear equations \( {\bf A}\,{\bf x} = {\bf b} \) an augmented matrix by appending the column vector b to the matrix A. Such a matrix is denoted by \( {\bf M} = \left( {\bf A}\,\vert \,{\bf b} \right) \) or \( {\bf M} = \left[ {\bf A}\,\vert \,{\bf b} \right] : \)

\[ {\bf M} = \left( {\bf A}\,\vert \,{\bf b} \right) = \left[ {\bf A}\,\vert \,{\bf b} \right] = \left[ \begin{array}{cccc|c} a_{1,1} & a_{1,2} & \cdots & a_{1,n} & b_1 \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} & b_m \end{array} \right] . \]

Theorem: Let A be an \( m \times n \) matrix. A linear system of equations \( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if the rank of the coefficient matrix A is the same as the rank of the augmented matrix \( \left[ {\bf A}\,\vert \, {\bf b} \right] . \)
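In Sage the two ranks are easy to compare; a brief sketch with a hypothetical inconsistent system:

sage: A = Matrix(QQ, [[1, 2], [2, 4]]);  b = vector(QQ, [1, 0])
sage: A.rank(), A.augment(b).rank()     # ranks differ, so the system is inconsistent
(1, 2)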

The term augmented matrix appears to have been introduced by the American mathematician Maxime Bôcher (1867--1918) in 1907.

We show how this works through examples. A system of symbolic equations can be solved using the solve command; here eq1 and eq2 are two hypothetical symbolic expressions:

sage: x1, x2 = var('x1 x2'); eq1 = x1 + 2*x2 - 3; eq2 = 3*x1 - x2   # hypothetical equations
sage: solve([eq1 == 0, eq2 == 0], x1, x2)
However, this approach becomes cumbersome for larger systems, so we use the vector approach instead:
sage: A = Matrix([[1,2,3],[3,2,1],[1,1,1]])
sage: b = vector([0, -4, -1])
sage: x = A.solve_right(b)
sage: x
(-2, 1, 0)
sage: A * x   # checking our answer...
(0, -4, -1)

A backslash \ can be used in place of solve_right: use A \ b instead of A.solve_right(b).

sage: A \ b
(-2, 1, 0)

If there is no solution, Sage raises an error. The matrix A above is singular, so some right-hand sides are unattainable; for instance, the (hypothetically chosen) vector w below lies outside the column space of A:

sage: w = vector([1, 0, 0])
sage: A.solve_right(w)
Traceback (most recent call last):
...
ValueError: matrix equation has no solutions

Similarly, use A.solve_left(b) to solve for x in \( {\bf x}\, {\bf A} = {\bf b}\).
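For example (a minimal sketch with a hypothetical invertible matrix):

sage: A = Matrix(QQ, [[2, 1], [1, 1]])
sage: x = A.solve_left(vector(QQ, [3, 2]));  x
(1, 1)
sage: x * A        # the row vector x times A recovers b
(3, 2)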

If a square matrix is nonsingular (invertible), then solutions of the vector equation \( {\bf A}\,{\bf x} = {\bf b} \) can be obtained by applying the inverse matrix: \( {\bf x} = {\bf A}^{-1} {\bf b} . \) We will discuss this method later in the section devoted to inverse matrices.
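As a preview, a one-line sketch with a hypothetical nonsingular matrix:

sage: A = Matrix(QQ, [[2, 5], [1, 3]]);  b = vector(QQ, [7, 4])
sage: A.inverse() * b                  # agrees with A.solve_right(b)
(1, 1)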

Example. The linear system

\[ \begin{cases} x_1 + x_2 + x_3 + x_4 + x_5 &= 3 , \\ 2\,x_1 + x_2 + x_3 + x_4 + 2\, x_5 &= 4 , \\ x_1 - x_2 - x_3 + x_4 + x_5 &= 5 , \\ x_1 + x_4 + x_5 &= 4 , \end{cases} \]
is an example of a system of four equations in five unknowns, \( x_1 , \ x_2 , \ x_3 , \ x_4 , \ x_5 . \) One solution of this system is
\[ x_1 = -1 , \ x_2 = -2 , \ x_3 =1, \ x_4 = 3 , \ x_5 = 2 , \]
as you can easily verify by substituting these values into the equations: every equation is satisfied for these values of \( x_1 , \ x_2 , \ x_3 , \ x_4 , \ x_5 . \) However, this is not the only solution to this system of equations; there are many others.

On the other hand, the system of linear equations

\[ \begin{cases} x_1 + x_2 + x_3 + x_4 + x_5 &= 3 , \\ 2\,x_1 + x_2 + x_3 + x_4 + 2\, x_5 &= 4 , \\ x_1 - x_2 - x_3 + x_4 + x_5 &= 5 , \\ x_1 + x_4 + x_5 &= 6 , \end{cases} \]
has no solution: there are no numbers we can assign to the unknowns \( x_1 , \ x_2 , \ x_3 , \ x_4 , \ x_5 \) so that all four equations are satisfied. We can handle both systems simultaneously by considering a single augmented matrix. An efficient way to do this is to form the partitioned matrix \( \left[ {\bf A} \, \vert \, {\bf b}_1 \, \vert \, {\bf b}_2 \right] \) in which the coefficient matrix A is "augmented" by both right-hand side vectors. This augmented matrix is then reduced by Gauss--Jordan elimination to
\[ \left( \begin{array}{ccccc|cc} 1&1&1&1&1&3&3 \\ 2&1&1&1&2&4&4 \\ 1&-1&-1&1&1&5&5 \\ 1&0&0&1&1&4&6 \end{array} \right) \quad \Longrightarrow \quad \left( \begin{array}{ccccc|cc} 1&0&0&0&1&1&0 \\ 0&1&1&0&0&-1&0 \\ 0&0&0&1&0&3&0 \\ 0&0&0&0&0&0&1 \end{array} \right) . \]
The sixth column gives the reduced right-hand side of the former system, from which its solutions can be read off (\( x_3 \) and \( x_5 \) are free variables). The last row of the reduced form shows that the latter system of equations has no solution. ■
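This reduction is easy to reproduce in Sage (a short sketch; the entries are copied from the example above):

sage: A = Matrix(QQ, [[1,1,1,1,1], [2,1,1,1,2], [1,-1,-1,1,1], [1,0,0,1,1]])
sage: b1 = vector(QQ, [3, 4, 5, 4]);  b2 = vector(QQ, [3, 4, 5, 6])
sage: A.augment(b1).augment(b2).rref()      # Gauss--Jordan elimination on [A | b1 | b2]
[ 1  0  0  0  1  1  0]
[ 0  1  1  0  0 -1  0]
[ 0  0  0  1  0  3  0]
[ 0  0  0  0  0  0  1]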
