The numbers \( a_{ij} \) are known as the coefficients of the system. The matrix
\( {\bf A} = [\,a_{ij}\,] , \) whose \( (i,\, j) \) entry is the
coefficient \( a_{ij} \) of the system of linear equations, is called the coefficient matrix.
Let \( {\bf x} =\left[ x_1 , x_2 , \ldots , x_n \right]^{\mathrm T} \) be the vector of unknowns. Then the product
\( {\bf A}\,{\bf x} \) of the \( m \times n \) coefficient matrix A and the
\( n \times 1 \) column vector x is an \( m \times 1 \) matrix
whose entries are the left-hand sides of our system of linear equations.
If we define another column vector \( {\bf b} = \left[ b_1 , b_2 , \ldots , b_m \right]^{\mathrm T} \)
whose components are the right-hand sides \( b_{i} ,\) the system is equivalent to the vector equation
\[
{\bf A} \, {\bf x} = {\bf b} .
\]
We say that \( s_1 , \ s_2 , \ \ldots , \ s_n \) is a solution of the system if all
\( m \) equations hold true when we substitute \( x_1 = s_1 , \ x_2 = s_2 , \ \ldots , \ x_n = s_n . \)
Sometimes a system of linear equations is known as a set of simultaneous
equations; such terminology emphasizes that a solution is an assignment of
values to each of the n unknowns such that each and every equation
holds with this assignment. It is also referred to simply as a linear system.
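The defining check, substituting an assignment of values and verifying every equation, can be sketched in plain Python; the 2 × 3 system below is a made-up illustration, not one of the systems from this page:

```python
# Verify a candidate solution (s1, ..., sn) by substituting it into A x = b.
def is_solution(A, s, b):
    """True when sum_j A[i][j]*s[j] == b[i] holds for every equation i."""
    return all(
        sum(a * x for a, x in zip(row, s)) == rhs
        for row, rhs in zip(A, b)
    )

# Made-up system:  x1 + 2*x2 + 3*x3 = 14,   2*x1 - x2 + x3 = 3
A = [[1, 2, 3], [2, -1, 1]]
b = [14, 3]
print(is_solution(A, [1, 2, 3], b))   # True: both equations hold
print(is_solution(A, [1, 1, 1], b))   # False: the first equation fails
```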
MuPAD has the built-in command matlinsolve(A, b) that solves a
linear system of equations, given a coefficient matrix A and
a vector of constants b. We need to learn some more theory
before we can fully understand this command, but we can begin to explore
its use.
For now, consider this method experimental and do not let it replace row-reducing augmented matrices. A linear system of algebraic equations
\[
{\bf A}\, {\bf x} = {\bf b} ,
\]
where A is a given matrix and b is a specified column vector, may have a unique solution x, may have infinitely many solutions, or may have no solution at all.
The command matlinsolve(A, b) will provide the vector
x of solutions to the linear system \( {\bf A}\,{\bf x} = {\bf b} \) of equations with coefficient matrix A and vector
of constants
b.
Let us apply the matlinsolve(A, b)
command to a system with infinitely many solutions:
Example: Let us take a look at the system of algebraic equations
MatR := Dom::Matrix(Dom::Real):
A := MatR([[1, 2], [2, 4]]); b := MatR([3, 6])
The output consists of two vectors. The first one, \(
{\bf x}_p = \begin{pmatrix} 3 \\ 0 \end{pmatrix} , \) is a particular
solution of the system, which has infinitely many solutions:
\[
{\bf x} = {\bf x}_p + {\bf x}_h ,
\]
where \( {\bf x}_h \) is the general solution of the homogeneous equation
\( {\bf A}\,{\bf x}_h = {\bf 0} : \)
Therefore, MuPAD tells us that the given system of linear algebraic equations
has infinitely many solutions because the matrix A is singular
(its determinant is zero).
Vector xh generates the null space (= kernel) of matrix
A, so any vector from the kernel of A is a constant multiple of
xh. Indeed, we verify this statement with the following MuPAD
command:
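An equivalent check can be sketched in plain Python. Here we assume the kernel generator is \( {\bf x}_h = (-2, 1)^{\mathrm T} \); the computation below confirms that it lies in the kernel and that another kernel vector is a constant multiple of it:

```python
from fractions import Fraction

A  = [[1, 2], [2, 4]]    # the singular matrix from the example
xh = [-2, 1]             # assumed generator of the kernel of A

def matvec(A, v):
    """Matrix-vector product for a matrix stored as a list of rows."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

print(matvec(A, xh))     # [0, 0], so xh lies in the kernel

# Any kernel vector (x1, x2) satisfies x1 + 2*x2 = 0, i.e. x1 = -2*x2,
# hence (x1, x2) = x2 * (-2, 1): a constant multiple of xh.
v = [Fraction(-6), Fraction(3)]   # another kernel vector
c = v[1] / xh[1]                  # the constant multiple
print([c * x for x in xh] == v)   # True
```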
Example: Consider the singular matrix \( {\bf A} = \begin{bmatrix} 2&1&2 \\ 0&1&0 \\ 1&-2& 1 \end{bmatrix} .\)
We want to determine when the linear equation \( {\bf A} \, {\bf x} = {\bf b} \) has a solution; that is, we seek conditions on the vector b under which a solution exists. We use Mathematica to answer this question.
A = {{2, 1, 2}, {0, 1, 0}, {1, -2, 1}};
Det[A]
Out[2]= 0
The command MatrixRank[A] provides the rank of a matrix A.
MatrixRank[A]
Out[3]= 2
Atranspose = Transpose[A]
Out[4]= {{2, 0, 1}, {1, 1, -2}, {2, 0, 1}}
Atranspose = ConjugateTranspose[A]
Out[5]= {{2, 0, 1}, {1, 1, -2}, {2, 0, 1}}
x = NullSpace[Atranspose]
Out[6]= {{-1, 5, 2}}
y = {y1, y2, y3};
Dot[x, y] == 0
Out[8]= {-y1 + 5 y2 + 2 y3} == 0
x.y == 0
The algorithmically fastest way to solve a linear algebraic equation is the Gaussian elimination procedure. Mathematica offers a special command to achieve this: RowReduce[A]
■
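The effect of RowReduce can be mimicked in plain Python; the following sketch implements Gauss--Jordan reduction with exact fractions and applies it to the matrix A from this example:

```python
from fractions import Fraction

def row_reduce(M):
    """Gauss-Jordan elimination to reduced row echelon form, exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    lead = 0
    for col in range(cols):
        # find a row with a nonzero entry in this column
        piv = next((r for r in range(lead, rows) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[lead], M[piv] = M[piv], M[lead]
        # scale the pivot row so the pivot entry becomes 1
        M[lead] = [x / M[lead][col] for x in M[lead]]
        # eliminate the pivot column from every other row
        for r in range(rows):
            if r != lead and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[lead])]
        lead += 1
        if lead == rows:
            break
    return M

# The singular matrix from the Mathematica example above
A = [[2, 1, 2], [0, 1, 0], [1, -2, 1]]
R = row_reduce(A)
# R == [[1, 0, 1], [0, 1, 0], [0, 0, 0]]: two pivots, confirming rank 2
```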
An important use of Schur complements is the partitioned solution of linear systems. In fact this
is the most important application in Finite Element Method (FEM for short), in connection with superelement analysis. Suppose that
we have the linear system \( {\bf M}\,{\bf z} = {\bf r} , \) in which the given coefficient matrix M is partitioned
into 2 by 2 blocks: \( {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} . \)
Suppose that the given right-hand side vector is also partitioned as \( {\bf r} = [{\bf p}\ {\bf q}]^T , \)
while \( {\bf z} = [{\bf x}\ {\bf y}]^T \) is unknown. The system is equivalent to
the pair of vector equations:
\[
{\bf A}\,{\bf x} + {\bf B}\,{\bf y} = {\bf p} , \qquad {\bf C}\,{\bf x} + {\bf D}\,{\bf y} = {\bf q} .
\]
If D is nonsingular, eliminating y and solving for x yields
\[
\left( {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} \right) {\bf x} = {\bf p} - {\bf B}\,{\bf D}^{-1} {\bf q} ,
\]
where \( {\bf M} / {\bf D} = {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} \) is the Schur complement.
Similarly, if A is nonsingular, eliminating x and solving for y yields
\[
\left( {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} \right) {\bf y} = {\bf q} - {\bf C}\,{\bf A}^{-1} {\bf p} ,
\]
where \( {\bf M} / {\bf A} = {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} \) is the Schur complement of A.
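As a sketch of the block-elimination idea (all blocks and right-hand sides below are made-up 2 × 2 illustrations), one can carry out the elimination of y in exact arithmetic:

```python
from fractions import Fraction

def matmul(X, Y):
    """Product of two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matsub(X, Y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def inv2(M):
    """Inverse of a nonsingular 2x2 matrix."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

# Made-up blocks of M = [[A, B], [C, D]] and of r = [p, q]
A = [[4, 1], [1, 3]]
B = [[1, 0], [0, 1]]
C = [[0, 1], [1, 0]]
D = [[2, 0], [0, 2]]
p = [[5], [4]]
q = [[1], [3]]

# Eliminate y:  (A - B D^{-1} C) x = p - B D^{-1} q
Dinv = inv2(D)
S = matsub(A, matmul(B, matmul(Dinv, C)))       # Schur complement M/D
x = matmul(inv2(S), matsub(p, matmul(B, matmul(Dinv, q))))
# Back-substitute:  y = D^{-1} (q - C x)
y = matmul(Dinv, matsub(q, matmul(C, x)))
```

Solving the small Schur-complement system first and then back-substituting avoids factoring the full matrix, which is the point of superelement analysis.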
In general, we say that a linear system of algebraic equations is consistent if it has at least one solution and inconsistent if it has no solution. Thus, a consistent linear system has either one solution or infinitely many solutions---there are no other possibilities.
Theorem: Let A be an \( m \times n \) matrix.
(Overdetermined case) If m > n, then the linear system \( {\bf A}\,{\bf x} = {\bf b} \)
is inconsistent for at least one vector b in \( \mathbb{R}^m . \)
(Underdetermined case) If m < n, then for each vector b in \( \mathbb{R}^m \)
the linear system \( {\bf A}\,{\bf x} = {\bf b} \) is either inconsistent or has infinitely many solutions.
Theorem: If A is an \( m \times n \) matrix with
\( m < n, \) then \( {\bf A}\,{\bf x} = {\bf 0} \) has
infinitely many solutions.
Theorem: A system of linear equations \( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if b is in the column space of A.
Theorem: A linear system of algebraic equations \( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if the vector b is orthogonal
to every solution y of the adjoint homogeneous equation \( {\bf A}^{\mathrm T} \,{\bf y} = {\bf 0} , \) or equivalently \( {\bf y}^{\mathrm T} {\bf A} = {\bf 0} . \)
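For the singular 3 × 3 matrix from the Mathematica example above, whose adjoint null space is spanned by \( {\bf y} = (-1, 5, 2)^{\mathrm T} , \) the theorem reduces to a single orthogonality check, sketched here in Python:

```python
# For A = [[2, 1, 2], [0, 1, 0], [1, -2, 1]], NullSpace of A^T is spanned
# by y = (-1, 5, 2), so A x = b is consistent exactly when b . y == 0.
y = [-1, 5, 2]

def consistent(b):
    """True when -b1 + 5*b2 + 2*b3 == 0."""
    return sum(yi * bi for yi, bi in zip(y, b)) == 0

print(consistent([2, 0, 1]))   # True: the first column of A is reachable
print(consistent([1, 0, 0]))   # False: no solution exists for this b
```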
Theorem: The general solution x of a consistent linear system of algebraic equations
\( {\bf A}\,{\bf x} = {\bf b} \) can be expressed as the sum of a particular solution
\( {\bf x}_0 \) of that system and the general solution of the corresponding homogeneous system \( {\bf A}\,{\bf x} = {\bf 0} : \)
\[
{\bf x} = {\bf x}_0 + c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_k {\bf v}_k .
\]
Here \( \{ {\bf v}_1 , \ldots , {\bf v}_k \} \) is a basis for the kernel
(= null space) of A and \( c_1 , \ldots , c_k \) are arbitrary constants.
■
Sage uses standard commands to solve linear systems of algebraic equations:
Here b is a given vector and the column vector x is to be determined. An augmented matrix is a matrix
obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of them.
We associate with the given system of linear equations \( {\bf A}\,{\bf x} = {\bf b} \) an augmented matrix by appending the column-vector b to the matrix A.
Such a matrix is denoted by \( {\bf M} = \left( {\bf A}\,\vert \,{\bf b} \right) \) or \( {\bf M} = \left[ {\bf A}\,\vert \,{\bf b} \right] : \)
Theorem: Let A be an \( m \times n \) matrix. A linear system of equations
\( {\bf A}\,{\bf x} = {\bf b} \) is consistent if and only if the rank of the coefficient matrix A is the same as the rank of the augmented matrix \( \left[ {\bf A}\,\vert \, {\bf b} \right] . \) ■
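The rank criterion can be checked with a short Python sketch in exact rational arithmetic; the matrix is the singular 2 × 2 matrix from the MuPAD example, and b_bad is a made-up inconsistent right-hand side for contrast:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [2, 4]]                 # singular matrix from the MuPAD example
b_good, b_bad = [3, 6], [3, 5]       # b_bad is a made-up inconsistent vector
augment = lambda A, b: [row + [v] for row, v in zip(A, b)]

print(rank(A) == rank(augment(A, b_good)))   # True: the system is consistent
print(rank(A) == rank(augment(A, b_bad)))    # False: the system is inconsistent
```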
The actual use of the term augmented matrix appears to have been introduced by the American mathematician Maxime Bôcher (1867--1918) in 1907.
We show how it works by examples. The system can be solved using the solve command:
sage: solve([eq1==0,eq2==0],x1,x2)
However, this is somewhat cumbersome, so we use the vector approach instead:
sage: A = Matrix([[1,2,3],[3,2,1],[1,1,1]])
sage: b = vector([0,-4,-1])
sage: x = A.solve_right(b)
sage: x
(-2, 1, 0)
sage: A*x   # checking our answer...
(0, -4, -1)
A backslash \ can be used in place of solve_right; use
A\b instead of A.solve_right(b).
sage: A \ b
(-2, 1, 0)
If there is no solution, Sage returns an error:
sage: A.solve_right(w)
Traceback (most recent call last):
...
ValueError: matrix equation has no solutions
Similarly, use A.solve_left(b) to solve for x in
\( {\bf x}\, {\bf A} = {\bf b}\).
If a square matrix A is nonsingular (invertible), then the solution of the vector equation \( {\bf A}\,{\bf x} = {\bf b} \)
can be obtained by applying the inverse matrix:
\( {\bf x} = {\bf A}^{-1} {\bf b} . \) We will discuss this method later in the section devoted to inverse matrices.
as you can easily verify by substituting these values into the equations. Every equation is satisfied for these values of \( x_1 , \ x_2 , \ x_3 , \ x_4 , \ x_5 . \)
However, this is not the only solution to this system of equations; there are many others.
has no solution. There are no numbers we can assign to the unknowns
\( x_1 , \ x_2 , \ x_3 , \ x_4 , \ x_5 \) so that all four equations are satisfied.
We can solve these two systems simultaneously by considering an augmented matrix. An efficient way to do this is to
form the partitioned matrix \( \left[ {\bf A} \, \vert \, {\bf b}_1 \, \vert \, {\bf b}_2 \right] \)
in which the coefficient matrix A is "augmented" by both vectors. Then this augmented matrix is reduced using
Gauss--Jordan elimination to
The sixth column gives the required solution for the former system of equations. The last row in reduced form shows
that the latter system of equations has no solution.
■
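The same multi-right-hand-side reduction can be sketched in Python with exact fractions. Here the 3 × 3 matrix and the consistent right-hand side are taken from the Sage example above, while b2 is a made-up inconsistent vector for illustration:

```python
from fractions import Fraction

def rref(M):
    """Gauss-Jordan elimination to reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    lead = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(lead, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[lead], M[piv] = M[piv], M[lead]
        M[lead] = [x / M[lead][col] for x in M[lead]]
        for r in range(len(M)):
            if r != lead and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[lead])]
        lead += 1
        if lead == len(M):
            break
    return M

A  = [[1, 2, 3], [3, 2, 1], [1, 1, 1]]   # matrix from the Sage example
b1 = [0, -4, -1]                          # consistent right-hand side
b2 = [1, 0, 0]                            # made-up inconsistent right-hand side

# Augment A by both vectors and reduce once
aug = [row + [u, v] for row, u, v in zip(A, b1, b2)]
R = rref(aug)
# R == [[1, 0, -1, -2, 0],
#       [0, 1,  2,  1, 0],
#       [0, 0,  0,  0, 1]]
# The b1 column (index 3) gives a particular solution (-2, 1, 0) of A x = b1,
# while the last row [0 0 0 | 0 | 1] shows that A x = b2 has no solution.
```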