We will discuss the four fundamental subspaces associated with every
m×n real or complex matrix.
Four fundamental subspaces associated with every
m×n real matrix A:
The row space of matrix A, which is a subspace of
ℝn. The row space is the column space of the transposed
matrix AT or the adjoint matrix A✶.
The column space (also called the range or image when matrix A
is considered as a transformation) of matrix A, which is a subspace of
ℝm. The column space is the row space of the transposed
matrix AT or the adjoint matrix A✶.
The nullspace (also called the kernel when matrix A
is considered as a transformation) of matrix A, which is a subspace of
ℝn. It consists of all n-vectors x that are
mapped by A into the zero vector: Ax = 0.
The cokernel (also called the left nullspace) of A, which is the
kernel of AT. It is a subspace of ℝm. The
cokernel consists of all solutions of the vector equation yTA = 0 or A✶y = 0.
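All four subspaces can be computed directly in software; below is a minimal sketch using SymPy (the matrix here is a hypothetical example, not one from the text):

```python
from sympy import Matrix

# Hypothetical 2x3 matrix of rank 1, used only for illustration.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

row_space = A.rowspace()      # basis of the row space, a subspace of R^3
col_space = A.columnspace()   # basis of the column space, a subspace of R^2
kernel    = A.nullspace()     # basis of the nullspace: all x with A x = 0
cokernel  = A.T.nullspace()   # basis of the cokernel: all y with A^T y = 0

# The rank is 1, so the dimensions are 1, 1, 3 - 1 = 2, and 2 - 1 = 1.
print(len(row_space), len(col_space), len(kernel), len(cokernel))
```

Note how the four dimensions are tied together by the single number rank(A).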
Recall that a set of vectors β is said to generate or span a vector space V if every element from
V can be represented as a linear combination of vectors from β.
Since this matrix has three pivots in the first three rows, the rank of A is
3. Both the kernel and the cokernel are one-dimensional. The row space is spanned by
the first three rows of either matrix (A or its triangular form).
The easiest way to determine the basis of the kernel is to use the standard
Mathematica command:
NullSpace[A]
{{55, -53, -56, 65}}
This vector is orthogonal to each basis vector of the row space. To verify
this, we calculate its dot products with
every vector generating the row space:
n = {55, -53, -56, 65}
n.{3,4,-2,-1}
0
n.{5, 1, -3, -6}
0
n.{1,4,3,5}
0
n.{13, 0, 0, -11}
0
n.{0, 65, 0, 53}
0
n.{0, 0, 65, 56}
0
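The same verification can be scripted outside Mathematica; a sketch in plain Python, using the vectors listed above:

```python
# The kernel vector returned by NullSpace[A] above.
n = [55, -53, -56, 65]

# The six vectors generating the row space, as listed above.
rows = [[3, 4, -2, -1],
        [5, 1, -3, -6],
        [1, 4, 3, 5],
        [13, 0, 0, -11],
        [0, 65, 0, 53],
        [0, 0, 65, 56]]

dots = [sum(a * b for a, b in zip(n, r)) for r in rows]
print(dots)   # [0, 0, 0, 0, 0, 0]: orthogonal to every generator
```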
Multiplying A from the right by the elementary matrix E, we reduce
the given matrix to lower triangular form (such an operation is equivalent to
elementary column operations):
Note that this vector is orthogonal to every vector from the column space
(each dot product is zero), just as the basis vector [ 55, -53, -56, 65 ] of the kernel is
orthogonal to every vector from the row space.
■
Theorem (Fundamental theorem of Linear Algebra):
The nullspace is the orthogonal complement of the row space
(in ℝn).
The cokernel is the orthogonal complement of the column space
(in ℝm).
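Both statements of the theorem can be exercised numerically; a sketch using SymPy and a hypothetical 3×4 matrix:

```python
from sympy import Matrix

# Hypothetical 3x4 matrix of rank 2 (third row = first row + second row).
A = Matrix([[1, 2, 1, 0],
            [0, 1, -1, 2],
            [1, 3, 0, 2]])

# Nullspace vectors are orthogonal to the row space (in R^4).
for x in A.nullspace():
    for r in A.rowspace():
        assert r.dot(x) == 0

# Cokernel vectors are orthogonal to the column space (in R^3).
for y in A.T.nullspace():
    for c in A.columnspace():
        assert c.dot(y) == 0

print("orthogonality verified")
```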
Since the latter has three pivots, the rank of matrix A is 3. The nullspace
has dimension 5 - 3 = 2, and the cokernel is spanned by one vector because
4 - 3 = 1. The row space is spanned by the first three rows of either matrix
(A or its triangular form).
To determine bases for the kernel and the cokernel, we find its
LU-decompositions: first using elementary row operations and then
elementary column operations. Multiplying matrix A from the left by the
lower triangular matrix E1, we get
In \( \mathbb{R}^n , \) the vectors
\( e_1 =[1,0,0,\ldots , 0] , \quad e_2 =[0,1,0,\ldots , 0], \quad \ldots , e_n =[0,0,\ldots , 0,1] \)
form a basis for n-dimensional real space, called the standard basis. Its dimension is n.
■
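In code, the standard basis vectors are simply the columns of the identity matrix; a small NumPy sketch:

```python
import numpy as np

n = 4
E = np.eye(n)                    # columns are e_1, ..., e_n
x = np.array([3.0, -1.0, 2.0, 5.0])

# Any vector is the combination x = x_1 e_1 + ... + x_n e_n,
# with its own components as coefficients.
recombined = sum(x[i] * E[:, i] for i in range(n))
print(np.array_equal(recombined, x))   # True
```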
A function T : D ↦ R, with domain set D and range set
R, is said to be onto (surjective) if T(D) = R, that is, if
\( R = \left\{ T(x) \,\big\vert \, x \in D \right\} . \)
A function T : D ↦ R, with domain set D and range set
R, is said to be one-to-one (or injective) if it preserves distinctness: it never maps distinct elements of its domain to the same element of its range, that is, if whenever T(x) = T(y) for x,y ∈
D, then x = y.
A linear map T is a function from ℝn to
ℝm that preserves linear combinations; it is denoted
T : ℝn ↦ ℝm. Thus, for any vectors
x,y ∈ ℝn and any scalars a,b,
we have T(ax + by) =
aT(x) + bT(y).
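Every m×n matrix defines such a map via T(x) = Ax; a NumPy sketch checking the defining identity on a hypothetical matrix:

```python
import numpy as np

# Hypothetical 2x3 matrix: T maps R^3 to R^2.
A = np.array([[2.0, -1.0, 0.0],
              [1.0, 3.0, 5.0]])

def T(v):
    return A @ v

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.0, 4.0, 3.0])
a, b = 2.5, -1.5

# T(a x + b y) agrees with a T(x) + b T(y).
print(np.allclose(T(a * x + b * y), a * T(x) + b * T(y)))   # True
```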
A function T : D ↦ R, with domain set D and range set
R, is said to be a bijection if T is both one-to-one and onto. A linear bijection
T : V ↦ U from a vector space V onto another vector space
U is called an isomorphism.
Theorem:
A linear map T : ℝn ↦ ℝm is
onto if and only if Im(T) = ℝm, that is, dim(Im(T)) = m.
This means that the corresponding m×n
matrix has full row rank (its rows are linearly independent).
The onto condition follows automatically from the definition.
Theorem:
A linear map T : ℝn ↦ ℝm is
one-to-one if and only if ker(T) = {0}, and it is onto if Range(T) =
ℝm.
Another way of saying this is that T is one-to-one if dim(ker(T)) = 0, and onto if dim(Im(T)) = m.
If T(x) = T(y) for x,y ∈
ℝn, then T(x) - T(y) = 0.
However, since T is linear, this gives T(x-y) =
0. We can therefore conclude that x-y ∈
ker(T); when ker(T) = {0}, this forces x - y = 0, that is, x = y.
So T is one-to-one exactly when ker(T) =
{0}. The onto condition is automatic from the definition of onto and
the range of the map.
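The kernel criterion is easy to exercise in code; a SymPy sketch with two hypothetical matrices:

```python
from sympy import Matrix

# Full column rank: the map R^2 -> R^3 is one-to-one.
A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])
assert A.nullspace() == []        # ker(T) = {0}

# Rank 1: the map R^2 -> R^2 is not one-to-one.
B = Matrix([[1, 2],
            [2, 4]])
k = B.nullspace()[0]              # a nonzero kernel vector
x = Matrix([1, 1])
assert B * x == B * (x + k)       # two distinct inputs, same image
print("kernel criterion illustrated")
```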
Theorem:
A linear map T : ℝn ↦ ℝm is a
bijection if and only if T sends a basis of the domain ℝn to a basis of the range ℝm.
Corollary:
A linear map T : ℝn ↦ ℝm is a
bijection if and only if dim(ker(T)) = 0 and
dim(Im(T)) = m.
Corollary:
A linear map T : ℝn ↦ ℝm is a
bijection if and only if n = m and the matrix A
representing T is invertible.
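This corollary can also be illustrated in SymPy, again with hypothetical matrices:

```python
from sympy import Matrix

# Invertible square matrix: the map x -> A x is a bijection of R^2.
A = Matrix([[2, 1],
            [1, 1]])
assert A.det() != 0
assert A.nullspace() == []            # one-to-one
assert len(A.columnspace()) == 2      # onto R^2

# Singular square matrix: neither one-to-one nor onto.
B = Matrix([[1, 2],
            [2, 4]])
assert B.det() == 0
assert len(B.nullspace()) == 1        # nontrivial kernel
print("bijection criterion illustrated")
```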
Corollary:
A linear map T : ℝn ↦ ℝn is
onto if and only if for any spanning subset S of ℝn,
we have that T(S) is a spanning subset of ℝn.
Example: Let us consider the
following 3×5 matrix followed by its LU-decomposition:
The first square matrix L is actually the inverse of the elementary
matrix that reduces the given 3×5 matrix into upper triangular form
(which is its reduced row echelon form):
We can make a first easy observation: matrix A has two pivots,
so its rank is 2. The dimension of the kernel is 5 - 2 = 3, and the dimension
of the cokernel is 3 - 2 = 1. Both the row space and the column space are
two-dimensional.
The row space of matrix A is spanned by its first two rows or by
rows [1,-2,0,6,7] and [0,0,1,-2,3] of its upper triangular matrix U.
The column space has the basis of two column vectors corresponding to pivots
in positions 1 and 3:
with respect to leading variables x1 and
x3, while x2, x4, and
x5 are free variables. Therefore, the solution of the linear
system of equations Ax = 0 yields
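Since elementary row operations do not change the nullspace, the same basis can be recovered from the two RREF rows quoted above; a SymPy sketch:

```python
from sympy import Matrix

# The two nonzero rows of the reduced row echelon form U.
U = Matrix([[1, -2, 0, 6, 7],
            [0, 0, 1, -2, 3]])

basis = U.nullspace()     # one basis vector per free variable x2, x4, x5
for v in basis:
    print(v.T)            # dim(ker) = 5 - 2 = 3
```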
The third option for determining a basis of the kernel is to use elementary column
operations to obtain its LU-factorization. Indeed, multiplying the given
matrix A from the right by
are situated in the first and third columns, a basis for the nullspace is
determined by the columns in the second, fourth, and fifth positions. However,
since there was one swap of pivot columns (second and third), we need to
switch the signs of the ones on the diagonal and read the basis of the kernel as
First, we obtain its LU-factorization using elementary column operations.
Multiplying A from the right by the elementary 5×5 matrix E,
we reduce the given matrix to lower triangular form:
The matrix L contains all information about pivots---there are two of
them, in rows 1 and 2. So the rank of matrix A is 2. Knowing this number
allows one to determine the dimensions of the kernel (5 - 2 = 3) and the cokernel
(3 - 2 = 1).
Since we used elementary column operations, columns 1 and 3 of matrix
L provide a basis for the column space: