Every subspace of 𝔽ⁿ can be described in essentially two dual ways: as the span of a generating set, or as an intersection of hyperplanes.

Subspaces

In many applications, the vector space under consideration is too large to give insight into the problem at hand. This leads to looking at smaller subsets, called subspaces, that inherit vector addition and scalar multiplication from the larger space. As a rule, subspaces occur in three ways: as the null space of a homogeneous equation, as the span of a set of vectors, or when an auxiliary condition is imposed on the elements of the larger space. The three examples following the definition illustrate these cases.

A subset W of a vector space V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined on V.

The linear span (also called just span) of a set of vectors in a vector space is the intersection of all linear subspaces which each contain every vector in that set. Alternatively, the span of a set S of vectors may be defined as the set of all finite linear combinations of elements of S. The linear span of a set of vectors is therefore a vector space.
Example 1: Let us consider the homogeneous differential equation
\[ \frac{{\text d}^2 y}{{\text d} x^2} + \omega^2 y = 0 , \]
where ω is a positive number. Its general solution
\[ y(x) = C_1 \cos\omega x + C_2 \sin \omega x \]
depends on two arbitrary constants. So the differential operator \( \displaystyle L\left[ \texttt{D} \right] = \texttt{D}^2 + \omega^2 , \) where \( \texttt{D} = {\text d}/{\text d}x , \) has a two-dimensional null space spanned by the two functions cos(ωx) and sin(ωx).
DSolve[y''[x] + \[Omega]^2*y[x] == 0, y[x], x]
{{y[x] -> C[1] Cos[x \[Omega]] + C[2] Sin[x \[Omega]]}}
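The symbolic check above uses Mathematica's DSolve. As an additional cross-check, the following Python sketch (the value of ω, the step size, and the test points are arbitrary illustrative choices, not from the text) verifies numerically via a central finite difference that cos(ωx) and sin(ωx) satisfy y'' + ω²y = 0:

```python
import math

# Verify that y(x) = cos(w x) and y(x) = sin(w x) satisfy y'' + w^2 y = 0
# by approximating y'' with a central difference.
w = 2.0          # a sample positive value of omega
h = 1e-4         # finite-difference step

def second_derivative(f, x):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

for f in (lambda x: math.cos(w * x), lambda x: math.sin(w * x)):
    for x in (0.3, 1.1, 2.7):
        residual = second_derivative(f, x) + w**2 * f(x)
        assert abs(residual) < 1e-4   # residual of the ODE is ~0
```

Any linear combination C₁cos(ωx) + C₂sin(ωx) would pass the same test, since the operator L[D] is linear.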

Example 2: Let u and v be two vectors in ℝ³. The span of these vectors consists of all linear combinations
\[ c_1 {\bf u} + c_2 {\bf v} , \]
where c₁ and c₂ range over the real numbers. We denote the set of all these linear combinations by W. We want to show that W is a subspace of ℝ³.

Obviously, the zero vector is in W because it corresponds to c₁ = c₂ = 0. Let us choose two arbitrary vectors from W:

\[ {\bf w}_1 = c_1 {\bf u} + c_2 {\bf v} \qquad \mbox{and} \qquad {\bf w}_2 = s_1 {\bf u} + s_2 {\bf v} . \]
Then their sum
\begin{align*} {\bf w}_1 + {\bf w}_2 &= c_1 {\bf u} + c_2 {\bf v} + s_1 {\bf u} + s_2 {\bf v} \\ &= \left( c_1 + s_1 \right) {\bf u} + \left( c_2 + s_2 \right) {\bf v} \end{align*}
also belongs to W, so this set is closed under addition. For an arbitrary scalar k, we have
\[ k\,{\bf w}_1 = k \left( c_1 {\bf u} + c_2 {\bf v} \right) = k\,c_1 {\bf u} + k\,c_2 {\bf v} \in W. \]
All remaining axioms from the definition of a vector space (see the section on vector spaces) are inherited from ℝ³, so W is a subspace of ℝ³.
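The two closure computations can be replayed numerically. This Python sketch uses the same sample vectors u₁ = (4, 6, 10) and v₁ = (7, 3, 10) that appear in the Mathematica plot below; the coefficients are arbitrary choices:

```python
# Closure of W = span{u, v} in R^3, checked on sample coefficients.
u = [4, 6, 10]
v = [7, 3, 10]

def comb(c1, c2):
    """Return c1*u + c2*v componentwise."""
    return [c1 * a + c2 * b for a, b in zip(u, v)]

c1, c2, s1, s2, k = 2, -1, 3, 5, -4
w1, w2 = comb(c1, c2), comb(s1, s2)

# w1 + w2 equals (c1+s1) u + (c2+s2) v, hence stays in W
assert [a + b for a, b in zip(w1, w2)] == comb(c1 + s1, c2 + s2)
# k w1 equals (k c1) u + (k c2) v, hence stays in W
assert [k * a for a in w1] == comb(k * c1, k * c2)
```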
u1 = {4, 6, 10}; v1 = {7, 3, 10}; arr1 = Graphics3D[{Arrowheads[.04], Thick, Red, Arrow[{{0, 0, 0}, u1}]}]; arr2 = Graphics3D[{Arrowheads[.04], Thick, Blue, Arrow[{{0, 0, 0}, v1}]}]; plane1 = Plot3D[x + y, {x, -10, 10}, {y, -10, 10}, PlotStyle -> Opacity[.2], Mesh -> 20, PlotRange -> All]; GraphicsRow[{Show[plane1, arr1, arr2, Axes -> True(*, ViewPoint->{-2.892,1.742,-0.235}*), ViewPoint -> {-2.854, 1.488, 1.0450}], Show[plane1, arr1, arr2, Axes -> True, ViewPoint -> {-2.892, 1.742, -0.235}(*,ViewPoint->{-2.854,1.488, 1.0450}*)]}]
Subspace of u₁
     
Subspace of u₂
u2 = {4, 6, 20}; v2 = {7, 3, 20}; arr3 = Graphics3D[{Arrowheads[.04], Thick, Red, Arrow[{{0, 0, 0}, u2}]}]; arr4 = Graphics3D[{Arrowheads[.04], Thick, Blue, Arrow[{{0, 0, 0}, v2}]}]; GraphicsRow[{Show[plane1, arr3, arr4, Axes -> True(*, ViewPoint->{-2.892,1.742,-0.235}*), ViewPoint -> {-2.854, 1.488, 1.0450}], Show[plane1, arr3, arr4, Axes -> True, ViewPoint -> {-2.892, 1.742, -0.235}(*,ViewPoint->{-2.854,1.488, 1.0450}*)]}]

Below is an animation of the case where a vector is common to many planes.

anim1 = Animate[ Graphics3D[{{Arrowheads[.1], Thick, Red, Arrow[{{0, 0, 0}, {-.8, 0, 0}}]}, InfinitePlane[{0, 0, 0}, {{1, 0, 0}, {1, Cos[\[Theta]], Sin[\[Theta]]}}]}, Axes -> True, PlotRange -> {{-1, 1}, {-1, 1}, {-1, 1}}, ImageSize -> Small], {\[Theta], 0, \[Pi], \[Pi]/16}, AnimationRunning -> False]
(*Export["[insert your filepath here]\anim1.gif",anim1,"GIF"]*);

Example 3: Let ℓ∞ be the vector space of infinite sequences with bounded entries:
\[ \ell_{\infty} = \left\{ {\bf x} = [ x_0, x_1 , x_2 , \ldots ]\,:\, \sup_{k\ge 0} | x_k | < \infty \right\} . \]
We consider its subspace ℓ₂, which consists of all square-summable sequences of real numbers:
\[ \ell_{2} = \left\{ {\bf x} = [ x_0, x_1 , x_2 , \ldots ]\,:\, \sum_{k\ge 0} | x_k |^2 < \infty \right\} . \]
Every square-summable sequence is bounded, so ℓ₂ ⊆ ℓ∞; closure of ℓ₂ under addition follows from the Minkowski inequality, and closure under scalar multiplication is immediate. ■

Every vector space V has at least two subspaces: the whole space itself, V ⊆ V, and the vector space consisting of a single element, the zero vector: {0} ⊆ V. These subspaces are called the trivial subspaces. All other subspaces (if any) are called proper subspaces. Perhaps the name "sub vector space" would be better, but the only kind of spaces we consider are vector spaces, so "subspace" will do. Conversely, every vector space is a subspace of itself and possibly of other larger spaces.

  In general, to show that a nonempty set W with two operations (an inner operation, usually called addition, and an outer operation of multiplication by scalars) is a vector space, one must verify the eight vector space axioms. However, if W is a subset of a known vector space V, then certain axioms need not be verified because they are "inherited" from V. For example, the commutative property need not be verified because it holds for all vectors from V.

Theorem 1: If W is a nonempty set of vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold.
  • The zero vector of V is in W.
  • If u and v are vectors in W, then u + v is in W.
  • If k is a scalar and u is a vector in W, then ku is in W.     ▣
So just three conditions, plus being a subset of a known vector space, get us all eight axioms used to define a vector space. Fabulous! This theorem can be paraphrased by saying that a subspace is “a nonempty subset (of a vector space) that is closed under vector addition and scalar multiplication."
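The three conditions of Theorem 1 can be turned into a (necessarily partial) numeric test. In the Python sketch below, the membership predicates and sample points are illustrative choices; passing the checks on finite samples cannot prove the subspace property, but any single failure disproves it:

```python
import itertools

def looks_like_subspace(member, samples, scalars):
    """Check Theorem 1's three conditions on finite samples in R^3."""
    if not member([0, 0, 0]):                           # zero vector
        return False
    for u, v in itertools.product(samples, repeat=2):
        if not member([a + b for a, b in zip(u, v)]):   # closed under +
            return False
    for k in scalars:
        for u in samples:
            if not member([k * a for a in u]):          # closed under scaling
                return False
    return True

def plane(x):      # homogeneous plane through the origin
    return x[0] - 2 * x[1] + 3 * x[2] == 0

def shifted(x):    # shifted plane, missing the origin
    return x[0] - 2 * x[1] + 3 * x[2] == 4

pts_plane = [[2, 1, 0], [-3, 0, 1], [-1, 1, 1]]
pts_shift = [[4, 0, 0], [6, 1, 0]]
assert looks_like_subspace(plane, pts_plane, [-2, 0, 5])
assert not looks_like_subspace(shifted, pts_shift, [-2, 0, 5])
```

The second predicate fails already at the zero-vector condition, matching the corollary's emphasis on 0 ∈ U.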
Corollary 1: Let V be an 𝔽-vector space (where 𝔽 is either ℝ or ℚ or ℂ) and let U be a nonempty subset of V. Then U is a subspace of V if and only if ku + v ∈ U whenever u, v ∈ U and k ∈ 𝔽.
If U is a subspace of V, then ku + v ∈ U whenever u, v ∈ U and k ∈ 𝔽, because a subspace is closed under scalar multiplication and vector addition.

Conversely, suppose that ku + v ∈ U whenever u, v ∈ U and k ∈ 𝔽. We must verify the following properties:

  1. Sums and scalar multiples of elements of U are in U (that is, U is closed under vector addition and scalar multiplication).
  2. U contains the zero vector of V.
  3. U contains an additive inverse for each of its elements.
Let u ∈ U. Taking k = −1 and v = u, we get 0 = (−1)u + u ∈ U. This verifies part 2. Since (−1)u = (−1)u + 0 ∈ U, it follows that the additive inverse of u is in U. This verifies part 3. Finally, ku = ku + 0 ∈ U and u + v = 1·u + v ∈ U. This verifies part 1.
Example 4: The vector space ℝ² is not a subspace of ℝ³ because ℝ² is not even a subset of ℝ³. The vectors in ℝ² all have two components, whereas the vectors in ℝ³ have three.

On the other hand, the set (we use the ket notation |x> for column vector)

\[ V = \left\{ | {\bf x} > \, = \begin{pmatrix} a \\ 0 \\ b \end{pmatrix}\, : \, a \mbox{ and } b \mbox{ are real numbers} \right\} \]
is a subset of ℝ³ that "looks" and "acts" like ℝ², although it is logically distinct from ℝ². We say in this case that the space V is isomorphic (see also section) to ℝ².

We check the condition of Corollary 1. Indeed, when a = b = 0, we get the zero vector, so 0 ∈ V and V is not empty. Using properties of real numbers, we get

\[ \begin{pmatrix} a_1 + k\,a_2 \\ 0 \\ b_1 + k\,b_2 \end{pmatrix} = \begin{pmatrix} a_1 \\ 0 \\ b_1 \end{pmatrix} + k \begin{pmatrix} a_2 \\ 0 \\ b_2 \end{pmatrix} \]
for any real numbers a₁, a₂, b₁, b₂, and k.
Example 5: Consider a subset U of ℝ³ defined by
\[ U = \left\{ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = [ x_1 , x_2 , x_3 ]^{\mathrm{T}} \,: \, x_1 - 2\,x_2 + 3\,x_3 = 0 \right\} . \]
From the equation x₁ − 2 x₂ + 3 x₃ = 0, we find x₁ = 2 x₂ − 3 x₃. Therefore, every vector in U is a linear combination:
\[ \begin{bmatrix} 2\,x_2 - 3\, x_3 \\ x_2 \\ x_3 \end{bmatrix} = x_2 \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} -3 \\ \phantom{-}0 \\ \phantom{-}1 \end{bmatrix} . \]
Thus, this null space is spanned by the two vectors [ 2, 1, 0 ]T and [ −3, 0, 1 ]T.
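As a quick numeric check (a sketch; the coefficient pairs are arbitrary samples), both spanning vectors, and every combination of them, satisfy the defining equation x₁ − 2x₂ + 3x₃ = 0:

```python
# The two spanning vectors of the null space found above.
b1, b2 = [2, 1, 0], [-3, 0, 1]

def in_plane(x):
    """Membership test for the plane x1 - 2*x2 + 3*x3 = 0."""
    return x[0] - 2 * x[1] + 3 * x[2] == 0

assert in_plane(b1) and in_plane(b2)
for c2, c3 in [(1, 1), (-2, 5), (7, 0)]:
    x = [c2 * a + c3 * b for a, b in zip(b1, b2)]
    assert in_plane(x)      # linear combinations stay in the plane
```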
Example 6: Consider a subset U of ℝ³ defined by
\[ U = \left\{ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = [ x_1 , x_2 , x_3 ]^{\mathrm{T}} \,: \, x_1 - 2\,x_2 + 3\,x_3 = 4 \right\} . \]
This is a non-homogeneous equation with three unknowns. Solving for one variable, we get
Solve[x1 - 2*x2 + 3*x3 == 4, x1]
{{x1 -> 4 + 2 x2 - 3 x3}}
Of course, we could solve it for x₂ instead, but that solution contains fractions---too much for a lazy person like me.
Solve[x1 - 2*x2 + 3*x3 == 4, x2]
{{x2 -> 1/2 (-4 + x1 + 3 x3)}}
The solution set of the nonhomogeneous equation x₁ − 2 x₂ + 3 x₃ =4 is
\[ \begin{bmatrix} 2\,x_2 - 3\, x_3 + 4 \\ x_2 \\ x_3 \end{bmatrix} = x_2 \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} -3 \\ \phantom{-}0 \\ \phantom{-}1 \end{bmatrix} + \begin{bmatrix} 4 \\ 0 \\ 0 \end{bmatrix} \tag{6.1} \]
because the first component x₁ can be expressed through two others from the given equation
\[ x_1 - 2\,x_2 + 3\,x_3 = 4 \qquad \Longrightarrow \qquad x_1 = 2\, x_2 - 3\,x_3 + 4 . \]
The sum of two solutions of (6.1) does not belong to U because the constant vector in its representation doubles upon addition. Indeed, let
\[ {\bf v} = x_2 \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} -3 \\ \phantom{-}0 \\ \phantom{-}1 \end{pmatrix} + \begin{pmatrix} 4 \\ 0 \\ 0 \end{pmatrix} \quad \mbox{and} \quad {\bf u} = y_2 \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + y_3 \begin{pmatrix} -3 \\ \phantom{-}0 \\ \phantom{-}1 \end{pmatrix} + \begin{pmatrix} 4 \\ 0 \\ 0 \end{pmatrix} \]
be two elements of the set U for some real numbers x₂, x₃, y₂, y₃ ∈ ℝ. Then their sum is
\[ {\bf v} + {\bf u} = \left( x_2 + y_2 \right) \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + \left( x_3 + y_3 \right) \begin{pmatrix} -3 \\ \phantom{-}0 \\ \phantom{-}1 \end{pmatrix} + \begin{pmatrix} 4 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 4 \\ 0 \\ 0 \end{pmatrix} \notin U \]
because the nonhomogeneous vector (4, 0, 0) is doubled.
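The failure of closure is easy to confirm numerically. In this Python sketch, the two sample solutions are arbitrary choices satisfying the equation:

```python
def lhs(x):
    """Left-hand side of the equation x1 - 2*x2 + 3*x3 = 4."""
    return x[0] - 2 * x[1] + 3 * x[2]

v = [4, 0, 0]          # solution with x2 = x3 = 0
u = [6, 1, 0]          # solution with x2 = 1, x3 = 0
assert lhs(v) == 4 and lhs(u) == 4
s = [a + b for a, b in zip(v, u)]
assert lhs(s) == 8     # the constant 4 doubles, so the sum is not in U
```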

Another way to view it is that (6.1) encodes only one genuine equation; the second and third rows are mere tautologies: x₂ = x₂ + 0 + 0 and x₃ = 0 + x₃ + 0.

Quiet[Solve[{2 x2 - 3 x3 + 4 == 0, x2 == x2 + 0 + 0, x3 == 0 + x3 + 0}, {x2, x3}]]
{{x3 -> 4/3 + (2 x2)/3}}
Example 7: Let U be the set of all points (x, y) in ℝ² for which x ≤ 0 and y ≥ 0 (second quadrant). This set is not a subspace of ℝ² because it is not closed under scalar multiplication by a negative number.
Second quadrant
     
Graphics[{Style[RegionUnion[Rectangle[{-1, 0}]], LightGray], {Black, Circle[{0, 0}, 1], Red, Arrow[Circle[{0, 0}, .25, {2 \[Pi], \[Pi]/4}]]}}, Epilog -> {Text[ "Quadrant II:\n Multiplying any y value by\na negative \ number would\nplace the point in \nQuadrant I", {-.5, .5}], Text["Quadrant I", {.5, .5}], Text["Quadrant III", {-.5, -.5}], Text["Quadrant IV", {.5, -.5}]}, Axes -> True, Frame -> True ]
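The figure's message can be stated in a few lines of Python (a sketch; the sample point is an arbitrary choice): quadrant II fails closure under multiplication by a negative scalar.

```python
def in_q2(p):
    """Membership test for quadrant II: x <= 0 and y >= 0."""
    return p[0] <= 0 and p[1] >= 0

p = (-1.0, 2.0)
assert in_q2(p)
q = (-1 * p[0], -1 * p[1])      # scale by k = -1
assert not in_q2(q)             # (1.0, -2.0) lies in quadrant IV
```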

Theorem 3: Let W be a subspace of an n dimensional vector space V. Then W is finite dimensional and dimWn, with equality if and only if W = V.
If W = {0}, then dimW = 0 and there is nothing to prove; so we may assume that W ≠ {0}. Let w1W be nonzero. If span{w1} = W, then dimW = 1. If span{w1} ≠ W, then there is a w2W such that the set {w1, w2} is linearly independent. If span{w1, w2} = W, then dimW = 2; otherwise, there exists w3W such that the set {w1, w2, w3} is linearly independent. Repeat until a linearly independent spanning set is obtained. Since no linearly independent set of vectors in W (a subset of V) contains more than n elements, this process terminates in rn steps with a linearly independent set of vectors w1, w2, ... , wr whose span is W. Thus, dimW = rn, with equality only if w1, w2, ... , wn is also a basis for V, that is, only if W = V.
Example 8: The set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \) forms a basis of the space ℘≤n of all polynomials of degree at most n, so ℘≤n has dimension n+1. It is a subspace of the vector space ℘ of all polynomials. However, the set of polynomials of degree exactly n is not a subspace of ℘≤n or ℘ because it is not closed under addition. For example, the sum of the two polynomials
\[ p(x) = 1 + x^2 \qquad\mbox{and}\qquad q(x) = 1 - x^2 \]
is a polynomial of degree zero.

However, the set of all polynomials containing only even powers of x is a subspace of the vector space ℘. For example, the set V of such polynomials of degree at most two is a subspace of ℘≤2, whose dimension is 3. On the other hand, V has dimension 2 because it is spanned by the two monomials { 1, x² }. ■
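The degree drop in Example 8 can be demonstrated with polynomials represented as coefficient lists (a sketch; the representation [a₀, a₁, a₂] for a₀ + a₁x + a₂x² is a bookkeeping choice):

```python
def degree(coeffs):
    """Index of the last nonzero coefficient (degree; 0 for constants)."""
    return max((i for i, c in enumerate(coeffs) if c != 0), default=0)

p = [1, 0, 1]    # 1 + x^2
q = [1, 0, -1]   # 1 - x^2
s = [a + b for a, b in zip(p, q)]
assert degree(p) == degree(q) == 2
# the sum is the constant 2, so "degree exactly 2" is not closed under +
assert s == [2, 0, 0] and degree(s) == 0
```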

 

Fundamental Subspaces


Let A be an m × n matrix over either the field of real numbers ℝ or complex numbers ℂ. The space of all such matrices is denoted by ℳm,n(𝔽) or simply ℳm,n. Then A determines four important vector spaces known as the fundamental subspaces determined by A. They are: the column space 𝒞(A), the row space ℛ(A), the null space 𝒩(A) (also called the kernel of the matrix A and denoted ker(A)), and the cokernel of the matrix A, denoted coker(A), which is the null space of the adjoint matrix A*. In other words, the cokernel of A consists of all vectors y such that
\[ \mbox{coker}({\bf A}) = \left\{ {\bf y}\in \mathbb{R}^m \, : \, {\bf A}^{\ast} {\bf y} = {\bf 0} \right\} . \]
Two of these spaces (ℛ(A) and ker(A)) are subspaces of 𝔽ⁿ, and two of them (𝒞(A) and coker(A)) are subspaces of 𝔽ᵐ. These subspaces will be discussed in detail in further sections.

 

Theorem 4: Suppose that A∈ℳm,n(𝔽) is an m×n matrix over the field 𝔽 (either ℚ or ℝ or ℂ). Then its null space (also known as the kernel) is a subspace of 𝔽ⁿ.
We will verify the conditions of Theorem 1. First, we check that the null space is not empty: it contains the zero vector, since A 0 = 0.

Second, we check addition closure by taking two arbitrary vectors v and u from the null space. Since

\[ {\bf A}\,{\bf u} = {\bf 0}, \qquad {\bf A}\,{\bf v} = {\bf 0}, \qquad \Longrightarrow \qquad {\bf A} \left( {\bf u} + {\bf v} \right) = {\bf 0} + {\bf 0} = {\bf 0} . \]
So the sum u + v qualifies for membership in the null space.

For scalar multiplication, we have

\[ {\bf A} \left( k\,{\bf u} \right) = k\,{\bf A}\,{\bf u} = k\,{\bf 0} = {\bf 0} . \]
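The proof can be illustrated on a concrete matrix. In this Python sketch, the matrix and the null-space vectors are sample choices (that A u = 0 for u = (1, −2, 1) is easily checked by hand):

```python
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

def matvec(M, x):
    """Matrix-vector product over plain lists."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

u = [1, -2, 1]       # a null-space vector of A
v = [2, -4, 2]       # another one (here a multiple of u)
assert matvec(A, u) == [0, 0, 0]
assert matvec(A, v) == [0, 0, 0]
# closure under addition and scalar multiplication
assert matvec(A, [a + b for a, b in zip(u, v)]) == [0, 0, 0]
assert matvec(A, [-7 * a for a in u]) == [0, 0, 0]
```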
Example 9: Let us consider ℳm,n(ℝ), the set of all real m × n matrices, and let \( {\bf M}_{i,j} \) denote the matrix whose only nonzero entry is a 1 in the i-th row and j-th column. Then the set \( \left\{ {\bf M}_{i,j} \ : \ 1 \le i \le m , \ 1 \le j \le n \right\} \) is a basis for the set of all such real matrices. Its dimension is mn.

Let A ∈ ℳm,m(ℝ) be a square matrix and let U be a subspace of ℳm,n(ℝ). Then the set

\[ {\bf A}U = \left\{ {\bf A\,X} \, : \, {\bf X} \in U \right\} \]
is a subspace of ℳm,n(ℝ). Since 0U, we have 0 = A0AU, so AU is not empty. Moreover, kAX + AY = A(kX + Y) ∈ AU for any scalar k and any X, YU. Corollary 1 then ensures that AU is a subspace of ℳm,n(ℝ). For example, we take U = ℳ3,3(ℝ) and
\[ {\bf A} = \begin{bmatrix} 1&2 &3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} , \qquad\mbox{with} \quad \det {\bf A} = 0. \]
Then every matrix in AU is singular (although AU does not contain all singular matrices), and, as shown above, AU is a subspace of ℳ3,3(ℝ). Indeed,
\[ \det \left( {\bf A\,B} \right) = \det \left( {\bf A} \right) \cdot \det \left( {\bf B} \right) = 0 \cdot \det \left( {\bf B} \right) = 0 . \]
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
Det[A]
X = {{x1, x2, x3}, {x4, x5, x6}, {x7, x8, x9}};
Det[A.X]
Theorem 5: Suppose that A ∈ ℳm,n(𝔽) is an m×n matrix over the field 𝔽 (either ℚ or ℝ or ℂ). Then its column space 𝒞(A) is a subspace of 𝔽ᵐ.
The column space 𝒞(A) is spanned by the columns w₁, w₂, …, wₙ of the matrix A; it consists of all possible linear combinations of these column vectors. To prove closure under addition, let
\[ {\bf u} = c_1 {\bf w}_1 + c_2 {\bf w}_2 + \cdots + c_n {\bf w}_n \qquad \mbox{and} \qquad {\bf v} = k_1 {\bf w}_1 + k_2 {\bf w}_2 + \cdots + k_n {\bf w}_n \]
be two vectors in 𝒞(A). Their sum can be written as
\[ {\bf u} + {\bf v} = \left( c_1 + k_1 \right) {\bf w}_1 + \left( c_2 + k_2 \right) {\bf w}_2 + \cdots + \left( c_n + k_n \right) {\bf w}_n , \]
which is a linear combination of the column vectors, so it belongs to 𝒞(A). Similarly, it can be shown that this set is closed under scalar multiplication. So this span is a subspace of 𝔽ᵐ.
Example 10: Let us consider the matrix
\[ {\bf A} = \begin{bmatrix} 1 & 2 & 3 \\ 4& 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}. \]
Since the determinant of this matrix is zero, we know that its column vectors are linearly dependent. So we choose two linearly independent ones, say
\[ {\bf u} = \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix} \qquad \mbox{and} \qquad {\bf v} = \begin{bmatrix} 2 \\ 5 \\ 8 \end{bmatrix} . \]
The column space of the matrix A is spanned by these two vectors, so any vector of the form
\[ {\bf w} = c {\bf u} + k {\bf v} \]
belongs to 𝒞(A). ■
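In fact, the third column of A already lies in the span of the first two, which can be verified directly (a sketch; the relation w = 2v − u is checked by the assertion):

```python
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
u = [row[0] for row in A]    # first column
v = [row[1] for row in A]    # second column
w = [row[2] for row in A]    # third column
assert w == [2 * b - a for a, b in zip(u, v)]   # w = 2v - u
```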
Theorem 6: Suppose that A ∈ ℳm,n(𝔽) is an m×n matrix over the field 𝔽 (either ℚ or ℝ or ℂ). Then its row space ℛ(A) is a subspace of 𝔽ⁿ.
The row space ℛ(A) is spanned by the rows of the matrix A; it is the column space of the transposed matrix Aᵀ (see the proof of the previous theorem).
Example 11: Let us consider the 2×3 matrix
\[ {\bf A} = \begin{bmatrix} 1 & 2 & 3 \\ 4& 5 & 6 \end{bmatrix}. \]
Its row space ℛ(A) is the two-dimensional subspace of ℝ³ spanned by the two vectors
\[ {\bf v} = \left[ 1\ 2\ 3 \right] \qquad\mbox{and} \qquad {\bf u} = \left[ 4\ 5\ 6 \right] . \]
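That these two rows are linearly independent (so that ℛ(A) really is two-dimensional) follows from a nonzero 2×2 minor, checked here in a short Python sketch:

```python
v = [1, 2, 3]
u = [4, 5, 6]
# det of the 2x2 block formed by the first two entries of each row
minor = v[0] * u[1] - v[1] * u[0]   # det [[1, 2], [4, 5]] = -3
assert minor != 0                    # rows are linearly independent
```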

 

 
