This section analyzes sets of vectors whose elements are independent in the sense that no one of them can be expressed as a linear combination of the others. This is important to know because the existence of such a relationship often signals that we can reduce a problem under consideration to a similar problem formulated for a smaller set.
Linear Independence
Before introducing the property of independence, we recall, for convenience, the following definition.
Let S = { v_{1}, v_{2}, ... , v_{n} } be a set of n vectors in a vector space V over the field of scalars 𝔽 (that is, either rational numbers or real numbers or complex numbers). If a_{1}, a_{2}, ... , a_{n} are scalars from the same field, then the linear combination of those vectors with those scalars as coefficients is
\[
a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_n {\bf v}_n .
\]
Example 1:
Every vector in the plane with Cartesian coordinates can be expressed in exactly one way as a linear combination of the standard unit vectors. For example, the only way to express the vector (3, 4) as a linear combination of i = (1, 0) and j = (0, 1) is
\[
(3, 4) = 3\,{\bf i} + 4\,{\bf j} .
\tag{1.1}
\]
The numbers in that list are the coordinates of the point at the tip of an arrow (red) that originates at the origin of a Cartesian plot. The basis vectors in the plot below (green) represent, respectively, i and j in Equation (1.1); when assembled into a matrix, they form the two-dimensional identity matrix.
Our vector above, v1, may be thought of as one of a "family" of vectors (perhaps of differing lengths) all pointing in the same direction (see plot below). The members of the family are linear combinations that are not independent of one another. Note that the white arrow, v2, is one half of the red arrow: it is our original vector multiplied by 0.5, and it is collinear with our original vector.
Let us introduce a third vector v that makes an angle of 60° with the abscissa. As illustrated in the following figure, the unit vector collinear with v is
\[
{\bf v} = \left( \cos 60^{\circ} , \, \sin 60^{\circ} \right) = \left( \frac{1}{2} , \, \frac{\sqrt{3}}{2} \right) .
\]
Whereas expansion (1.1) shows the only way to express the vector (3, 4) as a linear combination of i = (1, 0) and j = (0, 1), there are infinitely many ways to express (3, 4) as a linear combination of the three vectors i, j, and v. For instance, for every scalar t,
\[
(3, 4) = \left( 3 - \frac{t}{2} \right) {\bf i} + \left( 4 - \frac{t\sqrt{3}}{2} \right) {\bf j} + t\,{\bf v} ,
\]
and the choices t = 0, t = 2, and t = 6 give three different decompositions.
In short, by introducing a new axis v we created the complication of having multiple ways of assigning coordinates to the points in the plane. What makes the vector v superfluous is the fact that it can be expressed as a linear combination of the unit vectors i and j, that is
\[
{\bf v} = \frac{1}{2}\,{\bf i} + \frac{\sqrt{3}}{2}\,{\bf j} .
\]
However, if we remove one vector from the set of our three vectors, considering only two of them, say T = {v, j}, we will arrive at the unique decomposition:
\[
(3, 4) = 6\,{\bf v} + \left( 4 - 3\sqrt{3} \right) {\bf j} .
\]
End of Example 1
Let S be a subset of a vector space V.
S is a linearly independent subset of V if and only if no vector in S can be expressed as a linear combination of the other vectors in S.
S is a linearly dependent subset of V if and only if some vector v in S
can be expressed as a linear combination of the other vectors in S.
Theorem 1:
A nonempty set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) of nonzero vectors
in a vector space V is linearly independent if and only if the only coefficients satisfying the vector equation
\[
a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_r {\bf v}_r = {\bf 0}
\]
are a_{1} = a_{2} = ⋯ = a_{r} = 0.
If this vector equation can be satisfied with coefficients that are not all zero, then at least one of the vectors in S must be expressible as a linear combination of the others. To be more specific, suppose a_{1} ≠ 0. Then we can write
\[
{\bf v}_1 = - \frac{a_2}{a_1}\,{\bf v}_2 - \cdots - \frac{a_r}{a_1}\,{\bf v}_r .
\]
Conversely, if some vector admitted two representations with coefficients k_{i} and p_{i}, respectively, with at least one difference k_{i} - p_{i} ≠ 0, then we would conclude that there exists a set of scalars a_{i} = k_{i} - p_{i}, not all zero, for which
\[
a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_r {\bf v}_r = {\bf 0} .
\]
One such solution is a_{1} = 7, a_{2} = -6,
a_{3} = -1, and
a_{4} = -6. Thus, S is a linearly dependent subset of
ℝ^{4}.
End of Example 2
■
Theorem 2:
An indexed set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) of two or more vectors (r ≥ 2)
in a vector space V is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others.
If some v_{j} in S equals a linear combination of the other vectors, then v_{j} can be subtracted from both sides of that equation, producing a linear dependence relation with a nonzero weight (−1) on v_{j}. Thus, S is a linearly dependent set.
Conversely, suppose S is linearly dependent. If v₁ is zero, then it is a trivial linear combination of the other vectors in S. Otherwise, v₁ ≠ 0, and there exist weights c_{1}, … , c_{r}, not all zero, such that
\[
c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_r {\bf v}_r = {\bf 0} .
\]
Let j be the largest index for which c_{j} ≠ 0. If j = 1, then c₁v₁ = 0, which is impossible because v₁ ≠ 0 and c₁ ≠ 0. Hence j > 1, and we can solve for v_{j}:
\[
{\bf v}_j = - \frac{c_1}{c_j}\,{\bf v}_1 - \cdots - \frac{c_{j-1}}{c_j}\,{\bf v}_{j-1} ,
\]
so v_{j} is a linear combination of the other vectors in S.
Example 3:
Consider the set
\[
S = \left\{ (1,3,-4),\quad (1,2,-3), \quad (1,-3,2) \right\}
\]
in ℝ³. To determine whether S is linearly dependent, we
check whether one of the vectors is a linear combination of the other vectors:
\[
\left( \begin{array}{c} 1 \\ 3 \\ -4 \end{array} \right) = a \left( \begin{array}{c} 1 \\ 2 \\ -3 \end{array} \right) + b \left( \begin{array}{c} 1 \\ -3 \\ 2 \end{array} \right) ,
\]
for some scalars a and b. Hence, we need to solve the system of algebraic equations
\[
\begin{split}
a + b &= 1 , \\
2a -3 b &= 3 , \\
-3a + 2b &= -4.
\end{split}
\]
Expressing a = 1 - b from the first equation and substituting this value into the other equations, we obtain
\[
\begin{split}
2\left( 1 - b \right) -3 b &= 3 \qquad \Longrightarrow \qquad b = -1/5 , \\
-3\left( 1 - b \right) + 2b &= -4 \qquad \Longrightarrow \qquad b = -1/5 .
\end{split}
\]
Both equations give the same value b = −1/5, so a = 1 − b = 6/5 and
\[
(1,3,-4) = \frac{6}{5} \, (1,2,-3) - \frac{1}{5} \, (1,-3,2) .
\]
Hence S is a linearly dependent set.
End of Example 3
■
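Since both equations force b = −1/5, a short exact computation with Python's `fractions` module (our own check, not part of the text) confirms the solution and the resulting dependence relation:

```python
# Exact check of the system from Example 3 using rational arithmetic:
#   a + b = 1,   2a - 3b = 3,   -3a + 2b = -4.
from fractions import Fraction

b = Fraction(-1, 5)   # value forced by the last two equations
a = 1 - b             # from the first equation, a = 1 - b

assert a + b == 1
assert 2 * a - 3 * b == 3
assert -3 * a + 2 * b == -4

# Hence (1, 3, -4) = a*(1, 2, -3) + b*(1, -3, 2), a dependence relation.
rhs = tuple(a * p + b * q for p, q in zip((1, 2, -3), (1, -3, 2)))
assert rhs == (1, 3, -4)
print(a, b)  # 6/5 -1/5
```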
In other words, a set of vectors is linearly independent if the only
representation of 0 as a linear combination of its vectors is the
trivial representation, in which all the scalars a_{i} are zero.
The alternative definition, that a set of vectors is linearly dependent if and
only if some vector in that set can be written as a linear combination of the
other vectors, is only useful when the set contains two or more vectors. Two
vectors are linearly dependent if and only if one of them is a constant
multiple of the other.
Example 4:
The most well known set of linearly
independent vectors in ℝ^{n} is the set of standard unit
vectors. In ℝ³, for instance, these are
\[
{\bf e}_1 = (1,0,0) , \qquad {\bf e}_2 = (0,1,0) , \qquad {\bf e}_3 = (0,0,1) ,
\]
and the only coefficients satisfying
\[
a_1 {\bf e}_1 + a_2 {\bf e}_2 + a_3 {\bf e}_3 = {\bf 0}
\]
are a_{1} = 0, a_{2} = 0, a_{3} = 0. This becomes evident by writing the equation in its component form
\[
\left( a_1 , a_2 , a_3 \right) = (0,0,0).
\]
End of Example 4
■
Corollary 1:
Let V be a vector space over a field 𝔽. A subset
S = {v_{1} , v_{2}, … , v_{n}} of nonzero vectors of V is linearly dependent if and only if \( \displaystyle {\bf v}_i = \sum_{j\ne i} c_j {\bf v}_j , \) for some i, where c₁, c₂, … , c_{n} are some scalars.
If S is linearly dependent, then there exist scalars k_{i} ∈ 𝔽, not all zero, such that
\( \displaystyle \sum_i k_i {\bf v}_i = {\bf 0} . \) Suppose k_{i} ≠ 0 for some index i. Then this linear combination can be written as \( \displaystyle {\bf v}_i = - k_i^{-1} \sum_{j\ne i} k_j {\bf v}_j , \) so v_{i} is expressed as a linear combination of the other elements of S.
Conversely, if for some i, v_{i} can be expressed as a linear combination of other elements from S, i.e., \( \displaystyle {\bf v}_i = \sum_{j\ne i} \alpha_j {\bf v}_j , \) where α_{j} ∈ 𝔽, then this yields that
\[
\alpha_1 {\bf v}_1 + \alpha_2 {\bf v}_2 + \cdots + \alpha_{i-1} {\bf v}_{i-1} + (-1)\,{\bf v}_i + \alpha_{i+1} {\bf v}_{i+1} + \cdots + \alpha_n {\bf v}_n = {\bf 0} .
\]
This shows that there exist scalars α_{1}, α_{2}, … , α_{n} with α_{i} = −1 such that
\( \displaystyle \sum_i \alpha_i {\bf v}_i = {\bf 0} , \) and hence S is linearly dependent.
Example 5:
Determine whether or not the following set of matrices is linearly independent in ℝ^{2,2}:
\[
S = \left\{ \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} , \quad \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} , \quad \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} \right\} .
\]
Solution:
Since this set is finite, we want to check whether the equation
\[
c_1 \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} + c_2 \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} + c_3 \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
\]
has a unique solution (which would necessarily be c₁ = c₂ = c₃ = 0,
corresponding to linear independence) or infinitely many solutions (corresponding to linear dependence). We can solve for c₁, c₂, and c₃ by
comparing entries of the matrices on the left- and right-hand sides above
to get the linear system
\[
\begin{split}
2\, c_1 + 3\, c_2 - 4\, c_3 &= 0 , \\
c_1 - 2\, c_2 + 5\, c_3 &= 0 , \\
c_1 - 2\, c_2 + 5\, c_3 &= 0 , \\
2\, c_1 + 3\, c_2 - 4\, c_3 &= 0 .
\end{split}
\]
Solving this linear system via our usual methods or asking Mathematica reveals that
c₃ is a
free variable (so there are infinitely many solutions) and c₁ = −c₃, c₂ = 2c₃. It follows that S is linearly dependent; in particular, choosing c₃ = 3 gives c₁ = −3 and c₂ = 6, so
\[
-3 \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} + 6 \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} + 3 \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} .
\]
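This dependence relation is easy to verify entry by entry; here is a brief Python check (our own sketch, not part of the text):

```python
# Check the dependence relation  -3*M1 + 6*M2 + 3*M3 = 0  from Example 5.
M1 = [[2, 1], [1, 2]]
M2 = [[3, -2], [-2, 3]]
M3 = [[-4, 5], [5, -4]]
c = [-3, 6, 3]

# Form the linear combination c[0]*M1 + c[1]*M2 + c[2]*M3 entrywise.
total = [[sum(ck * M[i][j] for ck, M in zip(c, (M1, M2, M3)))
          for j in range(2)] for i in range(2)]
print(total)  # [[0, 0], [0, 0]]
```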
End of Example 5
■
Corollary 2:
A finite set of vectors that contains the zero vector is linearly dependent.
For any vectors v_{1}, v_{2}, … ,
v_{r}, the set S = {v_{1}, v_{2}, … ,
v_{r}, 0} is linearly dependent because the equation
\[
0{\bf v}_1 + 0 {\bf v}_2 + \cdots + 0{\bf v}_r + 1 \cdot {\bf 0} = {\bf 0}
\]
expresses 0 as a linear combination of the vectors in S with coefficients not all zero.
Example 6:
Let us consider the set of Pauli matrices. This set consists of three 2 × 2 complex matrices that are self-adjoint (Hermitian), involutory, and unitary. Usually they are denoted by the Greek letter sigma (σ):
\[
\sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \qquad \sigma_2 = \begin{bmatrix} 0 & -{\bf j} \\ {\bf j} & \phantom{-}0 \end{bmatrix} , \qquad \sigma_3 = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} .
\]
These matrices are named after the Austrian theoretical physicist Wolfgang Pauli (1900--1958), winner of the Nobel Prize in Physics.
We want to know whether or not there exist complex numbers c₁, c₂, and c₃, such that the identity matrix I = σ_{0} is a linear combination of the Pauli matrices:
\[
{\bf I} = \sigma_0 = c_1 \sigma_1 + c_2 \sigma_2 + c_3 \sigma_3 .
\]
Writing this equation more explicitly gives
\[
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = c_1 \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 & -{\bf j} \\ {\bf j} & \phantom{-}0 \end{bmatrix} + c_3 \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix}
\]
This is equivalent to four linear equations:
\begin{align*}
1 & = c_3 \\
0 &= c_1 - {\bf j}\,c_2 , \\
0 &= c_1 + {\bf j}\,c_2 , \\
1 &= - c_3
\end{align*}
Since c₃ cannot equal 1 and −1 simultaneously, we conclude that this system of equations has no solution: the identity matrix is not a linear combination of the Pauli matrices, and this set of Pauli matrices together with the identity matrix is linearly independent.
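One way to confirm this conclusion is to flatten each 2 × 2 matrix into a vector of ℂ⁴ and compute the rank of the resulting 4 × 4 complex matrix; rank 4 means the four matrices are linearly independent. The following pure-Python sketch (the `rank` helper is our own, using a small numerical tolerance) does exactly that:

```python
# The four matrices sigma_0 (identity), sigma_1, sigma_2, sigma_3 can be
# flattened (row-major) into vectors of C^4; the set is linearly independent
# iff the resulting 4x4 complex matrix has rank 4.
s0 = [1, 0, 0, 1]      # identity matrix, flattened
s1 = [0, 1, 1, 0]
s2 = [0, -1j, 1j, 0]   # 1j is Python's imaginary unit
s3 = [1, 0, 0, -1]

def rank(rows):
    """Rank by Gaussian elimination over the complex numbers."""
    m = [r[:] for r in rows]
    r = 0
    for col in range(4):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]       # normalize the pivot row
        for i in range(len(m)):
            if i != r:                             # clear the column elsewhere
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank([s0, s1, s2, s3]))  # 4, so the set is linearly independent
```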
End of Example 6
■
The empty subset ∅ of V is linearly independent, as the condition of linear independence holds vacuously for ∅.
A set containing a single vector {v}, where v ∈ V, is linearly independent if and only if v ≠ 0.
Corollary 3:
Any two nonzero vectors are linearly dependent in a vector space if and only if one is a scalar multiple of the other.
Example 7:
Determine if the following sets of vectors are linearly independent.
\[
{\bf a}. \quad {\bf v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} , \qquad {\bf u} = \begin{bmatrix} 4 \\ 6 \end{bmatrix} ; \qquad {\bf b}. \quad {\bf v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} , \qquad {\bf u} = \begin{bmatrix} 3 \\ 2 \end{bmatrix} .
\]
Solution: a. Notice that u is a multiple of v, namely, u = 2v. Hence, 2v − u = 0, which shows that the set of two vectors {v, u} is linearly dependent.
Figure 3, Lay, page 59
b. The vectors v and u are certainly not multiples of one another. Could they be linearly dependent? Suppose that there are scalars c₁ and c₂ satisfying
\[
c_1 {\bf v} + c_2 {\bf u} = {\bf 0} .
\]
If c₁ ≠ 0, then we can solve for v in terms of u; that is, v = (−c₂/c₁)u. This is impossible because v is not a multiple of u. So c₁ must be zero. Similarly, c₂ must also be zero. Thus, {v, u} is a linearly independent set.
Figure 3, Lay, page 59
End of Example 7
■
Corollary 4:
If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v_{1}, v_{2}, … , v_{r}} in 𝔽^{n} is linearly dependent if r > n.
We build a matrix from the vectors in S: A = [v_{1}, v_{2}, … , v_{r}]. Then A has dimensions n × r, and the equation A x = 0 corresponds to a system of n equations in r unknowns. If n < r, there are more variables than equations, so there must be a free variable. Hence, A x = 0 has a nontrivial solution, and the columns of A are linearly dependent.
Example 8:
The set of vectors
\[
\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} , \quad \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} , \quad\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} , \quad \begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix}
\]
are linearly dependent by Corollary 4, because there are four vectors and each has only three entries.
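Corollary 4 guarantees that a dependence relation exists without exhibiting one. For these particular vectors a relation is easy to find by inspection (this specific relation is our own observation, not stated above): v₁ + v₂ = (4, 4, 4) = 4v₃. A quick Python check:

```python
# Verify an explicit dependence relation for the four vectors of Example 8:
#   v1 + v2 - 4*v3 + 0*v4 = 0 ,  since v1 + v2 = (4, 4, 4) = 4*v3.
v1, v2, v3, v4 = (1, 2, 3), (3, 2, 1), (1, 1, 1), (-2, 2, 1)
coeffs = (1, 1, -4, 0)

result = tuple(sum(c * v[k] for c, v in zip(coeffs, (v1, v2, v3, v4)))
               for k in range(3))
print(result)  # (0, 0, 0)
```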
End of Example 8
■
Corollary 5:
A non-empty subset of a set of linearly independent vectors is still linearly independent.
Suppose, by contradiction, that S is a set of linearly independent vectors and that ∅ ≠ A ⊆ S is a subset of linearly dependent vectors. Then there exists a vector in A that can be written as a linear combination of the others. But then it is also expressed as a linear combination of vectors of S, and so S is a set of linearly dependent vectors, contradicting the hypothesis.
Example 9:
Let us consider an infinite set of monomials S = {1, x, x², x³, …}. We are going to show that S is linearly independent. Since we cannot apply Theorem 1, we are asking whether or not there exists some finite linear combination of 1, x, x², x³, … that adds to 0 (and does not have all coefficients equal to zero).
Let p be the largest power of x in such a linear combination; we want to know whether there exist scalars c₀, c₁, c₂, … , c_{p}, not all zero, such that
\[
c_0 + c_1 x + c_2 x^2 + \cdots + c_p x^p = 0 .
\tag{9.1}
\]
By plugging x = 0 into that equation, we see that c₀ = 0.
Taking the derivative of both sides of Equation (9.1) then reveals that
\[
c_1 + 2\,c_2 x + 3\, c_3 x^2 + \cdots + p\,c_p x^{p-1} = 0 ,
\]
and plugging x = 0 into this equation gives c₁ = 0.
By repeating this
procedure (i.e., taking the derivative and then plugging in x = 0), we
similarly see that c₂ = c₃ = ⋯ = c_{p} = 0, so S is linearly independent.
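The differentiate-and-evaluate procedure of this example can be mimicked on coefficient lists. In the Python sketch below (the function names are our own), the k-th extracted value equals k!·c_k, so these values all vanish exactly when every coefficient c_k is zero:

```python
# Sketch of the differentiate-and-evaluate argument of Example 9, with a
# polynomial represented by its coefficient list [c0, c1, ..., cp].
def derivative(coeffs):
    """Coefficients of p'(x) given the coefficients of p(x)."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def extracted_values(coeffs):
    """Repeatedly read off p(0) (the constant term) and differentiate.
    The k-th value extracted equals k! * c_k."""
    values = []
    while coeffs:
        values.append(coeffs[0])   # p(0) = constant coefficient
        coeffs = derivative(coeffs)
    return values

# For the zero polynomial of degree <= 3, every extracted value is 0,
# which forces c0 = c1 = c2 = c3 = 0.
print(extracted_values([0, 0, 0, 0]))  # [0, 0, 0, 0]
```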
End of Example 9
■
Homogeneous Equations as Vector Combinations
Most likely you have the following question: how is this topic about linearly independent and linearly dependent vectors related to the fundamental problem of linear algebra, namely, how to solve a system of algebraic equations? To answer this question, we consider a homogeneous system of linear equations written in matrix form,
\begin{equation}
{\bf A}\,{\bf x} = {\bf 0} ,
\label{EqInde.1}
\end{equation}
where A = [a_{1}, a_{2}, … , a_{n}] is specified by its column vectors. In terms of these columns, the system reads
\begin{equation}
x_1 {\bf a}_1 + x_2 {\bf a}_2 + \cdots + x_n {\bf a}_n = {\bf 0} .
\label{EqInde.2}
\end{equation}
We immediately spot that a nontrivial solution of Eq.\eqref{EqInde.2} provides a linear dependence of the column vectors a_{1}, a_{2}, … , a_{n}. Therefore, the homogeneous linear system \eqref{EqInde.1} has a nontrivial solution if and only if the column vectors of matrix A are linearly dependent.
Example 10:
Let \( \displaystyle {\bf v}_1 = \left[ \begin{array}{c} 2 \\ 6 \\ 2 \end{array} \right] , \quad {\bf v}_2 = \left[ \begin{array}{c} 5 \\ 8 \\ 4 \end{array} \right] , \quad {\bf v}_3 = \left[ \begin{array}{c} 1 \\ 3 \\ 1 \end{array} \right] . \) In order to determine whether these three vectors are linearly dependent or independent, we show that the corresponding system
\[
x_1 {\bf v}_1 + x_2 {\bf v}_2 + x_3 {\bf v}_3 = 0
\tag{10.1}
\]
has a nontrivial solution in x = (x₁, x₂, x₃). So we build the associated augmented matrix
\[
\begin{bmatrix} 2 & 5 & 1 & 0 \\ 6 & 8 & 3 & 0 \\ 2 & 4 & 1 & 0 \end{bmatrix} \,\sim\, \begin{bmatrix} 2 & \phantom{-}5 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix}\,\sim\, \begin{bmatrix} 2 & \phantom{-}5 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & \phantom{-}0 & 0 & 0 \end{bmatrix} .
\]
Clearly, x₁ and x₂ are basic variables, and x₃ is free. Each nonzero value of x₃ determines a nontrivial solution of Eq.(10.1). Hence, v₁, v₂, and v₃ are linearly dependent.
To find a linear dependence relation among v₁, v₂, and v₃, completely row reduce the augmented matrix and write the new system
\[
\begin{bmatrix} 2 & 5 & 1 & 0 \\ 6 & 8 & 3 & 0 \\ 2 & 4 & 1 & 0 \end{bmatrix} \,\sim\, \begin{bmatrix} 2 &\phantom{-}0 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & \phantom{-}0 & 0 & 0 \end{bmatrix} .
\]
This yields
\[
\begin{split}
2\,x_1 + x_3 &= 0 , \\
-7\, x_2 &= 0 , \\
0 &= 0 .
\end{split}
\]
Thus, 2x₁ + x₃ = 0 and x₂ = 0. Choosing x₁ = 1 gives x₃ = −2, which yields the linear dependence relation:
\[
{\bf v}_1 + 0\,{\bf v}_2 - 2\,{\bf v}_3 = {\bf 0} .
\]
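As a sanity check, this relation v₁ + 0·v₂ − 2·v₃ = 0 can be verified componentwise in a couple of lines of Python (our own check):

```python
# Verify the dependence relation  v1 + 0*v2 - 2*v3 = 0  found in Example 10.
v1 = (2, 6, 2)
v2 = (5, 8, 4)
v3 = (1, 3, 1)

relation = tuple(a + 0 * b - 2 * c for a, b, c in zip(v1, v2, v3))
print(relation)  # (0, 0, 0)
```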
End of Example 10
■
Example 11:
Let
\[
{\bf v} = \begin{bmatrix} 1\\ 2 \\ 3 \end{bmatrix} , \quad {\bf u} = \begin{bmatrix} \phantom{-}1 \\ \phantom{-}3 \\ -4 \end{bmatrix} , \quad {\bf w} = \begin{bmatrix} \phantom{-}1 \\ \phantom{-}2 \\ -3 \end{bmatrix} , \quad {\bf z} = \begin{bmatrix} -2 \\ \phantom{-}7 \\ -5 \end{bmatrix} .
\]
We consider the following sets S₁ = {v, u, w}, S₂ = {v, u, z}, S₃ = {u, w, z}, S₄ = {v, u, w, z}.
In order to determine whether set S₁ is linearly dependent or independent, we consider the system of homogeneous equations
\[
x_1 {\bf v} + x_2 {\bf u} + x_3 {\bf w} = {\bf 0} ,
\]
which we rewrite in coordinate form:
\[
\begin{split}
x_1 + x_2 + x_3 &= 0 \qquad \Longrightarrow \qquad x_1 + x_3 = - x_2 , \\
2\, x_1 + 3\,x_2 + 2\,x_3 &= 0 \qquad \Longrightarrow \qquad x_2 = 0 , \\
3\, x_1 -4\, x_2 - 3\, x_3 &= 0 .
\end{split}
\]
Substituting x₂ = 0 into the first and third equations gives x₁ + x₃ = 0 and 3x₁ − 3x₃ = 0, whence x₁ = x₃ = 0. Since this system has only the trivial solution x₁ = x₂ = x₃ = 0, set S₁ is linearly independent.
For S₂, we have to solve the analogous system of homogeneous equations,
\[
\begin{split}
x_1 + x_2 - 2\,x_3 &= 0 , \\
2\,x_1 + 3\,x_2 + 7\,x_3 &= 0 , \\
3\,x_1 - 4\,x_2 - 5\,x_3 &= 0 ,
\end{split}
\]
which again has only the trivial solution, so S₂ is linearly independent. A similar computation shows that S₃ is linearly dependent (indeed, −11u + 13w + z = 0), while S₄ is linearly dependent by Corollary 4 because it contains four vectors in ℝ³.
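Since each of the sets S₁, S₂, S₃ consists of three vectors in ℝ³, independence can also be tested with a 3 × 3 determinant: the set is independent exactly when the matrix with those vectors as columns has nonzero determinant. The following Python sketch (the `det3` helper is our own) applies this test; S₄, having four vectors in ℝ³, is dependent by Corollary 4:

```python
# Determinant test for the three-vector sets of Example 11.
v = (1, 2, 3)
u = (1, 3, -4)
w = (1, 2, -3)
z = (-2, 7, -5)

def det3(a, b, c):
    """Determinant of the 3x3 matrix with columns a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

print(det3(v, u, w) != 0)  # True:  S1 is linearly independent
print(det3(v, u, z) != 0)  # True:  S2 is linearly independent
print(det3(u, w, z) != 0)  # False: S3 is linearly dependent
# S4 = {v, u, w, z} has four vectors in R^3, hence dependent by Corollary 4.
```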
End of Example 11
■
Linear Independence of Matrix Columns
Suppose that we are given a matrix equation A x = 0, where matrix A is specified by its column vectors: A = [a_{1}, a_{2}, … , a_{n}]. This matrix equation can be written as
\[
x_1 {\bf a}_1 + x_2 {\bf a}_2 + \cdots + x_n {\bf a}_n = {\bf 0} .
\]
Theorem 3:
A nonempty set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) of r nonzero vectors
in a vector space V is linearly independent if and only if the matrix of the column vectors from S has rank r.
Example 12:
Consider a set S of four vectors in ℝ^{4}. To determine whether S is linearly independent, we
must show that the only linear combination of vectors in S that equals
the zero vector is the one in which all the coefficients are zero. Suppose that
a_{1}, a_{2}, a_{3}, and
a_{4} are scalars such that
\[
a_1 {\bf v}_1 + a_2 {\bf v}_2 + a_3 {\bf v}_3 + a_4 {\bf v}_4 = {\bf 0} .
\]
Equating the corresponding coordinates of the vectors on the left and the
right sides of this system of equations, we obtain the following system of
linear equations
Since its rank is 4, the only solution to the above system is
a_{1}=a_{2}=a_{3}=a_{4}=0, and so S is linearly independent.
End of Example 12
■
Spans of Vectors
The most important application of linear combination is presented in the
following definition.
For a given vector space V over field 𝔽 (which is either field of rational numbers ℚ or
real numbers ℝ or complex
numbers ℂ), the span of a set S of vectors is the set of all finite linear
combinations of elements of S:
\[
\mbox{span}( S ) = \left\{ \left. \sum_{k=1}^n c_k {\bf v}_k \
\right\vert \ {\bf v}_k \in S \right\}
\]
for any positive
integer n ∈ ℕ and for any scalars c_{k} ∈ 𝔽. We also say that the set S generates the vector space span(S).
If S is an infinite set, linear combinations used to form the span of
S are assumed to be only finite.
The definition above of span can be reformulated as the intersection of all
subspaces of V that contain S.
If V is a vector space and v ∈ V, then the subspace generated by v is the set of all multiples of v, i.e., span(v) = {kv : k ∈ 𝔽}. Moreover, the subspace (= span) generated by the zero vector is the trivial subspace, which contains only the zero vector: span(0) = {0}.
Let V be a vector space and let S = {v_{1}, v_{2}, … , v_{n}} be a set of vectors of
V. We say that S generates V, or S is a set of generators of V, if span(S) = V.
It may happen that a set of generators for a subspace is redundant in the sense that a vector can be removed from this set. This situation occurs only when the vector to be removed is a linear combination of other vectors. In other words, a set of generators is redundant if and only if this set of vectors is linearly dependent.
Example 13:
Linear independence has the following geometric interpretations in ℝ² and ℝ³:
Two vectors in ℝ² or ℝ³ are linearly independent if and only if (iff) they do not lie on the same line when they have the initial points at the origin. Otherwise, one would be a scalar multiple of the other.
Figure 4.3.3, Anton, page 207
Three vectors in ℝ³ are linearly independent if and only if they do not lie in the same plane when they have their initial points at the origin. Otherwise, one of them would be a linear combination of the other two.
Figure 4.3.4, Anton, page 207
Theorem 4:
Every spanning set S of a vector space V must contain at least
as many elements as any linearly independent set of vectors from V.
In ℝ^{n}, for example, the span of a single vector is the line through the origin
in the direction of that vector, and the span of two non-parallel vectors is the
plane through the origin containing those vectors. When we work in other vector spaces, we lose much of this geometric interpretation, but algebraically spans still work much like they do in ℝ^{n}. For
example, span(1, x, x²) = ℝ_{≤2}[x] (the vector space of real-valued polynomials with degree at
most 2) since every polynomial p ∈ ℝ_{≤2}[x] can be written in the form p(x) =
c₀ + c₁ x + c₂ x² for some c₀, c₁, c₂ ∈ ℝ. Indeed, this is exactly what it means for
a polynomial to have degree at most 2. More generally, span(1, x, … , x^{m}) = ℝ_{≤m}[x].
Theorem 5:
The span of any subset S of a vector space V is a subspace of
V. Moreover, any subspace of V that contains S must also
contain the span of S.
This result is immediate if S = ∅ because span( ∅ ) = { 0 }, which is a subspace that is contained in any subspace of V.
If S ≠ ∅, then S contains an element z, so 0z = 0 is an element of span( S ). Let x, y ∈ span( S ). Then there exist elements u_{1}, u_{2}, ... , u_{m}, v_{1}, v_{2}, ... , v_{n} in S and scalars a_{1}, a_{2}, ... , a_{m}, b_{1}, b_{2}, ... , b_{n} such that
\[
{\bf x} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_m {\bf u}_m , \qquad {\bf y} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_n {\bf v}_n .
\]
Then, for any scalar c,
\[
{\bf x} + {\bf y} = a_1 {\bf u}_1 + \cdots + a_m {\bf u}_m + b_1 {\bf v}_1 + \cdots + b_n {\bf v}_n \qquad \mbox{and} \qquad c\,{\bf x} = c\,a_1 {\bf u}_1 + \cdots + c\,a_m {\bf u}_m
\]
are clearly linear combinations of the elements of S; so x + y and cx are elements of span( S ). Thus span( S ) is a subspace of V.
Now let W denote any subspace of V that contains S. If
w ∈ span( S ), then w has the form
w = c_{1}w_{1} + c_{2}w_{2} + ... + c_{k}w_{k} for some elements w_{1}, w_{2}, ... , w_{k} in S and some scalars c_{1}, c_{2}, ... , c_{k}. Since S⊆W, we have w_{1}, w_{2}, ... , w_{k} ∈ W. Therefore,
w = c_{1}w_{1} + c_{2}w_{2} + ... + c_{k}w_{k} is an element
of W. Since w, an arbitrary element of span( S ), belongs
to W, it follows that span( S ) ⊆ W, completing
the proof.
■
Exercises
Are the following 2×2 matrices \(
\begin{bmatrix} -3&2 \\ \phantom{-}1& 2 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}6&-4 \\ -2&-4
\end{bmatrix} \)
linearly dependent or independent?
In each part, determine whether the vectors are linearly independent or
linearly dependent in ℝ³.
(2 ,-3, 1), (-1, 4, 5), (3, 2, -1);
(1, -2, 0), (-2, 3, 2), (4, 3, 2);
(7, 6, 5), (4, 3, 2), (1, 1, 1), (1, 2, 3);
In each part, determine whether the vectors are linearly independent or
linearly dependent in ℝ^{4}.
(8, −9, 6, 5), (1,−3, 7, 1), (1, 2, 0, −3);
(2, 0, 2, 8), (2, 1, 0, 6), (1, −2, 5, 8);
In each part, determine whether the vectors are linearly independent or
linearly dependent in the space ℝ_{≤3}[x] of all polynomials of degree up to 3.
{3, x + 4, x³ −5x, 6}.
{0, 2x, 3 −x, x³}.
{x³ −2x, x³ + 2x, x + 1, x −3}.
In each part, determine whether the 2×2 matrices are linearly
independent or linearly dependent.