This section analyzes sets of vectors whose elements are independent in the sense that no one of them can be expressed as a linear combination of the others. This is important to know because the existence of such a dependence relation often signals that a problem under consideration can be reduced to a similar problem formulated for a smaller set of vectors.

Linear Independence

Before introducing the property of independence, we recall, for convenience, the following definition.
Let S = { v1, v2, ... , vn } be a set of n vectors in a vector space V over the field of scalars 𝔽 (which is either the field of rational numbers, the real numbers, or the complex numbers). If a1, a2, ... , an are scalars from that field, then the linear combination of those vectors with those scalars as coefficients is
\[ a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_n {\bf v}_n . \]
We start with the following motivating example.
Example 1: Every vector in the plane with Cartesian coordinates can be expressed in exactly one way as a linear combination of standard unit vectors. For example, the only way to express the vector (3, 4) as a linear combination of i = (1, 0) and j = (0, 1) is
\[ (3, 4) = 3\,{\bf i} + 4\,{\bf j} = 3 \left( 1 , 0 \right) + 4 \left( 0, 1 \right) . \tag{1.1} \]
In Mathematica, we represent a vector by a list.
v1 = {3, 4}
VectorQ[v1]
{3, 4}
True
The numbers in that list are the coordinates of a point at the end of an arrow (red) which originates at the origin of a Cartesian plot. The basis vectors in the plot below (green) represent, respectively, i and j in Equation (1.1); arranged as rows of a matrix, they form the two-dimensional identity matrix.
basisM = IdentityMatrix[2];
% // MatrixForm
\( \displaystyle \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \)
The "Dot Product" (to be covered later) of the Identity Matrix and our vector returns our vector.
v1.basisM
{3, 4}

Vector (3, 4).
eg1 = Graphics[{Green, Thick, Arrowheads[0.08], Arrow[{{0, 0}, basisM[[1]]}], Arrow[{{0, 0}, basisM[[2]]}],
    Thickness[.02], Red, Arrow[{{0, 0}, v1}], Red, PointSize[Medium], Point[v1]},
   Axes -> True, PlotRange -> {{0, 4}, {0, 5}}, AxesLabel -> {"x", "y"},
   GridLines -> {{v1[[1]]}, {v1[[2]]}},
   Epilog -> {Red, Text[Style[ToString[v1], FontSize -> 12, Bold], {3.25, 4.15}]}];
Labeled[eg1, "Vector {3,4}"]

Our vector above, v1, may be thought of as one member of a "family" of vectors (perhaps of differing lengths) all pointing in the same direction (see the plot below). The members of the family are linear combinations that are not independent of one another. Note that the white arrow, v2, is one half of the red arrow: it is our original vector multiplied by 0.5 and is collinear with it.

Equivalent vectors.
v2 = .5 v1
vp1 = VectorPlot[v1, {x, 0, 4}, {y, 0, 5}, VectorPoints -> 6, VectorMarkers -> "Dart", VectorSizes -> 1.5];
colinear1 = Graphics[{Thickness[.008], White, Arrow[{{0, 0}, v2}], PointSize[Medium], Point[{1.5, 2}]}];
Labeled[Show[eg1, vp1, colinear1, GridLines -> {{v1[[1]], 1.5}, {v1[[2]], 2}},
   Epilog -> {Text[Style[ToString[v2], FontSize -> 12, Bold], {1.25, 2.17}], Red,
     Text[Style[ToString[v1], FontSize -> 12, Bold], {3.20, 4.14}]}], "Equivalent Vectors"]

Let us introduce a third vector v that makes an angle of 60° with the abscissa. As illustrated in the following figure, the unit vector in that direction is:
v = {1/2, Sqrt[3]/2}
% // N
\( \displaystyle \left\{ \frac{1}{2}, \,\frac{\sqrt{3}}{2} \right\} \)
{0.5, 0.866025}
\[ {\bf v} = \frac{1}{2}\,{\bf i} + \frac{\sqrt{3}}{2}\,{\bf j} = \left( \frac{1}{2} , \frac{\sqrt{3}}{2} \right) \approx (0.5, 0.866025) . \tag{1.2} \]

Vector v = 0.5 (1, √3)
v3 = {1/2, Sqrt[3]/2};
basisM = IdentityMatrix[2];
eg2 = Graphics[{Hue[5/6, 1, 1/2], Thickness[0.015], Arrowheads[0.08], Arrow[{{0, 0}, Part[basisM, 1]}], Arrow[{{0, 0}, Part[basisM, 2]}],
    Thickness[0.015], Orange, Arrow[{{0, 0}, v3}], Orange, PointSize[Medium], Point[v3]},
   Axes -> True, PlotRange -> {{0, 1.5}, {0, 1.5}}, AxesLabel -> {"x", "y"},
   GridLines -> {{Part[v3, 1]}, {Part[v3, 2]}},
   Epilog -> {Hue[5/6, 1, 1/2], Text[Style["i={1,0}", FontSize -> 12, Bold], {1.05, 0.05}],
     Hue[5/6, 1, 1/2], Text[Style["j={0,1}", FontSize -> 12, Bold], {0.1, 1.05}],
     Orange, Text[Style["v={1/2, Sqrt[3]/2}", FontSize -> 12, Bold], {0.65, 1}]}];
Labeled[eg2, "Vector {1/2, Sqrt[3]/2}"]

Whereas expansion (1.1) shows the only way to express the vector (3, 4) as a linear combination of i = (1, 0) and j = (0, 1), there are infinitely many ways to express this vector (3, 4) as a linear combination of three vectors: i, j, and v. Three possibilities are shown below
\begin{align*} (3, 4) &= 3 \left( 1, 0 \right) + 4 \left( 0, 1 \right) + 0 \, {\bf v} , \\ (3, 4) &= 2\,{\bf i} + \left( 4 - \sqrt{3}\right) {\bf j} + 2\,{\bf v} , \\ (3, 4) &= \left( 3 - 2\sqrt{3} \right) {\bf i} + 4\sqrt{3} \,{\bf v} - 2 \, {\bf j} . \end{align*}
In short, by introducing a new axis v we created the complication of having multiple ways of assigning coordinates to the points in the plane. What makes the vector v superfluous is the fact that it can be expressed as a linear combination of the unit vectors i and j, that is \[ {\bf v} = \frac{1}{2}\,{\bf i} + \frac{\sqrt{3}}{2}\,{\bf j} . \] However, if we remove one vector from the set of our three vectors, considering only two of them, say T = {v, j}, we will arrive at the unique decomposition: \[ (3, 4) = 6\,{\bf v} + \left( 4 - 3\sqrt{3} \right) {\bf j} . \]
End of Example 1
Let S be a subset of a vector space V.
  1. S is a linearly independent subset of V if and only if no vector in S can be expressed as a linear combination of the other vectors in S.
  2. S is a linearly dependent subset of V if and only if some vector v in S can be expressed as a linear combination of the other vectors in S.

Theorem 1: A nonempty set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) of nonzero vectors in a vector space V is linearly independent if and only if the only coefficients satisfying the vector equation
\[ k_1 {\bf v}_1 + k_2 {\bf v}_2 + \cdots + k_r {\bf v}_r = {\bf 0} \]
are \( k_1 =0, \ k_2 =0, \ \ldots , \ k_r =0 . \)
We prove this theorem for the case when r ≥ 2. Suppose first that the equation
\[ a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_r {\bf v}_r = {\bf 0} \]
can be satisfied with coefficients that are not all zero. Then at least one of the vectors in S must be expressible as a linear combination of the others. To be specific, suppose a1 ≠ 0 (the argument is the same for any other index). Then we can write
\[ {\bf v}_1 = - \frac{a_2}{a_1}\, {\bf v}_2 - \cdots - \frac{a_r}{a_1}\, {\bf v}_r , \]
which expresses v1 as a linear combination of the other vectors in S, so S is linearly dependent. Moreover, any vector v = k1v1 + k2v2 + ⋯ + krvr in the span of S then admits at least two representations as a linear combination:
\[ {\bf v} = k_1 {\bf v}_1 + k_2 {\bf v}_2 + \cdots + k_r {\bf v}_r = k_1 \left( - \frac{a_2}{a_1}\, {\bf v}_2 - \cdots - \frac{a_r}{a_1}\, {\bf v}_r \right) + k_2 {\bf v}_2 + \cdots + k_r {\bf v}_r . \]

Conversely, suppose that some vector has two distinct representations as a linear combination of the vectors in S:

\[ {\bf v} = k_1 {\bf v}_1 + k_2 {\bf v}_2 + \cdots + k_r {\bf v}_r = p_1 {\bf v}_1 + p_2 {\bf v}_2 + \cdots + p_r {\bf v}_r . \]
Subtracting one representation from the other, we get
\[ {\bf v} - {\bf v} = {\bf 0} = \left( k_1 - p_1 \right) {\bf v}_1 + \left( k_2 - p_2 \right) {\bf v}_2 + \cdots + \left( k_r - p_r \right) {\bf v}_r , \]
with at least one difference ki - pi ≠ 0. Hence the scalars ai = ki - pi are not all zero and satisfy
\[ a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_r {\bf v}_r = {\bf 0} . \]
In particular, if some vector in S is a linear combination of the others, then that vector has two different representations, and the argument above produces a nontrivial solution of the vector equation.
Example 2: Consider the set
\[ S = \left\{ (1,3,-4,2),\ (2,2,-3,2), \ (1,-3,2,-4),\ (-1,2,-2,1) \right\} \]
in ℝ4. To determine whether S is linearly dependent, we must attempt to find scalars a1, a2, a3, and a4, not all zero, such that
\[ a_1 (1,3,-4,2) + a_2 (2,2,-3,2) + a_3 (1,-3,2,-4) + a_4 (-1,2,-2,1) = (0,0,0,0) . \]
Finding such scalars amounts to finding a nonzero solution to the system of linear equations
\begin{align*} a_1 + 2\,a_2 + a_3 - a_4 &= 0 , \\ 3\,a_1 + 2\,a_2 -3\,a_3 + 2\, a_4 &= 0, \\ -4\,a_1 -3\, a_2 +2\,a_3 -2\, a_4 &= 0, \\ 2\,a_1 + 2\, a_2 -4\, a_3 + a_4 &= 0 . \end{align*}
One such solution is a1 = 7, a2 = -6, a3 = -1, and a4 = -6. Thus, S is a linearly dependent subset of ℝ4.
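This dependence can also be found with Mathematica (a quick sketch; the matrix name is ours): NullSpace applied to the matrix whose columns are the vectors of S returns a basis for the space of coefficient vectors (a1, a2, a3, a4).
A = Transpose[{{1, 3, -4, 2}, {2, 2, -3, 2}, {1, -3, 2, -4}, {-1, 2, -2, 1}}];
NullSpace[A]   (* a single basis vector, proportional to {7, -6, -1, -6} *)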
End of Example 2
Theorem 2: An indexed set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) of two or more vectors (r ≥ 2) in a vector space V is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others.
If some vector vj in S equals a linear combination of the other vectors, then vj can be subtracted from both sides of that equation, producing a linear dependence relation with a nonzero weight (−1) on vj. Thus, S is a linearly dependent set.

Conversely, suppose S is linearly dependent. If v₁ is zero, then it is a trivial linear combination of the other vectors in S. Otherwise, v₁ ≠ 0, and there exist weights c1, … , cr, not all zero, such that \[ c_1 {\bf v}_1 + c_2 {\bf v}_2 + \cdots + c_r {\bf v}_r = {\bf 0} . \] Let j be the largest index for which cj ≠ 0. If j = 1, then c₁v₁ = 0, which is impossible because neither c₁ nor v₁ is zero. So j > 1, and we may solve for vj: \[ {\bf v}_j = - \frac{c_1}{c_j}\, {\bf v}_1 - \cdots - \frac{c_{j-1}}{c_j}\, {\bf v}_{j-1} , \] which expresses vj as a linear combination of the other vectors in S.

Example 3: Consider the set
\[ S = \left\{ (1,3,-4),\quad (1,2,-3), \quad (1,-3,2) \right\} \]
in ℝ³. To determine whether S is linearly dependent, we must show that one of the vectors is a linear combination of other vectors: \[ \left( \begin{array}{c} 1 \\ 3 \\ -4 \end{array} \right) = a \left( \begin{array}{c} 1 \\ 2 \\ -3 \end{array} \right) + b \left( \begin{array}{c} 1 \\ -3 \\ 2 \end{array} \right) , \] for some scalars 𝑎 and b. Hence, we need to solve the system of algebraic equations \[ \begin{split} a + b &= 1 , \\ 2a -3 b &= 3 , \\ -3a + 2b &= -4. \end{split} \] Expressing 𝑎 = 1 - b from the first equation and substituting its value into other equations, we obtain \[ \begin{split} 2\left( 1 - b \right) -3 b &= 3 , \qquad \Longrightarrow \qquad b = -1/5 \\ -3\left( 1 - b \right) + 2b &= -4 \qquad \Longrightarrow \qquad b = -1/5 . \end{split} \]
Solve[{a + b == 1, 2*a - 3*b == 3, -3*a + 2*b == -4}, {a, b}]
{{a -> 6/5, b -> -(1/5)}}
Therefore, \[ \left( \begin{array}{c} 1 \\ 3 \\ -4 \end{array} \right) = \frac{6}{5} \left( \begin{array}{c} 1 \\ 2 \\ -3 \end{array} \right) - \frac{1}{5} \left( \begin{array}{c} 1 \\ -3 \\ 2 \end{array} \right) . \]
End of Example 3

In other words, a set of vectors is linearly independent if the only representation of 0 as a linear combination of its vectors is the trivial representation, in which all the scalars ai are zero. The alternative characterization, that a set of vectors is linearly dependent if and only if some vector in that set can be written as a linear combination of the other vectors, is useful only when the set contains two or more vectors. Two vectors are linearly dependent if and only if one of them is a scalar multiple of the other.

Example 4: The best-known set of linearly independent vectors in ℝn is the set of standard unit vectors
\[ {\bf e}_1 = (1,0,0,\ldots , 0), \quad {\bf e}_2 = (0, 1,0,\ldots , 0), \quad \ldots , \quad {\bf e}_n = (0,0,0,\ldots , 1) . \]
To illustrate in ℝ³, consider the standard unit vectors that are usually labeled as
\[ {\bf i} = (1,0,0), \quad {\bf j} = (0,1,0) , \quad {\bf k} = (0,0,1) . \]
To prove their linear independence, we must show that the only coefficients satisfying the vector equation
\[ a_1 {\bf i} + a_2 {\bf j} + a_3 {\bf k} = {\bf 0} \]
are a1 = 0, a2 = 0, a3 = 0. But this becomes evident by writing this equation in its component form
\[ \left( a_1 , a_2 , a_3 \right) = (0,0,0). \]
End of Example 4
Corollary 1: Let V be a vector space over a field 𝔽. A subset S = {v1 , v2, … , vn} of nonzero vectors of V is linearly dependent if and only if \( \displaystyle {\bf v}_i = \sum_{j\ne i} c_j {\bf v}_j , \) for some i, where c₁, c₂, … , cn are some scalars.
If S is linearly dependent, then there exist scalars ki ∈ 𝔽, not all zero, such that \( \displaystyle \sum_i k_i {\bf v}_i = {\bf 0} . \) Suppose ki ≠ 0 for some index i. Then this relation can be rewritten as \( \displaystyle {\bf v}_i = - k_i^{-1} \sum_{j\ne i} k_j {\bf v}_j , \) so vi is expressed as a linear combination of the other elements of S.

Conversely, if for some i, vi can be expressed as a linear combination of the other elements of S, i.e., \( \displaystyle {\bf v}_i = \sum_{j\ne i} \alpha_j {\bf v}_j , \) where αj ∈ 𝔽, then \[ \alpha_1 {\bf v}_1 + \cdots + \alpha_{i-1} {\bf v}_{i-1} + (-1)\,{\bf v}_i + \alpha_{i+1} {\bf v}_{i+1} + \cdots + \alpha_n {\bf v}_n = {\bf 0} . \] This shows that there exist scalars α1, α2, … , αn, not all zero (indeed αi = −1), such that \( \displaystyle \sum_i \alpha_i {\bf v}_i = {\bf 0} , \) and hence S is linearly dependent.

Example 5: Determine whether or not the following set of matrices is linearly independent in ℝ2,2: \[ S = \left\{ \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} , \quad \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} , \quad \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} \right\} . \] Solution: Since this set is finite, we want to check whether the equation \[ c_1 \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} + c_2 \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} + c_3 \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \] has a unique solution (which would necessarily be c₁ = c₂ = c₃ = 0, corresponding to linear independence) or infinitely many solutions (corresponding to linear dependence). We can solve for c₁, c₂, and c₃ by comparing entries of the matrices on the left- and right-hand sides above to get the linear system \[ \begin{split} 2\, c_1 + 3\, c_2 - 4\, c_3 &= 0 , \\ c_1 - 2\, c_2 + 5\, c_3 &= 0 , \\ c_1 - 2\, c_2 + 5\, c_3 &= 0 , \\ 2\, c_1 + 3\, c_2 - 4\, c_3 &= 0 . \end{split} \] Solving this linear system via our usual methods or asking Mathematica reveals that
Solve[{2*c1 + 3*c2 - 4*c3 == 0, c1 - 2*c2 + 5*c3 == 0}, {c1, c2}]
{{c1 -> -c3, c2 -> 2 c3}}
c₃ is a free variable (so there are infinitely many solutions) and c₁ = −c₃, c₂ = 2c₃. It follows that S is linearly dependent, and in particular, choosing c₃ = 3 gives c₁ = −3 and c₂ = 6, so \[ -3 \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} + 6 \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} + 3 \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} . \]
End of Example 5
Corollary 2: A finite set of vectors that contains the zero vector is linearly dependent.
For any vectors v1, v2, … , vr, the set S = {v1, v2, … , vr, 0} is linearly dependent because the equation \[ 0{\bf v}_1 + 0 {\bf v}_2 + \cdots + 0{\bf v}_r + 1 \cdot {\bf 0} = {\bf 0} \] expresses 0 as a linear combination of the vectors in S with coefficients that are not all zero.
Example 6: Let us consider the set of Pauli matrices. This set consists of three 2 × 2 complex matrices that are self-adjoint (Hermitian), involutory, and unitary. Usually they are denoted by the Greek letter sigma (σ), \[ \sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \qquad \sigma_2 = \begin{bmatrix} 0 & -{\bf j} \\ {\bf j} & \phantom{-}0 \end{bmatrix} , \qquad \sigma_3 = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} . \] These matrices are named after the Austrian theoretical physicist Wolfgang Pauli (1900--1958), Nobel Prize winner in Physics.

We want to know whether or not there exist complex numbers c₁, c₂, and c₃, such that the identity matrix I = σ0 is a linear combination of the Pauli matrices: \[ {\bf I} = \sigma_0 = c_1 \sigma_1 + c_2 \sigma_2 + c_3 \sigma_3 . \] Writing this equation more explicitly gives \[ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = c_1 \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 & -{\bf j} \\ {\bf j} & \phantom{-}0 \end{bmatrix} + c_3 \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} . \] This is equivalent to four linear equations: \begin{align*} 1 & = c_3 , \\ 0 &= c_1 - {\bf j}\,c_2 , \\ 0 &= c_1 + {\bf j}\,c_2 , \\ 1 &= - c_3 . \end{align*} Since c₃ cannot equal 1 and −1 simultaneously, this system of equations has no solution, so the identity matrix is not a linear combination of the Pauli matrices. A similar computation shows that none of the matrices σ0, σ1, σ2, σ3 is a linear combination of the remaining three; hence this set of Pauli matrices together with the identity matrix is linearly independent.
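A quick Mathematica sketch (the names s0, …, s3 are ours) confirms this conclusion: flatten each matrix into a row of its entries and compute the rank of the resulting 4 × 4 array; rank 4 means the four matrices are linearly independent.
s0 = IdentityMatrix[2]; s1 = {{0, 1}, {1, 0}};
s2 = {{0, -I}, {I, 0}}; s3 = {{1, 0}, {0, -1}};
MatrixRank[Flatten /@ {s0, s1, s2, s3}]
4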

End of Example 6
The empty subset ∅ of V is linearly independent, as the condition of linear independence holds vacuously for ∅.

A set containing a single vector {v}, where v ∈ V, is linearly independent if and only if v ≠ 0.

Corollary 3: Two nonzero vectors in a vector space are linearly dependent if and only if one is a scalar multiple of the other.
Example 7: Determine if the following sets of vectors are linearly independent. \[ {\bf a}. \quad {\bf v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} , \qquad {\bf u} = \begin{bmatrix} 4 \\ 6 \end{bmatrix} ; \qquad {\bf b}. \quad {\bf v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} , \qquad {\bf u} = \begin{bmatrix} 3 \\ 2 \end{bmatrix} . \] Solution: a. Notice that u is a multiple of v, namely, u = 2v. Hence, 2vu = 0, which shows that the set of two vectors {v, u} is linearly dependent.

Figure: A vector on the plane (Lay, Figure 3, page 59).

b. The vectors v and u are certainly not multiples of one another. Could they still be linearly dependent? Suppose that there are scalars c₁ and c₂ satisfying \[ c_1 {\bf v} + c_2 {\bf u} = {\bf 0} . \] If c₁ ≠ 0, then we can solve for v in terms of u; that is, v = (−c₂/c₁)u. This is impossible because v is not a multiple of u. So c₁ must be zero. Similarly, c₂ must also be zero. Thus, {v, u} is a linearly independent set.
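For part b., a one-line Mathematica check (our own sketch) computes the rank of the matrix whose columns are v and u; rank 2 confirms linear independence.
MatrixRank[Transpose[{{2, 3}, {3, 2}}]]
2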

Figure: A vector on the plane (Lay, Figure 3, page 59).

End of Example 7
Corollary 4: If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1, v2, … , vr} in 𝔽n is linearly dependent if r > n.
We build a matrix whose columns are the vectors in S: A = [v1 v2 … vr]. Then A has dimensions n × r, and the equation A x = 0 corresponds to a system of n equations in r unknowns. If n < r, there are more variables than equations, so there must be a free variable. Hence, A x = 0 has a nontrivial solution, and the columns of A are linearly dependent.
Example 8: The set of vectors \[ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} , \quad \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} , \quad\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} , \quad \begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix} \] is linearly dependent by Corollary 4, because it consists of four vectors in ℝ³.
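A short Mathematica sketch (the name vecs is ours) exhibits a dependence relation explicitly:
vecs = {{1, 2, 3}, {3, 2, 1}, {1, 1, 1}, {-2, 2, 1}};
NullSpace[Transpose[vecs]]   (* a single basis vector, proportional to {1, 1, -4, 0} *)
Indeed, (1, 2, 3) + (3, 2, 1) − 4 (1, 1, 1) = (0, 0, 0).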
End of Example 8
Corollary 5: A non-empty subset of a set of linearly independent vectors consists of vectors that are still linearly independent.
Suppose, by contradiction, that S is a set of linearly independent vectors and that ∅ ≠ AS is a subset of linearly dependent vectors. Then there exists a vector in A which is written as a linear combination of the others. But then it is also expressed as a linear combination of vectors of S and then S is a set of linearly dependent vectors, contradicting the hypothesis.
Example 9: Let us consider an infinite set of monomials S = {1, x, x², x³, …}. We are going to show that S is linearly independent. Since Theorem 1 applies only to finite sets, we cannot use it directly; instead, we ask whether or not there exists some finite linear combination of 1, x, x², x³, … that adds to 0 (and does not have all coefficients equal to zero).

Let p be the largest power of x in such a linear combination—we want to know if there exists (not all zero) scalars c₀, c₁, c₂, … , cp such that \[ c_0 + c_1 x + c_2 x^2 + \cdots + c_p x^p = 0 . \tag{9.1} \] By plugging x = 0 into that equation, we see that c₀ = 0.

Taking the derivative of both sides of Equation (9.1) then reveals that \[ c_1 + 2\,c_2 x + 3\, c_3 x^2 + \cdots + p\,c_p x^{p-1} = 0 , \] and plugging x = 0 into this equation gives c₁ = 0. By repeating this procedure (i.e., taking the derivative and then plugging in x = 0), we similarly see that c₂ = c₃ = ⋯ = cp = 0, so S is linearly independent.
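For a fixed degree, the same conclusion can be checked in Mathematica with SolveAlways, which finds the parameter values that make an equation hold identically in x; here is a sketch for p = 3 (the degree cutoff is our choice).
SolveAlways[c0 + c1*x + c2*x^2 + c3*x^3 == 0, x]
{{c0 -> 0, c1 -> 0, c2 -> 0, c3 -> 0}}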

End of Example 9

 

Homogeneous Equations as Vector Combinations


You may well ask how this topic of linearly independent and linearly dependent vectors is related to the fundamental problem of linear algebra: solving a system of algebraic equations. To answer this question, we consider a homogeneous system of linear equations
\begin{equation} \label{EqInde.1} {\bf A}\,{\bf x} = {\bf 0} , \qquad {\bf A}\in \mathbb{F}^{m,n} , \quad {\bf x} \in \mathbb{F}^{n,1} . \end{equation}
Since the m × n matrix A = [𝑎i,j] can be written as an array of its column vectors
\[ {\bf A} = \left[ {\bf a}_1 \ {\bf a}_2 \ \cdots \ {\bf a}_n \right] , \qquad {\bf a}_j = \left( \begin{array}{c} a_{1,j} \\ a_{2,j} \\ \vdots \\ a_{m,j} \end{array} \right) , \quad j = 1, 2, \ldots , n , \]
we rewrite Eq.\eqref{EqInde.1} as
\begin{equation} \label{EqInde.2} {\bf A}\,{\bf x} = {\bf 0} \qquad \iff \qquad x_1 {\bf a}_1 + x_2 {\bf a}_2 + \cdots + x_n {\bf a}_n = {\bf 0} . \end{equation}
We immediately see that Eq.\eqref{EqInde.2} expresses a linear dependence relation among the column vectors a1, a2, … , an whenever x ≠ 0. Therefore, the homogeneous linear system \eqref{EqInde.1} has a nontrivial solution if and only if the column vectors of matrix A are linearly dependent.
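As a small Mathematica illustration (the matrix is our own example), NullSpace returns a basis for the solution set of A x = 0; a nonempty result exhibits a dependence relation among the columns, while an empty list means the columns are linearly independent.
A = {{1, 2, 1}, {2, 4, 2}, {3, 6, 4}};   (* the second column is twice the first *)
NullSpace[A]
{{-2, 1, 0}}
The nonzero solution (−2, 1, 0) records the relation −2a₁ + a₂ = 0.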
Example 10: Let \( \displaystyle {\bf v}_1 = \left[ \begin{array}{c} 2 \\ 6 \\ 2 \end{array} \right] , \quad {\bf v}_2 = \left[ \begin{array}{c} 5 \\ 8 \\ 4 \end{array} \right] , \quad {\bf v}_3 = \left[ \begin{array}{c} 1 \\ 3 \\ 1 \end{array} \right] . \) In order to determine whether these three vectors are linearly dependent or independent, we check whether the corresponding system \[ x_1 {\bf v}_1 + x_2 {\bf v}_2 + x_3 {\bf v}_3 = 0 \tag{10.1} \] has a nontrivial solution in x = (x₁, x₂, x₃). So we build the associated augmented matrix \[ \begin{bmatrix} 2 & 5 & 1 & 0 \\ 6 & 8 & 3 & 0 \\ 2 & 4 & 1 & 0 \end{bmatrix} \,\sim\, \begin{bmatrix} 2 & \phantom{-}5 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix}\,\sim\, \begin{bmatrix} 2 & \phantom{-}5 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & \phantom{-}0 & 0 & 0 \end{bmatrix} . \] Clearly, x₁ and x₂ are basic variables, and x₃ is free. Each nonzero value of x₃ determines a nontrivial solution of Eq.(10.1). Hence, v₁, v₂, and v₃ are linearly dependent.

To find a linear dependence relation among v₁, v₂, and v₃, completely row reduce the augmented matrix and write the new system \[ \begin{bmatrix} 2 & 5 & 1 & 0 \\ 6 & 8 & 3 & 0 \\ 2 & 4 & 1 & 0 \end{bmatrix} \,\sim\, \begin{bmatrix} 2 &\phantom{-}0 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & \phantom{-}0 & 0 & 0 \end{bmatrix} . \] This yields \[ \begin{split} 2\,x_1 \phantom{2x3} + x_3 &= 0 , \\ \phantom{-12} - 7\, x_2 \phantom{2x3}&= 0 , \\ 0&= 0 . \end{split} \] Thus, 2x₁ + x₃ = 0 and x₂ = 0. This yields the linear dependence equation: \[ {\bf v}_1 + 0\,{\bf v}_2 - 2\,{\bf v}_3 = {\bf 0} . \]
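The row reductions above can be reproduced with RowReduce (a sketch; aug is our name for the augmented matrix of the example).
aug = {{2, 5, 1, 0}, {6, 8, 3, 0}, {2, 4, 1, 0}};
RowReduce[aug]
{{1, 0, 1/2, 0}, {0, 1, 0, 0}, {0, 0, 0, 0}}
The reduced form reads x₁ + x₃/2 = 0 and x₂ = 0, in agreement with the dependence relation found above.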

End of Example 10

Example 11: Let \[ {\bf v} = \begin{bmatrix} 1\\ 2 \\ 3 \end{bmatrix} , \quad {\bf u} = \begin{bmatrix} \phantom{-}1 \\ \phantom{-}3 \\ -4 \end{bmatrix} , \quad {\bf w} = \begin{bmatrix} \phantom{-}1 \\ \phantom{-}2 \\ -3 \end{bmatrix} , \quad {\bf z} = \begin{bmatrix} -2 \\ \phantom{-}7 \\ -5 \end{bmatrix} . \] We consider the following sets S₁ = {v, u, w}, S₂ = {v, u, z}, S₃ = {u, w, z}, S₄ = {v, u, w, z}.

In order to determine whether set S₁ is linearly dependent or independent, we consider the system of homogeneous equations \[ x_1 {\bf v} + x_2 {\bf u} + x_3 {\bf w} = {\bf 0} , \] which we rewrite in coordinate form: \[ \begin{split} x_1 + x_2 + x_3 &= 0 \qquad \Longrightarrow \qquad x_1 + x_3 = - x_2 , \\ 2\, x_1 + 3\,x_2 + 2\,x_3 &= 0 \qquad \Longrightarrow \qquad x_2 = 0 , \\ 3\, x_1 -4\, x_2 - 3\, x_3 &= 0 \qquad \Longrightarrow \qquad x_1 = x_3 . \end{split} \] Together, x₂ = 0, x₁ = x₃, and x₁ + x₃ = −x₂ = 0 force x₁ = x₂ = x₃ = 0. Since this system has only the trivial solution, set S₁ is linearly independent.

For S₂, we have to solve the system of homogeneous equations \[ x_1 {\bf v} + x_2 {\bf u} + x_3 {\bf z} = {\bf 0} , \] that is, \[ \begin{split} x_1 + x_2 - 2\,x_3 &= 0 , \\ 2\,x_1 + 3\,x_2 + 7\,x_3 &= 0 , \\ 3\,x_1 - 4\,x_2 - 5\,x_3 &= 0 . \end{split} \] The determinant of the coefficient matrix equals 78 ≠ 0, so this system has only the trivial solution and S₂ is linearly independent. The remaining sets can be examined in the same way; a Mathematica sketch is given below.
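All four sets can be tested at once (a sketch; the vector names follow the example): a set is linearly independent exactly when the rank of the matrix formed by its vectors equals the number of vectors.
v = {1, 2, 3}; u = {1, 3, -4}; w = {1, 2, -3}; z = {-2, 7, -5};
Map[MatrixRank[#] == Length[#] &, {{v, u, w}, {v, u, z}, {u, w, z}, {v, u, w, z}}]
{True, True, False, False}
So S₁ and S₂ are linearly independent, while S₃ and S₄ are linearly dependent; for S₄ this also follows from Corollary 4, since it contains four vectors in ℝ³.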

End of Example 11

 

Linear Independence of Matrix Columns


Suppose that we are given a matrix equation A x = 0, where the matrix \( {\bf A}\in \mathbb{F}^{m,n} \) is specified by its column vectors, A = [a1, a2, … , an], and \( {\bf x} \in \mathbb{F}^{n,1} \). The matrix equation can then be written as
\[ {\bf A}\,{\bf x} = x_1 {\bf a}_1 + x_2 {\bf a}_2 + \cdots + x_n {\bf a}_n = {\bf 0} . \]
Theorem 3: A nonempty set \( S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_r \} \) of r nonzero vectors in a vector space V is linearly independent if and only if the matrix of the column-vectors from S has rank r.
Example 12: Consider the set
\[ S = \left\{ (1,3,-4,2),\ (2,2,-3,5), \ (1,-3,2,-4),\ (-1,3,1,1) \right\} \]
in ℝ4. To determine whether S is linearly independent, we must show that the only linear combination of vectors in S that equals the zero vector is the one in which all the coefficients are zero. Suppose that a1, a2, a3, and a4 are scalars such that
\[ a_1 (1,3,-4,2) + a_2 (2,2,-3,5) + a_3 (1,-3,2,-4) + a_4 (-1,3,1,1) = (0,0,0,0) . \]
Equating the corresponding coordinates on the two sides of this vector equation, we obtain the following system of linear equations
\begin{align*} a_1 + 2\,a_2 + a_3 - a_4 &= 0 , \\ 3\,a_1 + 2\,a_2 -3\,a_3 + 3\, a_4 &= 0, \\ -4\,a_1 -3\, a_2 +2\,a_3 + a_4 &= 0, \\ 2\,a_1 + 5\, a_2 -4\, a_3 + a_4 &= 0 . \end{align*}
We build a corresponding matrix
\[ {\bf A} = \begin{bmatrix} \phantom{-}1 & \phantom{-}2 & \phantom{-}1 & -1 \\ \phantom{-}3 & \phantom{-}2& -3& \phantom{-}3 \\ -4& -3& \phantom{-}2& \phantom{-}1 \\ \phantom{-}2 &\phantom{-}5 &-4& \phantom{-}1 \end{bmatrix} \]
Now we ask Mathematica for help:
MatrixRank[{{1, 2, 1, -1}, {3, 2, -3, 3}, {-4, -3, 2, 1}, {2, 5, -4, 1}}]
4
Since the rank is 4, the only solution to the above system is a1=a2=a3=a4=0, and so S is linearly independent.
End of Example 12

 

Spans of Vectors


The most important application of linear combination is presented in the following definition.

For a given vector space V over a field 𝔽 (which is either the field of rational numbers ℚ, the real numbers ℝ, or the complex numbers ℂ), the span of a set S of vectors is the set of all finite linear combinations of elements of S:
\[ \mbox{span}( S ) = \left\{ \left. \sum_{k=1}^n c_k {\bf v}_k \ \right\vert \ n \in \mathbb{N}, \ {\bf v}_k \in S , \ c_k \in \mathbb{F} \right\} . \]
We also say that the set S generates the vector space span(S).

If S is an infinite set, linear combinations used to form the span of S are assumed to be only finite.

Equivalently, span(S) can be characterized as the intersection of all subspaces of V that contain S. If V is a vector space and v ∈ V, then the subspace generated by v is the set of all multiples of v, i.e., span(v) = {kv : k ∈ 𝔽}. Moreover, the subspace (= span) generated by the zero vector is the trivial subspace, which contains only the zero vector: span(0) = {0}.

Let V be a vector space and let S = {v1, v2, … , vn} be a set of vectors of V. We say that S generates V or S is a set of generators of V if span(S) = V.
It may happen that a set of generators for a subspace is redundant in the sense that some vector can be removed from the set without changing the span. This situation occurs only when the removed vector is a linear combination of the other vectors. In other words, a set of generators is redundant if and only if this set of vectors is linearly dependent.
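In practice, checking whether a given vector b belongs to span(S) amounts to solving a linear system whose coefficient columns are the vectors of S; here is a minimal Mathematica sketch (the vectors gens and b are our own illustration).
gens = {{1, 0, 1}, {0, 1, 1}};   (* generators, stored as rows *)
b = {2, 3, 5};
Solve[Thread[Transpose[gens].{c1, c2} == b], {c1, c2}]
{{c1 -> 2, c2 -> 3}}
A nonempty solution list means b ∈ span(S); an empty list {} would mean that b lies outside the span.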
Example 13: Linear independence has the following geometric interpretations in ℝ² and ℝ³:
  • Two vectors in ℝ² or ℝ³ are linearly independent if and only if (iff) they do not lie on the same line when their initial points are placed at the origin. Otherwise, one would be a scalar multiple of the other.

    Figure: A vector on the plane (Anton, Figure 4.3.3, page 207).

  • Three vectors in ℝ³ are linearly independent if and only if they do not lie in the same plane when they have their initial points at the origin. Otherwise, at least one of them would be a linear combination of the other two.

    Figure: A vector on the plane (Anton, Figure 4.3.4, page 207).

Theorem 4: Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.

In ℝn, for example, the span of a single vector is the line through the origin in the direction of that vector, and the span of two non-parallel vectors is the plane through the origin containing those vectors. When we work in other vector spaces, we lose much of this geometric interpretation, but algebraically spans still work much like they do in ℝn. For example, span(1, x, x²) = ℝ≤2[x] (the vector space of real-valued polynomials with degree at most 2) since every polynomial p ∈ ℝ≤2[x] can be written in the form p(x) = c₀ + c₁x + c₂x² for some c₀, c₁, c₂ ∈ ℝ. Indeed, this is exactly what it means for a polynomial to have degree at most 2. More generally, span(1, x, … , xm) = ℝ≤m[x].

Theorem 5: The span of any subset S of a vector space V is a subspace of V. Moreover, any subspace of V that contains S must also contain the span of S.

This result is immediate if S = ∅ because span( ∅ ) = { 0 }, which is a subspace that is contained in any subspace of V.

If S ≠ ∅, then S contains an element z. So 0z = 0 is an element of span( S ). Let x, y ∈ span( S ). Then there exist elements u1, u2, ... , um, v1, v2, ... , vn, in S and scalars a1, a2, ... , am, b1, b2, ... , bn such that

\[ {\bf x} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_m {\bf u}_m \quad\mbox{and}\quad {\bf y} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_n {\bf v}_n . \]
Then
\[ {\bf x} + {\bf y} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_m {\bf u}_m + b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_n {\bf v}_n , \]
and for any scalar c
\[ c\,{\bf x} = \left( c\,a_1 \right) {\bf u}_1 + \left( c\,a_2 \right) {\bf u}_2 + \cdots + \left( c\,a_m \right) {\bf u}_m \]
are clearly linear combinations of the elements of S; so x + y and cx are elements of span( S ). Thus span( S ) is a subspace of V.

Now let W denote any subspace of V that contains S. If w ∈ span( S ), then w has the form w = c1w1 + c2w2 + ... + ckwk for some elements w1, w2, ... , wk in S and some scalars c1, c2, ... , ck. Since SW, we have w1, w2, ... , wkW. Therefore, w = c1w1 + c2w2 + ... + ckwk is an element of W. Since w, an arbitrary element of span( S ), belongs to W, it follows that span( S ) ⊆ W, completing the proof. ■

  1. Are the following 2×2 matrices \( \begin{bmatrix} -3&2 \\ \phantom{-}1& 2 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}6&-4 \\ -2&-4 \end{bmatrix} \) linearly dependent or independent?
  2. In each part, determine whether the vectors are linearly independent or linearly dependent in ℝ³.
    1. (2 ,-3, 1), (-1, 4, 5), (3, 2, -1);
    2. (1, -2, 0), (-2, 3, 2), (4, 3, 2);
    3. (7, 6, 5), (4, 3, 2), (1, 1, 1), (1, 2, 3);
  3. In each part, determine whether the vectors are linearly independent or linearly dependent in ℝ4.
    1. (8, −9, 6, 5), (1,−3, 7, 1), (1, 2, 0, −3);
    2. (2, 0, 2, 8), (2, 1, 0, 6), (1, −2, 5, 8);
  4. In each part, determine whether the vectors are linearly independent or linearly dependent in the space ℝ≤3[x] of all polynomials of degree up to 3.
    1.    {3, x + 4, x³ −5x, 6}.
    2.    {0, 2x, 3 −x, x³}.
    3.    {x³ −2x, x³ + 2x, x + 1, x −3}.
  5. In each part, determine whether the 2×2 matrices are linearly independent or linearly dependent.
    1. \( \begin{bmatrix} 1&\phantom{-}0 \\ 2& -1 \end{bmatrix} , \ \begin{bmatrix} 0&\phantom{-}5 \\ 1&-5 \end{bmatrix} , \ \begin{bmatrix} -2&-1 \\ \phantom{-}1&\phantom{-}3 \end{bmatrix} ; \)
    2. \( \begin{bmatrix} -1&0 \\ \phantom{-}1& 2 \end{bmatrix} , \ \begin{bmatrix} 1&2 \\ 2&1 \end{bmatrix} , \ \begin{bmatrix} 0&1 \\ 2&1 \end{bmatrix} ; \)
    3. ????????? \( \begin{bmatrix} -2&9 \\ \phantom{-}3& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}1&7 \\ -2&3 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}4&9 \end{bmatrix} \)
  6. Determine all values of k for which the following matrices are linearly dependent in ℝ2,2, the space of all 2×2 matrices.
    1. \( \begin{bmatrix} 1&2 \\ 0& 0 \end{bmatrix} , \ \begin{bmatrix} k&0 \\ 4&0 \end{bmatrix} , \ \begin{bmatrix} -1&k-2 \\ \phantom{-}k&0 \end{bmatrix} ; \)
    2. ?????
    3. \( \begin{bmatrix} -1&9 \\ \phantom{-}3& 4 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}5&6 \\ -3&1 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}1&7 \end{bmatrix} \)
    4. \( \begin{bmatrix} -2&9 \\ \phantom{-}3& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}1&7 \\ -2&3 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}4&9 \end{bmatrix} \)
  7. In each part, determine whether the three vectors lie in a plane in ℝ³.
    1. Coffee
    2. Tea
    3. Milk
  8. Determine whether the given vectors v₁ , v₂ , and v₃ form a linearly dependent or independent set in ℝ³.
    1. v₁ = (−3, 0, 4), v₂ = (5, −1, 2), and v₃ = (3, 3, 9);
    2. v₁ = (−4, 0, 2), v₂ = (3, 2, 5), and v₃ = (6, −1, 1);
    3. v₁ = (0, 0, 1), v₂ = (0, 5, −8), and v₃ = (−4, 3, 1);
    4. v₁ = (−2, 3, 1), v₂ = (1, −2, 4), and v₃ = (2, 4, 1);
    5. v₁ = (−5, 7, 8), v₂ = (−1, 1, 3), and v₃ = (1, 4, −7).
  9. Determine for which values of k the vectors x² + 2x + k, 5x² + 2kx + k², kx² + x + 3 generate ℝ≤2[x]
  10. Given the vectors \[ {\bf v} = \begin{pmatrix} 2 \\ 2 \end{pmatrix}, \quad {\bf u} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad {\bf w} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \] determine if they are linearly independent and determine the subspace generated by them.
  11. Determine whether x³ −x belongs to span(x³ + x² + x, x² + 2x, x²).
  12. Find the value of k for which the set of vectors is linearly dependent. ????????????????
    1. \( \begin{bmatrix} -2&4 \\ \phantom{-}4& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}6&7 \\ -1&8 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}5&1 \end{bmatrix} \)
    2. \( \begin{bmatrix} -1&9 \\ \phantom{-}3& 4 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}5&6 \\ -3&1 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}1&7 \end{bmatrix} \)
    3. \( \begin{bmatrix} -2&9 \\ \phantom{-}3& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}1&7 \\ -2&3 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}4&9 \end{bmatrix} \)
  13. Are the vectors v₁ , v₂ , and v₃ in part (a) of the accompanying figure linearly independent? What about those in part (b) ?

    ??????????????????????// Anton page 211, # 15

  14. Find the solution of the system with parameter k: \[ \begin{split} kx - ky + 2z &= 0, \\ x - z &= 1, \\ 2x + 3ky -11z &= -1. \end{split} \]

  1. Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International