This section analyzes sets of vectors whose elements are independent in the sense that no one of them can be expressed as a linear combination of the others. This is important to know because the existence of such a relationship often signals that we can reduce a problem under consideration to a similar problem formulated for a smaller set.

# Linear Independence

Before introducing the property of independence, we recall, for convenience, the following definition.
Let S = { v1, v2, ... , vn } be a set of n vectors in a vector space V over the field of scalars 𝔽 (either the rational numbers, the real numbers, or the complex numbers). If a1, a2, ... , an are scalars from the same field, then the linear combination of those vectors with those scalars as coefficients is
$a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_n {\bf v}_n .$
Example 1: Every vector in the plane with Cartesian coordinates can be expressed in exactly one way as a linear combination of standard unit vectors. For example, the only way to express the vector (3, 4) as a linear combination of i = (1, 0) and j = (0, 1) is
$(3, 4) = 3\,{\bf i} + 4\,{\bf j} = 3 \left( 1 , 0 \right) + 4 \left( 0, 1 \right) . \tag{1.1}$
In Mathematica, we represent a vector by a list.
v1 = {3, 4}
VectorQ[v1]
{3, 4}
True
The numbers in that list are the coordinates of a point at the tip of an arrow (red) that originates at the origin of a Cartesian plot. The basis vectors in the plot below (green) represent, respectively, i and j in Equation (1.1); in matrix form, they make up the two-dimensional identity matrix.
basisM = IdentityMatrix[2];
MatrixForm[basisM]
$$\displaystyle \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
The "Dot Product" (to be covered in a later section) of the identity matrix and our vector returns our vector.
v1.basisM
{3, 4}

Vector (3, 4):
eg1 = Graphics[{Green, Thick, Arrowheads -> .08, Arrow[{{0, 0}, basisM[[1]]}],
    Arrow[{{0, 0}, basisM[[2]]}], Thickness[.02], Red, Arrow[{{0, 0}, v1}],
    Red, PointSize -> Medium, Point[v1]}, Axes -> True,
    PlotRange -> {{0, 4}, {0, 5}}, AxesLabel -> {"x", "y"},
    GridLines -> {{v1[[1]]}, {v1[[2]]}},
    Epilog -> {Red, Text[Style[ToString[v1], FontSize -> 12, Bold], {3.25, 4.15}]}];
Labeled[eg1, "Vector {3,4}"]

Our vector above, v1, may be thought of as one of a "family" of vectors (perhaps of differing lengths) all pointing in the same direction (see the plot below). The members of the family represent linear combinations that are not independent of one another. Note that the white arrow, v2, is one half the red arrow: our original vector multiplied by 0.5, collinear with it.

Equivalent vectors:
v2 = .5 v1
vp1 = VectorPlot[v1, {x, 0, 4}, {y, 0, 5}, VectorPoints -> 6,
    VectorMarkers -> "Dart", VectorSizes -> 1.5];
colinear1 = Graphics[{Thickness[.008], White, Arrow[{{0, 0}, v2}],
    PointSize -> Medium, Point[{1.5, 2}]}];
Show[eg1, vp1, colinear1, GridLines -> {{v1[[1]], 1.5}, {v1[[2]], 2}},
    Epilog -> {Text[Style[ToString[v2], FontSize -> 12, Bold], {1.25, 2.17}],
    Red, Text[Style[ToString[v1], FontSize -> 12, Bold], {3.20, 4.14}]}];
Labeled[%, "Equivalent Vectors"]

Let us introduce a third vector v that makes an angle of 60° with the abscissa. As illustrated in the following figure, the unit vector collinear with v is:
v = {1/2, Sqrt[3]/2}
% // N
$$\displaystyle \left\{ \frac{1}{2}, \,\frac{\sqrt{3}}{2} \right\}$$
{0.5, 0.866025}
${\bf v} = \frac{1}{2}\,{\bf i} + \frac{\sqrt{3}}{2}\,{\bf j} = \left( \frac{1}{2} , \frac{\sqrt{3}}{2} \right) \approx (0.5, 0.866025) . \tag{1.2}$

Vector v = (1/2, √3/2):
v = {1/2, Sqrt[3]/2};
basisM = IdentityMatrix[2];
eg2 = Graphics[{Hue[5/6, 1, 1/2], Thickness[0.015], Arrowheads -> 0.08,
    Arrow[{{0, 0}, basisM[[1]]}], Arrow[{{0, 0}, basisM[[2]]}],
    Thickness[0.015], Orange, Arrow[{{0, 0}, v}],
    Orange, PointSize -> Medium, Point[v]}, Axes -> True,
    PlotRange -> {{0, 1.5}, {0, 1.5}}, AxesLabel -> {"x", "y"},
    GridLines -> {{v[[1]]}, {v[[2]]}},
    Epilog -> {Hue[5/6, 1, 1/2], Text[Style["i={1,0}", FontSize -> 12, Bold], {1.05, 0.05}],
    Hue[5/6, 1, 1/2], Text[Style["j={0,1}", FontSize -> 12, Bold], {0.1, 1.05}],
    Orange, Text[Style["v={1/2, Sqrt[3]/2}", FontSize -> 12, Bold], {0.65, 1}]}];
Labeled[eg2, "Vector {1/2, Sqrt[3]/2}"]

Whereas expansion (1.1) shows the only way to express the vector (3, 4) as a linear combination of i = (1, 0) and j = (0, 1), there are infinitely many ways to express the vector (3, 4) as a linear combination of the three vectors i, j, and v. Three possibilities are shown below:

\begin{align*} (3, 4) &= 3 \left( 1, 0 \right) + 4 \left( 0, 1 \right) + 0 \, {\bf v} , \\ (3, 4) &= 2\,{\bf i} + \left( 4 - \sqrt{3}\right) {\bf j} + 2\,{\bf v} , \\ (3, 4) &= \left( 3 - 2\sqrt{3} \right) {\bf i} + 4\sqrt{3} \,{\bf v} - 2 \, {\bf j} . \end{align*}
Note that Mathematica produces the same answer for all three:
i = basisM[[1]]; j = basisM[[2]];
3 i + 4 j + 0 v
2 i + (4 - Sqrt[3]) j + 2 v
(3 - 2 Sqrt[3]) i + 4 Sqrt[3] v - 2 j
{3, 4}
{3, 4}
{3, 4}
In short, by introducing a new vector v we created the complication of having multiple ways of assigning linear combinations to the points in the plane. What makes the vector v superfluous is the fact that it can be expressed as a linear combination of the unit vectors i and j, that is, ${\bf v} = \frac{1}{2}\,{\bf i} + \frac{\sqrt{3}}{2}\,{\bf j} .$ However, if we remove one vector from the set of our three vectors, considering only two of them, say β = {v, j}, we will arrive at the unique decomposition: $(3, 4) = 6\,{\bf v} + \left( 4 - 3\sqrt{3} \right) {\bf j} .$
6 v + (4 - 3 Sqrt[3]) j
{3, 4}
End of Example 1
Let S be a subset of a vector space V.
1. S is a linearly independent subset of V if and only if no vector in S can be expressed as a linear combination of the other vectors in S.
2. S is a linearly dependent subset of V if and only if some vector v in S can be expressed as a linear combination of the other vectors in S.

Theorem 1: A nonempty set $$S = \{ {\bf v}_1 , \ {\bf v}_2 , \ \ldots , \ {\bf v}_n \}$$ of vectors in a vector space V is linearly independent if and only if the only coefficients satisfying the vector equation
$k_1 {\bf v}_1 + k_2 {\bf v}_2 + \cdots + k_n {\bf v}_n = {\bf 0}$
are $$k_1 =0, \ k_2 =0, \ \ldots , \ k_n =0 .$$
We prove this theorem for the case when n ≥ 2. If the equation
$a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_n {\bf v}_n = {\bf 0}$
can be satisfied with coefficients that are not all zero, then at least one of the vectors in S must be expressible as a linear combination of the others. To be more specific, suppose a1 ≠ 0. Then we can write
${\bf v}_1 = - \frac{a_2}{a_1}\, {\bf v}_2 - \cdots - \frac{a_n}{a_1}\, {\bf v}_n ,$
which expresses v1 as a linear combination of the other vectors in S. Then we have at least two linear combinations for the same vector
${\bf v} = k_1 {\bf v}_1 + k_2 {\bf v}_2 + \cdots + k_n {\bf v}_n = k_1 \left( - \frac{a_2}{a_1}\, {\bf v}_2 - \cdots - \frac{a_n}{a_1}\, {\bf v}_n \right) + k_2 {\bf v}_2 + \cdots + k_n {\bf v}_n .$

Conversely, suppose that we have two distinct representations as linear combinations for some vector:

${\bf v} = k_1 {\bf v}_1 + k_2 {\bf v}_2 + \cdots + k_n {\bf v}_n = p_1 {\bf v}_1 + p_2 {\bf v}_2 + \cdots + p_n {\bf v}_n ,$
with some coefficients. Then subtracting one from another, we get
${\bf v} - {\bf v} = {\bf 0} = \left( k_1 - p_1 \right) {\bf v}_1 + \left( k_2 - p_2 \right) {\bf v}_2 + \cdots + \left( k_n - p_n \right) {\bf v}_n ,$
with at least one difference ki - pi ≠ 0. Then we conclude that there exists a set of scalars ai = ki - pi, not all zero, for which we have
$a_1 {\bf v}_1 + a_2 {\bf v}_2 + \cdots + a_n {\bf v}_n = {\bf 0} .$
Example 2: Consider the set
$S = \left\{ (1,3,-4,2),\ (2,2,-3,2), \ (1,-3,2,-4),\ (-1,2,-2,1) \right\}$
in ℝ4. To determine whether S is linearly dependent, we must attempt to find scalars a1, a2, a3, and a4, not all zero, such that
$a_1 (1,3,-4,2) + a_2 (2,2,-3,2) + a_3 (1,-3,2,-4) + a_4 (-1,2,-2,1) = (0,0,0,0) .$
Finding such scalars amounts to finding a nonzero solution to the system of linear equations
\begin{align*} a_1 + 2\,a_2 + a_3 - a_4 &= 0 , \\ 3\,a_1 + 2\,a_2 -3\,a_3 + 2\, a_4 &= 0, \\ -4\,a_1 -3\, a_2 +2\,a_3 -2\, a_4 &= 0, \\ 2\,a_1 + 2\, a_2 -4\, a_3 + a_4 &= 0 . \end{align*}
One such solution is a1 = 7, a2 = -6, a3 = -1, and a4 = -6. Thus, S is a linearly dependent subset of ℝ4.

Mathematica confirms

m = {{1, 2, 1, -1}, {3, 2, -3, 2}, {-4, -3, 2, -2}, {2, 2, -4, 1}};
mat = m.{a1, a2, a3, a4};
MatrixForm[mat]
sol1 = {7, -6, -1, -6};
m.sol1 == {0, 0, 0, 0}
True
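The dependence relation found above can also be verified with nothing more than arithmetic; here is a minimal plain-Python cross-check (an illustrative aside, not part of the Mathematica session):

```python
# vectors of Example 2 and the dependence coefficients a1=7, a2=-6, a3=-1, a4=-6
vectors = [(1, 3, -4, 2), (2, 2, -3, 2), (1, -3, 2, -4), (-1, 2, -2, 1)]
coeffs = [7, -6, -1, -6]

# form the linear combination a1*v1 + a2*v2 + a3*v3 + a4*v4 componentwise
combo = [sum(a * v[k] for a, v in zip(coeffs, vectors)) for k in range(4)]
print(combo)  # the zero vector of R^4
```

Because the coefficients are not all zero while the combination is the zero vector, the four vectors are linearly dependent.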
End of Example 2
Example 3: Consider the set
$S = \left\{ (1,3,-4),\quad (1,2,-3), \quad (1,-3,2) \right\}$
in ℝ³. To determine whether S is linearly dependent, we must show that one of the vectors is a linear combination of the other vectors: $\left( \begin{array}{c} 1 \\ 3 \\ -4 \end{array} \right) = a \left( \begin{array}{c} 1 \\ 2 \\ -3 \end{array} \right) + b \left( \begin{array}{c} 1 \\ -3 \\ 2 \end{array} \right) ,$ for some scalars 𝑎 and b. Hence, we need to solve the system of algebraic equations $\begin{split} a + b &= 1 , \\ 2a -3 b &= 3 , \\ -3a + 2b &= -4. \end{split}$ Expressing 𝑎 = 1 - b from the first equation and substituting its value into the other equations, we obtain $\begin{split} 2\left( 1 - b \right) -3 b &= 3 , \qquad \Longrightarrow \qquad b = -1/5 , \\ -3\left( 1 - b \right) + 2b &= -4 \qquad \Longrightarrow \qquad b = -1/5 . \end{split}$
Clear[a, b]; Solve[{a + b == 1, 2*a - 3*b == 3, -3*a + 2*b == -4}, {a, b}]
{{a -> 6/5, b -> -(1/5)}}
Therefore, $\left( \begin{array}{c} 1 \\ 3 \\ -4 \end{array} \right) = \frac{6}{5} \left( \begin{array}{c} 1 \\ 2 \\ -3 \end{array} \right) - \frac{1}{5} \left( \begin{array}{c} 1 \\ -3 \\ 2 \end{array} \right) .$
End of Example 3

A set of vectors is linearly independent if the only representation of 0 as a linear combination of its vectors is the trivial representation, in which all the scalars ai are zero. The alternative definition, that a set of vectors is linearly dependent if and only if some vector in that set can be written as a linear combination of the other vectors, is only useful when the set contains two or more vectors. Two vectors are linearly dependent if and only if one of them is a scalar multiple of the other.

Example 4: The best-known set of linearly independent vectors in ℝn is the set of standard unit vectors
${\bf e}_1 = (1,0,0,\ldots , 0), \quad {\bf e}_2 = (0, 1,0,\ldots , 0), \quad \ldots , \quad {\bf e}_n = (0,0,0,\ldots , 0, 1) .$
The same set of vectors is also independent in ℚn and ℂn.

To illustrate in ℝ³, consider the standard unit vectors that are usually labeled as

${\bf i} = (1,0,0), \quad {\bf j} = (0,1,0) , \quad {\bf k} = (0,0,1) .$
To prove their linear independence, we must show that the only coefficients satisfying the vector equation
$a_1 {\bf i} + a_2 {\bf j} + a_3 {\bf k} = {\bf 0}$
are a1 = 0, a2 = 0, a3 = 0. But this becomes evident by writing this equation in its component form
$\left( a_1 , a_2 , a_3 \right) = (0,0,0).$
Mathematica represents the standard unit vectors as a list of lists
{i, j, k} = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
MatrixForm[{i, j, k}]
$$\displaystyle \begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{pmatrix}$$
In matrix form this is the Identity Matrix
MatrixForm[IdentityMatrix[3]]
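The reason the standard unit vectors are independent can be made concrete: any linear combination of them simply reproduces its own coefficient tuple. A short plain-Python sketch (illustrative only; the coefficient values are arbitrary sample numbers):

```python
n = 4
# standard unit vectors e_1, ..., e_n in R^n
e = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
a = [3, -1, 0, 2]  # arbitrary sample coefficients

# a1*e1 + ... + an*en is exactly the tuple (a1, ..., an),
# so it is the zero vector only when every coefficient is zero
combo = [sum(a[i] * e[i][j] for i in range(n)) for j in range(n)]
print(combo)
```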
End of Example 4
Corollary 1: Let V be a vector space over a field 𝔽. A subset S = {v1 , v2, … , vn} of nonzero vectors of V is linearly dependent if and only if $$\displaystyle {\bf v}_i = \sum_{j\ne i} c_j {\bf v}_j ,$$ for some index i and some scalars cj.
If S is linearly dependent, then there exist scalars ki ∈ 𝔽, not all zero, such that $$\displaystyle \sum_i k_i {\bf v}_i = {\bf 0} .$$ Suppose ki ≠ 0 for some index i. Then this linear combination can be written as $$\displaystyle {\bf v}_i = - k_i^{-1} \sum_{j\ne i} k_j {\bf v}_j ,$$ so vi is expressed as a linear combination of the other elements of S.

Conversely, if for some i, vi can be expressed as a linear combination of other elements from S, i.e., $$\displaystyle {\bf v}_i = \sum_{j\ne i} \alpha_j {\bf v}_j ,$$ where αj ∈ 𝔽, then this yields that $\alpha_1 {\bf v}_1 + \alpha_2 {\bf v}_2 + \cdots + (-1)\,{\bf v}_i + \alpha_{i+1} {\bf v}_{i+1} + \cdots + \alpha_n {\bf v}_n = 0 .$ This shows that there exist scalars α1, α2, … , αn with αi = −1 such that $$\displaystyle \sum_i \alpha_i {\bf v}_i = {\bf 0} ,$$ and hence S is linearly dependent.

Example 5: Determine whether or not the following set of matrices is linearly independent in ℝ2,2: $S = \left\{ \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} , \quad \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} , \quad \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} \right\} .$ Solution: Since this set is finite, we want to check whether the equation $c_1 \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} + c_2 \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} + c_3 \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ has a unique solution (which would necessarily be c₁ = c₂ = c₃ = 0, corresponding to linear independence) or infinitely many solutions (corresponding to linear dependence). We can solve for c₁, c₂, and c₃ by comparing entries of the matrices on the left- and right-hand sides above to get the linear system $\begin{split} 2\, c_1 + 3\, c_2 - 4\, c_3 &= 0 , \\ c_1 - 2\, c_2 + 5\, c_3 &= 0 , \\ c_1 - 2\, c_2 + 5\, c_3 &= 0 , \\ 2\, c_1 + 3\, c_2 - 4\, c_3 &= 0 . \end{split}$ Solving this linear system via our usual methods or asking Mathematica reveals that
Solve[{2*c1 + 3*c2 - 4*c3 == 0, c1 - 2*c2 + 5*c3 == 0}, {c1, c2}]
{{c1 -> -c3, c2 -> 2 c3}}
c₃ is a free variable (so there are infinitely many solutions) and c₁ = −c₃, c₂ = 2c₃. It follows that S is linearly dependent, and in particular, choosing c₃ = 3 gives c₁ = −3 and c₂ = 6, so $-3 \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} + 6 \begin{bmatrix} \phantom{-}3 & -2 \\ -2 & \phantom{-}3 \end{bmatrix} + 3 \begin{bmatrix} -4 & \phantom{-}5 \\ \phantom{-}5 & -4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} .$ Mathematica confirms
-3 {{2, 1}, {1, 2}} + 6 {{3, -2}, {-2, 3}} + 3 {{-4, 5}, {5, -4}} // MatrixForm
$$\displaystyle \begin{pmatrix} 0&0 \\ 0&0 \end{pmatrix}$$
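The same verification can be done entrywise in plain Python, flattening each 2 × 2 matrix to a list of four numbers (an illustrative aside, not part of the Mathematica session):

```python
# matrices of Example 5, flattened row by row, with coefficients c1=-3, c2=6, c3=3
m1 = [2, 1, 1, 2]
m2 = [3, -2, -2, 3]
m3 = [-4, 5, 5, -4]

# entrywise combination c1*m1 + c2*m2 + c3*m3
combo = [-3 * a + 6 * b + 3 * c for a, b, c in zip(m1, m2, m3)]
print(combo)  # all four entries vanish
```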
End of Example 5
Corollary 2: A finite set of vectors that contains the zero vector is linearly dependent.
For any vectors v1, v2, … , vr, the set S = {v1, v2, … , vr, 0} is linearly dependent because the equation $0{\bf v}_1 + 0 {\bf v}_2 + \cdots + 0{\bf v}_r + 1 \cdot {\bf 0} = {\bf 0}$ expresses 0 as a linear combination of the vectors in S with coefficients not all zero.
Example 6: Let us consider the set of Pauli matrices. This set includes three 2 × 2 complex matrices that are self-adjoint (Hermitian), involutory, and unitary. Usually they are indicated by the Greek letter sigma (σ), $\sigma_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \qquad \sigma_2 = \begin{bmatrix} 0 & -{\bf j} \\ {\bf j} & \phantom{-}0 \end{bmatrix} , \qquad \sigma_3 = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} .$ These matrices are named after the Austrian theoretical physicist Wolfgang Pauli (1900--1958), Nobel Prize winner in Physics (1945).

We want to know whether or not there exist complex numbers c₁, c₂, and c₃ such that the identity matrix I = σ0 is a linear combination of the Pauli matrices: ${\bf I} = \sigma_0 = c_1 \sigma_1 + c_2 \sigma_2 + c_3 \sigma_3 .$ Writing this equation more explicitly gives $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = c_1 \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 & -{\bf j} \\ {\bf j} & \phantom{-}0 \end{bmatrix} + c_3 \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix}$ This is equivalent to four linear equations: \begin{align*} 1 & = c_3 , \\ 0 &= c_1 - {\bf j}\,c_2 , \\ 0 &= c_1 + {\bf j}\,c_2 , \\ 1 &= - c_3 . \end{align*} Since c₃ cannot equal 1 and −1 simultaneously, this system of equations has no solution, so the identity matrix is not a linear combination of the Pauli matrices. In fact, the same entrywise comparison applied to c₀σ₀ + c₁σ₁ + c₂σ₂ + c₃σ₃ = 0 gives c₀ + c₃ = 0, c₀ − c₃ = 0, and c₁ ± j c₂ = 0, forcing c₀ = c₁ = c₂ = c₃ = 0; hence the set of Pauli matrices together with the identity matrix is linearly independent.

Mathematica confirms

Clear[j];
eqns = c1 {{0, 1}, {1, 0}} + c2 {{0, -j}, {j, 0}} + c3 {{1, 0}, {0, -1}};
Grid[{{MatrixForm[IdentityMatrix[2]], "=", MatrixForm[eqns]}}]
Grid[{{Column[Flatten[IdentityMatrix[2]]], Column[{"=", "=", "=", "="}], Column[Flatten[eqns]]}}]
$$\displaystyle \begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix} = \begin{pmatrix} c3&c1 - c2\,{\bf j} \\ c1 + c2\,{\bf j}&-c3 \end{pmatrix}$$
1 = c3
0 = c1 - c2 j
0 = c1 + c2 j
1 = -c3
End of Example 6
The empty subset ∅ of V is linearly independent, as the condition of linear independence holds vacuously for ∅.

A set containing a single vector {v}, where v ∈ V, is linearly independent if and only if v ≠ 0.

Corollary 3: Any two nonzero vectors are linearly dependent in a vector space if and only if one is a scalar multiple of the other.
Example 7: Determine if the following sets of vectors are linearly independent. ${\bf a}. \quad {\bf v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} , \qquad {\bf u} = \begin{bmatrix} 4 \\ 6 \end{bmatrix} ; \qquad {\bf b}. \quad {\bf v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} , \qquad {\bf u} = \begin{bmatrix} 3 \\ 2 \end{bmatrix} .$ Solution: a. Notice that u is a multiple of v, namely, u = 2v. Hence, 2v − u = 0, which shows that the set of two vectors {v, u} is linearly dependent.

Linearly dependent vectors:
ar1 = Graphics[{Arrowheads[0.15], Thickness[0.03], Purple, Arrow[{{0, 0}, {2, 3}}]}];
ar2 = Graphics[{Arrowheads[0.08], Thickness[0.015], Blue, Arrow[{{0, 0}, {4, 6}}]}];
txt1 = Graphics[Text[Style["(2, 3)", FontSize -> 16, Black, Bold], {1.5, 3}]];
txt2 = Graphics[Text[Style["(4, 6)", FontSize -> 16, Black, Bold], {3.5, 6}]];
Show[ar1, ar2, txt1, txt2, Axes -> True, GridLines -> {{2, 4}, {3, 6}}]

b. The vectors v and u are certainly not multiples of one another. Could they be linearly dependent? Suppose that there are scalars c₁ and c₂ satisfying $c_1 {\bf v} + c_2 {\bf u} = {\bf 0} .$ If c₁ ≠ 0, then we can solve for v in terms of u; that is, v = (−c₂/c₁)u. This is impossible because v is not a multiple of u. So c₁ must be zero. Similarly, c₂ must also be zero. Thus, {v, u} is a linearly independent set.

Two vectors on the plane:
ar1 = Graphics[{Arrowheads[0.1], Thickness[0.02], Purple, Arrow[{{0, 0}, {2, 3}}]}];
ar2 = Graphics[{Arrowheads[0.1], Thickness[0.02], Blue, Arrow[{{0, 0}, {3, 2}}]}];
txt1 = Graphics[Text[Style["(2, 3)", FontSize -> 16, Black, Bold], {1.5, 3}]];
txt2 = Graphics[Text[Style["(3, 2)", FontSize -> 16, Black, Bold], {2.5, 2}]];
Show[ar1, ar2, txt1, txt2]

End of Example 7
Corollary 4: If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1, v2, … , vr} in 𝔽n is linearly dependent if r > n.
We build a matrix from the vectors in S: A = [v1, v2, … , vr]. Then A has dimensions n × r, and the equation A x = 0 corresponds to a system of n equations in r unknowns. If n < r, there are more variables than equations, so there must be a free variable. Hence, A x = 0 has a nontrivial solution, and the columns of A are linearly dependent.
Example 8: The set of vectors $\left[ \begin{array}{c} 1 \\ 2 \\ 3 \end{array} \right] , \quad \left[ \begin{array}{c} 3 \\ 2 \\ 1 \end{array} \right] , \quad \left[ \begin{array}{c} 1 \\ 1 \\ 1 \end{array} \right] , \quad \begin{bmatrix} -2 \\ \phantom{-}2 \\ \phantom{-}1 \end{bmatrix}$ is linearly dependent because it consists of four vectors with only three entries each. We use Mathematica for their visualization.
Clear[v1, v2, v3, v4];
v1 = {1, 2, 3};
v2 = {3, 2, 1};
v3 = {1, 1, 1};
v4 = {-2, 2, 1};
We ask Mathematica whether the termwise product of v1 and v3 equals v1 (it does, because every entry of v3 is 1):
TrueQ[v1 v3 == v1]
True

Four vectors. The plot of all four looks as if they are all independent.
vecs1 = Graphics3D[{Thickness[0.01], Arrowheads[Large], Arrow[{{0, 0, 0}, v2}],
    Arrow[{{0, 0, 0}, v4}], Arrow[{{0, 0, 0}, v1}], Arrow[{{0, 0, 0}, v3}]}, Axes -> True]

Four vectors with colors. This improves when we add color to the two suspect vectors.
vecs2 = Graphics3D[{{Arrowheads[0.06], Thickness[0.01], Arrow[{{0, 0, 0}, v2}], Purple},
    {Arrowheads[0.06], Thickness[0.01], Arrow[{{0, 0, 0}, v4}], Red},
    {Arrowheads[0.06], Thickness[0.01], Orange, Arrow[{{0, 0, 0}, v1}]},
    {Arrowheads[0.06], Thickness[0.01], Blue, Arrow[{{0, 0, 0}, v3}]},
    Text["v1", {.9*1, .9*2, 3}], Text["v3", {1.2*1, .9*1, 1}],
    Purple, Text["v4", {-2, .9*2, .9*1}], Black, Text["v2", {3, 2, .8*1}]}, Axes -> True]

Four vectors and a plane. It improves further by adding a plane containing v1 and v3.
infPl = Graphics3D[{Opacity[.2], InfinitePlane[{{0, 0, 0}, v1, v3}, Mesh -> True]}];
vecs3 = Show[vecs2, infPl, ViewPoint -> {1.5346926334477773, -2.116735802004978, 2.148056811481361}]

Four vectors; three coplanar. It helps to rotate the box so that the plane, visually, becomes a line. However, when we do that, it appears that not two but THREE of our vectors lie in the same plane (note that the black arrowhead of v2 appears to be coplanar with v1 and v3).
vecs4 = Show[vecs2, infPl, ViewPoint -> {2.5066082420449374, 0.103322560978032, -2.2707354688085837}]

This exercise reminds us that there are numerous ways that vectors can be dependent upon one another.
TrueQ[v1 # == v1] & /@ {v2, v3, v4}
{False, True, False}
TrueQ[v3 v2 == v2]
True
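One concrete dependence relation among the four vectors, easy to check by hand, is v1 + v2 − 4 v3 + 0 v4 = 0. A plain-Python verification (illustrative aside):

```python
v1, v2, v3, v4 = [1, 2, 3], [3, 2, 1], [1, 1, 1], [-2, 2, 1]

# the combination 1*v1 + 1*v2 - 4*v3 + 0*v4, computed componentwise
combo = [a + b - 4 * c + 0 * d for a, b, c, d in zip(v1, v2, v3, v4)]
print(combo)  # the zero vector of R^3
```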
End of Example 8
Corollary 5: A non-empty subset of a set of linearly independent vectors consists of vectors that are still linearly independent.
Suppose, by contradiction, that S is a set of linearly independent vectors and that ∅ ≠ A ⊆ S is a subset of linearly dependent vectors. Then there exists a vector in A that can be written as a linear combination of the others. But then it is also expressed as a linear combination of vectors of S, so S is a set of linearly dependent vectors, contradicting the hypothesis.
Example 9: Let us consider the infinite set of monomials S = {1, x, x², x³, …}. We are going to show that S is linearly independent in the vector space ℝ[x] of all polynomials. Since S is infinite, we cannot apply Theorem 1 directly, so we ask whether or not there exists some finite linear combination of 1, x, x², x³, … that adds to 0 (and does not have all coefficients equal to zero).

Let p be the largest power of x in such a linear combination. We want to know whether there exist scalars c₀, c₁, c₂, … , cp, not all zero, such that $c_0 + c_1 x + c_2 x^2 + \cdots + c_p x^p = 0 . \tag{9.1}$ By plugging x = 0 into that equation, we see that c₀ = 0. Setting p = 10 in Mathematica, we get

Plus @@ (Subscript[c, #] x^# & /@ Range[0, 10]) == 0
% /. x -> 0
c₀ + x c₁ + x² c₂ + x³ c₃ + x⁴ c₄ + x⁵ c₅ + x⁶ c₆ + x⁷ c₇ + x⁸ c₈ + x⁹ c₉ + x¹⁰ c₁₀ = 0
c₀ = 0

Taking the derivative of both sides of Equation (9.1) then reveals that $c_1 + 2\,c_2 x + 3\, c_3 x^2 + \cdots + p\,c_p x^{p-1} = 0 ,$ and plugging x = 0 into this equation gives c₁ = 0. By repeating this procedure (i.e., taking the derivative and then plugging in x = 0), we similarly see that c₁ = c₂ = ⋯ = cp = 0, so S is linearly independent.

Again, for example, using p = 10

D[(Plus @@ (Subscript[c, #] x^# & /@ Range[0, 10])) == 0, x]
% /. x -> 0
c₁ + 2 x c₂ + 3 x² c₃ + 4 x³ c₄ + 5 x⁴ c₅ + 6 x⁵ c₆ + 7 x⁶ c₇ + 8 x⁷ c₈ + 9 x⁸ c₉ + 10 x⁹ c₁₀ = 0
c₁ = 0
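The differentiate-and-evaluate argument can also be sketched numerically: after k differentiations, the value of the polynomial at x = 0 is k! ck, so every coefficient of a vanishing polynomial must be zero. A plain-Python sketch with arbitrary sample coefficients (illustrative aside):

```python
from math import factorial

def derivative(coeffs):
    # differentiate [c0, c1, c2, ...] -> [1*c1, 2*c2, 3*c3, ...]
    return [k * c for k, c in enumerate(coeffs)][1:]

coeffs = [5, -3, 0, 7, 2]  # sample values of c0, ..., c4
recovered, p = [], coeffs[:]
for k in range(len(coeffs)):
    recovered.append(p[0] // factorial(k))  # value at x = 0, divided by k!
    p = derivative(p)
print(recovered)  # the original coefficients come back one by one
```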
End of Example 9

# Homogeneous Equations as Vector Combinations

Most likely you have the following question: how is this topic about linearly independent and linearly dependent vectors related to the fundamental problem of linear algebra, namely, solving a system of algebraic equations? To answer this question, we consider a system of linear equations
$$\label{EqInde.1} {\bf A}\,{\bf x} = {\bf 0} , \qquad {\bf A}\in \mathbb{F}^{m,n} , \quad {\bf x} \in \mathbb{F}^{n,1} .$$
Since an m × n matrix A = [𝑎i,j] can be written as a row of its column vectors
${\bf A} = \left[ {\bf a}_1 \ {\bf a}_2 \ \cdots \ {\bf a}_n \right] , \qquad {\bf a}_i = \left( \begin{array}{c} a_{1,i} \\ a_{2,i} \\ \vdots \\ a_{m,i} \end{array} \right) , \quad i = 1, 2, \ldots , n,$
we rewrite Eq.\eqref{EqInde.1} as
$$\label{EqInde.2} {\bf A}\,{\bf x} = {\bf 0} \qquad \iff \qquad x_1 {\bf a}_1 + x_2 {\bf a}_2 + \cdots + x_n {\bf a}_n = {\bf 0} ,$$
because multiplication of matrices is performed according to the rule "row-by-column." In our case, A is written as a single row (with entries that are column vectors, but this does not matter for matrix multiplication), and it must be multiplied by the single column vector x = [x1, x2, … , xn]. This yields
$\left[ {\bf a}_1 , {\bf a}_2 , \ldots , {\bf a}_n \right] \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right] = x_1 {\bf a}_1 + x_2 {\bf a}_2 + \cdots + x_n {\bf a}_n .$
Since the latter is just a linear combination of the column vectors a1, a2, … , an with coefficients x1, x2, … , xn, we immediately see that a nontrivial solution of Eq.\eqref{EqInde.2} exhibits a linear dependence among the column vectors a1, a2, … , an. Therefore, the homogeneous linear system \eqref{EqInde.1} has a nontrivial solution if and only if the column vectors of the matrix A are linearly dependent.
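The identity behind Eq.\eqref{EqInde.2}, that A x is the combination of the columns of A with coefficients taken from x, can be checked directly (illustrative plain Python with an arbitrary sample matrix):

```python
# a sample 3 x 2 matrix stored as a list of rows, and a vector x
A = [[1, 2],
     [3, 4],
     [5, 6]]
x = [2, -1]
m, n = 3, 2

# the usual "row-by-column" product A.x
Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

# the same vector as the column combination x1*a1 + x2*a2
cols = [[A[i][j] for i in range(m)] for j in range(n)]
combo = [sum(x[j] * cols[j][i] for j in range(n)) for i in range(m)]
print(Ax, combo)  # the two computations agree
```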
Example 10: Let $$\displaystyle {\bf v}_1 = \left[ \begin{array}{c} 2 \\ 6 \\ 2 \end{array} \right] , \quad {\bf v}_2 = \left[ \begin{array}{c} 5 \\ 8 \\ 4 \end{array} \right] , \quad {\bf v}_3 = \left[ \begin{array}{c} 1 \\ 3 \\ 1 \end{array} \right] .$$ In order to determine whether these three vectors are linearly dependent or independent, we show that the corresponding system $x_1 {\bf v}_1 + x_2 {\bf v}_2 + x_3 {\bf v}_3 = 0 \tag{10.1}$ has a nontrivial solution in x = (x₁, x₂, x₃). So we build the associated augmented matrix $\begin{bmatrix} 2 & 5 & 1 & 0 \\ 6 & 8 & 3 & 0 \\ 2 & 4 & 1 & 0 \end{bmatrix} \,\sim\, \begin{bmatrix} 2 & \phantom{-}5 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix}\,\sim\, \begin{bmatrix} 2 & \phantom{-}5 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & \phantom{-}0 & 0 & 0 \end{bmatrix} .$ Since the reduced form of the given matrix (it is the last matrix) has pivots at the first column (2≠0) and in the second column (−7≠0), the variables x₁ and x₂ are basic variables, and x₃ is free. Each nonzero value of x₃ determines a nontrivial solution of Eq.(10.1). Hence, v₁, v₂, and v₃ are linearly dependent.

To find a linear dependence relation among v₁, v₂, and v₃, completely row reduce the augmented matrix and write the new system $\begin{bmatrix} 2 & 5 & 1 & 0 \\ 6 & 8 & 3 & 0 \\ 2 & 4 & 1 & 0 \end{bmatrix} \,\sim\, \begin{bmatrix} 2 &\phantom{-}0 & 1 & 0 \\ 0 & -7& 0 & 0 \\ 0 & \phantom{-}0 & 0 & 0 \end{bmatrix} .$ This yields $\begin{split} 2\,x_1 \phantom{2x3} + x_3 &= 0 , \\ \phantom{-12} - 7\, x_2 \phantom{2x3}&= 0 , \\ 0&= 0 . \end{split}$ Thus, 2x₁ + x₃ = 0 and x₂ = 0. This yields the linear dependence equation: ${\bf v}_1 + 0\,{\bf v}_2 - 2\,{\bf v}_3 = {\bf 0} .$ We verify our conclusion with Mathematica:

AugmentedMatrix[{}, vars_] := {}
AugmentedMatrix[matA_List == matB_List, vars_] := AugmentedMatrix[matA - matB, vars]
AugmentedMatrix[mat_, vars_] := With[{temp = CoefficientArrays[Flatten[Thread /@ mat], vars]},
    Normal[Transpose[Join[Transpose[temp[[2]]], -{temp[[1]]}]]]]
Then we clear variables and define matrices:
Clear[b, v1, v2, v3, x, x1, x2, x3];
v1 = {2, 6, 2};
v2 = {5, 8, 4};
v3 = {1, 3, 1};
b = {0, 0, 0};
Now, the augmented matrix becomes
augM = AugmentedMatrix[{{2, 5, 1}, {6, 8, 3}, {2, 4, 1}}.{x1, x2, x3} == {0, 0, 0}, {x1, x2, x3}];
MatrixForm[augM]
$$\displaystyle \begin{pmatrix} 2 & 5 & 1 & 0 \\ 6 & 8 & 3 & 0 \\ 2 & 4 & 1 & 0 \end{pmatrix}$$
Applying row reduction, we obtain
MatrixForm[RowReduce[augM]]
So Mathematica provides the matrix $\begin{pmatrix} 1 & 0 & \frac{1}{2} & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} ,$ which is equivalent to our matrix $$\displaystyle \begin{bmatrix} 2 & 0 & 1 & 0 \\ 0 & -7 & 0 & 0 \\ 0&0&0&0 \end{bmatrix} .$$ As you see, Mathematica places 1's at the pivots, while a lazy person like me does not care about the values of the pivots; they just must be nonzero.
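Row reduction over exact fractions can be reproduced outside Mathematica; the sketch below (illustrative plain Python with a hand-rolled reduced-row-echelon routine) reproduces Mathematica's answer and checks the relation v1 + 0 v2 − 2 v3 = 0:

```python
from fractions import Fraction

def rref(m):
    # reduced row echelon form over exact fractions
    m = [[Fraction(x) for x in row] for row in m]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]  # scale the pivot row so the pivot is 1
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

aug = [[2, 5, 1, 0], [6, 8, 3, 0], [2, 4, 1, 0]]
print(rref(aug))

# the dependence relation v1 + 0*v2 - 2*v3 = 0 found above
relation = [a - 2 * c for a, c in zip([2, 6, 2], [1, 3, 1])]
print(relation)
```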
End of Example 10

Example 11: Let ${\bf v} = \begin{bmatrix} 1\\ 2 \\ 3 \end{bmatrix} , \quad {\bf u} = \begin{bmatrix} \phantom{-}1 \\ \phantom{-}3 \\ -4 \end{bmatrix} , \quad {\bf w} = \begin{bmatrix} \phantom{-}1 \\ \phantom{-}2 \\ -3 \end{bmatrix} , \quad {\bf z} = \begin{bmatrix} -2 \\ \phantom{-}7 \\ -5 \end{bmatrix} .$ There are several questions regarding these vectors that we would like to address.
1. Are the sets of two vectors {v, u}, {v, w}, {v, z}, {u, w}, {u, z}, and {w, z} each linearly independent?

Solution: We know from Corollary 3 that two vectors are linearly dependent if and only if one of them is a constant (scalar) multiple of the other. Since no pair of these vectors is parallel, every subset of two vectors is linearly independent.

Below we see a graphical portrayal of the six pairs of vectors; clearly, none are parallel.

Clear[v, u, w, z];
v = {1, 2, 3}; u = {1, 3, -4}; w = {1, 2, -3}; z = {-2, 7, -5};
set = {v, u, w, z};
ssets = Subsets[set, {2}]
{{{1, 2, 3}, {1, 3, -4}}, {{1, 2, 3}, {1, 2, -3}}, {{1, 2, 3}, {-2, 7, -5}}, {{1, 3, -4}, {1, 2, -3}}, {{1, 3, -4}, {-2, 7, -5}}, {{1, 2, -3}, {-2, 7, -5}}}
Grid[{ssets[[;; 3]], Map[Graphics3D[{Thick, Arrow[{{0, 0, 0}, ssets[[#, 1]]}], Arrow[{{0, 0, 0}, ssets[[#, 2]]}]}] &, Range[3]], ssets[[4 ;; 6]], Map[Graphics3D[{Thick, Arrow[{{0, 0, 0}, ssets[[#, 1]]}], Arrow[{{0, 0, 0}, ssets[[#, 2]]}]}] &, {4, 5, 6}]}, Frame -> All]

 (1, 2, 3),    (1, 3, −4) (1, 2, 3),    (1, 2, −3) (1, 2, 3),    (−2, 7, −5)

 (1, 3, −4),    (1, 2, −3) (1, 3, −4),    (−2, 7, −5) (1, 2, −3),    (−2, 7, −5)

Although usually we do not need termwise matrix division or multiplication, sometimes it is convenient to perform term-by-term arithmetic operations on matrices of the same dimensions, and Mathematica supports these operations. For instance, we define two 3 × 2 matrices and divide or multiply them term by term:
A = {{1, 2}, {3, 4}, {5, 6}}; B = {{2, 3}, {1, 4}, {-1, -2}};
A/B
{{1/2, 2/3}, {3, 1}, {-5, -3}}
or multiply
A*B
{{2, 6}, {3, 16}, {-5, -12}}
Since vectors constitute a particular case of matrices, we can divide vectors of the same size element by element. If a pair of vectors is linearly dependent, then their termwise division produces a vector whose entries all equal the same scalar. This is not the case below, so we conclude there are no scalar dependencies between these pairs.
v = {1, 2, 3}; u = {1, 3, -4}; w = {1, 2, -3}; z = {-2, 7, -5};
set = {v, u, w, z};
ssets = Drop[Subsets[set, 2], 5]
(#[[1]]/#[[2]]) & /@ ssets
{{1, 2/3, -(3/4)}, {1, 1, -1}, {-(1/2), 2/7, -(3/5)}, {1, 3/2, 4/ 3}, {-(1/2), 3/7, 4/5}, {-(1/2), 2/7, 3/5}}
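The termwise-ratio test above is easy to automate; a small plain-Python helper (illustrative, and assuming no zero components, which holds for all the pairs here):

```python
from fractions import Fraction

def parallel(a, b):
    # two vectors with nonzero components are parallel exactly when
    # all componentwise ratios a_k / b_k agree
    ratios = {Fraction(x, y) for x, y in zip(a, b)}
    return len(ratios) == 1

print(parallel([1, 2, 3], [1, 3, -4]))  # the pair {v, u}: not parallel
print(parallel([2, 4, 6], [1, 2, 3]))   # a genuine scalar multiple
```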
2. Is the set of four vectors {v, u, w, z} linearly dependent?

Solution: From Corollary 4, it follows that these four vectors of size 3 are linearly dependent.

Grid[{set, Map[Graphics3D[{Thick, Arrow[{{0, 0, 0}, set[[#]]}]}] &, Range[4]]}, Frame -> All]
Adding a plane to our plot is not determinative unless we change the viewpoint such that the plane disappears.

Plane of vectors:
infPl = Graphics3D[{Opacity[.4], InfinitePlane[{{0, 0, 0}, u, w}, Mesh -> True]}];
vecsUW = Graphics3D[{Thick, Arrow[{{0, 0, 0}, u}], Arrow[{{0, 0, 0}, w}]}];
Show[infPl, vecsUW, ViewPoint -> {1.7677868473826688, -1.813323932632412, 2.244278498217939}]
Show[infPl, vecsUW, ViewPoint -> {0.8398423137197476, -2.7043140351649644, 1.8523904791635182}]

3. We consider the following sets of three vectors: S₁ = {v, u, w}, S₂ = {v, u, z}, S₃ = {v, w, z}, S₄ = {u, w, z}. Determine which of these four sets is linearly dependent.

Solution: In order to determine whether set S is linearly dependent or independent, we consider the system of homogeneous equations with respect to unknown constants c₁, c₂, and c₃: $c_1 {\bf v} + c_2 {\bf u} + c_3 {\bf w} = {\bf 0} .$ We rewrite it in coordinate form: $c_1 \left[ \begin{array}{c} 1 \\ 2 \\ 3 \end{array} \right] + c_2 \left[ \begin{array}{c} \phantom{-}1 \\ \phantom{-}3 \\ -4 \end{array} \right] + c_3 \left[ \begin{array}{c} \phantom{-}1 \\ \phantom{-}2 \\ -3 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right]$ Equating their three coordinates, we obtain the system of linear equations: $\begin{split} c_1 \phantom{2}+ c_2 \phantom{3}+ \phantom{2}c_3 &= 0 \qquad \Longrightarrow \qquad c_1 + c_3 = - c_2 , \\ 2\, c_1 + 3\,c_2 + 2\,c_3 &= 0 \qquad \Longrightarrow \qquad c_2 = 0 , \\ 3\, c_1 -4\, c_2 - 3\, c_3 &= 0 \qquad \Longrightarrow \qquad c_1 - c_3 = 0. \end{split}$ Since this system has only trivial solution c₁ = c₂ = c₃ = 0, set S₁ is linearly independent.

Mathematica can solve this system in a number of ways, all of which produce the same answer.

Subscript[s, 1] = {v, u, w};
LinearSolve[Transpose[Subscript[s, 1]], {0, 0, 0}]
{0, 0, 0}
Solve[{Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}.Subscript[s, 1] == 0, {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}]
{{Subscript[c, 1] -> 0, Subscript[c, 2] -> 0, Subscript[c, 3] -> 0}}
Reduce[{Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}.Subscript[s, 1] == 0, {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}]
Subscript[c, 1] == 0 && Subscript[c, 2] == 0 && Subscript[c, 3] == 0
AugmentedMatrix[{}, vars_] := {}
AugmentedMatrix[matA_List == matB_List, vars_] := AugmentedMatrix[matA - matB, vars]
AugmentedMatrix[mat_, vars_] := With[{temp = CoefficientArrays[Flatten[Thread /@ mat], vars]}, Normal[Transpose[Join[Transpose[temp[[2]]], -{temp[[1]]}]]]]
augM1 = AugmentedMatrix[{{1, 1, 1}, {2, 3, 2}, {3, -4, -3}} . {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]} == {0, 0, 0}, {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}];
MatrixForm[%]
$$\displaystyle \begin{pmatrix} 1&1&1&0 \\ 2&3&2&0 \\ 3&-4&-3&0 \end{pmatrix}$$
RowReduce[augM1];
MatrixForm[%]
$$\displaystyle \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \end{pmatrix}$$

For S₂, we have to solve the system of homogeneous equations: $c_1 {\bf v} + c_2 {\bf u} + c_3 {\bf z} = {\bf 0} ,$ which we rewrite as $c_1 \left[ \begin{array}{c} 1 \\ 2 \\ 3 \end{array} \right] + c_2 \left[ \begin{array}{c} \phantom{-}1 \\ \phantom{-}3 \\ -4 \end{array} \right] + c_3 \left[ \begin{array}{c} -2 \\ \phantom{-}7 \\ -5 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right]$ This leads to the system of linear equations: $\begin{split} c_1 \phantom{2}+ c_2 \phantom{3} -2\,c_3 &= 0 \qquad \Longrightarrow \qquad c_1 + c_2 = 2\, c_3 , \\ 2\, c_1 + 3\,c_2 + 7\,c_3 &= 0 \qquad \Longrightarrow \qquad 11\,c_1 + 13\,c_2 = 0 , \\ 3\, c_1 -4\, c_2 - 5\, c_3 &= 0 \qquad \Longrightarrow \qquad c_1 - 13\,c_2 = 0. \end{split}$ Since this system has only the trivial solution, we conclude that the set S₂ is linearly independent.

Subscript[s, 2] = {v, u, z};
LinearSolve[Transpose[Subscript[s, 2]], {0, 0, 0}]
{0, 0, 0}

Let us consider the set S₃. In order to determine whether this set is linearly dependent or independent, we consider the system of homogeneous equations with respect to unknown constants c₁, c₂, and c₃: $c_1 {\bf v} + c_2 {\bf w} + c_3 {\bf z} = {\bf 0} ,$ which we rewrite as $c_1 \left[ \begin{array}{c} 1 \\ 2 \\ 3 \end{array} \right] + c_2 \left[ \begin{array}{c} \phantom{-}1 \\ \phantom{-}2 \\ -3 \end{array} \right] + c_3 \left[ \begin{array}{c} -2 \\ \phantom{-}7 \\ -5 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right]$ The corresponding system of algebraic equations is $\begin{split} c_1 \phantom{2}+ c_2 \phantom{3} -2\,c_3 &= 0 \qquad \Longrightarrow \qquad c_1 + c_2 = 2\, c_3 , \\ 2\, c_1 + 2\,c_2 + 7\,c_3 &= 0 \qquad \Longrightarrow \qquad 11\,c_3 = 0 , \\ 3\, c_1 -3\, c_2 - 5\, c_3 &= 0 \qquad \Longrightarrow \qquad c_1 = c_2 . \end{split}$ Substituting c₁ + c₂ = 2c₃ into the second equation gives 4c₃ + 7c₃ = 11c₃ = 0, so c₃ = 0; then c₁ = c₂ together with c₁ + c₂ = 0 forces c₁ = c₂ = 0. This system has only the trivial solution, so set S₃ is linearly independent.

Subscript[s, 3] = {v, w, z};
Solve[{Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}.Subscript[s, 3] == 0, {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}]
{{Subscript[c, 1] -> 0, Subscript[c, 2] -> 0, Subscript[c, 3] -> 0}}

Finally, we consider the set S₄. We build the system: $c_1 {\bf u} + c_2 {\bf w} + c_3 {\bf z} = {\bf 0} ,$ which we rewrite as $c_1 \left[ \begin{array}{c} \phantom{-}1 \\ \phantom{-}3 \\ -4 \end{array} \right] + c_2 \left[ \begin{array}{c} \phantom{-}1 \\ \phantom{-}2 \\ -3 \end{array} \right] + c_3 \left[ \begin{array}{c} -2 \\ \phantom{-}7 \\ -5 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right]$ The corresponding system of algebraic equations is $\begin{split} c_1 \phantom{2}+ c_2 \phantom{3} -2\,c_3 &= 0 \qquad \Longrightarrow \qquad c_1 + c_2 = 2\, c_3 , \\ 3\, c_1 + 2\,c_2 + 7\,c_3 &= 0 \qquad \Longrightarrow \qquad 13\,c_1 + 11\,c_2 = 0 , \\ -4\, c_1 -3\, c_2 - 5\, c_3 &= 0 \qquad \Longrightarrow \qquad 13\, c_1 + 11\,c_2 = 0. \end{split}$ The last two equations are the same, so we have $c_2 = - \frac{13}{11}\, c_1 , \qquad c_3 = - \frac{1}{11}\, c_1 .$ Therefore, the set S₄ is linearly dependent.

Subscript[s, 4] = {u, w, z};
Reduce[{Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}.Subscript[s, 4] == 0, {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}]
Subscript[c, 2] == -((13 Subscript[c, 1])/11) && Subscript[c, 3] == -(Subscript[c, 1]/11)
augM4 = AugmentedMatrix[{{1, 1, -2}, {3, 2, 7}, {-4, -3, -5}} . {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]} == {0, 0, 0}, {Subscript[c, 1], Subscript[c, 2], Subscript[c, 3]}];
MatrixForm[%]
$$\displaystyle \begin{pmatrix} 1&1&-2&0 \\ 3&2&7&0 \\ -4&-3&-5&0 \end{pmatrix}$$
RowReduce[augM4];
MatrixForm[%]
$$\displaystyle \begin{pmatrix} 1&0&11&0 \\ 0&1&-13&0 \\ 0&0&0&0 \end{pmatrix}$$
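Choosing c₁ = 11 in the relation above clears the denominators and gives the integer dependence 11u − 13w − z = 0. A quick Python check of this relation (plain lists, no libraries):

```python
# verify the integer dependence relation 11*u - 13*w - 1*z = 0 for S4
u, w, z = [1, 3, -4], [1, 2, -3], [-2, 7, -5]
combo = [11 * a - 13 * b - c for a, b, c in zip(u, w, z)]
print(combo)  # [0, 0, 0]
```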
End of Example 11
Example 12: We consider the set of three functions S = {sin(x), cos(x), cos²(x)}.

In order to determine whether these functions are linearly independent or dependent, we set up the equation $c_1 \sin x + c_2 \cos x + c_3 \cos^2 x = 0 ,$

Clear[c, c1, c2, c3, x]
{c1, c2, c3}.{Sin[x], Cos[x], (Cos[x])^2}
and ask whether there exist scalars c₁, c₂, c₃ ∈ ℝ (not all equal to 0) satisfying it for every x. Plugging in x = 0 tells us that c₂ + c₃ = 0. Then plugging in x = π/2 yields c₁ = 0. Finally, plugging in x = π tells us that −c₂ + c₃ = 0. Solving this system of equations with respect to c₂ and c₃ reveals that c₂ = c₃ = 0. Hence the set S is linearly independent.

With[{x = 0}, Reduce[({c1, c2, c3}.{Sin[x], Cos[x], (Cos[x])^2}) == 0, {c1, c2, c3}]]
c3 == -c2
With[{x = Pi/2}, Reduce[({c1, c2, c3}.{Sin[x], Cos[x], (Cos[x])^2}) == 0, {c1, c2, c3}]]
c1 == 0
With[{x = Pi}, Reduce[({c1, c2, c3}.{Sin[x], Cos[x], (Cos[x])^2}) == 0, {c1, c2, c3}]]
c3 == c2
On the other hand, the set B = {cos(2x), sin²(x), cos²(x)} is linearly dependent. We want to determine whether or not there exist scalars c₁, c₂, c₃ ∈ ℝ (not all equal to 0) such that $c_1 \cos (2x) + c_2 \sin^2 x + c_3 \cos^2 x = 0 .$ We implement the same approach and plug in x = 0, π/2, and 3π/2. This yields the system of equations: $\begin{cases} c_1 + c_3 &= 0 , \\ -c_1 + c_2 &= 0 , \\ -c_1 + c_2 &= 0 . \end{cases}$ This system has multiple solutions $c_2 = c_1 , \qquad c_3 = -c_1 ,$ where c₁ is a free variable. In particular, we get a relation $\cos (2x) = \cos^2 x - \sin^2 x .$ This relation shows that these three functions are linearly dependent.

End of Example 12
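The sample-point technique of Example 12 can be mechanized: evaluate the functions at as many points as there are functions and test the resulting square matrix. A nonzero determinant proves independence, while a zero determinant is merely consistent with dependence (it does not prove it). Below is a Python sketch with hypothetical helpers `sample_matrix` and `det3` (not library functions):

```python
import math

def sample_matrix(funcs, points):
    """Rows = sample points, columns = function values at that point."""
    return [[f(x) for f in funcs] for x in points]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

S = [math.sin, math.cos, lambda x: math.cos(x) ** 2]
B = [lambda x: math.cos(2 * x), lambda x: math.sin(x) ** 2, lambda x: math.cos(x) ** 2]
pts = [0.0, math.pi / 2, math.pi]

print(det3(sample_matrix(S, pts)))  # about -2: nonzero, so S is independent
print(det3(sample_matrix(B, pts)))  # about 0: consistent with B being dependent
```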

Spans of Vectors

The most important application of linear combinations is presented in the following definition.

For a given vector space V over a field 𝔽 (which is either the field of rational numbers ℚ, the real numbers ℝ, or the complex numbers ℂ), the span of a set S of vectors is the set of all finite linear combinations of elements of S:
$\mbox{span}( S ) = \left\{ \left. \sum_{k=1}^n c_k {\bf v}_k \ \right\vert \ {\bf v}_k \in S , \ c_k \in \mathbb{F} \right\}$
for any positive integer n and any scalars ck ∈ 𝔽. We also say that the set S generates the vector space span(S).

If S is an infinite set, linear combinations used to form the span of S are assumed to be only finite.

The definition of span given above can be reformulated as the intersection of all subspaces of V that contain S. If V is a vector space and v ∈ V, then the subspace generated by v is the set of all multiples of v, i.e., span(v) = {kv : k ∈ 𝔽}. Moreover, the subspace (= span) generated by the zero vector is the trivial subspace, which contains only the zero vector: span(0) = {0}.

Let V be a vector space and let S = {v1, v2, … , vn} be a set of vectors of V. We say that S generates V or S is a set of generators of V if span(S) = V.
It may happen that a set of generators for a subspace is redundant in the sense that a vector can be removed from this set. This situation occurs only when the vector to be removed is a linear combination of other vectors. In other words, a set of generators is redundant if and only if this set of vectors is linearly dependent.
Example 13: Linear independence has the following geometric interpretations in ℝ² and ℝ³:
• Two vectors in ℝ² or ℝ³ are linearly independent if and only if (iff) they do not lie on the same line when they have the initial points at the origin. Otherwise, one would be a scalar multiple of the other.
vect2D[vec_] := Grid[{{"Linearly\nDependent", "Linearly\nDependent", "Linearly\nIndependent"}, {Graphics[{Thickness[.04], Red, Arrowheads[.15], Arrow[{{0, 0}, vec}], Thick, Blue, Arrow[{{0, 0}, 2 vec}], Text[Style["v1", Bold, Red], {.85, 1}*vec], Text[Style["v2", Bold, Blue], {1.8, 2} vec]}, Axes -> True], Graphics[{Thickness[.02], Red, Arrowheads[.15], Arrow[{{0, 0}, vec}], Thick, Blue, Arrow[{{0, 0}, -2 vec}], Text[Style["v1", Bold, Red], {.85, 1}*vec], Text[Style["v3", Bold, Blue], vec*{-1.7, -1}]}, Axes -> True], Graphics[{Thickness[.02], Red, Arrowheads[.15], Arrow[{{0, 0}, vec}], Thick, Blue, Arrow[{{0, 0}, {2, 2/3}*vec}], Text[Style["v1", Bold, Red], {.9, 1}*vec], Text[Style["v4", Bold, Blue], {1.8, 2/3}*vec]}, Axes -> True]}}, Frame -> All]; vect2D[{2, 3}]

 Linearly dependent. Linearly dependent. Linearly independent.

• Three vectors in ℝ³ are linearly independent if and only if they do not lie in the same plane when they have their initial points at the origin. Otherwise, one of them would be a linear combination of the other two.
Clear[v, gr1, gr2, gr3, plane1];
v = {2, 2, 2}; plane1 = Plot3D[3*a - 2*b, {a, -3, 4}, {b, -3, 4}, PlotStyle -> {Blue, Opacity[0.5]}, Mesh -> None, Axes -> False]; gr1 = Graphics3D[{Black, Arrowheads[0.02], Thickness[0.01], Arrow[{{0, 0, 0}, v}], Arrow[{{0, 0, 0}, v*{-1, 1, -5}}], Arrow[{{0, 0, 0}, v*{.5, 1, .5}}], White, Text["v", v*{.85, 1.1, 1}], Text["u", v*{-.95, .8, -5}], Text["w", v*{.25, 1.05, .5}]}];
gr2 = Graphics3D[{Black, Arrowheads[0.02], Thickness[0.01], Arrow[{{0, 0, 0}, v}], Arrow[{{0, 0, 0}, v*{-1, 1, -5}}], Arrow[{{0, 0, 0}, 2*v}], White, Text["v", v*{.85, 1.1, 1}], Text["u", v*{-.95, .8, -5}], Text["w1", {3.4, 2.9, 3.5}]}]; v1 = {2, 3, 5}; u1 = {-2, 4, -10}; w1 = {4, 1, 2}; (* w1 was undefined; any vector outside the plane of v1 and u1 works *) gr3 = Graphics3D[{Black, Arrowheads[0.02], Thickness[0.01], Arrow[{{0, 0, 0}, v1}], Arrow[{{0, 0, 0}, u1}], Arrow[{{0, 0, 0}, w1}], White, Text["v1", {1.4, 2.9, 4.5}], Text["u1", {-2, 3.3, -10}], Text["w1", {3.7, 2.5, 3.5}]}];
Grid[{{"Linearly\nDependent", "Linearly\nDependent", "Linearly\nIndependent"}, {Show[plane1, gr1], Show[plane1, gr2, PlotRange -> All], Show[plane1, gr3, PlotRange -> All, ViewPoint -> {-1.565, -1.627, 2.521}]}}, Frame -> All]

 Plane with three linearly dependent vectors. Plane with two of the vectors. Linearly independent vectors.

Theorem 2: Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.

Example 14: The span of the empty set $$\varnothing$$ consists of a unique element 0. Therefore, $$\varnothing$$ is linearly independent and it is a basis for the trivial vector space consisting of the unique element, the zero vector. Its dimension is zero.
End of Example 14
In ℝn, for example, the span of a single vector is the line through the origin in the direction of that vector, and the span of two non-parallel vectors is the plane containing the origin and those vectors. When we work in higher-dimensional vector spaces, we lose much of this geometric interpretation, but algebraically spans still work much like they do in ℝn. For example, span(1, x, x²) = ℝ≤2[x] (the vector space of real-valued polynomials with degree at most 2) since every polynomial p ∈ ℝ≤2[x] can be written in the form p(x) = c₀ + c₁x + c₂x² for some c₀, c₁, c₂ ∈ ℝ. Indeed, this is exactly what it means for a polynomial to have degree at most 2. More generally, span(1, x, … , xm) = ℝ≤m[x].
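The coordinate view of this span can be sketched in a few lines of Python: relative to the spanning set {1, x, x²}, a polynomial is determined by its coefficient triple (c₀, c₁, c₂). The helper `poly_from_coords` and the sample coefficients are hypothetical, chosen only for illustration.

```python
def poly_from_coords(coords):
    """Build the polynomial sum(c_k * x**k) from its coordinates w.r.t. {1, x, x^2, ...}."""
    return lambda x: sum(c * x ** k for k, c in enumerate(coords))

p = poly_from_coords([3, -2, 5])  # p(x) = 3 - 2x + 5x^2
print(p(2))  # 3 - 4 + 20 = 19
```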

Theorem 3: The span of any subset S of a vector space V is a subspace of V. Moreover, any subspace of V that contains S must also contain the span of S.

This result is immediate if S = ∅ because span( ∅ ) = { 0 }, which is a subspace that is contained in any subspace of V.

If S ≠ ∅, then S contains an element z. So 0z = 0 is an element of span( S ). Let x,y ∈ span( S ). Then there exist elements u1, u2, ... , um, v1, v2, ... , vn, in S and scalars a1, a2, ... , am, b1, b2, ... , bn such that

${\bf x} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_m {\bf u}_m \quad\mbox{and}\quad {\bf y} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_n {\bf v}_n .$
Then
${\bf x} + {\bf y} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_m {\bf u}_m + b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_n {\bf v}_n ,$
and for any scalar c
$c\,{\bf x} = \left( c\,a_1 \right) {\bf u}_1 + \left( c\,a_2 \right) {\bf u}_2 + \cdots + \left( c\,a_m \right) {\bf u}_m$
are clearly linear combinations of the elements of S; so x + y and cx are elements of span( S ). Thus span( S ) is a subspace of V.

Now let W denote any subspace of V that contains S. If w ∈ span( S ), then w has the form w = c1w1 + c2w2 + ... + ckwk for some elements w1, w2, ... , wk in S and some scalars c1, c2, ... , ck. Since SW, we have w1, w2, ... , wkW. Therefore, w = c1w1 + c2w2 + ... + ckwk is an element of W. Since w, an arbitrary element of span( S ), belongs to W, it follows that span( S ) ⊆ W, completing the proof. ■
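Theorem 3 suggests a computational membership test: w ∈ span(S) exactly when adjoining w to S does not increase the rank. Below is a minimal Python sketch with hypothetical helpers `rank` and `in_span` (not library functions) and illustrative data, using exact rational arithmetic from the standard `fractions` module.

```python
from fractions import Fraction

def rank(rows):
    """Gauss-Jordan elimination over the rationals; returns the number of pivots."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(S, w):
    """w lies in span(S) iff adjoining w leaves the rank unchanged."""
    return rank(S + [w]) == rank(S)

S = [[1, 0, 2], [0, 1, 1]]
print(in_span(S, [2, 3, 7]))  # True: 2*(1,0,2) + 3*(0,1,1) = (2,3,7)
print(in_span(S, [0, 0, 1]))  # False: (0,0,1) is not a combination of S
```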

Example 15: It is important to keep in mind that linear combinations are always finite, even if the generating set S is not. In vector spaces, infinite sums are not defined; to incorporate them, vector spaces must be equipped with additional structure, called a topology (see Part 5 of this tutorial). To illustrate this point, consider the vector space ℝ[x] = span(1, x, x², x³, …), which is the set of all polynomials (of any degree). If we recall from calculus that we can represent the function f(x) = sin(x) as its Taylor series $\sin x = \sum_{n\ge 0} (-1)^n \frac{x^{2n+1}}{(2n+1)!} ,$ we might expect that sin(x) ∈ ℝ[x], since we have written f(x) as a sum of scalar multiples of 1, x, x², x³, and so on. However, sin(x) ∉ ℝ[x] because sin(x) can only be written as an infinite sum of polynomials, not a finite one.
End of Example 15

Theorem 4: Let S, T be nonempty subsets of a vector space V over a field 𝔽, then

1. If ST, then span(S) ⊆ span(T);
2. span(span(S)) = span(S).
1. Assume that ST. Then ST ⊆ span(T), and hence S ⊆ span(T). According to Theorem 3, the span of S is the smallest subspace of V containing S. So span(S) ⊆ span(T).
2. Obviously span(S) ⊆ span(span(S)). Now let v ∈ span(span(S)). Then $$\displaystyle {\bf v} = \sum_{1 \le j \le m} c_j {\bf v}_j ,$$ with cj ∈ 𝔽 and vj ∈ span(S). On the other hand, each vj ∈ span(S) can be written as $$\displaystyle {\bf v}_j = \sum_{1 \le k \le p} b_{jk} {\bf u}_k ,$$ with bjk ∈ 𝔽 and ukS. This shows that every vector v ∈ span(span(S)) can be expanded as a linear combination of elements ukS. Therefore, span(span(S)) ⊆ span(S).

Theorem 5: Let S be a linearly independent subset of a vector space V, and let v be an element of V that is not in S. Then $$S \cup \{ {\bf v} \}$$ is linearly dependent if and only if v belongs to the span of the set S.

If $$S \cup \{ {\bf v} \}$$ is linearly dependent, then there are vectors $${\bf u}_1 , \ {\bf u}_2 , \ \ldots , \ {\bf u}_n$$ in $$S \cup \{ {\bf v} \}$$ such that $$a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n = {\bf 0}$$ for some scalars $$a_1 , a_2 , \ldots , a_n ,$$ not all zero. Because S is linearly independent, one of the ui's, say u1, equals v; moreover, a₁ ≠ 0, for otherwise we would have a nontrivial dependence among elements of S alone. Thus, $$a_1 {\bf v} + a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n = {\bf 0} ,$$ and so
${\bf v} = a_1^{-1} \left( -a_2 {\bf u}_2 - \cdots - a_n {\bf u}_n \right) = - \left( a_1^{-1} a_2 \right) {\bf u}_2 - \cdots - \left( a_1^{-1} a_n \right) {\bf u}_n .$
Since v is a linear combination of u1, ... , un, which are elements of S, we conclude that v belongs to the span of S.

Conversely, let $${\bf v} \in \mbox{span}(S) .$$ Then there exist vectors v1, ... , vm in S and scalars b1, b2, ... , bm such that $${\bf v} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_m {\bf v}_m .$$ Hence,

${\bf 0} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_m {\bf v}_m + (-1)\,{\bf v} .$
Since $${\bf v} \ne {\bf v}_i$$ for $$i=1,2,\ldots , m ,$$ the coefficient of v in this linear combination is nonzero, and so the set $$\left\{ {\bf v} , {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_m \right\}$$ is linearly dependent. Therefore, $$S \cup \{ {\bf v} \}$$ is linearly dependent.

The next example demonstrates how Mathematica can determine a basis, that is, a set of linearly independent vectors, from a given set. Note that a basis is not unique; even changing the order of the vectors may lead the software to produce a different set of linearly independent vectors.

Example 16: Suppose we are given four linearly dependent vectors:

MatrixRank[m = {{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {1, 2, 1, -3, 1, 1}, {3, 6, 1, -9, 4, 3}}]

Out[1]= 3

Then each of the following scripts determines a subset of linearly independent vectors:

m[[ Flatten[ Position[#, Except[0, _?NumericQ], 1, 1]& /@
Last @ QRDecomposition @ Transpose @ m ] ]]

Out[2]= {{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {3, 6, 1, -9, 4, 3}}

or, using subroutine

MinimalSublist[x_List] :=
Module[{tm, ntm, ytm, mm = x}, {tm = RowReduce[mm] // Transpose,
ntm = MapIndexed[{#1, #2, Total[#1]} &, tm, {1}],
ytm = Cases[ntm, {___, ___, d_ /; d == 1}]};
Cases[ytm, {b_, {a_}, c_} :> mm[[All, a]]] // Transpose]

we apply it to a similar set of vectors.

m1 = {{1, 2, 0, -3, 1, 0}, {1, 2, 1, -3, 1, 2}, {1, 2, 0, -3, 2, 1}, {3, 6, 1, -9, 4, 3}};
MinimalSublist[m1]
Out[3]= {{1, 0, 1}, {1, 1, 1}, {1, 0, 2}, {3, 1, 4}}

The function selects a maximal set of linearly independent columns of m1 and returns them as the columns of the output; transposing displays each selected column as a row:

{{1, 1, 1, 3}, {0, 1, 0, 1}, {1, 1, 2, 4}}
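Outside Mathematica, the same row selection can be sketched in Python: greedily keep each row that raises the rank, using exact rational arithmetic from the standard `fractions` module. On the matrix m from the start of this example, this keeps rows 1, 2, and 4, matching the QR-based selection above; `rank` and `independent_rows` are hypothetical helpers, not library functions.

```python
from fractions import Fraction

def rank(rows):
    """Gauss-Jordan elimination over the rationals; returns the number of pivots."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent_rows(rows):
    """Keep each row that strictly increases the rank: a maximal independent subset."""
    kept = []
    for row in rows:
        if rank(kept + [row]) > rank(kept):
            kept.append(row)
    return kept

m = [[1, 2, 0, -3, 1, 0], [1, 2, 2, -3, 1, 2], [1, 2, 1, -3, 1, 1], [3, 6, 1, -9, 4, 3]]
print(independent_rows(m))  # rows 1, 2, and 4
```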

One can also use the function LinearlyIndependent from the Wolfram Function Repository, via ResourceFunction["LinearlyIndependent"]; it is not a built-in command.
End of Example 16
1. Are the following 2×2 matrices $$\begin{bmatrix} -3&2 \\ \phantom{-}1& 2 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}6&-4 \\ -2&-4 \end{bmatrix}$$ linearly dependent or independent?
2. In each part, determine whether the vectors are linearly independent or linearly dependent in ℝ³.
1. (2 ,-3, 1), (-1, 4, 5), (3, 2, -1);
2. (1, -2, 0), (-2, 3, 2), (4, 3, 2);
3. (7, 6, 5), (4, 3, 2), (1, 1, 1), (1, 2, 3);
3. In each part, determine whether the vectors are linearly independent or linearly dependent in ℝ⁴.
1. (8, −9, 6, 5), (1,−3, 7, 1), (1, 2, 0, −3);
2. (2, 0, 2, 8), (2, 1, 0, 6), (1, −2, 5, 8);
4. In each part, determine whether the vectors are linearly independent or linearly dependent in the space ℝ≤3[x] of all polynomials of degree up to 3.
1.    {3, x + 4, x³ −5x, 6}.
2.    {0, 2x, 3 −x, x³}.
3.    {x³ − 2x, x³ + 2x, x + 1, x − 3}.
5. In each part, determine whether the 2×2 matrices are linearly independent or linearly dependent.
1. $$\begin{bmatrix} 1&\phantom{-}0 \\ 2& -1 \end{bmatrix} , \ \begin{bmatrix} 0&\phantom{-}5 \\ 1&-5 \end{bmatrix} , \ \begin{bmatrix} -2&-1 \\ \phantom{-}1&\phantom{-}3 \end{bmatrix} ;$$
2. $$\begin{bmatrix} -1&0 \\ \phantom{-}1& 2 \end{bmatrix} , \ \begin{bmatrix} 1&2 \\ 2&1 \end{bmatrix} , \ \begin{bmatrix} 0&1 \\ 2&1 \end{bmatrix} ;$$
3. $$\begin{bmatrix} -2&9 \\ \phantom{-}3& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}1&7 \\ -2&3 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}4&9 \end{bmatrix}$$
6. Determine all values of k for which the following matrices are linearly dependent in ℝ2,2, the space of all 2×2 matrices.
1. $$\begin{bmatrix} 1&2 \\ 0& 0 \end{bmatrix} , \ \begin{bmatrix} k&0 \\ 4&0 \end{bmatrix} , \ \begin{bmatrix} -1&k-2 \\ \phantom{-}k&0 \end{bmatrix} ;$$
2. ?????
3. $$\begin{bmatrix} -1&9 \\ \phantom{-}3& 4 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}5&6 \\ -3&1 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}1&7 \end{bmatrix}$$
4. $$\begin{bmatrix} -2&9 \\ \phantom{-}3& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}1&7 \\ -2&3 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}4&9 \end{bmatrix}$$
7. In each part, determine whether the three vectors lie in a plane in ℝ³.
1. Coffee
2. Tea
3. Milk
8. Determine whether the given vectors v₁ , v₂ , and v₃ form a linearly dependent or independent set in ℝ³.
1. v₁ = (−3, 0, 4), v₂ = (5, −1, 2), and v₃ = (3, 3, 9);
2. v₁ = (−4, 0, 2), v₂ = (3, 2, 5), and v₃ = (6, −1, 1);
3. v₁ = (0, 0, 1), v₂ = (0, 5, −8), and v₃ = (−4, 3, 1);
4. v₁ = (−2, 3, 1), v₂ = (1, −2, 4), and v₃ = (2, 4, 1);
5. v₁ = (−5, 7, 8), v₂ = (−1, 1, 3), and v₃ = (1, 4, −7).
9. Determine for which values of k the vectors x² + 2x + k, 5x² + 2kx + k², kx² + x + 3 generate ℝ≤2[x].
10. Given the vectors ${\bf v} = \begin{pmatrix} 2 \\ 2 \end{pmatrix}, \quad {\bf u} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad {\bf w} = \begin{pmatrix} 1 \\ -1 \end{pmatrix},$ determine if they are linearly independent and determine the subspace generated by them.
11. Determine whether x³ − x belongs to span(x³ + x² + x, x² + 2x, x²).
12. Find the value of k for which the set of vectors is linearly dependent.
1. $$\begin{bmatrix} -2&4 \\ \phantom{-}4& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}6&7 \\ -1&8 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}5&1 \end{bmatrix}$$
2. $$\begin{bmatrix} -1&9 \\ \phantom{-}3& 4 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}5&6 \\ -3&1 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}1&7 \end{bmatrix}$$
3. $$\begin{bmatrix} -2&9 \\ \phantom{-}3& 5 \end{bmatrix} , \ \begin{bmatrix} \phantom{-}1&7 \\ -2&3 \end{bmatrix} , \ \begin{bmatrix} -2&8 \\ \phantom{-}4&9 \end{bmatrix}$$
13. Are the vectors v₁ , v₂ , and v₃ in part (a) of the accompanying figure linearly independent? What about those in part (b) ?

(Figure: Anton, page 211, Exercise 15.)

14. Find the solution of the system with parameter k: $\begin{split} kx - ky + 2z &= 0, \\ x - z &= 1, \\ 2x + 3ky -11z &= -1. \end{split}$

1. Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
2. Beezer, R.A., A First Course in Linear Algebra, 2017.