Vladimir Dobrushkin
https://math.uri.edu/~dobrush/

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the appendix entitled GNU Free Documentation License.

Bases and Dimension

A basis β for a vector space V is a linearly independent subset of V that generates (or spans) V. If β is a basis for V, we also say that the elements of β form a basis for V. This means that every vector from V is a finite linear combination of elements from the basis.

  Recall that a set of vectors β is said to generate or span a vector space V if every element from V can be represented as a linear combination of vectors from β.

Example: The span of the empty set \( \varnothing \) consists of a single element, the zero vector 0. Therefore, \( \varnothing \) is linearly independent and it is a basis for the trivial vector space consisting of the unique element---zero. Its dimension is zero. ■


Example: In \( \mathbb{R}^n , \) the vectors \( {\bf e}_1 =[1,0,0,\ldots , 0] , \quad {\bf e}_2 =[0,1,0,\ldots , 0], \quad \ldots , \quad {\bf e}_n =[0,0,\ldots , 0,1] \) form a basis for n-dimensional real space, called the standard basis. Its dimension is n. ■
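This is easy to verify numerically. In the following minimal sketch (assuming n = 4 for concreteness), the standard basis vectors are the rows of the identity matrix, and linear independence of n vectors in \( \mathbb{R}^n \) is equivalent to the matrix they form having rank n.

n = 4;
MatrixRank[IdentityMatrix[n]] == n    (* True: the n standard basis vectors are linearly independent *)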


Example: Let us consider the set of all real \( m \times n \) matrices, and let \( {\bf M}_{i,j} \) denote the matrix whose only nonzero entry is a 1 in the i-th row and j-th column. Then the set \( \left\{ {\bf M}_{i,j} \ : \ 1 \le i \le m , \ 1 \le j \le n \right\} \) is a basis for the set of all such real matrices. Its dimension is mn. ■
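This basis is easy to generate in Mathematica. The following sketch (assuming m = 2 and n = 3 for concreteness) builds each \( {\bf M}_{i,j} \) as a sparse array with a single unit entry and confirms that there are mn of them.

{m, n} = {2, 3};
basis = Flatten[Table[SparseArray[{i, j} -> 1, {m, n}], {i, m}, {j, n}], 1];
Length[basis]    (* mn = 6 *)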


Example: The set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \) forms a basis for the vector space of all polynomials of degree at most n. Its dimension is n+1. ■
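The coordinates of a polynomial with respect to the monomial basis are simply its coefficients, which Mathematica extracts with CoefficientList:

CoefficientList[3 + 2 x - 5 x^3, x]    (* {3, 2, 0, -5}: coordinates with respect to {1, x, x^2, x^3} *)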


Example: The infinite set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \) forms a basis for the vector space of all polynomials; hence this space is infinite-dimensional. ■

 

Theorem: Let V be a vector space and \( \beta = \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n \right\} \) be a subset of V. Then β is a basis for V if and only if each vector v in V can be uniquely decomposed into a linear combination of vectors in β, that is, can be uniquely expressed in the form

\[ {\bf v} = \alpha_1 {\bf u}_1 + \alpha_2 {\bf u}_2 + \cdots + \alpha_n {\bf u}_n \]

for unique scalars \( \alpha_1 , \alpha_2 , \ldots , \alpha_n . \)

Let β be a basis for V. If \( {\bf v} \in V , \) then v belongs to the span of the basis β because β is a basis for V, so it generates this vector space. Thus, v is a linear combination of the elements of β. Suppose that
\[ {\bf v} = \alpha_1 {\bf u}_1 + \alpha_2 {\bf u}_2 + \cdots + \alpha_n {\bf u}_n \qquad\mbox{and} \qquad {\bf v} = \beta_1 {\bf u}_1 + \beta_2 {\bf u}_2 + \cdots + \beta_n {\bf u}_n \]
are two such representations of v. Subtracting the second equation from the first gives
\[ {\bf 0} = \left(\alpha_1 - \beta_1 \right) {\bf u}_1 + \left( \alpha_2 - \beta_2 \right) {\bf u}_2 + \cdots + \left( \alpha_n - \beta_n \right) {\bf u}_n . \]
Since β consists of linearly independent vectors, it follows that \( \alpha_1 - \beta_1 = \alpha_2 - \beta_2 = \cdots = \alpha_n - \beta_n =0 . \) Hence, \( \alpha_1 = \beta_1 , \ \alpha_2 = \beta_2 , \ \ldots , \ \alpha_n = \beta_n , \) and so v is uniquely expressible as a linear combination of the elements of β.

Now we prove the converse. Suppose that every vector from V is uniquely expressed as a linear combination of elements from β. It remains to prove that the vectors from β are linearly independent, because we know that the set of vectors from β generates the vector space. Suppose that this statement is not true and one vector from β, say u1, is expressed as a linear combination of the other elements:
\[ {\bf u}_1 = a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n . \]
Then arbitrary vector v from V will have two distinct linear combination representations:
\begin{eqnarray*} {\bf v} &=& b_1 {\bf u}_1 + b_2 {\bf u}_2 + \cdots + b_n {\bf u}_n \\ &=& b_1 \left( a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n \right) + b_2 {\bf u}_2 + \cdots + b_n {\bf u}_n \\ &=& \left( b_1 a_2 + b_2 \right) {\bf u}_2 + \left( b_1 a_3 + b_3 \right) {\bf u}_3 + \cdots + \left( b_1 a_n + b_n \right) {\bf u}_n . \end{eqnarray*}
Choosing, for instance, \( {\bf v} = {\bf u}_1 \) (so that \( b_1 = 1 \) and \( b_2 = \cdots = b_n = 0 \) ) gives two distinct representations of v, contradicting the uniqueness assumption. Hence the vectors of β are linearly independent, and β is a basis for V.

  If the vectors \( \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n \right\} \) form a basis for a vector space V, then every vector in V can be uniquely expressed in the form

\[ {\bf v} = \alpha_1 {\bf u}_1 + \alpha_2 {\bf u}_2 + \cdots + \alpha_n {\bf u}_n \]
for appropriately chosen scalars \( \alpha_1 , \alpha_2 , \ldots , \alpha_n . \) Therefore, v determines a unique n-tuple of scalars \( \left[ \alpha_1 , \alpha_2 , \ldots , \alpha_n \right] \) and, conversely, each n-tuple of scalars determines a unique vector \( {\bf v} \in V \) by using the entries of the n-tuple as the coefficients of a linear combination of \( {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n . \) This fact suggests that V is like the n-dimensional vector space \( \mathbb{R}^n , \) where n is the number of vectors in the basis for V.
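Computing the coordinates of a vector with respect to a given basis amounts to solving a linear system. Here is a minimal sketch in Mathematica, with a hypothetical basis of \( \mathbb{R}^3 \) chosen for illustration; LinearSolve finds the unique scalars \( \alpha_i \) once the basis vectors are arranged as the columns of a matrix.

basis = {{1, 0, 1}, {0, 1, 1}, {1, 1, 0}};   (* an assumed basis of R^3 *)
v = {2, 3, 5};
coords = LinearSolve[Transpose[basis], v]    (* the unique coefficients {α1, α2, α3} *)
coords . basis == v                          (* True: the combination reproduces v *)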

Theorem: Let S be a linearly independent subset of a vector space V, and let v be an element of V that is not in S. Then \( S \cup \{ {\bf v} \} \) is linearly dependent if and only if v belongs to the span of the set S.

If \( S \cup \{ {\bf v} \} \) is linearly dependent, then there are vectors \( {\bf u}_1 , \ {\bf u}_2 , \ \ldots , \ {\bf u}_n \) in \( S \cup \{ {\bf v} \} \) such that \( a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n = {\bf 0} \) for some scalars \( a_1 , a_2 , \ldots , a_n , \) not all zero. Because S is linearly independent, one of the ui's, say u1, equals v; moreover, \( a_1 \ne 0 , \) for otherwise the remaining relation would contradict the linear independence of S. Thus, \( a_1 {\bf v} + a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n = {\bf 0} , \) and so
\[ {\bf v} = a_1^{-1} \left( -a_2 {\bf u}_2 - \cdots - a_n {\bf u}_n \right) = - \left( a_1^{-1} a_2 \right) {\bf u}_2 - \cdots - \left( a_1^{-1} a_n \right) {\bf u}_n . \]
Since v is a linear combination of u2, ... , un, which are elements of S, we conclude that v belongs to the span of S.

Conversely, let \( {\bf v} \in \mbox{span}(S) . \) Then there exist vectors v1, ... , vm in S and scalars b1, b2, ... , bm such that \( {\bf v} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_m {\bf v}_m . \) Hence,

\[ {\bf 0} = b_1 {\bf v}_1 + b_2 {\bf v}_2 + \cdots + b_m {\bf v}_m + (-1)\,{\bf v} . \]
Since \( {\bf v} \ne {\bf v}_i \) for \( i=1,2,\ldots , m , \) the coefficient of v in this linear combination is nonzero, and so the set \( \left\{ {\bf v} , {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_m \right\} \) is linearly dependent. Therefore, \( S \cup \{ {\bf v} \} \) is linearly dependent.
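In numerical terms, the theorem says that appending v to the vectors of S leaves the rank unchanged exactly when v lies in span(S). A minimal sketch in Mathematica, with a hypothetical set S chosen for illustration:

S = {{1, 0, 0}, {0, 1, 0}};                 (* a linearly independent set in R^3 *)
v = {2, 3, 0};                              (* v lies in span(S) *)
MatrixRank[Append[S, v]] == MatrixRank[S]   (* True: S ∪ {v} is linearly dependent *)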

Theorem: If a vector space V is generated by a finite set S, then some subset of S is a basis for V.

If \( S = \varnothing \ \mbox{or } \ S = \{ 0 \} , \) then \( V = \{ 0 \} \) and \( \varnothing \) is a subset of S that is a basis for V. Otherwise, S contains a nonzero element u1, and the set \( \{ {\bf u}_1 \} \) is linearly independent. Continue, if possible, choosing elements \( {\bf u}_2 , \ldots , {\bf u}_k \) in S so that \( \beta = \{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_k \} \) is linearly independent and adjoining any further element of S to β destroys linear independence; since S is finite, this process terminates. By the preceding theorem, every element of S belongs to span(β), so \( \mbox{span}(\beta ) \supseteq \mbox{span}(S) = V \) and β is a basis for V.

The next example demonstrates how Mathematica can determine a basis, or a set of linearly independent vectors, from a given set. Note that a basis is not unique: even changing the order of the vectors may lead the software to a different set of linearly independent vectors.

Example: Suppose we are given four vectors; the rank of the matrix they form shows that they are linearly dependent:

MatrixRank[m = {{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {1, 2, 1, -3, 1, 1}, {3, 6, 1, -9, 4, 3}}]

Out[1]= 3

Since the rank is 3, only three of the four vectors are linearly independent. Each of the following scripts determines a subset of linearly independent vectors:

(* select the rows of m corresponding to the first nonzero entry in each row of the R factor of the QR decomposition of Transpose[m] *)
m[[ Flatten[ Position[#, Except[0, _?NumericQ], 1, 1]& /@
Last @ QRDecomposition @ Transpose @ m ] ]]

Out[2]= {{1, 2, 0, -3, 1, 0}, {1, 2, 2, -3, 1, 2}, {3, 6, 1, -9, 4, 3}}

or, using subroutine

MinimalSublist[x_List] :=
 Module[{tm, ntm, ytm, mm = x},
  {tm = RowReduce[mm] // Transpose,                  (* columns of the reduced row echelon form *)
   ntm = MapIndexed[{#1, #2, Total[#1]} &, tm, {1}], (* tag each column with its index and the sum of its entries *)
   ytm = Cases[ntm, {___, ___, d_ /; d == 1}]};      (* keep the columns whose entries sum to 1, the pivot columns *)
  Cases[ytm, {b_, {a_}, c_} :> mm[[All, a]]] // Transpose]  (* extract the corresponding columns of the original matrix *)

We apply it to a set of four vectors:

m1 = {{1, 2, 0, -3, 1, 0}, {1, 2, 1, -3, 1, 2}, {1, 2, 0, -3, 2, 1}, {3, 6, 1, -9, 4, 3}};
MinimalSublist[m1]

Out[3]= {{1, 0, 1}, {1, 1, 1}, {1, 0, 2}, {3, 1, 4}}

The output is the submatrix of m1 formed by its linearly independent columns; transposing it lists these columns as vectors: {{1, 1, 1, 3}, {0, 1, 0, 1}, {1, 1, 2, 4}}.

Note that the standard Mathematica command IndependenceTest performs a statistical hypothesis test and is not related to linear independence of vectors. ■


A vector space is called finite-dimensional if it has a basis consisting of a finite number of elements. It can be shown that every basis for such a space contains the same number of elements; this unique number is called the dimension of V and is denoted by dim(V). A vector space that is not finite-dimensional is called infinite-dimensional.

Example: Over the field of complex numbers, the vector space \( \mathbb{C} \) of all complex numbers has dimension 1 because its basis consists of one element \( \{ 1 \} . \)

Over the field of real numbers, the vector space \( \mathbb{C} \) of all complex numbers has dimension 2 because its basis consists of two elements \( \{ 1, {\bf j} \} . \) ■
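As a small illustration, the coordinates of a complex number with respect to the basis \( \{ 1, {\bf j} \} \) over the reals are just its real and imaginary parts, which Mathematica returns with ReIm:

ReIm[3 + 4 I]    (* {3, 4}: the coordinates of 3 + 4j in the basis {1, j} *)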