This section establishes a connection between vector spaces and shows that every finite-dimensional space is equivalent to 𝔽n, where 𝔽 is either ℚ (rational numbers), ℝ (real numbers), or ℂ (complex numbers). This allows us to extend vector operations from 𝔽n to a vector space. Therefore, any finite-dimensional vector space has the same algebraic structure as 𝔽n, even though its vectors may not be expressed as n-tuples.
Isomorphism
We have already studied invertible square matrices; let us now generalize those results
to the broader context of vector spaces. Recall that every finite-dimensional vector space V has a basis β, and we can
use that basis to represent a vector v ∈ V as a coordinate vector [v]β ∈ 𝔽n,
where 𝔽 is the ground field (either ℚ or ℝ or ℂ). We used this correspondence between V and
𝔽n to motivate the idea that these vector spaces are “the same” in the sense
that, in order to do a linear algebraic calculation in V, we can instead do the
corresponding calculation on coordinate vectors in 𝔽n.
We now make this idea of vector spaces being “the same” a bit more precise
and clarify under exactly which conditions this “sameness” happens.
A linear transformation T : V ⇾ W that is both one-to-one and onto is said to be an isomorphism, and W is said to be isomorphic to V, which is abbreviated as V ≌ W.
We can think of isomorphic vector spaces as having the same structure and
the same vectors as each other, but different labels on those vectors.
Note that if T : V ⇾ W is an isomorphism, then the inverse T−1: W ⇾ V is
linear, hence also an isomorphism.
Theorem 1:
Two finite-dimensional vector spaces V and W are isomorphic
precisely when they have the same dimension.
Take any isomorphism T : V ⇾ W. Since T is injective, ker T = {0}, so dim ker T = 0. Since T is surjective, im T = W, so dim im T = dim W. Hence, by the rank-nullity theorem,
\[
\dim V = \dim\,\mbox{im}\,T + \dim\,\ker\,T = \dim W.
\]
Conversely, if V and W have the same dimension n, take bases { ei} of V and { ui} of W. Since they have the same number of elements by assumption, we may index them by the same index set 1 ≤ i ≤ n. The mapping
\[
{\bf x} = \sum_{i=1}^n x_i {\bf e}_i \ \mapsto \ {\bf y} = \sum_{i=1}^n x_i {\bf u}_i
\]
defines an isomorphism between V and W.
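As a numerical sketch of this construction (the two bases below are hypothetical choices in ℝ²), the map Σ xᵢeᵢ ↦ Σ xᵢuᵢ is realized by the matrix U E⁻¹, where the basis vectors are stored as the columns of E and U:

```python
import numpy as np

# Two (hypothetically chosen) bases of R^2, stored as columns of E and U.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # basis e_1, e_2
U = np.array([[2.0, 0.0],
              [1.0, 3.0]])   # basis u_1, u_2

# The map sum x_i e_i -> sum x_i u_i is represented by the matrix U E^{-1}:
# recover the coordinates x_i in the e-basis, then rebuild the image from
# the u-basis with the same coordinates.
T = U @ np.linalg.inv(E)

x = np.array([3.0, 5.0])
coords = np.linalg.solve(E, x)   # coordinates x_i of x in the e-basis
y = U @ coords                   # image sum x_i u_i
assert np.allclose(T @ x, y)

# T is invertible (an isomorphism): its determinant is nonzero.
assert abs(np.linalg.det(T)) > 1e-12
```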
Example 1:
Let us start with the vector space 𝔽m,n of rectangular m×n matrices with entries from the ground field 𝔽. Our first example concerns the simplest case, 1×1 matrices [𝑎]. The obvious mapping [𝑎] ↦ 𝑎 establishes an isomorphism 𝔽1,1 ⇾ 𝔽.
Next, consider the direct product ℝn = ℝ × ℝ × ⋯ × ℝ, which consists of ordered n-tuples (x1, … , xn) ∈ ℝn. We can organize these n-tuples as row vectors or as column vectors, which shows that these three vector spaces are isomorphic: ℝn ≌ ℝ1,n ≌ ℝn,1.
The fact that we write the entries of vectors in 𝔽1,n in a row whereas we
write those from 𝔽n,1 in a column is often just as irrelevant as if we used a
different font when writing the entries of the vectors in one of these vector
spaces. Indeed, vector addition and scalar multiplication in these spaces are
both performed entrywise, so it does not matter how we arrange or order those
entries. Moreover, we can identify these vectors with diagonal square matrices, or write them in some other fancy way (say, in a circle); the resulting objects will again be isomorphic to 𝔽n.
The fact that 𝔽n, 𝔽1,n, and 𝔽n,1 are isomorphic justifies something that
is typically done right from the beginning in linear algebra—treating members of 𝔽n (vectors), members of 𝔽n,1 (column vectors), and members of 𝔽1,n (row vectors) as the same thing.
Furthermore, the set 𝔽m,n of m × n matrices is isomorphic to the set 𝔽n,m of n × m matrices via the transpose map, because transposing
twice gets us back to where we started.
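A quick numerical check (with arbitrary sample matrices) illustrates that transposition preserves the vector-space operations and undoes itself:

```python
import numpy as np

# Transposition F^{m,n} -> F^{n,m}: linear (it respects addition and
# scalar multiplication) and invertible, since (A^T)^T = A.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # a sample 2x3 matrix
B = np.array([[0.0, 1.0, 0.0],
              [2.0, 0.0, 2.0]])

assert np.array_equal((A + B).T, A.T + B.T)   # additivity
assert np.array_equal((5 * A).T, 5 * A.T)     # homogeneity
assert np.array_equal(A.T.T, A)               # transpose is its own inverse
```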
Let us consider two four-dimensional vector spaces, 𝔽4 of 4-tuples and 𝔽2,2 of 2×2 matrices.
They are isomorphic vector spaces because there is a natural isomorphism
between them:
\[
\left[ \begin{array}{c} a \\ b \\ c \\ d \end{array} \right] \, \mapsto \, \begin{bmatrix} a & b \\ c & d \end{bmatrix} .
\]
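In NumPy terms, this isomorphism is just a reshape; the sketch below (with hypothetical sample vectors) checks that the map and its inverse preserve the vector-space operations:

```python
import numpy as np

# The map (a, b, c, d) -> [[a, b], [c, d]] is exactly a row-major reshape;
# its inverse flattens the matrix back.  Both directions are linear.
def to_matrix(v):
    return v.reshape(2, 2)

def to_vector(M):
    return M.reshape(4)

v = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([10.0, 20.0, 30.0, 40.0])

# Vector addition and scalar multiplication are preserved:
assert np.array_equal(to_matrix(v + w), to_matrix(v) + to_matrix(w))
assert np.array_equal(to_matrix(3 * v), 3 * to_matrix(v))
assert np.array_equal(to_vector(to_matrix(v)), v)   # inverse map
```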
End of Example 1
■
When we speak about isomorphisms and vector spaces being “the
same”, we only mean with respect to the 8 defining properties of vector
spaces (i.e., properties based on vector addition and scalar multiplication).
We can add column vectors in the exact same way that we add row vectors, or we can add diagonal matrices,
and similarly scalar multiplication works the exact same for those three
types of vectors. However, other operations like matrix multiplication may
behave differently on these three sets (e.g., if A ∈ 𝔽m,n, then A x makes sense when x is a column vector, but not when it is a row vector).
As an even simpler example of an isomorphism, we have implicitly been
using one when we say things like vTu = v · u for all v, u ∈ ℝn. Indeed, the
quantity v · u is a scalar in ℝ, whereas vTu is actually a 1 × 1 matrix (after
all, it is obtained by multiplying a 1 × n matrix by an n × 1 matrix), so it does
not quite make sense to say that they are “equal” to each other. However, the
spaces ℝ and ℝ1,1 are trivially isomorphic, so we typically sweep this technicality under the rug.
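A small NumPy illustration of this technicality (with arbitrary sample vectors): the dot product is a scalar, while vᵀu computed as a matrix product is a 1 × 1 matrix, and the isomorphism ℝ1,1 ≌ ℝ identifies the two:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
u = np.array([4.0, 5.0, 6.0])

dot = v @ u                                # a scalar in R
vt_u = v.reshape(1, 3) @ u.reshape(3, 1)   # a 1x1 matrix in R^{1,1}

print(dot)          # 32.0
print(vt_u.shape)   # (1, 1)
assert vt_u[0, 0] == dot   # the isomorphism R^{1,1} ~ R identifies them
```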
Theorem 2:
Every finite-dimensional vector space V of dimension n ≥ 1 is
isomorphic to 𝔽n.
Let us choose a basis { ei }, 1 ≤ i ≤ n, of the vector space V. Hence each
element x∈V has a unique representation as a linear combination of the basis elements, say
\( \displaystyle {\bf x} = \sum_i x_i {\bf e}_i . \) Let f(x) denote the n-tuple formed by the components of x:
\[
f({\bf x}) = \left( x_1 , x_2 , \ldots , x_n \right) \in \mathbb{F}^n
\]
This map is bijective because the coordinate representation of each x is unique. It is linear, and its inverse 𝔽n ⇾ V is
\[
\left\{ x_i \right\}_{1 \le i \le n} \,\mapsto \, \sum_{i=1}^n x_i {\bf e}_i .
\]
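The coordinate map f and its inverse can be sketched numerically; the basis below, stored as the columns of E, is a hypothetical choice in ℝ³:

```python
import numpy as np

# A (hypothetically chosen) basis of R^3, stored as the columns of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

def f(x):
    """Coordinates (x_1, ..., x_n) of x with respect to the basis e_i."""
    return np.linalg.solve(E, x)

def f_inv(coords):
    """Inverse map: sum_i x_i e_i."""
    return E @ coords

x = np.array([2.0, 3.0, 4.0])
assert np.allclose(f_inv(f(x)), x)          # f is invertible
y = np.array([1.0, 0.0, 1.0])
assert np.allclose(f(x + y), f(x) + f(y))   # f is linear
```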
Example 2:
For any real number 𝑎, the vector spaces V = span{ e𝑎x, xe𝑎x, x²e𝑎x} and ℝ³ are isomorphic. The standard way to show that two spaces are isomorphic is to construct an isomorphism between them. To this end, consider the linear
transformation T : ℝ³ ⇾ V, defined by
\[
T \left( c_1 , c_2 , c_3 \right) = c_1 e^{ax} + c_2 x\,e^{ax} + c_3 x^2 e^{ax} .
\]
It is straightforward to show that this function is a linear transformation, so we just need to convince ourselves that it is invertible. To this
end, we need to show that vectors (functions) e𝑎x, xe𝑎x, x²e𝑎x are linearly independent. Indeed, suppose we have the relation
\[
c_1 e^{ax} + c_2 x\,e^{ax} + c_3 x^2 e^{ax} = 0 \qquad \iff \qquad c_1 + c_2 x + c_3 x^2 = 0
\tag{A}
\]
because the function e𝑎x ≠ 0 for any x. Therefore, the exponential factor can be canceled, and Eq.(A) reduces to
\[
c_1 + c_2 x + c_3 x^2 = 0 ,
\]
which can hold for all x only when c1 = c2 = c3 = 0, because the monomials 1, x, and x² are linearly independent.
Hence, the three functions e𝑎x, xe𝑎x, x²e𝑎x form a basis β for V. Therefore, we can construct the standard matrix
[T]β←α, where α = {i, j, k} is the standard basis of ℝ³.
Since [T]β←α is the identity matrix, which is clearly invertible (the identity matrix is its own inverse),
T is invertible too and is thus an isomorphism.
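The linear independence of e𝑎x, xe𝑎x, x²e𝑎x can also be checked numerically: evaluating the three functions at three distinct points gives a matrix whose columns would inherit any linear dependence among the functions, so an invertible evaluation matrix certifies independence. A sketch, with an arbitrary value of 𝑎:

```python
import numpy as np

a = 1.5                      # an arbitrary choice of the parameter a
funcs = [lambda x: np.exp(a * x),
         lambda x: x * np.exp(a * x),
         lambda x: x**2 * np.exp(a * x)]

# Evaluate each function at three distinct points.  A linear dependence
# c1 f1 + c2 f2 + c3 f3 = 0 would force the columns of M to satisfy the
# same dependence, so an invertible M certifies independence.
pts = np.array([0.0, 1.0, 2.0])
M = np.column_stack([[f(x) for x in pts] for f in funcs])

assert abs(np.linalg.det(M)) > 1e-9   # M is invertible
```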
We can generalize this example and consider the set ℝ≤n[x] of polynomials with real coefficients of degree up to n. This set is isomorphic to ℝn+1:
\[
T \left( a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \right) = \left( a_0 , a_1 , a_2 , \ldots , a_n \right) .
\]
It is straightforward to show that this function is a linear transformation,
so we just need to convince ourselves that it is invertible. To this end, we
just explicitly construct its inverse T−1 : ℝn+1 ⇾ ℝ≤n[x], so
\[
T^{-1} \left( a_0 , a_1 , a_2 , \ldots , a_n \right) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n .
\]
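This coefficient map can be sketched with numpy.polynomial, whose Polynomial class stores exactly the coefficient list (a representation choice made here for illustration):

```python
import numpy as np
from numpy.polynomial import Polynomial

# T maps a polynomial to the tuple of its coefficients in R^{n+1}; with
# the Polynomial class this is just reading off the .coef array, which
# makes the isomorphism transparent.
def T(p):
    return np.array(p.coef)

def T_inv(coords):
    return Polynomial(coords)

p = Polynomial([1.0, 2.0, 0.0, 4.0])    # 1 + 2x + 4x^3
q = Polynomial([0.0, 1.0, 3.0, 0.0])    # x + 3x^2

assert np.array_equal(T(p + q), T(p) + T(q))   # addition is preserved
assert np.array_equal(T(2 * p), 2 * T(p))      # scaling is preserved
assert T_inv(T(p)) == p                        # T is invertible
```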
End of Example 2
■
While it’s true that there’s nothing really “mathematically” new about
isomorphisms, the important thing is the new perspective that they give us.
It is very useful to be able to think of vector spaces as being the same as
each other, as it can provide us with new intuition or cut down the amount
of work that we have to do. More generally, isomorphisms are used throughout all of mathematics, not just in linear algebra. In general, they are defined to be invertible maps that preserve whatever the relevant structures or operations are.
In our setting, the relevant operations are scalar multiplication and vector
addition, and those operations being preserved is exactly equivalent to the
invertible map being a linear transformation.