This section establishes a connection between abstract vector spaces and the coordinate spaces 𝔽^n: it shows that every finite-dimensional vector space is isomorphic to 𝔽^n, where 𝔽 is either ℚ (the rational numbers), ℝ (the real numbers), or ℂ (the complex numbers). This allows us to transfer vector operations from 𝔽^n to any vector space. Therefore, a finite-dimensional vector space has the same algebraic structure as 𝔽^n even though its vectors may not be expressed as n-tuples.
Isomorphism
We have already studied invertible square matrices. Let us generalize those results
to the broader context of vector spaces. Recall that every finite-dimensional vector space V has a basis β, and we can
use that basis to represent a vector v ∈ V as a coordinate vector [v]_β ∈ 𝔽^n,
where 𝔽 is the ground field (ℚ, ℝ, or ℂ). We used this correspondence between V and
𝔽^n to motivate the idea that these vector spaces are “the same” in the sense
that, in order to do a linear algebraic calculation in V, we can instead do the
corresponding calculation on coordinate vectors in 𝔽^n.
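For instance, here is a small NumPy sketch of computing a coordinate vector in ℝ^2 (the basis and the vector below are chosen arbitrarily for illustration):

```python
import numpy as np

# Columns of B are the (arbitrarily chosen) basis vectors (1, 1) and (1, -1).
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])
v = np.array([3.0, 1.0])

# The coordinate vector [v]_beta solves B @ c = v.
c = np.linalg.solve(B, v)
print(c)   # [2. 1.], i.e. v = 2*(1, 1) + 1*(1, -1)
```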
We now make this idea of vector spaces being “the same” a bit more precise
and clarify exactly when this “sameness” happens.
A linear transformation T: V → W that is both one-to-one and onto is said to be an isomorphism, and W is said to be isomorphic to V, which is abbreviated as V ≅ W.
We can think of isomorphic vector spaces as having the same structure and
the same vectors as each other, but different labels on those vectors.
Note that if T: V → W is an isomorphism, then the inverse T^{-1}: W → V is
linear, hence also an isomorphism.
Theorem 1:
Two finite-dimensional vector spaces V and W are isomorphic
precisely when they have the same dimension.
Take any isomorphism T: V → W. Since T is injective, ker T = {0}, so dim ker T = 0. Since T is surjective, im T = W, so dim im T = dim W. Hence
dim V = dim im T + dim ker T = dim W + 0 = dim W.
Conversely, if V and W have the same dimension n, take a basis {e_i} of V and a basis {u_i} of W. Since they have the same number of elements by assumption, we may index them by the same index set 1 ≤ i ≤ n. The mapping
x = ∑_{i=1}^{n} x_i e_i ↦ y = ∑_{i=1}^{n} x_i u_i
defines an isomorphism between V and W.
Example 1:
Let us start with the vector space 𝔽^{m,n} of rectangular m × n matrices with entries from the ground field 𝔽. Our first example starts with the simplest case of 1 × 1 matrices [a]. We have an obvious mapping [a] ↦ a, which establishes an isomorphism 𝔽^{1,1} → 𝔽.
Consider the direct product of n copies of the real line, ℝ^n = ℝ × ℝ × ⋯ × ℝ, which consists of the ordered n-tuples (x_1, …, x_n) ∈ ℝ^n. We can organize these n-tuples as row vectors or as column vectors, which shows that these three vector spaces are isomorphic: ℝ^n ≅ ℝ^{1,n} ≅ ℝ^{n,1}.
The fact that we write the entries of vectors in 𝔽^{1,n} in a row whereas we
write those from 𝔽^{n,1} in a column is often just as irrelevant as if we used a
different font when writing the entries of the vectors in one of these vector
spaces. Indeed, vector addition and scalar multiplication in these spaces are
both performed entrywise, so it does not matter how we arrange or order those
entries. Moreover, we can identify these vectors with diagonal square matrices, or write them in some other fancy way (say, in a circle); the resulting objects will again be isomorphic to 𝔽^n.
The fact that 𝔽^n, 𝔽^{1,n}, and 𝔽^{n,1} are isomorphic justifies something that
is typically done right from the beginning in linear algebra: treating members of 𝔽^n (vectors), members of 𝔽^{n,1} (column vectors), and members of 𝔽^{1,n} (row vectors) as the same thing.
Furthermore, the space 𝔽^{m,n} of m × n matrices is isomorphic to the space 𝔽^{n,m} of n × m matrices via the transpose map A ↦ A^T, which is invertible because transposing
twice gets us back to where we started.
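As a quick numerical illustration (with matrices and scalars chosen arbitrarily), the transpose map is linear and undoes itself:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[0.0, 1.0, 0.0],
              [2.0, 0.0, 2.0]])
s, t = 3.0, -2.0

# Linearity: (s*A + t*B)^T equals s*A^T + t*B^T.
print(np.allclose((s * A + t * B).T, s * A.T + t * B.T))   # True

# Transposing twice returns the original matrix, so A -> A^T is invertible.
print(np.array_equal(A.T.T, A))                            # True
```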
Let us consider two four-dimensional vector spaces: 𝔽^4, consisting of 4-tuples, and 𝔽^{2,2}, consisting of 2 × 2 matrices.
They are isomorphic vector spaces because there is a natural isomorphism
between them:
(a, b, c, d) ↦ [ a  b ]
               [ c  d ].
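In NumPy this natural isomorphism is essentially a reshape; the following sketch (with made-up entries) checks that the map and its inverse respect the vector space operations:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0, 4.0])   # an element of F^4 (here, R^4)
M = v.reshape(2, 2)                  # the corresponding 2x2 matrix

# The inverse map flattens the matrix back into a 4-tuple.
print(np.array_equal(M.reshape(-1), v))   # True

# Addition and scalar multiplication are preserved by the identification.
w = np.array([5.0, 6.0, 7.0, 8.0])
print(np.array_equal((2 * v + w).reshape(2, 2), 2 * M + w.reshape(2, 2)))   # True
```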
End of Example 1
■
When we speak about isomorphisms and vector spaces being “the
same”, we mean only with respect to the eight defining properties of vector
spaces (i.e., properties based on vector addition and scalar multiplication).
We can add column vectors in exactly the same way that we add row vectors, or we can add diagonal matrices,
and similarly scalar multiplication works in exactly the same way for those three
types of vectors. However, other operations like matrix multiplication may
behave differently on these three sets (e.g., if A ∈ 𝔽^{m,n}, then Ax makes sense when x is a column vector, but not when it is a row vector).
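The following sketch (with an arbitrary 3 × 2 matrix) shows how this plays out in NumPy when the same entries are stored as a column or as a row:

```python
import numpy as np

A = np.ones((3, 2))        # an arbitrary 3x2 matrix
col = np.ones((2, 1))      # x as a column vector in F^{2,1}
row = np.ones((1, 2))      # x as a row vector in F^{1,2}

print((A @ col).shape)     # (3, 1): the product A x is defined
try:
    A @ row                # shapes (3, 2) and (1, 2) do not match
except ValueError as err:
    print("A @ row is undefined:", err)
```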
As an even simpler example of an isomorphism, we have implicitly been
using one when we say things like v^T u = v · u for all v, u ∈ ℝ^n. Indeed, the
quantity v · u is a scalar in ℝ, whereas v^T u is actually a 1 × 1 matrix (after
all, it is obtained by multiplying a 1 × n matrix by an n × 1 matrix), so it does
not quite make sense to say that they are “equal” to each other. However, the
spaces ℝ and ℝ^{1,1} are trivially isomorphic, so we typically sweep this technicality under the rug.
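A short NumPy sketch (with arbitrary vectors) makes the distinction visible: the matrix product produces a 1 × 1 array, while the dot product produces a scalar:

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])    # v stored as a 3x1 column vector
u = np.array([[4.0], [5.0], [6.0]])

vTu = v.T @ u                           # a 1x1 matrix, an element of R^{1,1}
dot = np.dot(v.flatten(), u.flatten())  # the scalar v . u in R

print(vTu, vTu.shape)   # [[32.]] (1, 1)
print(dot)              # 32.0
```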
Theorem 2:
Every finite-dimensional vector space V of dimension n ≥ 1 is
isomorphic to 𝔽^n.
Let us choose a basis {e_i}, 1 ≤ i ≤ n, of the vector space V. Then each
element x ∈ V has a unique representation as a linear combination of the basis elements, say
x = ∑_{i=1}^{n} x_i e_i. Let f(x) denote the n-tuple formed by the components of x:
f(x) = (x_1, x_2, …, x_n) ∈ 𝔽^n.
This map is bijective because the representation is unique. It is linear, and its inverse 𝔽^n → V is
(x_1, x_2, …, x_n) ↦ ∑_{i=1}^{n} x_i e_i.
Example 2:
For any real number a, the vector space V = span{e^{ax}, x e^{ax}, x^2 e^{ax}} and ℝ^3 are isomorphic. The standard way to show that two spaces are isomorphic is to construct an isomorphism between them. To this end, consider the linear
transformation T: ℝ^3 → V, defined by
T(c_1, c_2, c_3) = c_1 e^{ax} + c_2 x e^{ax} + c_3 x^2 e^{ax}.
It is straightforward to show that this function is a linear transformation, so we just need to convince ourselves that it is invertible. To this
end, we need to show that the vectors (functions) e^{ax}, x e^{ax}, x^2 e^{ax} are linearly independent. Indeed, suppose we have the relation
c_1 e^{ax} + c_2 x e^{ax} + c_3 x^2 e^{ax} = 0.
Since e^{ax} ≠ 0 for every x, this factor can be canceled, and the relation reduces to
c_1 + c_2 x + c_3 x^2 = 0,
which holds for all x only when c_1 = c_2 = c_3 = 0, because the monomials 1, x, and x^2 are linearly independent.
Hence, the three functions e^{ax}, x e^{ax}, x^2 e^{ax} form a basis β for V. Therefore, we can construct the standard matrix
[T]_{β←α}, where α = {i, j, k} is the standard basis of ℝ^3. Since T sends each standard basis vector of ℝ^3 to the corresponding basis function in β, the matrix [T]_{β←α} is the identity matrix.
Since [T]_{β←α} is clearly invertible (the identity matrix is its own inverse),
T is invertible too and is thus an isomorphism.
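For a quick numerical sanity check of the linear independence (using the arbitrary choices a = 1 and sample points x = 0, 1, 2), it suffices to see that the only combination vanishing at those three points is the trivial one:

```python
import numpy as np

a = 1.0                          # arbitrary choice of the parameter a
xs = np.array([0.0, 1.0, 2.0])   # arbitrary sample points

# Columns are e^{ax}, x e^{ax}, x^2 e^{ax} evaluated at the sample points.
M = np.column_stack([np.exp(a * xs),
                     xs * np.exp(a * xs),
                     xs**2 * np.exp(a * xs)])

# A nonzero determinant means only the trivial combination vanishes at all
# three points, so the three functions are linearly independent.
print(np.linalg.det(M))          # about 40.2, nonzero
```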
We can generalize this example and consider the space ℝ_{≤n}[x] of polynomials with real coefficients of degree at most n. This space is isomorphic to ℝ^{n+1}:
T(a_0 + a_1 x + a_2 x^2 + ⋯ + a_n x^n) = (a_0, a_1, a_2, …, a_n).
It is straightforward to show that this function is a linear transformation,
so we just need to convince ourselves that it is invertible. To this end, we
just explicitly construct its inverse T^{-1}: ℝ^{n+1} → ℝ_{≤n}[x], namely
T^{-1}(a_0, a_1, a_2, …, a_n) = a_0 + a_1 x + a_2 x^2 + ⋯ + a_n x^n.
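A minimal NumPy sketch of this pair of maps, using numpy.polynomial.Polynomial and an arbitrarily chosen cubic:

```python
import numpy as np
from numpy.polynomial import Polynomial

# T: read off the coefficient tuple of a polynomial of degree at most n.
p = Polynomial([5.0, 0.0, -3.0, 2.0])   # 5 - 3x^2 + 2x^3, so n = 3
print(p.coef)                           # [ 5.  0. -3.  2.], an element of R^4

# T^{-1}: rebuild the polynomial from a coefficient tuple.
q = Polynomial(np.array([5.0, 0.0, -3.0, 2.0]))
print(q(2.0))                           # 5 - 3*4 + 2*8 = 9.0
```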
End of Example 2
■
While it’s true that there’s nothing really “mathematically” new about
isomorphisms, the important thing is the new perspective that they give us.
It is very useful to be able to think of vector spaces as being the same as
each other, as it can provide us with new intuition or cut down the amount
of work that we have to do. More generally, isomorphisms are used throughout all of mathematics, not just in linear algebra. In general, they are defined to be invertible maps that preserve whatever the relevant structures or operations are.
In our setting, the relevant operations are scalar multiplication and vector
addition, and those operations being preserved is exactly equivalent to the
invertible map being a linear transformation.