Every linear map f : V ⇾ U between vector spaces V and U over the same field 𝔽 gives rise to a dual map (also called the transpose map) f′ : U* ⇾ V*, defined by
\begin{equation} \label{EqDualT.1}
f' (\psi ) := \psi \circ f , \qquad \forall \psi \in U^{\ast} .
\end{equation}
As is often the case in linear algebra, a much more detailed picture emerges once we introduce bases and work out the matrices associated with the linear transformations.
Let S = S_{A} be a linear transformation between 𝔽-vector spaces, S : V ⇾ U. Let β = { v_{1}, v_{2}, … , v_{n} } and γ = { u_{1}, u_{2}, … , u_{m} } be bases of the vector spaces V and U, respectively. Then transformation S has the matrix representation A = [a_{ij}] in these bases:
\[
S \left( {\bf v}_j \right) = \sum_{i=1}^m a_{ij} {\bf u}_i , \qquad j = 1, 2, \ldots , n .
\]
Suppose that we also know the dual bases: β* = { v^{1}, v^{2}, … , v^{n} } ⊂ V* and γ* = { u^{1}, u^{2}, … , u^{m} } ⊂ U*, respectively. Then we can express the entries of matrix A through elements of the dual basis:
\[
a_{ij} = {\bf u}^i \left( S\, {\bf v}_j \right) = \langle {\bf u}^i \mid {\bf A} \mid {\bf v}_j \rangle .
\]
Here ⟨φ∣ ∈ V* is a bra vector and ∣v⟩ ∈ V is a ket vector in Dirac's notation. The bra–ket, or Dirac, notation expresses the natural bilinear map V* × V ⇾ 𝔽, given by ⟨φ∣v⟩ = φ(v).
As you can see, matrix A sits symmetrically between the bra and the ket, acting on the ket vector from the left and on the bra vector from the right. The action on bra vectors can also be rewritten as a left action on the column ψ^{T}, but then it is given by the transpose matrix.
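This interplay can be illustrated with a short Python sketch (all helper names are ours, for illustration only): the pairing ⟨φ∣A∣v⟩ may be computed either by letting A act on the ket or by letting A^{T} act on the bra.

```python
# Sketch: the pairing <phi| A |v> computed two ways.
# Helper names (mat_vec, transpose, pairing) are illustrative only.

def mat_vec(A, v):
    """Multiply matrix A (list of rows) by a vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def pairing(phi, v):
    """The bilinear map V* x V -> F, <phi|v> = phi(v)."""
    return sum(p * x for p, x in zip(phi, v))

A   = [[2, 0], [1, -1], [2, 3]]   # 3x2 matrix, maps kets in R^2 to R^3
v   = [1, 2]                      # ket |v> in R^2
phi = [1, 0, 1]                   # bra <phi| in (R^3)*

left  = pairing(phi, mat_vec(A, v))            # <phi| (A |v>)
right = pairing(mat_vec(transpose(A), phi), v) # (A^T acting on the bra) |v>
assert left == right
```

Both evaluations give the same scalar, which is exactly the symmetry of A between bra and ket described above.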
Theorem 1:
Let V and U be finite-dimensional vector spaces over the same field, f : V ⇾ U be a
linear map, and f′ : U* ⇾ V* its dual transformation, as defined in Eq.\eqref{EqDualT.1}. If f is described
by a matrix A relative to a choice of bases on V and U, then f′ is described by the transpose matrix A^{T}, relative to the dual choice of bases on U* and V*.
Let {v_{1}, v_{2}, … , v_{n}} and {u_{1}, u_{2}, … , u_{m}} be bases of V and U, respectively, with dual bases {v^{1}, v^{2}, … , v^{n}} and {u^{1}, u^{2}, … , u^{m}}. Then the linear map f : V ⇾ U and its dual counterpart f′ : U* ⇾ V* can be identified with matrices A = [a_{ij}] and B = [b_{ij}], respectively. So we have
\[
f \left( {\bf v}_j \right) = \sum_{i=1}^m a_{ij} {\bf u}_i , \qquad f' \left( {\bf u}^j \right) = \sum_{i=1}^n b_{ij} {\bf v}^i .
\tag{P.1}
\]
To understand the relationship between the matrices A and B, we evaluate the definition \eqref{EqDualT.1} with ψ = u^{j} and apply it to a basis vector v_{k}:
\[
f' \left( {\bf u}^j \right) ({\bf v}_k ) = {\bf u}^j \left( f ({\bf v}_k ) \right) = {\bf u}^j \left( \sum_{i=1}^m a_{ik} {\bf u}_i \right) = a_{jk} , \qquad \mbox{while} \quad f' \left( {\bf u}^j \right) ({\bf v}_k ) = \sum_{i=1}^n b_{ij} {\bf v}^i ({\bf v}_k ) = b_{kj} .
\]
This holds for every basis vector v_{k}, k = 1, 2, … , n, so comparison with Eq.(P.1) gives b_{kj} = a_{jk}, that is, B = A^{T}.
Example 1:
Let us consider a linear transformation f : ℝ² ⇾ ℝ³, defined by
\[
f(x, y) = (2x , x-y, 2x+3y) .
\tag{1.1}
\]
Assuming that we use the standard bases {i, j} and {i, j, k}, the linear map (1.1) is described by the matrix
\[
{\bf A} = \begin{bmatrix} 2 & \phantom{-}0 \\
1 & -1 \\ 2 & \phantom{-}3 \end{bmatrix} .
\tag{1.2}
\]
Then the dual map f′ : ℝ³* ⇾ ℝ²* is described by the transposed matrix
\[
{\bf A}^{\mathrm{T}} = \begin{bmatrix} 2 & \phantom{-}1 & 2 \\ 0 & -1 & 3 \end{bmatrix} .
\]
Indeed,
\[
f' (\psi^1 , \psi^2 , \psi^3 ) = \left( 2 \psi^1 + \psi^2 + 2\psi^3 , \; -\psi^2 + 3 \psi^3 \right) ,
\]
where the components ψ^{j} of the covector ψ = Σ_{j} ψ^{j} ⟨e^{j}∣ refer to the basis ⟨e^{j}∣ of U* dual to the standard basis ∣e_{i}⟩ of U = ℝ³.
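The computation in Example 1 can be double-checked numerically. The following Python sketch (function names are our own) evaluates ψ ∘ f on the standard basis of ℝ² and compares the result with the formula above:

```python
# Check for Example 1: f'(psi) = psi o f agrees with multiplication by A^T.

def f(x, y):
    """The linear map (1.1): R^2 -> R^3."""
    return (2*x, x - y, 2*x + 3*y)

def dual_f(psi):
    """f'(psi) = psi o f, read off on the standard basis of R^2."""
    e = [(1, 0), (0, 1)]
    return tuple(sum(p * w for p, w in zip(psi, f(*v))) for v in e)

psi = (1, 2, 3)   # an arbitrary covector in (R^3)*
# Formula from the text: f'(psi) = (2 p1 + p2 + 2 p3, -p2 + 3 p3)
expected = (2*psi[0] + psi[1] + 2*psi[2], -psi[1] + 3*psi[2])
assert dual_f(psi) == expected
```

The agreement for an arbitrary covector confirms that f′ is given by the transposed matrix.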
Kernel and image of the dual map
Associated to each linear map are two vector subspaces, the kernel and image. The
next theorem states an interesting relation between these spaces and their counterparts
for the dual map.
Theorem 2:
For finite-dimensional vector spaces V and U, the kernel and image
of the linear map f : V ⇾ U and its dual map f′ : U* ⇾ V* are related by
\[
\mbox{Ker}(f' ) = \left( \mbox{Im}(f) \right)^0 , \qquad \mbox{Im}(f') = \left( \mbox{Ker}(f) \right)^0 .
\]
We use the definition of the dual map, f′(ψ) = ψ ∘ f, together with the annihilator of a subspace S ⊆ U,
\[
S^0 = \left\{ \psi \in U^{\ast} \mid \psi ({\bf u}) = 0 \quad \forall {\bf u} \in S \right\} .
\]
Let ψ ∈ Ker(f′); then f′(ψ) = ψ ∘ f = 0, which is equivalent to
\[
\psi (f({\bf v})) = 0 \quad \forall {\bf v} \in V .
\]
Since all steps are equivalences, ψ ∈ Ker(f′) exactly when ψ vanishes on Im(f); that is, Ker(f′) = (Im(f))^{0}.
The second part, Im(f′) = (Ker(f))^{0}, can be shown similarly: every functional f′(ψ) = ψ ∘ f vanishes on Ker(f), so Im(f′) ⊆ (Ker(f))^{0}, and a dimension count gives equality.
Example 2:
Suppose that a linear map ℝ³ ⇾ ℝ³ is specified by the matrix
\[
{\bf A} = \begin{bmatrix} \phantom{-}3 & 8 & \phantom{-}2 \\ -1 & 2 & \phantom{-}4 \\ \phantom{-}2 & 3 & -1 \end{bmatrix} .
\]
Since its determinant is zero,
A = {{3, 8, 2}, {-1, 2, 4}, {2, 3, -1}}
Det[A]
0
we know that its column vectors are linearly dependent. In fact, the column space of matrix A is spanned by the first two column vectors
\[
{\bf a}_1 = \begin{pmatrix} \phantom{-}3 \\ -1 \\ \phantom{-}2 \end{pmatrix} \qquad \mbox{and} \qquad {\bf a}_2 = \begin{pmatrix} 8 \\ 2 \\ 3 \end{pmatrix} .
\]
Hence, the third column is a linear combination of these two vectors:
\[
{\bf a}_3 = \begin{pmatrix} \phantom{-}2 \\ \phantom{-}4 \\ -1 \end{pmatrix} = c_1 {\bf a}_1 + c_2 {\bf a}_2 = -2\, {\bf a}_1 + {\bf a}_2 .
\]
This means that Im(A) = span(a₁, a₂), so dim(Ker(A)) = 3 − 2 = 1. Since the solutions of the equation A v = 0 constitute the null space of matrix A, we ask Mathematica for help to determine these solutions:
NullSpace[A]
{{2, -1, 1}}
Hence Ker(A) = span(v) with v = (2, −1, 1)^{T}, and its annihilator is (Ker(A))^{0} = span(φ₁, φ₂), where
\[
\varphi_1 = [1, 0, -2], \qquad \varphi_2 = [0, 1, 1] .
\]
The annihilator (Im(A))^{0} is spanned by a vector ψ orthogonal to both a₁ and a₂, which Mathematica delivers as the cross product:
Cross[{3, -1, 2}, {8, 2, 3}]
{-7, 7, 14}
Components of a covector φ = [f₁, f₂, f₃] ∈ (Ker(A))^{0} are determined from the equation φ • v = 0:
Solve[{2*f1 - f2 + f3 == 0}, {f1, f2, f3}]
{{f3 -> -2 f1 + f2}}
On the other hand, we can compute the kernel and image of the dual map, described by the transpose matrix
\[
{\bf A}^{\mathrm{T}} = \begin{bmatrix} 3 & -1 & \phantom{-}2 \\ 8 & \phantom{-}2 & \phantom{-}3 \\ 2 & \phantom{-}4 & -1 \end{bmatrix} = \left[ {\bf a}_1^{\mathrm{T}} , {\bf a}_2^{\mathrm{T}} , {\bf a}_3^{\mathrm{T}} \right] ,
\]
where
\[
{\bf a}_1^{\mathrm{T}} = \begin{pmatrix} 3 \\ 8 \\ 2 \end{pmatrix} , \quad {\bf a}_2^{\mathrm{T}} = \begin{pmatrix} -1 \\ \phantom{-}2 \\ \phantom{-}4 \end{pmatrix} , \quad {\bf a}_3^{\mathrm{T}} = \begin{pmatrix} \phantom{-}2 \\ \phantom{-}3 \\ -1 \end{pmatrix} .
\]
We check the identity A^{T}ψ^{T} = 0 with Mathematica:
B = Transpose[A]
{{3, -1, 2}, {8, 2, 3}, {2, 4, -1}}
B.{{-7}, {7}, {14}}
{{0}, {0}, {0}}
Therefore, Ker(A^{T}) = Span(ψ) = (Im(A))^{0}. Further, since φ₁ = (a₁^{T} −4a₂^{T})/7 and φ₂ = (a₁^{T} + 3a₂^{T})/14, the image of A^{T} is Im(A^{T}) = span(φ₁, φ₂) = (Ker(A))^{0}, all in accordance with Theorem 2.
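The linear-combination claims above are easy to verify with exact arithmetic; a short Python sketch (the helper `comb` is ours) reconstructs φ₁ and φ₂ from the columns of A^{T} and checks that both annihilate the kernel vector of A:

```python
from fractions import Fraction as F

# Columns of A^T (= rows of A), as in Example 2.
a1T = [F(3), F(8), F(2)]
a2T = [F(-1), F(2), F(4)]

def comb(c1, v1, c2, v2):
    """Linear combination c1*v1 + c2*v2 of two vectors."""
    return [c1*x + c2*y for x, y in zip(v1, v2)]

phi1 = comb(F(1, 7),  a1T, F(-4, 7), a2T)   # (a1^T - 4 a2^T)/7
phi2 = comb(F(1, 14), a1T, F(3, 14), a2T)   # (a1^T + 3 a2^T)/14
assert phi1 == [1, 0, -2] and phi2 == [0, 1, 1]

# Both covectors annihilate the kernel vector v = (2, -1, 1)^T of A:
v = [F(2), F(-1), F(1)]
assert sum(p*x for p, x in zip(phi1, v)) == 0
assert sum(p*x for p, x in zip(phi2, v)) == 0
```

Exact fractions avoid any rounding issues in checking the rational coefficients.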
Finally, we verify that the dual map is represented by the transposed matrix, in accordance with Theorem 1. Let ψ = (ψ₁, ψ₂, ψ₃) be an arbitrary covector from U* = (ℝ³)*, written as a row vector. From Eq.\eqref{EqDualT.1}, it follows that, for every vector v = [v^{1}, v^{2}, v^{3}]^{T} ∈ V = ℝ³,
\[
f' (\psi ) ({\bf v}) = \psi \left( f({\bf v}) \right) = \psi \left( {\bf A}\,{\bf v} \right) = \left( {\bf A}^{\mathrm{T}} \psi^{\mathrm{T}} \right)^{\mathrm{T}} {\bf v} ,
\]
so in coordinates the dual map sends the column ψ^{T} to A^{T} ψ^{T}.
End of Example 2
Corollary 1:
For finite-dimensional vector spaces V and U over the same field, a linear map f : V ⇾ U
and its dual map f′ : U* ⇾ V* satisfy
\[
\mbox{rank}(f') = \mbox{rank}(f) .
\]
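In matrix terms, the corollary states that rank(A) = rank(A^{T}), i.e., row rank equals column rank. A minimal Python sketch, using our own Gaussian-elimination helper over exact fractions (not a library routine), checks this for the matrix of Example 2:

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a matrix via Gaussian elimination over exact fractions."""
    M = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # move pivot row up
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f*b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A  = [[3, 8, 2], [-1, 2, 4], [2, 3, -1]]      # matrix from Example 2
At = [list(col) for col in zip(*A)]           # its transpose
assert rank(A) == rank(At) == 2
```

Both ranks equal 2, matching dim Im(A) = dim Im(A^{T}) found in Example 2.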
Example 3:
Many quantum mechanical systems are based on infinite dimensional vector spaces ℌ, so
their mathematics is somewhat beyond our scope. In this application, we discuss a quantum
system based on a single spin (such as the spin of an electron). The associated vector space
V over ℂ is two-dimensional and has an ortho-normal basis (∣↑⟩, ∣↓⟩) of two states, which are interpreted as 'spin up' and 'spin down'. A general element ∣ψ⟩ ∈ V has the form
\[
\mid \psi \rangle = \sum_{s = \uparrow ,\downarrow} c_s \mid s \rangle = c_{\uparrow} \mid \uparrow \rangle + c_{\downarrow} \mid \downarrow \rangle ,
\]
where c_{↑}, c_{↓} ∈ ℂ. If we normalize the state, ⟨ψ|ψ⟩ = |c_{↑}|² + |c_{↓}|² = 1, then the squared moduli |c_{↑}|² and |c_{↓}|² of the coordinates are interpreted as the probabilities of measuring 'spin up' and 'spin down' in the state ∣ψ⟩.
Recall that in quantum mechanics physical quantities are represented by Hermitian (self-adjoint) linear operators V ⇾ V. A particularly important example is the Hamilton operator (= total energy operator) H : V ⇾ V, which corresponds to the energy. Its matrix elements
\[
h_{s,t} = \langle s \mid H \mid t \rangle
\]
must form a Hermitian 2 × 2 matrix H. Then the Hamiltonian matrix can be expanded as
\[
H = a\,{\bf I} + {\bf b} \bullet \sigma ,
\]
where σ = (σ₁, σ₂, σ₃) is a formal vector which contains the Pauli matrices, 𝑎 ∈ ℝ, and
b ∈ ℝ³. In a physical context, the term 𝑎I represents an overall energy contribution which affects all spin states equally (for example, the kinetic energy of the electron), while the term b • σ may describe the effect of a magnetic field proportional to b.
The eigenvalues of this Hermitian matrix are obtained by equating its characteristic polynomial to zero. The condition
\[
\chi (E) = \det \begin{bmatrix} a - E + b_3 & b_1 + {\bf j} b_2 \\ b_1 - {\bf j} b_2 & a - E - b_3 \end{bmatrix} = \left( a - E \right)^2 - |{\bf b}|^2 = 0
\]
shows that the two energy eigenvalues are E_{±} = 𝑎 ± |b|. Here j is the imaginary unit, j² = −1. The corresponding eigenstates |E_{±}⟩ form an ortho-normal basis of V and satisfy the eigenvalue equation H|E_{±}⟩ = E_{±}|E_{±}⟩. Let
us consider two simple special cases.
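The formula E_{±} = 𝑎 ± |b| is easy to confirm numerically. The Python sketch below (the parameter values are our own choice) computes the eigenvalues of the 2 × 2 Hermitian matrix from its trace and determinant:

```python
import cmath, math

# Sample parameters (our choice); here |b| = 1.
a, b = 1.0, (0.6, 0.8, 0.0)

# H = a*I + b.sigma, written entrywise as in the characteristic polynomial.
H = [[a + b[2],        b[0] + 1j*b[1]],
     [b[0] - 1j*b[1],  a - b[2]]]

# Eigenvalues of a 2x2 matrix from its trace and determinant.
tr  = H[0][0] + H[1][1]
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
disc = cmath.sqrt(tr*tr - 4*det)
E_plus, E_minus = (tr + disc)/2, (tr - disc)/2

norm_b = math.sqrt(sum(x*x for x in b))
assert abs(E_plus  - (a + norm_b)) < 1e-9
assert abs(E_minus - (a - norm_b)) < 1e-9
```

The computed eigenvalues agree with 𝑎 ± |b| to machine precision.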
First assume that b₁ = b₂ = 0, so that H = diag(𝑎 + b₃, 𝑎 − b₃) with energy eigenvalues E_{±} = 𝑎 ± b₃. The corresponding eigenvectors are the standard unit vectors e₁, e₂, so the energy eigenstates
\[
\mid E_{+} \rangle = \mid \uparrow \rangle , \qquad \mid E_{-} \rangle = \mid \downarrow \rangle ,
\]
are the spin up and spin down states. Hence, for the energy eigenstate ∣E_{+}⟩, the probability of measuring spin up is ∣⟨↑∣E_{+}⟩∣² = 1, while the probability of measuring spin down is ∣⟨↓∣E_{+}⟩∣² = 0. The situation is of course reversed for the energy eigenstate ∣E_{-}⟩.
As a second example consider b₂ = b₃ = 0 and b₁ > 0, so that eigenvalues and
eigenvectors of operator H are given by
\[
H = \begin{bmatrix} a & b_1 \\ b_1 & a \end{bmatrix} , \qquad E_{\pm} = a \pm b_1 , \quad {\bf v}_{\pm} = \frac{1}{\sqrt{2}} \left( 1 , \pm 1 \right) .
\]
Now the energy eigenstates are
\[
\mid E_{\pm}\rangle = \frac{1}{\sqrt{2}} \left( \mid \uparrow \rangle \pm \mid \downarrow \rangle \right) ,
\tag{Q.1}
\]
so the probability of measuring a spin up state with energy E_{+} is ∣⟨↑∣E_{+}⟩∣² = ½.
The evolution of a state |ψ(t)⟩ with time t is governed by the time-dependent Schrödinger equation
\[
H \mid \psi (t) \rangle = {\bf j}\,\frac{\text d}{{\text d}t} \mid \psi (t) \rangle .
\]
The simplest way to solve this equation is by writing the state ∣ψ(t)⟩ as a linear combination
\[
\mid \psi (t)\rangle = c_{+}(t) \mid E_{+} \rangle + c_{-}(t) \mid E_{-} \rangle
\]
of the energy eigenstates, with time-dependent coordinates c_{±}(t). Inserting this into the Schrödinger equation and using the eigenvalue equation H∣E_{±}⟩ = E_{±}∣E_{±}⟩, we obtain the simple differential equations \( \displaystyle \dot{c}_{\pm}(t) = -{\bf j}\,E_{\pm} c_{\pm}(t) . \) These linear differential equations have the general solutions \( \displaystyle c_{\pm} (t) = \beta_{\pm} \exp \left\{ -{\bf j}\,E_{\pm} t \right\} , \) where β_{±} ∈ ℂ are integration constants, so that the complete solution reads
\[
\mid \psi (t) \rangle = \beta_{+} e^{-{\bf j}\,E_{+} t} \mid E_{+} \rangle + \beta_{-} e^{-{\bf j}\,E_{-} t} \mid E_{-} \rangle .
\tag{Q.2}
\]
Consider the second case above, where b₂ = b₃ = 0, b₁ > 0, E_{±} = 𝑎 ± b₁, and the energy eigenstates are given by Eq. (Q.1). The constants β_{±} allow us to specify a state at some initial time, say t = 0. Let us assume that the system is initially in a spin-up state, so ∣ψ(0)⟩ = ∣↑⟩ = (∣E_{+}⟩ + ∣E_{-}⟩)/√2. This fixes the constants to β_{±} = 1/√2. Inserting into Eq. (Q.2), the resulting time-dependent solution reads
\[
\mid \psi (t) \rangle = \frac{e^{-{\bf j}at}}{\sqrt{2}} \left( e^{-{\bf j}\,b_1 t} \mid E_{+} \rangle + e^{{\bf j}\,b_1 t} \mid E_{-} \rangle \right) .
\]
Solutions such as these can be used to answer questions about the probability of measuring certain quantities as a function of time. For example, if we want to know the probability for the spin pointing downwards, we compute
\[
\langle \downarrow \mid \psi (t) \rangle = \frac{e^{-{\bf j}at}}{2} \left( \langle E_{+} \mid - \langle E_{-} \mid \right) \left( e^{-{\bf j} b_1 t} \mid E_{+} \rangle + e^{{\bf j} b_1 t} \mid E_{-} \rangle \right) = -{\bf j}\, e^{-{\bf j}at} \sin (b_1 t) ,
\]
where ∣↓⟩ = (∣E_{+}⟩ − ∣E_{-}⟩)/√2 and the ortho-normality of the states ∣E_{±}⟩ has been used.
Hence, the probability for measuring a downward spin at time t is ∣⟨↓∣ψ(t)⟩∣² = sin²(b_{1}t). By a similar calculation, the probability for an upward spin is ∣⟨↑∣ψ(t)⟩∣² = cos²(b_{1}t).
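These oscillation probabilities can be reproduced numerically. The Python sketch below (with ħ = 1 and sample values 𝑎 = 0.7, b₁ = 1.3 of our own choosing) evolves the spin-up initial state in the energy eigenbasis, following Eq. (Q.2):

```python
import cmath, math

a, b1 = 0.7, 1.3          # sample Hamiltonian parameters (b2 = b3 = 0)
E = (a + b1, a - b1)      # energies E_plus, E_minus

def psi_t(t):
    """Coordinates of |psi(t)> in the basis {|up>, |down>}, |psi(0)> = |up>."""
    cp = cmath.exp(-1j*E[0]*t) / math.sqrt(2)   # coefficient of |E_+>
    cm = cmath.exp(-1j*E[1]*t) / math.sqrt(2)   # coefficient of |E_->
    # Convert back using |E_+-> = (|up> +- |down>)/sqrt(2).
    up   = (cp + cm) / math.sqrt(2)
    down = (cp - cm) / math.sqrt(2)
    return up, down

# The measurement probabilities oscillate as cos^2 and sin^2 of b1*t.
for t in (0.0, 0.4, 1.1):
    up, down = psi_t(t)
    assert abs(abs(down)**2 - math.sin(b1*t)**2) < 1e-12
    assert abs(abs(up)**2   - math.cos(b1*t)**2) < 1e-12
```

The overall phase e^{-jat} drops out of the probabilities, exactly as in the analytic computation.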
A three-state quantum system is described by the Hamiltonian operator
\[
H = \begin{bmatrix} a & 0 & b \\ 0 & a & 0 \\ b & 0 & a \end{bmatrix} ,
\]
where 𝑎, b ∈ ℝ. The Hamiltonian acts in a three-dimensional vector space V with ortho-normal basis |s⟩, where s = −1, 0, 1.
Find the eigenvalues and eigenvectors of H.
What is the time evolution of ∣ψ(t)⟩ ∈ V ?
If ∣ψ(0)⟩ = ∣s⟩, what is the probability of measuring |r⟩ at time t, where s, r = −1, 0, 1 ?