If V is a Euclidean space, i.e., a space with a scalar product ⟨ x, y ⟩, then this allows us to define another isomorphism V ⇾ V*, different from the one described in Part 3. This isomorphism associates with a vector v ∈ V the linear function δv(x) = ⟨ v, x ⟩. We will denote the corresponding map V ⇾ V* by δ. Thus, we have δ(v) = δv for any vector v ∈ V.
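For instance (a small illustration, using the standard dot product on ℝ² as the scalar product), the vector v = (1, 2) is sent to the functional
\[ \delta_{\bf v} ({\bf x}) = \langle {\bf v}, {\bf x} \rangle = x_1 + 2\, x_2 , \]
so in the dual of the standard basis the coordinates of δv are exactly the coordinates of v.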

The isomorphism δ is independent of the choice of an orthonormal basis, but it is still not completely canonical: it depends on the choice of a scalar product. However, when talking about Euclidean spaces, where the scalar product is part of the structure, this isomorphism is considered to be canonical.
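To see the dependence on the scalar product (a sketch with two inner products on ℝ² chosen purely for illustration), compare the standard dot product with the weighted product ⟨ x, y ⟩2 = x1y1 + 2x2y2. The same vector v = (0, 1) is sent to two different functionals:
\[ \delta_{\bf v} ({\bf x}) = \langle {\bf v}, {\bf x} \rangle = x_2 \qquad \mbox{versus} \qquad \delta_{\bf v} ({\bf x}) = \langle {\bf v}, {\bf x} \rangle_2 = 2\, x_2 . \]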

Dual Transformations

Let T : V ⇾ W be a linear transformation of a vector space V into another vector space W over the same field 𝔽. The dual map (or transpose) of T is the map T′ : W* ⇾ V* defined by \[ T' \varphi = \varphi \circ T = \varphi \left( T \,\cdot \right) \qquad \forall \varphi \in W^{\ast} . \]
In other words, T′ sends a linear functional φ on W to the composition φ⚬T, which is a linear functional on V.
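As a quick illustration (the particular map and functional here are chosen purely as an example), take T : ℝ² ⇾ ℝ² defined by T(x1, x2) = (x1 + x2, x1 − x2) and the coordinate functional φ(w1, w2) = w1. Then
\[ \left( T' \varphi \right) (x_1 , x_2) = \varphi \left( T(x_1 , x_2) \right) = x_1 + x_2 , \]
which is indeed a linear functional on the domain ℝ².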
Theorem 1: Suppose that V and W are finite-dimensional vector spaces over the same field 𝔽 and let α = { e1, e2, … , en } and β = { w1, w2, … , wm } be bases of V and W, respectively. Let α* = { e1*, e2*, … , en* } and β* = { w1*, w2*, … , wm* } be the corresponding dual bases of V* and W*, respectively. For a linear transformation A ∈ ℒ(V, W), the matrix of A′ ∈ ℒ(W*, V*) with respect to the dual bases β* and α* is the transpose of the matrix of A with respect to α and β, i.e.,
\[ \left[ A' \right] = \left[ A \right]^{\mathrm{T}} . \]
By the definition of the matrix of a linear map, we should take the vectors of the dual basis β* = { w1*, w2*, … , wm* }, apply to them the map A′, expand the images in the dual basis α* = { e1*, e2*, … , en* }, and write the components of these images as columns of the matrix of A′. Set yi = A′(wi*), i = 1, … , m. For any vector \( \displaystyle {\bf u} = \sum_{i=1}^n u_i {\bf e}_i \in V , \) we have yi(u) = wi*(A(u)). The coordinates of the vector A(u) in the basis β may be obtained by multiplying the matrix A by the column | u ⟩ = \( [ u_1 , u_2 , \ldots , u_n ]^{\mathrm{T}} \). Hence,
\[ {\bf y}_i ({\bf u}) = {\bf w}_i^{\ast} \left( A({\bf u}) \right) = \sum_{j=1}^n a_{ij} u_j . \]
On the other hand, we also have
\[ \left( \sum_{j=1}^n a_{ij} {\bf e}_j^{\ast} \right) ({\bf u}) = \sum_{j=1}^n a_{ij} u_j . \]
So the linear function yi ∈ V* has the expansion \( \displaystyle \sum_{j=1}^n a_{ij} {\bf e}_j^{\ast} \) in the dual basis α* of V*. Hence the i-th column of the matrix of A′ equals \( \displaystyle \begin{bmatrix} a_{i1} \\ \vdots \\ a_{in} \end{bmatrix} , \) so that the whole matrix A′ has the form
\[ {\bf A}' = \begin{bmatrix} a_{11} & \cdots & a_{m1} \\ \vdots & \ddots & \vdots \\ a_{1n} & \cdots & a_{mn} \end{bmatrix} = {\bf A}^{\mathrm{T}} . \]
This gives the “true” meaning of taking the transpose of a matrix: if a matrix represents a linear map between finite-dimensional spaces, then its transpose represents the dual map.
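For a concrete check (the 2×2 matrix here is an arbitrary illustration), let A : ℝ² ⇾ ℝ² have matrix
\[ {\bf A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \]
in the standard basis. For the dual basis vector e1* (which picks out the first coordinate), we get \( \left( A' {\bf e}_1^{\ast} \right) ({\bf u}) = {\bf e}_1^{\ast} \left( A {\bf u} \right) = u_1 + 2\, u_2 \), so \( A' {\bf e}_1^{\ast} = {\bf e}_1^{\ast} + 2\, {\bf e}_2^{\ast} \); similarly \( A' {\bf e}_2^{\ast} = 3\, {\bf e}_1^{\ast} + 4\, {\bf e}_2^{\ast} \). Writing these images as columns gives exactly \( {\bf A}^{\mathrm{T}} \).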
Example 1: Let ℝ≤n[x] be the set of polynomials in the variable x with real coefficients of degree at most n. Consider the differentiation map D : ℝ≤2[x] ⇾ ℝ≤1[x] defined by D(p) = p′. With respect to the monomial bases { 1, x, x² } of ℝ≤2[x] and { 1, x } of ℝ≤1[x], we have D(1) = 0, D(x) = 1, and D(x²) = 2x, so
\[ \left[ D \right] = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} . \]
The dual map D′ sends a functional φ ∈ (ℝ≤1[x])* to φ⚬D; for instance, the evaluation functional φ(p) = p(0) goes to the functional p ↦ p′(0). Computing D′ on the dual monomial basis and writing the images as columns yields
\[ \left[ D' \right] = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 2 \end{bmatrix} = \left[ D \right]^{\mathrm{T}} , \]
in agreement with Theorem 1.
End of Example 1
Theorem 2: Consider linear maps \[ V \,\xrightarrow{A}\, U \,\xrightarrow{B}\, W . \] Then (B⚬A)′ = A′⚬B′. Indeed, for every φ ∈ W*,
\[ \left( B \circ A \right)' \varphi = \varphi \circ \left( B \circ A \right) = \left( \varphi \circ B \right) \circ A = A' \left( B' \varphi \right) . \]
In matrix form (for finite-dimensional spaces), this is the familiar identity \( \left( {\bf B}\,{\bf A} \right)^{\mathrm{T}} = {\bf A}^{\mathrm{T}} {\bf B}^{\mathrm{T}} \).

Euclidean Dual Transformations

Note that under the identifications of V with V* and of W with W* given by the inner products, the dual map T′ corresponds exactly to the adjoint of T with respect to those inner products; indeed, this is why the dual map and the adjoint often use the same notation.
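A short verification (in the notation of the opening paragraph; δV and δW denote the isomorphisms determined by the two inner products): the adjoint T* : W ⇾ V is characterized by ⟨ T*w, v ⟩ = ⟨ w, Tv ⟩, and for every w ∈ W and v ∈ V,
\[ \delta_V \left( T^{\ast} {\bf w} \right) ({\bf v}) = \langle T^{\ast} {\bf w} , {\bf v} \rangle = \langle {\bf w} , T {\bf v} \rangle = \delta_W ({\bf w}) \left( T {\bf v} \right) = \left( T' \delta_W ({\bf w}) \right) ({\bf v}) , \]
so δV ⚬ T* = T′ ⚬ δW.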


Annihilators


We recall the following definition.
If S is a subset (not necessarily a subspace) of a vector space V, then the annihilator S0 is the set of all linear functionals φ ∈ V* such that φ(v) = ⟨φ|v⟩ = 0 for all v ∈ S. When V is a Euclidean space, the annihilator of S is identified (via the isomorphism δ) with the orthogonal complement and is denoted by S⊥.
Note that S0 is a subspace of V*.
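For example (a small illustration in ℝ³ with the standard dual basis), take S = { e1 }. Then
\[ S^0 = \left\{ \varphi \in V^{\ast} \, : \, \varphi ({\bf e}_1 ) = 0 \right\} = \mbox{span} \left\{ {\bf e}_2^{\ast} , {\bf e}_3^{\ast} \right\} , \]
which is also the annihilator of the subspace span{ e1 } spanned by S.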
Theorem 3: Let U be a subspace of a vector space V over a field 𝔽. There exists a natural isomorphism between U0 and (V/U)*. Hence, we can identify linear functionals on V/U with elements of U0.
Let φ ∈ U0. This means that φ is a linear functional on V that vanishes on the subspace U. Define a linear functional Tφ on V/U by \[ \left( T \varphi \right) ({\bf v} + U) = \varphi ({\bf v}) . \] In other words, Tφ sends the coset v + U to the scalar φ(v). First we need to check that Tφ is well-defined. Suppose that v + U = x + U. We must verify that evaluating Tφ on either representative gives the same result. Since φ vanishes on U and v − x ∈ U, we have \[ 0 = \varphi ({\bf v} - {\bf x}) = \varphi ({\bf v}) - \varphi ({\bf x}) , \] so φ(v) = φ(x), showing that Tφ is well-defined.

This then defines a map T : U0 ⇾ (V/U)*. From the definition of addition and scalar multiplication of linear functionals, it follows that T is linear. We claim that T is invertible.

To show that T is injective, suppose that φ ∈ ker(T). Then Tφ is the zero functional on V/U, so \[ 0 = \left( T \varphi \right) \left( {\bf v} + U \right) = \varphi ({\bf v}) \qquad \mbox{for all} \quad {\bf v} \in V . \] Thus φ is the zero functional on V (i.e., the zero element of U0), so ker(T) = {0} and T is injective.

Finally, to show that T is surjective (note that we are not assuming V to be finite-dimensional), let φ ∈ (V/U)*. Define an element ψ ∈ V* by \[ \psi ({\bf v}) = \varphi \left( {\bf v} + U \right) \qquad \mbox{for all} \quad {\bf v} \in V . \] We claim that ψ is actually in U0. Indeed, if u ∈ U, then \( \psi ({\bf u}) = \varphi \left( {\bf u} + U \right) = \varphi (U) = 0 \) because u + U = U is the zero element of V/U and φ is linear. Hence ψ ∈ U0. By the definition of T, it follows that Tψ = φ, so T is surjective and thus invertible.
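As a concrete illustration (a sketch in ℝ², with U chosen for simplicity), let V = ℝ² and U = span{ e1 }. Then U0 = span{ e2* }, while V/U is one-dimensional with basis coset e2 + U. A functional φ ∈ (V/U)* is determined by the scalar c = φ(e2 + U), and the corresponding element of U0 is
\[ \psi ( x_1 , x_2 ) = \varphi \left( (x_1 , x_2 ) + U \right) = c\, x_2 , \]
which indeed vanishes on U.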