Functions are used throughout mathematics, physics, and engineering to study the structure of sets and the relationships between them. You are familiar with the notation y = f(x), where f is a function that acts on numbers, signified by the input variable x, and produces numbers signified by the output variable y. By convention the input variable is written to the right of the function, following the left-to-right direction of European writing systems.
In general, a function f : X ↦ Y is a rule that associates with each x in the set X a unique element y = f(x) in Y. We say that f maps the set X into the set Y and maps the element x to the element y. The set X is the domain of f, and the set Y is called the codomain (or range). The set of all outputs that f actually produces is called its image; it is a subset of the codomain Y. In linear algebra, we are interested in functions that map vectors to vectors while preserving the vector operations.
A function T : ℝᵐ ↦ ℝⁿ is a linear transformation if it satisfies two conditions:
T(v + u) = T(v) + T(u) Preservation of addition;
T(kv) = kT(v) Preservation of scalar multiplication;
for all vectors v, u in ℝᵐ and for all scalars k.
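These two conditions can be spot-checked numerically. Below is a minimal sketch in Python; the map T and the tolerance are illustrative assumptions, and a finite random test is evidence rather than a proof:

```python
import random

random.seed(0)

# A candidate map T : R^2 -> R^2 (a hypothetical example): T(x, y) = (2x - y, 3y)
def T(v):
    x, y = v
    return (2 * x - y, 3 * y)

def add(v, u):
    return tuple(a + b for a, b in zip(v, u))

def scale(k, v):
    return tuple(k * a for a in v)

def looks_linear(f, trials=100, tol=1e-9):
    """Spot-check both linearity conditions on random vectors and scalars."""
    for _ in range(trials):
        v = (random.uniform(-5, 5), random.uniform(-5, 5))
        u = (random.uniform(-5, 5), random.uniform(-5, 5))
        k = random.uniform(-5, 5)
        # Preservation of addition: T(v + u) = T(v) + T(u)
        if any(abs(a - b) > tol for a, b in zip(f(add(v, u)), add(f(v), f(u)))):
            return False
        # Preservation of scalar multiplication: T(kv) = kT(v)
        if any(abs(a - b) > tol for a, b in zip(f(scale(k, v)), scale(k, f(v)))):
            return False
    return True
```

A translation such as v ↦ v + (1, 0) fails the check, since it preserves neither addition nor scalar multiplication.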
As is clear from the definition above, we can similarly extend it to the complex field, ℂᵐ ↦ ℂⁿ, or the rational field, ℚᵐ ↦ ℚⁿ. In fact, we can extend this definition to arbitrary vector spaces:
Let V and U be vector spaces over a scalar field 𝔽 (which is either ℂ or ℝ or ℚ). A function T : V ↦ U is called a linear transformation (also called linear mapping or vector space homomorphism) if T preserves vector addition and scalar multiplication.
Theorem 1:
Let V be a finite-dimensional vector space of dimension n ≥ 1, and let
β = { v₁, v₂, … , vₙ } be a basis for V. Let U be any vector space, and let { u₁, u₂, … , uₙ } be a list of vectors from U. The function T : V ↦ U defined by

\[
T(a_1 v_1 + a_2 v_2 + \cdots + a_n v_n) = a_1 u_1 + a_2 u_2 + \cdots + a_n u_n
\]

is a linear transformation for any n scalars a₁, a₂, … , aₙ.
Proof: Because β is a basis of V, there are unique scalars a₁, a₂, … , aₙ such that an arbitrary vector v ∈ V is represented as a linear combination of basis vectors: v = a₁v₁ + a₂v₂ + ⋯ + aₙvₙ. So there is a unique corresponding element

\[
T(v) = a_1 u_1 + a_2 u_2 + \cdots + a_n u_n ,
\]

and T is well defined. To show that T is a linear transformation, take any vectors v = a₁v₁ + a₂v₂ + ⋯ + aₙvₙ and w = b₁v₁ + b₂v₂ + ⋯ + bₙvₙ in V and any scalar k. We have

\[
T(v + w) = T\big((a_1 + b_1)v_1 + \cdots + (a_n + b_n)v_n\big) = (a_1 + b_1)u_1 + \cdots + (a_n + b_n)u_n = T(v) + T(w),
\]
\[
T(kv) = T(k a_1 v_1 + \cdots + k a_n v_n) = k a_1 u_1 + \cdots + k a_n u_n = k\,T(v).
\]

The two required conditions are satisfied, and T is a linear transformation.
This theorem provides a really nice way to create linear transformations.
Example 1:
Let us construct a linear transformation from ℝ² into ℝ³. We choose two basis vectors in ℝ² and corresponding vectors in ℝ³:

\[
v_1 = \begin{bmatrix} -1 \\ \phantom{-}2 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \qquad u_1 = \begin{bmatrix} -1 \\ \phantom{-}1 \\ \phantom{-}2 \end{bmatrix}, \quad u_2 = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}.
\]
We can now define T: ℝ² ↦ ℝ³ by
\[
T \left( x \begin{bmatrix} -1 \\ \phantom{-}2 \end{bmatrix} + y \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right) = x\begin{bmatrix} -1 \\ \phantom{-}1 \\ \phantom{-}2 \end{bmatrix} + y \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix} = \begin{bmatrix} -x + 2y \\ \phantom{-}x + y \\ \phantom{-}2x + 3y \end{bmatrix}.
\]
To prove that T is a linear transformation, all we need to do is say: by Theorem 1, T is a linear transformation.
An alternative way to write T is in standard coordinates: solving x(−1, 2) + y(1, 1) = (a, b) for x and y and substituting gives

\[
T \left( \begin{bmatrix} a \\ b \end{bmatrix} \right) = \frac{1}{3} \begin{bmatrix} 5a + b \\ a + 2b \\ 4a + 5b \end{bmatrix}.
\]
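Example 1 can also be carried out in code. The sketch below (plain Python, no libraries) recovers the basis coordinates of an arbitrary vector (a, b) and then applies T:

```python
# T is determined by its action on the basis {(-1, 2), (1, 1)} of R^2,
# which is sent to (-1, 1, 2) and (2, 1, 3) in R^3 (Example 1).
def T(a, b):
    # Solve x*(-1, 2) + y*(1, 1) = (a, b) for the coordinates x, y:
    #   -x + y = a and 2x + y = b  =>  x = (b - a)/3,  y = (2a + b)/3
    x = (b - a) / 3
    y = (2 * a + b) / 3
    # Apply T to the linear combination: x*(-1, 1, 2) + y*(2, 1, 3)
    return (-x + 2 * y, x + y, 2 * x + 3 * y)
```

On the basis vectors, T(−1, 2) reproduces (−1, 1, 2) and T(1, 1) reproduces (2, 1, 3), as required.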
Let ℳm×n be the set of all m × n matrices with entries from the field 𝔽. Then transposition, T(A) = Aᵀ, gives a linear transformation from ℳm×n into ℳn×m.
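A quick numeric check of the matrix example, with matrices represented as nested lists (a sketch; the sample matrices are arbitrary):

```python
# Transposition maps m x n matrices to n x m matrices and is linear.
def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(k, A):
    return [[k * a for a in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[0, 1, 0],
     [1, 0, 1]]
```

Transposition preserves both vector-space operations: transpose(mat_add(A, B)) equals mat_add(transpose(A), transpose(B)), and likewise for scaling.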
Let ℭ∞[𝕋] be the set of infinitely differentiable periodic functions on the unit circle 𝕋 (the one-dimensional torus). Then the expansion of a function from ℭ∞[𝕋] into the Fourier series

\[
f(x) = \sum_{n=-\infty}^{\infty} c_n e^{j n x}, \qquad c_n = \frac{1}{2\pi} \int_{0}^{2\pi} f(x)\, e^{-j n x}\, dx,
\]

provides a linear transformation f ↦ (cₙ) from ℭ∞[𝕋] into the set of infinite sequences.
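The Fourier coefficients can be approximated numerically. The sketch below uses an equally spaced Riemann sum (which is exact, up to rounding, for trigonometric polynomials); the sample function is an assumption chosen for illustration:

```python
import cmath
import math

def fourier_coefficient(f, n, samples=4096):
    """Approximate c_n = (1/2π) ∫₀^{2π} f(x) e^{-jnx} dx by a Riemann sum."""
    total = 0j
    for k in range(samples):
        x = 2 * math.pi * k / samples
        total += f(x) * cmath.exp(-1j * n * x)
    return total / samples

# f(x) = cos(2x) = (e^{2jx} + e^{-2jx}) / 2 has c_2 = c_{-2} = 1/2 and all
# other coefficients zero; linearity of f -> (c_n) is inherited from the integral.
f = lambda x: math.cos(2 * x)
```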
Isometric transformations
A transformation A is isometric when ∥Ax∥ = ∥x∥ for all x.
This implies that the eigenvalues of an isometric transformation have modulus 1, λ = exp(jφ). An isometric transformation also preserves the inner product: 〈 Ax , Ay 〉 = 〈 x , y 〉.
When W is an invariant subspace of an isometric transformation A on a finite-dimensional space, then W⊥ is also an invariant subspace.
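A concrete isometry check: a plane rotation preserves both the norm and the inner product. A minimal sketch (the angle and sample vectors are arbitrary choices):

```python
import math

phi = 0.7
A = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

def apply(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x, y = [3.0, 4.0], [-1.0, 2.0]
```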
Orthogonal transformations
A transformation A is orthogonal if A is isometric and its inverse exists.
For an orthogonal transformation O we have the identity OᵀO = I, so Oᵀ = O⁻¹. If A and B are orthogonal, then AB and A⁻¹ are also orthogonal.
Let A : V → V be orthogonal with dim(V) < ∞. Then A is direct orthogonal if det(A) = +1; such an A describes a rotation. In particular, if A is a rotation of ℝ² through angle φ, it is given by

\[
A = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \phantom{-}\cos\varphi \end{bmatrix}.
\]

So the rotation angle φ is determined by the trace, tr(A) = 2 cos(φ) with 0 ≤ φ ≤ π. Let λ₁ and λ₂ be the roots of the characteristic equation. Then Re(λ₁) = Re(λ₂) = cos(φ), with λ₁ = exp(jφ) and λ₂ = exp(−jφ).
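The relations tr(A) = 2 cos(φ) and λ₁,₂ = exp(±jφ) can be verified directly; a sketch, with an arbitrary angle in (0, π):

```python
import cmath
import math

phi = 1.1
A = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

trace = A[0][0] + A[1][1]               # tr(A) = 2 cos(φ)
recovered_phi = math.acos(trace / 2)

# Roots of the characteristic equation λ² − tr(A)·λ + det(A) = 0, with det(A) = 1:
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = trace * trace - 4 * det          # negative for 0 < φ < π
lam1 = complex(trace / 2,  math.sqrt(-disc) / 2)
lam2 = complex(trace / 2, -math.sqrt(-disc) / 2)
```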
In ℝ³, λ₁ = 1, λ₂ = λ₃* = exp(jφ). With respect to an orthonormal basis whose first vector spans the eigenspace corresponding to λ₁ (the rotation axis), the rotation is given by the matrix

\[
A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \phantom{-}\cos\varphi \end{bmatrix}.
\]
A transformation A is called mirrored orthogonal if det(A) = −1. Vectors from the eigenspace E₋₁ are mirrored by A with respect to the invariant subspace E₋₁⊥.
A mirroring in ℝ² in the line 〈 ( cos(φ/2), sin(φ/2) ) 〉 is given by

\[
A = \begin{bmatrix} \cos\varphi & \phantom{-}\sin\varphi \\ \sin\varphi & -\cos\varphi \end{bmatrix}.
\]
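A numeric check on the standard ℝ² mirror matrix [[cos φ, sin φ], [sin φ, −cos φ]]: its determinant is −1, and it fixes the mirror-line direction (the angle below is arbitrary):

```python
import math

phi = 0.8
# Mirroring in R^2 in the line spanned by (cos(φ/2), sin(φ/2)):
M = [[math.cos(phi),  math.sin(phi)],
     [math.sin(phi), -math.cos(phi)]]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # −1 for a mirrored orthogonal map

# The direction of the mirror line is left fixed by M:
d = [math.cos(phi / 2), math.sin(phi / 2)]
Md = [sum(m * x for m, x in zip(row, d)) for row in M]
```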
Mirrored orthogonal transformations in ℝ³ are rotational mirrorings: rotations about an axis 〈 a 〉 through angle φ, combined with a mirroring in the plane 〈 a 〉⊥. With respect to an orthonormal basis whose first vector spans 〈 a 〉, the matrix of such a transformation is given by

\[
A = \begin{bmatrix} -1 & 0 & 0 \\ \phantom{-}0 & \cos\varphi & -\sin\varphi \\ \phantom{-}0 & \sin\varphi & \phantom{-}\cos\varphi \end{bmatrix}.
\]
For all direct orthogonal transformations O in ℝ³, O(x) × O(y) = O(x × y); in general, O(x) × O(y) = det(O) · O(x × y).
For each orthogonal transformation, ℝⁿ (n < ∞) can be decomposed into invariant subspaces of dimension 1 or 2.
Unitary transformations
Let V be a complex vector space with an inner product. A linear transformation U of V is called unitary if it is isometric and its inverse exists.
An n × n matrix U is unitary if U*U = I, the identity matrix, where U* denotes the conjugate transpose of U. Its determinant satisfies |det(U)| = 1. Every isometric transformation of a finite-dimensional complex vector space is unitary.
Theorem 2:
For an n × n matrix A, the following statements are equivalent:
A is unitary.
The columns of A form an orthonormal set.
The rows of matrix A form an orthonormal set.
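A small check of the theorem; the matrix U below is a hypothetical 2 × 2 unitary matrix chosen for illustration:

```python
import math

s = 1 / math.sqrt(2)
U = [[complex(s), s * 1j],
     [s * 1j, complex(s)]]   # U = (1/√2) [[1, j], [j, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def conj_transpose(X):
    return [[X[i][j].conjugate() for i in range(len(X))] for j in range(len(X[0]))]

UstarU = matmul(conj_transpose(U), U)   # should be the identity

# Columns are orthonormal in the complex inner product <u, v> = Σ u_k conj(v_k):
col0 = [U[0][0], U[1][0]]
col1 = [U[0][1], U[1][1]]
inner = sum(a * b.conjugate() for a, b in zip(col0, col1))
```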
Symmetric transformations
A transformation of ℝn is called symmetric if 〈 Ax , y 〉 = 〈 x , Ay 〉 for any vectors x and y from the vector space.
A square matrix A is symmetric if Aᵀ = A. A linear transformation is symmetric if its matrix with respect to an orthonormal basis is symmetric. All eigenvalues of a symmetric transformation are real, and eigenvectors corresponding to distinct eigenvalues are orthogonal. If A is symmetric, then Aᵀ = A = A* with respect to any orthonormal basis. The product AᵀA is symmetric for any matrix A.
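For a 2 × 2 symmetric matrix these facts can be verified by hand; a sketch, with arbitrary sample entries:

```python
import math

# A = [[a, b], [b, c]] is symmetric; its characteristic equation is
# λ² − (a + c)λ + (ac − b²) = 0.
a, b, c = 2.0, 1.0, 3.0
tr, det = a + c, a * c - b * b
disc = tr * tr - 4 * det        # equals (a − c)² + 4b² ≥ 0, so both roots are real
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2

# For b != 0, (b, λ − a) is an eigenvector for eigenvalue λ:
v1 = (b, lam1 - a)
v2 = (b, lam2 - a)
orth = v1[0] * v2[0] + v1[1] * v2[1]   # vanishes: distinct eigenvalues ⇒ orthogonal
```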
Self-adjoint transformations
A transformation H : ℂⁿ → ℂⁿ is called self-adjoint or Hermitian if
〈 Hx , y 〉 = 〈 x , Hy 〉 for any vectors x and y from the vector space.
A product AB of two self-adjoint matrices A and B is self-adjoint if and only if their commutator is zero, [A, B] = AB − BA = 0.
Eigenvalues of any self-adjoint matrix are real numbers.
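The commuting-product rule can be illustrated with a pair of Hermitian matrices that commute by construction (here B = 2A + I, a hypothetical example):

```python
A = [[2 + 0j, 1 + 1j],
     [1 - 1j, 3 + 0j]]
B = [[5 + 0j, 2 + 2j],
     [2 - 2j, 7 + 0j]]   # B = 2A + I, so [A, B] = AB − BA = 0

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(X):
    return [[X[i][j].conjugate() for i in range(2)] for j in range(2)]

AB, BA = matmul(A, B), matmul(B, A)
commutator_is_zero = AB == BA
product_is_hermitian = conj_transpose(AB) == AB
```

Since (AB)* = B*A* = BA, the product of two Hermitian matrices is Hermitian exactly when they commute.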
Normal transformations
A linear transformation A is called normal if A*A = AA*.
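A rotation matrix is a handy example of a normal transformation that is not symmetric; a quick check (the angle is an arbitrary choice):

```python
import math

phi = 0.6
A = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[i][j] for i in range(2)] for j in range(2)]

AstarA = matmul(transpose(A), A)   # A*A (here A* = Aᵀ, since A is real)
AAstar = matmul(A, transpose(A))   # AA*
```

Both products equal the identity, so A*A = AA* even though A ≠ Aᵀ.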
Let the distinct roots of the characteristic equation of a normal matrix A be βᵢ with multiplicities nᵢ. Then the dimension of each eigenspace Vᵢ equals nᵢ. These eigenspaces are mutually perpendicular, and each vector x ∈ V can be written in exactly one way as

\[
x = x_1 + x_2 + \cdots + x_k, \qquad x_i \in V_i .
\]