Before we dive into the ocean of linear transformations and matrices, it makes sense to recall some definitions and concepts that will be used later.
A function f : X ⇾ Y is a rule that associates with each element x (input) from a set X one and only one element y (output) of a set Y. We say that f maps the element x to the element y, which we denote as y = f(x). The set X is known as the domain of f and Y is called the codomain.
For any subset A ⊆ X, we let f(A) = {f(𝑎) ∣ 𝑎 ∈ A}; the set f(A) is called the image of A under f. In particular, y = f(x) is the image of the element x. The image of the domain of f is called the range or image of f. We denote this set as Im(f) or just Im f.

As the field of scalars we consider one of the following three fields (each of which is a one-dimensional vector space over itself): ℚ, the rational numbers, ℝ, the real numbers, or ℂ, the complex numbers. When it does not matter which of these fields is in use, we denote each of them by 𝔽.

Next, we form from each of these scalar fields a larger vector space---the direct product of n copies of it:

\[ \mathbb{F}^n = \mathbb{F} \times \mathbb{F} \times \cdots \times \mathbb{F} = \left\{ \left( x_1, x_2 , \ldots , x_n \right) \mid x_i \in \mathbb{F} \right\} . \]
This set 𝔽n, consisting of "ordered n-tuples" (x1, x2, … , xn), becomes a vector space with termwise operations:
\[ \begin{split} (a_1 , a_2 , \ldots , a_n ) + (b_1 , b_2 , \ldots , b_n ) = (a_1 + b_1 , a_2 +b_2 , \ldots , a_n + b_n ) , \\ \lambda (a_1 , a_2 , \ldots , a_n ) = (\lambda a_1 , \lambda a_2 , \ldots , \lambda a_n ) , \qquad \lambda \in \mathbb{F}. \end{split} \]
For two ordered n-tuples (which are lists or finite sequences of numbers) to be regarded as the same, they must list the same numbers in the same order. Recall also that if n = 2, then the n-tuple is called an "ordered pair," and if n = 3, it is called an "ordered triple."

Note that, independently of which particular field is used, the direct product 𝔽n has the same standard basis for each of our three fields (ℚn, ℝn, or ℂn):

\[ {\bf e}_1 = ( 1, 0, \ldots , 0, 0), \quad {\bf e}_2 = (0, 1, \ldots , 0, 0), \ldots , {\bf e}_n = (0,0, \ldots , 0, 1) . \]
Then every vector x ∈ 𝔽n can be uniquely expressed as a linear combination of these basis vectors
\[ {\bf x} = x_1 {\bf e}_1 + x_2 {\bf e}_2 + \cdots + x_n {\bf e}_n , \qquad x_i \in \mathbb{F} , \quad i=1,2,\ldots , n . \]
When 𝔽 is ℝ, the set of real numbers, and n is either 2 or 3, we can provide a geometric interpretation of a function. However, when n is larger than 3, or complex numbers are involved, the only practical way to describe a function is to define it algebraically.
An injective function (also known as injection, or one-to-one function) is a function f that maps distinct elements of its domain to distinct elements; that is, f(𝑎) = f(b) implies 𝑎 = b.
A surjective function (also known as surjection, or onto function) is a function f such that for every element y from the codomain of f there exists an input element x such that f(x) = y.
A bijective function f : X ⇾ Y is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y.

Linear Transformations

Now we turn our attention to functions or transformations of vector spaces that do not “mess up” vector addition and scalar multiplication.
Let V = 𝔽n and U = 𝔽m be vector spaces over the same field 𝔽. We call a function T : V ⇾ U a linear transformation (or linear operator or homomorphism) from V into U if for all vectors x, y ∈ V and any scalar α ∈ 𝔽, we have
  • \( T({\bf x} + {\bf y}) = T({\bf x}) + T({\bf y}) \)  (Additivity property),
  • \( T( \alpha\,{\bf x} ) = \alpha\,T( {\bf x} ) \)          (Homogeneity property).
Actually, the two required identities in the definition above can be gathered into a single one:
\begin{equation} \label{EqTransform.1} T \left( \lambda {\bf x} + {\bf y} \right) = \lambda T \left( {\bf x} \right) + T \left( {\bf y} \right) , \qquad \forall \lambda \in \mathbb{F} , \qquad \forall {\bf x}, {\bf y} \in V. \end{equation}
Example 1: We consider several examples of linear transformations.
  1. Let V = ℝ≤n[x] be a space of polynomials in variable x with real coefficients of degree no larger than n. Let U = ℝ≤n−1[x] be a similar space of polynomials but with degree up to n−1. We consider the differential operator \( \displaystyle \texttt{D} :\, V \mapsto U \) that is defined by \( \displaystyle \texttt{D}\,p(x) = p'(x) . \) As you know from calculus, this differential operator is linear.
  2. Let V = ℝ≤1[x] be the space of polynomials in variable x with real coefficients of degree at most 1, and U = ℝ≤2[x]. To any polynomial p(x) = 𝑎x + b ∈ V, we assign another polynomial T(𝑎x + b) = (½)𝑎x² + bx ∈ U.

    This transformation T : V ⇾ U is linear because \begin{align*} T \left( ax +b + cx +d \right) &= T \left( (a+c)\,x +b + d \right) \\ &= \frac{1}{2} \left( a + c \right) x^2 + \left( b + d \right) x \\ &= T \left( ax +b \right) + T \left( cx +d \right) , \end{align*} and for any real number λ, we have \begin{align*} T \left( \lambda \left( ax +b \right) \right) &= T \left( \lambda ax + \lambda b \right) \\ &= \frac{1}{2}\, \lambda a\,x^2 + \lambda b\,x = \lambda T \left( ax +b \right) . \end{align*}

  3. Let V = ℭ[0, 1] be the space of all continuous real-valued functions on the interval [0, 1]. We define a linear transformation φ : V ⇾ ℝ by \[ \varphi (f) = \int_0^1 f(x)\,{\text d} x , \qquad \forall f \in V. \] This linear transformation is known as a functional on V.
  4. Let V = ℝ3,2 be the space of all 3×2 matrices with real entries. We define a linear operator A : V ⇾ V by \[ A\,\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \, \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} , \] where it is understood that the two matrices are multiplied: \[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \, \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} = \begin{bmatrix} a + 2c + 3 e & b + 2 d + 3 f \\ 4a + 5 c + 6 e & 4 b + 5 d + 6 f \\ 7a + 8c + 9e & 7b +8 d + 9f \end{bmatrix} . \] So multiplication by a 3 × 3 matrix generates a linear transformation in the vector space of matrices ℝ3,2.
  5. Let us consider a vector space of square matrices with real coefficients: ℝn,n. For any matrix A ∈ ℝn,n, we define a linear transformation \[ T \left( {\bf A} \right) = {\bf A}^{\mathrm{T}} , \] where "T" means transposition (swap rows and columns).
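    A quick numeric sanity check of the transposition operator in Mathematica, using randomly generated matrices (a sketch, not a proof):
(* check additivity and homogeneity of T(A) = Transpose[A] on random integer matrices *)
A = RandomInteger[{-9, 9}, {4, 4}]; B = RandomInteger[{-9, 9}, {4, 4}];
Transpose[2*A + B] == 2*Transpose[A] + Transpose[B]
True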
End of Example 1
We often simply call T linear. The space V is referred to as the domain of the linear transformation, and the space U is called the codomain of T. For example, if f : ℝn ⇾ ℝm is a function that preserves linearity, we call it a real linear transformation because the scalars are real numbers. Geometrically, real linear transformations can be thought of as the functions that rotate, stretch, shrink, and/or reflect ℝn, but do so somewhat uniformly. Eq.\eqref{EqTransform.1} tells us that a linear transformation maps straight lines into straight lines because tx + y is a parametric equation of a line.

Example 2: Let T : ℝ ⇾ ℝ be a linear transformation. Each element of ℝ can be viewed either as a vector or as a scalar. Let 𝑎 = T(1). Applying property T(ku) = k T(u) with u = 1 and k = x, we obtain \[ T(x) = T(x\cdot 1) = x\,T(1) = a\,x . \] Therefore, we conclude that every linear transformation from ℝ into ℝ can be defined by the formula T(x) = 𝑎 x for some real 𝑎 ∈ ℝ.

In the above discussion, we never used any specific knowledge that T acts from ℝ into ℝ. Therefore, the same conclusion about the form of a linear transformation is valid for any transformation of the field of rational numbers ℚ ⇾ ℚ or of the field of complex numbers ℂ ⇾ ℂ.

End of Example 2
Let T : V ⇾ W be a linear transformation. The kernel (also known as the null space or nullspace) of T, denoted ker(T), is the set of all elements of V that T maps to zero: \[ \mbox{ker}(T) = \left\{ {\bf v} \in V \mid T({\bf v}) = 0 \right\} . \] It is a subspace of V.
Example 3: Let us consider a linear transformation from ℝ³ into ℝ² defined by \[ T({\bf x}) = (x_1 + x_2 , x_1 -2 x_3 ) , \qquad {\bf x} = (x_1 , x_2 , x_3 ) \in \mathbb{R}^3 . \] If x ∈ ker(T), then \[ x_1 + x_2 = 0 \qquad \mbox{and} \qquad x_1 -2 x_3 = 0. \] Setting the free variable x₃ = t, we get \[ x_1 = 2\,t \qquad \Longrightarrow \qquad x_2 = -x_1 = -2\,t . \] Hence ker(T) is the one-dimensional subspace of ℝ³ consisting of all vectors of the form \[ \mbox{ker}(T) = \left\{ (2\,t, -2\,t , t) \mid t \in \mathbb{R} \right\} . \] So the nullspace of T is spanned by the vector (2, −2, 1).
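The same basis vector is produced by Mathematica's NullSpace command applied to the standard matrix of T, whose rows hold the coefficients of the two defining equations:
NullSpace[{{1, 1, 0}, {1, 0, -2}}]  (* kernel of the matrix of T *)
{{2, -2, 1}}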
End of Example 3

We summarize several almost obvious statements about linear transformations in the following theorem.

Theorem 1: Let V and U be vector spaces over 𝔽 and let T : V ⇾ U be a function.

  1. If T is a linear transformation, then T(0) = 0.
  2. T is a linear transformation if and only if for any \( {\bf x}_1 , \ldots , {\bf x}_n \in V \) and any scalars \( a_1 , \ldots , a_n \in \mathbb{F} \)
\[ T \left( \sum_{i=1}^n a_i {\bf x}_i \right) = \sum_{i=1}^n a_i T \left( {\bf x}_i \right) . \]

Corollary 1: For any two vector spaces V and U over the same field 𝔽, the set of all linear transformations from V to U, denoted by ℒ(V, U), is a vector space with addition and scalar multiplication defined as follows: \[ \left( S + T \right) ({\bf v}) = S({\bf v}) + T({\bf v}), \qquad \left( \alpha\,T \right) ({\bf v}) = \alpha\,T({\bf v}) . \]

The set of all linear transformations from V to U is also denoted as Hom(V, U) in order to emphasize that every element of ℒ(V, U) is a homomorphism.
Example 4: Let us construct a linear transformation from ℝ² into ℝ³. We choose two basis vectors in ℝ² and corresponding vectors in ℝ³:
\[ {\bf v}_1 = \begin{pmatrix} -1 \\ \phantom{-}2 \end{pmatrix}, \quad {\bf v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad \mbox{and} \qquad {\bf u}_1 = \begin{bmatrix} -1 \\ \phantom{-}1 \\ \phantom{-}2 \end{bmatrix}, \quad {\bf u}_2 = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix} . \]
We can now define T: ℝ² ↦ ℝ³ by
\[ T \left( x \begin{bmatrix} -1 \\ \phantom{-}2 \end{bmatrix} + y \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right) = x\begin{bmatrix} -1 \\ \phantom{-}1 \\ \phantom{-}2 \end{bmatrix} + y \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix} . \]
To prove that T is a linear transformation, all we need to do is invoke Theorem 1, since T is defined on a basis and extended by linearity. An alternative way to write T is as follows:
\[ T \left( \begin{bmatrix} y-x \\ 2x+y \end{bmatrix} \right) = \begin{bmatrix} 2y-x \\ x+y \\ 2x + 3y \end{bmatrix} . \]
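Because {v₁, v₂} is a basis of ℝ², the standard matrix of T can be recovered as [u₁ u₂] [v₁ v₂]⁻¹. The following Mathematica sketch (the names U, V, A are ours) confirms the alternative formula:
U = Transpose[{{-1, 1, 2}, {2, 1, 3}}];  (* columns u1, u2 *)
V = Transpose[{{-1, 2}, {1, 1}}];        (* columns v1, v2 *)
A = U . Inverse[V];                      (* standard matrix of T *)
Simplify[A . {y - x, 2*x + y}]
{-x + 2 y, x + y, 2 x + 3 y}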
End of Example 4
Example 5: Let us consider the transformation T : ℝ² ⇾ ℝ³, given by \[ T(x, y) = (x+y, 1, y). \] The function T is not a linear transformation. For example, if we multiply (x, y) ∈ ℝ² by 2, we get \[ T(2x, 2y) = (2x + 2y , 1, 2y) \ne 2\,T(x, y) = 2\,(x + y , 1, y) = (2x + 2y , 2, 2y) . \] Therefore, T fails the homogeneity property of linearity. Alternatively, T(0, 0) = (0, 1, 0) ≠ (0, 0, 0), whereas every linear transformation maps zero to zero (Theorem 1).

Matrices as linear transformations

Every m × n matrix A ∈ 𝔽m×n can be considered as a linear operator mapping 𝔽n into 𝔽m.

Theorem 2: Let A be an m × n matrix, and consider the vector function TL : 𝔽n×1 ⇾ 𝔽m×1 defined by TL(v) = A v. Then TL is a linear transformation, which can be extended to a linear map from 𝔽n into 𝔽m.
Similarly, an m × n matrix A ∈ 𝔽m×n defines a linear transformation TR : 𝔽1×m ⇾ 𝔽1×n in the space of row vectors according to the formula TR(u) = u A.
This follows from the laws of matrix multiplication. Namely, by the distributive law, we have A(v + u) = A v + A u, showing that TL preserves addition. And by the compatibility of matrix multiplication with scalar multiplication, we have A(λv) = λA(v), showing that TL preserves scalar multiplication.
Example 6: Let us consider a 2 × 3 matrix \[ {\bf A} = \begin{bmatrix} \phantom{-}3 & 2 & \phantom{-}1 \\ -1 & 2 & -3 \end{bmatrix} . \] This matrix acts on column 3-vectors as \[ {\bf A}\,{\bf v} = \begin{bmatrix} \phantom{-}3 & 2 & \phantom{-}1 \\ -1 & 2 & -3 \end{bmatrix} \, \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3x + 2y + z \\ -x + 2y - 3z \end{bmatrix} \in \mathbb{F}^{2 \times 1} . \] Therefore, matrix A transforms every 3-column vector into a 2-column vector, showing that A is an operator A : 𝔽3×1 ⇾ 𝔽2×1.

Since every column vector space 𝔽n×1 is isomorphic to the direct product space 𝔽n, matrix A defines a linear transformation, which we denote as TA. In our case, we obtain \[ T_A \, : \mathbb{F}^3 \,\mapsto\, \mathbb{F}^2 , \qquad T_A (x, y, z) = (3x + 2y + z, -x + 2y -3z ) . \]

Similarly, when matrix A is considered as an operator acting from the right, we have \[ \left[ u, v \right] \, \begin{bmatrix} \phantom{-}3 & 2 & \phantom{-}1 \\ -1 & 2 & -3 \end{bmatrix} = \begin{bmatrix} 3u - v & 2 u + 2v & u - 3v \end{bmatrix} \in \mathbb{F}^{1 \times 3} . \] When matrix A acts from the right, it generates a linear transformation TA : 𝔽2 ⇾ 𝔽3 as \[ T_A (u, v) = (3u - v , 2 u + 2v , u - 3v) . \]
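Both actions are easy to reproduce in Mathematica, where Dot (.) multiplies a matrix by a vector on either side:
A = {{3, 2, 1}, {-1, 2, -3}};
A . {x, y, z}       (* action on column vectors *)
{3 x + 2 y + z, -x + 2 y - 3 z}
{u, v} . A          (* action on row vectors *)
{3 u - v, 2 u + 2 v, u - 3 v}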

End of Example 6

As a rule, we will consider the action of matrices on vectors from the left (similar to European languages, which write words from left to right). This approach requires utilization of column vectors from the vector space 𝔽n×1 rather than the Cartesian product 𝔽n.
An m × n matrix A ∈ 𝔽m×n, considered as an operator acting on column vectors from the left, defines a linear transformation, denoted by TA, from 𝔽n into 𝔽m.

The matrix of a linear transformation

In fact, matrix transformations are not just an example of linear transformations; they are essentially the only example. One of the central theorems in linear algebra is that all linear transformations T : 𝔽n×1 ⇾ 𝔽m×1 are in fact matrix transformations. Therefore, a matrix can be regarded as a notation for a linear transformation, and vice versa. This is the subject of the following theorem.

Theorem 3: Let T : 𝔽n×1 ⇾ 𝔽m×1 be any linear transformation. Then there exists an m × n matrix A such that for all v ∈ 𝔽n×1 \begin{equation} \label{EqTransform.2} T({\bf v}) = {\bf A}\,{\bf v} . \end{equation} In other words, T is a matrix transformation; its m × n matrix is denoted by [T].
Suppose T : 𝔽n ⇾ 𝔽m is a linear transformation and consider the standard basis { e1, e2, … , en } of 𝔽n. For every index i, define ui = T(ei) and let A be the matrix that has u1, u2, … , un as its columns. We claim that A is the desired matrix, i.e., that T(v) = A v holds for all v ∈ 𝔽n×1.

To see this, let \[ {\bf v} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \in \mathbb{F}^{n\times 1} \] be an arbitrary element of 𝔽n×1. Then v = x1e1 + x2e2 + ⋯ + xnen, and we have: \begin{align*} T({\bf v}) &= T \left( x_1 {\bf e}_1 + x_2 {\bf e}_2 + \cdots + x_n {\bf e}_n \right) \\ &= T \left( x_1 {\bf e}_1 \right) + T \left( x_2 {\bf e}_2 \right) + \cdots + T \left( x_n {\bf e}_n \right) \\ &= x_1 T \left( {\bf e}_1 \right) + x_2 T \left( {\bf e}_2 \right) + \cdots + x_n T \left( {\bf e}_n \right) \\ &= x_1 {\bf u}_1 + x_2 {\bf u}_2 + \cdots + x_n {\bf u}_n \\ &= {\bf A}\,{\bf v} . \end{align*}

Example 7: Suppose T : ℝ³ ⇾ ℝ² is a linear transformation where \[ T \left( \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} \phantom{-}2 \\ -1 \end{bmatrix} , \qquad T \left( \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \end{bmatrix} , \qquad T \left( \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right) = \begin{bmatrix} 2 \\ 3 \end{bmatrix} . \] Note that we identified ℝ³ and ℝ² with their isomorphic images ℝ3×1 and ℝ2×1, respectively. Therefore, instead of n-tuples, we use column vectors, because matrices operate on such spaces rather than on the direct products ℝn.

Using Eq.(3), we obtain \[ \left[ T \right] = {\bf A} = \begin{bmatrix} \phantom{-}2 & 1 & 2 \\ -1 & 1 & 3 \end{bmatrix} . \]
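In Mathematica, the matrix [T] can be assembled by listing the images of the standard basis vectors and transposing, so that they become columns:
Transpose[{{2, -1}, {1, 1}, {2, 3}}]  (* columns T(e1), T(e2), T(e3) *)
{{2, 1, 2}, {-1, 1, 3}}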

End of Example 7

In summary, the matrix corresponding to the linear transformation T has as its columns the vectors T(e1), T(e2), … , T(en), i.e., the images of the standard basis vectors. We can visualize this matrix as follows:

\begin{equation} \label{EqTransform.3} {\bf A} = \left[ T({\bf e}_1 ) , T({\bf e}_2 ) , \ldots , T({\bf e}_n ) \right] . \end{equation}

Corollary 2: Suppose V = 𝔽n and U = 𝔽m are vector spaces. As a vector space over 𝔽, the space of all linear transformations ℒ(V, U) is isomorphic to the space of matrices of dimension m × n, so ℒ(𝔽n, 𝔽m) ≌ 𝔽m×n.

Choose ordered bases α = {v1, v2, … , vn} of V and β = {u1, u2, … , um} of U. Now suppose that T : 𝔽n ⇾ 𝔽m is a linear transformation. Every image vector under T is a unique linear combination of basis vectors \[ T({\bf v}_i ) = \sum_{j=1}^m a_{j,i} {\bf u}_j . \tag{C2.1} \] Denote by A = [T] the m × n matrix with entries a_{j,i}. This correspondence defines a map τ : ℒ(𝔽n, 𝔽m) ↦ 𝔽m×n. We show that τ is a bijective linear map. It is trivial to check that τ is a linear transformation because formula (C2.1) is linear in T. Also, Ker(τ) = {0}: if all the coefficients a_{j,i} vanish, then T maps every basis vector to zero, and hence T = 0. Therefore, τ is injective. In particular, \[ \dim{\cal L}\left( \mathbb{F}^{n}, \mathbb{F}^{m} \right) \leqslant mn = \dim \mathbb{F}^{m\times n} . \] Now define mn linear transformations Ti,j : V ⇾ U (i = 1, 2, … , m; j = 1, 2, … , n) by the formula Ti,j(vr) = δr,jui, where δr,j is the Kronecker delta: \[ \delta_{r,j} = \begin{cases} 0, & \quad \mbox{if} \quad r \ne j , \\ 1, & \quad \mbox{if} \quad r = j. \end{cases} \] This formula is extended to the whole of V by linearity. We leave it as an exercise to check that the Ti,j are linearly independent. Hence, \[ \dim {\cal L}(V, U) \geqslant mn . \] Therefore, the two spaces ℒ(𝔽n, 𝔽m) and 𝔽m×n are isomorphic.
Example 8: Let T : ℝ2×1 ⇾ ℝ3×1 be the linear transformation defined by \[ T \left( \begin{bmatrix} x \\ y \end{bmatrix} \right) = \begin{bmatrix} 2x -y \\ y-x \\ 5x -3y \end{bmatrix} , \] for all x, y ∈ ℝ.

In order to find the matrix of this linear transformation, we compute the images of the standard basis vectors: \[ T({\bf e}_1 ) = T \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} \phantom{-}2 \\ -1 \\ \phantom{-}5 \end{bmatrix} , \qquad T({\bf e}_2 ) = T \left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right) = \begin{bmatrix} -1 \\ \phantom{-}1 \\ -3 \end{bmatrix} . \] The matrix A = [T] has T(e₁) and T(e₂) as its columns. Therefore, \[ {\bf A} = \left[ T \right] = \begin{bmatrix} \phantom{-}2 & -1 \\ -1 & \phantom{-}1 \\ \phantom{-}5 & -3 \end{bmatrix} . \]
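The same computation can be sketched in Mathematica (the function name T is ours):
T[{x_, y_}] := {2*x - y, y - x, 5*x - 3*y};
Transpose[{T[{1, 0}], T[{0, 1}]}]  (* images of e1, e2 as columns *)
{{2, -1}, {-1, 1}, {5, -3}}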

End of Example 8
Example 9: Let u = (1, −2, 3) and let T : ℝ3 ⇾ ℝ3 be the transformation defined by \[ T({\bf v}) = \mbox{proj}_{\bf u} ({\bf v}) = \frac{{\bf u} \bullet {\bf v}}{\| {\bf u} \|^2}\,{\bf u} . \] First, we need to show that T is a linear transformation for any fixed vector u. Using the distributive laws of the dot product and scalar multiplication, we get \[ T({\bf v} + {\bf w}) = \mbox{proj}_{\bf u} ({\bf v} + {\bf w}) = \frac{{\bf u} \bullet ({\bf v} + {\bf w})}{\| {\bf u} \|^2}\,{\bf u} = \frac{{\bf u} \bullet {\bf v}}{\| {\bf u} \|^2}\,{\bf u} + \frac{{\bf u} \bullet {\bf w}}{\| {\bf u} \|^2}\,{\bf u} . \] Hence, T(v + w) = T(v) + T(w), and the projection operation preserves addition. Also, given any scalar λ, we have \[ T (\lambda {\bf v}) = \mbox{proj}_{\bf u} (\lambda {\bf v}) = \frac{{\bf u} \bullet \lambda {\bf v}}{\| {\bf u} \|^2}\,{\bf u} = \lambda \left( \frac{{\bf u} \bullet {\bf v}}{\| {\bf u} \|^2}\,{\bf u} \right) = \lambda\,\mbox{proj}_{\bf u} ({\bf v}) . \] So the function T preserves scalar multiplication. From the two equations above, it follows that T is a linear transformation for any fixed vector u.

To find the matrix of T, we must compute the images of the standard basis vectors T(e₁), T(e₂), and T(e₃). This yields \begin{align*} T \left( {\bf e}_1 \right) &= \mbox{proj}_{\bf u} ({\bf e}_1 ) = \frac{{\bf u} \bullet {\bf e}_1}{\| {\bf u} \|^2}\,{\bf u} = \frac{1}{14} \begin{bmatrix} \phantom{-}1 \\ -2 \\ \phantom{-}3 \end{bmatrix} , \\ T \left( {\bf e}_2 \right) &= \mbox{proj}_{\bf u} ({\bf e}_2 ) = \frac{{\bf u} \bullet {\bf e}_2}{\| {\bf u} \|^2}\,{\bf u} = \frac{-2}{14} \begin{bmatrix} \phantom{-}1 \\ -2 \\ \phantom{-}3 \end{bmatrix} , \\ T \left( {\bf e}_3 \right) &= \mbox{proj}_{\bf u} ({\bf e}_3 ) = \frac{{\bf u} \bullet {\bf e}_3}{\| {\bf u} \|^2}\,{\bf u} = \frac{3}{14} \begin{bmatrix} \phantom{-}1 \\ -2 \\ \phantom{-}3 \end{bmatrix} , \end{align*} because ∥ u ∥² = 14. Hence the matrix of T is \[ \left[ T \right] = \frac{1}{14} \begin{bmatrix} 1 & -2 & 3 \\ -2 & 4 & -6 \\ 3 & -6 & 9\end{bmatrix} , \] which has rank 1.
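Since proj_u(v) = (u uᵀ/∥u∥²) v, the matrix [T] is the outer product u uᵀ divided by u • u = 14; a short Mathematica sketch:
u = {1, -2, 3};
Outer[Times, u, u]/(u . u)  (* the matrix of the projection onto u *)
{{1/14, -1/7, 3/14}, {-1/7, 2/7, -3/7}, {3/14, -3/7, 9/14}}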

A = {{1, -2, 3}, {-2, 4, -6}, {3, -6, 9}};
MatrixRank[A]
1
End of Example 9

Composition of Transformations

Suppose that T : 𝔽n ⇾ 𝔽m and S : 𝔽m ⇾ 𝔽r are linear transformations. Then the composition of S and T is the function (S∘T) : 𝔽n ⇾ 𝔽r whose outputs are defined by \[ \left( S \circ T \right) ({\bf u}) = S \left( T({\bf u}) \right) , \qquad \forall {\bf u} \in \mathbb{F}^{n} . \]
Given that T and S are linear transformations, it would be nice to know that S∘T is also a linear transformation.

Theorem 6: If T : 𝔽n ⇾ 𝔽m and S : 𝔽m ⇾ 𝔽r are linear transformations, then their composition (S∘T) : 𝔽n ⇾ 𝔽r is a linear transformation.

We simply check the defining properties of a linear transformation: \begin{align*} \left( S \circ T \right) ({\bf v} + {\bf u}) &= S \left( T( {\bf v} + {\bf u}) \right) \\ &= S \left( T({\bf v}) + T({\bf u}) \right) \\ &= S \left( T({\bf v}) \right) + S \left( T({\bf u}) \right) \\ &= \left( S \circ T \right) ({\bf v}) + \left( S \circ T \right) ({\bf u}) , \end{align*} and for λ ∈ 𝔽, \begin{align*} \left( S \circ T \right) (\lambda{\bf u}) &= S \left( T \left( \lambda \,{\bf u} \right) \right) \\ &= S \left( \lambda\, T \left( {\bf u} \right) \right) \\ &= \lambda\,S \left( T \left( {\bf u} \right) \right) \\ &= \lambda \left( S \circ T \right) \left( {\bf u} \right) . \end{align*}
Example 12: Suppose that T : ℝ2 ⇾ ℝ4 and S : ℝ4 ⇾ ℝ3 are defined by \[ T \left( \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] \right) = \begin{bmatrix} 2 x_1 - 3 x_2 \\ -3 x_1 + 4 x_2 \\ x_1 + 5 x_2 \\ 4 x_1 - x_2 \end{bmatrix} \qquad\mbox{and} \qquad S \left( \left[ \begin{array}{c} y_1 \\ y_2 \\ y_3 \\ y_4 \end{array} \right] \right) = \begin{bmatrix} 5 y_1 - 4 y_2 + 3y_3 - 2y_4 \\ 3 y_1 + y_2 - 3 y_3 - 4 y_4 \\ y_1 - 2 y_2 - 4 y_3 + y_4 \end{bmatrix} . \] By the definition of composition, (S∘T)(x) = S(T(x)), so we substitute \begin{align*} y_1 &= 2 x_1 - 3 x_2 , \\ y_2 &= -3 x_1 + 4 x_2 , \\ y_3 &= x_1 + 5 x_2 , \\ y_4 &= 4 x_1 - x_2 \end{align*} into the formula for S to obtain \[ \left( S \circ T \right) \left( \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] \right) = \begin{bmatrix} 17 x_1 - 14 x_2 \\ -16 x_1 - 16 x_2 \\ 8 x_1 -32 x_2 \end{bmatrix} . \]
y1 = 2*x1 - 3* x2; y2 = -3*x1 + 4*x2; y3 = x1 + 5*x2; y4 = 4*x1 - x2;
Simplify[5*y1 - 4*y2 + 3*y3 - 2*y4]
17 x1 - 14 x2
Simplify[3*y1 + y2 - 3*y3 - 4*y4]
-16 (x1 + x2)
Simplify[y1 - 2*y2 - 4*y3 + y4]
8 (x1 - 4 x2)
End of Example 12
Composing two transformations means chaining them together: S∘T is the transformation that first applies T, then applies S (note the order of operations). More precisely, to evaluate S∘T on an input vector v, first you evaluate T(v), then you take this output vector of T(v) and use it as an input vector of S, that is, (S∘T)(v) = S(T(v)). Of course, this only makes sense when the outputs of T are valid inputs of S, that is, when the range of T is contained in the domain of S.
Domain and codomain of a composition:
  • In order for S∘T to be defined, the codomain of T must equal the domain of S.
  • The domain of S∘T is the domain of T.
  • The codomain of S∘T is the codomain of S.

Theorem 7: Let T : 𝔽n ⇾ 𝔽m and S : 𝔽m ⇾ 𝔽r be linear transformations, and let A and B be their standard matrices, respectively, so A is an m × n matrix and B is an r × m matrix. Then S∘T : 𝔽n ⇾ 𝔽r is a linear transformation, and its standard matrix is the product B A.

We don't need to verify that S∘T is linear---it was done in Theorem 6.

Let C be the standard matrix of S∘T; so we have T(x) = A x, S(y) = B y, and (S∘T)(x) = C x. According to Theorem 3, the first column of C is C e₁, and the first column of A is A e₁. We have \[ (S \circ T) ({\bf e}_1 ) = S \left( T \left( {\bf e}_1 \right) \right) = S \left( {\bf A\,e}_1 \right) = {\bf B} \left( {\bf A\,e}_1 \right) . \] By definition, the first column of the product B A is the product of B with the first column of A, which is A e₁, so \[ {\bf C}\,{\bf e}_1 = (S \circ T)({\bf e}_1 ) = S \left( T \left( {\bf e}_1 \right) \right) = S \left( {\bf A}\,{\bf e}_1 \right) = \left( {\bf B\,A} \right) {\bf e}_1 . \] It follows that C has the same first column as B A. The same argument applied to the i-th standard coordinate vector ei shows that C and B A have the same i-th column; since they have the same columns, they are the same matrix.

Example 13: We reconsider the previous example. Transformation T : ℝ2 ⇾ ℝ4 is defined by the matrix \[ {\bf A} = \begin{bmatrix} \phantom{-}2 & -3 \\ -3 & \phantom{-}4 \\ \phantom{-}1 & \phantom{-}5 \\ \phantom{-}4 & -1 \end{bmatrix} , \] and transformation S : ℝ4 ⇾ ℝ3 is defined by the matrix \[ {\bf B} = \begin{bmatrix} 5 & -4 & \phantom{-}3 & -2 \\ 3 & \phantom{-}1 & -3 & -4 \\ 1 & -2 & -4 & \phantom{-}1 \end{bmatrix} . \] Then the standard matrix of S∘T becomes \[ {\bf B\,A} = \begin{bmatrix} 5 & -4 & \phantom{-}3 & -2 \\ 3 & \phantom{-}1 & -3 & -4 \\ 1 & -2 & -4 & \phantom{-}1 \end{bmatrix} \, \begin{bmatrix} \phantom{-}2 & -3 \\ -3 & \phantom{-}4 \\ \phantom{-}1 & \phantom{-}5 \\ \phantom{-}4 & -1 \end{bmatrix} = \begin{bmatrix} \phantom{-}17 & -14 \\ -16 & -16 \\ \phantom{-}8 & -32 \end{bmatrix} . \]
A = {{2, -3}, {-3, 4}, {1, 5}, {4, -1}}
B = {{5, -4, 3, -2}, {3, 1, -3, -4}, {1, -2, -4, 1}}
B.A
{{17, -14}, {-16, -16}, {8, -32}}
End of Example 13

Example 14: In mathematics, homogeneous coordinates, or projective coordinates, are a system of coordinates used in projective geometry, just as Cartesian coordinates are used in Euclidean geometry. It is a coordinate system that algebraically treats all points in the projective plane (both Euclidean and ideal) equally.

Homogeneous coordinates have a natural application to computer graphics; they form a basis for the projective geometry used extensively to project a three-dimensional scene onto a two-dimensional image plane. They also unify the treatment of common graphical transformations and operations. Homogeneous coordinates are also used in the related areas of CAD/CAM [Zeid], robotics, computational projective geometry, and fixed-point arithmetic.

The homogeneous coordinate system is formed by equating each vector in ℝ² with a vector in ℝ³ having the same first two coordinates and having 1 as its third coordinate. \[ \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] \, \longleftrightarrow \, \left[ \begin{array}{c} x_1 \\ x_2 \\ 1 \end{array} \right] . \] When we want to plot a point represented by the homogeneous coordinate vector (x₁, x₂, 1), we simply ignore the third coordinate and plot the ordered pair (x₁, x₂).

One of the advantages of homogeneous coordinates is that they allow for an easy combination of multiple transformations by concatenating several matrix-vector multiplications.

If the homogeneous coordinates of a point are multiplied by a non-zero scalar, then the resulting coordinates represent the same point. For example, the point (1, 2) in Cartesian coordinates is represented by any of the following homogeneous coordinates: \[ (1, 2 ) \qquad \iff \qquad \begin{split} (1, 2, 1) \\ (2, 4, 2) \\ (100, 200, 100) . \end{split} \]

The linear transformations discussed earlier must now be represented by 3 × 3 matrices. To do this, we take the 2 × 2 matrix representation and augment it by attaching the third row and third column of the 3 × 3 identity matrix. For example, in place of the 2 × 2 dilation matrix \[ \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} , \] we have the 3 × 3 matrix \[ \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{bmatrix} . \] Note that scaling gives \[ \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{bmatrix} \left[ \begin{array}{c} x_1 \\ x_2 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 2\,x_1 \\ 3\,x_2 \\ 1 \end{array} \right] \quad\mbox{and} \quad \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix} \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] = \left[ \begin{array}{c} 2\,x_1 \\ 3\,x_2 \end{array} \right] . \] The rotation operation is performed as \[ \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \left[ \begin{array}{c} x_1 \\ x_2 \\ 1 \end{array} \right] = \left[ \begin{array}{c} x_1 \cos\theta-x_2 \sin\theta \\ x_1 \sin\theta + x_2 \cos\theta \\ 1 \end{array} \right] \qquad\mbox{and} \qquad \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] = \left[ \begin{array}{c} x_1 \cos\theta - x_2 \sin\theta \\ x_1 \sin\theta + x_2 \cos\theta \end{array} \right] . \] Translation in homogeneous coordinates: \[ \begin{bmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 1 \end{bmatrix} \left[ \begin{array}{c} x_1 \\ x_2 \\ 1 \end{array} \right] = \left[ \begin{array}{c} x_1 + a \\ x_2 + b \\ 1 \end{array} \right] . \] So now, in homogeneous coordinates, we are able to perform scaling, rotation, and translation by simple matrix multiplications instead of applying all the transformations separately, which is much more convenient when we want to combine these operations.

In order to convert from homogeneous coordinates (x, y, w) to Cartesian coordinates, we simply divide x and y by w: \[ (x, y, w) \qquad \iff \qquad \left( \frac{x}{w} , \frac{y}{w} , 1 \right) \qquad \iff \qquad \left( \frac{x}{w} , \frac{y}{w}\right) . \]
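To illustrate how transformations are concatenated, here is a short Mathematica sketch (the symbol t stands for the rotation angle θ) that scales a point by the dilation above, then rotates it, then translates it, using a single chain of matrix products in homogeneous coordinates:
scale = {{2, 0, 0}, {0, 3, 0}, {0, 0, 1}};
rot = {{Cos[t], -Sin[t], 0}, {Sin[t], Cos[t], 0}, {0, 0, 1}};
trans = {{1, 0, a}, {0, 1, b}, {0, 0, 1}};
Simplify[trans . rot . scale . {x1, x2, 1}]  (* scale, then rotate, then translate *)
{a + 2 x1 Cos[t] - 3 x2 Sin[t], b + 2 x1 Sin[t] + 3 x2 Cos[t], 1}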

  1. Hu, Yasen, Homogeneous Coordinates.
  2. Leon, S.J., de Pillis, L., Linear Algebra with Applications, Pearson, Harlow, ISBN 13: 978-1-292-35486-6
  3. Wikipedia.
End of Example 14

Example 15: Any m × n matrix A with real entries defines a linear transformation ℝn ⇾ ℝm, also denoted by A. It is expressed, according to formula (2), by matrix multiplication: \[ {\bf x} \mapsto {\bf A}\,{\bf x} . \] Such a map has the basic property A0 = 0.
Any map f : ℝn ⇾ ℝm of the form \[ {\bf x} \mapsto f({\bf x}) = {\bf A}\,{\bf x} + {\bf b} \] for some b ∈ ℝm, is called an affine map or transformation.
Since f(0) = b, such a map can be given by matrix multiplication only when b = 0. However, one may also view affine maps as being induced by matrix multiplication as follows. The matrices of size (m+1)×(n+1) \[ \left( \begin{array}{c|c} {\bf A} & {\bf b} \\ \hline 0\cdots 0& 1 \end{array} \right) \tag{16.1} \] define transformations ℝn+1 ⇾ ℝm+1. On vectors whose last component equals 1, the matrices (16.1) operate as follows: \[ \left( \begin{array}{c|c} {\bf A} & {\bf b} \\ \hline 0\cdots 0& 1 \end{array} \right) \left( \begin{array}{c} {\bf x} \\ 1 \end{array} \right) = \left( \begin{array}{c} {\bf A}\,{\bf x} + {\bf b} \\ 1 \end{array} \right) . \tag{16.2} \] Composition of the affine matrices (16.1) follows the rule \[ \left( \begin{array}{c|c} {\bf A} & {\bf b} \\ \hline 0\cdots 0& 1 \end{array} \right) \left( \begin{array}{c|c} {\bf B} & {\bf c} \\ \hline 0\cdots 0& 1 \end{array} \right) = \left( \begin{array}{c|c} {\bf A}\,{\bf B} & {\bf A}\,{\bf c} + {\bf b} \\ \hline 0\cdots 0& 1 \end{array} \right) . \tag{16.3} \]
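The block rule (16.3) is easy to verify numerically; here is a Mathematica sketch for the 2 × 2 case (the helper name affine and the test matrices are ours):
affine[A_, b_] := Append[Join[A, Transpose[{b}], 2], {0, 0, 1}];  (* build the 3x3 block matrix (16.1) *)
A = {{1, 2}, {3, 4}}; b = {5, 6}; B = {{0, 1}, {1, 0}}; c = {7, 8};
affine[A, b] . affine[B, c] == affine[A . B, A . c + b]
True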
End of Example 15

Example 16: The terms yaw, pitch, and roll are commonly used in the aerospace industry to describe the maneuvering of an aircraft or space shuttle. In flight, any aircraft will rotate about its center of gravity, a point which is the average location of the mass of the aircraft. We can define a three-dimensional coordinate system through the center of gravity, with each axis of this coordinate system perpendicular to the other two axes. We can then define the orientation of the aircraft by the amount of rotation of the parts of the aircraft along these principal axes.

Figure 16.1 below shows the initial position of a model airplane, and Figure 16.2 shows the four basic forces acting on it. In describing yaw, pitch, and roll, the current coordinate system is given in terms of the position of the vehicle. It is always assumed that the craft is situated on the xy-plane with its nose pointing in the direction of the positive x-axis and the left wing pointing in the direction of the positive y-axis. Furthermore, when the plane moves, the three coordinate axes move with the vehicle.

Fig. 16.1: Rotations of aircraft.
     
Fig. 16.2: Four forces on aircraft.

A force may be thought of as a push or pull in a specific direction. Figure 16.2 shows the forces that act on an airplane in flight.

A yaw is a rotation in the xy-plane. Figure 16.3 illustrates a yaw of 30°. In this case, the craft has been rotated 30° to the right (clockwise). Viewed as a linear transformation in 3-space, a yaw is simply a rotation about the z-axis. Note that if the initial coordinates of the nose of the model plane are represented by the vector (1, 0, 0), then its xyz coordinates after the yaw transformation will still be (1, 0, 0), since the coordinate axes rotated with the craft. In the initial position of the airplane, the x, y, and z axes are in the same directions as the front-back, left-right, and top-bottom axes shown in the figure. We will refer to this initial front, left, top axis system as the FLT axis system. After the 30° yaw, the position of the nose of the craft with respect to the FLT axis system is \( \displaystyle \left( \frac{\sqrt{3}}{2}, - \frac{1}{2} , 0 \right) . \)

Fig. 16.3: Aircraft yaw motion.
     
Fig. 16.4: Aircraft yaw motion.

If we view a yaw transformation T in terms of the FLT axis system, it is easy to find a matrix representation. If T corresponds to a yaw by an angle θ, then T will rotate the points (1, 0, 0) and (0, 1, 0) to the positions (cos θ, − sin θ, 0) and (sin θ, cos θ, 0), respectively. The point (0, 0, 1) will remain unchanged by the yaw since it is on the axis of rotation. In terms of column vectors, if y₁, y₂, and y₃ are the images of the standard basis vectors for ℝ³ under T, then \[ {\bf y}_1 = T\left( {\bf e}_1 \right) = \left[ \begin{array}{c} \cos\theta \\ -\sin\theta \\ 0 \end{array} \right] , \qquad {\bf y}_2 = T\left( {\bf e}_2 \right) = \left[ \begin{array}{c} \sin\theta \\ \cos\theta \\ 0 \end{array} \right] , \qquad {\bf y}_3 = T\left( {\bf e}_3 \right) = \left[ \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right] . \] Therefore, the matrix representation of the yaw transformation becomes \[ {\bf Y} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} . \]

Fig. 16.5: Aircraft pitch motion.
     
Fig. 16.6: Aircraft pitch motion.

A pitch is a rotation of the aircraft in the xz-plane. Figure 16.5 illustrates a pitch of −30°. Since the angle is negative, the nose of the aircraft is rotated 30° downward, toward the bottom axis of the figure. Viewed as a linear transformation in 3-space, a pitch is simply a rotation about the y-axis. As with the yaw, we can find the matrix for a pitch transformation with respect to the FLT axis system. If T is a pitch transformation with angle of rotation φ, the matrix representation of T is given by \[ {\bf P} = \begin{bmatrix} \cos\varphi & 0 & -\sin\varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos\varphi \end{bmatrix} . \]

Fig. 16.7: Aircraft roll motion.
     
Fig. 16.8: Aircraft roll motion.

A roll is a rotation of the aircraft in the yz-plane. Figure 16.7 illustrates a roll of 30°. In this case, the left wing is rotated up 30° toward the top axis in the figure and the right wing is rotated 30° downward toward the bottom axis. Viewed as a linear transformation in 3-space, a roll is simply a rotation about the x-axis. As with the yaw and pitch, we can find the matrix representation for a roll transformation with respect to the FLT axis system. If T is a roll transformation with angle of rotation ψ, the matrix representation of T is given by \[ {\bf R} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{bmatrix} . \]

If we perform a yaw by an angle θ and then a pitch by an angle φ, the composite transformation is linear; however, its matrix representation is not equal to the product P Y. The effect of the yaw on the standard basis vectors e₁, e₂, and e₃ is to rotate them to the new directions y₁, y₂, and y₃. So the vectors y₁, y₂, and y₃ will define the directions of the x, y, and z axes when we do the pitch. The desired pitch transformation is then a rotation about the new y-axis (i.e., the axis in the direction of the vector y₂). The vectors y₁ and y₃ form a plane, and when the pitch is applied, they are both rotated by an angle φ in that plane. The vector y₂ will remain unaffected by the pitch, since it lies on the axis of rotation. Thus, the composite transformation T has the following effect on the standard basis vectors: \begin{align*} {\bf e}_1 & \stackrel{\mbox{yaw}}{\longrightarrow} \,{\bf y}_1 \, \stackrel{\mbox{pitch}}{\longrightarrow} \,\cos\varphi \,{\bf y}_1 + \sin\varphi \,{\bf y}_3 , \\ {\bf e}_2 & \stackrel{\mbox{yaw}}{\longrightarrow} \,{\bf y}_2 \, \stackrel{\mbox{pitch}}{\longrightarrow} \,{\bf y}_2 , \\ {\bf e}_3 & \stackrel{\mbox{yaw}}{\longrightarrow} \,{\bf y}_3 \, \stackrel{\mbox{pitch}}{\longrightarrow} \,-\sin\varphi \,{\bf y}_1 + \cos\varphi \, {\bf y}_3 . \end{align*}

The images of the standard basis vectors form the columns of the matrix representing the composite transformation: \[ \left( \cos\varphi \,{\bf y}_1 + \sin \varphi \,{\bf y}_3 , {\bf y}_2 , -\sin\varphi \,{\bf y}_1 + \cos \varphi \,{\bf y}_3 \right) = \left( {\bf y}_1 , {\bf y}_2 , {\bf y}_3 \right) \begin{bmatrix} \cos \varphi & 0 & - \sin \varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos \varphi \end{bmatrix} . \]

It follows that the matrix representation of the composite is a product of the two individual matrices representing the yaw and the pitch, but the product must be taken in the reverse order, with the yaw matrix Y on the left and the pitch matrix P on the right. Similarly, for a composite transformation of a yaw with angle θ, followed by a pitch with angle φ, and then a roll with angle ψ, the matrix representation of the composite transformation is the product Y P R.
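As a quick check, Mathematica confirms that the composite Y P R is orthogonal, as any composition of rotations must be (here t, p, r stand for θ, φ, ψ):
Y = {{Cos[t], Sin[t], 0}, {-Sin[t], Cos[t], 0}, {0, 0, 1}};
P = {{Cos[p], 0, -Sin[p]}, {0, 1, 0}, {Sin[p], 0, Cos[p]}};
R = {{1, 0, 0}, {0, Cos[r], -Sin[r]}, {0, Sin[r], Cos[r]}};
Simplify[(Y . P . R) . Transpose[Y . P . R]]  (* rotations preserve lengths *)
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}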

End of Example 16

Exercises

  1. Let u ∈ 𝔽n be a fixed non-zero vector. The function T defined by T(v) = v + u has the effect of translating all vectors by adding u. Show that T is not a linear transformation.
  2. Which of the following vector functions are linear transformations? \[ T_1 \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right) = \begin{bmatrix} x \left( y + z \right) \\ x-2y \\ z+ x \end{bmatrix} , \qquad T_2 \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right) = \begin{bmatrix} x + y^2 \\ y\left( x-2z \right) \\ 3z \end{bmatrix} . \]
  3. Consider the following functions T : ℝ³ ⇾ ℝ². Explain why each of these functions T is not linear. \[ \mbox{(a)}\quad T \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right) = \begin{bmatrix} 3x -z+2 \\ y-z \end{bmatrix} , \qquad \mbox{(b)}\quad T \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right) = \begin{bmatrix} x^2 + 2y \\ z-y \end{bmatrix} \]
  4. Find the matrix corresponding to the given transformation \[ \mbox{(a)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 3x_{1}-4x_{2}\\ 8x_{1}+15x_{2}\\ \end{bmatrix} ; \qquad \mbox{(b)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} x_{1}-3x_{2}+2x_{3}\\ 0 \\ 2x_{1}-x_{3} \\ -x_{1}+4x_{2}-3x_{3} \\ \end{bmatrix} ; \] \[ \mbox{(c)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 4x_{1}\\ 7x_{2} \\ -8x_{3} \\ \end{bmatrix} ; \qquad \mbox{(d)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 8x_{1}+4x_{2}-x_{3}\\ 10x_{1}-9x_{2}+12x_{3} \\ -2x_{1}-x_{3} \\ \end{bmatrix}; \] \[ \mbox{(e)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 2x_{1}+2x_{3}-2x_{4}\\ -x_{4} \\ 2x_{1}-x_{3} \\ -x_{2}+3x_{4} \\ \end{bmatrix}. \]
  5. Does the transformation satisfy the property T(A + B) = T(A) + T(B) ? \[ \mbox{(a)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} x_{1}+2\\ x_{2}+2 \\ \end{bmatrix}; \qquad \mbox{(b)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 2x_{1} \\ 3x_{2} \\ \end{bmatrix}. \]
  6. Does the transformation satisfy the property T(cA) = cT(A) for a constant c ? \[ \mbox{(a)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} x_{1}+2x_{2} \\ 3x_{2} \end{bmatrix}; \qquad \mbox{(b)}\quad \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} \,\mapsto \, \begin{bmatrix} x_{1}^2 \\ x_{2} \end{bmatrix}. \]
  7. Are the following transformations linear? Check that both properties T(A + B) = T(A) + T(B) and T(cA) = cT(A) are satisfied. \[ \mbox{(a)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 4x_{1}+x_{3}\\ -2x_{1}+3x_{2} \end{bmatrix} ; \qquad \mbox{(b)}\quad \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 4+ 2x_{1}+x_{3}\\ x_{1}+3x_{2} \end{bmatrix} . \]
  8. Find the composition of T with S, i.e., T∘S (apply S first, then T).
    1. \[ \quad S:\ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} x_{1}-4x_{2} \\ 6x_{1}+3x_{2} \\ -2x_{1}+4x_{2} \end{bmatrix}, \quad T:\ \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 3x_{1}-2x_{2}+x_{3} \\ 5x_{1}+2x_{2}-7x_{3} \\ x_{1}+7x_{2}-4x_{3} \\ \end{bmatrix} ; \]
    2. \[ \quad S:\ \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 6x_{1}+3x_{2}-x_{3} \\ x_{1}+x_{2}+x_{3}+4x_{4} \\ -2x_{1}+5x_{2}+2x_{4} \\ \end{bmatrix}, \quad T\,:\, \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} 2x_{1}+x_{2}+4x_{3} \\ x_{2}+x_{3}\\ \end{bmatrix}; \]
    3. \[ \quad S:\ \begin{bmatrix} x_{1} \\ x_{2} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} x_{1}+2x_{2} \\ 4x_{1}+3x_{2} \\ 2x_{1}+4x_{2} \\ \end{bmatrix}, \quad T: \ \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} \,\mapsto \, \begin{bmatrix} x_{1}-2x_{2}+3x_{3} \\ 3x_{1}-2x_{2}+x_{3} \\ 2x_{1}+x_{2}-3x_{3} \end{bmatrix}. \]
  9. Let T be a linear transformation from ℝ³ into ℝ² defined by relations T(i) = (1,2), T(j) = (3,-4), and T(k) = (-5,6). Find the standard matrix for T.
  10. Let T : ℝ² ⇾ ℝ² be a linear transformation. If T(1, 2) = (2, −3) and T(3, −4) = (4, 1), find the value of T(5, 7).
  11. Determine whether the following are linear transformations from ℝ³ into ℝ²
    1. T(x) = (x₂, x₁);
    2. T(x) = (0, 0);
    3. T(x) = (x₂ + 1, x₁ − 1);
    4. T(x) = (x₁ + x₂, x₃).
  12. Determine whether the following are linear transformations from ℝ² into ℝ³
    1. T(x) = (0, 0, x₁);
    2. T(x) = (0, 0, 1);
    3. T(x) = (x₁, x₂, x₁);
    4. T(x) = (x₁ + x₂, x₁ − x₂, x₁).
  13. Determine whether the following are linear operators on ℝn×n:
    1. T(A) = 3A;
    2. T(A) = AI;
    3. T(A) = AAT;
    4. T(A) = AATATA.
  14. Let C be a fixed n × n matrix. Determine whether the following are linear operators on ℝn×n:
    1. T(A) = CAAC;
    2. T(A) = C²A;
    3. T(A) = A²C.
  15. For each of the following vector functions T : ℝn ⇾ ℝn, show that T is a linear transformation and find the corresponding matrix A = [T] such that T(v) = A v.
    1. T multiplies the j-th component of v by a non-zero number b.
    2. T adds b times the j-th component of v to the i-th component.
    3. T switches the i-th and j-th components.

References
  1. Cullen, Ch.G., Matrices and Linear Transformations, second edition, Dover, New York, 1990, pp. 236ff.
  2. Leon, S.J., de Pillis, L., Linear Algebra with Applications, Pearson, Harlow. ISBN-13: 978-1-292-35486-6.