
Plane Transformations

This section is devoted to illustrations of linear transformations of the plane and of three-dimensional space using square matrices. It should help you develop a geometric understanding of matrices and of their relationship to coordinate-space transformations in general. Every square matrix can be regarded as an operator that acts on column vectors from the left, producing another column vector as output. Each linear transformation ℝⁿ ⇾ ℝⁿ is then identified with a square n×n matrix A, and vice versa. When ℝⁿ is identified with the space of column vectors, ℝⁿˣ¹, a linear transformation is realized by multiplication by the matrix from the left: A x, where x ∈ ℝⁿˣ¹ is an arbitrary column vector.

Plane transformations can be classified as reflections, contractions/expansions, shears, rotations, and projections (projections are also treated in another section). The following subsections introduce the terminology for these transformations. The difference between the matrix multiplication A x and the corresponding linear transformation TA : ℝ² ⇾ ℝ² is merely a matter of notation; therefore, we may classify matrices instead of linear maps. We mention the following properties of matrix transformations (a short check illustrating them follows the list):

  • linearity,
  • closed under composition,
  • associativity,
  • not commutative,
  • applied left-to-right.
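As a quick illustration of these properties, here is a minimal sketch in which the matrices A and B and the vector x are chosen arbitrarily: composing matrix transformations amounts to matrix multiplication, the composition is associative, and it is generally not commutative.
A = {{1, 1}, {0, 1}};  (* a shear *)
B = {{2, 0}, {0, 1}};  (* an expansion along the abscissa *)
x = {1, 1};            (* an arbitrary column vector *)
A . (B . x) == (A . B) . x   (* True: applying B, then A, is the same as applying A.B *)
A . B == B . A               (* False: matrix transformations do not commute in general *)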

We demonstrate linear transformations acting on a house placed at the origin, the point {0, 0} of the two-dimensional Cartesian plane. Matrix algebra is used to move this house, with every transformed copy sharing the common corner at the origin. The first picture is not a transformation but the baseline object that will be transformed as we proceed. The base of the house is a parallelogram, so transforming its corner vectors is all that is required to produce the desired change.

$Post := If[MatrixQ[#1], MatrixForm[#1], #1] &
Clear[house];
house[trans_ : {{1, 0}, {0, 1}}, label_ : "House in Quadrant I"] :=
 Module[{para, tri, door},
  (* base of the house: the parallelogram spanned by the rows of trans *)
  para = Parallelogram[{0, 0}, trans];
  (* roof: a red triangle attached to the top edge of the base *)
  tri = Triangle[{trans[[2]], {trans[[1, 1]], trans[[2, 2]]}, {.5*trans[[1, 1]], 1.5*trans[[2, 2]]}}];
  (* door: a small white parallelogram on the bottom edge *)
  door = Parallelogram[.4*trans[[1]], {.2*trans[[1]], .5 trans[[2]]}];
  Graphics[{Blue, para, Red, tri, White, door}, Axes -> True,
   PlotLabel -> label, PlotRange -> {{-3, 3}, {-3, 3}}]]
houseNE = house[]
House to be transformed.
When the house sits in the northeast quadrant, its base (ignoring the roof and front door) is the unit square whose sides are the basis vectors i and j, the rows of the identity matrix:
i = {1, 0}; j = {0, 1}; {i, j} == IdentityMatrix[2]
True
We plot the square base (in blue):
Graphics[{Blue, Parallelogram[{0, 0}, IdentityMatrix[2]]}, Axes -> True, PlotRange -> {{-3, 3}, {-3, 3}}]
Notice that when only the solid-colored square base is drawn, you cannot tell whether the house is upright, flipped on its side, or upside down. Therefore, below we add a roof and a door to help orient the viewer.

 

Uniform scale


A diagonal matrix has nonzero entries only on its main diagonal. If these entries are equal, the matrix defines a uniform scaling transformation:
\[ {\bf x} \mapsto {\bf A}\,{\bf x} , \qquad {\bf A} = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} , \]
where 𝑎 is a positive number. There are two kinds of uniform scaling transformations.
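A quick check, with the vector {3, −4} chosen arbitrarily for illustration, that a uniform scaling matrix is a multiple of the identity and stretches the length of every vector by the same factor:
s = 1.5;
scale = s IdentityMatrix[2];
Norm[scale . {3, -4}] == s Norm[{3, -4}]   (* True *)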

A dilation (or expansion) is a transformation accomplished by multiplication from the left by a matrix of the form
\[ \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} , \qquad \mbox{with} \quad a > 1. \]
We can make our house 50% larger. Note that in the code the transformation matrix is passed directly to house, which uses its rows as the images of the basis vectors; no explicit matrix product is needed.
houseBig = house[{{1.5, 0}, {0, 1.5}}, "Dilation by 50%"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[DiagonalMatrix[{1.5, 1.5}]], houseBig}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1.5 & 0 \\ 0 & 1.5 \end{bmatrix} \]      
Dilation by 50%.

A contraction (or compression) is a transformation defined by multiplication from the left by a matrix of the form
\[ \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} , \qquad \mbox{with} \quad 0 < a < 1 . \]
houseSm = house[{{.6, 0}, {0, .6}}, "Uniform 40% contraction"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {houseNE, MatrixForm[DiagonalMatrix[{.6, .6}]], houseSm}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 0.6 & 0 \\ 0 & 0.6 \end{bmatrix} \]      
Uniform contraction by 40%.

 

Nonuniform scale


A nonuniform scaling is accomplished by multiplication from the left by a matrix of the form
\[ \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} , \qquad\mbox{where} \quad a \ne b . \]
houseTall = house[{{1, 0}, {0, 1.5}}, "Non-uniform (vertical)\ndilation"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1, 0}, {0, 1.5}}], houseTall}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1 & 0 \\ 0 & 1.5 \end{bmatrix} \]      
Nonuniform dilation in vertical direction.

houseWide = house[{{1.5, 0}, {0, 1}}, "Non-uniform (horizontal)\ndilation"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1.5, 0}, {0, 1}}], houseWide}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1.5 & 0 \\ 0 & 1 \end{bmatrix} \]      
Nonuniform dilation in horizontal direction.

houseUp = house[{{1.5, 0}, {0, 2}}, "Nonuniform dilation"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1.5, 0}, {0, 2}}], houseUp}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1.5 & 0 \\ 0 & 2 \end{bmatrix} \]      
Nonuniform dilation.

houseDn = house[{{1, 0}, {0, 0.6}}, "Nonuniform contraction\nin vertical direction"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1, 0}, {0, 0.6}}], houseDn}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1 & 0 \\ 0 & 0.6 \end{bmatrix} \]      
Nonuniform contraction in vertical direction.

houseDn2 = house[{{0.6, 0}, {0, 1}}, "Nonuniform contraction\nin horizontal direction"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{0.6, 0}, {0, 1}}], houseDn2}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 0.6 & 0 \\ 0 & 1 \end{bmatrix} \]      
Nonuniform contraction in horizontal direction.

houseDn3 = house[{{1.5, 0}, {0, 0.6}}, "Dilation and contraction"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1.5, 0}, {0, 0.6}}], houseDn3}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1.5 & 0 \\ 0 & 0.6 \end{bmatrix} \]      
Dilation and contraction.

 

Shear Map


A horizontal shear is observed when the vector (1, 0) pointing in the horizontal direction is fixed, but the vector (0, 1) pointing in the vertical direction is taken to the vector (𝑎, 1), where 𝑎 is some real number. Notice that if 𝑎 > 0, then our horizontal shear pushes the top of the blue parallelogram to the right. If 𝑎 < 0, then we push the top of the blue parallelogram to the left. It is easy to see from the definition of this horizontal shear that the corresponding matrix is
\[ \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix} \]
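A quick symbolic check (a sketch, with a kept symbolic) that this matrix fixes (1, 0) and sends (0, 1) to (a, 1):
Clear[a];
shearH = {{1, a}, {0, 1}};
{shearH . {1, 0}, shearH . {0, 1}}   (* {{1, 0}, {a, 1}} *)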

The code for the roof does not play well with the shear transform, so we dispense with the roof below, leaving only a door to remind us that we started with a house. A few ramifications are worth mentioning before continuing. First, the roof is a triangle, and the simple coordinates used to build it in house assume an axis-aligned rectangular base; the door survives in the illustration because, like the base, it is a parallelogram built from the same two spanning vectors. Second, when the base of the house is a rectangle the roof "follows" the base in the code, which is no longer true when the base is a non-rectangular parallelogram; this respects the fact that every rectangle is a parallelogram but the converse is not true. Third, the house is a metaphor, an abstraction used to advance the pedagogy of linear algebra. We must be careful taking pedagogy into the real world: put a roof on a base that is a tilted parallelogram and watch the house fall down, a reminder of the importance of keeping the load orthogonal to its support where gravity matters.

Clear[house2];
house2[trans_ : {{1, 0}, {0, 1}}, label_ : ""] :=
 Module[{para, door},
  (* base: the parallelogram spanned by the rows of trans *)
  para = Parallelogram[{0, 0}, trans];
  (* door: a small white parallelogram on the bottom edge *)
  door = Parallelogram[.4*trans[[1]], {.2*trans[[1]], .5 trans[[2]]}];
  Graphics[{Blue, para, White, door}, Axes -> True, PlotLabel -> label,
   PlotRange -> {{-3, 3}, {-3, 3}}]];
hzShearR = house2[{{1, 1}, {0, 1}}, "Horizontal shear\nto the right-up"];
Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1, 1}, {0, 1}}], hzShearR}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \]      
Horizontal shear to the right-up.

A vertical shear would fix (0, 1) and would push the vertical component of the vector (1, 0) up 𝑎 units to the vector (1, 𝑎). So the matrix for a vertical shear is
\[ \begin{bmatrix} 1 & 0 \\ a & 1 \end{bmatrix} . \]
A shear matrix is a defective invertible matrix: for a ≠ 0 its only eigenvalue is 1 with a one-dimensional eigenspace, while its determinant equals 1.
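We can verify both claims symbolically (a short check, with a ≠ 0 understood):
Eigensystem[{{1, a}, {0, 1}}]   (* {{1, 1}, {{1, 0}, {0, 0}}} -- the zero vector signals the missing eigenvector *)
Det[{{1, a}, {0, 1}}]           (* 1, so the matrix is invertible *)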
hzShearL = house2[{{1, -1}, {0, 1}}, "Horizontal shear\nto the left"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1, -1}, {0, 1}}], hzShearL}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1 & -1 \\ 0 & \phantom{-}1 \end{bmatrix} \]      
Horizontal shear to the left.

vShearTop = house2[{{1, 0}, {1, 1}}, "Vertical shear to the top"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1, 0}, {1, 1}}], vShearTop}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \]      
Vertical shear to the top-right.

vShearDn = house2[{{1, 0}, {-1, 1}}, "Vertical shear down"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{1, 0}, {-1, 1}}], vShearDn}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} \phantom{-}1 & 0 \\ -1 & 1 \end{bmatrix} \]      
Vertical shear down.

 

Reflection


Initially, the house is in the northeast quadrant of the Cartesian plane. Anything multiplied by the identity matrix is unchanged by that operation. So, beginning with the identity matrix (the original house) and proceeding clockwise, we name four matrices: the first leaves the house where it is, and the remaining three move it, in three operations, from quadrant to quadrant.
{ne, se, sw, nw} = {{{1, 0}, {0, 1}}, {{1, 0}, {0, -1}}, {{-1, 0}, {0, -1}}, {{-1, 0}, {0, 1}}};
ne == IdentityMatrix[2]
True
Below, the first is the Identity Matrix and the last three are the reflection matrices.
Grid[{{"ne", "se", "sw", "nw"}, MatrixForm[#] & /@ {ne, se, sw, nw}}, Frame -> All]

Because all four matrices are diagonal, these reflections can be viewed as special cases of scaling in which negative scale factors are allowed.


houseSE = Graphics[{Blue, Parallelogram[{0, 0}, se], Red, Opacity[.2], Parallelogram[{.5, -.5}, Transpose[se.{{-.5, .5}, {.5, .5}}]]}, Axes -> True, PlotRange -> {{-2, 2}, {-2, 2}}, Epilog -> {White, Parallelogram[{.4, 0}, {{1, 0}, {0, -1}}.{{.2, 0}, {0, .5}}]}]

House in quadrant I.
      \[ \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} \]      
Reflection about the abscissa.

houseSW = Graphics[{Blue, Parallelogram[{0, 0}, sw], Red, Opacity[.2], Parallelogram[{-.5, -.5}, sw.{{-.5, .5}, {.5, .5}}]}, Axes -> True, PlotRange -> {{-2, 2}, {-2, 2}}, Epilog -> {White, Parallelogram[{-.4, 0}, {{-1, 0}, {0, -1}}.{{.2, 0}, {0, .5}}]}]

House in quadrant I.
      \[ \begin{bmatrix} -1 & \phantom{-}0 \\ \phantom{-}0 & -1 \end{bmatrix} \]      
Reflection with respect to both axes.

houseNW = Graphics[{Blue, Parallelogram[{0, 0}, nw], Red, Opacity[.2], Parallelogram[{-.5, .5}, {{-.5, .5}, {.5, .5}}.nw]}, Axes -> True, PlotRange -> {{-2, 2}, {-2, 2}}, Epilog -> {White, Parallelogram[{-.4, 0}, {{-1, 0}, {0, 1}}.{{.2, 0}, {0, .5}}]}]

House in quadrant I.
      \[ \begin{bmatrix} -1 & 0 \\ \phantom{-}0 & 1 \end{bmatrix} \]      
Reflection about the ordinate.

reflYneq = house2[{{0, -1}, {-1, 0}}, "Reflection about y=\[Minus]x"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{0, -1}, {-1, 0}}], reflYneq}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} \phantom{-}0 & -1 \\ -1 & \phantom{-}0 \end{bmatrix} \]      
Reflection about y = −x.

reflYeq = house2[{{0, 1}, {1, 0}}, "Reflection about y=x"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[{{0, 1}, {1, 0}}], reflYeq}}, Frame -> All]

House in quadrant I.
      \[ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \]      
Reflection about the line y = x.

Now we consider a linear transformation that reflects vectors across a line L that makes an angle θ with the x-axis (the abscissa). The matrix corresponding to such a transformation is
\begin{equation} \label{EqPlane.1} {\bf A} = \begin{bmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix} . \end{equation}
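A short symbolic check of Eq.\eqref{EqPlane.1} (a sketch): the matrix fixes the direction vector (cos θ, sin θ) of the line L, and applying the reflection twice gives the identity.
reflect[\[Theta]_] := {{Cos[2 \[Theta]], Sin[2 \[Theta]]}, {Sin[2 \[Theta]], -Cos[2 \[Theta]]}};
Simplify[reflect[\[Theta]] . {Cos[\[Theta]], Sin[\[Theta]]}]   (* {Cos[\[Theta]], Sin[\[Theta]]}: points on L are fixed *)
Simplify[reflect[\[Theta]] . reflect[\[Theta]]]                (* {{1, 0}, {0, 1}}: reflecting twice is the identity *)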
refl\[Theta] = house2[.5 {{1, Sqrt[3]}, {Sqrt[3], -1}}, "Reflection about the line\ny = x/\[Sqrt]3"]; Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], HoldForm[1/2 MatrixForm[{{1, Sqrt[3]}, {Sqrt[3], -1}}]], refl\[Theta]}}, Frame -> All]

House in quadrant I.
      \[ \frac{1}{2} \begin{bmatrix} 1 & \sqrt{3} \\ \sqrt{3} & -1 \end{bmatrix} \]      
Reflection about the line y = x/√3, which makes the angle θ = π/6 with the abscissa.

 

Rotation


This topic has been moved to another section.
Example 1: A picture in the plane can be stored in the computer as a set of vertices. The vertices can then be plotted and connected by lines to produce the picture. If there are n vertices, they are stored in a 2 × n matrix. The x-coordinates of the vertices are stored in the first row and the y-coordinates in the second. Each successive pair of points is connected by a straight line.

For example, to generate a rhombus with vertices (0, 0), (1, 1), (0, 2), and (1, −1), we store the pairs as columns of a matrix: \[ {\bf T} = \begin{bmatrix} 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 2 & -1 & 0 \end{bmatrix} . \] An additional copy of the vertex (0, 0) is stored in the last column of T so that the previous point (1, −1) will be connected back to (0, 0) [see Figure 4.2.3(a)].
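The stored figure can be drawn directly from T (a brief sketch using the matrix above): the columns of T are the vertices, so transposing T gives the list of points to connect.
T = {{0, 1, 0, 1, 0}, {0, 1, 2, -1, 0}};
Graphics[Line[Transpose[T]], Axes -> True]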

Rhombus defined by T.
      Leon, page 205, Figure 4.2.3

We can transform a figure by changing the positions of the vertices and then redrawing the figure. If the transformation is linear, it can be carried out as a matrix multiplication. Viewing a succession of such drawings will produce the effect of animation. The four primary geometric transformations that are used in computer graphics are as follows:

  1. Dilations and contractions. A linear operator of the form \[ T({\bf x}) = c\,{\bf x} \] is a dilation if c > 1 and a contraction if 0 < c < 1. The operator T is represented by the matrix cI, where I is the 2 × 2 identity matrix. A dilation increases the size of the figure by a factor c > 1, and a contraction shrinks the figure by a factor c < 1. Figure 4.2.3(b) shows a dilation by a factor of 1.5 of the rhombus stored in the matrix T.
  2. Reflections about an axis. If Tx is a transformation that reflects a vector x about the x-axis, then Tx is a linear operator and hence it can be represented by a 2 × 2 matrix A. Since \[ T_x ({\bf e}_1 ) = {\bf e}_1 \qquad \mbox{and} \qquad T_x ({\bf e}_2 ) = -{\bf e}_2 , \] it follows that \[ {\bf A} = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} . \]

    Rhombus defined by T.
          Leon, page 205, Figure 4.2.3

    Similarly, if Ty is the linear operator that reflects a vector about the y-axis, then Ty is represented by the matrix \[ \left[ T_y \right] = \begin{bmatrix} -1 & 0 \\ \phantom{-}0 & 1 \end{bmatrix} . \] Figure 4.2.3(c) shows the image of the rhombus after a reflection about the y-axis.

Rhombus reflected about the y-axis.
          Leon, page 205, Figure 4.2.3

  3. Rotations. Let T be a transformation that rotates a vector about the origin by an angle θ in the counterclockwise direction. We saw earlier that such a T is a linear operator and that T(x) = A x, where \[ {\bf A} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \phantom{-}\cos\theta \end{bmatrix} . \] Figure 4.2.3(d) shows the result of rotating the rhombus stored in T by 60° in the counterclockwise direction (a short sketch of this rotation appears after this list).

    Rhombus rotated by 60°.
          Leon, page 205, Figure 4.2.3

  4. Translations. A translation by a vector a is a transformation of the form \[ T({\bf x}) = {\bf x} + {\bf a} . \] If a ≠ 0, then T is not a linear transformation and hence T cannot be represented by a 2 × 2 matrix. However, in computer graphics it is desirable to carry out all transformations as matrix multiplications. The way around this problem is to introduce a new system of coordinates called homogeneous coordinates.
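Here is a brief sketch of the rotation from item 3, using the vertex matrix T above and the built-in RotationMatrix; the original rhombus is drawn in black and its rotated image in red.
T = {{0, 1, 0, 1, 0}, {0, 1, 2, -1, 0}};
R = RotationMatrix[60 Degree];          (* {{Cos[60°], -Sin[60°]}, {Sin[60°], Cos[60°]}} *)
Graphics[{Line[Transpose[T]], Red, Line[Transpose[R . T]]}, Axes -> True]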
End of Example 1
Theorem 1: If E is an elementary matrix, then TE : ℝ²ˣ¹ ⇾ ℝ²ˣ¹ is one of the following:
  1. An expansion along a coordinate axis.
  2. A compression along a coordinate axis.
  3. A shear along a coordinate axis.
  4. A reflection about the line y = x.
  5. A reflection about a coordinate axis.
  6. A compression or expansion along a coordinate axis followed by reflection about a coordinate axis.
Because a 2 × 2 elementary matrix results from performing a single elementary row operation on the 2 × 2 identity matrix, such a matrix must have one of the following forms: \[ \begin{bmatrix} a & 0 \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & a \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ a & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} . \] The third and fourth matrices represent shears along coordinate axes, and the first two represent compressions or expansions along coordinate axes, depending on whether 0 < 𝑎 < 1 or 𝑎 > 1. If 𝑎 < 0 and we set 𝑎 = −k, where k > 0, then these matrices can be written as \[ \begin{bmatrix} a & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -k & 0 \\ \phantom{-}0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ \phantom{-}0 & 1 \end{bmatrix} \cdot \begin{bmatrix} k & 0 \\ 0 & 1 \end{bmatrix} \tag{P.1} \] and \[ \begin{bmatrix} 1 & 0 \\ 0 & a \end{bmatrix} = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -k \end{bmatrix} = \begin{bmatrix} 1 & \phantom{-}0 \\ 0 & -1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & k \end{bmatrix} . \tag{P.2} \] Since k > 0, the product in Eq.(P.1) represents a compression or expansion along the abscissa followed by a reflection about the ordinate, and Eq.(P.2) represents a compression or expansion along the ordinate followed by a reflection about the abscissa. In the case where 𝑎 = −1, transformations (P.1) and (P.2) are simply reflections about the ordinate and abscissa, respectively.

The last matrix represents a reflection about y = x.
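A quick check of factorizations (P.1) and (P.2) with a symbolic k (a sketch):
Clear[k];
{{-1, 0}, {0, 1}} . {{k, 0}, {0, 1}} == {{-k, 0}, {0, 1}}    (* True *)
{{1, 0}, {0, -1}} . {{1, 0}, {0, k}} == {{1, 0}, {0, -k}}    (* True *)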

Example 2: Let us consider the matrix \[ {\bf A} = \begin{bmatrix} 0 & 1 \\ \frac{3}{2} & 1 \end{bmatrix} . \]
code:
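The original leaves this code cell empty; here is a minimal sketch that applies A to the house base, reusing the house and house2 helpers defined above. Since those helpers take the images of the basis vectors as rows, we pass Transpose[A].
A = {{0, 1}, {3/2, 1}};
houseA = house2[Transpose[A], "Image of the house base under A"];
Grid[{{"House to be\nTransformed", "Transform\nMatrix", "Transformed\nHouse"}, {house[], MatrixForm[A], houseA}}, Frame -> All]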

House in quadrant I.
      \[ \begin{bmatrix} 1 & 0 \\ \frac{3}{2} & 1 \end{bmatrix} \]      
Vertical shear to the top.

This matrix can be reduced to the identity matrix as follows: \[ \begin{bmatrix} 0 & 1 \\ \frac{3}{2} & 1 \end{bmatrix} \, \rightarrow \, \begin{bmatrix} \frac{3}{2} & 1 \\ 0 & 1 \end{bmatrix} \, \rightarrow \, \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix} \, \rightarrow \, \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} . \] First, we interchange the first and second rows. Then we multiply every entry of the first row by ⅔ (that is, divide by 3/2). Finally, we add −⅔ times the second row to the first row.

These three successive row operations can be performed by multiplying A on the left successively by \[ {\bf E}_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \qquad {\bf E}_2 = \begin{bmatrix} \frac{2}{3} & 0 \\ 0 & 1 \end{bmatrix} , \qquad {\bf E}_3 = \begin{bmatrix} 1 & -\frac{2}{3} \\ 0 & \phantom{-}1 \end{bmatrix} . \] Inverting these matrices, we get

Inverse[{{0, 1}, {1, 0}}]
\( \displaystyle \begin{pmatrix} 0&1 \\ 1 & 0 \end{pmatrix} \)
Inverse[{{2/3, 0}, {0, 1}}]
\( \displaystyle \begin{pmatrix} \frac{3}{2} & 0 \\ 0 & 1 \end{pmatrix} \)
Inverse[{{1, -2/3}, {0, 1}}]
\( \displaystyle \begin{pmatrix} 1 & \frac{2}{3} \\ 0&1 \end{pmatrix} \)
\[ {\bf E}_1^{-1} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} , \qquad {\bf E}_2^{-1} = \begin{bmatrix} \frac{3}{2} & 0 \\ 0 & 1 \end{bmatrix} , \qquad {\bf E}_3^{-1} = \begin{bmatrix} 1 & \frac{2}{3} \\ 0&1 \end{bmatrix} . \] Then matrix A becomes a product of the inverse matrices \[ {\bf A} = {\bf E}_1^{-1} \cdot {\bf E}_2^{-1} \cdot {\bf E}_3^{-1} . \] We check with Mathematica:
Inverse[{{0, 1}, {1, 0}}].Inverse[{{2/3, 0}, {0, 1}}].Inverse[{{1, -2/3}, {0, 1}}]
\( \displaystyle \begin{pmatrix} 0&1 \\ \frac{3}{2} & 1 \end{pmatrix} \)
Reading from right to left, we can see that the geometric effect of multiplying by A is equivalent to successively
  1. shearing by a factor of ⅔ in the direction of the abscissa;
  2. expanding by a factor of 3/2 in the x-direction;
  3. reflecting about the line y = x.
The figure below illustrates the matrix decomposition.
code:
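The original leaves this code cell empty as well; a possible sketch of the three panels described below applies the factors step by step (again passing columns as rows via Transpose).
e3i = {{1, 2/3}, {0, 1}};   (* shear by 2/3 in the horizontal direction *)
e2i = {{3/2, 0}, {0, 1}};   (* expansion by 3/2 in the x-direction *)
e1i = {{0, 1}, {1, 0}};     (* reflection about the line y = x *)
GraphicsRow[{house2[Transpose[e3i], "Shear by 2/3"], house2[Transpose[e2i . e3i], "Then expand by 3/2"], house2[Transpose[e1i . e2i . e3i], "Then reflect about y = x"]}]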

Shearing by a factor of ⅔.
     
Expanding by a factor of 3/2.
     
Reflection about the line y = x.

End of Example 2
Example 3: Using Eq.\eqref{EqPlane.1} with θ = π/3, we get \[ {\bf A} = \frac{1}{2} \begin{bmatrix} -1 & \sqrt{3} \\ \sqrt{3} & 1 \end{bmatrix} . \] The matrix A can be reduced to the identity matrix as follows: \[ {\bf A} = \frac{1}{2} \begin{bmatrix} -1 & \sqrt{3} \\ \sqrt{3} & 1 \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & -\sqrt{3} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & -\sqrt{3} \\ 0 & 2 \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & -\sqrt{3} \\ 0 & 1 \end{bmatrix} \,\rightarrow \, \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} . \] We multiply the first row by −2; then we add −√3/2 times the first row to the second row; then we multiply the second row by ½; finally, we add √3 times the second row to the first row.
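A quick Mathematica check (not part of the original example) that this reflection matrix indeed row reduces to the identity and is its own inverse:
A = (1/2) {{-1, Sqrt[3]}, {Sqrt[3], 1}};
RowReduce[A]       (* {{1, 0}, {0, 1}} *)
Simplify[A . A]    (* {{1, 0}, {0, 1}} *)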
End of Example 3
Example 4: In order to rotate by the angle θ = 5π/4, we apply the rotation formula from item 3 of Example 1: \[ {\bf A} = \begin{bmatrix} \cos\frac{5\pi}{4} & -\sin\frac{5\pi}{4} \\ \sin\frac{5\pi}{4} & \phantom{-}\cos\frac{5\pi}{4} \end{bmatrix} = \begin{bmatrix} - \frac{1}{\sqrt{2}} & \phantom{-}\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} -1 & \phantom{-}1 \\ -1 & -1 \end{bmatrix} . \]
End of Example 4

 

Orthogonal projection


Suppose that a vector u ∈ ℝ² is given. It generates a straight line L = span(u) formed by all scalar multiples of u. Then an arbitrary vector v ∈ ℝ² can be uniquely decomposed into the sum of two vectors
\begin{equation} \label{EqPlane.3} {\bf v} = {\bf v}_{\|} + {\bf v}_{\perp} , \end{equation}
where \( {\bf v}_{\|} \) is parallel to u (so it is a scalar multiple of u) and \( {\bf v}_{\perp} \) is perpendicular to u:
\begin{equation} \label{EqPlane.4} \begin{split} {\bf v}_{\|} &= \left( {\bf v} \bullet {\bf u} \right) \frac{\bf u}{\| {\bf u} \|^2} , \\ {\bf v}_{\perp} &= {\bf v} - {\bf v}_{\|} . \end{split} \end{equation}
code: make a projection on line L = span(u)
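A minimal sketch for the placeholder above, with u and v chosen arbitrarily; it computes the parallel and perpendicular components from Eq.\eqref{EqPlane.4} and draws them.
u = {2, 1}; v = {1, 3};
vPar = (v . u) u/(u . u);     (* component parallel to u, i.e., the projection onto L = span(u) *)
vPerp = v - vPar;             (* component perpendicular to u *)
{vPar, vPerp, vPerp . u == 0}   (* {{2, 1}, {-1, 2}, True} *)
Graphics[{Arrow[{{0, 0}, #}] & /@ {u, v, vPar}, Dashed, Line[{v, vPar}]}, Axes -> True]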
Here \( {\bf v} \bullet {\bf u} = v_1 u_1 + v_2 u_2 \) is the dot product and \( \displaystyle \| {\bf u} \|^2 = u_1^2 + u_2^2 \) is the square of the Euclidean norm of the vector u. The vector \( {\bf v}_{\|} \) is called the projection of v on the line L and is denoted by \( P_{\bf u} ({\bf v}) \). Note that any vector lying on the line (in 2D) or in the plane (in 3D) perpendicular to u has zero projection onto L.
code: make an example
When u is the unit vector \( (\cos\theta , \sin\theta ) \) making an angle θ with the abscissa, the standard matrix of the projection \( P_{\bf u} \) is \[ \begin{bmatrix} \cos^2 \theta & \sin\theta\,\cos\theta \\ \sin\theta\,\cos\theta & \sin^2 \theta \end{bmatrix} . \]
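A short symbolic verification (a sketch) that this matrix is idempotent and fixes the unit vector along L:
proj[\[Theta]_] := {{Cos[\[Theta]]^2, Sin[\[Theta]] Cos[\[Theta]]}, {Sin[\[Theta]] Cos[\[Theta]], Sin[\[Theta]]^2}};
Simplify[proj[\[Theta]] . proj[\[Theta]] - proj[\[Theta]]]        (* {{0, 0}, {0, 0}}: idempotent *)
Simplify[proj[\[Theta]] . {Cos[\[Theta]], Sin[\[Theta]]}]         (* {Cos[\[Theta]], Sin[\[Theta]]} *)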
  1. Find the standard matrix for a linear transformation T: ℝ² ↦ ℝ² that first reflects points through the horizontal x1-axis and then reflects points through the line x1 = x2.
  2. Find the standard matrix for a linear transformation T: ℝ² ↦ ℝ² that first rotates points through -3π/4 radians (clockwise) and then reflects points through the vertical x2-axis.
  3. Find the standard matrix for a linear transformation T: ℝ² ↦ ℝ² that maps i=(1,0) into 2i-3j but leaves the vector j=(0,1) unchanged.
  4. Find the standard matrix for a linear transformation T: ℝ² ↦ ℝ² that rotates points (about the origin) through 3π/2 radians (counterclockwise).
  5. In ℝ², clearly R(θ+φ) = R(θ) R(φ). By writing out these matrices and performing matrix multiplication, derive the laws for the sine and cosine of the sum of two angles.
  6. If you need the formulas for sin(θ + π/2) and cos(θ + π/2) and don't remember them, what is a simple way to find them ?
  7. Find all 2 × 2 rotation matrices that are also diagonal.
  8. In ℝ², if the list of vertices of a square starts with (0, 0) and (𝑎, b) going counterclockwise, what are the remaining two vertices? (Hint: The vertex opposite (𝑎, b) can be obtained by rotating (𝑎, b) by 90° about the origin.)
  9. Find the standard matrix for a linear transformation T: ℝ² ↦ ℝ² that rotates points (about the origin) through -π/4 radians (clockwise).
  10. If \( {\bf A} = \begin{bmatrix} 1&2 \\ -1&-2 \end{bmatrix} , \) find two distinct matrices B and C such that AB = AC.
  11. Suppose that the numbers 𝑎 and b in the matrix \( \displaystyle \quad \begin{bmatrix} \phantom{-}a&b \\ -b&a \end{bmatrix} \quad \) are not both zero. Find the entries of the rotation matrix that takes (1, 0) to a unit vector in the direction of (𝑎, b). (You don't need to express the angle of the rotation.)
  12. Show that the matrix \( \displaystyle \quad {\bf A} = \begin{bmatrix} \phantom{-}a&b \\ -b&a \end{bmatrix} \quad \) is equal to a rotation matrix times a scalar matrix rI with r > 0. (Hence, A preserves shapes and orientation while expanding or contracting sizes uniformly.)

  1. Anton, Howard, Elementary Linear Algebra (Applications Version), 9th ed., Wiley, 2005.
  2. Dunn, Fletcher and Parberry, Ian, 3D Math Primer for Graphics and Game Development, Wordware Publishing, Plano, TX, 2002.
  3. Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F., Computer Graphics: Principles and Practice, 2nd ed., Addison-Wesley, Reading, MA, 1991, ISBN 0-201-12110-7.
  4. Leon, Steven J., Linear Algebra with Applications, Pearson Prentice Hall.
  5. Matrices and Linear Transformations