This section covers the definition of a function of a square matrix using a diagonalization procedure. This method applies only to diagonalizable square matrices and is not suitable for defective matrices.

 

Similar matrices


Two n-by-n matrices A and B are called similar if there exists an invertible n-by-n matrix S such that
\[ {\bf B} = {\bf S}^{-1} {\bf A}\,{\bf S} \qquad \mbox{or} \qquad {\bf A} = {\bf S}\, {\bf B}\,{\bf S}^{-1} . \]
Recall that any linear transformation T from ℝn to ℝm can be implemented via left multiplication by an m×n matrix, called the standard matrix of T. So there is a one-to-one correspondence between matrices and linear transformations (operators) from ℝn to ℝm. Similar square matrices represent the same linear operator with respect to two (possibly) different bases. A basis of a vector space V is a linearly independent subset of V that spans V.

We will abbreviate the similarity relation between matrices by a tilde, ∼. So if matrix A is similar to B, we write A ∼ B. If, in addition, B is similar to C, then A is similar to C as well; that is, similarity is transitive. Indeed, we have

\[ {\bf A} \sim {\bf B} \qquad \Longleftrightarrow \qquad {\bf B} = {\bf S}^{-1} {\bf A} {\bf S} \qquad \Longleftrightarrow \qquad {\bf B} \sim {\bf A} \]
and
\[ {\bf B} \sim {\bf C} \qquad \Longleftrightarrow \qquad {\bf C} = {\bf P}^{-1} {\bf B} {\bf P} \qquad \Longleftrightarrow \qquad {\bf B} = {\bf P}{\bf C} {\bf P}^{-1} \]
for some invertible matrices S and P. Substituting the latter into the former, we obtain
\[ {\bf A} \sim {\bf C} \qquad \Longleftrightarrow \qquad {\bf C} = {\bf P}^{-1} {\bf S}^{-1} {\bf A} {\bf S} {\bf P} = \left( {\bf S} {\bf P} \right)^{-1} {\bf A} \left( {\bf S} {\bf P} \right) . \]
Also, using the product property of determinants, we find
\[ \det \left( {\bf B} \right) =\det \left( {\bf S}^{-1} {\bf A} {\bf S} \right) = \det {\bf S}^{-1} \det {\bf A} \,\det {\bf S} = \det {\bf A} . \]
In general, any property that is preserved by a similarity relation is called a similarity invariant and is said to be invariant under similarity. The table below lists the most important similarity invariants.
Property: Description
Determinant: A and S⁻¹AS have the same determinant
Eigenvalues: A and S⁻¹AS have the same eigenvalues
Invertibility: A is invertible if and only if S⁻¹AS is invertible
Trace: tr(A) = tr(S⁻¹AS)
Characteristic polynomial: A and S⁻¹AS have the same characteristic polynomial
Minimal polynomial: ψ(A) = ψ(S⁻¹AS)

Example 1: Consider two similar matrices
\[ {\bf A} = {\bf S}^{-1} {\bf B} {\bf S} = \begin{bmatrix} -1&\phantom{-}36&\phantom{-}16 \\ \phantom{-}4&\phantom{-}13&\phantom{-}11 \\ \phantom{-}0&-27&-13 \end{bmatrix} , \quad {\bf B} = \begin{bmatrix} 1&\phantom{-}5 & \phantom{-}4 \\ 1&-3&-1 \\ 2&\phantom{-}2&\phantom{-}1 \end{bmatrix} , \qquad \mbox{with} \quad {\bf S} = \begin{bmatrix} 1&\phantom{-}4&3 \\ 2&-1&2 \\ 1&\phantom{-}2&2 \end{bmatrix} . \]
First, we find their eigenvalues and eigenvectors:
A= {{-1, 36, 16}, {4, 13, 11}, {0, -27, -13}}
B= {{1, 5, 4}, {1, -3, -1}, {2, 2, 1}}
S= {{1, 4, 3}, {2, -1, 2}, {1, 2, 2}}
Eigenvalues[A]
{-4, 4, -1}
Eigenvectors[A]
{{-4, -1, 3}, {-36, -17, 27}, {-43, -16, 36}}
Eigenvalues[B]
{-4, 4, -1}
Eigenvectors[B]
{{-1, 1, 0}, {23, 1, 16}, {-1, -2, 3}}
So we see that both matrices have the same eigenvalues: 4, −4, and −1, but their corresponding eigenvectors do not match. Since their eigenvalues are the same, we expect the determinant and trace to have the same values.
Det[A]
16
Tr[A]
-1
Det[B]
16
Tr[B]
-1
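As an additional check of the invariants listed in the table above (a small extra verification we add here), the characteristic polynomials of the two similar matrices coincide as well:
CharacteristicPolynomial[A, x]
16 + 16 x - x^2 - x^3
CharacteristicPolynomial[B, x]
16 + 16 x - x^2 - x^3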

 

Example 1 (extra): Consider three matrices:

\[ {\bf A} = \begin{bmatrix} 1&0&0 \\ 0&2&0 \\ 0&0&1/2 \end{bmatrix} , \qquad {\bf B} = \frac{1}{2} \begin{bmatrix} -1&-15&-6 \\ \phantom{-}2&\phantom{-}10& \phantom{-}6 \\ -1&-\phantom{1}3&-2 \end{bmatrix} , \qquad {\bf U} = \begin{bmatrix} 1&3&3 \\ 1&4&3 \\ 1&3&4 \end{bmatrix} . \]
Matrix A was chosen to be diagonal, and matrix U is an arbitrary non-singular matrix whose determinant equals 1. Recall that a square matrix with determinant equal to ±1 is called a unimodular matrix. Matrix B is obtained as the product \( {\bf B} = {\bf U}^{-1} {\bf A}\,{\bf U} . \)
A = {{1, 0, 0}, {0, 2, 0}, {0, 0, 1/2}}
U = {{1, 3, 3}, {1, 4, 3}, {1, 3, 4}}
B = Inverse[U].A.U
Out[3]= {{-(1/2), -(15/2), -3}, {1, 5, 3}, {-(1/2), -(3/2), -1}}
Therefore, the matrix B is similar to the diagonal matrix A. We call such a matrix B diagonalizable.
Eigenvalues[B]
Out[4]= {2, 1, 1/2}
Eigenvectors[B]
Out[5]= {{-3, 1, 0}, {-7, 1, 1}, {-3, 0, 1}}
Eigenvectors[A]
Out[6]= {{0, 1, 0}, {1, 0, 0}, {0, 0, 1}}
Therefore, these two similar matrices share the same eigenvalues, but they have distinct eigenvectors.    ■

 

Algebraic multiplicity


Before we go on with diagonalization, we'll take a short detour to explore algebraic multiplicity, which is needed to understand the rest of this chapter. You studied at school how to find roots of a polynomial. Sometimes these roots are repeated; in that case, we call the number of times a root repeats the multiplicity of the root. Every polynomial p(x) can be represented (up to a constant factor) in the unique form

\[ p(x) = \left( x - x_1 \right)^{m_1} \left( x- x_2 \right)^{m_2} \cdots \left( x - x_s \right)^{m_s} , \]
where x1, x2, …, xs are the distinct zeros or nulls of the polynomial. Each power represents the algebraic multiplicity of its corresponding zero. If such a multiplicity is 1, we have a simple zero. According to the Fundamental Theorem of Algebra, the total number of nulls of a polynomial, counted with their (algebraic) multiplicities over the complex numbers, equals the degree of the polynomial. Now we are going to extend this definition to arbitrary functions. We will call this new notion algebraic multiplicity because later we will introduce another kind of multiplicity (geometric multiplicity).

We say that a function f(x) has a zero or null at x = a if f(a) = 0. In the case of a multiple root, some derivatives of f(x) at the point x = a also vanish. We say that x = a has algebraic multiplicity n if the nth derivative of f does not vanish at x = a, while f and all its derivatives of lower order do vanish there. In other words, the algebraic multiplicity of a null x = a of a function f(x) is the highest power to which (x − a) divides the function f(x).
Example 2: For example, the polynomial \( f(x) = x^6 -6\,x^5 + 50\,x^3 -45\,x^2 -108\,x + 108 \) can be factored into \( f(x) = \left( x-3 \right)^3 \left( x+2 \right)^2 \left( x-1 \right) . \) Hence, x = 3 is a null of multiplicity 3, x = −2 is a zero of multiplicity 2, and x = 1 is a simple zero. Now we check their multiplicities according to the definition using Mathematica:
f[x_] = 108 - 108 x - 45 x^2 + 50 x^3 - 6 x^5 + x^6
We check that x = 3 is a null
f[3]
0
Next derivative at this point is
f'[3]
0
Also
f''[3]
0
but
f'''[3]
300
Therefore, we claim that our function f(x) has algebraic multiplicity 3 at x = 3. On the other hand, a similar procedure at the point x = −2 yields
f[-2]
0
D[f[x], x] /. x -> -2
0
but
D[f[x], x,x] /. x -> -2
750
So x = -2 has algebraic multiplicity 2. For another null, we have
f[1]
0
but
f'[1]
-72
We have shown that x = 1 has algebraic multiplicity 1 because f(1) = 0, but its derivative is not zero at this point.    ■
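The same multiplicities can be read off at once by factoring the polynomial; the exponents returned by FactorList are exactly the algebraic multiplicities (this quick cross-check is our addition):
FactorList[f[x]]   (* the factors x - 3, x + 2, x - 1 appear with exponents 3, 2, and 1 *)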

 

Geometric multiplicity


Although the algebraic multiplicity can be applied to an arbitrary entire function or polynomial, the next definition---geometric multiplicity---makes sense only for eigenvalues of a square matrix.

The geometric multiplicity of an eigenvalue λ of a square matrix A is the number of linearly independent eigenvectors associated with it. That is, it is the dimension of the nullspace of λI - A, where I is the identity matrix.
Theorem 1: If λ is an eigenvalue of a square matrix A, then its algebraic multiplicity is at least as large as its geometric multiplicity.    ▣
Let x1, x2, … , xr be a maximal set of linearly independent eigenvectors associated with λ, so that λ has geometric multiplicity r. Let xr+1, xr+2, … , xn complete this list to a basis for ℝn, and let S be the n×n matrix whose columns are all these vectors xs, s = 1, 2, … , n. As usual, consider the product of the two matrices AS. Because the first r columns of S are eigenvectors, we have
\[ {\bf A\,S} = \begin{bmatrix} \vdots & \vdots&& \vdots & \vdots&& \vdots \\ \lambda{\bf x}_1 & \lambda{\bf x}_2 & \cdots & \lambda{\bf x}_r & ?& \cdots & ? \\ \vdots & \vdots&& \vdots & \vdots&& \vdots \end{bmatrix} . \]
Now multiply out S⁻¹AS. Matrix S is invertible because its columns form a basis for ℝn. We find that each of the first r columns of S⁻¹AS has λ in its diagonal position and zeros elsewhere, while the remaining columns are unknown. Moreover, S⁻¹AS has the same characteristic polynomial as A. Indeed,
\[ \det \left( {\bf S}^{-1} {\bf AS} - \lambda\,{\bf I} \right) = \det \left( {\bf S}^{-1} {\bf AS} - {\bf S}^{-1} \lambda\,{\bf I}\,{\bf S} \right) = \det \left( {\bf S}^{-1} \left( {\bf A} - \lambda\,{\bf I} \right) {\bf S} \right) = \det \left( {\bf S}^{-1} \right) \det \left( {\bf A} - \lambda\,{\bf I} \right) \det \left( {\bf S} \right) = \det \left( {\bf A} - \lambda\,{\bf I} \right) \]
because the determinants of S and S⁻¹ cancel. So the characteristic polynomials of A and S⁻¹AS are the same. But the first r columns of S⁻¹AS force the characteristic polynomial to contain the factor (x − λ) at least r times, so the algebraic multiplicity is at least as large as the geometric one.    ◂
Example 3: We consider three matrices (with integer entries for simplicity):
\[ {\bf A} = \begin{bmatrix} 1& \phantom{-}5&0 \\ 1& \phantom{-}0&1 \\ 1&-2& 2 \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} -3& \phantom{-}2& -4 \\ -2& \phantom{-}2& -2 \\ \phantom{-}4& -2& \phantom{-}5 \end{bmatrix} , \qquad {\bf C} = \begin{bmatrix} -18& \phantom{-}22& -14\\ -\phantom{1}6& \phantom{-3}9& -4 \\ \phantom{-}16& -18& \phantom{-}13 \end{bmatrix} . \]
Since the eigenvalues of matrix A are
A = {{1, 5, 0}, {1, 0, 1}, {1, -2, 2}}; Eigenvalues[A]
{3, -1, 1}
3, −1, and 1 (all distinct), the geometric multiplicity of each of them is 1, which coincides with its algebraic multiplicity.

  

Example 3B: For matrix B, we have
B = {{-3, 2, -4}, {-2, 2, -2}, {4, -2, 5}}; Eigenvalues[B]
{2, 1, 1}
Eigenvectors[B]
{{-2, -1, 2}, {-1, 0, 1}, {1, 2, 0}}
Therefore, this matrix has one simple eigenvalue λ = 2 and one double eigenvalue λ = 1. The latter has algebraic multiplicity 2, and the characteristic polynomial is
\[ \det \left( \lambda {\bf I} - {\bf B} \right) = \left( \lambda -1 \right)^2 \left( \lambda -2 \right) = \lambda^3 -4\lambda^2 + 5\lambda -2. \]
In order to determine the eigenvectors corresponding to the eigenvalue λ = 1, we have to solve the linear system
\[ \left( \lambda {\bf I} - {\bf B} \right) {\bf x} =0 \qquad \Longleftrightarrow \qquad \begin{split} -3 x_1 + 2 x_2 -4 x_3 &= x_1 , \\ -2x_1 + 2x_2 - 2x_3 &= x_2 \\ 4x_1 - 2x_2 + 5x_3 &= x_3 . \end{split} \]
All three equations reduce to the same relation (the determinant of this system is zero), and from the second one we find that
\[ x_2 = 2x_1 + 2x_3 \qquad \Longrightarrow \qquad x_1 , \ x_3 \quad\mbox{undetermined}. \]
Hence, the eigenvector corresponding to the eigenvalue λ = 1 becomes
\[ {\bf x} = \begin{bmatrix} x_1 \\ 2x_1 + 2x_3 \\ x_3 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} , \]
which indicates that we have two eigenvectors [1, 2, 0]T and [0, 2, 1]T that span the eigenspace. We check with Mathematica:
B = {{-3, 2, -4}, {-2, 2, -2}, {4, -2, 5}}; B.{1, 2, 0}
{1, 2, 0}
B.{0, 2, 1}
{0, 2, 1}
The vector provided by Mathematica also works:
B.{-1, 0, 1}
{-1, 0, 1}
This vector is a linear combination of other two eigenvectors:
\[ \begin{bmatrix} -1 \\ \phantom{-}0 \\ \phantom{-}1 \end{bmatrix} = c_1 \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 2 \\ 1\end{bmatrix} \qquad \Longrightarrow \qquad c_1 =-1, \quad c_2 = 1. \]
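The coefficients c1 and c2 can also be found with Mathematica (an extra check we add here):
Solve[Thread[c1*{1, 2, 0} + c2*{0, 2, 1} == {-1, 0, 1}], {c1, c2}]
{{c1 -> -1, c2 -> 1}}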
First, we calculate the normal vector to the plane spanned by the two eigenvectors:
Cross[{1, 2, 0}, {0, 2, 1}]
{2, -1, 2}
Then we plot the plane and three vectors on it:
plane = ContourPlot3D[ 2 x - 1 y + 2 z == 0, {x, -2, 3}, {y, -1, 3}, {z, -2, 2}, AxesLabel -> {x, y, z}, Mesh -> None, ContourStyle -> Directive[Red]];
ar1 = Graphics3D[{Thick, Arrow[{{0, 0, 0}, {1, 2, 0}}]}];
ar2 = Graphics3D[{Thick, Arrow[{{0, 0, 0}, {0, 2, 1}}]}];
ar3 = Graphics3D[{Thick, Arrow[{{0, 0, 0}, {-1, 0, 1}}]}];
Show[plane, ar1, ar2, ar3]
Figure 1: Plane and three eigenvectors on it.

  

Example 3C: The last matrix is defective
CC= {{-18, 22, -14}, {-6, 9, -4}, {16, -18, 13}}; Eigenvalues[CC]
{2, 1, 1}
Eigenvectors[CC]
{{-5, -2, 4}, {-6, -2, 5}, {0, 0, 0}}
To check the answer provided by Mathematica, we solve the system of equations
\[ \left( \lambda {\bf I} - {\bf C} \right) {\bf x} =0 \qquad \Longleftrightarrow \qquad \begin{split} -18 x_1 + 22 x_2 -14 x_3 &= x_1 , \\ -6x_1 + 9 x_2 - 4 x_3 &= x_2 \\ 16x_1 - 18x_2 + 13 x_3 &= x_3 . \end{split} \]
We ask Mathematica for help:
Solve[{-18*x1 + 22*x2 - 14*x3 == x1, -6*x1 + 9*x2 - 4*x3 == x2}, {x1, x2}]
{{x1 -> -((6 x3)/5), x2 -> -((2 x3)/5)}}
Therefore, there is only one vector, [−6, −2, 5]T, that generates the eigenspace corresponding to the eigenvalue λ = 1. Its geometric multiplicity is 1, but its algebraic multiplicity is 2. Finally, we check with Mathematica:
CC.{-6, -2, 5}
{-6, -2, 5}
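The geometric multiplicity can also be read off as the dimension of the null space of C − I; a quick extra check (our addition) uses the rank:
MatrixRank[CC - IdentityMatrix[3]]
2
so the null space of C − I has dimension 3 − 2 = 1, in agreement with the geometric multiplicity found above.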
   ■

 

Diagonalization of matrices


Because diagonal matrices have a very simple structure and are equivalent to vectors (the main diagonal is an n-vector), it is natural to consider matrices that are similar to diagonal matrices.

A square matrix A is called diagonalizable if there exists a nonsingular matrix S such that \( {\bf S}^{-1} {\bf A} {\bf S} = {\bf \Lambda} , \) a diagonal matrix. In other words, the matrix A is similar to a diagonal matrix.

Theorem 2: If λ1, λ2, … , λk are distinct eigenvalues of a square matrix A and if v1, v2, … , vk are corresponding eigenvectors, then { v1, v2, … , vk } is a linearly independent set.

Mathematica has a command dedicated to the determination of whether or not a matrix is diagonalizable. The DiagonalizableMatrixQ[A] command gives True if matrix A is diagonalizable, and False otherwise.
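For instance, applying this command to the matrices B and C from Example 3 (a small illustration added here) distinguishes the diagonalizable matrix from the defective one:
DiagonalizableMatrixQ[{{-3, 2, -4}, {-2, 2, -2}, {4, -2, 5}}]
True
DiagonalizableMatrixQ[{{-18, 22, -14}, {-6, 9, -4}, {16, -18, 13}}]
False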

Theorem 3: A square n×n matrix A is diagonalizable if and only if there exist n linearly independent eigenvectors, so geometrical multiplicity of each eigenvalue is the same as its algebraic multiplicity.
Corollary: A square n×n matrix A is diagonalizable if and only if it is not defective.
When this is the case, the matrix S can be built from eigenvectors of A, column by column.

Let A be a square \( n \times n \) diagonalizable matrix, and let \( {\bf \Lambda} \) be the corresponding diagonal matrix of its eigenvalues:

\begin{equation} \label{EqDiag.1} {\bf \Lambda} = \begin{bmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0&\lambda_2 & 0& \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots & \lambda_n \end{bmatrix} , \end{equation}

where \( \lambda_1 , \lambda_2 , \ldots , \lambda_n \) are eigenvalues (not all of them are necessary distinct) of the matrix A. Let v1, v2, … , vn be the set of linearly independent eigenvectors (so \( {\bf A}\,{\bf v}_i = \lambda_i {\bf v}_i , \ i=1,2,\ldots , n \) ). Then we can build the matrix from these vectors:

\begin{equation} \label{EqDiag.2} {\bf S} = \left[ {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n \right] \qquad \Longrightarrow \qquad {\bf S}^{-1} {\bf A} {\bf S} = {\bf \Lambda} . \end{equation}
Multiplying both sides of the latter by S, we obtain
\[ {\bf A} {\bf S} = {\bf S} {\bf \Lambda} \qquad \Longleftrightarrow \qquad {\bf A} \left[ {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n \right] = \left[ {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n \right] \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0&0& \cdots & \lambda_n \end{bmatrix} = \left[ \lambda_1 {\bf v}_1 , \lambda_2 {\bf v}_2 , \ldots , \lambda_n {\bf v}_n \right] . \]
Since S is built from eigenvectors, the left-hand side becomes
\[ {\bf A} {\bf S} = \left[ \lambda_1 {\bf v}_1 , \lambda_2 {\bf v}_2 , \ldots , \lambda_n {\bf v}_n \right] . \]
Several conditions are known that guarantee a matrix is diagonalizable.
Theorem 5: A square n×n matrix A is diagonalizable if any of the following conditions holds:
  • all eigenvalues of A are distinct;
  • matrix A is self-adjoint, so A = A*;
  • matrix A is normal, so it commutes with its adjoint: \( {\bf A}\,{\bf A}^{\ast} = {\bf A}^{\ast} {\bf A} \) (see the quick check below).
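A quick illustration of the last two criteria (this small check is our addition): every real symmetric matrix is self-adjoint, hence normal, and therefore diagonalizable.
M = {{2, 1, 0}, {1, 3, 1}, {0, 1, 4}};  (* an arbitrary real symmetric matrix *)
DiagonalizableMatrixQ[M]
True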
Example 4: We reconsider the matrix B from Example 3B:
\[ {\bf B} = \begin{bmatrix} -3& \phantom{-}2& -4 \\ -2& \phantom{-}2& -2 \\ \phantom{-}4& -2& \phantom{-}5 \end{bmatrix} \]
Its eigenvalues and eigenvectors are known to be
\[ \lambda = 2 , \quad {\bf v}_1 = [-2, -1, 2] \qquad\mbox{and} \qquad \lambda = 1, \quad {\bf v}_2 = [-1,0,1], \quad {\bf v}_3 = [1, 2, 0] . \]
B = {{-3, 2, -4}, {-2, 2, -2}, {4, -2, 5}}; Eigenvalues[B]
{2, 1, 1}
Eigenvectors[B]
{{-2, -1, 2}, {-1, 0, 1}, {1, 2, 0}}
Therefore, this matrix has one simple eigenvalue λ = 2, and one double eigenvalue λ = 1. The latter has algebraic multiplicity 2, which is also its geometric multiplicity because there are two linearly independent eigenvectors. Now we build the matrix S from these three eigenvectors:
\[ {\bf S} = \begin{bmatrix} -2& -1& 1 \\ -1& \phantom{-}0 & 2 \\ \phantom{-}2& \phantom{-}1&0 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf S}^{-1} = \begin{bmatrix} \phantom{-}2& -1 & \phantom{-}2 \\ -4&\phantom{-}2& -3 \\ \phantom{-}1& \phantom{-}0 & \phantom{-}1 \end{bmatrix} . \]
S = {{-2, -1, 1}, {-1, 0, 2}, {2, 1, 0}}
SI = Inverse[S]
{{2, -1, 2}, {-4, 2, -3}, {1, 0, 1}}
Now we check two different products:
\[ {\bf S}{\bf B}{\bf S}^{-1} = \begin{bmatrix} \phantom{-}71& -28& \phantom{-}63 \\ \phantom{-}60& -23& \phantom{-}54 \\ -50& \phantom{-}20& -44 \end{bmatrix} , \qquad {\bf S}^{-1} {\bf B}{\bf S} = \begin{bmatrix} 2&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} . \]
SI.B.S
{{2, 0, 0}, {0, 1, 0}, {0, 0, 1}}
S.B.SI
{{71, -28, 63}, {60, -23, 54}, {-50, 20, -44}}
   ■
Example 5: A triangular matrix (either upper or lower) whose main-diagonal entries are all identical is diagonalizable only if it is already diagonal. Hence, we consider a lower triangular matrix with distinct diagonal entries, which is diagonalizable:
\[ {\bf A} = \begin{bmatrix} \phantom{-}2& \phantom{-}0& \phantom{-}0 \\ -2& \phantom{-}1& \phantom{-}0 \\ \phantom{-}4& -2& -3 \end{bmatrix} . \]
A = {{2, 0, 0}, {-2, 1, 0}, {4, -2, -3}}
Eigenvalues[A]
{-3, 2, 1}
Eigenvectors[A]
{{0, 0, 1}, {5, -10, 8}, {0, -2, 1}}
So the matrix of its eigenvectors becomes
\[ {\bf S} = \begin{bmatrix} 0& \phantom{-}5& \phantom{-}0 \\ 0& -10& -2 \\ 1& \phantom{-}8& \phantom{-}1 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf S}^{-1} = \frac{1}{10} \begin{bmatrix} -6& \phantom{-}5& 10 \\ \phantom{-}2& \phantom{-}0& \phantom{-}0 \\ -10& -5& \phantom{-}0 \end{bmatrix} \]
S = {{0, 5, 0}, {0, -10, -2}, {1, 8, 1}}
SI = Inverse[S]
{{-(3/5), 1/2, 1}, {1/5, 0, 0}, {-1, -(1/2), 0}}
Finally, we diagonalize matrix A:
\[ {\bf S}^{-1} {\bf A}{\bf S} = \begin{bmatrix} -3& 0 & 0 \\ \phantom{-}0& 2 & 0 \\ \phantom{-}0&0& 1 \end{bmatrix} . \]
SI.A.S
{{-3, 0, 0}, {0, 2, 0}, {0, 0, 1}}
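Conversely, the factorization reconstructs the original matrix, which serves as a quick sanity check (our addition):
S.DiagonalMatrix[{-3, 2, 1}].SI == A
True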
   ■
Example 6: Let us consider the Fibonacci sequence of numbers that satisfy the recurrence:
\[ F_{k+2} = F_{k+1} + F_k , \qquad F_0 =0, \quad F_1 = 1, \qquad k=0,1,2,\ldots \]
Upon introducing the vector uk whose coordinates are two consecutive Fibonacci numbers [Fk, Fk-1]T, we rewrite the Fibonacci recurrence in the vector form
\[ {\bf A} \begin{bmatrix} F_k \\ F_{k-1} \end{bmatrix} = \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} \begin{bmatrix} F_k \\ F_{k-1} \end{bmatrix} = \begin{bmatrix} F_k + F_{k-1} \\ F_k \end{bmatrix} = \begin{bmatrix} F_{k+1} \\ F_{k} \end{bmatrix} = {\bf u}_{k+1} . \]
Thus, we can produce a vector whose coordinates are two consecutive Fibonacci numbers by applying matrix A many times to the vector u1 with coordinates [F1, F0]T = [1, 0]T:
\[ {\bf u}_k = \begin{bmatrix} F_{k} \\ F_{k-1} \end{bmatrix} = {\bf A}^k \begin{bmatrix} F_1 \\ F_{0} \end{bmatrix} = {\bf A}^{k-1} {\bf u}_1 , \qquad k=1,2,\ldots . \]
The eigenvalues of the Fibonacci matrix are determined from the corresponding characteristic equation
\[ \det \left( \lambda {\bf I} - {\bf A} \right) = \begin{vmatrix} \lambda -1 & -1 \\ -1 & \lambda \end{vmatrix} = \lambda^2 - \lambda -1 =0. \]
Using the quadratic formula, we find the roots
\[ \lambda_1 = \frac{1 + \sqrt{5}}{2} \approx 1.61803 , \qquad \lambda_2 = \frac{1 - \sqrt{5}}{2} \approx -0.618034. \]
Therefore, the Fibonacci matrix has two distinct real eigenvalues; the positive one is called the golden ratio. To find the corresponding eigenvectors, we have to solve two equations
\[ \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} \begin{bmatrix} x_1 \\ y_{1} \end{bmatrix} = \lambda_1 \begin{bmatrix} x_1 \\ y_{1} \end{bmatrix} \qquad \mbox{and} \qquad \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} \begin{bmatrix} x_2 \\ y_{2} \end{bmatrix} = \lambda_2 \begin{bmatrix} x_2 \\ y_{2} \end{bmatrix} . \]
This gives
\[ {\bf v}_1 = \begin{bmatrix} \lambda_1 \\ 1 \end{bmatrix} \qquad \mbox{and} \qquad {\bf v}_2 = \begin{bmatrix} \lambda_2 \\ 1 \end{bmatrix} \]
Each of the eigenvectors, v1 and v2, is also an eigenvector of any power of the Fibonacci matrix. Using the factorization
\[ \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} = {\bf S} \begin{pmatrix} \lambda_1&0 \\ 0&\lambda_2 \end{pmatrix} {\bf S}^{-1} , \]
where
\[ {\bf S} = \begin{bmatrix} \lambda_1 & \lambda_2 \\ 1&1 \end{bmatrix} , \qquad {\bf S}^{-1} = \frac{1}{\sqrt{5}} \begin{bmatrix} 1 & - \lambda_2 \\ -1&\lambda_1 \end{bmatrix} . \]
The powers of matrix A can be evaluated as
\[ {\bf A}^n = {\bf S} \begin{pmatrix} \lambda_1^n&0 \\ 0&\lambda_2^n \end{pmatrix} {\bf S}^{-1} , \qquad n= 1,2 , 3 , \ldots . \]
In particular,
\[ {\bf A}^2 = \begin{pmatrix} 2& 1 \\ 1 & 1 \end{pmatrix} = {\bf S} \begin{pmatrix} \lambda_1^2&0 \\ 0&\lambda_2^2 \end{pmatrix} {\bf S}^{-1} . \]
lambda1 = (1 + Sqrt[5])/2; lambda2 = (1 - Sqrt[5])/2;
A = {{1, 1}, {1, 0}}
A2=A.A
{{2, 1}, {1, 1}}
S = {{lambda1, lambda2}, {1, 1}};
IS = Simplify[Inverse[S]]
{{1/Sqrt[5], 1/10 (5 - Sqrt[5])}, {-(1/Sqrt[5]), 1/10 (5 + Sqrt[5])}}
Simplify[S.{{lambda1^2, 0}, {0, lambda2^2}}.IS]
{{2, 1}, {1, 1}}
Similarly, we can find the 9th power of A:
Simplify[S.{{lambda1^9, 0}, {0, lambda2^9}}.IS]
{{55, 34}, {34, 21}}
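The same power is produced directly by the built-in MatrixPower command, which gives a quick cross-check (our addition):
MatrixPower[A, 9]
{{55, 34}, {34, 21}}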
The eigenvalues of A² are λ1² and λ2², with the same eigenvectors; Mathematica confirms:
v1 = {{lambda1}, {1}};
Simplify[A2.v1/lambda1^2]
{{1/2 (1 + Sqrt[5])}, {1}}
   ■

 

Function of a square matrix


Let \( {\bf v}_1 , {\bf v}_2 , \ldots , {\bf v}_n \) be linearly independent eigenvectors, corresponding to the eigenvalues \( \lambda_1 , \lambda_2 , \ldots , \lambda_n .\) We build the nonsingular matrix S from these eigenvectors (every column is an eigenvector):

\[ {\bf S} = \begin{bmatrix} {\bf v}_1 & {\bf v}_2 & {\bf v}_3 & \cdots & {\bf v}_n \end{bmatrix} . \]
For any reasonable function defined on the spectrum (the set of all eigenvalues) of the diagonalizable matrix A (we do not make "reasonable" precise; it is sufficient for the function to be smooth there), we define the function of this matrix by the formula:
\begin{equation} \label{EqDiag.3} f \left( {\bf A} \right) = {\bf S} f\left( {\bf \Lambda} \right) {\bf S}^{-1} , \qquad \mbox{where } \quad f\left( {\bf \Lambda} \right) = \begin{bmatrix} f(\lambda_1 ) & 0 & 0 & \cdots & 0 \\ 0 & f(\lambda_2 ) & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots & f(\lambda_n ) \end{bmatrix} . \end{equation}
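The formula above translates directly into a short Mathematica helper. The sketch below is our own illustration, not a built-in command (the name matrixFunction is ours), and it assumes the argument is diagonalizable:
matrixFunction[f_, A_?DiagonalizableMatrixQ] := Module[{vals, vecs, S},
  {vals, vecs} = Eigensystem[A];     (* eigenvalues and the corresponding eigenvectors *)
  S = Transpose[vecs];               (* eigenvectors placed column by column *)
  S.DiagonalMatrix[f /@ vals].Inverse[S]]
For instance, matrixFunction[Sqrt, A] picks the principal square root of each eigenvalue, and matrixFunction[Exp[#*t] &, A] reproduces the exponential matrices constructed in the examples below.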

 

Example 7: Consider the 3 × 3 matrix \( {\bf A} = \begin{bmatrix} \phantom{-1}1&\phantom{-1}4&16 \\ \phantom{-}18&\phantom{-} 20&\phantom{-}4 \\ -12&-14&-7 \end{bmatrix} \) that has three distinct eigenvalues that we identify with Mathematica
A = {{1,4,16},{18,20,4},{-12,-14,-7}}
Eigenvalues[A]
Out[2]= {9, 4, 1}
Eigenvectors[A]
Out[3]= {{1, -2, 1}, {4, -5, 2}, {4, -4, 1}}
Using these eigenvectors, we build the transition matrix S whose columns are the eigenvectors:
\[ {\bf S} = \begin{bmatrix} \phantom{-}1&\phantom{-}4&\phantom{-}4 \\ -2&-5&-4 \\ \phantom{-}1& \phantom{-}2&\phantom{-}1 \end{bmatrix} , \quad\mbox{with} \quad {\bf S}^{-1} = \begin{bmatrix} -3&-4&-4 \\ \phantom{-}2&\phantom{-}3& \phantom{-}4 \\ -1&-2&-3 \end{bmatrix} . \]

We don't know in general how many square roots a matrix has: some matrices have no square roots and others have infinitely many (see the roots section in this tutorial). However, if a diagonalizable matrix has distinct nonzero eigenvalues, this procedure produces 2ᵐ square roots, where m is the number of distinct eigenvalues. Then we are ready to construct eight (8 = 2³ roots, because each square root of an eigenvalue has two values; for instance, \( \sqrt{9} = \pm 3 \) ) matrix square roots of the given matrix:

\[ \sqrt{\bf A} = {\bf S} \sqrt{\Lambda} {\bf S}^{-1} = \begin{bmatrix} \phantom{-}1&\phantom{-}4&\phantom{-}4 \\ -2&-5&-4 \\ \phantom{-}1&\phantom{-}2&\phantom{-}1 \end{bmatrix} \begin{bmatrix} \pm 3&0&0 \\ 0&\pm 2&0 \\ 0&0&\pm 1 \end{bmatrix} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}2&\phantom{-}3&\phantom{-}4 \\ -1&-2&-3 \end{bmatrix} , \]
with an appropriate choice of roots on the diagonal. Here Λ denotes the diagonal matrix of the eigenvalues. To help illustrate how this works, we present 4 of the answers:
\begin{align*} \sqrt{\bf A} &= \begin{bmatrix} \phantom{-}1&\phantom{-}4&\phantom{-}4 \\ -2&-5&-4 \\ \phantom{-}1&\phantom{-}2& \phantom{-}1 \end{bmatrix} \begin{bmatrix} 3&0&0 \\ 0&2&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} \phantom{-}1&\phantom{-}4&\phantom{-}4 \\ -2&-5&-4 \\ \phantom{-}1&\phantom{-}2&\phantom{-}1 \end{bmatrix}^{-1} = \begin{bmatrix} \phantom{-}3&\phantom{-}4&\phantom{-}8 \\ \phantom{-}2&\phantom{-}2&-4 \\ -2&-2&\phantom{-}1 \end{bmatrix} , \quad \\ &= \begin{bmatrix} \phantom{-}1&\phantom{-}4&\phantom{-}4 \\ -2&-5&-4 \\ \phantom{-}1&\phantom{-}2&\phantom{-}1 \end{bmatrix} \begin{bmatrix} -3&0&0 \\ \phantom{-}0&2&0 \\ \phantom{-}0&0&1 \end{bmatrix} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}2& \phantom{-}3& \phantom{-}4 \\ -1&-2&-3 \end{bmatrix} = \begin{bmatrix} \phantom{-}21&\phantom{-}28& \phantom{-}32 \\ -34&-46&-52 \\ \phantom{-}16&\phantom{-}22&\phantom{-}25 \end{bmatrix} . \end{align*}
Similarly,
\begin{align*} \sqrt{\bf A} &= \begin{bmatrix} \phantom{-}1&\phantom{-}4&\phantom{-}4 \\ -2&-5&-4 \\ \phantom{-}1&\phantom{-}2&\phantom{-}1 \end{bmatrix} \begin{bmatrix} 3& 0&0 \\ 0&-2&0 \\ 0& 0&1 \end{bmatrix} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}2&\phantom{-}3&\phantom{-}4 \\ -1&-2&-3 \end{bmatrix} = \begin{bmatrix} -29&-44&-56 \\ \phantom{-}42&\phantom{-}62&\phantom{-}76 \\ -18&-26&-31 \end{bmatrix} , \\ &= \begin{bmatrix} \phantom{-}1&\phantom{-}4&\phantom{-}4 \\ -2&-5&-4 \\ \phantom{-}1&\phantom{-}2&\phantom{-}1 \end{bmatrix} \begin{bmatrix} -3&0&0 \\ 0&-2&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}2&\phantom{-}3&\phantom{-}4 \\ -1&-2&-3 \end{bmatrix} = \begin{bmatrix} -11&-20&-32 \\ \phantom{-}6&\phantom{-}14&\phantom{-}28 \\ \phantom{-}0&-2&-7 \end{bmatrix} . \end{align*}
We check with Mathematica for the specific roots of the eigenvalues 3, 2, and 1. Any other combination of \( \pm 3, \pm 2, \pm 1 \) can be checked in the same way.
S = Transpose[Eigenvectors[A]]
square = {{3, 0, 0}, {0, 2, 0}, {0, 0, 1}}
S.square.Inverse[S]
Out[7]= {{3, 4, 8}, {2, 2, -4}, {-2, -2, 1}}
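Squaring the computed root recovers the original matrix, confirming that it is indeed a square root of A (an extra check; the name R below is just a temporary label):
R = S.square.Inverse[S];
R.R == A
True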
Now we build three other matrix functions corresponding to the functions of variable λ containing a real parameter t (that later will be associated with time): \( U(t ) = e^{\lambda\,t}, \ \Phi (t ) = \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} , \ \Psi (t ) = \cos \left( \sqrt{\lambda}\,t \right) . \) We start with the exponential matrix:
diag = DiagonalMatrix[{9, 4, 1}]
Out[4]= {{9, 0, 0}, {0, 4, 0}, {0, 0, 1}}
y[t_] = MatrixExp[diag t]
Out[5]= {{E^(9 t), 0, 0}, {0, E^(4 t), 0}, {0, 0, E^(t)}}
or directly
DiagonalMatrix[Exp[{9, 4, 1}*t]]
Next we multiply the obtained diagonal matrix function by the matrix S of eigenvectors from the left and by its inverse from the right:
\[ e^{{\bf A}\,t} = {\bf S} \begin{bmatrix} e^{9t}&0&0 \\ 0&e^{4t}&0 \\ 0&0&e^t \end{bmatrix} {\bf S}^{-1} = e^{9t} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6&\phantom{-}8&\phantom{-}8 \\ -3&-4&-4 \end{bmatrix} + e^{4t} \begin{bmatrix} \phantom{-}8&\phantom{-}12&\phantom{-}16 \\ -10&-15&-20 \\ \phantom{-}4&\phantom{-}6&\phantom{-}8 \end{bmatrix} + e^t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4&\phantom{-}8&\phantom{-}12 \\ -1&-2& -3 \end{bmatrix} , \]
which we check with Mathematica standard command:
A = {{1, 4, 16}, {18, 20, 4}, {-12, -14, -7}}
MatrixExp[A*t]
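The diagonalization construction and the built-in command should agree; as a small extra check we add here, their difference should simplify to the zero matrix:
Simplify[S.MatrixExp[diag*t].Inverse[S] - MatrixExp[A*t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}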
Recall that the exponential matrix function is the unique solution to the following matrix initial value problem:
\[ \frac{\text d}{{\text d}t}\, {\bf U}(t) = {\bf A}\,{\bf U}(t) = {\bf U}(t)\, {\bf A} , \quad {\bf U}(0) = {\bf I}, \]
where I is the identity square matrix. Two other important matrix functions depending on time variable t:
\begin{eqnarray*} {\bf \Phi}(t) &=& \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} = {\bf S} \begin{bmatrix} \frac{\sin 3t}{3}&0&0 \\ 0&\frac{\sin 2t}{2}&0 \\ 0&0&\sin t \end{bmatrix} {\bf S}^{-1} \\ &=& \sin 3t \begin{bmatrix} -1&-\frac{4}{3} &-\frac{4}{3} \\ \phantom{-}2&\phantom{-}\frac{8}{3}&\phantom{-}\frac{8}{3} \\ -1&-\frac{4}{3}&-\frac{4}{3} \end{bmatrix} + \sin 2t \begin{bmatrix} \phantom{-}4&\phantom{-}6&\phantom{-1}8 \\ -5&-\frac{15}{2}&-10 \\ \phantom{-}2&\phantom{-}3&\phantom{-}4 \end{bmatrix} + \sin t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4&\phantom{-}8&\phantom{-}12 \\ -1&-2&-3 \end{bmatrix} , \end{eqnarray*}
\begin{eqnarray*} {\bf \Psi}(t) &=& \cos \left( \sqrt{\bf A}\,t \right) = {\bf S} \begin{bmatrix} \cos 3t&0&0 \\ 0&\cos 2t&0 \\ 0&0&\cos t \end{bmatrix} {\bf S}^{-1} \\ &=& \cos 3t \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6&\phantom{-}8&\phantom{-}8 \\ -3&-4&-4 \end{bmatrix} + \cos 2t \begin{bmatrix} \phantom{-}8&\phantom{-}12&\phantom{-}16 \\ -10&-15&-20 \\ \phantom{-}4&\phantom{-}6&\phantom{-}8 \end{bmatrix} + \cos t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4&\phantom{-}8&\phantom{-}12 \\ -1&-2&-3 \end{bmatrix} . \end{eqnarray*}
S = Transpose[Eigenvectors[A]]
S.{{Sin[3*t]/3, 0, 0}, {0, Sin[2*t]/2, 0}, {0, 0, Sin[t]}}.Inverse[S]
S.{{Cos[3*t], 0, 0}, {0, Cos[2*t], 0}, {0, 0, Cos[t]}}.Inverse[S]
These two matrix functions are unique solutions of the following initial value problems:
\[ \frac{{\text d}^2}{{\text d} t^2} \,{\bf \Phi} + {\bf A}\, {\bf \Phi} = {\bf 0} , \quad {\bf \Phi}(0) = {\bf 0} , \quad \dot{\bf \Phi}(0) = {\bf I} \qquad \mbox{for} \quad {\bf \Phi}(t) = \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} , \]
and
\[ \frac{{\text d}^2}{{\text d} t^2} \,{\bf \Psi} + {\bf A}\, {\bf \Psi} = {\bf 0} , \quad {\bf \Psi}(0) = {\bf I} , \quad \dot{\bf \Psi}(0) = {\bf 0} \qquad \mbox{for} \quad {\bf \Psi}(t) = \cos \left( \sqrt{\bf A}\,t \right) . \]
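As a quick verification (our addition) that the constructed matrix function Φ(t) indeed satisfies its initial value problem, we can differentiate it directly (Phi below is just a temporary name):
Phi = S.{{Sin[3*t]/3, 0, 0}, {0, Sin[2*t]/2, 0}, {0, 0, Sin[t]}}.Inverse[S];
Simplify[D[Phi, {t, 2}] + A.Phi]      (* the zero matrix *)
Phi /. t -> 0                         (* the zero matrix *)
Simplify[D[Phi, t] /. t -> 0]         (* the identity matrix *)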
Example 8: Consider the 3 × 3 matrix \( {\bf A} = \begin{bmatrix} -20&-42&-21 \\ \phantom{-}6&\phantom{-}13&\phantom{-}6 \\ \phantom{-}12&\phantom{-}24&\phantom{-}13 \end{bmatrix} \) that has two distinct eigenvalues

A = {{-20, -42, -21}, {6, 13, 6}, {12, 24, 13}}
Eigenvalues[A]
Out[2]= {4, 1, 1}
Eigenvectors[A]
Out[3]= {{ -7, 2, 4 }, {-1, 0, 1 }, {-2, 1, 0 }}
Since the double eigenvalue \( \lambda =1 \) has two linearly independent eigenvectors, the given matrix is diagonalizable, and we are able to build the transition matrix of its eigenvectors:
\[ {\bf S} = \begin{bmatrix} -7&-1&-2 \\ \phantom{-}2&\phantom{-}0&\phantom{-}1 \\ \phantom{-}4&\phantom{-}1&\phantom{-}0 \end{bmatrix} , \quad\mbox{with} \quad {\bf S}^{-1} = \begin{bmatrix} \phantom{-}1&\phantom{-}2&\phantom{-}1 \\ -4&-8&-3 \\ -2&-3&-2 \end{bmatrix} . \]
For three functions of variable λ depending on a parameter t that usually corresponds to time, \( U(t ) = e^{\lambda \,t} , \quad \Phi (t ) = \frac{\sin \left( \sqrt{\lambda} \,t \right)}{\sqrt{\lambda}} , \quad \Psi (t ) = \cos \left( \sqrt{\lambda} \,t \right) , \) we construct the corresponding matrix-functions by substituting matrix A instead of variable λ. To achieve this, we introduce the diagonal matrix of eigenvalues: \( {\bf \Lambda} = \begin{bmatrix} 4&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} . \) This allows us to construct the required matrix-functions:
\begin{align*} {\bf U}(t) &= e^{{\bf A}\,t} = {\bf S} e^{{\bf \Lambda}\,t} {\bf S}^{-1} = \begin{bmatrix} -7&-1&-2 \\ \phantom{-}2&\phantom{-}0&\phantom{-}1 \\ \phantom{-}4&\phantom{-}1&\phantom{-}0 \end{bmatrix} \begin{bmatrix} e^{4t}&0&0 \\ 0&e^t & 0 \\ 0&0&e^t \end{bmatrix} \begin{bmatrix} \phantom{-}1&\phantom{-}2&\phantom{-}1 \\ -4&-8&-3 \\ -2&-3&-2 \end{bmatrix} \\ &= e^{4t} \begin{bmatrix} -7 & -14 & -7 \\ \phantom{-}2&\phantom{-}4&\phantom{-}2 \\ \phantom{-}4&\phantom{-}8& \phantom{-}4 \end{bmatrix} + e^t \begin{bmatrix} \phantom{-}8&14&\phantom{-}7 \\ -2&-3&-2 \\ -4&-8&-3 \end{bmatrix} . \end{align*}
Similarly,
\begin{align*} {\bf \Phi} (t) &= \frac{\sin \left( \sqrt{\bf A} \,t \right)}{\sqrt{\bf A}} = {\bf S} \frac{\sin \left( \sqrt{\bf \Lambda} \,t \right)}{\sqrt{\bf \Lambda}} {\bf S}^{-1} = \sin 2t \begin{bmatrix} -7/2 & -7 & -7/2 \\ 1&2&1 \\ 2&4&2 \end{bmatrix} + \sin t \begin{bmatrix} \phantom{-}8&14&\phantom{-}7 \\ -2&-3&-2 \\ -4&-8&-3 \end{bmatrix} , \\ {\bf \Psi} (t) &= \cos \left( \sqrt{\bf A}\,t \right) = {\bf S} \cos \left( \sqrt{\bf \Lambda}\,t \right) {\bf S}^{-1} = \cos 2t \begin{bmatrix} -7 & -14 & -7 \\ \phantom{-}2& \phantom{-}4& \phantom{-}2 \\ 4&8&4 \end{bmatrix} + \cos t \begin{bmatrix} \phantom{-}8&14&\phantom{-}7 \\ -2&-3&-2 \\ -4&-8&-3 \end{bmatrix} . \end{align*}
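These constructions can be verified directly in Mathematica; for instance, the exponential built from the eigenvector matrix should agree with the built-in MatrixExp, so their difference should simplify to the zero matrix (our own check):
A = {{-20, -42, -21}, {6, 13, 6}, {12, 24, 13}};
S = Transpose[Eigenvectors[A]];
Simplify[S.DiagonalMatrix[Exp[{4, 1, 1}*t]].Inverse[S] - MatrixExp[A*t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}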
These matrix functions are unique solutions of the following initial value problems with respect to time variable t:
\[ \frac{\text d}{{\text d}t}\,e^{{\bf A}\,t} = {\bf A}\,e^{{\bf A}\,t} , \qquad \lim_{t\to 0} \,e^{{\bf A}\,t} = {\bf I} , \quad \mbox{where } {\bf I} \mbox{ is the identity matrix}; \]
\[ \frac{{\text d}^2}{{\text d}t^2}\,{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0} , \qquad \lim_{t\to 0} \,{\bf \Phi} (t) = {\bf 0} , \quad \quad \lim_{t\to 0} \,\dot{\bf \Phi} (t) = {\bf I} , \quad \mbox{where } {\bf I} \mbox{ is the identity matrix}; \]
\[ \frac{{\text d}^2}{{\text d}t^2}\,{\bf \Psi} (t) + {\bf A}\,{\bf \Psi} (t) = {\bf 0} , \qquad \lim_{t\to 0} \,{\bf \Psi} (t) = {\bf I} , \quad \quad \lim_{t\to 0} \,\dot{\bf \Psi} (t) = {\bf 0} . \]
Example 9: Consider the 3 × 3 matrix \( {\bf A} = \begin{bmatrix} 1 &2&3 \\ 2 &3&4 \\ 2&-6&-4 \end{bmatrix} \) that has two complex conjugate eigenvalues λ = 1 ±2j and one real eigenvalue λ = −2. Mathematica confirms:
A = {{1, 2, 3}, {2, 3, 4}, {2, -6, -4}}
Eigenvalues[A]
Out[2]= {1 + 2 I, 1 - 2 I, -2}
Eigenvectors[A]
Out[3]= {{-1 - I, -2 - I, 2}, {-1 + I, -2 + I, 2}, {-7, -6, 11}}
We build the transition matrix of its eigenvectors:
\[ {\bf S} = \begin{bmatrix} -1-{\bf j} & -1+{\bf j} &-7 \\ -2-{\bf j} & -2+{\bf j} &-6 \\ 2&2&11 \end{bmatrix} , \quad \mbox{with} \quad {\bf S}^{-1} = \frac{1}{26} \begin{bmatrix} 11 + 10{\bf j} & -11 + 3{\bf j} & 1 + 8{\bf j} \\ 11 - 10 {\bf j} & -11 - 3{\bf j} & 1 - 8{\bf j} \\ -4 & 4 & 2 \end{bmatrix} . \]
Now we are ready to define a function of the given square matrix. For example, if \( f(\lambda ) = e^{\lambda \, t} , \) we obtain the corresponding exponential matrix:
\begin{align*} e^{{\bf A}\,t} &= {\bf S} \begin{bmatrix} e^{(1+2{\bf j})\,t} & 0&0 \\ 0& e^{(1-2{\bf j})\,t} & 0 \\ 0&0&e^{-2t} \end{bmatrix} {\bf S}^{-1} \\ &= \begin{bmatrix} -1-{\bf j} & -1+{\bf j} &-7 \\ -2-{\bf j} & -2+{\bf j} &-6 \\ 2&2&11 \end{bmatrix} \, \begin{bmatrix} e^{t} \left( \cos 2t + {\bf j}\,\sin 2t \right) & 0&0 \\ 0& e^{t} \left( \cos 2t - {\bf j}\,\sin 2t \right) & 0 \\ 0&0&e^{-2t} \end{bmatrix} \, \frac{1}{26} \begin{bmatrix} 11 + 10{\bf j} & -11 + 3{\bf j} & 1 + 8{\bf j} \\ 11 - 10 {\bf j} & -11 - 3{\bf j} & 1 - 8{\bf j} \\ -4 & 4 & 2 \end{bmatrix} \\ &= \frac{1}{13} \, e^{-2t} \begin{bmatrix} 14 & -14& -7 \\ 12&-12& -6 \\ -22&22&11 \end{bmatrix} + \frac{1}{13} \, e^{t} \,\cos 2t \begin{bmatrix} -1&14&7 \\ -12&25&6 \\ 22&-22&2 \end{bmatrix} + \frac{1}{13} \, e^{t} \,\sin 2t \begin{bmatrix} 21&-8&9 \\ 31&-5&17 \\ -20&-6&-16 \end{bmatrix} . \end{align*}
Here we use Euler's formula: \( e^{a+b{\bf j}} = e^a \left( \cos b + {\bf j} \sin b \right) . \) Mathematica confirms
S = {{-1-I, -1+I, -7}, {-2-I, -2+I, -6}, {2, 2, 11}}
diag = {{Exp[t]*(Cos[2*t] + I*Sin[2*t]), 0, 0} , {0, Exp[t]*(Cos[2*t] - I*Sin[2*t]), 0}, {0, 0, Exp[-2*t]}}
FullSimplify[S.diag.Inverse[S]]
or
Simplify[ComplexExpand[%]]
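Comparing with the built-in exponential provides one more confirmation (our addition); the difference should simplify to the zero matrix:
FullSimplify[ComplexExpand[S.diag.Inverse[S] - MatrixExp[A*t]]]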
The matrix function \( e^{{\bf A}\,t} \) is the unique solution of the following matrix initial value problem:
\[ \frac{\text d}{{\text d}t}\,e^{{\bf A}\,t} = {\bf A}\,e^{{\bf A}\,t} , \qquad \lim_{t\to 0} \,e^{{\bf A}\,t} = {\bf I} , \]
where I is the identity matrix.    ■