Vladimir Dobrushkin.

When the resolvent method is applied to define a function of a square matrix, it is actually based on the inverse Laplace transform, which can be reduced to the Cauchy integral formula
\[ f({\bf A}) = \frac{1}{2\pi{\bf j}} \, \int_{\gamma} {\text d} z \, f(z) \left( z{\bf I} - {\bf A} \right)^{-1} , \]
where f is a holomorphic function (meaning that it is represented locally by a convergent power series) on and inside a closed contour γ that encloses all eigenvalues of the square matrix A. Here j is the unit vector in the positive vertical direction on the complex plane ℂ, so that \( {\bf j}^2 = -1 . \) The idea of defining a function of a linear operator via the Cauchy integral formula came from the French mathematician Jules Henri Poincaré (1854--1912) when he worked on continuous groups in 1899. When the integrand is a ratio of two polynomials or of entire functions, the contour integral reduces to a sum of residues (see the definition of a residue below), a fact anticipated by the German mathematician Ferdinand Georg Frobenius (1849--1917) in 1896. Although our definition of matrix functions is built on these earlier achievements, it is made accessible to undergraduates through numerous examples.

Our next natural step is to define a set of functions that can be of use for describing matrix functions. It turns out that every matrix uses its own set of functions that we call admissible.


A function f of one complex variable is called admissible for a given square n-by-n matrix A if f is defined (has a finite value) at every eigenvalue of A and all of the following derivatives exist:
\begin{equation} \label{EqResolvent.1} f^{(s)} \left( \lambda_k \right) \equiv \left. \frac{{\text d}^s f}{{\text d} \lambda^s} \right\vert_{\lambda = \lambda_k} , \qquad s=0,1,\ldots , m_k ; \quad k=1,2,\ldots , m. \end{equation}
Here \( \lambda_1 , \lambda_2 , \ldots , \lambda_m \) are all of the distinct eigenvalues of matrix A of multiplicities \( m_1 , m_2 , \ldots , m_m , \) respectively.    ▣

Let f(z) be an admissible function for an n×n matrix A. Extending the definition of f(z) to matrices, we define f(A), formally replacing z by A, yielding another n×n matrix f(A) with entries in the real or complex field. For admissible functions f, g, and h of a matrix A, we require that the following conditions be satisfied:

  • If \( f(z ) = z , \) then \( f({\bf A}) = {\bf A} ; \)
  • If \( f(z ) = k, \) a constant, then \( f({\bf A}) = k\,{\bf I} , \) where I is the identity matrix;
  • If \( f(z ) = g(z ) + h(z ) , \) then \( f({\bf A}) = g({\bf A}) + h({\bf A}) ; \)
  • If \( f(z ) = g(z ) \cdot h(z ) , \) then \( f({\bf A}) = g({\bf A}) \, h({\bf A}) . \)
These requirements will ensure that the definition, when applied to a polynomial p(z), will yield the usual matrix polynomial p(A), and that any rational identity in scalar functions of a complex variable will be fulfilled by the corresponding matrix functions. The conditions above are not sufficient for most applications until a fifth requirement is met:
  • If \( f(z ) = h \left( g(z ) \right) , \) then \( f({\bf A}) = h\left( g({\bf A}) \right) , \)
for all admissible functions. The problem of building a matrix from a given function of a complex variable and a square matrix has occupied the attention of a number of mathematicians since 1883. Many approaches to defining a function of a square matrix are known; they can be found in references [2--5]. Our exposition presents another method, based on applying residues to the Cauchy integral formula.

Recall that the resolvent of a square matrix A is

\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} , \]
which is a matrix function depending on a parameter λ. In general, after cancelling all common factors, the resolvent is the ratio of a polynomial matrix \( {\bf Q}(\lambda ) \) of degree at most \( k-1 \) to the minimal polynomial \( \psi (\lambda ) \) of degree k:
\begin{equation} \label{EqResolvent.2} {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} = \frac{1}{\psi (\lambda )} \, {\bf Q} (\lambda ) . \end{equation}
Recall that the minimal polynomial of a square matrix A is the unique monic polynomial ψ of lowest degree such that \( \psi ({\bf A} ) = {\bf 0} . \) It is assumed that the n×n matrix polynomial Q(λ) and ψ(λ) are relatively prime (that is, they have no common factor depending on the parameter λ). Then the polynomial ψ in the denominator of the reduced resolvent formula is the minimal polynomial of the matrix A.
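In practice, the minimal polynomial can be read off from the resolvent itself. The short Mathematica sketch below (added here for illustration; it is not part of the original computations) cancels common factors in each entry and takes the least common multiple of the entry denominators, using the matrix of Example 2 below:
A2 = {{-20, -42, -21}, {6, 13, 6}, {12, 24, 13}};
res2 = Map[Together, Inverse[lambda*IdentityMatrix[3] - A2], {2}];
psi = PolynomialLCM @@ Map[Denominator, Flatten[res2]]
(* (-4 + lambda) (-1 + lambda), i.e. the minimal polynomial (λ - 1)(λ - 4), possibly shown in expanded form *)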
The residue of the ratio \( \displaystyle f(z ) = \frac{P(z )}{Q(z )} \) of two polynomials (or entire functions) at the pole \( \lambda_0 \) of multiplicity m is defined by

\begin{equation} \label{EqResolvent.3} \mbox{Res}_{\lambda_0} \, \frac{P(z )}{Q(z )} = \frac{1}{(m-1)!}\,\lim_{z \to\lambda_0} \, \frac{{\text d}^{m-1}}{{\text d} z^{m-1}} \, \frac{(z - \lambda_0 )^{m} P(z )}{Q(z )} = \left. \frac{1}{(m-1)!} \, \frac{{\text d}^{m-1}}{{\text d} z^{m-1}} \, \frac{(z - \lambda_0 )^{m} P(z )}{Q(z )} \right\vert_{z = \lambda_0} . \end{equation}
Recall that a function f(z) has a pole of multiplicity m at \( z = \lambda_0 \) if, upon multiplication by \( \left( z - \lambda_0 \right)^m , \) the product \( f(z )\, \left( z - \lambda_0 \right)^m \) becomes a holomorphic function in a neighborhood of \( z = \lambda_0 . \) In particular, for a simple pole \( (m=1) , \) we have
\begin{equation} \label{EqResolvent.4} \mbox{Res}_{\lambda_0} \, \frac{P(z )}{Q(z )} = \frac{P(\lambda_0 )}{Q'(\lambda_0 )} . \end{equation}
For a double pole \( (m=2) , \) we have
\begin{equation} \label{EqResolvent.5} \mbox{Res}_{\lambda_0} \, \frac{P(z )}{Q(z )} = \left. \frac{{\text d}}{{\text d} z} \, \frac{(z - \lambda_0 )^2 \, P(z )}{Q(z )} \right\vert_{z = \lambda_0} , \end{equation}
and for a triple pole \( (m=3) , \) we get
\begin{equation} \label{EqResolvent.6} \mbox{Res}_{\lambda_0} \, \frac{P(z )}{Q(z )} = \left. \frac{1}{2!} \, \frac{{\text d}^{2}}{{\text d} z^{2}} \, \frac{(z - \lambda_0 )^{3} P(z )}{Q(z )} \right\vert_{z = \lambda_0} . \end{equation}
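To see how formulas \eqref{EqResolvent.4} and \eqref{EqResolvent.5} work in practice, here is a small sketch (an added illustration, not part of the original text) that compares them with Mathematica's built-in Residue command on a rational function chosen to have a simple pole at z = 1 and a double pole at z = 2:
f[z_] := (z + 3)/((z - 1)*(z - 2)^2);
{Residue[f[z], {z, 1}], Limit[(z - 1)*f[z], z -> 1]}
(* {4, 4} *)
{Residue[f[z], {z, 2}], Limit[D[(z - 2)^2*f[z], z], z -> 2]}
(* {-4, -4} *)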

If a function \( f(z ) \) that takes real values on the real axis has a pair of complex conjugate poles \( a \pm b {\bf j} \) (here \( {\bf j}^2 =-1 \) ), then
\begin{equation} \label{EqResolvent.7} \mbox{Res}_{a+b{\bf j}} f(z ) + \mbox{Res}_{a-b{\bf j}} f(z ) = 2\, \Re \, \mbox{Res}_{a+b{\bf j}} f(z ) , \end{equation}
where Re z = \( \Re\,z \) stands for the real part of a complex number z.    ■
Let f(z) be an admissible function for a square matrix A. Then the matrix function f(A) can be defined as

\begin{equation} \label{EqResolvent.8} f({\bf A}) = \sum_{\mbox{all eigenvalues}} \, \mbox{Res} \, f(z ) \,{\bf R}_{z} ({\bf A}) . \end{equation}
Note that all residues at eigenvalues of A in the formula above exist for admissible functions. We are mostly interested in matrix functions corresponding to functions of one variable z and depending on a real parameter t associated with time, especially \( \displaystyle e^{z\,t} , \quad \frac{\sin \left( \sqrt{z}\,t \right)}{\sqrt{z}} , \quad \cos \left( \sqrt{z}\,t \right) \) because they are solutions of the following initial value problems (where dots stand for derivatives with respect to t and I is the identity matrix):
\begin{align} {\bf U} ({\bf A}, t) \equiv e^{{\bf A}\,t} \, : \,& \quad \frac{\text d}{{\text d}t}\,{\bf U}(t) = {\bf A}\, {\bf U}(t) , \qquad {\bf U}(0) = {\bf I} , \label{EqResolvent.10} \\ {\bf \Phi} ({\bf A}, t) \equiv \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} \, : \,& \quad \frac{{\text d}^2}{{\text d}t^2}\,{\bf \Phi} (t) + {\bf A}\,{\bf \Phi}(t) = {\bf 0} , \qquad {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} , \label{EqResolvent.11} \\ {\bf \Psi} ({\bf A}, t) \equiv \cos \left( \sqrt{\bf A}\,t \right) \, : \,& \quad \ddot{\bf \Psi} (t) + {\bf A}\,{\bf \Psi}(t) = {\bf 0} , \qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} . \label{EqResolvent.12} \end{align}

When a square matrix A is diagonalizable, the function ψ(λ) in formula \eqref{EqResolvent.2} is a product of simple terms, ψ(λ) = (λ - λ1) ··· (λ - λm). In this case, the residues of the resolvent become Sylvester's auxiliary matrices:

\begin{equation} \label{EqResolvent.13} \mbox{Res}_{\lambda = s} {\bf R}_{\lambda} ({\bf A}) = \mbox{Res}_{\lambda = s} \frac{1}{\psi (\lambda )}\,{\bf Q}(\lambda ) = \frac{1}{\psi' (s )}\,{\bf Q}(s) = {\bf Z}_{s} , \end{equation}
where s is an eigenvalue of matrix A. The derivative ψ'(s) of the minimal polynomial can be found without differentiation: drop the factor (λ - s) from ψ(λ) and evaluate the product of the remaining factors at λ = s, as illustrated by the short sketch below.
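Here is a minimal Mathematica sketch of this shortcut (a hypothetical 2-by-2 example added for illustration, not taken from the text), using a matrix with simple eigenvalues 1 and 3:
M = {{2, 1}, {1, 2}};    (* hypothetical example; eigenvalues 1 and 3 *)
Q[lambda_] = Simplify[(lambda - 1)*(lambda - 3)*Inverse[lambda*IdentityMatrix[2] - M]];
Z1 = Q[1]/(1 - 3)    (* psi'(1) = 1 - 3, obtained by dropping the factor (lambda - 1) *)
Z3 = Q[3]/(3 - 1)    (* psi'(3) = 3 - 1, obtained by dropping the factor (lambda - 3) *)
Z1 + Z3              (* the auxiliary matrices are projectors and sum to the identity *)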

 

Moreover, we will show that the two functions Φ(t) and Ψ(t) can be recovered from the imaginary and real parts, respectively, of the exponential function evaluated at a pure imaginary argument:

\[ {\bf \Phi} (t) = \Im \,{\bf U} \left( \sqrt{\bf A}, {\bf j}t \right) / \sqrt{\bf A} \qquad \mbox{and} \qquad {\bf \Psi} (t) = \Re \,{\bf U} \left( \sqrt{\bf A}, {\bf j}t \right) , \]
where Re z = ℜ z and Im z = ℑ z are real and imaginary parts of a complex number z, respectively. Therefore, we obtain Euler's formula for matrices
\begin{equation} \label{EqResolvent.9} e^{{\bf j\,A}\,t} = \cos \left( {\bf A}\,t \right) + {\bf j}\,\sin \left( {\bf A}\,t \right) . \end{equation}
These matrix-functions are solutions of the initial value problems \eqref{EqResolvent.10} --\eqref{EqResolvent.12}, where dots represent differentiation with respect to time variable t: \( \dot{\bf U} = {\text d}{\bf U}/{\text d}t \) and \( \ddot{\bf U} = {\text d}^2 {\bf U}/{\text d}t^2 . \)

Note that not every function you studied in calculus is suitable for building a corresponding matrix function. For example, the square root function \( f(\lambda ) = \lambda^{1/2} = \sqrt{\lambda} \) is an analytic function, but it is not holomorphic at the origin λ = 0. Therefore, if the matrix A is singular, the square root matrix function \( {\bf A}^{1/2} \) requires a residue evaluation at λ = 0, which does not exist. Nevertheless, we demonstrate with examples that this square root can be determined formally when the minimal polynomial of A has only a simple zero at the origin. However, if the minimal polynomial has a zero root of multiplicity two or higher (so it contains a factor \( \lambda^m \) with m ≥ 2), then the root \( {\bf A}^{1/2} \) does not exist. For instance, the matrix \( {\bf A} = \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} \) has minimal polynomial ψ(λ) = λ², and it has no square root: there is no matrix R such that R² = A, as the following check confirms.
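A direct way to confirm this claim (an added sketch, not in the original text) is to ask Mathematica to solve R·R = A for a general 2-by-2 matrix; the empty output means that no solution exists over the complex numbers:
X = {{a, b}, {c, d}};
Solve[Thread[Flatten[X.X - {{0, 1}, {0, 0}}] == 0], {a, b, c, d}]
(* {} *)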

The following MATLAB function computes the characteristic polynomial and the resolvent of a square matrix; we apply it to a sample matrix:
A = [-2,-4,2;-2,1,2;4,2,5]
[char_poly_A, Resolvent_A] = my_resolvent(A)
function [char_poly, resolvent_m] = my_resolvent(A)
    %MY_RESOLVENT returns the characteristic polynomial det(l*I - A)
    %and the resolvent (l*I - A)^(-1) of a square matrix A.
    syms l
    [m,n] = size(A);
    assert(isequal(m,n), "The matrix input is not square.")
    %assert(~isequal(det(A),0), "The determinant of the matrix is 0.")
    lI = eye(m) * l;
    char_poly = det(lI - A);              % monic characteristic polynomial
    resolvent_m = simplify(inv(lI - A));  % resolvent (l*I - A)^(-1)
end
Example 1: Consider the \( 3 \times 3 \) matrix \( {\bf A} = \begin{bmatrix} \phantom{-1}1& \phantom{-1}4&16 \\ \phantom{-}18&\phantom{-}20&\phantom{-}4 \\ -12&-14&-7 \end{bmatrix} \) that has three distinct eigenvalues; Mathematica confirms
A = {{1,4,16},{18,20,4},{-12,-14,-7}}
Eigenvalues[A]
Out[2]= {9, 4, 1}
Then we define its resolvent using variable s in place of λ
resolvent[s_] = Inverse[s*IdentityMatrix[3]-A]
\[ {\bf R}_{\lambda} ({\bf A}) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} = \begin{bmatrix} \lambda -1 & -4 & -16 \\ -18 & \lambda -20 & -4 \\ 12 & 14 & \lambda + 7 \end{bmatrix}^{-1} = \frac{1}{\left( \lambda - 1 \right)\left( \lambda - 4 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} . \tag{1.1} \]
To find square roots of the given matrix, we choose the function \( f(\lambda ) = \lambda^{1/2} \) and apply the resolvent method. So we have to calculate three residues, one at each eigenvalue:
\[ \sqrt{\bf A} = \mbox{Res}_{\lambda =1} \lambda^{1/2} {\bf R}_{\lambda} ({\bf A}) + \mbox{Res}_{\lambda =4} \lambda^{1/2} {\bf R}_{\lambda} ({\bf A}) + \mbox{Res}_{\lambda =9} \lambda^{1/2} {\bf R}_{\lambda} ({\bf A}) . \]
Since each eigenvalue is a simple pole of the resolvent, we have
\[ \sqrt{\bf A} = \lim_{\lambda \to 1} \left( \lambda -1 \right) \lambda^{1/2} {\bf R}_{\lambda} ({\bf A}) + \lim_{\lambda \to 4} \left( \lambda -4 \right) \lambda^{1/2} {\bf R}_{\lambda} ({\bf A}) + \lim_{\lambda \to 9} \left( \lambda -9 \right) \lambda^{1/2} {\bf R}_{\lambda} ({\bf A}) . \]
Each limit is actually a Sylvester auxiliary matrix corresponding to each eigenvalue. By choosing one of two root values, we obtain eight different roots:
\[ \sqrt{\bf A} = \pm \lim_{\lambda \to 1} \left( \lambda -1 \right) {\bf R}_{\lambda} ({\bf A}) \pm 2\, \lim_{\lambda \to 4} \left( \lambda -4 \right) {\bf R}_{\lambda} ({\bf A}) \pm 3\, \lim_{\lambda \to 9} \left( \lambda -9 \right) {\bf R}_{\lambda} ({\bf A}) . \]
For instance, choosing all positive root branches, we get
\begin{align*} \sqrt{\bf A} &= \lim_{\lambda \to 1} \frac{1}{\left( \lambda - 4 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + 2\,\lim_{\lambda \to 4} \frac{1}{\left( \lambda - 1 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + 3\,\lim_{\lambda \to 9} \frac{1}{\left( \lambda - 4 \right)\left( \lambda - 1 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &= \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4&\phantom{-}8&\phantom{-}12 \\ -1&-2&-3 \end{bmatrix} + 2 \begin{bmatrix} \phantom{-}8&\phantom{-}12&\phantom{-}16 \\ -10&-15&-20 \\ \phantom{-}4& \phantom{-}6&\phantom{-}8 \end{bmatrix} + 3 \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6&\phantom{-}8&\phantom{-}8 \\ -3&-4&-4 \end{bmatrix} \\ &= \begin{bmatrix} \phantom{-}3&\phantom{-}4&\phantom{-}8 \\ \phantom{-}2& \phantom{-}2&-4 \\ -2&-2&\phantom{-}1 \end{bmatrix} \end{align*}
Here, we calculate the three residues of the resolvent, one at each eigenvalue:
r1 = Simplify[(s - 1)*resolvent[s]] /. s -> 1
{{-4, -8, -12}, {4, 8, 12}, {-1, -2, -3}}
r4 = Simplify[(s - 4)*resolvent[s]] /. s -> 4
{{8, 12, 16}, {-10, -15, -20}, {4, 6, 8}}
r9 = Simplify[(s - 9)*resolvent[s]] /. s -> 9
{{-3, -4, -4}, {6, 8, 8}, {-3, -4, -4}}
To find the root of the matrix, we sum the products of each residue with the root of its corresponding eigenvalue. Since the root of each number has two values, one positive and one negative, we can choose one of the values.
r1 + 2*r4 + 3*r9
{{3, 4, 8}, {2, 2, -4}, {-2, -2, 1}}
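A quick check (added here) confirms that this matrix indeed squares to the given matrix A:
root1 = {{3, 4, 8}, {2, 2, -4}, {-2, -2, 1}};
root1.root1
{{1, 4, 16}, {18, 20, 4}, {-12, -14, -7}}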
Upon choosing another sign combination, we find another square root:
\[ \sqrt{\bf A} = \begin{bmatrix} \phantom{-}21&\phantom{-}28&\phantom{-}32 \\ -34&-46& -52 \\ \phantom{-}16&\phantom{-}22&\phantom{-}25 \end{bmatrix} \]
r1 + 2*r4 - 3*r9
{{21, 28, 32}, {-34, -46, -52}, {16, 22, 25}}
We check the answer with Mathematica
R2 = {{21, 28, 32}, {-34, -46, -52}, {16, 22, 25}};
R2.R2
{{1, 4, 16}, {18, 20, 4}, {-12, -14, -7}}

We consider the exponential matrix-function

\[ {\bf U} (t) = \mbox{Res}_{\lambda = 1} e^{\lambda t}{\bf R}_{\lambda} ({\bf A}) + \mbox{Res}_{\lambda =4} e^{\lambda t} {\bf R}_{\lambda} ({\bf A}) + \mbox{Res}_{\lambda =9} e^{\lambda t} {\bf R}_{\lambda} ({\bf A}) . \]
Since all poles of the resolvent are simple, we find
\begin{align*} {\bf U} (t) &= e^{{\bf A}\,t} = e^t \lim_{\lambda \to 1} \frac{1}{\left( \lambda - 4 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + \lim_{\lambda \to 4} \frac{e^{4t}}{\left( \lambda - 1 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + \lim_{\lambda \to 9} \frac{e^{9t}}{\left( \lambda - 1 \right)\left( \lambda - 4 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &= e^t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4&\phantom{-}8&\phantom{-}12 \\ -1&-2&-3 \end{bmatrix} + e^{4t} \begin{bmatrix} \phantom{-1}8\phantom{-}&12&\phantom{-}16 \\ -10&-15&-20 \\ \phantom{-1}4&\phantom{-1}6&\phantom{-1}8 \end{bmatrix} + e^{9t} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6&\phantom{-}8&\phantom{-}8 \\ -3&-4&-4 \end{bmatrix} . \end{align*}
Similarly, we get
\begin{align*} {\bf \Phi} (t) &= \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} = \lim_{\lambda \to 1} \frac{\sin t}{\left( \lambda - 4 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + \frac{\sin 2t}{2}\,\lim_{\lambda \to 4} \frac{1}{\left( \lambda - 1 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + \frac{\sin 3t}{3}\,\lim_{\lambda \to 9} \frac{1}{\left( \lambda - 1 \right)\left( \lambda - 4 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &= \sin t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4&\phantom{-}8&\phantom{-}12 \\ -1&-2&-3 \end{bmatrix} + \frac{\sin 2t}{2} \begin{bmatrix} \phantom{-}8&\phantom{-}12&\phantom{-}16 \\ -10&-15&-20 \\ \phantom{-}4&\phantom{-}6&\phantom{-}8 \end{bmatrix} + \frac{\sin 3t}{3} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6&\phantom{-}8&\phantom{-}8 \\ -3&-4&-4 \end{bmatrix} , \end{align*}
and
\begin{align*} {\bf \Psi} (t) &= \cos \left( t\,\sqrt{\bf A} \right) = \lim_{\lambda \to 1} \frac{\cos t}{\left( \lambda - 4 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + \cos 2t\,\lim_{\lambda \to 4} \frac{1}{\left( \lambda - 1 \right)\left( \lambda - 9 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &\quad + \cos 3t\,\lim_{\lambda \to 9} \frac{1}{\left( \lambda - 1 \right)\left( \lambda - 4 \right)} \begin{bmatrix} \lambda^2 -13\lambda -84 &4(\lambda -49 )& 16\left( \lambda - 19 \right) \\ 18\lambda +78 & \lambda^2 + 6\lambda + 185 & 4\left( \lambda + 71 \right) \\ -12 \left( \lambda +1 \right) & -14\lambda -34 & \lambda^2 -21\lambda -52 \end{bmatrix} \\ &= \cos t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4&\phantom{-}8&\phantom{-}12 \\ -1&-2&-3 \end{bmatrix} + \cos 2t \begin{bmatrix} \phantom{-}8&\phantom{-}12&\phantom{-}16 \\ -10&-15&-20 \\ \phantom{-}4&\phantom{-}6&\phantom{-}8 \end{bmatrix} + \cos 3t \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6&\phantom{-}8&\phantom{-}8 \\ -3&-4&-4 \end{bmatrix} . \end{align*}

Note that the matrices on the right-hand side are Sylvester's auxiliary matrices.

Example 2: Consider the derogatory (but diagonalizable) \( 3 \times 3 \) matrix \( {\bf A} = \begin{bmatrix} -20&-42&-21 \\ \phantom{-1}6&\phantom{-}13&\phantom{-1}6 \\ \phantom{-}12&\phantom{-}24&\phantom{-}13 \end{bmatrix} \) that has only two distinct eigenvalues
A = {{-20,-42,-21}, {6,13,6}, {12,24,13}}
Eigenvalues[A]
Out[2]= {4, 1, 1}
Then we define the resolvent:
\[ {\bf R}_{\lambda} ({\bf A}) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} = \begin{bmatrix} \lambda +20 & 42& 21 \\ -6& \lambda - 13 & -6 \\ -12&-24&\lambda -13 \end{bmatrix}^{-1} = \frac{1}{(\lambda -1)(\lambda -4)} \,\begin{bmatrix} \lambda -25 & -42 & -21 \\ 6 & 8+\lambda & 6 \\ 12&24&8+\lambda \end{bmatrix} . \tag{2.1} \]
which confirms that the minimal polynomial \( \psi (\lambda ) = (\lambda -1)(\lambda -4) \) is a product of simple factors. Recall that the denominator of the resolvent (upon cancelling all common factors) is always the minimal polynomial.
resolvent = FullSimplify[Inverse[lambda*IdentityMatrix[3] - A]]
Out[2]= {{(-25 + lambda)/( 4 - 5 lambda + lambda^2), -(42/(4 - 5 lambda + lambda^2)), -(21/( 4 - 5 lambda + lambda^2))}, {6/(4 - 5 lambda + lambda^2), ( 8 + lambda)/(4 - 5 lambda + lambda^2), 6/( 4 - 5 lambda + lambda^2)}, {12/(4 - 5 lambda + lambda^2), 24/( 4 - 5 lambda + lambda^2), (8 + lambda)/(4 - 5 lambda + lambda^2)}}
Using the resolvent, we define Sylvester's auxiliary matrices (that are actually projector matrices on eigenspaces)
Z1 = FullSimplify[(lambda - 1)*resolvent] /. lambda -> 1
Out[3]= {{8, 14, 7}, {-2, -3, -2}, {-4, -8, -3}}
Z4 = FullSimplify[(lambda - 4)*resolvent] /. lambda -> 4
Out[4]= {{-7, -14, -7}, {2, 4, 2}, {4, 8, 4}}
\[ {\bf Z}_1 = \begin{bmatrix} \phantom{-}8&14&\phantom{-}7 \\ -2& -3&-2 \\ -4&-8&-3 \end{bmatrix} , \qquad {\bf Z}_4 = \begin{bmatrix} -7&-14&-7 \\ \phantom{-}2& \phantom{-1}4&\phantom{-}2 \\ \phantom{-}4&\phantom{-1}8&\phantom{-}4 \end{bmatrix} . \]
These auxiliary matrices are projection matrices; this means that the following relations hold:
\[ {\bf Z}_1 {\bf Z}_1 = {\bf Z}_1 , \qquad {\bf Z}_4 {\bf Z}_4 = {\bf Z}_4 , \qquad {\bf Z}_1 {\bf Z}_4 = {\bf 0} . \]
Their eigenvalues are
Eigenvalues[Z1]
{1, 1, 0}
Eigenvalues[Z4]
{1, 0, 0}
Although matrix A has several square roots
\[ {\bf R}_1 = {\bf Z}_1 + 2\, {\bf Z}_4 = \begin{bmatrix} -6&-14&-7 \\ \phantom{-}2&\phantom{-1}5&\phantom{-}2 \\ \phantom{-}4&\phantom{-1}8&\phantom{-}5 \end{bmatrix} , \qquad {\bf R}_2 = {\bf Z}_1 - 2\, {\bf Z}_4 = \begin{bmatrix} 22&42&21 \\ -6&-11&-6 \\ -12&-24 & -11 \end{bmatrix} , \]
the following matrix functions are defined uniquely:
\begin{align*} {\bf \Phi} (t) &= \sin t \,{\bf Z}_1 + \frac{\sin 2\,t}{2} \,{\bf Z}_4 , \\ {\bf \Psi} (t) &= \cos t\,{\bf Z}_1 + \cos 2\,t\,{\bf Z}_4 . \end{align*}
These two matrix functions are solutions of the matrix differential equation \( \displaystyle \ddot{\bf P} (t) + {\bf A}\,{\bf P} (t) = {\bf 0} \) subject to the initial conditions
\[ {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} , \quad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} \]
because \( \displaystyle {\bf Z}_1 + {\bf Z}_4 = {\bf I} \) and \( \displaystyle {\bf A}\,{\bf Z}_1 - {\bf Z}_1 = {\bf 0}, \quad {\bf A}\,{\bf Z}_4 - 4\,{\bf Z}_4 = {\bf 0} . \)
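As a short added check (not in the original notebook), we can verify with Mathematica that Φ(t) and Ψ(t) built from the auxiliary matrices satisfy the stated differential equation and initial conditions:
A = {{-20, -42, -21}, {6, 13, 6}, {12, 24, 13}};
Z1 = {{8, 14, 7}, {-2, -3, -2}, {-4, -8, -3}};
Z4 = {{-7, -14, -7}, {2, 4, 2}, {4, 8, 4}};
phi[t_] = Sin[t]*Z1 + Sin[2*t]/2*Z4;
psi[t_] = Cos[t]*Z1 + Cos[2*t]*Z4;
Simplify[{D[phi[t], {t, 2}] + A.phi[t], D[psi[t], {t, 2}] + A.psi[t]}]
(* both are zero matrices *)
{phi[0], D[phi[t], t] /. t -> 0, psi[0], D[psi[t], t] /. t -> 0}
(* the zero matrix, the identity matrix, the identity matrix, the zero matrix *)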

Example 3: We consider two complex valued matrices:

\[ {\bf A} = \begin{bmatrix} 2+3{\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 3-3{\bf j} \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} 7+{\bf j} & 2-{\bf j} \\ -2+{\bf j} & 11-{\bf j} \end{bmatrix} . \]
A = {{2 + 3*I, 1 - 2*I}, {1 + 5*I, 3 - 3*I}}
Eigenvalues[A]
Out[2]= {4, 1}
B = {{7 + I, 2 - I}, {- 2 +I, 11 - I}}
Eigenvalues[%]
Out[4]= {9, 9}
We use the percent symbol to reference the previous output, matrix B. See the Wolfram manual for more on referencing.
Eigenvectors[B]
Out[5]= {{1, 1}, {0, 0}}

Therefore, the 2×2 matrix A is diagonalizable (because it has two distinct simple eigenvalues), but matrix B is defective because its eigenspace is one-dimensional, spanned by the vector [1, 1]. We are going to find square roots of these matrices, as well as the following matrix functions:

\[ {\bf \Phi}_{\bf A} (t) = \dfrac{\sin\left( \sqrt{\bf A} t \right)}{\sqrt{\bf A}} , \qquad {\bf \Phi}_{\bf B} (t) = \dfrac{\sin\left( \sqrt{\bf B} t \right)}{\sqrt{\bf B}} \]
and
\[ {\bf \Psi}_{\bf A} (t) = \cos \left( \sqrt{\bf A} t \right) , \qquad {\bf \Psi}_{\bf B} (t) = \cos\left( \sqrt{\bf B} t \right) . \]
These matrix functions are unique solutions of the second order matrix differential equations \( \ddot{\bf P}(t) + {\bf A}\,{\bf P}(t) \equiv {\bf 0} \) and \( \ddot{\bf P}(t) + {\bf B}\,{\bf P}(t)\equiv {\bf 0} , \) respectively. Here dots indicate derivatives with respect to time variable t. They also satisfy the initial conditions
\[ {\bf \Phi}_{\bf A} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf A} (0) = {\bf I} , \qquad {\bf \Psi}_{\bf A} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf A} (0) = {\bf 0} , \]
and similar for index B:
\[ {\bf \Phi}_{\bf B} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf B} (0) = {\bf I} , \qquad {\bf \Psi}_{\bf B} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf B} (0) = {\bf 0} . \]
Since these initial value problems for matrix functions have unique solutions, the matrix functions \( {\bf \Phi}_{\bf A} (t) , \quad {\bf \Psi}_{\bf A} (t) , \quad {\bf \Phi}_{\bf B} (t) , \quad {\bf \Psi}_{\bf B} (t) \) are well defined regardless of whether square roots of the matrices exist.

First, we find square roots of matrices A and B using the resolvent method; for this, we need their resolvents:
\begin{align*} {\bf R}_{\lambda} \left( {\bf A} \right) &= \frac{1}{(\lambda -1)(\lambda -4)} \begin{bmatrix} \lambda -3+3{\bf j} & 1 - 2{\bf j} \\ 1+ 5{\bf j} & \lambda -2-3{\bf j} \end{bmatrix} , \\ {\bf R}_{\lambda} \left( {\bf B} \right) &= \frac{1}{(\lambda -9)^2} \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} . \end{align*}
For diagonalizable matrix A, we have
\[ \sqrt{\bf A} = \pm \mbox{Res}_{\lambda =1} \,{\bf R}_{\lambda} \left( {\bf A} \right) \pm 2\,\mbox{Res}_{\lambda =4} \, {\bf R}_{\lambda} \left( {\bf A} \right) = \pm \frac{1}{3} \begin{bmatrix} 2-3{\bf j} & -1+2{\bf j} \\ -1 -5{\bf j} & 1+3{\bf j} \end{bmatrix} \pm \frac{2}{3} \begin{bmatrix} 1 + 3 {\bf j} & 1-2 {\bf j} \\ 1+ 5{\bf j} & 2- 3{\bf j} \end{bmatrix} . \]
Two of these roots are
\[ \sqrt{\bf A} = \begin{bmatrix} 3{\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 1-3{\bf j} \end{bmatrix} \qquad\mbox{and}\qquad \frac{1}{3} \begin{bmatrix} 4 + 3 {\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 5-3{\bf j} \end{bmatrix} . \]
For matrix B, we have
\[ \sqrt{\bf B} = \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \sqrt{\lambda} \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} = \frac{1}{6} \begin{bmatrix} 16+ {\bf j} & 2- {\bf j} \\ -2+{\bf j} & 20-{\bf j} \end{bmatrix} . \]
Its other root is the negative of the one above: \( -\sqrt{\bf B} . \)
B = {{7 + I, 2 - I}, {-2 + I, 11 - I}}
R[a_] = FullSimplify[Inverse[a*IdentityMatrix[2] - B]*(a - 9)^2]
R3 = Limit[D[Sqrt[a]*R[a], a], a -> 9]
R3.R3
Out[4]= {{7 + I, 2 - I}, {-2 + I, 11 - I}}
Now we turn our attention to defining matrix functions Φ(t) and Ψ(t). For matrix B, we have
\begin{align*} {\bf \Phi}_{\bf B} (t) &= \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \, \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} \\ &= \frac{1}{54} \begin{bmatrix} (-6+ 3 {\bf j})\,t\,\cos 3t + (20-{\bf j})\,\sin 3t & (2-{\bf j}) \left[ 3t\,\cos 3t - \sin 3t \right] \\ (-2+{\bf j}) \left[ 3t\,\cos 3t - \sin 3t \right] & (6-3{\bf j})\,t\,\cos 3t + (16+ {\bf j})\,\sin 3t \end{bmatrix} \end{align*}
and
\begin{align*} {\bf \Psi}_{\bf B} (t) &= \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} \\ &= \frac{1}{6} \begin{bmatrix} 6\,\cos 3t + (2- {\bf j}) \,t\,\sin 3t & (-2 + {\bf j}) \, t\,\sin 3t \\ (2- {\bf j})\,t\,\sin 3t & 6\,\cos 3t - (2- {\bf j})\,t \, \sin 3t \end{bmatrix} . \end{align*}
Limit[D[Sin[Sqrt[a]*t]*R[a]/Sqrt[a], a], a -> 9]
FullSimplify[Limit[D[Cos[Sqrt[a]*t]*R[a], a], a -> 9]]
Each of these matrix functions is a unique solution of the corresponding initial value problem:
\begin{align*} \ddot{\bf \Phi}_{\bf B} (t) + {\bf B}\, {\bf \Phi}_{\bf B} (t) &= {\bf 0} , \qquad {\bf \Phi}_{\bf B} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf B} (0) = {\bf I} , \\ \ddot{\bf \Psi}_{\bf B} (t) + {\bf B}\, {\bf \Psi}_{\bf B} (t) &= {\bf 0} , \qquad {\bf \Psi}_{\bf B} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf B} (0) = {\bf 0} . \end{align*}
For matrix A, we have
\begin{align*} {\bf \Phi}_{\bf A} (t) &= \mbox{Res}_{\lambda =1} \,\frac{\sin \left( \sqrt{\lambda} \,t \right)}{\sqrt{\lambda}} \, {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda =4} \, \frac{\sin \left( \sqrt{\lambda} \,t \right)}{\sqrt{\lambda}} \, {\bf R}_{\lambda} \left( {\bf A} \right) \\ &= \frac{\sin t}{3} \begin{bmatrix} 2 - 3 {\bf j} & -1+2{\bf j} \\ -1-5{\bf j} & 1 + 3 {\bf j} \end{bmatrix} + \frac{\sin 2t}{6} \begin{bmatrix} 1+3{\bf j} & 1-2 {\bf j} \\ 1+5 {\bf j} & 2-3{\bf j} \end{bmatrix} , \\ {\bf \Psi}_{\bf A} (t) &= \mbox{Res}_{\lambda =1} \,\cos \left( \sqrt{\lambda} \,t \right) {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda =4} \,\cos \left( \sqrt{\lambda} \,t \right) {\bf R}_{\lambda} \left( {\bf A} \right) \\ &= \frac{\cos t}{3} \begin{bmatrix} 2 - 3 {\bf j} & -1+2{\bf j} \\ -1-5{\bf j} & 1 + 3 {\bf j} \end{bmatrix} + \frac{\cos 2t}{3} \begin{bmatrix} 1+3{\bf j} & 1-2 {\bf j} \\ 1+5 {\bf j} & 2-3{\bf j} \end{bmatrix} . \end{align*}
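A brief verification sketch (added here, not part of the original notebook) confirms that Φ_A(t) satisfies the second-order matrix equation and its initial conditions:
A = {{2 + 3*I, 1 - 2*I}, {1 + 5*I, 3 - 3*I}};
Z1 = {{2 - 3*I, -1 + 2*I}, {-1 - 5*I, 1 + 3*I}}/3;
Z4 = {{1 + 3*I, 1 - 2*I}, {1 + 5*I, 2 - 3*I}}/3;
PhiA[t_] = Sin[t]*Z1 + Sin[2*t]/2*Z4;
Simplify[{D[PhiA[t], {t, 2}] + A.PhiA[t], PhiA[0], D[PhiA[t], t] /. t -> 0}]
(* the zero matrix, the zero matrix, and the identity matrix *)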

 

Square Roots


As a byproduct, we can find a complex square root of the identity matrix:

\[ {\bf R}_1 = \frac{1}{3} \begin{bmatrix} -1+ 6{\bf j} & 2 - 4{\bf j} \\ 2+10{\bf j} & 1 -6{\bf j} \end{bmatrix} \qquad \Longrightarrow \qquad {\bf R}_1^2 = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} . \]
Matrix R1 has two real eigenvalues \( \lambda = \pm 1 . \)    ■

Example 4: The matrix \( {\bf A} = \begin{bmatrix} 1&\phantom{-}2&\phantom{-}3 \\ 2&\phantom{-}3&\phantom{-}4 \\ 2&-6&-4 \end{bmatrix} \) has two complex conjugate eigenvalues \( \lambda = 1 \pm 2{\bf j} , \) and one real eigenvalue \( \lambda = -2 . \) Since \( (\lambda -1 + 2{\bf j})(\lambda -1 - 2{\bf j}) = (\lambda -1)^2 +4 , \) the Sylvester auxiliary matrix corresponding to the real eigenvalue becomes
\[ {\bf Z}_{-2} = \frac{\left( {\bf A} - {\bf I} \right)^2 + 4\, {\bf I} }{(-3)^2 + 4} = \frac{1}{13}\,\begin{bmatrix} \phantom{-}14&-14&-7 \\ 12&-12&-6 \\ -22&\phantom{-}22&11 \end{bmatrix} . \]
Since its eigenvalues are
Eigenvalues[Z2]
{1, 0, 0}
this matrix is a projection; so \( {\bf Z}_{-2} {\bf Z}_{-2} = {\bf Z}_{-2}^2 = {\bf Z}_{-2}^3 = {\bf Z}_{-2} . \)

Sylvester's auxiliary matrices corresponding to two complex conjugate eigenvalues are also complex conjugate:

\[ {\bf Z}_{1+2{\bf j}} = \frac{\left( {\bf A} + \left( -1 + 2{\bf j} \right) {\bf I} \right) \left( {\bf A} + 2{\bf I} \right) }{4{\bf j} (3+ 2{\bf j})} = \frac{1}{26}\,\begin{bmatrix} -1&14&7 \\ -12&25&6 \\ 22&-22& 2 \end{bmatrix} + \frac{\bf j}{26}\,\begin{bmatrix} -21&8&-9 \\ -31&5&-17 \\ 20&6&16 \end{bmatrix} = \overline{{\bf Z}_{1-2{\bf j}} } . \]
These two complex auxiliary matrices are also projections: \( {\bf Z}_{1+2{\bf j}}^2 = {\bf Z}_{1+2{\bf j}}^3 = {\bf Z}_{1+2{\bf j}} , \) and similar relations for its complex conjugate.

Now suppose you want to build a matrix function corresponding to the exponential function \( U(\lambda ) = e^{\lambda \,t} \) of variable λ depending on a real parameter t. Using Sylvester's auxiliary matrices, we have
\[ U({\bf A}) = e^{{\bf A}\,t} = e^{(1+2{\bf j})t}\, {\bf Z}_{1+2{\bf j}} + e^{(1-2{\bf j})t}\, {\bf Z}_{1-2{\bf j}} + e^{-2t} \,{\bf Z}_{-2} . \]
Factoring out the real-valued function \( e^{t} , \) the first two terms become
\[ e^{2{\bf j}\,t}\, {\bf Z}_{1+2{\bf j}} + e^{-2{\bf j}\,t}\, {\bf Z}_{1-2{\bf j}} = 2\,\mbox{Re}\, e^{2{\bf j}\,t}\, {\bf Z}_{1+2{\bf j}} = 2\,\cos 2t \left( \Re\, {\bf Z}_{1+2{\bf j}} \right) - 2\,\sin 2t \left( \Im\, {\bf Z}_{1+2{\bf j}} \right) , \]
where Re ≡ ℜ represents the real part and Im ≡ ℑ stands for the imaginary part. This allows us to simplify the answer to
\[ U({\bf A}) = e^{{\bf A}\,t} = e^t \left( \frac{\cos 2t}{13} \,\begin{bmatrix} -1&14&7 \\ -12&25&6 \\ 22&-22& 2 \end{bmatrix} -\frac{\sin 2t}{13} \, \begin{bmatrix} -21&8&-9 \\ -31&5&-17 \\ 20&6&16 \end{bmatrix} \right) + e^{-2t} \,{\bf Z}_{-2} . \]
The obtained exponential matrix function \( e^{{\bf A}\,t} \) is a unique solution to the initial value problem:
\[ \dot{\bf X} (t) = {\bf A}\,{\bf X} (t) , \qquad {\bf X} (0) = {\bf I} . \]
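These formulas can be confirmed with a short Mathematica sketch (added here; the variable names Zm2, Zp, and expAt are ours, not from the original notebook), which rebuilds the auxiliary matrices and checks the initial value problem:
A = {{1, 2, 3}, {2, 3, 4}, {2, -6, -4}};
Zm2 = ((A - IdentityMatrix[3]).(A - IdentityMatrix[3]) + 4*IdentityMatrix[3])/13;
Zp = (A + (-1 + 2*I)*IdentityMatrix[3]).(A + 2*IdentityMatrix[3])/(4*I*(3 + 2*I));
expAt = E^((1 + 2*I)*t)*Zp + E^((1 - 2*I)*t)*Conjugate[Zp] + E^(-2*t)*Zm2;
Simplify[{D[expAt, t] - A.expAt, expAt /. t -> 0}]
(* the zero matrix and the identity matrix *)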
   ■

Example 5: Consider a defective singular matrix
\[ {\bf A} = \begin{bmatrix} -1&\phantom{-}1&0 \\ \phantom{-}0&-1&1 \\ \phantom{-}4&-8&4 \end{bmatrix} , \tag{5.1} \]
which we enter into Mathematica notebook:
A= {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
Resolvent[lambda_] = Factor[Inverse[lambda*IdentityMatrix[3] - A]];
Eigenvalues[A]
Out[3]= {1, 1, 0}
Eigenvectors[A]
Out[4]= {{1, 2, 4}, {0, 0, 0}, {1, 1, 1}}

Since there are only two eigenvectors, the matrix A is defective, with defective eigenvalue \( \lambda =1. \)

We construct three functions with this matrix: \( U(t) = e^{\lambda\,t}, \) \( \Phi (t) = \sin \left( \sqrt{\lambda}\,t\right)/\sqrt{\lambda} , \) and \( \Psi (t) = \cos\left( \sqrt{\lambda}\,t\right) , \) based on the resolvent:

\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{1}{\lambda \left( \lambda -1 \right)^2} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} . \tag{5.2} \]
A0 = {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
Inverse[s*IdentityMatrix[3] - A0]
{{(4 - 3 s + s^2)/(s - 2 s^2 + s^3), (-4 + s)/(s - 2 s^2 + s^3), 1/( s - 2 s^2 + s^3)}, {4/(s - 2 s^2 + s^3), (-4 - 3 s + s^2)/( s - 2 s^2 + s^3), (1 + s)/(s - 2 s^2 + s^3)}, {(4 + 4 s)/( s - 2 s^2 + s^3), (-4 - 8 s)/(s - 2 s^2 + s^3), (1 + 2 s + s^2)/( s - 2 s^2 + s^3)}}

syms L t
A = [-1, 1, 0; 0, -1, 1; 4, -8, 4];
%% Resolvent
R=eye(3)*L - A;
Resolvent_A = simplify(inv(R))
%% Eigenvalues and Eigenvectors
char_poly = det(R);
Eigenvalues = solve(char_poly,L)'
eigenvectors = sym(zeros(3));  %Preallocate symbolic storage for the eigenvectors.
for i = 1:3 %This computes the eigenvectors
   matrix = A - eye(3)*Eigenvalues(i);
   a_eigenvector = null(matrix); %An eigenvector spans the null space of A minus the eigenvalue times the identity.
   a_eigenvector = a_eigenvector/min(abs(a_eigenvector)); %Rescale so the entries are small and easy to read.
   eigenvectors(:,i) = a_eigenvector;
end
fprintf('Notice how A has only two linearly independent eigenvectors.')
fprintf('\nTherefore, the matrix A is defective with eigenvalue L = 1\n\n')

Note that the polynomial λ(λ - 1)² in the denominator is the minimal polynomial for the given matrix. For each of the functions above, we calculate residues at \( \lambda =0 \) and at \( \lambda =1. \)

 

Exponential function


We consider the exponential function U(λ) = eλt, where t is a real parameter. The corresponding matrix function is the sum of two residues:
\[ e^{{\bf A}\,t} = \mbox{Res}_{\lambda = 0} e^{\lambda t} {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda = 1} e^{\lambda t} {\bf R}_{\lambda} \left( {\bf A} \right) . \]
We evaluate each residue separately.
res0= Factor[lambda*Resolvent[lambda]] /. lambda -> 0
Out[5]= {{4, -4, 1}, {4, -4, 1}, {4, -4, 1}}
exp1 = D[Factor[(lambda - 1)^2*Resolvent[lambda]]*Exp[lambda*t], lambda] /. lambda -> 1
Out[6]= {{-3 E^t + 2 E^t t, 4 E^t - 3 E^t t, -E^t + E^t t}, {-4 E^t + 4 E^t t,
5 E^t - 6 E^t t, -E^t + 2 E^t t}, {-4 E^t + 8 E^t t,
4 E^t - 12 E^t t, 4 E^t t}}

Then the exponential matrix-function U(t) = exp(At) is the sum of these two residues:

expA= res0+%
Out[7]=
{{4 - 3 E^t + 2 E^t t, -4 + 4 E^t - 3 E^t t,
1 - E^t + E^t t}, {4 - 4 E^t + 4 E^t t, -4 + 5 E^t - 6 E^t t,
1 - E^t + 2 E^t t}, {4 - 4 E^t + 8 E^t t, -4 + 4 E^t - 12 E^t t,
1 + 4 E^t t}}

To check the answer we subtract MatrixExp[A*t] from the above expression:

Simplify[%-MatrixExp[A*t]]
Out[8]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
\[ e^{{\bf A}\,t} = \begin{bmatrix} 4 & -4 & 1 \\ 4 & -4 & 1 \\ 4 & -4 & 1 \end{bmatrix} + e^t \begin{bmatrix} -3+2t &4 - 3t &-1+t \\ -4 + 4t &5 - 6t & -1 + 2t \\ -4 + 8t & 4 -12 t & 4t \end{bmatrix} . \tag{5.3} \]
You can check that this matrix function satisfies the usual properties of the exponential function:
\[ \left( e^{{\bf A}\,t} \right)^{-1} = e^{-{\bf A}\,t} , \qquad e^{{\bf A}\,t} e^{{\bf A}\,t} = e^{2{\bf A}\,t} , \qquad e^{{\bf A}\,0} = \texttt{I}, \qquad \frac{\text d}{{\text d}t} \,e^{{\bf A}\,t} = {\bf A}\, e^{{\bf A}\,t} , \]
where \( \texttt{I} \) is the identity matrix.
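For the matrix of this example, two of these properties can be confirmed directly with the quantities already computed above (a short added check):
Simplify[D[expA, t] - A.expA]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Simplify[expA /. t -> 0]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}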

 

Trigonometric functions


Trigonometric functions cos(At) and sin(At) can be defined from the exponential function with a pure imaginary argument. Moreover, Euler's formula is also valid:
\[ e^{{\bf j}\,{\bf A}\,t} = \Re e^{{\bf j}\,{\bf A}\,t} + {\bf j} \Im e^{{\bf j}\,{\bf A}\,t} = \cos \left( {\bf A}\,t \right) + {\bf j} \sin \left( {\bf A}\,t \right) , \]
A = {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
U[t_] = MatrixExp[A*t];
Assuming[t > 0, ComplexExpand[U[I*t]]]
where its real part is denoted by cos(At) and its imaginary part is sin(At). Note that j denotes the unit vector in the positive vertical direction on the complex plane ℂ, so j² = −1. These matrix functions satisfy all known trigonometric identities, for example,
\[ \cos^2 \left( {\bf A}\,t \right) + \sin^2 \left( {\bf A}\,t \right) = \texttt{I}, \qquad \frac{\text d}{{\text d}t} \,\cos \left( {\bf A}\,t \right) = - {\bf A}\,\sin \left( {\bf A}\,t \right) , \qquad \frac{\text d}{{\text d}t} \,\sin \left( {\bf A}\,t \right) = {\bf A}\, \cos \left( {\bf A}\,t \right) . \]
Of course, these trigonometric functions can be defined directly:
\[ \cos \left( {\bf A}\,t \right) = \mbox{Res}_{\lambda = 0} \cos \left(\lambda t\right) {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda = 1} \cos \left(\lambda t\right) {\bf R}_{\lambda} \left( {\bf A} \right) = \begin{bmatrix} 4 & -4& 1 \\ 4 & -4 & 1 \\ 4 & -4 & 1 \end{bmatrix} + \begin{bmatrix} -3\cos t -2t\,\sin t & 4 \cos t + 3t\,\sin t & -\cos t -t\,\sin t \\ -4 \cos t -4t\,\sin t & 5 \cos t + 6t\,\sin t & -\cos t -2t\,\sin t \\ -4 \cos t - 8t\,\sin t & 4 \cos t+ 12t\,\sin t & -4t\,\sin t \end{bmatrix} \]
and
\[ \sin \left( {\bf A}\,t \right) = \mbox{Res}_{\lambda = 0} \sin \left(\lambda t\right) {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda = 1} \sin \left(\lambda t\right) {\bf R}_{\lambda} \left( {\bf A} \right) = \begin{bmatrix} 2t\,\cos t - 3 \sin t& 4 \sin t -3t\,\cos t & t\,\cos t - \sin t \\ 4t\,\cos t - 4\sin t & 5 \sin t -6t \,\cos t & 2t\,\cos t - \sin t \\ 8t\,\cos t - 4 \sin t & 4 \sin t -12t\, \cos t & 4t\,\cos t \end{bmatrix} . \]
These trigonometric matrix-functions are (unique) solutions of the following initial value problems:
\[ \frac{{\text d}^2}{{\text d} t^2} \left[ \cos \left( {\bf A}\,t \right) \right] + {\bf A}^2 \cos \left( {\bf A}\,t \right) = {\bf 0} , \qquad \lim_{t\to 0}\, \cos \left( {\bf A}\,t \right) = \cos \left( 0 \right) = {\bf I} , \quad \lim_{t\to 0}\, \frac{\text d}{{\text d}t} \left[ \cos \left( {\bf A}\,t \right) \right] = \left. \frac{\text d}{{\text d}t} \left[ \cos \left( {\bf A}\,t \right) \right] \right\vert_{t=0} = {\bf 0} , \]
and
\[ \frac{{\text d}^2}{{\text d} t^2} \left[ \sin \left( {\bf A}\,t \right) \right] + {\bf A}^2 \sin \left( {\bf A}\,t \right) = {\bf 0} , \qquad \lim_{t\to 0}\, \sin \left( {\bf A}\,t \right) = \sin \left( 0 \right) = {\bf 0} , \quad \lim_{t\to 0}\, \frac{\text d}{{\text d}t} \left[ \sin \left( {\bf A}\,t \right) \right] = \left. \frac{\text d}{{\text d}t} \left[ \sin \left( {\bf A}\,t \right) \right] \right\vert_{t=0} = {\bf A} , \]
where I is the identity 3×3 matrix and 0 is the zero 3×3 matrix.
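As a short verification sketch (added here; the entries of sin(At) are typed in from the display above), we can check the second initial value problem in Mathematica:
A = {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
sinAt = {{2*t*Cos[t] - 3*Sin[t], 4*Sin[t] - 3*t*Cos[t], t*Cos[t] - Sin[t]},
  {4*t*Cos[t] - 4*Sin[t], 5*Sin[t] - 6*t*Cos[t], 2*t*Cos[t] - Sin[t]},
  {8*t*Cos[t] - 4*Sin[t], 4*Sin[t] - 12*t*Cos[t], 4*t*Cos[t]}};
Simplify[{D[sinAt, {t, 2}] + A.A.sinAt, sinAt /. t -> 0, D[sinAt, t] /. t -> 0}]
(* the zero matrix, the zero matrix, and the matrix A *)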

 

Trigonometric functions with roots


Now we construct the matrices \( \Phi (t) = \sin \left( \sqrt{\bf A}\,t\right) /\sqrt{\bf A} \) and \( \Psi (t) = \cos \left( \sqrt{\bf A}\,t\right) : \)

\[ \frac{\sin \left( \sqrt{\bf A}\,t\right)}{\sqrt{\bf A}} = \mbox{Res}_{\lambda =0} \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \,{\bf R}_{\lambda}({\bf A} ) + \mbox{Res}_{\lambda =1} \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \,{\bf R}_{\lambda}({\bf A} ) , \]
and
\[ \cos \left( \sqrt{\bf A}\,t\right) = \mbox{Res}_{\lambda =0} \cos\left( \sqrt{\lambda}\,t \right) {\bf R}_{\lambda}({\bf A} ) + \mbox{Res}_{\lambda =1} \cos \left( \sqrt{\lambda}\,t \right) {\bf R}_{\lambda}({\bf A} ) . \]
We calculate the residues for the function Φ(t) first:
\[ \mbox{Res}_{\lambda =0} \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \,{\bf R}_{\lambda}({\bf A} ) = \lim_{\lambda \to 0} \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\left( \lambda - 1 \right)^2 \sqrt{\lambda}} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} = t \begin{bmatrix} 4&-4 &1 \\ 4& -4 & 1 \\ 4& -4 & 1 \end{bmatrix} \]
and
\[ \mbox{Res}_{\lambda =1} \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \,{\bf R}_{\lambda}({\bf A} ) = \lim_{\lambda \to 1} \frac{\text d}{{\text d}\lambda} \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\lambda \sqrt{\lambda}} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} = \begin{bmatrix} t\,\cos t - 4\,\sin t & -\frac{3}{2}\,t\,\cos t + \frac{11}{2}\,\sin t & \frac{t}{2}\,\cos t - \frac{3}{2}\,\sin t \\ 2t\,\cos t - 6\,\sin t &8\,\sin t - 3t\,\cos t & t\,\cos t - 2\,\sin t \\ 4t\,\cos t -8\,\sin t & 10\,\sin t -6t\,\cos t & 2t\,\cos t - 2\,\sin t \end{bmatrix} . \]
Adding these two residues, we obtain
\[ \frac{\sin \left( \sqrt{\bf A}\,t\right)}{\sqrt{\bf A}} = t \begin{bmatrix} 4&-4 &1 \\ 4& -4 & 1 \\ 4& -4 & 1 \end{bmatrix} + \begin{bmatrix} t\,\cos t - 4\,\sin t & -\frac{3}{2}\,t\,\cos t + \frac{11}{2}\,\sin t & \frac{t}{2}\,\cos t - \frac{3}{2}\,\sin t \\ 2t\,\cos t - 6\,\sin t &8\,\sin t - 3t\,\cos t & t\,\cos t - 2\,\sin t \\ 4t\,\cos t -8\,\sin t & 10\,\sin t -6t\,\cos t & 2t\,\cos t - 2\,\sin t \end{bmatrix} . \]
r[s_] = {{s^2 - 3*s + 4, s - 4, 1}, {4, s^2 - 3*s - 4, s + 1}, {4*(s + 1), -4*(2*s + 1), (s + 1)^2}}
D[r[s]*Sin[Sqrt[s]*t]*s^(-3/2), s] /. s -> 1
or
sin1[t_] = D[Factor[(lambda - 1)^2*Resolvent[lambda]]* Sin[Sqrt[lambda]*t]/Sqrt[lambda], lambda] /. lambda -> 1
Out[9]=
{{t Cos[t] - 4 Sin[t], -(3/2) t Cos[t] + (11 Sin[t])/2,
1/2 t Cos[t] - (3 Sin[t])/2}, {2 t Cos[t] - 6 Sin[t], -3 t Cos[t] + 8 Sin[t],
t Cos[t] - 2 Sin[t]}, {4 t Cos[t] - 8 Sin[t], -6 t Cos[t] + 10 Sin[t], 2 t Cos[t] - 2 Sin[t]}}
Similarly, we get
\[ \mbox{Res}_{\lambda =0} \cos \left( \sqrt{\lambda}\,t \right) \,{\bf R}_{\lambda}({\bf A} ) = \lim_{\lambda \to 0} \frac{\cos \left( \sqrt{\lambda}t \right)}{(\lambda -1)^2} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} = \begin{bmatrix} 4&-4&1 \\ 4&-4&1 \\ 4&-4&1 \end{bmatrix} \]
and
\[ \mbox{Res}_{\lambda =1} \cos \left( \sqrt{\lambda}\,t \right) \,{\bf R}_{\lambda}({\bf A} ) = \lim_{\lambda \to 1} \frac{\text d}{{\text d}\lambda} \frac{\cos \left( \sqrt{\lambda}t \right)}{\lambda}\begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} = \begin{bmatrix} -3\,\cos t -t\,\sin t &4\,\cos t + \frac{3}{2}\,t\,\sin t & -\cos t - \frac{t}{2}\,\sin t \\ -4\,\cos t - 2t\,\sin t & 5\,\cos t + 3t\,\sin t & -\cos t - t\,\sin t \\ -4\,\cos t - 4t\,\sin t & 4\,\cos t + 6t\,\sin t & -2t\,\sin t \end{bmatrix} . \]
D[r[s]*Cos[Sqrt[s]*t]/s, s] /. s -> 1
{{-3 Cos[t] - t Sin[t], 4 Cos[t] + 3/2 t Sin[t], -Cos[t] - 1/2 t Sin[t]}, {-4 Cos[t] - 2 t Sin[t], 5 Cos[t] + 3 t Sin[t], -Cos[t] - t Sin[t]}, {-4 Cos[t] - 4 t Sin[t], 4 Cos[t] + 6 t Sin[t], -2 t Sin[t]}}
or
cos1[t_] = res0+ D[Factor[(lambda - 1)^2*Resolvent[lambda]]*
Cos[Sqrt[lambda]*t], lambda] /. lambda -> 1
Out[10]=
{{4 - 3 Cos[t] - t Sin[t], -4 + 4 Cos[t] + 3/2 t Sin[t],
1 - Cos[t] - 1/2 t Sin[t]}, {4 - 4 Cos[t] - 2 t Sin[t], -4 + 5 Cos[t] + 3 t Sin[t],
1 - Cos[t] - t Sin[t]}, {4 - 4 Cos[t] - 4 t Sin[t], -4 + 4 Cos[t] + 6 t Sin[t], 1 - 2 t Sin[t]}}

To check that these matrix-functions are solutions of the second order matrix differential equation, we type:

Simplify[D[cos1[t], {t, 2}] + A.cos1[t]]
Out[11]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

Check initial conditions:

cos1[0]
Out[12]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
D[cos1[t], t] /. t -> 0
Out[13]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

We will get similar answers for the function sin1[t].

 

Square Roots


Note that, generally speaking, the given singular matrix A is not suitable for evaluating a square root because its minimal polynomial \( \psi (\lambda ) = \lambda \left( \lambda - 1 \right)^2 \) contains the factor λ, and the residue of \( \sqrt{\lambda}\,{\bf R}_{\lambda}({\bf A}) \) at λ = 0 does not exist. Nevertheless, we apply the residue method formally and, surprisingly, obtain correct square roots even though the theory does not cover this case.

Since the square root of zero is zero, we need to evaluate only one residue:

\[ {\bf A}^{1/2} = \mbox{Res}_{\lambda = 1} \lambda^{1/2} {\bf R}_{\lambda} \left( {\bf A} \right) = \mbox{Res}_{\lambda =1} \,\frac{1}{\sqrt{\lambda} \left( \lambda -1 \right)^2} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} . \]
Using formula \eqref{EqResolvent.5}, we get
\[ {\bf A}^{1/2} = \lim_{\lambda \to 1} \,\frac{\text d}{{\text d}\lambda} \,\lambda^{-1/2} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} . \]
Application of the product rule yields
\[ {\bf A}^{1/2} = \lim_{\lambda \to 1} \left( -\frac{1}{2} \right) \lambda^{-3/2} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} + \lim_{\lambda \to 1} \begin{bmatrix} 2\lambda -3 & 1 & 0 \\ 0 & 2\lambda -3 & 1 \\ 4 & -8 & 2 (\lambda +1) \end{bmatrix} \]
A0 = {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
r[s_] = s^(-1/2)*{{s^2 - 3*s + 4, s - 4, 1}, {4, s^2 - 3*s - 4, s + 1}, {4*(s + 1), -4*(2*s + 1), (s + 1)^2}}
D[r[s], s] /. s -> 1
R = %
{{-2, 5/2, -(1/2)}, {-2, 2, 0}, {0, -2, 2}}
Finally, we check the answer:
R.R - A0
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Taking the branch of the square root for which \( \sqrt{1} = 1, \) we obtain the matrix root:
\[ {\bf A}^{1/2} = \left( -\frac{1}{2} \right) \begin{bmatrix} 2& -3& 1\\ 4 & -6 & 2 \\ 8& -12& 4 \end{bmatrix} + \begin{bmatrix} -1& 1 & 0 \\ 0& -1&1 \\ 4& -8 & 4 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} -4 & \phantom{-}5 & -1 \\ -4& \phantom{-}4& \phantom{-}0 \\ \phantom{-}0 & -4& \phantom{-}4 \end{bmatrix} . \]

 

Power function


Suppose we need to raise A to the power of 100. This means that we need to consider the power function \( f(\lambda ) = \lambda^{100} . \) In this case, we need to calculate only one residue
\[ {\bf A}^{100} = \mbox{Res}_{\lambda = 1} \lambda^{100} {\bf R}_{\lambda} \left( {\bf A} \right) \]
because the other residue at λ = 0 vanishes. According to \eqref{EqResolvent.5}, we have
\[ {\bf A}^{100} = \mbox{Res}_{\lambda = 1} \lambda^{100} {\bf R}_{\lambda} \left( {\bf A} \right) = \lim_{\lambda \to 1} \,\frac{\text d}{{\text d}\lambda} \lambda^{99} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} . \]
Calculations show that
\[ {\bf A}^{100} = 99\,\lim_{\lambda \to 1} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} + \lim_{\lambda \to 1} \begin{bmatrix} 2\lambda -3 & 1 & 0 \\ 0 & 2\lambda -3 & 1 \\ 4 & -8 & 2(\lambda +1) \end{bmatrix} . \]
Therefore,
\[ {\bf A}^{100} = 99 \begin{bmatrix} 2& -3&1 \\ 4 & -6 & 2 \\ 8&-12& 4\end{bmatrix} + \begin{bmatrix} -1&\phantom{-}1&0 \\ \phantom{-}0&-1&1 \\ \phantom{-}4 &-8&4\end{bmatrix} . \]
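A quick numerical confirmation (an added check, not part of the original notebook): the residue formula agrees with Mathematica's built-in MatrixPower.
A0 = {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
Q1 = {{2, -3, 1}, {4, -6, 2}, {8, -12, 4}};    (* Q(1) *)
dQ1 = {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};    (* Q'(1) *)
99*Q1 + dQ1 == MatrixPower[A0, 100]
True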

%% E^At
%Res0
Res0 = subs(L*Resolvent_A,L,0)
%Res1
ans2=diff((exp(L*t)*((L - 1)^2)*Resolvent_A),L);
Res1 = subs(ans2,L,1)
%Answer
expa = Res0 + Res1
if isequal(simplify(expm(A*t) - expa), sym(zeros(3))), disp('expa is correct'), end  % compare symbolically
%% Sin(Sqrt(A)t)/sqrt(A)
ans3=diff((sin(sqrt(L)*t)/sqrt(L) * (L - 1)^2*Resolvent_A),L);
Res_sin_1 = subs(ans3,L,1);
sin_at = Res_sin_1 + Res0
%% Cos(sqrt(a)*t)
ans4=diff((cos(sqrt(L)*t) * (L - 1)^2*Resolvent_A),L);
Res_cos_1 = subs(ans4,L,1);
cos_at =  Res_cos_1 + Res0
%Does it work?
should_be_zeros = diff(cos_at,t,2) + A*cos_at
%Initial Conditions?
Should_be_identity_matrix = subs(cos_at,t,0)
Should_be_zeros_again = subs(diff(cos_at,t),t,0)
   ■

Example 6: We consider a defective matrix that has one eigenvalue \( \lambda = 1. \)

\[ {\bf A} = \begin{bmatrix} -13& -2& \phantom{1-}6 \\ \phantom{-}52 & \phantom{-}5 & -20 \\ -22& -4 & \phantom{-}11 \end{bmatrix} . \tag{6.1} \]
Its resolvent is
\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{1}{(\lambda -1)^3} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} . \tag{6.2} \]
We build the following matrix functions based on the resolvent method:
\begin{align*} e^{{\bf A}\,t} &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2}\, e^{\lambda\,t} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= e^t \begin{bmatrix} 1-14t-20t^2 & -2t-2t^2 & 6t +8t^2 \\ 52t-40t^2 & 1+4t -4t^2 & -20t +16t^2 \\ -22t-60t^2 & -4t-6t^2 & 1+10t +24t^2 \end{bmatrix} , \\ \dfrac{\sin\left( \sqrt{\bf A} t \right)}{\sqrt{\bf A}} &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2} \, \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= \sin t \begin{bmatrix} -7& -\frac{1}{2} & 3 \\ -56 & -4 & 22 \\ -34 & -\frac{5}{2} & 14 \end{bmatrix} + t\,\cos t \begin{bmatrix} 8 & \frac{1}{2} & -3 \\ 56 & 5 & -22 \\ 34 & \frac{5}{2} & -13 \end{bmatrix} + t^2 \sin t \begin{bmatrix} 5 & \frac{1}{2} & -2 \\ 10 & 1 & -4 \\ 15 & \frac{3}{2} & -6 \end{bmatrix} , \\ \cos \left( \sqrt{\bf A} t \right) &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2} \, \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= \cos t \,{\bf I} + t\,\sin t \begin{bmatrix} \phantom{-}2& \phantom{-}\frac{1}{2} & -1 \\ -36 &-3 & 14 \\ -4& \phantom{-}\frac{1}{2} & \phantom{-}1 \end{bmatrix} + t^2 \cos t \begin{bmatrix} 5& \frac{1}{2} & -2 \\ 10& 1 & -4 \\ 15 & \frac{3}{2} & -6 \end{bmatrix} , \end{align*}
where I is the identity matrix.
K = {{-13, -2, 6}, {52, 5, -20}, {-22, -4, 11}}
res[a_] = Simplify[Factor[Inverse[a*IdentityMatrix[3] - K]*(a - 1)^3]]
FullSimplify[Limit[D[Sin[Sqrt[a]*t]*res[a]/Sqrt[a], a, a], a -> 1]]/2
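An additional check (added sketch): the exponential matrix function obtained from the residue method should coincide with the built-in MatrixExp.
expK = Simplify[Limit[D[Exp[a*t]*res[a], a, a], a -> 1]]/2;
Simplify[expK - MatrixExp[K*t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}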
All these matrix functions are unique solutions of the corresponding initial value problems. ■

 

Here is the code to calculate a function of a square matrix. The function f= Exp is chosen for simplicity, but it can be any admissible function.

f = Exp;
A = {{1, 2}, {-1, 3}};
n = Dimensions[A][[1]];
resolvA = Inverse[lambda*IdentityMatrix[n] - A];
eigvalsA = Eigenvalues[A];
eigvalsAwithmultiplicities = Tally[eigvalsA];
numuniqueeigvalsA = Length[eigvalsAwithmultiplicities];
fofAt = 0;
Do[
lambda0 = eigvalsAwithmultiplicities[[i]][[1]];
m = eigvalsAwithmultiplicities[[i]][[2]];
res = (1/Factorial[m - 1]) * (
Simplify[
( D[
FullSimplify[resolvA * ((lambda - lambda0)^m)]*
f[lambda*t] , {lambda, m - 1}] ) /. {lambda -> lambda0}
]
);
fofAt += res;
, {i, numuniqueeigvalsA}];
fofAt = ComplexExpand[fofAt]
Out[1]=

{{E^(2 t) Cos[t] - E^(2 t) Sin[t], 2 E^(2 t) Sin[t]}, {-E^(2 t) Sin[t], E^(2 t) Cos[t] + E^(2 t) Sin[t]}}

 

For instance, if one wants to calculate cosine of A*t, then type:

f = Cos;
A = {{1, 2}, {-1, 3}};
n = Dimensions[A][[1]];
resolvA = Inverse[lambda*IdentityMatrix[n] - A];
eigvalsA = Eigenvalues[A];
eigvalsAwithmultiplicities = Tally[eigvalsA];
numuniqueeigvalsA = Length[eigvalsAwithmultiplicities];
fofAt = 0;
Do[
lambda0 = eigvalsAwithmultiplicities[[i]][[1]];
m = eigvalsAwithmultiplicities[[i]][[2]];
res = (1/Factorial[m - 1]) * (
Simplify[
( D[
FullSimplify[resolvA * ((lambda - lambda0)^m)]*
f[lambda*t] , {lambda, m - 1}] ) /. {lambda -> lambda0}
]
);
fofAt += res;
, {i, numuniqueeigvalsA}];
fofAt = ComplexExpand[fofAt]
Out[2]=

{{Cos[2 t] Cosh[t] + Sin[2 t] Sinh[t], -2 Sin[2 t] Sinh[t]}, {Sin[2 t] Sinh[t], Cos[2 t] Cosh[t] - Sin[2 t] Sinh[t]}}

 

  1. Dobrushkin, V.A., Applied Differential Equations. The Primary Course, CRC Press, second edition, 2021.
  2. F. G. Frobenius, Über die cogredienten Transformationen der bilinearen Formen, Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, 16: 7–16, 1896.
  3. R. A. Frazer, W. J. Duncan, and A. R. Collar. Elementary Matrices and Some Applications to Dynamics and Differential Equations. Cambridge University Press, 1938.
  4. Higham, Nicholas J., Functions of Matrices. Theory and Computation. SIAM, 2008.
  5. Moler, Cleve B. and Van Loan, Charles F., Nineteen dubious ways to compute the exponential of a matrix. SIAM Review, 20, No 4, 801--836, 1978.
  6. Rinehart, R. F., The equivalence of definitions of a matric function. American Mathematical Monthly, 62, No 6, 395--414, 1955.