MATLAB TUTORIAL. Part 2.1: Resolvent

The Resolvent Method
Vladimir Dobrushkin.

The resolvent method and its applications to partial differential equations were developed by Vladimir Dobrushkin (born 1949) in the 1980s. When the resolvent method is applied to define a function of a square matrix, it rests on the Cauchy integral formula
\[ f({\bf A}) = \frac{1}{2\pi{\bf j}} \, \int_{\gamma} {\text d} z \, f(z) \left( z{\bf I} - {\bf A} \right)^{-1} , \]
where f is a holomorphic function (meaning that it is represented by a convergent power series) on and inside a closed contour γ that encloses all eigenvalues of the square matrix A. Here j is the imaginary unit, a unit vector in the positive vertical direction of the complex plane ℂ, so that \( {\bf j}^2 = -1 . \) The idea of defining a function of a linear operator via the Cauchy integral formula came from the French mathematician Jules Henri Poincaré (1854--1912) when he worked on continuous groups in 1899. When the integrand is a ratio of two polynomials or of entire functions, the contour integral equals the sum of residues (see the definition of residue below), an approach anticipated by the German mathematician Ferdinand Georg Frobenius (1849--1917) in 1896 [1]. Although our definition of matrix functions is based on these earlier achievements, it is presented at the undergraduate level and supported by numerous examples.
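For readers who want to see the Cauchy integral formula in action, here is a minimal numerical sketch in MATLAB, our own illustration rather than part of the tutorial's scripts: it approximates the contour integral by the trapezoidal rule on a circle |z| = r enclosing all eigenvalues of a sample matrix (the matrix, the radius r, and the node count N are our choices).

% Approximate f(A) = (1/(2*pi*j)) * contour integral of f(z)*(z*I - A)^(-1) dz
% by the trapezoidal rule on a circle that encloses all eigenvalues of A.
A = [-2, -4, 2; -2, 1, 2; 4, 2, 5];    % sample matrix, eigenvalues 3, 6, -5
f = @(z) exp(z);                       % any function holomorphic inside the contour
r = 10;  N = 200;                      % the circle |z| = r encloses the spectrum
fA = zeros(size(A));
for k = 1:N
    z  = r*exp(2i*pi*(k-1)/N);         % node on the contour
    dz = 2i*pi*z/N;                    % differential of z = r*exp(i*theta)
    fA = fA + f(z)*((z*eye(3) - A)\eye(3))*dz;   % f(z)*(z*I - A)^(-1)*dz
end
fA = fA/(2i*pi);
norm(fA - expm(A))                     % small: the quadrature converges rapidly

For a periodic analytic integrand the trapezoidal rule converges exponentially in N, so a few hundred nodes already reproduce expm(A) to high accuracy.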
For each square n-by-n matrix A, we call a function f(λ) of one complex variable λ admissible for A if f is defined on the spectrum (the set of all eigenvalues) of A and all the derivatives
\[ f^{(s)} \left( \lambda_k \right) \equiv \left. \frac{{\text d}^s f}{{\text d} \lambda^s} \right\vert_{\lambda = \lambda_k} , \qquad s=0,1,\ldots , m_k -1 ; \quad k=1,2,\ldots , m, \]
exist. Here \( \lambda_1 , \lambda_2 , \ldots , \lambda_m \) are the distinct eigenvalues of the matrix A, of multiplicities \( m_1 , m_2 , \ldots , m_m , \) respectively.    ▣

A general matrix function is a correspondence that assigns to each square matrix A of order n and each admissible function f a matrix, denoted by \( f({\bf A}) . \) The matrix function \( f({\bf A}) \) has the same dimensions as A, with elements in the real or complex field. The correspondence is assumed to satisfy the following conditions:

  • If \( f(\lambda ) = \lambda , \) then \( f({\bf A}) = {\bf A} ; \)
  • If \( f(\lambda ) = k, \) a constant, then \( f({\bf A}) = k\,{\bf I} , \) where I is the identity matrix;
  • If \( f(\lambda ) = g(\lambda ) + h(\lambda ) , \) then \( f({\bf A}) = g({\bf A}) + h({\bf A}) ; \)
  • If \( f(\lambda ) = g(\lambda ) \cdot h(\lambda ) , \) then \( f({\bf A}) = g({\bf A}) \, h({\bf A}) . \)
These requirements ensure that the definition, when applied to a polynomial p(λ), yields the usual matrix polynomial p(A), and that any rational identity in scalar functions of a complex variable is fulfilled by the corresponding matrix functions. The above conditions are not sufficient for most applications, and a fifth requirement, holding for all admissible functions, is highly desirable:
  • If \( f(\lambda ) = h \left( g(\lambda ) \right) , \) then \( f({\bf A}) = h\left( g({\bf A}) \right) . \)
The extension of the concept of a function of a complex variable to matrix functions has occupied the attention of a number of mathematicians since 1883. Many approaches to defining a function of a square matrix are known; they can be found in references [2--5]. Our exposition presents yet another method, based on applying residues to the Cauchy integral formula.

Recall that the resolvent of a square matrix A is

\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} , \]
which is a matrix function depending on the parameter λ. In general the resolvent, after cancellation of all common factors, is the ratio of a polynomial matrix \( {\bf Q}(\lambda ) \) of degree at most \( k-1 \) to the minimal polynomial \( \psi (\lambda ) \) of degree k:
\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} = \frac{1}{\psi (\lambda )} \, {\bf Q} (\lambda ) . \]
Recall that the minimal polynomial of a square matrix A is the unique monic polynomial ψ of lowest degree such that \( \psi ({\bf A} ) = {\bf 0} . \) The polynomial matrix \( {\bf Q}(\lambda ) \) and the polynomial \( \psi (\lambda ) \) are relatively prime (they have no common factors containing the parameter λ). The polynomial in the denominator of the reduced resolvent formula is then the minimal polynomial of the matrix A.
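To see the distinction, here is a short symbolic sketch (our illustration) for a diagonalizable matrix with a repeated eigenvalue, where the minimal polynomial has lower degree than the characteristic polynomial and appears as the denominator of the reduced resolvent:

syms lambda
A = [2 0 0; 0 2 0; 0 0 3];                    % eigenvalue 2 is repeated
Resolvent = simplify(inv(lambda*eye(3) - A))  % denominators (lambda-2), (lambda-3)
char_poly = charpoly(sym(A), lambda)          % = (lambda-2)^2*(lambda-3), degree 3
psi = minpoly(sym(A), lambda)                 % = (lambda-2)*(lambda-3), degree 2
Q = simplify(psi*Resolvent)                   % polynomial matrix of degree <= 1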
The residue of the ratio \( \displaystyle f(\lambda ) = \frac{P(\lambda )}{Q(\lambda )} \) of two polynomials (or entire functions) at the pole \( \lambda_0 \) of multiplicity m is defined by

\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \frac{1}{(m-1)!}\,\lim_{\lambda \to\lambda_0} \, \frac{{\text d}^{m-1}}{{\text d} \lambda^{m-1}} \, \frac{(\lambda - \lambda_0 )^{m} P(\lambda )}{Q(\lambda )} = \left. \frac{1}{(m-1)!} \, \frac{{\text d}^{m-1}}{{\text d} \lambda^{m-1}} \, \frac{(\lambda - \lambda_0 )^{m} P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} . \]
Recall that a function f(λ) has a pole of multiplicity m at \( \lambda = \lambda_0 \) if the product \( f(\lambda )\, \left( \lambda - \lambda_0 \right)^m \) is a holomorphic function in a neighborhood of \( \lambda = \lambda_0 \) that does not vanish at \( \lambda_0 . \) In particular, for a simple pole, \( m=1 , \) we have
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \frac{P(\lambda_0 )}{Q'(\lambda_0 )} . \]
For a double pole, \( m=2 , \) we have
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \left. \frac{{\text d}}{{\text d} \lambda} \, \frac{(\lambda - \lambda_0 )^2 \, P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} , \]
and for a triple pole, \( m=3 , \) we get
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \left. \frac{1}{2!} \, \frac{{\text d}^{2}}{{\text d} \lambda^{2}} \, \frac{(\lambda - \lambda_0 )^{3} P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} . \]
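In a computer algebra system these residues reduce to differentiation and a limit. A small symbolic sketch (ours) for the double-pole case:

% Residue of exp(lambda*t)/(lambda-1)^2 at the double pole lambda = 1,
% computed from the formula above with m = 2 (one differentiation);
% for a triple pole one would take the second derivative and divide by 2!.
syms lambda t
f = exp(lambda*t)/(lambda - 1)^2;
Res1 = limit(diff((lambda - 1)^2*f, lambda, 1), lambda, 1)   % returns t*exp(t)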

If a real-valued function \( f(\lambda ) \) has a pair of complex conjugate poles \( a \pm b {\bf j} \) (here \( {\bf j}^2 =-1 \) ), then
\[ \mbox{Res}_{a+b{\bf j}} f(\lambda ) + \mbox{Res}_{a-b{\bf j}} f(\lambda ) = 2\, \Re \, \mbox{Res}_{a+b{\bf j}} f(\lambda ) , \]
where Re = \( \Re \) stands for the real part of a complex number. ■
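For instance (our own illustration), the matrix [0, 1; -1, 0] has eigenvalues ±j, and twice the real part of the residue at λ = j already reproduces the matrix exponential:

syms lambda
syms t real
A = [0 1; -1 0];                              % eigenvalues +1i and -1i
R = inv(lambda*eye(2) - A);                   % resolvent, simple poles at +/-1i
Res_j = limit((lambda - 1i)*exp(lambda*t)*R, lambda, 1i);
expAt = simplify(2*real(Res_j))               % [cos(t) sin(t); -sin(t) cos(t)]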
Let f(λ) be an admissible function for a square matrix A. Then the matrix function can be defined as

\[ f({\bf A}) = \sum_{\mbox{all eigenvalues}} \, \mbox{Res} \, f(\lambda ) \,{\bf R}_{\lambda} ({\bf A}) . \]
Note that all residues at eigenvalues of A in the above formula exist for admissible functions. We are mostly interested in the matrix functions corresponding to the following functions of one variable λ: \( \displaystyle e^{\lambda\,t} , \quad \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} , \quad \cos \left( \sqrt{\lambda}\,t \right) \) because they are solutions of the following initial value problems (where dots stand for derivatives with respect to t and I is the identity matrix):
\begin{align*} {\bf f} (t) \equiv e^{{\bf A}\,t} \, : \,& \quad \dot{\bf f} = {\bf A}\, {\bf f}(t) , \qquad {\bf f}(0) = {\bf I} , \\ {\bf \Phi} (t) \equiv \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} \, : \,& \quad \ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi}(t) = {\bf 0} , \qquad {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} , \\ {\bf \Psi} (t) \equiv \cos \left( \sqrt{\bf A}\,t \right) \, : \,& \quad \ddot{\bf \Psi} (t) + {\bf A}\,{\bf \Psi}(t) = {\bf 0} , \qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} . \end{align*}
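The following MATLAB script computes the two ingredients of the method, the characteristic polynomial and the resolvent, for a sample matrix: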
A = [-2,-4,2;-2,1,2;4,2,5]
[char_poly_A, Resolvent_A] = my_resolvent(A)
function [char_poly, resolvent_m] = my_resolvent(A)
    %Returns the characteristic polynomial and the resolvent
    %(lambda*I - A)^(-1) of a square matrix A.
    syms l
    [m,n] = size(A);
    assert(isequal(m,n), "The matrix input is not square.")
    lI = eye(m) * l;
    char_poly = det(lI - A);               %characteristic polynomial det(l*I - A)
    resolvent_m = simplify(inv(lI - A));   %resolvent (l*I - A)^(-1)
end
Example: Consider a defective singular matrix
\[ {\bf A} = \begin{bmatrix} -1&1&0 \\ 0&-1&1 \\ 4&-8&4 \end{bmatrix} , \]
which we enter into the computer:

syms L t
A = [-1, 1, 0; 0, -1, 1; 4, -8, 4];
%% Resolvent
R=eye(3)*L - A;
Resolvent_A = simplify(inv(R))
%% Eigenvalues and Eigenvectors
char_poly = det(R);
Eigenvalues = solve(char_poly,L)'
eigenvectors = sym(zeros(3)); %Preallocate symbolic storage for the eigenvectors.
for i = 1:3 %This computes the eigenvectors
   matrix = A - eye(3)*Eigenvalues(i);
   a_eigenvector = null(matrix); %The eigenvectors span the null space of A minus the eigenvalue times the identity.
   a_eigenvector = a_eigenvector/a_eigenvector(1); %Normalize the first entry to 1 for readability (valid here since it is nonzero).
   eigenvectors(:,i) = a_eigenvector;
end
fprintf('Notice how A has only two linearly independent eigenvectors.')
fprintf('\nTherefore, the matrix A is defective with eigenvalue L = 1\n\n')

Since there are only two linearly independent eigenvectors, the matrix A is defective, with defective eigenvalue \( \lambda =1. \)

We construct three matrix functions with this matrix, corresponding to the scalar functions \( e^{\lambda\,t}, \) \( \phi (\lambda ) = \sin \left( \sqrt{\lambda}\,t\right)/\sqrt{\lambda} , \) and \( \psi (\lambda ) = \cos\left( \sqrt{\lambda}\,t\right) , \) based on the resolvent:

\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{1}{\lambda \left( \lambda -1 \right)^2} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} . \]

First we calculate residues at \( \lambda =0 \) and at \( \lambda =1. \)


%% E^At
%Residue at the simple pole L = 0 (exp(L*t) equals 1 there)
Res0 = subs(L*Resolvent_A,L,0)
%Residue at the double pole L = 1
ans2 = diff(simplify(exp(L*t)*(L - 1)^2*Resolvent_A),L);
Res1 = subs(ans2,L,1)
%Answer
expa = Res0 + Res1
if isequal(simplify(expm(A*t) - expa), sym(zeros(3))), disp('expa is correct'), end
%% Sin(Sqrt(A)t)/sqrt(A)
ans3 = diff(simplify(sin(sqrt(L)*t)/sqrt(L)*(L - 1)^2*Resolvent_A),L);
Res_sin_1 = subs(ans3,L,1);
sin_at = Res_sin_1 + t*Res0  %sin(sqrt(L)*t)/sqrt(L) tends to t as L -> 0
%% Cos(sqrt(A)*t)
ans4 = diff(simplify(cos(sqrt(L)*t)*(L - 1)^2*Resolvent_A),L);
Res_cos_1 = subs(ans4,L,1);
cos_at = Res_cos_1 + Res0    %cos(sqrt(L)*t) tends to 1 as L -> 0
%Does it work?
should_be_zeros = simplify(diff(cos_at,t,2) + A*cos_at)
%Initial Conditions?
Should_be_identity_matrix = subs(cos_at,t,0)
Should_be_zeros_again = subs(diff(cos_at,t),t,0)

To check the answer, we subtract MatrixExp[A*t] from the corresponding expression in Mathematica:

Simplify[%-MatrixExp[A*t]]
Out[8]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

Now we construct the \( \sin \left( \sqrt{\bf A}\,t\right) /\sqrt{\bf A} \) and \( \cos \left( \sqrt{\bf A}\,t\right) \) matrices. Note that at the simple pole \( \lambda = 0 \) the function \( \sin \left( \sqrt{\lambda}\,t \right) /\sqrt{\lambda} \) takes the value t, so this pole contributes t*res0:

A = {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
Resolvent[lambda_] = Simplify[Inverse[lambda*IdentityMatrix[3] - A]];
res0 = lambda*Resolvent[lambda] /. lambda -> 0;   (* residue of the resolvent at lambda = 0 *)
sin1[t_] = t*res0 + D[Factor[(lambda - 1)^2*Resolvent[lambda]]* Sin[Sqrt[lambda]*t]/Sqrt[lambda], lambda] /. lambda -> 1
Out[9]=
{{4 t + t Cos[t] - 4 Sin[t], -4 t - (3/2) t Cos[t] + (11 Sin[t])/2,
t + 1/2 t Cos[t] - (3 Sin[t])/2}, {4 t + 2 t Cos[t] - 6 Sin[t], -4 t - 3 t Cos[t] + 8 Sin[t],
t + t Cos[t] - 2 Sin[t]}, {4 t + 4 t Cos[t] - 8 Sin[t], -4 t - 6 t Cos[t] + 10 Sin[t], t + 2 t Cos[t] - 2 Sin[t]}}
cos1[t_] = res0+ D[Factor[(lambda - 1)^2*Resolvent[lambda]]*
Cos[Sqrt[lambda]*t], lambda] /. lambda -> 1
Out[10]=
{{4 - 3 Cos[t] - t Sin[t], -4 + 4 Cos[t] + 3/2 t Sin[t],
1 - Cos[t] - 1/2 t Sin[t]}, {4 - 4 Cos[t] - 2 t Sin[t], -4 + 5 Cos[t] + 3 t Sin[t],
1 - Cos[t] - t Sin[t]}, {4 - 4 Cos[t] - 4 t Sin[t], -4 + 4 Cos[t] + 6 t Sin[t], 1 - 2 t Sin[t]}}

To check that these matrix-functions are solutions of the second order matrix differential equation, we type:

Simplify[D[cos1[t], {t, 2}] + A.cos1[t]]
Out[11]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

Check initial conditions:

cos1[0]
Out[12]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
D[cos1[t], t] /. t -> 0
Out[13]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

We will get similar answers for the function sin1[t]. ■

Example: We consider a defective matrix that has a single eigenvalue \( \lambda = 1 \) of multiplicity three:

\[ {\bf A} = \begin{bmatrix} -13& -2& 6 \\ 52 & 5 & -20 \\ -22& -4 & 11 \end{bmatrix} . \]
Its resolvent is
\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{1}{(\lambda -1)^3} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} . \]
We build the following matrix functions based on the resolvent method:
\begin{align*} e^{{\bf A}\,t} &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2}\, e^{\lambda\,t} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= e^t \begin{bmatrix} 1-14t-20t^2 & -2t-2t^2 & 6t +8t^2 \\ 52t-40t^2 & 1+4t -4t^2 & -20t +16t^2 \\ -22t-60t^2 & -4t-6t^2 & 1+10t +24t^2 \end{bmatrix} , \\ \dfrac{\sin\left( \sqrt{\bf A} t \right)}{\sqrt{\bf A}} &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2} \, \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= \sin t \begin{bmatrix} -7& -\frac{1}{2} & 3 \\ -56 & -4 & 22 \\ -34 & -\frac{5}{2} & 14 \end{bmatrix} + t\,\cos t \begin{bmatrix} 8 & \frac{1}{2} & -3 \\ 56 & 5 & -22 \\ 34 & \frac{5}{2} & -13 \end{bmatrix} + t^2 \sin t \begin{bmatrix} 5 & \frac{1}{2} & -2 \\ 10 & 1 & -4 \\ 15 & \frac{3}{2} & -6 \end{bmatrix} , \\ \cos \left( \sqrt{\bf A} t \right) &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2} \, \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= \cos t \,{\bf I} + t\,\sin t \begin{bmatrix} 2& \frac{1}{2} & -1 \\ -36 &-3 & 14 \\ -4& \frac{1}{2} & 1 \end{bmatrix} + t^2 \cos t \begin{bmatrix} 5& \frac{1}{2} & -2 \\ 10& 1 & -4 \\ 15 & \frac{3}{2} & -6 \end{bmatrix} , \end{align*}
where I is the identity matrix.
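The sine matrix function, for instance, can be reproduced in Mathematica: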
K = {{-13, -2, 6}, {52, 5, -20}, {-22, -4, 11}}
res[a_] = Simplify[Factor[Inverse[a*IdentityMatrix[3] - K]*(a - 1)^3]]
FullSimplify[Limit[D[Sin[Sqrt[a]*t]*res[a]/Sqrt[a], a, a], a -> 1]]/2
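A MATLAB counterpart (a sketch in the style of the earlier scripts; the variable names are ours) performs the same double differentiation symbolically:

% For the triple eigenvalue L = 1, each matrix function equals
% (1/2!) * d^2/dL^2 of f(L)*(L-1)^3*R_L(A), evaluated at L = 1.
syms L t
A = [-13, -2, 6; 52, 5, -20; -22, -4, 11];
Resolvent_A = simplify(inv(L*eye(3) - A));
Q = simplify((L - 1)^3 * Resolvent_A);          % polynomial part of the resolvent
expAt  = simplify(subs(diff(exp(L*t)*Q, L, 2), L, 1)/2)
sin_At = simplify(subs(diff(sin(sqrt(L)*t)/sqrt(L)*Q, L, 2), L, 1)/2)
cos_At = simplify(subs(diff(cos(sqrt(L)*t)*Q, L, 2), L, 1)/2)
simplify(diff(cos_At, t, 2) + A*cos_At)         % should be the zero matrix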
All these matrix functions are unique solutions of the corresponding initial value problems. ■

Example: We consider two complex-valued matrices:

\[ {\bf A} = \begin{bmatrix} 2+3{\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 3-3{\bf j} \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} 7+{\bf j} & 2-{\bf j} \\ -2+{\bf j} & 11-{\bf j} \end{bmatrix} . \]
A = {{2 + 3*I, 1 - 2*I}, {1 + 5*I, 3 - 3*I}}
Eigenvalues[A]
Out[2]= {4, 1}
B = {{7 + I, 2 - I}, {- 2 +I, 11 - I}}
Eigenvalues[%]
Out[4]= {9, 9}
Eigenvectors[B]
Out[5]= {{1, 1}, {0, 0}}

Therefore, matrix A is diagonalizable (because it has two distinct simple eigenvalues), but matrix B is defective. We are going to find square roots of these matrices, as well as the following matrix functions:

\[ {\bf \Phi}_{\bf A} (t) = \dfrac{\sin\left( \sqrt{\bf A} t \right)}{\sqrt{\bf A}} , \qquad {\bf \Phi}_{\bf B} (t) = \dfrac{\sin\left( \sqrt{\bf B} t \right)}{\sqrt{\bf B}} \]
and
\[ {\bf \Psi}_{\bf A} (t) = \cos \left( \sqrt{\bf A} t \right) , \qquad {\bf \Psi}_{\bf B} (t) = \cos\left( \sqrt{\bf B} t \right) . \]
These matrix functions are unique solutions of the second order matrix differential equations \( \ddot{\bf P}(t) + {\bf A}\,{\bf P}(t) \equiv {\bf 0} \) and \( \ddot{\bf P}(t) + {\bf B}\,{\bf P}(t)\equiv {\bf 0} , \) respectively. Here dots stand for derivatives with respect to the time variable t. They also satisfy the initial conditions
\[ {\bf \Phi}_{\bf A} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf A} (0) = {\bf I} , \qquad {\bf \Psi}_{\bf A} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf A} (0) = {\bf 0} , \]
and similar for index B:
\[ {\bf \Phi}_{\bf B} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf B} (0) = {\bf I} , \qquad {\bf \Psi}_{\bf B} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf B} (0) = {\bf 0} . \]
Since the above initial value problems for matrix functions have unique solutions, the matrix functions \( {\bf \Phi}_{\bf A} (t) , \quad {\bf \Psi}_{\bf A} (t) , \quad {\bf \Phi}_{\bf B} (t) , \quad {\bf \Psi}_{\bf B} (t) \) do not depend on the choice of a square root; they are well defined even when square roots do not exist.

First, we find square roots of these two matrices using the resolvent method. To do so, we need the resolvents:
\begin{align*} {\bf R}_{\lambda} \left( {\bf A} \right) &= \frac{1}{(\lambda -1)(\lambda -4)} \begin{bmatrix} \lambda -3+3{\bf j} & 1 - 2{\bf j} \\ 1+ 5{\bf j} & \lambda -2-3{\bf j} \end{bmatrix} , \\ {\bf R}_{\lambda} \left( {\bf B} \right) &= \frac{1}{(\lambda -9)^2} \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} . \end{align*}
For diagonalizable matrix A, we have
\[ \sqrt{\bf A} = \pm \mbox{Res}_{\lambda =1} \,{\bf R}_{\lambda} \left( {\bf A} \right) \pm 2\,\mbox{Res}_{\lambda =4} \, {\bf R}_{\lambda} \left( {\bf A} \right) = \pm \frac{1}{3} \begin{bmatrix} 2-3{\bf j} & -1+2{\bf j} \\ -1 -5{\bf j} & 1+3{\bf j} \end{bmatrix} \pm \frac{2}{3} \begin{bmatrix} 1 + 3 {\bf j} & 1-2 {\bf j} \\ 1+ 5{\bf j} & 2- 3{\bf j} \end{bmatrix} . \]
Two of these roots are
\[ \sqrt{\bf A} = \begin{bmatrix} 3{\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 1-3{\bf j} \end{bmatrix} \qquad\mbox{and}\qquad \frac{1}{3} \begin{bmatrix} 4 + 3 {\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 5-3{\bf j} \end{bmatrix} . \]
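These roots are easy to reproduce in MATLAB (a sketch in the style of the earlier scripts; the variable names are ours):

syms L
A = [2+3i, 1-2i; 1+5i, 3-3i];
R = simplify(inv(L*eye(2) - A));
Res1 = limit((L - 1)*R, L, 1);    % residue at the simple pole L = 1
Res4 = limit((L - 4)*R, L, 4);    % residue at the simple pole L = 4
root1 = -Res1 + 2*Res4            % first root displayed above
root2 =  Res1 + 2*Res4            % second root displayed above
simplify(root1^2 - A)             % zero matrix
simplify(root2^2 - A)             % zero matrix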
For matrix B, we have
\[ \sqrt{\bf B} = \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \sqrt{\lambda} \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} = \frac{1}{6} \begin{bmatrix} 16+ {\bf j} & 2- {\bf j} \\ -2+{\bf j} & 20-{\bf j} \end{bmatrix} . \]
B = {{7 + I, 2 - I}, {-2 + I, 11 - I}}
R[a_] = FullSimplify[Inverse[a*IdentityMatrix[2] - B]*(a - 9)^2]
R3 = Limit[D[Sqrt[a]*R[a], a], a -> 9]
R3.R3
Out[4]= {{7 + I, 2 - I}, {-2 + I, 11 - I}}
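A MATLAB counterpart of this computation (our sketch):

syms L
B = [7+1i, 2-1i; -2+1i, 11-1i];
RB = simplify(inv(L*eye(2) - B));
sqrtB = limit(diff(sqrt(L)*(L - 9)^2*RB, L), L, 9)   % residue at the double pole L = 9
simplify(sqrtB^2 - B)                                % should be the zero matrix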
Now we turn our attention to finding the matrix functions Φ(t) and Ψ(t). For matrix B, we have
\begin{align*} {\bf \Phi}_{\bf B} (t) &= \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \, \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} \\ &= \frac{1}{54} \begin{bmatrix} (-6+ 3 {\bf j})\,t\,\cos 3t + (20-{\bf j})\,\sin 3t & (2-{\bf j}) \left[ 3t\,\cos 3t - \sin 3t \right] \\ (-2+{\bf j}) \left[ 3t\,\cos 3t - \sin 3t \right] & (6-3{\bf j})\,t\,\cos 3t + (16+ {\bf j})\,\sin 3t \end{bmatrix} \end{align*}
and
\begin{align*} {\bf \Psi}_{\bf B} (t) &= \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} \\ &= \frac{1}{6} \begin{bmatrix} 6\,\cos 3t + (2- {\bf j}) \,t\,\sin 3t & (-2 + {\bf j}) \, t\,\sin 3t \\ (2- {\bf j})\,t\,\sin 3t & 6\,\cos 3t - (2- {\bf j})\,t \, \sin 3t \end{bmatrix} . \end{align*}
Limit[D[Sin[Sqrt[a]*t]*R[a]/Sqrt[a], a], a -> 9]
FullSimplify[Limit[D[Cos[Sqrt[a]*t]*R[a], a], a -> 9]]
Each of these matrix functions is a unique solution of the corresponding initial value problem:
\begin{align*} \ddot{\bf \Phi}_{\bf B} (t) + {\bf B}\, {\bf \Phi}_{\bf B} (t) &= {\bf 0} , \qquad {\bf \Phi}_{\bf B} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf B} (0) = {\bf I} , \\ \ddot{\bf \Psi}_{\bf B} (t) + {\bf B}\, {\bf \Psi}_{\bf B} (t) &= {\bf 0} , \qquad {\bf \Psi}_{\bf B} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf B} (0) = {\bf 0} . \end{align*}
For matrix A, we have
\begin{align*} {\bf \Phi}_{\bf A} (t) &= \mbox{Res}_{\lambda =1} \,\frac{\sin \left( \sqrt{\lambda} \,t \right)}{\sqrt{\lambda}} \, {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda =4} \, \frac{\sin \left( \sqrt{\lambda} \,t \right)}{\sqrt{\lambda}} \, {\bf R}_{\lambda} \left( {\bf A} \right) \\ &= \frac{\sin t}{3} \begin{bmatrix} 2 - 3 {\bf j} & -1+2{\bf j} \\ -1-5{\bf j} & 1 + 3 {\bf j} \end{bmatrix} + \frac{\sin 2t}{6} \begin{bmatrix} 1+3{\bf j} & 1-2 {\bf j} \\ 1+5 {\bf j} & 2-3{\bf j} \end{bmatrix} , \\ {\bf \Psi}_{\bf A} (t) &= \mbox{Res}_{\lambda =1} \,\cos \left( \sqrt{\lambda} \,t \right) {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda =4} \,\cos \left( \sqrt{\lambda} \,t \right) {\bf R}_{\lambda} \left( {\bf A} \right) \\ &= \frac{\cos t}{3} \begin{bmatrix} 2 - 3 {\bf j} & -1+2{\bf j} \\ -1-5{\bf j} & 1 + 3 {\bf j} \end{bmatrix} + \frac{\cos 2t}{3} \begin{bmatrix} 1+3{\bf j} & 1-2 {\bf j} \\ 1+5 {\bf j} & 2-3{\bf j} \end{bmatrix} . \end{align*}

As a byproduct, we can find a complex square root of the identity matrix by choosing the value −1 at λ = 1 and +1 at λ = 4 in the residue formula, that is, \( {\bf R}_1 = \mbox{Res}_{\lambda =4} {\bf R}_{\lambda} \left( {\bf A} \right) - \mbox{Res}_{\lambda =1} {\bf R}_{\lambda} \left( {\bf A} \right) : \)

\[ {\bf R}_1 = \frac{1}{3} \begin{bmatrix} -1+ 6{\bf j} & 2 - 4{\bf j} \\ 2+10{\bf j} & 1 -6{\bf j} \end{bmatrix} \qquad \Longrightarrow \qquad {\bf R}_1^2 = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} . \]
Matrix R1 has two real eigenvalues \( \lambda = \pm 1 . \)
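A quick numerical check (ours):

R1 = [-1+6i, 2-4i; 2+10i, 1-6i]/3;
R1^2            % returns the 2-by-2 identity matrix
eig(R1)         % returns -1 and 1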


  1. F. G. Frobenius, Über die cogredienten Transformationen der bilinearen Formen, Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, 16:7--16, 1896.
  2. R. A. Frazer, W. J. Duncan, and A. R. Collar, Elementary Matrices and Some Applications to Dynamics and Differential Equations, Cambridge University Press, 1938.
  3. R. F. Rinehart, The equivalence of definitions of a matric function, American Mathematical Monthly, 62, No. 6, 395--414, 1955.
  4. Cleve B. Moler and Charles F. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Review, 20, No. 4, 801--836, 1978.
  5. Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM, 2008.
  6. Vladimir Dobrushkin, Applied Differential Equations: The Primary Course, CRC Press, second edition, 2019.