Preface


This tutorial was made solely for the purpose of education, and it was designed for students taking Applied Math 0340. It is primarily for students who have some experience using Mathematica. If you have never used Mathematica before and would like to learn the basics of this computer algebra system, it is strongly recommended that you look at the APMA 0330 tutorial. As a friendly reminder, don't forget to clear variables in use and/or the kernel.

Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and you may distribute and refer to this tutorial as long as it is credited appropriately.


Resolvent Method


Vladimir Dobrushkin.

The resolvent method and its applications to partial differential equations were developed by Vladimir Dobrushkin (born 1949) in the 1980s. When the resolvent method is applied to define a function of a square matrix, it is based on the Cauchy integral formula
\[ f({\bf A}) = \frac{1}{2\pi{\bf j}} \, \int_{\gamma} {\text d} z \, f(z) \left( z{\bf I} - {\bf A} \right)^{-1} , \]
where f is holomorphic (meaning that it is represented by a convergent power series) on and inside a closed contour γ that encloses all eigenvalues of the square matrix A. Here j is the unit vector in the positive vertical direction on the complex plane, so that \( {\bf j}^2 = -1 . \) When the integrand is a ratio of two polynomials or of entire functions, the contour integral reduces to a sum of residues (see the definition of a residue below), a fact that goes back to the German mathematician Ferdinand Georg Frobenius (1849--1917):
G. Frobenius. Über die cogredienten Transformationen der bilinearen Formen.
Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, 16:7–16, 1896.
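For a quick numerical illustration of the Cauchy integral formula (the 2-by-2 matrix, the choice f(z) = e^z, and the circular contour below are illustrative assumptions of ours, not part of the method itself), one can compare the contour integral with the built-in MatrixExp:

M = {{1, 2}, {-1, 3}};                        (* eigenvalues 2 + I and 2 - I *)
f[z_] := Exp[z];
r = 4;                                        (* a circle of radius 4 encloses both eigenvalues *)
cauchy = (1/(2 Pi I)) NIntegrate[
    f[r Exp[I s]] Inverse[r Exp[I s] IdentityMatrix[2] - M] I r Exp[I s],
    {s, 0, 2 Pi}];
Chop[cauchy - MatrixExp[N[M]], 10^-4]         (* the zero matrix, up to numerical integration error *)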

For each square n-by-n matrix A, we call a function f(λ) of one complex variable λ admissible for A if f is defined on the spectrum of A (which is the set of all eigenvalues) and all derivatives

\[ f^{(s)} \left( \lambda_k \right) \equiv \left. \frac{{\text d}^s f}{{\text d} \lambda^s} \right\vert_{\lambda = \lambda_k} , \qquad s=0,1,\ldots , m_k ; \quad k=1,2,\ldots , m, \]
exist. Here \( \lambda_1 , \lambda_2 , \ldots , \lambda_m \) are distinct eigenvalues of matrix A of multiplicities \( m_1 , m_2 , \ldots , m_m , \) respectively. ■

A general matrix function is a correspondence that assigns to each square matrix A of order n and each admissible function f a matrix, denoted by \( f({\bf A}) . \) The matrix \( f({\bf A}) \) has the same order as A, with elements in the real or complex field. This correspondence is assumed to satisfy the following conditions:

  • If \( f(\lambda ) = \lambda , \) then \( f({\bf A}) = {\bf A} ; \)
  • If \( f(\lambda ) = k, \) a constant, then \( f({\bf A}) = k\,{\bf I} , \) where I is the identity matrix;
  • If \( f(\lambda ) = g(\lambda ) + h(\lambda ) , \) then \( f({\bf A}) = g({\bf A}) + h({\bf A}) ; \)
  • If \( f(\lambda ) = g(\lambda ) \cdot h(\lambda ) , \) then \( f({\bf A}) = g({\bf A}) \, h({\bf A}) . \)
These requirements ensure that the definition, when applied to a polynomial p(λ), yields the usual matrix polynomial p(A), and that any rational identity in scalar functions of a complex variable is fulfilled by the corresponding matrix functions. The above conditions are not sufficient for most applications, and a fifth requirement is highly desirable:
  • If \( f(\lambda ) = h \left( g(\lambda ) \right) , \) then \( f({\bf A}) = h\left( g({\bf A}) \right) . \)
This composition property is required to hold for holomorphic functions and, more generally, for all admissible functions. The extension of the concept of a function of a complex variable to matrix functions has occupied the attention of a number of mathematicians since 1883. Many approaches to defining a function of a square matrix are known; they can be found in the following references:
  • R. A. Frazer, W. J. Duncan, and A. R. Collar. Elementary Matrices and Some Applications to Dynamics and Differential Equations. Cambridge University Press, 1938.
  • R. F. Rinehart. The equivalence of definitions of a matric function. American Mathematical Monthly, 62, No 6, 395--414, 1955.
  • Cleve B. Moler and Charles F. Van Loan. Nineteen dubious ways to compute the exponential of a matrix. SIAM Review, 20, No 4, 801--836, 1978.
  • Nicholas J. Higham. Functions of Matrices: Theory and Computation. SIAM, 2008.
Here we present another method, which is more accessible at the undergraduate level.

Recall that the resolvent of a square matrix A is

\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} , \]
which is a matrix function depending on the parameter λ. In general, after cancelling all common factors, the resolvent is the ratio of a polynomial matrix \( {\bf Q}(\lambda ) \) of degree at most \( k-1 \) to the minimal polynomial \( \psi (\lambda ) \) of degree k:
\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} = \frac{1}{\psi (\lambda )} \, {\bf Q} (\lambda ) . \]
Recall that the minimal polynomial of a square matrix A is the unique monic polynomial ψ of lowest degree such that \( \psi ({\bf A} ) = {\bf 0} . \) It is assumed that the polynomials \( {\bf Q}(\lambda ) \) and \( \psi (\lambda ) \) are relatively prime (this means that they have no common factors). Then the polynomial in the denominator of the reduced resolvent formula is the minimal polynomial of the matrix A.
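As a brief illustration (the 3-by-3 matrix below is a hypothetical example of ours), the common denominator of the reduced resolvent can be read off and is indeed the minimal polynomial, whose degree may be smaller than the order of the matrix:

M = {{2, 1, 0}, {0, 2, 0}, {0, 0, 2}};        (* Jordan blocks of sizes 2 and 1 for the eigenvalue 2 *)
R = Map[Together, Inverse[lambda*IdentityMatrix[3] - M], {2}];
psi = Factor[PolynomialLCM @@ Flatten[Map[Denominator, R, {2}]]]
(* (-2 + lambda)^2 , so the minimal polynomial has degree 2, not 3 *)
Simplify[(M - 2*IdentityMatrix[3]).(M - 2*IdentityMatrix[3])]
(* {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}, confirming psi(M) = 0 *)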

The residue of the ratio \( \displaystyle f(\lambda ) = \frac{P(\lambda )}{Q(\lambda )} \) of two polynomials (or entire functions) at the pole \( \lambda_0 \) of multiplicity m is defined by

\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \frac{1}{(m-1)!}\,\lim_{\lambda \to\lambda_0} \, \frac{{\text d}^{m-1}}{{\text d} \lambda^{m-1}} \, \frac{(\lambda - \lambda_0 )^{m} P(\lambda )}{Q(\lambda )} = \left. \frac{1}{(m-1)!} \, \frac{{\text d}^{m-1}}{{\text d} \lambda^{m-1}} \, \frac{(\lambda - \lambda_0 )^{m} P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} . \]
Recall that a function f(λ) has a pole of multiplicity m at \( \lambda = \lambda_0 \) if, upon multiplication by \( \left( \lambda - \lambda_0 \right)^m , \) the product \( f(\lambda )\, \left( \lambda - \lambda_0 \right)^m \) becomes a holomorphic function in a neighborhood of \( \lambda = \lambda_0 . \) In particular, for simple pole \( m=1 , \) we have
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \frac{P(\lambda_0 )}{Q'(\lambda_0 )} . \]
For double pole, \( m=2 , \) we have
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \left. \frac{{\text d}}{{\text d} \lambda} \, \frac{(\lambda - \lambda_0 )^2 \, P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} , \]
and for triple pole, \( m=3 , \) we get
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \left. \frac{1}{2!} \, \frac{{\text d}^{2}}{{\text d} \lambda^{2}} \, \frac{(\lambda - \lambda_0 )^{3} P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} . \]
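For instance, the double-pole formula can be checked against Mathematica's built-in Residue command (the particular functions P and Q below are our own illustrative choices):

P[lambda_] := Exp[lambda*t];                  (* an entire function of lambda *)
Q[lambda_] := (lambda - 1)^2*(lambda + 2);    (* double zero at lambda = 1 *)
byFormula = D[(lambda - 1)^2*P[lambda]/Q[lambda], lambda] /. lambda -> 1;
builtIn = Residue[P[lambda]/Q[lambda], {lambda, 1}];
Simplify[byFormula - builtIn]
(* 0 *)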

If a function \( f(\lambda ) \) that takes real values on the real axis has a pair of complex conjugate poles \( a \pm b {\bf j} \) (here \( {\bf j}^2 =-1 \) ), then
\[ \mbox{Res}_{a+b{\bf j}} f(\lambda ) + \mbox{Res}_{a-b{\bf j}} f(\lambda ) = 2\, \Re \, \mbox{Res}_{a+b{\bf j}} f(\lambda ) , \]
where Re = \( \Re \) stands for the real part of a complex number. ■
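A small scalar illustration (with our own choice f(λ) = λ/(λ² + 2λ + 2), whose simple poles are at -1 ± j):

f[lambda_] := lambda/(lambda^2 + 2*lambda + 2);   (* poles at -1 + I and -1 - I *)
r1 = Residue[f[lambda], {lambda, -1 + I}];
r2 = Residue[f[lambda], {lambda, -1 - I}];
{Simplify[r1 + r2], Simplify[2*Re[r1]]}
(* {1, 1} *)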

Let \( f(\lambda ) \) be an admissible function defined on the spectrum of a square matrix A. Then

\[ f({\bf A}) = \sum_{\mbox{all eigenvalues}} \, \mbox{Res} \, f(\lambda ) \,{\bf R}_{\lambda} ({\bf A}) . \]
Note that all residues at eigenvalues of A in the above formula exist for admissible functions. We are mostly interested in the matrix functions corresponding to the functions of one variable λ: \( \displaystyle e^{\lambda\,t} , \quad \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} , \quad \cos \left( \sqrt{\lambda}\,t \right) \) because they are solutions of the following initial value problems (where dots stand for derivatives with respect to t and I is the identity matrix):
\begin{align*} {\bf f} (t) \equiv e^{{\bf A}\,t} \, : \,& \quad \dot{\bf f} = {\bf A}\, {\bf f}(t) , \qquad {\bf f}(0) = {\bf I} , \\ {\bf \Phi} (t) \equiv \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} \, : \,& \quad \ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi}(t) = {\bf 0} , \qquad {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} , \\ {\bf \Psi} (t) \equiv \cos \left( \sqrt{\bf A}\,t \right) \, : \,& \quad \ddot{\bf \bf \Psi} (t) + {\bf A}\,{\bf \Psi}(t) = {\bf 0} , \qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} . \end{align*}

 

To solve another problem with Mathematica, we need to clear the kernel (to do this, click the "Evaluation" drop-down menu and choose the "Quit Kernel" option).

Example: Consider a defective singular matrix

\[ {\bf A} = \begin{bmatrix} -1&1&0 \\ 0&-1&1 \\ 4&-8&4 \end{bmatrix} , \]
which we enter into the computer:
A= {{-1, 1, 0}, {0, -1, 1}, {4, -8, 4}};
Resolvent[lambda_] = Factor[Inverse[lambda*IdentityMatrix[3] - A]];
Eigenvalues[A]
Out[3]= {1, 1, 0}
Eigenvectors[A]
Out[4]= {{1, 2, 4}, {0, 0, 0}, {1, 1, 1}}

Since there are only two eigenvectors, the matrix A is defective, with defective eigenvalue \( \lambda =1. \)

We construct three matrix functions with this matrix, corresponding to \( e^{\lambda\,t}, \) \( \phi (t) = \sin \left( \sqrt{\lambda}\,t\right)/\sqrt{\lambda} , \) and \( \psi (t) = \cos\left( \sqrt{\lambda}\,t\right) , \) based on the resolvent:

\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{1}{\lambda \left( \lambda -1 \right)^2} \begin{bmatrix} \lambda^2 -3\lambda +4 & \lambda -4 & 1 \\ 4 & \lambda^2 -3\lambda -4 & \lambda +1 \\ 4\,(\lambda +1) & -4\,(2\lambda +1) & (\lambda +1)^2 \end{bmatrix} . \]

First we calculate residues at \( \lambda =0 \) and at \( \lambda =1. \)

res0= Factor[lambda*Resolvent[lambda]] /. lambda -> 0
Out[5]= {{4, -4, 1}, {4, -4, 1}, {4, -4, 1}}
exp1 = D[Factor[(lambda - 1)^2*Resolvent[lambda]]*Exp[lambda*t], lambda] /. lambda -> 1
Out[6]= {{-3 E^t + 2 E^t t, 4 E^t - 3 E^t t, -E^t + E^t t}, {-4 E^t + 4 E^t t,
5 E^t - 6 E^t t, -E^t + 2 E^t t}, {-4 E^t + 8 E^t t,
4 E^t - 12 E^t t, 4 E^t t}}

Then the matrix exp(A*t) is the sum of these two residues:

expA= res0+%
Out[7]=
{{4 - 3 E^t + 2 E^t t, -4 + 4 E^t - 3 E^t t,
1 - E^t + E^t t}, {4 - 4 E^t + 4 E^t t, -4 + 5 E^t - 6 E^t t,
1 - E^t + 2 E^t t}, {4 - 4 E^t + 8 E^t t, -4 + 4 E^t - 12 E^t t,
1 + 4 E^t t}}

To check the answer we subtract MatrixExp[A*t] from the above expression:

Simplify[%-MatrixExp[A*t]]
Out[8]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

Now we construct \( \sin \left( \sqrt{\bf A}\,t\right) /\sqrt{\bf A} \) and \( \cos \left( \sqrt{\bf A}\,t\right) \) matrices:

Note that the eigenvalue \( \lambda = 0 \) also contributes to the sine function: since \( \sin \left( \sqrt{\lambda}\,t \right) /\sqrt{\lambda} \to t \) as \( \lambda \to 0 , \) the residue at the simple pole \( \lambda =0 \) equals t*res0. Therefore,

sin1[t_] = t*res0 + D[Factor[(lambda - 1)^2*Resolvent[lambda]]* Sin[Sqrt[lambda]*t]/Sqrt[lambda], lambda] /. lambda -> 1
Out[9]=
{{4 t + t Cos[t] - 4 Sin[t], -4 t - (3/2) t Cos[t] + (11 Sin[t])/2,
t + 1/2 t Cos[t] - (3 Sin[t])/2}, {4 t + 2 t Cos[t] - 6 Sin[t], -4 t - 3 t Cos[t] + 8 Sin[t],
t + t Cos[t] - 2 Sin[t]}, {4 t + 4 t Cos[t] - 8 Sin[t], -4 t - 6 t Cos[t] + 10 Sin[t], t + 2 t Cos[t] - 2 Sin[t]}}
cos1[t_] = res0+ D[Factor[(lambda - 1)^2*Resolvent[lambda]]*
Cos[Sqrt[lambda]*t], lambda] /. lambda -> 1
Out[10]=
{{4 - 3 Cos[t] - t Sin[t], -4 + 4 Cos[t] + 3/2 t Sin[t],
1 - Cos[t] - 1/2 t Sin[t]}, {4 - 4 Cos[t] - 2 t Sin[t], -4 + 5 Cos[t] + 3 t Sin[t],
1 - Cos[t] - t Sin[t]}, {4 - 4 Cos[t] - 4 t Sin[t], -4 + 4 Cos[t] + 6 t Sin[t], 1 - 2 t Sin[t]}}

To check that these matrix-functions are solutions of the second order matrix differential equation, we type:

Simplify[D[cos1[t], {t, 2}] + A.cos1[t]]
Out[11]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

Check initial conditions:

cos1[0]
Out[12]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
D[cos1[t], t] /. t -> 0
Out[13]= {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}

We will get similar answers for the function sin1[t]; a brief check is sketched below. ■
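A brief sketch of these checks for sin1[t] (the expected outputs are indicated as comments):

Simplify[D[sin1[t], {t, 2}] + A.sin1[t]]      (* the zero matrix *)
sin1[0]                                       (* the zero matrix *)
D[sin1[t], t] /. t -> 0                       (* the identity matrix *)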

Example: We consider a defective matrix that has only one eigenvalue, \( \lambda = 1 , \) of multiplicity 3.

\[ {\bf A} = \begin{bmatrix} -13& -2& 6 \\ 52 & 5 & -20 \\ -22& -4 & 11 \end{bmatrix} . \]
Its resolvent is
\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{1}{(\lambda -1)^3} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} . \]
We build the following matrix functions based on the resolvent method:
\begin{align*} e^{{\bf A}\,t} &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2}\, e^{\lambda\,t} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} = e^t \begin{bmatrix} 1-14t-20t^2 & -2t-2t^2 & 6t +8t^2 \\ 52t-40t^2 & 1+4t -4t^2 & -20t +16t^2 \\ -22t-60t^2 & -4t-6t^2 & 1+10t +24t^2 \end{bmatrix} , \\ \dfrac{\sin\left( \sqrt{\bf A} t \right)}{\sqrt{\bf A}} &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2} \, \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= \sin t \begin{bmatrix} -7& -\frac{1}{2} & 3 \\ -56 & -4 & 22 \\ -34 & -\frac{5}{2} & 14 \end{bmatrix} + t\,\cos t \begin{bmatrix} 8 & \frac{1}{2} & -3 \\ 56 & 5 & -22 \\ 34 & \frac{5}{2} & -13 \end{bmatrix} + t^2 \sin t \begin{bmatrix} 5 & \frac{1}{2} & -2 \\ 10 & 1 & -4 \\ 15 & \frac{3}{2} & -6 \end{bmatrix} , \\ \cos \left( \sqrt{\bf A} t \right) &= \lim_{\lambda \to \,1} \, \frac{1}{2} \, \frac{{\text d}^2}{{\text d} \lambda^2} \, \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda^2 -16\,\lambda -25 & -2(\lambda +1) & 2\,(3\lambda + 5) \\ 4\,(13\lambda -33) & \lambda^2 +2\lambda -11 & -4\,(5\lambda -13) \\ -2\,(11\lambda +49)& -4\,(\lambda +2) & \lambda^2 +8\lambda +39 \end{bmatrix} \\ &= \cos t \,{\bf I} + t\,\sin t \begin{bmatrix} 2& \frac{1}{2} & -1 \\ -36 &-3 & 14 \\ -4& \frac{1}{2} & 1 \end{bmatrix} + t^2 \cos t \begin{bmatrix} 5& \frac{1}{2} & -2 \\ 10& 1 & -4 \\ 15 & \frac{3}{2} & -6 \end{bmatrix} , \end{align*}
where I is the identity matrix.
K = {{-13, -2, 6}, {52, 5, -20}, {-22, -4, 11}}
res[a_] = Simplify[Factor[Inverse[a*IdentityMatrix[3] - K]*(a - 1)^3]]
FullSimplify[Limit[D[Sin[Sqrt[a]*t]*res[a]/Sqrt[a], a, a], a -> 1]]/2
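The exponential and cosine matrix functions are obtained in exactly the same way; a brief sketch using the same res[a] (the names expK and cosK are ours), together with two consistency checks:

expK = FullSimplify[Limit[D[Exp[a*t]*res[a], a, a], a -> 1]]/2;
cosK = FullSimplify[Limit[D[Cos[Sqrt[a]*t]*res[a], a, a], a -> 1]]/2;
Simplify[expK - MatrixExp[K*t]]               (* the zero matrix *)
Simplify[D[cosK, {t, 2}] + K.cosK]            (* the zero matrix *)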
All these matrix functions are unique solutions of the corresponding initial value problems. ■

Example: We consider two complex valued matrices:

\[ {\bf A} = \begin{bmatrix} 2+3{\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 3-3{\bf j} \end{bmatrix} , \qquad {\bf B} = \begin{bmatrix} 7+{\bf j} & 2-{\bf j} \\ -2+{\bf j} & 11-{\bf j} \end{bmatrix} . \]
A = {{2 + 3*I, 1 - 2*I}, {1 + 5*I, 3 - 3*I}}
Eigenvalues[A]
Out[2]= {4, 1}
B = {{7 + I, 2 - I}, {- 2 +I, 11 - I}}
Eigenvalues[%]
Out[4]= {9, 9}
Eigenvectors[B]
Out[5]= {{1, 1}, {0, 0}}

Therefore, matrix A is diagonalizable (because it has two distinct simple eigenvalues), but matrix B is defective. We are going to find square roots of these matrices, as well as the following matrix functions:

\[ {\bf \Phi}_{\bf A} (t) = \dfrac{\sin\left( \sqrt{\bf A} t \right)}{\sqrt{\bf A}} , \qquad {\bf \Phi}_{\bf B} (t) = \dfrac{\sin\left( \sqrt{\bf B} t \right)}{\sqrt{\bf B}} \]
and
\[ {\bf \Psi}_{\bf A} (t) = \cos \left( \sqrt{\bf A} t \right) , \qquad {\bf \Psi}_{\bf B} (t) = \cos\left( \sqrt{\bf B} t \right) . \]
These matrix functions are the unique solutions of the second order matrix differential equations \( \ddot{\bf P}(t) + {\bf A}\,{\bf P}(t) \equiv {\bf 0} \) and \( \ddot{\bf P}(t) + {\bf B}\,{\bf P}(t)\equiv {\bf 0} , \) respectively. Here dots stand for derivatives with respect to the time variable t. They also satisfy the initial conditions
\[ {\bf \Phi}_{\bf A} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf A} (0) = {\bf I} , \qquad {\bf \Psi}_{\bf A} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf A} (0) = {\bf 0} , \]
and similarly for index B:
\[ {\bf \Phi}_{\bf B} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf B} (0) = {\bf I} , \qquad {\bf \Psi}_{\bf B} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf B} (0) = {\bf 0} . \]
Since the above initial value problems for matrix functions have unique solutions, the matrix functions \( {\bf \Phi}_{\bf A} (t) , \quad {\bf \Psi}_{\bf A} (t) , \quad {\bf \Phi}_{\bf B} (t) , \quad {\bf \Psi}_{\bf B} (t) \) do not depend on the choice of a square root, and they are well defined even when a square root does not exist.

First, we find square roots of these two matrices using the resolvent method. To achieve this, we need the resolvents:
\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{1}{(\lambda -1)(\lambda -4)} \begin{bmatrix} \lambda -3+3{\bf j} & 1 - 2{\bf j} \\ 1+ 5{\bf j} & \lambda -2-3{\bf j} \end{bmatrix} , \qquad {\bf R}_{\lambda} \left( {\bf B} \right) = \frac{1}{(\lambda -9)^2} \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} \]
For diagonalizable matrix A, we have
\[ \sqrt{\bf A} = \pm \mbox{Res}_{\lambda =1} \,{\bf R}_{\lambda} \left( {\bf A} \right) \pm 2\,\mbox{Res}_{\lambda =4} \, {\bf R}_{\lambda} \left( {\bf A} \right) = \pm \frac{1}{3} \begin{bmatrix} 2-3{\bf j} & -1+2{\bf j} \\ -1 -5{\bf j} & 1+3{\bf j} \end{bmatrix} \pm \frac{2}{3} \begin{bmatrix} 1 + 3 {\bf j} & 1-2 {\bf j} \\ 1+ 5{\bf j} & 2- 3{\bf j} \end{bmatrix} . \]
Two of these roots are
\[ \sqrt{\bf A} = \begin{bmatrix} 3{\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 1-3{\bf j} \end{bmatrix} \qquad\mbox{and}\qquad \frac{1}{3} \begin{bmatrix} 4 + 3 {\bf j} & 1-2{\bf j} \\ 1+5{\bf j} & 5-3{\bf j} \end{bmatrix} . \]
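Both candidates can be verified directly (S1 and S2 are our names for the two displayed roots):

S1 = {{3*I, 1 - 2*I}, {1 + 5*I, 1 - 3*I}};
S2 = {{4 + 3*I, 1 - 2*I}, {1 + 5*I, 5 - 3*I}}/3;
{S1.S1, S2.S2}
(* both products equal {{2 + 3 I, 1 - 2 I}, {1 + 5 I, 3 - 3 I}}, that is, the matrix A *)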
For matrix B, we have
\[ \sqrt{\bf B} = \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \sqrt{\lambda} \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} = \frac{1}{6} \begin{bmatrix} 16+ {\bf j} & 2- {\bf j} \\ -2+{\bf j} & 20-{\bf j} \end{bmatrix} . \]
B = {{7 + I, 2 - I}, {-2 + I, 11 - I}}
R[a_] = FullSimplify[Inverse[a*IdentityMatrix[2] - B]*(a - 9)^2]
R3 = Limit[D[Sqrt[a]*R[a], a], a -> 9]
R3.R3
Out[4]= {{7 + I, 2 - I}, {-2 + I, 11 - I}}
Now we turn our attention to defining the matrix functions \( {\bf \Phi} \) and \( {\bf \Psi} . \) For matrix B, we have
\[ {\bf \Phi}_{\bf B} (t) = \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \, \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} = \frac{1}{54} \begin{bmatrix} (-6+ 3 {\bf j})\,t\,\cos 3t + (20-{\bf j})\,\sin 3t & (2-{\bf j}) \left[ 3t\,\cos 3t - \sin 3t \right] \\ (-2+{\bf j}) \left[ 3t\,\cos 3t - \sin 3t \right] & (6-3{\bf j})\,t\,\cos 3t + (16+ {\bf j})\,\sin 3t \end{bmatrix} \]
and
\[ {\bf \Psi}_{\bf B} (t) = \lim_{\lambda \to \,9} \, \frac{\text d}{{\text d} \lambda} \, \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda -11 +{\bf j} & 2 - {\bf j} \\ -2+ {\bf j} & \lambda -7 - {\bf j} \end{bmatrix} = \frac{1}{6} \begin{bmatrix} 6\,\cos 3t + (2- {\bf j}) \,t\,\sin 3t & (-2 + {\bf j}) \, t\,\sin 3t \\ (2- {\bf j})\,t\,\sin 3t & 6\,\cos 3t - (2- {\bf j})\,t \, \sin 3t \end{bmatrix} . \]
Limit[D[Sin[Sqrt[a]*t]*R[a]/Sqrt[a], a], a -> 9]
FullSimplify[Limit[D[Cos[Sqrt[a]*t]*R[a], a], a -> 9]]
Each of these matrix functions is the unique solution of the corresponding initial value problem:
\begin{align*} \ddot{\bf \Phi}_{\bf B} (t) + {\bf B}\, {\bf \Phi}_{\bf B} (t) &= {\bf 0} , \qquad {\bf \Phi}_{\bf B} (0) = {\bf 0} , \quad \dot{\bf \Phi}_{\bf B} (0) = {\bf I} , \\ \ddot{\bf \Psi}_{\bf B} (t) + {\bf B}\, {\bf \Psi}_{\bf B} (t) &= {\bf 0} , \qquad {\bf \Psi}_{\bf B} (0) = {\bf I} , \quad \dot{\bf \Psi}_{\bf B} (0) = {\bf 0} . \end{align*}
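These properties can be checked directly; here is a brief sketch (the names PhiB and PsiB are ours and simply wrap the two commands above):

PhiB[t_] = Limit[D[Sin[Sqrt[a]*t]*R[a]/Sqrt[a], a], a -> 9];
PsiB[t_] = FullSimplify[Limit[D[Cos[Sqrt[a]*t]*R[a], a], a -> 9]];
Simplify[D[PhiB[t], {t, 2}] + B.PhiB[t]]      (* the zero matrix *)
{PhiB[0], D[PhiB[t], t] /. t -> 0}            (* the zero matrix and the identity matrix *)
{PsiB[0], D[PsiB[t], t] /. t -> 0}            (* the identity matrix and the zero matrix *)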
For matrix A, we have
\begin{align*} {\bf \Phi}_{\bf A} (t) &= \mbox{Res}_{\lambda =1} \,\frac{\sin \left( \sqrt{\lambda} \,t \right)}{\sqrt{\lambda}} \, {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda =4} \, \frac{\sin \left( \sqrt{\lambda} \,t \right)}{\sqrt{\lambda}} \, {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{\sin t}{3} \begin{bmatrix} 2 - 3 {\bf j} & -1+2{\bf j} \\ -1-5{\bf j} & 1 + 3 {\bf j} \end{bmatrix} + \frac{\sin 2t}{6} \begin{bmatrix} 1+3{\bf j} & 1-2 {\bf j} \\ 1+5 {\bf j} & 2-3{\bf j} \end{bmatrix} , \\ {\bf \Psi}_{\bf A} (t) &= \mbox{Res}_{\lambda =1} \,\cos \left( \sqrt{\lambda} \,t \right) {\bf R}_{\lambda} \left( {\bf A} \right) + \mbox{Res}_{\lambda =4} \,\cos \left( \sqrt{\lambda} \,t \right) {\bf R}_{\lambda} \left( {\bf A} \right) = \frac{\cos t}{3} \begin{bmatrix} 2 - 3 {\bf j} & -1+2{\bf j} \\ -1-5{\bf j} & 1 + 3 {\bf j} \end{bmatrix} + \frac{\cos 2t}{3} \begin{bmatrix} 1+3{\bf j} & 1-2 {\bf j} \\ 1+5 {\bf j} & 2-3{\bf j} \end{bmatrix} . \end{align*}
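Since both eigenvalues of A are simple, the Mathematica computation needs only the simple-pole residue formula; here is a brief sketch (the names QA, PhiA, and PsiA are ours):

QA[a_] = FullSimplify[Inverse[a*IdentityMatrix[2] - A]*(a - 1)*(a - 4)];   (* numerator matrix of the resolvent *)
PhiA[t_] = Sin[t]*QA[1]/(1 - 4) + (Sin[2*t]/2)*QA[4]/(4 - 1);
PsiA[t_] = Cos[t]*QA[1]/(1 - 4) + Cos[2*t]*QA[4]/(4 - 1);
Simplify[D[PhiA[t], {t, 2}] + A.PhiA[t]]      (* the zero matrix *)
{PhiA[0], D[PhiA[t], t] /. t -> 0}            (* the zero matrix and the identity matrix *)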

As a byproduct, we can find a complex square root of the identity matrix:

\[ {\bf R}_1 = \frac{1}{3} \begin{bmatrix} -1+ 6{\bf j} & 2 - 4{\bf j} \\ 2+10{\bf j} & 1 -6{\bf j} \end{bmatrix} \qquad \Longrightarrow \qquad {\bf R}_1^2 = \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} , \]
which has two real eigenvalues \( \lambda = \pm 1 . \)
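This can be confirmed directly:

R1 = {{-1 + 6*I, 2 - 4*I}, {2 + 10*I, 1 - 6*I}}/3;
R1.R1                                         (* {{1, 0}, {0, 1}} *)
Eigenvalues[R1]                               (* the eigenvalues are +1 and -1 *)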

 

Here is the code to calculate a function of a square matrix. The function f = Exp is chosen for simplicity, but it can be any admissible function.

f = Exp;
A = {{1, 2}, {-1, 3}};
n = Dimensions[A][[1]];
resolvA = Inverse[lambda*IdentityMatrix[n] - A];
eigvalsA = Eigenvalues[A];
eigvalsAwithmultiplicities = Tally[eigvalsA];
numuniqueeigvalsA = Length[eigvalsAwithmultiplicities];
fofAt = 0;
Do[
lambda0 = eigvalsAwithmultiplicities[[i]][[1]];
m = eigvalsAwithmultiplicities[[i]][[2]];
res = (1/Factorial[m - 1]) * (
Simplify[
( D[
FullSimplify[resolvA * ((lambda - lambda0)^m)]*
f[lambda*t] , {lambda, m - 1}] ) /. {lambda -> lambda0}
]
);
fofAt += res;
, {i, numuniqueeigvalsA}];
fofAt = ComplexExpand[fofAt]
Out[1]=

{{E^(2 t) Cos[t] - E^(2 t) Sin[t], 2 E^(2 t) Sin[t]}, {-E^(2 t) Sin[t], E^(2 t) Cos[t] + E^(2 t) Sin[t]}}
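Since f = Exp here, the result can be compared with the built-in matrix exponential:

Simplify[fofAt - MatrixExp[A*t]]
(* {{0, 0}, {0, 0}} *)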

 

For instance, if one wants to calculate the cosine of A*t, then type:

f = Cos;
A = {{1, 2}, {-1, 3}};
n = Dimensions[A][[1]];
resolvA = Inverse[lambda*IdentityMatrix[n] - A];
eigvalsA = Eigenvalues[A];
eigvalsAwithmultiplicities = Tally[eigvalsA];
numuniqueeigvalsA = Length[eigvalsAwithmultiplicities];
fofAt = 0;
Do[
lambda0 = eigvalsAwithmultiplicities[[i]][[1]];
m = eigvalsAwithmultiplicities[[i]][[2]];
res = (1/Factorial[m - 1]) * (
Simplify[
( D[
FullSimplify[resolvA * ((lambda - lambda0)^m)]*
f[lambda*t] , {lambda, m - 1}] ) /. {lambda -> lambda0}
]
);
fofAt += res;
, {i, numuniqueeigvalsA}];
fofAt = ComplexExpand[fofAt]
Out[2]=

{{Cos[2 t] Cosh[t] + Sin[2 t] Sinh[t], -2 Sin[2 t] Sinh[t]}, {Sin[2 t] Sinh[t], Cos[2 t] Cosh[t] - Sin[2 t] Sinh[t]}}
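As a quick check (our own addition), the matrix cos(A t) satisfies the second order equation with the squared matrix, \( \ddot{\bf P}(t) + {\bf A}^2 {\bf P}(t) = {\bf 0} , \) with \( {\bf P}(0) = {\bf I} \) and \( \dot{\bf P}(0) = {\bf 0} : \)

Simplify[D[fofAt, {t, 2}] + A.A.fofAt]        (* the zero matrix *)
{fofAt /. t -> 0, D[fofAt, t] /. t -> 0}      (* the identity matrix and the zero matrix *)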

 

 
