This tutorial was created for educational purposes and is designed
for students taking Applied Math 0340. It is intended primarily for students who
have some experience using Mathematica. If you have never used
Mathematica before and would like to learn the basics of this computer algebra system, we strongly recommend starting with the APMA
0330 tutorial. As a friendly reminder, don't forget to clear variables in use and/or the kernel.
Finally, the commands in this tutorial are all written in bold black font,
while Mathematica output is in normal font. This means that you can
copy and paste all commands into Mathematica, change the parameters, and
run them. You, as the user, are free to use the scripts for your own needs to learn Mathematica, and you have
the right to distribute this tutorial and refer to it as long as
it is credited appropriately.
Vladimir Dobrushkin.
The resolvent method and its applications to partial differential equations
were developed by Vladimir Dobrushkin (born 1949) in the 1980s. When the
resolvent method is applied to define a function of a square matrix, it is
based on the Cauchy integral formula
\[ f({\bf A}) = \frac{1}{2\pi {\bf j}} \oint_{\gamma} f(\lambda )\,\left( \lambda {\bf I} - {\bf A} \right)^{-1} {\text d}\lambda , \]
where f is holomorphic (meaning that it is represented by a convergent
power series) on and inside a closed contour γ that encloses all
eigenvalues of a square matrix
A. Here j is the unit vector in the positive
vertical direction on the complex plane, so that
\( {\bf j}^2 = -1 . \)
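To see the formula in action, one can check it numerically for a small illustrative matrix (the matrix below and the circular contour are our own choices for demonstration), comparing the contour integral with Mathematica's built-in MatrixExp:

```mathematica
A = {{2, 1}, {0, 3}};   (* illustrative matrix with eigenvalues 2 and 3 *)
(* parametrize \[Gamma] as a circle of radius 5 about the origin,
   which encloses both eigenvalues *)
integrand[t_] :=
  Exp[5 E^(I t)] Inverse[5 E^(I t) IdentityMatrix[2] - A] I 5 E^(I t);
(1/(2 Pi I)) NIntegrate[integrand[t], {t, 0, 2 Pi}] // Chop
MatrixExp[A]   (* the two results agree for f(\[Lambda]) = Exp[\[Lambda]] *)
```

Here NIntegrate is applied to a matrix-valued integrand, which Mathematica evaluates entrywise.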
When the integrand is a ratio of two polynomials or of entire functions,
the contour integral equals the sum of residues (see the definition of residue
below), a fact that goes back to the
German mathematician Ferdinand Georg Frobenius (1849--1917):
G. Frobenius. Über die cogredienten Transformationen der bilinearen Formen.
Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften
zu Berlin, 16:7--16, 1896.
For each square n-by-n matrix A, we call a
function f(λ) of one complex variable λ admissible
for A if f is defined on the spectrum of
A (the set of all its eigenvalues) and the derivatives
\( f^{(j)} (\lambda_i ) , \quad j = 0, 1, \ldots , m_i - 1 , \)
exist at each eigenvalue. Here \( \lambda_1 , \lambda_2 , \ldots , \lambda_m
\) are the distinct eigenvalues of matrix A of
multiplicities \( m_1 , m_2 , \ldots , m_m , \)
respectively. ■
A general matrix function is a correspondence that relates to each square
matrix A of order n and each admissible function
f a matrix, denoted by \( f({\bf A}) . \)
The matrix \( f({\bf A}) \) has the
same order as A, with entries in the real or complex field.
This correspondence is assumed to satisfy the following conditions:
If \( f(\lambda ) = \lambda , \) then
\( f({\bf A}) = {\bf A} ; \)
If \( f(\lambda ) = k, \) a constant, then
\( f({\bf A}) = k\,{\bf I} , \)
where I is the identity matrix;
If \( f(\lambda ) = g(\lambda ) + h(\lambda ) , \) then
\( f({\bf A}) = g({\bf A}) + h({\bf A}) ; \)
If \( f(\lambda ) = g(\lambda )\, h(\lambda ) , \) then
\( f({\bf A}) = g({\bf A})\, h({\bf A}) . \)
These requirements ensure that the definition, when applied to a
polynomial p(λ), yields the usual matrix polynomial
p(A), and that any rational identity in scalar functions
of a complex variable is fulfilled by the corresponding matrix functions.
The above conditions are not sufficient for most applications, however, and a fifth
requirement is highly desirable:
If \( f(\lambda ) = h \left( g(\lambda ) \right) , \) then \( f({\bf A}) = h\left( g({\bf A}) \right) . \)
for holomorphic functions h and all admissible functions g. The extension of the
concept of a function of a complex variable to matrix functions has occupied
the attention of a number of mathematicians since 1883. Although many
approaches to defining a function of a square matrix are known and can be found in
the following references:
R. A. Frazer, W. J. Duncan, and A. R. Collar. Elementary Matrices and Some
Applications to Dynamics and Differential Equations. Cambridge University
Press, 1938.
R. F. Rinehart. The equivalence of definitions of a matric function.
American Mathematical Monthly, 62, No 6, 395--414, 1955.
Cleve B. Moler and Charles F. Van Loan. Nineteen dubious ways to compute the
exponential of a matrix. SIAM Review, 20, No 4, 801--836, 1978.
Nicholas J. Higham, Functions of Matrices. Theory and Computation. SIAM, 2008,
here we present another method, which is more accessible at the undergraduate level.
Consider the resolvent
\[ {\bf R}_{\lambda} ({\bf A}) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} , \]
which is a matrix function depending on a parameter λ. In general, the
resolvent, after cancellation of all common factors, is the ratio of a polynomial
matrix \( {\bf Q}(\lambda ) \) of degree at most
\( k-1 \) to the minimal polynomial
\( \psi (\lambda ) \) of degree k:
\[ {\bf R}_{\lambda} ({\bf A}) = \frac{{\bf Q}(\lambda )}{\psi (\lambda )} . \]
Recall that the minimal polynomial for a square matrix A is
the unique monic polynomial ψ of lowest degree such that
\( \psi ({\bf A} ) = {\bf 0} . \)
It is assumed that the polynomials
\( {\bf Q}(\lambda ) \) and
\( \psi (\lambda ) \) are relatively prime
(that is, they have no common factors). Then
the polynomial in the denominator of the reduced resolvent formula is
the minimal polynomial for the matrix A.
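As an illustration, the resolvent and the characteristic polynomial can be computed in Mathematica; the small matrix below is our own choice, with two distinct eigenvalues, so its minimal polynomial coincides with the characteristic polynomial:

```mathematica
A = {{1, 4}, {2, 3}};   (* illustrative matrix with eigenvalues 5 and -1 *)
resolvent = Simplify[Inverse[\[Lambda] IdentityMatrix[2] - A]]
(* the common denominator of the resolvent entries: *)
Factor[Det[\[Lambda] IdentityMatrix[2] - A]]   (* (\[Lambda] - 5)(\[Lambda] + 1) *)
```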
The residue of the ratio \( \displaystyle
f(\lambda ) = \frac{P(\lambda )}{Q(\lambda )} \) of two polynomials
(or entire functions) at the
pole \( \lambda_0 \) of multiplicity m is
defined by
\[ \mbox{Res}_{\lambda_0} f(\lambda ) = \lim_{\lambda \to \lambda_0} \frac{1}{(m-1)!} \, \frac{{\text d}^{m-1}}{{\text d}\lambda^{m-1}} \left[ \left( \lambda - \lambda_0 \right)^m f(\lambda ) \right] . \]
Recall that a function f(λ) has a pole of multiplicity m
at \( \lambda = \lambda_0 \) if, upon
multiplication by \( \left( \lambda - \lambda_0 \right)^m
, \) the product \( f(\lambda )\,
\left( \lambda - \lambda_0 \right)^m \) becomes a holomorphic function
in a neighborhood of \( \lambda = \lambda_0 . \)
In particular, for a simple pole \( (m=1) , \) we have
\[ \mbox{Res}_{\lambda_0} f(\lambda ) = \lim_{\lambda \to \lambda_0} \left( \lambda - \lambda_0 \right) f(\lambda ) = \frac{P(\lambda_0 )}{Q' (\lambda_0 )} . \]
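These residues can be checked with Mathematica's built-in Residue command; the rational function below is an illustrative choice with a simple pole at the origin and a double pole at 2:

```mathematica
f[\[Lambda]_] := 1/(\[Lambda] (\[Lambda] - 2)^2);
Residue[f[\[Lambda]], {\[Lambda], 0}]   (* simple pole: 1/4 *)
Residue[f[\[Lambda]], {\[Lambda], 2}]   (* double pole: -1/4 *)
```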
Note that all residues at the eigenvalues of A in the above
formula exist for admissible functions. We are mostly interested in the matrix
functions corresponding to the functions of one variable λ
\( \displaystyle e^{\lambda\,t} , \quad
\frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} , \quad
\cos \left( \sqrt{\lambda}\,t \right) \) because they are solutions
of the following initial value problems (where dots stand for derivatives
with respect to t and I is the identity matrix):
\[ \dot{\bf X} (t) = {\bf A}\,{\bf X}(t) , \quad {\bf X}(0) = {\bf I} ; \qquad
\ddot{\bf X} (t) + {\bf A}\,{\bf X}(t) = {\bf 0} , \quad {\bf X}(0) = {\bf 0} , \quad \dot{\bf X}(0) = {\bf I} ; \qquad
\ddot{\bf X} (t) + {\bf A}\,{\bf X}(t) = {\bf 0} , \quad {\bf X}(0) = {\bf I} , \quad \dot{\bf X}(0) = {\bf 0} . \]
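The scalar versions of these initial value problems can be verified directly with DSolve; for instance, for the second problem:

```mathematica
(* scalar analogue of the second initial value problem above *)
DSolve[{x''[t] + a x[t] == 0, x[0] == 0, x'[0] == 1}, x[t], t]
(* the solution is equivalent to Sin[Sqrt[a] t]/Sqrt[a] *)
```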
To solve another problem with Mathematica, we need to clear the
kernel (to do this, click the "Evaluation" drop-down menu and
choose the "Quit Kernel" option).
Since there are only two eigenvectors, the matrix A
is defective, with defective eigenvalue
\( \lambda =1. \)
We construct three functions with this matrix: \( e^{\lambda\,t}, \) \( \phi (t) = \sin \left( \sqrt{\lambda}\,t\right)/\sqrt{\lambda} , \) and
\( \psi (t) = \cos\left( \sqrt{\lambda}\,t\right) ,
\) based on the resolvent:
B = {{7 + I, 2 - I}, {-2 + I, 11 - I}}
Eigenvalues[%]
Out[4]= {9, 9}
Eigenvectors[B]
Out[5]= {{1, 1}, {0, 0}}
Therefore, matrix A is diagonalizable (because it has two
distinct simple eigenvalues), but matrix
B is defective. We are going to find square roots of these
matrices, as well as the following matrix functions:
\[ {\bf \Phi}_{\bf A} (t) = \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} , \quad
{\bf \Psi}_{\bf A} (t) = \cos \left( \sqrt{\bf A}\,t \right) , \quad
{\bf \Phi}_{\bf B} (t) = \frac{\sin \left( \sqrt{\bf B}\,t \right)}{\sqrt{\bf B}} , \quad
{\bf \Psi}_{\bf B} (t) = \cos \left( \sqrt{\bf B}\,t \right) . \]
These matrix functions are the unique solutions of the second order matrix
differential
equations \( \ddot{\bf P}(t) + {\bf A}\,{\bf P}(t)
\equiv {\bf 0} \) and \( \ddot{\bf P}(t) +
{\bf B}\,{\bf P}(t)\equiv {\bf 0} , \) respectively. Here dots stand
for derivatives with respect to the time variable t. They also satisfy
the initial conditions
\[ {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} ; \qquad
{\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} . \]
Since the above initial value problems for matrix functions have unique
solutions, the matrix functions \( {\bf \Phi}_{\bf A} (t)
, \quad {\bf \Psi}_{\bf A} (t) , \quad {\bf \Phi}_{\bf B} (t) , \quad
{\bf \Psi}_{\bf B} (t) \) do not depend on the choice of a square root,
and are well defined even when such roots do not exist.
First, we find square roots of these two matrices using the resolvent method.
To this end, we need to find the resolvents:
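For the defective matrix B defined above, the resolvent method yields a square root as the residue at the double eigenvalue λ = 9; the following sketch carries this out (the variable names are ours):

```mathematica
B = {{7 + I, 2 - I}, {-2 + I, 11 - I}};
resolvent = Simplify[Inverse[\[Lambda] IdentityMatrix[2] - B]];
(* the resolvent has a double pole at \[Lambda] = 9, so the residue of
   Sqrt[\[Lambda]] resolvent requires one derivative: *)
rootB = Simplify[
  D[(\[Lambda] - 9)^2 Sqrt[\[Lambda]] resolvent, \[Lambda]] /. \[Lambda] -> 9]
Simplify[rootB.rootB]   (* recovers B *)
```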