Resolvent Method

 

Vladimir Dobrushkin.

The resolvent method and its applications to partial differential equations were developed by Vladimir Dobrushkin (born 1949) in the 1980s. We show how it can be used to define a function of a square matrix, starting from the Cauchy integral formula
\[ f({\bf A}) = \frac{1}{2\pi{\bf j}} \, \int_{\gamma} {\text d} z \, f(z) \left( z{\bf I} - {\bf A} \right)^{-1} , \]
where f is holomorphic (meaning that it is represented by a convergent power series) on and inside a closed contour γ that encloses all eigenvalues of a square matrix A. Here j is the unit vector in the positive vertical direction of the complex plane, so that \( {\bf j}^2 = -1 . \) When the integrand is a ratio of two polynomials (or entire functions), the contour integral reduces to a sum of residues, a result that goes back to the German mathematician Ferdinand Georg Frobenius (1849--1917):
G. Frobenius. Über die cogredienten Transformationen der bilinearen Formen. Sitzungsber. K. Preuss. Akad. Wiss. Berlin, 16:7--16, 1896.
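
As a quick numerical sanity check of the integral formula above, the following Sage session is a minimal sketch: the \( 2 \times 2 \) matrix, the function \( f(z) = e^z , \) and the circle of radius 5 are chosen purely for illustration. It approximates the contour integral by a Riemann sum over a circle that encloses both eigenvalues and compares the result with the matrix exponential.

sage: A = matrix(CDF, [[2, 1], [0, 3]])                   # eigenvalues 2 and 3
sage: I2 = identity_matrix(CDF, 2)
sage: N = 400                                             # quadrature nodes on the circle |z| = 5
sage: zs = [CDF(5*exp(2*pi*I*k/N)) for k in range(N)]
sage: # with z = 5*e^(j*theta), dz = j*z*dtheta, so the 1/(2*pi*j) prefactor turns the integral into an average
sage: F = sum(exp(z)*z*(z*I2 - A).inverse() for z in zs)/N
sage: F                                                   # approximately exp(A), up to tiny imaginary parts
sage: A.exp()                                             # numerical matrix exponential for comparison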

A general matrix function is a correspondence that assigns to each square matrix A of order n and each admissible function f a matrix, denoted by \( f({\bf A}) , \) of the same order, with elements in the real or complex field. This correspondence is assumed to satisfy the following conditions:

  • If \( f(\lambda ) = \lambda , \) then \( f({\bf A}) = {\bf A} ; \)
  • If \( f(\lambda ) = k, \) a constant, then \( f({\bf A}) = k\,{\bf I} , \) where I is the identity matrix;
  • If \( f(\lambda ) = g(\lambda ) + h(\lambda ) , \) then \( f({\bf A}) = g({\bf A}) + h({\bf A}) ; \)
  • If \( f(\lambda ) = g(\lambda ) \cdot h(\lambda ) , \) then \( f({\bf A}) = g({\bf A}) \, h({\bf A}) . \)
These requirements ensure that the definition, when applied to a polynomial \( p(\lambda ) , \) yields the usual matrix polynomial p(A), and that any rational identity in scalar functions of a complex variable carries over to the corresponding matrix functions. The above conditions are not sufficient for most applications, and a fifth requirement is highly desirable:
  • If \( f(\lambda ) = h \left( g(\lambda ) \right) , \) then \( f({\bf A}) = h\left( g({\bf A}) \right) . \)
This last condition should hold for holomorphic functions and, ideally, for all admissible functions. The extension of the concept of a function of a complex variable to matrix functions has occupied the attention of a number of mathematicians since 1883. While many approaches to defining a function of a square matrix are known and can be found in the following references,
  • R. A. Frazer, W. J. Duncan, and A. R. Collar. Elementary Matrices and Some Applications to Dynamics and Differential Equations. Cambridge University Press, 1938.
  • R. F. Rinehart. The equivalence of definitions of a matric function. American Mathematical Monthly, 62, No 6, 395--414, 1955.
  • Cleve B. Moler and Charles F. Van Loan. Nineteen dubious ways to compute the exponential of a matrix. SIAM Review, 20, No 4, 801--836, 1978.
  • Nicholas J. Higham, Functions of Matrices. Theory and Computation. SIAM, 2008,
we present another method, one that is accessible at the undergraduate level.

Recall that the resolvent of a square matrix A is

\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} , \]
which is a matrix-valued function of the parameter λ. In general, after cancelling all common factors, the resolvent is the ratio of a polynomial matrix \( {\bf Q}(\lambda ) \) of degree \( k-1 \) to the minimal polynomial \( \psi (\lambda ) \) of degree k:
\[ {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} = \frac{1}{\psi (\lambda )} \, {\bf Q} (\lambda ) . \]
It is assumed that the entries of \( {\bf Q}(\lambda ) \) and the polynomial \( \psi (\lambda ) \) are relatively prime. Then the polynomial in the denominator is the minimal polynomial of the matrix A.
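
For example, the resolvent and the minimal polynomial are easy to obtain in Sage; the defective \( 2\times 2 \) matrix below is a hypothetical illustration, not part of the method itself.

sage: lam = var('lam')
sage: A = matrix(QQ, [[1, 1], [0, 1]])                # single eigenvalue 1 of multiplicity 2
sage: R = (lam*identity_matrix(2) - A).inverse()      # resolvent (lam*I - A)^(-1)
sage: R.simplify_full()                               # entries 1/(lam-1) and 1/(lam-1)^2
sage: psi = A.minimal_polynomial(); psi               # psi(x) = (x - 1)^2
sage: (R*(lam - 1)^2).simplify_full()                 # the polynomial matrix Q(lam) = [[lam-1, 1], [0, lam-1]]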

The residue of the ratio \( f(\lambda ) = P(\lambda )/Q(\lambda ) \) of two polynomials (or entire functions) at the pole \( \lambda_0 \) of multiplicity m is defined by

\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \left. \frac{1}{(m-1)!} \, \frac{{\text d}^{m-1}}{{\text d} \lambda^{m-1}} \, \frac{(\lambda - \lambda_0 )^{m} P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} = \lim_{\lambda \to \lambda_0} \left( \frac{1}{(m-1)!} \, \frac{{\text d}^{m-1}}{{\text d} \lambda^{m-1}} \, \frac{(\lambda - \lambda_0 )^{m} P(\lambda )}{Q(\lambda )} \right) . \]
In particular, for a simple pole, \( m=1 , \) we have
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \frac{P(\lambda_0 )}{Q'(\lambda_0 )} . \]
For a double pole, \( m=2 , \) we have
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \left. \frac{{\text d}}{{\text d} \lambda} \, \frac{(\lambda - \lambda_0 )^2 \, P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} , \]
and for a triple pole, \( m=3 , \) we get
\[ \mbox{Res}_{\lambda_0} \, \frac{P(\lambda )}{Q(\lambda )} = \left. \frac{1}{2!} \, \frac{{\text d}^{2}}{{\text d} \lambda^{2}} \, \frac{(\lambda - \lambda_0 )^{3} P(\lambda )}{Q(\lambda )} \right\vert_{\lambda = \lambda_0} . \]
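
These formulas are straightforward to apply in Sage. In the sketch below the rational function is made up solely for illustration, with a simple pole at \( \lambda = -3 \) and a double pole at \( \lambda = 1 ; \) the last line uses Sage's built-in residue method as a cross-check.

sage: lam = var('lam')
sage: P = lam + 2
sage: Q = (lam - 1)^2*(lam + 3)                         # double pole at 1, simple pole at -3
sage: f = P/Q
sage: (P/Q.derivative(lam)).subs(lam == -3)             # simple pole: P(-3)/Q'(-3) = -1/16
sage: ((lam - 1)^2*f).simplify_full().derivative(lam).subs(lam == 1)   # double pole: 1/16
sage: f.residue(lam == 1) + f.residue(lam == -3)        # the two residues sum to zero (deg Q - deg P >= 2)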

If a function \( f(\lambda ) \) that takes real values on the real axis has a pair of complex conjugate poles \( a \pm b {\bf j} \) (here \( {\bf j}^2 =-1 \) ), then
\[ \mbox{Res}_{a+b{\bf j}} f(\lambda ) + \mbox{Res}_{a-b{\bf j}} f(\lambda ) = 2\, \Re \, \mbox{Res}_{a+b{\bf j}} f(\lambda ) , \]
where Re = \( \Re \) stands for the real part of a complex number.
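
A quick Sage check of this identity (the rational function below, with real coefficients and poles at \( -1 \pm {\bf j} , \) is chosen only as an example):

sage: lam = var('lam')
sage: f = lam/(lam^2 + 2*lam + 2)                 # real coefficients, poles at -1 + I and -1 - I
sage: r = f.residue(lam == -1 + I); r             # residue at the pole in the upper half-plane
sage: r + f.residue(lam == -1 - I)                # sum of the two residues, a real number
sage: 2*r.real_part()                             # twice the real part gives the same value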

Let \( f(\lambda ) \) be a function defined on the spectrum of a square matrix A. Then

\[ f({\bf A}) = \sum_{\mbox{all eigenvalues } \lambda_k} \mbox{Res}_{\lambda_k} \, f(\lambda ) \,{\bf R}_{\lambda} ({\bf A}) . \]
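
Before turning to the examples, here is a minimal end-to-end sketch in Sage. The \( 2\times 2 \) matrix and the choice \( f(\lambda ) = e^{\lambda} \) are ours, purely for illustration: build the resolvent, take the residue of \( f(\lambda )\,{\bf R}_{\lambda}({\bf A}) \) at each eigenvalue, add them up, and compare with the matrix exponential.

sage: lam = var('lam')
sage: A = matrix(QQ, [[2, 1], [0, 3]])                            # eigenvalues 2 and 3
sage: R = (lam*identity_matrix(2) - A).inverse()                  # resolvent
sage: G = exp(lam)*R                                              # integrand f(lam)*R_lam(A) for f = exp
sage: res2 = G.apply_map(lambda e: e.residue(lam == 2))           # residue at the eigenvalue 2
sage: res3 = G.apply_map(lambda e: e.residue(lam == 3))           # residue at the eigenvalue 3
sage: (res2 + res3).simplify_full()                               # f(A) as the sum of residues
sage: A.exp()                                                     # agrees with Sage's matrix exponential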

Example.

Example.

Example.