This tutorial was made solely for educational purposes and is designed
for students taking Applied Math 0340. It is intended primarily for students who
have some experience using Mathematica. If you have never used
Mathematica before and would like to learn the basics of this computer algebra system, it is strongly recommended that you look at the APMA
0330 tutorial first. As a friendly reminder, don't forget to clear the variables in use and/or the kernel.
Finally, the commands in this tutorial are all written in bold black font,
while Mathematica output is in normal font. This means that you can
copy and paste all commands into Mathematica, change the parameters, and
run them. You, as the user, are free to use the scripts to learn the Mathematica program, and you have
the right to distribute and refer to this tutorial as long as
it is credited appropriately.
Originally, spectral decomposition was developed for symmetric or self-adjoint matrices. Following tradition, we
present this method for symmetric/self-adjoint matrices in the next chapter. Here we extend the
spectral decomposition method to arbitrary square matrices by means of polynomial interpolation. Our objective is to define a function
\( f ({\bf A} ) \) of a square matrix A for a certain class of functions.
The function \( f (\lambda ) \) of a single variable λ is said to be admissible
for a square matrix A if the values
\[
f^{(j)} \left( \lambda_i \right) , \qquad j = 0, 1, \ldots , m_i -1 , \quad i = 1, 2, \ldots , s ,
\]
exist. Here \( m_i \) is the multiplicity of the eigenvalue \( \lambda_i \) and s is the number of distinct
eigenvalues. The above values of the function f and, possibly, of its derivatives at the eigenvalues are called the values of
f on the spectrum of A. ■
In most cases of practical interest f is given by a formula, such as \( f (\lambda ) = e^{\lambda\, t} \)
or \( f (\lambda ) = \cos \left( t\,\sqrt{\lambda} \right) . \) However, the following
definition of \( f ({\bf A} ) \) requires only the values of f on the
spectrum of A; it does not require any other information about f. For every
\( n \times n \) matrix A, there exists a set of admissible functions
for each of which we can define a matrix \( f ({\bf A} ) . \) Such a definition of the
n-by-n matrix \( f ({\bf A} ) \) is not unique. Previously, we defined
a function of a square matrix using the resolvent method or
the Sylvester method. Other equivalent definitions are known; they can be found in the following references:
Higham, N.J., Functions of Matrices: Theory and Computation, SIAM, Philadelphia, 2008.
Leonard, I.E., The Matrix Exponential, SIAM Review, Vol. 38, No. 3, 507--512, 1996.
Moler, C., and Van Loan, C.F., Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review, Vol. 45, No. 1, 3--49, 2003.
One of the main uses of matrix functions in computational mathematics is for solving nonlinear matrix equations,
\( g ({\bf X} ) = {\bf A} . \) Two particular cases are especially important: finding a
square root and a logarithm by solving the equations \( {\bf X}^2 = {\bf A} \) and
\( e^{\bf X} = {\bf A} , \) respectively. It may happen that a matrix equation has a solution
that cannot be obtained from admissible functions. For instance, the identity matrix has infinitely many square roots,
of which only a finite number can be constructed using admissible functions.
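For a quick illustration of the first of these equations, here is a minimal Mathematica sketch (with a hypothetical 2-by-2 matrix chosen only for demonstration) that computes a principal square root with the built-in MatrixPower and checks it; recent versions also provide MatrixLog for the second equation.
A = {{5, 4}, {1, 2}};            (* hypothetical test matrix with eigenvalues 6 and 1 *)
R = MatrixPower[A, 1/2];         (* a (principal) square root of A *)
Simplify[R.R - A]                (* should simplify to the zero matrix *)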
In applications to systems of ordinary differential equations, we usually need to construct
\( f ({\bf A} ) \) for analytic functions such as \( f (\lambda ) = e^{\lambda\, t} \)
or \( \displaystyle f (\lambda ) = \frac{\sin \left( \sqrt{\lambda}\, t \right)}{\sqrt{\lambda}} . \)
In our exposition, we will assume that functions possess as many derivatives as needed; however, it is sufficient
to consider admissible functions.
Let A be a square \( n \times n \) matrix and let f be an analytic
function in a neighborhood of each of its eigenvalues.
Using the Maclaurin series \( f (\lambda ) = \sum_{k\ge 0} f_k \lambda^k , \)
we can define the matrix-valued function \( f ({\bf A} ) \) as
\[
f({\bf A} ) = \sum_{k\ge 0} f_k {\bf A}^k .
\]
We do not discuss the convergence issues of the above series because we will define a function of a square
matrix as a matrix polynomial; so this series serves for illustration only.
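For the exponential function, where \( f_k = 1/k! , \) the following minimal Mathematica sketch (with a hypothetical 2-by-2 matrix and a truncation order chosen only for demonstration) compares a truncated sum of this series with the built-in MatrixExp.
A = {{0, 1}, {-2, -3}};                              (* hypothetical test matrix *)
series = Sum[MatrixPower[A, k]/k!, {k, 0, 20}];      (* truncated Maclaurin sum for e^A *)
Max[Abs[N[series - MatrixExp[A]]]]                   (* should be a tiny number *)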
Let \( \psi (\lambda ) \) be the minimal polynomial of degree m for the matrix
A. Then every power \( {\bf A}^p \) of the matrix A can
be expressed as a polynomial of degree not higher than \( m-1. \) Therefore, an admissible function of A can be sought as a polynomial
\[
f({\bf A}) = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 + \cdots + b_{m-1} {\bf A}^{m-1} ,
\]
whose coefficients are determined from the equations
\[
f(\lambda_k ) = b_0 + b_1 \lambda_k + b_2 \lambda_k^2 + \cdots + b_{m-1} \lambda_k^{m-1}
\]
for each eigenvalue \( \lambda_k , \quad k=1,2,\ldots , s , \) where s is the number of distinct eigenvalues.
If the eigenvalue \( \lambda_k \) is of multiplicity \( m_k \) in the minimal polynomial
\( \psi (\lambda ) , \) then we need to add \( m_k -1 \) auxiliary algebraic equations obtained by differentiation:
\[
\left. \frac{d^{j} f(\lambda )}{d \lambda^{j}} \right\vert_{\lambda = \lambda_k} = \left. \frac{d^{j}}{d \lambda^{j}} \left( b_0 + b_1 \lambda + \cdots + b_{m-1} \lambda^{m-1} \right) \right\vert_{\lambda = \lambda_k} , \qquad j = 1, 2, \ldots , m_k -1 .
\]
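As a minimal illustration of this procedure, consider a hypothetical 2-by-2 matrix with two distinct eigenvalues, so that the minimal polynomial has degree 2 and no derivative conditions are needed; the sketch below finds the coefficients \( b_0 \) and \( b_1 \) for the exponential function and compares the result with the built-in MatrixExp.
A = {{1, 2}, {0, 3}};                               (* hypothetical matrix with eigenvalues 1 and 3 *)
f[x_] := Exp[x t];                                  (* function evaluated on the spectrum *)
sol = First@Solve[{f[1] == b0 + b1*1, f[3] == b0 + b1*3}, {b0, b1}];
fA = Simplify[(b0 IdentityMatrix[2] + b1 A) /. sol];
Simplify[fA - MatrixExp[A t]]                       (* should be the zero matrix *)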
It is instructive to consider the case where A is a rank-1 n-by-n matrix, so that
\( {\bf A} = {\bf u}\,{\bf v}^{\ast} \) is the matrix product of two n-vectors. When \( {\bf v}^{\ast} {\bf u} \ne 0 , \)
a function of such a rank-1 matrix is the sum of two terms:
\[
f \left( {\bf u}{\bf v}^{\ast} \right) = f(0)\, {\bf I} + \frac{f \left( {\bf v}^{\ast} {\bf u} \right) - f(0)}{{\bf v}^{\ast} {\bf u}} \, {\bf u}{\bf v}^{\ast} .
\]
When the admissible function f and its derivative are defined at the origin and \( {\bf v}^{\ast} {\bf u} =0, \)
the function of the matrix can be defined as
\[
f \left( {\bf A} \right) = f \left( {\bf u}{\bf v}^{\ast} \right) = f(0) \, {\bf I} + f' (0) \,{\bf u}{\bf v}^{\ast} ,
\]
where \( {\bf I} = {\bf A}^0 \) is the identity matrix.
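A minimal check of the last formula for the exponential function: with hypothetical vectors chosen so that \( {\bf v}^{\ast} {\bf u} = 0 , \) the matrix \( {\bf A} = {\bf u}\,{\bf v}^{\ast} \) is nilpotent, and \( e^{{\bf A}\,t} \) should reduce to \( {\bf I} + t\,{\bf A} . \)
u = {1, 2, 3}; v = {1, 1, -1};                       (* hypothetical real vectors with v.u == 0 *)
A = Outer[Times, u, v];                              (* rank-1 matrix u v^T *)
Simplify[MatrixExp[A t] - (IdentityMatrix[3] + t A)] (* should be the zero matrix *)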
Example: Consider two vectors: \( {\bf u} = \langle 1,1,1 \rangle \)
and \( {\bf v} = \langle 1,0,3 \rangle . \) Then their product defines the matrix \( {\bf A} = {\bf u}\,{\bf v}^{\ast} \) and the number \( {\bf v}^{\ast} {\bf u} = 4 . \)
This matrix A has one simple eigenvalue \( \lambda =4 \) and a double eigenvalue
\( \lambda = 0 . \) Mathematica confirms:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}}
ss = Eigenvalues[A]
{4, 0, 0}
Eigenvectors[A]
{{1, 1, 1}, {-3, 0, 1}, {0, 1, 0}}
CharacteristicPolynomial[A, x]
4 x^2 - x^3
Since matrix A is diagonalizable, its minimal polynomial is of second degree:
\( \psi (\lambda ) = \lambda \left( \lambda -4 \right) . \)
We determine two matrix functions
that correspond to the single-valued functions \( \Phi (\lambda ) =
\frac{\sin \left( t \, \sqrt{\lambda} \right)}{\sqrt{\lambda}} \quad\mbox{and} \quad
\Psi (\lambda ) = \cos \left( t \, \sqrt{\lambda} \right) , \) respectively. Using the formula for
rank-1 matrices (with \( {\bf v}^{\ast} {\bf u} = 4 , \) \( \Phi (0) = t , \) and \( \Psi (0) = 1 \)), we get
\[
{\bf \Phi} (t) = t\, {\bf I} + \frac{1}{4} \left( \frac{\sin 2t}{2} - t \right) {\bf A} , \qquad
{\bf \Psi} (t) = {\bf I} + \frac{1}{4} \left( \cos 2t - 1 \right) {\bf A} .
\]
To verify that we have specified the matrix functions correctly, we show that they are solutions of the following initial value problems
(because these problems have unique solutions):
\[
\ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0} , \quad {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} ; \qquad
\ddot{\bf \Psi} (t) + {\bf A}\,{\bf \Psi} (t) = {\bf 0} , \quad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} .
\]
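A minimal Mathematica sketch of this verification for the rank-1 example matrix, using the explicit formulas for \( {\bf \Phi} (t) \) and \( {\bf \Psi} (t) \) obtained above:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}};
Phi[t_] = t IdentityMatrix[3] + ((Sin[2 t]/2 - t)/4) A;
Psi[t_] = IdentityMatrix[3] + ((Cos[2 t] - 1)/4) A;
Simplify[{D[Phi[t], {t, 2}] + A.Phi[t], Phi[0], (D[Phi[t], t] /. t -> 0) - IdentityMatrix[3]}]   (* all three should be zero matrices *)
Simplify[{D[Psi[t], {t, 2}] + A.Psi[t], Psi[0] - IdentityMatrix[3], D[Psi[t], t] /. t -> 0}]     (* all three should be zero matrices *)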
Therefore, matrix A is diagonalizable because its minimal polynomial is a product of simple (linear) factors,
and we don't need to find eigenspaces. For the exponential function \( f (\lambda ) = e^{\lambda\, t} \)
we construct the corresponding matrix function \( f ({\bf A} ) = e^{{\bf A}\, t} \) as a
polynomial of first degree (one less than the degree of the minimal polynomial):
\[
e^{{\bf A}\, t} = {\bf I} + \frac{e^{4t} -1}{4}\, {\bf A} .
\]
This expression can be compared entry by entry with the output of the Mathematica command MatrixExp[A*t].
However, this is not an actual verification because we only compare our constructed exponential matrix with another matrix
provided by Mathematica. Of course, we trust Mathematica, but we need a real verification. It is
known that the matrix exponential \( f ({\bf A} ) = e^{{\bf A}\, t} \) is the unique solution
of the following initial value problem for a matrix differential equation:
\[
\frac{d}{dt}\,{\bf X}(t) = {\bf A}\,{\bf X}(t) , \qquad {\bf X}(0) = {\bf I} .
\]
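A minimal Mathematica sketch of this verification for the constructed exponential of the rank-1 example matrix:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}};
expA[t_] = IdentityMatrix[3] + ((Exp[4 t] - 1)/4) A;   (* the matrix exponential constructed above *)
Simplify[D[expA[t], t] - A.expA[t]]                    (* should be the zero matrix *)
expA[0]                                                (* should be the identity matrix *)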
We are going to determine two matrix functions \( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \)
and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \) corresponding to
functions \( \displaystyle f \left( \lambda \right) = \frac{\sin \left( t\,\sqrt{\lambda} \right)}{\sqrt{\lambda}} \)
and \( \displaystyle g \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) , \)
respectively. Then we find square roots of matrix A.
Let us start with \( \displaystyle f \left( {\bf A} \right) , \) which does not depend
on which branch of the square root is chosen in \( \displaystyle \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} . \)
We seek this matrix function in the form
Note that matrices A and \( \displaystyle {\bf \Phi} (t) \) commute.
To verify that we get a correct matrix function, we have to show that the function
\( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \)
is a solution to the initial value problem
\[
\ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0} , \qquad {\bf \Phi} (0) = {\bf 0} , \qquad \dot{\bf \Phi} (0) = {\bf I} .
\]
Now we build another matrix function \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \)
using exactly the same steps. So we seek it in the form
Here \( \displaystyle g \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) . \)
We solve the system of algebraic equations using Mathematica:
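A minimal sketch of such a computation, assuming the 3-by-3 matrix A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}} used later in this example, whose minimal polynomial is \( (\lambda -1)(\lambda -4) , \) so that \( {\bf \Psi} = b_0 {\bf I} + b_1 {\bf A} : \)
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}};
g[x_] := Cos[t Sqrt[x]];
sol = First@Solve[{g[1] == b0 + b1*1, g[4] == b0 + b1*4}, {b0, b1}]
cosA = Simplify[(b0 IdentityMatrix[3] + b1 A) /. sol];
Simplify[{D[cosA, {t, 2}] + A.cosA, cosA /. t -> 0}]   (* zero matrix and the identity matrix *)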
Each of the constructed matrix functions,
\( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \)
and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) , \)
is unique, independently of the method in use, because each is the solution of the corresponding initial value problem.
On the other hand, we cannot guarantee that the square roots of A that we are going to find are the only
ones available: there could be other roots.
To determine a square root of the given matrix, we consider the corresponding function
\( r(\lambda ) = \lambda^{1/2} \equiv \sqrt{\lambda} , \) where we have to choose a
particular branch because the square root is an analytic but not single-valued function: it assigns
two outputs to every nonzero input. For example, \( 4^{1/2} = \pm 2 . \) Since the
characteristic polynomial of matrix A is of degree 3, we assume that
\( r({\bf A} ) \) is represented as
The matrix has two eigenvalues, \( \lambda_1 =1 \) and \( \lambda_2 =4 . \)
The former has algebraic multiplicity 2, and its geometric multiplicity is also 2 because there are two linearly independent eigenvectors:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}
Eigenvalues[A]
{4, 1, 1}
Eigenvectors[A]
{{1, 1, 1}, {-1, 0, 1}, {-1, 1, 0}}
So the minimal polynomial is of second degree: \( \psi \left( \lambda \right) = \left( \lambda -1 \right) \left( \lambda -4 \right) . \)
Therefore, we seek any function of matrix A as a polynomial of first degree:
\( f({\bf A}) = b_0 {\bf I} + b_1 {\bf A} . \) In particular, if we consider a square root
\( f(\lambda ) = \sqrt{\lambda} , \) we get the system of two algebraic equations
\[
b_0 + b_1 = \pm 1 , \qquad b_0 + 4\, b_1 = \pm 2 ,
\]
where the signs are chosen independently according to the branch of the square root at each eigenvalue.
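A minimal Mathematica sketch that builds all four square roots corresponding to the possible sign choices and checks each of them:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}};
sol[s1_, s2_] := First@Solve[{b0 + b1 == s1, b0 + 4 b1 == 2 s2}, {b0, b1}];   (* branch signs at λ = 1 and λ = 4 *)
root[s1_, s2_] := (b0 IdentityMatrix[3] + b1 A) /. sol[s1, s2];
roots = Flatten[Table[root[s1, s2], {s1, {1, -1}}, {s2, {1, -1}}], 1];
Simplify[Map[#.# - A &, roots]]    (* each of the four entries should be the zero matrix *)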
We don't need to consider the second complex eigenvalue because it is the complex conjugate of the first one. Since the
minimal polynomial for matrix A has degree 3, we seek the exponential matrix function in the form
\( e^{{\bf A}\,t} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 . \)