Preface


Polynomial interpolation is based on either the minimal polynomial or the characteristic polynomial. It allows us to define a matrix function as a matrix polynomial whose coefficients are determined by the required function. The method does not depend on whether the matrix is diagonalizable or defective (similar to the resolvent method).


Polynomial Interpolation


The idea of the polynomial interpolation approach is based on the Cayley--Hamilton theorem: every square matrix is annihilated by its characteristic polynomial. If a minimal polynomial is known, there is an advantage to using it instead of the characteristic polynomial. Originally, polynomial interpolation and the more general spectral decomposition method were developed for symmetric or self-adjoint matrices. However, we expand our exposition to arbitrary square matrices by considering their polynomial interpolation. If the minimal polynomial or characteristic polynomial has degree m, then any power of matrix A with exponent m or higher can be expressed as a linear combination of the previous powers: \( {\bf A}^0 = {\bf I}, \ {\bf A}^1 = {\bf A} , \ {\bf A}^2 , \ldots , {\bf A}^{m-1} . \) This allows us to express an entire function of a square matrix A as a polynomial containing only the first m powers of A.
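As a quick illustration, the Cayley--Hamilton theorem is easy to check in Mathematica. Here we pick an arbitrary 2 × 2 sample matrix (our choice, for illustration only) and evaluate its characteristic polynomial at the matrix itself:
B = {{1, 2}, {3, 4}};
CharacteristicPolynomial[B, s]
-2 - 5 s + s^2
B.B - 5*B - 2*IdentityMatrix[2]    (* the zero matrix, as the theorem asserts *)
{{0, 0}, {0, 0}}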

Our objective is to define a function f(A) of a square matrix A for a certain class of functions. A function f(λ) of a single variable λ is said to be admissible for a square matrix A if the values

\[ f^{(j)} (\lambda_i ) , \qquad j=0,1,\ldots , m_i -1, \quad i=1,2,\ldots , s, \]
exist. Here, \( m_i \) is the multiplicity of the eigenvalue \( \lambda_i , \) and there are s distinct eigenvalues. The above values of the function f and possibly its derivatives at the eigenvalues are called the values of f on the spectrum of A.

One of the main uses of matrix functions in computational mathematics is for solving nonlinear matrix equations, such as \( g ({\bf X} ) = {\bf A} . \) Two particular cases are especially important: finding a square root and a logarithm by solving the equations \( {\bf X}^2 = {\bf A} \) and \( e^{\bf X} = {\bf A} , \) respectively. It may happen that a matrix equation has a solution that is beyond the set of admissible functions. For instance, the identity matrix has infinitely many square roots, of which we can construct only a finite number using admissible functions.

In most cases of practical interest, f is given by a formula, such as \( f (\lambda ) = e^{\lambda\, t} \) or \( f (\lambda ) = \cos \left( t\,\sqrt{\lambda} \right) . \) However, the following definition of \( f ({\bf A} ) \) requires only the values of f on the spectrum of A; it does not require any other information about f. For every n × n matrix A, there exists a set of admissible functions, for each of which we can define a matrix \( f ({\bf A} ) . \) Such a definition of the n-by-n matrix \( f ({\bf A} ) \) is not unique. Previously, we defined a function of a square matrix using the resolvent method or Sylvester's method. Other equivalent definitions are known and can be found in the references [1--3].

In applications to systems of ordinary differential equations, we usually need to construct \( f ({\bf A} ) \) for analytic functions such as \( f (\lambda ) = e^{\lambda\, t} \) or \( \displaystyle f (\lambda ) = \frac{\sin \left( \sqrt{\lambda}\, t \right)}{\sqrt{\lambda}} . \) In our exposition, we assume that functions possess as many derivatives as needed; for admissible functions, the number of derivatives required at each eigenvalue is its multiplicity minus one.

Let A be a square \( n \times n \) matrix and let f be an analytic function in a neighborhood of each of its eigenvalues. Upon changing the independent variable, the analytic function can be expanded into a convergent Maclaurin series
\begin{equation} \label{EqSpectral.1} f(\lambda ) = f_0 + f_1 \lambda + f_2 \lambda^2 + \cdots + f_k \lambda^k + \cdots = \sum_{k\ge 0} f_k \lambda^k . \end{equation}
It seems reasonable to define the matrix-valued function f(A) as
\begin{equation} \label{EqSpectral.2} f({\bf A} ) = \sum_{k\ge 0} f_k {\bf A}^k . \end{equation}
We do not discuss convergence issues for the series \eqref{EqSpectral.2} because we will define a function of a square matrix as a matrix polynomial; the series serves for illustration only. Let ψ(λ) be a minimal polynomial of degree m for the matrix A. Then every power \( {\bf A}^p \) with \( p \ge m \) can be expressed as a polynomial in A of degree not higher than m - 1. Therefore,
\begin{equation} \label{EqSpectral.3} f({\bf A} ) = \sum_{j= 0}^{m-1} b_j {\bf A}^j , \end{equation}
where the coefficients \( b_j , \quad j=0,1,\ldots , m-1, \) satisfy the equations
\begin{equation} \label{EqSpectral.4} f(\lambda_k ) = \sum_{j= 0}^{m-1} b_j \,\lambda_k^j , \qquad k =1,2,\ldots , s , \end{equation}
for each eigenvalue \( \lambda_k , \) \( k = 1, 2, \ldots , s , \) where s is the number of distinct eigenvalues. If the eigenvalue \( \lambda_k \) has multiplicity \( m_k \) in the minimal polynomial ψ(λ) (so \( \psi (\lambda )/(\lambda - \lambda_k )^{m_k} \) is a polynomial), then we need to add \( m_k - 1 \) auxiliary algebraic equations
\begin{equation} \label{EqSpectral.5} \lim_{\lambda \to \lambda_k} \, \frac{{\text d}^{\nu} f(\lambda )}{{\text d} \lambda^{\nu}} = \left. \frac{{\text d}^{\nu} f(\lambda )}{{\text d} \lambda^{\nu}} \right\vert_{\lambda = \lambda_k} = \left. \frac{{\text d}^{\nu}}{{\text d} \lambda^{\nu}} \, \sum_{j= 0}^{m-1} b_j \,\lambda^j \right\vert_{\lambda = \lambda_k} , \quad \nu =1,2,\ldots , m_k -1 . \end{equation}
   ■
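When all eigenvalues are distinct, so that no derivative conditions \eqref{EqSpectral.5} arise, the procedure \eqref{EqSpectral.3}--\eqref{EqSpectral.4} can be automated in a few lines of Mathematica. The helper function below, interpMF, is our own illustrative sketch, not a built-in command; it solves the interpolation system and assembles the matrix polynomial:
interpMF[A_?MatrixQ, f_] := Module[{ev = Eigenvalues[A], m, c, sol},
  m = Length[ev];   (* this sketch assumes the eigenvalues are distinct *)
  sol = First@Solve[Table[f[ev[[k]]] == Sum[c[j]*ev[[k]]^j, {j, 0, m - 1}], {k, m}], Array[c, m, 0]];
  Sum[(c[j] /. sol)*MatrixPower[A, j], {j, 0, m - 1}]]
For instance, interpMF[A, Exp[#*t] &] applied to the matrix of Example 2 below reproduces the exponential matrix constructed there.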

Note: Our exposition and formula \eqref{EqSpectral.3} are based on the Maclaurin power series expansion \eqref{EqSpectral.1} of an arbitrary admissible function. We used the Maclaurin series for simplicity; generally speaking, one has to utilize a Taylor series instead. Nevertheless, formula \eqref{EqSpectral.3} is valid regardless of whether a Maclaurin or a Taylor series is used. It is important that an admissible function be analytic at all eigenvalues of the corresponding matrix.

For example, the square root function \( r(\lambda ) = \sqrt{\lambda} \) is not holomorphic at λ = 0 because this point is a branch point; however, it is holomorphic at every other point of the complex plane ℂ. Therefore, you can use formula \eqref{EqSpectral.3} to define a square root of a matrix as long as the matrix is not singular (has no zero eigenvalue).    ■

It is instructive to consider the case where an n-by-n matrix A has rank 1, so that A is the outer product of two n-vectors: \( {\bf A} = {\bf u}\,{\bf v}^{\ast} . \) Both vectors u and v have dimension n × 1, so v* has dimension 1 × n, and their product \( {\bf u}\,{\bf v}^{\ast} \) is an n × n matrix. For such a rank-1 matrix, a function of A is the sum of two terms:

\[ f \left( {\bf A} \right) \equiv f \left( {\bf u}\,{\bf v}^{\ast} \right) = f(0) \,{\bf I} + \left( \frac{f \left( {\bf v}^{\ast} \,{\bf u} \right) - f(0)}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} , \qquad {\bf v}^{\ast} \,{\bf u} \ne 0. \]

Since u and v are n × 1 column vectors, their product \( {\bf v}^{\ast} {\bf u} \) is a scalar. When the admissible function f and its derivative are defined at the origin and \( {\bf v}^{\ast} {\bf u} =0, \) the function of the matrix can be defined as
\begin{equation} \label{EqSpectral.6} f \left( {\bf A} \right) = f \left( {\bf u}{\bf v}^{\ast} \right) = f(0) \, {\bf I} + f' (0) \,{\bf u}{\bf v}^{\ast} \qquad\mbox{for } {\bf v}^{\ast} \,{\bf u} = 0, \end{equation}
where, by definition, \( {\bf I} = {\bf A}^0 \) is the identity matrix.
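Formula \eqref{EqSpectral.6} is easy to test on the simplest nilpotent rank-1 matrix (a toy example of ours): when \( {\bf v}^{\ast} {\bf u} = 0 , \) the exponential series terminates after the linear term.
u = {{1}, {0}}; v = {{0}, {1}};             (* here v*.u = 0 *)
N1 = u.ConjugateTranspose[v]                (* the nilpotent matrix {{0, 1}, {0, 0}} *)
MatrixExp[N1] == IdentityMatrix[2] + N1     (* True, since f(0) = f'(0) = 1 for the exponential *)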
Example 1: Consider two vectors: \( {\bf u} = \langle 1,1,1 \rangle \) and \( {\bf v} = \langle 1,0,3 \rangle . \) Then their products \( {\bf u} {\bf v}^{\ast} \) and \( {\bf v}^{\ast} {\bf u} \) define the matrix and the number
\[ {\bf A} = {\bf u} \, {\bf v}^{\ast} = \begin{bmatrix} 1&0&3 \\ 1&0&3 \\ 1&0&3 \end{bmatrix} \qquad\mbox{and} \qquad {\bf v}^{\ast} {\bf u} = {\bf v} \cdot {\bf u} = 4 , \]
respectively. This matrix A has one simple eigenvalue λ = 4 and one double eigenvalue λ = 0. Mathematica confirms:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}}
ss = Eigenvalues[A]
{4, 0, 0}
Eigenvectors[A]
{{1, 1, 1}, {-3, 0, 1}, {0, 1, 0}}
Mathematica has a dedicated command to determine a characteristic polynomial for a square matrix:
CharacteristicPolynomial[A, x]
4 x^2 - x^3
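Before appealing to the minimal polynomial below, we can confirm directly that λ(λ - 4) annihilates A:
A.(A - 4*IdentityMatrix[3])
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}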
Since matrix A is diagonalizable, it is a derogatory matrix and its minimal polynomial is of second degree: \( \psi (\lambda ) = \lambda \left( \lambda -4 \right) . \) We build two matrix functions

\[ {\bf \Phi} ({\bf A}) = \frac{\sin \left( t \, \sqrt{\bf A} \right)}{\sqrt{\bf A}} \qquad\mbox{and} \qquad {\bf \Psi} ({\bf A}) = \cos \left( t \, \sqrt{\bf A} \right) \]
that correspond to single-valued functions \( \Phi (\lambda ) = \frac{\sin \left( t \, \sqrt{\lambda} \right)}{\sqrt{\lambda}} \quad\mbox{and} \quad \Psi (\lambda ) = \cos \left( t \, \sqrt{\lambda} \right) , \) respectively. Using the formula for rank-1 matrices, we get
\begin{eqnarray*} {\bf \Phi} ({\bf A}) &=& \Phi (0) \, {\bf I} + \left( \frac{\Phi \left( {\bf v}^{\ast} \,{\bf u} \right) - t}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} = t\,{\bf I} + \frac{1}{4} \left( \frac{\sin 2t}{2} - t \right) {\bf A} , \\ {\bf \Psi} ({\bf A}) &=& \Psi (0) \, {\bf I} + \left( \frac{\Psi \left( {\bf v}^{\ast} \,{\bf u} \right) - \Psi (0)}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} = {\bf I} + \frac{1}{4} \left( \cos 2t -1 \right) {\bf A} , \end{eqnarray*}
because \( \Phi (0) =t, \ \Phi \left( {\bf v}^{\ast} \,{\bf u} \right) = \Phi (4) = \frac{\sin 2t}{2} \) and \( \Psi (0) =1, \ \Psi \left( {\bf v}^{\ast} \,{\bf u} \right) = \Psi (4) = \cos 2t . \)

To verify that we specified the matrix functions correctly, we prove that they are solutions of the following initial value problems (these matrix problems have unique solutions):

\[ \ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0}, \qquad {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} , \]
and
\[ \ddot{\bf \Psi} (t) + {\bf A}\,{\bf \Psi} (t) = {\bf 0}, \qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} . \]
Since the second derivatives are \( \ddot{\bf \Phi} = - \frac{\sin 2t}{2} \,{\bf A} \) and \( \ddot{\bf \Psi} = -\cos 2t \,{\bf A} , \) we have to show that
\[ - \frac{\sin 2t}{2} \,{\bf A} + t\,{\bf A} + \frac{1}{4} \left( \frac{\sin 2t}{2} - t \right) {\bf A}^2 = {\bf 0} \qquad\mbox{and} \qquad -\cos 2t \,{\bf A} + \frac{1}{4} \left( \cos 2t - 1 \right) {\bf A}^2 = {\bf 0} . \]
Therefore, it is equivalent to prove that \( {\bf A}^{2} = 4\,{\bf A} . \) Mathematica confirms
A.A
{{4, 0, 12}, {4, 0, 12}, {4, 0, 12}}
Since the initial conditions obviously hold, we have shown that the constructed matrix functions are correct.
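These checks can also be delegated entirely to Mathematica; we name the candidate matrix functions Phi1 and Psi1 (our labels) and verify the differential equations directly:
Phi1[t_] = t*IdentityMatrix[3] + (Sin[2 t]/2 - t)/4*A;
Psi1[t_] = IdentityMatrix[3] + (Cos[2 t] - 1)/4*A;
Simplify[D[Phi1[t], t, t] + A.Phi1[t]]    (* the zero matrix *)
Simplify[D[Psi1[t], t, t] + A.Psi1[t]]    (* the zero matrix *)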

Finally, we build the exponential matrix function corresponding to \( U(\lambda ) = e^{\lambda\,t} . \) In this case, we have

\[ U({\bf A}) = {\bf I} + \frac{e^{4t} -1}{4}\,{\bf A} = {\bf I} - \frac{1}{4}\,{\bf A} + \frac{1}{4}\, e^{4t}\,{\bf A} . \]
We check our answer with Mathematica:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}}
expA[t_] = IdentityMatrix[3] - A/4 + Exp[4*t]/4*A
{{3/4 + E^(4 t)/4, 0, -(3/4) + (3 E^(4 t))/4}, {-(1/4) + E^(4 t)/4, 1, -(3/4) + (3 E^(4 t))/4}, {-(1/4) + E^(4 t)/4, 0, 1/4 + (3 E^(4 t))/4}}
Simplify[expA[t] - MatrixExp[A*t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
This confirms that Mathematica, with its dedicated command MatrixExp, indeed finds the correct answer.    ■
Example 2: Consider the \( 3 \times 3 \) matrix \( {\bf A} = \begin{bmatrix} \phantom{-2}1 & \phantom{-1}4&16 \\ \phantom{-}18 & \phantom{-}20 & \phantom{-}4 \\ -12&-14&-7 \end{bmatrix} \) that has three distinct eigenvalues; Mathematica confirms
A = {{1,4,16},{18,20,4},{-12,-14,-7}}
Eigenvalues[A]
{9, 4, 1}
Then we attempt what seems to be an impossible problem: finding square roots of the given matrix. Upon introducing the root function \( r(\lambda ) = \sqrt{\lambda} = \lambda^{1/2} , \) we see that this function has no Maclaurin expansion. Nevertheless, we try to apply the polynomial interpolation approach. So we represent this function as
\[ r(\lambda ) = \sqrt{\lambda} = b_0 + b_1 \lambda + b_2 \lambda^2 . \tag{2.1} \]
To find expressions for b0, b1, and b2, we need to solve the system of algebraic equations:
\[ \begin{split} \sqrt{1}&= \pm 1 = b_0 + b_1 + b_2 , \\ \sqrt{4}&= \pm 2 = b_0 + b_1 4 + b_2 4^2 , \\ \sqrt{9}&= \pm 3 = b_0 + b_1 9 + b_2 9^2 . \end{split} \tag{2.2} \]
Since each square root assigns two outputs, we can construct eight possible matrix square roots by choosing particular values. We do not know whether other square roots exist, but at least we can find these eight. We delegate the job of solving the system of algebraic equations to Mathematica in the hope that it can assist us:
Solve[{b0 + b1 + b2 == 1, 2 == b0 + 4 b1 + 4^2 b2, 3 == b0 + 9 b1 + 81 b2}, {b0, b1, b2}]
{{b0 -> 3/5, b1 -> 5/12, b2 -> -(1/60)}}
For this particular choice of square roots, we find
\[ b_0 = \frac{3}{5} , \qquad b_1 = \frac{5}{12}, \qquad b_2 = - \frac{1}{60} . \]
Then we construct the root matrix
\[ {\bf R}_1 = \frac{3}{5} \,{\bf I} + \frac{5}{12}\, {\bf A} - \frac{1}{60} \, {\bf A}^2 = \begin{bmatrix} \phantom{-}3 &\phantom{-}4 &\phantom{-}8 \\ \phantom{-} 2&\phantom{-} 2&-4 \\ -2&-2&\phantom{-}1 \end{bmatrix} . \tag{2.3} \]
R1 = 3/5*IdentityMatrix[3] + 5/12*A - 1/60*A.A
Then we check our answer with Mathematica:
Simplify[R1.R1 - A]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Now if we choose another triple of square roots, we get
Solve[{b0 + b1 + b2 == 1, -2 == b0 + 4 b1 + 16 b2, 3 == b0 + 9 b1 + 81 b2}, {b0, b1, b2}]
{{b0 -> 3, b1 -> -(9/4), b2 -> 1/4}}
We build another square root based on new values of parameters:
\[ {\bf R}_2 = 3 \,{\bf I} - \frac{9}{4}\, {\bf A} + \frac{1}{4} \, {\bf A}^2 = \begin{bmatrix} -29&-44&-56 \\ \phantom{-}42 &\phantom{-}62 &\phantom{-}76 \\ -18&-26&-31 \end{bmatrix} . \tag{2.4} \]
R2 = 3*IdentityMatrix[3] - 9*A/4 + A.A/4
Then we check our answer with Mathematica:
Simplify[R2.R2 - A]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
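In fact, all eight sign choices can be processed in one sweep. The following sketch (variable names are ours) solves the Vandermonde system for every triple (±1, ±2, ±3) and confirms that each resulting matrix squares to A:
V = {{1, 1, 1}, {1, 4, 16}, {1, 9, 81}};      (* Vandermonde matrix of the eigenvalues 1, 4, 9 *)
roots = Table[Module[{b = LinearSolve[V, s]},
     b[[1]]*IdentityMatrix[3] + b[[2]]*A + b[[3]]*A.A],
    {s, Tuples[{{1, -1}, {2, -2}, {3, -3}}]}];
Union[Simplify[#.# - A] & /@ roots]           (* a single zero matrix: every candidate is a root *)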


For the exponential function U(λ) = exp(λt), we have
\[ U(t) = e^{{\bf A}\,t} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where coefficients are determined from the system of equations:
\[ \begin{split} e^t &= b_0 + b_1 + b_2 , \\ e^{4t}&= b_0 + b_1 4 + b_2 4^2 , \\ e^{9t}&= b_0 + b_1 9 + b_2 9^2 . \end{split} \]
Using Mathematica, we get
\[ b_0 = \frac{1}{10} \left( 15\, e^t -6\, e^{4t} + e^{9t} \right) , \quad b_1 = \frac{1}{24} \left( -13\, e^t + 16\, e^{4t} -3\, e^{9t} \right) , \quad b_2 = \frac{1}{120} \left( 5\, e^t -8\,e^{4t} + 3\, e^{9t} \right) . \]
Solve[{b0 + b1 + b2 == Exp[t], Exp[4*t] == b0 + 4 b1 + 16 b2, Exp[9*t] == b0 + 9 b1 + 81 b2}, {b0, b1, b2}]
{{b0 -> 1/10 E^t (15 - 6 E^(3 t) + E^(8 t)), b1 -> -(1/24) E^t (13 - 16 E^(3 t) + 3 E^(8 t)), b2 -> 1/120 E^ t (-1 + E^t)^2 (5 + 10 E^t + 15 E^(2 t) + 12 E^(3 t) + 9 E^(4 t) + 6 E^(5 t) + 3 E^(6 t))}}
We build the exponential fundamental matrix
\[ U(t) = e^t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4& \phantom{-}8& \phantom{-}12 \\ -1&-2&-3 \end{bmatrix} + e^{4t} \begin{bmatrix} \phantom{-1}8 &\phantom{-}12 &\phantom{-}16 \\ -10&-15&-20 \\ \phantom{-1}4 &\phantom{-1}6 &\phantom{-1}8 \end{bmatrix} + e^{9t} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6& \phantom{-}8& \phantom{-}8 \\ -3&-4&-4 \end{bmatrix} . \tag{2.5} \]
Then we check our answer with exponential matrix function provided by Mathematica:
expA[t_] = Simplify[1/10 E^t (15 - 6 E^(3 t) + E^(8 t))* IdentityMatrix[3] - A*1/24 E^t (13 - 16 E^(3 t) + 3 E^(8 t)) + A.A*(5 E^t - 8 E^(4 t) + 3 E^(9 t))/120]
{{-4 E^t + 8 E^(4 t) - 3 E^(9 t), -4 E^ t (2 - 3 E^(3 t) + E^(8 t)), -4 E^ t (3 - 4 E^(3 t) + E^(8 t))}, {2 E^t (2 - 5 E^(3 t) + 3 E^(8 t)), E^t (8 - 15 E^(3 t) + 8 E^(8 t)), 4 E^t (3 - 5 E^(3 t) + 2 E^(8 t))}, {-E^t + 4 E^(4 t) - 3 E^(9 t), -2 E^t + 6 E^(4 t) - 4 E^(9 t), -3 E^t + 8 E^(4 t) - 4 E^(9 t)}}
Simplify[expA[t] - MatrixExp[A*t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Finally, we consider two functions
\[ \Phi (\lambda ) = \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \qquad \mbox{and} \qquad \Psi (\lambda ) = \cos \left( \sqrt{\lambda}\,t \right) . \]
We represent the corresponding matrix-functions as
\[ \begin{split} \Phi ({\bf A}) &= b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 , \\ \Psi ({\bf A}) &= c_0 {\bf I} + c_1 {\bf A} + c_2 {\bf A}^2 , \end{split} \]
where the coefficients are determined from systems of algebraic equations
\[ \begin{split} \sin (t) &= b_0 + b_1 + b_2 , \\ \frac{1}{2}\,\sin (2t) &= b_0 + b_1 4 + b_2 4^2 , \\ \frac{1}{3}\,\sin (3t) &= b_0 + b_1 9 + b_2 9^2 , \end{split} \]
and
\[ \begin{split} \cos (t) &= c_0 + c_1 + c_2 , \\ \cos (2t) &= c_0 + c_1 4 + c_2 4^2 , \\ \cos (3t) &= c_0 + c_1 9 + c_2 9^2 , \end{split} \]
respectively. We delegate this job to Mathematica:
Solve[{b0 + b1 + b2 == Sin[t], Sin[2*t]/2 == b0 + 4 b1 + 16 b2, Sin[3*t]/3 == b0 + 9 b1 + 81 b2}, {b0, b1, b2}]
{{b0 -> 1/30 (45 Sin[t] - 9 Sin[2 t] + Sin[3 t]), b1 -> 1/24 (-13 Sin[t] + 8 Sin[2 t] - Sin[3 t]), b2 -> 1/120 (5 Sin[t] - 4 Sin[2 t] + Sin[3 t])}}
\[ b_0 = \frac{1}{30} \left( 45\,\sin t -9\,\sin 2t + \sin 3t \right) , \quad b_1 = \frac{1}{24} \left( -13\,\sin t + 8\,\sin 2t - \sin 3t \right) , \quad b_2 = \frac{1}{120} \left( 5\,\sin t -4\,\sin 2t + \sin 3t \right) . \]
Solve[{c0 + c1 + c2 == Cos[t], Cos[2*t] == c0 + 4 c1 + 16 c2, Cos[3*t] == c0 + 9 c1 + 81 c2}, {c0, c1, c2}]
{{c0 -> 1/10 (15 Cos[t] - 6 Cos[2 t] + Cos[3 t]), c1 -> 1/24 (-13 Cos[t] + 16 Cos[2 t] - 3 Cos[3 t]), c2 -> 1/120 (5 Cos[t] - 8 Cos[2 t] + 3 Cos[3 t])}}
\[ c_0 = \frac{1}{10} \left( 15\,\cos t -6\,\cos 2t + \cos 3t \right) , \quad c_1 = \frac{1}{24} \left( -13\,\cos t + 16\,\cos 2t -3\,\cos 3t \right) , \quad c_2 = \frac{1}{120} \left( 5\,\cos t -8\,\cos 2t + 3\,\cos 3t \right) . \]
With these coefficients, we build the matrix functions
\[ \Phi ({\bf A}) = \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} = \sin t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4 &\phantom{-} 8& \phantom{-}12 \\ -1&-2&-3 \end{bmatrix} + \frac{\sin 2t}{2} \begin{bmatrix} \phantom{-1}8& 12&16 \\ -10&-15&-20 \\ \phantom{-1}4& \phantom{-}6&\phantom{-}8 \end{bmatrix} + \frac{\sin 3t}{3} \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6& \phantom{-}8& \phantom{-}8 \\ -3&-4&-4 \end{bmatrix} \]
and
\[ \Psi ({\bf A}) = \cos \left( \sqrt{\bf A}\,t \right) = \cos t \begin{bmatrix} -4&-8&-12 \\ \phantom{-}4& \phantom{-}8& \phantom{-}12 \\ -1&-2&-3 \end{bmatrix} + \cos 2t \begin{bmatrix} 8&\phantom{-}12&\phantom{-}16 \\ -10&-15&-20 \\ 4&6&8 \end{bmatrix} + \cos 3t \begin{bmatrix} -3&-4&-4 \\ \phantom{-}6&\phantom{-}8&\phantom{-}8 \\ -3&-4&-4 \end{bmatrix} \]
where the grouping by \( \sin kt \) (respectively \( \cos kt \)) follows from the coefficient formulas above. For instance, for Ψ(A), Mathematica gives
Simplify[1/10 (15 Cos[t] - 6 Cos[2 t] + Cos[3 t])*IdentityMatrix[3] + 1/24 (-13 Cos[t] + 16 Cos[2 t] - 3 Cos[3 t])*A + 1/120 (5 Cos[t] - 8 Cos[2 t] + 3 Cos[3 t])*A.A]
{{-4 Cos[t] + 8 Cos[2 t] - 3 Cos[3 t], -4 (2 Cos[t] - 3 Cos[2 t] + Cos[3 t]), -4 (3 Cos[t] - 4 Cos[2 t] + Cos[3 t])}, {4 Cos[t] - 10 Cos[2 t] + 6 Cos[3 t], 8 Cos[t] - 15 Cos[2 t] + 8 Cos[3 t], 4 (3 Cos[t] - 5 Cos[2 t] + 2 Cos[3 t])}, {-Cos[t] + 4 Cos[2 t] - 3 Cos[3 t], -2 Cos[t] + 6 Cos[2 t] - 4 Cos[3 t], -3 Cos[t] + 8 Cos[2 t] - 4 Cos[3 t]}}
To prove that our formulas are correct, we show that these matrix-functions are solutions of the following initial value problems:
\[ \frac{{\text d}^2}{{\text d}t^2} \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) + {\bf A} \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) = {\bf 0}, \qquad \left. \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) \right\vert_{t=0} = {\bf 0}, \quad \left. \frac{\text d}{{\text d}t} \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) \right\vert_{t=0} = {\bf I}, \]
and
\[ \frac{{\text d}^2}{{\text d}t^2} \, \cos \left( \sqrt{\bf A}\, t \right) + {\bf A} \, \cos \left( \sqrt{\bf A}\, t \right) = {\bf 0}, \qquad \left. \cos \left( \sqrt{\bf A}\, t \right) \right\vert_{t=0} = {\bf I}, \quad \left. \frac{\text d}{{\text d}t} \cos \left( \sqrt{\bf A}\, t \right) \right\vert_{t=0} = {\bf 0} . \]
We again ask Mathematica for help.
phi[t_] = 1/30 (45 Sin[t] - 9 Sin[2 t] + Sin[3 t])*IdentityMatrix[3] + 1/24 (-13 Sin[t] + 8 Sin[2 t] - Sin[3 t])*A + 1/120 (5 Sin[t] - 4 Sin[2 t] + Sin[3 t])*A.A
psi[t_] = Simplify[1/10 (15 Cos[t] - 6 Cos[2 t] + Cos[3 t])* IdentityMatrix[3] + 1/24 (-13 Cos[t] + 16 Cos[2 t] - 3 Cos[3 t])*A + 1/120 (5 Cos[t] - 8 Cos[2 t] + 3 Cos[3 t])*A.A]
We first verify the initial conditions:
phi[0]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
D[phi[t], t] /. t -> 0
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
and
psi[0]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
D[psi[t], t] /. t -> 0
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Then we check that the matrix functions Φ(t) and Ψ(t) satisfy the differential equation.
Simplify[D[phi[t], t, t] + A.phi[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Simplify[D[psi[t], t, t] + A.psi[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
This means that our matrix functions are all correct.    ■
Example 3: Consider the derogatory (diagonalizable) \( 3 \times 3 \) matrix \( {\bf A} = \begin{bmatrix} -20&-42&-21 \\ \phantom{-1}6& \phantom{-}13&\phantom{-1}6 \\ \phantom{-}12&\phantom{-}24&\phantom{-}13 \end{bmatrix} \) that has only two distinct eigenvalues
A = {{-20, -42, -21}, {6, 13, 6}, {12, 24, 13}};
Eigenvalues[A]
{4, 1, 1}
We verify that the minimal polynomial is a product of simple terms.
(A - IdentityMatrix[3]).(A - 4*IdentityMatrix[3])
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
To build matrix functions using polynomial interpolation, we have two options: either use the characteristic polynomial χ(λ) = det(λI - A) = (λ-1)²(λ-4) or use the minimal polynomial ψ(λ) = (λ-1)(λ-4). Of course, the latter is the easier way, but we show both options.

Let us start with the characteristic polynomial and define the exponential function accordingly:

\[ U(t) = e^{{\bf A}\,t} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 , \tag{3.1} \]
where the coefficients are determined by solving the system of algebraic equations
\[ \begin{split} e^t &= b_0 + b_1 + b_2 , \\ t\,e^{t}&= b_1 + 2\,b_2 , \\ e^{4t}&= b_0 + b_1 4 + b_2 4^2 . \end{split} \tag{3.2} \]
The second equation is obtained by differentiating the first one with respect to λ and evaluating at λ = 1.
Solve[{Exp[t] == b0 + b1 + b2, t*Exp[t] == b1 + 2 b2, Exp[4*t] == b0 + 4 b1 + 16 b2}, {b0, b1, b2}]
{{b0 -> 1/9 E^t (8 + E^(3 t) - 12 t), b1 -> -(1/9) E^t (-2 + 2 E^(3 t) - 15 t), b2 -> 1/9 E^t (-1 + E^(3 t) - 3 t)}}
This allows us to construct the exponential matrix
\begin{align*} U(t) &= e^{{\bf A}\,t} = \frac{1}{9}\left( 8\, e^t -12t\, e^t + e^{4t} \right) {\bf I} + \frac{1}{9}\left( 2\, e^t + 15t\,e^t - 2\,e^{4t} \right) {\bf A} + \frac{1}{9}\left( -e^t -3t\,e^t + e^{4t} \right) {\bf A}^2 \\ &= e^t \begin{bmatrix} \phantom{-}8&14&\phantom{-}7 \\ -2& -3&-2 \\ -4&-8&-3 \end{bmatrix} + e^{4t} \begin{bmatrix} -7&-14&-7 \\ \phantom{-}2& \phantom{-}4&\phantom{-}2 \\ \phantom{-}4&\phantom{-}8&\phantom{-}4 \end{bmatrix} . \tag{3.3} \end{align*}
expA[t_] = Simplify[1/9 E^t (8 + E^(3 t) - 12 t)* IdentityMatrix[3] - (1/9) E^t (-2 + 2 E^(3 t) - 15 t)*A + 1/9 E^t (-1 + E^(3 t) - 3 t)*A.A ]
Note that the terms containing the factor \( t\,e^t \) cancel out. Now we use the minimal polynomial and represent the same exponential function as the sum of two terms:
\[ U(t) = e^{{\bf A}\,t} = c_0 {\bf I} + c_1 {\bf A} , \tag{3.4} \]
where coefficients are determined from the following system of equations
\[ \begin{split} e^t &= c_0 + c_1 , \\ e^{4t}&= c_0 + c_1 4 . \end{split} \tag{3.5} \]
Solve[{Exp[t] == c0 + c1, Exp[4*t] == c0 + 4 c1}, {c0, c1}]
{{c0 -> -(1/3) E^t (-4 + E^(3 t)), c1 -> 1/3 E^t (-1 + E^(3 t))}}
Then the exponential matrix function becomes
\[ U(t) = e^{{\bf A}\,t} = c_0 {\bf I} + c_1 {\bf A} = \frac{1}{3} \left( 4\, e^{t} - e^{4t} \right) {\bf I} + \frac{1}{3} \left( e^{4t} - e^t \right) {\bf A} , \]
which is exactly the same expression as we found using the characteristic polynomial.
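A direct comparison with MatrixExp confirms the minimal-polynomial form as well (u2 is our temporary name):
u2[t_] = (4*Exp[t] - Exp[4*t])/3*IdentityMatrix[3] + (Exp[4*t] - Exp[t])/3*A;
Simplify[u2[t] - MatrixExp[A*t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}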

Now we construct two matrix-functions corresponding to functions

\[ \Phi (\lambda ) = \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \qquad\mbox{and} \qquad \Psi (\lambda ) = \cos \left( \sqrt{\lambda}\,t \right) . \tag{3.6} \]
We show all details for the former function and only state the results for the latter. First, we use the characteristic polynomial and represent the corresponding matrix function as
\[ {\bf \Phi} (t) = \frac{\sin \left( \sqrt{\bf A}\,t \right)}{\sqrt{\bf A}} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 . \tag{3.7} \]
Using the explicit expression for the derivative
\[ \frac{\text d}{{\text d}\lambda}\,\Phi (\lambda ) = \frac{\text d}{{\text d}\lambda}\, \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} = \frac{t\,\cos \left( \sqrt{\lambda}\,t \right)}{2\,\lambda} - \frac{\sin \left( \sqrt{\lambda}\,t \right)}{2\,\lambda^{3/2}} , \tag{3.8} \]
D[Sin[Sqrt[s]*t]/Sqrt[s], s]
(t Cos[Sqrt[s] t])/(2 s) - Sin[Sqrt[s] t]/(2 s^(3/2))
we find the values of coefficients from the system of algebraic equations
\[ \begin{split} \sin t &= b_0 + b_1 + b_2 , \\ \frac{t}{2} \,\cos t - \frac{1}{2}\, \sin t &= b_1 + 2\,b_2 , \\ \frac{1}{2}\,\sin 2t &= b_0 + 4\,b_1 + 4^2 b_2 . \end{split} \]
Solve[{Sin[t] == b0 + b1 + b2, t*Cos[t]/2 - Sin[t]/2 == b1 + 2 b2, Sin[2*t]/2 == b0 + 4 b1 + 16 b2}, {b0, b1, b2}]
{{b0 -> 1/18 (-12 t Cos[t] + 28 Sin[t] + Sin[2 t]), b1 -> 1/18 (15 t Cos[t] - 11 Sin[t] - 2 Sin[2 t]), b2 -> 1/18 (-3 t Cos[t] + Sin[t] + Sin[2 t])}}
With this at hand, we build the matrix-function
\begin{align*} {\bf \Phi} (t) &= \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 \\ &= \frac{1}{18} \left( 28\,\sin t - 12t\,\cos t + \sin 2t \right) {\bf I} + \frac{1}{18} \left( -11\,\sin t + 15t\,\cos t -2\,\sin 2t \right) {\bf A} + \frac{1}{18} \left( \sin t -3t\,\cos t + \sin 2t \right) {\bf A}^2 \\ &= \frac{1}{18} \left( 28\,\sin t - 12t\,\cos t + \sin 2t \right) \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix} + \frac{1}{18} \left( -11\,\sin t + 15t\,\cos t -2\,\sin 2t \right) \begin{bmatrix} -20&-42&-21 \\ \phantom{-1}6& \phantom{-}13&\phantom{-1}6 \\ \phantom{-}12& \phantom{-}24& \phantom{-}13 \end{bmatrix} + \frac{1}{18} \left( \sin t -3t\,\cos t + \sin 2t \right) \begin{bmatrix} -104&-210&-105 \\ \phantom{-1}30& \phantom{-1}61& \phantom{-1}30 \\ \phantom{-1}60& \phantom{-}120& \phantom{-1}61 \end{bmatrix} \end{align*}
Therefore, we get
\[ {\bf \Phi} (t) = \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} = \sin t \begin{bmatrix} \phantom{-}8&14& \phantom{-}7 \\ -2& -3&-2 \\ -4&-8&-3 \end{bmatrix} + t\,\cos t \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix} + \frac{\sin 2t}{2} \begin{bmatrix} -7&-14&-7 \\ \phantom{-}2& \phantom{-1}4& \phantom{-}2 \\ \phantom{-}4& \phantom{-1}8& \phantom{-}4 \end{bmatrix} . \]
Similarly, we find
\[ {\bf \Psi} (t) = \cos \left( \sqrt{\bf A}\, t \right) = \cos t \begin{bmatrix} \phantom{-}8&14& \phantom{-}7 \\ -2& -3&-2 \\ -4&-8&-3 \end{bmatrix} + t\,\sin t \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix} + \cos 2t \begin{bmatrix} -7&-14&-7 \\ \phantom{-}2& \phantom{-1}4& \phantom{-}2 \\ \phantom{-}4& \phantom{-1}8& \phantom{-}4 \end{bmatrix} . \]

Now we use the minimal polynomial and represent the required function as

\[ {\bf \Phi} (t) = \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} = c_0 {\bf I} + c_1 {\bf A} , \]
where coefficients are determined from the system of equations
\[ \begin{split} \sin t &= c_0 + c_1 , \\ \frac{1}{2}\,\sin 2t &= c_0 + 4\,c_1 . \end{split} \]
Solving this system with Mathematica, we get
Solve[{Sin[t] == c0 + c1 , Sin[2*t]/2 == c0 + 4*c1}, {c0, c1}]
{{c0 -> 1/6 (8 Sin[t] - Sin[2 t]), c1 -> 1/6 (-2 Sin[t] + Sin[2 t])}}
\[ {\bf \Phi} (t) = \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} = \frac{1}{6} \left( 8\,\sin t - \sin 2t \right) {\bf I} + \frac{1}{6} \left( -2\,\sin t + \sin 2t \right) {\bf A} , \]
which gives exactly the same answer as we found previously with the characteristic polynomial. A similar result is valid for matrix-function Ψ(t).

The matrix functions Φ(t) and Ψ(t) constructed above are defined uniquely because they are solutions of the following initial value matrix problems

\[ \frac{{\text d}^2}{{\text d}t^2} \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) + {\bf A} \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) = {\bf 0}, \qquad \left. \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) \right\vert_{t=0} = {\bf 0}, \quad \left. \frac{\text d}{{\text d}t} \left( \frac{\sin \left( \sqrt{\bf A}\, t \right)}{\sqrt{\bf A}} \right) \right\vert_{t=0} = {\bf I}, \]
and
\[ \frac{{\text d}^2}{{\text d}t^2} \, \cos \left( \sqrt{\bf A}\, t \right) + {\bf A} \, \cos \left( \sqrt{\bf A}\, t \right) = {\bf 0}, \qquad \left. \cos \left( \sqrt{\bf A}\, t \right) \right\vert_{t=0} = {\bf I}, \quad \left. \frac{\text d}{{\text d}t} \cos \left( \sqrt{\bf A}\, t \right) \right\vert_{t=0} = {\bf 0} . \]
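Mathematica verifies both initial value problems for the minimal-polynomial forms; phi3 and psi3 are our temporary names, and the coefficients for Ψ follow from cos t = c0 + c1 and cos 2t = c0 + 4 c1:
phi3[t_] = (8*Sin[t] - Sin[2 t])/6*IdentityMatrix[3] + (-2*Sin[t] + Sin[2 t])/6*A;
psi3[t_] = (4*Cos[t] - Cos[2 t])/3*IdentityMatrix[3] + (Cos[2 t] - Cos[t])/3*A;
Simplify[D[phi3[t], t, t] + A.phi3[t]]    (* the zero matrix *)
Simplify[D[psi3[t], t, t] + A.psi3[t]]    (* the zero matrix *)
{phi3[0], D[phi3[t], t] /. t -> 0}        (* the zero matrix and the identity matrix *)
{psi3[0], D[psi3[t], t] /. t -> 0}        (* the identity matrix and the zero matrix *)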

Although matrix A has several square roots, we outline the application of polynomial interpolation to determine one of them. Using the minimal polynomial, we seek square roots of matrix A in the form

\[ \sqrt{\bf A} = d_0 {\bf I} + d_1 {\bf A} , \]
where coefficients are determined from the system of equations:
\[ \begin{split} \pm 1 &= d_0 + d_1 , \\ \pm 2 &= d_0 + 4\,d_1 \end{split} \]
Solving with Mathematica for one choice of signs, we get one root
Solve[{1 == d0 + d1 , 2 == d0 + 4*d1}, {d0,d1}]
{{d0 -> 2/3, d1 -> 1/3}}
\[ {\bf R}_1 = \frac{2}{3} \, {\bf I} + \frac{1}{3}\, {\bf A} = \begin{bmatrix} -6&-14&-7 \\ \phantom{-}2& \phantom{-1}5& \phantom{-}2 \\ \phantom{-}4& \phantom{-1}8& \phantom{-}5 \end{bmatrix} . \]
Simplify[2/3*IdentityMatrix[3] + A/3]
{{-6, -14, -7}, {2, 5, 2}, {4, 8, 5}}
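A quick check confirms that this matrix is indeed a square root:
R1 = 2/3*IdentityMatrix[3] + A/3;
Simplify[R1.R1 - A]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}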
Other square roots of matrix A can be constructed similarly.    ■

Example 4: Consider a diagonalizable 4-by-4 matrix

\[ {\bf A} = \begin{bmatrix} -4& \phantom{-1}7& \phantom{-}1& \phantom{-1}4 \\ \phantom{-1}6&-16&-3&-9 \\ \phantom{-}12&-27&-4&-15 \\ -18& \phantom{-}43& \phantom{-}7& \phantom{-}24 \end{bmatrix} . \]
Its characteristic polynomial is
\[ \chi (\lambda ) = \det \left( \lambda {\bf I} - {\bf A} \right) = \lambda \left( \lambda -2 \right) \left( \lambda +1 \right)^2 , \]
but its minimal polynomial is a product of simple terms (so matrix A is diagonalizable):
\[ \psi (\lambda ) = \lambda \left( \lambda -2 \right) \left( \lambda +1 \right) . \]
We check it with Mathematica:
A = {{-4, 7, 1, 4}, {6, -16, -3, -9}, {12, -27, -4, -15}, {-18, 43, 7, 24}}
Eigenvalues[A]
{2, -1, -1, 0}
Eigenvectors[A]
{{1, -2, -4, 6}, {-1, -1, 0, 1}, {-2, -1, 1, 0}, {1, -3, -3, 7}}
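The claim about the minimal polynomial is confirmed by direct annihilation:
A.(A - 2*IdentityMatrix[4]).(A + IdentityMatrix[4])
{{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}}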

Therefore, A is a derogatory diagonalizable matrix because it has 4 linearly independent eigenvectors. In this case, its minimal polynomial is a product of simple terms. For the exponential function \( f (\lambda ) = e^{\lambda\, t} \) we construct the corresponding matrix function \( f ({\bf A} ) = e^{{\bf A}\, t} \) as a polynomial of degree 2 (because its minimal polynomial ψ(λ) = λ(λ - 2)(λ + 1) is of degree 3):
\[ e^{{\bf A}\, t} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 , \tag{4.1} \]
where coefficients are determined from the system of algebraic equations
\begin{eqnarray*} 1 &=& b_0 \\ e^{2t} &=& b_0 + 2\,b_1 + 4\, b_2 , \\ e^{-t} &=& b_0 - b_1 + b_2 , \end{eqnarray*}
corresponding to each eigenvalue. Solving with Mathematica,
Solve[{Exp[2*t] == 1 + 2*b1 + 4*b2, Exp[-t] == 1 - b1 + b2}, {b1, b2}]
{{b1 -> 1/6 E^-t (-4 + 3 E^t + E^(3 t)), b2 -> 1/6 E^-t (-1 + E^t)^2 (2 + E^t)}}
we get the required values
\[ b_0 =1, \qquad b_1 = \frac{1}{6} \left( 3 - 4\, e^{-t} + e^{2t} \right) , \qquad b_2 = \frac{1}{6} \left( -3 +2\, e^{-t} + e^{2t} \right) . \]
This yields the exponential matrix function:
\[ e^{{\bf A}\, t} = {\bf I} + \frac{1}{6} \left( 3 -4\, e^{-t} + e^{2t} \right) {\bf A} + \frac{1}{6} \left( -3 +2\, e^{-t} + e^{2t} \right) {\bf A}^2 . \]
We check the answer again with Mathematica:
u[t_] = Simplify[ IdentityMatrix[4] + (3 - 4*Exp[-t] + Exp[2*t])* A/6 + (-3 + 2*Exp[-t] + Exp[2*t])*A.A/6]
Simplify[u[t] - MatrixExp[A*t]]
{{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}}
However, this is not a genuine verification because we compare our constructed exponential matrix with another matrix provided by Mathematica. Of course, we trust Mathematica, but we need real verification. It is known that the matrix exponential \( {\bf U}(t) = e^{{\bf A}\, t} \) is the unique solution of the following initial value problem for a matrix differential equation:
\[ \frac{\text d}{{\text d}t}\, {\bf U} = {\bf A} \,{\bf U} (t), \qquad {\bf U} (0) = {\bf I} , \]
where \( {\bf I} = {\bf A}^0 \) is the identity matrix. So we check with Mathematica:
Simplify[D[u[t], t] - A.u[t]]
{{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}}
u[0]
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
   ■
Example 5: Consider a 3-by-3 matrix
\[ {\bf A} = \begin{bmatrix} 1& \phantom{-}2& \phantom{-}3 \\ 2& \phantom{-}3& \phantom{-}4 \\ 2&-6&-4 \end{bmatrix} \]
that has a pair of complex conjugate eigenvalues.
A = {{1, 2, 3}, {2, 3, 4}, {2, -6, -4}}
Eigenvalues[A]
{1 + 2 I, 1 - 2 I, -2}
We can define eigenvalues in Mathematica as
s = Eigenvalues[A]
lambda1 = s[[1]]
1 + 2 I
lambda3 = s[[3]]
-2
We don't need to define the second complex eigenvalue because it is the complex conjugate of the first one. Since the minimal polynomial for matrix A has degree 3, we seek the exponential matrix function in the form:
\[ {\bf U}(t) = e^{{\bf A}\, t} = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \tag{5.1} \]
where the coefficients b0, b1, and b2 are determined from the system of algebraic equations:
\begin{eqnarray*} e^{(1 + 2{\bf j})t} &=& b_0 + (1 + 2{\bf j})\,b_1 + (1 + 2{\bf j})^2\, b_2 , \\ e^{(1 - 2{\bf j})t} &=& b_0 + (1 - 2{\bf j})\,b_1 + (1 - 2{\bf j})^2\, b_2 , \\ e^{-2t} &=& b_0 -2\, b_1 + 4\, b_2 . \tag{5.2} \end{eqnarray*}
Mathematica solves this system of algebraic equations without a problem:
ss = Solve[{Exp[(1 + 2 I)*t] == b0 + (1 + 2 I)*b1 + (1 + 2 I)^2*b2,
Exp[(1 - 2 I)*t] == b0 + (1 - 2 I)*b1 + (1 - 2 I)^2*b2,
Exp[-2*t] == b0 - 2*b1 + 4*b2 }, {b0, b1, b2}] // Flatten
{b0 -> (5 E^(-2 t))/ 13 + (4/13 + I/26) E^((1 - 2 I) t) + (4/13 - I/26) E^((1 + 2 I) t),
b1 -> -(1/52) E^((-2 - 2 I) t) (8 E^(2 I t) - (4 + 7 I) E^( 3 t) - (4 - 7 I) E^((3 + 4 I) t)),
b2 -> -(1/52) E^((-2 - 2 I) t) (-4 E^(2 I t) + (2 - 3 I) E^( 3 t) + (2 + 3 I) E^((3 + 4 I) t))}
We extract coefficients from the given list
b0 = Part[Values@ss, 1]
b1 = Part[Values@ss, 2]
b2 = Part[Values@ss, 3]
However, they all contain the imaginary unit, which forces us to eliminate the imaginary terms with the following commands:
b0 = Simplify[ComplexExpand[b0]]
1/13 E^(-2 t) (5 + 8 E^(3 t) Cos[2 t] + E^(3 t) Sin[2 t])
b1 = Simplify[ComplexExpand[b1]]
1/26 E^(-2 t) (-4 + 4 E^(3 t) Cos[2 t] + 7 E^(3 t) Sin[2 t])
b2 = Simplify[ComplexExpand[b2]]
1/52 E^(-2 t) (4 - 4 E^(3 t) Cos[2 t] + 6 E^(3 t) Sin[2 t])
Finally, we define the exponential matrix function
exp[t_] = Simplify[b0*IdentityMatrix[3] + b1*A + b2*A.A]
{{1/13 E^(-2 t) (14 - E^(3 t) Cos[2 t] + 21 E^(3 t) Sin[2 t]), 2/13 E^(-2 t) (-7 + 7 E^(3 t) Cos[2 t] - 4 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (-7 + 7 E^(3 t) Cos[2 t] + 9 E^(3 t) Sin[2 t])}, {1/ 13 E^(-2 t) (12 - 12 E^(3 t) Cos[2 t] + 31 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (-12 + 25 E^(3 t) Cos[2 t] - 5 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (-6 + 6 E^(3 t) Cos[2 t] + 17 E^(3 t) Sin[2 t])}, {2/ 13 E^(-2 t) (-11 + 11 E^(3 t) Cos[2 t] - 10 E^(3 t) Sin[2 t]), -(2/ 13) E^(-2 t) (-11 + 11 E^(3 t) Cos[2 t] + 3 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (11 + 2 E^(3 t) Cos[2 t] - 16 E^(3 t) Sin[2 t])}}
\begin{align*} {\bf U} (t) &= e^{{\bf A}\, t} = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 \\ &= \frac{1}{13} \, e^{-2t} \begin{bmatrix} \phantom{-}14&-14&-7 \\ \phantom{-}12& -12& -6 \\ -22 & \phantom{-}22& 11 \end{bmatrix} + \frac{\cos 2t}{13} \, e^{-2t} \begin{bmatrix} -1& \phantom{-}14&7 \\ -12& \phantom{-}25 & 6 \\ \phantom{-}22 & -22 & 2 \end{bmatrix} + \frac{\sin 2t}{13} \, e^{-2t} \begin{bmatrix} \phantom{-}21& -8& \phantom{-}9 \\ \phantom{-}31& -5& \phantom{-}17 \\ -20 & -6 & -16 \end{bmatrix} . \end{align*}
This matrix function is a unique solution to the initial value problem
\[ \frac{{\text d}}{{\text d} t} \,{\bf U} (t) = {\bf A}\,{\bf U} (t) , \qquad {\bf U} (0) = {\bf I} . \]
Finally, we check with Mathematica
Simplify[D[exp[t], t] - A.exp[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
exp[0]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
   ■

Example 6: Consider a nondiagonalizable 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} \phantom{-}9& \phantom{-}9&38 \\ \phantom{-}1& \phantom{-}7&10 \\ -1&-2&-4 \end{bmatrix} . \]
This matrix has only one eigenvalue of multiplicity 3.
A = {{9, 9, 38}, {1, 7, 10}, {-1, -2, -4}}
Eigenvalues[A]
{4, 4, 4}
Eigenvectors[A]
{{-4, -2, 1}, {0, 0, 0}, {0, 0, 0}}
Since its characteristic polynomial
CharacteristicPolynomial[A,s]
64 - 48 s +12 s^2 - s^3

coincides with the minimal polynomial, the given matrix is nonderogatory. We are going to determine two matrix functions \( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \) and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \) corresponding to functions \( \displaystyle \Phi \left( \lambda \right) = \frac{\sin \left( t\,\sqrt{\lambda} \right)}{\sqrt{\lambda}} \) and \( \displaystyle \Psi \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) , \) respectively.

Let us start with \( \displaystyle {\bf \Phi} \left( {\bf A} \right) , \) which does not depend on what branch of square root is chosen in \( \displaystyle \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} . \) We seek this matrix function in the form

\[ {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \tag{6.1} \]
where the coefficients b0, b1, and b2 are determined from the system of algebraic equations:
\begin{eqnarray*} \Phi (4) &=& \frac{\sin 2t}{2} = b_0 + 4\,b_1 + 16\, b_2 , \\ \Phi' (4) &=& \frac{1}{8}\,t\,\cos 2t - \frac{1}{16}\, \sin 2t = b_1 + 8\, b_2 , \\ \Phi'' (4) &=& -\frac{3}{64}\,t\,\cos 2t + \frac{3}{128}\,\sin 2t - \frac{1}{32}\,t^2 \sin 2t = 2\, b_2 . \tag{6.2} \end{eqnarray*}
phi[s_] = (Sin[t*Sqrt[s]])/Sqrt[s]
D[phi[s], s] /. s -> 4
1/8 t Cos[2 t] - 1/16 Sin[2 t]
D[phi[s], s, s] /. s -> 4
-(3/64) t Cos[2 t] + 3/128 Sin[2 t] - 1/32 t^2 Sin[2 t]
Now we ask Mathematica to solve the system of algebraic equations
Solve[{Sin[2*t]/2 == b0 + 4*b1 + 16*b2,
1/8 t Cos[2 t] - 1/16 Sin[2 t] == b1 + 8*b2,
-(3/64) t Cos[2 t] + 3/128 Sin[2 t] - 1/32 t^2 Sin[2 t] == 2*b2 }, {b0, b1, b2}]
{{b0 -> 1/16 (-14 t Cos[2 t] + 15 Sin[2 t] - 4 t^2 Sin[2 t]),
b1 -> 1/32 (10 t Cos[2 t] - 5 Sin[2 t] + 4 t^2 Sin[2 t]),
b2 -> 1/256 (-6 t Cos[2 t] + 3 Sin[2 t] - 4 t^2 Sin[2 t])}}
\begin{eqnarray*} b_0 &=& \frac{1}{16} \left( 15\,\sin 2t -4t^2 \sin 2t -14t\,\cos 2t \right) , \\ b_1 &=& \frac{1}{32}\left( 10t\,\cos 2t - 5\, \sin 2t + 4t^2 \sin 2t \right) , \\ b_2 &=& \frac{1}{256}\left( 3\,\sin 2t - 4\,t^2 \sin 2t -6t\,\cos 2t \right) . \tag{6.3} \end{eqnarray*}
Next, we build the matrix function
ff[t_] = Simplify[(1/ 16 (-14 t Cos[2 t] + 15 Sin[2 t] - 4 t^2 Sin[2 t]))* IdentityMatrix[ 3]
+ (1/32 (10 t Cos[2 t] - 5 Sin[2 t] + 4 t^2 Sin[2 t]))* A
+ (1/256 (-6 t Cos[2 t] + 3 Sin[2 t] - 4 t^2 Sin[2 t]))*A.A]
{{1/64 (46 t Cos[2 t] + (9 + 4 t^2) Sin[2 t]),
1/64 (78 t Cos[2 t] + (-39 + 4 t^2) Sin[2 t]),
1/32 (170 t Cos[2 t] + (-85 + 12 t^2) Sin[2 t])},
{1/ 128 (22 t Cos[2 t] + (-11 + 4 t^2) Sin[2 t]),
1/128 (54 t Cos[2 t] + (37 + 4 t^2) Sin[2 t]),
1/64 (98 t Cos[2 t] + (-49 + 12 t^2) Sin[2 t])},
{1/ 256 (-38 t Cos[2 t] + (19 - 4 t^2) Sin[2 t]),
1/256 (-70 t Cos[2 t] + (35 - 4 t^2) Sin[2 t]),
1/128 (-146 t Cos[2 t] + (137 - 12 t^2) Sin[2 t])}}
which gives
\begin{align*} {\bf \Phi} (t) &= \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \\ &= \frac{1}{256} \begin{bmatrix} 4 \left[ 46t\,\cos 2t + (9 + 4t^2 )\,\sin 2t \right] & 4 \left[ 78t\,\cos 2t + (-39 + 4t^2 )\,\sin 2t \right] & 8 \left[ 170t\,\cos 2t + (-85 + 12t^2 )\,\sin 2t \right] \\ 2 \left[ 22t\,\cos 2t + (-11 + 4t^2 )\,\sin 2t \right] & 2 \left[ 54t\,\cos 2t + (37 + 4t^2 )\,\sin 2t \right] & 4 \left[ 98t\,\cos 2t + (-49 + 12t^2 )\,\sin 2t \right] \\ -38t\,\cos 2t + (19 - 4t^2 )\,\sin 2t & -70t\,\cos 2t + (35 - 4t^2 )\,\sin 2t & 2\left[ -146t\,\cos 2t + (137 - 12 t^2 )\,\sin 2t \right] \end{bmatrix} \end{align*}
Note that matrices A and \( \displaystyle {\bf \Phi} (t) \) commute. To verify that we get a correct matrix function, we have to show that the function \( \displaystyle {\bf \Phi} (t) \) is a solution to the initial value problem
\[ \frac{{\text d}^2}{{\text d} t^2} \,{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0} , \qquad {\bf \Phi} (0) = {\bf 0} , \quad \dot{{\bf \Phi}} (0) = {\bf I} . \]
We again delegate this job to Mathematica:
Simplify[D[ff[t], t, t] + A.ff[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
ff[0]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
D[ff[t], t] /. t -> 0
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}

Now we build another matrix function \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \) using exactly the same steps. So we seek it in the form

\[ {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where the coefficients b0, b1, and b2 are determined from the system of algebraic equations:
\begin{eqnarray*} \Psi (4) &=& \cos 2t = b_0 + 4\,b_1 + 16\, b_2 , \\ \Psi' (4) &=& -\frac{1}{4}\,t\,\sin 2t = b_1 + 8\, b_2 , \\ \Psi'' (4) &=& -\frac{1}{16}\,t^2\,\cos 2t + \frac{1}{32}\,t\,\sin 2t = 2\, b_2 . \end{eqnarray*}
Here \( \displaystyle \Psi \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) . \) We solve the system of algebraic equations using Mathematica:
psi[s_] = Cos[t*Sqrt[s]]
D[psi[s], s] /. s -> 4
-(1/4) t Sin[2 t]
D[psi[s], s, s] /. s -> 4
-(1/16) t^2 Cos[2 t] + 1/32 t Sin[2 t]
This allows us to determine the values of coefficients b0, b1, and b2 in the formula
\[ \cos \left( t\,\sqrt{\bf A} \right) = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 . \]
Solve[{Cos[2*t] == b0 + 4*b1 + 16*b2,
-(1/4) t Sin[2 t] == b1 + 8*b2,
-(1/16) t^2 Cos[2 t] + 1/32 t Sin[2 t] == 2*b2 }, {b0, b1, b2}]
{{b0 -> 1/4 (4 Cos[2 t] - 2 t^2 Cos[2 t] + 5 t Sin[2 t]),
b1 -> 1/8 (2 t^2 Cos[2 t] - 3 t Sin[2 t]),
b2 -> 1/64 (-2 t^2 Cos[2 t] + t Sin[2 t])}}
Finally, we build the matrix function \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) : \)
gg[t_] = Simplify[ 1/4 (4 Cos[2 t] - 2 t^2 Cos[2 t] + 5 t Sin[2 t])* IdentityMatrix[3]
+ 1/8 (2 t^2 Cos[2 t] - 3 t Sin[2 t])*A
+ 1/64 (-2 t^2 Cos[2 t] + t Sin[2 t])*A.A]
{{1/8 ((8 + t^2) Cos[2 t] - 21 t Cos[t] Sin[t]), 1/16 t (2 t Cos[2 t] - 37 Sin[2 t]), 1/8 t (6 t Cos[2 t] - 79 Sin[2 t])},
{1/ 32 t (2 t Cos[2 t] - 9 Sin[2 t]), 1/32 (2 (16 + t^2) Cos[2 t] - 25 t Sin[2 t]), 1/16 t (6 t Cos[2 t] - 43 Sin[2 t])},
{-(1/64) t (2 t Cos[2 t] - 17 Sin[2 t]), -(1/64) t (2 t Cos[2 t] - 33 Sin[2 t]), (1 - (3 t^2)/16) Cos[2 t] + 67/32 t Sin[2 t]}}
To verify that our matrix function Ψ(t) was obtained correctly, we need to show that it is a solution (which is unique) of the initial value problem
\[ \frac{{\text d}^2}{{\text d} t^2} \,{\bf \Psi} (t) + {\bf A}\,{\bf \Psi} (t) = {\bf 0} , \qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{{\bf \Psi}} (0) = {\bf 0} . \]
Mathematica helps with the equation
Simplify[D[gg[t], t, t] + A.gg[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
and with the initial conditions
gg[0]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
D[gg[t], t] /. t -> 0
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Each of the constructed matrix functions, \( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \) and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) , \) is unique, independently of the method used, because each is the solution of the corresponding initial value problem. On the other hand, we cannot guarantee that the square roots of A that we are going to find are the only ones: there could be other roots.

To determine a square root of the given matrix, we consider the corresponding function \( r(\lambda ) = \lambda^{1/2} \equiv \sqrt{\lambda} , \) where we have to choose a particular branch because the square root is analytic but not single-valued: it assigns two outputs to every nonzero input. For example, \( 4^{1/2} = \pm 2 . \) Since the characteristic polynomial of matrix A is of degree 3, we assume that \( r({\bf A} ) \) is represented as

\[ r \left( {\bf A} \right) = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where the coefficients b0, b1, and b2 are determined from the linear system of algebraic equations:
\begin{eqnarray*} r(4) &=& 4^{1/2} = b_0 + 4\,b_1 + 16\, b_2 , \\ r' (4) &=& \frac{1}{2}\,4^{-1/2} = b_1 + 8\, b_2 , \\ r'' (4) &=& -\frac{1}{4}\,4^{-3/2} = 2\, b_2 . \end{eqnarray*}
Mathematica solves this system of algebraic equations without a problem:
ss=Solve[{2 == b0 + 4*b1 + 16*b2, 1/4 == b1 + 8*b2, -1/32 == 2*b2 }, {b0, b1, b2}] // Flatten
{b0 -> 3/4, b1 -> 3/8, b2 -> -(1/64)}
b0 = Part[Values@ss, 1]
3/4
b1 = Part[Values@ss, 2]
3/8
b2= Part[Values@ss, 3]
-1/64
With all coefficients at hand, we build the square root matrix
R1 = Simplify[b0*IdentityMatrix[3] + b1*A + b2*A.A]
{{53/16, 37/16, 79/8}, {9/32, 89/32, 43/ 16}, {-(17/64), -(33/64), -(3/32)}}
Therefore, we get two roots
\[ {\bf R}_1 = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 = \frac{1}{32} \begin{bmatrix} 106& 74 & 316 \\ 9 & 89 & 86 \\ -17/2 & -33/2 & -3 \end{bmatrix} , \qquad {\bf R}_2 = -{\bf R}_1 . \]
Finally, we check the answer:
R1.R1
{{9, 9, 38}, {1, 7, 10}, {-1, -2, -4}}
   ■

Example 7: Consider a diagonalizable symmetric 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 2&1&1 \\ 1&2&1 \\ 1&1&2 \end{bmatrix} . \]
The matrix has two eigenvalues \( \lambda_1 =1 \) and \( \lambda_2 =4 . \) The former has algebraic multiplicity 2, and its geometric multiplicity is also 2 because there are two linearly independent eigenvectors:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}
Eigenvalues[A]
{4, 1, 1}
Eigenvectors[A]
{{1, 1, 1}, {-1, 0, 1}, {-1, 1, 0}}
So the minimal polynomial is of second degree: \( \psi \left( \lambda \right) = \left( \lambda -1 \right) \left( \lambda -4 \right) , \) and the given matrix is derogatory. Therefore, we seek any function of matrix A as a polynomial of first degree: \( f({\bf A}) = b_0 {\bf I} + b_1 {\bf A} . \) In particular, if we consider a square root \( f(\lambda ) = \sqrt{\lambda} , \) we get the linear system of two algebraic equations
\[ f(1) = b_0 + b_1 \sqrt{1} , \qquad f(4) = b_0 + b_1 \sqrt{4} , \]
where we need to choose appropriate branches for square root. It is sufficient to consider two of them:
\begin{align*} 1 &= b_0 + b_1 , \\ 2 &= b_0 + 4\, b_1 ; \end{align*}
and
\begin{align*} -1 &= b_0 + b_1 , \\ 2 &= b_0 + 4\, b_1 . \end{align*}
We ask Mathematica to solve these equations:
Solve[{1 == b0 + b1, 2 == b0 + 4*b1}, {b0, b1}]
{{b0 -> 2/3, b1 -> 1/3}}
Solve[{-1 == b0 + b1, 2 == b0 + 4*b1}, {b0, b1}]
{{b0 -> -2, b1 -> 1}}
Then we build two roots:
Simplify[2*IdentityMatrix[3] + A]/3
{{4/3, 1/3, 1/3}, {1/3, 4/3, 1/3}, {1/3, 1/3, 4/3}}
R2 = Simplify[-2*IdentityMatrix[3] + A]
{{0, 1, 1}, {1, 0, 1}, {1, 1, 0}}
This gives two root matrices:
\[ {\bf R}_1 = \frac{1}{3} \begin{bmatrix} 4&1&1 \\ 1&4&1 \\ 1&1&4 \end{bmatrix} , \qquad {\bf R}_2 = \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} . \]
Two other roots are just the negatives of these two: \( {\bf R}_3 = - {\bf R}_1 , \quad {\bf R}_4 = - {\bf R}_2 . \) To verify this statement, we ask Mathematica
R2.R2
{{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}
We obtain a similar answer for each matrix root. Note that matrix A has other square roots that cannot be obtained with our method:
\[ \frac{1}{3} \begin{bmatrix} 1&4&1 \\ 4&1&1 \\ 1&1&4 \end{bmatrix} , \qquad \frac{1}{3} \begin{bmatrix} 4&1&1 \\ 1&1&4 \\ 1&4&1 \end{bmatrix} , \qquad \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 1&1&0 \end{bmatrix} , \qquad \begin{bmatrix} 0&1&1 \\ 1&1&0 \\ 1&0&1 \end{bmatrix} . \]
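It is straightforward to check that these are square roots too; for instance, for the third matrix:
B = {{1, 0, 1}, {0, 1, 1}, {1, 1, 0}};
B.B
{{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}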
Two of them are built in the next section.    ■
  1. Higham, Nicholas J., Functions of Matrices: Theory and Computation, Cambridge University Press, Cambridge, 2008.
  2. Leonard, I.E., The Matrix Exponential, SIAM Review, vol. 38, no. 3, 507--512, 1996.
  3. Moler, Cleve, and Van Loan, Charles, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review, vol. 45, no. 1, 3--49, 2003.

