
Polynomial Interpolation

Originally, spectral decomposition was developed for symmetric or self-adjoint matrices. Following tradition, we present this method for symmetric/self-adjoint matrices in the next chapter. Here we extend the spectral decomposition method to arbitrary square matrices by means of polynomial interpolation. Our objective is to define a function \( f ({\bf A} ) \) of a square matrix A for a certain class of functions. The function \( f (\lambda ) \) of a single variable λ is said to be admissible for a square matrix A if the values

\[ f^{(j)} (\lambda_i ) , \qquad j=0,1,\ldots , m_i -1, \quad i=1,2,\ldots s, \]
exist. Here \( m_i \) is the multiplicity of the eigenvalue \( \lambda_i \) and there are s distinct eigenvalues. The above values of the function f and, possibly, its derivatives at the eigenvalues are called the values of f on the spectrum of A. ■

In most cases of practical interest f is given by a formula, such as \( f (\lambda ) = e^{\lambda\, t} \) or \( f (\lambda ) = \cos \left( t\,\sqrt{\lambda} \right) . \) However, the following definition of \( f ({\bf A} ) \) requires only the values of f on the spectrum of A; it does not require any other information about f. For every \( n \times n \) matrix A, there exists a set of admissible functions for each of which we can define a matrix \( f ({\bf A} ) . \) Such a definition of the n-by-n matrix \( f ({\bf A} ) \) is not unique. Previously, we defined a function of a square matrix using the resolvent method or Sylvester's method. Other equivalent definitions can be found in the following references:

  • Higham, Nicholas J. Functions of Matrices: Theory and Computation. Cambridge University Press, Cambridge, 2008.
  • Leonard, I.E., The Matrix Exponential, SIAM Review, Vol. 38, No. 3, pp. 507--512, 1996.
  • Moler, Cleve, and Van Loan, Charles, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review, Vol. 45, No. 1, pp. 3--49, 2003.

One of the main uses of matrix functions in computational mathematics is solving nonlinear matrix equations, \( g ({\bf X} ) = {\bf A} . \) Two particular cases are especially important: finding a square root and a logarithm by solving the equations \( {\bf X}^2 = {\bf A} \) and \( e^{\bf X} = {\bf A} , \) respectively. It may happen that a matrix equation has a solution that is beyond the set of admissible functions. For instance, the identity matrix has infinitely many square roots, of which only a finite number can be constructed using admissible functions.

In applications to systems of ordinary differential equations, we usually need to construct \( f ({\bf A} ) \) for analytic functions such as \( f (\lambda ) = e^{\lambda\, t} \) or \( \displaystyle f (\lambda ) = \frac{\sin \left( \sqrt{\lambda}\, t \right)}{\sqrt{\lambda}} . \) In our exposition, we will assume that functions possess as many derivatives as needed; however, it is sufficient to consider admissible functions.

Let A be a square \( n \times n \) matrix and let f be an analytic function in a neighborhood of each of its eigenvalues. Using the Maclaurin series
\[ f(\lambda ) = f_0 + f_1 \lambda + f_2 \lambda^2 + \cdots + f_k \lambda^k + \cdots = \sum_{k\ge 0} f_k \lambda^k , \]
we can define the matrix-valued function \( f ({\bf A} ) \) as
\[ f({\bf A} ) = \sum_{k\ge 0} f_k {\bf A}^k . \]
We do not discuss convergence of the above series because we will define a function of a square matrix as a matrix polynomial; the series serves for illustration only. Let \( \psi (\lambda ) \) be the minimal polynomial of degree m for the matrix A. Then every power \( {\bf A}^p \) of the matrix A can be expressed as a polynomial of degree not higher than \( m-1. \) Therefore,
\[ f({\bf A} ) = \sum_{j= 0}^{m-1} b_j {\bf A}^j , \]
where the coefficients \( b_j , \ j=0,1,\ldots , m-1, \) satisfy the equations
\[ f(\lambda_k ) = \sum_{j= 0}^{m-1} b_j \,\lambda_k^j , \]
for each eigenvalue \( \lambda_k , \quad k=1,2,\ldots , s , \) where s is the number of distinct eigenvalues. If the eigenvalue \( \lambda_k \) is of multiplicity \( m_k \) in the minimal polynomial \( \psi (\lambda ) , \) then we need to add \( m_k -1 \) auxiliary algebraic equations
\[ \left. \frac{{\text d}^p f(\lambda )}{{\text d} \lambda^p} \right\vert_{\lambda = \lambda_k} = \left. \frac{{\text d}^p}{{\text d} \lambda^p} \, \sum_{j= 0}^{m-1} b_j \,\lambda^j \right\vert_{\lambda = \lambda_k} , \quad p=1,2,\ldots , m_k -1 . \]
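When all eigenvalues are simple, these conditions form a Vandermonde linear system for the coefficients \( b_j . \) The following Mathematica sketch automates this simplest case (the helper name matrixFunctionViaInterpolation and the 2-by-2 test matrix are our own illustrative choices; repeated eigenvalues would require the derivative conditions above):
(* a minimal sketch: build f(A) by interpolation on the spectrum, assuming distinct eigenvalues *)
matrixFunctionViaInterpolation[f_, a_?MatrixQ] := Module[{evals, m, b},
  evals = DeleteDuplicates[Eigenvalues[a]];   (* distinct eigenvalues *)
  m = Length[evals];                          (* number of interpolation conditions *)
  b = LinearSolve[Table[evals[[k]]^j, {k, m}, {j, 0, m - 1}], f /@ evals];
  Sum[b[[j + 1]] MatrixPower[a, j], {j, 0, m - 1}]]
a = {{0, 1}, {-2, -3}};       (* test matrix with eigenvalues -1 and -2 *)
Simplify[matrixFunctionViaInterpolation[Exp[# t] &, a] - MatrixExp[a t]]   (* should be the zero matrix *)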

It is instructive to consider the case where A is a rank-1 n-by-n matrix, so that \( {\bf A} = {\bf u}\,{\bf v}^{\ast} \) is the outer product of two n-vectors. In this case, a function of a rank-1 matrix is the sum of two terms:

\[ f \left( {\bf A} \right) \equiv f \left( {\bf u}\,{\bf v}^{\ast} \right) = f(0) \,{\bf I} + \left( \frac{f \left( {\bf v}^{\ast} \,{\bf u} \right) - f(0)}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} . \]

When an admissible function f and its derivative are defined at the origin and \( {\bf v}^{\ast} {\bf u} =0, \) the matrix function is given by
\[ f \left( {\bf A} \right) = f \left( {\bf u}{\bf v}^{\ast} \right) = f(0) \, {\bf I} + f' (0) \,{\bf u}{\bf v}^{\ast} , \]
where \( {\bf I} = {\bf A}^0 \) is the identity matrix.
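As a quick sanity check of the rank-1 formula, the sketch below compares it with the built-in MatrixExp for \( f (\lambda ) = e^{\lambda} \) (the vectors u and v are our own illustrative choice with \( {\bf v}^{\ast}{\bf u} \ne 0 \)):
u = {1, 2, -1}; v = {3, 0, 1};     (* illustrative real vectors, v.u = 2 *)
A = Outer[Times, u, v];            (* the rank-1 matrix u v^T *)
rhs = Exp[0] IdentityMatrix[3] + ((Exp[v.u] - Exp[0])/(v.u)) A;
Simplify[MatrixExp[A] - rhs]       (* should return the zero matrix *)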

Example: Consider two vectors: \( {\bf u} = \langle 1,1,1 \rangle \) and \( {\bf v} = \langle 1,0,3 \rangle . \) Then their products define the matrix A and the number

\[ {\bf A} = {\bf u} \, {\bf v}^{\ast} = \begin{bmatrix} 1&0&3 \\ 1&0&3 \\ 1&0&3 \end{bmatrix} \qquad\mbox{and} \qquad {\bf v}^{\ast} {\bf u} = {\bf v} \cdot {\bf u} = 4 . \]
This matrix has one simple eigenvalue \( \lambda =4 \) and one double eigenvalue \( \lambda = 0 , \) as Mathematica confirms:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}}
ss = Eigenvalues[A]
{4, 0, 0}
Eigenvectors[A]
{{1, 1, 1}, {-3, 0, 1}, {0, 1, 0}}
CharacteristicPolynomial[A, x]
4 x^2 - x^3
Since matrix A is diagonalizable, its minimal polynomial is of second degree: \( \psi (\lambda ) = \lambda \left( \lambda -4 \right) . \) We determine two matrix functions

\[ {\bf \Phi} ({\bf A}) = \frac{\sin \left( t \, \sqrt{\bf A} \right)}{\sqrt{\bf A}} \qquad\mbox{and} \qquad {\bf \Psi} ({\bf A}) = \cos \left( t \, \sqrt{\bf A} \right) \]
that correspond to single-valued functions \( \Phi (\lambda ) = \frac{\sin \left( t \, \sqrt{\lambda} \right)}{\sqrt{\lambda}} \quad\mbox{and} \quad \Psi (\lambda ) = \cos \left( t \, \sqrt{\lambda} \right) , \) respectively. Using the formula for rank 1 matrices, we get
\begin{eqnarray*} {\bf \Phi} ({\bf A}) &=& \Phi (0) \, {\bf I} + \left( \frac{\Phi \left( {\bf v}^{\ast} \,{\bf u} \right) - t}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} = t\,{\bf I} + \frac{1}{4} \left( \frac{\sin 2t}{2} - t \right) {\bf A} , \\ {\bf \Psi} ({\bf A}) &=& \Psi (0) \, {\bf I} + \left( \frac{\Psi \left( {\bf v}^{\ast} \,{\bf u} \right) - \Psi (0)}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} = {\bf I} + \frac{1}{4} \left( \cos 2t -1 \right) {\bf A} , \end{eqnarray*}
because \( \Phi (0) =t, \ \Phi \left( {\bf v}^{\ast} \,{\bf u} \right) = \Phi (4) = \frac{\sin 2t}{2} \) and \( \Psi (0) =1, \ \Psi \left( {\bf v}^{\ast} \,{\bf u} \right) = \Psi (4) = \cos 2t . \)

To verify that we specified the matrix functions correctly, we show that they are solutions of the following initial value problems (which have unique solutions):

\[ \ddot{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0}, \qquad {\bf \Phi} (0) = {\bf 0} , \quad \dot{\bf \Phi} (0) = {\bf I} , \]
and
\[ \ddot{\bf \Psi} (t) + {\bf A}\,{\bf \Psi} (t) = {\bf 0}, \qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{\bf \Psi} (0) = {\bf 0} . \]
Since \( \ddot{\bf \Phi} = - \frac{\sin 2t}{2}\,{\bf A} \) and \( \ddot{\bf \Psi} = -\cos 2t \,{\bf A} , \) we have to show that
\[ - \frac{\sin 2t}{2} \,{\bf A} + t\,{\bf A} + \frac{1}{4} \left( \frac{\sin 2t}{2} - t \right) {\bf A}^2 = {\bf 0} \qquad\mbox{and} \qquad -\cos 2t \,{\bf A} + {\bf A} + \frac{1}{4} \left( \cos 2t - 1 \right) {\bf A}^2 = {\bf 0} . \]
Therefore, we have to show that \( {\bf A}^{2} = 4\,{\bf A} . \) Mathematica confirms
A.A
{{4, 0, 12}, {4, 0, 12}, {4, 0, 12}}
Since the initial conditions clearly hold, we have shown that the constructed matrix functions are correct.
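The same verification can also be delegated entirely to Mathematica; the short sketch below rebuilds Φ(t) and Ψ(t) from the formulas above and checks the differential equations and initial conditions (the names Phi and Psi are ours):
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}};
Phi[t_] = t*IdentityMatrix[3] + 1/4 (Sin[2 t]/2 - t)*A;
Psi[t_] = IdentityMatrix[3] + 1/4 (Cos[2 t] - 1)*A;
Simplify[{D[Phi[t], t, t] + A.Phi[t], D[Psi[t], t, t] + A.Psi[t]}]   (* two zero matrices *)
{Phi[0], D[Phi[t], t] /. t -> 0, Psi[0], D[Psi[t], t] /. t -> 0}     (* should give 0, I, I, 0 *)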

Example: Consider a diagonalizable 4-by-4 matrix

\[ {\bf A} = \begin{bmatrix} -4&7&1&4 \\ 6&-16&-3&-9 \\ 12&-27&-4&-15 \\ -18&43&7&24 \end{bmatrix} . \]
Its characteristic polynomial is
\[ \chi (\lambda ) = \det \left( \lambda {\bf I} - {\bf A} \right) = \lambda \left( \lambda -2 \right) \left( \lambda +1 \right)^2 , \]
but its minimal polynomial is a product of simple terms:
\[ \psi (\lambda ) = \lambda \left( \lambda -2 \right) \left( \lambda +1 \right) . \]
We check it with Mathematica:
A = {{-4, 7, 1, 4}, {6, -16, -3, -9}, {12, -27, -4, -15}, {-18, 43, 7, 24}}
Eigenvalues[A]
{2, -1, -1, 0}
Eigenvectors[A]
{{1, -2, -4, 6}, {-1, -1, 0, 1}, {-2, -1, 1, 0}, {1, -3, -3, 7}}
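The claim about the minimal polynomial can also be checked directly: ψ(λ) = λ(λ − 2)(λ + 1) must annihilate A. A quick check:
id = IdentityMatrix[4];
A.(A - 2 id).(A + id)      (* evaluates psi(A); should be the 4x4 zero matrix *)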

Therefore, matrix A is diagonalizable because its minimal polynomial is a product of simple factors, and we don't need to find eigenspaces. For the exponential function \( f (\lambda ) = e^{\lambda\, t} \) we construct the corresponding matrix function \( f ({\bf A} ) = e^{{\bf A}\, t} \) as a polynomial of degree 2 (one less than the degree of the minimal polynomial):
\[ e^{{\bf A}\, t} = b_0 {\bf I} + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where coefficients are determined from the system of algebraic equations
\begin{eqnarray*} e^{0\cdot t} = 1 &=& b_0 , \\ e^{2t} &=& b_0 + 2\,b_1 + 4\, b_2 , \\ e^{-t} &=& b_0 - b_1 + b_2 . \end{eqnarray*}
Solving with Mathematica,
Solve[{Exp[2*t] == 1 + 2*b1 + 4*b2, Exp[-t] == 1 - b1 + b2}, {b1, b2}]
{{b1 -> 1/6 E^-t (-4 + 3 E^t + E^(3 t)), b2 -> 1/6 E^-t (-1 + E^t)^2 (2 + E^t)}}
we get the exponential matrix function:
\[ e^{{\bf A}\, t} = {\bf I} + \frac{1}{6} \left( 3 -4\, e^{-t} + e^{2t} \right) {\bf A} + \frac{1}{6} \left( -3 +2\, e^{-t} + e^{2t} \right) {\bf A}^2 . \]
We check the answer again with Mathematica:
f[t_] = Simplify[ IdentityMatrix[4] + (3 - 4*Exp[-t] + Exp[2*t])* A/6 + (-3 + 2*Exp[-t] + Exp[2*t])*A.A/6]
Simplify[f[t] - MatrixExp[A*t]]
{{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}}
However, this is not a genuine verification because we are merely comparing our constructed exponential with another matrix provided by Mathematica. Of course, we trust Mathematica, but we need an independent verification. It is known that the matrix exponential \( f ({\bf A} ) = e^{{\bf A}\, t} \) is the unique solution of the following initial value problem for a matrix differential equation:
\[ \frac{\text d}{{\text d}t}\, {\bf \Phi} = {\bf A} \,{\bf \Phi} (t), \qquad {\bf \Phi} (0) = {\bf I} , \]
where \( {\bf I} = {\bf A}^0 \) is the identity matrix. So we check with Mathematica:
Simplify[D[f[t], t] - A.f[t]]
{{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}}
f[0]
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
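As mentioned at the beginning, the main application of such a matrix exponential is to systems of ordinary differential equations: the solution of \( \dot{\bf x} = {\bf A}\,{\bf x} \) with \( {\bf x}(0) = {\bf x}_0 \) is \( {\bf x}(t) = e^{{\bf A}t}\,{\bf x}_0 . \) A short sketch, with an illustrative initial vector of our own choosing:
x0 = {1, 0, 0, 0};               (* illustrative initial condition *)
x[t_] = f[t].x0;                 (* f[t] is the matrix exponential built above *)
Simplify[D[x[t], t] - A.x[t]]    (* should be the zero vector *)
x[0]                             (* should return x0 *)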

Example: Consider a nondiagonalizable 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 9&9&38 \\ 1&7&10 \\ -1&-2&-4 \end{bmatrix} . \]
This matrix has only one eigenvalue of multiplicity 3.
A = {{9, 9, 38}, {1, 7, 10}, {-1, -2, -4}}
Eigenvalues[A]
{4, 4, 4}
Eigenvectors[A]
{{-4, -2, 1}, {0, 0, 0}, {0, 0, 0}}
Its characteristic polynomial is
CharacteristicPolynomial[A,s]
64 - 48 s +12 s^2 - s^3
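Because there is only one linearly independent eigenvector, A is not diagonalizable; its Jordan form is a single 3-by-3 Jordan block with 4 on the diagonal, which can be confirmed with the built-in JordanDecomposition:
{sim, jord} = JordanDecomposition[A];
jord      (* {{4, 1, 0}, {0, 4, 1}, {0, 0, 4}} *)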

We are going to determine two matrix functions \( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \) and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \) corresponding to functions \( \displaystyle f \left( \lambda \right) = \frac{\sin \left( t\,\sqrt{\lambda} \right)}{\sqrt{\lambda}} \) and \( \displaystyle g \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) , \) respectively. Then we find square roots of matrix A.

Let us start with \( \displaystyle f \left( {\bf A} \right) , \) which does not depend on which branch of the square root is chosen in \( \displaystyle \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} . \) We seek this matrix function in the form

\[ {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where the coefficients b0, b1, and b2 are determined from the system of algebraic equations:
\begin{eqnarray*} f(4) &=& \frac{\sin 2t}{2} = b_0 + 4\,b_1 + 16\, b_2 , \\ f' (4) &=& \frac{1}{8}\,t\,\cos 2t - \frac{1}{16}\, \sin 2t = b_1 + 8\, b_2 , \\ f'' (4) &=& -\frac{3}{64}\,t\,\cos 2t + \frac{3}{128}\,\sin 2t - \frac{1}{32}\,t^2 \sin 2t = 2\, b_2 . \end{eqnarray*}
f[s_] = (Sin[t*Sqrt[s]])/Sqrt[s]
D[f[s], s] /. s -> 4
1/8 t Cos[2 t] - 1/16 Sin[2 t]
D[f[s], s, s] /. s -> 4
-(3/64) t Cos[2 t] + 3/128 Sin[2 t] - 1/32 t^2 Sin[2 t]
Now we ask Mathematica to solve the system of algebraic equations
Solve[{Sin[2*t]/2 == b0 + 4*b1 + 16*b2,
1/8 t Cos[2 t] - 1/16 Sin[2 t] == b1 + 8*b2,
-(3/64) t Cos[2 t] + 3/128 Sin[2 t] - 1/32 t^2 Sin[2 t] == 2*b2 }, {b0, b1, b2}]
{{b0 -> 1/16 (-14 t Cos[2 t] + 15 Sin[2 t] - 4 t^2 Sin[2 t]),
b1 -> 1/32 (10 t Cos[2 t] - 5 Sin[2 t] + 4 t^2 Sin[2 t]),
b2 -> 1/256 (-6 t Cos[2 t] + 3 Sin[2 t] - 4 t^2 Sin[2 t])}}
\begin{eqnarray*} b_0 &=& \frac{1}{16} \left( 15\,\sin 2t -4t^2 \sin 2t -14t\,\cos 2t \right) , \\ b_1 &=& \frac{1}{32}\left( 10t\,\cos 2t - 5\, \sin 2t + 4t^2 \sin 2t \right) , \\ b_2 &=& \frac{1}{256}\left( 3\,\sin 2t - 4\,t^2 \sin 2t -6t\,\cos 2t \right) . \end{eqnarray*}
Next, we build the matrix function
ff[t_] = Simplify[(1/ 16 (-14 t Cos[2 t] + 15 Sin[2 t] - 4 t^2 Sin[2 t]))* IdentityMatrix[ 3]
+ (1/32 (10 t Cos[2 t] - 5 Sin[2 t] + 4 t^2 Sin[2 t]))* A
+ (1/256 (-6 t Cos[2 t] + 3 Sin[2 t] - 4 t^2 Sin[2 t]))*A.A]
{{1/64 (46 t Cos[2 t] + (9 + 4 t^2) Sin[2 t]),
1/64 (78 t Cos[2 t] + (-39 + 4 t^2) Sin[2 t]),
1/32 (170 t Cos[2 t] + (-85 + 12 t^2) Sin[2 t])},
{1/ 128 (22 t Cos[2 t] + (-11 + 4 t^2) Sin[2 t]),
1/128 (54 t Cos[2 t] + (37 + 4 t^2) Sin[2 t]),
1/64 (98 t Cos[2 t] + (-49 + 12 t^2) Sin[2 t])},
{1/ 256 (-38 t Cos[2 t] + (19 - 4 t^2) Sin[2 t]),
1/256 (-70 t Cos[2 t] + (35 - 4 t^2) Sin[2 t]),
1/128 (-146 t Cos[2 t] + (137 - 12 t^2) Sin[2 t])}}
which gives
\[ {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} = \frac{1}{256} \begin{bmatrix} 4 \left[ 46t\,\cos 2t + (9 + 4t^2 )\,\sin 2t \right] & 4 \left[ 78t\,\cos 2t + (-39 + 4t^2 )\,\sin 2t \right] & 8 \left[ 170t\,\cos 2t + (-85 + 12t^2 )\,\sin 2t \right] \\ 2 \left[ 22t\,\cos 2t + (-11 + 4t^2 )\,\sin 2t \right] & 2 \left[ 54t\,\cos 2t + (37 + 4t^2 )\,\sin 2t \right] & 4 \left[ 98t\,\cos 2t + (-49 + 12t^2 )\,\sin 2t \right] \\ -38t\,\cos 2t + (19 - 4t^2 )\,\sin 2t & -70t\,\cos 2t + (35 - 4t^2 )\,\sin 2t & 2 \left[ -146t\,\cos 2t + (137 - 12 t^2 )\,\sin 2t \right] \end{bmatrix} \]
Note that matrices A and \( \displaystyle {\bf \Phi} (t) \) commute. To verify that we obtained the correct matrix function, we have to show that \( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \) is a solution to the initial value problem
\[ \frac{{\text d}^2}{{\text d} t^2} \,{\bf \Phi} (t) + {\bf A}\,{\bf \Phi} (t) = {\bf 0} , \qquad {\bf \Phi} (0) = {\bf 0} , \quad \dot{{\bf \Phi}} (0) = {\bf I} . \]
We again delegate this job to Mathematica:
Simplify[D[ff[t], t, t] + A.ff[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
ff[0]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
D[ff[t], t] /. t -> 0
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
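As an additional cross-check, recent versions of Mathematica provide the built-in MatrixFunction, which should reproduce the same Φ(t) directly from the scalar function (a sketch):
Simplify[ff[t] - MatrixFunction[Function[x, Sin[t Sqrt[x]]/Sqrt[x]], A]]
(* should return the 3x3 zero matrix *)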

Now we build another matrix function \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \) using exactly the same steps. So we seek it in the form

\[ {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where the coefficients b0, b1, and b2 are determined from the system of algebraic equations:
\begin{eqnarray*} g(4) &=& \cos 2t = b_0 + 4\,b_1 + 16\, b_2 , \\ g' (4) &=& -\frac{1}{4}\,t\,\sin 2t = b_1 + 8\, b_2 , \\ g'' (4) &=& -\frac{1}{16}\,t^2\,\cos 2t + \frac{1}{32}\,t\,\sin 2t = 2\, b_2 . \end{eqnarray*}
Here \( \displaystyle g \left( \lambda \right) = \cos \left( t\,\sqrt{\lambda} \right) . \) We solve the system of algebraic equations using Mathematica:
g[s_] = Cos[t*Sqrt[s]]
D[g[s], s] /. s -> 4
-(1/4) t Sin[2 t]
D[g[s], s, s] /. s -> 4
-(1/16) t^2 Cos[2 t] + 1/32 t Sin[2 t]
Solve[{Cos[2*t] == b0 + 4*b1 + 16*b2,
-(1/4) t Sin[2 t] == b1 + 8*b2,
-(1/16) t^2 Cos[2 t] + 1/32 t Sin[2 t] == 2*b2 }, {b0, b1, b2}]
{{b0 -> 1/4 (4 Cos[2 t] - 2 t^2 Cos[2 t] + 5 t Sin[2 t]),
b1 -> 1/8 (2 t^2 Cos[2 t] - 3 t Sin[2 t]),
b2 -> 1/64 (-2 t^2 Cos[2 t] + t Sin[2 t])}}
Finally, we build the matrix function \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) : \)
gg[t_] = Simplify[ 1/4 (4 Cos[2 t] - 2 t^2 Cos[2 t] + 5 t Sin[2 t])* IdentityMatrix[3]
+ 1/8 (2 t^2 Cos[2 t] - 3 t Sin[2 t])*A
+ 1/64 (-2 t^2 Cos[2 t] + t Sin[2 t])*A.A]
{{1/8 ((8 + t^2) Cos[2 t] - 21 t Cos[t] Sin[t]), 1/16 t (2 t Cos[2 t] - 37 Sin[2 t]), 1/8 t (6 t Cos[2 t] - 79 Sin[2 t])},
{1/ 32 t (2 t Cos[2 t] - 9 Sin[2 t]), 1/32 (2 (16 + t^2) Cos[2 t] - 25 t Sin[2 t]), 1/16 t (6 t Cos[2 t] - 43 Sin[2 t])},
{-(1/64) t (2 t Cos[2 t] - 17 Sin[2 t]), -(1/64) t (2 t Cos[2 t] - 33 Sin[2 t]), (1 - (3 t^2)/16) Cos[2 t] + 67/32 t Sin[2 t]}}
To verify that our function was obtained correctly, we need to show that it is a solution (which is unique) of the initial value problem
\[ \frac{{\text d}^2}{{\text d} t^2} \,{\bf \Psi} (t) + {\bf A}\,{\bf \Psi} (t) = {\bf 0} , \qquad {\bf \Psi} (0) = {\bf I} , \quad \dot{{\bf \Psi}} (0) = {\bf 0} . \]
Mathematica helps with the equation
Simplify[D[gg[t], t, t] + A.gg[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
and with the initial conditions
gg[0]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
D[gg[t], t] /. t -> 0
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Each of the constructed matrix functions \( \displaystyle {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \) and \( \displaystyle {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \) is unique, independently of the method used, because each is the solution of the corresponding initial value problem. On the other hand, we cannot guarantee that the square roots of A we are about to find are the only ones available: there could be other roots.

To determine a square root of the given matrix, we consider the corresponding function \( r(\lambda ) = \lambda^{1/2} \equiv \sqrt{\lambda} , \) where we have to choose a particular branch because the square root is not a single-valued function: it assigns two values to every nonzero input. For example, \( 4^{1/2} = \pm 2 . \) Since the minimal (and characteristic) polynomial of matrix A is of degree 3, we assume that \( r({\bf A} ) \) is represented as

\[ r \left( {\bf A} \right) = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where the coefficients b0, b1, and b2 are determined from the system of algebraic equations:
\begin{eqnarray*} r(4) &=& 4^{1/2} = 2 = b_0 + 4\,b_1 + 16\, b_2 , \\ r' (4) &=& \frac{1}{2}\,4^{-1/2} = \frac{1}{4} = b_1 + 8\, b_2 , \\ r'' (4) &=& -\frac{1}{4}\,4^{-3/2} = -\frac{1}{32} = 2\, b_2 . \end{eqnarray*}
Mathematica solves this system of algebraic equations without a problem:
ss = Solve[{2 == b0 + 4*b1 + 16*b2, 1/4 == b1 + 8*b2, -1/32 == 2*b2}, {b0, b1, b2}] // Flatten
{b0 -> 3/4, b1 -> 3/8, b2 -> -(1/64)}
b0 = Part[Values@ss, 1]
3/4
b1 = Part[Values@ss, 2]
3/8
b2= Part[Values@ss, 3]
-1/64
With all coefficients at hand, we build the square root matrix
R1 = Simplify[b0*IdentityMatrix[3] + b1*A + b2*A.A]
{{53/16, 37/16, 79/8}, {9/32, 89/32, 43/ 16}, {-(17/64), -(33/64), -(3/32)}}
Therefore, we get two roots
\[ {\bf R}_1 = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 = \frac{1}{32} \begin{bmatrix} 106& 74 & 316 \\ 9 & 89 & 86 \\ -17/2 & -33/2 & -3 \end{bmatrix} , \qquad {\bf R}_2 = -{\bf R}_1 . \]
Finally, we check the answer:
R1.R1
{{9, 9, 38}, {1, 7, 10}, {-1, -2, -4}}
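Since the only eigenvalue of A is positive, R1 should coincide with the principal square root that Mathematica computes via MatrixPower with a fractional exponent (a sketch; we assume the current version handles fractional powers of this nondiagonalizable matrix):
Simplify[MatrixPower[A, 1/2] - R1]    (* should return the zero matrix *)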

Example: Consider a diagonalizable symmetric 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 2&1&1 \\ 1&2&1 \\ 1&1&2 \end{bmatrix} . \]
The matrix has two eigenvalues \( \lambda_1 =1 \) and \( \lambda_2 =4 . \) The former has algebraic multiplicity 2, and its geometric multiplicity is also 2 because there are two linearly independent eigenvectors:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}
Eigenvalues[A]
{4, 1, 1}
Eigenvectors[A]
{{1, 1, 1}, {-1, 0, 1}, {-1, 1, 0}}
So the minimal polynomial is of second degree: \( \psi \left( \lambda \right) = \left( \lambda -1 \right) \left( \lambda -4 \right) . \) Therefore, we seek any function of matrix A as a polynomial of first degree: \( f({\bf A}) = b_0 {\bf I} + b_1 {\bf A} . \) In particular, for the square root \( f(\lambda ) = \sqrt{\lambda} , \) we get the system of two algebraic equations
\[ f(1) = b_0 + b_1 \sqrt{1} , \qquad f(4) = b_0 + b_1 \sqrt{4} , \]
where we need to choose appropriate branches for the square root. It is sufficient to consider two of them:
\begin{align*} 1 &= b_0 + b_1 , \\ 2 &= b_0 + 4\, b_1 ; \end{align*}
and
\begin{align*} -1 &= b_0 + b_1 , \\ 2 &= b_0 + 4\, b_1 . \end{align*}
We ask Mathematica to solve these equations:
Solve[{1 == b0 + b1, 2 == b0 + 4*b1}, {b0, b1}]
{{b0 -> 2/3, b1 -> 1/3}}
Solve[{-1 == b0 + b1, 2 == b0 + 4*b1}, {b0, b1}]
{{b0 -> -2, b1 -> 1}}
Then we build two roots:
Simplify[2*IdentityMatrix[3] + A]/3
{{4/3, 1/3, 1/3}, {1/3, 4/3, 1/3}, {1/3, 1/3, 4/3}}
R2 = Simplify[-2*IdentityMatrix[3] + A]
{{0, 1, 1}, {1, 0, 1}, {1, 1, 0}}
This gives two root matrices:
\[ {\bf R}_1 = \frac{1}{3} \begin{bmatrix} 4&1&1 \\ 1&4&1 \\ 1&1&4 \end{bmatrix} , \qquad {\bf R}_2 = \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} . \]
The two other roots are just the negatives of these two: \( {\bf R}_3 = - {\bf R}_1 , \quad {\bf R}_4 = - {\bf R}_2 . \) To verify, we ask Mathematica
R2.R2
{{2, 1, 1}, {1, 2, 1}, {1, 1, 2}}
We obtain a similar answer for each matrix root. Note that matrix A has other square roots that cannot be obtained with our method:
\[ \frac{1}{3} \begin{bmatrix} 1&4&1 \\ 4&1&1 \\ 1&1&4 \end{bmatrix} , \qquad \frac{1}{3} \begin{bmatrix} 4&1&1 \\ 1&1&4 \\ 1&4&1 \end{bmatrix} , \qquad \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 1&1&0 \end{bmatrix} , \qquad \begin{bmatrix} 0&1&1 \\ 1&1&0 \\ 1&0&1 \end{bmatrix} . \]
Two of them are built in the next chapter.
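Although these matrices are not of the form \( b_0 {\bf I} + b_1 {\bf A} , \) it is easy to confirm with Mathematica that they square to A; for instance, for the first and the third of them:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}};
S1 = 1/3*{{1, 4, 1}, {4, 1, 1}, {1, 1, 4}};
S3 = {{1, 0, 1}, {0, 1, 1}, {1, 1, 0}};
{S1.S1 == A, S3.S3 == A}      (* both should return True *)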

Example: Consider a 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 1&2&3 \\ 2&3&4 \\ 2&-6&-4 \end{bmatrix} \]
that has two complex conjugate eigenvalues.
A = {{1, 2, 3}, {2, 3, 4}, {2, -6, -4}}
Eigenvalues[A]
{1 + 2 I, 1 - 2 I, -2}
We can define eigenvalues in Mathematica as
s = Eigenvalues[A]
lambda1 = s[[1]]
1 + 2 I
lambda3 = s[[3]]
-2
We don't need to define the second complex eigenvalue because it is the complex conjugate of the first one. Since the minimal polynomial for matrix A has degree 3, we seek the exponential matrix function in the form:
\[ e^{{\bf A}\, t} = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 , \]
where the coefficients b0, b1, and b2 are determined from the system of algebraic equations:
\begin{eqnarray*} e^{(1 + 2{\bf j})t} &=& b_0 + (1 + 2{\bf j})\,b_1 + (1 + 2{\bf j})^2\, b_2 , \\ e^{(1 - 2{\bf j})t} &=& b_0 + (1 - 2{\bf j})\,b_1 + (1 - 2{\bf j})^2\, b_2 , \\ e^{-2t} &=& b_0 -2\, b_1 + 4\, b_2 . \end{eqnarray*}
Mathematica solves this system of algebraic equations without a problem:
ss = Solve[{Exp[(1 + 2 I)*t] == b0 + (1 + 2 I)*b1 + (1 + 2 I)^2*b2,
Exp[(1 - 2 I)*t] == b0 + (1 - 2 I)*b1 + (1 - 2 I)^2*b2,
Exp[-2*t] == b0 - 2*b1 + 4*b2 }, {b0, b1, b2}] // Flatten
{b0 -> (5 E^(-2 t))/ 13 + (4/13 + I/26) E^((1 - 2 I) t) + (4/13 - I/26) E^((1 + 2 I) t),
b1 -> -(1/52) E^((-2 - 2 I) t) (8 E^(2 I t) - (4 + 7 I) E^( 3 t) - (4 - 7 I) E^((3 + 4 I) t)),
b2 -> -(1/52) E^((-2 - 2 I) t) (-4 E^(2 I t) + (2 - 3 I) E^( 3 t) + (2 + 3 I) E^((3 + 4 I) t))}
We extract coefficients from the given list
b0 = Part[Values@ss, 1]
b1 = Part[Values@ss, 2]
b2 = Part[Values@ss, 3]
However, they all contain the imaginary unit, so we convert them to an explicitly real form with the following commands:
b0 = Simplify[ComplexExpand[b0]]
1/13 E^(-2 t) (5 + 8 E^(3 t) Cos[2 t] + E^(3 t) Sin[2 t])
b1 = Simplify[ComplexExpand[b1]]
1/26 E^(-2 t) (-4 + 4 E^(3 t) Cos[2 t] + 7 E^(3 t) Sin[2 t])
b2 = Simplify[ComplexExpand[b2]]
1/52 E^(-2 t) (4 - 4 E^(3 t) Cos[2 t] + 6 E^(3 t) Sin[2 t])
Finally, we define the exponential matrix function
exp[t_] = Simplify[b0*IdentityMatrix[3] + b1*A + b2*A.A]
{{1/13 E^(-2 t) (14 - E^(3 t) Cos[2 t] + 21 E^(3 t) Sin[2 t]), 2/13 E^(-2 t) (-7 + 7 E^(3 t) Cos[2 t] - 4 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (-7 + 7 E^(3 t) Cos[2 t] + 9 E^(3 t) Sin[2 t])}, {1/ 13 E^(-2 t) (12 - 12 E^(3 t) Cos[2 t] + 31 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (-12 + 25 E^(3 t) Cos[2 t] - 5 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (-6 + 6 E^(3 t) Cos[2 t] + 17 E^(3 t) Sin[2 t])}, {2/ 13 E^(-2 t) (-11 + 11 E^(3 t) Cos[2 t] - 10 E^(3 t) Sin[2 t]), -(2/ 13) E^(-2 t) (-11 + 11 E^(3 t) Cos[2 t] + 3 E^(3 t) Sin[2 t]), 1/13 E^(-2 t) (11 + 2 E^(3 t) Cos[2 t] - 16 E^(3 t) Sin[2 t])}}
\[ {\bf exp} (t) = e^{{\bf A}\, t} = b_0 + b_1 {\bf A} + b_2 {\bf A}^2 = \frac{1}{13} \, e^{-2t} \begin{bmatrix} 14&-14&-7 \\ 12& -12& -6 \\ -22 & 22& 11 \end{bmatrix} + \frac{\cos 2t}{13} \, e^{t} \begin{bmatrix} -1&14&7 \\ -12& 25 & 6 \\ 22 & -22 & 2 \end{bmatrix} + \frac{\sin 2t}{13} \, e^{t} \begin{bmatrix} 21& -8&9 \\ 31& -5& 17 \\ -20 & -6 & -16 \end{bmatrix} . \]
This matrix function is a unique solution to the initial value problem
\[ \frac{{\text d}}{{\text d} t} \,{\bf exp} (t) = {\bf A}\,{\bf exp} (t) , \qquad {\bf exp} (0) = {\bf I} . \]
Therefore, we check with Mathematica
Simplify[D[exp[t], t] - A.exp[t]]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
exp[0]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
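Finally, as in the earlier examples, we can also compare our answer with the built-in MatrixExp; since the latter is returned in terms of complex exponentials, some simplification is needed (a sketch; FullSimplify should reduce the difference to the zero matrix):
FullSimplify[exp[t] - MatrixExp[A*t]]    (* should return the 3x3 zero matrix *)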