MuPAD Tutorial: Roots

Roots

  This web page serves as an introduction to the definition of a function of a square matrix, which may have several meanings. In particular, for some scalar function f and a square matrix A, we specify \( f ({\bf A} ) \) to be a matrix of the same dimensions as A. Since our objective is applications of matrix functions to differential equations, we usually work with holomorphic functions of a single variable (those represented by convergent power series). Previously, we used several equivalent definitions of matrix functions for admissible functions, namely those defined on the spectrum of A, meaning that the values

\[ f^{(j)} (\lambda_i ) , \qquad j=0,1,\ldots , m_i -1, \quad i=1,2,\ldots s \]
exist for every eigenvalue λi of multiplicity mi. Many other definitions of a function of a square matrix are known (see the monograph by Nicholas J. Higham):
  • Higham, Nicholas J. Functions of Matrices: Theory and Computation. Cambridge University Press, Cambridge, 2008.

 We know that every non-zero number has precisely two square roots; for instance, -1 has the two square roots j and -j, where j is the imaginary unit (the unit in the positive vertical direction of the complex plane \( \mathbb{C} \)). The situation is rather more complicated for matrices. In fact, some matrices have infinitely many square roots, others have only a finite number, and some have none at all.

 The matrix square root and logarithm are among the most commonly occurring matrix functions, arising most frequently in the context of symmetric positive definite matrices. The key roles that the square root plays in, for example, the matrix sign function, the definite generalized eigenvalue problem, the polar decomposition, and the geometric mean make it a useful theoretical and computational tool. We consider matrix square roots in connection with solving second-order differential equations, where the functions

\[ {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} \qquad \mbox{and} \qquad {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \]
occur frequently. Although a square root may fail to exist for some matrices, and other matrices may have infinitely many square roots, the above functions are uniquely defined for an arbitrary square matrix A. Indeed, one infinite family of square roots of the \( 2 \times 2 \) identity matrix comprises the Householder reflections
\[ {\bf H} (\theta ) = \begin{bmatrix} \cos \theta & \sin \theta \\ \sin \theta & -\cos \theta \end{bmatrix} , \qquad \theta \in [0, 2\pi ] . \]
We always assume the choice of branches of the square root function is fixed in the neighbourhoods of the eigenvalues of the matrix of interest, since otherwise the square root is not even continuous.

Theorem: An arbitrary n-by-n matrix A has a square root if and only if in the "ascent sequence" of integers d1, d2, ... defined by

\[ d_i = \mbox{dim} \left( \mbox{ker}\left( {\bf A}^i \right) \right) - \mbox{dim} \left( \mbox{ker}\left( {\bf A}^{i-1} \right) \right) , \qquad i=1,2,\ldots , \]
no two terms are the same odd integer. ■

  When a square root exists (for instance, when A has no eigenvalues on the closed negative real axis), it can be found via the Cauchy integral

\[ {\bf A}^{1/2} = \frac{1}{2\pi {\bf j}} \, \int_{\gamma} {\text d} z\,f(z) \left( z{\bf I} - {\bf A} \right)^{-1} , \]
where f(z) = z1/2 is taken to be holomorphic on and inside a closed contour γ that encloses all eigenvalues of A (and avoids the branch cut of the square root). ■
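
As an illustration, the integral can be evaluated numerically. The following Mathematica sketch uses a sample 2-by-2 matrix of our own choosing and a circular contour that encloses both eigenvalues while staying away from the branch cut of \( z^{1/2} \) along the negative real axis:

A = {{2., 1.}, {1., 2.}};       (* sample matrix with eigenvalues 1 and 3 *)
z[t_] := 2 + 1.5 Exp[I t];      (* circle of radius 3/2 centered at 2: encloses both eigenvalues *)
root = Table[(1/(2 Pi I)) NIntegrate[Sqrt[z[t]] Inverse[z[t] IdentityMatrix[2] - A][[i, j]] z'[t], {t, 0, 2 Pi}], {i, 2}, {j, 2}];
Chop[root.root]                 (* should reproduce A up to numerical error *)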

Square Roots of 2-by-2 Matrices


 We start with diagonalizable 2-by-2 matrices, i.e., matrices that can be expressed in the form

\[ {\bf A} = {\bf S} {\bf \Lambda} {\bf S}^{-1} , \qquad \mbox{with } \quad {\bf \Lambda} = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} , \quad \det{\bf S} \ne 0 . \]
Obviously, a and b are the (not necessarily distinct) eigenvalues of A. It is easy to verify that all matrices of the form
\[ {\bf R} = {\bf S} \begin{bmatrix} \pm\sqrt{a} & 0 \\ 0 & \pm\sqrt{b} \end{bmatrix} {\bf S}^{-1} \]
furnish square roots of A. If \( a\ne 0 \) and \( b \ne 0 , \) then there are four such square roots. If exactly one of a and b is zero, there are two such square roots. If a = b = 0, there is just one. But are these the only square roots?
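
Before answering that question, here is a short Mathematica illustration (the sample matrix below is our own choice) that builds the four candidates \( {\bf S}\, \mbox{diag}( \pm\sqrt{a}, \pm\sqrt{b})\, {\bf S}^{-1} \) and verifies that each of them squares to A:

A = {{5, 2}, {2, 5}};                        (* sample diagonalizable matrix with eigenvalues 7 and 3 *)
{vals, vecs} = Eigensystem[A];
S = Transpose[vecs]; Sinv = Inverse[S];
roots = Flatten[Table[S.DiagonalMatrix[{s1, s2} Sqrt[vals]].Sinv, {s1, {1, -1}}, {s2, {1, -1}}], 1];
Simplify[Map[#.# &, roots]]                  (* each of the four products equals A *)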

  Now it is easy to check that (with A, S, and Λ as above)

\[ {\bf X}^2 = {\bf \Lambda} \qquad\mbox{if and only if} \qquad {\bf R}^2 = {\bf A} , \quad \mbox{where} \quad {\bf R} = {\bf S} {\bf X} {\bf S}^{-1} . \]
Therefore, it suffices to find all matrices X satisfying \( {\bf X}^2 = {\bf \Lambda} . \) Let
\[ {\bf X} = \begin{bmatrix} x & y \\ z & w \end{bmatrix} \]
so that from \( {\bf X}^2 = {\bf \Lambda} \) we obtain four simultaneous quadratic equations
\begin{cases} x^2 + yz = a, \qquad y\left( x+w\right) = 0, \\ z\left( x+w \right) = 0, \qquad w^2 + yz = b. \end{cases}
If \( x+w \ne 0 , \) then we require that \( y=z=0 , \) which in turn leads to \( x = \pm \sqrt{a} , \quad w= \pm \sqrt{b} , \) and the matrix roots already found above. If, on the other hand, we have \( x+w =0, \) then we get
\[ x^2 + yz =a, \qquad x^2 + yz =b . \]
These equations are contradictory if \( a\ne b , \) so for matrices with two distinct eigenvalues (and hence two linearly independent eigenvectors) we have already found all the square roots: there are exactly four if a and b are distinct and both nonzero, and exactly two if exactly one of a and b is zero.

  The second type of matrix to consider is one that has just one eigenvalue but two independent eigenvectors. Then \( a =b , \quad {\bf \Lambda} = a{\bf I} = {\bf A} \) and the matrix S is redundant. In this case the above equations with \( x+w =0 \) yield the consistent equations

\[ x^2 + yz =a, \qquad x^2 + yz =a . \]
The case y = 0 leads back to the "obvious" square roots already found, while the case \( y \ne 0 \) gives \( z= (a-x^2 )/y \) and yields
\[ {\bf X} = \begin{bmatrix} x & y \\ \frac{a-x^2}{y} & -x \end{bmatrix} , \qquad y \ne 0, \]
as a square root of aI for any x and \( y \ne 0. \) In particular, by taking a = 0 this gives all the additional square roots of the zero matrix.
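
This family is easy to confirm symbolically; a minimal Mathematica check with symbolic a, x, and y:

X = {{x, y}, {(a - x^2)/y, -x}};
Simplify[X.X]        (* returns {{a, 0}, {0, a}}, i.e., a times the identity, for any x and y != 0 *)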

  There remains just the third type of matrix to discuss, namely a defective A that has just one eigenvalue a and only one linearly independent eigenvector. In this case linear algebra tells us that this matrix is similar to the Jordan form:

\[ {\bf A} = {\bf S} {\bf J} {\bf S}^{-1} \qquad \mbox{with} \quad {\bf J} = \begin{bmatrix} a&1 \\ 0&a \end{bmatrix} \]
and, rather as before,
\[ {\bf X}^2 = {\bf J} \qquad\mbox{if and only if} \qquad {\bf R}^2 = {\bf A} , \quad \mbox{where} \quad {\bf R} = {\bf S} {\bf X} {\bf S}^{-1} . \]
Now if \( a \ne 0, \) then the methods used so far reveal that the matrix J has just two square roots:
\[ \pm \sqrt{a}\, \begin{bmatrix} 1&\frac{1}{2a} \\ 0&1 \end{bmatrix} , \]
and it follows that A has precisely two square roots, namely,
\[ \pm \sqrt{a}\, {\bf S} \begin{bmatrix} 1&\frac{1}{2a} \\ 0&1 \end{bmatrix} {\bf S}^{-1} . \]
Finally, if a = 0 (i.e., the defective matrix A has both determinant and trace equal to zero), then J possesses no square root at all. It follows that every 2-by-2 matrix that does not possess a square root is of the form
\[ {\bf S} \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} {\bf S}^{-1} , \qquad \det{\bf S} \ne 0. \qquad ■ \]
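
Both of these claims can be confirmed with a few lines of Mathematica (a sketch with a symbolic, assumed nonzero, parameter a):

J = {{a, 1}, {0, a}};
X = Sqrt[a] {{1, 1/(2 a)}, {0, 1}};
Simplify[X.X - J]          (* {{0, 0}, {0, 0}}, so the two square roots of J are +X and -X *)
(* For a = 0 the four scalar equations coming from X^2 = J have no solution at all: *)
Solve[{x^2 + y z == 0, y (x + w) == 1, z (x + w) == 0, w^2 + y z == 0}, {x, y, z, w}]
(* {}  -- the nilpotent Jordan block {{0,1},{0,0}} has no square root *)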

 

 The situation is more complicated in higher dimensions. It is not true that a non-defective 3-by-3 matrix has at most eight square roots. Let us consider the symmetric positive definite matrix

\[ {\bf A} = \begin{bmatrix} 2&1&1 \\ 1&2&1 \\ 1&1&2 \end{bmatrix} \]
that has two distinct eigenvalues: \( \lambda =4 \) is simple, and \( \lambda = 1 \) has multiplicity 2. Matrix A is not defective, so it is diagonalizable. However, this matrix has more than eight square roots; we list some of them:
\[ \pm \frac{1}{3} \begin{bmatrix} 4&1&1 \\ 1&4&1 \\ 1&1&4 \end{bmatrix} , \quad \pm \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} , \quad \pm \frac{1}{3}\,\begin{bmatrix} 1&4&1 \\ 4&1&1 \\ 1&1&4 \end{bmatrix} , \quad \pm \frac{1}{3}\,\begin{bmatrix} 4&1&1 \\ 1&1&4 \\ 1&4&1 \end{bmatrix} , \quad \pm \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 1&1&0 \end{bmatrix} , \quad \pm \begin{bmatrix} 0&1&1 \\ 1&1&0 \\ 1&0&1 \end{bmatrix} . \]
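
A short Mathematica check confirms that the listed matrices (taken here with the + sign) indeed square to A:

A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}};
roots = {(1/3) {{4, 1, 1}, {1, 4, 1}, {1, 1, 4}}, {{0, 1, 1}, {1, 0, 1}, {1, 1, 0}},
   (1/3) {{1, 4, 1}, {4, 1, 1}, {1, 1, 4}}, (1/3) {{4, 1, 1}, {1, 1, 4}, {1, 4, 1}},
   {{1, 0, 1}, {0, 1, 1}, {1, 1, 0}}, {{0, 1, 1}, {1, 1, 0}, {1, 0, 1}}};
AllTrue[roots, #.# == A &]      (* True *)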

Example: Consider the defective matrix

\[ {\bf A} = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} \]
that has one eigenvalue \( \lambda = 0 \) of geometric multiplicity 1. The zeroth power of A is the identity matrix, so its kernel is trivial. The null space of matrix A is one-dimensional, spanned by the vector \( \langle 1 , 1 \rangle , \) so the first number in the ascent sequence is d1 = 1. The second and all higher powers of A are zero matrices, whose null spaces have dimension 2. Therefore, we get the sequence \( d_1 =1 , \ d_2 =1, \ d_3 = d_4 = \cdots = 0 ; \) since two terms are equal to the same odd integer (namely 1), the given matrix has no square root.

Example: Consider a defective 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 0 & 4 & 0 \\ 0 & 0&0 \\ 0&0&0 \end{bmatrix} . \]
It has one triple eigenvalue \( \lambda = 0 \) of geometric multiplicity 2. The zeroth power of A is the identity matrix, so its kernel is trivial. The null space of matrix A is a two-dimensional space spanned by the vectors \( \langle 1 , 0, 0 \rangle \) and \( \langle 0 , 0, 1 \rangle . \) The minimal polynomial of matrix A is \( \psi (\lambda ) = \lambda^2 , \) and none of the methods discussed so far is able to find a square root for the given matrix.

However, matrix A has infinitely many square roots:

\[ {\bf R} = \begin{bmatrix} 0 & 0 & a \\ 0 & 0&0 \\ 0&b&0 \end{bmatrix} , \qquad \mbox{with} \qquad {\bf R}^2 = {\bf A} , \quad a\,b =4, \quad a\ne 0, \quad b\ne 0 . \]
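
This one-parameter family is easily verified symbolically; a minimal Mathematica check:

R = {{0, 0, a}, {0, 0, 0}, {0, b, 0}};
Simplify[R.R /. b -> 4/a]       (* {{0, 4, 0}, {0, 0, 0}, {0, 0, 0}}, which is exactly A *)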

Now we verify that the conditions of the above theorem are satisfied for matrix A. Indeed, the null space of the given matrix is 2-dimensional. Since the second and all higher powers of A are zero matrices, we get the sequence:

\[ d_1 =2, \ d_2 =1, \ d_3 = d_4 = \cdots = 0. \]
No two terms of this sequence are the same odd integer, so the conditions of the theorem are fulfilled. ■
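
The ascent sequences of both examples can also be computed directly. Here is a small Mathematica sketch (the helper name ascent is ours):

ascent[B_, k_: 4] := Table[Length[NullSpace[MatrixPower[B, i]]] - Length[NullSpace[MatrixPower[B, i - 1]]], {i, k}];
ascent[{{1, -1}, {1, -1}}]                     (* {1, 1, 0, 0}: two equal odd terms, so no square root *)
ascent[{{0, 4, 0}, {0, 0, 0}, {0, 0, 0}}]      (* {2, 1, 0, 0}: the condition of the theorem holds *)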

 It is instructive to consider the case where A is a rank-1 n-by-n matrix, so \( {\bf A} = {\bf u}\otimes {\bf v} = {\bf u}\,{\bf v}^{\ast} \) is an outer product of two n-vectors. In this case (rank(A) = 1), a function of the rank-1 matrix is the sum of two terms (the quotient below is interpreted as \( f'(0) \) when \( {\bf v}^{\ast}{\bf u} = 0 \)):

\[ f \left( {\bf A} \right) \equiv f \left( {\bf u}\,{\bf v}^{\ast} \right) = f(0) \,{\bf I} + \left( \frac{f \left( {\bf v}^{\ast} \,{\bf u} \right) - f(0)}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} . \]
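
A brief Mathematica sketch of this formula (the helper name fRank1 is ours, and we assume \( {\bf v}^{\ast} {\bf u} \ne 0 \)):

fRank1[f_, u_, v_] := Module[{B = Outer[Times, u, v], s = v.u},
   f[0] IdentityMatrix[Length[u]] + ((f[s] - f[0])/s) B];
fRank1[Sqrt, {1, 1, 1}, {1, 0, 3}]     (* {{1/2, 0, 3/2}, {1/2, 0, 3/2}, {1/2, 0, 3/2}}, one square root of u v* *)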

Example: Consider two vectors: \( {\bf u} = \langle 1,1,1 \rangle \) and \( {\bf v} = \langle 1,0,3 \rangle . \) Then their outer product defines the matrix

\[ {\bf A} = {\bf u} \otimes {\bf v} = {\bf u} \, {\bf v}^{\ast} = \begin{bmatrix} 1&0&3 \\ 1&0&3 \\ 1&0&3 \end{bmatrix} . \]
This matrix has one simple eigenvalue \( \lambda =4 \) and a double eigenvalue \( \lambda = 0 . \) Mathematica confirms:
A = Outer[Times, {1, 1, 1}, {1, 0, 3}] (* or *)
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}}
ss = Eigenvalues[A]
{4, 0, 0}
Eigenvectors[A]
{{1, 1, 1}, {-3, 0, 1}, {0, 1, 0}}
CharacteristicPolynomial[A, x]
4 x^2 - x^3
Since matrix A is diagonalizable, its minimal polynomial is of the second degree: \( \psi (\lambda ) = \lambda \left( \lambda -4 \right) . \) We seek a square root of A in the form

\[ {\bf A}^{1/2} = b_0 {\bf I} + b_1 {\bf A} , \]
where coefficients are determined from the system of algebraic equations
\begin{eqnarray*} 2 &=& b_0 + 4\, b_1\\ 0 &=& b_0 . \end{eqnarray*}
So we immediately find one root:
\[ {\bf R} = {\bf A}^{1/2} = \frac{1}{2}\, {\bf A} . \]
In our case, \( f (\lambda ) = \sqrt{\lambda} \) and
\[ {\bf v}^{\ast} \,{\bf u} = \langle 1,0,3 \rangle^{\ast} \langle 1, 1, 1 \rangle = 1+3 =4 . \]
Therefore, \( f ({\bf v}^{\ast} \,{\bf u} ) = \sqrt{4} =2 \) and the finite difference becomes
\[ \frac{f \left( {\bf v}^{\ast} \,{\bf u} \right) - f(0)}{{\bf v}^{\ast} \,{\bf u} -0} = \frac{2-0}{4-0} = \frac{1}{2} . \]
Since every positive integer power of matrix A is
\[ {\bf A}^{m} = 4^{m-1} {\bf A} , \qquad m =1,2,\ldots , \]
the null spaces of all powers are the same and the ascent sequence becomes \( d_1 =2, \ d_2 = d_3 = \cdots =0 . \)

  Now suppose we want to define the exponential matrix function that corresponds to the scalar function \( f(\lambda ) = e^{\lambda \,t} . \) Again, the corresponding matrix function is the sum of two terms:

\[ e^{{\bf A}\,t} = b_0 {\bf I} + b_1 {\bf A} , \]
where coefficients are determined from the system of algebraic equations
\begin{eqnarray*} e^0 &=& b_0 \\ e^{4t} &=& b_0 + 4\, b_1 . \end{eqnarray*}
Then the exponential matrix function becomes
\[ e^{{\bf A}\,t} = {\bf I} + \frac{1}{4} \left( e^{4t} -1 \right) {\bf A} . \]
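
This closed form can be checked against the built-in matrix exponential; a short Mathematica verification:

A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}};
Simplify[MatrixExp[A t] - (IdentityMatrix[3] + (Exp[4 t] - 1)/4 A)]     (* the zero matrix *)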

Example: Consider a diagonalizable 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} -75&-45&107 \\ 252&154&-351 \\ 48&30&-65 \end{bmatrix} \]
that has three distinct positive eigenvalues \( \lambda = 1, 4, 9 . \) MuPAD confirms:
A:=matrix([[-75,-45,107],[252,154,-351],[48,30,-65]])
\( \left(\begin{array}{ccc} -75 & -45 & 107\\ 252 & 154 & -351\\ 48 & 30 & -65 \end{array}\right) \)
eigofA:=linalg::eigenvectors(A)
\( \left[\left[1,1,\left[\left(\begin{array}{c} 2\\ -1\\ 1 \end{array}\right)\right]\right],\left[9,1,\left[\left(\begin{array}{c} -\frac{1}{3}\\ 3\\ 1 \end{array}\right)\right]\right],\left[4,1,\left[\left(\begin{array}{c} \frac{1}{2}\\ \frac{3}{2}\\ 1 \end{array}\right)\right]\right]\right] \)
Eigenvectors:= eigofA[1][3][1], eigofA[2][3][1], eigofA[3][3][1]
\( \left(\begin{array}{c} 2\\ -1\\ 1 \end{array}\right),\left(\begin{array}{c} -\frac{1}{3}\\ 3\\ 1 \end{array}\right),\left(\begin{array}{c} \frac{1}{2}\\ \frac{3}{2}\\ 1 \end{array}\right) \)

There are a number of different methods we can use to find the square roots of matrix A. We start with the diagonalization method, for which we build the transition matrix of eigenvectors and its inverse.
S:=Eigenvectors[1].Eigenvectors[2].Eigenvectors[3]
\( \left(\begin{array}{ccc} 2 & -\frac{1}{3} & \frac{1}{2}\\ -1 & 3 & \frac{3}{2}\\ 1 & 1 & 1 \end{array}\right) \)
Sinv:=S^-1
\( \left(\begin{array}{ccc} 9 & 5 & -12\\ 15 & 9 & -21\\ -24 & -14 & 34 \end{array}\right) \)
lambda:=Sinv*A*S
\( \left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 9 & 0\\ 0 & 0 & 4 \end{array}\right) \)

Since each of the three distinct diagonal entries has two scalar square roots, this diagonal matrix has a total of 8 square roots. For our purposes we choose the one shown below.
sqrtlambda:=lambda^(1/2)
\( \left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 2 \end{array}\right) \)

We have constructed one of the eight square roots of this diagonal matrix; the other seven are obtained by flipping the signs (±) of its diagonal entries. Conjugating by S then produces a square root of A:
sqrtA:=S*sqrtlambda*Sinv
\( \left(\begin{array}{ccc} -21 & -13 & 31\\ 54 & 34 & -75\\ 6 & 4 & -7 \end{array}\right) \)

We check that the obtained matrix is indeed a square root:
sqrtA*sqrtA
\( \left(\begin{array}{ccc} -75 & -45 & 107\\ 252 & 154 & -351\\ 48 & 30 & -65 \end{array}\right) \)

Now we find another square root based on a different diagonal matrix:
sqrt2:=matrix([[1,0,0],[0,-3,0],[0,0,2]])
\( \left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & -3 & 0\\ 0 & 0 & 2 \end{array}\right) \)

Using the above diagonal matrix, we build another square root of A:
sqrtA2:=S*sqrt2*Sinv
\( \left(\begin{array}{ccc} 9 & 5 & -11\\ -216 & -128 & 303\\ -84 & -50 & 119 \end{array}\right) \)

We also check that this matrix is a square root by multiplying it by itself:
sqrtA2*sqrtA2
\( \left(\begin{array}{ccc} -75 & -45 & 107\\ 252 & 154 & -351\\ 48 & 30 & -65 \end{array}\right) \)

Next we show how to find square roots using Sylvester's method; the computations below are carried out in Mathematica.

Then we find the resolvent, \( {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} : \)
R = Inverse[lambda*IdentityMatrix[3] - A]
{{(520 - 89 lambda + lambda^2)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), ( 285 - 45 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), (-683 + 107 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3)},
{(-468 + 252 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), (-261 + 140 lambda + lambda^2)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), ( 639 - 351 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3)},
{( 168 + 48 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), ( 90 + 30 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), (-210 - 79 lambda + lambda^2)/(-36 + 49 lambda - 14 lambda^2 + lambda^3)}}
Next we calculate the residues corresponding to the square root function (for \( \lambda = 1 \) the factor \( \sqrt{\lambda} = 1 \) is omitted). Since matrix A has three distinct eigenvalues, these residues are multiples of Sylvester's auxiliary matrices.
Z1 = Simplify[(lambda - 1)*R] /. lambda -> 1
{{18, 10, -24}, {-9, -5, 12}, {9, 5, -12}}
Z4 = Simplify[(lambda - 4)*R*Sqrt[lambda]] /. lambda -> 4
{{-24, -14, 34}, {-72, -42, 102}, {-48, -28, 68}}
Z9 = Simplify[(lambda - 9)*R*Sqrt[lambda]] /. lambda -> 9
{{-15, -9, 21}, {135, 81, -189}, {45, 27, -63}}
We check whether these auxiliary matrices are multiples of the projection operators:
Eigenvalues[Z1]
{1, 0, 0}
Eigenvalues[Z4]
{2, 0, 0}
Eigenvalues[Z9]
{3, 0, 0}
Finally, we build square root matrices:
R1 = Z1 + Z4 + Z9
{{-21, -13, 31}, {54, 34, -75}, {6, 4, -7}}
\[ {\bf R}_1 = \begin{bmatrix} -21&-13&31 \\ 54&34&-75 \\ 6&4&-7 \end{bmatrix} , \qquad {\bf R}_1^2 = {\bf A} . \]
R2 = Z1 - Z4 + Z9
{{27, 15, -37}, {198, 118, -279}, {102, 60, -143}}
\[ {\bf R}_2 = \begin{bmatrix} 27&15&-37 \\ 198&118&-279 \\ 102&60&-143 \end{bmatrix} , \qquad {\bf R}_2^2 = {\bf A} . \]
R3 = Z1 - Z4 - Z9
{{57, 33, -79}, {-72, -44, 99}, {12, 6, -17}}
\[ {\bf R}_3 = \begin{bmatrix} 57&33&-79 \\ -72&-44&99 \\ 12&6&-17 \end{bmatrix} , \qquad {\bf R}_3^2 = {\bf A} . \]
R4 = Z1 + Z4 - Z9
{{9, 5, -11}, {-216, -128, 303}, {-84, -50, 119}}
\[ {\bf R}_4 = \begin{bmatrix} 9&5&-11 \\ -216&-128&303 \\ -84&-50&119 \end{bmatrix} , \qquad {\bf R}_4^2 = {\bf A} . \]
We check that these matrices are square roots:
R2.R2
{{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}}
Similar identities hold for the other root matrices. The remaining four square roots are just the negatives of the matrices obtained above.
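
All four of these roots can be verified at once; a one-line Mathematica check, assuming Z1, Z4, and Z9 are the auxiliary matrices computed above:

A = {{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}};
AllTrue[{Z1 + Z4 + Z9, Z1 - Z4 + Z9, Z1 - Z4 - Z9, Z1 + Z4 - Z9}, #.# == A &]    (* True *)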

Example: Consider a diagonalizable 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} -14&-9&21 \\ 135&82&-189 \\ 45&27&-62 \end{bmatrix} \]
that has two distinct positive eigenvalues: \( \lambda = 1 \) of multiplicity 2 and \( \lambda = 4 . \) Mathematica confirms:
A = {{-14, -9, 21}, {135, 82, -189}, {45, 27, -62}}
Eigenvectors[A]
{{-1, 9, 3}, {7, 0, 5}, {-3, 5, 0}}
Since there are three linearly independent eigenvectors, matrix A is diagonalizable. Then its minimal polynomial is of the second degree: \( \psi (\lambda )= (\lambda -1)(\lambda -4) . \) The resolvent has ψ in the denominator:
R = Simplify[Inverse[lambda*IdentityMatrix[3] - A]]
{{(-19 + lambda)/((-4 + lambda) (-1 + lambda)), -9/(4 - 5 lambda + lambda^2), 21/(4 - 5 lambda + lambda^2)},
{135/(4 - 5 lambda + lambda^2), (77 + lambda)/(4 - 5 lambda + lambda^2), -189/(4 - 5 lambda + lambda^2)},
{45/(4 - 5 lambda + lambda^2), 27/(4 - 5 lambda + lambda^2), (-67 + lambda)/((-4 + lambda) (-1 + lambda))}}
We calculate residues corresponding to the square root function that are multiples of Sylvester's auxiliary matrices:
Z1 = Simplify[(lambda - 1)*R*Sqrt[lambda]] /. lambda -> 1
{{6, 3, -7}, {-45, -26, 63}, {-15, -9, 22}}
Z4 = Simplify[(lambda - 4)*R*Sqrt[lambda]] /. lambda -> 4
{{-10, -6, 14}, {90, 54, -126}, {30, 18, -42}}
Matrix Z1 is the projection operator onto the eigenspace corresponding to the eigenvalue \( \lambda =1 , \) while Z4 is twice (the factor \( \sqrt{4} = 2 \)) the projection operator onto the eigenspace corresponding to the eigenvalue \( \lambda =4 : \)
Eigenvalues[Z1]
{1, 1, 0}
Eigenvectors[Z1]
{{7, 0, 5}, {-3, 5, 0}, {-1, 9, 3}}
Eigenvalues[Z4]
{2, 0, 0}
Eigenvectors[Z4]
{{-1, 9, 3}, {7, 0, 5}, {-3, 5, 0}}
Finally, we build four square roots:
R1 = Z1 + Z4
{{-4, -3, 7}, {45, 28, -63}, {15, 9, -20}}
\[ {\bf R}_1 = \begin{bmatrix} -4&-3&7 \\ 45&28&-63 \\ 15&9&-20 \end{bmatrix} , \qquad {\bf R}_1^2 = {\bf A} = \begin{bmatrix} -14&-9&21 \\ 135&82&-189 \\ 45&27&-62 \end{bmatrix} . \]
R2 = Z1 - Z4
{{16, 9, -21}, {-135, -80, 189}, {-45, -27, 64}}
\[ {\bf R}_2 = \begin{bmatrix} 16&9&-21 \\ -135&-80&189 \\ -45&-27&64 \end{bmatrix} , \qquad {\bf R}_2^2 = {\bf A} . \]
Therefore, we have four root matrices: \( \pm {\bf R}_1 \quad\mbox{and} \quad \pm {\bf R}_2 . \)
Eigenvalues[R1]
{2, 1, 1}
Eigenvalues[R2]
{-2, 1, 1}
Suppose you want to use a diagonalization procedure, so you need eigenvectors:
S = Transpose[Eigenvectors[A]]
{{-1, 7, -3}, {9, 0, 5}, {3, 5, 0}}
Then you can determine eight square root matrices
\[ {\bf S} \begin{bmatrix} \pm 2&0&0 \\ 0&\pm 1&0 \\ 0&0&\pm 1 \end{bmatrix} {\bf S}^{-1} , \]
some of which cannot be found using either Sylvester's method or the resolvent method:
\[ {\bf R}_6 = \frac{1}{5} \begin{bmatrix} -190&-111&273 \\ -225&-140&315 \\ -225&-135&320 \end{bmatrix} , \qquad {\bf R}_{8} = \begin{bmatrix} -58& -171/5 & 413/5 \\ 135&80&-189 \\ 15&9&-20 \end{bmatrix} . \]
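
All eight diagonalization-based roots can be generated and checked in a few lines of Mathematica (a self-contained sketch for the matrix A of this example):

A = {{-14, -9, 21}, {135, 82, -189}, {45, 27, -62}};
{vals, vecs} = Eigensystem[A];
S = Transpose[vecs]; Sinv = Inverse[S];
roots = Flatten[Table[S.DiagonalMatrix[{s1, s2, s3} Sqrt[vals]].Sinv, {s1, {1, -1}}, {s2, {1, -1}}, {s3, {1, -1}}], 2];
AllTrue[roots, #.# == A &]      (* True: all eight matrices square to A *)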

Example: Consider a defective 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 1&1&1 \\ -1&1&0 \\ 1&0&1 \end{bmatrix} \]
that has a single eigenvalue \( \lambda = 1 \) of algebraic multiplicity 3 and geometric multiplicity 1. Mathematica confirms:
A = {{1, 1, 1}, {-1, 1, 0}, {1, 0, 1}}
Eigenvalues[A]
{1, 1, 1}
Eigenvectors[A]
{{0, -1, 1}, {0, 0, 0}, {0, 0, 0}}
Then we find its resolvent:
R = Simplify[Inverse[lambda*IdentityMatrix[3] - A]]
{{1/(-1 + lambda), 1/(-1 + lambda)^2, 1/(-1 + lambda)^2},
{-( 1/(-1 + lambda)^2), ((-2 + lambda) lambda)/(-1 + lambda)^3, -( 1/(-1 + lambda)^3)},
{1/(-1 + lambda)^2, 1/(-1 + lambda)^3, ( 2 - 2 lambda + lambda^2)/(-1 + lambda)^3}}
Finally, we determine the square root from the residue at the triple pole \( \lambda = 1 , \) which requires differentiating twice and dividing by \( 2! : \)
K = D[Simplify[(lambda - 1)^3*R*Sqrt[lambda]], lambda, lambda]/2 /. lambda -> 1
{{1, 1/2, 1/2}, {-(1/2), 9/8, 1/8}, {1/2, -(1/8), 7/8}}
We check the answer.
K.K
{{1, 1, 1}, {-1, 1, 0}, {1, 0, 1}}
Now we check that the conditions of the theorem for the existence of a square root are satisfied.
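
Since A is invertible (its only eigenvalue is 1), the kernel of every power of A is trivial, so all terms of the ascent sequence are zero and the condition of the theorem holds automatically. A quick Mathematica check:

A = {{1, 1, 1}, {-1, 1, 0}, {1, 0, 1}};
Table[Length[NullSpace[MatrixPower[A, i]]] - Length[NullSpace[MatrixPower[A, i - 1]]], {i, 4}]
(* {0, 0, 0, 0}: no two terms are the same odd integer, so a square root exists *)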