Preface


Motivated by the definitions of matrix functions presented in the previous sections, we briefly discuss the exciting topic of matrix roots. Again, we do not treat it in the general case and restrict ourselves to a particular class of square matrices---positive definite matrices. As we will see in the next chapter, some physical problems lead to the matrix differential equation \( {\bf A}\,\ddot{\bf X} + {\bf B}\,\dot{\bf X} + {\bf C}\, {\bf X} = {\bf 0} . \) A typical approach to solving this matrix differential equation is to apply the Laplace transformation. This reduces the corresponding initial value problem to an algebraic quadratic matrix equation, which in turn leads to matrix functions involving square roots. In particular, when B = 0, the solution is expressed through functions depending on a matrix square root. Although differential equations employ specific functions of matrix square roots, this section goes beyond those functions. The reader is advised to consult the references below for a deeper exposition of matrix square roots.


Roots


  This web page serves as an introduction to the definition of a function of a square matrix, which may have several meanings. In particular, for a scalar function f and a square matrix A, we specify f(A) to be a matrix of the same dimensions as A. Since our objective is applications of matrix functions in differential equations, we usually utilize holomorphic functions (those represented by convergent power series) of a single variable. Previously, we used several equivalent definitions of matrix functions for admissible functions, namely functions defined on the spectrum of A, meaning that the values

\[ f^{(j)} (\lambda_i ) , \qquad j=0,1,\ldots , m_i -1, \quad i=1,2,\ldots , s \]
exist for every eigenvalue λi of multiplicity mi. Many other definitions of a function of a square matrix are known (see the monograph by Nicholas J. Higham cited below).

 We know that every non-zero complex number has precisely two square roots; for instance, -1 has the two square roots j and -j, where j denotes the imaginary unit (the unit in the positive vertical direction of the complex plane ℂ). The situation is more complicated for matrices. In fact, some matrices have infinitely many square roots, others have only a finite number of them (and we can discover some), while some have none at all.

 The matrix square root and logarithm are among the most commonly occurring matrix functions, arising most frequently in the context of symmetric positive definite matrices. The key roles that the square root plays in, for example, the matrix sign function, the definite generalized eigenvalue problem, the polar decomposition, and the geometric mean make it a useful theoretical and computational tool. We consider matrix square roots in connection with solving second-order differential equations, where the functions

\[ {\bf U} (t) = e^{{\bf A}\,t} , \qquad {\bf \Phi} (t) = \frac{\sin \left( t\,\sqrt{\bf A} \right)}{\sqrt{\bf A}} , \qquad \mbox{and} \qquad {\bf \Psi} (t) = \cos \left( t\,\sqrt{\bf A} \right) \]
occur frequently. Although square roots of some matrices do not exist, and other matrices possess infinitely many square roots, the above functions are uniquely defined for an arbitrary square matrix A because they are the unique solutions of certain initial value problems.

Indeed, one infinite family of square roots of the 2 × 2 identity matrix comprises the Householder reflections

\[ {\bf H} (\theta ) = \begin{bmatrix} \cos \theta & \sin \theta \\ \sin \theta & -\cos \theta \end{bmatrix} , \qquad \theta \in [0, 2\pi ] . \]
Then for any θ, H²(θ) = I, the identity matrix. We always assume the choice of branches of the square root function is fixed in the neighborhoods of the eigenvalues of the matrix of interest, since otherwise the square root is not even continuous.
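As a quick symbolic check (a minimal Mathematica sketch; the function name H is our own), the identity H²(θ) = I holds for every θ:
H[theta_] := {{Cos[theta], Sin[theta]}, {Sin[theta], -Cos[theta]}};
Simplify[H[theta].H[theta]]
{{1, 0}, {0, 1}}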

Theorem: An n-by-n matrix A has a square root if and only if, in the "ascent sequence" of integers d1, d2, ... defined by

\[ d_i = \mbox{dim} \left( \mbox{ker}\left( {\bf A}^i \right) \right) - \mbox{dim} \left( \mbox{ker}\left( {\bf A}^{i-1} \right) \right) , \qquad i=1,2,\ldots , \]
no two terms are the same odd integer.    ▣

  When a square root exists, it can be found via the Cauchy integral

\[ {\bf A}^{1/2} = \frac{1}{2\pi {\bf j}} \, \int_{\gamma} {\text d} z\,z^{1/2} \left( z{\bf I} - {\bf A} \right)^{-1} , \]
where the closed contour γ encloses all eigenvalues of A and the chosen branch of \( z^{1/2} \) is analytic on and inside γ (in particular, this formula requires A to be nonsingular). ■
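In practice one rarely evaluates this contour integral by hand. When the principal square root exists, Mathematica's built-in MatrixPower (or MatrixFunction) produces it; here is a minimal sketch on a small matrix of our own choosing:
A = {{2, 1}, {1, 2}};     (* symmetric positive definite, eigenvalues 1 and 3 *)
R = MatrixPower[A, 1/2];  (* principal square root *)
Simplify[R.R] == A
True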

Square Roots of 2-by-2 Matrices


 We start with diagonalizable 2-by-2 matrices, i.e., matrices that can be expressed in the form

\[ {\bf A} = {\bf S} {\bf \Lambda} {\bf S}^{-1} , \qquad \mbox{with } \quad {\bf \Lambda} = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} , \quad \det{\bf S} \ne 0 . \]
Obviously, a and b are the (not necessarily distinct) eigenvalues of A. It is easy to verify that all matrices of the form
\[ {\bf R} = {\bf S} \begin{bmatrix} \pm\sqrt{a} & 0 \\ 0 & \pm\sqrt{b} \end{bmatrix} {\bf S}^{-1} \]
furnish square roots of A. If \( a\ne 0 \) and \( b \ne 0 , \) then there are four such square roots. If just one of a and b is zero, there are two such square roots. If a = b = 0, there is just one. But are these the only square roots?

  Now it is easy to check that (with A, S, and Λ as above)

\[ {\bf X}^2 = {\bf \Lambda} \qquad\mbox{if and only if} \qquad {\bf R}^2 = {\bf A} , \quad \mbox{where} \quad {\bf R} = {\bf S} {\bf X} {\bf S}^{-1} . \]
Therefore, it suffices to find all matrices X satisfying \( {\bf X}^2 = {\bf \Lambda} . \) Let
\[ {\bf X} = \begin{bmatrix} x & y \\ z & w \end{bmatrix} \]
so that from \( {\bf X}^2 = {\bf \Lambda} \) we obtain four simultaneous quadratic equations
\[ x^2 + yz = a, \qquad y\left( x+w\right) = 0, \qquad z\left( x+w \right) = 0, \qquad w^2 + yz = b. \]
If \( x+w \ne 0 , \) then we require that \( y=z=0 , \) which in turn leads to \( x = \pm \sqrt{a} , \quad w= \pm \sqrt{b} , \) and the matrix roots already found above. If, on the other hand, we have \( x+w =0, \) then we get
\[ x^2 + yz =a, \qquad x^2 + yz =b . \]
These equations are contradictory if \( a\ne b , \) so for matrices with two distinct eigenvalues (and hence two linearly independent eigenvectors) we have already found all square roots: there are just four of them when both a and b are nonzero, and just two when exactly one of a and b is zero.
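As an illustration, here is a minimal Mathematica sketch of this count for a diagonalizable matrix of our own choosing with the distinct nonzero eigenvalues 6 and 1; it constructs all four roots and checks them:
A = {{5, 4}, {1, 2}};
{vals, vecs} = Eigensystem[A];
S = Transpose[vecs];            (* columns of S are eigenvectors of A *)
roots = Flatten[Table[S.DiagonalMatrix[{s1*Sqrt[vals[[1]]], s2*Sqrt[vals[[2]]]}].Inverse[S], {s1, {1, -1}}, {s2, {1, -1}}], 1];
Simplify[#.# == A] & /@ roots
{True, True, True, True}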

  The second type of matrix to consider is one which has just one eigenvalue, but two independent eigenvectors. Then \( a =b , \quad {\bf \Lambda} = a{\bf I} = {\bf A} \) and the matrix S is redundant. Then the above equations with \( x+w =0 \) yield the consistent equations

\[ x^2 + yz =a, \qquad x^2 + yz =a . \]
The case y = 0 leads back to the "obvious" square roots already found, while the case \( y \ne 0 \) gives \( z= (a-x^2 )/y \) and yields
\[ {\bf X} = \begin{bmatrix} x & y \\ \frac{a-x^2}{y} & -x \end{bmatrix} , \qquad y \ne 0, \]
as a square root of aI for any x and \( y \ne 0. \) In particular, by taking a = 0 this gives all the additional square roots of the zero matrix.
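A brief symbolic check of this two-parameter family (x, y, and a are kept as symbols, with y ≠ 0 understood):
X = {{x, y}, {(a - x^2)/y, -x}};
Simplify[X.X]
{{a, 0}, {0, a}}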

  There remains just the third type of matrix to discuss, namely a defective A that has just one eigenvalue a and only one linearly independent eigenvector. In this case, linear algebra tells us that this matrix is similar to the Jordan form:

\[ {\bf A} = {\bf S} {\bf J} {\bf S}^{-1} \qquad \mbox{with} \quad {\bf J} = \begin{bmatrix} a&1 \\ 0&a \end{bmatrix} \]
and, rather as before,
\[ {\bf X}^2 = {\bf J} \qquad\mbox{if and only if} \qquad {\bf R}^2 = {\bf A} , \quad \mbox{where} \quad {\bf R} = {\bf S} {\bf X} {\bf S}^{-1} . \]
Now if \( a \ne 0, \) then the methods used so far reveal that the matrix J has just two square roots:
\[ \pm \sqrt{a}\, \begin{bmatrix} 1&\frac{1}{2a} \\ 0&1 \end{bmatrix} , \]
and it follows that A has precisely two square roots, namely,
\[ \pm \sqrt{a}\, {\bf S} \begin{bmatrix} 1&\frac{1}{2a} \\ 0&1 \end{bmatrix} {\bf S}^{-1} . \]
Finally, if a = 0 (i.e., the defective matrix A has both zero determinant and zero trace), then J possesses no square root at all. It follows that every 2-by-2 matrix that does not possess a square root is of the form
\[ {\bf S} \begin{bmatrix} 0&1 \\ 0&0 \end{bmatrix} {\bf S}^{-1} , \qquad \det{\bf S} \ne 0. \]

Example: Consider the defective matrix

\[ {\bf A} = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} \]
that has the single eigenvalue λ = 0 of geometric multiplicity 1. The zeroth power of A is the identity matrix, so its kernel is trivial. The null space of A is one-dimensional, spanned by the vector \( \langle 1 , 1 \rangle , \) so the first number is d1 = 1. All higher powers of A are the zero matrix, whose null space has dimension 2, so d2 = 2 - 1 = 1 and d3 = d4 = ⋯ = 0. The sequence \( d_1 =1 , \ d_2 =1, \ d_3 = d_4 = \cdots = 0 \) contains the same odd integer 1 twice; hence, by the theorem, the given matrix has no square root.     ■
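The ascent sequence of the theorem is easy to compute directly; here is a small sketch (the helper name d is our own):
A = {{1, -1}, {1, -1}};
d[i_] := Length[NullSpace[MatrixPower[A, i]]] - Length[NullSpace[MatrixPower[A, i - 1]]];
Table[d[i], {i, 1, 4}]
{1, 1, 0, 0}
The odd integer 1 appears twice, confirming that no square root exists.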

 

 The situation is more complicated in higher dimensions. It is not true that a non-defective 3-by-3 matrix has only eight square roots. Let us consider the symmetric positive definite matrix

\[ {\bf A} = \begin{bmatrix} 2&1&1 \\ 1&2&1 \\ 1&1&2 \end{bmatrix} \]
that has two distinct eigenvalues: λ = 4, which is simple, and λ = 1, which has multiplicity 2. Matrix A is not defective, so it is diagonalizable. However, this matrix has more than eight square roots; we list some of them:
\[ \pm \frac{1}{3} \begin{bmatrix} 4&1&1 \\ 1&4&1 \\ 1&1&4 \end{bmatrix} , \quad \pm \begin{bmatrix} 0&1&1 \\ 1&0&1 \\ 1&1&0 \end{bmatrix} , \quad \pm \frac{1}{3}\,\begin{bmatrix} 1&4&1 \\ 4&1&1 \\ 1&1&4 \end{bmatrix} , \] \[ \pm \frac{1}{3}\,\begin{bmatrix} 4&1&1 \\ 1&1&4 \\ 1&4&1 \end{bmatrix} , \quad \pm \begin{bmatrix} 1&0&1 \\ 0&1&1 \\ 1&1&0 \end{bmatrix} , \quad \pm \begin{bmatrix} 0&1&1 \\ 1&1&0 \\ 1&0&1 \end{bmatrix} . \]
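Each of the listed matrices is easily verified to be a square root; for instance, a short check of the first two:
A = {{2, 1, 1}, {1, 2, 1}, {1, 1, 2}};
R1 = (1/3)*{{4, 1, 1}, {1, 4, 1}, {1, 1, 4}};
R2 = {{0, 1, 1}, {1, 0, 1}, {1, 1, 0}};
{R1.R1 == A, R2.R2 == A}
{True, True}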

Example: Consider a defective 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 0 & 4 & 0 \\ 0 & 0&0 \\ 0&0&0 \end{bmatrix} . \]
It has the single triple eigenvalue \( \lambda = 0 \) of geometric multiplicity 2. The zeroth power of A is the identity matrix, so its kernel is trivial. The null space of matrix A is a two-dimensional space spanned by the vectors \( \langle 1 , 0, 0 \rangle \) and \( \langle 0 , 0, 1 \rangle . \) The minimal polynomial of matrix A is \( \psi (\lambda ) = \lambda^2 , \) and none of the methods discussed so far produces a square root for this matrix.

However, matrix A has infinitely many square roots:

\[ {\bf R} = \begin{bmatrix} 0 & 0 & a \\ 0 & 0&0 \\ 0&b&0 \end{bmatrix} , \qquad \mbox{with} \qquad {\bf R}^2 = {\bf A} , \quad a\,b =4, \quad a\ne 0, \quad b\ne 0 . \]

Now we verify that the conditions of the above theorem hold for matrix A. Indeed, the null space of the given matrix is 2-dimensional, and A² together with all higher powers of A is the zero matrix, so we get the sequence:

\[ d_1 =2, \ d_2 =1, \ d_3 = d_4 = \cdots = 0. \]
Therefore, the conditions of the theorem are fulfilled (no two terms are the same odd integer), and matrix A has a square root.    ■
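A short symbolic check of this one-parameter family (substituting b = 4/a enforces the constraint a b = 4):
A = {{0, 4, 0}, {0, 0, 0}, {0, 0, 0}};
R = {{0, 0, a}, {0, 0, 0}, {0, 4/a, 0}};    (* b = 4/a with a != 0 *)
Simplify[R.R == A]
True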

 It is instructive to consider the case where A is a rank-1 n-by-n matrix, so \( {\bf A} = {\bf u}\otimes {\bf v} = {\bf u}\,{\bf v}^{\ast} \) is the outer product of two n-vectors (strictly speaking, they are n×1 matrices). In this case (rank(A) = 1), a function of the rank-1 matrix is the sum of two terms:

\[ f \left( {\bf A} \right) \equiv f \left( {\bf u}\,{\bf v}^{\ast} \right) = f(0) \,{\bf I} + \left( \frac{f \left( {\bf v}^{\ast} \,{\bf u} \right) - f(0)}{{\bf v}^{\ast} \,{\bf u} -0} \right) {\bf A} , \]

where v*u is a scalar: the product of the 1×n matrix v* and the n×1 matrix u.

Example: Consider two vectors: \( {\bf u} = \langle 1,1,1 \rangle \) and \( {\bf v} = \langle 1,0,3 \rangle . \) Then their outer product defines the matrix

\[ {\bf A} = {\bf u} \otimes {\bf v} = {\bf u} \, {\bf v}^{\ast} = \begin{bmatrix} 1&0&3 \\ 1&0&3 \\ 1&0&3 \end{bmatrix} . \]
This matrix has one simple eigenvalue \( \lambda =4 \) and a double eigenvalue \( \lambda = 0 . \) Mathematica confirms:
A = Outer[Times, {1, 1, 1}, {1, 0, 3}] (* or *)
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}}
ss = Eigenvalues[A]
{4, 0, 0}
Eigenvectors[A]
{{1, 1, 1}, {-3, 0, 1}, {0, 1, 0}}
CharacteristicPolynomial[A, x]
4 x^2 - x^3
Since matrix A is diagonalizable, its minimal polynomial is of second degree: \( \psi (\lambda ) = \lambda \left( \lambda -4 \right) . \) We seek a square root of A in the form

\[ {\bf A}^{1/2} = b_0 {\bf I} + b_1 {\bf A} , \]
where coefficients are determined from the system of algebraic equations
\begin{eqnarray*} 2 &=& b_0 + 4\, b_1\\ 0 &=& b_0 . \end{eqnarray*}
So one root is found immediately:
\[ {\bf R}_{+} = {\bf A}^{1/2} = \frac{1}{2}\, {\bf A} . \]
In our case, \( f (\lambda ) = \sqrt{\lambda} \) and
\[ {\bf v}^{\ast} \,{\bf u} = \langle 1,0,3 \rangle^{\ast} \langle 1, 1, 1 \rangle = 1+3 =4 . \]
Therefore, \( f ({\bf v}^{\ast} \,{\bf u} ) = \sqrt{4} =2 \) and the finite difference becomes
\[ \frac{f \left( {\bf v}^{\ast} \,{\bf u} \right) - f(0)}{{\bf v}^{\ast}\, {\bf u} -0} = \frac{2-0}{4-0} = \frac{1}{2} . \]
For another branch of square root, we have \( f\left( {\bf v}^{\ast}{\bf u} \right) = -\sqrt{4} = -2 . \) So another square root of matrix A becomes \( {\bf R}_{-} = -\frac{1}{2}\, {\bf A} . \) From obvious relation A² = 4 A, it follows that
\[ {\bf A}^{m} = 4^{m-1} {\bf A} , \qquad m=2,3,\ldots . \]
Hence, both matrices R+ and R- are square roots of A. From the above relation, we conclude that the null spaces of all powers of A coincide, so the sequence becomes \( d_1 =2, \ d_2 = d_3 = \cdots =0 . \) The theorem then confirms that matrix A has a square root.
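These conclusions are easy to confirm directly; a brief sketch:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}};
Rplus = A/2;
{Rplus.Rplus == A, (-Rplus).(-Rplus) == A}
{True, True}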

  Now suppose we want to define the exponential matrix function that corresponds to the scalar function \( f(\lambda ) = e^{\lambda \,t} . \) Again, the corresponding matrix function is the sum of two terms:

\[ e^{{\bf A}\,t} = b_0 {\bf I} + b_1 {\bf A} , \]
where coefficients are determined from the system of algebraic equations
\begin{eqnarray*} e^0 &=& b_0 \\ e^{4t} &=& b_0 + 4\, b_1 . \end{eqnarray*}
Then the exponential matrix function becomes
\[ e^{{\bf A}\,t} = {\bf I} + \frac{1}{4} \left( e^{4t} -1 \right) {\bf A} . \]
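This expression agrees with the built-in matrix exponential, as the following short check (with t kept symbolic) suggests:
A = {{1, 0, 3}, {1, 0, 3}, {1, 0, 3}};
Simplify[MatrixExp[A t] == IdentityMatrix[3] + (Exp[4 t] - 1)/4 A]
True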

Example: Consider a 3-by-3 matrix with positive eigenvalues

\[ {\bf A} = \begin{bmatrix} -75&-45&107 \\ 252&154&-351 \\ 48&30&-65 \end{bmatrix} \]
that has three distinct positive eigenvalues \( \lambda = 1, 4, 9 . \) Mathematica confirms:
A = {{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}}
Eigenvalues[A]
{9, 4, 1}
Then we find the resolvent, \( {\bf R}_{\lambda} \left( {\bf A} \right) = \left( \lambda {\bf I} - {\bf A} \right)^{-1} : \)
R = Inverse[lambda*IdentityMatrix[3] - A]
{{(520 - 89 lambda + lambda^2)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), ( 285 - 45 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), (-683 + 107 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3)},
{(-468 + 252 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), (-261 + 140 lambda + lambda^2)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), ( 639 - 351 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3)},
{( 168 + 48 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), ( 90 + 30 lambda)/(-36 + 49 lambda - 14 lambda^2 + lambda^3), (-210 - 79 lambda + lambda^2)/(-36 + 49 lambda - 14 lambda^2 + lambda^3)}}
Next we calculate the residues of \( \sqrt{\lambda}\, {\bf R}_{\lambda} \left( {\bf A} \right) \) at the eigenvalues. Since matrix A has three distinct eigenvalues, these residues are multiples of Sylvester's auxiliary matrices.
Z1 = Simplify[(lambda - 1)*R] /. lambda -> 1
{{18, 10, -24}, {-9, -5, 12}, {9, 5, -12}}
Z4 = Simplify[(lambda - 4)*R*Sqrt[lambda]] /. lambda -> 4
{{-24, -14, 34}, {-72, -42, 102}, {-48, -28, 68}}
Z9 = Simplify[(lambda - 9)*R*Sqrt[lambda]] /. lambda -> 9
{{-15, -9, 21}, {135, 81, -189}, {45, 27, -63}}
We check that these auxiliary matrices are multiples of the projection operators: Z1² = Z1, Z4² = 2 Z4, Z9² = 3 Z9, and products of distinct Z's vanish:
Eigenvalues[Z1]
{1, 0, 0}
Z1.Z1
{{18, 10, -24}, {-9, -5, 12}, {9, 5, -12}}
Eigenvalues[Z4]
{2, 0, 0}
Z4.Z4
{{-48, -28, 68}, {-144, -84, 204}, {-96, -56, 136}}
Eigenvalues[Z9]
{3, 0, 0}
Z9.Z9
{{-45, -27, 63}, {405, 243, -567}, {135, 81, -189}}
Z1.Z9
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Finally, we build square root matrices:
R = Z1 + Z4 + Z9
{{-21, -13, 31}, {54, 34, -75}, {6, 4, -7}}
\[ {\bf R}_1 = \begin{bmatrix} -21&-13&31 \\ 54&34&-75 \\ 6&4&-7 \end{bmatrix} , \qquad {\bf R}_1^2 = {\bf A} . \]
R2 = Z1 - Z4 + Z9
{{27, 15, -37}, {198, 118, -279}, {102, 60, -143}}
\[ {\bf R}_2 = \begin{bmatrix} 27&15&-37 \\ 198&118&-279 \\ 102&60&-143 \end{bmatrix} , \qquad {\bf R}_2^2 = {\bf A} . \]
R3 = Z1 - Z4 - Z9
{{57, 33, -79}, {-72, -44, 99}, {12, 6, -17}}
\[ {\bf R}_3 = \begin{bmatrix} 57&33&-79 \\ -72&-44&99 \\ 12&6&-17 \end{bmatrix} , \qquad {\bf R}_3^2 = {\bf A} . \]
R4 = Z1 + Z4 - Z9
{{9, 5, -11}, {-216, -128, 303}, {-84, -50, 119}}
\[ {\bf R}_4 = \begin{bmatrix} 9&5&-11 \\ -216&-128&303 \\ -84&-50&119 \end{bmatrix} , \qquad {\bf R}_4^2 = {\bf A} . \]
We check that these matrices are square roots:
R2.R2
{{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}}
The analogous checks R1² = R3² = R4² = A hold as well, and four more square roots are just the negatives of the matrices obtained.
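All eight sign combinations ±Z1 ± Z4 ± Z9 can be generated and tested at once; a compact sketch reusing Z1, Z4, and Z9 computed above:
A = {{-75, -45, 107}, {252, 154, -351}, {48, 30, -65}};
roots = Flatten[Table[s1*Z1 + s2*Z4 + s3*Z9, {s1, {1, -1}}, {s2, {1, -1}}, {s3, {1, -1}}], 2];
And @@ (#.# == A & /@ roots)
True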

Example: Consider a 3-by-3 matrix with positive eigenvalues

\[ {\bf A} = \begin{bmatrix} -14&-9&21 \\ 135&82&-189 \\ 45&27&-62 \end{bmatrix} \]
that has two distinct positive eigenvalues: the double eigenvalue \( \lambda = 1 \) and the simple eigenvalue \( \lambda = 4 . \) Mathematica confirms
A = {{-14, -9, 21}, {135, 82, -189}, {45, 27, -62}}
Eigenvectors[A]
{{-1, 9, 3}, {7, 0, 5}, {-3, 5, 0}}
Since there are three linearly independent eigenvectors, matrix A is diagonalizable. Its minimal polynomial must then be of second degree: \( \psi (\lambda )= (\lambda -1)(\lambda -4) . \) The resolvent \( {\bf R}_{\lambda} ({\bf A}) = \left( \lambda{\bf I} - {\bf A} \right)^{-1} \) of A, when its entries are written as ratios of irreducible polynomials, contains the minimal polynomial ψ in the denominators.
R = Simplify[Inverse[lambda*IdentityMatrix[3] - A]]
{{(-19 + lambda)/((-4 + lambda) (-1 + lambda)), -(9/(
4 - 5 lambda + lambda^2)), 21/(4 - 5 lambda + lambda^2)}, {135/(
4 - 5 lambda + lambda^2), (77 + lambda)/(
4 - 5 lambda + lambda^2), -(189/(4 - 5 lambda + lambda^2))}, {45/(
4 - 5 lambda + lambda^2), 27/(
4 - 5 lambda + lambda^2), (-67 + lambda)/((-4 + lambda) (-1 + lambda))}}
We calculate residues corresponding to the square root function that are multiples of Sylvester's auxiliary matrices:
Z1 = Simplify[(lambda - 1)*R*Sqrt[lambda]] /. lambda -> 1
{{6, 3, -7}, {-45, -26, 63}, {-15, -9, 22}}
Z4 = Simplify[(lambda - 4)*R*Sqrt[lambda]] /. lambda -> 4
{{-10, -6, 14}, {90, 54, -126}, {30, 18, -42}}
Matrix Z1 is the projection operator onto the eigenspace corresponding to the eigenvalue \( \lambda =1 , \) while Z4 is twice the projection operator onto the eigenspace corresponding to the eigenvalue \( \lambda =4 \) (its nonzero eigenvalue is \( \sqrt{4} = 2 \)):
Eigenvalues[Z1]
{1, 1, 0}
Eigenvectors[Z1]
{{7, 0, 5}, {-3, 5, 0}, {-1, 9, 3}}
Eigenvalues[Z4]
{2, 0, 0}
Eigenvectors[Z4]
{{-1, 9, 3}, {7, 0, 5}, {-3, 5, 0}}
Finally, we build four square roots:
R1 = Z1 + Z4
{{-4, -3, 7}, {45, 28, -63}, {15, 9, -20}}
\[ {\bf R}_1 = \begin{bmatrix} -4&-3&7 \\ 45&28&-63 \\ 15&9&-20 \end{bmatrix} , \qquad {\bf R}_1^2 = {\bf A} = \begin{bmatrix} -14&-9&21 \\ 135&82&-189 \\ 45&27&-62 \end{bmatrix} . \]
R2 = Z1 - Z4
{{16, 9, -21}, {-135, -80, 189}, {-45, -27, 64}}
\[ {\bf R}_2 = \begin{bmatrix} 16&9&-21 \\ -135&-80&189 \\ -45&-27&64 \end{bmatrix} , \qquad {\bf R}_2^2 = {\bf A} . \]
Therefore, we have four root matrices: \( \pm {\bf R}_1 \quad\mbox{and} \quad \pm {\bf R}_2 . \)
Eigenvalues[R1]
{2, 1, 1}
Eigenvalues[R2]
{-2, 1, 1}
Suppose you want to use a diagonalization procedure, so you need eigenvectors:
S = Transpose[Eigenvectors[A]]
{{-1, 7, -3}, {9, 0, 5}, {3, 5, 0}}
Then you can determine eight square root matrices
\[ {\bf S} \begin{bmatrix} \pm 2&0&0 \\ 0&\pm 1&0 \\ 0&0&\pm 1 \end{bmatrix} {\bf S}^{-1} , \]
some of which cannot be found using either Sylvester's method or the resolvent method:
\[ {\bf R}_6 = \frac{1}{5} \begin{bmatrix} -190&-111&273 \\ -225&-140&315 \\ -225&-135&320 \end{bmatrix} , \qquad {\bf R}_{8} = \begin{bmatrix} -58& -171/5 & 413/5 \\ 135&80&-189 \\ 15&9&-20 \end{bmatrix} . \]
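Here is a hedged sketch that reproduces all eight diagonalization-based square roots and confirms that each of them squares to A:
A = {{-14, -9, 21}, {135, 82, -189}, {45, 27, -62}};
S = Transpose[Eigenvectors[A]];    (* first column is the eigenvector for lambda = 4 *)
roots = Flatten[Table[S.DiagonalMatrix[{2*s1, s2, s3}].Inverse[S], {s1, {1, -1}}, {s2, {1, -1}}, {s3, {1, -1}}], 2];
And @@ (Simplify[#.# == A] & /@ roots)
True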

Example: Consider a defective unimodular (determinant one) 3-by-3 matrix

\[ {\bf A} = \begin{bmatrix} 1&1&1 \\ -1&1&0 \\ 1&0&1 \end{bmatrix} \]
that has the single triple eigenvalue \( \lambda = 1 \) of geometric multiplicity 1. Mathematica confirms
A = {{1, 1, 1}, {-1, 1, 0}, {1, 0, 1}}
Eigenvalues[A]
{1, 1, 1}
Eigenvectors[A]
{{0, -1, 1}, {0, 0, 0}, {0, 0, 0}}
Then we find its resolvent:
R = Simplify[Inverse[lambda*IdentityMatrix[3] - A]]
{{1/(-1 + lambda), 1/(-1 + lambda)^2, 1/(-1 + lambda)^2},
{-( 1/(-1 + lambda)^2), ((-2 + lambda) lambda)/(-1 + lambda)^3, -( 1/(-1 + lambda)^3)},
{1/(-1 + lambda)^2, 1/(-1 + lambda)^3, ( 2 - 2 lambda + lambda^2)/(-1 + lambda)^3}}
Finally, we determine the root as the residue of \( \sqrt{\lambda}\, \left( \lambda {\bf I} - {\bf A} \right)^{-1} \) at the triple pole \( \lambda = 1 \) (hence the second derivative divided by 2! = 2):
K = D[Simplify[(lambda - 1)^3*R*Sqrt[lambda]], lambda, lambda]/2 /. lambda -> 1
{{1, 1/2, 1/2}, {-(1/2), 9/8, 1/8}, {1/2, -(1/8), 7/8}}
We check the answer.
K.K
{{1, 1, 1}, {-1, 1, 0}, {1, 0, 1}}
Now we check the conditions under which the square root exists. Since the determinant of A is 1, every power of A is again a unimodular (hence invertible) matrix, so the null spaces of A and of all its powers are trivial. Therefore, the sequence di in the theorem consists entirely of zeros, no two terms are the same odd integer, and the given matrix has a square root.
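For comparison, Mathematica's built-in MatrixFunction returns the principal square root of this defective matrix directly, and it coincides with the matrix K found above:
A = {{1, 1, 1}, {-1, 1, 0}, {1, 0, 1}};
MatrixFunction[Sqrt, A]
{{1, 1/2, 1/2}, {-(1/2), 9/8, 1/8}, {1/2, -(1/8), 7/8}}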

 

  1. Anghel, N., Square roots of real 3 × 3 matrices vs. quartic polynomials with real zeros, An. St. Univ. Ovidius Constanta, 2017, Vol. 25, No. 3, pp. 45--58. doi: 10.1515/auom-2017-0034
  2. Choy, Y.K., Introduction to the square root of a 2 by 2 matrix, Queen's College, Hong Kong.
  3. Higham, Nicholas J., Functions of Matrices: Theory and Computation, SIAM, Philadelphia, 2008.
  4. Konvalina, J., A combinatorial formula for powers of 2×2 matrices, Mathematics Magazine, 2015, Vol. 88, Issue 4, pp. 280--284. https://doi.org/10.4169/math.mag.88.4.280
  5. Levinger, B., The square root of a 2 × 2 matrix, Mathematics Magazine, 1980, Vol. 53, Issue 4, pp. 222--224. https://doi.org/10.1080/0025570X.1980.11976858
  6. MacKinnon, N., Four routes to matrix roots, The Mathematical Gazette, 1989, Vol. 73, pp. 135--136.
  7. Scott, H.H., On square-rooting matrices, The Mathematical Gazette, 1990, Vol. 74, pp. 111--114.
  8. Somayya, P.C., A method for finding a square root of a 2 × 2 matrix, The Mathematics Education, 1997, Vol. XXXI, No. 1, Bihar State, India.
  9. Square roots of a 2 × 2 matrix, Wikipedia.
  10. Sullivan, D., The square roots of 2 × 2 matrices, Mathematics Magazine, 1993, Vol. 66, No. 5, pp. 314--316.
  11. Yandl, A.L. and Swenson, C., A class of matrices with zero determinant, Mathematics Magazine, 2012, Vol. 85, Issue 2, pp. 126--130. https://doi.org/10.4169/math.mag.85.2.126

 
