Matrix Algebra

Sage is a powerful system for studying and exploring many different areas of mathematics. This page presents some topics from Linear Algebra that are needed for constructing solutions of systems of linear ordinary differential equations, together with some applications.

 

Positive Matrices

A real symmetric matrix A is called positive definite if
\[ {\bf x}^{\mathrm T} \,{\bf A} \, {\bf x} >0 \]
for all nonzero real vectors x; here \( {\bf x}^{\mathrm T} \) denotes the transposed vector (for real vectors, \( {\bf x}^{\ast} = {\bf x}^{\mathrm T} \)). Equivalently, A is positive definite when all its eigenvalues are positive. Correspondingly, A is called positive semidefinite if all its eigenvalues are nonnegative, that is, if
\[ {\bf x}^{\mathrm T} \,{\bf A} \, {\bf x} \ge 0 \]
for all real vectors x.
A square matrix A is called positive if all its entries are positive numbers. For example, a Markov (stochastic) matrix is positive whenever all of its transition probabilities are strictly positive.
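
The definitions above are easy to illustrate in Sage; the symmetric matrix below is chosen only as an example:

sage: A = matrix(QQ, [[2, -1], [-1, 2]])     # symmetric test matrix
sage: sorted(A.eigenvalues())                # [1, 3] -- both positive
sage: x = vector(QQ, [3, -5])                # an arbitrary nonzero vector
sage: x * A * x > 0                          # the quadratic form is positive: True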

The determinant of a positive definite matrix is always positive, so a positive definite matrix is always nonsingular (invertible). A positive definite matrix has at least one matrix square root. Furthermore, exactly one of its matrix square roots is itself positive definite. A real symmetric matrix A is positive definite iff there exists a real nonsingular matrix M such that

\[ {\bf A} = {\bf M}\, {\bf M}^{\mathrm T} . \]
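
These facts can be confirmed in Sage. The matrix below is a small example with an exact Cholesky factor, which plays the role of M in the factorization above:

sage: A = matrix(QQ, [[4, 2], [2, 5]])
sage: A.is_positive_definite()               # True
sage: A.det()                                # 16 > 0, so A is invertible
sage: M = A.cholesky()                       # lower triangular factor
sage: M * M.transpose() == A                 # True
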
Powers of square matrices are denoted by exponents in the same manner as for scalars. For example, the square of a matrix A is written \( {\bf A}^2 = {\bf A}\, {\bf A} . \) In a similar way, higher powers are defined as
\[ {\bf A}^n = {\bf A}\, {\bf A} \cdots {\bf A} = {\bf A}\, {\bf A}^{n-1} , \]
where the matrix A appears n times in the product on the right-hand side. The zero power of a nonzero square matrix is defined to be the identity matrix:
\[ {\bf A}^0 = {\bf I} , \]
where I is the identity matrix.
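
For instance, powers of a matrix in Sage use the same exponent notation:

sage: A = matrix(QQ, [[1, 1], [0, 2]])
sage: A^2 == A * A                           # True
sage: A^3 == A * A^2                         # True
sage: A^0                                    # the 2 x 2 identity matrix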

 

Square roots

Square roots of matrices are defined in the customary manner:

\[ {\bf R} = \sqrt{\bf A} = {\bf A}^{1/2} \qquad \Longrightarrow \qquad {\bf R}\, {\bf R} = {\bf A} . \]
A square root of a matrix is generally not unique: most matrices have several different square roots, some have infinitely many, and some nilpotent matrices have none at all. In fact, the \( n \times n \) identity matrix has infinitely many square roots for \( n \ge 2. \) Recall that an involutory matrix is a matrix that is its own inverse; that is, a matrix A is involutory if and only if \( {\bf A}^2 = {\bf I}. \) Thus, the involutory matrices are exactly the square roots of the identity matrix. Another example of an involutory matrix is the Householder matrix (or Householder reflection) \( {\bf P} = {\bf I} - 2\,{\bf v}\,{\bf v}^{\mathrm T} , \) where v is an n-column vector of unit length and I is the identity matrix. It is named in honor of the American mathematician Alston S. Householder (1904--1993). In \( \mathbb{R}^2 \) the Householder matrix represents a reflection about the line through the origin that is orthogonal to v, and in \( \mathbb{R}^3 \) it represents a reflection about the plane through the origin that is orthogonal to v. We present some examples.

Example. Each of the following matrices is involutory, that is, equal to its own inverse:

\[ \begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix} , \qquad \begin{bmatrix} 1&0&0 \\ 0&-1&0 \\ 0&0&-1 \end{bmatrix} ,\qquad \frac{1}{3} \begin{bmatrix} 1&-2&-2 \\ -2&1&-2 \\ -2&-2&1 \end{bmatrix} . \]
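
These claims can be checked directly in Sage. The third matrix is also the Householder reflection determined by \( {\bf v} = (1,1,1)^{\mathrm T} ; \) the unnormalized form \( {\bf I} - 2\,{\bf v}\,{\bf v}^{\mathrm T} / \left( {\bf v}^{\mathrm T} {\bf v} \right) \) is used below to avoid square roots:

sage: P = (1/3) * matrix(QQ, [[1, -2, -2], [-2, 1, -2], [-2, -2, 1]])
sage: P.inverse() == P                       # involutory: its own inverse
sage: P^2 == identity_matrix(QQ, 3)          # hence a square root of I
sage: v = vector(QQ, [1, 1, 1])
sage: identity_matrix(QQ, 3) - 2 * v.column() * v.row() / (v * v) == P   # Householder form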

Example. The following \( 2 \times 2 \) matrices (the identity matrix and its negative) have infinitely many square roots, depending on two parameters a and b:

\begin{align*} {\bf I}^{1/2} &= \begin{bmatrix} 1&0 \\ 0& 1 \end{bmatrix}^{1/2} = \frac{1}{b} \begin{bmatrix} ab&b^2 \\ 1-a^2& -ab \end{bmatrix} , \quad b\ne 0.\; \\ {\bf I}^{1/2} &= \begin{bmatrix} 1&0 \\ 0& 1 \end{bmatrix}^{1/2} = \begin{bmatrix} 1&0 \\ a& -1 \end{bmatrix} , \\ \left( -{\bf I} \right)^{1/2} &= \begin{bmatrix} -1&0 \\ 0& -1 \end{bmatrix}^{1/2} = \frac{1}{b} \begin{bmatrix} ab&-b^2 \\ 1+a^2& -ab \end{bmatrix} , \quad b\ne 0. \end{align*}
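
The first of these families can be verified symbolically in Sage, treating a and b as symbols:

sage: a, b = var('a b')
sage: R = (1/b) * matrix(SR, [[a*b, b^2], [1 - a^2, -a*b]])
sage: (R * R).simplify_full()                # the 2 x 2 identity matrix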

Example. The following \( 2 \times 2 \) nilpotent matrices have no square root:

\[ {\bf A} = \begin{bmatrix} 0&1 \\ 0& 0 \end{bmatrix} \qquad \mbox{and} \qquad {\bf B} = \begin{bmatrix} 1&1 \\ -1& -1 \end{bmatrix} . \]
Both matrices are nilpotent: \( {\bf A}^2 = {\bf 0} \) and \( {\bf B}^2 = {\bf 0} . \) Each has the double eigenvalue \( \lambda =0 \) and zero trace. However, since \( \cos \left( \sqrt{\lambda}\,t \right) \) and \( \sin \left( \sqrt{\lambda}\,t \right) /\sqrt{\lambda} \) are single-valued functions of λ, the following matrix functions are well defined even though the square roots themselves do not exist:
\[ {\bf \Phi}_A (t) = \cos \left( \sqrt{\bf A} \,t \right) , \quad {\bf \Phi}_B (t) = \cos \left( \sqrt{\bf B} \,t \right) ,\qquad \mbox{and} \qquad {\bf \Psi}_A (t) = \frac{\sin \left( \sqrt{\bf A} \,t \right)}{\sqrt{\bf A}} , \quad {\bf \Psi}_B (t) = \frac{\sin \left( \sqrt{\bf B} \,t \right)}{\sqrt{\bf B}} . \]
To determine these functions, we first calculate the resolvents:
\[ {\bf R}_{\lambda} ({\bf A}) = \left( \lambda{\bf I} - {\bf A} \right)^{-1} = \frac{1}{\lambda^2} \begin{bmatrix} \lambda&1 \\ 0 & \lambda \end{bmatrix} , \qquad {\bf R}_{\lambda} ({\bf B}) = \left( \lambda{\bf I} - {\bf B} \right)^{-1} = \frac{1}{\lambda^2} \begin{bmatrix} \lambda+1&1 \\ -1 & \lambda -1\end{bmatrix} . \]
Then each of these matrix functions is the residue of \( f(\lambda )\,{\bf R}_{\lambda} \) at the double pole \( \lambda =0 , \) so that
\begin{align*} {\bf \Phi}_A (t) = \cos \left( \sqrt{\bf A} \,t \right) &= \left. \frac{{\text d}}{{\text d}\lambda} \left[ \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda & 1 \\ 0 &\lambda \end{bmatrix} \right] \right\vert_{\lambda =0} = \begin{bmatrix} 1 & -t^2 /2 \\ 0 &1 \end{bmatrix} , \\ {\bf \Phi}_B (t) = \cos \left( \sqrt{\bf B} \,t \right) &= \left. \frac{{\text d}}{{\text d}\lambda} \left[ \cos \left( \sqrt{\lambda}\,t \right) \begin{bmatrix} \lambda +1 & 1 \\ -1 &\lambda -1 \end{bmatrix} \right] \right\vert_{\lambda =0} = \begin{bmatrix} 1 - t^2 /2& -t^2 /2 \\ t^2 /2 &1 +t^2 /2 \end{bmatrix} , \\ {\bf \Psi}_A (t) = \frac{\sin \left( \sqrt{\bf A} \,t \right)}{\sqrt{\bf A}} &= \left. \frac{{\text d}}{{\text d}\lambda} \left[ \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \begin{bmatrix} \lambda & 1 \\ 0 &\lambda \end{bmatrix} \right] \right\vert_{\lambda =0} = \begin{bmatrix} t & -t^3 /6 \\ 0 &t \end{bmatrix} , \\ {\bf \Psi}_B (t) = \frac{\sin \left( \sqrt{\bf B} \,t \right)}{\sqrt{\bf B}} &= \left. \frac{{\text d}}{{\text d}\lambda} \left[ \frac{\sin \left( \sqrt{\lambda}\,t \right)}{\sqrt{\lambda}} \begin{bmatrix} \lambda +1 & 1 \\ -1 &\lambda -1\end{bmatrix} \right] \right\vert_{\lambda =0} = \begin{bmatrix} t - t^3 /6 & -t^3 /6 \\ t^3 /6 &t + t^3 /6 \end{bmatrix} . \end{align*}
These matrix functions are solutions of the following initial value problems:
\[ \ddot{\bf \Phi}_A (t) + {\bf A} \, {\bf \Phi}_A = {\bf 0} , \quad {\bf \Phi}_A (0) = {\bf I}, \quad \dot{\bf \Phi}_A (0) = {\bf 0}; \]
\[ \ddot{\bf \Psi}_A (t) + {\bf A} \, {\bf \Psi}_A = {\bf 0} , \quad {\bf \Psi}_A (0) = {\bf 0}, \quad \dot{\bf \Psi}_A (0) = {\bf I}; \]
and similarly for the matrix B.
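
A direct symbolic check in Sage confirms that \( {\bf \Phi}_A \) and \( {\bf \Psi}_A , \) entered from the formulas above, satisfy these initial value problems:

sage: t = var('t')
sage: A = matrix(SR, [[0, 1], [0, 0]])
sage: PhiA = matrix(SR, [[1, -t^2/2], [0, 1]])
sage: PsiA = matrix(SR, [[t, -t^3/6], [0, t]])
sage: (PhiA.derivative(t, 2) + A * PhiA).is_zero()     # True
sage: (PsiA.derivative(t, 2) + A * PsiA).is_zero()     # True
sage: PhiA.subs(t=0), PsiA.derivative(t).subs(t=0)     # both equal the identity matrix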

sage: M = matrix(QQ, [[1, 1], [-1, -1]])    # the nilpotent matrix B above
sage: M.rank()                              # 1
sage: M.right_nullity()                     # 1

Let us show, by contradiction, that the matrix B has no square root over the field of real numbers. Suppose the opposite is true and there exists a matrix \( {\bf K} = \begin{bmatrix} a& b \\ c & d \end{bmatrix} , \) for some real constants a, b, c, and d, such that \( {\bf K}^2 = \begin{bmatrix} a^2 +bc& ab+bd \\ ac + cd& bc+d^2 \end{bmatrix} = {\bf B} = \begin{bmatrix} 1& 1 \\ -1 & -1 \end{bmatrix} . \) From this matrix equation, we get four algebraic equations:

\[ a^2 + bc =1, \quad b(a+d) =1, \quad c(a+d) =-1, \quad bc+ d^2 =-1. \]
Therefore, \( c=-b \) and the matrix K becomes \( {\bf K} = \begin{bmatrix} a& b \\ -b & d \end{bmatrix} . \) So instead of four equations, we have three:
\[ a^2 - b^2 =1, \quad b(a+d) =1, \quad d^2 -b^2 =-1 . \]
Since \( b^2 = a^2 -1 \) and \( b^2 = d^2 +1 \ge 1 , \) we get \( a^2 \ge 2 . \) Replacing K by −K if necessary (both matrices square to B), we may assume \( a \ge \sqrt{2} . \) Then \( |d| = \sqrt{a^2 -2} < a , \) so \( a+d > 0 , \) and the equation \( b(a+d) =1 \) forces \( b > 0 . \) Hence
\[ b= \sqrt{a^2 -1} , \qquad d= \pm\sqrt{a^2 -2} , \]
and the middle equation becomes
\[ \sqrt{a^2 -1} \left( a \pm \sqrt{a^2 -2} \right) =1 . \]
This equation has no real solution with \( a \ge \sqrt{2} . \) With the plus sign the left-hand side is at least \( a\,\sqrt{a^2 -1} \ge \sqrt{2} > 1 . \) With the minus sign, using \( \left( a - \sqrt{a^2 -2} \right) \left( a + \sqrt{a^2 -2} \right) = 2 , \) the left-hand side equals \( 2\,\sqrt{a^2 -1} \big/ \left( a + \sqrt{a^2 -2} \right) , \) which is also greater than 1 because \( 4 \left( a^2 -1 \right) = 2a^2 - 2 + 2\sqrt{\left( a^2 -1 \right)^2} > 2a^2 -2 + 2\sqrt{a^2 \left( a^2 -2 \right)} = \left( a + \sqrt{a^2 -2} \right)^2 . \) This contradiction shows that B has no real square root.
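
As a cross-check, Sage's symbolic solver can be asked for all solutions of the four entry-wise equations; no solutions are expected, since B has no square root even over the complex numbers:

sage: a, b, c, d = var('a b c d')
sage: K = matrix(SR, [[a, b], [c, d]])
sage: B = matrix(QQ, [[1, 1], [-1, -1]])
sage: eqs = [(K*K)[i, j] == B[i, j] for i in range(2) for j in range(2)]
sage: solve(eqs, a, b, c, d)                 # expected output: []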

Example. The following 3-by-3 (nilpotent) matrix

\[ {\bf A} = \begin{bmatrix} 0&0&b \\ 0&0&0 \\ 0&0&0 \end{bmatrix} , \quad b\ne 0 , \]
has infinitely many square roots, depending on the two parameters k and a:
\[ {\bf A}^{1/2} = \begin{bmatrix} 0&a&k \\ 0&0&b/a \\ 0&0&0 \end{bmatrix} , \quad a\ne 0 . \]
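
This family is easy to verify symbolically in Sage, treating a, b, and k as symbols (with \( a \ne 0 \)):

sage: a, b, k = var('a b k')
sage: R = matrix(SR, [[0, a, k], [0, 0, b/a], [0, 0, 0]])
sage: R * R                                  # equals A: the entry b in the upper right corner, zeros elsewhere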

sage: M = matrix(QQ, [[0, 0, 2], [0, 0, 0], [0, 0, 0]])   # the matrix A with b = 2 for illustration
sage: M.rank()                                            # 1
sage: M.right_nullity()                                   # 2

 

Applications