Determinants and Inverse Matrices

The determinant of a square \( n \times n \) matrix A is a scalar computed as a sum of n! terms, half of which are taken with a plus sign and the other half with a minus sign. The concept of a determinant first appears nearly two millennia before its supposed invention by the Japanese mathematician Seki Kowa (1642--1708) in 1683, or by his German contemporary Gottfried Leibniz (1646--1716). Traditionally, the determinant of a square matrix is denoted by det(A), det A, or |A|.
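To make the "sum of n! terms" description concrete, here is a minimal Mathematica sketch (the names mat and x, and the choice n = 3, are illustrative assumptions) that builds the signed sum over all permutations and compares it with the built-in Det:

n = 3;
mat = Array[x, {n, n}];  (* generic symbolic 3 x 3 matrix *)
leibniz = Sum[Signature[p] Product[mat[[k, p[[k]]]], {k, n}], {p, Permutations[Range[n]]}];
Expand[leibniz] == Expand[Det[mat]]  (* returns True *)

Each permutation p contributes the product of one entry from every row, signed by the parity of p; for n = 3 this reproduces the six-term formula given below.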

For a \( 2 \times 2 \) matrix A, the determinant is

\[ \det {\bf A} = \det \begin{bmatrix} a&b \\ c&d \end{bmatrix} = \left\vert \begin{array}{cc} a&b \\ c&d \end{array} \right\vert = ad-bc . \]
Similarly, for a \( 3 \times 3 \) matrix, we have a specific formula:
\[ \det \begin{bmatrix} a&b&c \\ d&e&f \\ g&h&i \end{bmatrix} = a\,\left\vert \begin{array}{cc} e&f \\ h&i \end{array} \right\vert - b\,\left\vert \begin{array}{cc} d&f \\ g&i \end{array} \right\vert + c \,\left\vert \begin{array}{cc} d&e \\ g&h \end{array} \right\vert = aei + bfg + cdh - ceg -bdi -afh . \]
Each determinant of a \( 2 \times 2 \) matrix in this equation is called a "minor" of the matrix A. The same procedure (expansion in minors) can be used to find the determinant of a \( 4 \times 4 \) matrix, a \( 5 \times 5 \) matrix, and so on.
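As a sanity check, the following Mathematica sketch (with generic symbolic entries as an assumption) expands a \( 3 \times 3 \) determinant along its first row in terms of its three \( 2 \times 2 \) minors and confirms that the result agrees with the built-in Det:

A = {{a, b, c}, {d, e, f}, {g, h, i}};
minors = a Det[{{e, f}, {h, i}}] - b Det[{{d, f}, {g, i}}] + c Det[{{d, e}, {g, h}}];
Expand[minors] == Expand[Det[A]]  (* returns True *)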

Det[M] gives the determinant of the square matrix M:

M = {{0, 1}, {-1, 3}}
Det[M]
Out[2]= 1

We list the main properties of determinants (a short numeric check follows the list):

1. \( \det ({\bf I} ) = 1 ,\) where I is the identity matrix (all entries are zero except the diagonal entries, which are all equal to one).
2. \( \det \left( {\bf A}^{\mathrm T} \right) = \det \left( {\bf A} \right) . \)
3. \( \det \left( {\bf A}^{-1} \right) = 1/\det \left( {\bf A} \right) = \left( \det {\bf A} \right)^{-1} . \)
4. \( \det \left( {\bf A}\, {\bf B} \right) = \det {\bf A} \, \det {\bf B} . \)
5. \( \det \left( c\,{\bf A} \right) = c^n \,\det \left( {\bf A} \right) \) for \( n\times n \) matrix A and a scalar c.
6. If \( {\bf A} = [a_{i,j}] \) is a triangular matrix, i.e., \( a_{i,j} = 0 \) whenever i > j (upper triangular) or whenever i < j (lower triangular), then its determinant equals the product of the diagonal entries:

\[ \det \left( {\bf A} \right) = a_{1,1} a_{2,2} \cdots a_{n,n} = \prod_{i=1}^n a_{i,i} . \]
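The following Mathematica sketch verifies properties 1--6 on concrete matrices; the particular matrices are illustrative assumptions, not part of the theory:

A = {{2, 1, 0}, {0, 3, 1}, {1, 0, 1}};  (* Det[A] = 7 *)
B = {{1, 2, 0}, {0, 1, 0}, {3, 0, 2}};  (* Det[B] = 2 *)
Det[IdentityMatrix[3]] == 1                        (* property 1: True *)
Det[Transpose[A]] == Det[A]                        (* property 2: True *)
Det[Inverse[A]] == 1/Det[A]                        (* property 3: True *)
Det[A.B] == Det[A] Det[B]                          (* property 4: True *)
Det[5 A] == 5^3 Det[A]                             (* property 5: True *)
Det[{{2, 5, 7}, {0, 3, 4}, {0, 0, 6}}] == 2*3*6    (* property 6: True *)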

An \( n \times n \) square matrix A is called invertible if there exists an \( n \times n \) matrix B such that

\[ {\bf A}\, {\bf B} = {\bf B}\,{\bf A} = {\bf I} , \]
where I is the identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by \( {\bf A}^{-1} . \) A matrix that is its own inverse, i.e. \( {\bf A} = {\bf A}^{-1} \) and \( {\bf A}^{2} = {\bf I}, \) is called an involution.
M := matrix([[0,1],[-1,3]])
\( \left(\begin{array}{cc} 0 & 1\\ -1 & 3 \end{array}\right) \)
det(M)
1
B := matrix([[0,-1],[-1,0]])
\( \left(\begin{array}{cc} 0 & -1\\ -1 & 0 \end{array}\right) \)
inverse(B)
\( \left(\begin{array}{cc} 0 & -1\\ -1 & 0 \end{array}\right) \)

Since B is its own inverse, its second power is the identity matrix, \( {\bf B}\,{\bf B} = {\bf B}^2 = {\bf I} , \) and therefore its fourth power is the identity matrix as well: \( {\bf B}^4 = {\bf I} . \) By contrast, the rotation matrix \( \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \) is a square root of the negative identity matrix \( -{\bf I} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \) because its square equals \( -{\bf I} . \)

B*B*B*B
\( \left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right) \)

A square matrix A is called singular if and only if its determinant is zero. Otherwise, the matrix is nonsingular, or invertible, since an inverse matrix exists exactly for nonsingular matrices. For a \( 2 \times 2 \) matrix A, the Cayley--Hamilton theorem gives
\[ {\bf A}^{-1} = \frac{1}{\det {\bf A}} \left[ \left( \mbox{tr} {\bf A} \right) {\bf I} - {\bf A} \right] , \]
where \( \mbox{tr}\, {\bf A} \) denotes the trace of A, the sum of its diagonal entries.
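Here is a minimal symbolic check of this formula in Mathematica (the generic entries a, b, c, d are assumptions):

A = {{a, b}, {c, d}};
inv = (Tr[A] IdentityMatrix[2] - A)/Det[A];  (* ((tr A) I - A) / det A *)
Simplify[inv == Inverse[A]]  (* returns True *)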

We list some basic properties of the inverse operation (a short numeric check follows the list):

1. \( \left( {\bf A}^{-1} \right)^{-1} = {\bf A} . \)
2. \( \left( c\,{\bf A} \right)^{-1} = c^{-1} \,{\bf A}^{-1} \) for a nonzero scalar c.
3. \( \left( {\bf A}^{\mathrm T} \right)^{-1} = \left( {\bf A}^{-1} \right)^{\mathrm T} . \)
4. \( \left( {\bf A}\, {\bf B} \right)^{-1} = {\bf B}^{-1} {\bf A}^{-1} \) (note the reversal of order).
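A quick numeric check of these properties in Mathematica, reusing the two sample matrices from the session above:

A = {{0, 1}, {-1, 3}};  (* the matrix M above, det = 1 *)
B = {{0, -1}, {-1, 0}};
Inverse[Inverse[A]] == A                        (* property 1: True *)
Inverse[3 A] == (1/3) Inverse[A]                (* property 2: True *)
Inverse[Transpose[A]] == Transpose[Inverse[A]]  (* property 3: True *)
Inverse[A.B] == Inverse[B].Inverse[A]           (* property 4: True *)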

A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, A is orthogonal if \( {\bf A}^{\mathrm T} = {\bf A}^{-1} . \)
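For instance, the plane rotation matrix is orthogonal; a short Mathematica check (the symbolic angle t is an assumption):

R = {{Cos[t], -Sin[t]}, {Sin[t], Cos[t]}};  (* rotation through angle t *)
Simplify[Transpose[R] == Inverse[R]]  (* returns True *)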
