The determinant of a square \( n \times n \) matrix A is a value computed as a sum of n! terms, half of which are taken with a plus sign and the other half with a minus sign. The concept of a determinant first appeared nearly two millennia before its supposed invention by the Japanese mathematician Seki Kowa (1642--1708) in 1683 or by his German contemporary Gottfried Leibniz (1646--1716). Traditionally, the determinant of a square matrix is denoted by det(A), det A, or |A|.
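Explicitly, this is the Leibniz formula: a sum over all \( n! \) permutations \( \sigma \) of the set \( \{ 1, 2, \ldots , n \} , \)
\[ \det {\bf A} = \sum_{\sigma \in S_n} \operatorname{sgn} (\sigma )\, a_{1, \sigma (1)}\, a_{2, \sigma (2)} \cdots a_{n, \sigma (n)} , \]
where \( \operatorname{sgn} (\sigma ) = +1 \) for even permutations and \( -1 \) for odd ones.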
In the case of a \( 2 \times 2 \) matrix \( {\bf A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} , \) the determinant is \( \det {\bf A} = ad - bc . \)
Det[M] gives the determinant of the square matrix M:
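For example (the matrix entries below are an arbitrary illustration):

M = {{1, 2}, {3, 4}};
Det[M]    (* returns 1*4 - 2*3 = -2 *)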
We list the main properties of determinants:
1. \( \det ({\bf I} ) = 1 ,\) where I is the identity matrix (all entries are zero except the diagonal ones, which are all equal to one).
2. \( \det \left( {\bf A}^{\mathrm T} \right) = \det \left( {\bf A} \right) . \)
3. \( \det \left( {\bf A}^{-1} \right) = 1/\det \left( {\bf A} \right) = \left( \det {\bf A} \right)^{-1} \) for an invertible matrix A.
4. \( \det \left( {\bf A}\, {\bf B} \right) = \det {\bf A} \, \det {\bf B} . \)
5. \( \det \left( c\,{\bf A} \right) = c^n \,\det \left( {\bf A} \right) \) for an \( n\times n \) matrix A and a scalar c.
6. If \( {\bf A} = [a_{i,j}] \) is a triangular matrix, i.e., \( a_{i,j} = 0 \) whenever i > j or, alternatively, whenever i < j, then its determinant equals the product of the diagonal entries: \( \det {\bf A} = a_{1,1} a_{2,2} \cdots a_{n,n} . \)
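All six properties are easy to check in Mathematica. The following is a minimal sketch; the matrices A and B are arbitrary illustrative choices, not taken from the text, and each comparison returns True because integer matrices are handled in exact arithmetic:

A = {{2, 1, 0}, {1, 3, 1}, {0, 1, 4}};    (* arbitrary invertible matrix, Det[A] = 18 *)
B = {{1, 0, 2}, {0, 1, 1}, {2, 1, 0}};    (* arbitrary matrix, Det[B] = -5 *)
Det[IdentityMatrix[3]]                     (* property 1: returns 1 *)
Det[Transpose[A]] == Det[A]                (* property 2 *)
Det[Inverse[A]] == 1/Det[A]                (* property 3 *)
Det[A.B] == Det[A] Det[B]                  (* property 4 *)
Det[5 A] == 5^3 Det[A]                     (* property 5, with c = 5 and n = 3 *)
Det[{{2, 7, 1}, {0, 3, 5}, {0, 0, 4}}] == 2*3*4    (* property 6, upper triangular *)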
An \( n \times n \) square matrix A is called invertible if there exists an \( n \times n \) matrix B such that \( {\bf A}\,{\bf B} = {\bf B}\,{\bf A} = {\bf I} , \) the identity matrix; such a matrix B is unique and is denoted by \( {\bf A}^{-1} . \)
For example, the matrix \( {\bf B} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \) satisfies \( {\bf B}\,{\bf B} = {\bf B}^2 = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} = -{\bf I} , \) so B is a square root of the negative identity matrix \( -{\bf I} = -\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} , \) while the fourth power of the matrix B is the identity matrix: \( {\bf B}^4 = {\bf I} . \)
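These identities can be verified directly with MatrixPower, which computes repeated matrix products:

B = {{0, 1}, {-1, 0}};
MatrixPower[B, 2]    (* {{-1, 0}, {0, -1}}, that is, -IdentityMatrix[2] *)
MatrixPower[B, 4]    (* {{1, 0}, {0, 1}}, the identity matrix *)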
We list some basic properties of the inverse operation:
1. \( \left( {\bf A}^{-1} \right)^{-1} = {\bf A} . \)
2. \( \left( c\,{\bf A} \right)^{-1} = c^{-1} \,{\bf A}^{-1} \) for a nonzero scalar c.
3. \( \left( {\bf A}^{\mathrm T} \right)^{-1} = \left( {\bf A}^{-1} \right)^{\mathrm T} . \)
4. \( \left( {\bf A}\, {\bf B} \right)^{-1} = {\bf B}^{-1} {\bf A}^{-1} . \)
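A short Mathematica sketch confirming properties 1--4; the \( 2 \times 2 \) matrices below are arbitrary invertible examples, and each comparison returns True:

A = {{2, 1}, {1, 3}};    (* Det[A] = 5, so A is invertible *)
B = {{1, 2}, {0, 1}};    (* Det[B] = 1, so B is invertible *)
Inverse[Inverse[A]] == A                          (* property 1 *)
Inverse[5 A] == (1/5) Inverse[A]                  (* property 2, with c = 5 *)
Inverse[Transpose[A]] == Transpose[Inverse[A]]    (* property 3 *)
Inverse[A.B] == Inverse[B].Inverse[A]             (* property 4 *)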
A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, A is orthogonal if \( {\bf A}^{\mathrm T} = {\bf A}^{-1} . \) A matrix that is its own inverse, i.e., \( {\bf A} = {\bf A}^{-1} , \) is called an involution.
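For instance, every plane rotation matrix is orthogonal, and the permutation matrix that swaps two coordinates is an involution. A quick Mathematica check (the angle Pi/6 is an arbitrary choice):

R = RotationMatrix[Pi/6];               (* 2D rotation through Pi/6 *)
Simplify[Transpose[R] == Inverse[R]]    (* True: R is orthogonal *)
P = {{0, 1}, {1, 0}};                   (* permutation matrix swapping two coordinates *)
P.P == IdentityMatrix[2]                (* True: P is an involution *)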