The general definition of the determinant (the term was first introduced by the German mathematician Carl Friedrich Gauss in 1801) is difficult and counterintuitive. The determinant of a square \( n \times n \) matrix **A** is the value calculated as a sum of n! terms, half of
them taken with a plus sign and the other half with a minus sign. The concept of a determinant appeared nearly two millennia before its supposed
invention by the Japanese mathematician Seki Kowa (1642--1708) in 1683 or by his German contemporary Gottfried Leibniz (1646--1716). Traditionally, the determinant of a square matrix is denoted by det(A), det A, or |A|.

We define it recursively using cofactor expansion. For a \( 1 \times 1 \) matrix that consists of one element, \( {\bf A} = [a] , \) its determinant is \( \det {\bf A} = \left\vert {\bf A} \right\vert = a . \) For a \( 2\times 2 \) matrix \( {\bf A} = \left[ \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right] , \) its determinant is \( \det {\bf A} = a_{11} a_{22} - a_{12} a_{21} . \) Similarly, for a \( 3 \times 3 \) matrix, we have a specific formula:
\( \det {\bf A} = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} . \)
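The small-matrix formulas above can be checked with a short pure-Python sketch (the helper names `det2` and `det3` are illustrative, not library functions):

```python
# Determinants of small matrices, straight from the cofactor formulas.
# det2/det3 are illustrative helper names, not built-ins.

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """Determinant of a 3x3 matrix via expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det2([[1, 2], [3, 4]]))                      # 1*4 - 2*3 = -2
print(det3([[2, 0, 1], [1, 3, 2], [0, 1, 1]]))     # 3
```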

If **A** is a square matrix, then the minor of entry \( a_{ij} \) is denoted by
\( M_{ij} \) and is defined to be the determinant of the submatrix that remains after the i-th row and j-th column are deleted from **A**.
The number \( (-1)^{i+j} M_{ij} \) is denoted by \( C_{ij} \)
and is called the cofactor of entry \( a_{ij} . \)
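The recursive definition by minors and cofactors translates directly into code. Below is a minimal pure-Python sketch (the names `minor` and `det` are our own, and 0-based indices replace the 1-based \( i, j \) above):

```python
def minor(a, i, j):
    """Submatrix left after deleting row i and column j (0-based)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    """Determinant by cofactor expansion along the first row:
    det A = sum_j (-1)**j * a[0][j] * det(minor(a, 0, j))."""
    n = len(a)
    if n == 1:                      # base case: det [a] = a
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j)) for j in range(n))

print(det([[1, 2], [3, 4]]))        # -2, matching the 2x2 formula
```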

MATLAB uses the standard command

` det(A)`

to evaluate the determinant of a square matrix. The inverse of a matrix is obtained with either of

` inv(A)`

` A^(-1)`

We list the main properties of determinants:

1. \( \det ({\bf I} ) = 1 ,\) where **I** is the identity matrix (all entries are zero except the diagonal entries, which are all one).

2. \( \det \left( {\bf A}^{\mathrm T} \right) = \det \left( {\bf A} \right) . \)

3. \( \det \left( {\bf A}^{-1} \right) = 1/\det \left( {\bf A} \right) = \left( \det {\bf A} \right)^{-1} . \)

4. \( \det \left( {\bf A}\, {\bf B} \right) = \det {\bf A} \, \det {\bf B} . \)

5. \( \det \left( c\,{\bf A} \right) = c^n \,\det \left( {\bf A} \right) \) for an \( n\times n \) matrix
**A** and a scalar c.

6. If \( {\bf A} = [a_{i,j}] \) is a triangular matrix, i.e., \( a_{i,j} = 0 \) whenever i > j or, alternatively, whenever i < j, then its determinant equals the product of the diagonal entries:
\( \det {\bf A} = a_{11} a_{22} \cdots a_{nn} . \)
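The listed properties are easy to spot-check numerically. A minimal pure-Python sketch (`det` and `matmul` are assumed toy helpers, not library calls; the matrices are made-up examples):

```python
def det(a):
    """Toy determinant via cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(n))

def matmul(a, b):
    """Square matrix product."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1], [5, 3]]
B = [[1, 4], [2, 9]]
c = 3

assert det([[1, 0], [0, 1]]) == 1                                   # property 1
assert det([[A[j][i] for j in range(2)] for i in range(2)]) == det(A)  # property 2
assert det(matmul(A, B)) == det(A) * det(B)                         # property 4
assert det([[c * x for x in row] for row in A]) == c ** 2 * det(A)  # property 5
T = [[2, 7, 1], [0, 3, 5], [0, 0, 4]]
assert det(T) == 2 * 3 * 4                                          # property 6
```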

An \( n \times n \) square matrix **A** is called **invertible** if there exists an
\( n \times n \) matrix **B** such that
\( {\bf A}\,{\bf B} = {\bf B}\,{\bf A} = {\bf I} , \)
where **I** is the identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix **B** is uniquely determined by **A** and is called the inverse of **A**, denoted by \( {\bf A}^{-1} . \) A matrix that is its own inverse, i.e. \( {\bf A} = {\bf A}^{-1} \) and \( {\bf A}^{2} = {\bf I}, \) is called an involution.

For example, the matrix \( {\bf B} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \) is a square root
of the negative identity matrix \( -{\bf I} = -\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \) because its second power is \( {\bf B}\,{\bf B} = {\bf B}^2 = -{\bf I} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} , \) while the fourth power of the matrix **B** is the identity matrix:
\( {\bf B}^4 = {\bf I} . \)
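Assuming the matrix in question is \( {\bf B} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \) (any real \( 2 \times 2 \) matrix with \( {\bf B}^2 = -{\bf I} \) would do), the powers can be checked in a few lines of Python:

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[0, 1], [-1, 0]]       # assumed example; satisfies B*B = -I
B2 = matmul(B, B)
B4 = matmul(B2, B2)
print(B2)                   # [[-1, 0], [0, -1]], i.e. -I
print(B4)                   # [[1, 0], [0, 1]],  i.e.  I
```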

A square matrix **A** is called singular if and only if its determinant is zero. Otherwise, the matrix is nonsingular or invertible (because an inverse matrix exists for such a matrix). The Cayley–Hamilton method for a \( 2 \times 2 \) matrix \( {\bf A} = \left[ \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right] \) gives
\( {\bf A}^{-1} = \frac{1}{\det {\bf A}} \left[ \begin{array}{cc} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{array} \right] . \)
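By the Cayley–Hamilton theorem, a \( 2 \times 2 \) matrix satisfies \( {\bf A}^2 - (\operatorname{tr}{\bf A})\,{\bf A} + (\det {\bf A})\,{\bf I} = {\bf 0} , \) which rearranges to \( {\bf A}^{-1} = \bigl( (\operatorname{tr}{\bf A})\,{\bf I} - {\bf A} \bigr) / \det {\bf A} . \) A quick sketch with a made-up matrix:

```python
A = [[4, 7], [2, 6]]                          # made-up example matrix
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # det A = 24 - 14 = 10
t = A[0][0] + A[1][1]                         # tr A = 10

# Cayley-Hamilton: A^2 - t*A + d*I = 0  =>  A^{-1} = (t*I - A) / d
Ainv = [[(t - A[0][0]) / d, -A[0][1] / d],
        [-A[1][0] / d, (t - A[1][1]) / d]]
print(Ainv)                                   # [[0.6, -0.7], [-0.2, 0.4]]
```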

We list some basic properties of inverse operation:

1. \( \left( {\bf A}^{-1} \right)^{-1} = {\bf A} . \)

2. \( \left( c\,{\bf A} \right)^{-1} = c^{-1} \,{\bf A}^{-1} \) for a nonzero scalar c.

3. \( \left( {\bf A}^{\mathrm T} \right)^{-1} = \left( {\bf A}^{-1} \right)^{\mathrm T} . \)

4. \( \left( {\bf A}\, {\bf B} \right)^{-1} = {\bf B}^{-1} {\bf A}^{-1} . \)
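These identities can be spot-checked exactly using `fractions.Fraction` and a small hand-rolled \( 2 \times 2 \) inverse (a sketch; `inv2`, `matmul`, and `transpose` are assumed helper names):

```python
from fractions import Fraction as F

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d, m[0][0] / d]]

def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

A = [[F(2), F(1)], [F(5), F(3)]]      # made-up invertible matrices
B = [[F(1), F(2)], [F(3), F(7)]]

assert inv2(inv2(A)) == A                              # property 1
assert inv2(transpose(A)) == transpose(inv2(A))        # property 3
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))  # property 4
```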

A square matrix whose transpose is equal to its inverse is called an orthogonal matrix;
that is, **A** is orthogonal if \( {\bf A}^{\mathrm T} = {\bf A}^{-1} . \)
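A standard example of an orthogonal matrix is the \( 2 \times 2 \) rotation matrix. The sketch below verifies \( {\bf A}^{\mathrm T}{\bf A} = {\bf I} \) numerically for one (arbitrarily chosen) angle:

```python
import math

t = math.pi / 6                                        # arbitrary rotation angle
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]                      # rotation matrix
At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose

# A^T A should equal I (up to floating-point rounding), so A^T = A^{-1}
prod = [[sum(At[i][k] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)
```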
