Partitioned Matrices

A block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.

  Partitioned matrices appear in most modern applications of linear algebra because the notation highlights essential structures of matrices. In particular, partitioned matrices play an essential role in the finite element method. Partitioning a matrix generalizes the idea, used previously, of viewing a matrix as a list of columns or rows. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix overlaid with a collection of horizontal and vertical lines that break it up, or partition it, into a collection of smaller matrices. Especially when the dimensions of a matrix are large, it may be beneficial to view the matrix as built from smaller submatrices: if we simultaneously partition adjacent rows and adjacent columns into groups, we obtain a representation of the matrix as a partitioned or block matrix.

Example: We partition a \( 4 \times 5 \) matrix into 9 blocks by splitting its rows into three groups and its columns into three groups:
\[ {\bf M} = \begin{bmatrix} 1&-2&3&-1&4 \\ 3&1&-2&4&-2 \\ 5&4&-3&1&1 \\ 2&-3&4&2&-3 \end{bmatrix} = \left[ \begin{array}{cc|cc|c} 1&-2&3&-1&4 \\ 3&1&-2&4&-2 \\ \hline 5&4&-3&1&1 \\ \hline 2&-3&4&2&-3 \end{array} \right] = \begin{bmatrix} {\bf A}_{11} & {\bf A}_{12} & {\bf A}_{13} \\ {\bf A}_{21} & {\bf A}_{22} & {\bf A}_{23} \\ {\bf A}_{31} & {\bf A}_{32} & {\bf A}_{33} \end{bmatrix} , \]
where the blocks have the following entries:
\[ {\bf A}_{11} = \begin{bmatrix} 1&-2 \\ 3&1 \end{bmatrix} , \quad {\bf A}_{12} = \begin{bmatrix} 3&-1 \\ -2&4 \end{bmatrix} , \quad {\bf A}_{13} = \begin{bmatrix} 4 \\ -2 \end{bmatrix} , \quad {\bf A}_{21} = \begin{bmatrix} 5&4 \end{bmatrix} , \quad {\bf A}_{22} = \begin{bmatrix} -3&1 \end{bmatrix} , \quad {\bf A}_{23} = \begin{bmatrix} 1 \end{bmatrix} , \]
\[ {\bf A}_{31} = \begin{bmatrix} 2&-3 \end{bmatrix} , \quad {\bf A}_{32} = \begin{bmatrix} 4&2 \end{bmatrix} , \quad {\bf A}_{33} = \begin{bmatrix} -3 \end{bmatrix} . \]
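In Mathematica, the blocks of such a partition can be extracted with Part and Span; a minimal sketch, assuming the matrix M above:
M = {{1, -2, 3, -1, 4}, {3, 1, -2, 4, -2}, {5, 4, -3, 1, 1}, {2, -3, 4, 2, -3}};
A11 = M[[1 ;; 2, 1 ;; 2]]  (* {{1, -2}, {3, 1}} *)
A23 = M[[3 ;; 3, 5 ;; 5]]  (* {{1}} *)
A33 = M[[4 ;; 4, 5 ;; 5]]  (* {{-3}} *)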

If matrices A and B are the same size and are partitioned in exactly the same way, then it is natural to give the sum A + B the same partition and to add corresponding blocks. Similarly, one can subtract partitioned matrices block by block, and multiplication of a partitioned matrix by a scalar is also computed block by block.
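Here is a quick Mathematica check, with hypothetical 2-by-2 blocks, that blockwise addition agrees with ordinary addition:
A1 = {{1, 2}, {3, 4}}; A2 = {{0, 1}, {1, 0}};
B1 = {{5, 6}, {7, 8}}; B2 = {{1, 1}, {1, 1}};
(* adding the assembled matrices equals assembling the blockwise sums *)
ArrayFlatten[{{A1, A2}}] + ArrayFlatten[{{B1, B2}}] == ArrayFlatten[{{A1 + B1, A2 + B2}}]
(* True *)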

  It is possible to compute the product of two partitioned matrices using only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however: it requires "conformable partitions" of the two matrices A and B, such that all submatrix products that will be used are defined by the usual row-column rule.

Example: Let
\[ {\bf M} = \begin{bmatrix} 1&-2&3&-1&4 \\ 3&1&-2&4&-2 \\ 5&4&-3&1&1 \\ 2&-3&4&2&-3 \end{bmatrix} = \left[ \begin{array}{cc|cc|c} 1&-2&3&-1&4 \\ 3&1&-2&4&-2 \\ \hline 5&4&-3&1&1 \\ \hline 2&-3&4&2&-3 \end{array} \right] = \begin{bmatrix} {\bf A}_{11} & {\bf A}_{12} & {\bf A}_{13} \\ {\bf A}_{21} & {\bf A}_{22} & {\bf A}_{23} \\ {\bf A}_{31} & {\bf A}_{32} & {\bf A}_{33} \end{bmatrix} , \qquad {\bf B} = \left[ \begin{array}{c} 6 \\ -3 \\ \hline 1 \\ 4 \\ \hline -1 \end{array} \right] = \left[ \begin{array}{c} {\bf B}_1 \\ {\bf B}_2 \\ {\bf B}_3 \end{array} \right] . \]
Then their product is
\[ {\bf M}\,{\bf B} = \begin{bmatrix} {\bf A}_{11} {\bf B}_1 + {\bf A}_{12} {\bf B}_2 + {\bf A}_{13} {\bf B}_3 \\ {\bf A}_{21} {\bf B}_1 + {\bf A}_{22} {\bf B}_2 + {\bf A}_{23} {\bf B}_3 \\ {\bf A}_{31} {\bf B}_1 + {\bf A}_{32} {\bf B}_2 + {\bf A}_{33} {\bf B}_3 \end{bmatrix} , \]
where
\[ {\bf A}_{11} {\bf B}_1 = \begin{bmatrix} 1&-2 \\ 3&1 \end{bmatrix} \begin{bmatrix} 6 \\ -3 \end{bmatrix} = \begin{bmatrix} 12 \\ 15 \end{bmatrix} , \qquad {\bf A}_{12} {\bf B}_2 = \begin{bmatrix} 3&-1 \\ -2& 4 \end{bmatrix} \begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} -1 \\ 14 \end{bmatrix} , \qquad {\bf A}_{13} {\bf B}_3 = \begin{bmatrix} 4 \\ -2 \end{bmatrix} \begin{bmatrix} -1 \end{bmatrix} = \begin{bmatrix} -4 \\ 2 \end{bmatrix} , \]
\[ {\bf A}_{21} {\bf B}_1 = \begin{bmatrix} 5 & 4 \end{bmatrix} \begin{bmatrix} 6 \\ -3 \end{bmatrix} = \begin{bmatrix} 18 \end{bmatrix} , \qquad {\bf A}_{22} {\bf B}_2 = \begin{bmatrix} -3&1 \end{bmatrix} \begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \end{bmatrix} , \qquad {\bf A}_{23} {\bf B}_3 = \begin{bmatrix} 1 \end{bmatrix} \begin{bmatrix} -1 \end{bmatrix} = \begin{bmatrix} -1 \end{bmatrix} , \]
\[ {\bf A}_{31} {\bf B}_1 = \begin{bmatrix} 2&-3 \end{bmatrix} \begin{bmatrix} 6 \\ -3 \end{bmatrix} = \begin{bmatrix} 21 \end{bmatrix} , \qquad {\bf A}_{32} {\bf B}_2 = \begin{bmatrix} 4&2 \end{bmatrix} \begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} 12 \end{bmatrix} , \qquad {\bf A}_{33} {\bf B}_3 = \begin{bmatrix} -3 \end{bmatrix} \begin{bmatrix} -1 \end{bmatrix} = \begin{bmatrix} 3 \end{bmatrix} . \]
Combining all terms together, we get
\[ {\bf M}\,{\bf B} = \left[ \begin{array}{c} 7 \\ 31 \\ \hline 18 \\ \hline 36 \end{array} \right] . \]
We check calculations with Mathematica:
M = {{ 1, -2, 3, -1, 4},{3, 1, -2, 4, -2},{5, 4, -3, 1, 1},{2, -3, 4, 2, -3}}
B = {{6}, {-3}, {1}, {4}, {-1}}
M.B
Out[3]= {{7}, {31}, {18}, {36}}
Suppose that a \( (p+q) \times (p+q) \) matrix M can be partitioned into four submatrix blocks as
\[ {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} , \]
where both matrices A and D are square matrices of dimensions \( p\times p \) and \( q\times q, \) respectively. If A is nonsingular, the Schur complement of M with respect to A is defined as
\[ {\bf M}/{\bf A} = {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} . \]
If D is nonsingular, the Schur complement of M with respect to D is defined as
\[ {\bf M}/{\bf D} = {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} . \]
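For later experiments, the two Schur complements can be packaged as short Mathematica functions; schurA and schurD are hypothetical helper names, with lowercase arguments chosen to avoid the protected symbols C and D:
schurA[a_, b_, c_, d_] := d - c.Inverse[a].b  (* M/A = D - C.Inverse[A].B *)
schurD[a_, b_, c_, d_] := a - b.Inverse[d].c  (* M/D = A - B.Inverse[D].C *)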

Issai Schur

The Schur complement is named after Issai Schur (1875--1941), who introduced it in 1917 (I. Schur, Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind, Journal für die reine und angewandte Mathematik, 147, 1917, 205--232). The American mathematician Emilie Virginia Haynsworth (1916--1985) was the first to call it the Schur complement, in a 1968 paper. The Schur complement is a key tool in the fields of numerical analysis, statistics, and matrix analysis.

  Issai Schur was a Russian-born mathematician (born in Mogilev, now in Belarus) who worked in Germany for most of his life. He spoke German without a trace of an accent, and nobody even guessed that it was not his first language. He obtained his doctorate in 1901, became a lecturer in 1903 and, after a stay at the University of Bonn, became a professor in 1919. As a student of Ferdinand Georg Frobenius (1849--1917), he worked on group representations. He is perhaps best known today for his result on the existence of the Schur decomposition, which we will discuss later.

  In 1922 Schur was elected to the Prussian Academy, proposed by Planck, the secretary of the Academy. From 1933, events in Germany made Schur's life increasingly difficult. Schur considered himself a German, not a Jew, but the Nazis had a different opinion. In 1935 Schur was dismissed from his chair in Berlin, but he continued to work there, suffering great hardship and difficulties. Schur left Germany for Palestine in 1939, broken in mind and body, having endured the final humiliation of being forced to find a sponsor to pay the 'Reich flight tax' that allowed him to leave Germany. Without sufficient funds to live in Palestine, he was forced to sell his beloved academic books to the Institute for Advanced Study in Princeton.

Example: Consider the matrix
\[ {\bf M} = \begin{bmatrix} 1&-2&3&-1&4 \\ 3&1&-2&4&-2 \\ 5&4&-3&1&1 \\ 2&-3&4&2&-3 \\ 0&1&3&-1&2 \end{bmatrix} = \left[ \begin{array}{cc|ccc} 1&-2&3&-1&4 \\ 3&1&-2&4&-2 \\ \hline 5&4&-3&1&1 \\ 2&-3&4&2&-3 \\ 0&1&3&-1&2 \end{array} \right] = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} , \]
where
\[ {\bf A} = \begin{bmatrix} 1&-2 \\ 3&1 \end{bmatrix} , \quad {\bf B} = \begin{bmatrix} 3&-1&4 \\ -2&4&2 \end{bmatrix} , \quad {\bf C} = \begin{bmatrix} 5&4 \\ 2&-3 \\ 0&1 \end{bmatrix} , \quad {\bf D} = \begin{bmatrix} -3&1&1 \\ 4&2& -3 \\ 3&-1&2 \end{bmatrix} . \]
Since both square matrices A and D are nonsingular (\( \det{\bf A} =7 , \quad \det{\bf D} = -30 \) ), we can evaluate both Schur complements:
\begin{align*} {\bf M}/{\bf A} &= {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} = \frac{1}{7} \begin{bmatrix} 28&-56&7 \\ -3&21&-67 \\ 32&-14&24 \end{bmatrix} , \\ {\bf M}/{\bf D} &= {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} = - \frac{1}{3} \begin{bmatrix} 7&19 \\ 42 & 27 \end{bmatrix} . \end{align*}
We check calculations with Mathematica (CC and DD are used below because C and D are protected symbols in Mathematica):
M = {{1, -2, 3, -1, 4}, {3, 1, -2, 4, -2}, {5, 4, -3, 1, 1}, {2, -3, 4, 2, -3}, {0, 1, 3, -1, 2}}
DD = {{-3, 1, 1}, {4, 2, -3}, {3, -1, 2}}
A = {{1, -2}, {3, 1}}
B = {{3, -1, 4}, {-2, 4, 2}}
CC = {{5, 4}, {2, -3}, {0, 1}}
MA = (DD - CC.Inverse[A].B)*7
Out[6]= {{28, -56, 7}, {-3, 21, -67}, {32, -14, 24}}
MD = (A - B.Inverse[DD].CC)*3
Out[7]= {{-7, -19}, {-42, -27}}

Block Matrix Determinant

For a block matrix \( {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} \) with square matrices A and D, we have
\[ \det \left( {\bf M} \right) = \det \left( {\bf A} \right) \det \left( {\bf M} / {\bf A} \right) \qquad\mbox{and} \qquad \det \left( {\bf M} \right) = \det \left( {\bf D} \right) \det \left( {\bf M} / {\bf D} \right) , \]
provided that the corresponding inverse matrices exist. The latter identity follows from the equation
\[ \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} \begin{bmatrix} {\bf I} & {\bf 0} \\ - {\bf D}^{-1} {\bf C} & {\bf I} \end{bmatrix} = \begin{bmatrix} {\bf A} - {\bf B}\, {\bf D}^{-1} {\bf C} & {\bf B} \\ {\bf 0} & {\bf D} \end{bmatrix} , \]
where I is the identity matrix (taken to be of the appropriate dimension) and the square matrix D is assumed to be nonsingular; taking determinants on both sides and using the fact that the determinant of a block triangular matrix is the product of the determinants of its diagonal blocks gives the formula. A similar argument is valid for an invertible matrix A. The following block diagonalization forms clearly display the role of the Schur complement. If A is nonsingular,
\begin{align*} &\begin{bmatrix} {\bf I} & {\bf 0} \\ -{\bf C}\, {\bf A}^{-1} & {\bf I} \end{bmatrix} \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} \begin{bmatrix} {\bf I} & - {\bf A}^{-1} {\bf B} \\ {\bf 0} & {\bf I} \end{bmatrix} = \begin{bmatrix} {\bf A} & {\bf 0} \\ {\bf 0} & {\bf M} / {\bf A} \end{bmatrix} , \\ &\begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} = \begin{bmatrix} {\bf I} & {\bf 0} \\ {\bf C}\, {\bf A}^{-1} & {\bf I} \end{bmatrix} \begin{bmatrix} {\bf A} & {\bf 0} \\ {\bf 0} & {\bf M} / {\bf A} \end{bmatrix} \begin{bmatrix} {\bf I} & {\bf A}^{-1} {\bf B} \\ {\bf 0} & {\bf I} \end{bmatrix} . \end{align*}
If D is nonsingular, we have
\begin{align*} &\begin{bmatrix} {\bf I} & - {\bf B}\,{\bf D}^{-1} \\ {\bf 0} & {\bf I} \end{bmatrix} \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} \begin{bmatrix} {\bf I} & {\bf 0} \\ -{\bf D}^{-1} {\bf C} & {\bf I} \end{bmatrix} = \begin{bmatrix} {\bf M} / {\bf D} & {\bf 0} \\ {\bf 0} & {\bf D} \end{bmatrix} , \\ &\begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} = \begin{bmatrix} {\bf I} & {\bf B}\,{\bf D}^{-1} \\ {\bf 0} & {\bf I} \end{bmatrix} \begin{bmatrix} {\bf M} / {\bf D} & {\bf 0} \\ {\bf 0} & {\bf D} \end{bmatrix} \begin{bmatrix} {\bf I} & {\bf 0} \\ {\bf D}^{-1} {\bf C} & {\bf I} \end{bmatrix} . \end{align*}

All of these equations can be directly verified by matrix multiplication.
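For instance, here is a Mathematica sketch that verifies the first diagonalization numerically, reusing the matrix M and the blocks A, B, CC, DD from the Schur complement example above:
SA = DD - CC.Inverse[A].B;  (* the Schur complement M/A *)
Z23 = ConstantArray[0, {2, 3}]; Z32 = ConstantArray[0, {3, 2}];
LL = ArrayFlatten[{{IdentityMatrix[2], Z23}, {-CC.Inverse[A], IdentityMatrix[3]}}];
RR = ArrayFlatten[{{IdentityMatrix[2], -Inverse[A].B}, {Z32, IdentityMatrix[3]}}];
LL.M.RR == ArrayFlatten[{{A, Z23}, {Z32, SA}}]
(* True *)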

Example: Consider four block matrices:
\[ {\bf A}_1 = \begin{bmatrix} 3&1 \\ 5&2 \end{bmatrix} , \quad {\bf A}_2 = \begin{bmatrix} 1&2&3 \\ 2&3&4 \end{bmatrix} , \quad {\bf A}_3 = \begin{bmatrix} -1&1 \\ 2&3 \\ 3&4 \end{bmatrix} , \quad {\bf A}_4 = \begin{bmatrix} 2&1&-1 \\ 1&1&1 \\ 2&3&4 \end{bmatrix} . \]
With Mathematica, we build a 5-by-5 matrix from these blocks:
A1 = {{3, 1}, {5, 2}}
A2 = {{1, 2, 3}, {2, 3, 4}}
A3 = {{-1, 1}, {2, 3}, {3, 4}}
A4 = {{2, 1, -1}, {1, 1, 1}, {2, 3, 4}}
M = ArrayFlatten[{{A1, A2}, {A3, A4}}]
\[ {\bf M} = \begin{bmatrix} {\bf A}_1&{\bf A}_2 \\ {\bf A}_3&{\bf A}_4 \end{bmatrix} = \begin{bmatrix} 3&1&1&2&3 \\ 5&2&2&3&4 \\ -1&1&2&1&-1 \\ 2&3&1&1&1 \\ 3&4&2&3&4 \end{bmatrix} . \]
Then we calculate the Schur complements, with A1 and A4 playing the roles of the diagonal blocks A and D:
\[ {\bf M}/{\bf A} = {\bf A}_4 - {\bf A}_3 {\bf A}_1^{-1} {\bf A}_2 = \begin{bmatrix} 1&3&4 \\ -2&2&6 \\ -2&4&10 \end{bmatrix} , \qquad {\bf M}/{\bf D} = {\bf A}_1 - {\bf A}_2 {\bf A}_4^{-1} {\bf A}_3 = \begin{bmatrix} 2&0 \\ 2&-2 \end{bmatrix} . \]
MA = A4 - A3.Inverse[A1].A2
Out[6]= {{1, 3, 4}, {-2, 2, 6}, {-2, 4, 10}}
Det[MA]
Out[7]= 4
MD = A1 - A2.Inverse[A4].A3
Out[8]= {{2, 0}, {2, -2}}
Det[MD]
Out[9]= -4
Therefore, since \( \det {\bf A}_1 = 1 \) and \( \det {\bf A}_4 = -1 , \)
\[ \det{\bf M} = \det {\bf A}_1 \det \left( {\bf M}/{\bf A} \right) = \det {\bf A}_4 \det \left( {\bf M}/{\bf D} \right) = 4 . \]

Block Matrix Inversion

The inverse of a block matrix \( {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} \) can be expressed through the Schur complements:
\begin{align*} {\bf M}^{-1} &= \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix}^{-1} = \begin{bmatrix} {\bf A}^{-1} + {\bf A}^{-1} {\bf B} \left( {\bf M}/{\bf A} \right)^{-1} {\bf C} \,{\bf A}^{-1} & - {\bf A}^{-1} {\bf B}\left( {\bf M}/{\bf A} \right)^{-1} \\ - \left( {\bf M}/{\bf A} \right)^{-1} {\bf C} \,{\bf A}^{-1} & \left( {\bf M}/{\bf A} \right)^{-1} \end{bmatrix} , \\ \\ {\bf M}^{-1} &= \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix}^{-1} = \begin{bmatrix} \left( {\bf M}/{\bf D} \right)^{-1} & - \left( {\bf M}/{\bf D} \right)^{-1} {\bf B} \,{\bf D}^{-1} \\ - {\bf D}^{-1} {\bf C} \left( {\bf M}/{\bf D} \right)^{-1} & {\bf D}^{-1} + {\bf D}^{-1} {\bf C} \left( {\bf M}/{\bf D} \right)^{-1} {\bf B} \, {\bf D}^{-1} \end{bmatrix} . \end{align*}
When p = q, so that A, B, C, and D are all square matrices of the same size, and all of them are nonsingular together with the four Schur complements (including the generalized complements \( {\bf M}/{\bf B} = {\bf C} - {\bf D}\,{\bf B}^{-1}{\bf A} \) and \( {\bf M}/{\bf C} = {\bf B} - {\bf A}\,{\bf C}^{-1}{\bf D} \)), one obtains the elegant formula
\[ {\bf M}^{-1} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix}^{-1} = \begin{bmatrix} \left( {\bf M}/{\bf D} \right)^{-1} & \left( {\bf M}/{\bf B} \right)^{-1} \\ \left( {\bf M}/{\bf C} \right)^{-1} & \left( {\bf M}/{\bf A} \right)^{-1} \end{bmatrix} . \]
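Here is a numerical spot-check of this formula in Mathematica, on hypothetical 2-by-2 blocks chosen so that all four blocks and all four Schur complements are nonsingular:
A = {{1, 2}, {3, 5}}; B = {{2, 0}, {1, 1}};
CC = {{0, 1}, {1, 0}}; DD = {{4, 1}, {2, 1}};
M = ArrayFlatten[{{A, B}, {CC, DD}}];
MA = DD - CC.Inverse[A].B;  (* M/A *)
MB = CC - DD.Inverse[B].A;  (* M/B *)
MC = B - A.Inverse[CC].DD;  (* M/C *)
MD = A - B.Inverse[DD].CC;  (* M/D *)
Inverse[M] == ArrayFlatten[{{Inverse[MD], Inverse[MB]}, {Inverse[MC], Inverse[MA]}}]
(* True *)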
Example: Let us consider a 5-by-5 matrix that is partitioned into four blocks:
\[ {\bf M} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf C} & {\bf D} \end{bmatrix} , \]
where
\[ {\bf A} = \begin{bmatrix} 3&5 \\ 1&2 \end{bmatrix} , \quad {\bf B} = \begin{bmatrix} 2 &-1&3 \\ -1&1&2 \end{bmatrix} , \quad {\bf C} = \begin{bmatrix} -3&2 \\ 1&-3 \\ 2&1 \end{bmatrix} , \quad {\bf D} = \begin{bmatrix} 2&4&-6 \\ -3&-11&18 \\ -2&-8&13 \end{bmatrix} . \]
Next we calculate the Schur complements:
\begin{align*} {\bf M}/{\bf A} &= {\bf D} - {\bf C}\,{\bf A}^{-1} {\bf B} = \begin{bmatrix} 39&-25&-24 \\ -27&8&31 \\ -15&2&18\end{bmatrix} , \\ {\bf M}/{\bf D} &= {\bf A} - {\bf B}\,{\bf D}^{-1} {\bf C} = \begin{bmatrix} 19/2 & 3 \\ 37& 69 \end{bmatrix} . \end{align*}
We check calculations with Mathematica:
A = {{3, 5}, {1, 2}}
B = {{2, -1, 3}, {-1, 1, 2}}
CC = {{-3, 2}, {1, -3}, {2, 1}}
DD= {{2, 4, -6}, {-3, -11, 18}, {-2, -8, 13}}
MA = DD - CC.Inverse[A].B
Out[5]= {{39, -25, -24}, {-27, 8, 31}, {-15, 2, 18}}
MD = A - B.Inverse[DD].CC
Out[6]= {{19/2, 3}, {37, 69}}
Now we calculate the blocks of the inverse matrix (of course, using Mathematica):
Inverse[MD]
Out[7]= {{46/363, -(2/363)}, {-(74/1089), 19/1089}}
Inverse[MA]
Out[8]= {{82/1089, 134/363, -(53/99)}, {7/363, 38/121, -(17/33)}, {2/33, 3/11, -(1/3)}}
-Inverse[MD].B.Inverse[DD]
Out[9]= {{-(109/363), -(4/121), -(4/33)}, {128/1089, -(83/363), 38/99}}
-Inverse[DD].CC.Inverse[MD]
Out[10]= {{395/1089, -(175/1089)}, {47/363, 140/363}, {4/33, 7/33}}
Inverse[DD] + Inverse[DD].CC.Inverse[MD].B.Inverse[DD]
Out[11]= {{82/1089, 134/363, -(53/99)}, {7/363, 38/121, -(17/33)}, {2/33, 3/11, -(1/3)}}
Inverse[A] + Inverse[A].B.Inverse[MA].CC.Inverse[A]
Out[12]= {{46/363, -(2/363)}, {-(74/1089), 19/1089}}
-Inverse[MA].CC.Inverse[A]
Out[13]= {{395/1089, -(175/1089)}, {47/363, 140/363}, {4/33, 7/33}}
-Inverse[A].B.Inverse[MA]
Out[14]= {{-(109/363), -(4/121), -(4/33)}, {128/1089, -(83/363), 38/99}}
Note that Out[11] reproduces Inverse[MA] (Out[8]) and Out[12] reproduces Inverse[MD] (Out[7]), confirming the identities \( \left( {\bf M}/{\bf A} \right)^{-1} = {\bf D}^{-1} + {\bf D}^{-1} {\bf C} \left( {\bf M}/{\bf D} \right)^{-1} {\bf B}\,{\bf D}^{-1} \) and \( \left( {\bf M}/{\bf D} \right)^{-1} = {\bf A}^{-1} + {\bf A}^{-1} {\bf B} \left( {\bf M}/{\bf A} \right)^{-1} {\bf C}\,{\bf A}^{-1} . \)
Finally, we check the answer:
M = {{3, 5, 2, -1, 3}, {1, 2, -1, 1, 2}, {-3, 2, 2, 4, -6}, {1, -3, -3, -11, 18}, {2, 1, -2, -8, 13}}
Inverse[M]*1089
Out[16]= {{138, -6, -327, -36, -132}, {-74, 19, 128, -249, 418}, {395, -175, 82, 402, -583}, {141, 420, 21, 342, -561}, {132, 231, 66, 297, -363}}
Therefore,
\[ {\bf M}^{-1} = \frac{1}{1089} \begin{bmatrix} 138&-6&-327&-36&-132 \\ -74&19&128&-249&418 \\ 395&-175&82&402&-583 \\ 141&420&21&342&-561 \\ 132&231&66&297&-363 \end{bmatrix} . \qquad ■ \]

Block Diagonal Matrices

A block diagonal matrix is a square block matrix whose main-diagonal blocks are square matrices and whose off-diagonal blocks are zero matrices. A block diagonal matrix M has the form
\[ {\bf M} = \begin{bmatrix} {\bf A}_1 & {\bf 0} & \cdots & {\bf 0} \\ {\bf 0} & {\bf A}_2 & \cdots & {\bf 0} \\ \vdots & \vdots& \ddots & \vdots \\ {\bf 0} & {\bf 0} & \cdots & {\bf A}_n \end{bmatrix} , \]
where each Ak is a square matrix; in other words, M is the direct sum of A1, A2, ... , An: \( {\bf M} = {\bf A}_1 \oplus {\bf A}_2 \oplus \cdots \oplus {\bf A}_n . \) It can also be denoted by diag(\( {\bf A}_1 , {\bf A}_2 , \ldots , {\bf A}_n \) ). Any square matrix can trivially be considered a block diagonal matrix with only one block.

 

For the determinant and trace of a block diagonal matrix, the following properties hold:
\begin{align*} \det \left( {\bf M} \right) &= \det {\bf A}_1 \times \det {\bf A}_2 \times \cdots \times \det {\bf A}_n , \\ \mbox{tr}\left( {\bf M} \right) &= \mbox{tr} \left( {\bf A}_1 \right) + \mbox{tr} \left( {\bf A}_2 \right) + \cdots + \mbox{tr} \left( {\bf A}_n \right) . \end{align*}
The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverse of each block, as follows:
\[ {\bf M}^{-1} = \begin{bmatrix} {\bf A}_1 & {\bf 0} & \cdots & {\bf 0} \\ {\bf 0} & {\bf A}_2 & \cdots & {\bf 0} \\ \vdots & \vdots& \ddots & \vdots \\ {\bf 0} & {\bf 0} & \cdots & {\bf A}_n \end{bmatrix}^{-1} = \begin{bmatrix} {\bf A}_1^{-1} & {\bf 0} & \cdots & {\bf 0} \\ {\bf 0} & {\bf A}_2^{-1} & \cdots & {\bf 0} \\ \vdots & \vdots& \ddots & \vdots \\ {\bf 0} & {\bf 0} & \cdots & {\bf A}_n^{-1} \end{bmatrix} . \]
Example: Suppose we are given two square matrices:
\[ {\bf A}_1 = \begin{bmatrix} 3&1 \\ 5&2 \end{bmatrix} , \quad {\bf A}_4 = \begin{bmatrix} 2&1&-1 \\ 1&1&1 \\ 2&3&4 \end{bmatrix} . \]
With Mathematica, we build a 5-by-5 block diagonal matrix from these two blocks:
A1 = {{3, 1}, {5, 2}}
A4 = {{2, 1, -1}, {1, 1, 1}, {2, 3, 4}}
zero23 = ConstantArray[0, {2, 3}]
zero32 = ConstantArray[0, {3, 2}]
A = ArrayFlatten[{{A1, zero23}, {zero32, A4}}]
\[ {\bf A} = \begin{bmatrix} {\bf A}_1&{\bf 0} \\ {\bf 0}&{\bf A}_4 \end{bmatrix} = \begin{bmatrix} 3&1&0&0&0 \\ 5&2&0&0&0 \\ 0&0&2&1&-1 \\ 0&0&1&1&1 \\ 0&0&2&3&4 \end{bmatrix} . \]
Then we calculate its determinant and the determinants of the corresponding blocks:
Det[A]
Out[6]= -1
Det[A1]
Out[7]= 1
Det[A4]
Out[8]= -1
which is in accordance with the formula \( \det{\bf A} = \det {\bf A}_1 \det {\bf A}_4 . \) Finally, we calculate its inverse:
Inverse[A]
Out[9]= {{2, -1, 0, 0, 0}, {-5, 3, 0, 0, 0}, {0, 0, -1, 7, -2}, {0, 0, 2, -10, 3}, {0, 0, -1, 4, -1}}
Inverse[A1]
Out[10]= {{2, -1}, {-5, 3}}
Inverse[A4]
Out[11]= {{-1, 7, -2}, {2, -10, 3}, {-1, 4, -1}}
Therefore,
\[ {\bf A}^{-1} = \begin{bmatrix} 3&1&0&0&0 \\ 5&2&0&0&0 \\ 0&0&2&1&-1 \\ 0&0&1&1&1 \\ 0&0&2&3&4 \end{bmatrix}^{-1} = \begin{bmatrix} 2&-1&0&0&0 \\ -5&3&0&0&0 \\ 0&0&-1&7&-2 \\ 0&0&2&-10&3 \\ 0&0&-1&4&-1 \end{bmatrix} . \]

Block Tridiagonal Matrices

A block tridiagonal matrix is another special block matrix, which, just like the block diagonal matrix, is a square matrix, having square matrices (blocks) on the lower diagonal, the main diagonal, and the upper diagonal, with all other blocks being zero matrices. It is essentially a tridiagonal matrix, but with submatrices in place of scalars. A block tridiagonal matrix M has the form
\[ {\bf M} = \begin{bmatrix} {\bf B}_1 & {\bf C}_1 & {\bf 0} &&& \cdots & {\bf 0} \\ {\bf A}_2 & {\bf B}_2 & {\bf C}_2 &&& \cdots & {\bf 0} \\ \vdots& \ddots & \ddots & \ddots &&& \vdots \\ && {\bf A}_k & {\bf B}_k & {\bf C}_k & & {\bf 0} \\ &&& \ddots & \ddots & \ddots && \\ &&&&{\bf A}_{n-1}&{\bf B}_{n-1} & {\bf C}_{n-1} \\ {\bf 0}&&\cdots&&&{\bf A}_n&{\bf B}_n \end{bmatrix} , \]

where Ak, Bk, and Ck are square submatrices of the lower, main, and upper diagonals, respectively.

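Example: Here is a minimal Mathematica sketch of assembling a block tridiagonal matrix with ArrayFlatten; the 2-by-2 blocks below are hypothetical choices made only for illustration.
B1 = {{1, 2}, {3, 4}}; B2 = {{5, 6}, {7, 8}}; B3 = {{9, 10}, {11, 12}};
A2 = {{1, 0}, {0, 1}}; A3 = {{2, 0}, {0, 2}};
C1 = {{1, 1}, {1, 1}}; C2 = {{0, 1}, {1, 0}};
zero = ConstantArray[0, {2, 2}];
M = ArrayFlatten[{{B1, C1, zero}, {A2, B2, C2}, {zero, A3, B3}}]
(* a 6-by-6 matrix whose (1,3) and (3,1) blocks are zero *)    ■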

Product of two block matrices

Suppose that we are given two matrices Am×p and Bp×n that are partitioned as follows:
\[ {\bf A}_{m\times p} = \begin{bmatrix} {\bf E}_{r\times k} & {\bf F}_{r\times (p-k)} \\ {\bf G}_{(m-r)\times k} & {\bf H}_{(m-r)\times (p-k)} \end{bmatrix} \qquad \mbox{and} \qquad {\bf B}_{p\times n} = \begin{bmatrix} {\bf P}_{k\times l} & {\bf Q}_{k\times (n-l)} \\ {\bf R}_{(p-k)\times l} & {\bf S}_{(p-k)\times (n-l)} \end{bmatrix} . \]
Then their product is
\[ {\bf A}\, {\bf B} = \begin{bmatrix} \left( {\bf E}\,{\bf P} + {\bf F}\,{\bf R} \right)_{r\times l} & \left( {\bf E}\,{\bf Q} + {\bf F}\,{\bf S} \right)_{r\times (n-l)} \\ \left( {\bf G}\,{\bf P} + {\bf H}\,{\bf R} \right)_{(m-r)\times l} & \left( {\bf G}\,{\bf Q} + {\bf H}\,{\bf S} \right)_{(m-r)\times (n-l)} \end{bmatrix} . \]
Example: Let
\[ {\bf A} = \begin{bmatrix} \phantom{-}2 & 1 & 1 \\ -2& 3 &0 \\ -1&0& 1 \end{bmatrix} \qquad \mbox{and} \qquad {\bf B} = \begin{bmatrix} -1& 1 & -1 & 0 \\ \phantom{-}2 & 1 & \phantom{-}1 & 0 \\ \phantom{-}3 & 2 & \phantom{-}2 & 1 \end{bmatrix} . \]
We find their product by partitioning these matrices conformably:
\[ {\bf A} = \begin{bmatrix} {\bf E} & {\bf F} \\ {\bf G} & {\bf H} \end{bmatrix}\qquad \mbox{and} \qquad {\bf B} = \begin{bmatrix} {\bf P} & {\bf Q} \\ {\bf R} & {\bf S} \end{bmatrix} . \]
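The partition lines are left unspecified above, so as an assumption for illustration we split A after its second row and second column, and B after its second row and third column; then every block product is defined. A Mathematica sketch (EE stands in for the protected symbol E):
A = {{2, 1, 1}, {-2, 3, 0}, {-1, 0, 1}};
B = {{-1, 1, -1, 0}, {2, 1, 1, 0}, {3, 2, 2, 1}};
EE = {{2, 1}, {-2, 3}}; F = {{1}, {0}}; G = {{-1, 0}}; H = {{1}};
P = {{-1, 1, -1}, {2, 1, 1}}; Q = {{0}, {0}}; R = {{3, 2, 2}}; S = {{1}};
AB = ArrayFlatten[{{EE.P + F.R, EE.Q + F.S}, {G.P + H.R, G.Q + H.S}}]
(* {{3, 5, 1, 1}, {8, 1, 5, 0}, {4, 1, 3, 1}} *)
AB == A.B
(* True *)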

Direct Sum

For arbitrary matrices \( {\bf A} = [a_{i,j} ] \) (of size \( n \times m \) ) and \( {\bf B} = [b_{i,j}] \) (of size \( p \times q \) ), the direct sum of A and B, denoted by \( {\bf A} \oplus {\bf B} , \) is defined as
\[ {\bf A} \oplus {\bf B} = \begin{bmatrix} a_{11} & \cdots & a_{1m} & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} & 0 & \cdots & 0 \\ 0& \cdots & 0 & b_{11} & \cdots & b_{1q} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0& \cdots & 0 & b_{p1} & \cdots & b_{pq} \end{bmatrix} . \]
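In Mathematica the direct sum can be realized with ArrayFlatten and zero blocks; directSum below is a hypothetical helper name:
directSum[a_, b_] := ArrayFlatten[{{a, ConstantArray[0, {Length[a], Dimensions[b][[2]]}]}, {ConstantArray[0, {Length[b], Dimensions[a][[2]]}], b}}]
directSum[{{1, 2}, {3, 4}}, {{5, 6, 7}}]
(* {{1, 2, 0, 0, 0}, {3, 4, 0, 0, 0}, {0, 0, 5, 6, 7}} *)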

Direct Product

Let A be an n-by-m matrix and B be a p-by-q matrix; then their Kronecker product, or direct product, denoted by \( {\bf A}\otimes {\bf B} ,\) is the \( np \times mq \) block matrix:
\[ {\bf A} \otimes {\bf B} = \begin{bmatrix} a_{11} {\bf B} & \cdots & a_{1m} {\bf B} \\ \vdots& \ddots & \vdots \\ a_{n1} {\bf B} & \cdots & a_{nm} {\bf B} \end{bmatrix} . \]
More explicitly:
\[ {\bf A} \otimes {\bf B} = \begin{bmatrix} a_{11} b_{11} & a_{11} b_{12} & \cdots & a_{11} b_{1q} & \cdots & \cdots & a_{1m} b_{11} & a_{1m} b_{12} & \cdots & a_{1m} b_{1q} \\ a_{11} b_{21} & a_{11} b_{22} & \cdots & a_{11} b_{2q} & \cdots & \cdots & a_{1m} b_{21} & a_{1m} b_{22} & \cdots & a_{1m} b_{2q} \\ \vdots & \vdots & \ddots & \vdots & && \vdots & \vdots & \ddots & \vdots \\ a_{11} b_{p1} & a_{11} b_{p2} & \cdots & a_{11} b_{pq} & \cdots & \cdots & a_{1m} b_{p1} & a_{1m} b_{p2} & \cdots & a_{1m} b_{pq} \\ \vdots & \vdots & & \vdots & \ddots && \vdots & \vdots & & \vdots \\ \vdots & \vdots & & \vdots & &\ddots & \vdots & \vdots & & \vdots \\ a_{n1} b_{11} & a_{n1} b_{12} & \cdots & a_{n1} b_{1q} & \cdots & \cdots & a_{nm} b_{11} & a_{nm} b_{12} & \cdots & a_{nm} b_{1q} \\ a_{n1} b_{21} & a_{n1} b_{22} & \cdots & a_{n1} b_{2q} & \cdots & \cdots & a_{nm} b_{21} & a_{nm} b_{22} & \cdots & a_{nm} b_{2q} \\ \vdots & \vdots & \ddots & \vdots & && \vdots & \vdots & \ddots & \vdots \\ a_{n1} b_{p1} & a_{n1} b_{p2} & \cdots & a_{n1} b_{pq} & \cdots & \cdots & a_{nm} b_{p1} & a_{nm} b_{p2} & \cdots & a_{nm} b_{pq} \end{bmatrix} . \]
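Mathematica provides this operation as the built-in function KroneckerProduct; for example:
A = {{1, 2}, {3, 4}}; B = {{0, 1}, {1, 0}};
KroneckerProduct[A, B]
(* {{0, 1, 0, 2}, {1, 0, 2, 0}, {0, 3, 0, 4}, {3, 0, 4, 0}} *)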
The Kronecker product is named after the German mathematician Leopold Kronecker (1823--1891), even though there is little evidence that he was the first to define and use it. Indeed, the Kronecker product should perhaps be called the Zehfuss product, because Johann Georg Zehfuss (1832--1901) published a paper in 1858 (Zeitschrift für Mathematik und Physik, 3, 1858, 298--301) in which he described the matrix operation we now know as the Kronecker product.