Preface


In this section, you will learn how to execute the basic arithmetic operations (addition, subtraction, and multiplication) with matrices as well as some other matrix manipulation tools.    ▣


Basic Matrix Operations


Recall that we denote matrices by upper case letters in bold font, while vectors are denoted by lower case letters. It is customary to denote the numbers that make up a matrix, called its entries, by the lower case version of the letter that names the matrix, with two subscripts. For example,

\[ {\bf A} = \left[ a_{i,j} \right] = \left[ a_{i,j} \right]_{i,j=1}^{m,n} \qquad \mbox{and} \qquad {\bf B} = \left( b_{i,j} \right) = \left( b_{i,j} \right)_{i,j=1}^{n} , \]
with A being a rectangular m×n matrix and B being a square n×n matrix. The notation (A)ij is also common, depending on the setting.

The subscripts denote the position of the entry. The entry \( a_{i,j} , \) or just \( a_{ij} , \) occupies the i-th row and j-th column of the matrix A. Two matrices A and B are equal if they have the same dimensions and \( a_{ij} = b_{ij} \) for every i and j.

A square matrix is a matrix that has the same number of rows as columns; that is, an n × n matrix for some positive integer n. When n = 1, the matrix has just one entry. If A is a square matrix, the entries \( a_{11}, a_{22}, \ldots , a_{nn} \) make up the main diagonal of A. The trace of a square matrix is the sum of the entries on its main diagonal.

A square matrix A is a diagonal matrix if the only non-zero entries of A are on the main diagonal. A square matrix is upper (lower) triangular if the only non-zero entries are on or above (below) the main diagonal. A matrix that consists of either a single row or a single column is called a vector (more precisely, either a row-vector or a column-vector).

 

Arithmetic operations


We define four arithmetic operations on matrices: matrix addition, matrix subtraction, scalar multiplication, and matrix multiplication. Matrix division is considered in the next section.

Matrix addition/subtraction. In order to add or subtract matrices, the matrices must have the same dimensions, and then one simply adds/subtracts the corresponding entries. For example,

\[ \begin{bmatrix} \phantom{-}2&7 \\ -1&3 \\ -4&6 \end{bmatrix} + \begin{bmatrix} 3&-4 \\ 2& -8 \\ 3&-3 \end{bmatrix} = \begin{bmatrix} \phantom{-}2+3 &7-4 \\ -1+2 & 3 -8 \\ -4 + 3 & 6-3 \end{bmatrix} = \begin{bmatrix} \phantom{-}5&\phantom{-}3 \\ \phantom{-}1&-5 \\ -1&\phantom{-}3 \end{bmatrix} \]
and
\[ \begin{pmatrix} 2&-1&-4 \\ 7&\phantom{-}3&\phantom{-}6 \end{pmatrix} - \begin{pmatrix} \phantom{-}3&\phantom{-}2&\phantom{-}3 \\ -4&-8&-3 \end{pmatrix} = \begin{pmatrix} -1&-3&-7 \\ 11&11&\phantom{-}9 \end{pmatrix} . \]
To be more formal---and to begin to get used to the abstract notation---we could express this idea as
\[ {\bf A} \pm {\bf B} = \left[ a_{i,j} \right] \pm \left[ b_{i,j} \right] = \left[ a_{i,j} \pm b_{i,j} \right] . \]

Scalar multiplication. Scalar multiplication means multiplying a matrix by a number; it is accomplished by multiplying every entry in the matrix by that scalar. So

\[ k\, {\bf A} = k \left[ a_{i,j} \right] = \left[ k\,a_{i,j} \right] , \qquad k\in \mathbb{R}. \]
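These entrywise rules translate directly into code. Below is a plain-Python sketch (not part of the original Mathematica session; the helper names are ours) of matrix addition, subtraction, and scalar multiplication on nested lists:

```python
# Entrywise operations on matrices stored as nested lists.
# The names mat_add, mat_sub, scalar_mul are illustrative only.

def mat_add(A, B):
    """Entrywise sum; A and B must have the same dimensions."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    """Entrywise difference."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

# The 3x2 addition example from the text:
A = [[2, 7], [-1, 3], [-4, 6]]
B = [[3, -4], [2, -8], [3, -3]]
print(mat_add(A, B))   # [[5, 3], [1, -5], [-1, 3]]
```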

Matrix multiplication

We don't know exactly who invented matrix multiplication, nor when. However, we do know that an 1812 work by Jacques Philippe Marie Binet (1786--1856) contains the definition of the product of matrices. Let A = [\( a_{i,j} \)] be an m × n matrix and B = [\( b_{i,j} \)] be an n × p matrix. Their product \( {\bf C} = {\bf A}\,{\bf B} \) is an m × p matrix, in which the n entries across the rows of A are multiplied with the n entries down the columns of B:

\[ {\bf C} = \left[ c_{ij} \right] , \quad\mbox{where} \quad c_{ij}= \sum_{k=1}^n a_{ik}b_{kj} . \]
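The row-by-column rule can be coded directly from the definition. Here is a minimal Python sketch (our own illustration, not from the tutorial):

```python
# Matrix product C = A B via the definition c_ij = sum_k a_ik * b_kj.
def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3], [-1, -2, -3]]    # 2x3
B = [[4, 5], [6, 7], [-8, -9]]   # 3x2
print(mat_mul(A, B))             # a 2x2 matrix: [[-8, -8], [8, 8]]
```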

Mathematica multiplies and divides matrices

Mathematica uses two operations for multiplication of matrices: asterisk (*) and dot (.). The asterisk command can be applied only when the two matrices have the same dimensions; in this case the output is the matrix of entrywise products. For example, we multiply two 2×3 matrices:

A = {{1, 2, 3}, {-1, -2, -3}}
% // MatrixForm
\( \begin{pmatrix} 1&2&3 \\ -1&-2&-3 \end{pmatrix} \)
B = {{4, 5, 6}, {-4, -5, -6}}
(A*B) // TraditionalForm
\( \begin{pmatrix} 4&10&18 \\ 4&10&18 \end{pmatrix} \)
The dot product can be performed only when the number of columns of the first factor equals the number of rows of the second factor. For an m × n matrix A and an n × p matrix B, the product is denoted by A.B and is defined by
\begin{equation} \label{EqBasic.1} {\bf C} = \left[ c_{i,j} \right] = {\bf A}. {\bf B} , \qquad c_{i,j} = \sum_{k=1}^n a_{i,k} b_{k,j} , \quad i = 1, 2, \ldots , m; \quad j=1,2,\ldots , p. \end{equation}
For example, we take 2×3 matrix A and multiply it by 3×2 matrix B:
A = {{1, 2, 3}, {-1, -2, -3}}
B = {{4,5},{6,7},{-8,-9}}
Then their product will be a 2×2 matrix:
(A.B) // MatrixForm
\( \begin{pmatrix} -8&-8 \\ 8&8 \end{pmatrix} \)

With the regular arithmetic commands "/" for division and "*" for multiplication, Mathematica performs these operations entrywise. So for two given matrices of the same size (the divisor should not contain a zero entry)

\[ {\bf A} = \left[ a_{i,j} \right] \qquad \mbox{and} \qquad {\bf B} = \left[ b_{i,j} \right] , \]
we have
\[ {\bf A} \ast {\bf B} = \left[ a_{i,j}* b_{i,j} \right] \qquad \mbox{and} \qquad {\bf A} / {\bf B} = \left[ a_{i,j} / b_{i,j} \right] = \left[ \frac{a_{i,j}}{b_{i,j}} \right] . \]
For instance,
\[ {\bf A} = \begin{bmatrix} 1&4&3 \\ 2&-1&2 \\ 1&2&2 \end{bmatrix} \qquad \Longrightarrow \qquad {\bf A}\ast {\bf A} = \begin{bmatrix} 1&16&9 \\ 4&1&4 \\ 1 & 4 & 4 \end{bmatrix} \qquad \mbox{and} \qquad {\bf A} / {\bf A} = \begin{bmatrix} 1&1&1 \\ 1&1&1 \\ 1&1&1 \end{bmatrix} . \]
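The same entrywise behavior can be sketched in plain Python (an illustration of ours, using exact fractions for the quotient so that A/A comes out as a matrix of ones):

```python
from fractions import Fraction

# Entrywise (Hadamard) product and entrywise quotient, mirroring
# Mathematica's A*B and A/B for same-size matrices.
def hadamard(A, B):
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def entrywise_div(A, B):
    # the divisor must not contain a zero entry
    return [[Fraction(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 4, 3], [2, -1, 2], [1, 2, 2]]
print(hadamard(A, A))       # [[1, 16, 9], [4, 1, 4], [1, 4, 4]]
print(entrywise_div(A, A))  # every entry equals 1
```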

A period "." can also be used for matrix multiplication between a matrix and a vector, or between two matrices. It is important to note that in matrix multiplication an (m × n) matrix can only be multiplied by an (n × s) matrix, where m, n, and s are positive integers, producing an (m × s) matrix.

Mathematica determines automatically how a vector should be treated when it is multiplied by a matrix. For example,

A={{16,-9},{-12,13}};
v={1,2};
A.v
Out[3]= {-2,14}
v.A
Out[4]= {-8, 17}
This example shows that when a matrix is multiplied by a vector from the right (that is, when the matrix acts on the vector as a transformation), Mathematica treats the vector as a column-vector. When the vector is multiplied by a matrix from the right, Mathematica treats the same vector as a row-vector. However, we can also specify explicitly either a row-vector or a column-vector and multiply it by a matrix from the left or the right:
v={2,1,-1} (* row-vector *)
u = {{2}, {1}, {-1}} (* column-vector *)

Products:

A.v
Out[17]= {1, 1} (* row *)
A.u
Out[18]= {{1}, {1}} (* column *)
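The two conventions can be imitated in plain Python (an illustrative sketch; the function names are ours): A.v treats v as a column vector, while v.A treats the same list as a row vector.

```python
def mat_vec(A, v):
    """A.v -- v treated as a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def vec_mat(v, A):
    """v.A -- v treated as a row vector."""
    return [sum(v[i] * A[i][j] for i in range(len(v)))
            for j in range(len(A[0]))]

A = [[16, -9], [-12, 13]]
v = [1, 2]
print(mat_vec(A, v))   # [-2, 14]
print(vec_mat(v, A))   # [-8, 17]
```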

Now we generate a special matrix, called the zero matrix, whose entries are all zeroes.

Z23 = Table[0, {i, 2}, {j, 3}]          (* generates a list of zero values *)  
Out[1]= {{0, 0, 0}, {0, 0, 0}}
This matrix can be added to (or subtracted from) any matrix of the same dimensions (in our case, a 2 × 3 matrix). We can verify this by showing that A + Z23 is equal to A:
A + Z23 == A
Out[2]= True
A == M (* when A ≠ M *)
Out[6]= False
Example 1: Write the vector \( {\bf a}=2{\bf i}+3{\bf j}-4{\bf k} \) as the sum of two vectors, one parallel, and one perpendicular to the vector \( {\bf b}=2{\bf i}-{\bf j}-3{\bf k} .\)

We find the parallel vector:

a = {2, 3, -4}
b = {2, -1, -3}
madB2 = (b[[1]])^2 + (b[[2]])^2 + (b[[3]])^2
Aparallel = a.b/madB2*b
Out[4]= {13/7, -(13/14), -(39/14)}
To find the perpendicular vector, we type:
Aperpendicular = a - Aparallel
Out[5]= {1/7, 55/14, -(17/14)}
Another, more straightforward, way:
Aparallel = (a.b)*b/(b.b)
To verify the answer, we type:
Aparallel + Aperpendicular - a
Out[9]= {0, 0, 0}
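The same decomposition can be sketched in plain Python (our own helper names; exact fractions reproduce the values computed above):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(a, b):
    """Component of a parallel to b: (a.b / b.b) b."""
    c = Fraction(dot(a, b), dot(b, b))
    return [c * x for x in b]

a = [2, 3, -4]
b = [2, -1, -3]
par = project(a, b)                      # [13/7, -13/14, -39/14]
perp = [x - y for x, y in zip(a, par)]   # [1/7, 55/14, -17/14]
print(par, perp)
print(dot(perp, b))                      # 0: perp is orthogonal to b
```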
Example 2: Let a = [1,3,-4] and b = [-1,1,-2] be two vectors. Find \( {\bf a}\cdot ({\bf a}\times {\bf b}), \) where \( {\bf a}\cdot {\bf b} \) is the dot product and \( {\bf a}\times {\bf b} \) is the cross product of two vectors. The input variables are the vectors a and b; we calculate the cross product of a and b, and then the dot product of that result with a. Notice that any input values for a and b result in zero, because the cross product \( {\bf a}\times {\bf b} \) is perpendicular to a.
a = {1, 3, -4}
{1, 3, -4}
b = {-1, 1, -2}
{-1, 1, -2}
c = a.Cross[a, b]
0
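A plain-Python check of the same identity (the helper functions below are ours, not from the text):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a = [1, 3, -4]
b = [-1, 1, -2]
print(cross(a, b))          # [-2, 6, 4]
print(dot(a, cross(a, b)))  # 0, since a x b is perpendicular to a
```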

 

Trace of a Matrix


The trace of a matrix is the sum of its diagonal elements.

As a simple introductory example, consider 2×2 matrix of zero trace:

\[ {\bf A} = \begin{bmatrix} \phantom{-}0 & 1 \\ -1& 0 \end{bmatrix} . \]
Now we check with Mathematica:
(A = {{0, 1}, {-1, 0}}) // MatrixForm
Out[2]= \( \begin{pmatrix} 0 & 1 \\ -1&0 \end{pmatrix} \)
Tr[A]
Out[2]= 0
As you see, Mathematica has a dedicated command to evaluate the trace: Tr[·], which we will abbreviate as tr(·).
Theorem 1: If A is m×n matrix and B is n×m matrix, then
\[ \mbox{tr}({\bf A}\,{\bf B}) = \mbox{tr}({\bf B}\,{\bf A}) . \]
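Theorem 1 is easy to test numerically. Here is a hedged Python sketch with a pair of rectangular matrices of our own choosing:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    """Sum of the diagonal entries of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2, 3], [4, 5, 6]]        # 2x3
B = [[7, 8], [9, 10], [11, 12]]   # 3x2
print(trace(mat_mul(A, B)))   # 212  (AB is 2x2)
print(trace(mat_mul(B, A)))   # 212  (BA is 3x3, yet the trace agrees)
```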

 

Matrix Norms


The set ℳm,n of all m × n matrices under the field of either real or complex numbers is a vector space of dimension m  · n. In order to determine how close two matrices are, and in order to define the convergence of sequences of matrices, a special concept of matrix norm is employed, with notation \( \| {\bf A} \| . \) A norm is a function from a real or complex vector space to the nonnegative real numbers that satisfies the following conditions:

  • Positivity:     ‖A‖ ≥ 0, and ‖A‖ = 0 if and only if A = 0.
  • Homogeneity:     ‖kA‖ = |k| ‖A‖ for arbitrary scalar k.
  • Triangle inequality:     ‖A + B‖ ≤ ‖A‖ + ‖B‖.
Since the set of all matrices admits the operation of multiplication in addition to the basic operation of addition (which is included in the definition of vector spaces), it is natural to require that matrix norm satisfies the special property:
  • Submultiplicativity:     ‖A · B‖ ≤ ‖A‖ · ‖B‖.
Once a norm is defined, the most natural way to measure the distance between two matrices A and B is d(A, B) = ‖A − B‖ = ‖B − A‖. However, not all distance functions have a corresponding norm. For example, a trivial distance that has no equivalent norm is d(A, A) = 0 and d(A, B) = 1 if A ≠ B. The norm of a matrix may be thought of as its magnitude or length because it is a nonnegative number. The most common norms are summarized below for an \( m \times n \) matrix A, to which corresponds the self-adjoint (m+n)×(m+n) matrix B:

\[ {\bf A} = \left[ \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{array} \right] \qquad \Longrightarrow \qquad {\bf B} = \begin{bmatrix} {\bf 0} & {\bf A}^{\ast} \\ {\bf A} & {\bf 0} \end{bmatrix} . \]
Here A* denotes the adjoint matrix: \( {\bf A}^{\ast} = \overline{{\bf A}^{\mathrm T}} = \overline{\bf A}^{\mathrm T} . \)
For a rectangular m-by-n matrix A and given norms \( \| \ \| \) in \( \mathbb{R}^n \mbox{ and } \mathbb{R}^m , \) the norm of A is defined as follows:
\begin{equation} \label{EqBasic.2} \| {\bf A} \| = \sup_{{\bf x} \ne {\bf 0}} \ \dfrac{\| {\bf A}\,{\bf x} \|_m}{\| {\bf x} \|_n} = \sup_{\| {\bf x} \| = 1} \ \| {\bf A}\,{\bf x} \| . \end{equation}
This matrix norm is called the operator norm or induced norm.
The term "induced" refers to the fact that the definition of norms for the vectors A x and x is what enables the definition above of a matrix norm. This definition of matrix norm is not computationally friendly, so we use other options. The most important norms are as follows.

The operator norm corresponding to the p-norm for vectors, p ≥ 1, is:

\begin{equation} \label{EqBasic.3} \| {\bf A} \|_{p,q} = \sup_{{\bf x} \ne 0} \, \frac{\| {\bf A}\,{\bf x} \|_q}{\| {\bf x} \|_p} = \sup_{\| {\bf x} \|_p =1} \, \| {\bf A}\,{\bf x} \|_q , \end{equation}
where \( \| {\bf x} \|_p = \left( |x_1|^p + |x_2|^p + \cdots + |x_n|^p \right)^{1/p} .\)

The 1-norm (commonly known as the maximum absolute column sum norm) of a matrix A may be computed as

\begin{equation} \label{EqBasic.4} \| {\bf A} \|_1 = \max_{1 \le j \le n} \,\sum_{i=1}^m | a_{i,j} | . \end{equation}

The infinity norm (∞-norm) of matrix A may be computed as

\begin{equation} \label{EqBasic.5} \| {\bf A} \|_{\infty} = \| {\bf A}^{\ast} \|_{1} = \max_{1 \le i \le m} \,\sum_{j=1}^n | a_{i,j} | , \end{equation}
which is simply the maximum absolute row sum of the matrix.
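Both formulas reduce to a maximum over sums of absolute values; a short Python sketch (illustrative function names of ours):

```python
def one_norm(A):
    """Maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def inf_norm(A):
    """Maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

A = [[1, -7, 4], [-2, -3, 1]]
print(one_norm(A))   # 10  (column sums: 3, 10, 5)
print(inf_norm(A))   # 12  (row sums: 12, 6)
```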

In the special case of p = 2 we get the Euclidean norm (which is equal to the largest singular value of a matrix)

\begin{equation} \label{EqBasic.6} \| {\bf A} \|_2 = \sup_{\bf x} \left\{ \| {\bf A}\, {\bf x} \|_2 \, : \quad \mbox{with} \quad \| {\bf x} \|_2 =1 \right\} = \sigma_{\max} \left( {\bf A} \right) = \sqrt{\rho \left( {\bf A}^{\ast} {\bf A} \right)} , \end{equation}
where σmax(A) represents the largest singular value of matrix A.

The Frobenius norm (non-induced norm):

\begin{equation} \label{EqBasic.7} \| {\bf A} \|_F = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{i,j} |^2 \right)^{1/2} = \left( \mbox{tr}\, {\bf A} \,{\bf A}^{\ast} \right)^{1/2} = \left( \mbox{tr}\, {\bf A}^{\ast} {\bf A} \right)^{1/2} . \end{equation}
The Euclidean norm and the Frobenius norm are related via the inequality:
\[ \| {\bf A} \|_2 = \sigma_{\max}\left( {\bf A} \right) \le \| {\bf A} \|_F = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{i,j} |^2 \right)^{1/2} = \left( \mbox{tr}\, {\bf A} \,{\bf A}^{\ast} \right)^{1/2} . \]
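The Frobenius norm is the easiest of these to compute by hand; here is a minimal Python sketch (our own), together with a check of the trace identity above:

```python
def frobenius(A):
    """Square root of the sum of squared absolute values of all entries."""
    return sum(abs(x) ** 2 for row in A for x in row) ** 0.5

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(frobenius(A))   # sqrt(285), about 16.8819

# trace identity: tr(A A^T) equals the sum of all squared entries
tr = sum(sum(A[i][k] * A[i][k] for k in range(3)) for i in range(3))
print(tr)             # 285
```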

There is also another function that provides the infimum of all norms of a square matrix: the spectral radius, which satisfies \( \rho ({\bf A}) \le \|{\bf A}\| \) for every matrix norm.

The spectral radius of a square matrix A is
\begin{equation} \label{EqBasic.8} \rho ({\bf A}) = \lim_{k\to \infty} \| {\bf A}^k \|^{1/k} = \max \left\{ |\lambda | : \ \lambda \mbox{ is eigenvalue of }\ {\bf A} \right\} . \end{equation}
Theorem 2: For any matrix norm ‖·‖ on the space of square matrices and for any square matrix A, we have
\[ \rho\left( {\bf A} \right) \le \| {\bf A} \| . \]
For any positive integer k, we have
\begin{equation} \label{EqBasic.9} \rho ({\bf A}) \le \| {\bf A}^k \|^{1/k} . \end{equation}

Some properties of the matrix norms are presented in the following

Theorem 3: Let A and B be \( m \times n \) matrices and let \( k \) be a scalar.

  • \( \| {\bf A} \| \ge 0 \) for any matrix A.
  • \( \| {\bf A} \| =0 \) if and only if the matrix A is zero: \( {\bf A} = {\bf 0}. \)
  • \( \| k\,{\bf A} \| = |k| \, \| {\bf A} \| \) for any scalar \( k. \)
  • \( \| {\bf A} + {\bf B}\| \le \| {\bf A} \| + \| {\bf B} \| .\)
  • \( \left\vert \| {\bf A} \| - \| {\bf B} \| \right\vert \le \| {\bf A} - {\bf B} \| .\)
  • \( \| {\bf A} \, {\bf B}\| \le \| {\bf A} \| \, \| {\bf B} \| \) whenever the product is defined.

All these norms are equivalent and we present some inequalities:

\[ \| {\bf A} \|_2^2 \le \| {\bf A}^{\ast} \|_{\infty} \cdot \| {\bf A} \|_{\infty} = \| {\bf A} \|_{1} \cdot \| {\bf A} \|_{\infty} , \]
where A* is the adjoint matrix to A (transposed and complex conjugate).

Theorem 4: Let ‖ ‖ be any matrix norm and let B be a matrix such that  ‖B‖ < 1. Then matrix I + B is invertible and
\[ \| \left( {\bf I} + {\bf B} \right)^{-1} \| \le \frac{1}{1 - \| {\bf B} \|} . \]
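Theorem 4 can be checked numerically for a concrete matrix. The matrix B below is a hypothetical example of ours with ‖B‖∞ &lt; 1, and the inverse is computed with the 2×2 adjugate formula:

```python
def inf_norm(M):
    """Maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in M)

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

B = [[0.2, 0.1], [0.0, 0.3]]            # inf-norm 0.3 < 1
I_plus_B = [[1.2, 0.1], [0.0, 1.3]]     # I + B
lhs = inf_norm(inv2(I_plus_B))          # about 0.897
rhs = 1 / (1 - inf_norm(B))             # about 1.429
print(lhs <= rhs)                       # the bound of Theorem 4 holds
```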
Theorem 5: Let ‖ ‖ be any matrix norm and let B be a matrix such that I + B is singular, where I is the identity matrix. Then ‖B‖ ≥ 1.

Mathematica has a special command for evaluating norms:
Norm[A] = Norm[A,2] for evaluating the Euclidean norm of the matrix A;
Norm[A,1] for evaluating the 1-norm;
Norm[A, Infinity] for evaluating the ∞-norm;
Norm[A, "Frobenius"] for evaluating the Frobenius norm.

A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
Norm[A]
Sqrt[3/2 (95 + Sqrt[8881])]
N[%]
16.8481

 

Example 3: Evaluate the norms of the matrix \( {\bf A} = \left[ \begin{array}{ccc} \phantom{-}1 & -7 & 4 \\ -2 & -3 & 1\end{array} \right] . \)

The absolute column sums of A are \( 1 + | -2 | =3 \) , \( |-7| + | -3 | =10 , \) and \( 4+1 =5 . \) The larger of these is 10 and therefore \( \| {\bf A} \|_1 = 10 . \)

Norm[A, 1]
10

The absolute row sums of A are \( 1 + | -7 | + 4 =12 \) and \( | -2 | + |-3| + 1 = 6 ; \) therefore, \( \| {\bf A} \|_{\infty} = 12 . \)

Norm[Transpose[A], 1]
12

The Euclidean norm of A is the largest singular value. So we calculate

\[ {\bf S} = {\bf A}^{\ast} {\bf A} = \begin{bmatrix} 5&-1&2 \\ -1&58&-31 \\ 2&-31&17 \end{bmatrix} , \qquad \mbox{tr} \left( {\bf S} \right) = 80. \]
Its eigenvalues are
Eigenvalues[Transpose[A].A]
{40 + Sqrt[1205], 40 - Sqrt[1205], 0}
Taking the square root of the largest one, we obtain the Euclidean norm of matrix A:
N[Sqrt[40 + Sqrt[1205]]]
8.64367
Mathematica also knows how to find the Euclidean norm:
Norm[A, 2]
Sqrt[40 + Sqrt[1205]]
We compare it with the Frobenius norm:
Norm[A, "Frobenius"]
4 Sqrt[5]
N[%]
8.94427
Norm[A]
Sqrt[40 + Sqrt[1205]]
N[%]
8.64367
Alternatively, we can evaluate the product
\[ {\bf M} = {\bf A}\,{\bf A}^{\ast} = \left[ \begin{array}{ccc} \phantom{-}1 & -7 & 4\\ -2 & -3 & 1 \end{array} \right] \, \left[ \begin{array}{cc} 1 & -2 \\ -7 & -3 \\ 4&-1 \end{array} \right] = \left[ \begin{array}{cc} 66 & 23 \\ 23& 14 \end{array} \right] , \qquad \mbox{tr} \left( {\bf M} \right) = 80. \]

This matrix \( {\bf A}\,{\bf A}^{\ast} \) has two eigenvalues \( 40 \pm \sqrt{1205} . \) Hence, the Euclidean norm of the matrix A is \( \sqrt{40 + \sqrt{1205}} \approx 8.64367 . \)

Therefore,
\[ \| {\bf A} \|_2 = 8.64367 < \| {\bf A} \|_F = 8.94427 < \| {\bf A} \|_1 = 10 < \| {\bf A} \|_{\infty} = 12 . \]
   ■
Example 4: Let us consider the matrix
\[ {\bf A} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} . \]
Its conjugate transpose (adjoint) matrix is
\[ {\bf A}^{\ast} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix}^{\mathrm T} = \begin{bmatrix} 1&4&7 \\ 2&5&8 \\ 3&6&9 \end{bmatrix} . \]
So
\[ {\bf S} = {\bf A}^{\ast} {\bf A} = \begin{bmatrix} 66&78&90 \\ 78&93&108 \\ 90&108&126 \end{bmatrix} \]
We check with Mathematica:
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
S = Transpose[A].A
Their eigenvalues are
Eigenvalues[A]
{3/2 (5 + Sqrt[33]), 3/2 (5 - Sqrt[33]), 0}
Eigenvalues[S]
{3/2 (95 + Sqrt[8881]), 3/2 (95 - Sqrt[8881]), 0}
N[%]
{283.859, 1.14141, 0.}
Therefore, the largest singular number of A is
\[ \sigma = \sqrt{\frac{3}{2} \left( 95 + \sqrt{8881} \right)} \approx 16.8481. \]
We also check the opposite product
Eigenvalues[A.Transpose[A]]
{3/2 (95 + Sqrt[8881]), 3/2 (95 - Sqrt[8881]), 0}
\[ {\bf M} = {\bf A}\, {\bf A}^{\ast} = \begin{bmatrix} 14&32&50 \\ 32&77&122 \\ 50&122&194 \end{bmatrix} \]
These matrices S and M have the same eigenvalues. Therefore, we found the Euclidean (operator) norm of A to be approximately 16.8481. Mathematica knows this norm:
Norm[A]
Sqrt[3/2 (95 + Sqrt[8881])]
The spectral radius of A is the largest eigenvalue:
\[ \rho ({\bf A}) = \frac{3}{2} \left( 5 + \sqrt{33} \right) \approx 16.1168 , \]
which is slightly less than its operator (Euclidean) norm.

The Frobenius norm of matrix \( {\bf A} = \begin{bmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{bmatrix} \) is

\[ \| {\bf A} \|_F = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{i,j} |^2 \right)^{1/2} = \left( 1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2 + 7^2 +8^2 +9^2 \right)^{1/2} = \sqrt{285} = \left( \mbox{tr}\, {\bf A} \,{\bf A}^{\ast} \right)^{1/2} . \]
A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
Tr[A.Transpose[A]]
285
Sum[k^2, {k, 1, 9}]
285
N[Sqrt[285]]
16.8819
Mathematica has a dedicated command:
Norm[A, "Frobenius"]
16.8819

To find the 1-norm of A, we add the absolute values of the entries in every column; the last column gives the largest sum, so

\[ \| {\bf A} \|_1 = 3+6+9=18. \]
If we add the absolute values of the entries in every row, then the last row gives the largest sum and we get
\[ \| {\bf A} \|_{\infty} = 7+8+9=24. \]
   ■

 
