Eigenvalues (from the German eigen, meaning "own" or "proper") are a special set of scalars associated with every square matrix; they are sometimes also known as characteristic roots, characteristic values, or proper values. Each eigenvalue is paired with a corresponding set of so-called eigenvectors. The determination of the eigenvalues and eigenvectors of a system is extremely important in physics and engineering, where it arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few.

If A is a square \( n \times n \) matrix with real entries and v is an \( n \times 1 \) column vector, then the product w = Av is defined and is another \( n \times 1 \) column vector. It does not matter whether v is real (v ∈ \( \mathbb{R}^n \)) or complex (v ∈ \( \mathbb{C}^n \)). Therefore, any square matrix with real entries (we deal only with real matrices) can be considered as a linear operator A : v ↦ w = Av, acting either on \( \mathbb{R}^n \) or on \( \mathbb{C}^n \). Of course, one can use any Euclidean space, not necessarily \( \mathbb{R}^n \) or \( \mathbb{C}^n \).

Although a transformation v ↦ Av may move vectors in a variety of directions, it often happens that we seek vectors on which the action of A is just multiplication by a constant; expressing the operator through such vectors is usually referred to as the spectral representation of the operator A. It is important in many applications to determine whether there exist nonzero column vectors v such that the product vector \( {\bf A}\,{\bf v} \) is a constant multiple (which we denote by λ) of v.

Example 1A: Let us consider the following matrix and two vectors:
\[ {\bf A} = \begin{bmatrix} 1 & \phantom{-}2 \\ 4&-1 \end{bmatrix} \qquad \mbox{and} \qquad {\bf v} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} ,\quad {\bf u} = \begin{bmatrix} -1 \\ \phantom{-}1 \end{bmatrix} \]
Then
\[ {\bf A} \, {\bf v} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 3\,{\bf v} \qquad \mbox{and} \qquad {\bf A} \, {\bf u} = \begin{bmatrix} \phantom{-}1 \\ -5 \end{bmatrix} \]
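These products are easy to confirm in Mathematica; the following quick check simply repeats the arithmetic above:
A = {{1, 2}, {4, -1}};    (* the matrix of this example *)
v = {1, 1}; u = {-1, 1};
{A.v, A.u}                (* A stretches v by 3 but sends u to a new direction *)
{{3, 3}, {1, -5}}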
     
line1 = Graphics[{Blue, Thickness[0.01], Line[{{-2, -2}, {2, 2}}]}];  (* line through v and 3v *)
line2 = Graphics[{Blue, Thickness[0.01], Line[{{1, -1}, {-1.3, 1.3}}]}];  (* line through u *)
line3 = Graphics[{Blue, Thickness[0.01], Line[{{0, 0}, {0.5, -2.5}}]}];  (* line through Au *)
p1 = Graphics[{PointSize[0.03], Point[{{0.7, 0.7}, {2, 2}}, VertexColors -> {Red, Magenta}]}];  (* v and 3v *)
p2 = Graphics[{PointSize[0.03], Point[{{-0.7, 0.7}, {0.5, -2.5}}, VertexColors -> {Red, Magenta}]}];  (* u and Au *)
f1 = Interpolation[{{0.7, 0.7}, {0.9, 1.4}, {2, 2}}];
c1 = Plot[f1[x], {x, 0.7, 2}, PlotStyle -> Dashed];  (* dashed arc from v to 3v *)
c2 = ListLinePlot[{{-0.7, 0.7}, {-1, -0.2}, {0.5, -2.5}}, InterpolationOrder -> 2, PlotStyle -> Dashed];  (* dashed arc from u to Au *)
t1 = Graphics[{Black, Text[Style["v=(1,1)", Bold, 18], {1.3, 0.7}]}];
t2 = Graphics[{Black, Text[Style["3*v", Bold, 18], {1.6, 2.0}]}];
t3 = Graphics[{Black, Text[Style["u=(-1,1)", Bold, 18], {-1.3, 0.7}]}];
t4 = Graphics[{Black, Text[Style["Au", Bold, 18], {0.1, -2.4}]}];
arx = Graphics[{Arrowheads[0.06], Thick, Arrow[{{-2, 0}, {2, 0}}]}];  (* x axis *)
ary = Graphics[{Arrowheads[0.06], Thick, Arrow[{{0, -2}, {0, 2}}]}];  (* y axis *)
Show[line1, line2, line3, p1, p2, c1, c2, t1, t2, t3, t4, arx, ary]
       Figure 1: Effects of multiplication by A.
    ■

Example 1B: As another example, we can consider a stochastic matrix that describes the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability (in every column, the sum of entries is 1). It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix.
\[ {\bf A} = \begin{bmatrix} 6/10 & 3/10 \\ 4/10&7/10 \end{bmatrix} = \frac{1}{10} \begin{bmatrix} 6 & 3 \\ 4 &7 \end{bmatrix} . \]
It is easy to verify that matrix A leaves the vector [3, 4] untouched. Indeed,
A = {{6/10, 3/10}, {4/10, 7/10}};
A.{{3}, {4}}
{{3}, {4}}
but shrinks the vector [−1, 1]:
A.{{-1}, {1}}
{{-(3/10)}, {3/10}}
So we observe that
\[ {\bf A} \begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \end{bmatrix} \qquad \mbox{and} \qquad {\bf A} \begin{bmatrix} -1 \\ \phantom{-}1 \end{bmatrix} = \frac{3}{10} \begin{bmatrix} -1 \\ \phantom{-}1 \end{bmatrix} . \]
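Looking ahead, the two constants observed here, 1 and 3/10, are precisely the eigenvalues of this Markov matrix. Since the entries are exact rational numbers, Mathematica confirms this directly:
A = {{6/10, 3/10}, {4/10, 7/10}};
Eigenvalues[A]
{1, 3/10}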

    ■

Example 1C: Any constant coefficient difference equation of order n
\[ x_{k+n} = p_{n-1} x_{n+k-1} + p_{n-2} x_{n+k-2} + \cdots + p_{0} x_k , \qquad k=0,1,2,\ldots , \]
where the coefficients \( p_i \) are constants, can be written as a first-order difference equation
\[ {\bf x}_{k+1} = {\bf A}\,{\bf x}_{k} , \qquad k=0,1,2,\ldots , \]
in the n-dimensional space \( \mathbb{R}^n \), with some n×n matrix A. As a well-known example, we consider the Fibonacci recurrence
\[ F_{k+2} = F_{k+1} + F_k , \qquad F_0 =0, \quad F_1 = 1, \qquad k=0,1,2,\ldots \]

Let L be the linear operator on ℝ² represented by the matrix

\[ L({\bf v}) = \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x+y \\ x \end{bmatrix} , \qquad {\bf A} = \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} , \]
with respect to the standard basis of ℝ². Here \( {\bf v} = [x, y]^{\mathrm T} \) is an arbitrary column vector from ℝ². In particular, for the vector \( {\bf u}_k = [F_k , F_{k-1}]^{\mathrm T} \), whose coordinates are two consecutive Fibonacci numbers, we have that
\[ L\left( {\bf u}_k \right) = {\bf A} \begin{bmatrix} F_k \\ F_{k-1} \end{bmatrix} = \begin{pmatrix} 1&1 \\ 1&0 \end{pmatrix} \begin{bmatrix} F_k \\ F_{k-1} \end{bmatrix} = \begin{bmatrix} F_k + F_{k-1} \\ F_k \end{bmatrix} = \begin{bmatrix} F_{k+1} \\ F_{k} \end{bmatrix} = {\bf u}_{k+1} . \]
Thus, we can produce a vector whose coordinates are two consecutive Fibonacci numbers by applying L repeatedly (k times) to the vector \( {\bf u}_1 \) with coordinates \( [F_1 , F_0 ]^{\mathrm T} = [1, 0]^{\mathrm T} \):
\[ {\bf u}_{k+1} = {\bf A}^k \begin{bmatrix} F_1 \\ F_{0} \end{bmatrix} = \begin{bmatrix} F_{k+1} \\ F_{k} \end{bmatrix} , \qquad k=1,2,\ldots . \]
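This formula is easy to exercise with MatrixPower; for instance, for k = 10 (the built-in Fibonacci function is used only as a cross-check):
A = {{1, 1}, {1, 0}};
MatrixPower[A, 10].{1, 0}     (* gives {F[11], F[10]} *)
{89, 55}
{Fibonacci[11], Fibonacci[10]}
{89, 55}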
    ■
If a homogeneous equation
\begin{equation} \label{Eq.Eigen.1} {\bf A} \, {\bf v} = \lambda\,{\bf v} \end{equation}
has a nontrivial solution v (meaning it is not identically zero), then the vector v is called an eigenvector, corresponding to λ, which is called the associated eigenvalue. The set of all eigenvalues is called the spectrum of matrix A.
Example 2: Let us reconsider the matrix from Example 1A:
\[ {\bf A} = \begin{bmatrix} 1 & \phantom{-}2 \\ 4&-1 \end{bmatrix} . \]
You can verify that λ = 3 and λ = −3 are eigenvalues with eigenvectors
\[ {\bf A} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad\mbox{and} \qquad {\bf A} \begin{bmatrix} -1 \\ \phantom{-}2 \end{bmatrix} = -3 \begin{bmatrix} -1 \\ \phantom{-}2 \end{bmatrix} = \begin{bmatrix} \phantom{-}3 \\ -6 \end{bmatrix} . \]
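The command Eigensystem returns both pieces of data at once; up to ordering and scaling of the eigenvectors, its output should match the vectors above:
A = {{1, 2}, {4, -1}};
Eigensystem[A]
{{3, -3}, {{1, 1}, {-1, 2}}}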
    ■

There are special classes of matrices for which the eigenvalues can be identified immediately: triangular matrices and diagonal matrices.

If A is an n×n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.

Example 3: We consider the following 5×5 lower triangular matrix
\[ {\bf A} = \begin{bmatrix} 1 & 0&0&\phantom{-}0&\phantom{-}0 \\ 3&2 &0&\phantom{-}0&\phantom{-}0 \\ 5&4 & 0&\phantom{-}0&\phantom{-}0 \\ 0&7&6&-2&\phantom{-}0 \\ 4&3&2&\phantom{-}1&-1 \end{bmatrix} . \]
For a matrix of this size, it is appropriate to use software, so we ask Mathematica for help:
A = {{1, 0, 0, 0, 0}, {3, 2, 0, 0, 0}, {5, 4, 0, 0, 0}, {0, 7, 6, -2, 0}, {4, 3, 2, 1, -1}};
We know that the entries on the main diagonal are the eigenvalues, but we check against the Mathematica output:
Eigenvalues[A]
{-2, 2, -1, 1, 0}
The corresponding eigenvectors are
Eigenvectors[A]
{{0, 0, 0, -1, 1}, {0, 12, 24, 57, 47}, {0, 0, 0, 0, 1}, {-1, 3, 7, 21, 20}, {0, 0, 1, 3, 5}}
The trace of matrix A is zero as well as its determinant. So its characteristic polynomial is
CharacteristicPolynomial[A, lambda]
-4 lambda + 5 lambda^3 - lambda^5
Note that Mathematica evaluates the characteristic polynomial as det(A − λI), which for odd dimensions (such as n = 5 here) has the opposite sign to our standard definition det(λI − A).

We can calculate the polynomial det(λI − A) directly by evaluating the determinant:

Expand[Det[x*IdentityMatrix[5] - A]]
4 x - 5 x^3 + x^5
    ■
A number λ can be an eigenvalue of a square matrix A only when the determinant of the matrix corresponding to the homogeneous system \( \lambda \,{\bf v} - {\bf A}\, {\bf v} = {\bf 0} \) vanishes, namely, \( \det \left( \lambda\, {\bf I} - {\bf A} \right) =0 ,\) where I is the identity matrix.
If A is a square matrix, the determinant \( \chi (\lambda ) = \det \left( \lambda\, {\bf I} - {\bf A} \right) \) is called the characteristic polynomial of A and is denoted by χ(λ).
Samuelson's formula allows the characteristic polynomial to be computed recursively without divisions.
A scalar (real or complex) is an eigenvalue of a square matrix A if and only if it is a root of the characteristic polynomial:
\begin{equation} \label{EqEigen.2} \det \left( \lambda\, {\bf I} - {\bf A} \right) = 0 . \end{equation}

Over the complex numbers, every square matrix has at least one eigenvalue and corresponding eigenvectors. The eigenvalues are precisely the zeros of the characteristic polynomial, that is, the roots of the equation \( \chi (\lambda ) = 0. \) The characteristic polynomial is always a polynomial of degree n, where n is the dimension of the square matrix A, and its coefficients can be expressed through the eigenvalues:
\begin{equation} \label{EqEigen.3} \chi (\lambda ) = \det \left( \lambda\, {\bf I} - {\bf A} \right) = \lambda^n - \left( \mbox{tr} {\bf A} \right) \lambda^{n-1} + \cdots + (-1)^n \,\det {\bf A} , \end{equation}
where \( \mbox{tr} {\bf A} = a_{11} + a_{22} + \cdots + a_{nn} = \lambda_1 + \lambda_2 + \cdots + \lambda_n \) is the trace of the matrix A, that is, the sum of its diagonal elements, which is equal to the sum of all eigenvalues (counted with their multiplicities); likewise, \( \det {\bf A} = \lambda_1 \lambda_2 \cdots \lambda_n \) is the product of all eigenvalues. These identities hold for arbitrary square matrices.
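Both coefficient identities are easy to test on the matrix of Example 2; this is only a quick sanity check:
A = {{1, 2}, {4, -1}};
Tr[A] == Total[Eigenvalues[A]]
True
Det[A] == Times @@ Eigenvalues[A]
True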

A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.
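Here is a one-line check of this criterion on the singular matrix of Example 5 below; since 0 belongs to its spectrum, its determinant must vanish:
A = {{1, 2}, {2, 4}};
{MemberQ[Eigenvalues[A], 0], Det[A]}
{True, 0}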

If λ1, λ2, … , λr are distinct eigenvalues of a square matrix A, and v1, v2, … , vr are corresponding eigenvectors, then { v1, v2, … , vr } is a linearly independent set.
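For instance, the matrix of Example 2 has the distinct eigenvalues ±3, so its two eigenvectors must be independent; the rank of the matrix they form confirms this (rank is unaffected by how Mathematica scales or orders the eigenvectors):
A = {{1, 2}, {4, -1}};
MatrixRank[Eigenvectors[A]]
2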

For any polynomial p(s), if λ is an eigenvalue of a matrix A and v is a corresponding eigenvector, then p(λ) is an eigenvalue of p(A) and v is a corresponding eigenvector.
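A small illustration with the matrix of Example 2 and the (arbitrarily chosen) polynomial p(s) = s² + 3: both eigenvalues ±3 are mapped to p(±3) = 12, and the old eigenvector [1, 1]ᵀ still works.
A = {{1, 2}, {4, -1}};
pA = MatrixPower[A, 2] + 3 IdentityMatrix[2];   (* p(A) for p(s) = s^2 + 3 *)
Eigenvalues[pA]
{12, 12}
pA.{1, 1} == 12 {1, 1}
True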

 

Mathematica has some special commands (Eigensystem, Eigenvalues, Eigenvectors, and CharacteristicPolynomial) to deal with eigenvalues and eigenvectors for square matrices. We show how to use them in a sequence of examples.

Example 4: We start with a rotation matrix (it rotates the plane through 90°):
\[ {\bf B} = \begin{bmatrix} \phantom{-}0&1 \\ -1&0 \end{bmatrix} . \]
The main command for the eigenvalue problem is Eigensystem[B], which gives a list {values, vectors} of the eigenvalues and eigenvectors of the square matrix B.

B = {{0, 1}, {-1, 0}}
Out[1]= {{0, 1}, {-1, 0}}
Its characteristic polynomial
\[ \chi (\lambda ) = \det \left( \lambda {\bf I} - {\bf B} \right) = \begin{vmatrix} \lambda & -1 \\ 1 & \lambda \end{vmatrix} = \lambda^2 + 1 \]
has two complex roots λ = ±j, where j stands for the imaginary unit. Mathematica confirms:
Eigenvalues[B]
Out[2]= {I,-I}
Mathematica has three different characters to represent the imaginary unit: the default I, and two others that are entered with \[ImaginaryI] or \[ImaginaryJ] and are displayed as 𝕚 and 𝕛, respectively.

To find an eigenvector corresponding to the eigenvalue λ = j, we need to solve the system of equations

\[ {\bf B}\,{\bf x} = {\bf j\,x} \qquad \mbox{or} \qquad \begin{cases} x_2 &= {\bf j}\, x_1 , \\ -x_1 &= {\bf j}\, x_2 , \end{cases} \]
where \( {\bf x} = [ x_1 , x_2 ]^{\mathrm T} \) is the eigenvector. Solving the system of equations, we get
\[ \begin{split} x_1 & = x_1 , \\ x_2 &= {\bf j}\,x_1 , \end{split} \qquad \Longrightarrow \qquad {\bf x} = \begin{bmatrix} x_1 \\ {\bf j}\,x_1 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ {\bf j} \end{bmatrix} . \]
So an eigenvector corresponding to the eigenvalue λ = j is a constant multiple of the vector \( [ 1, {\bf j} ]^{\mathrm T} \). Mathematica confirms; note that its first eigenvector, \( [-{\bf j}, 1 ]^{\mathrm T} \), is the multiple \( -{\bf j}\, [ 1, {\bf j} ]^{\mathrm T} \):
Eigenvectors[B]
Out[3]= {{-I,1},{I,1}}
Eigensystem[B]
Out[4]= {{I, -I}, {{-I, 1}, {I, 1}}}
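The complex arithmetic can be verified directly; recall that Mathematica writes I where we write j:
B.{1, I} == I*{1, I}
True
    ■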
Example 5: Let us take a look at a singular matrix
\[ {\bf A} = \begin{bmatrix} 1&2 \\ 2&4 \end{bmatrix} . \]
First, we find its eigenvalues by solving the equation
\[ \det\left( \lambda{\bf I} - {\bf A} \right) = \begin{vmatrix} \lambda -1&-2 \\ -2&\lambda -4 \end{vmatrix} = \left( \lambda -1 \right)\left( \lambda -4 \right) - 4 = 0 . \]
Since this quadratic equation, \( \det\left( \lambda{\bf I} - {\bf A} \right) = \lambda \left( \lambda -5 \right) = 0 , \) has two real roots λ = 0 and λ = 5, we know that these numbers are the eigenvalues of matrix A.
A = {{1, 2}, {2, 4}};
sys[lambda_] = lambda*IdentityMatrix[2] - A;   (* the matrix λI - A *)
p[lambda_] = Det[sys[lambda]];                 (* characteristic polynomial *)
Solve[p[lambda] == 0, lambda];                 (* its roots: λ = 0 and λ = 5 *)
{lambda1, lambda2} = Eigenvalues[A]
{5, 0}
Now we find eigenvectors corresponding to these eigenvalues. For λ = 0, we need to solve the vector equation
\[ {\bf A} {\bf x} = {\bf 0} \qquad \mbox{or} \qquad \begin{cases} x_1 + 2 x_2 &= 0, \\ 2x_1 + 4 x_2 &= 0. \end{cases} \]
This leads to the relation x1 = −2x2. So the eigenvector for λ = 0 becomes
\[ {\bf x} = \begin{bmatrix} -2x_2 \\ x_2 \end{bmatrix} = x_2 \begin{bmatrix} -2 \\ \phantom{-}1 \end{bmatrix} , \]
where x2 is an arbitrary nonzero real number.

The eigenvector for λ = 5 is obtained by solving the system of equations:

\[ {\bf A} {\bf x} = 5{\bf x} \qquad \mbox{or} \qquad \begin{cases} x_1 + 2 x_2 &= 5x_1 , \\ 2x_1 + 4 x_2 &= 5 x_2 . \end{cases} \]
From this system of algebraic equations we find x2 = 2x1. So the eigenvector for λ = 5 becomes
\[ {\bf x} = \begin{bmatrix} x_1 \\ 2x_1 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 2 \end{bmatrix} , \]
where x1 is an arbitrary nonzero real number.

Now we use Mathematica; in the following, [[1]] indicates the first part of the expression, which can also be written as Part[expression, 1]. In this case it makes little difference, because NullSpace returns a list containing a single basis vector.

v1 = NullSpace[sys[lambda1]][[1]]
{1, 2}
v2 = NullSpace[sys[lambda2]][[1]]
{-2, 1}
To check eigenvalues, we type
A.v2==lambda2*v2
True

This can be obtained manually as follows:

A = {{1, 2}, {2, 4}}
Out[1]= {{1, 2}, {2, 4}}
sys[lambda_] = lambda*IdentityMatrix[2]-A
Out[2]= {{-1 + lambda, -2}, {-2, -4 + lambda}}
p[lambda_] =Det[sys[lambda]]
Out[3]= -5 lambda + lambda^2
To find the roots of the characteristic equation (eigenvalues of the matrix A):

Solve[p[lambda]==0]
Out[4]= {{lambda -> 0}, {lambda -> 5}}
To capture the eigenvalues:
{lambda1,lambda2} = x/.Solve[p[x]==0]
Out[5]= {0, 5}
To show the basis of the null space of the matrix A:
v1 = NullSpace[sys[lambda1]][[1]]
Out[6]= {-2, 1}
    ■

Example 6: Consider a defective matrix:

\[ {\bf A} = \begin{bmatrix} 1&1&0 \\ 0&0&1 \\ 0&0&1 \end{bmatrix} . \]
A = {{1, 1, 0}, {0, 0, 1}, {0, 0, 1}}
Eigenvalues[A]
Out[2]= {1, 1, 0}

Therefore, we know that the matrix A has one double eigenvalue λ = 1, and one simple eigenvalue λ = 0 (which indicates that matrix A is singular). Next, we find the corresponding eigenvectors

Eigenvectors[A]
Out[3]= {{1, 0, 0}, {0, 0, 0}, {-1, 1, 0}}

So Mathematica provides only one eigenvector \( \xi = \left[ 1,0,0 \right] \) corresponding to the eigenvalue λ = 1 (therefore, A is defective; the zero vector in the output marks the missing eigenvector) and one eigenvector v = [−1, 1, 0] corresponding to the eigenvalue λ = 0. To check this, we introduce the matrix B1:

B1 = IdentityMatrix[3] - A
Eigenvalues[B1]
Out[5]= {1, 0, 0}

which means that B1 has one simple eigenvalue \( \lambda = 1 \) and one double eigenvalue \( \lambda =0. \) Then we check that \( \xi \) is an eigenvector of the matrix A:

B1.{1, 0, 0}
Out[6]= {0, 0, 0}

Then we check that v is the eigenvector corresponding to \( \lambda = 0: \)

A.{-1, 1, 0}
Out[7]= {0, 0, 0}

To find the generalized eigenvector corresponding to \( \lambda = 1, \) we use the following Mathematica command

LinearSolve[B1, {1, 0, 0}]
Out[8]= {0, -1, -1}

This gives us a generalized eigenvector \( \xi_2 = \left[ 0,-1,-1 \right] \) corresponding to the eigenvalue \( \lambda = 1 \) (any nonzero multiple of it is again a generalized eigenvector). To check it, we calculate:

B1.B1.{0, 1, 1}
Out[9]= {0, 0, 0}

but the first power of B1 does not annihilate it:

B1.{0, 1, 1}
Out[10]= {-1, 0, 0}
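Alternatively, JordanDecomposition reveals the defective structure at a glance: the Jordan form contains a 2×2 Jordan block for the double eigenvalue λ = 1 and a 1×1 block for λ = 0 (the ordering of the blocks in the output may vary):
{s, j} = JordanDecomposition[A];
j
{{0, 0, 0}, {0, 1, 1}, {0, 0, 1}}
    ■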

 

The characteristic polynomial can be found either with Mathematica's command CharacteristicPolynomial or, when the eigenvalues are available, by multiplying the factors \( (\lambda - \lambda_k )^m \) for each eigenvalue \( \lambda_k \) of multiplicity m. Remember that for odd dimensions Mathematica's command CharacteristicPolynomial provides the opposite sign to Eq.\eqref{EqEigen.2} because it is based on the formula det(A − λI).

Example 7: We consider the following 3×3 matrix:

\[ {\bf B} = \begin{bmatrix} 13& \phantom{-}8& -14 \\ -6& -3& \phantom{-}7 \\ \phantom{-}6& \phantom{-}4& -6 \end{bmatrix} . \]
Its eigenvalues and eigenvectors are
S = {{1, 2, -1}, {2, -1, 5}, {2, 1, 2}};
B = S.{{1, 0, 0}, {0, 2, 0}, {0, 0, 1}}.Inverse[S]
{{13, 8, -14}, {-6, -3, 7}, {6, 4, -6}}
Eigenvalues[B]
{2, 1, 1}
Eigenvectors[B]
{{2, -1, 1}, {7, 0, 6}, {-2, 3, 0}}
Manual determination of eigenvalues and eigenvectors is quite tedious.
B = {{13, 8, -14}, {-6, -3, 7}, {6, 4, -6}};
sys[lambda_] = lambda*IdentityMatrix[3] - B;
p[lambda_] = Det[sys[lambda]];
lambda1 = lambda /. Solve[p[lambda] == 0, lambda][[1]]
lambda2 = lambda /. Solve[p[lambda] == 0, lambda][[2]]
lambda3 = lambda /. Solve[p[lambda] == 0, lambda][[3]]
1, 1, 2
In order to determine eigenvectors corresponding to the eigenvalue λ = 1, we need to solve the system of algebraic equations:
\[ {\bf B}\,{\bf x} = {\bf x} \qquad \mbox{or} \qquad \begin{cases} 13\,x_1 + 8\,x_2 -14\,x_3 &= x_1 , \\ -6\,x_1 - 3\,x_2 + 7\,x_3 &= x_2 , \\ \phantom{-}6\, x_1 + 4\, x_2 -6\, x_3 &= x_3 . \end{cases} \]
From the first equation, we find
\[ 6\,x_1 = -4\,x_2 + 7\,x_3 . \]
Substituting x1 from this relation into the other two equations yields
\[ \begin{cases} 4\,x_2 - 7\,x_3 - 3\,x_2 + 7\,x_3 &= x_2 , \\ -4\,x_2 + 7\,x_3 + 4\, x_2 -6\, x_3 &= x_3 , \end{cases} \qquad \mbox{or} \qquad \begin{cases} 0 &= 0, \\ 0 &= 0. \end{cases} \]
This means that the remaining two equations impose no constraint on the coordinates of the eigenvector. In other words, all three equations are equivalent, and you can take any one of them to define the relation between the coordinates. Therefore, the coordinates of the eigenvectors corresponding to the double eigenvalue λ = 1 are related via the identity
\[ 6\,{\bf x} = \begin{bmatrix} -4\,x_2 + 7\,x_3 \\ 6\,x_2 \\ 6\,x_3 \end{bmatrix} = x_2 \begin{bmatrix} -4 \\ \phantom{-}6 \\ \phantom{-}0 \end{bmatrix} + x_3 \begin{bmatrix} 7 \\ 0 \\ 6 \end{bmatrix} . \]
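Both directions can be confirmed in Mathematica in one line, using the matrix B defined above:
B.{-2, 3, 0} == {-2, 3, 0} && B.{7, 0, 6} == {7, 0, 6}
True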
So there are two linearly independent eigenvectors corresponding to the double eigenvalue λ = 1, for instance \( [-2, 3, 0]^{\mathrm T} \) (which is \( [-4, 6, 0]^{\mathrm T} \) divided by 2) and \( [7, 0, 6]^{\mathrm T} \), in agreement with the Mathematica output above.    ■

As one more illustration, the characteristic polynomial of the rotation matrix from Example 4 can be found either with Mathematica's command CharacteristicPolynomial or directly from the determinant:

A := {{0, 1}, {-1, 0}}
CharacteristicPolynomial[A, lambda]
Out[2]= 1 + lambda^2
sys[lambda_] = lambda*IdentityMatrix[2] - A
Out[3]= {{lambda, -1}, {1, lambda}}
p[lambda_] = Det[sys[lambda]]                (* characteristic polynomial *)
Out[4]= 1 + lambda^2
Solve[p[lambda] == 0] // Flatten
Out[5]= {lambda -> -I, lambda -> I}       (* I is the imaginary unit *)


Two matrices A and B are called similar if there exists a nonsingular matrix S such that \( {\bf A} = {\bf S}\,{\bf B}\,{\bf S}^{-1} .\) Similar matrices always have the same eigenvalues, but their eigenvectors may be different. Let us consider an example of two matrices, one of which is diagonal and the other similar to it:

A = {{1, 0, 0}, {0, 2, 0}, {0, 0, 0.5}}
S = {{2, -1, 3}, {1, 3, -3}, {-5, -4, 1}}
B = Inverse[S].A.S
Out[3]= {{-25., -45., 36.}, {39.5, 70., -55.5}, {30.5, 53., -41.5}}
Therefore, the matrix B is similar to the diagonal matrix A; such a matrix B is called diagonalizable.
Eigenvalues[B]
Out[4]= {2., 1., 0.5}
Eigenvectors[B]
Out= {{0.457144, -0.706496, -0.540262}, {-0.451129, 0.701757, 0.55138}, {-0.46569, 0.698535, 0.543305}}
Eigenvectors[A]
Out= {{0., 1., 0.}, {1., 0., 0.}, {0., 0., 1.}}

Therefore, these two similar matrices share the same eigenvalues, but they have distinct eigenvectors.

MATLAB can also calculate the eigenvalues and eigenvectors. Start out by finding the eigenvalues:
			eigenvalues=eig(E)
If you need to see eigenvalues along with eigenvectors, type:
E= [7 4; -10 -5]
[V,D]=eig(E)

where, in the output, the columns of the matrix V are the eigenvectors and D is a diagonal matrix whose diagonal entries are the corresponding eigenvalues.

The coefficients of the characteristic polynomial are returned by

			poly(E)


