MATLAB TUTORIAL, part 2.1: Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors

If A is a square \( n \times n \) matrix and v is an \( n \times 1 \) column vector, then the product \( {\bf A}\,{\bf v} \) is defined and is another \( n \times 1 \) column vector. In many applications it is important to determine whether there exist nonzero column vectors v such that the product \( {\bf A}\,{\bf v} \) is a constant multiple of v; this constant is traditionally denoted by \( \lambda . \)

If a homogeneous equation

\[ {\bf A} \, {\bf v} = \lambda\,{\bf v} \]
has a nontrivial (that is, not identically zero) solution v, then the vector v is called an eigenvector corresponding to the eigenvalue \( \lambda .\) This may happen only when the determinant of the homogeneous system \( \left( \lambda\, {\bf I} - {\bf A} \right) {\bf v} = {\bf 0} \) vanishes, namely, \( \det \left( \lambda\, {\bf I} - {\bf A} \right) =0 ,\) where I is the identity matrix. This determinant, regarded as a function of \( \lambda ,\) is called the characteristic polynomial, and we denote it by \( \chi (\lambda ) = \det \left( \lambda\, {\bf I} - {\bf A} \right) . \) Therefore, the eigenvalues are exactly the roots of the characteristic equation \( \chi (\lambda ) = 0. \) The characteristic polynomial is always a monic polynomial of degree n, where n is the dimension of the square matrix A. Its leading and constant coefficients are determined by the trace and determinant of A:
\[ \chi (\lambda ) = \det \left( \lambda\, {\bf I} - {\bf A} \right) = \lambda^n - \left( \mbox{tr} {\bf A} \right) \lambda^{n-1} + \cdots + (-1)^n \,\det {\bf A} , \]
where \( \mbox{tr} {\bf A} = a_{11} + a_{22} + \cdots + a_{nn} = \lambda_1 + \lambda_2 + \cdots + \lambda_n \) is the trace of the matrix A, that is, the sum of its diagonal elements, which equals the sum of all eigenvalues counted with their multiplicities.

The set of all eigenvalues is called the spectrum of the matrix A.
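Although this tutorial works in MATLAB and Mathematica, the relation between the characteristic polynomial, the trace, and the determinant is easy to sanity-check in any numerical system. Here is a short NumPy sketch (the 2-by-2 matrix is chosen only for illustration and is not part of the tutorial):

```python
import numpy as np

# A small symmetric matrix whose characteristic polynomial
# lambda^2 - (tr A) lambda + det A can be expanded by hand.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A),
# ordered from lambda^n down to the constant term.
coeffs = np.poly(A)          # approximately [1, -5, 5]

print(coeffs)
print(np.trace(A))           # tr A = 5, minus the lambda^{n-1} coefficient
print(np.linalg.det(A))      # det A = 5, the constant term times (-1)^n
```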

 

A = [-2,-4,2;-2,1,2;4,2,5]
[Eigenvalues_A,Eigenvectors_A] = my_eigen(A)
function [eigenvalues, eigenvectors] = my_eigen(A)
%%%This function takes an n by n matrix A and
%%%returns the eigenvalues and their associated eigenvectors.
%%%It assumes every eigenspace of A is one-dimensional (A is not defective).
    [m,n] = size(A);
    assert(isequal(m,n), "A is not a square matrix.") %Check that A is square.
    syms l
    lI = eye(m)*l;
    char_poly = det(A - lI);
    eigenvalues = solve(char_poly,l);
    for i = 1:m %Snap any eigenvalue that is numerically negligible to exactly 0.
        if abs(eigenvalues(i)) < 10^-5
            eigenvalues(i) = 0;
        end
    end
    eigenvalues = transpose(eigenvalues); %Make it a row vector, so the i-th
                                          %eigenvalue goes with the i-th eigenvector.
    eigenvectors = sym(zeros(m)); %Preallocate a symbolic matrix for the eigenvectors.
    for i = 1:m
       matrix = A - eye(m)*eigenvalues(i);
       a_eigenvector = null(matrix); %An eigenvector spans the null space of A - lambda*I.
       nonzero = a_eigenvector(logical(a_eigenvector ~= 0)); %Nonzero components.
       a_eigenvector = a_eigenvector/nonzero(1); %Scale the first nonzero entry to 1;
                                                 %dividing by min(abs(...)) would fail
                                                 %whenever a component is zero.
       eigenvectors(:,i) = a_eigenvector;
    end
end
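As a cross-check of my_eigen on the same example matrix, the computation can be redone numerically. A NumPy sketch (not part of the MATLAB code above):

```python
import numpy as np

# The same example matrix as in the MATLAB snippet above.
A = np.array([[-2.0, -4.0, 2.0],
              [-2.0,  1.0, 2.0],
              [ 4.0,  2.0, 5.0]])

# numpy.linalg.eig plays the role of MATLAB's [V,D] = eig(A):
# it returns the eigenvalues and a matrix whose columns are eigenvectors.
vals, vecs = np.linalg.eig(A)
print(np.sort(vals.real))        # -5, 3, 6 up to floating-point error

# Verify the defining relation A v = lambda v for every pair.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```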

 

Eigensystem[B] gives a list {values, vectors} of the eigenvalues and eigenvectors of the square matrix B.

B = {{0, 1}, {-1, 0}}
Out[1]= {{0, 1}, {-1, 0}}
Eigenvalues[B]
Out[2]= {I,-I}
Eigenvectors[B]
Out[3]= {{-I,1},{I,1}}
Eigensystem[B]
Out[4]= {{I, -I}, {{-I, 1}, {I, 1}}}
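The same small computation can be reproduced numerically outside Mathematica; a NumPy sketch:

```python
import numpy as np

B = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])

# Rotation-type matrix: its eigenvalues form the purely imaginary pair +/- i.
vals, vecs = np.linalg.eig(B)
print(vals)

assert np.allclose(np.sort_complex(vals), [-1j, 1j])
```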

Consider a defective matrix:
A = {{1, 1, 0}, {0, 0, 1}, {0, 0, 1}}
Eigenvalues[A]
Out[2]= {1, 1, 0}

Therefore, we know that the matrix A has one double eigenvalue \( \lambda = 1 \) and one simple eigenvalue \( \lambda = 0 \) (which indicates that A is singular). Next, we find the eigenvectors:

Eigenvectors[A]
Out[3]= {{1, 0, 0}, {0, 0, 0}, {-1, 1, 0}}

So Mathematica provides only one eigenvector \( \xi = \left[ 1,0,0 \right] \) corresponding to the double eigenvalue \( \lambda =1 \) (therefore, this eigenvalue is defective; the zero row in the output marks the missing second eigenvector) and one eigenvector \( {\bf v} = \left[ -1,1,0 \right] \) corresponding to the eigenvalue \( \lambda = 0. \) To check this, we introduce the matrix B1:

B1 = IdentityMatrix[3] - A
Eigenvalues[B1]
Out[5]= {1, 0, 0}

which means that B1 has one simple eigenvalue \( \lambda = 1 \) and one double eigenvalue \( \lambda =0. \) Then we check that \( \xi \) is an eigenvector of the matrix A corresponding to \( \lambda = 1 \) (equivalently, that B1 annihilates it):

B1.{1, 0, 0}
Out[6]= {0, 0, 0}

Then we check that v is the eigenvector corresponding to \( \lambda = 0: \)

A.{-1, 1, 0}
Out[7]= {0, 0, 0}

To find the generalized eigenvector corresponding to \( \lambda = 1, \) we use the following Mathematica command

LinearSolve[B1, {1, 0, 0}]
Out[8]= {0, -1, -1}

This gives a generalized eigenvector \( \xi_2 = \left[ 0,-1,-1 \right] \) corresponding to the eigenvalue \( \lambda = 1 \) (it can be multiplied by any nonzero constant). To check it, we verify that the second power of B1 annihilates it:

B1.B1.{0, 1, 1}
Out[9]= {0, 0, 0}

but the first power of B1 does not annihilate it:

B1.{0, 1, 1}
Out[10]= {-1, 0, 0}
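The whole chain for this defective matrix (eigenvector, generalized eigenvector, and the powers of B1) can be verified numerically; a NumPy sketch using the vectors from the session above:

```python
import numpy as np

A  = np.array([[1.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 1.0]])
B1 = np.eye(3) - A

xi  = np.array([1.0, 0.0, 0.0])    # ordinary eigenvector for lambda = 1
xi2 = np.array([0.0, 1.0, 1.0])    # generalized eigenvector (up to sign/scale)

assert np.allclose(B1 @ xi, 0)             # (I - A) xi = 0
assert not np.allclose(B1 @ xi2, 0)        # the first power does not annihilate xi2
assert np.allclose(B1 @ (B1 @ xi2), 0)     # but the second power does
```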

 

The characteristic polynomial can be found either with Mathematica's command
CharacteristicPolynomial or directly. (Note that Mathematica defines CharacteristicPolynomial[A, \( \lambda \)] as \( \det \left( {\bf A} - \lambda\,{\bf I} \right) ,\) which differs from our \( \det \left( \lambda\,{\bf I} - {\bf A} \right) \) by the factor \( (-1)^n ;\) for the 2-by-2 examples below the two agree.)

A := {{0, 1}, {-1, 0}}
CharacteristicPolynomial[A, lambda]
Out[2]= 1 + lambda^2
sys[lambda_] = lambda*IdentityMatrix[2] - A
Out[3]= {{lambda, -1}, {1, lambda}}
p[lambda_] = Det[sys[lambda]]                (* characteristic polynomial *)
Out[4]= 1 + lambda^2
Solve[p[lambda] == 0] // Flatten
Out[5]= {lambda -> -I, lambda -> I}       (* I is the imaginary unit *)

Here is the same direct computation for another (this time singular) matrix:

A = {{1, 2}, {2, 4}}
Out[1]= {{1, 2}, {2, 4}}
sys[lambda_] = lambda*IdentityMatrix[2]-A
Out[2]= {{-1 + lambda, -2}, {-2, -4 + lambda}}
p[lambda_] =Det[sys[lambda]]
Out[3]= -5 lambda + lambda^2
Find the roots of the characteristic equation (eigenvalues of the matrix A):

Solve[p[lambda]==0]
Out[4]= {{lambda -> 0}, {lambda -> 5}}
Capture the eigenvalues:
{lambda1,lambda2} = x/.Solve[p[x]==0]
Out[5]= {0, 5}
To find the eigenvector corresponding to \( \lambda_1 = 0 ,\) take a basis vector of the null space of sys[lambda1]:
v1 = NullSpace[sys[lambda1]][[1]]
Out[6]= {-2, 1}
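The same manual procedure (build \( \lambda {\bf I} - {\bf A} ,\) take the determinant, solve, then extract a null-space basis) carries over almost verbatim to SymPy; a sketch:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 2], [2, 4]])

# Characteristic polynomial det(lambda*I - A), as in the session above.
p = (lam * sp.eye(2) - A).det()
print(sp.expand(p))                  # lambda**2 - 5*lambda

roots = sorted(sp.solve(p, lam))     # eigenvalues 0 and 5
print(roots)

# Eigenvector for lambda = 0: a basis vector of the null space of (0*I - A).
v1 = (roots[0] * sp.eye(2) - A).nullspace()[0]
print(list(v1))                      # proportional to [-2, 1]
```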

Two matrices A and B are called similar if there exists a nonsingular matrix S such that \( {\bf A} = {\bf S}\,{\bf B}\,{\bf S}^{-1} .\) Similar matrices always have the same eigenvalues, but their eigenvectors may differ. Let us consider an example of two matrices, one of which is diagonal and the other similar to it:

A = {{1, 0, 0}, {0, 2, 0}, {0, 0, 0.5}}
S = {{2, -1, 3}, {1, 3, -3}, {-5, -4, 1}}
B = Inverse[S].A.S
Out[3]= {{-25., -45., 36.}, {39.5, 70., -55.5}, {30.5, 53., -41.5}}
Therefore, the matrix B is similar to the diagonal matrix A. We call such a matrix B diagonalizable.
Eigenvalues[B]
Out[4]= {2., 1., 0.5}
Eigenvectors[B]
Out[5]= {{0.457144, -0.706496, -0.540262}, {-0.451129, 0.701757, 0.55138}, {-0.46569, 0.698535, 0.543305}}
Eigenvectors[A]
Out[6]= {{0., 1., 0.}, {1., 0., 0.}, {0., 0., 1.}}

Therefore, these two similar matrices share the same eigenvalues, but they have distinct eigenvectors.
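This similarity experiment is easy to repeat numerically; a NumPy sketch with the same A, S, and B:

```python
import numpy as np

# The same diagonal matrix and transition matrix as above.
A = np.diag([1.0, 2.0, 0.5])
S = np.array([[ 2.0, -1.0,  3.0],
              [ 1.0,  3.0, -3.0],
              [-5.0, -4.0,  1.0]])

B = np.linalg.inv(S) @ A @ S        # then A = S B S^{-1}, so A and B are similar

vals_B = np.linalg.eig(B)[0]
print(np.sort(vals_B.real))         # 0.5, 1, 2: the eigenvalues of A

# The eigenvalues agree, while the eigenvectors of A (the standard basis
# vectors) differ from those of B.
assert np.allclose(np.sort(vals_B.real), [0.5, 1.0, 2.0])
```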

MATLAB can also calculate the eigenvalues and eigenvectors. First define a matrix, then find its eigenvalues:
			E = [7 4; -10 -5]
			eigenvalues = eig(E)
If you need to see the eigenvalues along with the eigenvectors, type:
			[V,D] = eig(E)

where in the output, V is the matrix whose columns are the eigenvectors and D is a diagonal matrix whose diagonal entries are the eigenvalues.

The coefficients of the characteristic polynomial are returned by

			poly(E)
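For readers without MATLAB, the eig and poly calls above have direct NumPy analogues (np.linalg.eig and np.poly); a sketch with the same matrix E:

```python
import numpy as np

E = np.array([[  7.0,  4.0],
              [-10.0, -5.0]])

# MATLAB's [V,D] = eig(E) corresponds to numpy.linalg.eig:
vals, V = np.linalg.eig(E)
print(vals)                  # the complex-conjugate pair 1 + 2j and 1 - 2j

# MATLAB's poly(E) corresponds to numpy.poly: coefficients of the
# characteristic polynomial lambda^2 - 2*lambda + 5.
print(np.poly(E))            # approximately [1, -2, 5]
```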




Applications