If **A** is a square \( n \times n \) matrix and **v** is an \( n \times 1 \)
column vector, then the product \( {\bf A}\,{\bf v} \) is defined and is another \( n \times 1 \)
column vector. It is important in many applications to determine whether there exist nonzero column vectors **v** such
that the product vector \( {\bf A}\,{\bf v} \) is a constant multiple (which we denote by \( \lambda \)) of **v**.
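For instance, this relation \( {\bf A}\,{\bf v} = \lambda\,{\bf v} \) can be checked numerically. The following quick illustration uses Python with NumPy (alongside the MATLAB and *Mathematica* sessions below); the matrix here is made up for the example:

```python
import numpy as np

# Illustrative matrix (not from the text): v = [1, 1] is an
# eigenvector with eigenvalue 3, since A v = 3 v.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])
lam = 3.0

Av = A @ v
print(Av)                       # [3. 3.]
assert np.allclose(Av, lam * v)
```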

If the homogeneous equation \( \lambda\,{\bf v} - {\bf A}\,{\bf v} = {\bf 0} \) has a nontrivial solution **v**, then the scalar \( \lambda \) is called an eigenvalue of **A**, and the vector **v** is called an eigenvector corresponding to the eigenvalue \( \lambda . \) This may happen only when the determinant of the system \( \lambda\,{\bf v} - {\bf A}\,{\bf v} = {\bf 0} \) is zero, namely, \( \det \left( \lambda\, {\bf I} - {\bf A} \right) = 0 , \) where **I** is the identity matrix. This determinant is called the characteristic polynomial, and we denote it by \( \chi (\lambda ) = \det \left( \lambda\, {\bf I} - {\bf A} \right) . \) Therefore, the eigenvalues are the zeros of the characteristic polynomial, that is, the roots of the equation \( \chi (\lambda ) = 0. \) The characteristic polynomial is always a polynomial of degree n, where n is the dimension of the square matrix **A**. It can be expressed through the eigenvalues:

\( \chi (\lambda ) = \left( \lambda - \lambda_1 \right) \left( \lambda - \lambda_2 \right) \cdots \left( \lambda - \lambda_n \right) , \)

where \( \lambda_1 , \lambda_2 , \ldots , \lambda_n \) are the eigenvalues of **A**, listed with their multiplicities. Another useful quantity is the trace of the matrix **A**, that is, the sum of its diagonal elements, which is equal to the sum of all eigenvalues (including their multiplicities). The set of all eigenvalues is called the spectrum of the matrix **A**.
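These two facts (spectrum sums to the trace, and its product equals the determinant) are easy to verify numerically. A small Python/NumPy sketch, with a matrix invented for the illustration:

```python
import numpy as np

# Hypothetical 3x3 matrix chosen only for illustration.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)   # the spectrum of A

# Sum of eigenvalues equals the trace (sum of diagonal entries).
assert np.isclose(eigenvalues.sum(), np.trace(A))

# Product of eigenvalues equals the determinant.
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))
```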

```
A = [-2, -4, 2; -2, 1, 2; 4, 2, 5]
[Eigenvalues_A, Eigenvectors_A] = my_eigen(A)

function [eigenvalues, eigenvectors] = my_eigen(A)
%%% This function takes an n by n matrix A and
%%% returns the eigenvalues and their associated eigenvectors.
[m, n] = size(A);
assert(isequal(m, n), "A is not a square matrix.")  % A must be square.
syms l
char_poly = det(l*eye(m) - A);      % characteristic polynomial chi(l)
eigenvalues = solve(char_poly, l);  % the eigenvalues are its roots
for i = 1:m        % snap tiny roundoff values to an exact zero
    if abs(eigenvalues(i)) < 1e-5
        eigenvalues(i) = 0;
    end
end
eigenvalues = transpose(eigenvalues);  % row vector, so that eigenvalue i
                                       % pairs with eigenvector column i
eigenvectors = sym(zeros(m));  % preallocate (symbolic, because null of a
                               % symbolic matrix returns symbolic vectors)
for i = 1:m
    % An eigenvector spans the null space of A - lambda*I.
    a_eigenvector = null(A - eigenvalues(i)*eye(m));
    a_eigenvector = a_eigenvector(:, 1);  % keep one basis vector (assumes
                                          % the eigenvalues are distinct)
    % Scale by the smallest nonzero magnitude to make it readable.
    nonzero = a_eigenvector(a_eigenvector ~= 0);
    a_eigenvector = a_eigenvector / min(abs(nonzero));
    eigenvectors(:, i) = a_eigenvector;
end
end
```
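As a cross-check of the MATLAB function above (an illustrative verification in Python/NumPy, not part of the original code), the same matrix has eigenvalues \( -5, \ 3, \) and \( 6: \)

```python
import numpy as np

# The same matrix as in the MATLAB example above.
A = np.array([[-2.0, -4.0, 2.0],
              [-2.0,  1.0, 2.0],
              [ 4.0,  2.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))        # approximately [-5.  3.  6.]

# Each column of `eigenvectors` satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```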

**Eigensystem[B]** gives a list {values, vectors} of the eigenvalues and eigenvectors of the square matrix **B**.

Therefore, we know that the matrix **A** has one double eigenvalue \( \lambda = 1 \) and one simple eigenvalue
\( \lambda = 0 \) (which indicates that the matrix **A** is singular). Next, we find the eigenvectors.

So *Mathematica* provides only one eigenvector \( \xi = \left[ 1,0,0 \right] \) corresponding to the eigenvalue \( \lambda = 1 \) (therefore, this eigenvalue is defective) and one eigenvector **v** = <-1,1,0> corresponding to the eigenvalue \( \lambda = 0. \) To check it, we introduce the matrix **B1**:

which means that **B1** has one simple eigenvalue \( \lambda = 1 \) and one
double eigenvalue \( \lambda = 0. \) Then we check that \( \xi \) is an eigenvector
of the matrix **A**:

Then we check that **v** is an eigenvector corresponding to \( \lambda = 0: \)

To find the generalized eigenvector corresponding to \( \lambda = 1, \) we
use the following *Mathematica* command:

This gives us a generalized eigenvector \( \xi_2 = \left[ 0,-1,-1 \right] \) corresponding to the eigenvalue \( \lambda = 1 \) (it can be multiplied by any nonzero constant). To check it, we calculate:

but the first power of **B1** does not annihilate it:
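Since the matrices of this example are not reproduced here, the same mechanism can be sketched with an assumed minimal 2×2 defective matrix (a Jordan block, not the matrix from the text): a generalized eigenvector is annihilated by the second power of \( {\bf A} - \lambda {\bf I} , \) but not by the first.

```python
import numpy as np

# Assumed 2x2 defective matrix for illustration (a Jordan block).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0                      # double, defective eigenvalue
B1 = A - lam * np.eye(2)

v1 = np.array([1.0, 0.0])      # ordinary eigenvector: B1 v1 = 0
v2 = np.array([0.0, 1.0])      # generalized eigenvector: B1 v2 = v1

assert np.allclose(B1 @ v1, 0)        # killed by the first power
assert not np.allclose(B1 @ v2, 0)    # survives the first power ...
assert np.allclose(B1 @ B1 @ v2, 0)   # ... but is killed by B1^2
```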

The characteristic polynomial can be found either with *Mathematica*'s command **CharacteristicPolynomial** or directly.

CharacteristicPolynomial[A, lambda]

This can be obtained manually as follows:

Two matrices **A** and **B** are called similar if there exists a nonsingular matrix **S** such that
\( {\bf A} = {\bf S}\,{\bf B}\,{\bf S}^{-1} .\) Similar matrices always have the same eigenvalues, but their eigenvectors can
be different. Let us consider an example of two matrices, one of which is diagonal and the other similar to it:

S = {{2, -1, 3}, {1, 3, -3}, {-5, -4, 1}}

B = Inverse[S].A.S

**B** is similar to the diagonal matrix **A**. We call such a matrix **B** diagonalizable.

Therefore, these two similar matrices share the same eigenvalues, but they have distinct eigenvectors.
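This can be verified numerically. The sketch below (in Python/NumPy, for illustration) uses the transition matrix **S** from above; the diagonal matrix **A** is an assumption here (diag(1, 2, 3)), since its entries are not reproduced in this excerpt:

```python
import numpy as np

# Transition matrix S from the example above (det S = 1, so invertible).
S = np.array([[ 2.0, -1.0,  3.0],
              [ 1.0,  3.0, -3.0],
              [-5.0, -4.0,  1.0]])
A = np.diag([1.0, 2.0, 3.0])     # assumed diagonal matrix

Sinv = np.linalg.inv(S)
B = Sinv @ A @ S                 # B = S^{-1} A S is similar to A

# Similar matrices share the same spectrum ...
assert np.allclose(np.sort(np.linalg.eigvals(B).real), np.diag(A))

# ... but not the same eigenvectors: A's eigenvectors are the standard
# basis vectors, while B's are the columns of S^{-1}.
for k in range(3):
    assert np.allclose(B @ Sinv[:, k], A[k, k] * Sinv[:, k])
```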

MATLAB can also calculate the eigenvalues and eigenvectors. Start by finding the eigenvalues alone with `eigenvalues = eig(E)`, or request both outputs:

```
E= [7 4; -10 -5]
[V,D]=eig(E)
```

where, in the output, the columns of the matrix **V** are the eigenvectors and **D** is a
diagonal matrix whose diagonal entries are the corresponding eigenvalues.

The coefficients of the characteristic polynomial are returned by

`poly(E)`
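For this matrix **E**, the coefficients come out as \( 1, -2, 5, \) that is, \( \chi (\lambda ) = \lambda^2 - 2\lambda + 5 , \) whose roots are the complex pair \( 1 \pm 2i . \) A Python/NumPy cross-check (for illustration; `numpy.poly` is the analogue of MATLAB's `poly`):

```python
import numpy as np

# numpy.poly(E) returns the coefficients of det(lambda*I - E).
E = np.array([[  7.0,  4.0],
              [-10.0, -5.0]])

coeffs = np.poly(E)
print(coeffs)   # approximately [1. -2. 5.], i.e. lambda^2 - 2*lambda + 5

# Its roots are exactly the eigenvalues of E (here 1 +/- 2i).
assert np.allclose(np.sort_complex(np.roots(coeffs)),
                   np.sort_complex(np.linalg.eigvals(E)))
```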
