Eigenvalues and Eigenvectors
The eigenvalues $\lambda_i$ and corresponding eigenvectors $\mathbf{v}_i$ of a matrix $\mathbf{A}$ are determined by solving
$$\mathbf{A}\mathbf{v}_i = \lambda_i \mathbf{v}_i, \qquad i = 1, \ldots, n,$$
or, equivalently, the characteristic equation $\det(\mathbf{A} - \lambda \mathbf{I}) = 0$.
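As a small worked example (an arbitrary symmetric $2\times 2$ matrix chosen only for illustration), take
$$\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \qquad
\det(\mathbf{A} - \lambda\mathbf{I}) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3),$$
so $\lambda_1 = 3$ with eigenvector $\mathbf{v}_1 = (1, 1)^T$ and $\lambda_2 = 1$ with eigenvector $\mathbf{v}_2 = (1, -1)^T$.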
Next, we provide some basic background on linear algebra.
First, we define the similarity transformation. Specifically, we say that the matrix $\mathbf{A}$ is similar to the matrix $\mathbf{B}$ if $\mathbf{A}$ and $\mathbf{B}$ have the same eigenvalues (i.e., the same eigenspectrum) but not necessarily the same eigenvectors. Therefore, the transformation
$$\mathbf{A} \longrightarrow \mathbf{P}^{-1}\mathbf{A}\mathbf{P},$$
for any non-singular matrix $\mathbf{P}$, is a similarity transformation, since it leaves the eigenvalues unchanged.
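To see why such a transformation preserves the eigenspectrum (a standard one-line argument, assuming $\mathbf{P}$ is non-singular): if $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$, then
$$(\mathbf{P}^{-1}\mathbf{A}\mathbf{P})(\mathbf{P}^{-1}\mathbf{v}) = \mathbf{P}^{-1}\mathbf{A}\mathbf{v} = \lambda\,(\mathbf{P}^{-1}\mathbf{v}),$$
so $\lambda$ is also an eigenvalue of $\mathbf{P}^{-1}\mathbf{A}\mathbf{P}$, with eigenvector $\mathbf{P}^{-1}\mathbf{v}$.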
Remark 1: The transpose matrix $\mathbf{A}^T$ is similar to the matrix $\mathbf{A}$ since they have the same characteristic polynomial. However, they do not, in general, have the same eigenvectors. In contrast, the inverse matrix $\mathbf{A}^{-1}$ has the same eigenvectors as $\mathbf{A}$ but inverse eigenvalues, $\lambda_i^{-1}$. This is true because
$$\mathbf{A}\mathbf{v}_i = \lambda_i \mathbf{v}_i \;\Longrightarrow\; \mathbf{v}_i = \lambda_i \mathbf{A}^{-1}\mathbf{v}_i \;\Longrightarrow\; \mathbf{A}^{-1}\mathbf{v}_i = \frac{1}{\lambda_i}\,\mathbf{v}_i.$$
Remark 2: The matrix $\mathbf{A}^k$, where $k$ is a positive integer, has eigenvalues $\lambda_i^k$, where $\lambda_i$ are the eigenvalues of $\mathbf{A}$. However, $\mathbf{A}^k$ and $\mathbf{A}$ have the same eigenvectors. This can be extended further, and it is easy to show that if we construct a polynomial matrix
$$p(\mathbf{A}) = \alpha_0 \mathbf{I} + \alpha_1 \mathbf{A} + \alpha_2 \mathbf{A}^2 + \ldots + \alpha_m \mathbf{A}^m,$$
then $p(\mathbf{A})$ has eigenvalues $p(\lambda_i)$ and the same eigenvectors as $\mathbf{A}$.
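A quick numerical check of this remark (a sketch in Python with NumPy; the test matrix and polynomial coefficients are arbitrary choices for illustration):

import numpy as np

# Arbitrary symmetric test matrix (symmetric, so eigenvalues and eigenvectors are real)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

lam, V = np.linalg.eig(A)          # eigenvalues lam[i], eigenvectors in the columns of V

# Example polynomial matrix p(A) = 2I + 3A + A^2
pA = 2.0 * np.eye(3) + 3.0 * A + A @ A

# Each eigenvector v_i of A satisfies p(A) v_i = p(lambda_i) v_i
for i in range(3):
    v = V[:, i]
    p_lam = 2.0 + 3.0 * lam[i] + lam[i] ** 2
    print(np.allclose(pA @ v, p_lam * v))   # prints True for every i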
We have already seen that computing the eigenvalues accurately from the determinant may not always be possible; although the Newton-Raphson method of chapter is an accurate method for computing the roots of polynomials, it may be inefficient. In the following, we present a simple method to compute iteratively and selectively the maximum and minimum eigenvalues and the corresponding eigenvectors.
Power Method
This is a very simple method to obtain the maximum (in modulus) eigenvalue. The main idea is to obtain iterates from
$$\mathbf{x}^{(k+1)} = \mathbf{A}\mathbf{x}^{(k)}, \qquad k = 0, 1, 2, \ldots$$
To see why this process converges and at what rate, we project the initial guess $\mathbf{x}^{(0)}$ onto the space spanned by all the eigenvectors $\mathbf{v}_i$ of $\mathbf{A}$, i.e.,
$$\mathbf{x}^{(0)} = \sum_{i=1}^{n} c_i \mathbf{v}_i
\;\Longrightarrow\;
\mathbf{x}^{(k)} = \mathbf{A}^k \mathbf{x}^{(0)} = \sum_{i=1}^{n} c_i \lambda_i^k \mathbf{v}_i
= \lambda_1^k \left[ c_1 \mathbf{v}_1 + \sum_{i=2}^{n} c_i \left(\frac{\lambda_i}{\lambda_1}\right)^{k} \mathbf{v}_i \right],$$
where the eigenvalues are ordered so that $|\lambda_1| > |\lambda_2| \geq \ldots \geq |\lambda_n|$. The convergence rate is determined by the relative convergence of the second largest to the largest term, i.e., the ratio $|\lambda_2 / \lambda_1|$.
A pseudo-code for this algorithm is given below.
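Here it is written as a minimal Python/NumPy sketch (the function name, tolerance, and iteration cap are illustrative assumptions; the eigenvalue estimate is taken from the largest-magnitude component of the iterate, as described next):

import numpy as np

def power_method(A, x0, tol=1e-12, max_iter=1000):
    """Estimate the dominant eigenvalue and eigenvector of A by the power method."""
    x = x0 / np.abs(x0).max()           # normalize the initial guess
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x                       # one matrix-vector product per iteration
        lam = y[np.abs(y).argmax()]     # eigenvalue estimate: largest-magnitude component
        x = y / lam                     # normalize so the largest component is 1
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return lam, x

# Example usage: lam1, v1 = power_method(A, np.ones(A.shape[0]))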
In the algorithm above, the value of $\lambda_1$ is estimated from the maximum (in magnitude) component of $\mathbf{x}^{(k)}$. However, any other norm can be used, e.g., the $L_2$ norm $\|\mathbf{x}^{(k)}\|_2$.
The convergence of the power method can be enhanced by shifting the eigenvalues, so instead of multiplying the initial guess by powers of $\mathbf{A}$ we multiply by powers of $(\mathbf{A} - \sigma\mathbf{I})$, which has eigenvalues $(\lambda_i - \sigma)$ while the eigenvectors remain the same. The corresponding convergence rate is then estimated by the ratio
$$\left| \frac{\lambda_2 - \sigma}{\lambda_1 - \sigma} \right|.$$
The Inverse Shifted Power Method
In order to compute selectively the smallest eigenvalue, we can apply again the power method by multiplying by powers of the inverse, i.e., $\mathbf{A}^{-1}$ or, more generally, the shifted inverse $(\mathbf{A} - \sigma\mathbf{I})^{-1}$, whose dominant eigenvalue $1/(\lambda_i - \sigma)$ corresponds to the eigenvalue $\lambda_i$ of $\mathbf{A}$ closest to the shift $\sigma$. Thus, the iteration procedure here is
$$(\mathbf{A} - \sigma\mathbf{I})\,\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)}, \qquad k = 0, 1, 2, \ldots$$
The following pseudo-code summarizes the algorithm:
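A minimal sketch in Python (using scipy.linalg for the LU factorization; the function name, default shift, tolerance, and iteration cap are illustrative assumptions):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_shifted_power(A, x0, sigma=0.0, tol=1e-12, max_iter=1000):
    """Estimate the eigenvalue of A closest to sigma (sigma=0 gives the smallest in magnitude)."""
    n = A.shape[0]
    # LU factorization of (A - sigma*I) is performed only once, outside the loop
    lu, piv = lu_factor(A - sigma * np.eye(n))
    x = x0 / np.abs(x0).max()
    mu_old = 0.0
    for _ in range(max_iter):
        y = lu_solve((lu, piv), x)       # solve (A - sigma*I) y = x: O(n^2) work per iteration
        mu = y[np.abs(y).argmax()]       # dominant eigenvalue estimate of the shifted inverse
        x = y / mu
        if abs(mu - mu_old) < tol * abs(mu):
            break
        mu_old = mu
    lam = sigma + 1.0 / mu               # map back to an eigenvalue of A
    return lam, x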
Remark 1: Note that we do not actually compute explicitly the inverse $\mathbf{A}^{-1}$ or $(\mathbf{A} - \sigma\mathbf{I})^{-1}$; we simply perform an LU factorization only once, outside the loop, and do a forward and a back substitution in each iteration. So the computational complexity of this algorithm is $\mathcal{O}(n^2)$ times the number of iterations plus the initial $\mathcal{O}(n^3)$ cost of the LU factorization.
Remark 2: To accelerate convergence, we can start with a few iterations using the standard power method, obtain a first good guess $\mathbf{x}$ and a corresponding shift $\sigma$ via the Rayleigh quotient,
$$\sigma = \frac{\mathbf{x}^T \mathbf{A}\mathbf{x}}{\mathbf{x}^T \mathbf{x}},$$
and then switch to the inverse iteration method.
Remark 3: The matrix $(\mathbf{A} - \sigma\mathbf{I})$ is ill-conditioned when the shift $\sigma$ is close to an eigenvalue; however, in practice the error associated with this seems to favor the inverse iteration, as it grows toward the direction of the desired eigenvector. Therefore, the inverse shifted power method is a stable method.
We can modify the inverse shifted power method and enhance convergence even more (to third-order) if we update the value of the shift adaptively in each iteration, using the Rayleigh quotient of the current iterate. The following algorithm presents this modification:
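A minimal sketch of this Rayleigh-quotient-based variant in Python (again with scipy.linalg; the starting shift, tolerance, and function name are illustrative assumptions):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rayleigh_quotient_iteration(A, x0, sigma0=0.0, tol=1e-12, max_iter=50):
    """Inverse iteration with the shift updated from the Rayleigh quotient of the iterate."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    sigma = sigma0
    for _ in range(max_iter):
        # The shifted matrix changes every iteration, so it must be re-factorized: O(n^3) per step
        lu, piv = lu_factor(A - sigma * np.eye(n))
        y = lu_solve((lu, piv), x)
        x = y / np.linalg.norm(y)
        sigma_new = x @ (A @ x)          # Rayleigh quotient (x has unit norm)
        if abs(sigma_new - sigma) < tol * max(1.0, abs(sigma_new)):
            return sigma_new, x
        sigma = sigma_new
    return sigma, x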
Remark: While this algorithm triples the number of correct digits in each iteration, it requires $\mathcal{O}(n^3)$ work at each iteration, because the matrix $(\mathbf{A} - \sigma_k\mathbf{I})$ changes in each iteration and has to be re-factorized. A more economical approach is to ``freeze'' the shift $\sigma_k$ for a few iterations, so that the LU decomposition is not employed in each iteration. The resulting convergence rate is then less than cubic, but overall this is a more efficient approach.