
Householder Deflation



The Householder transformation is very useful for ``deflating'' a matrix ${\bf A}$, i.e., reducing its size from $n\times n$ to $(n-1) \times (n-1)$. This is often needed in eigensolvers, especially in conjunction with local and semi-direct methods. The objective is to compute the subdominant eigenvalues after we have computed the maximum or minimum eigenvalue with the power method or inverse power method, respectively.

To wit, let us assume that we have computed the maximum eigenvalue $\lambda_1$ and the corresponding eigenvector ${\bf v}_1.$ Then, we obtain a Householder matrix ${\bf H}$ such that

\begin{displaymath}{\bf H} {\bf v}_1 = \left [ \begin{array}{c}
\alpha\\ 0\\ \vdots\\ 0\end{array}\right ] = \alpha {\bf e}_1.\end{displaymath}
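As an illustration (not part of the original text), the following is a minimal NumPy sketch of one standard way to construct such an ${\bf H}$, namely ${\bf H} = {\bf I} - 2{\bf u}{\bf u}^T/({\bf u}^T{\bf u})$ with ${\bf u} = {\bf v}_1 - \alpha {\bf e}_1$ and the sign of $\alpha$ chosen to avoid cancellation; the helper name householder_matrix is our own.

\begin{verbatim}
import numpy as np

def householder_matrix(v1):
    """Return (H, alpha) with H orthogonal and symmetric and H @ v1 = alpha*e1."""
    n = v1.shape[0]
    # alpha = -sign(v1[0]) * ||v1||, so the first entry of u = v1 - alpha*e1
    # is computed without cancellation.
    alpha = -np.copysign(np.linalg.norm(v1), v1[0])
    u = v1.astype(float).copy()
    u[0] -= alpha
    H = np.eye(n) - 2.0 * np.outer(u, u) / (u @ u)
    return H, alpha
\end{verbatim}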

We now construct the matrix ${\bf G} \equiv {\bf H} {\bf A} {\bf H}^T$ and investigate its structure. Since $(\lambda_1, {\bf v}_1)$ is an eigenpair it satisfies

\begin{displaymath}{\bf A}{\bf v}_1 = \lambda_1 {\bf v}_1
\Rightarrow {\bf H} {\bf A}{\bf v}_1 = \lambda _1 {\bf H} {\bf v}_1
\end{displaymath}


\begin{displaymath}\Rightarrow {\bf H}{\bf A} \cdot {\bf I} \cdot {\bf v}_1 =
\lambda_1 {\bf H}{\bf v}_1
\Rightarrow {\bf H}{\bf A}({\bf H}^T\cdot {\bf H}){\bf v}_1 =
\lambda_1 {\bf H}{\bf v}_1\end{displaymath}


\begin{displaymath}\Rightarrow {\bf H}\cdot {\bf A}\cdot {\bf H}^T ({\bf H} {\bf v}_1)
= \lambda _1 ({\bf H}{\bf v}_1),
\end{displaymath}

where we recall that ${\bf H}$ is orthogonal and thus ${\bf I} = {\bf H}^T\cdot {\bf H}.$ We also have by definition that

\begin{displaymath}{\bf H}{\bf v}_1 = \alpha {\bf e}_1\end{displaymath}

so by substituting above we obtain

\begin{displaymath}{\bf H} \cdot {\bf A} \cdot{\bf H}^T (\alpha {\bf e}_1) =
\lambda_1 (\alpha {\bf e}_1)
\Rightarrow {\bf H}{\bf A}{\bf H}^T {\bf e}_1 = \lambda_1 {\bf e}_1 =
\left [ \begin{array}{c}\lambda_1\\ 0\\ \vdots\\ 0\end{array}\right ].\end{displaymath}

Therefore, the first column of ${\bf G}$ is $\lambda_1 {\bf e}_1$, and the matrix ${\bf G}$ has the form

\begin{displaymath}{\bf G} = \left [ \begin{array}{cccc}
\lambda_1 & x & \cdots & x \\
0 & & & \\
\vdots & & {\bf A}_{n-1} & \\
0 & & &
\end{array}\right ]\end{displaymath}

where ${\bf A}_{n-1}$ is an $(n-1) \times (n-1)$ submatrix that has the same remaining $(n-1)$ eigenvalues as ${\bf A}$, and $x$ denotes a generally nonzero entry. This can be seen by computing

\begin{displaymath}\det ({\bf G}-\lambda {\bf I}) = (\lambda_1 -\lambda )\cdot
\det ({\bf A}_{n-1}-\lambda {\bf I}) = 0.
\end{displaymath}

To compute the first subdominant eigenvalue of ${\bf A}$ we then need to apply the power method to ${\bf A}_{n-1}$, and so on.
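To make the procedure concrete, here is a minimal sketch (our own, continuing the NumPy code above and reusing householder_matrix) that computes $\lambda_1$ with a basic power iteration, forms ${\bf G} = {\bf H}{\bf A}{\bf H}^T$, and applies the power method to the trailing $(n-1)\times(n-1)$ block to approximate $\lambda_2$; the helper power_method and the small test matrix are illustrative assumptions, not taken from the text.

\begin{verbatim}
def power_method(A, iters=1000, tol=1e-12):
    """Basic power iteration: dominant eigenvalue and eigenvector of A."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        lam_new = v @ w                  # Rayleigh-quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) <= tol * abs(lam_new):
            break
        lam = lam_new
    return lam_new, v

A = np.array([[4.0, 1.0, 0.0],           # small symmetric test matrix
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam1, v1 = power_method(A)               # dominant eigenpair of A
H, _ = householder_matrix(v1)
G = H @ A @ H.T                          # first column of G is lam1 * e1
A_sub = G[1:, 1:]                        # deflated (n-1) x (n-1) block
lam2, _ = power_method(A_sub)            # subdominant eigenvalue of A
\end{verbatim}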

The deflation procedure can be applied efficiently if we want to compute only two or three eigenvalues. For a larger number of eigenvalues we need to switch to global eigensolvers; see section [*].



Remark: Another deflation procedure is to subtract from the matrix ${\bf A}$ the contribution $\lambda_1 {\bf v}_1 {\bf v}_1^T$, where ${\bf v}_1$ is normalized so that ${\bf v}_1^T {\bf v}_1 = 1$, so that the new matrix

\begin{displaymath}{\bf A}_1 \equiv {\bf A} - \lambda_1 {\bf v}_1 {\bf v}_1^T
\end{displaymath}

has eigenvalues $0, \lambda_2, \lambda_3, \ldots, \lambda_n$. Unlike the Householder deflation, which is always stable, this procedure can sometimes be numerically unstable.
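For comparison, here is a minimal sketch of this subtraction-based deflation (often called Hotelling deflation), again our own code reusing power_method and the test matrix A from the sketch above; note the explicit normalization of ${\bf v}_1$.

\begin{verbatim}
lam1, v1 = power_method(A)
v1 /= np.linalg.norm(v1)                 # normalization is required here
A1 = A - lam1 * np.outer(v1, v1)         # eigenvalues: 0, lambda_2, ..., lambda_n
lam2_alt, _ = power_method(A1)           # approximates lambda_2 again
\end{verbatim}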



 
George Karniadakis
2001-11-05