1 Numerical determination of eigenvalues and eigenvectors
1.1 Preliminaries
Before discussing numerical methods of calculating eigenvalues and eigenvectors we remind you of three results for a matrix A with an eigenvalue λ and associated eigenvector X.
- A^-1 (if it exists) has an eigenvalue 1/λ with associated eigenvector X.
- The matrix A + kI has an eigenvalue λ + k and associated eigenvector X.
- The matrix (A - kI)^-1, i.e. the inverse (if it exists) of the matrix A - kI, has eigenvalue 1/(λ - k) and corresponding eigenvector X.
Here k is any real number.
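These three results can be checked numerically. The following sketch (assuming Python with NumPy; the small symmetric matrix and the value of k are illustrative choices of ours, not taken from the text) verifies each result in turn.

```python
# Numerical check of the three preliminary results, using an
# illustrative symmetric matrix (our own choice, not from the text).
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])      # eigenvalues 5 and 3
k = 2.0
lam, X = np.linalg.eig(A)       # eigenvalues lam, eigenvectors in columns of X

# 1. A^-1 has eigenvalues 1/lambda (same eigenvectors).
inv_lam = np.linalg.eigvals(np.linalg.inv(A))
assert np.allclose(np.sort(inv_lam), np.sort(1.0 / lam))

# 2. A + kI has eigenvalues lambda + k.
shift_lam = np.linalg.eigvals(A + k * np.eye(2))
assert np.allclose(np.sort(shift_lam), np.sort(lam + k))

# 3. (A - kI)^-1 has eigenvalues 1/(lambda - k).
res_lam = np.linalg.eigvals(np.linalg.inv(A - k * np.eye(2)))
assert np.allclose(np.sort(res_lam), np.sort(1.0 / (lam - k)))

print("all three results verified")
```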
Task!
The matrix A has eigenvalues λ_1, λ_2, … with associated eigenvectors X^(1), X^(2), … respectively. The inverse A^-1 exists.
Without further calculation write down the eigenvalues and eigenvectors of the following matrices:
- The eigenvalues of A^-1 are the reciprocals 1/λ_1, 1/λ_2, …. (Notice that the dominant eigenvalue of A yields the smallest magnitude eigenvalue of A^-1.)
- The matrix here is A + I. Thus its eigenvalues are the same as those of A increased by 1, i.e. λ_1 + 1, λ_2 + 1, ….
- The matrix here is of the form (A - kI)^-1. Thus its eigenvalues are the reciprocals of the eigenvalues of A - kI. The latter has eigenvalues λ_i - k, so the given matrix has eigenvalues 1/(λ_i - k).
In each of the above cases the eigenvectors are the same as those of the original matrix A.
1.2 The power method
This is a direct iteration method for obtaining the dominant eigenvalue (i.e. the one largest in magnitude), say λ_1, of a given matrix A, together with the corresponding eigenvector.
We will not discuss the theory behind the method but will demonstrate it in action and, equally importantly, point out circumstances when it fails.
Task!
Let A be the given matrix. By solving det(A - λI) = 0 obtain the eigenvalues of A and also obtain the eigenvector associated with the dominant eigenvalue.
Expanding det(A - λI) = 0 gives the characteristic equation, whose roots are the eigenvalues of A; the dominant one is λ_1 = 9.
The eigenvector X for λ = 9 is obtained as usual by solving (A - 9I)X = 0, which determines X up to an arbitrary multiple.
We normalize here such that the largest component of X is 1.
We shall now demonstrate how the power method can be used to obtain λ_1 = 9 and X^(1) for the matrix A above.
- We choose an arbitrary column vector X^(0).
- We premultiply this by A to give a new column vector Y^(1): Y^(1) = A X^(0).
- We 'normalize' Y^(1) to obtain a column vector X^(1) with largest component 1: thus X^(1) = (1/α_1) Y^(1), where α_1 is the largest magnitude component of Y^(1).
- We continue the process: Y^(2) = A X^(1), then X^(2) = (1/α_2) Y^(2), and so on.
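The steps above can be sketched in code as follows (an illustration assuming Python with NumPy; the 2 x 2 matrix, starting vector and stopping tolerance are our own choices, not those used in the text).

```python
# A sketch of the power method as described above.
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=200):
    """Return (alpha, x): dominant-eigenvalue estimate and the eigenvector
    normalized so that its largest-magnitude component is 1."""
    x = np.asarray(x0, dtype=float)
    alpha_old = np.inf
    for _ in range(max_iter):
        y = A @ x                           # Y^(r+1) = A X^(r)
        alpha = y[np.argmax(np.abs(y))]     # largest magnitude component
        x = y / alpha                       # X^(r+1) = Y^(r+1) / alpha
        if abs(alpha - alpha_old) < tol:    # stop when alpha settles
            break
        alpha_old = alpha
    return alpha, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # eigenvalues 3 and 1
alpha, x = power_method(A, [1.0, 0.0])
print(alpha)   # -> approximately 3.0
print(x)       # -> approximately [1, 1]
```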
Task!
Continue this process for a further step and obtain Y^(2) and X^(2), quoting values to 6 d.p.
The first 8 steps of the above iterative process are summarized in the following table (the first three rows of which have been obtained above):
Table 1
Step r | Y^(r) | α_r | X^(r)
In Table 1, α_r refers to the largest magnitude component of Y^(r), which is used to normalize Y^(r) to give X^(r). We can see that α_r is converging to 9 which we know is the dominant eigenvalue λ_1 of A. Also X^(r) is converging towards the associated eigenvector X^(1).
Depending on the accuracy required, we could decide when to stop the iterative process by looking at the difference |α_r - α_{r-1}| at each step.
Task!
Using the power method obtain the dominant eigenvalue and associated
eigenvector of the given matrix A,
using a starting column vector X^(0).
Calculate Y^(1) and X^(1):
Y^(1) = A X^(0), so X^(1) = (1/α_1) Y^(1), using α_1, the largest magnitude component of Y^(1).
Carry out the next two steps of this iteration to obtain Y^(2), X^(2) and Y^(3), X^(3):
After just 3 iterations there is little sign of convergence of the normalizing factor α_r. However, after 14 iterations the power method converges, albeit slowly, to a value of α_r
which (correct to 4 d.p.) is the dominant eigenvalue of A. The corresponding eigenvector is the final X^(r).
It is clear that the power method requires, for its practical execution, a computer.
1.3 Problems with the power method
- If the initial column vector X^(0) is an eigenvector of A other than that corresponding to the dominant eigenvalue λ_1, then the method will fail since the iteration will converge to the wrong eigenvalue, say λ_2, after only one iteration (because A X^(0) = λ_2 X^(0) in this case).
- It is possible to show that the speed of convergence of the power method depends on the ratio |λ_2/λ_1|, where λ_2 is the eigenvalue second largest in magnitude. If this ratio is close to 1 the method is slow to converge. In particular, if the dominant eigenvalue λ_1 is complex the method will fail completely to converge, because the complex conjugate of λ_1 will also be an eigenvalue of the same magnitude, so that the ratio equals 1.
- The power method only gives one eigenvalue, the dominant one (although this is often the most important in applications).
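The dependence of the convergence speed on the ratio |λ_2/λ_1| noted above can be illustrated numerically. The sketch below (assuming Python with NumPy; the test matrices are our own construction) counts power-method iterations for two symmetric matrices that share the dominant eigenvalue 10 but have second eigenvalue 1 and 9 respectively.

```python
# Illustration (our own example) of how the convergence speed of the
# power method depends on the ratio |lambda2/lambda1|.
import numpy as np

def iterations_to_converge(A, x0, tol=1e-8, max_iter=10000):
    """Count power-method iterations until the normalizing factor settles."""
    x = np.array(x0, dtype=float)
    alpha_old = np.inf
    for r in range(1, max_iter + 1):
        y = A @ x
        alpha = y[np.argmax(np.abs(y))]
        x = y / alpha
        if abs(alpha - alpha_old) < tol:
            return r
        alpha_old = alpha
    return max_iter

def sym(lam1, lam2):
    """Symmetric matrix with eigenvalues lam1, lam2 and
    eigenvectors [1, 1] and [1, -1]."""
    return np.array([[(lam1 + lam2) / 2, (lam1 - lam2) / 2],
                     [(lam1 - lam2) / 2, (lam1 + lam2) / 2]])

fast = iterations_to_converge(sym(10.0, 1.0), [1.0, 0.0])  # ratio 0.1
slow = iterations_to_converge(sym(10.0, 9.0), [1.0, 0.0])  # ratio 0.9
print(fast, slow)          # the ratio-0.9 case takes many more iterations
assert fast < slow
```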
1.4 Advantages of the power method
- It is simple and easy to implement.
- It gives the eigenvector corresponding to the dominant eigenvalue as well as the eigenvalue λ_1 itself. (Other numerical methods require a separate calculation to obtain the eigenvector.)
1.5 Finding eigenvalues other than the dominant
We discuss this topic only briefly.
1.5.1 Obtaining the smallest magnitude eigenvalue
If A has dominant eigenvalue λ_1 then its inverse A^-1 has an eigenvalue 1/λ_1 (as we discussed at the beginning of this Section). Clearly 1/λ_1 will be the smallest magnitude eigenvalue of A^-1. Conversely, if we obtain the largest magnitude eigenvalue, say μ_1, of A^-1 by the power method then the smallest magnitude eigenvalue of A is the reciprocal, 1/μ_1.
This technique is called the inverse power method.
Example
If A is the matrix of this example then its inverse A^-1 exists.
Using a starting vector X^(0) in the power method applied to A^-1 gives its dominant eigenvalue μ_1. Hence the smallest magnitude eigenvalue of A is the reciprocal 1/μ_1, with the same corresponding eigenvector.
In practice, finding the inverse of a large order matrix can be expensive in computational effort. Hence the inverse power method is implemented without actually obtaining A^-1, as follows.
As we have seen, the power method applied to A utilizes the scheme:
Y^(r+1) = A X^(r),  X^(r+1) = (1/α_{r+1}) Y^(r+1)
where α_{r+1} is the largest magnitude component of Y^(r+1).
For the inverse power method we have
Y^(r+1) = A^-1 X^(r)
which can be re-written as
A Y^(r+1) = X^(r).
Thus Y^(r+1) can actually be obtained by solving this system of linear equations without needing to obtain A^-1. This is usually done by a technique called LU decomposition i.e. writing A (once and for all) in the form
A = LU
with L a lower triangular matrix and U upper triangular.
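The scheme above can be sketched in code. This is an illustrative implementation (assuming Python with NumPy): A is factorized once as A = LU by the Doolittle method without pivoting (adequate for the small example chosen here, though practical codes pivot), and each iteration then solves A Y^(r+1) = X^(r) by forward and back substitution.

```python
# Inverse power method without forming A^-1: factor A = LU once,
# then solve A Y = X at each step.
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization, no pivoting: A = L U."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, j:] -= L[i, j] * U[j, j:]
    return L, U

def lu_solve(L, U, b):
    """Solve L U y = b by forward then back substitution."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                      # forward: L z = b
        z[i] = b[i] - L[i, :i] @ z[:i]
    y = np.zeros(n)
    for i in reversed(range(n)):            # back: U y = z
        y[i] = (z[i] - U[i, i + 1:] @ y[i + 1:]) / U[i, i]
    return y

def inverse_power(A, x0, n_iter=50):
    L, U = lu_decompose(A)                  # factorize once and for all
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        y = lu_solve(L, U, x)               # Y^(r+1): solve A Y = X^(r)
        alpha = y[np.argmax(np.abs(y))]
        x = y / alpha
    return 1.0 / alpha, x                   # smallest |eigenvalue| of A

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # eigenvalues 3 and 1
lam_min, x = inverse_power(A, [1.0, 0.0])
print(lam_min)   # -> approximately 1.0
print(x)         # -> approximately [1, -1]
```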
1.5.2 Obtaining the eigenvalue closest to a given number
Suppose λ is the (unknown) eigenvalue of A closest to a given number p. We know that if λ_1, λ_2, …, λ_n are the eigenvalues of A then λ_1 - p, λ_2 - p, …, λ_n - p are the eigenvalues of the matrix A - pI. Then λ - p will be the smallest magnitude eigenvalue of A - pI but 1/(λ - p) will be the largest magnitude eigenvalue of (A - pI)^-1. Hence if we apply the power method to (A - pI)^-1 we can obtain λ. The method is called the shifted inverse power method.
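A sketch of the shifted inverse power method follows (assuming Python with NumPy; the diagonal matrix, the shift p = 4.3 and the starting vector are illustrative choices of ours). Each step solves (A - pI)Y = X rather than forming the inverse; the dominant eigenvalue α of (A - pI)^-1 then gives the eigenvalue of A closest to p as p + 1/α.

```python
# Shifted inverse power method: power method on (A - pI)^-1 finds
# alpha = 1/(lam - p) for the eigenvalue lam of A closest to p.
import numpy as np

def shifted_inverse_power(A, p, x0, n_iter=100):
    n = A.shape[0]
    M = A - p * np.eye(n)
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        y = np.linalg.solve(M, x)           # solve (A - pI) Y = X
        alpha = y[np.argmax(np.abs(y))]
        x = y / alpha
    return p + 1.0 / alpha, x               # eigenvalue of A closest to p

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 9.0]])             # eigenvalues 2, 5, 9 (diagonal for clarity)
lam_hat, x = shifted_inverse_power(A, 4.3, [1.0, 1.0, 1.0])
print(lam_hat)   # -> approximately 5.0, the eigenvalue closest to 4.3
```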
1.5.3 Obtaining all the eigenvalues of a large order matrix
In this case neither solving the characteristic equation nor the power method (and its variants) is efficient.
The commonest method utilized is called the QR technique. This technique is based on similarity transformations, i.e. transformations of the form
B = S^-1 A S
where B has the same eigenvalues as A. (We have seen earlier in this Workbook that one type of similarity transformation is D = P^-1 A P, where P is formed from the eigenvectors of A. However, we are now, of course, dealing with the situation where we are trying to find the eigenvalues and eigenvectors of A.)
In the QR method A is reduced to upper (or lower) triangular form. We have already seen that a triangular matrix has its eigenvalues on the diagonal.
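A minimal sketch of the idea (assuming Python with NumPy; unshifted, and therefore far less efficient than practical QR codes, which first reduce A to Hessenberg form and apply shifts): each step factorizes A_k = Q_k R_k and forms A_{k+1} = R_k Q_k = Q_k^T A_k Q_k, a similarity transformation, so the eigenvalues are preserved while A_k tends to triangular form.

```python
# Unshifted QR iteration sketch: repeated similarity transformations
# drive a (here symmetric) matrix towards triangular/diagonal form.
import numpy as np

def qr_eigenvalues(A, n_iter=200):
    Ak = np.array(A, dtype=float)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                  # similar to Ak: same eigenvalues
    return np.diag(Ak)              # diagonal entries -> eigenvalue estimates

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])     # our illustrative symmetric matrix
print(np.sort(qr_eigenvalues(A)))          # approximates the eigenvalues of A
print(np.sort(np.linalg.eigvals(A)))       # independent check via NumPy
```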
For details of the method, or more efficient techniques, one of which is based on what is called a Householder transformation, the reader should consult a text on numerical methods.