2 Do these iterative methods always work?

No. It is not difficult to invent examples where the iteration fails to approach the solution of $AX = B$. The key point is related to the matrix norms seen in the preceding Section.

The two iterative methods we encountered above are both special cases of the general form

$$X^{(k+1)} = MX^{(k)} + N.$$

  1. For the Jacobi method we choose $M = -D^{-1}(L+U)$ and $N = D^{-1}B$.
  2. For the Gauss-Seidel method we choose $M = -(D+L)^{-1}U$ and $N = (D+L)^{-1}B$.
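
These two choices are easy to experiment with on a computer. Below is a minimal numpy sketch, assuming the splitting $A = L + D + U$ (strictly lower triangular, diagonal, strictly upper triangular); the function names are ours, and forming explicit inverses is acceptable here only because the matrices involved are tiny.

```python
import numpy as np

def jacobi_matrices(A, B):
    """Return (M, N) with M = -D^{-1}(L+U) and N = D^{-1}B."""
    D = np.diag(np.diag(A))      # diagonal part of A
    L = np.tril(A, k=-1)         # strictly lower triangular part
    U = np.triu(A, k=1)          # strictly upper triangular part
    D_inv = np.linalg.inv(D)
    return -D_inv @ (L + U), D_inv @ B

def gauss_seidel_matrices(A, B):
    """Return (M, N) with M = -(D+L)^{-1}U and N = (D+L)^{-1}B."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    DL_inv = np.linalg.inv(D + L)
    return -DL_inv @ U, DL_inv @ B
```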

The following Key Point gives the main result.

Key Point 13

For the iterative process $X^{(k+1)} = MX^{(k)} + N$, the iteration will converge to a solution if the norm of $M$ is less than 1.

Care is required in understanding what Key Point 13 says. Remember that there are many different ways of defining the norm of a matrix (we saw three of them). If you can find a norm (any norm) such that the norm of $M$ is less than 1, then the iteration will converge. It does not matter if there are other norms which give a value greater than 1; all that matters is that there is one norm that is less than 1.

Key Point 13 above makes no reference to the starting guess $X^{(0)}$. The convergence of the iteration is independent of where you start! (Of course, if we start with a really bad initial guess then we can expect to need many iterations.)
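
Both observations are easy to see numerically. The sketch below (reusing `jacobi_matrices` from above, and borrowing the $2\times 2$ system from Exercise 1 at the end of this Section, whose exact solution is $(3, 4)$) checks that the infinity norm of $M$ is below 1 and then iterates from two very different starting guesses.

```python
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
B = np.array([2.0, 5.0])

M, N = jacobi_matrices(A, B)
print(np.linalg.norm(M, ord=np.inf))   # 0.5 < 1, so convergence is assured

for X in (np.array([1.0, 1.0]), np.array([100.0, -100.0])):
    for _ in range(60):                # X(k+1) = M X(k) + N
        X = M @ X + N
    print(X)                           # both runs approach [3. 4.]
```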

Task!

Show that the Jacobi iteration used to approximate the solution of

$$\begin{pmatrix} 4 & -1 & -1 \\ 1 & -5 & 2 \\ 1 & 0 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$

is certain to converge. (Hint: calculate the norm of $-D^{-1}(L+U)$.)

The Jacobi iteration matrix is

$$-D^{-1}(L+U) = -\begin{pmatrix} 4 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & 2 \end{pmatrix}^{-1}\begin{pmatrix} 0 & -1 & -1 \\ 1 & 0 & 2 \\ 1 & 0 & 0 \end{pmatrix} = -\begin{pmatrix} 0.25 & 0 & 0 \\ 0 & -0.2 & 0 \\ 0 & 0 & 0.5 \end{pmatrix}\begin{pmatrix} 0 & -1 & -1 \\ 1 & 0 & 2 \\ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0.25 & 0.25 \\ 0.2 & 0 & 0.4 \\ -0.5 & 0 & 0 \end{pmatrix}$$

and the infinity norm of this matrix is the maximum of the absolute row sums $0.25 + 0.25$, $0.2 + 0.4$ and $|-0.5|$, that is

$$\left\| -D^{-1}(L+U) \right\|_\infty = 0.6$$

which is less than 1 and therefore the iteration will converge.
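
The same calculation can be reproduced with the `jacobi_matrices` helper sketched earlier; note that the right-hand side plays no part in the norm.

```python
A = np.array([[4.0, -1.0, -1.0],
              [1.0, -5.0,  2.0],
              [1.0,  0.0,  2.0]])
B = np.array([1.0, 2.0, 3.0])

M, _ = jacobi_matrices(A, B)
print(np.linalg.norm(M, ord=np.inf))   # prints 0.6, so the iteration converges
```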

2.1 Guaranteed convergence

If the matrix has the property that it is strictly diagonally dominant, which means that on every row the diagonal entry is larger in magnitude than the sum of the magnitudes of the other entries on that row, then both Jacobi and Gauss-Seidel are guaranteed to converge. The reason for this is that if $A$ is strictly diagonally dominant then the iteration matrix $M$ will have an infinity norm that is less than 1.

A small system is the subject of Example 20 below. A large system with slow convergence is the subject of Engineering Example 1 on page 62.

Example 20

Show that $A = \begin{pmatrix} 4 & -1 & -1 \\ 1 & -5 & 2 \\ 1 & 0 & 2 \end{pmatrix}$ is strictly diagonally dominant.

Solution

Looking at the diagonal entry of each row in turn we see that

$$|4| > |-1| + |-1| = 2, \qquad |-5| > |1| + |2| = 3, \qquad |2| > |1| + |0| = 1$$

and this means that the matrix is strictly diagonally dominant.

Given that $A$ above is strictly diagonally dominant, it is certain that both Jacobi and Gauss-Seidel will converge.
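
Strict diagonal dominance is also straightforward to test in code. Here is a small sketch (the function name is our own invention) that applies the row-by-row comparison of Example 20, using the matrix `A` defined in the previous snippet, which is the matrix of Example 20:

```python
def is_strictly_diagonally_dominant(A):
    """True if, on every row, |a_ii| exceeds the sum of the other |a_ij|."""
    abs_A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(abs_A)
    off_row_sums = abs_A.sum(axis=1) - diag
    return bool(np.all(diag > off_row_sums))

print(is_strictly_diagonally_dominant(A))   # True for the matrix of Example 20
```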

2.2 What’s so special about strict diagonal dominance?

In many applications we can be certain that the coefficient matrix $A$ will be strictly diagonally dominant. We will see examples of this in HELM booklet 32 and HELM booklet 33 when we consider approximating solutions of differential equations.

Exercises
  1. Consider the system

    $$\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 2 \\ 5 \end{pmatrix}$$

    (a) Use the starting guess $X^{(0)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ in an implementation of the Jacobi method to show that $X^{(1)} = \begin{pmatrix} 1.5 \\ 3 \end{pmatrix}$. Find $X^{(2)}$ and $X^{(3)}$.
    (b) Use the starting guess $X^{(0)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ in an implementation of the Gauss-Seidel method to show that $X^{(1)} = \begin{pmatrix} 1.5 \\ 3.25 \end{pmatrix}$. Find $X^{(2)}$ and $X^{(3)}$.

    (Hint: it might help you to know that the exact solution is $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 4 \end{pmatrix}$.)

  2. (a) Show that the Jacobi iteration applied to the system

      $$\begin{pmatrix} 5 & -1 & 0 & 0 \\ -1 & 5 & -1 & 0 \\ 0 & -1 & 5 & -1 \\ 0 & 0 & -1 & 5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 7 \\ -10 \\ -6 \\ 16 \end{pmatrix}$$

       can be written

      $$X^{(k+1)} = \begin{pmatrix} 0 & 0.2 & 0 & 0 \\ 0.2 & 0 & 0.2 & 0 \\ 0 & 0.2 & 0 & 0.2 \\ 0 & 0 & 0.2 & 0 \end{pmatrix} X^{(k)} + \begin{pmatrix} 1.4 \\ -2 \\ -1.2 \\ 3.2 \end{pmatrix}.$$

    (b) Show that the method is certain to converge and calculate the first three iterations using zero starting values.

    (Hint: the exact solution to the stated problem is $\begin{pmatrix} 1 \\ -2 \\ -1 \\ 3 \end{pmatrix}$.)

Answers

1. (a) The first equation gives

$$2x_1^{(1)} = 2 + x_2^{(0)} = 3$$

and therefore $x_1^{(1)} = 1.5$. The second equation gives

$$2x_2^{(1)} = 5 + x_1^{(0)} = 6$$

which implies that $x_2^{(1)} = 3$. These two values give the required entries in $X^{(1)}$. A second and third iteration follow in a similar way to give

$$X^{(2)} = \begin{pmatrix} 2.5 \\ 3.25 \end{pmatrix} \qquad \text{and} \qquad X^{(3)} = \begin{pmatrix} 2.625 \\ 3.75 \end{pmatrix}$$

(b) As in part (a), the first equation gives

$$2x_1^{(1)} = 2 + x_2^{(0)} = 3$$

and therefore $x_1^{(1)} = 1.5$. This new approximation to $x_1$ is used straight away when finding a new approximation to $x_2^{(1)}$:

$$2x_2^{(1)} = 5 + x_1^{(1)} = 6.5$$

which implies that $x_2^{(1)} = 3.25$. These two values give the required entries in $X^{(1)}$. A second and third iteration follow in a similar way to give

$$X^{(2)} = \begin{pmatrix} 2.625 \\ 3.8125 \end{pmatrix} \qquad \text{and} \qquad X^{(3)} = \begin{pmatrix} 2.906250 \\ 3.953125 \end{pmatrix}$$

where $X^{(3)}$ is given to 6 decimal places.

2. (a) In this case

$$D = \begin{pmatrix} 5 & 0 & 0 & 0 \\ 0 & 5 & 0 & 0 \\ 0 & 0 & 5 & 0 \\ 0 & 0 & 0 & 5 \end{pmatrix} \qquad \text{and therefore} \qquad D^{-1} = \begin{pmatrix} 0.2 & 0 & 0 & 0 \\ 0 & 0.2 & 0 & 0 \\ 0 & 0 & 0.2 & 0 \\ 0 & 0 & 0 & 0.2 \end{pmatrix}.$$

So the iteration matrix is

$$M = -D^{-1}(L+U) = -D^{-1}\begin{pmatrix} 0 & -1 & 0 & 0 \\ -1 & 0 & -1 & 0 \\ 0 & -1 & 0 & -1 \\ 0 & 0 & -1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0.2 & 0 & 0 \\ 0.2 & 0 & 0.2 & 0 \\ 0 & 0.2 & 0 & 0.2 \\ 0 & 0 & 0.2 & 0 \end{pmatrix}$$

and the Jacobi iteration takes the form

$$X^{(k+1)} = MX^{(k)} + D^{-1}\begin{pmatrix} 7 \\ -10 \\ -6 \\ 16 \end{pmatrix} = \begin{pmatrix} 0 & 0.2 & 0 & 0 \\ 0.2 & 0 & 0.2 & 0 \\ 0 & 0.2 & 0 & 0.2 \\ 0 & 0 & 0.2 & 0 \end{pmatrix} X^{(k)} + \begin{pmatrix} 1.4 \\ -2 \\ -1.2 \\ 3.2 \end{pmatrix}$$

as required.

(b) The infinity norm of $M$ is the maximum of the absolute row sums $0.2$, $0.4$, $0.4$ and $0.2$, that is $0.4 < 1$, so the iteration is certain to converge. Using the starting values $x_1^{(0)} = x_2^{(0)} = x_3^{(0)} = x_4^{(0)} = 0$, the first iteration of the Jacobi method gives

$$\begin{aligned} x_1^{(1)} &= 0.2x_2^{(0)} + 1.4 = 1.4 \\ x_2^{(1)} &= 0.2(x_1^{(0)} + x_3^{(0)}) - 2 = -2 \\ x_3^{(1)} &= 0.2(x_2^{(0)} + x_4^{(0)}) - 1.2 = -1.2 \\ x_4^{(1)} &= 0.2x_3^{(0)} + 3.2 = 3.2 \end{aligned}$$

The second iteration is

$$\begin{aligned} x_1^{(2)} &= 0.2x_2^{(1)} + 1.4 = 1 \\ x_2^{(2)} &= 0.2(x_1^{(1)} + x_3^{(1)}) - 2 = -1.96 \\ x_3^{(2)} &= 0.2(x_2^{(1)} + x_4^{(1)}) - 1.2 = -0.96 \\ x_4^{(2)} &= 0.2x_3^{(1)} + 3.2 = 2.96 \end{aligned}$$

And the third iteration is

$$\begin{aligned} x_1^{(3)} &= 0.2x_2^{(2)} + 1.4 = 1.008 \\ x_2^{(3)} &= 0.2(x_1^{(2)} + x_3^{(2)}) - 2 = -1.992 \\ x_3^{(3)} &= 0.2(x_2^{(2)} + x_4^{(2)}) - 1.2 = -1 \\ x_4^{(3)} &= 0.2x_3^{(2)} + 3.2 = 3.008 \end{aligned}$$

These hand calculations are reproduced by the short numerical check below.
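
As a sanity check on the exercise answers above, here is a brief sketch (again reusing the `jacobi_matrices` and `gauss_seidel_matrices` helpers from earlier in this Section) that reproduces the iterates for both systems:

```python
# Exercise 1: three Jacobi and three Gauss-Seidel steps from X(0) = (1, 1).
A2 = np.array([[ 2.0, -1.0],
               [-1.0,  2.0]])
B2 = np.array([2.0, 5.0])

for make in (jacobi_matrices, gauss_seidel_matrices):
    M, N = make(A2, B2)
    X = np.array([1.0, 1.0])
    for _ in range(3):
        X = M @ X + N
    print(X)   # Jacobi: [2.625 3.75]; Gauss-Seidel: [2.90625 3.953125]

# Exercise 2: three Jacobi steps from zero starting values.
A4 = np.array([[ 5.0, -1.0,  0.0,  0.0],
               [-1.0,  5.0, -1.0,  0.0],
               [ 0.0, -1.0,  5.0, -1.0],
               [ 0.0,  0.0, -1.0,  5.0]])
B4 = np.array([7.0, -10.0, -6.0, 16.0])

M, N = jacobi_matrices(A4, B4)
X = np.zeros(4)
for _ in range(3):
    X = M @ X + N
print(X)       # [ 1.008 -1.992 -1.     3.008], matching the third iteration
```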