### 1 Iterative methods

Suppose we have the system of equations

$\phantom{\rule{2em}{0ex}}AX=B.$

The aim here is to find a sequence of approximations which gradually approach $X$ . We will denote these approximations

$\phantom{\rule{2em}{0ex}}{X}^{\left(0\right)},{X}^{\left(1\right)},{X}^{\left(2\right)},\dots ,{X}^{\left(k\right)},\dots$

where ${X}^{\left(0\right)}$ is our initial "guess", and the hope is that after a short while these successive iterates will be so close to each other that the process can be deemed to have converged to the required solution $X$.

##### Key Point 10

An iterative method is one in which a sequence of approximations (or iterates) is produced. The method is successful if these iterates converge to the true solution of the given problem.

It is convenient to split the matrix $A$ into three parts. We write

$\phantom{\rule{2em}{0ex}}A=L+D+U$

where $L$ consists of the elements of $A$ strictly below the diagonal and zeros elsewhere; $D$ is a diagonal matrix consisting of the diagonal entries of $A$ ; and $U$ consists of the elements of $A$ strictly above the diagonal. Note that $L$ and $U$ here are not the same matrices as appeared in the $LU$ decomposition! The current $L$ and $U$ are much easier to find.

For example

$\phantom{\rule{2em}{0ex}}\begin{array}{ccccccc}\hfill \underset{⏟}{\left[\begin{array}{cc}\hfill 3\hfill & \hfill -4\hfill \\ \hfill 2\hfill & \hfill 1\hfill \end{array}\right]}\hfill & \hfill =\hfill & \hfill \underset{⏟}{\left[\begin{array}{cc}\hfill 0\hfill & \hfill 0\hfill \\ \hfill 2\hfill & \hfill 0\hfill \end{array}\right]}\hfill & \hfill +\hfill & \hfill \underset{⏟}{\left[\begin{array}{cc}\hfill 3\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 1\hfill \end{array}\right]}\hfill & \hfill +\hfill & \hfill \underset{⏟}{\left[\begin{array}{cc}\hfill 0\hfill & \hfill -4\hfill \\ \hfill 0\hfill & \hfill 0\hfill \end{array}\right]}\hfill \\ \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill \\ \hfill A\hfill & \hfill =\hfill & \hfill L\hfill & \hfill +\hfill & \hfill D\hfill & \hfill +\hfill & \hfill U\hfill \\ \hfill \hfill \end{array}$

and

$\phantom{\rule{2em}{0ex}}\begin{array}{ccccccc}\hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill 2\hfill & \hfill -6\hfill & \hfill 1\hfill \\ \hfill 3\hfill & \hfill -2\hfill & \hfill 0\hfill \\ \hfill 4\hfill & \hfill -1\hfill & \hfill 7\hfill \end{array}\right]}\hfill & \hfill =\hfill & \hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 3\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 4\hfill & \hfill -1\hfill & \hfill 0\hfill \end{array}\right]}\hfill & \hfill +\hfill & \hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill 2\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill -2\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 7\hfill \end{array}\right]}\hfill & \hfill +\hfill & \hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -6\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]}\hfill \\ \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill \\ \hfill A\hfill & \hfill =\hfill & \hfill L\hfill & \hfill +\hfill & \hfill D\hfill & \hfill +\hfill & \hfill U\hfill \\ \hfill \hfill \end{array}$

and, more generally, for $3×3$ matrices

$\phantom{\rule{2em}{0ex}}\begin{array}{ccccccc}\hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill \bullet \hfill & \hfill \bullet \hfill & \hfill \bullet \hfill \\ \hfill \bullet \hfill & \hfill \bullet \hfill & \hfill \bullet \hfill \\ \hfill \bullet \hfill & \hfill \bullet \hfill & \hfill \bullet \hfill \end{array}\right]}\hfill & \hfill =\hfill & \hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \bullet \hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \bullet \hfill & \hfill \bullet \hfill & \hfill 0\hfill \end{array}\right]}\hfill & \hfill +\hfill & \hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill \bullet \hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \bullet \hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \bullet \hfill \end{array}\right]}\hfill & \hfill +\hfill & \hfill \underset{⏟}{\left[\begin{array}{ccc}\hfill 0\hfill & \hfill \bullet \hfill & \hfill \bullet \hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \bullet \hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]}.\hfill \\ \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill & \hfill \hfill & \hfill ↑\hfill \\ \hfill A\hfill & \hfill =\hfill & \hfill L\hfill & \hfill +\hfill & \hfill D\hfill & \hfill +\hfill & \hfill U.\hfill \\ \hfill \hfill \end{array}$
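The splitting is easy to carry out by computer. Here is a short Python sketch (the function name `split_LDU` is ours, not from any standard library) that forms $L$, $D$ and $U$ from a square matrix stored as a list of rows:

```python
def split_LDU(A):
    """Split a square matrix A (list of rows) into strictly lower L,
    diagonal D and strictly upper U, so that A = L + D + U entrywise."""
    n = len(A)
    L = [[A[i][j] if i > j else 0 for j in range(n)] for i in range(n)]
    D = [[A[i][j] if i == j else 0 for j in range(n)] for i in range(n)]
    U = [[A[i][j] if i < j else 0 for j in range(n)] for i in range(n)]
    return L, D, U

# The 3x3 example above:
A = [[2, -6, 1], [3, -2, 0], [4, -1, 7]]
L, D, U = split_LDU(A)
# L keeps only the entries strictly below the diagonal, e.g. L[1][0] == 3
```

Note how much simpler this is than an $LU$ decomposition: each entry of $A$ is just copied into exactly one of the three parts.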

#### 1.1 The Jacobi iteration

The simplest iterative method is called Jacobi iteration and the basic idea is to use the $A=L+D+U$ partitioning of $A$ to write $AX=B$ in the form

$\phantom{\rule{2em}{0ex}}DX=-\left(L+U\right)X+B.$

We use this equation as the motivation to define the iterative process

$\phantom{\rule{2em}{0ex}}D{X}^{\left(k+1\right)}=-\left(L+U\right){X}^{\left(k\right)}+B$

which determines ${X}^{\left(k+1\right)}$ from ${X}^{\left(k\right)}$, provided that $D$ has no zeros on its diagonal, that is, provided that $D$ is invertible. This is Jacobi iteration.

##### Key Point 11

The Jacobi iteration for approximating the solution of $AX=B$ where $A=L+D+U$ is given by

${X}^{\left(k+1\right)}=-{D}^{-1}\left(L+U\right){X}^{\left(k\right)}+{D}^{-1}B$
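In practice there is no need to form ${D}^{-1}$ explicitly: row $i$ of $D{X}^{\left(k+1\right)}=-\left(L+U\right){X}^{\left(k\right)}+B$ is solved simply by dividing by the diagonal entry $a_{ii}$. A minimal Python sketch of one step (the function name `jacobi_step` is ours; matrices are plain lists of rows):

```python
def jacobi_step(A, B, x):
    """One Jacobi iteration. Row i of D x_new = -(L+U) x_old + B gives
    x_new[i] = (B[i] - (off-diagonal terms of row i) . x_old) / A[i][i].
    Every component is computed from the OLD iterate x only."""
    n = len(A)
    return [(B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

# Example 18's system, starting from the zero initial guess:
A = [[8, 2, 4], [3, 5, 1], [2, 1, 4]]
B = [-16, 4, -12]
x = [0.0, 0.0, 0.0]
for _ in range(3):
    x = jacobi_step(A, B, x)
# x is now (up to rounding) the third iterate [-1.55, 1.66, -3.3]
```

Running the loop reproduces the hand calculations of Example 18 below.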

##### Example 18

Use the Jacobi iteration to approximate the solution $X=\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]$ of $\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 2\hfill & \hfill 4\hfill \\ \hfill 3\hfill & \hfill 5\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill 1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right].$

Use the initial guess ${X}^{\left(0\right)}=\left[\begin{array}{c}\hfill 0\hfill \\ \hfill 0\hfill \\ \hfill 0\hfill \end{array}\right]$ .

##### Solution

In this case $D=\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4\hfill \end{array}\right]$ and $L+U=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 2\hfill & \hfill 4\hfill \\ \hfill 3\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill 1\hfill & \hfill 0\hfill \end{array}\right]$ .

First iteration.

The first iteration is $D{X}^{\left(1\right)}=-\left(L+U\right){X}^{\left(0\right)}+B$ , or in full

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -2\hfill & \hfill -4\hfill \\ \hfill -3\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill -2\hfill & \hfill -1\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(0\right)}\hfill \\ \hfill {x}_{2}^{\left(0\right)}\hfill \\ \hfill {x}_{3}^{\left(0\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right],$

since the initial guess was ${x}_{1}^{\left(0\right)}={x}_{2}^{\left(0\right)}={x}_{3}^{\left(0\right)}=0$ .

Taking this information row by row we see that

$\begin{array}{rcll}8{x}_{1}^{\left(1\right)}& =& -16\phantom{\rule{1em}{0ex}}\therefore {x}_{1}^{\left(1\right)}=-2& \text{}\\ 5{x}_{2}^{\left(1\right)}& =& 4\phantom{\rule{1em}{0ex}}\therefore {x}_{2}^{\left(1\right)}=0.8\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{1em}{0ex}}& \text{}\\ 4{x}_{3}^{\left(1\right)}& =& -12\phantom{\rule{1em}{0ex}}\therefore {x}_{3}^{\left(1\right)}=-3& \text{}\end{array}$

Thus the first Jacobi iteration gives us ${X}^{\left(1\right)}=\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -2\hfill \\ \hfill 0.8\hfill \\ \hfill -3\hfill \end{array}\right]$ as an approximation to $X$ .

Second iteration.

The second iteration is $D{X}^{\left(2\right)}=-\left(L+U\right){X}^{\left(1\right)}+B$ , or in full

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -2\hfill & \hfill -4\hfill \\ \hfill -3\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill -2\hfill & \hfill -1\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right].$

Taking this information row by row we see that

$\begin{array}{rcll}8{x}_{1}^{\left(2\right)}& =& -2{x}_{2}^{\left(1\right)}-4{x}_{3}^{\left(1\right)}-16=-2\left(0.8\right)-4\left(-3\right)-16=-5.6\phantom{\rule{1em}{0ex}}\therefore {x}_{1}^{\left(2\right)}=-0.7& \text{}\\ 5{x}_{2}^{\left(2\right)}& =& -3{x}_{1}^{\left(1\right)}-{x}_{3}^{\left(1\right)}+4=-3\left(-2\right)-\left(-3\right)+4=13\phantom{\rule{1em}{0ex}}\therefore {x}_{2}^{\left(2\right)}=2.6& \text{}\\ 4{x}_{3}^{\left(2\right)}& =& -2{x}_{1}^{\left(1\right)}-{x}_{2}^{\left(1\right)}-12=-2\left(-2\right)-0.8-12=-8.8\phantom{\rule{1em}{0ex}}\therefore {x}_{3}^{\left(2\right)}=-2.2& \text{}\end{array}$

Therefore the second iterate approximating $X$ is ${X}^{\left(2\right)}=\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -0.7\hfill \\ \hfill 2.6\hfill \\ \hfill -2.2\hfill \end{array}\right]$ .

Third iteration.

The third iteration is $D{X}^{\left(3\right)}=-\left(L+U\right){X}^{\left(2\right)}+B$ , or in full

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(3\right)}\hfill \\ \hfill {x}_{2}^{\left(3\right)}\hfill \\ \hfill {x}_{3}^{\left(3\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -2\hfill & \hfill -4\hfill \\ \hfill -3\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill -2\hfill & \hfill -1\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right]$

Taking this information row by row we see that

$\begin{array}{rcll}8{x}_{1}^{\left(3\right)}& =& -2{x}_{2}^{\left(2\right)}-4{x}_{3}^{\left(2\right)}-16=-2\left(2.6\right)-4\left(-2.2\right)-16=-12.4\phantom{\rule{1em}{0ex}}\therefore {x}_{1}^{\left(3\right)}=-1.55& \text{}\\ 5{x}_{2}^{\left(3\right)}& =& -3{x}_{1}^{\left(2\right)}-{x}_{3}^{\left(2\right)}+4=-3\left(-0.7\right)-\left(-2.2\right)+4=8.3\phantom{\rule{1em}{0ex}}\therefore {x}_{2}^{\left(3\right)}=1.66& \text{}\\ 4{x}_{3}^{\left(3\right)}& =& -2{x}_{1}^{\left(2\right)}-{x}_{2}^{\left(2\right)}-12=-2\left(-0.7\right)-2.6-12=-13.2\phantom{\rule{1em}{0ex}}\therefore {x}_{3}^{\left(3\right)}=-3.3& \text{}\end{array}$

Therefore the third iterate approximating $X$ is ${X}^{\left(3\right)}=\left[\begin{array}{c}\hfill {x}_{1}^{\left(3\right)}\hfill \\ \hfill {x}_{2}^{\left(3\right)}\hfill \\ \hfill {x}_{3}^{\left(3\right)}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -1.55\hfill \\ \hfill 1.66\hfill \\ \hfill -3.3\hfill \end{array}\right]$ .

More iterations...

Three iterations is plenty when doing these calculations by hand! But the repetitive nature of the process is ideally suited to its implementation on a computer. It turns out that the next few iterates are

$\phantom{\rule{2em}{0ex}}{X}^{\left(4\right)}=\left[\begin{array}{c}\hfill -0.765\hfill \\ \hfill 2.39\hfill \\ \hfill -2.64\hfill \end{array}\right],\phantom{\rule{1em}{0ex}}{X}^{\left(5\right)}=\left[\begin{array}{c}\hfill -1.277\hfill \\ \hfill 1.787\hfill \\ \hfill -3.215\hfill \end{array}\right],\phantom{\rule{1em}{0ex}}{X}^{\left(6\right)}=\left[\begin{array}{c}\hfill -0.839\hfill \\ \hfill 2.209\hfill \\ \hfill -2.808\hfill \end{array}\right],$

to 3 d.p. Carrying on even further ${X}^{\left(20\right)}=\left[\begin{array}{c}\hfill {x}_{1}^{\left(20\right)}\hfill \\ \hfill {x}_{2}^{\left(20\right)}\hfill \\ \hfill {x}_{3}^{\left(20\right)}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -0.9959\hfill \\ \hfill 2.0043\hfill \\ \hfill -2.9959\hfill \end{array}\right]$ , to 4 d.p. After about 40 iterations successive iterates are equal to 4 d.p. Continuing the iteration even further causes the iterates to agree to more and more decimal places. The method converges to the exact answer

$X=\left[\begin{array}{c}\hfill -1\hfill \\ \hfill 2\hfill \\ \hfill -3\hfill \end{array}\right]$ .
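It is easy to confirm that this limit really is the solution: substituting $X$ back into the left-hand side reproduces $B$ exactly. For instance, in Python:

```python
# Check that X = [-1, 2, -3] satisfies AX = B for Example 18's system.
A = [[8, 2, 4], [3, 5, 1], [2, 1, 4]]
B = [-16, 4, -12]
X = [-1, 2, -3]
AX = [sum(A[i][j] * X[j] for j in range(3)) for i in range(3)]
# AX == [-16, 4, -12], which equals B
```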

The following Task involves calculating just two iterations of the Jacobi method.

Carry out two iterations of the Jacobi method to approximate the solution of

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 4\hfill & \hfill -1\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill 4\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill -1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]$

with the initial guess ${X}^{\left(0\right)}=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 1\hfill \\ \hfill 1\hfill \end{array}\right]$ .

The first iteration is $D{X}^{\left(1\right)}=-\left(L+U\right){X}^{\left(0\right)}+B$ , that is,

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 4\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(0\right)}\hfill \\ \hfill {x}_{2}^{\left(0\right)}\hfill \\ \hfill {x}_{3}^{\left(0\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]$

from which it follows that ${X}^{\left(1\right)}=\left[\begin{array}{c}\hfill 0.75\hfill \\ \hfill 1\hfill \\ \hfill 1.25\hfill \end{array}\right]$ .

The second iteration is $D{X}^{\left(2\right)}=-\left(L+U\right){X}^{\left(1\right)}+B$, that is,

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 4\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]$

from which it follows that ${X}^{\left(2\right)}=\left[\begin{array}{c}\hfill 0.8125\hfill \\ \hfill 1\hfill \\ \hfill 1.1875\hfill \end{array}\right]$ .

Notice that at each iteration the first thing we do is get a new approximation for ${x}_{1}$ and then we continue to use the old approximation to ${x}_{1}$ in subsequent calculations for that iteration! Only at the next iteration do we use the new value. Similarly, we continue to use an old approximation to ${x}_{2}$ even after we have worked out a new one. And so on.

Given that the iterative process is supposed to improve our approximations, why not use the better values straight away? This observation is the motivation for what follows.

#### 1.2 Gauss-Seidel iteration

The approach here is very similar to that used in Jacobi iteration. The only difference is that we use new approximations to the entries of $X$ as soon as they are available. As we will see in the Example below, this means rearranging $\left(L+D+U\right)X=B$ slightly differently from what we did for Jacobi. We write

$\phantom{\rule{2em}{0ex}}\left(D+L\right)X=-UX+B$

and use this as the motivation to define the iteration

$\phantom{\rule{2em}{0ex}}\left(D+L\right){X}^{\left(k+1\right)}=-U{X}^{\left(k\right)}+B.$

##### Key Point 12

The Gauss-Seidel iteration for approximating the solution of $AX=B$ is given by

${X}^{\left(k+1\right)}=-{\left(D+L\right)}^{-1}U{X}^{\left(k\right)}+{\left(D+L\right)}^{-1}B$
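Again, no matrix inverse is needed in practice: sweeping through the rows of $\left(D+L\right){X}^{\left(k+1\right)}=-U{X}^{\left(k\right)}+B$ from top to bottom, each newly computed component can immediately overwrite the old one. A minimal Python sketch (the function name `gauss_seidel_step` is ours):

```python
def gauss_seidel_step(A, B, x):
    """One Gauss-Seidel sweep. Identical to a Jacobi step except that
    x is updated in place, so components computed earlier in the sweep
    are used immediately by the later rows."""
    n = len(A)
    x = list(x)  # copy, so the caller's iterate is not mutated
    for i in range(n):
        x[i] = (B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# One sweep on the system of Examples 18 and 19, from the zero guess:
A = [[8, 2, 4], [3, 5, 1], [2, 1, 4]]
B = [-16, 4, -12]
x1 = gauss_seidel_step(A, B, [0.0, 0.0, 0.0])
# x1 == [-2.0, 2.0, -2.5], matching the first Gauss-Seidel iterate below
```

The only change from the Jacobi sketch is that the update happens inside the loop over rows instead of all at once.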

Example 19 which follows revisits the system of equations we saw earlier in this Section in Example 18.

##### Example 19

Use the Gauss-Seidel iteration to approximate the solution $X=\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]$ of $\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 2\hfill & \hfill 4\hfill \\ \hfill 3\hfill & \hfill 5\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill 1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right].$ Use the initial guess ${X}^{\left(0\right)}=\left[\begin{array}{c}\hfill 0\hfill \\ \hfill 0\hfill \\ \hfill 0\hfill \end{array}\right]$ .

##### Solution

In this case $D+L=\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 3\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 2\hfill & \hfill 1\hfill & \hfill 4\hfill \end{array}\right]$ and $U=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 2\hfill & \hfill 4\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]$ .

First iteration.

The first iteration is $\left(D+L\right){X}^{\left(1\right)}=-U{X}^{\left(0\right)}+B$ , or in full

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 3\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 2\hfill & \hfill 1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -2\hfill & \hfill -4\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(0\right)}\hfill \\ \hfill {x}_{2}^{\left(0\right)}\hfill \\ \hfill {x}_{3}^{\left(0\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right],$

since the initial guess was ${x}_{1}^{\left(0\right)}={x}_{2}^{\left(0\right)}={x}_{3}^{\left(0\right)}=0$ .

Taking this information row by row we see that

$\begin{array}{rcll}8{x}_{1}^{\left(1\right)}& =& -16\phantom{\rule{1em}{0ex}}\therefore {x}_{1}^{\left(1\right)}=-2& \text{}\\ 3{x}_{1}^{\left(1\right)}+5{x}_{2}^{\left(1\right)}& =& 4\phantom{\rule{1em}{0ex}}\therefore 5{x}_{2}^{\left(1\right)}=-3\left(-2\right)+4=10\phantom{\rule{1em}{0ex}}\therefore {x}_{2}^{\left(1\right)}=2& \text{}\\ 2{x}_{1}^{\left(1\right)}+{x}_{2}^{\left(1\right)}+4{x}_{3}^{\left(1\right)}& =& -12\phantom{\rule{1em}{0ex}}\therefore 4{x}_{3}^{\left(1\right)}=-2\left(-2\right)-2-12=-10\phantom{\rule{1em}{0ex}}\therefore {x}_{3}^{\left(1\right)}=-2.5& \text{}\end{array}$

(Notice how the new approximations to ${x}_{1}$ and ${x}_{2}$ were used immediately after they were found.)

Thus the first Gauss-Seidel iteration gives us ${X}^{\left(1\right)}=\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -2\hfill \\ \hfill 2\hfill \\ \hfill -2.5\hfill \end{array}\right]$ as an approximation to $X$ .

Second iteration.

The second iteration is $\left(D+L\right){X}^{\left(2\right)}=-U{X}^{\left(1\right)}+B$ , or in full

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 3\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 2\hfill & \hfill 1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -2\hfill & \hfill -4\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right]$

Taking this information row by row we see that

$\begin{array}{rcll}8{x}_{1}^{\left(2\right)}& =& -2{x}_{2}^{\left(1\right)}-4{x}_{3}^{\left(1\right)}-16\phantom{\rule{1em}{0ex}}\therefore \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{x}_{1}^{\left(2\right)}=-1.25\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}& \text{}\\ 3{x}_{1}^{\left(2\right)}+5{x}_{2}^{\left(2\right)}& =& -{x}_{3}^{\left(1\right)}+4\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\therefore \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{x}_{2}^{\left(2\right)}=2.05\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}& \text{}\\ 2{x}_{1}^{\left(2\right)}+{x}_{2}^{\left(2\right)}+4{x}_{3}^{\left(2\right)}& =& -12\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\therefore \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{x}_{3}^{\left(2\right)}=-2.8875\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}& \text{}\end{array}$

Therefore the second iterate approximating $X$ is ${X}^{\left(2\right)}=\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -1.25\hfill \\ \hfill 2.05\hfill \\ \hfill -2.8875\hfill \end{array}\right]$ .

Third iteration.

The third iteration is $\left(D+L\right){X}^{\left(3\right)}=-U{X}^{\left(2\right)}+B$ , or in full

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 8\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 3\hfill & \hfill 5\hfill & \hfill 0\hfill \\ \hfill 2\hfill & \hfill 1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(3\right)}\hfill \\ \hfill {x}_{2}^{\left(3\right)}\hfill \\ \hfill {x}_{3}^{\left(3\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill -2\hfill & \hfill -4\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill -16\hfill \\ \hfill 4\hfill \\ \hfill -12\hfill \end{array}\right].$

Taking this information row by row we see that

$\begin{array}{rcll}8{x}_{1}^{\left(3\right)}& =& -2{x}_{2}^{\left(2\right)}-4{x}_{3}^{\left(2\right)}-16\phantom{\rule{1em}{0ex}}\therefore \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{x}_{1}^{\left(3\right)}=-1.0687\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}& \text{}\\ 3{x}_{1}^{\left(3\right)}+5{x}_{2}^{\left(3\right)}& =& -{x}_{3}^{\left(2\right)}+4\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{1em}{0ex}}\therefore \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{x}_{2}^{\left(3\right)}=2.0187\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}& \text{}\\ 2{x}_{1}^{\left(3\right)}+{x}_{2}^{\left(3\right)}+4{x}_{3}^{\left(3\right)}& =& -12\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{2em}{0ex}}\phantom{\rule{1em}{0ex}}\therefore \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}{x}_{3}^{\left(3\right)}=-2.9703\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}& \text{}\end{array}$

to 4 d.p. Therefore the third iterate approximating $X$ is

$\phantom{\rule{2em}{0ex}}{X}^{\left(3\right)}=\left[\begin{array}{c}\hfill {x}_{1}^{\left(3\right)}\hfill \\ \hfill {x}_{2}^{\left(3\right)}\hfill \\ \hfill {x}_{3}^{\left(3\right)}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill -1.0687\hfill \\ \hfill 2.0187\hfill \\ \hfill -2.9703\hfill \end{array}\right].$

More iterations...

Again, there is little to be learned from pushing this further by hand. Putting the procedure on a computer and seeing how it progresses is instructive, however, and the iteration continues as follows:

$\phantom{\rule{2em}{0ex}}{X}^{\left(4\right)}=\left[\begin{array}{c}\hfill -1.0195\hfill \\ \hfill 2.0058\hfill \\ \hfill -2.9917\hfill \end{array}\right],\phantom{\rule{1em}{0ex}}{X}^{\left(5\right)}=\left[\begin{array}{c}\hfill -1.0056\hfill \\ \hfill 2.0017\hfill \\ \hfill -2.9976\hfill \end{array}\right],\phantom{\rule{1em}{0ex}}{X}^{\left(6\right)}=\left[\begin{array}{c}\hfill -1.0016\hfill \\ \hfill 2.0005\hfill \\ \hfill -2.9993\hfill \end{array}\right],$

$\phantom{\rule{2em}{0ex}}{X}^{\left(7\right)}=\left[\begin{array}{c}\hfill -1.0005\hfill \\ \hfill 2.0001\hfill \\ \hfill -2.9998\hfill \end{array}\right],\phantom{\rule{1em}{0ex}}{X}^{\left(8\right)}=\left[\begin{array}{c}\hfill -1.0001\hfill \\ \hfill 2.0000\hfill \\ \hfill -2.9999\hfill \end{array}\right],\phantom{\rule{1em}{0ex}}{X}^{\left(9\right)}=\left[\begin{array}{c}\hfill -1.0000\hfill \\ \hfill 2.0000\hfill \\ \hfill -3.0000\hfill \end{array}\right]$

(to 4 d.p.). Subsequent iterates are equal to ${X}^{\left(9\right)}$ to this number of decimal places. The Gauss-Seidel iteration has converged to 4 d.p. in 9 iterations. It took the Jacobi method almost 40 iterations to achieve this!
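The speed difference is easy to reproduce by computer. The sketch below (plain Python; the helper names are our own) repeats each method until successive iterates agree to within a tolerance. On this system the Gauss-Seidel count comes out far smaller than the Jacobi count, in line with the figures quoted above.

```python
def jacobi_step(A, B, x):
    # One Jacobi iteration: every component computed from the old iterate.
    n = len(A)
    return [(B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, B, x):
    # One Gauss-Seidel sweep: new components are used immediately.
    n = len(A)
    x = list(x)
    for i in range(n):
        x[i] = (B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def count_iterations(step, A, B, x, tol=0.5e-4, max_iter=1000):
    """Iterate until successive iterates agree within tol in every
    component; return (number of iterations taken, final iterate)."""
    for k in range(1, max_iter + 1):
        x_new = step(A, B, x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return k, x_new
        x = x_new
    return max_iter, x

A = [[8, 2, 4], [3, 5, 1], [2, 1, 4]]
B = [-16, 4, -12]
jac_n, jac_x = count_iterations(jacobi_step, A, B, [0.0, 0.0, 0.0])
gs_n, gs_x = count_iterations(gauss_seidel_step, A, B, [0.0, 0.0, 0.0])
# Both methods home in on X = [-1, 2, -3]; gs_n is much smaller than jac_n.
```

(The exact counts depend slightly on the stopping rule used, so treat the "9 versus about 40" figures as indicative rather than universal.)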

Carry out two iterations of the Gauss-Seidel method to approximate the solution of

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 4\hfill & \hfill -1\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill 4\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill -1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]$

with the initial guess ${X}^{\left(0\right)}=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 1\hfill \\ \hfill 1\hfill \end{array}\right]$ .

The first iteration is $\left(D+L\right){X}^{\left(1\right)}=-U{X}^{\left(0\right)}+B$ , that is,

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill -1\hfill & \hfill 4\hfill & \hfill 0\hfill \\ \hfill -1\hfill & \hfill -1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(0\right)}\hfill \\ \hfill {x}_{2}^{\left(0\right)}\hfill \\ \hfill {x}_{3}^{\left(0\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]$

from which it follows that ${X}^{\left(1\right)}=\left[\begin{array}{c}\hfill 0.75\hfill \\ \hfill 0.9375\hfill \\ \hfill 1.1719\hfill \end{array}\right]$ .

The second iteration is $\left(D+L\right){X}^{\left(2\right)}=-U{X}^{\left(1\right)}+B$, that is,

$\phantom{\rule{2em}{0ex}}\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill -1\hfill & \hfill 4\hfill & \hfill 0\hfill \\ \hfill -1\hfill & \hfill -1\hfill & \hfill 4\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(2\right)}\hfill \\ \hfill {x}_{2}^{\left(2\right)}\hfill \\ \hfill {x}_{3}^{\left(2\right)}\hfill \end{array}\right]=\left[\begin{array}{ccc}\hfill 0\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]\left[\begin{array}{c}\hfill {x}_{1}^{\left(1\right)}\hfill \\ \hfill {x}_{2}^{\left(1\right)}\hfill \\ \hfill {x}_{3}^{\left(1\right)}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]$

from which it follows that ${X}^{\left(2\right)}=\left[\begin{array}{c}\hfill 0.7773\hfill \\ \hfill 0.9873\hfill \\ \hfill 1.1912\hfill \end{array}\right]$ .