7.3. Convergence of indirect methods#

We have seen that both the Jacobi and Gauss-Seidel methods are iterated until the estimates of the solution converge to within a given tolerance. Example 7.1 required 49 iterations to converge to the solution of a system of linear equations, whereas Example 7.2 required only 20 iterations to converge to the solution of the same system.

Not all indirect methods will converge for a given system of linear equations. We can establish whether a method will be convergent using the theorem below. Let \(\vec{e}^{(k)} = \vec{x} - \vec{x}^{(k)}\) be the error between the exact solution \(\vec{x}\) and the estimate \(\vec{x}^{(k)}\). The error is propagated from one estimate to the next by the iteration matrix \(T\) for the method

\[ \vec{e}^{(k+1)} = T\vec{e}^{(k)}. \]

We can write the initial error, \(\vec{e}^{(0)}\), as a linear combination of some vectors \(\vec{v}_i\)

\[ \vec{e}^{(0)} =\alpha_1 \vec{v}_1 +\alpha_2 \vec{v}_2 +\cdots +\alpha_n \vec{v}_n =\sum_{i=1}^n \alpha_i \vec{v}_i, \]

where the \(\alpha_i\) are scalars. If the \(\vec{v}_i\) are the eigenvectors of the iteration matrix \(T\) with corresponding eigenvalues \(\lambda_i\), so that \(T\vec{v}_i = \lambda_i \vec{v}_i\), then iterating \(\vec{e}^{(k)}\) gives

\[\begin{split} \begin{align*} \vec{e}^{(1)} &= T\vec{e}^{(0)} =T\left(\sum_{i=1}^n \alpha_i \vec{v}_i \right)=\sum_{i=1}^n \alpha_i T\vec{v}_i = \sum_{i=1}^n \alpha_i \lambda_i \vec{v}_i , \\ \vec{e}^{(2)} &=T\vec{e}^{(1)} =T\left(\sum_{i=1}^n \alpha_i \lambda_i \vec{v}_i \right)=\sum_{i=1}^n \alpha_i \lambda_i T\vec{v}_i =\sum_{i=1}^n \alpha_i \lambda_i^2 \vec{v}_i , \\ &\vdots \\ \vec{e}^{(k+1)} &=\sum_{i=1}^n \alpha_i \lambda_i^{k+1} \vec{v}_i . \end{align*} \end{split}\]

If \(|\lambda_1| > |\lambda_i|\) for \(i = 2, 3, \ldots, n\) then

\[ \vec{e}^{(k+1)} =\alpha_1 \lambda_1^{k+1} \vec{v}_1 +\sum_{i=2}^n \alpha_i \lambda_i^{k+1} \vec{v}_i =\lambda_1^{k+1} \left(\alpha_1 \vec{v}_1 +\sum_{i=2}^n \alpha_i \vec{v}_i {\left(\frac{\lambda_i }{\lambda_1 }\right)}^{k+1} \right) \]

and since \(|\lambda_i / \lambda_1| < 1\) for \(i = 2, 3, \ldots, n\), these terms tend to zero as \(k\) increases, so for large \(k\)

\[ \begin{align*} \vec{e}^{(k+1)} \approx \alpha_1 \lambda_1^{k+1} \vec{v}_1 . \end{align*} \]

This means that as the number of iterations increases, the error is scaled by a factor of \(\lambda_1^{k+1}\), where \(\lambda_1\) is the eigenvalue of \(T\) that is largest in magnitude; its magnitude \(|\lambda_1|\) is known as the spectral radius.
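We can see this behaviour numerically. The short sketch below (assuming NumPy is available, and using a small made-up iteration matrix \(T\) and initial error purely for illustration) applies \(\vec{e}^{(k+1)} = T\vec{e}^{(k)}\) repeatedly and shows that the ratio of successive error norms settles down to the spectral radius of \(T\).

```python
import numpy as np

# Made-up iteration matrix and initial error (for illustration only)
T = np.array([[0.2, 0.5],
              [0.1, 0.3]])
e = np.array([1.0, -1.0])

rho = np.max(np.abs(np.linalg.eigvals(T)))
print(f"spectral radius = {rho:.4f}")

# Iterate the error equation e^(k+1) = T e^(k); the ratio of successive
# error norms tends to the spectral radius
for k in range(10):
    e_new = T @ e
    print(f"k = {k}: ratio = {np.linalg.norm(e_new) / np.linalg.norm(e):.4f}")
    e = e_new
```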

Definition 7.4 (Spectral radius)

The spectral radius of \(A\), denoted by \(\rho(A)\), is

(7.8)#\[ \rho(A) = \max_i | \lambda_i |, \]

where \(\lambda_i\) are the eigenvalues of \(A\).
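The spectral radius is straightforward to compute numerically from the eigenvalues. A minimal sketch, assuming NumPy (the helper name `spectral_radius` is just illustrative):

```python
import numpy as np

def spectral_radius(A):
    """Return the largest eigenvalue magnitude of A."""
    return np.max(np.abs(np.linalg.eigvals(A)))

# For a diagonal matrix the eigenvalues are the diagonal entries,
# so the spectral radius is the largest entry in magnitude
A = np.diag([1.0, -3.0, 2.0])
print(spectral_radius(A))   # 3.0
```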

Theorem 7.1 (Convergence criteria for an indirect method)

The spectral radius of \(T\) gives us the following information about an indirect method:

  • If \(\rho(T) > 1\) then the errors will increase with each iteration, so for an indirect method to converge to the solution we require \(\rho(T) < 1\).

  • The smaller the value of \(\rho(T)\), the faster the errors will tend to zero (see the sketch below).
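The sketch below (assuming NumPy, and using two made-up \(2\times 2\) iteration matrices chosen so that one has \(\rho(T) < 1\) and the other \(\rho(T) > 1\)) illustrates both points: the error norms shrink towards zero in the first case and grow without bound in the second.

```python
import numpy as np

def error_norms(T, e0, iterations=10):
    """Apply e^(k+1) = T e^(k) repeatedly and return the error norm at each step."""
    e, norms = e0, []
    for _ in range(iterations):
        e = T @ e
        norms.append(np.linalg.norm(e))
    return norms

e0 = np.array([1.0, 1.0])
T_convergent = np.array([[0.5, 0.1], [0.2, 0.4]])   # rho = 0.6 < 1
T_divergent = np.array([[1.2, 0.3], [0.4, 1.1]])    # rho = 1.5 > 1

print(error_norms(T_convergent, e0))   # decreasing towards zero
print(error_norms(T_divergent, e0))    # increasing without bound
```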

Example 7.3

Show that the Jacobi and Gauss-Seidel methods are convergent for the system of linear equations from Example 7.1 (shown below)

\[\begin{split} \begin{align*} x_{1}^{(k+1)} &= \frac{1}{4} \left( -2 - 3 x_{2}^{(k)} \right), \\ x_{2}^{(k+1)} &= \frac{1}{4} \left( -8 - 3 x_{1}^{(k)} + x_{3}^{(k)} \right), \\ x_{3}^{(k+1)} &= \frac{1}{4} \left( 14 + x_{2}^{(k)} \right). \end{align*} \end{split}\]
Solution

The coefficient matrix for this linear system is

\[\begin{split} A = \begin{pmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}. \end{split}\]

so

\[\begin{split} \begin{align*} L &= \begin{pmatrix} 0 & 0 & 0 \\ 3 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix}, & D &= \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}, \\ U &= \begin{pmatrix} 0 & 3 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix}. \end{align*} \end{split}\]

The iteration matrices for the Jacobi and Gauss-Seidel methods are given in equations (7.4) and (7.7) which for this system are

\[\begin{split} \begin{align*} T_J &= -D^{-1} ( L + U) \\ &= -\begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}^{-1} \left( \begin{pmatrix} 0 & 0 & 0 \\ 3 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 3 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix} \right) \\ &= - \begin{pmatrix} \frac{1}{4} & 0 & 0 \\ 0 & \frac{1}{4} & 0 \\ 0 & 0 & \frac{1}{4} \end{pmatrix} \begin{pmatrix} 0 & 3 & 0 \\ 3 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} 0 & -\frac{3}{4} & 0 \\ -\frac{3}{4} & 0 & \frac{1}{4} \\ 0 & \frac{1}{4} & 0 \end{pmatrix}, \\ T_{GS} &= - (L + D)^{-1} U \\ &= -\left( \begin{pmatrix} 0 & 0 & 0 \\ 3 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} + \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix} \right)^{-1} \begin{pmatrix} 0 & 3 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix} \\ &= - \begin{pmatrix} 4 & 0 & 0 \\ 3 & 4 & 0 \\ 0 & -1 & 4 \end{pmatrix}^{-1} \begin{pmatrix} 0 & 3 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix} \\ &= \begin{pmatrix} -\frac{1}{4} & 0 & 0 \\ \frac{3}{16} & -\frac{1}{4} & 0 \\ \frac{3}{64} & -\frac{1}{16} & -\frac{1}{4} \end{pmatrix} \begin{pmatrix} 0 & 3 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix} \\ &= \begin{pmatrix} 0 & -\frac{3}{4} & 0 \\ 0 & \frac{9}{16} & \frac{1}{4} \\ 0 & \frac{9}{64} & \frac{1}{16} \end{pmatrix}. \end{align*} \end{split}\]

Calculating the spectral radius for these iteration matrices gives \(\rho(T_J) = 0.7906\) and \(\rho(T_{GS}) = 0.625\), which are both less than 1, so both methods are convergent for this system. Furthermore, the Gauss-Seidel method will converge faster than the Jacobi method since it has the smaller spectral radius.
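These spectral radii can be checked numerically. A minimal sketch, assuming NumPy, that splits the coefficient matrix into \(L\), \(D\) and \(U\), forms the two iteration matrices and computes their spectral radii:

```python
import numpy as np

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

# Split A into strictly lower triangular, diagonal and strictly upper triangular parts
L = np.tril(A, k=-1)
D = np.diag(np.diag(A))
U = np.triu(A, k=1)

# Iteration matrices for the Jacobi and Gauss-Seidel methods
T_J = -np.linalg.inv(D) @ (L + U)
T_GS = -np.linalg.inv(L + D) @ U

print(np.max(np.abs(np.linalg.eigvals(T_J))))    # 0.7906 (sqrt(10) / 4)
print(np.max(np.abs(np.linalg.eigvals(T_GS))))   # 0.625
```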