Iterative method used to solve a linear system of equations
In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.
Description
Let
$$A\mathbf{x} = \mathbf{b}$$
be a square system of n linear equations, where:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$$
Then A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U:
$$A = D + L + U, \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}, \quad L + U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}.$$
The solution is then obtained iteratively via
$$\mathbf{x}^{(k+1)} = D^{-1}\bigl(\mathbf{b} - (L+U)\,\mathbf{x}^{(k)}\bigr),$$
where $\mathbf{x}^{(k)}$ is the kth approximation or iteration of $\mathbf{x}$ and $\mathbf{x}^{(k+1)}$ is the next or (k+1)th iteration of $\mathbf{x}$. The element-based formula is thus:
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j \neq i} a_{ij}\,x_j^{(k)}\Bigr), \qquad i = 1, 2, \ldots, n.$$
The computation of $x_i^{(k+1)}$ requires each element in $\mathbf{x}^{(k)}$ except itself. Unlike the Gauss–Seidel method, we cannot overwrite $x_i^{(k)}$ with $x_i^{(k+1)}$, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size n.
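As a concrete illustration, the matrix-based update above can be written in a few lines of NumPy. This is a minimal sketch rather than part of the original article; the function name jacobi and the parameters x0, tol, and max_iter are illustrative choices.

    import numpy as np

    def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
        """Matrix-based Jacobi iteration: x <- D^-1 (b - (L+U) x)."""
        D = np.diag(A)               # diagonal entries of A, as a vector
        R = A - np.diagflat(D)       # L + U, the off-diagonal part of A
        x = x0.astype(float)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D  # element-wise division by the diagonal
            if np.allclose(x, x_new, atol=tol, rtol=0.0):  # converged
                return x_new
            x = x_new                # keep old and new iterates separate
        return x

Note how the sketch keeps x and x_new as two separate vectors of size n, matching the storage remark above.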
Algorithm
Input: initial guess x(0) to the solution, (diagonally dominant) matrix A, right-hand side vector b, convergence criterion
Output: solution when convergence is reached
Comments: pseudocode based on the element-based formula above

    k = 0
    while convergence not reached do
        for i := 1 step until n do
            σ = 0
            for j := 1 step until n do
                if j ≠ i then
                    σ = σ + a_ij * x_j^(k)
                end
            end
            x_i^(k+1) = (b_i − σ) / a_ii
        end
        k = k + 1
    end
Convergence
The standard convergence condition (for any iterative method) is when the spectral radius of the iteration matrix is less than 1:
$$\rho\bigl(D^{-1}(L+U)\bigr) < 1.$$
A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of the other terms:
$$\left|a_{ii}\right| > \sum_{j \neq i} \left|a_{ij}\right| \qquad \text{for all } i.$$
The Jacobi method sometimes converges even if these conditions are not satisfied.
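Strict row diagonal dominance is easy to test programmatically. The following helper is a sketch, not part of the original article; its name is_strictly_diagonally_dominant is an illustrative choice:

    import numpy as np

    def is_strictly_diagonally_dominant(A):
        """Check |a_ii| > sum_{j != i} |a_ij| for every row of A."""
        diag = np.abs(np.diag(A))
        off_diag_sums = np.abs(A).sum(axis=1) - diag
        return bool(np.all(diag > off_diag_sums))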
Note that the Jacobi method does not converge for every symmetric positive-definite matrix. For example,
$$A = \begin{pmatrix} 29 & 2 & 1 \\ 2 & 6 & 1 \\ 1 & 1 & \tfrac{1}{5} \end{pmatrix} \quad\Rightarrow\quad D^{-1}(L+U) = \begin{pmatrix} 0 & \tfrac{2}{29} & \tfrac{1}{29} \\ \tfrac{1}{3} & 0 & \tfrac{1}{6} \\ 5 & 5 & 0 \end{pmatrix} \quad\Rightarrow\quad \rho\bigl(D^{-1}(L+U)\bigr) \approx 1.0661.$$
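This criterion can be verified numerically. The sketch below (not part of the original article) computes the spectral radius of the Jacobi iteration matrix for the matrix A above:

    import numpy as np

    A = np.array([[29.0, 2.0, 1.0],
                  [2.0, 6.0, 1.0],
                  [1.0, 1.0, 0.2]])

    D_inv = np.diag(1.0 / np.diag(A))                         # D^-1
    iteration_matrix = D_inv @ (A - np.diagflat(np.diag(A)))  # D^-1 (L+U)
    rho = max(abs(np.linalg.eigvals(iteration_matrix)))
    print(rho)  # about 1.07 > 1, so the Jacobi method diverges here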
Examples
Example 1
A linear system of the form $A\mathbf{x} = \mathbf{b}$ with initial estimate $\mathbf{x}^{(0)}$ is given by
$$A = \begin{bmatrix} 2 & 1 \\ 5 & 7 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 11 \\ 13 \end{bmatrix}, \qquad \mathbf{x}^{(0)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
We use the equation $\mathbf{x}^{(k+1)} = D^{-1}\bigl(\mathbf{b} - (L+U)\,\mathbf{x}^{(k)}\bigr)$, described above, to estimate $\mathbf{x}$. First, we rewrite the equation in a more convenient form $\mathbf{x}^{(k+1)} = T\mathbf{x}^{(k)} + \mathbf{c}$, where $T = -D^{-1}(L+U)$ and $\mathbf{c} = D^{-1}\mathbf{b}$. From the known values
$$D^{-1} = \begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{7} \end{bmatrix}, \qquad L = \begin{bmatrix} 0 & 0 \\ 5 & 0 \end{bmatrix}, \qquad U = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$
we determine $T = -D^{-1}(L+U)$ as
$$T = -\begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{7} \end{bmatrix}\left(\begin{bmatrix} 0 & 0 \\ 5 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\right) = \begin{bmatrix} 0 & -\tfrac{1}{2} \\ -\tfrac{5}{7} & 0 \end{bmatrix}.$$
Further, $\mathbf{c}$ is found as
$$\mathbf{c} = D^{-1}\mathbf{b} = \begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{7} \end{bmatrix}\begin{bmatrix} 11 \\ 13 \end{bmatrix} = \begin{bmatrix} \tfrac{11}{2} \\ \tfrac{13}{7} \end{bmatrix}.$$
With $T$ and $\mathbf{c}$ calculated, we estimate $\mathbf{x}$ as $\mathbf{x}^{(1)} = T\mathbf{x}^{(0)} + \mathbf{c}$:
$$\mathbf{x}^{(1)} = \begin{bmatrix} 0 & -\tfrac{1}{2} \\ -\tfrac{5}{7} & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} \tfrac{11}{2} \\ \tfrac{13}{7} \end{bmatrix} = \begin{bmatrix} 5.0 \\ \tfrac{8}{7} \end{bmatrix} \approx \begin{bmatrix} 5 \\ 1.143 \end{bmatrix}.$$
The next iteration yields
$$\mathbf{x}^{(2)} = \begin{bmatrix} 0 & -\tfrac{1}{2} \\ -\tfrac{5}{7} & 0 \end{bmatrix}\begin{bmatrix} 5.0 \\ \tfrac{8}{7} \end{bmatrix} + \begin{bmatrix} \tfrac{11}{2} \\ \tfrac{13}{7} \end{bmatrix} = \begin{bmatrix} \tfrac{69}{14} \\ -\tfrac{12}{7} \end{bmatrix} \approx \begin{bmatrix} 4.929 \\ -1.714 \end{bmatrix}.$$
This process is repeated until convergence (i.e., until $\|A\mathbf{x}^{(k)} - \mathbf{b}\|$ is small). The solution after 25 iterations is
$$\mathbf{x} \approx \begin{bmatrix} 7.111 \\ -3.222 \end{bmatrix}.$$
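The hand computation above can be reproduced numerically. This sketch (not part of the original article) runs 25 Jacobi iterations on the system in matrix form:

    import numpy as np

    A = np.array([[2.0, 1.0], [5.0, 7.0]])
    b = np.array([11.0, 13.0])
    x = np.array([1.0, 1.0])    # initial estimate x^(0)

    D = np.diag(A)              # diagonal of A
    R = A - np.diagflat(D)      # L + U
    for _ in range(25):
        x = (b - R @ x) / D     # x^(k+1) = D^-1 (b - (L+U) x^(k))
    print(x)                    # approximately [7.111, -3.222]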
Example 2
Suppose we are given the following linear system:
$$\begin{aligned}
10x_1 - x_2 + 2x_3 &= 6, \\
-x_1 + 11x_2 - x_3 + 3x_4 &= 25, \\
2x_1 - x_2 + 10x_3 - x_4 &= -11, \\
3x_2 - x_3 + 8x_4 &= 15.
\end{aligned}$$
If we choose (0, 0, 0, 0) as the initial approximation, then the first approximate solution is given by
$$\begin{aligned}
x_1^{(1)} &= \tfrac{1}{10}\bigl(6 + x_2^{(0)} - 2x_3^{(0)}\bigr) = 0.6, \\
x_2^{(1)} &= \tfrac{1}{11}\bigl(25 + x_1^{(0)} + x_3^{(0)} - 3x_4^{(0)}\bigr) \approx 2.2727, \\
x_3^{(1)} &= \tfrac{1}{10}\bigl(-11 - 2x_1^{(0)} + x_2^{(0)} + x_4^{(0)}\bigr) = -1.1, \\
x_4^{(1)} &= \tfrac{1}{8}\bigl(15 - 3x_2^{(0)} + x_3^{(0)}\bigr) = 1.875.
\end{aligned}$$
Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximated solutions after five iterations.
x1      | x2      | x3       | x4
0.6     | 2.27272 | -1.1     | 1.875
1.04727 | 1.7159  | -0.80522 | 0.88522
0.93263 | 2.05330 | -1.0493  | 1.13088
1.01519 | 1.95369 | -0.9681  | 0.97384
0.98899 | 2.0114  | -1.0102  | 1.02135
The exact solution of the system is (1, 2, −1, 1).
Python example
    import numpy as np

    ITERATION_LIMIT = 1000

    # initialize the matrix
    A = np.array([[10., -1., 2., 0.],
                  [-1., 11., -1., 3.],
                  [2., -1., 10., -1.],
                  [0., 3., -1., 8.]])
    # initialize the RHS vector
    b = np.array([6., 25., -11., 15.])

    # prints the system
    print("System:")
    for i in range(A.shape[0]):
        row = ["{}*x{}".format(A[i, j], j + 1) for j in range(A.shape[1])]
        print(" + ".join(row), "=", b[i])
    print()

    x = np.zeros_like(b)
    for it_count in range(ITERATION_LIMIT):
        if it_count != 0:
            print("Iteration {0}: {1}".format(it_count, x))
        x_new = np.zeros_like(x)
        for i in range(A.shape[0]):
            s1 = np.dot(A[i, :i], x[:i])          # sum of a_ij * x_j for j < i
            s2 = np.dot(A[i, i + 1:], x[i + 1:])  # sum of a_ij * x_j for j > i
            x_new[i] = (b[i] - s1 - s2) / A[i, i]
        if np.allclose(x, x_new, atol=1e-10, rtol=0.):
            break
        x = x_new

    print("Solution:")
    print(x)
    error = np.dot(A, x) - b
    print("Error:")
    print(error)
Weighted Jacobi method
The weighted Jacobi iteration uses a parameter $\omega$ to compute the iteration as
$$\mathbf{x}^{(k+1)} = \omega D^{-1}\bigl(\mathbf{b} - (L+U)\,\mathbf{x}^{(k)}\bigr) + (1-\omega)\,\mathbf{x}^{(k)},$$
with $\omega = \tfrac{2}{3}$ being the usual choice.[1] From the relation $L + U = A - D$, this may also be expressed as
$$\mathbf{x}^{(k+1)} = \omega D^{-1}\mathbf{b} + \bigl(I - \omega D^{-1}A\bigr)\,\mathbf{x}^{(k)}.$$
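A minimal sketch of this weighted update in NumPy follows; the function name weighted_jacobi and its parameters are illustrative choices, not from the article. It uses the mathematically equivalent form x + ω D⁻¹(b − Ax):

    import numpy as np

    def weighted_jacobi(A, b, x0, omega=2.0 / 3.0, tol=1e-10, max_iter=1000):
        """Weighted Jacobi: x <- x + omega * D^-1 (b - A x)."""
        D = np.diag(A)                           # diagonal entries of A
        x = x0.astype(float)
        for _ in range(max_iter):
            # equals omega * D^-1 b + (I - omega * D^-1 A) x
            x_new = x + omega * (b - A @ x) / D
            if np.allclose(x, x_new, atol=tol, rtol=0.0):
                return x_new
            x = x_new
        return x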
Convergence in the symmetric positive definite case
If the system matrix $A$ is symmetric positive-definite, one can show convergence.
Let $C_\omega := I - \omega D^{-1}A$ be the iteration matrix. Then, convergence is guaranteed for
$$\rho(C_\omega) < 1 \quad\Longleftrightarrow\quad 0 < \omega < \frac{2}{\lambda_{\max}(D^{-1}A)},$$
where $\lambda_{\max}$ is the maximal eigenvalue.
The spectral radius can be minimized for a particular choice of $\omega = \omega_{\text{opt}}$ as follows:
$$\omega_{\text{opt}} := \frac{2}{\lambda_{\min}(D^{-1}A) + \lambda_{\max}(D^{-1}A)} \quad\Rightarrow\quad \rho\bigl(C_{\omega_{\text{opt}}}\bigr) = 1 - \frac{2}{\kappa(D^{-1}A) + 1},$$
where $\kappa$ is the matrix condition number.
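As an illustrative check (a sketch with an arbitrarily chosen SPD matrix, not from the article), the optimal weight and the resulting spectral radius can be computed and compared against the formula above:

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])                   # example SPD matrix
    M = np.diag(1.0 / np.diag(A)) @ A                 # D^-1 A
    lam = np.linalg.eigvals(M).real                   # eigenvalues are real here
    lam_min, lam_max = lam.min(), lam.max()

    omega_opt = 2.0 / (lam_min + lam_max)
    C = np.eye(A.shape[0]) - omega_opt * M            # iteration matrix C_omega
    rho = max(abs(np.linalg.eigvals(C)))
    kappa = lam_max / lam_min                         # condition number of D^-1 A
    print(omega_opt, rho, 1.0 - 2.0 / (kappa + 1.0))  # rho matches 1 - 2/(kappa+1)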
See also
- Gauss–Seidel method
- Successive over-relaxation
- Iterative method § Linear systems
- Gaussian Belief Propagation
- Matrix splitting
References
- ^ Saad, Yousef (2003). Iterative Methods for Sparse Linear Systems (2nd ed.). SIAM. p. 414. ISBN 0898715342.
External links
- This article incorporates text from the article Jacobi_method on CFD-Wiki that is under the GFDL license.
- Black, Noel; Moore, Shirley & Weisstein, Eric W. "Jacobi method". MathWorld.
- Jacobi Method from www.math-linux.com