<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://en.formulasearchengine.com/index.php?action=history&amp;feed=atom&amp;title=Natural_process_variation</id>
	<title>Natural process variation - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://en.formulasearchengine.com/index.php?action=history&amp;feed=atom&amp;title=Natural_process_variation"/>
	<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Natural_process_variation&amp;action=history"/>
	<updated>2026-04-17T08:25:00Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.0-wmf.28</generator>
	<entry>
		<id>https://en.formulasearchengine.com/index.php?title=Natural_process_variation&amp;diff=12840&amp;oldid=prev</id>
		<title>en&gt;Steven Hepting at 19:06, 22 March 2010</title>
		<link rel="alternate" type="text/html" href="https://en.formulasearchengine.com/index.php?title=Natural_process_variation&amp;diff=12840&amp;oldid=prev"/>
		<updated>2010-03-22T19:06:23Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;In [[numerical linear algebra]], the &amp;#039;&amp;#039;&amp;#039;Jacobi method&amp;#039;&amp;#039;&amp;#039; is an iterative algorithm for determining the solutions of a [[system of linear equations]] whose matrix is diagonally dominant, i.e. in each row the diagonal element is the largest in absolute value. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the [[Jacobi eigenvalue algorithm|Jacobi transformation method of matrix diagonalization]]. The method is named after [[Carl Gustav Jacob Jacobi]].&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
Given a square system of &amp;#039;&amp;#039;n&amp;#039;&amp;#039; linear equations:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;A\mathbf x = \mathbf b&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\begin{bmatrix} a_{11} &amp;amp; a_{12} &amp;amp; \cdots &amp;amp; a_{1n} \\ a_{21} &amp;amp; a_{22} &amp;amp; \cdots &amp;amp; a_{2n} \\ \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\a_{n1} &amp;amp; a_{n2} &amp;amp; \cdots &amp;amp; a_{nn} \end{bmatrix}, \qquad  \mathbf{x} = \begin{bmatrix} x_{1} \\ x_2 \\ \vdots \\ x_n \end{bmatrix} , \qquad  \mathbf{b} = \begin{bmatrix} b_{1} \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then &amp;#039;&amp;#039;A&amp;#039;&amp;#039; can be decomposed into a [[diagonal matrix|diagonal]] component &amp;#039;&amp;#039;D&amp;#039;&amp;#039;, and the remainder &amp;#039;&amp;#039;R&amp;#039;&amp;#039;:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;A=D+R \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} &amp;amp; 0 &amp;amp; \cdots &amp;amp; 0 \\ 0 &amp;amp; a_{22} &amp;amp; \cdots &amp;amp; 0 \\ \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\0 &amp;amp; 0 &amp;amp; \cdots &amp;amp; a_{nn} \end{bmatrix} \text{ and } R = \begin{bmatrix} 0 &amp;amp; a_{12} &amp;amp; \cdots &amp;amp; a_{1n} \\ a_{21} &amp;amp; 0 &amp;amp; \cdots &amp;amp; a_{2n} \\ \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\ a_{n1} &amp;amp; a_{n2} &amp;amp; \cdots &amp;amp; 0 \end{bmatrix}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The solution is then obtained iteratively via&lt;br /&gt;
:&amp;lt;math&amp;gt; \mathbf{x}^{(k+1)} = D^{-1} (\mathbf{b} - R \mathbf{x}^{(k)}). &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The element-based formula is thus:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; x^{(k+1)}_i  = \frac{1}{a_{ii}} \left(b_i -\sum_{j\ne i}a_{ij}x^{(k)}_j\right),\quad i=1,2,\ldots,n. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the computation of &amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;i&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt;&amp;lt;sup&amp;gt;(&amp;#039;&amp;#039;k&amp;#039;&amp;#039;+1)&amp;lt;/sup&amp;gt; requires each element in &amp;#039;&amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;#039;&amp;lt;sup&amp;gt;(&amp;#039;&amp;#039;k&amp;#039;&amp;#039;)&amp;lt;/sup&amp;gt; except itself. Unlike the [[Gauss–Seidel method]], we can&amp;#039;t overwrite &amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;i&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt;&amp;lt;sup&amp;gt;(&amp;#039;&amp;#039;k&amp;#039;&amp;#039;)&amp;lt;/sup&amp;gt; with &amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;i&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt;&amp;lt;sup&amp;gt;(&amp;#039;&amp;#039;k&amp;#039;&amp;#039;+1)&amp;lt;/sup&amp;gt;, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size &amp;#039;&amp;#039;n&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Algorithm ==&lt;br /&gt;
:  Choose an initial guess &amp;lt;math&amp;gt;x^{(0)}&amp;lt;/math&amp;gt; to the solution &amp;lt;br&amp;gt;&lt;br /&gt;
:  &amp;lt;math&amp;gt; k = 0 &amp;lt;/math&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:  check if convergence is reached&lt;br /&gt;
:  while convergence not reached do &amp;lt;br&amp;gt;&lt;br /&gt;
::  for i := 1 step 1 until n do &amp;lt;br&amp;gt;&lt;br /&gt;
:::  &amp;lt;math&amp;gt; \sigma = 0 &amp;lt;/math&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
:::  for j := 1 step 1 until n do &amp;lt;br&amp;gt;&lt;br /&gt;
::::  if j &amp;amp;ne; i then&lt;br /&gt;
:::::  &amp;lt;math&amp;gt; \sigma  = \sigma  + a_{ij} x_j^{(k)} &amp;lt;/math&amp;gt;&lt;br /&gt;
::::  end if&lt;br /&gt;
:::  end (j-loop) &amp;lt;br&amp;gt;&lt;br /&gt;
:::  &amp;lt;math&amp;gt;  x_i^{(k+1)}  = {{\left( {b_i  - \sigma } \right)} \over {a_{ii} }} &amp;lt;/math&amp;gt;&lt;br /&gt;
::  end (i-loop)&lt;br /&gt;
::  check if convergence is reached&lt;br /&gt;
::  &amp;lt;math&amp;gt;k = k + 1&amp;lt;/math&amp;gt;&lt;br /&gt;
:  loop (while convergence condition not reached)&lt;br /&gt;
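The pseudocode above translates directly into a short program. The following is a minimal sketch (the function name jacobi, the default tolerance, and the iteration cap are our own illustrative choices, not part of the original pseudocode):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Element-based Jacobi iteration for Ax = b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            # sigma accumulates the off-diagonal terms a_ij * x_j^(k)
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - sigma) / A[i, i]
        # stop once successive iterates agree to within tol
        if tol > np.linalg.norm(x_new - x):
            return x_new
        # x^(k) cannot be overwritten in place, so two vectors are kept
        x = x_new
    return x
```

For instance, with A = [[2, 1], [5, 7]] and b = [11, 13] the iteration converges to roughly (7.111, -3.222).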
&lt;br /&gt;
==Convergence== &amp;lt;!-- [[Matrix splitting]] links here.  Please do not change. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard convergence condition (for any iterative method) is that the [[spectral radius]] of the iteration matrix be less than 1:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\rho(D^{-1}R) &amp;lt; 1. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The method is guaranteed to converge if the matrix &amp;#039;&amp;#039;A&amp;#039;&amp;#039; is strictly or irreducibly [[diagonally dominant matrix|diagonally dominant]].  Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of other terms:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\left | a_{ii} \right | &amp;gt; \sum_{j \ne i} {\left | a_{ij} \right |}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Jacobi method sometimes converges even if these conditions are not satisfied.&lt;br /&gt;
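The spectral-radius condition can be checked numerically. A small sketch (the helper name jacobi_spectral_radius is ours):

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix D^{-1} R."""
    A = np.asarray(A, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))     # inverse of the diagonal part
    R = A - np.diag(np.diag(A))           # off-diagonal remainder
    return max(abs(np.linalg.eigvals(D_inv @ R)))

# A strictly diagonally dominant matrix: the iteration converges.
rho = jacobi_spectral_radius([[2.0, 1.0], [5.0, 7.0]])
print(1.0 > rho)  # True (rho is about 0.598)
```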
&lt;br /&gt;
==Example==&lt;br /&gt;
A linear system of the form &amp;lt;math&amp;gt;Ax=b&amp;lt;/math&amp;gt; with initial estimate &amp;lt;math&amp;gt;x^{(0)}&amp;lt;/math&amp;gt; is given by&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; A=&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           2 &amp;amp; 1 \\&lt;br /&gt;
           5 &amp;amp; 7 \\&lt;br /&gt;
           \end{bmatrix},&lt;br /&gt;
 \ b=&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           11 \\&lt;br /&gt;
           13 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
\quad \text{and} \quad x^{(0)} =&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           1 \\&lt;br /&gt;
           1 \\&lt;br /&gt;
        \end{bmatrix} .&amp;lt;/math&amp;gt;&lt;br /&gt;
We use the equation &amp;lt;math&amp;gt; x^{(k+1)}=D^{-1}(b - Rx^{(k)})&amp;lt;/math&amp;gt;, described above, to estimate &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;. First, we rewrite the equation in the more convenient form &amp;lt;math&amp;gt;D^{-1}(b - Rx^{(k)}) = Tx^{(k)} + C&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;T=-D^{-1}R&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C = D^{-1}b&amp;lt;/math&amp;gt;.  Note that &amp;lt;math&amp;gt;R=L+U&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; are the strictly lower and strictly upper triangular parts of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;.  From the known values&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; D^{-1}=&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           1/2 &amp;amp; 0 \\&lt;br /&gt;
           0 &amp;amp; 1/7 \\&lt;br /&gt;
           \end{bmatrix}, &lt;br /&gt;
 \ L=&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           0 &amp;amp; 0 \\&lt;br /&gt;
           5 &amp;amp; 0 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
\quad \text{and}  \quad U =&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           0 &amp;amp; 1 \\&lt;br /&gt;
           0 &amp;amp; 0 \\&lt;br /&gt;
        \end{bmatrix} .&amp;lt;/math&amp;gt;&lt;br /&gt;
we determine &amp;lt;math&amp;gt; T=-D^{-1}(L+U) &amp;lt;/math&amp;gt; as&lt;br /&gt;
:&amp;lt;math&amp;gt; T=&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           1/2 &amp;amp; 0 \\&lt;br /&gt;
           0 &amp;amp; 1/7 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
\left\{&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           0 &amp;amp; 0 \\&lt;br /&gt;
           -5 &amp;amp; 0 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
 +&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           0 &amp;amp; -1 \\&lt;br /&gt;
           0 &amp;amp; 0 \\&lt;br /&gt;
        \end{bmatrix}\right\}  &lt;br /&gt;
 =&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           0 &amp;amp; -1/2 \\&lt;br /&gt;
           -5/7 &amp;amp; 0 \\&lt;br /&gt;
        \end{bmatrix}  .&amp;lt;/math&amp;gt;&lt;br /&gt;
Further, C is found as&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; C =&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           1/2 &amp;amp; 0 \\&lt;br /&gt;
           0 &amp;amp; 1/7 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           11 \\&lt;br /&gt;
           13 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
 =&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           11/2 \\&lt;br /&gt;
           13/7 \\&lt;br /&gt;
        \end{bmatrix}. &amp;lt;/math&amp;gt;&lt;br /&gt;
With T and C calculated, we estimate &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt; x^{(1)}= Tx^{(0)}+C &amp;lt;/math&amp;gt;:&lt;br /&gt;
:&amp;lt;math&amp;gt; x^{(1)}= &lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           0 &amp;amp; -1/2 \\&lt;br /&gt;
           -5/7 &amp;amp; 0 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           1 \\&lt;br /&gt;
           1 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
 +&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           11/2 \\&lt;br /&gt;
           13/7 \\&lt;br /&gt;
        \end{bmatrix}  &lt;br /&gt;
 =&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           5.0 \\&lt;br /&gt;
           8/7 \\&lt;br /&gt;
        \end{bmatrix}  &lt;br /&gt;
\approx&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           5 \\&lt;br /&gt;
           1.143 \\&lt;br /&gt;
        \end{bmatrix} .&amp;lt;/math&amp;gt;&lt;br /&gt;
The next iteration yields&lt;br /&gt;
:&amp;lt;math&amp;gt; x^{(2)}= &lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           0 &amp;amp; -1/2 \\&lt;br /&gt;
           -5/7 &amp;amp; 0 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
&lt;br /&gt;
      \begin{bmatrix}&lt;br /&gt;
           5.0 \\&lt;br /&gt;
           8/7 \\&lt;br /&gt;
           \end{bmatrix}&lt;br /&gt;
 +&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           11/2 \\&lt;br /&gt;
           13/7 \\&lt;br /&gt;
        \end{bmatrix} &lt;br /&gt;
= &lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           69/14 \\&lt;br /&gt;
           -12/7 \\&lt;br /&gt;
        \end{bmatrix} &lt;br /&gt;
 \approx&lt;br /&gt;
        \begin{bmatrix}&lt;br /&gt;
           4.929 \\&lt;br /&gt;
           -1.714 \\&lt;br /&gt;
        \end{bmatrix} .&amp;lt;/math&amp;gt;&lt;br /&gt;
This process is repeated until convergence (i.e., until the residual &amp;lt;math&amp;gt;\|Ax^{(k)} - b\|&amp;lt;/math&amp;gt; is small).  The approximate solution after 25 iterations is&lt;br /&gt;
:&amp;lt;math&amp;gt; x=\begin{bmatrix}&lt;br /&gt;
7.111\\&lt;br /&gt;
-3.222&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
.&amp;lt;/math&amp;gt;&lt;br /&gt;
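The worked example can be replayed in a few lines by iterating the matrix form x^(k+1) = Tx^(k) + C with the values from the text (a sketch, not part of the original article):

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 7.0]])
b = np.array([11.0, 13.0])
D_inv = np.diag(1.0 / np.diag(A))   # inverse of the diagonal part D
R = A - np.diag(np.diag(A))         # off-diagonal remainder L + U
T = -D_inv @ R
C = D_inv @ b

x = np.array([1.0, 1.0])            # initial estimate x^(0)
for _ in range(25):
    x = T @ x + C

print(np.round(x, 3))  # close to the exact solution (64/9, -29/9)
```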
&lt;br /&gt;
== Weighted Jacobi method ==&lt;br /&gt;
&lt;br /&gt;
The weighted Jacobi iteration uses a parameter &amp;lt;math&amp;gt;\omega&amp;lt;/math&amp;gt; to compute the iteration as&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \mathbf{x}^{(k+1)} = \omega D^{-1} (\mathbf{b} - R \mathbf{x}^{(k)}) + \left(1-\omega\right)\mathbf{x}^{(k)}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\omega = 2/3&amp;lt;/math&amp;gt; being the usual choice.&amp;lt;ref&amp;gt;{{cite book|last=Saad|first=Yousef|authorlink=Yousef Saad|title=Iterative Methods for Sparse Linear Systems|edition=2|year=2003|publisher=[[Society for Industrial and Applied Mathematics|SIAM]]|isbn=0898715342|page=414}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
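The weighted update can be sketched as follows (the function name, the fixed iteration count, and the example usage are illustrative assumptions):

```python
import numpy as np

def weighted_jacobi(A, b, x0, omega=2.0 / 3.0, iterations=50):
    """Weighted (damped) Jacobi iteration with relaxation parameter omega."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))
    R = A - np.diag(np.diag(A))
    for _ in range(iterations):
        # blend the plain Jacobi update with the previous iterate
        x = omega * (D_inv @ (b - R @ x)) + (1.0 - omega) * x
    return x
```

With omega = 1 this reduces to the plain Jacobi iteration above.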
==See also==&lt;br /&gt;
&lt;br /&gt;
*[[Gauss–Seidel method]]&lt;br /&gt;
*[[Successive over-relaxation]]&lt;br /&gt;
*[[Iterative_method#Linear_systems|Iterative method. Linear systems]]&lt;br /&gt;
*[[Belief_propagation#Gaussian_belief_propagation_.28GaBP.29|Gaussian Belief Propagation]]&lt;br /&gt;
*[[Matrix splitting]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
*{{CFDWiki|name=Jacobi_method}}&lt;br /&gt;
*{{MathWorld|urlname=JacobiMethod|title=Jacobi method|author=Black, Noel; Moore, Shirley; and Weisstein, Eric W.}}&lt;br /&gt;
*[http://www.math-linux.com/spip.php?article49 Jacobi Method from www.math-linux.com]&lt;br /&gt;
*[http://math.fullerton.edu/mathews/n2003/GaussSeidelMod.html Module for Jacobi and Gauss–Seidel Iteration] &lt;br /&gt;
*[http://pagerank.suchmaschinen-doktor.de/matrix-inversion.html Numerical matrix inversion]&lt;br /&gt;
&lt;br /&gt;
{{Numerical linear algebra}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Numerical linear algebra]]&lt;br /&gt;
[[Category:Articles with example pseudocode]]&lt;br /&gt;
[[Category:Relaxation (iterative methods)]]&lt;/div&gt;</summary>
		<author><name>en&gt;Steven Hepting</name></author>
	</entry>
</feed>