In [[numerical analysis]], '''Richardson extrapolation''' is a [[Series acceleration|sequence acceleration]] method, used to improve the [[rate of convergence]] of a [[sequence]]. It is named after [[Lewis Fry Richardson]], who introduced the technique in the early 20th century.<ref>{{cite journal
| last=Richardson | first=L. F. | authorlink=Lewis Fry Richardson
| title=The approximate arithmetical solution by finite differences of physical problems including differential equations, with an application to the stresses in a masonry dam
| journal=Philosophical Transactions of the Royal Society A
| year=1911 | volume=210
| issue=459-470 | pages=307–357
| doi=10.1098/rsta.1911.0009
}}</ref><ref>{{cite journal
| last=Richardson | first=L. F. | authorlink=Lewis Fry Richardson
| last2=Gaunt | first2=J. A.
| title=The deferred approach to the limit
| journal=Philosophical Transactions of the Royal Society A
| year=1927 | volume=226
| issue=636-646 | pages=299–349
| doi=10.1098/rsta.1927.0008
}}</ref> In the words of [[Garrett Birkhoff|Birkhoff]] and [[Gian-Carlo Rota|Rota]], "... its usefulness for practical computations can hardly be overestimated."<ref>Page 126 of {{cite book | last=Birkhoff | first=Garrett | authorlink=Garrett Birkhoff | coauthors=[[Gian-Carlo Rota]] | title=Ordinary differential equations | publisher=John Wiley and sons | year=1978 | edition=3rd | isbn=0-471-07411-X | oclc=4379402}}</ref>

Practical applications of Richardson extrapolation include [[Romberg integration]], which applies Richardson extrapolation to the [[trapezoid rule]], and the [[Bulirsch–Stoer algorithm]] for solving ordinary differential equations.
==Example of Richardson extrapolation==
Suppose that we wish to approximate <math>A^*</math>, and that we have a method <math>A(h)</math> that depends on a small parameter <math>h</math>, so that

:<math>A(h) = A^\ast + C h^n + O(h^{n+1}) .</math>

Define a new method

:<math> R(h,k) := \frac{ k^n A(h) - A(kh)}{k^n-1} .</math>

Then

:<math> R(h,k) = \frac{ k^n ( A^* + C h^n + O(h^{n+1}) ) - ( A^* + C k^n h^n + O(h^{n+1}) ) }{ k^n - 1} = A^* + O(h^{n+1}) .</math>

<math> R(h,k) </math> is called the Richardson [[extrapolation]] of ''A''(''h''), and has a higher-order error estimate <math> O(h^{n+1}) </math> compared to <math> A(h) </math>.

Very often, it is much easier to obtain a given precision by using ''R''(''h'') rather than ''A''(''h''′) with a much smaller ''h''′; the latter can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see the examples below).

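For instance, here is a minimal MATLAB sketch of this formula (the function, point, and step size are arbitrary choices for illustration, not part of the original example): the forward-difference quotient is a first-order method (<math>n = 1</math>), and taking <math>k = 2</math> gives a markedly smaller error.

<source lang="matlab">
% Minimal illustrative sketch (function, point, and h chosen arbitrarily).
% A(h) is a first-order method (n = 1): forward difference for d/dx sin(x) at x = 1.
Aexact = cos(1);                        % the target value A*
A = @(h) (sin(1 + h) - sin(1)) / h;     % A(h) = A* + C*h + O(h^2)
n = 1; k = 2; h = 0.1;
R = (k^n * A(h) - A(k*h)) / (k^n - 1);  % Richardson extrapolation R(h, k)
fprintf('error of A(h):    %g\n', abs(A(h) - Aexact))  % about 4e-2
fprintf('error of R(h, k): %g\n', abs(R - Aexact))     % about 1.6e-3
</source>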
==General formula==
Let ''A''(''h'') be an approximation of ''A'' that depends on a positive step size ''h'', with an [[Approximation error|error]] formula of the form
:<math> A - A(h) = a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots </math>
where the ''a<sub>i</sub>'' are unknown constants and the ''k<sub>i</sub>'' are known constants such that ''h<sup>k<sub>i</sub></sup>'' > ''h<sup>k<sub>i+1</sub></sup>'' (i.e., ''k<sub>i</sub>'' < ''k<sub>i+1</sub>'' for 0 < ''h'' < 1).

The exact value sought can be written as
:<math> A = A(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots </math>
which can be simplified with [[Big O notation]] to be
:<math> A = A(h)+ a_0h^{k_0} + O(h^{k_1}). </math>

Using the step sizes ''h'' and ''h / t'' for some ''t'', the two formulas for ''A'' are:
:<math> A = A(h)+ a_0h^{k_0} + O(h^{k_1}) </math>
:<math> A = A\!\left(\frac{h}{t}\right) + a_0\left(\frac{h}{t}\right)^{k_0} + O(h^{k_1}) .</math>

Multiplying the second equation by ''t''<sup>''k''<sub>0</sub></sup> and subtracting the first equation gives
:<math> (t^{k_0}-1)A = t^{k_0}A\left(\frac{h}{t}\right) - A(h) + O(h^{k_1}) </math>
which can be solved for ''A'' to give
:<math>A = \frac{t^{k_0}A\left(\frac{h}{t}\right) - A(h)}{t^{k_0}-1} + O(h^{k_1}) .</math>

By this process, we have achieved a better approximation of ''A'' by subtracting the largest term in the error, which was ''O''(''h''<sup>''k''<sub>0</sub></sup>). This process can be repeated to remove more error terms and obtain even better approximations.

A general [[recurrence relation]] can be defined for the approximations by
:<math> A_{i+1}(h) = \frac{t^{k_i}A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i}-1} </math>
such that
:<math> A = A_{i+1}(h) + O(h^{k_{i+1}}) </math>
with <math>A_0(h)=A(h)</math>.

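A short MATLAB sketch of this recurrence (all names here are hypothetical: <code>A</code> is assumed to be a function handle evaluating the base approximation ''A''(''h''), and <code>kExp</code> the vector of known exponents ''k''<sub>0</sub>, ''k''<sub>1</sub>, ...):

<source lang="matlab">
% Sketch of the general recurrence (names are hypothetical, not from the article).
% Builds a triangular table T in which T(r, 1) = A_0(h / t^(r-1)) and each new
% column applies A_{i+1}(h) = (t^k_i * A_i(h/t) - A_i(h)) / (t^k_i - 1).
function T = richardsonTable(A, h, t, kExp, m)
    T = zeros(m, m);
    for r = 1:m
        T(r, 1) = A(h / t^(r - 1));     % base approximations at shrinking steps
    end
    for c = 1:m - 1                     % column c+1 removes the h^kExp(c) error term
        for r = c + 1:m
            T(r, c + 1) = (t^kExp(c) * T(r, c) - T(r - 1, c)) / (t^kExp(c) - 1);
        end
    end
end                                     % T(m, m) is the most refined approximation
</source>

Only the previous row is needed to form each new one, so the full table could be replaced by two vectors; the matrix form is kept for clarity.
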
The Richardson extrapolation can be considered as a linear [[sequence transformation]].

Additionally, the general formula can be used to estimate ''k''<sub>0</sub> when neither its value nor ''A'' is known ''a priori''. Such a technique can be useful for quantifying an unknown [[rate of convergence]]. Given approximations of ''A'' from three distinct step sizes ''h'', ''h / t'', and ''h / s'', the exact relationship
:<math>A=\frac{t^{k_0}A\left(\frac{h}{t}\right) - A(h)}{t^{k_0}-1} + O(h^{k_1}) = \frac{s^{k_0}A\left(\frac{h}{s}\right) - A(h)}{s^{k_0}-1} + O(h^{k_1})</math>
yields an approximate relationship
:<math>A\left(\frac{h}{t}\right) + \frac{A\left(\frac{h}{t}\right) - A(h)}{t^{k_0}-1} \approx A\left(\frac{h}{s}\right) +\frac{A\left(\frac{h}{s}\right) - A(h)}{s^{k_0}-1}</math>
which can be solved numerically to estimate ''k''<sub>0</sub>.

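For instance, a minimal MATLAB sketch (assuming the three precomputed approximations <code>Ah</code>, <code>Aht</code>, and <code>Ahs</code> and the known factors <code>t</code> and <code>s</code> are already available) can hand this relationship to a root finder:

<source lang="matlab">
% Sketch: estimate k0 from three approximations (assumption: Ah, Aht, Ahs hold
% A(h), A(h/t), A(h/s), and t, s are the known step-size factors).
residual = @(k) (Aht + (Aht - Ah) / (t^k - 1)) ...
              - (Ahs + (Ahs - Ah) / (s^k - 1));
k0 = fzero(residual, 1);   % solve residual(k0) = 0 numerically, initial guess 1
</source>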
==Example==
Using [[Taylor's theorem]] about ''h'' = 0,
:<math>f(x+h) = f(x) + f'(x)h + \frac{f''(x)}{2}h^2 + \cdots</math>
the derivative of ''f''(''x'') is given by
:<math>f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{f''(x)}{2}h - \cdots.</math>

If the initial approximations of the derivative are chosen to be
:<math>A_0(h) = \frac{f(x+h) - f(x)}{h}</math>
then ''k<sub>i</sub>'' = ''i'' + 1.

For ''t'' = 2, the first formula extrapolated for ''A'' would be
:<math>A = 2A_0\!\left(\frac{h}{2}\right) - A_0(h) + O(h^2) .</math>

For the new approximation
:<math>A_1(h) = 2A_0\!\left(\frac{h}{2}\right) - A_0(h) </math>
we can extrapolate again to obtain
:<math> A = \frac{4A_1\!\left(\frac{h}{2}\right) - A_1(h)}{3} + O(h^3) .</math>

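The gain in accuracy is easy to see numerically. A small sketch (with ''f''(''x'') = ''e''<sup>''x''</sup> at ''x'' = 0 chosen here purely for illustration, so that the exact derivative is 1):

<source lang="matlab">
% Illustrative check (f, x, and h are arbitrary choices): f'(0) = 1 for f = exp.
f = @exp;  x = 0;  h = 0.1;
A0 = @(h) (f(x + h) - f(x)) / h;    % O(h) error, about 5e-2 here
A1 = @(h) 2*A0(h/2) - A0(h);        % O(h^2) error, about 9e-4 here
A2 = (4*A1(h/2) - A1(h)) / 3;       % O(h^3) error, about 5e-6 here
fprintf('%g  %g  %g\n', abs(A0(h) - 1), abs(A1(h) - 1), abs(A2 - 1))
</source>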
==Example MATLAB code for Richardson extrapolation==
The following demonstrates Richardson extrapolation to help solve the ODE <math>y'(t) = -y^2</math>, <math>y(0) = 1</math>, with the trapezoidal method. In this example we halve the step size <math>h</math> each iteration, so in the discussion above we have <math>t = 2</math>. The error of the trapezoidal method over a single step can be expressed in terms of odd powers of <math>h</math>, so that the error over multiple steps can be expressed in even powers; we therefore extrapolate with powers of <math>4 = 2^2 = t^2</math> in the code below. We want to find the value of <math>y(5)</math>, which has the exact value <math>\frac{1}{5 + 1} = \frac{1}{6} = 0.1666\ldots</math> since the exact solution of the ODE is <math>y(t) = \frac{1}{1 + t}</math>. The code assumes that a function called <code>Trapezoidal(f, tStart, tEnd, h, y0)</code> exists which performs the trapezoidal method on the function <code>f</code>, with initial value <code>y0</code> at <code>tStart</code> and step size <code>h</code>, and computes an approximation to <code>y(tEnd)</code>; a possible implementation is sketched after the code.

Starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.
<source lang="matlab">
tStart = 0;                  % Starting time
tEnd = 5;                    % Ending time
f = @(t, y) -y^2;            % The derivative of y, so y' = f(t, y(t)) = -y^2
                             % The exact solution of this ODE is y = 1/(1 + t)
y0 = 1;                      % The initial value (i.e. y0 = y(tStart) = y(0) = 1)
tolerance = 10^-11;          % 10 digit accuracy is desired

maxRows = 20;                % Don't allow the iteration to continue indefinitely
initialH = tEnd - tStart;    % Pick an initial step size
haveWeFoundSolution = false; % Were we able to find the solution to the desired tolerance? Not yet.

h = initialH;

% Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates
% Note that this will be a lower triangular matrix and that at most two rows are
% actually needed at any time in the computation.
A = zeros(maxRows, maxRows);

% Compute the top left element of the matrix
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0);

% Each row of the matrix requires one call to Trapezoidal
% This loop starts by filling the second row of the matrix, since the first row was computed above
for i = 1 : maxRows - 1      % Starting at i = 1, iterate at most maxRows - 1 times
    h = h/2;                 % Halve the previous value of h since this is the start of a new row

    % Call the Trapezoidal function with this new smaller step size
    A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0);

    for j = 1 : i            % Go across the row until the diagonal is reached
        % Use the most recently computed value (i.e. A(i + 1, j)) and the element from the
        % row above it (i.e. A(i, j)) to compute the next Richardson extrapolate
        A(i + 1, j + 1) = (4^j * A(i + 1, j) - A(i, j)) / (4^j - 1);
    end

    % After leaving the above inner loop, the diagonal element of row i + 1 has been computed
    % This diagonal element is the last Richardson extrapolate to be computed
    % The difference between this extrapolate and the last extrapolate of row i is a good
    % indication of the error
    if (abs(A(i + 1, i + 1) - A(i, i)) < tolerance)  % If the result is within tolerance
        fprintf('y(5) = %.11f\n', A(i + 1, i + 1))   % Display the Richardson extrapolate
        haveWeFoundSolution = true;
        break                                        % Done, so leave the loop
    end
end

if ~haveWeFoundSolution  % If we weren't able to reach the desired tolerance
    fprintf('Warning: Not able to find solution to within the desired tolerance of %g\n', tolerance)
    fprintf('The last computed extrapolate was %g\n', A(maxRows, maxRows))
end
</source>
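The <code>Trapezoidal</code> function is assumed above rather than defined. One possible implementation (a sketch supplied under that assumption, not part of the original example) handles the implicit trapezoidal update with an Euler predictor and a fixed number of fixed-point corrections; for the very coarse early steps the correction may not converge, but those entries carry rapidly shrinking weight in the later extrapolates.

<source lang="matlab">
% Hypothetical helper so the example above can run. Implicit trapezoidal rule:
%   y_{n+1} = y_n + (h/2) * (f(t_n, y_n) + f(t_{n+1}, y_{n+1}))
% solved per step with an Euler predictor and a few fixed-point corrections.
function y = Trapezoidal(f, tStart, tEnd, h, y0)
    n = round((tEnd - tStart) / h);     % number of steps of size h
    y = y0;
    t = tStart;
    for step = 1:n
        yNext = y + h * f(t, y);        % explicit Euler predictor
        for iter = 1:5                  % fixed-point corrections (may not converge for large h)
            yNext = y + (h/2) * (f(t, y) + f(t + h, yNext));
        end
        y = yNext;
        t = t + h;
    end
end
</source>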
==See also==
* [[Aitken's delta-squared process]]
* [[Takebe Kenko]]

==References==
<references/>
*''Extrapolation Methods. Theory and Practice'' by C. Brezinski and M. Redivo Zaglia, North-Holland, 1991.

==External links==
*[http://math.fullerton.edu/mathews/n2003/RichardsonExtrapMod.html Module for Richardson's Extrapolation], fullerton.edu
*[http://ocw.mit.edu/courses/mathematics/18-304-undergraduate-seminar-in-discrete-mathematics-spring-2006/projects/xtrpltn_liu_xpnd.pdf Fundamental Methods of Numerical Extrapolation With Applications], mit.edu
*[http://www.math.ubc.ca/~feldman/m256/richard.pdf Richardson extrapolation], math.ubc.ca
*[http://www.math.ubc.ca/~israel/m215/rich/rich.html Richardson extrapolation on a website of Robert Israel (University of British Columbia)]

[[Category:Numerical analysis]]
[[Category:Asymptotic analysis]]
| |