{{for|the interpolation method|Lanczos resampling}}
{{technical|date=June 2012}}
{{lead too short|date=June 2012}}
 
The '''Lanczos algorithm''' is an [[iterative algorithm]] devised by [[Cornelius Lanczos]] that is an adaptation of [[power iteration|power methods]] to find [[eigenvalue]]s and [[eigenvector]]s of a [[square matrix]] or the [[singular value decomposition]] of a rectangular matrix. It is particularly useful for finding decompositions of very large sparse matrices. In [[latent semantic indexing]], for instance, matrices relating millions of documents to hundreds of thousands of terms must be reduced to singular-value form.
 
==Power method for finding eigenvalues==
{{main|Power iteration}}
The power method for finding the largest-magnitude eigenvalue of a matrix <math>A\,</math> can be summarized as follows: if <math>x_0\,</math> is a random vector and <math>x_{n+1} = A x_n\,</math>, then in the large-<math>n</math> limit, <math>x_n/\|x_n\|</math> approaches the normalized eigenvector corresponding to the eigenvalue of largest magnitude.
 
If <math>A = U \operatorname{diag}(\sigma_i) U' \,</math> is the [[Eigendecomposition of a matrix|eigendecomposition]] of <math>A\,</math>, then <math>A^n = U \operatorname{diag}(\sigma_i^n) U'</math>. As <math>n\,</math> grows, the diagonal matrix of powered eigenvalues is dominated by whichever eigenvalue has the largest magnitude (setting aside the case of two or more eigenvalues of equal largest magnitude). Consequently, <math>\| x_{n+1}\| / \| x_{n}\|\,</math> converges to the magnitude of the largest eigenvalue and <math> x_n /\| x_n\|\,</math> to the associated eigenvector. If the largest eigenvalue has multiplicity greater than one, then <math>x_n \,</math> converges to a vector in the subspace spanned by the eigenvectors associated with it. Having found the first eigenpair, one can then successively restrict the algorithm to the orthogonal complement of the known eigenvectors to obtain the next-largest eigenpairs, and so on.
 
In practice, this simple algorithm does not work well for computing many eigenvectors, because any [[round-off error]] tends to reintroduce small components of the more significant eigenvectors into the computation, degrading its accuracy. Pure power methods can also converge slowly, even for the first eigenvector.
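
A minimal NumPy sketch of this power iteration (the function name and stopping rule are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10):
    # Repeatedly apply A and renormalize; x converges (up to sign) to the
    # eigenvector of the eigenvalue of largest magnitude.
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ A @ x                       # Rayleigh quotient estimate
    for _ in range(num_iters):
        w = A @ x
        x = w / np.linalg.norm(w)         # x_n / ||x_n||
        lam_new = x @ A @ x
        converged = abs(lam_new - lam) < tol * max(1.0, abs(lam_new))
        lam = lam_new
        if converged:
            break
    return lam, x
</syntaxhighlight>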
 
==Lanczos method==
In the course of applying the power method, in addition to the final vector <math>A^{n-1} v</math>, the intermediate vectors <math>A^j v, \, j=0,1,\cdots,n-2</math> are computed and then discarded. As <math> n </math> is often taken to be quite large, this can mean discarding a large amount of information. More advanced algorithms, such as [[Arnoldi's algorithm]] and the Lanczos algorithm, save this information and use the [[Gram–Schmidt process]] or [[Householder algorithm]] to reorthogonalize the vectors into a basis spanning the [[Krylov subspace]] corresponding to the matrix <math>A</math>.
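
A sketch of the underlying idea: orthonormalize the Krylov sequence <math>v, Av, A^2v, \ldots</math> with modified Gram–Schmidt (essentially the Arnoldi viewpoint; the helper name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def krylov_basis(A, v, m):
    # Build an orthonormal basis of the Krylov subspace K_m(A, v) with
    # modified Gram-Schmidt. Applying A to the previous *orthonormalized*
    # vector spans the same subspace as A^j v but is numerically better.
    n = v.shape[0]
    Q = np.zeros((n, m))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(1, m):
        w = A @ Q[:, j - 1]               # next Krylov direction
        for i in range(j):                # subtract existing components
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j] = w / np.linalg.norm(w)
    return Q
</syntaxhighlight>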
 
===The algorithm===
 
The Lanczos algorithm can be viewed as a simplification of [[Arnoldi's algorithm]] for the special case of [[Hermitian matrices]]. The <math>m</math>th step of the algorithm transforms the matrix <math>A</math> into a [[tridiagonal matrix]] <math>T_{mm}</math>; when <math>m</math> is equal to the dimension of <math>A</math>, <math>T_{mm}</math> is [[similar (linear algebra)|similar]] to <math>A</math>.
 
====Definitions====
We wish to compute the symmetric tridiagonal matrix <math>T_{mm} = V_m^* A V_m.</math>
 
The diagonal elements are denoted by <math>\alpha_j = t_{jj}</math>, and the off-diagonal elements are denoted by <math> \beta_j = t_{j-1,j} </math>.
 
Note that <math> t_{j-1,j} = t_{j,j-1} </math>, since <math>T_{mm}</math> is symmetric.
 
====Iteration====
 
(Note: Following these steps alone will '''not''' give you the correct eigenvalues and eigenvectors. Additional care is needed to correct for the numerical errors; see the section [[#Numerical_stability|Numerical stability]] below.)
 
There are in principle four ways to write the iteration procedure. Paige (1972) and other works show that the following procedure is the most numerically stable.<ref name="CW1985">{{Cite book|last1=Cullum |last2= Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|volume= 1| isbn= 0-8176-3058-9}}</ref><ref name="Saad1992">{{Cite book|author=[[Yousef Saad]]|title=Numerical Methods for Large Eigenvalue Problems|  isbn= 0-470-21820-7|url= http://www-users.cs.umn.edu/~saad/books.html}}</ref>
 
{{algorithm-begin|name=Lanczos}}
  <math>v_1 \leftarrow \, </math> random vector with norm 1.
  <math>v_0 \leftarrow 0 \, </math>
  <math>\beta_1 \leftarrow 0 \, </math>
  '''Iteration''': for <math>j = 1,2,\cdots,m-1\, </math>
      <math> w_j \leftarrow A v_j \, </math>
      <math> \alpha_j \leftarrow  w_j \cdot v_j  \, </math>
      <math> w_j \leftarrow w_j - \alpha_j v_j  - \beta_j v_{j-1} \, </math>
      <math> \beta_{j+1} \leftarrow \left\| w_j \right\|  \, </math>
      <math> v_{j+1} \leftarrow w_j / \beta_{j+1}  \, </math>
  endfor
  <math> w_m  \leftarrow A v_m \, </math>
  <math> \alpha_m \leftarrow  w_m \cdot v_m  \, </math>
  '''return'''
{{algorithm-end}}
 
Here, <math>x \cdot y</math> represents the dot product of vectors <math>x</math> and <math>y</math>.
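
For concreteness, a minimal NumPy transcription of the iteration above for a real symmetric <math>A</math> (a sketch without the stability corrections discussed below; the function name and array layout are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def lanczos(A, v1, m):
    # Direct transcription of the iteration above. A: real symmetric
    # matrix, v1: start vector, m: number of steps.
    n = v1.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)                   # diagonal entries alpha_1..alpha_m
    beta = np.zeros(m)                    # beta[j] holds beta_{j+1}; beta_1 = 0
    V[:, 0] = v1 / np.linalg.norm(v1)
    v_prev = np.zeros(n)                  # v_0 = 0
    for j in range(m - 1):
        w = A @ V[:, j]                   # w_j <- A v_j
        alpha[j] = w @ V[:, j]            # alpha_j <- w_j . v_j
        w -= alpha[j] * V[:, j] + beta[j] * v_prev
        beta[j + 1] = np.linalg.norm(w)   # beta_{j+1} <- ||w_j||
        v_prev = V[:, j]
        V[:, j + 1] = w / beta[j + 1]     # v_{j+1} <- w_j / beta_{j+1}
    w = A @ V[:, m - 1]
    alpha[m - 1] = w @ V[:, m - 1]        # final alpha_m
    return alpha, beta[1:], V             # off-diagonals beta_2..beta_m
</syntaxhighlight>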
 
After the iteration, we have the <math>\alpha_j</math> and <math>\beta_j</math>, which form the tridiagonal matrix
 
<math>T_{mm} = \begin{pmatrix}
\alpha_1 & \beta_2  &          &            &              & 0 \\
\beta_2  & \alpha_2 & \beta_3  &            &              & \\
        & \beta_3  & \alpha_3 & \ddots      &              & \\
        &          & \ddots  & \ddots      & \beta_{m-1}  & \\
        &          &          & \beta_{m-1} & \alpha_{m-1} & \beta_m \\
0        &          &          &            & \beta_m      & \alpha_m \\
\end{pmatrix}</math>
 
The vectors <math>v_j</math> ('''Lanczos vectors''') generated on the fly form the transformation matrix
 
<math>V_m = \left( v_1, v_2, \cdots, v_m \right)</math>,
 
which is useful for calculating the eigenvectors (see below). In practice, the Lanczos vectors may be stored as they are generated (which requires a lot of memory), or regenerated later when needed, as long as the first vector <math>v_1</math> is kept. Each iteration performs one matrix–vector multiplication and roughly 7''n'' further floating-point operations, where ''n'' is the dimension of <math>A</math>.
 
====Solve for eigenvalues and eigenvectors====
 
After the matrix <math>T_{mm}</math> is calculated, one can compute its eigenvalues <math>\lambda_i^{(m)}</math> and their corresponding eigenvectors <math>u_i^{(m)}</math> (for example, using the [[QR algorithm]] or Multiple Relatively Robust Representations (MRRR)). The eigenvalues and eigenvectors of <math>T</math> can be obtained in as little as <math>\mathcal{O}(m^2)</math> work with MRRR; obtaining just the eigenvalues is much simpler and can also be done in <math>\mathcal{O}(m^2)</math> work with spectral bisection.
 
It can be proved that these eigenvalues (the Ritz values) are approximate eigenvalues of the original matrix <math>A</math>.
 
The Ritz eigenvectors <math>y_i</math> of <math>A</math> can be calculated by <math>y_i = V_m u_i^{(m)}</math>, where <math>V_m</math> is the transformation matrix whose column vectors are <math>v_1, v_2, \cdots, v_m</math>.
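
Assuming the <code>lanczos()</code> helper from the sketch above, the eigenpairs of <math>T_{mm}</math> and the Ritz vectors can be obtained with SciPy's tridiagonal eigensolver:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh_tridiagonal

# alpha (diagonal), beta (off-diagonal), V (Lanczos vectors) as returned
# by the lanczos() sketch above.
theta, U = eigh_tridiagonal(alpha, beta)  # eigenpairs of T_mm
Y = V @ U                                 # Ritz vectors y_i = V_m u_i
# theta[i] approximates an eigenvalue of A; Y[:, i] the eigenvector.
</syntaxhighlight>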
 
===Numerical stability===
Stability refers to how strongly an algorithm is affected by small numerical errors that are introduced and accumulated during the computation, i.e. whether it still produces a result close to the exact one. Numerical stability is the central criterion for judging the usefulness of implementing an algorithm on a computer with roundoff.
 
For the Lanczos algorithm, it can be proved that with ''exact arithmetic'' the set of vectors <math>v_1, v_2, \cdots, v_{m+1}</math> forms an ''orthonormal'' basis, and the computed eigenvalues/eigenvectors are good approximations to those of the original matrix. However, in practice (as the calculations are performed in floating point arithmetic, where inaccuracy is inevitable), the orthogonality is quickly lost, and in some cases the new vector may even be linearly dependent on the set already constructed. As a result, some of the eigenvalues of the resulting tridiagonal matrix may not be approximations to eigenvalues of the original matrix. Therefore, the Lanczos algorithm is not very stable.
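
The loss of orthogonality is easy to observe numerically. With the <code>V</code> returned by the <code>lanczos()</code> sketch above:

<syntaxhighlight lang="python">
import numpy as np

# In exact arithmetic V would be exactly orthonormal and this quantity
# would be zero; in floating point it grows as the iteration proceeds.
ortho_error = np.linalg.norm(V.T @ V - np.eye(V.shape[1]))
</syntaxhighlight>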
 
Users of this algorithm must be able to find and remove those "spurious" eigenvalues. Practical implementations of the Lanczos algorithm go in three directions to fight this stability issue:<ref name="CW1985"/><ref name="Saad1992"/>
# Prevent the loss of orthogonality, e.g. by reorthogonalizing each new vector (a sketch of this approach follows the list)
# Recover the orthogonality after the basis is generated
# After the good and "spurious" eigenvalues are all identified, remove the spurious ones.
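
A minimal sketch of the first approach, assuming the <code>lanczos()</code> helper from the earlier sketch: each new vector is explicitly reorthogonalized against all previously generated Lanczos vectors (done twice here, a common heuristic), at the cost of extra work per step:

<syntaxhighlight lang="python">
import numpy as np

def lanczos_full_reorth(A, v1, m):
    # Variant of the earlier lanczos() sketch with full reorthogonalization:
    # each new w is explicitly orthogonalized against ALL previous Lanczos
    # vectors, trading O(n*j) extra work per step for numerical stability.
    n = v1.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)                    # beta[j] holds beta_{j+1}
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m - 1):
        w = A @ V[:, j]
        alpha[j] = w @ V[:, j]            # alpha_j before projection
        for _ in range(2):                # reorthogonalize twice
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        beta[j + 1] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j + 1]
    w = A @ V[:, m - 1]
    alpha[m - 1] = w @ V[:, m - 1]
    return alpha, beta[1:], V
</syntaxhighlight>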
 
==Variations==
Variations on the Lanczos algorithm exist where the vectors involved are tall, narrow matrices instead of vectors and the normalizing constants are small square matrices. These are called "block" Lanczos algorithms and can be much faster on computers with large numbers of registers and long memory-fetch times.
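
As an illustration of the block idea (a hedged sketch, not any specific published variant): the single Lanczos vector becomes an <math>n \times b</math> block, the scalars become <math>b \times b</math> matrices, and a thin QR factorization replaces the scalar normalization.

<syntaxhighlight lang="python">
import numpy as np

def block_lanczos_step(A, V_j, V_prev, B_j):
    # One step of a block Lanczos recurrence. V_j: n x b block with
    # orthonormal columns; B_j: b x b coupling block from the previous
    # step (zero matrices for the first step).
    W = A @ V_j
    A_j = V_j.T @ W                       # b x b diagonal block of block-T
    W = W - V_j @ A_j - V_prev @ B_j.T
    V_next, B_next = np.linalg.qr(W)      # thin QR replaces w / ||w||
    return V_next, A_j, B_next
</syntaxhighlight>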
 
Many implementations of the Lanczos algorithm restart after a certain number of iterations. One of the most influential restarted variations is the implicitly restarted Lanczos method,<ref>{{cite journal |author=D. Calvetti, L. Reichel, and D.C. Sorensen |year=1994 |title=An Implicitly Restarted Lanczos Method for Large Symmetric Eigenvalue Problems |journal=Electronic Transactions on Numerical Analysis |volume=2 |pages=1–21 |url=http://etna.mcs.kent.edu/vol.2.1994/pp1-21.dir/pp1-21.ps}}</ref> which is implemented in [[ARPACK]].<ref>{{cite book |author=R. B. Lehoucq, D. C. Sorensen, and C. Yang |year=1998 |title=ARPACK Users Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods |publisher=SIAM |url=http://www.ec-securehost.com/SIAM/SE06.html}}</ref> This has led to a number of other restarted variations, such as restarted Lanczos bidiagonalization.<ref>{{cite journal |author=E. Kokiopoulou, C. Bekas, and E. Gallopoulos |year=2004 |title=Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization |journal=Appl. Numer. Math. |volume=49 |pages=39 |doi=10.1016/j.apnum.2003.11.011}}</ref> Another successful restarted variation is the Thick-Restart Lanczos method,<ref>{{cite journal |author=Kesheng Wu and Horst Simon |year=2000 |title=Thick-Restart Lanczos Method for Large Symmetric Eigenvalue Problems |journal=SIAM Journal on Matrix Analysis and Applications |volume=22 |issue=2 |pages=602 |doi=10.1137/S0895479898334605}}</ref> which has been implemented in a software package called TRLan.<ref>{{cite web |author=Kesheng Wu and Horst Simon |year=2001 |title=TRLan software package |url=http://crd.lbl.gov/~kewu/trlan.html}}</ref>
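
For example, SciPy's <code>eigsh</code> exposes ARPACK's implicitly restarted Lanczos method for symmetric problems (an illustrative snippet; the test matrix is arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse.linalg import eigsh

# Build an arbitrary symmetric test matrix and ask ARPACK's implicitly
# restarted Lanczos routine for the 6 eigenvalues of largest magnitude.
rng = np.random.default_rng(0)
M = rng.standard_normal((500, 500))
A = (M + M.T) / 2
vals, vecs = eigsh(A, k=6, which='LM')
</syntaxhighlight>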
 
=== Nullspace over a finite field ===
{{main|Block Lanczos algorithm}}
 
In 1995, [[Peter Montgomery (mathematician)|Peter Montgomery]] published an algorithm, based on the Lanczos algorithm, for finding elements of the [[kernel (matrix)|nullspace]] of a large sparse matrix over [[GF(2)]]; since the set of people interested in large sparse matrices over finite fields and the set of people interested in large eigenvalue problems scarcely overlap, this is often also called the ''block Lanczos algorithm'' without causing unreasonable confusion.{{Citation needed|date=June 2011}}
 
==Applications==
Lanczos algorithms are very attractive because the multiplication by <math>A\,</math> is the only large-scale linear operation. Since weighted-term text retrieval engines implement just this operation, the Lanczos algorithm can be applied efficiently to text documents (see [[Latent Semantic Indexing]]). Eigenvectors are also important for large-scale ranking methods such as the [[HITS algorithm]] developed by [[Jon Kleinberg]], or the [[PageRank]] algorithm used by Google.
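
Since only products <math>Av</math> are required, <math>A</math> never has to be formed explicitly. A minimal sketch using SciPy's matrix-free <code>LinearOperator</code> interface (the averaging stencil below is a toy example):

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# The operator is defined only by its action on a vector: here, a
# symmetric circulant averaging stencil, never stored as a matrix.
n = 10000
def matvec(v):
    return 0.5 * (np.roll(v, 1) + np.roll(v, -1))

A = LinearOperator((n, n), matvec=matvec, dtype=float)
vals = eigsh(A, k=4, which='LA', return_eigenvectors=False)
</syntaxhighlight>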
 
Lanczos algorithms are also used in [[Condensed Matter Physics]] as a method for solving [[Hamiltonian matrix|Hamiltonians]] of [[Strongly correlated material|strongly correlated electron systems]].<ref>{{cite journal|last=Chen|first=HY|coauthors=Atkinson, W.A., Wortis, R.|title=Disorder-induced zero-bias anomaly in the Anderson-Hubbard model: Numerical and analytical calculations|journal=Physical Review B|date=July 2011|volume=84|issue=4|doi=10.1103/PhysRevB.84.045113}}</ref>
 
The Lanczos algorithm has also been used in the formulation of the Levenberg–Marquardt algorithm for generating computational models of oil and gas reservoirs.<ref>Gharib Shirangi, M., "History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm", ''Journal of Petroleum Science and Engineering''. http://www.sciencedirect.com/science/article/pii/S0920410513003227</ref>
 
==Implementations==
The [[NAG Numerical Library|NAG Library]] contains several routines<ref>{{ cite web | author = The Numerical Algorithms Group  | title = Keyword Index: Lanczos | date = | work = NAG Library Manual, Mark 23 | url = http://www.nag.co.uk/numeric/fl/nagdoc_fl23/html/INDEXES/KWIC/lanczos.html | accessdate = 2012-02-09 }}</ref> for the solution of large-scale linear systems and eigenproblems which use the Lanczos algorithm.
 
[[MATLAB]] and [[GNU Octave]] come with ARPACK built-in. Both stored and implicit matrices can be analyzed through the ''eigs()'' function ([http://www.mathworks.com/help/techdoc/ref/eigs.html Matlab]/[http://www.gnu.org/software/octave/doc/interpreter/Sparse-Linear-Algebra.html#doc_002deigs Octave]).
 
A Matlab implementation of the Lanczos algorithm (note precision issues) is available as a part of the [http://www.cs.cmu.edu/~bickson/gabp/#download Gaussian Belief Propagation Matlab Package]. The GraphLab<ref>[http://www.graphlab.ml.cmu.edu/pmf.html GraphLab]</ref> collaborative filtering library incorporates a large-scale parallel implementation of the Lanczos algorithm (in C++) for multicore machines.
 
The [http://www.cs.wm.edu/~andreas/software/ PRIMME] library also implements a Lanczos-like algorithm.
 
==References==
{{reflist}}
 
==External links==
* [http://books.google.com/books?vid=ISBN0801854148 Golub and van Loan give very good descriptions of the various forms of Lanczos algorithms in their book ''Matrix Computations'']
* [http://ai.stanford.edu/~ang/papers/ijcai01-linkanalysis.pdf Andrew Ng et al., an analysis of PageRank]
* [http://www.farcaster.com/papers/crypto-solve/node3.html Lanczos and conjugate gradient methods] B. A. LaMacchia and A. M. Odlyzko, Solving Large Sparse Linear Systems Over Finite Fields.
 
{{Numerical linear algebra}}
 
{{DEFAULTSORT:Lanczos Algorithm}}
[[Category:Numerical linear algebra]]
