In [[numerical analysis]], one of the most important problems is designing efficient and [[Numerical stability|stable]] [[algorithm]]s for finding the [[eigenvalue]]s of a [[Matrix (mathematics)|matrix]].  These '''[[List of numerical analysis topics#Eigenvalue algorithms|eigenvalue algorithms]]''' may also find [[eigenvectors]].
 
==Eigenvalues and eigenvectors==
{{main|Eigenvalues and eigenvectors|Generalized eigenvector}}
Given an {{math|''n'' &times; ''n''}} [[Square matrix#Square matrices|square matrix]] {{math|''A''}} of [[Real number|real]] or [[Complex number|complex]] numbers, an ''[[eigenvalue]]'' {{math|λ}} and its associated ''[[generalized eigenvector]]'' {{math|'''v'''}} are a pair obeying the relation<ref name="Axler">{{Citation
| last = Axler
| first = Sheldon
| author-link = Sheldon Axler
| title = Down with Determinants!
| journal = American Mathematical Monthly
| volume = 102
| pages = 139–154
| url = http://www.axler.net/DwD.pdf
| year = 1995 }}</ref>
 
:<math>\left(A - \lambda I\right)^k \mathbf{v} = 0,</math>
 
where {{math|'''v'''}} is a nonzero {{math|''n'' &times; 1}} column vector, {{math|''I''}} is the {{math|''n'' &times; ''n''}} [[identity matrix]], {{math|''k''}} is a positive integer, and both {{math|λ}} and {{math|'''v'''}} are allowed to be complex even when {{math|''A''}} is real. When {{math|1=''k'' = 1}}, the vector is called simply an ''[[eigenvector]]'', and the pair is called an ''eigenpair''. In this case, {{math|1=''A'''''v''' = λ'''v'''}}.  Any eigenvalue {{math|λ}} of {{math|''A''}} has ordinary<ref group="note">The term "ordinary" is used here only to emphasize the distinction between "eigenvector" and "generalized eigenvector".</ref> eigenvectors associated to it, for if {{math|''k''}} is the smallest integer such that {{math|1=(''A'' - λ''I'')<sup>''k''</sup> '''v''' = 0}} for a generalized eigenvector {{math|'''v'''}}, then {{math|1=(''A'' - λ''I'')<sup>''k''-1</sup> '''v'''}} is an ordinary eigenvector. The value {{math|''k''}} can always be taken as less than or equal to {{math|''n''}}. In particular, {{math|1=(''A'' - λ''I'')<sup>''n''</sup> '''v''' = 0}} for all generalized eigenvectors {{math|'''v'''}} associated with {{math|λ.}}
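
As a simple numerical illustration (the 2 × 2 example matrix here is arbitrary, and <code>eig</code> is the standard MATLAB/Octave routine), the relation {{math|1=''A'''''v''' = λ'''v'''}} can be checked directly for a computed eigenpair:
<source lang="matlab">
% Check the eigenpair relation A*v = lambda*v for an arbitrary example matrix.
A = [2 1; 0 3];
[V, D] = eig(A);                  % columns of V are eigenvectors, D holds eigenvalues
lambda = D(1,1);
v = V(:,1);
residual = norm(A*v - lambda*v)   % ~0 up to rounding error
</source>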
 
For each eigenvalue {{math|λ}} of {{math|''A''}}, the [[kernel (matrix)|kernel]] {{math|ker(''A'' - λ''I'')}} consists of all eigenvectors associated with {{math|λ}} (along with 0), called the ''[[eigenspace]]'' of {{math|λ}}, while the vector space {{math|ker((''A'' - λ''I'')<sup>''n''</sup>)}} consists of all generalized eigenvectors, and is called the ''[[generalized eigenspace]]''. The ''[[geometric multiplicity]]'' of {{math|λ}} is the dimension of its eigenspace. The ''[[algebraic multiplicity]]'' of {{math|λ}} is the dimension of its generalized eigenspace. The latter terminology is justified by the equation
 
:<math>p_A\left(z\right) = {\rm det}\left( zI - A \right) = \prod_{i=1}^k (z - \lambda_i)^{\alpha_i},</math>
 
where {{math|det}} is the [[determinant]] function, the {{math|λ<sub>''i''</sub>}} are all the distinct eigenvalues of {{math|''A''}} and the {{math|α<sub>''i''</sub>}} are the corresponding algebraic multiplicities. The function {{math|1=''p<sub>A</sub>''(''z'')}} is the ''[[characteristic polynomial]]'' of {{math|''A''}}. So the algebraic multiplicity is the multiplicity of the eigenvalue as a [[Properties of polynomial roots|zero]] of the characteristic polynomial. Since any eigenvector is also a generalized eigenvector, the geometric multiplicity is less than or equal to the algebraic multiplicity. The algebraic multiplicities sum up to {{math|''n''}}, the degree of the characteristic polynomial. The equation {{math|1=''p<sub>A</sub>''(''z'') = 0}} is called the ''characteristic equation'', as its roots are exactly the eigenvalues of {{math|''A''}}. By the [[Cayley-Hamilton theorem]], {{math|''A''}} itself obeys the same equation: {{math|1=''p<sub>A</sub>''(''A'') = 0.}}<ref group="note">where the constant term is multiplied by the identity matrix {{math|''I''}}.</ref> As a consequence, the columns of the matrix <math>\textstyle \prod_{i \ne j} (A - \lambda_iI)^{\alpha_i}</math> must be either 0 or generalized eigenvectors of the eigenvalue {{math|λ<sub>''j''</sub>}}, since they are annihilated by <math>\textstyle (A - \lambda_jI)^{\alpha_j}.</math>  In fact, the [[column space]] is the generalized eigenspace of {{math|λ<sub>''j''</sub>.}}
 
Any collection of generalized eigenvectors of distinct eigenvalues is linearly independent, so a basis for all of {{math|'''''C'''<sup> n</sup>''}} can be chosen consisting of generalized eigenvectors. More particularly, this basis {{math|{{(}}'''v'''<sub>''i''</sub>{{)}}{{Sup sub|''n''|''i''{{=}}1}}}} can be chosen and organized so that
:* if {{math|'''v'''<sub>''i''</sub>}} and {{math|'''v'''<sub>''j''</sub>}} have the same eigenvalue, then so does {{math|'''v'''<sub>''k''</sub>}} for each {{math|''k''}} between {{math|''i''}} and {{math|''j''}}, and
:* if {{math|'''v'''<sub>''i''</sub>}} is not an ordinary eigenvector, and if {{math|λ<sub>''i''</sub>}} is its eigenvalue, then {{math|1=(''A'' - λ<sub>''i''</sub>''I'' )'''v'''<sub>''i''</sub> = '''v'''<sub>''i''-1</sub>}} (in particular, {{math|'''v'''<sub>1</sub>}} must be an ordinary eigenvector).
If these basis vectors are placed as the column vectors of a matrix {{math|''V'' {{=}} [ '''v'''<sub>1</sub>  '''v'''<sub>2</sub>  ... '''v'''<sub>''n''</sub> ]}}, then {{math|''V''}} can be used to convert {{math|''A''}} to its [[Jordan normal form]]:
:<math>V^{-1}AV = \begin{bmatrix} \lambda_1 & \beta_1 & 0 & \ldots & 0 \\ 0 & \lambda_2 & \beta_2 & \ldots & 0 \\ 0 & 0 & \lambda_3 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & \lambda_n \end{bmatrix},</math>
 
where the {{math|λ<sub>''i''</sub>}} are the eigenvalues, {{math|1=''β''<sub>''i''</sub> = 1}} if {{math|1=(''A'' - λ<sub>''i''+1</sub>''I'' )'''v'''<sub>''i''+1</sub> = '''v'''<sub>''i''</sub>}} and {{math|1=''β''<sub>''i''</sub> = 0}} otherwise.
 
More generally, if {{math|''W''}} is any invertible matrix, and {{math|λ}} is an eigenvalue of {{math|''A''}} with generalized eigenvector {{math|'''v'''}}, then {{math|1=(''W''<sup> -1</sup>''AW'' - λ''I'' )<sup>''k''</sup> ''W''<sup> -1</sup>'''v''' = ''W''<sup> -1</sup>(''A'' - λ''I'' )<sup>''k''</sup>'''v''' = 0}}. Thus {{math|λ}} is an eigenvalue of {{math|''W''<sup> -1</sup>''AW''}} with generalized eigenvector {{math|''W''<sup> -1</sup>'''v'''}}. That is, [[similar matrices]] have the same eigenvalues.
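
The invariance of the spectrum under similarity can be checked numerically; a minimal sketch with arbitrarily chosen matrices:
<source lang="matlab">
% Similar matrices share eigenvalues: compare eig(A) with eig(W^-1 * A * W).
A = [4 1 0; 0 2 1; 0 0 -1];       % eigenvalues 4, 2, -1
W = [1 2 0; 0 1 3; 1 0 1];        % any invertible matrix
disp(sort(eig(A)).')
disp(sort(eig(W \ (A * W))).')    % same values up to rounding error
</source>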
 
===Normal, hermitian, and real-symmetric matrices===
{{main|Adjoint matrix|Normal matrix|Hermitian matrix}}
 
The [[conjugate transpose|adjoint]] {{math|''M''<sup>*</sup>}} of a complex matrix {{math|''M''}} is the transpose of the conjugate of {{math|''M''}}: {{math|1=''M'' <sup>*</sup> = {{overline|''M''}} <sup>T</sup>}}. A square matrix {{math|''A''}} is called ''[[Normal matrix|normal]]'' if it commutes with its adjoint: {{math|1=''A''<sup>*</sup>''A'' = ''AA''<sup>*</sup>}}. It is called ''[[Hermitian matrix|hermitian]]'' if it is equal to its adjoint: {{math|1=''A''<sup>*</sup> = ''A''}}. All hermitian matrices are normal. If {{math|''A''}} has only real elements, then the adjoint is just the transpose, and {{math|''A''}} is hermitian if and only if it is [[symmetric matrix|symmetric]]. When applied to column vectors, the adjoint can be used to define the canonical inner product on {{math|'''''C'''<sup> n</sup>''}}: {{math|1='''w • v''' = '''w'''<sup>*</sup> '''v'''}}.<ref group="note">This ordering of the inner product (with the conjugate-linear position on the left), is preferred by physicists. Algebraists often place the conjugate-linear position on the right: {{math|1='''w • v''' = '''v'''<sup>*</sup> '''w'''}}.</ref> Normal, hermitian, and real-symmetric matrices have several useful properties:
:* Every generalized eigenvector of a normal matrix is an ordinary eigenvector.
:* Any normal matrix is similar to a diagonal matrix, since its Jordan normal form is diagonal.
:* Eigenvectors of distinct eigenvalues of a normal matrix are orthogonal.  
:* For any normal matrix {{math|''A''}}, {{math|'''''C'''<sup> n</sup>''}} has an orthonormal basis consisting of eigenvectors of {{math|''A''}}. The corresponding matrix of eigenvectors is [[Unitary matrix|unitary]].
:* The eigenvalues of a hermitian matrix are real, since {{math|1=({{overline|λ}} - λ)'''v''' = (''A''<sup>*</sup> - ''A'')'''v''' = (''A'' - ''A'')'''v''' = 0}} for a non-zero eigenvector {{math|'''v'''}}.
:* If {{math|''A''}} is real, there is an orthonormal basis for {{math|'''''R'''<sup> n</sup>''}} consisting of eigenvectors of {{math|''A''}} if and only if {{math|''A''}} is symmetric.
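
Several of these properties can be observed numerically; a small sketch for an arbitrarily chosen hermitian matrix, using built-in MATLAB/Octave routines:
<source lang="matlab">
% A hermitian example: real eigenvalues and an orthonormal eigenvector basis.
A = [2 1+1i; 1-1i 3];
disp(norm(A - A'))            % 0: A equals its adjoint, so it is hermitian (and normal)
[V, D] = eig(A);
disp(diag(D).')               % the eigenvalues are real
disp(norm(V'*V - eye(2)))     % ~0: the eigenvectors form an orthonormal basis
</source>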
 
It is possible for a real or complex matrix to have all real eigenvalues without being hermitian. For example, a real [[triangular matrix]] has its eigenvalues along its diagonal, but in general is not symmetric.
 
==Condition number==
 
Any problem of numeric calculation can be viewed as the evaluation of some function &fnof; for some input {{math|''x''}}. The [[condition number]] {{math|''κ''(&fnof;, ''x'')}} of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be [[Wilkinson's polynomial|very ill-conditioned]]. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.
 
For the problem of solving the linear equation {{math|1=''A'''''v''' = '''b'''}} where {{math|''A''}} is invertible, the condition number {{math|1=''κ''(''A''<sup>-1</sup>, '''b''')}} is given by {{math|1={{!!}}''A''{{!!}}<sub>op</sub>{{!!}}''A''<sup>-1</sup>{{!!}}<sub>op</sub>}}, where {{nowrap|{{!!}}  {{!!}}<sub>op</sub>}}  is the [[operator norm]] subordinate to the usual Euclidean norm on {{math|'''''C'''<sup> n</sup>''}}. Since this number is independent of {{math|'''b'''}} and is the same for {{math|''A''}} and {{math|''A''<sup>-1</sup>}}, it is usually just called the condition number {{math|''κ''(''A'')}} of the matrix {{math|''A''}}. This value {{math|''κ''(''A'')}} is also the ratio of the largest [[singular value]] of {{math|''A''}} to its smallest; for a normal matrix, it equals the absolute value of the ratio of the largest eigenvalue to the smallest. If {{math|''A''}} is [[unitary]], then {{math|1={{!!}}''A''{{!!}}<sub>op</sub> = {{!!}}''A''<sup>-1</sup>{{!!}}<sub>op</sub> = 1}}, so {{math|1=''κ''(''A'') = 1}}. For general matrices, the operator norm is often difficult to calculate. For this reason, other [[matrix norms]] are commonly used to estimate the condition number.
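
As an illustration (with an arbitrary example matrix), the condition number can be computed directly from the operator norms, or with the built-in <code>cond</code> function:
<source lang="matlab">
% kappa(A) = ||A||_op * ||A^-1||_op; norm() gives the 2-norm (largest singular value).
A = [1 2; 3 4];
kappa = norm(A) * norm(inv(A))   % explicit product of operator norms
cond(A)                          % built-in equivalent: ratio of largest to smallest singular value
</source>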
 
For the eigenvalue problem, [[Bauer-Fike theorem|Bauer and Fike proved]] that if {{math|λ}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' &times; ''n''}} matrix {{math|''A''}} with eigenvector matrix {{math|''V''}}, then the absolute error in calculating {{math|λ}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.<ref>{{Citation
| author = F. L. Bauer
| author2 = C. T. Fike
| title = Norms and exclusion theorems
| journal = Numer. Math.
| issue = 2
| pages = 137–141
| year = 1960 }}</ref> [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|λ}} is {{math|1=''κ''(λ, ''A'') = ''κ''(''V'') = {{!!}}''V'' {{!!}}<sub>op</sub> {{!!}}''V'' <sup>-1</sup>{{!!}}<sub>op</sub>}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(λ, ''A'') = 1}}. Thus the eigenvalue problem for all normal matrices is well-conditioned.
 
The condition number for the problem of finding the eigenspace of a normal matrix {{math|''A''}} corresponding to an eigenvalue {{math|λ}} has been shown to be inversely proportional to the minimum distance between {{math|λ}} and the other distinct eigenvalues of {{math|''A''}}.<ref>{{Citation
| author = S.C. Eisenstat
| author2 = I.C.F. Ipsen
| title = Relative Perturbation Results for Eigenvalues and Eigenvectors of Diagonalisable Matrices
| journal = BIT
| volume = 38
| issue = 3
| pages = 502–9
| year = 1998 }}</ref> In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.
 
==Algorithms==
 
Any monic polynomial is the characteristic polynomial of its [[companion matrix]]. Therefore a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The [[Abel-Ruffini theorem]] shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are [[Iterative method|iterative]], producing better approximate solutions with each iteration.
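
Going in the other direction, the roots of a polynomial can be obtained as the eigenvalues of its companion matrix; a small sketch with an arbitrary example polynomial (MATLAB's <code>roots</code> is implemented this way internally):
<source lang="matlab">
% Roots of p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
% found as the eigenvalues of the companion matrix of p.
p = [1 -6 11 -6];
C = compan(p);     % companion matrix of p
eig(C)             % 3, 2, 1 (up to ordering)
roots(p)           % the built-in function gives the same values
</source>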
 
Some algorithms produce every eigenvalue; others produce only a few, or even just one. However, even the latter algorithms can be used to find all eigenvalues. Once an eigenvalue {{math|λ}} of a matrix {{math|''A''}} has been identified, it can be used either to direct the algorithm towards a different solution next time, or to reduce the problem to one that no longer has {{math|λ}} as a solution.
 
Redirection is usually accomplished by shifting: replacing {{math|''A''}} with {{math|''A'' - μ''I''}} for some constant {{math|μ}}. The eigenvalue found for {{math|''A'' - μ''I''}} must have {{math|μ}} added back in to get an eigenvalue for {{math|''A''}}. For example, for [[power iteration]], {{math|1=μ = λ}}. Power iteration finds the largest eigenvalue in absolute value, so even when {{math|λ}} is only an approximate eigenvalue, power iteration is unlikely to find it a second time. Conversely, [[inverse iteration]] based methods find the eigenvalue closest to {{math|μ}}, so {{math|μ}} is chosen well away from {{math|λ}} and hopefully closer to some other eigenvalue.
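
A bare-bones power iteration might look as follows (the example matrix and the fixed iteration count are chosen only for illustration; practical implementations test for convergence instead):
<source lang="matlab">
% Power iteration: converges to an eigenvector of the eigenvalue of largest absolute value.
A = [2 1; 1 3];
v = rand(2, 1);                 % random starting vector
for k = 1:100
    v = A * v;
    v = v / norm(v);            % renormalize at every step
end
lambda = v' * A * v             % Rayleigh quotient approximates the dominant eigenvalue
</source>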
 
Reduction can be accomplished by restricting {{math|''A''}} to the column space of the matrix {{math|''A'' - λ''I''}}, which {{math|''A''}} carries to itself. Since {{math|''A'' - λ''I''}} is singular, the column space is of lesser dimension. The eigenvalue algorithm can then be applied to the restricted matrix. This process can be repeated until all eigenvalues are found.
 
If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with {{math|μ}} set to a close approximation to the eigenvalue. This will quickly converge to an eigenvector of the eigenvalue closest to {{math|μ}}. For small matrices, an alternative is to look at the column space of the product of {{math|''A'' - λ{{'}}''I''}} for each of the other eigenvalues {{math|λ{{'}}.}}
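
A corresponding inverse iteration sketch (the example matrix and shift are arbitrary); each step solves a shifted linear system rather than forming the inverse explicitly:
<source lang="matlab">
% Inverse iteration: converges to an eigenvector of the eigenvalue closest to mu.
A = [2 1; 1 3];
mu = 3.5;                        % rough approximation of an eigenvalue
v = rand(2, 1);
for k = 1:50
    v = (A - mu * eye(2)) \ v;   % solve (A - mu*I) w = v
    v = v / norm(v);
end
lambda = v' * A * v              % close to the eigenvalue nearest mu, here (5 + sqrt(5))/2
</source>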
 
==Hessenberg and tridiagonal matrices==
 
{{main|Hessenberg matrix}}
 
Because the eigenvalues of a triangular matrix are its diagonal elements, for general matrices there is no finite method like [[gaussian elimination]] to convert a matrix to triangular form while preserving eigenvalues. But it is possible to reach something close to triangular. An [[Hessenberg matrix|upper Hessenberg matrix]] is a square matrix for which all entries below the [[subdiagonal]] are zero. A lower Hessenberg matrix is one for which all entries above the [[superdiagonal]] are zero. Matrices that are both upper and lower Hessenberg are [[Tridiagonal matrix|tridiagonal]]. Hessenberg and tridiagonal matrices are the starting points for many eigenvalue algorithms because the zero entries reduce the complexity of the problem. Several methods are commonly used to convert a general matrix into a Hessenberg matrix with the same eigenvalues. If the original matrix was symmetric or hermitian, then the resulting matrix will be tridiagonal.
 
When only eigenvalues are needed, there is no need to calculate the similarity matrix, as the transformed matrix has the same eigenvalues. If eigenvectors are needed as well, the similarity matrix may be needed to transform the eigenvectors of the Hessenberg matrix back into eigenvectors of the original matrix.
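
For example, the built-in <code>hess</code> function of MATLAB/Octave performs such a reduction and can also return the similarity matrix (the example matrix below is arbitrary; since it is symmetric, the reduced form is tridiagonal):
<source lang="matlab">
% Reduce A to upper Hessenberg form H = P'*A*P with P unitary.
A = [3 1 0 2; 1 4 1 0; 0 1 5 1; 2 0 1 6];
[P, H] = hess(A);
disp(norm(P*H*P' - A))                    % ~0: H is similar to A
disp(norm(sort(eig(A)) - sort(eig(H))))   % ~0: the eigenvalues are preserved
</source>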
 
{| class="wikitable" style="text-align: center"
|-
! Method !! Applies to !! Produces !! Cost without similarity matrix !! Cost with similarity matrix !! Description
|-
| [[Householder transformation]]s || General || Hessenberg || {{math|{{frac|2''n''<sup>3</sup>|3}} + ''O''(''n''<sup>2</sup>)}}<ref name=NumericalRecipes>{{cite book
| last1 = Press
| first1 = William H.
| last2 = Teukolsky
| first2 = Saul A.
| last3 = Vetterling
| first3 = William T.
| last4 = Flannery
| first4 = Brian P.
| title = Numerical Recipes in C
| edition = 2nd
| year = 1992
| publisher = Cambridge University Press
| isbn = 0-521-43108-5
}}</ref>{{rp|page=474}} || {{math|{{frac|4''n''<sup>3</sup>|3}} + ''O''(''n''<sup>2</sup>)}}<ref name=NumericalRecipes />{{rp|page=474}} || align="left" | Reflect each column through a subspace to zero out its lower entries.
|-
| [[Givens rotation]]s || General  || Hessenberg || {{math|{{frac|4''n''<sup>3</sup>|3}} + ''O''(''n''<sup>2</sup>)}}<ref name=NumericalRecipes />{{rp|page=470}} ||  || align="left" | Apply planar rotations to zero out individual entries. Rotations are ordered so that later ones do not cause zero entries to become non-zero again.
|-
| [[Arnoldi iteration]] || General || Hessenberg ||  || || align="left" | Perform Gram–Schmidt orthogonalization on Krylov subspaces.
|-
| [[Lanczos algorithm]] || Hermitian || Tridiagonal ||  ||  || align="left" | Arnoldi iteration for hermitian matrices, with shortcuts.
|}
 
==Iterative algorithms==
 
{{Expert-subject |Mathematics|2=section |reason=to improve descriptions and fill missing entries |date=July 2012 }}
 
Iterative algorithms solve the eigenvalue problem by producing sequences that converge to the eigenvalues. Some algorithms also produce sequences of vectors that converge to the eigenvectors. Most commonly, the eigenvalue sequences are expressed as sequences of similar matrices which converge to a triangular or diagonal form, allowing the eigenvalues to be read easily. The eigenvector sequences are expressed as the corresponding similarity matrices.
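
As a rough sketch of such a sequence, the unshifted form of the [[QR algorithm]] (listed in the table below) repeatedly replaces the current iterate by a similar matrix; the symmetric example matrix and fixed iteration count here are arbitrary, and practical implementations first reduce to Hessenberg form and use shifts:
<source lang="matlab">
% Unshifted QR iteration: each iterate R*Q = Q'*(Q*R)*Q is similar to the previous one,
% and (for this symmetric example) the iterates approach a diagonal matrix.
A = [2 1 0; 1 3 1; 0 1 4];
Ak = A;
for k = 1:200
    [Q, R] = qr(Ak);
    Ak = R * Q;
end
disp(sort(diag(Ak)).')     % approximate eigenvalues of A
disp(sort(eig(A)).')       % compare with the built-in eig
</source>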
 
{| class="wikitable" style="text-align: center"
|-
! Method !! Applies to !! Produces !!Cost per step !! Convergence !! Description
|-
| [[Power iteration]] || General || eigenpair with largest value || {{math|''O''(''n''<sup>2</sup>)}} || Linear || align="left" |Repeatedly applies the matrix to an arbitrary starting vector and renormalizes.
|-
| [[Inverse iteration]] || General  || {{nowrap|eigenpair with value closest to μ}} ||  || Linear ||  align="left" |Power iteration for {{math|(''A'' - μ''I'' )<sup>-1</sup>}}
|-
| [[Rayleigh quotient iteration]] || Hermitian || eigenpair with value closest to μ ||  || Cubic ||  align="left" |Power iteration for {{math|(''A'' - μ<sub>''i''</sub>''I'' )<sup>-1</sup>,}} where {{math|μ<sub>''i''</sub>}} for each iteration is the Rayleigh quotient of the previous iteration.
|-
| width="200" | [[Preconditioned Inverse iteration]]<ref>{{Citation
| last=Neymeyr
| first=K.
| title=A geometric theory for preconditioned inverse iteration IV: On the fastest convergence cases.
| journal=Linear Algebra Appl.
| volume=415
| issue=1
| pages=114–139
| year=2006
}}</ref> or [[LOBPCG|LOBPCG algorithm]] || [[Positive-definite matrix|Positive Definite]] Real Symmetric || eigenpair with value closest to μ ||  ||  || align="left" | Inverse iteration using a [[preconditioner]] (an approximate inverse to {{math|''A''}}).
|-
| [[Bisection eigenvalue algorithm|Bisection method]] || Real Symmetric Tridiagonal  || any eigenvalue ||  || linear || align="left" | Uses the [[bisection method]] to find roots of the characteristic polynomial, supported by the Sturm sequence.
|-
| [[Laguerre iteration]] || Real Symmetric Tridiagonal  || any eigenvalue || || cubic<ref>{{Citation
| last1=Li
| first1=T. Y.
| last2=Zeng
| first2=Zhonggang
| title=Laguerre's Iteration In Solving The Symmetric Tridiagonal Eigenproblem - Revisited
| journal=[[SIAM Journal on Scientific Computing]]
| year=1992
}}</ref> || align="left" | Uses [[Laguerre's method]] to find roots of the characteristic polynomial, supported by the Sturm sequence.
|-
| rowspan="2" | [[QR algorithm]] ||rowspan="2" | Hessenberg || all eigenvalues ||  {{math|''O''(''n''<sup>2</sup>)}} ||rowspan="2" | cubic || align="left" rowspan="2" | Factors ''A'' = ''QR'', where ''Q'' is orthogonal and ''R'' is triangular, then applies the next iteration to ''RQ''.
|-
| all eigenpairs || {{math|6''n''<sup>3</sup> + ''O''(''n''<sup>2</sup>)}}
|-
| [[Jacobi eigenvalue algorithm]] || Real Symmetric || all eigenvalues ||{{math|''O''(''n''<sup>3</sup>)}} || quadratic || align="left" | Uses Givens rotations to attempt clearing all off-diagonal entries. This fails, but strengthens the diagonal.
|-
| rowspan="2" | [[Divide-and-conquer eigenvalue algorithm|Divide-and-conquer]] || rowspan="2" | Hermitian Tridiagonal || all eigenvalues || {{math|''O''(''n''<sup>2</sup>)}} || rowspan="2" | || align="left" rowspan="2" | Divides the matrix into submatrices that are diagonalized then recombined.
|-
| all eigenpairs || {{math|({{frac|4|3}})''n''<sup>3</sup> + ''O''(''n''<sup>2</sup>)}}
|-
| [[Homotopy method]] || Real Symmetric Tridiagonal || all eigenpairs || {{math|''O''(''n''<sup>2</sup>)}}<ref>{{Citation
| last=Chu
| first=Moody T.
| title=A Note on the Homotopy Method for Linear Algebraic Eigenvalue Problems
| journal=Linear Algebra Appl.
| volume=105
| pages=225–236
| year=1988
}}</ref> ||  || align="left" | Constructs a computable homotopy path from a diagonal eigenvalue problem.
|-
| [[Folded spectrum method]] || Real Symmetric || eigenpair with value closest to μ ||  ||  || align="left" | Preconditioned inverse iteration applied to {{math|(''A'' - μ''I'' )<sup>2</sup>}}
|-
| [[MRRR|MRRR algorithm]]<ref>{{Citation
| last1=Dhillon
| first1=Inderjit S.
| last2=Parlett
| first2=Beresford N.
| last3=Vömel
| first3=Christof
| title=The Design and Implementation of the MRRR Algorithm
| journal=[[ACM Transactions on Mathematical Software]]
| volume=32
| issue=4
| pages=533–560
| year=2006
}}</ref> || Real Symmetric Tridiagonal || some or all eigenpairs || {{math|''O''(''n''<sup>2</sup>)}} ||  || align="left" | "Multiple Relatively Robust Representations" - Performs inverse iteration on a [[Cholesky decomposition|''LDL''<sup>T</sup> decomposition]] of the shifted matrix.
|}
 
==Direct calculation==
 
While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated. These include:
 
===Triangular matrices===
 
Since the determinant of a [[triangular matrix]] is the product of its diagonal entries, if ''T'' is triangular, then {{nowrap|<math>\scriptstyle \mathrm{det}\left ( \lambda I - T \right ) = {\prod}_i \left ( \lambda - T_{ii} \right )</math>.}} Thus the eigenvalues of ''T'' are its diagonal entries.
 
===Factorable polynomial equations===
 
If {{math|''p''}} is any polynomial and {{math|1=''p''(''A'') = 0,}} then the eigenvalues of {{math|''A''}} also satisfy the same equation. If {{math|''p''}} happens to have a known factorization, then the eigenvalues of {{math|''A''}} lie among its roots.
 
For example, a [[Projection (linear algebra)|projection]] is a square matrix {{math|''P''}} satisfying {{math|1=''P''<sup>2</sup> = ''P''}}. The roots of the corresponding scalar polynomial equation, {{math|1=λ<sup>2</sup> = λ}}, are 0 and 1. Thus any projection has 0 and 1 for its eigenvalues. The multiplicity of 0 as an eigenvalue is the [[nullity]] of {{math|''P''}}, while the multiplicity of 1 is the rank of {{math|''P''}}.
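
A small numerical example (the projection here is arbitrary):
<source lang="matlab">
% P is an (oblique) projection onto the x-axis: P^2 = P, so its eigenvalues are 0 and 1.
P = [1 1; 0 0];
disp(norm(P*P - P))    % 0: P is a projection
disp(eig(P).')         % eigenvalues 0 and 1
</source>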
 
Another example is a matrix {{math|''A''}} that satisfies {{math|1=''A''<sup>2</sup> = α<sup>2</sup>''I''}} for some scalar {{math|α}}. The eigenvalues must be {{math|±α}}. The projection operators
:<math>P_+=\frac{1}{2}\left(I+\frac{A}{\alpha}\right)</math>
 
:<math>P_-=\frac{1}{2}\left(I-\frac{A}{\alpha}\right)</math>
satisfy
:<math>AP_+=\alpha P_+ \quad AP_-=-\alpha P_-</math>
and
:<math>P_+P_+=P_+ \quad P_-P_-=P_- \quad P_+P_-=P_-P_+=0.</math>
 
The [[column space]]s of {{math|''P''<sub>+</sub>}} and {{math|''P''<sub>-</sub>}} are the eigenspaces of {{math|''A''}} corresponding to {{math|+α}} and {{math|-α}}, respectively.
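
A quick check with a reflection matrix, which satisfies {{math|1=''A''<sup>2</sup> = ''I''}} (so {{math|1=α = 1}}; the example is arbitrary):
<source lang="matlab">
% A is a reflection, so A^2 = I and the eigenvalues are +1 and -1.
A = [0 1; 1 0];
Pp = (eye(2) + A) / 2;   % projector onto the +1 eigenspace
Pm = (eye(2) - A) / 2;   % projector onto the -1 eigenspace
disp(norm(A*Pp - Pp))    % 0: columns of Pp are eigenvectors for +1
disp(norm(A*Pm + Pm))    % 0: columns of Pm are eigenvectors for -1
</source>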
 
===2×2 matrices===
 
For dimensions 2 through 4, formulas involving radicals exist that can be used to find the eigenvalues. While a common practice for 2×2 and 3×3 matrices, for 4×4 matrices the increasing complexity of the [[Quartic function#Ferrari's solution|root formulas]] makes this approach less attractive.
 
For the 2×2 matrix
 
:<math>A = \begin{bmatrix} a  & b \\ c & d \end{bmatrix},</math>
 
the characteristic polynomial is
 
:<math>{\rm det} \begin{bmatrix} \lambda - a & -b \\ -c & \lambda - d \end{bmatrix} = \lambda^2\, -\, \left( a + d \right )\lambda\, +\, \left ( ad - bc \right ) = \lambda^2\, -\, \lambda\, {\rm tr}(A)\, +\, {\rm det}(A).</math>
 
Thus the eigenvalues can be found by using the [[quadratic formula]]:
 
:<math>\lambda = \frac{{\rm tr}(A) \pm \sqrt{{\rm tr}^2 (A) - 4 {\rm det}(A)}}{2}.</math>
 
Defining <math>\textstyle {\rm gap}\left ( A \right ) = \sqrt{{\rm tr}^2 (A) - 4 {\rm det}(A)}</math>  to be the distance between the two eigenvalues, it is straightforward to calculate
 
:<math>\frac{\part\lambda}{\part a} = \frac{1}{2}\left ( 1 \pm \frac{a - d}{{\rm gap}(A)} \right ),\qquad \frac{\part\lambda}{\part b} =  \frac{\pm c}{{\rm gap}(A)}</math>
 
with similar formulas for {{math|''c''}} and {{math|''d''}}. From this it follows that the calculation is well-conditioned if the eigenvalues are isolated.
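
These formulas translate directly into code; a short sketch using the same matrix as in the worked example below:
<source lang="matlab">
% Eigenvalues of a 2x2 matrix from its trace and determinant.
A = [4 3; -2 -3];
gap = sqrt(trace(A)^2 - 4*det(A));
lambda1 = (trace(A) + gap) / 2     % 3
lambda2 = (trace(A) - gap) / 2     % -2
</source>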
 
Eigenvectors can be found by exploiting the [[Cayley-Hamilton theorem]]. If {{math|λ<sub>1</sub>, λ<sub>2</sub>}} are the eigenvalues, then {{math|1=(''A'' - λ<sub>1</sub>''I'' )(''A'' - λ<sub>2</sub>''I'' ) = (''A'' - λ<sub>2</sub>''I'' )(''A'' - λ<sub>1</sub>''I'' ) = 0}}, so the columns of {{math|(''A'' - λ<sub>2</sub>''I'' )}} are annihilated by {{math|(''A'' - λ<sub>1</sub>''I'' )}} and vice versa. Assuming neither matrix is zero, the columns of each must include eigenvectors for the other eigenvalue. (If either matrix is zero, then {{math|''A''}} is a multiple of the identity and any non-zero vector is an eigenvector.)
 
For example, suppose
 
:<math>A = \begin{bmatrix} 4 & 3 \\ -2 & -3 \end{bmatrix},</math>
 
then {{math|1=tr(''A'') = 4 - 3 = 1}} and {{math|1=det(''A'') = 4(-3) - 3(-2) = -6}}, so the characteristic equation is
 
:<math> 0 = \lambda^2 - \lambda - 6 = (\lambda - 3)(\lambda + 2),</math>
 
and the eigenvalues are 3 and -2. Now,
 
:<math>A - 3I = \begin{bmatrix} 1 & 3 \\ -2 & -6 \end{bmatrix}, \qquad  A + 2I = \begin{bmatrix} 6 & 3 \\ -2 & -1 \end{bmatrix}.</math>
 
In both matrices, the columns are multiples of each other, so either column can be used. Thus, {{math|(1, -2)}} can be taken as an eigenvector associated with the eigenvalue -2, and {{math|(3, -1)}} as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by {{math|''A''}}.
 
===3×3 matrices===
 
If {{math|''A''}} is a 3×3 matrix, then its characteristic equation can be expressed as:
 
:<math>{\rm det} \left( \alpha I - A \right) = \alpha^3 - \alpha^2 {\rm tr}(A) - \alpha \frac{1}{2}\left( {\rm tr}(A^2) - {\rm tr}^2(A) \right) - {\rm det}(A) = 0.</math>
 
This equation may be solved using the methods of [[Cubic function#Cardano's method|Cardano]] or [[Cubic function#Lagrange's method|Lagrange]], but an affine change to {{math|''A''}} will simplify the expression considerably, and lead directly to a [[Cubic function#Trigonometric (and hyperbolic) method|trigonometric solution]]. If {{math|1=''A'' = ''pB'' + ''qI''}}, then {{math|''A''}} and {{math|''B''}} have the same eigenvectors, and {{math|''β''}} is an eigenvalue of {{math|''B''}} if and only if {{math|1=''α'' = ''pβ'' + ''q''}} is an eigenvalue of {{math|''A''}}. Letting <math>\textstyle q = {\rm tr}(A)/3</math> and <math>\textstyle p = {\rm tr} \left((A - qI)^2 / 6\right)^{1/2}</math>, gives
 
:<math>{\rm det} \left( \beta I - B \right) = \beta^3 - 3 \beta - {\rm det}(B) = 0.</math>
 
The substitution {{math|1=''β'' = 2cos ''θ''}} and some simplification using the identity {{math|1=cos 3''θ'' = 4cos<sup>3</sup> ''θ'' - 3cos ''θ''}} reduces the equation to {{math|1=cos 3''θ'' = det(''B'') / 2}}. Thus
 
:<math>\beta = 2{\rm cos}\left(\frac{1}{3}{\rm arccos}\left( {\rm det}(B)/2 \right) + \frac{2k\pi}{3}\right), \quad k = 0, 1, 2.</math>
 
If {{math|det(''B'')}} is complex or is greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of {{math|''k''}}. This issue doesn't arise when {{math|''A''}} is real and symmetric, resulting in a simple algorithm:<ref name=Smith>{{Citation |last=Smith |first=Oliver K. |title=Eigenvalues of a symmetric 3 × 3 matrix. |journal=[[Communications of the ACM]] |volume=4 |issue=4 |date=April 1961 |page=168 }}</ref>
 
<source lang="matlab">
% Given a real symmetric 3x3 matrix A, compute the eigenvalues
 
p1 = A(1,2)^2 + A(1,3)^2 + A(2,3)^2
if (p1 == 0)
  % A is diagonal.
  eig1 = A(1,1)
  eig2 = A(2,2)
  eig3 = A(3,3)
else
  q = trace(A)/3
  p2 = (A(1,1) - q)^2 + (A(2,2) - q)^2 + (A(3,3) - q)^2 + 2 * p1
  p = sqrt(p2 / 6)
  B = (1 / p) * (A - q * eye(3))   % eye(3) is the 3x3 identity matrix
  r = det(B) / 2
 
  % In exact arithmetic for a symmetric matrix  -1 <= r <= 1
  % but computation error can leave it slightly outside this range.
  if (r <= -1)
      phi = pi / 3
  elseif (r >= 1)
      phi = 0
  else
      phi = acos(r) / 3
  end
 
  % the eigenvalues satisfy eig3 <= eig2 <= eig1
  eig1 = q + 2 * p * cos(phi)
  eig3 = q + 2 * p * cos(phi + (2*pi/3))
  eig2 = 3 * q - eig1 - eig3    % since trace(A) = eig1 + eig2 + eig3
end
</source>
 
Once again, the eigenvectors of {{math|''A''}} can be obtained by recourse to the [[Cayley-Hamilton theorem]]. If {{math|''α''<sub>1</sub>, ''α''<sub>2</sub>, ''α''<sub>3</sub>}} are the eigenvalues of {{math|''A''}}, listed with multiplicity, then {{math|1=(''A'' - ''α''<sub>1</sub>''I'')(''A'' - ''α''<sub>2</sub>''I'')(''A'' - ''α''<sub>3</sub>''I'') = 0}}. Thus, when the eigenvalues are distinct, the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if {{math|1=''α''<sub>3</sub> = ''α''<sub>1</sub>}}, then {{math|1=(''A'' - ''α''<sub>1</sub>''I'')<sup>2</sup>(''A'' - ''α''<sub>2</sub>''I'') = 0}} and {{math|1=(''A'' - ''α''<sub>2</sub>''I'')(''A'' - ''α''<sub>1</sub>''I'')<sup>2</sup> = 0}}. In that case the ''generalized'' eigenspace of {{math|''α''<sub>1</sub>}} is spanned by the columns of {{math|''A'' - ''α''<sub>2</sub>''I''}} while the ordinary eigenspace is spanned by the columns of {{math|1=(''A'' - ''α''<sub>1</sub>''I'')(''A'' - ''α''<sub>2</sub>''I'')}}. The ordinary eigenspace of {{math|''α''<sub>2</sub>}} is spanned by the columns of {{math|(''A'' - ''α''<sub>1</sub>''I'')<sup>2</sup>}}.
 
For example, let
 
:<math>A = \begin{bmatrix} 3 & 2 & 6 \\ 2 & 2 & 5 \\ -2 & -1 & -4 \end{bmatrix}.</math>
 
The characteristic equation is
 
:<math> 0 = \lambda^3 - \lambda^2 - \lambda + 1 = (\lambda - 1)^2(\lambda + 1),</math>
 
with eigenvalues 1 (of multiplicity 2) and -1. Calculating,
 
:<math>A - I = \begin{bmatrix} 2 & 2 & 6 \\ 2 & 1 & 5 \\ -2 & -1 & -5 \end{bmatrix}, \qquad A + I = \begin{bmatrix} 4 & 2 & 6 \\ 2 & 3 & 5 \\ -2 & -1 & -3 \end{bmatrix}</math>
 
and
 
:<math>(A - I)^2 = \begin{bmatrix} -4 & 0 & -8 \\ -4 & 0 & -8 \\ 4 & 0 & 8 \end{bmatrix}, \qquad (A - I)(A + I) = \begin{bmatrix} 0 & 4 & 4 \\ 0 & 2 & 2 \\ 0 & -2 & -2 \end{bmatrix}</math>
 
Thus {{math|(-4, -4, 4)}} is an eigenvector for -1, and {{math|(4, 2, -2)}} is an eigenvector for 1. {{math|(2, 3, -1)}} and {{math|(6, 5, -3)}} are both generalized eigenvectors associated with 1, either one of which could be combined with {{math|(-4, -4, 4)}} and {{math|(4, 2, -2)}} to form a basis of generalized eigenvectors of {{math|''A''}}.
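
These eigenvectors can be confirmed by direct multiplication:
<source lang="matlab">
% Verify the eigenvectors found for the 3x3 example above.
A = [3 2 6; 2 2 5; -2 -1 -4];
disp((A*[-4; -4; 4] + [-4; -4; 4]).')   % zero: (-4, -4, 4) is an eigenvector for -1
disp((A*[4; 2; -2] - [4; 2; -2]).')     % zero: (4, 2, -2) is an eigenvector for 1
</source>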
 
==See also==
* [[List of numerical analysis topics#Eigenvalue algorithms|List of eigenvalue algorithms]]
 
==Notes==
{{reflist|group="note"}}
 
==References==
{{reflist}}
 
==Further reading==
*{{cite journal
| last = Bojanczyk
| first = Adam W.
| authorlink =
| coauthors =Adam Lutoborski
| title = Computation of the Euler angles of a symmetric 3X3 matrix
| journal = SIAM Journal on Matrix Analysis and Applications
| volume = 12
| issue = 1
| pages = 41–48
| publisher =
| location =
| date = Jan 1991
| jstor =
| issn =
| doi = }}
 
{{Numerical linear algebra}}
 
{{DEFAULTSORT:Eigenvalue Algorithm}}
[[Category:Numerical linear algebra]]
