In [[linear algebra]] an ''n''-by-''n'' (square) [[matrix (mathematics)|matrix]] '''A''' is called '''invertible''' (some authors use '''nonsingular''' or '''nondegenerate''') if there exists an ''n''-by-''n'' matrix '''B''' such that

:<math>\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n \ </math>

where '''I'''<sub>''n''</sub> denotes the ''n''-by-''n'' [[identity matrix]] and the multiplication used is ordinary [[matrix multiplication]]. If this is the case, then the matrix '''B''' is uniquely determined by '''A''' and is called the '''''inverse''''' of '''A''', denoted by '''A'''<sup>−1</sup>. It follows from the theory of matrices that if

:<math>\mathbf{AB} = \mathbf{I} \ </math>

for ''finite square'' matrices '''A''' and '''B''', then also

:<math>\mathbf{BA} = \mathbf{I}. \ </math><ref>{{Cite book | last1=Horn | first1=Roger A. | last2=Johnson | first2=Charles R. | title=Matrix Analysis | publisher=[[Cambridge University Press]] | isbn=978-0-521-38632-6 | year=1985 | page=14 | postscript=<!--None-->}}.</ref>

Non-square matrices (''m''-by-''n'' matrices for which ''m'' ≠ ''n'') do not have an inverse. However, in some cases such a matrix may have a [[Inverse element#Matrices|left inverse]] or [[Inverse element#Matrices|right inverse]]. If '''A''' is ''m''-by-''n'' and the [[rank (linear algebra)|rank]] of '''A''' is equal to ''n'', then '''A''' has a left inverse: an ''n''-by-''m'' matrix '''B''' such that '''BA''' = '''I'''. If '''A''' has rank ''m'', then it has a right inverse: an ''n''-by-''m'' matrix '''B''' such that '''AB''' = '''I'''.

{{anchor|singular}} A square matrix that is not invertible is called '''singular''' or '''degenerate'''. A square matrix is singular [[if and only if]] its [[determinant]] is 0. Singular matrices are rare in the sense that a square matrix randomly selected from a [[continuous uniform distribution]] on its entries will [[almost never]] be singular.

While the most common case is that of matrices over the [[real number|real]] or [[complex number|complex]] numbers, all these definitions can be given for matrices over any [[ring (mathematics)|commutative ring]]. However, in this case the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a much stricter requirement than being nonzero. The conditions for the existence of a left inverse or a right inverse are more complicated, since a notion of rank does not exist over rings.

'''Matrix inversion''' is the process of finding the matrix '''B''' that satisfies the prior equation for a given invertible matrix '''A'''.

== Properties ==

=== The invertible matrix theorem ===

Let '''A''' be a square ''n''-by-''n'' matrix over a [[field (mathematics)|field]] ''K'' (for example the field '''R''' of real numbers). The following statements are equivalent:

: '''A''' is invertible, i.e. '''A''' has an inverse, is nonsingular, or is nondegenerate.
: '''A''' is [[Row equivalence|row-equivalent]] to the ''n''-by-''n'' [[identity matrix]] '''I'''<sub>''n''</sub>.
: '''A''' is [[Row equivalence|column-equivalent]] to the ''n''-by-''n'' [[identity matrix]] '''I'''<sub>''n''</sub>.
: '''A''' has ''n'' [[pivot position]]s.
: [[determinant|det]] '''A''' ≠ 0. In general, a square matrix over a [[commutative ring]] is invertible if and only if its [[determinant]] is a [[unit (ring theory)|unit]] in that ring.
: '''A''' has full rank; that is, [[rank (linear algebra)|rank]] '''A''' = ''n''.
: The equation '''Ax''' = '''0''' has only the trivial solution '''x''' = '''0'''.
: [[null space|Null]] '''A''' = {0}.
: The equation '''Ax''' = '''b''' has exactly one solution for each '''b''' in ''K<sup>n</sup>''.
: The columns of '''A''' are [[linear independence|linearly independent]].
: The columns of '''A''' [[linear span|span]] ''K<sup>n</sup>''.
: Col '''A''' = ''K<sup>n</sup>''.
: The columns of '''A''' form a [[basis of a vector space|basis]] of ''K<sup>n</sup>''.
: The linear transformation mapping '''x''' to '''Ax''' is a [[bijection]] from ''K<sup>n</sup>'' to ''K<sup>n</sup>''.
: There is an ''n''-by-''n'' matrix '''B''' such that '''AB''' = '''I'''<sub>''n''</sub> = '''BA'''.
: The [[transpose]] '''A'''<sup>T</sup> is an invertible matrix (hence rows of '''A''' are [[linear independence|linearly independent]], span ''K<sup>n</sup>'', and form a [[basis of a vector space|basis]] of ''K<sup>n</sup>'').
: The number 0 is not an [[eigenvalue]] of '''A'''.
: The matrix '''A''' can be expressed as a finite product of [[elementary matrix|elementary matrices]].
: The matrix '''A''' has a left inverse (i.e. there exists a '''B''' such that '''BA''' = '''I''') ''or'' a right inverse (i.e. there exists a '''C''' such that '''AC''' = '''I'''), in which case both left and right inverses exist and '''B''' = '''C''' = '''A'''<sup>−1</sup>.

=== Other properties ===

Furthermore, the following properties hold for an invertible matrix '''A''':
* ('''A'''<sup>−1</sup>)<sup>−1</sup> = '''A''';
* (''k'''''A''')<sup>−1</sup> = ''k''<sup>−1</sup>'''A'''<sup>−1</sup> for nonzero scalar ''k'';
* ('''A'''<sup>T</sup>)<sup>−1</sup> = ('''A'''<sup>−1</sup>)<sup>T</sup>;
* For any invertible ''n''-by-''n'' matrices '''A''' and '''B''', ('''AB''')<sup>−1</sup> = '''B'''<sup>−1</sup>'''A'''<sup>−1</sup>. More generally, if '''A'''<sub>1</sub>, ..., '''A'''<sub>''k''</sub> are invertible ''n''-by-''n'' matrices, then ('''A'''<sub>1</sub>'''A'''<sub>2</sub>⋯'''A'''<sub>''k''−1</sub>'''A'''<sub>''k''</sub>)<sup>−1</sup> = '''A'''<sub>''k''</sub><sup>−1</sup>'''A'''<sub>''k''−1</sub><sup>−1</sup>⋯'''A'''<sub>2</sub><sup>−1</sup>'''A'''<sub>1</sub><sup>−1</sup>;
* det('''A'''<sup>−1</sup>) = det('''A''')<sup>−1</sup>.

'''A matrix that is its own inverse''', i.e. '''A''' = '''A'''<sup>−1</sup> and '''A'''<sup>2</sup> = '''I''', is called an [[Involutory matrix|involution]].

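These identities are straightforward to check numerically; the following is a minimal sketch in Python with NumPy, using arbitrary example matrices:

<syntaxhighlight lang="python">
import numpy as np

# Numerical spot-check of the identities above, on arbitrary example matrices.
inv, det = np.linalg.inv, np.linalg.det
a = np.array([[1.0, 2.0], [3.0, 5.0]])   # det = -1, so invertible
b = np.array([[2.0, 1.0], [1.0, 1.0]])   # det =  1, so invertible

print(np.allclose(inv(inv(a)), a))               # (A^-1)^-1 = A
print(np.allclose(inv(3 * a), inv(a) / 3))       # (kA)^-1 = k^-1 A^-1
print(np.allclose(inv(a.T), inv(a).T))           # (A^T)^-1 = (A^-1)^T
print(np.allclose(inv(a @ b), inv(b) @ inv(a)))  # (AB)^-1 = B^-1 A^-1
print(np.isclose(det(inv(a)), 1 / det(a)))       # det(A^-1) = det(A)^-1
</syntaxhighlight>
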
=== Density ===

Over the field of real numbers, the set of singular ''n''-by-''n'' matrices, considered as a subset of '''R'''<sup>''n''×''n''</sup>, is a [[null set]], i.e., has [[Lebesgue measure|Lebesgue]] [[measure zero]]. This is true because singular matrices are the roots of the polynomial function in the entries of the matrix given by the [[determinant]]. Thus in the language of [[measure theory]], [[almost all]] ''n''-by-''n'' matrices are invertible.

Furthermore, the ''n''-by-''n'' invertible matrices are a [[dense set|dense]] [[open set]] in the [[topological space]] of all ''n''-by-''n'' matrices. Equivalently, the set of singular matrices is [[closed set|closed]] and [[nowhere dense]] in the space of ''n''-by-''n'' matrices.

In practice, however, one may encounter non-invertible matrices. In [[numerical analysis|numerical calculations]], matrices which are invertible but close to a non-invertible matrix can still be problematic; such matrices are said to be [[Condition number#Matrices|ill-conditioned]].

== Methods of matrix inversion ==

=== Gaussian elimination ===

[[Gauss–Jordan elimination]] is an [[algorithm]] that can be used to determine whether a given matrix is invertible and to find the inverse. An alternative is the [[LU decomposition]], which generates upper and lower triangular matrices that are easier to invert.

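As an illustration, here is a minimal Python/NumPy sketch of Gauss–Jordan inversion with partial pivoting (the function name and tolerance are illustrative; in practice a library routine such as <code>numpy.linalg.inv</code> would be used). The matrix is augmented with the identity and row-reduced, so the right half ends up holding the inverse:

<syntaxhighlight lang="python">
import numpy as np

def invert_gauss_jordan(a, tol=1e-12):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    aug = np.hstack([a, np.eye(n)])                      # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))  # partial pivoting
        if abs(aug[pivot, col]) < tol:
            raise ValueError("matrix is singular to working precision")
        aug[[col, pivot]] = aug[[pivot, col]]            # swap pivot row into place
        aug[col] /= aug[col, col]                        # scale pivot row to 1
        for row in range(n):
            if row != col:                               # clear the rest of the column
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                                    # right half is now A^-1

print(invert_gauss_jordan(np.array([[2.0, 1.0], [7.0, 4.0]])))
# [[ 4. -1.]
#  [-7.  2.]]
</syntaxhighlight>
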
=== Newton's method ===

A generalisation of [[Newton's method]] as used for a [[Multiplicative inverse#Algorithms|multiplicative inverse algorithm]] may be convenient if a suitable starting seed can be found:

:<math>X_{k+1} = 2X_k - X_k A X_k.</math>

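A sketch of the iteration in Python; the seed <math>X_0 = \mathbf{A}^T/(\|\mathbf{A}\|_1\|\mathbf{A}\|_\infty)</math> used here is one commonly cited choice from the literature on such iterations, not something prescribed by the formula above:

<syntaxhighlight lang="python">
import numpy as np

def newton_inverse(a, iters=30):
    """Newton iteration X_{k+1} = 2 X_k - X_k A X_k for A^-1 (a sketch)."""
    a = np.asarray(a, dtype=float)
    # Seed X_0 = A^T / (||A||_1 * ||A||_inf), a standard convergent choice.
    x = a.T / (np.linalg.norm(a, 1) * np.linalg.norm(a, np.inf))
    for _ in range(iters):
        x = 2 * x - x @ a @ x        # error roughly squares at each step
    return x

a = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.round(newton_inverse(a), 6))   # [[ 0.3 -0.1]
                                        #  [-0.2  0.4]]
</syntaxhighlight>
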
[[Victor Pan]] and [[John Reif]] have done work that includes ways of generating a starting seed. Otherwise, the method may be adapted to use the starting seed from a trivial starting case by using a [[homotopy]] to "walk" in small steps from that to the matrix needed, "dragging" the inverses with them:

:<math>X_{k+1} = 2X_k - X_k A_{k+1} X_k,</math> where <math>A_0 = S,</math> <math>X_0 = S^{-1},</math> and <math>A_N = A</math> for some terminating ''N'', perhaps followed by another few iterations at ''A'' to settle the inverse.

Using this simplistically on real-valued matrices would lead the homotopy through a degenerate matrix about half the time, so complex-valued matrices should be used to bypass that, e.g. by using a starting seed ''S'' that has ''i'' in the first entry, ''1'' on the rest of the leading diagonal, and ''0'' elsewhere. If complex arithmetic is not directly available, it may be emulated at a small cost in computer memory by replacing each complex matrix element ''a'' + ''bi'' with a 2×2 real-valued submatrix of the form <math>\begin{bmatrix} a & b \\ -b & a \end{bmatrix}</math> (see [[square root of a matrix]]).

Newton's method is particularly useful when dealing with families of related matrices that behave enough like the sequence manufactured for the homotopy above: sometimes a good starting point for refining an approximation for the new inverse can be the already obtained inverse of a previous matrix that nearly matches the current matrix, e.g. the pair of sequences of inverse matrices used in obtaining [[Matrix_square_root#By_Denman.E2.80.93Beavers_iteration|matrix square roots by Denman–Beavers iteration]]; this may need more than one pass of the iteration at each new matrix, if they are not close enough together for just one to be enough. [[Newton's method]] is also useful for "touch up" corrections to the [[Gauss–Jordan elimination|Gauss–Jordan algorithm]] which has been contaminated by small errors due to [[Round-off error|imperfect computer arithmetic]].

=== Cayley–Hamilton method ===

The [[Cayley–Hamilton theorem]] allows the inverse of '''A''' to be expressed in terms of det('''A'''), traces and powers of '''A''':

:<math> \mathbf{A}^{-1} = \frac{1}{\det (\mathbf{A})}\sum_{s=0}^{n-1}\mathbf{A}^{s}\sum_{k_1,k_2,\ldots ,k_{n-1}}\prod_{l=1}^{n-1} \frac{(-1)^{k_l+1}}{l^{k_l}k_{l}!}\mathrm{tr}(\mathbf{A}^l)^{k_l},</math>

where ''n'' is the dimension of '''A''', and the sum is taken over ''s'' and the sets of all ''k<sub>l</sub>'' ≥ 0 satisfying the linear [[Diophantine equation]]

:<math>s+\sum_{l=1}^{n-1}lk_{l} = n - 1.</math>

=== Eigendecomposition ===

{{main|Eigendecomposition}}

If matrix '''A''' can be eigendecomposed and if none of its eigenvalues are zero, then '''A''' is [[nonsingular]] and its inverse is given by

:<math>\mathbf{A}^{-1}=\mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1} </math>

where '''Q''' is the square (''N''×''N'') matrix whose ''i''<sup>th</sup> column is the eigenvector <math>q_i</math> of '''A''' and '''Λ''' is the [[diagonal matrix]] whose diagonal elements are the corresponding eigenvalues, ''i.e.'', <math>\Lambda_{ii}=\lambda_i</math>. Furthermore, because '''Λ''' is a [[diagonal matrix]], its inverse is easy to calculate:

:<math>\left[\Lambda^{-1}\right]_{ii}=\frac{1}{\lambda_i}</math>

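A short NumPy sketch of this route (the example matrix is arbitrary; <code>numpy.linalg.eig</code> supplies the eigenvalues and the matrix '''Q''' of eigenvectors):

<syntaxhighlight lang="python">
import numpy as np

# Sketch: A = Q diag(lambda) Q^-1  =>  A^-1 = Q diag(1/lambda) Q^-1.
a = np.array([[2.0, 1.0], [1.0, 2.0]])      # arbitrary diagonalizable example
eigvals, q = np.linalg.eig(a)
if np.any(np.isclose(eigvals, 0.0)):
    raise ValueError("zero eigenvalue: matrix is singular")
a_inv = q @ np.diag(1.0 / eigvals) @ np.linalg.inv(q)
print(np.allclose(a_inv @ a, np.eye(2)))    # True
</syntaxhighlight>
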
=== Cholesky decomposition ===

{{main|Cholesky decomposition}}

If matrix '''A''' is [[Positive definite matrix|positive definite]], then its inverse can be obtained as

:<math>\mathbf{A}^{-1} = (\mathbf{L}^{*})^{-1} \mathbf{L}^{-1} , </math>

where '''L''' is the lower triangular [[Cholesky decomposition]] of '''A''', and '''L'''* denotes the conjugate transpose of '''L'''.

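A brief NumPy sketch (the positive-definite example matrix is arbitrary; a triangular solve would be preferable in practice to forming '''L'''<sup>−1</sup> explicitly):

<syntaxhighlight lang="python">
import numpy as np

# Sketch: A = L L*  =>  A^-1 = (L*)^-1 L^-1 for positive-definite A.
a = np.array([[4.0, 2.0], [2.0, 3.0]])       # arbitrary positive-definite example
l = np.linalg.cholesky(a)                    # lower-triangular factor L
l_inv = np.linalg.inv(l)
a_inv = l_inv.conj().T @ l_inv               # (L*)^-1 L^-1
print(np.allclose(a_inv @ a, np.eye(2)))     # True
</syntaxhighlight>
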
=== Analytic solution ===

{{Main|Cramer's rule}}

Writing the transpose of the [[matrix of cofactors]], known as an [[adjugate matrix]], can also be an efficient way to calculate the inverse of ''small'' matrices, but this recursive method is inefficient for large matrices. To determine the inverse, we calculate a matrix of cofactors:

:<math>\mathbf{A}^{-1}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\mathbf{C}^{\mathrm{T}}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}
\begin{pmatrix}
\mathbf{C}_{11} & \mathbf{C}_{21} & \cdots & \mathbf{C}_{n1} \\
\mathbf{C}_{12} & \mathbf{C}_{22} & \cdots & \mathbf{C}_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{C}_{1n} & \mathbf{C}_{2n} & \cdots & \mathbf{C}_{nn} \\
\end{pmatrix}</math>

so that

:<math>\left(\mathbf{A}^{-1}\right)_{ij}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\left(\mathbf{C}^{\mathrm{T}}\right)_{ij}={1 \over \begin{vmatrix}\mathbf{A}\end{vmatrix}}\left(\mathbf{C}_{ji}\right)</math>

where |'''A'''| is the [[determinant]] of '''A''', '''C''' is the [[matrix of cofactors]], and '''C'''<sup>T</sup> represents the matrix [[transpose]].

==== Inversion of 2×2 matrices ====

The ''cofactor equation'' listed above yields the following result for 2×2 matrices. Inversion of these matrices can be done easily as follows:<ref>{{cite book |title=Introduction to linear algebra |edition=3rd |first1=Gilbert |last1=Strang |publisher=SIAM |year=2003 |isbn=0-9614088-9-8 |page=71 |url=http://books.google.com/books?id=Gv4pCVyoUVYC}}, [http://books.google.com/books?id=Gv4pCVyoUVYC&pg=PA71 Chapter 2, page 71]</ref>

:<math>\mathbf{A}^{-1} = \begin{bmatrix}
a & b \\ c & d \\
\end{bmatrix}^{-1} =
\frac{1}{\det(\mathbf{A})} \begin{bmatrix}
\,\,\,d & \!\!-b \\ -c & \,a \\
\end{bmatrix} =
\frac{1}{ad - bc} \begin{bmatrix}
\,\,\,d & \!\!-b \\ -c & \,a \\
\end{bmatrix}.</math>

This is possible because 1/(''ad'' − ''bc'') is the reciprocal of the determinant of the matrix in question, and the same strategy could be used for other matrix sizes.

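The formula translates directly into code; a minimal sketch (the function name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def inv2(m):
    """Invert a 2x2 matrix via the cofactor formula above (a sketch)."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

print(inv2(np.array([[1.0, 2.0], [3.0, 4.0]])))
# [[-2.   1. ]
#  [ 1.5 -0.5]]
</syntaxhighlight>
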
The Cayley–Hamilton method gives

:<math>\mathbf{A}^{-1}=\frac{1}{\det (\mathbf{A})}\left[ \left(\mathrm{tr}\mathbf{A}\right)\mathbf{I}-\mathbf{A}\right].</math>

==== Inversion of 3×3 matrices ====

A computationally efficient 3×3 matrix inversion is given by

:<math>\mathbf{A}^{-1} = \begin{bmatrix}
a & b & c\\ d & e & f \\ g & h & i\\
\end{bmatrix}^{-1} =
\frac{1}{\det(\mathbf{A})} \begin{bmatrix}
\, A & \, B & \,C \\ \, D & \, E & \, F \\ \, G & \, H & \, I\\
\end{bmatrix}^T =
\frac{1}{\det(\mathbf{A})} \begin{bmatrix}
\, A & \, D & \,G \\ \, B & \, E & \,H \\ \, C & \,F & \, I\\
\end{bmatrix}</math>

where the determinant of '''A''' can be computed by applying the [[rule of Sarrus]] as follows:

:<math>\det(\mathbf{A}) = a(ei-fh)-b(id-fg)+c(dh-eg).</math>

If the determinant is non-zero, the matrix is invertible, with the elements of the above matrix on the right side given by

:<math>\begin{matrix}
A = (ei-fh) & D = -(bi-ch) & G = (bf-ce) \\
B = -(di-fg) & E = (ai-cg) & H = -(af-cd) \\
C = (dh-eg) & F = -(ah-bg) & I = (ae-bd) \\
\end{matrix}</math>

The Cayley–Hamilton decomposition gives

:<math>\mathbf{A}^{-1}=\frac{1}{\det (\mathbf{A})}\left[ \frac{1}{2}\left( (\mathrm{tr}\mathbf{A})^{2}-\mathrm{tr}\mathbf{A}^{2}\right)\mathbf{I} -\mathbf{A}\,\mathrm{tr}\mathbf{A}+\mathbf{A}^{2}\right].</math>

The general 3×3 inverse can be expressed concisely in terms of the [[cross product]] and [[triple product]]:

If a matrix <math>\mathbf{A}=\left[\mathbf{x_0},\;\mathbf{x_1},\;\mathbf{x_2}\right]</math> (consisting of three column vectors, <math>\mathbf{x_0}</math>, <math>\mathbf{x_1}</math>, and <math>\mathbf{x_2}</math>) is invertible, its inverse is given by

:<math>\mathbf{A}^{-1}=\frac{1}{\det(\mathbf A)}\begin{bmatrix}
{(\mathbf{x_1}\times\mathbf{x_2})}^{T} \\
{(\mathbf{x_2}\times\mathbf{x_0})}^{T} \\
{(\mathbf{x_0}\times\mathbf{x_1})}^{T} \\
\end{bmatrix}.</math>

Note that <math>\det(\mathbf{A})</math> is equal to the triple product of <math>\mathbf{x_0}</math>, <math>\mathbf{x_1}</math>, and <math>\mathbf{x_2}</math>, the volume of the [[parallelepiped]] formed by the rows or columns:

:<math>\det(\mathbf{A})=\mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2}).</math>

The correctness of the formula can be checked by using cross- and triple-product properties and by noting that for groups, left and right inverses always coincide. Intuitively, because of the cross products, each row of <math>\mathbf{A}^{-1}</math> is orthogonal to the two non-corresponding columns of <math>\mathbf{A}</math> (causing the off-diagonal terms of <math>I=\mathbf{A}^{-1}\mathbf{A}</math> to be zero). Dividing by

:<math>\det(\mathbf{A})=\mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2})</math>

causes the diagonal elements of <math>I=\mathbf{A}^{-1}\mathbf{A}</math> to be unity. For example, the first diagonal element is:

:<math>1 = \frac{1}{\mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2})} \mathbf{x_0}\cdot(\mathbf{x_1}\times\mathbf{x_2}).</math>

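The cross-product form also translates directly into code; a NumPy sketch with an arbitrary invertible example:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: rows of A^-1 are (x1 x x2)^T, (x2 x x0)^T, (x0 x x1)^T over det(A).
a = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])            # arbitrary example with det(A) = 1
x0, x1, x2 = a.T                           # the three column vectors of A
det = x0 @ np.cross(x1, x2)                # scalar triple product = det(A)
a_inv = np.array([np.cross(x1, x2),
                  np.cross(x2, x0),
                  np.cross(x0, x1)]) / det
print(np.allclose(a_inv @ a, np.eye(3)))   # True
</syntaxhighlight>
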
==== Inversion of 4×4 matrices ====

With increasing dimension, expressions for the inverse of '''A''' get complicated. For ''n'' = 4 the Cayley–Hamilton method leads to an expression that is still tractable:

:<math> \mathbf{A}^{-1}=\frac{1}{\det (\mathbf{A})}\left[ \frac{1}{6}\left( (\mathrm{tr}\mathbf{A})^{3}-3\mathrm{tr}\mathbf{A}\,\mathrm{tr}\mathbf{A}^{2}+2\mathrm{tr}\mathbf{A}^{3}\right)\mathbf{I} -\frac{1}{2}\mathbf{A}\left( (\mathrm{tr}\mathbf{A})^{2}-\mathrm{tr}\mathbf{A}^{2}\right) +\mathbf{A}^{2}\mathrm{tr}\mathbf{A}-\mathbf{A}^{3}\right]. </math>

=== Blockwise inversion ===

Matrices can also be ''inverted blockwise'' by using the following analytic inversion formula:

{| border="0" cellpadding="0" cellspacing="0" width="100%"
|-
| align="left" |
:<math>
\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{A}^{-1}+\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & -\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \\ -(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \end{bmatrix}
</math>
| align="right" | <math>(1)\,</math>
|}

where '''A''', '''B''', '''C''' and '''D''' are [[block matrix|matrix sub-blocks]] of arbitrary size. ('''A''' and '''D''' must be square, so that they can be inverted. Furthermore, '''A''' and '''D''' − '''CA'''<sup>−1</sup>'''B''' must be nonsingular.<ref>{{cite book | last = Bernstein | first = Dennis | title = Matrix Mathematics | publisher = Princeton University Press | year = 2005 | pages = 44 | isbn = 0-691-11802-7 }}</ref>) This strategy is particularly advantageous if '''A''' is diagonal and '''D''' − '''CA'''<sup>−1</sup>'''B''' (the [[Schur complement]] of '''A''') is a small matrix, since they are the only matrices requiring inversion. This technique was reinvented several times and is due to [[Hans Boltz]] (1923),{{Citation needed|date=December 2009}} who used it for the inversion of geodetic matrices, and [[Tadeusz Banachiewicz]] (1937), who generalized it and proved its correctness.

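A NumPy sketch of Equation (1), assuming, as stated above, that '''A''' and the Schur complement are nonsingular (the test matrix is an arbitrary well-conditioned example):

<syntaxhighlight lang="python">
import numpy as np

def blockwise_inverse(a, b, c, d):
    """Invert [[A, B], [C, D]] via Equation (1) (a sketch)."""
    a_inv = np.linalg.inv(a)
    s_inv = np.linalg.inv(d - c @ a_inv @ b)     # inverse of the Schur complement
    return np.block([
        [a_inv + a_inv @ b @ s_inv @ c @ a_inv, -a_inv @ b @ s_inv],
        [-s_inv @ c @ a_inv,                     s_inv],
    ])

rng = np.random.default_rng(0)
m = 10 * np.eye(4) + rng.standard_normal((4, 4))   # arbitrary well-conditioned example
blocks = m[:2, :2], m[:2, 2:], m[2:, :2], m[2:, 2:]
print(np.allclose(blockwise_inverse(*blocks), np.linalg.inv(m)))  # True
</syntaxhighlight>
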
The [[nullity theorem]] says that the nullity of '''A''' equals the nullity of the sub-block in the lower right of the inverse matrix, and that the nullity of '''B''' equals the nullity of the sub-block in the upper right of the inverse matrix.

The inversion procedure that led to Equation (1) performed matrix block operations that operated on '''C''' and '''D''' first. Instead, if '''A''' and '''B''' are operated on first, and provided '''D''' and '''A''' − '''BD'''<sup>−1</sup>'''C''' are nonsingular,<ref>{{cite book | last = Bernstein | first = Dennis | title = Matrix Mathematics | publisher = Princeton University Press | year = 2005 | pages = 45 | isbn = 0-691-11802-7 }}</ref> the result is

{| border="0" cellpadding="0" cellspacing="0" width="100%"
|-
| align="left" |
:<math>
\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}^{-1} = \begin{bmatrix} (\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} & -(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} & \quad \mathbf{D}^{-1}+\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1}\end{bmatrix}.
</math>
| align="right" | <math>(2)\,</math>
|}

Equating Equations (1) and (2) leads to

{| border="0" cellpadding="0" cellspacing="0" width="100%"
|-
| align="left" |
:<math>
(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} = \mathbf{A}^{-1}+\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1}\,
</math>
| align="right" | <math>(3)\,</math>
|}

:<math>
(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1} = \mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\,
</math>
:<math>
\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1} = (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1}\,
</math>
:<math>
\mathbf{D}^{-1}+\mathbf{D}^{-1}\mathbf{C}(\mathbf{A}-\mathbf{BD}^{-1}\mathbf{C})^{-1}\mathbf{BD}^{-1} = (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\,
</math>

where Equation (3) is the matrix inversion lemma, which is equivalent to the [[binomial inverse theorem]].

Since a blockwise inversion of an {{nobreak|{{var|n}}×{{var|n}}}} matrix requires inversion of two half-sized matrices and 6 multiplications between two half-sized matrices, it can be shown that a [[divide and conquer algorithm]] that uses blockwise inversion to invert a matrix runs with the same time complexity as the matrix multiplication algorithm that is used internally.<ref>T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, ''Introduction to Algorithms'', 3rd ed., MIT Press, Cambridge, MA, 2009, §28.2.</ref> There exist [[matrix multiplication#Algorithms for efficient matrix multiplication|matrix multiplication algorithms]] with a complexity of {{math|''O''(''n''<sup>2.3727</sup>)}} operations, while the best proven lower bound is {{math|''[[Big O notation#Family of Bachmann–Landau notations|Ω]]''({{var|n}}{{sup|2}} log {{var|n}})}}.<ref>[[Ran Raz]]. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. ACM Press, 2002. {{doi|10.1145/509907.509932}}.</ref>

=== By Neumann series ===

If a matrix '''A''' has the property that

:<math>\lim_{n \to \infty} (\mathbf I - \mathbf A)^n = 0</math>

then '''A''' is nonsingular and its inverse may be expressed by a [[Neumann series]]:<ref>{{cite book | last = Stewart | first = Gilbert | title = Matrix Algorithms: Basic decompositions | publisher = SIAM | year = 1998 | pages = 55 | isbn = 0-89871-414-1}}</ref>

:<math>\mathbf A^{-1} = \sum_{n = 0}^\infty (\mathbf I - \mathbf A)^n.</math>

Truncating the sum results in an "approximate" inverse which may be useful as a [[preconditioner]]. Note that a truncated series can be accelerated exponentially by noting that the Neumann series is a geometric sum. Therefore, to compute <math>2^L</math> terms, one merely needs the repeated squares <math>(\mathbf I - \mathbf A)^{2^k}</math> for <math>k = 0, 1, \ldots, L-1</math>, which can be found through ''L'' matrix multiplications. Another ''L'' matrix multiplications are then needed to obtain the final result by multiplying the partial products together, so 2''L'' matrix multiplications suffice to compute <math>2^L</math> terms of the sum.

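A sketch of the accelerated truncation in Python, using the geometric-sum factorization <math>\textstyle\sum_{n=0}^{2^L-1} x^n = \prod_{k=0}^{L-1}\left(1+x^{2^k}\right)</math> with <math>x = \mathbf I - \mathbf A</math> (the convergence condition above must hold for the example):

<syntaxhighlight lang="python">
import numpy as np

def neumann_inverse(a, l=8):
    """Approximate A^-1 by 2**l Neumann-series terms in 2*l multiplications."""
    n = a.shape[0]
    result = np.eye(n)
    power = np.eye(n) - a                      # (I - A)^(2^k), starting at k = 0
    for _ in range(l):
        result = result @ (np.eye(n) + power)  # fold in the next geometric factor
        power = power @ power                  # square for the next round
    return result

rng = np.random.default_rng(0)
a = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # (I - A)^n -> 0 holds here
print(np.allclose(neumann_inverse(a), np.linalg.inv(a)))  # True
</syntaxhighlight>
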
More generally, if '''A''' is "near" the invertible matrix '''X''' in the sense that

:<math>\lim_{n \to \infty} (\mathbf I - \mathbf X^{-1} \mathbf A)^n = 0 \mathrm{~~or~~} \lim_{n \to \infty} (\mathbf I - \mathbf A \mathbf X^{-1})^n = 0</math>

then '''A''' is nonsingular and its inverse is

:<math>\mathbf A^{-1} = \sum_{n = 0}^\infty \left(\mathbf X^{-1} (\mathbf X - \mathbf A)\right)^n \mathbf X^{-1}~.</math>

If it is also the case that '''A''' − '''X''' has [[rank (linear algebra)|rank]] 1 then this simplifies to

:<math>\mathbf A^{-1} = \mathbf X^{-1} - \frac{\mathbf X^{-1} (\mathbf A - \mathbf X) \mathbf X^{-1}}{1+\operatorname{tr}(\mathbf X^{-1} (\mathbf A - \mathbf X))}~.</math>

== Derivative of the matrix inverse ==

Suppose that the invertible matrix '''A''' depends on a parameter ''t''. Then the derivative of the inverse of '''A''' with respect to ''t'' is given by

:<math> \frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t} = - \mathbf{A}^{-1} \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \mathbf{A}^{-1}. </math>

To derive the above expression, one can differentiate the definition of the matrix inverse <math>\mathbf{A}^{-1}\mathbf{A}=\mathbf{I}</math> and then solve for the derivative of the inverse of '''A''':

:<math>\frac{\mathrm{d}(\mathbf{A}^{-1}\mathbf{A})}{\mathrm{d}t}
=\frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t}\mathbf{A}
+\mathbf{A}^{-1}\frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t}
=\frac{\mathrm{d}\mathbf{I}}{\mathrm{d}t}
=\mathbf{0}.</math>

Subtracting <math>\mathbf{A}^{-1}\frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t}</math> from both sides of the above and multiplying on the right by <math>\mathbf{A}^{-1}</math> gives the correct expression for the derivative of the inverse:

:<math> \frac{\mathrm{d}\mathbf{A}^{-1}}{\mathrm{d}t} = - \mathbf{A}^{-1} \frac{\mathrm{d}\mathbf{A}}{\mathrm{d}t} \mathbf{A}^{-1}. </math>

Similarly, if <math>\epsilon</math> is a small number then

:<math>\left(\mathbf{A} + \epsilon\mathbf{X}\right)^{-1}
= \mathbf{A}^{-1}
- \epsilon \mathbf{A}^{-1} \mathbf{X} \mathbf{A}^{-1} + \mathcal{O}(\epsilon^2)\,.</math>

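The identity can be checked numerically with a finite difference; a small NumPy sketch for <math>\mathbf{A}(t) = \mathbf{A}_0 + t\mathbf{X}</math> at <math>t = 0</math>, with arbitrary example matrices:

<syntaxhighlight lang="python">
import numpy as np

# Finite-difference check of dA^-1/dt = -A^-1 (dA/dt) A^-1 for A(t) = A0 + t X.
rng = np.random.default_rng(1)
a0 = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # arbitrary invertible A0
x = rng.standard_normal((3, 3))                      # dA/dt
eps = 1e-6
fd = (np.linalg.inv(a0 + eps * x) - np.linalg.inv(a0)) / eps
formula = -np.linalg.inv(a0) @ x @ np.linalg.inv(a0)
print(np.allclose(fd, formula, atol=1e-4))           # True
</syntaxhighlight>
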
== {{anchor|Moore-Penrose pseudoinverse}}Moore–Penrose pseudoinverse ==

Some of the properties of inverse matrices are shared by [[Moore–Penrose pseudoinverse]]s, which can be defined for any ''m''-by-''n'' matrix.

== Applications ==

For most practical applications, it is ''not'' necessary to invert a matrix to solve a [[system of linear equations]]; however, for a unique solution, it ''is'' necessary that the matrix involved be invertible.

Decomposition techniques like [[LU decomposition]] are much faster than inversion, and various fast algorithms for special classes of linear systems have also been developed.

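For example, a single system <math>\mathbf{Ax} = \mathbf{b}</math> is normally handled by a factorization-based solver rather than by forming <math>\mathbf{A}^{-1}</math>; a NumPy sketch:

<syntaxhighlight lang="python">
import numpy as np

# Prefer a solver over explicit inversion for A x = b.
a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(a, b)          # LU-based solve; faster and more accurate
print(x, np.allclose(a @ x, b))    # [2. 3.] True
</syntaxhighlight>
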
=== Matrix inverses in real-time simulations ===

Matrix inversion plays a significant role in [[computer graphics]], particularly in [[3D graphics]] rendering and 3D simulations. Examples include screen-to-world ray casting, world-to-subspace-to-world object transformations, and physical simulations.

=== Matrix inverses in MIMO wireless communication ===

Matrix inversion also plays a significant role in [[MIMO]] (Multiple-Input, Multiple-Output) technology in wireless communications. A MIMO system consists of ''N'' transmit and ''M'' receive antennas. Unique signals, occupying the same frequency band, are sent via the ''N'' transmit antennas and received via the ''M'' receive antennas. The signal arriving at each receive antenna is a linear combination of the ''N'' transmitted signals, forming an ''N''×''M'' transmission matrix '''H'''. It is crucial for the matrix '''H''' to be invertible for the receiver to be able to figure out the transmitted information.

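As a sketch of the idea (zero-forcing detection with a square, invertible channel matrix, noise ignored; the channel here is a hypothetical random example, not a realistic model):

<syntaxhighlight lang="python">
import numpy as np

# With a square, invertible channel H, the receiver recovers x from y = H x.
rng = np.random.default_rng(2)
h = rng.standard_normal((4, 4))          # hypothetical 4x4 channel matrix
x = np.array([1.0, -1.0, 1.0, 1.0])      # transmitted symbols
y = h @ x                                # received signals
print(np.linalg.solve(h, y))             # recovers [ 1. -1.  1.  1.]
</syntaxhighlight>
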
== See also ==

* [[Binomial inverse theorem]]
* [[LU decomposition]]
* [[Matrix decomposition]]
* [[Matrix square root]]
* [[Moore–Penrose pseudoinverse]]
* [[Pseudoinverse]]
* [[Singular value decomposition]]
* [[Woodbury matrix identity]]

== Notes ==

{{Reflist}}

== References ==

* {{Introduction to Algorithms|2|chapter=28.4: Inverting matrices|pages=pp. 755–760}}

== External links ==

* {{springer|title=Inversion of a matrix|id=p/i052440}}
* [http://books.google.se/books?id=jgEiuHlTCYcC&printsec=frontcover Matrix Mathematics: Theory, Facts, and Formulas] at [[Google books]]
* [http://www.solvingequations.net Equations Solver Online]
* [http://www.khanacademy.org/video/inverse-matrix--part-1?playlist=Linear+Algebra Lecture on Inverse Matrices by Khan Academy]
* [http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-3-multiplication-and-inverse-matrices/ Linear Algebra Lecture on Inverse Matrices by MIT]
* [http://netlib.org/lapack/ LAPACK] is a collection of FORTRAN subroutines for solving dense linear algebra problems
* [http://www.alglib.net/eigen/ ALGLIB] includes a partial port of the LAPACK to C++, C#, Delphi, etc.
* [http://www.jimmysie.com/maths/matrixinv.php Online Inverse Matrix Calculator using AJAX]
* [http://www.emathhelp.net/calculators/linear-algebra/inverse-of-matrix-calculator/ Symbolic Inverse of Matrix Calculator with steps shown]
* [http://www.vias.org/tmdatanaleng/cc_matrix_pseudoinv.html Moore Penrose Pseudoinverse]
* [http://numericalmethods.eng.usf.edu/mws/gen/04sle/mws_gen_sle_bck_system.pdf Inverse of a Matrix Notes]
* [http://math.fullerton.edu/mathews/n2003/InverseMatrixMod.html Module for the Matrix Inverse]
* [http://mjollnir.com/matrix/demo.html Calculator for Singular or Non-Square Matrix Inverse]
* {{planetmath reference|title=Derivative of inverse matrix|id=6362}}

{{linear algebra}}

{{DEFAULTSORT:Invertible Matrix}}
[[Category:Linear algebra]]
[[Category:Matrices]]
[[Category:Determinants]]
[[Category:Matrix theory]]

[[zh:可逆矩阵]]