In [[linear algebra]], the '''Cayley–Hamilton theorem''' (named after the mathematicians [[Arthur Cayley]] and [[William Rowan Hamilton]]) states that every [[square matrix]] over a [[commutative ring]] (such as the [[real number|real]] or [[complex number|complex]] [[field (mathematics)|field]]) satisfies its own [[Characteristic_polynomial#Characteristic_equation|characteristic equation]].
 
More precisely,<ref>{{citation
| last1 = Atiyah
| first1 = M. F.
| author1-link = M. F. Atiyah
| last2 = MacDonald
| first2 = I. G.
| author2-link = I. G. Macdonald
| year = 1969
| title = Introduction to Commutative Algebra
| publisher = Westview Press
| isbn = 0-201-40751-5
}}</ref><ref>[http://planetmath.org/?op=getobj&from=objects&id=7308 A proof from PlanetMath.]</ref><ref>[http://www.mathpages.com/home/kmath640/kmath640.htm The Cayley–Hamilton Theorem] at MathPages</ref><ref>{{springer|title=Cayley–Hamilton theorem|id=p/c120080}}</ref> if {{mvar|A}} is a given {{math|''n''&times;''n''}} matrix and {{math|''I<sub>n</sub>&nbsp;''}} is the  {{math|''n''&times;''n''}} [[identity matrix]], then the  [[characteristic polynomial]] of {{mvar|A}} is defined as
::<math>p(\lambda)=\det(\lambda I_n-A)~,</math>
where "det" is the [[determinant]] operation. Since the entries of the matrix are (linear or constant) polynomials in {{mvar|λ}}, the determinant is also an {{mvar|n}}-th order polynomial in {{mvar|λ}}.
 
The Cayley–Hamilton theorem states that "substituting" the matrix {{mvar|A}} for {{mvar|λ}} in this polynomial results in the [[zero matrix]],
::{{math| ''p''(''A'') {{=}} 0}}.
 
The powers of {{mvar|A}}, obtained by substitution from powers of {{mvar|λ}}, are defined by repeated matrix multiplication; the constant term of {{math| ''p''(''λ'')}} gives a multiple of the power {{mvar|A}}<sup>0</sup>, which is defined as the identity matrix.
The theorem allows {{mvar|A}}<sup>{{mvar|n}}</sup> to be expressed as a linear combination of the lower matrix powers of {{mvar|A}}.
 
When the ring is a field, the Cayley–Hamilton theorem is equivalent to the statement that the [[Minimal polynomial (linear algebra)|minimal polynomial]] of a square matrix [[Polynomial division|divides]] its characteristic polynomial.
 
== Example ==
As a concrete example, let
:<math>A = \begin{pmatrix}1&2\\3&4\end{pmatrix}</math>.
Its characteristic polynomial is given by
:<math>p(\lambda)=\det(\lambda I_2-A)=\det\begin{pmatrix}\lambda-1&-2\\
-3&\lambda-4\end{pmatrix}=(\lambda-1)(\lambda-4)-(-2)(-3)=\lambda^2-5\lambda-2.</math>
 
The Cayley–Hamilton theorem claims that, if we ''define''
:<math>p(X)=X^2-5X-2I_2,</math>
then
:<math>p(A)=A^2-5A-2I_2=\begin{pmatrix}0&0\\0&0\\\end{pmatrix},</math>
which one can verify easily.
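Indeed, a quick numerical check confirms this; the following is an illustrative sketch (not part of the theorem's formal content), assuming a Python environment with NumPy available:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1, 2], [3, 4]])
I2 = np.eye(2)

# p(A) = A^2 - 5A - 2*I_2 should be the 2x2 zero matrix
print(A @ A - 5 * A - 2 * I2)
</syntaxhighlight>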
 
== Illustration for specific dimensions and practical applications ==
For a 1×1 matrix {{math|''A''&nbsp;{{=}}&nbsp;(''a'')}}, the characteristic polynomial is given by ''p''(λ)&nbsp;=&nbsp;''λ''&nbsp;−&nbsp;''a'', and so ''p''(''A'')&nbsp;=&nbsp;(''a'')&nbsp;−&nbsp;''a''(1)&nbsp;=&nbsp;0 is obvious.
 
For a 2×2 matrix,
:<math>A=\begin{pmatrix}a&b\\c&d\\\end{pmatrix} ,</math>
 
the characteristic polynomial is given by {{math| ''p''(''λ'')&nbsp;{{=}}&nbsp;''λ''<sup>2</sup>&nbsp;−&nbsp;(''a''&nbsp;+&nbsp;''d'')''λ''&nbsp;+&nbsp;(''ad''&nbsp;−&nbsp;''bc'')}}, so the Cayley–Hamilton theorem states that
:<math>p(A)=A^2-(a+d)A+(ad-bc)I_2=\begin{pmatrix}0&0\\0&0\\\end{pmatrix};</math>
which is indeed always the case, as is evident from working out the entries of {{mvar|A}}<sup>2</sup>.
 
For a general {{math|''n''×''n''}} [[invertible matrix]] {{mvar|A}}, i.e., one with nonzero determinant, {{mvar|A}}<sup>−1</sup> can thus be written as an (''n''&nbsp;−&nbsp;1)-th order [[polynomial expression]] in {{mvar|A}}: as indicated, the Cayley–Hamilton theorem amounts to the identity
 
{{Equation box 1
|indent =::
|equation =  <math>p(A)=A^n+c_{n-1}A^{n-1}+\cdots+c_1A+(-1)^n\det(A)I_n =0 ~,</math>
|cellpadding= 6
|border
|border colour = #0073CF
|bgcolor=#F9FFF7}}
with {{math|''c''<sub>''n''−1</sub>&nbsp;{{=}}&nbsp;−tr(''A'')}}, etc., where tr({{mvar|A}}) is the [[Trace (linear algebra)|trace]] of the matrix {{mvar|A}}. 
 
This can then be written as
:<math>-(-1)^n\det(A)I_n = A(A^{n-1}+c_{n-1}A^{n-2}+\cdots+c_{1}I_n),</math>
 
and, by  multiplying both sides by ''A''<sup>−1</sup>, one is led to the compact expression for the inverse,
:<math> A^{-1}=\frac{(-1)^{n-1}}{\det(A)}(A^{n-1}+c_{n-1}A^{n-2}+\cdots+c_{1}I_n).</math>
 
For larger matrices, the expressions for the coefficients {{math|''c''<sub>''k''</sub>}} of the characteristic polynomial in terms of the matrix components become increasingly complicated; but they can also be expressed in terms of traces of powers of the matrix {{mvar|A}}, using [[Newton's identities]] (at least when the ring contains the rational numbers), thus resulting in the expression for the [[adjugate matrix]] of {{mvar|A}},
:<math>\det ( A) A^{-1} = \sum_{s=0}^{n-1}A^{s}\sum_{k_1,k_2,\ldots ,k_{n-1}}\prod_{l=1}^{n-1} \frac{(-1)^{k_l+1}}{l^{k_l}k_{l}!}\mathrm{tr}(A^l)^{k_l},</math>
where the sum is taken over {{mvar|s}} and all sets of nonnegative integers {{math|''k<sub>l</sub>'' ≥ 0}} satisfying the equation
:<math> s+\sum_{l=1}^{n-1}lk_{l} = n - 1.</math>
 
For instance, in the above 2×2 matrix example, the coefficient {{math|−''c''<sub>1</sub>&nbsp;{{=}}&nbsp;''a''&nbsp;+&nbsp;''d''}} of ''λ'' above is just the trace of {{mvar|A}}, tr{{mvar|A}}, while the constant coefficient {{math|''c''<sub>0</sub>&nbsp;{{=}}&nbsp;''ad''&nbsp;−&nbsp;''bc''}} can be written as {{math|½((tr''A'')<sup>2</sup>&nbsp;−&nbsp;tr(''A''<sup>2</sup>))}}. (Of course, it is also the determinant of {{mvar|A}}, in this case.)
 
In fact, this expression, {{math|½((tr''A'')<sup>2</sup>&nbsp;−&nbsp;tr(''A''<sup>2</sup>))}}, always gives the coefficient ''c''<sub>''n''−2</sub> of ''λ''<sup>''n''−2</sup> in the characteristic polynomial of any ''n''×''n'' matrix; so, for a 3×3 matrix {{mvar|A}}, the statement of the Cayley–Hamilton theorem can also be written as
:<math>A^3- (\operatorname{tr}A)A^2+\frac{1}{2}\left((\operatorname{tr}A)^2-\operatorname{tr}(A^2)\right)A-\det(A)I_3=0,</math>
where the right-hand side designates the 3×3 zero matrix. Likewise, this determinant, in the ''n''&nbsp;=&nbsp;3 case, is now
:<math>\tfrac{1}{6} \left ( (\operatorname{tr}A)^3-3\operatorname{tr}(A^2)(\operatorname{tr}A)+2\operatorname{tr}(A^3) \right )</math>
minus the coefficient ''c''<sub>''n''−3</sub> of ''λ''<sup>''n''−3</sup> in the general case, as seen below.
 
Similarly, one can write for a 4×4 matrix {{mvar|A}},
:<math> A^4-(\operatorname{tr}A)A^3 + \tfrac{1}{2}\bigl((\operatorname{tr}A)^2-\operatorname{tr}(A^2)\bigr)A^2 - \tfrac{1}{6}\bigl( (\operatorname{tr}A)^3-3\operatorname{tr}(A^2)(\operatorname{tr}A)+2\operatorname{tr}(A^3)\bigr)A + \det(A)I_4 = 0,</math>
where, now,  the determinant is
:<math>\tfrac{1}{24} \left ( (\operatorname{tr}A)^4-6 \operatorname{tr}(A^2)(\operatorname{tr}A)^2+3(\operatorname{tr}(A^2))^2+8\operatorname{tr}(A^3)\operatorname{tr}(A) -6\operatorname{tr}(A^4) \right )</math>
and so on for larger matrices, with the increasingly complex expressions for the coefficients deducible from [[Newton's identities]].
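These trace formulas are easy to test numerically; the sketch below (illustrative only, assuming NumPy) compares them with the coefficients returned by <code>np.poly</code> for a random 4×4 matrix:

<syntaxhighlight lang="python">
import numpy as np

A = np.random.default_rng(0).standard_normal((4, 4))
t1, t2, t3 = (np.trace(np.linalg.matrix_power(A, m)) for m in (1, 2, 3))

c = np.poly(A)  # [1, c_3, c_2, c_1, c_0] for a 4x4 matrix
print(np.isclose(c[1], -t1))                            # c_{n-1} = -tr A
print(np.isclose(c[2], (t1**2 - t2) / 2))               # c_{n-2}
print(np.isclose(c[3], -(t1**3 - 3*t2*t1 + 2*t3) / 6))  # c_{n-3}
</syntaxhighlight>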
 
A practical method for obtaining these coefficients {{math|''c''<sub>''k''</sub>}} for a general {{math|''n''×''n''}} matrix, yielding the above ones virtually by inspection, provided no eigenvalue is zero, relies on an [[Matrix_exponential#The_determinant_of_the_matrix_exponential|alternate expression for the determinant]],
:<math> p(\lambda)= \det ~(\lambda I_n -A) = \lambda^n \exp (\operatorname{tr} (\log (I_n - A/\lambda))). </math>
 
Hence,
:<math>p(\lambda)= \lambda^n \exp \left( -\operatorname{tr} \sum_{m=1}^\infty {({A\over\lambda})^m \over m}  \right),</math>
 
where the exponential ''only'' needs to be expanded to order ''λ''<sup>−''n''</sup>, since {{math|''p''(''λ'')}} is of order ''n''; the net negative powers of ''λ'' vanish automatically by the C–H theorem. (Again, this requires a ring containing the rational numbers.)
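For a concrete illustration of this expansion, the following sketch (assuming SymPy; the auxiliary symbol ''u'' stands for 1/''λ'') recovers the characteristic polynomial of the 2×2 example above:

<syntaxhighlight lang="python">
from sympy import Matrix, symbols, exp

u, lam = symbols('u lam')
A = Matrix([[1, 2], [3, 4]])
n = 2

# -tr(log(I - u*A)) truncated at order u^n, with u = 1/lambda
s = -sum((A**m).trace() * u**m / m for m in range(1, n + 1))
e = exp(s).series(u, 0, n + 1).removeO()

# multiply by lambda^n; all surviving terms have nonnegative powers
p = (e.subs(u, 1 / lam) * lam**n).expand()
print(p)  # lambda**2 - 5*lambda - 2, as in the example above
</syntaxhighlight>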
 
The generic coefficients of the characteristic polynomial for general {{mvar|n}} are given ([[Urbain Le Verrier|Le Verrier]]) by determinants of {{math|''m''×''m''}} matrices,
:<math>c_{n-m} = \frac{(-)^m}{m!} 
\begin{vmatrix}  \operatorname{tr}A  &  m-1 &0&\cdots\\
\operatorname{tr}A^2  &\operatorname{tr}A&  m-2 &\cdots\\
\cdots & \cdots & \cdots & \cdots    \\
\operatorname{tr}A^{m-1} &\operatorname{tr}A^{m-2}& \cdots& 1    \\
\operatorname{tr}A^m  &\operatorname{tr}A^{m-1}& \cdots& \operatorname{tr}A    \\ \end{vmatrix}        ~.</math>
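A direct, if naive, implementation of this determinant formula is sketched below (illustrative only, not the most efficient method; assumes NumPy):

<syntaxhighlight lang="python">
import numpy as np
from math import factorial

def leverrier_coefficient(A, m):
    """c_{n-m} from Le Verrier's m x m determinant of traces."""
    traces = [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, m + 1)]
    T = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1):
            T[i, j] = traces[i - j]   # tr(A^{i-j+1}) on and below the diagonal
        if i + 1 < m:
            T[i, i + 1] = m - 1 - i   # m-1, m-2, ..., 1 on the superdiagonal
    return (-1) ** m / factorial(m) * np.linalg.det(T)

A = np.random.default_rng(1).standard_normal((5, 5))
c = np.poly(A)  # reference: [1, c_4, c_3, c_2, c_1, c_0]
print([np.isclose(leverrier_coefficient(A, m), c[m]) for m in range(1, 6)])
</syntaxhighlight>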
 
The Cayley–Hamilton theorem always provides a relationship between the powers of {{mvar|A}} (though not always the simplest one), which allows one to simplify expressions involving such powers, and evaluate them without having to compute the power {{mvar|A}}<sup>''n''</sup> or any higher powers of {{mvar|A}}.
 
For instance, the concrete 2×2 Example above can be written as
 
:<math>A^2=5A+2I_2\,  .</math>
 
Then, for example, to calculate ''A''<sup>4</sup>, observe
 
:<math>A^3=(5A+2I_2)A=5A^2+2A=5(5A+2I_2)+2A=27A+10I_2\,</math>
:<math>A^4=A^3A=(27A+10I_2)A=27A^2+10A=27(5A+2I_2)+10A=145A+54I_2\, .</math>
 
== Proving the theorem in general ==
As the examples above show, obtaining the statement of the Cayley–Hamilton theorem for an ''n''×''n'' matrix <math>A=(a_{i,j})_{i,j=1}^n</math> requires two steps: first the coefficients ''c''<sub>''i''</sub> of the characteristic polynomial are determined by expanding, as a polynomial in ''t'', the determinant
:<math>p(t) = \det(t I_n - A) =
\begin{vmatrix}t-a_{1,1}&-a_{1,2}&\cdots&-a_{1,n}\\
-a_{2,1}&t-a_{2,2}&\cdots&-a_{2,n}\\
\vdots & \vdots & \ddots & \vdots\\
-a_{n,1}&-a_{n,2}& \cdots& t-a_{n,n}\\ \end{vmatrix} = t^n+c_{n-1}t^{n-1}+\cdots+c_1t+c_0,</math>
and then these coefficients are used in a linear combination of powers of ''A'' that is equated to the ''n''×''n'' null matrix:
:<math>A^n+c_{n-1}A^{n-1}+\cdots+c_1A+c_0I_n=\begin{pmatrix}0&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&0\end{pmatrix}.</math>
The left-hand side can be worked out to an ''n''×''n'' matrix whose entries are (enormous) polynomial expressions in the set of entries <math>a_{i,j}</math> of ''A'', so the Cayley–Hamilton theorem states that each of these ''n''<sup>2</sup> expressions equals 0. For any fixed value of ''n'' these identities can be obtained by tedious but completely straightforward algebraic manipulations. However, none of these computations can show why the Cayley–Hamilton theorem should be valid for matrices of all possible sizes ''n'', so a uniform proof for all ''n'' is needed.
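For any fixed small ''n'', this verification can be delegated to a computer algebra system; here is a sketch (assuming SymPy) that confirms the nine identities for a generic 3×3 matrix with independent symbolic entries:

<syntaxhighlight lang="python">
from sympy import Matrix, symbols, eye, zeros, simplify

a = symbols('a0:9')                  # nine independent symbolic entries
A = Matrix(3, 3, a)

t = symbols('t')
coeffs = A.charpoly(t).all_coeffs()  # [1, c_2, c_1, c_0]

# evaluate p at the matrix A by Horner's scheme, with A^0 = I_3
P = zeros(3, 3)
for c in coeffs:
    P = P * A + c * eye(3)

print(simplify(P))                   # the 3x3 zero matrix
</syntaxhighlight>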
 
=== Preliminaries ===
If a vector ''v'' of size ''n'' happens to be an [[eigenvector]] of ''A'' with eigenvalue λ, in other words if ''A''⋅''v'' = λ''v'', then
 
:<math>\begin{align}
p(A)\cdot v & = A^n\cdot v+c_{n-1}A^{n-1}\cdot v+\cdots+c_1A\cdot v+c_0I_n\cdot v \\
& = \lambda^nv+c_{n-1}\lambda^{n-1}v+\cdots+c_1\lambda v+c_0 v=p(\lambda)v,
\end{align}</math>
 
which is the null vector since ''p''(λ) = 0 (the eigenvalues of ''A'' are precisely the [[root of a function|root]]s of ''p''(''t'')). This holds for all possible eigenvalues λ, so the two matrices equated by the theorem certainly give the same (null) result when applied to any eigenvector. Now if ''A'' admits a [[basis (linear algebra)|basis]] of eigenvectors, in other words if ''A'' is [[diagonalizable]], then the Cayley–Hamilton theorem must hold for ''A'', since two matrices that give the same values when applied to each element of a basis must be equal. Not all matrices are diagonalizable, but for matrices with complex coefficients many of them are: the set of diagonalizable complex square matrices of a given size is [[dense set|dense]] in the set of all such square matrices<ref>{{cite book|author=R. Bhatia|year=1997|title=Matrix Analysis|publisher=Springer|page=7}}</ref> (for a matrix to be diagonalizable it suffices, for instance, that its characteristic polynomial not have multiple roots). Now if any of the ''n''<sup>2</sup> expressions that the theorem equates to 0 did not reduce to a null expression, in other words if it were a nonzero polynomial in the coefficients of the matrix, then the set of complex matrices for which this expression happens to give 0 would not be dense in the set of all matrices, contradicting the fact that the theorem holds for all diagonalizable matrices. Thus the Cayley–Hamilton theorem must be true.
 
While this provides a valid proof (for matrices over the complex numbers), the argument is not very satisfactory, since the identities represented by the theorem do not in any way depend on the nature of the matrix (diagonalizable or not), nor on the kind of entries allowed (for matrices with real entries the diagonalizable ones do not form a dense set, and it seems strange that one would have to consider complex matrices to see that the Cayley–Hamilton theorem holds for them). We shall therefore now consider only arguments that prove the theorem directly for any matrix using algebraic manipulations only; these also have the benefit of working for matrices with entries in any [[commutative ring]].
 
There is a great variety of such proofs of the Cayley–Hamilton theorem, of which several will be given here. They vary in the amount of abstract algebraic notions required to understand the proof. The simplest proofs use just those notions needed to formulate the theorem (matrices, polynomials with numeric entries, determinants), but involve technical computations that render somewhat mysterious the fact that they lead precisely to the correct conclusion. It is possible to avoid such details, but at the price of involving more subtle algebraic notions: polynomials with coefficients in a non-commutative ring, or matrices with unusual kinds of entries.
 
==== Adjugate matrices ====
All proofs below use the notion of the [[adjugate matrix]] adj(''M'') of an ''n''×''n'' matrix ''M''. This is a matrix whose coefficients are given  by polynomial expressions in the coefficients of ''M'' (in fact by certain (''n''&nbsp;−&nbsp;1)×(''n''&nbsp;−&nbsp;1) determinants), in such a way that one has the following fundamental relations
:<math>\operatorname{adj}(M)\cdot M=\det(M)I_n=M\cdot\operatorname{adj}(M).</math>
These relations are a direct consequence of the basic properties of determinants: evaluation of the (''i'',''j'') entry of the matrix product on the left gives the expansion by column ''j'' of the determinant of the matrix obtained from ''M'' by replacing column ''i'' by a copy of column ''j'', which is det(''M'') if ''i'' = ''j'' and zero otherwise; the matrix product on the right is similar, but for expansions by rows. Being a consequence of just algebraic expression manipulation, these relations are valid for matrices with entries in any commutative ring (commutativity must be assumed for determinants to be defined in the first place). This is important to note here, because these relations will be applied for matrices with non-numeric entries such as polynomials.
 
=== A direct algebraic proof ===
This proof uses just the kind of objects needed to formulate the Cayley–Hamilton theorem: matrices with polynomials as entries. The matrix {{math|''t I''<sub>n</sub> −''A''}} whose determinant is the characteristic polynomial of {{mvar|A}} is such a matrix, and since polynomials form a commutative ring, it has an [[Adjugate matrix|adjugate]]
:<math>B=\operatorname{adj}(tI_n-A).</math>
Then according to the right hand fundamental relation of the adjugate one has
:<math>(t I_n - A) \cdot B = \det(t I_n - A) I_n = p(t) I_n.</math>
Since ''B'' is also a matrix with polynomials in ''t'' as entries, one can, for each ''i'', collect the coefficients of ''t<sup>i</sup>'' in each entry to form a matrix ''B''<sub>''i''</sub> of numbers, such that one has
:<math>B = \sum_{i = 0}^{n - 1} t^i B_i</math>
(the way the entries of ''B'' are defined makes clear that no powers higher than ''t''<sup>''n''−1</sup> occur). While this ''looks'' like a polynomial with matrices as coefficients, we shall not consider such a notion; it is just a way to write a matrix with polynomial entries as a linear combination of constant matrices, and the coefficient ''t''<sup>''i''</sup> has been written to the left of the matrix to stress this point of view. Now, one can expand the matrix product in our equation by bilinearity:
:<math>\begin{align}
p(t) I_n &= (t I_n - A) \cdot B \\
&=(t I_n - A) \cdot\sum_{i = 0}^{n - 1} t^i B_i  \\
&=\sum_{i = 0}^{n - 1} tI_n\cdot t^i B_i - \sum_{i = 0}^{n - 1} A\cdot t^i B_i \\
&=\sum_{i = 0}^{n - 1} t^{i + 1}  B_i- \sum_{i = 0}^{n - 1} t^i A\cdot B_i  \\
&=t^n B_{n - 1} + \sum_{i = 1}^{n - 1}  t^i(B_{i - 1} - A\cdot  B_i) - A \cdot B_0.
\end{align}</math>
Writing
 
:<math>p(t)I_n=t^nI_n+t^{n-1}c_{n-1}I_n+\cdots+tc_1I_n+c_0I_n,</math>
 
one obtains an equality of two matrices with polynomial entries, written as linear combinations of constant matrices with powers of ''t'' as coefficients. Such an equality can hold only if in any matrix position the entry that is multiplied by a given power ''t<sup>i</sup>'' is the same on both sides; it follows that the constant matrices with coefficient ''t<sup>i</sup>'' in both expressions must be equal. Writing these equations for ''i'' from ''n'' down to 0 one finds
:<math>B_{n - 1} = I_n, \qquad B_{i - 1} - A\cdot B_i = c_i I_n\quad \text{for }1 < i < n-1, \qquad -A B_0 = c_0 I_n.</math>
We multiply the equation of the coefficients of ''t''<sup>''i''</sup> from the left by ''A''<sup>''i''</sup>, and sum up; the left-hand sides form a [[telescoping sum]] and cancel completely, which results in the equation
:<math> 0=A^n+c_{n-1}A^{n-1}+\cdots+c_1A+c_0I_n= p(A).</math>
This completes the proof.
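These recurrences can be observed concretely; the following sketch (assuming SymPy) computes the matrices ''B<sub>i</sub>'' for the 2×2 example above and checks the three displayed equations:

<syntaxhighlight lang="python">
from sympy import Matrix, symbols, eye, Poly, simplify

t = symbols('t')
A = Matrix([[1, 2], [3, 4]])
n = 2

B = (t * eye(n) - A).adjugate()  # entries are polynomials in t
# B = B_0 + t*B_1: collect the constant coefficient matrices
B_i = [B.applyfunc(lambda e, i=i: Poly(e, t).coeff_monomial(t**i))
       for i in range(n)]

coeffs = A.charpoly(t).all_coeffs()  # [1, c_1, c_0] = [1, -5, -2]
c1, c0 = coeffs[1], coeffs[2]

print(B_i[1] == eye(n))                             # B_{n-1} = I_n
print(simplify(B_i[0] - A * B_i[1] - c1 * eye(n)))  # zero matrix
print(simplify(-A * B_i[0] - c0 * eye(n)))          # zero matrix
</syntaxhighlight>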
 
=== A proof using polynomials with matrix coefficients ===
This proof is similar to the first one, but tries to give meaning to the notion of polynomial with matrix coefficients that was suggested by the expressions occurring in that proof. This requires considerable care, since it is somewhat unusual to consider polynomials with coefficients in a non-commutative ring, and not all reasoning that is valid for commutative polynomials can be applied in this setting. Notably, while arithmetic of polynomials over a commutative ring models the arithmetic of [[polynomial function]]s, this is not the case over a non-commutative ring (in fact there is no obvious notion of polynomial function in this case that is closed under multiplication). So when considering polynomials in ''t'' with matrix coefficients, the variable ''t'' must not be thought of as an "unknown", but as a formal symbol that is to be manipulated according to given rules; in particular one cannot just set ''t'' to a specific value.
 
:<math>(f+g)(x) = \sum_i \left (f_i+g_i \right )x^i = \sum_i{f_i x^i} + \sum_i{g_i x^i} = f(x) + g(x)</math>.
 
Let ''M''(''n'', ''R'') be the ring of ''n''×''n'' matrices with entries in some ring ''R'' (such as the real or complex numbers) that has ''A'' as an element. Matrices with as coefficients polynomials in ''t'', such as <math>tI_n - A</math> or its adjugate ''B'' in the first proof, are elements of ''M''(''n'', ''R''[''t'']). By collecting like powers of ''t'', such matrices can be written as "polynomials" in ''t'' with constant matrices as coefficients; write ''M''(''n'', ''R'')[''t''] for the set of such polynomials. Since this set is in bijection with ''M''(''n'', ''R''[''t'']), one defines arithmetic operations on it correspondingly, in particular multiplication is given by
 
:<math>\left (\sum_iM_it^i \right )\cdot \left (\sum_jN_jt^j \right )=\sum_{i,j}(M_i\cdot N_j)t^{i+j},</math>
 
respecting the order of the coefficient matrices from the two operands; obviously this gives a non-commutative multiplication. Thus the identity
 
:<math>(t I_n - A) \cdot B = p(t) I_n.</math>
 
from the first proof can be viewed as one involving a multiplication of elements in ''M''(''n'', ''R'')[''t''].
 
At this point, it is tempting to simply set ''t'' equal to the matrix ''A'', which makes the first factor on the left equal to the null matrix, and the right hand side equal to ''p''(''A''); however, this is not an allowed operation when coefficients do not commute. It is possible to define a "right-evaluation map" ev<sub>''A''</sub>&nbsp;: ''M''(''n'', ''R'')[''t''] → ''M''(''n'', ''R''), which replaces each ''t''<sup>''i''</sup> by the matrix power ''A''<sup>''i''</sup> of ''A'', where one stipulates that the power is always to be multiplied on the right to the corresponding coefficient. But this map is not a ring homomorphism: the right-evaluation of a product differs in general from the product of the right-evaluations. This is so because multiplication of polynomials with matrix coefficients does not model multiplication of expressions containing unknowns: a product <math>Mt^i Nt^j = (M\cdot N)t^{i+j}</math> is defined assuming that ''t'' commutes with ''N'', but this may fail if ''t'' is replaced by the matrix ''A''.
 
One can work around this difficulty in the particular situation at hand, since the above right-evaluation map does become a ring homomorphism if the matrix ''A'' is in the [[center (algebra)|center]] of the ring of coefficients, so that it commutes with all the coefficients of the polynomials (the argument proving this is straightforward, exactly because commuting ''t'' with coefficients is now justified after evaluation). Now ''A'' is not always in the center of ''M''(''n'', ''R''), but we may replace ''M''(''n'', ''R'') with a smaller ring provided it contains all the coefficients of the polynomials in question: <math>I_n</math>, ''A'', and the coefficients <math>B_i</math> of the polynomial ''B''. The obvious choice for such a subring is the [[centralizer]] ''Z'' of ''A'', the subring of all matrices that commute with ''A''; by definition ''A'' is in the center of ''Z''. This centralizer obviously contains <math>I_n</math> and ''A'', but one has to show that it contains the matrices <math>B_i</math>. To do this, one combines the two fundamental relations for adjugates, writing out the adjugate ''B'' as a polynomial:
:<math>\begin{align}
  \left(\sum_{i = 0}^m B_i t^i\right) (t I_n - A)&=(tI_n - A) \sum_{i = 0}^m B_i t^i \\
  \sum_{i = 0}^m B_i t^{i + 1} - \sum_{i = 0}^m B_i A t^i &= \sum_{i = 0}^m B_i t^{i + 1} - \sum_{i = 0}^m A B_i t^i \\
\sum_{i = 0}^m B_i A t^i &= \sum_{i = 0}^m A B_i t^i .
\end{align}</math>
[[Equating the coefficients]] shows that for each ''i'', we have ''A'' ''B''<sub>''i''</sub> = ''B''<sub>''i''</sub> ''A'' as desired. Having found the proper setting in which ev<sub>''A''</sub> is indeed a homomorphism of rings, one can complete the proof as suggested above:
:<math>\begin{align}
\operatorname{ev}_A\bigl(p(t) I_n\bigr) &= \operatorname{ev}_A((t I_n - A)\cdot B)  \\
  p(A)&= \operatorname{ev}_A(t I_n - A)\cdot \operatorname{ev}_A(B) \\
  p(A) &= (A \cdot I_n - A) \cdot \operatorname{ev}_A(B) = 0\cdot\operatorname{ev}_A(B) = 0 .
\end{align}</math>
This completes the proof.
 
=== A synthesis of the first two  proofs ===
In the first proof, one was able to determine the coefficients ''B''<sub>''i''</sub> of ''B'' based on the right hand fundamental relation for the adjugate only. In fact the first ''n'' equations derived can be interpreted as determining the quotient ''B'' of the [[Euclidean division]] of the polynomial <math>p(t)I_n</math> on the left by the ''monic'' polynomial <math>I_nt-A</math>, while the final equation expresses the fact that the remainder is zero. This division is performed in the ring of polynomials with matrix coefficients. Indeed, even over a non-commutative ring, Euclidean division by a monic polynomial ''P'' is defined, and always produces a unique quotient and remainder with the same degree condition as in the commutative case, provided it is specified at which side one wishes ''P'' to be a factor (here that is to the left). To see that quotient and remainder are unique (which is the important part of the statement here), it suffices to write <math>PQ+r=PQ'+r'</math> as <math>P(Q-Q')=r'-r</math> and observe that since ''P'' is monic, <math>P(Q-Q')</math> cannot have a degree less than that of ''P'', unless <math>Q=Q'</math>.
 
But the dividend <math>p(t)I_n</math> and divisor <math>I_nt-A</math> used here both lie in the subring (''R''[''A''])[''t''], where ''R''[''A''] is the subring of the matrix ring ''M''(''n'', ''R'') generated by ''A'': the ''R''-linear span of all powers of ''A''. Therefore the Euclidean division can in fact be performed within that ''commutative'' polynomial ring, and of course it then gives the same quotient ''B'' and remainder 0 as in the larger ring; in particular this shows that ''B'' in fact lies in (''R''[''A''])[''t'']. But in this commutative setting it is valid to set ''t'' to ''A'' in the equation <math>p(t)I_n=(I_nt-A)B</math>, in other words apply the evaluation map
:<math>\operatorname{ev}_A:(R[A])[t]\to R[A]</math>
which is a ring homomorphism, giving
:<math>p(A)=0\cdot\operatorname{ev}_A(B)=0</math>
just like in the second proof, as desired.
 
In addition to proving the theorem, the above argument tells us that the coefficients ''B<sub>i</sub>'' of ''B'' are polynomials in ''A'', while from the second proof we only knew that they lie in the centralizer ''Z'' of ''A''; in general ''Z'' is a larger subring than ''R''[''A''], and not necessarily commutative. In particular the constant term <math>B_0=\operatorname{adj}(-A)</math> lies in ''R''[''A'']. Since ''A'' is an arbitrary square matrix, this proves that adj(''A'') can always be expressed as a polynomial in ''A'' (with coefficients that depend on ''A''), something that is not obvious from the definition of the adjugate matrix. In fact the equations found in the first proof allow successively expressing <math>B_{n-1}, \ldots, B_1, B_0</math> as polynomials in ''A'', which leads to the identity
:<math>\operatorname{adj}(-A)=\sum_{i=1}^nc_iA^{i-1},</math>
valid for all ''n''×''n'' matrices, where
:<math>t^n+c_{n-1}t^{n-1}+\cdots+c_1t+c_0</math>
is the characteristic polynomial of ''A'' (so that ''c<sub>n</sub>''&nbsp;=&nbsp;1 in the sum above). Note that this identity implies the statement of the Cayley–Hamilton theorem: one may move adj(−''A'') to the right hand side, multiply the resulting equation (on the left or on the right) by ''A'', and use the fact that
:<math>-A\cdot \operatorname{adj}(-A)=\operatorname{adj}(-A)\cdot-A=\det(-A)I_n=c_0I_n.</math>
 
=== A proof using matrices of endomorphisms ===
As was mentioned above, the matrix ''p''(''A'') in the statement of the theorem is obtained by first evaluating the determinant and then substituting the matrix ''A'' for ''t''; doing that substitution into the matrix <math>tI_n-A</math> before evaluating the determinant is not meaningful. Nevertheless, it is possible to give an interpretation where ''p''(''A'') is obtained directly as the value of a certain determinant, but this requires a more complicated setting, one of matrices over a ring in which one can interpret both the entries <math>A_{i,j}</math> of ''A'', and all of ''A'' itself. One could take for this the ring ''M''(''n'', ''R'') of ''n''×''n'' matrices over ''R'', where the entry <math>A_{i,j}</math> is realised as <math>A_{i,j}I_n</math>, and ''A'' as itself. But considering matrices with matrices as entries might cause confusion with [[block matrix|block matrices]], which is not intended, as that gives the wrong notion of determinant (recall that the determinant of a matrix is defined as a sum of products of its entries, and in the case of a block matrix this is generally not the same as the corresponding sum of products of its blocks!). It is clearer to distinguish ''A'' from the endomorphism φ of an ''n''-dimensional vector space ''V'' (or free ''R''-module if ''R'' is not a field) defined by it in a basis ''e''<sub>1</sub>, ..., ''e''<sub>''n''</sub>, and to take matrices over the ring End(''V'') of all such endomorphisms. Then φ ∈ End(''V'') is a possible matrix entry, while ''A'' designates the element of ''M''(''n'', End(''V'')) whose ''i'',''j'' entry is the endomorphism of scalar multiplication by <math>A_{i,j}</math>; similarly ''I''<sub>''n''</sub> will be interpreted as an element of ''M''(''n'', End(''V'')). However, since End(''V'') is not a commutative ring, no determinant is defined on ''M''(''n'', End(''V'')); this can only be done for matrices over a commutative subring of End(''V''). Now the entries of the matrix <math>\varphi I_n-A</math> all lie in the subring ''R''[φ] generated by the identity and φ, which is commutative. Then a determinant map ''M''(''n'', ''R''[φ]) → ''R''[φ] is defined, and <math>\det(\varphi I_n-A)</math> evaluates to the value ''p''(φ) of the characteristic polynomial of ''A'' at φ (this holds independently of the relation between ''A'' and φ); the Cayley–Hamilton theorem states that ''p''(φ) is the null endomorphism.
 
In this form, the following proof can be obtained from that of {{Harvard citations|last1 = Atiyah|last2 = MacDonald|year = 1969|loc = Prop. 2.4}} (which in fact is the more general statement related to the [[Nakayama lemma]]; one takes for the ideal in that proposition the whole ring ''R''). The fact that ''A'' is the matrix of φ in the basis ''e''<sub>1</sub>, ..., ''e''<sub>''n''</sub> means that
:<math>\varphi(e_i) = \sum_{j = 1}^n A_{j,i} e_j \quad\text{for }i=1,\ldots,n.</math>
One can interpret these as ''n'' components of one equation in ''V''<sup>''n''</sup>, whose members can be written using the matrix-vector product ''M''(''n'', End(''V'')) × ''V<sup>n</sup>'' → ''V<sup>n</sup>'' that is defined as usual, but with individual entries ψ ∈ End(''V'') and ''v'' in ''V'' being "multiplied" by forming <math>\psi(v)</math>; this gives:
:<math>\varphi I_n \cdot E= A^\mathrm{tr}\cdot E,</math>
where <math>E\in V^n</math> is the element whose component ''i'' is ''e''<sub>''i''</sub> (in other words it is the basis ''e''<sub>1</sub>, ..., ''e''<sub>''n''</sub> of ''V'' written as a column of vectors). Writing this equation as
:<math>(\varphi I_n-A^\mathrm{tr})\cdot E=0\in V^n</math>
one recognizes the [[transpose]] of the matrix <math>\varphi I_n-A</math> considered above, and its determinant (as element of ''M''(''n'', ''R''[φ])) is also ''p''(φ). To derive from this equation that ''p''(φ) = 0 ∈ End(''V''), one left-multiplies by the [[adjugate matrix]] of <math>\varphi I_n-A^\mathrm{tr}</math>, which is defined in the matrix ring ''M''(''n'', ''R''[φ]), giving
:<math>\begin{align}
0&=\operatorname{adj}(\varphi I_n-A^\mathrm{tr})\cdot((\varphi I_n-A^\mathrm{tr})\cdot E)\\
  &= (\operatorname{adj}(\varphi I_n-A^\mathrm{tr})\cdot(\varphi I_n-A^\mathrm{tr}))\cdot E\\
  &= (\det(\varphi I_n-A^\mathrm{tr})I_n)\cdot E\\
  &= (p(\varphi)I_n)\cdot E;
\end{align}</math>
the associativity of matrix-matrix and matrix-vector multiplication used in the first step is a purely formal property of those operations, independent of the nature of the entries. Now component ''i'' of this equation says that ''p''(φ)(''e<sub>i</sub>'') = 0 ∈ ''V''; thus ''p''(φ) vanishes on all ''e''<sub>''i''</sub>, and since these elements generate ''V'' it follows that ''p''(φ) = 0 ∈ End(''V''), completing the proof.
 
One additional fact that follows from this proof is that the matrix ''A'' whose characteristic polynomial is taken need not be identical to the value φ substituted into that polynomial; it suffices that φ be an endomorphism of ''V'' satisfying the initial equations
 
:<math>\varphi(e_i) = \sum_j A_{j,i} e_j</math>
for ''some'' sequence of elements ''e''<sub>1</sub>,...,''e''<sub>''n''</sub> that generate ''V'' (which space might have smaller dimension than ''n'', or in case the ring ''R'' is not a field it might not be a [[free module]] at all).
 
=== A bogus "proof": ''p''(''A'') = det(''AI''<sub>''n''</sub>&nbsp;−&nbsp;''A'') = det(''A''&nbsp;−&nbsp;''A'') = 0 ===
One elementary but incorrect argument for the theorem is to "simply" take the definition
 
:<math>p(\lambda) = \det(\lambda I_n - A)</math>
 
and substitute ''A'' for λ, obtaining
 
:<math>p(A)=\det(A I_n - A) = \det(A - A) = 0.</math>
 
There are many ways to see why this argument is wrong. First, in the Cayley–Hamilton theorem, ''p''(''A'') is an ''n×n matrix''. However, the right hand side of the above equation is the value of a determinant, which is a ''scalar''. So they cannot be equated unless ''n''&nbsp;=&nbsp;1 (i.e. ''A'' is just a scalar). Second, in the expression <math>\det(\lambda I_n - A)</math>, the variable λ actually occurs in the diagonal entries of the matrix <math>\lambda I_n - A</math>. To illustrate, consider the characteristic polynomial in the previous example again:
 
:<math>\det\begin{pmatrix}\lambda-1&-2\\-3&\lambda-4\end{pmatrix}.</math>
 
If one substitutes the entire matrix ''A'' for λ in those positions, one obtains
 
:<math>\det\begin{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} - 1 & -2 \\ -3 &\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} - 4\end{pmatrix},</math>
 
in which the "matrix" expression is simply not a valid one.  Note, however, that if scalar multiples of identity matrices
instead of scalars are subtracted in the above, i.e. if the substitution is performed as
 
:<math> \det \begin{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} - I_2 & -2I_2 \\ -3I_2 &\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} - 4I_2 \end{pmatrix},</math>
 
then the determinant is indeed zero, but the expanded matrix in question does not evaluate to <math>A I_n-A</math>; nor can its determinant (a scalar) be compared to ''p''(''A'') (a matrix).  So the argument that <math>p(A)=\det(AI_n-A)=0</math> still does not apply.
 
Actually, if such an argument were valid, it should also remain valid when other [[multilinear form]]s are used instead of the determinant. For instance, if we consider the [[permanent]] function and define <math>q(\lambda) = \operatorname{perm}(\lambda I_n - A)</math>, then by the same argument, we should be able to "prove" that ''q''(''A'')&nbsp;=&nbsp;0. But this statement is demonstrably wrong: in the 2-dimensional case, for instance, the permanent of a matrix is given by
 
:<math>\operatorname{perm} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad + bc. </math>
 
So, for the matrix ''A'' in the previous example,
 
:<math>q(\lambda) = \operatorname{perm} (\lambda I_2 - A) = \operatorname{perm} \begin{pmatrix} \lambda - 1 & -2  \\ -3 & \lambda-4 \end{pmatrix} = (\lambda - 1)(\lambda - 4) + (-2)(-3) = \lambda^2 - 5\lambda + 10.</math>
 
Yet one can verify that
 
:<math>q(A)=A^2-5A+10I_2=12I_2\not=0.</math>
 
One of the proofs for Cayley–Hamilton theorem above bears some similarity to the argument that <math>p(A)=\det(AI_n-A)=0</math>. By introducing a matrix with non-numeric coefficients, one can actually let ''A'' live inside a matrix entry, but then <math>A I_n</math> is not equal to ''A'', and the conclusion is reached differently.
 
==Abstraction and generalizations==
The above proofs show that the Cayley–Hamilton theorem holds for matrices with entries in any commutative ring ''R'', and that ''p''(φ) = 0 will hold whenever φ is an endomorphism of an ''R'' module generated by elements ''e''<sub>1</sub>,...,''e''<sub>''n''</sub> that satisfies
 
:<math>\varphi(e_j)=\sum a_{ij}e_i, \qquad j =1, \cdots, n.</math>
 
This more general version of the theorem is the source of the celebrated [[Nakayama lemma]] in commutative algebra and algebraic geometry.
 
==See also==
* [[Companion matrix]]
 
== References ==
<references/>
 
{{DEFAULTSORT:Cayley-Hamilton Theorem}}
[[Category:Theorems in linear algebra]]
[[Category:Articles containing proofs]]
[[Category:Matrix theory]]
