:''For other uses, see [[Trace]]''
In [[linear algebra]], the '''trace''' of an ''n''-by-''n'' [[square matrix]] ''A'' is defined to be the sum of the elements on the [[main diagonal]] (the diagonal from the upper left to the lower right) of ''A'', i.e.,
 
:<math>\operatorname{tr}(A) = a_{11} + a_{22} + \dots + a_{nn}=\sum_{i=1}^{n} a_{ii}</math>
where ''a<sub>jk</sub>'' denotes the entry in the ''j''-th row and ''k''-th column of ''A''. The trace of a matrix is the sum of its (complex) [[eigenvalue]]s, and it is [[Invariants of tensors|invariant]] under a [[change of basis]]. This characterization can be used to define the trace of a linear operator in general. Note that the trace is defined only for square matrices (i.e., {{nowrap|''n'' &times; ''n''}}).
 
Geometrically, the trace can be interpreted as the infinitesimal change in volume (as the derivative of the [[determinant]]), which is made precise in [[Jacobi's formula]].
 
The term '''trace''' is a [[calque]] from the German ''[[wikt:Spur#German|Spur]]'' ([[cognate]] with the English ''[[Wiktionary:spoor|spoor]]''), which, as a function in mathematics, is often abbreviated to "tr".
 
== Example ==
Let ''T'' be a linear operator represented by the matrix
 
:<math>
\begin{bmatrix}
  -2 & 2 & -4 \\
  -1 & 1 &  3 \\
  2 & 0 & -1
\end{bmatrix}
</math>.
 
Then {{nowrap|1=tr(''T'') = &minus;2 + 1 &minus; 1 = &minus;2}}.
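The example can be checked numerically; the sketch below assumes NumPy is available.

```python
import numpy as np

# The example matrix from above.
T = np.array([[-2, 2, -4],
              [-1, 1,  3],
              [ 2, 0, -1]])

# np.trace sums the main-diagonal entries: -2 + 1 + (-1) = -2.
print(np.trace(T))  # -2
```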
 
== Properties ==
 
===Basic properties===
The trace is a [[linear operator|linear mapping]]. That is,
 
:<math>\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)</math>,
 
:<math>\operatorname{tr}(cA) = c \operatorname{tr}(A)</math>
 
for all square matrices ''A'' and ''B'', and all [[scalar (mathematics)|scalar]]s ''c''.
 
A matrix and its [[transpose]] have the same trace:
 
:<math> \operatorname{tr}(A) = \operatorname{tr}(A^{\mathrm T})</math>.
 
This follows immediately from the fact that transposing a square matrix does not affect elements along the main diagonal.
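These basic properties can be verified numerically; the following sketch (assuming NumPy is available) checks linearity and transpose-invariance on random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
c = 2.5

# Linearity: tr(A + B) = tr(A) + tr(B) and tr(cA) = c tr(A).
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
assert np.isclose(np.trace(c * A), c * np.trace(A))

# Transposing does not move the main diagonal, so the trace is unchanged.
assert np.isclose(np.trace(A), np.trace(A.T))
```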
 
===Trace of a product===
 
The trace of a product can be rewritten as the sum of entry-wise products of elements:
 
:<math>\operatorname{tr}(X^{\mathrm T}Y) = \operatorname{tr}(XY^{\mathrm T}) = \operatorname{tr}(Y^{\mathrm T}X) = \operatorname{tr}(YX^{\mathrm T}) = \sum_{i,j}X_{ij}Y_{ij}</math>.
 
This means that the trace of a product of matrices functions similarly to a [[dot product]] of vectors.  For this reason, generalizations of vector operations to matrices (e.g. in [[matrix calculus]] and [[statistics]]) often involve a trace of matrix products.
 
The trace of a product can also be written in the following forms:
{|
|
:<math>\operatorname{tr}(X^{\mathrm T}Y) = \sum_{ij}(X \circ Y)_{ij}</math>
|
:::(using the [[Matrix multiplication#Hadamard product|Hadamard product]], i.e. entry-wise product).
|-
|
:<math>\operatorname{tr}(X^{\mathrm T}Y) = \operatorname{vec}(X) \cdot \operatorname{vec}(Y) = \operatorname{vec}(X)^{\mathrm T}\operatorname{vec}(Y)</math>
|
:::(using the [[vectorization (mathematics)|vectorization]] operator).
|}
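These identities amount to saying that tr(''X''<sup>T</sup>''Y'') is the dot product of the matrices viewed entry-wise, which the following sketch (assuming NumPy) checks directly.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 5))
Y = rng.standard_normal((3, 5))

lhs = np.trace(X.T @ Y)

# Sum of the Hadamard (entry-wise) product ...
assert np.isclose(lhs, np.sum(X * Y))
# ... equivalently, the dot product of the vectorized matrices.
assert np.isclose(lhs, X.ravel() @ Y.ravel())
```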
 
The matrices in a trace of a product can be switched: If ''A'' is an ''m''&times;''n'' matrix and ''B'' is an ''n''&times;''m'' matrix, then
:<math>\operatorname{tr}(AB) = \operatorname{tr}(BA)</math>.<ref>This is immediate from the definition of the [[matrix product]]:
:<math>\operatorname{tr}(AB) = \sum_{i=1}^m \left(AB\right)_{ii} = \sum_{i=1}^m \sum_{j=1}^n A_{ij} B_{ji} = \sum_{j=1}^n \sum_{i=1}^m B_{ji} A_{ij} = \sum_{j=1}^n \left(BA\right)_{jj} = \operatorname{tr}(BA)</math>.</ref>
 
Equivalently, the trace is ''invariant under [[cyclic permutation]]s'', i.e.,
 
:<math>\operatorname{tr}(ABCD) = \operatorname{tr}(BCDA) = \operatorname{tr}(CDAB) = \operatorname{tr}(DABC)</math>.
 
This is known as the ''cyclic property''.
 
Note that arbitrary permutations are not allowed: in general,
:<math>\operatorname{tr}(ABC) \neq \operatorname{tr}(ACB)</math>.
 
However, if products of three [[symmetric matrix|symmetric]] matrices are considered, any permutation is allowed. (Proof: tr(''ABC'') = tr(''A''<sup>T</sup> ''B''<sup>T</sup> ''C''<sup>T</sup>) = tr(''A''<sup>T</sup>(''CB'')<sup>T</sup>) = tr((''CB'')<sup>T</sup>''A''<sup>T</sup>) = tr((''ACB'')<sup>T</sup>) = tr(''ACB''), where the last equality is because the traces of a matrix and its transpose are equal.) For more than three factors this is not true.
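The cyclic property, its failure for arbitrary permutations, and the symmetric-matrix exception can all be observed numerically (sketch assuming NumPy; random matrices are generically non-commuting):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Cyclic shifts preserve the trace ...
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))

# ... but swapping two factors generally does not.
assert not np.isclose(np.trace(A @ B @ C), np.trace(A @ C @ B))

# For three symmetric factors, every permutation gives the same trace.
S = lambda M: M + M.T  # symmetrize
assert np.isclose(np.trace(S(A) @ S(B) @ S(C)),
                  np.trace(S(A) @ S(C) @ S(B)))
```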
 
Unlike the [[determinant]], the trace of the product is not the product of traces. What is true is that the trace of the [[tensor product]] of two matrices is the product of their traces:
 
:<math>\operatorname{tr}(X \otimes Y) = \operatorname{tr}(X)\operatorname{tr}(Y)</math>.
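For matrices, the tensor product is realized by the Kronecker product, so the identity can be checked as follows (sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((2, 2))
Y = rng.standard_normal((3, 3))

# The trace is multiplicative over the Kronecker (tensor) product.
assert np.isclose(np.trace(np.kron(X, Y)), np.trace(X) * np.trace(Y))
```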
 
===Other properties===
The following three properties:
:<math>\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)</math>,
:<math>\operatorname{tr}(cA) = c\cdot \operatorname{tr}(A)</math>,
:<math>\operatorname{tr}(AB) = \operatorname{tr}(BA)</math>,
 
characterize the trace completely in the following sense: let ''f'' be a [[linear functional]] on the space of square matrices satisfying {{nowrap|''f''(''x y'') {{=}} ''f''(''y x'')}}. Then ''f'' and tr are proportional.<ref>Proof:
:<math>f(e_{ij}) = 0</math> if <math>i \neq j</math>, and <math>f(e_{jj}) = f(e_{11})</math> (with the standard basis <math>e_{ij}</math>),
and thus
 
:<math>f(A) = \sum_{i, j} [A]_{ij} f(e_{ij}) = \sum_i [A]_{ii} f(e_{11}) = f(e_{11}) \operatorname{tr}(A)</math>.
 
More abstractly, this corresponds to the decomposition <math>\mathit{gl}_n = \mathit{sl}_n \oplus k</math>, as tr(''AB'') = tr(''BA'') (equivalently, <math>\operatorname{tr}([A, B]) = 0</math>) defines the trace on ''sl''<sub>''n''</sub>, which has the scalar matrices as complement and leaves one degree of freedom: any such map is determined by its value on scalars, which is one scalar parameter, and hence all such maps are multiples of the trace, a non-zero map of this kind.</ref>
 
The trace is [[similarity invariance|similarity-invariant]], which means that ''A'' and ''P''<sup>&minus;1</sup>''AP'' have the same trace. This is because
 
:<math>\operatorname{tr}(P^{-1}AP) = \operatorname{tr}(P^{-1}(AP)) = \operatorname{tr}((AP) P^{-1}) = \operatorname{tr}(A (PP^{-1}))= \operatorname{tr}(A)</math>.
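Similarity invariance is easy to confirm numerically; the sketch below (assuming NumPy) uses a random change-of-basis matrix, which is generically invertible.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))  # generically invertible

# tr(P^{-1} A P) = tr(A): the trace is basis-independent.
assert np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A))
```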
 
If ''A'' is [[symmetric matrix|symmetric]] and ''B'' is [[skew-symmetric matrix|antisymmetric]], then
 
:<math>\operatorname{tr}(AB) = 0</math>.
 
The trace of the [[identity matrix]] is the dimension of the space; this leads to [[Dimension (vector space)#Trace|generalizations of dimension using trace]]. The trace of an [[idempotent matrix]] ''A'' (for which ''A''<sup>2</sup>&nbsp;=&nbsp;''A'') is the [[rank (linear algebra)|rank]] of ''A''. The trace of a [[nilpotent matrix]] is zero.
 
More generally, if  {{nowrap|1=''f''(''x'') = (''x'' &minus; ''λ''<sub>1</sub>)<sup>''d''<sub>1</sub></sup>···(''x'' &minus; ''λ''<sub>''k''</sub>)<sup>''d''<sub>''k''</sub></sup>}} is the [[characteristic polynomial]] of a matrix ''A'', then
:<math>\operatorname{tr}(A) = d_1 \lambda_1 + \cdots + d_k \lambda_k</math>.
 
When both ''A'' and ''B'' are ''n''-by-''n'', the trace of the (ring-theoretic) [[commutator]] of ''A'' and ''B'' vanishes: tr([''A'',&nbsp;''B''])&nbsp;=&nbsp;0; one can state this as "the trace is a map of Lie algebras <math>gl_n \to k</math> from operators to scalars", as the commutator of scalars is trivial (it is an abelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.
 
Conversely, any square matrix with zero trace is a linear combination of commutators of pairs of matrices.<ref>Proof: <math>\mathfrak{sl}_n</math> is a [[semisimple Lie algebra]] and thus every element in it is a linear combination of commutators of some pairs of elements, otherwise the [[derived algebra]] would be a proper ideal.</ref> Moreover, any square matrix with zero trace is [[Bounded operator|unitarily equivalent]] to a square matrix with diagonal consisting of all zeros.
 
The trace of any power of a [[nilpotent matrix]] is zero. When the characteristic of the base field is zero, the converse also holds: if <math>\operatorname{tr}(x^k) = 0</math> for all <math>k</math>, then <math>x</math> is nilpotent.
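The nilpotent case can be illustrated with a strictly upper-triangular matrix, for which all positive powers remain strictly upper-triangular (sketch assuming NumPy):

```python
import numpy as np

# A nilpotent matrix: N @ N @ N is the zero matrix.
N = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

# Every positive power of N has trace zero.
for k in range(1, 5):
    assert np.trace(np.linalg.matrix_power(N, k)) == 0
```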
 
*The trace of a [[projection matrix]] is the dimension of the target space.  If
:: <math>P_X = X\left(X^{\mathrm T} X\right)^{-1}X^{\mathrm T}</math>,
: then
:: <math>\operatorname{tr}\left(P_X \right)=\operatorname{rank}\left(X\right)</math>.
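The projection formula above translates directly into code; the sketch below (assuming NumPy) builds the orthogonal projection onto the column space of a random tall matrix and compares its trace with the rank.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 3))  # generically of full column rank 3

# Orthogonal projection onto the column space of X.
P = X @ np.linalg.inv(X.T @ X) @ X.T

# tr(P) equals rank(X), the dimension of the target space.
assert np.isclose(np.trace(P), np.linalg.matrix_rank(X))
```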
 
== Exponential trace ==
Expressions like exp(tr(''A'')), where ''A'' is a square matrix, occur so often in some fields (e.g. multivariate statistical theory), that a shorthand notation has become common:
 
:<math>\operatorname{etr}(A) := \exp(\operatorname{tr}(A))</math>.
 
This is sometimes referred to as the '''exponential trace''' function; it is used in the [[Golden–Thompson inequality]].
 
== Trace of a linear operator ==
 
Given a linear map {{nowrap|''f'' : ''V'' → ''V''}}, where ''V'' is a finite-[[dimension (linear algebra)|dimensional]] [[vector space]], the trace of this map can be defined via the trace of a [[Representation theory|matrix representation]] of ''f'': choose a [[basis (linear algebra)|basis]] for ''V'', describe ''f'' as a matrix relative to this basis, and take the trace of this square matrix. The result does not depend on the basis chosen, since different bases give rise to [[matrix similarity|similar matrices]], allowing for a basis-independent definition of the trace of a linear map.
 
Such a definition can be given using the [[natural isomorphism|canonical isomorphism]] between the space End(''V'') of linear maps on ''V'' and {{nowrap|''V'' ⊗ ''V''<sup>*</sup>}}, where ''V''<sup>*</sup> is the [[dual space]] of ''V''. Let ''v'' be in ''V'' and let ''f'' be in ''V''<sup>*</sup>. Then the trace of the decomposable element {{nowrap|''v'' ⊗ ''f''}} is defined to be ''f''(''v''); the trace of a general element is defined by linearity. Using an explicit basis for ''V'' and the corresponding dual basis for ''V''<sup>*</sup>, one can show that this gives the same definition of the trace as given above.
 
=== Eigenvalue relationships ===
If ''A'' is a square ''n''-by-''n'' matrix with [[real number|real]] or [[complex number|complex]] entries and if λ<sub>1</sub>,...,λ<sub>''n''</sub> are the [[eigenvalue]]s of ''A'' (listed according to their [[algebraic multiplicity|algebraic multiplicities]]), then
 
:<math>\operatorname{tr}(A) = \sum_i \lambda_i</math>.
 
This follows from the fact that ''A'' is always [[similar matrix|similar]] to its [[Jordan form]], an upper [[triangular matrix]] having λ<sub>1</sub>,...,λ<sub>''n''</sub> on the main diagonal. In contrast, the [[determinant]] of <math>A</math> is the ''product'' of its eigenvalues; i.e.,
 
:<math>\operatorname{det}(A) = \prod_i \lambda_i</math>.
 
More generally,
:<math>\operatorname{tr}(A^k) = \sum_i \lambda_i^k</math>.
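These eigenvalue relationships can be checked numerically (sketch assuming NumPy; the eigenvalues of a real matrix come in conjugate pairs, so their sum and product are real up to rounding):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
lam = np.linalg.eigvals(A)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), np.sum(lam).real)
assert np.isclose(np.linalg.det(A), np.prod(lam).real)

# More generally, tr(A^k) = sum of lambda_i^k (here k = 3).
assert np.isclose(np.trace(A @ A @ A), np.sum(lam ** 3).real)
```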
 
=== Derivatives ===
The trace corresponds to the derivative of the determinant: it is the [[Lie algebra]] analog of the ([[Lie group]]) determinant map. This is made precise in [[Jacobi's formula]] for the [[derivative]] of the [[determinant]].
 
As a particular case,  ''at the identity'', the derivative  of the determinant actually amounts to the trace: <math>\operatorname{tr}=\operatorname{det}'_I</math>.  From this (or from the connection between the trace and the eigenvalues), one can derive a connection between the trace function, the exponential map between a Lie algebra and its Lie group (or concretely, the [[matrix exponential]] function), and the [[determinant]]:
 
:<math>\det(\exp(A)) = \exp(\operatorname{tr}(A))</math>.
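This identity can be verified numerically without a dedicated matrix-exponential routine, by computing exp(''A'') through an eigendecomposition (valid for the generically diagonalizable random matrix used here; sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))

# Matrix exponential via eigendecomposition: A = V diag(w) V^{-1},
# so exp(A) = V diag(exp(w)) V^{-1} (real up to rounding for real A).
w, V = np.linalg.eig(A)
expA = (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

# det(exp(A)) = exp(tr(A)).
assert np.isclose(np.linalg.det(expA), np.exp(np.trace(A)))
```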
 
For example, consider the one-parameter family of linear transformations given by rotation through angle θ,
 
:<math>R_{\theta} = \left(\begin{array}{cc}\cos \theta & -\sin \theta\\\sin \theta&\cos \theta\end{array}\right)</math>.
 
These transformations all have determinant 1, so they preserve area. The derivative of this family at θ = 0 is the antisymmetric matrix
 
:<math>A = \left(\begin{array}{cc}0 & -1\\1&0\end{array}\right)</math>
 
which clearly has trace zero, indicating that this matrix represents an infinitesimal transformation which preserves area.
 
A related characterization of the trace applies to linear vector fields. Given a matrix ''A'', define a vector field ''F'' on ℝ<sup>''n''</sup> by {{nowrap|''F''(''x'') {{=}} ''A'' ''x''}}. The components of this vector field are linear functions (given by the rows of ''A''). Its [[divergence]] {{nowrap|div ''F''}} is a constant function, whose value is equal to tr(''A'').
By the [[divergence theorem]], one can interpret this in terms of flows: if ''F''(''x'') represents the velocity of a fluid at location ''x'' and ''U'' is a region in ℝ<sup>''n''</sup>, the [[flow network|net flow]] of the fluid out of ''U'' is given by {{nowrap|tr(''A'') · vol(''U'')}}, where vol(''U'') is the [[volume]] of ''U''.
 
The trace is a linear operator, hence it commutes with the derivative:
 
:<math>\operatorname{d} \operatorname{tr} (X) = \operatorname{tr}(\operatorname{d\!} X)</math>.
 
== Applications ==
The trace of a 2-by-2 complex matrix is used to classify [[Möbius transformation]]s. First the matrix is normalized to make its [[determinant]] equal to one. Then, if the square of the trace is exactly 4, the corresponding transformation is ''parabolic''. If the square is in the interval [0,4), it is ''elliptic''. Finally, if the square is greater than 4, the transformation is ''loxodromic''. See [[Möbius transformation#Classification|classification of Möbius transformations]].
 
The trace is used to define [[character (mathematics)|characters]] of [[group representation]]s. Two representations <math>A, B : G \to \mathit{GL}(V)</math> of a group ''G'' are equivalent (up to change of basis on ''V'') if <math>\operatorname{tr} A(g) = \operatorname{tr} B(g)</math> for all {{nowrap|''g'' ∈ ''G''}}.
 
The trace also plays a central role in the distribution of [[Quadratic form (statistics)|quadratic forms]].
 
== Lie algebra ==
The trace is a map of Lie algebras <math>\operatorname{tr}\colon \mathit{gl}_n \to k</math> from the Lie algebra ''gl''<sub>''n''</sub> of operators on a ''n''-dimensional space ({{nowrap|''n'' &times; ''n''}} matrices) to the Lie algebra ''k'' of scalars; as ''k'' is abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes: <math>\operatorname{tr}([A, B]) = 0</math>.
 
Matrices in the kernel of this map, i.e., those whose trace is [[0 (number)|zero]], are often said to be '''{{visible anchor|traceless}}''' or '''{{visible anchor|tracefree}}''', and they form the [[simple Lie algebra]] ''sl''<sub>''n''</sub>, which is the [[Lie algebra]] of the [[special linear group]] of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear Lie algebra consists of the matrices which ''infinitesimally'' do not change volume.
 
In fact, there is an internal [[Direct sum of Lie algebras|direct sum]] decomposition <math>\mathit{gl}_n = \mathit{sl}_n \oplus k</math> of operators/matrices into traceless operators/matrices and scalar operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace, concretely as:
:<math>A \mapsto \textstyle{\frac{1}{n}}\operatorname{tr}(A) \cdot I</math>.
Formally, one can compose the trace (the [[counit]] map) with the unit map <math>k \to \mathit{gl}_n</math> of "inclusion of [[scalar transformation|scalars]]" to obtain a map <math>\mathit{gl}_n \to \mathit{gl}_n</math> mapping onto the scalar matrices; this composition is ''n'' times the projection, so dividing by ''n'' makes it a projection, yielding the formula above.
 
In terms of [[short exact sequence]]s, one has
:<math>0 \to \mathit{sl}_n \to \mathit{gl}_n \overset{\operatorname{tr}}{\to} k \to 0</math>
which is analogous to
:<math>1 \to \mathit{SL}_n \to \mathit{GL}_n \overset{\operatorname{det}}{\to} K^* \to 1</math>
for Lie groups. However, the trace splits naturally (via <math>\textstyle{\frac{1}{n}}</math> times scalars), so <math>\mathit{gl}_n = \mathit{sl}_n \oplus k</math>; a splitting of the determinant, by contrast, would require an ''n''th root times scalars, which does not in general define a function, so the determinant does not split and the general linear group does not decompose: <math>\mathit{GL}_n \neq \mathit{SL}_n \times K^*.</math>
 
=== Bilinear forms ===
The bilinear form
 
:<math>B(x, y) = \operatorname{tr}(\operatorname{ad}(x)\operatorname{ad}(y))\text{ where }\operatorname{ad}(x)y = [x, y] = xy - yx</math>
 
is called the [[Killing form]], which is used for the classification of Lie algebras.
 
The trace defines a bilinear form:
 
:<math>(x, y) \mapsto \operatorname{tr}(xy)</math>
 
(''x'', ''y'' square matrices).
 
The form is symmetric, non-degenerate<ref>This follows from the fact that <math>\operatorname{tr}(A^*A) = 0</math> if and only if <math>A = 0</math></ref> and associative in the sense that:
 
:<math>\operatorname{tr}(x[y, z]) = \operatorname{tr}([x, y]z)</math>.
 
In a simple Lie algebra (e.g., <math>\mathfrak{sl}_n</math>), all such bilinear forms are proportional to one another; in particular, to the Killing form.
 
Two matrices ''x''  and ''y'' are said to be ''trace orthogonal'' if
 
: <math>\operatorname{tr}(xy) = 0</math>.
 
== Inner product ==
 
For an ''m''-by-''n'' matrix ''A'' with complex (or real) entries and <sup>*</sup> being the conjugate transpose, we have
 
:<math> \operatorname{tr}(A^* A) \ge 0 </math>
 
with equality if and only if {{nowrap|''A'' {{=}} 0}}.  The assignment
 
:<math>\langle A, B\rangle = \operatorname{tr}(B^* A)</math>
 
yields an [[inner product]] on the space of all complex (or real) ''m''-by-''n'' matrices.
 
The [[norm (mathematics)|norm]] induced by the above inner product is called the [[Frobenius norm]]. Indeed, it is simply the [[Euclidean norm]] if the matrix is considered as a vector of length ''mn''.
 
It follows that if ''A'' and ''B'' are [[Positive definite matrix|positive semi-definite matrices]] of the same size then
 
: <math> 0 \leq \operatorname{tr}(A B)^2 \leq \operatorname{tr}(A^2) \operatorname{tr}(B^2) \leq \operatorname{tr}(A)^2 \operatorname{tr}(B)^2</math>.<ref>Can be proven with the [[Cauchy–Schwarz inequality]].</ref>
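The inner product, the Frobenius norm, and the inequality chain for positive semi-definite matrices can all be checked numerically (sketch assuming NumPy; `np.linalg.norm` computes the Frobenius norm of a matrix by default):

```python
import numpy as np

rng = np.random.default_rng(8)
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
A = M @ M.T  # positive semi-definite
B = N @ N.T  # positive semi-definite

# The induced norm is the Frobenius norm.
assert np.isclose(np.sqrt(np.trace(M.T @ M)), np.linalg.norm(M))

# 0 <= tr(AB)^2 <= tr(A^2) tr(B^2) <= tr(A)^2 tr(B)^2 for PSD A, B.
t = np.trace(A @ B)
assert t >= 0
assert t ** 2 <= np.trace(A @ A) * np.trace(B @ B)  # Cauchy-Schwarz
assert np.trace(A @ A) * np.trace(B @ B) <= np.trace(A) ** 2 * np.trace(B) ** 2
```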
 
== Generalization ==
 
The concept of trace of a matrix is generalized to the [[trace class]] of [[compact operator]]s on [[Hilbert space]]s, and the analog of the Frobenius norm is called the [[Hilbert–Schmidt operator|Hilbert–Schmidt]] norm.
 
The [[partial trace]] is another generalization of the trace that is operator-valued.
 
If ''A'' is a general [[associative algebra]] over a field ''k'', then a trace on ''A'' is often defined to be any map tr: {{nowrap|''A'' → ''k''}} which vanishes on commutators: {{nowrap|tr([''a'', ''b'']) {{=}} 0}} for all {{nowrap|''a'', ''b'' in ''A''}}. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.
 
A [[supertrace]] is the generalization of a trace to the setting of [[superalgebra]]s.
 
The operation of [[tensor contraction]] generalizes the trace to arbitrary tensors.
 
== Coordinate-free definition ==
We can identify the space of linear operators on a vector space ''V'' with the space <math>V \otimes V^*</math>, where <math>v \otimes h = (w \mapsto h(w)v)</math>. We also have a canonical bilinear function <math>t \colon V \times V^* \to F</math> that consists of applying an element ''w''<sup>*</sup> of ''V''<sup>*</sup> to an element ''v'' of ''V'' to get an element of ''F'', in symbols <math>t(v, w^*) := w^*(v) \in F</math>. This induces a linear function on the [[tensor product]] (by [[Tensor product#Characterization by a universal property|its universal property]]) <math>t\colon V \otimes V^* \to F</math>, which, as it turns out, when that tensor product is viewed as the space of operators, is equal to the trace.
 
This also clarifies why <math>\operatorname{tr}(AB)=\operatorname{tr}(BA)</math> and why <math>\operatorname{tr}(AB)\neq\operatorname{tr}(A)\operatorname{tr}(B)</math>, as composition of operators (multiplication of matrices) and trace can be interpreted as ''the same'' pairing. Viewing <math>\operatorname{End}(V) \cong V \otimes V^*</math>, one may interpret the composition map <math>\operatorname{End}(V) \times \operatorname{End}(V) \to \operatorname{End}(V)</math> as
:<math>(V \otimes V^*) \times (V \otimes V^*) \to (V \otimes V^*)</math>
coming from the pairing <math>V^* \times V \to F</math> on the middle terms. Taking the trace of the product then comes from pairing on the outer terms, while taking the product in the opposite order and then taking the trace just switches which pairing is applied first. On the other hand, taking the trace of ''A'' and the trace of ''B'' corresponds to applying the pairing on the left terms and on the right terms (rather than on inner and outer), and is thus different.
 
In coordinates, this corresponds to indexes: multiplication is given by <math>\textstyle{(AB)_{ik} = \sum_j a_{ij}b_{jk}}</math>, so <math>\textstyle{\operatorname{tr}(AB) = \sum_{ij} a_{ij}b_{ji}}</math> and <math>\textstyle{\operatorname{tr}(BA) = \sum_{ij} b_{ij}a_{ji}}</math> which is the same, while <math>\textstyle{\operatorname{tr}(A)\cdot \operatorname{tr}(B) = \sum_i a_{ii} \cdot \sum_j b_{jj}}</math>, which is different.
 
For <math>V</math> finite-dimensional, with basis <math>\{ e_i \}</math> and dual basis <math>\{ e^i \}</math>, the element <math>e_i \otimes e^j</math> corresponds to the matrix unit with a 1 in the (''i'', ''j'') entry. Any operator <math>A</math> is therefore a sum of the form <math>A = a_{ij} e_i \otimes e^j</math> (summing over ''i'' and ''j''). With <math>t</math> defined as above, <math>t(A) = a_{ij} t(e_i \otimes e^j)</math>, and <math>t(e_i \otimes e^j) = e^j(e_i)</math> is the [[Kronecker delta]], being 1 if {{nowrap|''i'' {{=}} ''j''}} and 0 otherwise. This shows that <math>t(A)</math> is simply the sum of the coefficients along the diagonal, and it makes coordinate invariance an immediate consequence of the definition.
 
=== Dual ===
Further, one may dualize this map, obtaining a map <math>F^* = F \to V \otimes V^* \cong \operatorname{End}(V)</math>. This map is precisely the inclusion of [[scalar transformation|scalars]], sending {{nowrap|1 ∈ ''F''}} to the identity matrix: "trace is dual to scalars". In the language of [[bialgebra]]s, scalars are the ''unit'', while trace is the ''[[counit]]''.
 
One can then compose these, <math>F \overset{I}{\to} \operatorname{End}(V) \overset{\operatorname{tr}}{\to} F</math>, which yields multiplication by ''n'', as the trace of the identity is the dimension of the vector space.
 
== See also ==
* [[Characteristic function (probability theory)#Matrix-valued random variables|Characteristic function]]
* [[Field trace]]
* [[Golden–Thompson inequality]]
* [[Specht's theorem]]
* [[Trace class]]
* [[Trace inequalities]]
* [[von Neumann's trace inequality]]
 
== Notes ==
{{reflist}}
 
==External links==
* {{springer|title=Trace of a square matrix|id=p/t093550}}
 
{{DEFAULTSORT:Trace (Linear Algebra)}}
[[Category:Linear algebra]]
[[Category:Matrix theory]]
