In [[mathematics]], especially in [[linear algebra]] and [[Matrix (mathematics)|matrix theory]], the '''vectorization''' of a [[matrix (mathematics)|matrix]] is a [[linear transformation]] which converts the matrix into a [[column vector]]. Specifically, the vectorization of an ''m×n'' matrix ''A'', denoted by vec(''A''), is the ''mn × 1'' column vector obtained by stacking the columns of the matrix ''A'' on top of one another:
 
:<math>\mathrm{vec}(A) = [a_{1,1}, ..., a_{m,1}, a_{1,2}, ..., a_{m,2}, ..., a_{1,n}, ..., a_{m,n}]^T</math>
Here <math>a_{i,j}</math> represents the <math>(i,j)</math>-th element of matrix <math>A</math> and the superscript <math>^T</math> denotes the [[transpose]]. Vectorization expresses the [[isomorphism]] <math>\mathbf{R}^{m \times n} := \mathbf{R}^m \otimes \mathbf{R}^n \cong \mathbf{R}^{mn}</math> between these vector spaces (of matrices and vectors) in coordinates.
 
For example, for the 2×2 matrix <math>A</math> = <math>\begin{bmatrix} a & b \\ c & d \end{bmatrix}</math>, the vectorization is <math>\mathrm{vec}(A) = \begin{bmatrix} a \\ c \\ b \\ d \end{bmatrix}</math>.
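 
This column-stacking is easy to reproduce numerically. The following sketch (illustrative only, using standard [[NumPy]] reshaping) builds vec(''A'') for the 2×2 example above; NumPy stores arrays in row-major order by default, so column-major order has to be requested explicitly:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1, 2],
              [3, 4]])                 # plays the role of [[a, b], [c, d]]

# Stack the columns of A on top of one another (column-major / Fortran order).
vec_A = A.reshape(-1, order='F')
print(vec_A)                           # [1 3 2 4], i.e. (a, c, b, d)
</syntaxhighlight>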
 
==Compatibility with Kronecker products==
 
The vectorization is frequently used together with the [[Kronecker product]] to express [[matrix multiplication]] as a linear transformation on matrices. In particular,
:<math> \mbox{vec}(ABC)=(C^{T}\otimes A)\mbox{vec}(B) </math>
 
for matrices ''A'', ''B'', and ''C'' of dimensions ''k×l'', ''l×m'', and ''m×n''. For example, if <math> \mbox{ad}_A(X) = AX-XA</math> (the [[adjoint endomorphism]] of the [[Lie algebra]] gl(''n'','''C''') of all ''n×n'' matrices with [[complex number|complex]] entries), then <math>\mbox{vec}(\mbox{ad}_A(X)) = (I_n\otimes A - A^T \otimes I_n ) \mbox{vec}(X)</math>, where <math>I_n</math> is the ''n×n'' [[identity matrix]].
 
There are two other useful formulations:
 
:<math> \mbox{vec}(ABC)=(I_n\otimes AB)\mbox{vec}(C) =(C^{T}B^{T}\otimes I_k)\mbox{vec}(A)</math>
 
:<math> \mbox{vec}(AB)=(I_m\otimes A)\mbox{vec}(B) =(B^{T}\otimes I_k)\mbox{vec}(A)</math>
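 
These identities are easy to check numerically. The sketch below (illustrative; the helper <code>vec</code> is defined in the snippet itself) uses [[NumPy]]'s <code>numpy.kron</code> and random matrices of the stated dimensions ''k×l'', ''l×m'' and ''m×n'':

<syntaxhighlight lang="python">
import numpy as np

def vec(M):
    """Stack the columns of M into a single column vector."""
    return M.reshape(-1, 1, order='F')

rng = np.random.default_rng(0)
k, l, m, n = 2, 3, 4, 5
A = rng.standard_normal((k, l))
B = rng.standard_normal((l, m))
C = rng.standard_normal((m, n))

lhs = vec(A @ B @ C)
print(np.allclose(lhs, np.kron(C.T, A) @ vec(B)))                # vec(ABC) = (C^T ⊗ A) vec(B)
print(np.allclose(lhs, np.kron(np.eye(n), A @ B) @ vec(C)))      # vec(ABC) = (I_n ⊗ AB) vec(C)
print(np.allclose(lhs, np.kron((B @ C).T, np.eye(k)) @ vec(A)))  # vec(ABC) = (C^T B^T ⊗ I_k) vec(A)
</syntaxhighlight>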
 
==Compatibility with Hadamard products==
 
Vectorization is an [[algebra homomorphism]] from the space of ''n×n'' matrices with the [[Hadamard product (matrices)|Hadamard]] (entrywise) product to <math>\mathbf{C}^{n^2}</math> with its entrywise product:
 
:<math> \mbox{vec}(A \circ B) = \mbox{vec}(A) \circ \mbox{vec}(B). </math>
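 
A quick numerical check of this property (illustrative [[NumPy]] sketch; <code>*</code> is NumPy's entrywise product and the helper <code>vec</code> is defined in the snippet):

<syntaxhighlight lang="python">
import numpy as np

vec = lambda M: M.reshape(-1, order='F')   # column-stacking helper

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Entrywise (Hadamard) products commute with vectorization.
print(np.allclose(vec(A * B), vec(A) * vec(B)))   # True
</syntaxhighlight>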
 
==Compatibility with inner products==
 
Vectorization is a [[unitary transformation]] from the space of ''n×n'' matrices with the [[Matrix norm#Frobenius norm|Frobenius]] (or [[Hilbert-Schmidt operator|Hilbert-Schmidt]]) [[inner product]] to <math>\mathbf{C}^{n^2}</math> with the standard inner product:
 
:<math> \mbox{tr}(A^{*} B) = \mbox{vec}(A)^{*} \, \mbox{vec}(B) </math>
 
where the superscript <sup>*</sup> denotes the [[conjugate transpose]].
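 
The identity can likewise be verified numerically; the sketch below (illustrative, with random complex matrices and the same <code>vec</code> helper as above) compares the Frobenius inner product with the ordinary inner product of the vectorizations:

<syntaxhighlight lang="python">
import numpy as np

vec = lambda M: M.reshape(-1, order='F')   # column-stacking helper

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

lhs = np.trace(A.conj().T @ B)             # tr(A* B)
rhs = vec(A).conj() @ vec(B)               # vec(A)* vec(B)
print(np.allclose(lhs, rhs))               # True
</syntaxhighlight>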
 
==Half-vectorization==
 
For a [[symmetric matrix]] ''A'', the vector vec(''A'') contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the [[lower triangular matrix|lower triangular]] portion, that is, the ''n''(''n''+1)/2 entries on and below the [[main diagonal]]. For such matrices, the '''half-vectorization''' is sometimes more useful than the vectorization. The half-vectorization, vech(''A''), of a symmetric ''n×n'' matrix ''A'' is the ''n''(''n''+1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of ''A'':
:<math>\mathrm{vech}(A) = [a_{1,1}, \ldots, a_{n,1}, a_{2,2}, \ldots, a_{n,2}, \ldots, a_{n-1,n-1}, a_{n,n-1}, a_{n,n}]^T.</math>
 
For example, for the 2×2 matrix ''A'' = <math>\begin{bmatrix} a & b \\ b & d \end{bmatrix}</math>, the half-vectorization is vech(''A'') = <math>\begin{bmatrix} a \\ b \\ d \end{bmatrix}</math>.
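 
Half-vectorization is straightforward to implement; the sketch below (illustrative only; <code>vech</code> is not a built-in NumPy function and is defined here for the example) stacks, column by column, the entries on and below the diagonal:

<syntaxhighlight lang="python">
import numpy as np

def vech(A):
    """Stack, column by column, the entries of A on and below the main diagonal."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

A = np.array([[1, 2],
              [2, 4]])          # symmetric, plays the role of [[a, b], [b, d]]
print(vech(A))                  # [1 2 4], i.e. (a, b, d)
</syntaxhighlight>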
 
There exist unique matrices, called respectively the [[duplication matrix]] and the [[elimination matrix]], which transform the half-vectorization of a matrix into its vectorization and vice versa.
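 
For small dimensions these matrices can be written down directly; the sketch below (illustrative only, with hypothetical helper names <code>duplication_matrix</code> and <code>elimination_matrix</code>) builds them by recording the column-major position of each lower-triangular entry:

<syntaxhighlight lang="python">
import numpy as np

def duplication_matrix(n):
    """D_n, satisfying D_n @ vech(A) = vec(A) for every symmetric n-by-n A."""
    # Position of each lower-triangular entry (i, j), i >= j, inside vech(A).
    vech_pos, k = {}, 0
    for j in range(n):
        for i in range(j, n):
            vech_pos[(i, j)] = k
            k += 1
    D = np.zeros((n * n, k))
    for j in range(n):
        for i in range(n):
            # vec(A) stores entry (i, j) at column-major position j*n + i;
            # by symmetry it equals the vech entry of the lower-triangular pair.
            D[j * n + i, vech_pos[(max(i, j), min(i, j))]] = 1
    return D

def elimination_matrix(n):
    """L_n, satisfying L_n @ vec(A) = vech(A)."""
    L, k = np.zeros((n * (n + 1) // 2, n * n)), 0
    for j in range(n):
        for i in range(j, n):
            L[k, j * n + i] = 1
            k += 1
    return L

A = np.array([[1., 2.],
              [2., 4.]])
vec_A  = A.reshape(-1, order='F')              # [1, 2, 2, 4]
vech_A = np.array([1., 2., 4.])
print(np.allclose(duplication_matrix(2) @ vech_A, vec_A))   # True
print(np.allclose(elimination_matrix(2) @ vec_A, vech_A))   # True
</syntaxhighlight>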
 
==Programming languages==
Programming languages with built-in matrix support often provide a direct means of vectorization.
In [[Matlab]]/[[GNU Octave]] a matrix <code>A</code> can be vectorized by <code>A(:)</code>.
In [[Python (programming language)|Python]], [[NumPy]] arrays implement the <code>flatten</code> method (although by default this stacks the ''rows'' of the matrix, not the columns), while in [[R programming language|R]] the desired effect can be achieved via the <code>c()</code> or <code>as.vector()</code> functions.
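 
For instance, the column-stacking behaviour of <code>A(:)</code> can be reproduced in NumPy by requesting column-major (Fortran) order explicitly:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

print(A.flatten())              # [1 2 3 4]  -- row-major (C order) by default
print(A.flatten(order='F'))     # [1 3 2 4]  -- column-major, matching vec(A)
</syntaxhighlight>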
 
==See also==
* [[Voigt notation]]
* [[Row-major order|Column-major order]]
* [[Matricization]]
 
==References==
*Jan R. Magnus and Heinz Neudecker (1999), ''Matrix Differential Calculus with Applications in Statistics and Econometrics'', 2nd Ed., Wiley. ISBN 0-471-98633-X.
*Jan R. Magnus (1988), ''Linear Structures'', Oxford University Press. ISBN 0-85264-299-7.
 
[[Category:Linear algebra]]
[[Category:Matrices]]
