In [[statistics]], '''truncation''' results in values that are limited above or below, resulting in a '''truncated sample'''.<ref>[[Yadolah Dodge|Dodge, Y.]] (2003) ''The Oxford Dictionary of Statistical Terms''. OUP. ISBN 0-19-920613-9</ref> Truncation is similar to, but distinct from, the concept of [[Censoring (statistics)|statistical censoring]]. A truncated sample can be thought of as equivalent to an underlying sample with all values outside the bounds omitted entirely, with not even a count of the omissions kept. If the sample had instead been censored, a record would be kept for each censored observation, noting whether the lower or upper bound had been passed and the value of that bound.


==Applications==


Usually the values that [[insurance adjuster]]s receive are either left-truncated, right-censored, or both. For example, if policyholders are subject to a policy limit ''u'', then any loss amount actually above ''u'' is reported to the insurance company as exactly ''u'', because ''u'' is the amount the [[insurance company|insurance companies]] pay. The insurer then knows that the actual loss is greater than ''u'', but not its exact value. On the other hand, left truncation occurs when policyholders are subject to a deductible: if policyholders are subject to a deductible ''d'', any loss amount less than ''d'' will not even be reported to the insurance company. If a claim falls under a policy limit ''u'' and a deductible ''d'', any loss amount greater than ''u'' is reported as a loss of ''u''&nbsp;&minus;&nbsp;''d'', because that is the amount the insurance company has to pay. Insurance loss data are therefore left-truncated, because the insurance company cannot know whether there are losses below the deductible ''d'' (policyholders will not make a claim), and right-censored, because a loss greater than ''u'' is recorded only as being at least ''u'', not at its exact value.
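
For illustration, this reporting rule can be written as a small [[Python (programming language)|Python]] function; the function name and the figures are invented for this sketch:

<syntaxhighlight lang="python">
def reported_claim(loss, d, u):
    """Amount recorded by the insurer, or None if no claim is filed."""
    if loss <= d:
        return None             # left truncation: losses below the deductible are never seen
    return min(loss, u) - d     # right censoring: losses above the limit u appear as u - d

losses = [50, 400, 1200, 5000]
print([reported_claim(x, d=100, u=1000) for x in losses])
# [None, 300, 900, 900] -- the 5000 loss is censored at u - d = 900
</syntaxhighlight>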


==Probability distributions==
{{Main|Truncated distribution}}


Truncation can be applied to any [[probability distribution]] and will lead to a new distribution, not usually one within the same family. Thus, if a random variable ''X'' has ''F''(''x'') as its distribution function, the new random variable ''Y'' defined as having the distribution of ''X'' truncated to the semi-open interval (a,b] has the distribution function


:<math>F_Y(y)=\frac{F(y)-F(a)}{F(b)-F(a)} \,</math>


for ''y'' in the interval (''a'', ''b''], and 0 or 1 otherwise. If truncation were to the closed interval [a,b], the distribution function would be
:<math>F_Y(y)=\frac{F(y)-F(a-)}{F(b)-F(a-)} \,</math>


for ''y'' in the interval [''a'', ''b''], and 0 or 1 otherwise.
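
As a minimal numerical sketch, assuming a standard normal ''F'' and example bounds ''a''&nbsp;=&nbsp;&minus;1, ''b''&nbsp;=&nbsp;2, the truncated distribution function can be checked against SciPy's built-in [[truncated normal distribution]]:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

a, b = -1.0, 2.0
F = stats.norm.cdf

def F_Y(y):
    """CDF of X truncated to (a, b]: 0 below a, 1 above b."""
    y = np.clip(y, a, b)
    return (F(y) - F(a)) / (F(b) - F(a))

# scipy.stats.truncnorm implements the same formula; for loc=0, scale=1
# the bounds are passed directly.
y = np.linspace(-2.0, 3.0, 11)
print(np.allclose(F_Y(y), stats.truncnorm.cdf(y, a, b)))   # True
</syntaxhighlight>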


==Data analysis==


The analysis of data where observations are treated as coming from truncated versions of standard distributions can be undertaken using [[maximum likelihood]] estimation, with the likelihood derived from the distribution or density of the truncated distribution. This involves taking account of the factor <math>{F(b)-F(a)}</math> in the modified density function, which depends on the parameters of the original distribution.
In practice, if the fraction truncated is very small the effect of truncation might be ignored when analysing data. For example, it is common to use a [[normal distribution]] to model data whose values can only be positive but for which the typical range of values is well away from zero: in such cases a truncated or censored version of the normal distribution may formally be preferable (although there would be other alternatives also), but there would be very little change in results from the more complicated analysis. However, software is readily available for maximum likelihood estimation of even moderately complicated models, such as [[regression analysis|regression models]], for truncated data.<ref>Wolynetz, M.S. (1979) ''Maximum Likelihood estimation in a Linear model from Confined and Censored Normal Data''. J.Roy.Statist.Soc (Series C), 28(2), 195&ndash;206</ref>
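
A sketch of such a fit, assuming a normal model left-truncated at zero and simulated data (all names and numbers are invented for the example):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats, optimize

# Simulated normal data left-truncated at zero (true mu = 1, sigma = 2).
lower = 0.0
data = stats.truncnorm.rvs((lower - 1.0) / 2.0, np.inf, loc=1.0, scale=2.0,
                           size=2000, random_state=np.random.default_rng(0))

def nll(params):
    """Negative log-likelihood of the left-truncated normal model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                    # parametrized to keep sigma > 0
    logf = stats.norm.logpdf(data, mu, sigma)
    lognorm = stats.norm.logsf(lower, mu, sigma) # log P(X > lower): the truncation factor
    return -np.sum(logf - lognorm)

res = optimize.minimize(nll, x0=[data.mean(), np.log(data.std())])
print(res.x[0], np.exp(res.x[1]))                # estimates: close to (1.0, 2.0)
</syntaxhighlight>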

==See also==
*[[Censoring (statistics)]]
*[[Trimmed estimator]]
*[[Truncated (polyhedron)]]
*[[Truncated mean]]
*[[Truncated dependent variable]]

==References==
{{reflist}}

----

In [[multilinear algebra]], a '''dyadic''' or '''dyadic tensor''' is a second [[Tensor (intrinsic definition)#Definition via tensor products of vector spaces|order]] [[tensor]] written in a special notation, formed by juxtaposing pairs of vectors, along with a notation for manipulating such expressions analogous to the rules for [[matrix (mathematics)|matrix algebra]]. The notation and terminology are relatively obsolete today. Its uses in physics include [[stress analysis]] and [[electromagnetism]]. Dyadic notation was first established by [[Josiah Willard Gibbs]] in 1884.

In what follows, upper-case bold variables denote dyadics (including dyads) whereas lower-case bold variables denote vectors. An alternative notation uses respectively double and single over- or underbars.

==Definitions and terminology==

===Dyadic, outer, and tensor products===

A ''dyad'' is a [[tensor]] of [[Tensor order|order]] two and [[Tensor rank|rank]] one, and is the result of the dyadic product of two [[Euclidean vector|vector]]s ([[complex vector]]s in general), whereas a ''dyadic'' is a general [[tensor]] of [[Tensor order|order]] two.

There are several equivalent terms and notations for this product:
*the '''dyadic product''' of two vectors '''a''' and '''b''' is denoted by the juxtaposition '''ab''',
*the '''[[outer product]]''' of two [[column vector]]s '''a''' and '''b''' is denoted and defined as '''a''' &otimes; '''b''' or '''ab'''<sup>T</sup>, where T means [[transpose]],
*the '''[[tensor product]]''' of two vectors '''a''' and '''b''' is denoted '''a''' &otimes; '''b'''.

In the dyadic context they all have the same definition and meaning, and are used synonymously, although the '''tensor product''' is an instance of the more general and abstract use of the term.

====Three-dimensional Euclidean space====

To illustrate the equivalent usage, consider [[Three-dimensional space|three-dimensional]] [[Euclidean space]], letting:

:<math>\mathbf{a} = a_1 \mathbf{i} + a_2 \mathbf{j} + a_3 \mathbf{k}</math>
:<math>\mathbf{b} = b_1 \mathbf{i} + b_2 \mathbf{j} + b_3 \mathbf{k}</math>

be two vectors where '''i''', '''j''', '''k''' (also denoted '''e'''<sub>1</sub>, '''e'''<sub>2</sub>, '''e'''<sub>3</sub>) are the standard [[basis vectors]] in this [[vector space]] (see also [[Cartesian coordinates]]). Then the dyadic product of '''a''' and '''b''' can be represented as a sum:
:<math> \begin{array}{llll}
\mathbf{ab} = & a_1 b_1 \mathbf{i i} & + a_1 b_2 \mathbf{i j} & + a_1 b_3 \mathbf{i k} \\
&+ a_2 b_1 \mathbf{j i} & + a_2 b_2 \mathbf{j j} & + a_2 b_3 \mathbf{j k}\\
&+ a_3 b_1 \mathbf{k i} & + a_3 b_2 \mathbf{k j} & + a_3 b_3 \mathbf{k k}
\end{array}</math>
 
or by extension from row and column vectors, a 3&times;3 matrix (also the result of the outer product or tensor product of '''a''' and '''b'''):
 
:<math>\mathbf{a b} \equiv \mathbf{a}\otimes\mathbf{b} \equiv \mathbf{a b}^\mathrm{T} =
\begin{pmatrix}
a_1 \\
a_2 \\
a_3
\end{pmatrix}\begin{pmatrix}
b_1 & b_2 & b_3
\end{pmatrix} = \begin{pmatrix}
a_1b_1 & a_1b_2 & a_1b_3 \\
a_2b_1 & a_2b_2 & a_2b_3 \\
a_3b_1 & a_3b_2 & a_3b_3
\end{pmatrix}.</math>
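
This equivalence is easy to check numerically; the following sketch uses [[NumPy]] with example values of our own choosing:

<syntaxhighlight lang="python">
import numpy as np

# The dyadic product ab coincides with the matrix outer product a b^T.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

A = np.outer(a, b)                    # same as a[:, None] * b[None, :]
print(A)
print(np.allclose(A, a.reshape(3, 1) @ b.reshape(1, 3)))   # True
</syntaxhighlight>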
 
A ''dyad'' is a component of the dyadic (a [[monomial]] of the sum or, equivalently, an entry of the matrix): the juxtaposition of a pair of [[basis vector]]s [[scalar multiplication|scalar multiplied]] by a number.
 
Just as the standard basis (and unit) vectors '''i''', '''j''', '''k''', have the representations:
 
:<math>\mathbf{i} = \begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix}, \mathbf{j} = \begin{pmatrix}
0 \\
1 \\
0
\end{pmatrix}, \mathbf{k} = \begin{pmatrix}
0 \\
0 \\
1
\end{pmatrix}
</math>
 
(which can be transposed), the ''standard basis (and unit) dyads'' have the representation:
 
:<math>\mathbf{ii} = \begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}, \cdots \mathbf{ji} = \begin{pmatrix}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}, \cdots \mathbf{jk} = \begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{pmatrix} \cdots
</math>
 
For a simple numerical example in the standard basis:
 
:<math>\begin{align}
\mathbf{A} & = 2\mathbf{ij} + \frac{\sqrt{3}}{2}\mathbf{ji} - 8\pi \mathbf{jk} + \frac{2\sqrt{2}}{3} \mathbf{kk} \\
& = 2 \begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix} + \frac{\sqrt{3}}{2}\begin{pmatrix}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix} - 8\pi \begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{pmatrix} + \frac{2\sqrt{2}}{3}\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}\\
& = \begin{pmatrix}
0 & 2 & 0 \\
\sqrt{3}/2 & 0 & - 8\pi \\
0 & 0 & \frac{2\sqrt{2}}{3}
\end{pmatrix}
\end{align}</math>
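
The same worked example can be reassembled from unit dyads in a few lines (a sketch using the values above):

<syntaxhighlight lang="python">
import numpy as np

# A = 2 ij + (sqrt(3)/2) ji - 8 pi jk + (2 sqrt(2)/3) kk, built from
# unit dyads e_i e_j = outer(e_i, e_j).
e = np.eye(3)                         # rows are i, j, k
dyad = lambda u, v: np.outer(u, v)

A = (2 * dyad(e[0], e[1]) + (np.sqrt(3) / 2) * dyad(e[1], e[0])
     - 8 * np.pi * dyad(e[1], e[2]) + (2 * np.sqrt(2) / 3) * dyad(e[2], e[2]))
print(A)
# [[ 0.     2.      0.   ]
#  [ 0.866  0.    -25.133]
#  [ 0.     0.      0.943]]   -- matches the matrix above
</syntaxhighlight>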
 
====''N''-dimensional Euclidean space====
 
If the Euclidean space is ''N''-[[dimension]]al, and
 
:<math> \mathbf{a} = \sum_{i=1}^N a_i\mathbf{e}_i = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_N \mathbf{e}_N</math>
:<math>\mathbf{b} = \sum_{j=1}^N b_j\mathbf{e}_j  = b_1 \mathbf{e}_1 + b_2 \mathbf{e}_2 + \cdots + b_N \mathbf{e}_N</math>
 
where '''e'''<sub>''i''</sub> and '''e'''<sub>''j''</sub> are the [[standard basis]] vectors in ''N''-dimensions (the index ''i'' on '''e'''<sub>''i''</sub> selects a specific vector, not a component of the vector as in ''a<sub>i</sub>''), then in algebraic form their dyadic product is:
 
:<math> \mathbf{A} = \sum _{j=1}^N\sum_{i=1}^N a_ib_j{\mathbf{e}}_i\mathbf{e}_j.</math>
 
This is known as the ''nonion form'' of the dyadic. Their outer/tensor product in matrix form is:
 
:<math>
\mathbf{ab} = \mathbf{ab}^\mathrm{T} =
\begin{pmatrix}
a_1 \\
a_2 \\
\vdots \\
a_N
\end{pmatrix}\begin{pmatrix}
b_1 & b_2 & \cdots & b_N
\end{pmatrix}
= \begin{pmatrix}
a_1b_1 & a_1b_2 & \cdots & a_1b_N \\
a_2b_1 & a_2b_2 & \cdots & a_2b_N \\
\vdots & \vdots & \ddots & \vdots \\
a_Nb_1 & a_Nb_2 & \cdots & a_Nb_N
\end{pmatrix}.</math>
 
A ''dyadic polynomial'' '''A''', otherwise known as a dyadic, is formed from multiple vectors '''a'''<sub>''i''</sub> and '''b'''<sub>''j''</sub>:
 
:<math> \mathbf{A} = \sum_i\mathbf{a}_i\mathbf{b}_i = \mathbf{a}_1\mathbf{b}_1+\mathbf{a}_2\mathbf{b}_2+\mathbf{a}_3\mathbf{b}_3+\cdots </math>
 
A dyadic which cannot be reduced to a sum of fewer than ''N'' dyads is said to be complete. In this case, the forming vectors are non-coplanar,{{Dubious|date=October 2012}} see [[#Chen|Chen (1983)]].
 
===Classification===
 
The following table classifies dyadics:
 
:{| class="wikitable"
|-
|
! [[Determinant]]
! [[Adjugate]]
! [[Matrix (mathematics)|Matrix]] and its [[Rank (linear algebra)|rank]]
|-
! Zero
| = 0
| = 0
| = 0; rank 0: all zeroes
|-
! Linear
| = 0
| = 0
| ≠ 0; rank 1: at least one non-zero element and all 2 × 2 subdeterminants zero (single dyadic)
|-
! [[Plane (geometry)|Planar]]
| = 0
| ≠ 0 (single dyadic)
| ≠ 0; rank 2: at least one non-zero 2 × 2 subdeterminant
|-
! Complete
| ≠ 0
| ≠ 0
| ≠ 0; rank 3: non-zero determinant
|}
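
The table can be applied numerically; in the following sketch the adjugate is computed from 2&nbsp;&times;&nbsp;2 cofactors, and the function names and tolerance are our own:

<syntaxhighlight lang="python">
import numpy as np

def adjugate(M):
    """Adjugate via cofactors (2 x 2 subdeterminants)."""
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def classify(M, tol=1e-12):
    if np.abs(M).max() < tol:
        return "zero"
    if np.abs(adjugate(M)).max() < tol:
        return "linear"
    if abs(np.linalg.det(M)) < tol:
        return "planar"
    return "complete"

rng = np.random.default_rng(1)
a, b, c, d, e, f = rng.random((6, 3))
print(classify(np.outer(a, b)))                                    # linear
print(classify(np.outer(a, b) + np.outer(c, d)))                   # planar
print(classify(np.outer(a, b) + np.outer(c, d) + np.outer(e, f)))  # complete (generically)
</syntaxhighlight>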
 
===Identities===
 
The following identities are a direct consequence of the definition of the tensor product:<ref>Spencer (1992), page 19.</ref>
 
{{ordered list
|1= '''Compatible with [[scalar multiplication]]:'''
:<math>(\alpha \mathbf{a})  \mathbf{b} =\mathbf{a}  (\alpha \mathbf{b}) = \alpha (\mathbf{a}  \mathbf{b})</math>
for any scalar <math>\alpha</math>.
 
|2= '''[[Distributive property|Distributive]] over [[vector addition]]:'''
:<math>\mathbf{a}  (\mathbf{b} + \mathbf{c}) =\mathbf{a}  \mathbf{b} + \mathbf{a}  \mathbf{c}</math>
:<math>(\mathbf{a} + \mathbf{b})  \mathbf{c} =\mathbf{a}  \mathbf{c} + \mathbf{b}  \mathbf{c}</math>
}}
 
== Dyadic algebra ==
 
=== Product of dyadic and vector ===
 
There are four operations defined on a vector and dyadic, constructed from the products defined on vectors.
 
:{| class="wikitable"
|-valign="top"
!
! Left
! Right
|-valign="top"
! [[Dot product]]
|
<math> \mathbf{c}\cdot \mathbf{a} \mathbf{b} = \left(\mathbf{c}\cdot\mathbf{a}\right)\mathbf{b}</math>
|
<math> \left(\mathbf{a}\mathbf{b}\right)\cdot \mathbf{c} = \mathbf{a}\left(\mathbf{b}\cdot\mathbf{c}\right) </math>
|-valign="top"
! [[Cross product]]
|
<math> \mathbf{c} \times \left(\mathbf{ab}\right) = \left(\mathbf{c}\times\mathbf{a}\right)\mathbf{b} </math>
|
<math> \left(\mathbf{ab}\right)\times\mathbf{c} = \mathbf{a}\left(\mathbf{b}\times\mathbf{c}\right)</math>
|-
|}
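
A numerical check of these four identities, with example vectors (the helper <code>skew</code>, representing the map '''w''' &rarr; '''c''' &times; '''w''', is our own):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.random((3, 3))
AB = np.outer(a, b)                 # the dyad ab

def skew(v):
    """Matrix of the map w -> v x w, so skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

print(np.allclose(c @ AB, (c @ a) * b))                          # c.(ab) = (c.a)b
print(np.allclose(AB @ c, a * (b @ c)))                          # (ab).c = a(b.c)
print(np.allclose(skew(c) @ AB, np.outer(np.cross(c, a), b)))    # c x (ab) = (c x a)b
print(np.allclose(AB @ skew(c), np.outer(a, np.cross(b, c))))    # (ab) x c = a(b x c)
</syntaxhighlight>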
 
=== Product of dyadic and dyadic ===
 
There are five operations combining a dyadic with another dyadic. Let '''a''', '''b''', '''c''', '''d''' be vectors. Then:
 
:{| class="wikitable"
|-
!
!
! Dot
! Cross
|-valign="top"
! Dot
|| ''Dot product''
<math>\left(\mathbf{a}\mathbf{b}\right)\cdot\left(\mathbf{c}\mathbf{d}\right) = \mathbf{a}\left(\mathbf{b}\cdot\mathbf{c}\right)\mathbf{d}= \left(\mathbf{b}\cdot\mathbf{c}\right)\mathbf{a}\mathbf{d}</math>
|| ''Double dot product''
 
<math>\mathbf{ab}\colon\mathbf{cd}=\left(\mathbf{a}\cdot\mathbf{d}\right)\left(\mathbf{b}\cdot\mathbf{c}\right)</math>
 
or
 
<math> \left(\mathbf{ab}\right):\left(\mathbf{cd}\right) = \mathbf{c}\cdot\left(\mathbf{ab}\right)\cdot\mathbf{d} =  \left(\mathbf{a}\cdot\mathbf{c}\right)\left(\mathbf{b}\cdot\mathbf{d}\right) </math>
 
|| ''Dot–cross product''
<math> \left(\mathbf{ab}\right)
\!\!\!\begin{array}{c}
_\cdot \\
^\times
\end{array}\!\!\!
\left(\mathbf{c}\mathbf{d}\right)=\left(\mathbf{a}\cdot\mathbf{c}\right)\left(\mathbf{b}\times\mathbf{d}\right)</math>
|-valign="top"
! Cross
||
|| ''Cross–dot product''
 
<math> \left(\mathbf{ab}\right)
\!\!\!\begin{array}{c}
_\times  \\
^\cdot
\end{array}\!\!\!
\left(\mathbf{cd}\right)=\left(\mathbf{a}\times\mathbf{c}\right)\left(\mathbf{b}\cdot\mathbf{d}\right)</math>
|| ''Double cross product''
 
<math> \left(\mathbf{ab}\right)
\!\!\!\begin{array}{c}
_\times  \\
^\times
\end{array}\!\!\!
\left(\mathbf{cd}\right)=\left(\mathbf{a}\times\mathbf{c}\right)\left(\mathbf{b}\times \mathbf{d}\right)</math>
|-
|}
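
In components these products can be written with the [[Levi-Civita symbol]]; the following sketch checks each entry of the table for a pair of dyads (example values our own):

<syntaxhighlight lang="python">
import numpy as np

# Levi-Civita symbol, used to express the cross-type products in components.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0

rng = np.random.default_rng(3)
a, b, c, d = rng.random((4, 3))
A, B = np.outer(a, b), np.outer(c, d)

print(np.allclose(A @ B, (b @ c) * np.outer(a, d)))                # dot
print(np.isclose(np.einsum('ij,ji->', A, B), (a @ d) * (b @ c)))   # double dot
print(np.isclose(np.einsum('ij,ij->', A, B), (a @ c) * (b @ d)))   # alt. double dot
print(np.allclose(np.einsum('njl,ij,il->n', eps, A, B),
                  (a @ c) * np.cross(b, d)))                       # dot-cross
print(np.allclose(np.einsum('mik,ij,kj->m', eps, A, B),
                  np.cross(a, c) * (b @ d)))                       # cross-dot
print(np.allclose(np.einsum('mik,njl,ij,kl->mn', eps, eps, A, B),
                  np.outer(np.cross(a, c), np.cross(b, d))))       # double cross
</syntaxhighlight>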
 
Letting
 
:<math> \mathbf{A}=\sum _i \mathbf{a}_i\mathbf{b}_i \quad \mathbf{B}=\sum _i \mathbf{c}_i\mathbf{d}_i </math>
 
be two general dyadics, we have:
 
:{| class="wikitable"
|-
!
!
! Dot
! Cross
|-valign="top"
! Dot
|| ''Dot product''
 
<math> \mathbf{A}\cdot\mathbf{B} = \sum_j\sum _i\left(\mathbf{b}_i\cdot\mathbf{c}_j\right)\mathbf{a}_i\mathbf{d}_j </math>
|| ''Double dot product''
 
<math>\mathbf{A}\colon\mathbf{B}=\sum_j\sum_i\left(\mathbf{a}_i\cdot\mathbf{d}_j\right)\left(\mathbf{b}_i\cdot\mathbf{c}_j\right)</math>
 
or
 
<math> \mathbf{A}\colon\mathbf{B}=\sum_j\sum_i \left(\mathbf{a}_i\cdot\mathbf{c}_j\right)\left(\mathbf{b}_i\cdot\mathbf{d}_j\right) </math>
 
|| ''Dot–cross product''
<math> \mathbf{A}\!\!\!\begin{array}{c}
_\cdot \\
^\times
\end{array}\!\!\!
\mathbf{B} = \sum_j\sum _i \left(\mathbf{a}_i\cdot\mathbf{c}_j\right)\left(\mathbf{b}_i\times\mathbf{d}_j\right) </math>
|-valign="top"
! Cross
||
|| ''Cross–dot product''
 
<math> \mathbf{A}\!\!\!\begin{array}{c}
_\times  \\
^\cdot
\end{array}\!\!\!
\mathbf{B} = \sum_j\sum _i \left(\mathbf{a}_i\times\mathbf{c}_j\right)\left(\mathbf{b}_i\cdot\mathbf{d}_j\right) </math>
|| ''Double cross product''
<math> \mathbf{A}
\!\!\!\begin{array}{c}
_\times  \\
^\times
\end{array}\!\!\!
\mathbf{B}=\sum _{i,j} \left(\mathbf{a}_i\times \mathbf{c}_j\right)\left(\mathbf{b}_i\times \mathbf{d}_j\right) </math>
|}
 
==== Double-dot product ====
 
There are two ways to define the double dot product; one must be careful when deciding which convention to use. As there are no analogous matrix operations for the remaining dyadic products, no ambiguities in their definitions appear.
 
The double-dot product is [[commutative]] due to commutativity of the normal dot-product:
 
:<math> \mathbf{A} \colon \! \mathbf{B} = \mathbf{B} \colon \! \mathbf{A} </math>
 
There is a special double dot product with a [[transpose]]
 
:<math> \mathbf{A} \colon \! \mathbf{B}^\mathrm{T} = \mathbf{A}^\mathrm{T} \colon \! \mathbf{B} </math>
 
Another identity is:
 
:<math>\mathbf{A}\colon\mathbf{B}=\left(\mathbf{A}\cdot\mathbf{B}^\mathrm{T}\right)\colon \mathbf{I}
=\left(\mathbf{B}\cdot\mathbf{A}^\mathrm{T}\right)\colon \mathbf{I} </math>
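
For matrices, the two conventions amount to the traces tr('''AB''') and tr('''A'''<sup>T</sup>'''B''') respectively; a short numerical check of this and of the identities above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.random((2, 3, 3))
I = np.eye(3)

colon1 = lambda X, Y: np.trace(X @ Y)   # convention (ab):(cd) = (a.d)(b.c)
colon2 = lambda X, Y: np.sum(X * Y)     # convention (ab):(cd) = (a.c)(b.d)

print(np.isclose(colon1(A, B), colon1(B, A)))        # commutativity
print(np.isclose(colon2(A, B), colon2(B, A)))        # (both conventions)
print(np.isclose(colon2(A, B.T), colon2(A.T, B)))    # A : B^T = A^T : B
print(np.isclose(colon2(A, B), colon2(A @ B.T, I)))  # A : B = (A.B^T) : I
</syntaxhighlight>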
 
==== Double-cross product ====
 
We can see that, for any dyad formed from two vectors '''a''' and '''b''', its double cross product is zero.
 
:<math> \left(\mathbf{ab}\right)
\!\!\!\begin{array}{c}
_\times  \\
^\times
\end{array}\!\!\!
\left(\mathbf{ab}\right)=\left(\mathbf{a}\times\mathbf{a}\right)\left(\mathbf{b}\times\mathbf{b}\right)= 0</math>
 
However, by definition, a dyadic double-cross product on itself will generally be non-zero. For example, a dyadic '''A''' composed of six different vectors
 
:<math>\mathbf{A}=\sum _{i=1}^3 \mathbf{a}_i\mathbf{b}_i </math>
 
has a non-zero self-double-cross product of
 
:<math> \mathbf{A}
\!\!\!\begin{array}{c}
_\times  \\
^\times
\end{array}\!\!\!
\mathbf{A} = 2 \left[\left(\mathbf{a}_1\times \mathbf{a}_2\right)\left(\mathbf{b}_1\times \mathbf{b}_2\right)+\left(\mathbf{a}_2\times \mathbf{a}_3\right)\left(\mathbf{b}_2\times \mathbf{b}_3\right)+\left(\mathbf{a}_3\times \mathbf{a}_1\right)\left(\mathbf{b}_3\times \mathbf{b}_1\right)\right] </math>
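
A numerical check of this formula for a dyadic built from three pairs of random vectors:

<syntaxhighlight lang="python">
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0

rng = np.random.default_rng(5)
av, bv = rng.random((2, 3, 3))                    # rows: a_1..a_3 and b_1..b_3
A = sum(np.outer(av[i], bv[i]) for i in range(3))

# Double cross of A with itself, in components, versus the sum over pairs.
lhs = np.einsum('mik,njl,ij,kl->mn', eps, eps, A, A)
rhs = 2 * sum(np.outer(np.cross(av[i], av[j]), np.cross(bv[i], bv[j]))
              for i, j in [(0, 1), (1, 2), (2, 0)])
print(np.allclose(lhs, rhs))                      # True
</syntaxhighlight>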
 
====Tensor contraction====
 
{{main|Tensor contraction}}
 
The ''spur'' or ''expansion factor'' arises from the formal expansion of the dyadic in a coordinate basis by replacing each juxtaposition by a dot product of vectors:
 
:<math> \begin{array}{llll}
|\mathbf{A}| & = A_{11} \mathbf{i}\cdot\mathbf{i} + A_{12} \mathbf{i}\cdot\mathbf{j} + A_{13} \mathbf{i}\cdot\mathbf{k} \\
& + A_{21} \mathbf{j}\cdot\mathbf{i} + A_{22} \mathbf{j}\cdot\mathbf{j} + A_{23} \mathbf{j}\cdot\mathbf{k}\\
& + A_{31} \mathbf{k}\cdot\mathbf{i} + A_{32} \mathbf{k}\cdot\mathbf{j} + A_{33} \mathbf{k}\cdot\mathbf{k} \\
\\
& = A_{11} + A_{22} + A_{33} \\
\end{array}</math>
 
In index notation this is the contraction of indices on the dyadic:
 
:<math>|\mathbf{A}| = \sum_i A_i{}^i</math>
 
In three dimensions only, the ''rotation factor'' arises by replacing every juxtaposition by a [[cross product]]
 
:<math> \begin{array}{llll}
\langle\mathbf{A}\rangle & = A_{11} \mathbf{i}\times\mathbf{i} + A_{12} \mathbf{i}\times\mathbf{j} + A_{13} \mathbf{i}\times\mathbf{k} \\
& + A_{21} \mathbf{j}\times\mathbf{i} + A_{22} \mathbf{j}\times\mathbf{j} + A_{23} \mathbf{j}\times\mathbf{k}\\
& + A_{31} \mathbf{k}\times\mathbf{i} + A_{32} \mathbf{k}\times\mathbf{j} + A_{33} \mathbf{k}\times\mathbf{k} \\
\\
& = A_{12} \mathbf{k} - A_{13} \mathbf{j} - A_{21} \mathbf{k} \\
& + A_{23} \mathbf{i} + A_{31} \mathbf{j} - A_{32} \mathbf{i} \\
\\
& = (A_{23}-A_{32})\mathbf{i} + (A_{31}-A_{13})\mathbf{j} + (A_{12}-A_{21})\mathbf{k}\\
\end{array}</math>
 
In index notation this is the contraction of '''A''' with the [[Levi-Civita tensor]]
:<math>\langle\mathbf{A}\rangle=\sum_{jk}{\epsilon_i}^{jk}A_{jk}.</math>
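
Both contractions are one-liners numerically; a sketch with a random dyadic:

<syntaxhighlight lang="python">
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0

A = np.random.default_rng(6).random((3, 3))

spur = np.trace(A)                       # |A| = A11 + A22 + A33
rot  = np.einsum('ijk,jk->i', eps, A)    # <A> as a vector
print(np.allclose(rot, [A[1, 2] - A[2, 1],
                        A[2, 0] - A[0, 2],
                        A[0, 1] - A[1, 0]]))   # True
</syntaxhighlight>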
 
==Special dyadics==
 
===Unit dyadic===
 
For any vector '''a''', there exists a unit dyadic '''I''' such that
 
:<math> \mathbf{I}\cdot\mathbf{a}=\mathbf{a}\cdot\mathbf{I}= \mathbf{a} </math>
 
For any basis of 3 vectors '''a''', '''b''' and '''c''', with [[dual basis|reciprocal]] basis <math>\hat{\mathbf{a}}, \hat{\mathbf{b}}, \hat{\mathbf{c}}</math>, the unit dyadic is defined by
 
:<math>\mathbf{I} = \mathbf{a}\hat{\mathbf{a}} + \mathbf{b}\hat{\mathbf{b}} + \mathbf{c}\hat{\mathbf{c}}</math>
 
In the standard basis,
 
:<math> \mathbf{I} = \mathbf{ii} + \mathbf{jj} + \mathbf{kk} </math>
 
The corresponding matrix is
 
:<math>\mathbf{I}=\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{pmatrix}</math>
 
This can be put on more careful foundations (explaining what the logical content of "juxtaposing notation" could possibly mean) using the language of tensor products. If ''V'' is a finite-dimensional [[vector space]], a dyadic tensor on ''V'' is an elementary tensor in the tensor product of ''V'' with its [[dual space]].
 
The tensor product of ''V'' and its dual space is [[isomorphic]] to the space of [[linear map]]s from ''V'' to ''V'': a dyadic tensor ''vf'' is simply the linear map sending any ''w'' in ''V'' to ''f''(''w'')''v''. When ''V'' is Euclidean ''n''-space, we can use the [[inner product]] to identify the dual space with ''V'' itself, making a dyadic tensor an elementary tensor product of two vectors in Euclidean space.
 
In this sense, the unit dyad '''ij''' is the function from 3-space to itself sending ''a''<sub>1</sub>'''i''' + ''a''<sub>2</sub>'''j''' + ''a''<sub>3</sub>'''k''' to ''a''<sub>2</sub>'''i''', and '''jj''' sends this sum to ''a''<sub>2</sub>'''j'''. Now it is revealed in what (precise) sense '''ii''' + '''jj''' + '''kk''' is the identity: it sends ''a''<sub>1</sub>'''i''' + ''a''<sub>2</sub>'''j''' + ''a''<sub>3</sub>'''k''' to itself, because its effect is to sum each unit vector in the standard basis scaled by the coefficient of the vector in that basis.
 
;Properties of unit dyadics
 
:<math> \left(\mathbf{a}\times\mathbf{I}\right)\cdot\left(\mathbf{b}\times\mathbf{I}\right)= \mathbf{ba}-\left(\mathbf{a}\cdot\mathbf{b}\right)\mathbf{I}</math>
 
:<math>\mathbf{I}
\!\!\begin{array}{c}
_\times  \\
^\cdot
\end{array}\!\!\!
\left(\mathbf{ab}\right)=\mathbf{b}\times\mathbf{a} </math>
 
:<math> \mathbf{I}
\!\!\begin{array}{c}
_\times  \\
^\times
\end{array}\!\!
\mathbf{A}=(\mathbf{A}
\!\!\begin{array}{c}
_\times  \\
^\times
\end{array}\!\!
\mathbf{I})\mathbf{I}-\mathbf{A}^\mathrm{T}</math>
 
:<math>\mathbf{I}\;\colon\left(\mathbf{ab}\right) = \left(\mathbf{I}\cdot\mathbf{a}\right)\cdot\mathbf{b} = \mathbf{a}\cdot\mathbf{b} = \mathrm{tr}\left(\mathbf{ab}\right)</math>
 
where "tr" denotes the [[Trace (linear algebra)|trace]].
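
A numerical check of the first and last of these properties (the helper <code>skew</code> represents '''a''' &times; '''I''' as a matrix):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)
a, b = rng.random((2, 3))
I = np.eye(3)

def skew(v):
    """Matrix of the dyadic v x I, acting as w -> v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# (a x I).(b x I) = ba - (a.b) I
print(np.allclose(skew(a) @ skew(b), np.outer(b, a) - (a @ b) * I))
# I : (ab) = a.b = tr(ab)
print(np.isclose(np.sum(I * np.outer(a, b)), np.trace(np.outer(a, b))))
</syntaxhighlight>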
 
===Rotation dyadic===
 
For any vector '''a''' in three dimensions, the left cross product with the identity dyadic '''I''':

:<math> \mathbf{a}\times \mathbf{I}</math>

acts on vectors perpendicular to '''a''' as a 90° anticlockwise rotation dyadic around '''a''' (scaled by |'''a'''|). Alternatively, the dyadic tensor
 
:'''J'''  =  '''ji &minus; ij''' = <math> \begin{pmatrix}
0 & -1 \\
1 & 0
\end{pmatrix}</math>
 
is a 90° anticlockwise [[Rotation operator (vector space)|rotation operator]] in 2d. It can be left-dotted with a vector to produce the rotation:
:<math> (\mathbf{j i} - \mathbf{i j}) \cdot (x \mathbf{i} + y \mathbf{j}) =
x \mathbf{j i} \cdot \mathbf{i} - x \mathbf{i j} \cdot \mathbf{i} + y \mathbf{j i} \cdot \mathbf{j} - y \mathbf{i j} \cdot \mathbf{j} =
-y \mathbf{i} + x \mathbf{j},</math>
or in matrix notation
:<math>
\begin{pmatrix}
0 & -1 \\
1 & 0
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}=
\begin{pmatrix}
-y \\
x
\end{pmatrix}.</math>
 
A general 2d rotation dyadic for an anticlockwise angle ''θ'' is
 
:<math>\mathbf{I}\cos\theta + \mathbf{J}\sin\theta =
\begin{pmatrix}
  \cos\theta &-\sin\theta \\
  \sin\theta &\;\cos\theta
\end{pmatrix}
</math>
 
where '''I''' and '''J''' are as above.
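
A short numerical check that '''J''' and '''I''' cos ''θ'' + '''J''' sin ''θ'' act as stated:

<syntaxhighlight lang="python">
import numpy as np

I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])         # ji - ij in matrix form

v = np.array([3.0, 4.0])
print(J @ v)                        # [-4.  3.]  (90 deg anticlockwise)

t = np.pi / 6
R = I * np.cos(t) + J * np.sin(t)
print(R @ v)                        # v rotated 30 deg anticlockwise
print(np.allclose(R.T @ R, I))      # rotation dyadics are orthogonal
</syntaxhighlight>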
 
==Related terms==
Some authors generalize from the term ''dyadic'' to related terms ''triadic'', ''tetradic'' and ''polyadic''.<ref>For example, {{cite journal |author1=I. V. Lindell |author2=A. P. Kiselev |title=Polyadic methods in elastodynamics |year=2001 |journal=Progress In Electromagnetics Research |volume=31 |pages=113–154 }} [http://www.jpier.org/PIER/pier31/06.0005171.Lindell.K.pdf]</ref>


==See also==
* [[Kronecker product]]
* [[Polyadic algebra]]
* [[Unit vector]]
* [[Multivector]]
* [[Differential form]]
* [[Quaternions]]
* [[Field (mathematics)]]


==References==
{{reflist}}
 
* {{cite news|url=http://www.stanford.edu/class/me331b/documents/VectorBasisIndependent.pdf|author=P. Mitiguy|year=2009|title=Vectors and dyadics|location=[[Stanford]], USA}} Chapter 2
* {{cite book | title=Vector analysis, Schaum's outlines|first1=M.R.|last1=Spiegel|first2=S.|last2=Lipschutz|first3=D.|last3=Spellman| year=2009 | publisher=McGraw Hill|isbn=978-0-07-161545-7}}
* {{cite book | title=Continuum Mechanics | author=A.J.M. Spencer | year=1992 | publisher=Dover Publications | isbn=0-486-43594-6 }}.
* {{Citation | last1=Morse | first1=Philip M. | last2=Feshbach | first2=Herman | title=Methods of theoretical physics, Volume 1 | publisher=[[McGraw-Hill]] | location=New York | mr=0059774 |isbn=978-0-07-043316-8 | year=1953 | chapter=§1.6: Dyadics and other vector operators|pages=54&ndash;92}}.
*{{cite book | title=Methods for Electromagnetic Field Analysis | author=Ismo V. Lindell | publisher=Wiley-Blackwell |year=1996 | isbn=978-0-7803-6039-6 }}.
*<cite id=Chen>{{cite book | title=Theory of Electromagnetic Waves: A Coordinate-Free Approach | author=Hollis C. Chen | publisher=McGraw Hill |year=1983 | isbn=978-0-07-010688-8 }}.</cite>
 
==External links==
* [http://www.ismolindell.com/publications/monographs/pdf/Aftis.pdf Advanced Field Theory, I. V. Lindell]
* [http://my.ece.ucsb.edu/bobsclass/201B/W01/vectors.pdf Vector and Dyadic Analysis]
* [http://chem4823.usask.ca/nmr/tensor.pdf Introductory Tensor Analysis]
* [http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20050175884_2005173651.pdf Nasa.gov, Foundations of Tensor Analysis for students of Physics and Engineering with an Introduction to the Theory of Relativity, J.C. Kolecki]
* [http://www.grc.nasa.gov/WWW/k-12/Numbers/Math/documents/Tensors_TM2002211716.pdf Nasa.gov, An introduction to Tensors for students of Physics and Engineering, J.C. Kolecki]
 
{{tensor}}


[[Category:Tensors]]
[[Category:Statistical data types]]
[[Category:Theory of probability distributions]]
