{{Probability distribution |
  name      =Inverse-Wishart|
  type      =density|
  pdf_image  =|
  cdf_image  =|
  notation  =<math> \mathcal{W}^{-1}({\mathbf\Psi},\nu)</math>|
  parameters =<math> \nu > p-1  </math> [[degrees of freedom (statistics)|degrees of freedom]] ([[real numbers|real]])<br /><math>\mathbf{\Psi} > 0</math> [[scale matrix]] ([[positive-definite matrix|pos. def]])|
  support    =<math>\mathbf{X}</math> is [[positive-definite matrix|positive definite]]|
  pdf        =<math>\frac{\left|{\mathbf\Psi}\right|^{\frac{\nu}{2}}}{2^{\frac{\nu p}{2}}\Gamma_p(\frac{\nu}{2})} \left|\mathbf{X}\right|^{-\frac{\nu+p+1}{2}}e^{-\frac{1}{2}\operatorname{tr}({\mathbf\Psi}\mathbf{X}^{-1})}</math>
*<math>\Gamma_p</math> is the [[multivariate gamma function]]
*<math>\mathrm{tr}</math> is the [[trace (linear algebra)|trace]] function
|
  cdf        =|
  mean      = <math>\frac{\mathbf{\Psi}}{\nu - p - 1}</math>|
  median    =|
  mode      = <math>\frac{\mathbf{\Psi}}{\nu + p + 1}</math><ref name="Ohagan">{{Cite book
| author = A. O'Hagan, and J. J. Forster
| title = Kendall's Advanced Theory of Statistics: Bayesian Inference
| volume = 2B
| edition = 2
| publisher = Arnold
| year = 2004
| isbn = 0-340-80752-0
  }}</ref>{{rp|406}}|
  variance  =see below|
  skewness  =|
  kurtosis  =|
  entropy    =|
  mgf        =|
  char      =|
}}
 
In [[statistics]], the '''inverse Wishart distribution''', also called the '''inverted Wishart distribution''', is a [[probability distribution]] defined on real-valued [[positive-definite matrix|positive-definite]] [[matrix (mathematics)|matrices]].  In [[Bayesian statistics]] it is used as the [[conjugate prior]] for the covariance matrix of a
[[multivariate normal]] distribution.
 
We say <math>\mathbf{X}</math> follows an inverse Wishart distribution, denoted as <math> \mathbf{X}\sim \mathcal{W}^{-1}({\mathbf\Psi},\nu)</math>, if its [[matrix inverse|inverse]] <math> \mathbf{X}^{-1}</math> has a [[Wishart distribution]] <math> \mathcal{W}({\mathbf \Psi}^{-1}, \nu) </math>.
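
This equivalence can be exercised directly for sampling. The following is a minimal sketch in Python, assuming SciPy's <code>scipy.stats.wishart</code> and <code>scipy.stats.invwishart</code> share the parameterization above (scale matrix and degrees of freedom); the numerical values of <math>\nu</math> and <math>\mathbf\Psi</math> are purely illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import wishart, invwishart

nu, p = 7, 3                       # degrees of freedom (nu > p - 1) and dimension
Psi = np.array([[2.0, 0.3, 0.0],   # scale matrix (positive definite)
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 1.5]])
rng = np.random.default_rng(0)

# Draw X ~ W^{-1}(Psi, nu) by sampling A ~ W(Psi^{-1}, nu) and inverting, per the definition.
A = wishart(df=nu, scale=np.linalg.inv(Psi)).rvs(random_state=rng)
X = np.linalg.inv(A)

# SciPy also exposes the distribution directly.
X_direct = invwishart(df=nu, scale=Psi).rvs(random_state=rng)
</syntaxhighlight>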
 
==Density==
The [[probability density function]] of the inverse Wishart is:
 
:<math>
\frac{\left|{\mathbf\Psi}\right|^{\frac{\nu}{2}}}{2^{\frac{\nu p}{2}}\Gamma_p(\frac{\nu}{2})} \left|\mathbf{X}\right|^{-\frac{\nu+p+1}{2}}e^{-\frac{1}{2}\operatorname{tr}({\mathbf\Psi}\mathbf{X}^{-1})}
</math>
 
where <math>\mathbf{X}</math> and <math>{\mathbf\Psi}</math> are <math>p\times p</math> [[positive-definite matrix|positive definite]] matrices, and &Gamma;<sub>''p''</sub>(&middot;) is the [[multivariate gamma function]].
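
The normalizing constant involves the multivariate gamma function, whose logarithm is available in SciPy as <code>scipy.special.multigammaln</code>. A sketch evaluating the log-density by hand and against <code>scipy.stats.invwishart</code>, with illustrative values for <math>\nu</math>, <math>\mathbf\Psi</math> and <math>\mathbf{X}</math>:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import multigammaln
from scipy.stats import invwishart

def inv_wishart_logpdf(X, Psi, nu):
    """Log of the inverse-Wishart density given above."""
    p = Psi.shape[0]
    _, logdet_Psi = np.linalg.slogdet(Psi)
    _, logdet_X = np.linalg.slogdet(X)
    log_norm = 0.5 * nu * logdet_Psi - 0.5 * nu * p * np.log(2) - multigammaln(0.5 * nu, p)
    return log_norm - 0.5 * (nu + p + 1) * logdet_X - 0.5 * np.trace(Psi @ np.linalg.inv(X))

nu = 7
Psi = np.diag([2.0, 1.0, 1.5])
X = np.diag([1.2, 0.8, 1.1])
print(inv_wishart_logpdf(X, Psi, nu))            # hand-coded log-density
print(invwishart(df=nu, scale=Psi).logpdf(X))    # should agree
</syntaxhighlight>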
 
==Theorems==
===Distribution of the inverse of a Wishart-distributed matrix===
 
If <math>{\mathbf A}\sim \mathcal{W}({\mathbf\Sigma},\nu)</math> and <math>{\mathbf\Sigma}</math> is of size <math>p \times p</math>, then <math>\mathbf{X}={\mathbf A}^{-1}</math> has an inverse Wishart distribution <math>\mathbf{X}\sim \mathcal{W}^{-1}({\mathbf\Sigma}^{-1},\nu)</math> .<ref name="MardiaK1979Multivariate">{{Cite book
| author = [[Kanti V. Mardia]], J. T. Kent and J. M. Bibby
| title = Multivariate Analysis
| publisher = [[Academic Press]]
| year = 1979
| isbn = 0-12-471250-9
}}</ref>
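
A Monte Carlo sketch of this theorem, assuming SciPy's Wishart parameterization matches <math>\mathcal{W}({\mathbf\Sigma},\nu)</math>: the average of many inverted Wishart draws should approach the inverse-Wishart mean <math>{\mathbf\Sigma}^{-1}/(\nu-p-1)</math> given in the Moments section below.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import wishart

nu, p = 10, 3
Sigma = np.diag([1.0, 2.0, 0.5])
rng = np.random.default_rng(1)

draws = wishart(df=nu, scale=Sigma).rvs(size=20000, random_state=rng)   # shape (20000, p, p)
inv_draws = np.linalg.inv(draws)                                        # invert each draw

print(inv_draws.mean(axis=0))                  # empirical mean of A^{-1}
print(np.linalg.inv(Sigma) / (nu - p - 1))     # theoretical inverse-Wishart mean
</syntaxhighlight>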
 
===Marginal and conditional distributions from an inverse Wishart-distributed matrix===
 
Suppose <math>{\mathbf A}\sim \mathcal{W}^{-1}({\mathbf\Psi},\nu)</math> has an inverse Wishart distribution. Partition the matrices <math> {\mathbf A} </math> and <math> {\mathbf\Psi} </math> [[Conformable matrix|conformably]] with each other
:<math>
    {\mathbf{A}} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}, \;
    {\mathbf{\Psi}} = \begin{bmatrix} \mathbf{\Psi}_{11} & \mathbf{\Psi}_{12} \\ \mathbf{\Psi}_{21} & \mathbf{\Psi}_{22} \end{bmatrix}
</math>
where <math>{\mathbf A_{ij}}</math> and <math>{\mathbf \Psi_{ij}}</math> are <math> p_{i}\times p_{j}</math> matrices. Then we have:
 
i) <math> {\mathbf A_{11} } </math> is independent of <math> {\mathbf A}_{11}^{-1}{\mathbf A}_{12} </math> and <math> {\mathbf A}_{22\cdot 1} </math>, where <math>{\mathbf A_{22\cdot 1}} = {\mathbf A}_{22} - {\mathbf A}_{21}{\mathbf A}_{11}^{-1}{\mathbf A}_{12}</math> is the [[Schur complement]] of <math> {\mathbf A_{11} } </math> in <math> {\mathbf A} </math>;
 
ii) <math> {\mathbf A_{11} } \sim \mathcal{W}^{-1}({\mathbf \Psi_{11} }, \nu-p_{2}) </math>;
 
iii) <math> {\mathbf A}_{11}^{-1} {\mathbf A}_{12}| {\mathbf A}_{22\cdot 1} \sim MN_{p_{1}\times p_{2}}
( {\mathbf \Psi}_{11}^{-1} {\mathbf \Psi}_{12},  {\mathbf A}_{22\cdot 1} \otimes  {\mathbf \Psi}_{11}^{-1}) </math>, where <math> MN_{p\times q}(\cdot,\cdot) </math> is a [[matrix normal distribution]];
 
iv) <math> {\mathbf A}_{22\cdot 1} \sim  \mathcal{W}^{-1}({\mathbf \Psi}_{22\cdot 1}, \nu) </math>, where <math>{\mathbf \Psi_{22\cdot 1}} = {\mathbf \Psi}_{22} - {\mathbf \Psi}_{21}{\mathbf \Psi}_{11}^{-1}{\mathbf \Psi}_{12}</math>.
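
Property (ii) can be checked numerically in the simplest case <math>p_1=1</math>, where the scalar block <math>{\mathbf A}_{11}</math> should follow <math>\mathcal{W}^{-1}({\mathbf \Psi}_{11},\nu-p_2)</math>, i.e. an inverse-gamma distribution (see the "Related distributions" section below). A sketch assuming SciPy's <code>invwishart</code>, <code>invgamma</code> and <code>kstest</code>, with illustrative parameter values:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import invwishart, invgamma, kstest

nu, p1, p2 = 9, 1, 2
Psi = np.array([[2.0, 0.4, 0.1],
                [0.4, 1.0, 0.3],
                [0.1, 0.3, 1.5]])
rng = np.random.default_rng(2)

draws = invwishart(df=nu, scale=Psi).rvs(size=20000, random_state=rng)
a11 = draws[:, 0, 0]                                 # the p1 x p1 upper-left block

# W^{-1}(Psi_11, nu - p2) with p = 1 is InvGamma(alpha = (nu - p2)/2, beta = Psi_11/2)
alpha, beta = 0.5 * (nu - p2), 0.5 * Psi[0, 0]
print(kstest(a11, invgamma(a=alpha, scale=beta).cdf))  # a large p-value is expected
</syntaxhighlight>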
 
===Conjugate distribution===
 
Suppose we wish to make inference about a covariance matrix <math>{\mathbf{\Sigma}}</math> whose [[prior probability|prior]] <math>{p(\mathbf{\Sigma})}</math> has a <math>\mathcal{W}^{-1}({\mathbf\Psi},\nu)</math> distribution. If the observations <math>\mathbf{X}=[\mathbf{x}_1,\ldots,\mathbf{x}_n]</math> are independent ''p''-variate Gaussian vectors drawn from a <math>N(\mathbf{0},{\mathbf \Sigma})</math> distribution, then the conditional distribution  <math>{p(\mathbf{\Sigma}|\mathbf{X})}</math> has a <math>\mathcal{W}^{-1}({\mathbf A}+{\mathbf\Psi},n+\nu)</math> distribution, where <math>{\mathbf{A}}=\mathbf{X}\mathbf{X}^T</math> is <math>n-1</math> times the sample covariance matrix.
 
Because the prior and posterior distributions are the same family, we say the inverse Wishart distribution is [[conjugate prior|conjugate]] to the multivariate Gaussian.
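
A sketch of this conjugate update in Python, with hypothetical zero-mean Gaussian data and SciPy's inverse-Wishart parameterization assumed to match the density above:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

p, n, nu = 3, 50, 6
Psi = np.eye(p)                                # prior scale matrix
rng = np.random.default_rng(3)

Sigma_true = np.array([[1.0, 0.5, 0.0],        # "unknown" covariance used only to simulate data
                       [0.5, 2.0, 0.3],
                       [0.0, 0.3, 0.8]])
X = multivariate_normal(mean=np.zeros(p), cov=Sigma_true).rvs(size=n, random_state=rng).T  # p x n

# Conjugate update: posterior is W^{-1}(Psi + X X^T, nu + n).
A = X @ X.T
post_df, post_scale = nu + n, Psi + A
print(post_scale / (post_df - p - 1))          # posterior mean, close to Sigma_true for large n
</syntaxhighlight>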
 
Due to its conjugacy to the multivariate Gaussian, it is possible to [[marginalize out]] (integrate out) the Gaussian's parameter <math>\mathbf{\Sigma}</math>.
 
<math>P(\mathbf{X}|\mathbf{\Psi},\nu) = \int P(\mathbf{X}|\mathbf{\Sigma})P(\mathbf{\Sigma}|\mathbf{\Psi},\nu) d\mathbf{\Sigma} = \frac{|\mathbf{\Psi}|^{\frac{\nu}{2}}\Gamma_p\left(\frac{\nu+n}{2}\right)}{\pi^{\frac{np}{2}}|\mathbf{\Psi}+\mathbf{A}|^{\frac{\nu+n}{2}}\Gamma_p(\frac{\nu}{2})}</math>
 
(This is useful because the covariance matrix <math>\mathbf{\Sigma}</math> is not known in practice, whereas <math>{\mathbf\Psi}</math> is specified ''a priori'' and <math>{\mathbf A}</math> can be computed from the data, so the right-hand side can be evaluated directly.)
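
A sketch evaluating this marginal likelihood on the log scale, using <code>scipy.special.multigammaln</code> for the multivariate gamma function; the data matrix <math>\mathbf{X}</math> is taken to be <math>p\times n</math> as in the surrounding text:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import multigammaln

def log_marginal_likelihood(X, Psi, nu):
    """log P(X | Psi, nu) with Sigma integrated out, per the formula above."""
    p, n = X.shape
    A = X @ X.T
    _, logdet_Psi = np.linalg.slogdet(Psi)
    _, logdet_PsiA = np.linalg.slogdet(Psi + A)
    return (0.5 * nu * logdet_Psi
            + multigammaln(0.5 * (nu + n), p)
            - 0.5 * n * p * np.log(np.pi)
            - 0.5 * (nu + n) * logdet_PsiA
            - multigammaln(0.5 * nu, p))
</syntaxhighlight>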
 
===Moments===
The following is based on Press, S. J. (1982) "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.
 
The mean:<ref name="MardiaK1979Multivariate"/>{{rp|85}}
:<math>
E(\mathbf X) = \frac{\mathbf\Psi}{\nu-p-1}.</math>
 
The variance of each element of <math>\mathbf{X}</math>:
:<math>
\operatorname{Var}(x_{ij}) = \frac{(\nu-p+1)\psi_{ij}^2 + (\nu-p-1)\psi_{ii}\psi_{jj}}
{(\nu-p)(\nu-p-1)^2(\nu-p-3)}</math>
The variance of a diagonal element uses the same formula as above with <math>i=j</math>, which simplifies to:
:<math>
\operatorname{Var}(x_{ii}) = \frac{2\psi_{ii}^2}{(\nu-p-1)^2(\nu-p-3)}.</math>
The covariances of the elements of <math>\mathbf{X}</math> are given by:
:<math>
\operatorname{Cov}(x_{ij},x_{kl}) = \frac{2\psi_{ij}\psi_{kl} + (\nu-p-1) (\psi_{ik}\psi_{jl} + \psi_{il}\psi_{kj})}{(\nu-p)(\nu-p-1)^2(\nu-p-3)}</math>
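
A Monte Carlo sketch of these moment formulas, assuming SciPy's <code>invwishart</code> with <code>df</code><math>{}=\nu</math> and <code>scale</code><math>{}=\mathbf\Psi</math> matches the density above; the parameter values are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import invwishart

nu, p = 12, 3
Psi = np.diag([2.0, 1.0, 1.5])
rng = np.random.default_rng(4)
draws = invwishart(df=nu, scale=Psi).rvs(size=50000, random_state=rng)

# Mean: Psi / (nu - p - 1)
print(draws.mean(axis=0))
print(Psi / (nu - p - 1))

# Variance of a diagonal element: 2 psi_ii^2 / ((nu - p - 1)^2 (nu - p - 3))
print(draws[:, 0, 0].var())
print(2 * Psi[0, 0]**2 / ((nu - p - 1)**2 * (nu - p - 3)))
</syntaxhighlight>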
 
== Related distributions ==
A [[univariate]] specialization of the inverse-Wishart distribution is the [[inverse-gamma distribution]]. With <math>p=1</math> (i.e., univariate), <math>\alpha = \nu/2</math>, <math>\beta = \mathbf{\Psi}/2</math> and <math>x=\mathbf{X}</math>, the [[probability density function]] of the inverse-Wishart distribution becomes
 
: <math>p(x|\alpha, \beta) = \frac{\beta^\alpha\, x^{-\alpha-1} \exp(-\beta/x)}{\Gamma_1(\alpha)}.</math>
 
i.e., the inverse-gamma distribution, where <math>\Gamma_1(\cdot)</math> is the ordinary [[Gamma function]].
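
This specialization can be checked numerically. A sketch with illustrative scalar parameters, assuming SciPy's <code>invwishart</code> and <code>invgamma</code> parameterizations as above:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import invwishart, invgamma

nu, Psi = 5.0, 3.0
x = np.linspace(0.1, 5.0, 50)

pdf_iw = invwishart(df=nu, scale=Psi).pdf(x)           # inverse Wishart with p = 1
pdf_ig = invgamma(a=nu / 2, scale=Psi / 2).pdf(x)      # inverse gamma with alpha = nu/2, beta = Psi/2
print(np.allclose(pdf_iw, pdf_ig))                     # True
</syntaxhighlight>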
 
A generalization is the [[inverse multivariate gamma distribution]].
 
Another generalization has been termed the generalized inverse Wishart distribution, <math>\mathcal{GW}^{-1}</math>. A <math> p \times p</math> positive definite matrix <math>\mathbf{X}</math> is said to be distributed as <math>\mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S})</math> if <math>\mathbf{Y} = \mathbf{X}^{1/2}\mathbf{S}^{-1}\mathbf{X}^{1/2}</math> is distributed as <math>\mathcal{W}^{-1}(\mathbf{\Psi},\nu)</math>. Here <math>\mathbf{X}^{1/2}</math> denotes the symmetric matrix square root of <math>\mathbf{X}</math>, the parameters <math>\mathbf{\Psi},\mathbf{S}</math> are <math> p \times p</math> positive definite matrices, and the parameter <math>\nu</math> is a positive scalar larger than <math>2p</math>. Note that when <math>\mathbf{S}</math> is equal to an identity matrix, <math>\mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S}) = \mathcal{W}^{-1}(\mathbf{\Psi},\nu)</math>. This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.<ref name="Triantafyllopoulos2011">{{Cite doi|10.1111/j.1467-9892.2010.00686.x}}</ref>
 
A different type of generalization is the [[normal-inverse-Wishart distribution]], essentially the product of a [[multivariate normal distribution]] with an inverse Wishart distribution.
 
==See also==
*[[Inverse multivariate gamma distribution]]
*[[Matrix normal distribution]]
*[[Wishart distribution]]
 
== References ==
<div class="references-small"><references/></div>
 
{{ProbDistributions|multivariate}}
 
[[Category:Continuous distributions]]
[[Category:Multivariate continuous distributions]]
[[Category:Conjugate prior distributions]]
[[Category:Probability distributions]]
[[Category:Exponential family distributions]]
