In [[probability theory]], a '''pairwise independent''' collection of [[random variable]]s is a set of random variables any two of which are [[statistical independence|independent]].<ref>Gut, A. (2005) ''Probability: a Graduate Course'', Springer-Verlag. ISBN 0-387-27332-8. pp.&nbsp;71&ndash;72.</ref>  Any collection of [[Mutual independence|mutually independent]] random variables is pairwise independent, but some pairwise independent collections are not mutually independent.  Pairwise independent random variables with finite [[variance]] are [[uncorrelated]].
 
Two random variables ''X'' and ''Y'' are '''independent''' if and only if the random vector (''X'', ''Y'') with [[joint distribution|joint]] cumulative distribution function (CDF) <math>F_{X,Y}(x,y)</math> satisfies
 
:<math>F_{X,Y}(x,y) = F_X(x) F_Y(y),</math>
 
or equivalently, if a [[probability density function|joint density]] <math>f_{X,Y}(x,y)</math> exists, it satisfies
 
:<math>f_{X,Y}(x,y) = f_X(x) f_Y(y).</math>
 
That is, the joint distribution is equal to the product of the marginal distributions.<ref>{{cite book|title=Introduction to Mathematical Statistics|author = Hogg, R. V., McKean, J. W., Craig, A. T.| edition=6| year=2005| publisher=Pearson Prentice Hall|location=Upper Saddle River, NJ|isbn=0-13-008507-3}} Definition 2.5.1, page 109.</ref>
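 
As an illustration, the following Python snippet (a sketch for this article, not drawn from the cited references) enumerates the joint probability mass function of two independent fair coin flips and checks that it factors into the product of its marginals:
 
<syntaxhighlight lang="python">
# Sketch: check the factorization f_{X,Y}(x,y) = f_X(x) * f_Y(y)
# for two independent fair coin flips (1 = heads, 0 = tails).
from fractions import Fraction
from itertools import product

# Joint pmf: all four outcomes are equally likely.
joint = {(x, y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}

# Marginals are obtained by summing the joint pmf over the other variable.
f_X = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
f_Y = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# Independence holds exactly when the joint pmf factors at every point.
assert all(joint[x, y] == f_X[x] * f_Y[y] for x, y in joint)
</syntaxhighlight>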
 
In practice, the modifier "mutual" is usually dropped, so that '''independence''' means '''mutual independence''' unless the context makes clear otherwise. A statement such as "''X'', ''Y'', ''Z'' are independent random variables" means that ''X'', ''Y'', ''Z'' are mutually independent.
 
==Example==
 
Pairwise independence does not imply mutual independence, as shown by the following example attributed to S. Bernstein.<ref>{{cite book|title=Introduction to Mathematical Statistics|author = Hogg, R. V., McKean, J. W., Craig, A. T.| edition=6| year=2005| publisher=Pearson Prentice Hall|location=Upper Saddle River, NJ|isbn=0-13-008507-3}} Remark 2.6.1, p. 120.</ref>
 
Suppose ''X'' and ''Y'' are two independent tosses of a fair coin, where we designate 1 for heads and 0 for tails. Let the third random variable ''Z'' be equal to 1 if one and only one of those coin tosses resulted in "heads", and 0 otherwise. Then jointly the triple (''X'', ''Y'', ''Z'') has the following [[joint probability distribution|probability distribution]]:
 
:<math>(X,Y,Z)=\begin{cases}
(0,0,0) & \text{with probability}\ 1/4, \\
(0,1,1) & \text{with probability}\ 1/4, \\
(1,0,1) & \text{with probability}\ 1/4, \\
(1,1,0) & \text{with probability}\ 1/4.
\end{cases}</math>
 
Here the marginal probability distributions are identical: <math>f_X(0)=f_Y(0)=f_Z(0)=1/2,</math> and
<math>f_X(1)=f_Y(1)=f_Z(1)=1/2.</math> The bivariate distributions also agree: <math> f_{X,Y}=f_{X,Z}=f_{Y,Z}, </math> where <math>f_{X,Y}(0,0)=f_{X,Y}(0,1)=f_{X,Y}(1,0)=f_{X,Y}(1,1)=1/4.</math>
 
Since each pairwise joint distribution equals the product of the corresponding marginal distributions, the variables are pairwise independent:
 
* ''X'' and ''Y'' are independent, and
* ''X'' and ''Z'' are independent, and
* ''Y'' and ''Z'' are independent.
 
However, ''X'', ''Y'', and ''Z'' are '''not''' [[Mutually_independent#More_than_two_random_variables|mutually independent]], since <math>f_{X,Y,Z}(x,y,z) \neq f_X(x)f_Y(y)f_Z(z)</math>; for example, <math>f_{X,Y,Z}(1,1,1) = 0 \neq 1/8 = f_X(1)f_Y(1)f_Z(1)</math>. In fact, each of ''X'', ''Y'', ''Z'' is completely determined by the other two: each is the [[modular arithmetic|sum (modulo 2)]] of the other two. That is as far from independence as random variables can get.
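 
These factorizations can be verified exhaustively. The following Python sketch (an illustration added here, not part of Bernstein's presentation) enumerates the four equally likely outcomes and confirms that every pairwise joint distribution factors while the three-way joint distribution does not:
 
<syntaxhighlight lang="python">
# Sketch of Bernstein's example: (X, Y, Z) is pairwise independent
# but not mutually independent.  Z equals X XOR Y.
from fractions import Fraction
from itertools import product

quarter = Fraction(1, 4)
triple = {(0, 0, 0): quarter, (0, 1, 1): quarter,
          (1, 0, 1): quarter, (1, 1, 0): quarter}

def marginal(indices):
    """Marginal pmf over the coordinates listed in `indices`."""
    pmf = {}
    for outcome, prob in triple.items():
        key = tuple(outcome[i] for i in indices)
        pmf[key] = pmf.get(key, 0) + prob
    return pmf

singles = [marginal([i]) for i in range(3)]

# Every pairwise joint pmf factors into the product of its marginals ...
for i, j in [(0, 1), (0, 2), (1, 2)]:
    pair = marginal([i, j])
    assert all(pair.get((a, b), 0) == singles[i][(a,)] * singles[j][(b,)]
               for a, b in product((0, 1), repeat=2))

# ... but the three-way pmf does not: f(1,1,1) = 0, while the product
# of the marginals is (1/2)^3 = 1/8.
assert triple.get((1, 1, 1), 0) != Fraction(1, 8)
</syntaxhighlight>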
 
==Generalization==
 
More generally, we can talk about ''k''-wise independence, for any ''k''&nbsp;≥&nbsp;2. The idea is similar: a set of [[random variable]]s is ''k''-wise independent if every subset of size ''k'' of those variables is independent. ''k''-wise independence is used in theoretical computer science, for example to prove a theorem about the problem [[MAXEkSAT]].
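 
One reason ''k''-wise independence is useful in computer science is that it can be achieved with far fewer random bits than full independence. As a sketch (using the standard affine construction over a prime field, one classical way to obtain pairwise independence, and not specific to the MAXEkSAT result), the following Python code generates ''p'' pairwise independent, uniform values from only two random seeds:
 
<syntaxhighlight lang="python">
# Sketch: with p prime and seeds a, b uniform on {0, ..., p-1}, the values
# h(x) = (a*x + b) mod p for x = 0, ..., p-1 are pairwise independent
# and each uniform on {0, ..., p-1}.
import random
from collections import Counter

p = 7  # any prime modulus works

def sample_family():
    """Draw one function from the family; a and b are the only randomness."""
    a, b = random.randrange(p), random.randrange(p)
    return [(a * x + b) % p for x in range(p)]

# For fixed x != y, the map (a, b) -> (h(x), h(y)) is a bijection on
# Z_p x Z_p, so the pair (h(x), h(y)) is uniform -- exactly pairwise
# independence.  Check empirically for x = 2, y = 5:
counts = Counter()
trials = 100_000
for _ in range(trials):
    h = sample_family()
    counts[(h[2], h[5])] += 1

# Each of the p*p value pairs should occur with frequency close to 1/p^2.
print(max(abs(c / trials - 1 / p**2) for c in counts.values()))
</syntaxhighlight>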
 
== See also ==
 
* [[Pairwise]]
 
== References ==
{{reflist}}
 
[[Category:Probability theory]]
[[Category:Theory of probability distributions]]
[[Category:Statistical dependence]]
