In [[probability theory]] and [[statistics]], the '''cumulants''' κ<sub>''n''</sub> of a [[probability distribution]] are a set of quantities that provide an alternative to the [[Moment (mathematics)|moments]] of the distribution.  The moments determine the cumulants in the sense that any two probability distributions whose moments are identical will have identical cumulants as well, and similarly the cumulants determine the moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments.
 
Just as for moments, where ''joint moments'' are used for collections of random variables, it is possible to define ''joint cumulants''.
 
==Definition==
The '''cumulants''' κ<sub>''n''</sub> of a random variable ''X'' are defined via the '''cumulant-generating function''' ''g''(''t''), which is the logarithm of the [[moment-generating function]]:
:<math>g(t)=\log(\operatorname {E}(e^{t X})).</math>
 
The cumulants κ<sub>''n''</sub> are obtained from a power series expansion of the cumulant generating function:
:<math>g(t)=\sum_{n=1}^\infty \kappa_{n} \frac{t^{n}}{n!}</math>
 
Alternatively, the ''n''th cumulant can be obtained as the ''n''th derivative of the cumulant generating function evaluated at zero (see DasGupta 2008,<ref>{{cite book |last1=DasGupta|first1=Anirban|title=Asymptotic Theory of Statistics and Probability |edition=1st |isbn=978-0-387-75970-8|year=2008|publisher=Springer Verlag|series=Springer Texts in Statistics}}</ref> def. 13.1, page 194):
 
:<math> \kappa_{n} =\frac{\partial ^n}{\partial t^n} g (t) \bigg|_{t=0} </math>
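
For instance, the derivatives can be evaluated symbolically. The following is a minimal sketch using the Python library SymPy, taking the Poisson distribution with mean μ, whose moment-generating function is exp(μ(e<sup>''t''</sup>&nbsp;&minus;&nbsp;1)), as a test case; the variable names are illustrative only.

<syntaxhighlight lang="python">
# Sketch: cumulants as derivatives of the cumulant generating function
# g(t) = log E(e^{tX}) at t = 0, tested on a Poisson(mu) distribution.
import sympy as sp

t, mu = sp.symbols('t mu', positive=True)
M = sp.exp(mu * (sp.exp(t) - 1))   # moment-generating function of Poisson(mu)
g = sp.log(M)                      # cumulant generating function

for n in range(1, 5):
    kappa_n = sp.simplify(sp.diff(g, t, n).subs(t, 0))
    print(n, kappa_n)              # each cumulant comes out equal to mu
</syntaxhighlight>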
 
==Uses in mathematical statistics==
Working with cumulants can have an advantage over using moments because for statistically independent random variables ''X'' and ''Y'',
 
:<math>
\begin{align}
g_{X+Y}(t) & =\log(\operatorname{E}(e^{t(X+Y)})) = \log(\operatorname{E}(e^{tX})\operatorname{E}(e^{tY})) \\
& = \log(\operatorname{E}(e^{tX})) + \log(\operatorname{E}(e^{tY})) = g_X(t) + g_Y(t),
\end{align}
</math>
 
so that each cumulant of a sum of independent random variables is the sum of the corresponding cumulants of the [[addend]]s.
 
A distribution with given cumulants κ<sub>''n''</sub> can be approximated through an [[Edgeworth series]].
 
==Cumulants of some discrete probability distributions==
* The [[constant random variable]] ''X''&nbsp;=&nbsp;1. The derivative of the cumulant generating function is ''g''&nbsp;'(''t'')&nbsp;=&nbsp;1.  The first cumulant is κ<sub>1</sub>&nbsp;=&nbsp;''g''&nbsp;'(0)&nbsp;=&nbsp;1 and the other cumulants are zero, κ<sub>2</sub> = κ<sub>3</sub> = κ<sub>4</sub> = ... =&nbsp;0.
 
* The constant random variable ''X''&nbsp;=&nbsp;μ. Every cumulant is just μ times the corresponding cumulant of the constant random variable ''X''&nbsp;=&nbsp;1. The derivative of the cumulant generating function is ''g''&nbsp;'(''t'')&nbsp;=&nbsp;μ.  The first cumulant is  κ<sub>1</sub>&nbsp;=&nbsp;''g''&nbsp;'(0)&nbsp;=&nbsp;μ and the other cumulants are zero, κ<sub>2</sub> = κ<sub>3</sub> = κ<sub>4</sub> = ... =&nbsp;0. Thus the constant random variables are exactly those whose cumulant generating function has a constant derivative.
 
* The [[Bernoulli distribution]]s, (number of successes in one trial with probability ''p'' of success). The special case ''p''&nbsp;=&nbsp;1 is the constant random variable ''X''&nbsp;=&nbsp;1. The derivative of the cumulant generating function is ''g''&nbsp;'(''t'')&nbsp;=&nbsp;((''p''<sup>&nbsp;&minus;1</sup>&minus;1)·e<sup>&minus;''t''</sup>&nbsp;+&nbsp;1)<sup>&minus;1</sup>. The first cumulants are κ<sub>1</sub>&nbsp;=&nbsp;''g''&nbsp;'(0)&nbsp;=&nbsp;''p'' and κ<sub>2</sub>&nbsp;=&nbsp;''g''&nbsp;'&nbsp;'(0)&nbsp;=&nbsp;''p''·(1&nbsp;&minus;&nbsp;''p''). The cumulants satisfy the recursion formula (checked symbolically in the sketch following this list)
 
:: <math>\kappa_{n+1}=p (1-p) \frac{d\kappa_n}{dp}.\,</math>
 
* The [[geometric distribution]]s, (number of failures before one success with probability ''p'' of success on each trial). The derivative of the cumulant generating function is ''g''&nbsp;'(''t'')&nbsp;=&nbsp;((1&nbsp;&minus;&nbsp;''p'')<sup>&minus;1</sup>·e<sup>&minus;''t''</sup>&nbsp;&minus;&nbsp;1)<sup>&minus;1</sup>. The first cumulants are κ<sub>1</sub>&nbsp;=&nbsp;''g''&nbsp;'(0)&nbsp;=&nbsp;''p''<sup>&minus;1</sup>&nbsp;&minus;&nbsp;1, and κ<sub>2</sub>&nbsp;=&nbsp;''g''&nbsp;'&nbsp;'(0)&nbsp;=&nbsp;κ<sub>1</sub>·''p''<sup>&nbsp;&minus;&nbsp;1</sup>. Substituting ''p''&nbsp;=&nbsp;(μ+1)<sup>&minus;1</sup> gives ''g''&nbsp;'(''t'')&nbsp;=&nbsp;((μ<sup>&minus;1</sup>&nbsp;+&nbsp;1)·e<sup>&minus;''t''</sup>&nbsp;&minus;&nbsp;1)<sup>&minus;1</sup> and κ<sub>1</sub>&nbsp;=&nbsp;μ.
 
* The [[Poisson distribution]]s. The derivative of the cumulant generating function is ''g''&nbsp;'(''t'')&nbsp;=&nbsp;μ·e<sup>''t''</sup>.  All cumulants are equal to the parameter: κ<sub>1</sub>&nbsp;=&nbsp;κ<sub>2</sub>&nbsp;=&nbsp;κ<sub>3</sub>&nbsp;=&nbsp;...=μ.
 
* The [[binomial distribution]]s, (number of successes in ''n'' [[statistical independence|independent]] trials with probability ''p'' of success on each trial). The special case ''n''&nbsp;=&nbsp;1 is a Bernoulli distribution. Every cumulant is just ''n'' times the corresponding cumulant of the corresponding Bernoulli distribution.  The derivative of the cumulant generating function is ''g''&nbsp;'(''t'')&nbsp;=&nbsp;''n''·((''p''<sup>&minus;1</sup>&minus;1)·e<sup>&minus;''t''</sup>&nbsp;+&nbsp;1)<sup>&minus;1</sup>. The first cumulants are κ<sub>1</sub>&nbsp;=&nbsp;''g''&nbsp;'(0)&nbsp;=&nbsp;''n·p'' and κ<sub>2</sub>&nbsp;=&nbsp;''g''&nbsp;'&nbsp;'(0)&nbsp;=&nbsp;κ<sub>1</sub>·(1&minus;''p''). Substituting ''p''&nbsp;=&nbsp;μ·''n''<sup>&minus;1</sup> gives ''g''&nbsp;'(''t'')&nbsp;=&nbsp;((μ<sup>&minus;1</sup>&nbsp;&minus;&nbsp;''n''<sup>&minus;1</sup>)·e<sup>&minus;''t''</sup>&nbsp;+&nbsp;''n''<sup>&minus;1</sup>)<sup>&minus;1</sup> and κ<sub>1</sub>&nbsp;=&nbsp;μ. The limiting case ''n''<sup>&minus;1</sup>&nbsp;=&nbsp;0 is a Poisson distribution.
 
* The [[negative binomial distribution]]s, (number of failures before ''n'' successes with probability ''p'' of success on each trial). The special case ''n''&nbsp;=&nbsp;1 is a geometric distribution. Every cumulant is just ''n'' times the corresponding cumulant of the corresponding geometric distribution. The derivative of the cumulant generating function is ''g''&nbsp;'(''t'')&nbsp;=&nbsp;''n''·((1&minus;''p'')<sup>&minus;1</sup>·e<sup>&minus;''t''</sup>&minus;1)<sup>&minus;1</sup>. The first cumulants are κ<sub>1</sub>&nbsp;=&nbsp;''g''&nbsp;'(0)&nbsp;=&nbsp;''n''·(''p''<sup>&minus;1</sup>&minus;1), and κ<sub>2</sub>&nbsp;=&nbsp;''g''&nbsp;'&nbsp;'(0)&nbsp;=&nbsp;κ<sub>1</sub>·''p''<sup>&minus;1</sup>. Substituting ''p''&nbsp;=&nbsp;(μ·''n''<sup>&minus;1</sup>+1)<sup>&minus;1</sup> gives ''g''&nbsp;'(''t'')&nbsp;=&nbsp;((μ<sup>&minus;1</sup>+''n''<sup>&minus;1</sup>)·e<sup>&minus;''t''</sup>&minus;''n''<sup>&minus;1</sup>)<sup>&minus;1</sup> and κ<sub>1</sub>&nbsp;=&nbsp;μ. Comparing these formulas to those of the binomial distributions explains the name 'negative binomial distribution'. The [[limiting case]] ''n''<sup>&minus;1</sup>&nbsp;=&nbsp;0  is a Poisson distribution.
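
The Bernoulli recursion formula above can be checked symbolically. A minimal sketch using the Python library SymPy, assuming the Bernoulli cumulant generating function ''g''(''t'')&nbsp;=&nbsp;log(1&nbsp;&minus;&nbsp;''p''&nbsp;+&nbsp;''p''·e<sup>''t''</sup>):

<syntaxhighlight lang="python">
# Sketch: verify kappa_{n+1} = p(1-p) d(kappa_n)/dp for the Bernoulli(p)
# distribution against direct differentiation of its cgf.
import sympy as sp

t, p = sp.symbols('t p', positive=True)
g = sp.log(1 - p + p * sp.exp(t))   # Bernoulli cumulant generating function
kappa = [sp.simplify(sp.diff(g, t, n).subs(t, 0)) for n in range(1, 6)]

for n in range(len(kappa) - 1):
    recursed = p * (1 - p) * sp.diff(kappa[n], p)
    print(n + 2, sp.simplify(kappa[n + 1] - recursed) == 0)   # True each time
</syntaxhighlight>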
 
Introducing the [[variance-to-mean ratio]]
 
: <math>\varepsilon=\mu^{-1}\sigma^2=\kappa_1^{-1}\kappa_2, \,</math>
 
the above probability distributions admit a unified formula for the derivative of the cumulant generating function:{{Citation needed|date=September 2010}}
 
: <math>g'(t)=\mu\cdot(1+\varepsilon\cdot (e^{-t}-1))^{-1}. \,</math>
 
The second derivative is
 
: <math>g''(t)=g'(t)\cdot(1+e^t\cdot (\varepsilon^{-1}-1))^{-1} \,</math>
 
confirming that the first cumulant is κ<sub>1</sub>&nbsp;=&nbsp;''g''&nbsp;'(0)&nbsp;=&nbsp;μ and the second cumulant is κ<sub>2</sub>&nbsp;=&nbsp;''g''&nbsp;'&nbsp;'(0)&nbsp;=&nbsp;''μ·ε''.
The constant random variables ''X''&nbsp;=&nbsp;μ have ''ε''&nbsp;=&nbsp;0. The binomial distributions have ''ε''&nbsp;=&nbsp;1&nbsp;&minus;&nbsp;''p'' so that 0&nbsp;<&nbsp;''ε''&nbsp;<&nbsp;1.  The Poisson distributions have ''ε''&nbsp;=&nbsp;1. The negative binomial distributions have ''ε''&nbsp;=&nbsp;''p''<sup>&minus;1</sup> so that ''ε''&nbsp;>&nbsp;1. Note the analogy to the classification of [[conic sections]] by [[eccentricity (mathematics)|eccentricity]]: circles ''ε'' = 0, ellipses 0 < ''ε'' < 1, parabolas ''ε'' = 1, hyperbolas ''ε'' > 1.
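
As a consistency check of the unified formula (a sketch in SymPy, not a proof), the binomial case with μ = ''n''·''p'' and ε = 1 &minus; ''p'' can be compared against the derivative listed above:

<syntaxhighlight lang="python">
# Sketch: the unified derivative mu/(1 + eps*(e^{-t} - 1)) with mu = n*p and
# eps = 1 - p should match the binomial g'(t) = n/((1/p - 1)e^{-t} + 1).
import sympy as sp

t, p, n = sp.symbols('t p n', positive=True)
binomial_deriv = n / ((1/p - 1) * sp.exp(-t) + 1)
unified = (n * p) / (1 + (1 - p) * (sp.exp(-t) - 1))
print(sp.simplify(binomial_deriv - unified))   # 0
</syntaxhighlight>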
 
==Cumulants of some continuous probability distributions==
* For the [[normal distribution]] with [[expected value]] μ and [[variance]] σ<sup>2</sup>, the cumulant generating function is ''g''(''t'') = μ''t'' + σ<sup>2</sup>''t''<sup>2</sup>/2. The first and second derivatives of the cumulant generating function are ''g''&nbsp;'(''t'')&nbsp;=&nbsp;μ&nbsp;+&nbsp;σ<sup>2</sup>·''t'' and ''g''"(''t'')&nbsp;=&nbsp;σ<sup>2</sup>. The cumulants are κ<sub>1</sub>&nbsp;=&nbsp;μ, κ<sub>2</sub>&nbsp;=&nbsp;σ<sup>2</sup>, and κ<sub>3</sub>&nbsp;=&nbsp;κ<sub>4</sub>&nbsp;=&nbsp;...&nbsp;=&nbsp;0. The special case σ<sup>2</sup>&nbsp;=&nbsp;0 is a constant random variable ''X''&nbsp;=&nbsp;μ.
 
* The cumulants of the [[Uniform distribution (continuous)|uniform distribution]] on the interval [&minus;1,&nbsp;0] are ''κ''<sub>''n''</sub> = ''B''<sub>''n''</sub>/''n'', where ''B''<sub>''n''</sub> is the ''n''th [[Bernoulli number]].
 
* The cumulants of the [[exponential distribution]] with parameter ''λ'' are ''κ''<sub>''n''</sub> = ''λ''<sup>&minus;''n''</sup>&nbsp;(''n''&nbsp;&minus;&nbsp;1)!.
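
These closed forms are easy to confirm symbolically; below is a minimal SymPy sketch for the exponential case, assuming its cgf ''g''(''t'') = &minus;log(1 &minus; ''t''/''λ''), valid for ''t'' < ''λ'':

<syntaxhighlight lang="python">
# Sketch: confirm kappa_n = (n-1)!/lambda^n for the exponential distribution.
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
g = -sp.log(1 - t / lam)            # cgf of Exp(lambda), for t < lambda
for n in range(1, 6):
    kappa_n = sp.diff(g, t, n).subs(t, 0)
    print(n, sp.simplify(kappa_n - sp.factorial(n - 1) / lam**n))   # 0 each time
</syntaxhighlight>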
 
==Some properties of the cumulant generating function==
The cumulant generating function ''g''(''t''), if it exists, is [[infinitely differentiable]] and [[convex function|convex]], and passes through the origin. Its first derivative ranges monotonically in the open interval from the [[infimum]] to the [[supremum]] of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the [[degenerate distribution]] of a single point mass.  The cumulant-generating function exists if and only if the tails of the distribution are majorized by an [[exponential decay]], that is (see [[Big O notation]]):
:<math>\exists c>0, F(x)=O(e^{cx}), x\to-\infty;</math> and
:<math>\exists d>0, 1-F(x)=O(e^{-dx}),x\to+\infty;</math>
where <math>F</math> is the [[cumulative distribution function]].  The cumulant-generating function will have [[vertical asymptote]](s) at the [[infimum]] of such ''c'', if such an infimum exists, and at the [[supremum]] of such ''d'', if such a supremum exists, otherwise it will be defined for all real numbers.
 
If the [[Support (mathematics)|support]] of a random variable ''X'' has finite upper or lower bounds, then its cumulant-generating function ''y''&nbsp;=&nbsp;''g''(''t''), if it exists, approaches [[asymptote]]s whose slopes are equal to the infimum of the support (as ''t''&nbsp;→&nbsp;&minus;∞) and to the supremum of the support (as ''t''&nbsp;→&nbsp;+∞), respectively, and it lies above both of these asymptotes everywhere.  (The [[integral]]s <math>\int_{-\infty}^0 \big[\inf \mathrm{supp}X-g'(t)\big]\,dt</math> and <math>\int_{+\infty}^0 \big[\sup \mathrm{supp}X-g'(t)\big]\,dt</math> yield the [[y-intercept|''y''-intercepts]] of these asymptotes, since ''g''(0)&nbsp;=&nbsp;0.)
 
For a shift of the distribution by ''k'', <math>g_{X+k}(t)=g_X(t)+kt.</math> For a degenerate point mass at ''k'', the cgf is the straight line <math>g_k(t)=kt</math>, and more generally, <math>g_{X+Y}=g_X+g_Y</math> if and only if ''X'' and ''Y'' are independent and their cgfs exist ([[subindependence]] and the existence of second moments suffice to imply independence.<ref>{{cite journal | journal = Studia Scientiarum Mathematicarum Hungarica
| title = A note on sub-independent random variables and a class of bivariate mixtures
| volume = 49
| issue = 1
| pages = 19–25 |date = 2012-03-01
   
| first1 = G. G.| last1 = Hamedani
| first2 = Hans | last2 = Volkmer
| first3 = J. | last3 = Behboodian
| doi = 10.1556/SScMath.2011.1183
| url = http://www.akademiai.com/content/VM7942JR87GG2815
}}</ref>)
 
The [[natural exponential family]] of a distribution may be realized by shifting or translating ''g''(''t''), and adjusting it vertically so that it always passes through the origin: if ''f'' is the pdf with cgf <math>g(t)=\log M(t),</math> and <math>f|\theta</math> is its natural exponential family, then <math>f(x|\theta)=\frac1{M(\theta)}e^{\theta x} f(x),</math> and <math>g(t|\theta)=g(t+\theta)-g(\theta).</math>
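
For example, tilting the standard normal cgf ''g''(''t'') = ''t''<sup>2</sup>/2 in this way yields the cgf of a normal distribution with mean θ and unit variance; a minimal symbolic sketch in SymPy:

<syntaxhighlight lang="python">
# Sketch: exponential tilting g(t|theta) = g(t + theta) - g(theta) applied to
# the standard normal cgf g(t) = t^2/2.
import sympy as sp

t, theta = sp.symbols('t theta')
g = t**2 / 2
g_tilted = sp.expand(g.subs(t, t + theta) - g.subs(t, theta))
print(g_tilted)   # t**2/2 + t*theta: the cgf of N(theta, 1)
</syntaxhighlight>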
 
If ''g''(''t'') is finite for a range ''t''<sub>1</sub>&nbsp;<&nbsp;Re(''t'')&nbsp;<&nbsp;''t''<sub>2</sub> with ''t''<sub>1</sub>&nbsp;<&nbsp;0&nbsp;<&nbsp;''t''<sub>2</sub>, then ''g''(''t'') is analytic and infinitely differentiable for ''t''<sub>1</sub>&nbsp;<&nbsp;Re(''t'')&nbsp;<&nbsp;''t''<sub>2</sub>. Moreover, for ''t'' real with ''t''<sub>1</sub>&nbsp;<&nbsp;''t''&nbsp;<&nbsp;''t''<sub>2</sub>, ''g''(''t'') is strictly convex and ''g''<nowiki>'</nowiki>(''t'') is strictly increasing. {{Citation needed|date=March 2011}}
 
==Some properties of cumulants==
 
===Invariance and equivariance===
The first cumulant is shift-[[equivariant]]; all of the others are shift-[[invariant (mathematics)|invariant]]. This means that, if we denote by κ<sub>''n''</sub>(''X'') the ''n''th cumulant of the probability distribution of the random variable ''X'', then for any constant ''c'':
 
* <math>\kappa_1(X + c) = \kappa_1(X) + c ~ \text{ and}</math>
* <math>\kappa_n(X + c) = \kappa_n(X) ~ \text{ for } ~ n \ge 2.</math>
 
In other words, shifting a random variable (adding ''c'') shifts the first cumulant (the mean) and does not affect any of the others.
 
===Homogeneity===
The ''n''th cumulant is homogeneous of degree ''n'', i.e. if ''c'' is any constant, then
 
:<math>\kappa_n(cX)=c^n\kappa_n(X). \,</math>
 
===Additivity===
If ''X'' and ''Y'' are [[statistical independence|independent]] random variables then κ<sub>''n''</sub>(''X''&nbsp;+&nbsp;''Y'') =&nbsp;κ<sub>''n''</sub>(''X'')&nbsp;+&nbsp;κ<sub>''n''</sub>(''Y'').
 
===A negative result===
Given the results for the cumulants of the [[normal distribution]], one might hope to find families of distributions for which
κ<sub>''m''</sub>&nbsp;=&nbsp;κ<sub>''m''+1</sub>&nbsp;=&nbsp;...&nbsp;=&nbsp;0 for some ''m''&nbsp;>&nbsp;3, with the lower-order cumulants (orders 3 to ''m''&nbsp;&minus;&nbsp;1) being non-zero. There are no such distributions.<ref>Lukacs, E. (1970) ''Characteristic Functions'' (2nd Edition), Griffin, London. (Theorem 7.3.5)</ref> The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than&nbsp;2.
 
===Cumulants and moments===
The [[moment generating function]] is:
:<math>1+\sum_{n=1}^\infty \frac{\mu'_n t^n}{n!}=\exp\left(\sum_{n=1}^\infty \frac{\kappa_n t^n}{n!}\right) = \exp(g(t)).</math>
 
So the cumulant generating function is the logarithm of the moment generating function.
The first cumulant is the [[expected value]]; the second and third cumulants are respectively the second and third [[central moment]]s (the second central moment is the [[variance]]); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.
 
The cumulants are related to the [[moment (mathematics)|moments]] by the following [[recursion]] formula:
 
:<math>\kappa_n=\mu'_n-\sum_{m=1}^{n-1}{n-1 \choose m-1}\kappa_m \mu_{n-m}'.</math>
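
This recursion translates directly into code. The following sketch (plain Python; the helper name is illustrative) converts a list of raw moments into cumulants, and uses the fact, discussed further below, that the raw moments of a Poisson distribution with mean 1 are the [[Bell number]]s:

<syntaxhighlight lang="python">
# Sketch: cumulants from raw moments via the recursion
# kappa_n = mu'_n - sum_{m=1}^{n-1} C(n-1, m-1) * kappa_m * mu'_{n-m}.
from math import comb

def cumulants_from_moments(moments):
    """moments[k] holds the raw moment mu'_{k+1}."""
    kappas = []
    for n in range(1, len(moments) + 1):
        k = moments[n - 1]
        for m in range(1, n):
            k -= comb(n - 1, m - 1) * kappas[m - 1] * moments[n - m - 1]
        kappas.append(k)
    return kappas

# Raw moments of Poisson(1) are the Bell numbers 1, 2, 5, 15, 52, ...
print(cumulants_from_moments([1, 2, 5, 15, 52]))   # [1, 1, 1, 1, 1]
</syntaxhighlight>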
 
The ''n''th [[moment (mathematics)|moment]] μ&prime;<sub>''n''</sub> is an ''n''th-degree polynomial in the first ''n'' cumulants:
 
<!-- NOTE: All coefficients below are POSITIVE.  Only when one goes in the opposite direction – expressing cumulants in terms of moments – does one see some negative coefficients. -->
:<math>\mu'_1=\kappa_1\,</math>
:<math>\mu'_2=\kappa_2+\kappa_1^2\,</math>
:<math>\mu'_3=\kappa_3+3\kappa_2\kappa_1+\kappa_1^3\,</math>
:<math>\mu'_4=\kappa_4+4\kappa_3\kappa_1+3\kappa_2^2+6\kappa_2\kappa_1^2+\kappa_1^4\,</math>
:<math>\mu'_5=\kappa_5+5\kappa_4\kappa_1+10\kappa_3\kappa_2
+10\kappa_3\kappa_1^2+15\kappa_2^2\kappa_1
+10\kappa_2\kappa_1^3+\kappa_1^5\,</math>
:<math>\mu'_6=\kappa_6+6\kappa_5\kappa_1+15\kappa_4\kappa_2+15\kappa_4\kappa_1^2
+10\kappa_3^2+60\kappa_3\kappa_2\kappa_1+20\kappa_3\kappa_1^3+15\kappa_2^3
+45\kappa_2^2\kappa_1^2+15\kappa_2\kappa_1^4+\kappa_1^6.\,</math>
 
The coefficients are precisely those that occur in [[Faà di Bruno's formula]].
 
The "prime" distinguishes the moments μ&prime;<sub>''n''</sub> from the [[moment about the mean|central moments]] μ<sub>''n''</sub>.  To express the ''central'' moments as functions of the cumulants, just drop from these polynomials all terms in which κ<sub>1</sub> appears as a factor:
 
:<math>\mu_1=0\,</math>
:<math>\mu_2=\kappa_2\,</math>
:<math>\mu_3=\kappa_3\,</math>
:<math>\mu_4=\kappa_4+3\kappa_2^2\,</math>
:<math>\mu_5=\kappa_5+10\kappa_3\kappa_2\,</math>
:<math>\mu_6=\kappa_6+15\kappa_4\kappa_2+10\kappa_3^2+15\kappa_2^3.\,</math>
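
As a quick numerical illustration of the fourth identity: for a standard normal sample, κ<sub>4</sub> = 0 and κ<sub>2</sub> = 1, so the fourth central moment should be close to 3. A sketch using NumPy:

<syntaxhighlight lang="python">
# Sketch: check mu_4 = kappa_4 + 3*kappa_2^2 on a standard normal sample,
# where kappa_4 = 0 and kappa_2 = 1, so mu_4 should be about 3.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)
xc = x - x.mean()
print(np.mean(xc**4))   # close to 3, up to sampling error
</syntaxhighlight>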
 
Likewise, the ''n''th cumulant κ<sub>''n''</sub> is an ''n''th-degree polynomial in the first ''n'' non-central moments:
 
:<math>\kappa_1=\mu'_1\,</math>
:<math>\kappa_2=\mu'_2-{\mu'_1}^2\,</math>
:<math>\kappa_3=\mu'_3-3\mu'_2\mu'_1+2{\mu'_1}^3\,</math>
:<math>\kappa_4=\mu'_4-4\mu'_3\mu'_1-3{\mu'_2}^2+12\mu'_2{\mu'_1}^2-6{\mu'_1}^4\,</math>
:<math>\kappa_5=\mu'_5-5\mu'_4\mu'_1-10\mu'_3\mu'_2+20\mu'_3{\mu'_1}^2+30{\mu'_2}^2\mu'_1-60\mu'_2{\mu'_1}^3+24{\mu'_1}^5\,</math>
:<math>\kappa_6=\mu'_6-6\mu'_5\mu'_1-15\mu'_4\mu'_2+30\mu'_4{\mu'_1}^2-10{\mu'_3}^2+120\mu'_3\mu'_2\mu'_1-120\mu'_3{\mu'_1}^3+30{\mu'_2}^3-270{\mu'_2}^2{\mu'_1}^2+360\mu'_2{\mu'_1}^4-120{\mu'_1}^6\,.</math>
 
To express the cumulants κ<sub>''n''</sub> for ''n''&nbsp;>&nbsp;1 as functions of the central moments, drop from these polynomials all terms in which μ'<sub>1</sub> appears as a factor:
 
:<math>\kappa_1=\mu'_1\,</math>
:<math>\kappa_2=\mu_2\,</math>
:<math>\kappa_3=\mu_3\,</math>
:<math>\kappa_4=\mu_4-3\mu_2^2\,</math>
:<math>\kappa_5=\mu_5-10\mu_3\mu_2\,</math>
:<math>\kappa_6=\mu_6-15\mu_4\mu_2-10\mu_3^2+30\mu_2^3\,.</math>
 
===Relation to moment-generating function===
Using the (non-central) moments μ′<sub>''n''</sub>&nbsp;=&nbsp;E(''X''<sup>''n''</sup>) of ''X'', the [[moment-generating function]] can be written as
:<math>\operatorname{E}(e^{tX}) = 1 + \sum_{m=1}^\infty \mu'_m \frac{t^m}{m!}=e^{g(t)}.</math>
Expanding the logarithm as a [[formal power series]] gives
:<math>\begin{align}g(t) &= \log(\operatorname{E}(e^{tX})) = - \sum_{n=1}^\infty \frac{1}{n}\left(1-\operatorname{E}(e^{tX})\right)^n = - \sum_{n=1}^\infty \frac{1}{n}\left(-\sum_{m=1}^\infty \mu'_m \frac{t^m}{m!}\right)^n \\
&= \mu'_1 t
+ \left(\mu'_2 - {\mu'_1}^2\right) \frac{t^2}{2!}
+ \left(\mu'_3 - 3\mu'_2\mu'_1 + 2{\mu'_1}^3\right) \frac{t^3}{3!}
+ \cdots .
\end{align}</math>
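
The same expansion can be reproduced mechanically with a computer algebra system; a minimal SymPy sketch, with symbolic raw moments m<sub>1</sub>, m<sub>2</sub>, m<sub>3</sub> standing in for μ′<sub>1</sub>, μ′<sub>2</sub>, μ′<sub>3</sub>:

<syntaxhighlight lang="python">
# Sketch: cumulants as Maclaurin coefficients of log E(e^{tX}),
# using a truncated moment-generating function.
import sympy as sp

t = sp.symbols('t')
m1, m2, m3 = sp.symbols('m1 m2 m3')
M = 1 + m1*t + m2*t**2/2 + m3*t**3/6   # truncated moment-generating function
g = sp.expand(sp.series(sp.log(M), t, 0, 4).removeO())
for n in range(1, 4):
    print(n, sp.expand(g.coeff(t, n) * sp.factorial(n)))
# prints m1; m2 - m1**2; m3 - 3*m1*m2 + 2*m1**3 (up to term ordering)
</syntaxhighlight>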
 
The cumulants of a distribution are closely related to the distribution's [[Moment (mathematics)|moments]].  For example, if a [[random variable]] ''X'' admits an [[expected value]] μ = E(''X'') and a [[variance]] σ<sup>2</sup> = E((''X''&nbsp;&minus;&nbsp;μ)<sup>2</sup>), then these are the first two '''cumulants''': μ = κ<sub>1</sub> and σ<sup>2</sup> = κ<sub>2</sub>.
 
Generally, the cumulants can be extracted from the cumulant-generating function via differentiation (at zero) of ''g''(''t'').  That is, the cumulants appear as the coefficients in the [[Maclaurin series]] of ''g''(''t''):
 
:<math>\begin{align} \kappa_1 &= g'(0) = \mu'_1 = \mu, \\
                    \kappa_2 &= g''(0) = \mu'_2 - {\mu'_1}^2 = \sigma^2, \\
                              &{} \  \  \vdots \\
                    \kappa_n &= g^{(n)}(0), \\
                              &{} \  \  \vdots
      \end{align}
</math>
 
Note that expectation values are sometimes denoted by angle brackets, ''e.g.'',
 
:<math>\mu'_n = \operatorname{E}(X^n)=\langle X^n \rangle \, </math>
 
and cumulants can be denoted by angle brackets with the subscript ''c'',{{Citation needed|date=February 2011}} ''e.g.'',
 
:<math>\kappa_n = \langle X^n\rangle_c. \, </math>
 
Some writers<ref>Kendall, M.G., Stuart, A. (1969) ''The Advanced Theory of Statistics'', Volume 1 (3rd Edition). Griffin, London. (Section 3.12)</ref><ref>Lukacs, E. (1970) ''Characteristic Functions'' (2nd Edition). Griffin, London. (Page 27)</ref> prefer to define the cumulant generating function as the natural logarithm of the [[Characteristic function (probability theory)|characteristic function]], which is sometimes also called the '''''second'' characteristic function,'''<ref>Lukacs, E. (1970) ''Characteristic Functions'' (2nd Edition). Griffin, London. (Section 2.4)</ref><ref>Aapo Hyvarinen, Juha Karhunen, and Erkki Oja (2001) ''Independent Component Analysis'', [[John Wiley & Sons]]. (Section 2.7.2)</ref>
 
:<math>h(t)=\sum_{n=1}^\infty \kappa_n \frac{(it)^n}{n!}=\log(\operatorname{E} (e^{i t X}))=\mu it - \sigma^2 \frac{ t^2}{2} + \cdots.\,</math>
 
The advantage of ''h''(''t'')&mdash;in some sense the function ''g''(''t'') evaluated for (purely) imaginary arguments&mdash;is that E(''e''<sup>''itX''</sup>) will be well defined for all real values of ''t'' even when E(''e''<sup>''tX''</sup>) is not well defined for all real values of ''t'', such as can occur when there is "too much" probability that ''X'' has a large magnitude.  Although ''h''(''t'') will be well defined, it nonetheless may mimic ''g''(''t'') by not having a [[Maclaurin series]] beyond (or, rarely, even to) linear order in the argument&nbsp;''t''.  Thus, many cumulants may still not be well defined.  Nevertheless, even when ''h''(''t'') does not have a long Maclaurin series it can be used directly in analyzing and, particularly, adding random variables.  Both the [[Cauchy distribution]] (also called the Lorentzian) and [[stable distribution]] (related to the Lévy distribution) are examples of distributions for which the generating functions do not have power-series expansions.
 
===Cumulants and set-partitions===
These polynomials have a remarkable [[combinatorics|combinatorial]] interpretation: the coefficients count certain [[partition of a set|partitions of sets]].  A general form of these polynomials is
 
:<math>\mu'_n=\sum_\pi \prod_{B\in\pi}\kappa_{\left|B\right|}</math>
 
where
 
*π runs through the list of all partitions of a set of size ''n'';
 
*"''B'' <math>\in</math> π" means ''B'' is one of the "blocks" into which the set is partitioned; and
 
*|''B''| is the size of the set ''B''.
 
Thus each [[monomial]] is a constant times a product of cumulants in which the sum of the indices is ''n'' (e.g., in the term κ<sub>3</sub>&nbsp;κ<sub>2</sub><sup>2</sup>&nbsp;κ<sub>1</sub>, the sum of the indices is 3 + 2 + 2 + 1 = 8; this term appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). Each term therefore corresponds to a partition of the [[integer]] ''n''.  The ''coefficient'' of each term is the number of partitions of a set of ''n'' members that collapse to that partition of the integer when the members of the set become indistinguishable.
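
In computational terms, the displayed formula is a sum over all set partitions; a minimal sketch using SymPy's partition iterator (the function name is illustrative):

<syntaxhighlight lang="python">
# Sketch: compute the raw moment mu'_n from kappa_1..kappa_n by summing,
# over all partitions of an n-element set, the products of kappa_{|B|}.
from sympy.utilities.iterables import multiset_partitions

def moment_from_cumulants(kappas):
    n = len(kappas)
    total = 0
    for partition in multiset_partitions(list(range(n))):
        product = 1
        for block in partition:
            product *= kappas[len(block) - 1]   # kappa_{|B|}
        total += product
    return total

# Poisson(1): every cumulant is 1, so mu'_4 counts all partitions
# of a 4-element set, giving the Bell number 15.
print(moment_from_cumulants([1, 1, 1, 1]))   # 15
</syntaxhighlight>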
 
===Cumulants and combinatorics===
Further connection between cumulants and combinatorics can be found in the work of [[Gian-Carlo Rota]] and Jianhong (Jackie) Shen, where links to [[invariant theory]], [[symmetric function]]s, and binomial sequences are studied via [[umbral calculus]].<ref>G.-C. Rota and J. Shen, [http://www.sciencedirect.com/science/article/pii/S0097316599930170 "On the Combinatorics of Cumulants"], Journal of Combinatorial Theory, Series A, 91:283–304, 2000.</ref>
 
==Joint cumulants==
The '''joint cumulant''' of several random variables ''X''<sub>1</sub>,&nbsp;...,&nbsp;''X''<sub>''n''</sub> is defined by a similar cumulant generating function
 
:<math>g(t_1,t_2,\dots,t_n)=\log E(\mathrm e^{\sum_{j=1}^n t_j X_j}).</math>
 
A consequence is that
 
:<math>\kappa(X_1,\dots,X_n)
=\sum_\pi (|\pi|-1)!(-1)^{|\pi|-1}\prod_{B\in\pi}E\left(\prod_{i\in B}X_i\right)</math>
 
where π runs through the list of all partitions of {&nbsp;1,&nbsp;...,&nbsp;''n''&nbsp;}, ''B''&nbsp;runs through the list of all blocks of the partition π, and |π| is the number of parts in the partition. For example,
 
:<math>\kappa(X,Y,Z)=E(XYZ)-E(XY)E(Z)-E(XZ)E(Y)-E(YZ)E(X)+2E(X)E(Y)E(Z).\,</math>
 
If any of these random variables are identical, e.g. if ''X''&nbsp;=&nbsp;''Y'', then the same formulae apply, e.g.
 
:<math>\kappa(X,X,Z)=E(X^2Z)-2E(XZ)E(X)-E(X^2)E(Z)+2E(X)^2E(Z),\,</math>
 
although for such repeated variables there are more concise formulae.  For zero-mean random variables,
 
:<math>\kappa(X,Y,Z)=E(XYZ).\,</math>
:<math>\kappa(X,Y,Z,W) = E(XYZW)-E(XY)E(ZW)-E(XZ)E(YW)-E(XW)E(YZ).\,</math>
 
The joint cumulant of just one random variable is its expected value, and that of two random variables is their [[covariance]].  If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero. If all ''n'' random variables are the same, then the joint cumulant is the ''n''th ordinary cumulant.
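
A sample-based illustration: estimating κ(''X'',&nbsp;''Y'',&nbsp;''Z'') from data via the expansion above should give a value near zero whenever one variable is independent of the others. A minimal NumPy sketch (variable names illustrative):

<syntaxhighlight lang="python">
# Sketch: sample estimate of kappa(X, Y, Z) from its moment expansion;
# z is independent of (x, y), so the estimate should be near 0.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)
y = x + rng.normal(size=200_000)   # correlated with x
z = rng.normal(size=200_000)       # independent of both

def E(*arrays):
    return np.mean(np.prod(arrays, axis=0))

kappa_xyz = (E(x, y, z) - E(x, y)*E(z) - E(x, z)*E(y) - E(y, z)*E(x)
             + 2*E(x)*E(y)*E(z))
print(kappa_xyz)   # close to 0, up to sampling error
</syntaxhighlight>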
 
The combinatorial meaning of the expression of moments in terms of cumulants is easier to understand than that of cumulants in terms of moments:
 
:<math>E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i \in B).</math>
 
For example:
 
:<math>E(XYZ)=\kappa(X,Y,Z)+\kappa(X,Y)\kappa(Z)+\kappa(X,Z)\kappa(Y)
+\kappa(Y,Z)\kappa(X)+\kappa(X)\kappa(Y)\kappa(Z).\,</math>
 
Another important property of joint cumulants is multilinearity:
 
:<math>\kappa(X+Y,Z_1,Z_2,\dots)=\kappa(X,Z_1,Z_2,\dots)+\kappa(Y,Z_1,Z_2,\dots).\,</math>
 
Just as the second cumulant is the variance, the joint cumulant of just two random variables is the [[covariance]]. The familiar identity
 
:<math>\operatorname{var}(X+Y)=\operatorname{var}(X)
+2\operatorname{cov}(X,Y)+\operatorname{var}(Y)\,</math>
 
generalizes to cumulants:
 
:<math>\kappa_n(X+Y)=\sum_{j=0}^n {n \choose j} \kappa(\,\underbrace{X,\dots,X}_j,\underbrace{Y,\dots,Y}_{n-j}\,).\,</math>
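
For ''n'' = 2 this expansion reduces to the variance identity above, which is easy to check on a sample; a short NumPy sketch:

<syntaxhighlight lang="python">
# Sketch: the n = 2 case, var(X+Y) = var(X) + 2 cov(X, Y) + var(Y),
# checked on correlated samples.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)
lhs = np.var(x + y)
rhs = np.var(x) + 2 * np.cov(x, y, bias=True)[0, 1] + np.var(y)
print(lhs, rhs)   # equal up to floating-point rounding
</syntaxhighlight>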
 
===Conditional cumulants and the law of total cumulance===
{{Main|law of total cumulance}}
The [[law of total expectation]] and the [[law of total variance]] generalize naturally to conditional cumulants.  The case ''n'' = 3, expressed in the language of (central) [[moment (mathematics)|moments]] rather than that of cumulants, says
 
:<math>\mu_3(X)=E(\mu_3(X\mid Y))+\mu_3(E(X\mid Y))
+3\,\operatorname{cov}(E(X\mid Y),\operatorname{var}(X\mid Y)).</math>
 
In general,<ref>Brillinger, D.R. (1969)  "The Calculation of Cumulants via Conditioning",  ''Annals of the Institute of Statistical Mathematics'', 21, 215&ndash;218</ref>
 
:<math>\kappa(X_1,\dots,X_n)=\sum_\pi \kappa(\kappa(X_{\pi_1}\mid Y),\dots,\kappa(X_{\pi_b}\mid Y))</math>
 
where
 
* the sum is over all [[partition of a set|partitions]]&nbsp;π of the set {&nbsp;1,&nbsp;...,&nbsp;''n''&nbsp;} of indices, and
 
* π<sub>1</sub>,&nbsp;...,&nbsp;π<sub>''b''</sub> are all of the "blocks" of the partition π; the expression κ(''X''<sub>π<sub>''m''</sub></sub>&nbsp;|&nbsp;''Y'') denotes the joint cumulant, conditional on ''Y'', of the random variables whose indices are in that block of the partition.
 
==Relation to statistical physics==
In [[statistical physics]] many [[extensive quantities]] &ndash; that is, quantities that are proportional to the volume or size of a given system &ndash; are related to cumulants of random variables.  The deep connection is that in a large system an extensive quantity like the energy or the number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions.  The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants.
 
A system in equilibrium with a thermal bath at temperature ''T''  can occupy states of energy ''E''.  The energy ''E'' can then be considered a random variable, with a probability density given by the [[Boltzmann distribution]].  The [[partition function (statistical mechanics)|partition function]] of the system is
 
:<math>Z(\beta) = \langle\exp(-\beta E)\rangle,\,</math>
 
where [[Thermodynamic beta|''&beta;'']] =&nbsp;1/(''kT'') and ''k'' is [[Boltzmann's constant]] and the notation <math>\langle A \rangle</math> has been used rather than <math>\operatorname{E}(A)</math> for the expectation value to avoid confusion with the energy, ''E''. The [[Helmholtz free energy]] is then
 
:<math>F(\beta) = -\beta^{-1}\log Z \, </math>
 
and is clearly very closely related to the cumulant generating function for the energy.  The free energy gives access to all of the thermodynamic properties of the system via its first, second and higher-order derivatives, such as its [[internal energy]], [[entropy]], and [[specific heat]].  Because of the relationship between the free energy and the cumulant generating function, all these quantities are related to cumulants, e.g. the energy and specific heat are given by
 
:<math> E = \langle E \rangle_c</math>
:<math>C= dE/dT = k \beta^2\langle E^2 \rangle_c = k \beta^2(\langle E^2\rangle - \langle E\rangle ^2)</math>
 
and <math>\langle E^2\rangle_c</math> denotes the second cumulant of the energy.  The free energy is often also a function of other variables, such as the magnetic field or the chemical potential <math>\mu</math>, e.g.
 
:<math>\Omega=-\beta^{-1}\log(\langle \exp(-\beta E -\beta\mu N) \rangle),\,</math>
 
where ''N'' is the number of particles and <math>\Omega</math> is the grand potential.  Again the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of ''E'' and ''N''.<!--The relation between the cumulant and statistical physics is not explicitly stated here, so this material seems out of place. [Other editor:] NB: Just added this relation. Deserves expanding-->
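
As a small worked example of the relation ''C'' = ''k''β²⟨''E''²⟩<sub>c</sub>, consider a two-level system with energies 0 and 1 (in units where ''k'' = 1): the specific heat computed from the second cumulant of the energy matches a numerical derivative d''E''/d''T''. A sketch in Python (the setup is illustrative):

<syntaxhighlight lang="python">
# Sketch: two-level system with energies 0 and 1, k = 1.  The specific heat
# computed as beta^2 * Var(E) matches the numerical derivative dE/dT.
import numpy as np

def mean_and_var(beta):
    levels = np.array([0.0, 1.0])
    weights = np.exp(-beta * levels)
    probs = weights / weights.sum()      # Boltzmann probabilities
    mean = probs @ levels
    return mean, probs @ levels**2 - mean**2

beta = 2.0
T = 1.0 / beta
_, var_E = mean_and_var(beta)
C_cumulant = beta**2 * var_E             # k * beta^2 * (second cumulant of E)

h = 1e-6                                 # central difference for dE/dT
E_plus, _ = mean_and_var(1.0 / (T + h))
E_minus, _ = mean_and_var(1.0 / (T - h))
C_derivative = (E_plus - E_minus) / (2 * h)
print(C_cumulant, C_derivative)          # agree to numerical precision
</syntaxhighlight>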
 
==History==
The history of cumulants is discussed by [[Anders Hald]].<ref>
[[Anders Hald|Hald, A.]] (2000) "The early history of the cumulants and the [[Gram&ndash;Charlier series]]" ''International Statistical Review'', 68 (2): 137&ndash;153. (Reprinted in {{Cite book| editor=[http://www.stats.ox.ac.uk/~steffen/ Steffen L. Lauritzen]|title=[[Thorvald N. Thiele|Thiele]]: Pioneer in Statistics|publisher=[http://www.oup.com/uk/catalogue/?ci=9780198509721 Oxford U. P.]|year=2002|isbn=978-0-19-850972-1}})</ref><ref>
{{Cite book|first1=Anders|last1=Hald|title=A History of Mathematical Statistics from 1750 to 1930 |authorlink=Anders Hald|year=1998 |publisher=Wiley |location=New York |isbn=0-471-17912-4}}</ref>
 
Cumulants were first introduced by [[Thorvald N. Thiele]], in 1889, who called them ''semi-invariants''.<ref>H. Cramér (1946) ''Mathematical Methods of Statistics'', Princeton University Press, Section 15.10, p. 186.</ref>  They were first called ''cumulants'' in a 1932 paper<ref>[[Ronald Fisher|Fisher, R.A.]] and [[John Wishart (statistician)|Wishart, J.]] (1932) [http://plms.oxfordjournals.org/content/s2-33/1/195.full.pdf+html ''The derivation of the pattern formulae of two-way partitions from those of simpler patterns''], Proceedings of the [[London Mathematical Society]], Series 2, v. 33, pp.&nbsp;195&ndash;208 {{doi| 10.1112/plms/s2-33.1.195}}
</ref> by [[Ronald Fisher]] and [[John Wishart (statistician)|John Wishart]]. Fisher was publicly reminded of Thiele's work by Neyman, who also noted earlier published citations of Thiele that had been brought to Fisher's attention.<ref>Neyman, J. (1956): ‘Note on an Article by Sir Ronald Fisher,’ ''Journal of the Royal Statistical Society'', Series B (Methodological), 18, pp. 288–94.</ref>  [[Stephen Stigler]] has said{{Citation needed|date=January 2011}} that the name ''cumulant'' was suggested to Fisher in a letter from [[Harold Hotelling]].  In a paper published in 1929,{{Citation needed|date=January 2011}} Fisher had called them ''cumulative moment functions''. The partition function in statistical physics was introduced by [[Josiah Willard Gibbs]] in 1901.{{Citation needed|date=January 2011}} The free energy is often called Gibbs free energy. In [[statistical mechanics]], cumulants are also known as [[Ursell function]]s relating to a publication in 1927.{{Citation needed|date=January 2011}}
 
==Cumulants in generalized settings==
 
===Formal cumulants===
More generally, the cumulants of a sequence { ''m''<sub>''n''</sub> : ''n'' = 1, 2, 3, ... }, not necessarily the moments of any probability distribution, are, by definition,
 
:<math>1+\sum_{n=1}^\infty m_n t^n/n!=\exp\left(\sum_{n=1}^\infty\kappa_n t^n/n!\right) ,</math>
 
where the values of κ<sub>''n''</sub> for ''n'' = 1, 2, 3, ... are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges.  All of the difficulties of the "problem of cumulants" are absent when one works formally.  The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero.  Formal cumulants are subject to no such constraints.
 
===Bell numbers===
In [[combinatorics]], the ''n''th [[Bell number]] is the number of partitions of a set of size ''n''.  All of the [[Bell_number#Generating_function|cumulants of the sequence of Bell numbers are equal to 1]].  The Bell numbers are the [[Moment-generating_function#Examples|moments of the Poisson distribution with expected value 1]].
 
===Cumulants of a polynomial sequence of binomial type===
For any sequence { κ<sub>''n''</sub> : ''n'' = 1, 2, 3, ... } of [[scalar (mathematics)|scalars]] in a [[field (mathematics)|field]] of characteristic zero, being considered formal cumulants, there is a corresponding sequence {&nbsp;μ&prime;<sub>''n''</sub> : ''n'' =&nbsp;1,&nbsp;2,&nbsp;3,&nbsp;...&nbsp;} of formal moments, given by the polynomials above.{{clarify|reason=what polynomials|date=January 2011}}{{Citation needed|date=January 2011}}  For those polynomials, construct a [[polynomial sequence]] in the following way.  Out of the polynomial
 
:<math>
\begin{align}
\mu'_6 & =
\kappa_6+6\kappa_5\kappa_1+15\kappa_4\kappa_2+15\kappa_4\kappa_1^2
+10\kappa_3^2+60\kappa_3\kappa_2\kappa_1 \\[6pt]
& {}\quad + 20\kappa_3\kappa_1^3+15\kappa_2^3
+45\kappa_2^2\kappa_1^2+15\kappa_2\kappa_1^4+\kappa_1^6
\end{align}</math>
 
make a new polynomial in these plus one additional variable ''x'':
 
:<math>\begin{align}p_6(x) & =
\kappa_6 \,x + (6\kappa_5\kappa_1 + 15\kappa_4\kappa_2 + 10\kappa_3^2)\,x^2
+(15\kappa_4\kappa_1^2+60\kappa_3\kappa_2\kappa_1+15\kappa_2^3)\,x^3 \\[6pt]
& {}\quad +(20\kappa_3\kappa_1^3+45\kappa_2^2\kappa_1^2)\,x^4+(15\kappa_2\kappa_1^4)\,x^5 +(\kappa_1^6)\,x^6, \end{align}</math>
 
and then generalize the pattern.  The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on ''x''.  Each coefficient is a polynomial in the cumulants; these are the [[Bell polynomials]], named after [[Eric Temple Bell]].{{Citation needed|date=January 2011}}
 
This sequence of polynomials is of [[binomial type]].  In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.{{Citation needed|date=January 2011}}
 
===Free cumulants===
In the identity{{clarify|reason=how does this relate to above|date=January 2011}}
 
:<math>E(X_1\cdots X_n)=\sum_\pi\prod_{B\in\pi}\kappa(X_i : i\in B)</math>
 
one sums over ''all'' partitions of the set { 1, ..., ''n'' }.  If instead, one sums only over the [[noncrossing partition]]s, then one gets '''"free cumulants"''' rather than conventional cumulants treated above.{{clarify|reason=how does one "get" anything here ... is the LHS still fixed?|date=January 2011}}  These play a central role in [[free probability]] theory.<ref name="Novak-Śniady">{{Cite journal|last1=Novak|first1=Jonathan|last2=Śniady|first2=Piotr|year=2011|title=What Is a Free Cumulant?|journal=[[Notices of the American Mathematical Society]]|volume=58|issue=2|pages=300–301|issn=0002-9920}}</ref>  In that theory, rather than considering [[statistical independence|independence]] of [[random variable]]s, defined in terms of [[Cartesian product]]s of [[algebra over a field|algebras]] of random variables, one considers instead '''"freeness"''' of random variables, defined in terms of [[free product]]s of algebras rather than Cartesian products of algebras.{{Citation needed|date=January 2011}}
 
The ordinary cumulants of degree higher than 2 of the [[normal distribution]] are zero.  The ''free'' cumulants of degree higher than 2 of the [[Wigner semicircle distribution]] are zero.<ref name="Novak-Śniady"/> This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
 
==See also==
 
* [[Multiset#Cumulant generating function|Cumulant generating function from a multiset]]
* [[Cornish–Fisher expansion]]
* [[Edgeworth expansion]]
* [[Polykay]]
* [[k-statistic]], a minimum-variance [[unbiased estimator]] of a cumulant
 
{{Refimprove|date=May 2010}}
{{More footnotes|date=January 2011}}
 
==References==
{{Reflist}}
 
==External links==
* {{MathWorld | urlname=Cumulant | title=Cumulant}}
*[http://jeff560.tripod.com/c.html cumulant] on the [http://jeff560.tripod.com/mathword.html Earliest known uses of some of the words of mathematics]
 
{{Theory of probability distributions}}
 
[[Category:Theory of probability distributions]]
