In [[statistics]], a '''binomial proportion confidence interval''' is a [[confidence interval]] for a proportion in a [[statistical population]]. It uses the proportion estimated in a [[statistical sample]] and allows for [[sampling error]]. There are several formulas for a binomial confidence interval, but all of them rely on the assumption of a [[binomial distribution]]. In general, a binomial distribution applies when an experiment is repeated a fixed number of times, each trial of the experiment has two possible outcomes (labeled arbitrarily success and failure), the probability of success is the same for each trial, and the trials are [[statistically independent]].
 
A simple example of a binomial distribution is the set of various possible outcomes, and their probabilities, for the number of heads observed when a (not necessarily fair) coin is flipped ten times. The observed binomial proportion is the fraction of the flips that turn out to be heads. Given this observed proportion, a confidence interval for the coin's underlying probability of heads is a range of possible proportions that may contain the true proportion. A 95% confidence interval for the proportion, for instance, will contain the true proportion 95% of the times that the procedure for constructing the confidence interval is employed.
 
There are several ways to compute a confidence interval for a binomial proportion. The normal approximation interval is the simplest formula, and the one introduced in most basic statistics classes and textbooks. This formula, however, is based on an approximation that does not always work well. Several competing formulas are available that perform better, especially in situations with a small sample size or a proportion very close to zero or one. The choice of interval depends on the trade-off between simplicity and ease of explanation on the one hand and better accuracy on the other.
 
==Normal approximation interval==
 
The most commonly used formula for a binomial confidence interval relies on approximating the distribution of error about a binomially-distributed observation, <math>\hat p</math>, with a [[normal distribution]]. Although this error distribution is frequently confused with a [[binomial distribution]], it is not itself binomial,<ref name=Wallis2013>{{Cite journal
| last1 = Wallis
| first1 = Sean A.
| title = Binomial confidence intervals and contingency tests: mathematical fundamentals and the evaluation of alternative methods
| journal = Journal of Quantitative Linguistics
| volume = 20
| issue = 3
| pages = 178–208
| year = 2013
| doi = 10.1080/09296174.2013.799918
| url = http://www.ucl.ac.uk/english-usage/staff/sean/resources/binomialpoisson.pdf
}}</ref> and hence other methods (below) are preferred.
 
The approximation is usually justified by the [[central limit theorem]]. The formula is
 
: <math>\hat p \pm z \sqrt{\frac{1}{n}\hat p \left(1 - \hat p \right)}</math>
 
where <math>\hat p</math> is the proportion of successes in a [[Bernoulli trial]] process estimated from the statistical sample, <math>z</math> is the <math>\scriptstyle 1 - \frac{1}{2}\alpha</math> [[Percentile rank|percentile]] of a [[standard normal distribution]], <math>\alpha</math> is the error probability and ''n'' is the sample size. For example, for a 95% confidence level the error (<math>\alpha</math>) is 5%, so <math>\scriptstyle 1 - \frac{1}{2}\alpha</math> = 0.975 and <math>z</math> = 1.96.
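
As an illustration, the formula above can be computed directly. The following is a minimal Python sketch (SciPy is assumed for the normal quantile; the function name is illustrative, not taken from the sources cited here):

<syntaxhighlight lang="python">
from scipy.stats import norm

def wald_interval(successes, n, confidence=0.95):
    """Normal-approximation (Wald) interval for a binomial proportion."""
    p_hat = successes / n
    z = norm.ppf(1 - (1 - confidence) / 2)  # the 1 - alpha/2 quantile, e.g. 1.96
    half_width = z * (p_hat * (1 - p_hat) / n) ** 0.5
    return p_hat - half_width, p_hat + half_width

print(wald_interval(52, 100))  # approximately (0.422, 0.618)
</syntaxhighlight>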
 
The [[central limit theorem]] applies poorly to this distribution when the sample size is less than 30 or when the proportion is close to 0 or 1. The normal approximation fails entirely when the sample proportion is exactly zero or exactly one. A frequently cited rule of thumb is that the normal approximation is legitimate as long as ''np''&nbsp;>&nbsp;5 and ''n''(1&nbsp;&minus;&nbsp;''p'')&nbsp;>&nbsp;5; see Brown et al. 2001.<ref name=Brown2001>
{{Cite journal
| last1 = Brown
| first1 = Lawrence D.
| authorlink = Lawrence D. Brown
| last2 = Cai
| first2 = T. Tony
| authorlink2 = T. Tony Cai
| last3 = DasGupta
| first3 = Anirban
| title = Interval Estimation for a Binomial Proportion
| journal = Statistical Science
| volume = 16
| issue = 2
| pages = 101–133
| year = 2001
| doi = 10.1214/ss/1009213286
| mr = 1861069
| zbl = 02068924
}}</ref> In practice there is little reason other than simplicity to use this method rather than one of the other, better performing, methods.
 
An important theoretical derivation of this confidence interval involves the inversion of a hypothesis test. Under this formulation, the confidence interval represents those values of the population parameter that would have large ''p''-values if they were tested as a hypothesized population proportion. The collection of values, <math>\theta</math>, for which the normal approximation is valid can be represented as
 
: <math>\left\{ \theta \bigg| y \le \frac{\hat p - \theta}{\sqrt{\frac{1}{n}\hat p \left(1 - \hat p\right)}} \le z \right\}</math>
 
where <math>y</math> is the <math>\scriptstyle \frac{1}{2}\alpha</math> [[Percentile rank|percentile]] of a [[standard normal distribution]].
 
Since the test in the middle of the inequality is a [[Wald test]], the normal approximation interval is sometimes called the [[Abraham Wald|Wald]] interval, although [[Pierre-Simon Laplace]] had already described it in 1812 in ''Théorie analytique des probabilités'' (page 283).
 
==Wilson score interval==
 
The Wilson interval is an improvement (the actual [[coverage probability]] is closer to the nominal value) over the normal approximation interval and was first developed by [[Edwin Bidwell Wilson]] (1927).<ref name=Wilson1927>
{{Cite journal
| last1 = Wilson
| first1 = E. B.
| authorlink1 = Edwin Bidwell Wilson
| title = Probable inference, the law of succession, and statistical inference
| journal = Journal of the American Statistical Association
| volume = 22
| pages = 209–212
| year = 1927
|jstor = 2276774
}}</ref>
 
:<math>
  \frac{1}{1 + \frac{1}{n} z^2}
  \left[
    \hat p + \frac{1}{2n} z^2 \pm
    z \sqrt{
      \frac{1}{n}\hat p \left(1 - \hat p\right) +
      \frac{1}{4n^2}z^2
    }
  \right]
</math>
 
This interval has good properties even for a small number of trials and/or an extreme probability.
 
These properties follow from its derivation from the binomial model. Consider a binomial population probability <math>P</math>, whose distribution may be approximated by the [[normal distribution]] with standard deviation <math>\scriptstyle \sqrt{\frac{1}{n}P \left(1 - P \right)}</math>. However, the distribution of true values about an observation is not binomial. Rather, an observation <math>\hat p</math> will have an error interval with a lower bound equal to <math>P</math> when <math>\hat p</math> is at the equivalent normal interval upper bound (i.e. for the same <math>\alpha</math>) of <math>P</math>, and vice versa.<ref name=Wallis2013/>
 
The Wilson interval can also be derived from [[Pearson's chi-squared test]] with two categories. The resulting interval
 
:<math>
  \left\{ \theta \bigg| y \le
  \frac{\hat p - \theta}{\sqrt{\frac{1}{n} \theta \left({1 - \theta} \right)}} \le
  z \right\}
</math>
 
can then be solved for <math>\theta</math> to produce the Wilson interval. The test in the middle of the inequality is a [[score test]], so the Wilson interval is sometimes called the Wilson score interval.
 
The center of the Wilson interval
 
:<math>
  \frac
    {\hat p + \frac{1}{2n} z^2}
    {    1 + \frac{1}{n}  z^2}
</math>
 
can be shown to be a weighted average of <math>\hat p = \scriptstyle \frac{X}{n}</math> and <math>\scriptstyle \frac{1}{2}</math>, with <math>\hat p</math> receiving greater weight as the sample size increases.  For the 95% interval, the Wilson interval is nearly identical to the normal approximation interval using <math>\tilde p \,=\, \scriptstyle \frac{X + 2}{n + 4}</math> instead of <math>\hat p</math>.
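
A Python sketch of the Wilson interval, transcribing the formula above (SciPy assumed; the function name is illustrative):

<syntaxhighlight lang="python">
from scipy.stats import norm

def wilson_interval(successes, n, confidence=0.95):
    """Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    z = norm.ppf(1 - (1 - confidence) / 2)
    center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half_width = (z / (1 + z**2 / n)) * (p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) ** 0.5
    return center - half_width, center + half_width

print(wilson_interval(52, 100))  # approximately (0.423, 0.615), pulled slightly toward 1/2
</syntaxhighlight>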
 
===Wilson score interval with continuity correction===
 
The Wilson interval may be modified by employing a [[continuity correction]], in order to align the ''minimum'' [[coverage probability]] (rather than the ''average'') with the nominal value.
 
Just as the Wilson interval mirrors [[Pearson's chi-squared test]], the Wilson interval with continuity correction mirrors the equivalent [[Yates's correction for continuity|Yates' chi-squared test]].
 
The following formulae for the lower and upper bounds of the Wilson score interval with continuity correction <math>( w^- , w^+ )</math> are derived from Newcombe (1998).<ref name=New/>
 
:<math>
  w^- = \operatorname{max}\left\{0, \frac { 2n\hat p + z^2 - [z \sqrt{z^2 - \frac{1}{n} + 4n\hat p(1 -\hat p)+(4\hat p - 2)}+1] }
              { 2(n+z^2) }\right\}
</math>
:<math>
  w^+ = \operatorname{min}\left\{1, \frac { 2n\hat p + z^2 + [z \sqrt{z^2 - \frac{1}{n} + 4n\hat p(1 -\hat p)-(4\hat p - 2)}+1] }
              { 2(n+z^2) }\right\}
</math>
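
The two bounds translate directly into code. A Python sketch transcribing the formulas above (SciPy assumed; the clamping mirrors the max and min):

<syntaxhighlight lang="python">
from scipy.stats import norm

def wilson_cc_interval(successes, n, confidence=0.95):
    """Wilson score interval with continuity correction (Newcombe 1998)."""
    p = successes / n
    z = norm.ppf(1 - (1 - confidence) / 2)
    denom = 2 * (n + z**2)
    lower = (2 * n * p + z**2
             - (z * (z**2 - 1 / n + 4 * n * p * (1 - p) + (4 * p - 2)) ** 0.5 + 1)) / denom
    upper = (2 * n * p + z**2
             + (z * (z**2 - 1 / n + 4 * n * p * (1 - p) - (4 * p - 2)) ** 0.5 + 1)) / denom
    return max(0.0, lower), min(1.0, upper)
</syntaxhighlight>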
 
==Jeffreys interval==
The ''Jeffreys interval'' has a Bayesian derivation, but it has good frequentist properties.  In particular, it has coverage properties that are similar to the Wilson interval, but it is one of the few intervals with the advantage of being ''equal-tailed'' (e.g., for a 95% confidence interval, the probabilities of the interval lying above or below the true value are both close to 2.5%).  In contrast, the Wilson interval has a systematic bias such that it is centred too close to p = 0.5.<ref>Cai TT. One-sided confidence intervals in discrete distributions. Journal of Statistical Planning and Inference 2005;131:63-88.</ref>
 
The Jeffreys interval is the Bayesian [[credible interval]] obtained when using the [[non-informative prior|non-informative]] [[Jeffreys prior]] for the binomial proportion {{math|''p''}}. The [[Jeffreys prior#Bernoulli trial|Jeffreys prior for this problem]] is a [[Beta distribution]] with parameters {{math|(1/2,&nbsp;1/2)}}. After observing {{math|''x''}} successes in {{math|''n''}} trials, the [[posterior distribution]] for {{math|''p''}} is a Beta distribution with parameters {{math|(''x''&nbsp;+&nbsp;1/2,&nbsp;''n''&nbsp;–&nbsp;''x''&nbsp;+&nbsp;1/2)}}.
 
When {{math|''x''&nbsp;≠&nbsp;0}} and {{math|''x''&nbsp;≠&nbsp;''n''}}, the Jeffreys interval is taken to be the {{math|100(1&nbsp;–&nbsp;''α'')%}} equal-tailed posterior probability interval, i.e., the {{math|''α''&thinsp;/&thinsp;2}} and {{math|1&nbsp;–&nbsp;''α''&thinsp;/&thinsp;2}} quantiles of a Beta distribution with parameters {{math|(''x''&nbsp;+&nbsp;1/2,&nbsp;''n''&nbsp;–&nbsp;''x''&nbsp;+&nbsp;1/2)}}. These quantiles need to be computed numerically, although this is reasonably simple with modern statistical software.
 
In order to avoid the coverage probability tending to zero when {{math|''p''&nbsp;→&nbsp;0}} or {{math|1}}, when {{math|''x''&nbsp;{{=}}&nbsp;0}} the upper limit is calculated as before but the lower limit is set to 0, and when {{math|''x''&nbsp;{{=}}&nbsp;''n''}} the lower limit is calculated as before but the upper limit is set to 1.<ref name=Brown2001/>
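
Because the endpoints are quantiles of a beta distribution, the Jeffreys interval is simple to compute with standard software. A Python sketch (SciPy assumed; the function name is illustrative), including the boundary rules just described:

<syntaxhighlight lang="python">
from scipy.stats import beta

def jeffreys_interval(successes, n, confidence=0.95):
    """Equal-tailed Jeffreys interval from the Beta(x + 1/2, n - x + 1/2) posterior."""
    alpha = 1 - confidence
    a, b = successes + 0.5, n - successes + 0.5
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, a, b)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, a, b)
    return lower, upper
</syntaxhighlight>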
 
==Clopper-Pearson interval==
 
The Clopper-Pearson interval is an early and very common method for calculating binomial confidence intervals.<ref>
{{Cite journal
| last1 = Clopper
| first1 = C.
| last2 = Pearson
| first2 = E. S.
| authorlink2 = Egon Pearson
| title = The use of confidence or fiducial limits illustrated in the case of the binomial
| journal = Biometrika
| volume = 26
| pages = 404–413
| year = 1934
| doi = 10.1093/biomet/26.4.404
}}</ref> This method is often called an 'exact' method because it is based on the cumulative probabilities of the binomial distribution (i.e., exactly the correct distribution rather than an approximation). However, the intervals are not exact in the way one might assume: the discontinuous nature of the binomial distribution precludes any interval with exact coverage for all population proportions. The Clopper-Pearson interval can be written as
 
:<math>
  \left\{ \theta \Big| P \left[ \mathrm{Bin}\left( n; \theta \right) \le X \right] > \frac{\alpha}{2} \right\} \bigcap
  \left\{ \theta \Big| P \left[ \mathrm{Bin}\left( n; \theta \right) \ge X \right] > \frac{\alpha}{2} \right\}
</math>
 
where ''X'' is the number of successes observed in the sample and Bin(''n'';&nbsp;θ) is a binomial random variable with ''n'' trials and probability of success θ.
 
Because of a relationship between the cumulative binomial distribution and the [[beta distribution]], the Clopper-Pearson interval is sometimes presented in an alternate format that uses quantiles from the beta distribution.
 
:<math>B\left(\frac{\alpha}{2}; x, n - x + 1\right) < \theta <  B\left(1 - \frac{\alpha}{2}; x + 1, n - x\right)</math>
 
where ''x'' is the number of successes, ''n'' is the number of trials, and ''B''(''p''; ''v'',''w'') is the ''p''th [[Cumulative distribution function#Inverse distribution function .28quantile function.29|quantile]] from a beta distribution with shape parameters ''v'' and ''w''.  The beta distribution is, in turn, related to the [[F-distribution]] so a third formulation of the Clopper-Pearson interval can be written using F percentiles:
 
:<math>
  \left( 1 + \frac{n - x + 1}{x F\left[\frac{\alpha}{2}; 2x, 2(n - x + 1)\right]} \right)^{-1} <
  \theta <
  \left( 1 + \frac{n - x}{\left[x + 1\right] F\left[1 - \frac{1}{2}\alpha; 2(x + 1), 2(n - x)\right]} \right)^{-1}
</math>

where ''x'' is the number of successes, ''n'' is the number of trials, and ''F''(''c''; ''d''<sub>1</sub>, ''d''<sub>2</sub>) is the ''c'' quantile from an F-distribution with ''d''<sub>1</sub> and ''d''<sub>2</sub> degrees of freedom.<ref name=AgrestiCoull1998 />
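
The beta-quantile formulation above lends itself to direct computation. A Python sketch (SciPy assumed; the cases ''x'' = 0 and ''x'' = ''n'' are handled explicitly, since the corresponding beta quantile is undefined there and the bound is taken to be 0 or 1):

<syntaxhighlight lang="python">
from scipy.stats import beta

def clopper_pearson_interval(successes, n, confidence=0.95):
    """'Exact' Clopper-Pearson interval via beta-distribution quantiles."""
    alpha = 1 - confidence
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper
</syntaxhighlight>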
 
The Clopper-Pearson interval is an exact interval in the sense that it is based directly on the binomial distribution rather than on any approximation to it. This interval never has less than the nominal coverage for any population proportion, but as a consequence it is usually conservative. For example, the true coverage rate of a 95% Clopper-Pearson interval may be well above 95%, depending on ''n'' and ''θ''; the interval may therefore be wider than it needs to be to achieve 95% confidence. In contrast, other confidence intervals may be narrower than their nominal confidence width; e.g., the normal approximation (or "standard") interval, Wilson interval,<ref name=Wilson1927/> Agresti-Coull interval,<ref name=AgrestiCoull1998>
{{Cite journal
| last1 = Agresti
| first1 = Alan
| last2 = Coull
| first2 = Brent A.
| title = Approximate is better than 'exact' for interval estimation of binomial proportions
| journal = The American Statistician
| volume = 52
| pages = 119–126
| year = 1998
| mr = 1628435
| jstor = 2685469
| doi=10.2307/2685469
}}</ref> etc., with a nominal coverage of 95% may in fact cover less than 95%.<ref name=Brown2001/>
 
==Agresti-Coull interval==
 
The Agresti-Coull interval is another approximate binomial confidence interval.<ref name=AgrestiCoull1998/>
 
Given <math>X</math> successes in <math>n</math> trials, define
:<math>\tilde{n} = n + z^2</math>
 
and
:<math>\tilde{p} = \frac{1}{\tilde{n}}\left(X + \frac{1}{2}z^2\right)</math>
 
Then, a confidence interval for <math>p</math> is given by
 
:<math>
  \tilde{p} \pm z
    \sqrt{\frac{1}{\tilde{n}}\tilde{p}\left(1 - \tilde{p} \right)}
</math>
 
where <math>z</math> is the <math>1 - \frac{1}{2}\alpha</math> percentile of a standard normal distribution, as before. For example, for a 95% confidence interval, let <math>\alpha = 0.05</math>, so <math>z</math> = 1.96 and <math>z^2</math> = 3.84. Using 2 instead of 1.96 for <math>z</math> gives the "add 2 successes and 2 failures" interval of Agresti and Coull.<ref name=AgrestiCoull1998/>
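
A Python sketch of the construction (SciPy assumed; the function name is illustrative): first form the adjusted counts, then apply the normal-approximation formula to them.

<syntaxhighlight lang="python">
from scipy.stats import norm

def agresti_coull_interval(successes, n, confidence=0.95):
    """Agresti-Coull interval: a Wald-type interval around the adjusted estimate."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    n_tilde = n + z**2
    p_tilde = (successes + z**2 / 2) / n_tilde
    half_width = z * (p_tilde * (1 - p_tilde) / n_tilde) ** 0.5
    return p_tilde - half_width, p_tilde + half_width
</syntaxhighlight>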
 
==Arc sine transformation==
 
Let ''X'' be the number of successes in ''n'' trials and let ''p'' = ''X''/''n''. The variance of ''p'' is
 
: <math> \operatorname{var}(p) = \frac{ p ( 1 - p ) }{ n } </math>
 
Applying the arcsine square-root transform, the variance of <math>\arcsin \sqrt{p}</math> is approximately<ref name=Shao1998>Shao J (1998) Mathematical statistics. Springer. New York, New York, USA</ref>
 
: <math> \operatorname{var}\left( \arcsin \sqrt{p} \right) \approx \frac{ \operatorname{var}( p ) }{ 4 p( 1 - p ) } = \frac{ p( 1 - p ) }{ 4 n p( 1 - p ) } = \frac{ 1 }{ 4 n } </math>
 
This variance-stabilising property makes the transformation useful for constructing approximate confidence intervals for ''p'', but the approach is problematic when ''p'' is close to 0 or 1.
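
One common way to use this property (a standard construction, though not spelled out above) is to form a symmetric normal interval on the arcsine scale and transform it back. A Python sketch (SciPy assumed):

<syntaxhighlight lang="python">
import math
from scipy.stats import norm

def arcsine_interval(successes, n, confidence=0.95):
    """Approximate interval based on the variance-stabilising arcsine transformation."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    t = math.asin(math.sqrt(successes / n))  # observed proportion on the arcsine scale
    half_width = z / (2 * math.sqrt(n))      # standard deviation on that scale is about 1/(2*sqrt(n))
    lower = math.sin(max(0.0, t - half_width)) ** 2
    upper = math.sin(min(math.pi / 2, t + half_width)) ** 2
    return lower, upper
</syntaxhighlight>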
 
==t<sub>a</sub> transform==
 
Let ''p'' be the proportion of successes.  For 0 ≤ ''α'' ≤ 2
 
: <math> t_{ \alpha } = \log\left( \frac{ p^{ \alpha } }{ ( 1 - p )^{ 2 - \alpha } } \right) = \alpha \log( p ) - ( 2 - \alpha )\log( 1 - p ) </math>
 
This family is a generalisation of the logit transform, which is the special case ''α'' = 1, and can be used to transform a distribution of proportions to an approximately [[normal distribution]]. The parameter ''α'' has to be estimated from the data set.
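
A small Python illustration of the transform, checking that ''α'' = 1 recovers the logit:

<syntaxhighlight lang="python">
import math

def t_alpha(p, a):
    """Generalised t_a transform of a proportion p; a = 1 gives the logit."""
    return a * math.log(p) - (2 - a) * math.log(1 - p)

print(t_alpha(0.3, 1.0))    # equals logit(0.3)
print(math.log(0.3 / 0.7))  # same value: log(p / (1 - p))
</syntaxhighlight>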
 
==Special cases==
In medicine, the [[Rule of three (medicine)|rule of three]] is used to provide a simple way of stating an approximate 95% confidence interval for ''p'' in the special case that the event of interest has not been observed (<math>\hat p = 0</math>).<ref>Professor Mean (2008) [http://www.childrensmercy.org/stats/size/zeroevents.aspx  "Confidence interval with zero events"], The Children's Mercy Hospital. (website: "Ask Professor Mean" at [http://www.childrensmercy.org/stats/ Stats topics or Medical Research])</ref> The interval is {{nowrap|(0,&nbsp;3/''n'')}}; for example, if no adverse events are observed among 30 subjects, the approximate 95% interval for the event probability is {{nowrap|(0,&nbsp;0.1)}}.
 
==Comparison of different intervals==
 
There are several research papers that compare these and other confidence intervals for the binomial proportion.<ref name=Wallis2013/><ref name=New/><ref name=Rei/><ref name=SL/> A good{{By whom|date=December 2013}} starting point is Agresti and Coull (1998)<ref name=AgrestiCoull1998/> or Ross (2003),<ref name=Ross/> both of which point out that exact methods such as the Clopper-Pearson interval may not work as well as certain approximations.
 
Many of these intervals can be calculated in [[R (programming language)|R]] using the [http://cran.r-project.org/web/packages/binom/index.html binom] package.
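
In Python, several of the intervals discussed above are implemented in the statsmodels package (assuming a recent version of statsmodels; 'beta' is its name for the Clopper-Pearson method):

<syntaxhighlight lang="python">
from statsmodels.stats.proportion import proportion_confint

# 52 successes out of 100 trials, 95% confidence level.
for method in ["normal", "wilson", "jeffreys", "beta", "agresti_coull"]:
    print(method, proportion_confint(52, 100, alpha=0.05, method=method))
</syntaxhighlight>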
 
==See also==
*[[Coverage probability]]
*[[Estimation theory]]
 
==References==
{{Reflist|refs=
 
<ref name=New>{{Cite journal | last1 = Newcombe | first1 = R. G. | title = Two-sided confidence intervals for the single proportion: comparison of seven methods | journal = Statistics in Medicine | volume = 17 | issue = 8 | pages = 857–872 | year = 1998 | pmid = 9595616 }}</ref>
<ref name=Rei>Reiczigel J. (2003) [http://www.zoologia.hu/qp/Reiczigel_conf_int.pdf Confidence intervals for the binomial parameter: some new considerations]. ''Statistics in Medicine,'' 22, 611&ndash;621.</ref>
<ref name=SL>Sauro J., Lewis J.R. (2005) [http://www.measuringusability.com/papers/sauro-lewisHFES.pdf "Comparison of Wald, Adj-Wald, Exact and Wilson intervals Calculator"]. ''Proceedings of the Human Factors and Ergonomics Society, 49th Annual Meeting (HFES 2005)'', Orlando, FL, pp. 2100–2104.</ref>
<ref name=Ross>{{Cite journal
| last1 = Ross  | first1 = T. D.
| title = Accurate confidence intervals for binomial proportion and Poisson rate estimation
| journal = Computers in Biology and Medicine
| volume = 33
| pages = 509–531
| year = 2003
| doi = 10.1016/S0010-4825(03)00019-2 }}
</ref>
 
}}
 
{{DEFAULTSORT:Binomial Proportion Confidence Interval}}
[[Category:Statistical theory]]
[[Category:Statistical approximations]]
[[Category:Statistical intervals]]
