# Bayesian inference

### Conjugate priors


In parameterized form, the prior distribution is often assumed to come from a family of distributions called conjugate priors. The usefulness of a conjugate prior is that the corresponding posterior distribution will be in the same family, and the calculation may be expressed in closed form.
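For example (a standard illustration, not specific to this article), the beta family is conjugate to the binomial likelihood: if the prior on the success probability ${\displaystyle \theta }$ is ${\displaystyle \mathrm {Beta} (\alpha _{0},\beta _{0})}$ and ${\displaystyle s}$ successes are observed in ${\displaystyle n}$ trials, the posterior is

${\displaystyle \theta \mid \mathbf {X} \sim \mathrm {Beta} (\alpha _{0}+s,\ \beta _{0}+n-s),}$

so the update amounts to adding the observed counts to the prior's parameters.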

### Estimates of parameters and predictions

It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select measurements of central tendency from the posterior distribution.

For one-dimensional problems with a continuous posterior, a unique posterior median exists in practical cases, and it is attractive as a robust estimator.[6]

If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation:

${\displaystyle {\tilde {\theta }}=\operatorname {E} [\theta ]=\int _{\theta }\theta \,p(\theta \mid \mathbf {X} ,\alpha )\,d\theta .}$

Taking a value with the greatest probability defines maximum a posteriori (MAP) estimates:

${\displaystyle \{\theta _{\text{MAP}}\}\subset \arg \max _{\theta }p(\theta \mid \mathbf {X} ,\alpha ).}$

There are examples where no maximum is attained, in which case the set of MAP estimates is empty.

There are other methods of estimation that minimize the posterior risk (expected posterior loss) with respect to a loss function; these are of interest to statistical decision theory using the sampling distribution ("frequentist statistics"). The posterior predictive distribution of a new observation ${\displaystyle {\tilde {x}}}$ (that is independent of previous observations) is determined by

${\displaystyle p({\tilde {x}}\mid \mathbf {X} ,\alpha )=\int _{\theta }p({\tilde {x}},\theta \mid \mathbf {X} ,\alpha )\,d\theta =\int _{\theta }p({\tilde {x}}\mid \theta )\,p(\theta \mid \mathbf {X} ,\alpha )\,d\theta .}$
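As a minimal numerical sketch of these three quantities, consider a Beta posterior over a Bernoulli success probability; the posterior parameters below are illustrative assumptions, not taken from the text, and a grid approximation stands in for exact integration:

```python
# Sketch: posterior mean, MAP estimate, and posterior predictive probability
# for a Beta(a, b) posterior over a Bernoulli success probability theta.
# The parameters a, b are assumed for illustration only.
a, b = 5.0, 3.0

# Unnormalised posterior density p(theta | X) on a fine grid.
grid = [i / 10000 for i in range(1, 10000)]
dens = [t ** (a - 1) * (1 - t) ** (b - 1) for t in grid]
total = sum(dens)
post = [d / total for d in dens]

# Posterior mean: E[theta | X]  (analytically a / (a + b) = 0.625).
post_mean = sum(t * p for t, p in zip(grid, post))

# MAP estimate: the grid point of maximum posterior density
# (analytically (a - 1) / (a + b - 2) = 0.667 here).
post_map = grid[max(range(len(post)), key=post.__getitem__)]

# Posterior predictive probability of a new success:
# the integral of p(x_new = 1 | theta) p(theta | X) dtheta,
# which for a Bernoulli observation coincides with the posterior mean.
pred_success = sum(t * p for t, p in zip(grid, post))

print(post_mean, post_map, pred_success)
```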

## Examples

### Probability of a hypothesis

Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?

Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let ${\displaystyle H_{1}}$ correspond to bowl #1, and ${\displaystyle H_{2}}$ to bowl #2. It is given that the bowls are identical from Fred's point of view, thus ${\displaystyle P(H_{1})=P(H_{2})}$, and the two must add up to 1, so both are equal to 0.5. The event ${\displaystyle E}$ is the observation of a plain cookie. From the contents of the bowls, we know that ${\displaystyle P(E\mid H_{1})=30/40=0.75}$ and ${\displaystyle P(E\mid H_{2})=20/40=0.5}$. Bayes' formula then yields

${\displaystyle {\begin{aligned}P(H_{1}\mid E)&={\frac {P(E\mid H_{1})\,P(H_{1})}{P(E\mid H_{1})\,P(H_{1})+P(E\mid H_{2})\,P(H_{2})}}\\&={\frac {0.75\times 0.5}{0.75\times 0.5+0.5\times 0.5}}\\&=0.6\end{aligned}}}$

Before we observed the cookie, the probability we assigned for Fred having chosen bowl #1 was the prior probability, ${\displaystyle P(H_{1})}$, which was 0.5. After observing the cookie, we must revise the probability to ${\displaystyle P(H_{1}\mid E)}$, which is 0.6.
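The arithmetic above can be checked with a few lines of code (a sketch of this example, not part of the original text):

```python
# Two-bowl cookie example: P(bowl 1 | plain cookie) via Bayes' theorem.
prior = {"bowl1": 0.5, "bowl2": 0.5}               # P(H1), P(H2)
likelihood = {"bowl1": 30 / 40, "bowl2": 20 / 40}  # P(plain | H1), P(plain | H2)

evidence = sum(prior[h] * likelihood[h] for h in prior)        # P(plain) = 0.625
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior["bowl1"])  # 0.6
```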

### Making a prediction

Figure: example results for the archaeology example; the simulation shown was generated using ${\displaystyle c=15.2}$.

An archaeologist is working at a site thought to be from the medieval period, between the 11th and the 16th centuries. However, it is uncertain exactly when in this period the site was inhabited. Fragments of pottery are found, some of which are glazed and some of which are decorated. It is expected that if the site were inhabited during the early medieval period, then 1% of the pottery would be glazed and 50% of its area decorated, whereas if it had been inhabited in the late medieval period then 81% would be glazed and 5% of its area decorated. How confident can the archaeologist be in the date of inhabitation as fragments are unearthed?

The degree of belief in the continuous variable ${\displaystyle C}$ (century) is to be calculated, with the discrete set of events ${\displaystyle \{GD,G{\bar {D}},{\bar {G}}D,{\bar {G}}{\bar {D}}\}}$ as evidence. Assuming linear variation of glaze and decoration with time, and that these variables are independent,

${\displaystyle P(E=GD\mid C=c)=(0.01+0.16(c-11))(0.5-0.09(c-11))}$
${\displaystyle P(E=G{\bar {D}}\mid C=c)=(0.01+0.16(c-11))(0.5+0.09(c-11))}$
${\displaystyle P(E={\bar {G}}D\mid C=c)=(0.99-0.16(c-11))(0.5-0.09(c-11))}$
${\displaystyle P(E={\bar {G}}{\bar {D}}\mid C=c)=(0.99-0.16(c-11))(0.5+0.09(c-11))}$

Assume a uniform prior of ${\displaystyle \textstyle f_{C}(c)=0.2}$, and that trials are independent and identically distributed. When a new fragment of type ${\displaystyle e}$ is discovered, Bayes' theorem is applied to update the degree of belief for each ${\displaystyle c}$:

${\displaystyle f_{C}(c\mid E=e)={\frac {P(E=e\mid C=c)}{\int _{11}^{16}P(E=e\mid C=c)\,f_{C}(c)\,dc}}\,f_{C}(c)}$

A computer simulation of the changing belief as 50 fragments are unearthed is shown on the graph. In the simulation, the site was inhabited around 1420, or ${\displaystyle c=15.2}$. By calculating the area under the relevant portion of the graph for 50 trials, the archaeologist can say that there is practically no chance the site was inhabited in the 11th and 12th centuries, about 1% chance that it was inhabited during the 13th century, 63% chance during the 14th century and 36% during the 15th century. Note that the Bernstein-von Mises theorem asserts here the asymptotic convergence to the "true" distribution because the probability space corresponding to the discrete set of events ${\displaystyle \{GD,G{\bar {D}},{\bar {G}}D,{\bar {G}}{\bar {D}}\}}$ is finite (see above section on asymptotic behaviour of the posterior).
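A rough reconstruction of such a simulation is sketched below; the random draws and the grid resolution are illustrative choices, so the resulting percentages will vary from run to run and need not match the figures quoted above.

```python
import random

# Likelihood P(E = e | C = c) for a fragment of type e = (glazed, decorated),
# using the linear glaze/decoration model given above.
def likelihood(e, c):
    glazed = 0.01 + 0.16 * (c - 11)
    decorated = 0.5 - 0.09 * (c - 11)
    p_g = glazed if e[0] else 1.0 - glazed
    p_d = decorated if e[1] else 1.0 - decorated
    return p_g * p_d                      # glaze and decoration independent

# Uniform prior f_C(c) = 0.2 on [11, 16], discretised at grid midpoints.
grid = [11 + 5 * (i + 0.5) / 500 for i in range(500)]
belief = [1.0 / len(grid)] * len(grid)

true_c = 15.2                             # value used to simulate the site
random.seed(0)
for _ in range(50):                       # unearth 50 fragments
    e = (random.random() < 0.01 + 0.16 * (true_c - 11),
         random.random() < 0.5 - 0.09 * (true_c - 11))
    belief = [b * likelihood(e, c) for b, c in zip(belief, grid)]
    s = sum(belief)
    belief = [b / s for b in belief]      # Bayes' theorem: renormalise

# Posterior probability that the site dates from each century.
for century in range(11, 16):
    mass = sum(b for b, c in zip(belief, grid) if century <= c < century + 1)
    print(f"{century}th century: {mass:.3f}")
```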

## In frequentist statistics and decision theory

A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every Bayesian procedure is admissible. Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.[7]

Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals.[8] For example:

• "Under some conditions, all admissible procedures are either Bayes procedures or limits of Bayes procedures (in various senses). These remarkable results, at least in their original form, are due essentially to Wald. They are useful because the property of being Bayes is easier to analyze than admissibility."[7]
• "In decision theory, a quite general method for proving admissibility consists in exhibiting a procedure as a unique Bayes solution."[9]
• "In the first chapters of this work, prior distributions with finite support and the corresponding Bayes procedures were used to establish some of the main theorems relating to the comparison of experiments. Bayes procedures with respect to more general prior distributions have played a very important role in the development of statistics, including its asymptotic theory." "There are many problems where a glance at posterior distributions, for suitable priors, yields immediately interesting information. Also, this technique can hardly be avoided in sequential analysis."[10]
• "A useful fact is that any Bayes decision rule obtained by taking a proper prior over the whole parameter space must be admissible"[11]
• "An important area of investigation in the development of admissibility ideas has been that of conventional sampling-theory procedures, and many interesting results have been obtained."[12]

### Model selection


## Applications

### Computer applications

Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s. There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques, since complex models cannot be processed in closed form by a Bayesian analysis, while a graphical model structure may allow for efficient simulation algorithms such as Gibbs sampling and other Metropolis–Hastings schemes.[13] Recently, Bayesian inference has gained popularity amongst the phylogenetics community for these reasons; a number of applications allow many demographic and evolutionary parameters to be estimated simultaneously.
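A minimal sketch of the simulation-based approach is a random-walk Metropolis–Hastings sampler, which needs only the unnormalised posterior density; the normal model and the data below are invented purely for illustration.

```python
import math
import random

# Unnormalised log posterior for an assumed model: standard normal prior on
# theta and Normal(theta, 1) observations (purely illustrative choices).
def log_posterior(theta, data):
    log_prior = -0.5 * theta ** 2
    log_lik = sum(-0.5 * (x - theta) ** 2 for x in data)
    return log_prior + log_lik

def metropolis_hastings(data, n_samples=5000, step=0.5, seed=0):
    random.seed(seed)
    theta, samples = 0.0, []
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, step)   # symmetric random walk
        log_ratio = log_posterior(proposal, data) - log_posterior(theta, data)
        if random.random() < math.exp(min(0.0, log_ratio)):  # accept/reject
            theta = proposal
        samples.append(theta)
    return samples

data = [2.1, 1.7, 2.4, 1.9]                          # hypothetical observations
draws = metropolis_hastings(data)
burned = draws[1000:]                                # discard burn-in
print(sum(burned) / len(burned))                     # Monte Carlo posterior mean
```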

As applied to statistical classification, Bayesian inference has been used in recent years to develop algorithms for identifying e-mail spam. Applications which make use of Bayesian inference for spam filtering include CRM114, DSPAM, Bogofilter, SpamAssassin, SpamBayes, and Mozilla. Spam classification is treated in more detail in the article on the naive Bayes classifier.
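As a toy sketch of the computation such filters perform (the word probabilities and prior below are made up, not drawn from any of the programs listed), Bayes' theorem combines per-word likelihoods under a naive independence assumption:

```python
# Toy naive Bayes spam score: combine per-word likelihoods under an
# independence assumption (word probabilities below are invented).
p_word_given_spam = {"free": 0.30, "offer": 0.20, "meeting": 0.01}
p_word_given_ham  = {"free": 0.02, "offer": 0.03, "meeting": 0.10}
p_spam = 0.4                                   # assumed prior spam rate

def spam_probability(words):
    like_spam, like_ham = p_spam, 1 - p_spam
    for w in words:
        like_spam *= p_word_given_spam.get(w, 1.0)
        like_ham  *= p_word_given_ham.get(w, 1.0)
    return like_spam / (like_spam + like_ham)  # Bayes' theorem

print(spam_probability(["free", "offer"]))     # high spam probability
print(spam_probability(["meeting"]))           # low spam probability
```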

Solomonoff's Inductive inference is the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. It is a formal inductive framework that combines two well-studied principles of inductive inference: Bayesian statistics and Occam’s Razor.[14] Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.[15][16]

### In the courtroom

Bayesian inference can be used by jurors to coherently accumulate the evidence for and against a defendant, and to see whether, in totality, it meets their personal threshold for 'beyond a reasonable doubt'.[17][18][19] Bayes' theorem is applied successively to all evidence presented, with the posterior from one stage becoming the prior for the next. The benefit of a Bayesian approach is that it gives the juror an unbiased, rational mechanism for combining evidence. It may be appropriate to explain Bayes' theorem to jurors in odds form, as betting odds are more widely understood than probabilities. Alternatively, a logarithmic approach, replacing multiplication with addition, might be easier for a jury to handle.
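In the standard odds formulation (stated here for completeness; it is not spelled out in the text above), the posterior odds of guilt ${\displaystyle G}$ are the prior odds multiplied by the likelihood ratio of the evidence,

${\displaystyle {\frac {P(G\mid E)}{P({\bar {G}}\mid E)}}={\frac {P(G)}{P({\bar {G}})}}\cdot {\frac {P(E\mid G)}{P(E\mid {\bar {G}})}},}$

and, if the items of evidence are conditionally independent given each hypothesis, taking logarithms turns the successive multiplications into a sum:

${\displaystyle \log {\frac {P(G\mid E_{1},\ldots ,E_{n})}{P({\bar {G}}\mid E_{1},\ldots ,E_{n})}}=\log {\frac {P(G)}{P({\bar {G}})}}+\sum _{i=1}^{n}\log {\frac {P(E_{i}\mid G)}{P(E_{i}\mid {\bar {G}})}}.}$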

If the existence of the crime is not in doubt, only the identity of the culprit, it has been suggested that the prior should be uniform over the qualifying population.[20] For example, if 1,000 people could have committed the crime, the prior probability of guilt would be 1/1000.

The use of Bayes' theorem by jurors is controversial. In the United Kingdom, a defence expert witness explained Bayes' theorem to the jury in R v Adams. The jury convicted, but the case went to appeal on the basis that no means of accumulating evidence had been provided for jurors who did not wish to use Bayes' theorem. The Court of Appeal upheld the conviction, but it also gave the opinion that "To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task."

Gardner-Medwin[21] argues that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent (akin to a frequentist p-value). He argues that if the posterior probability of guilt is to be computed by Bayes' theorem, the prior probability of guilt must be known. This will depend on the incidence of the crime, which is an unusual piece of evidence to consider in a criminal trial. Consider the following three propositions:

A: The known facts and testimony could have arisen if the defendant is guilty.
B: The known facts and testimony could have arisen if the defendant is innocent.
C: The defendant is guilty.

Gardner-Medwin argues that the jury should believe both A and not-B in order to convict. A and not-B implies the truth of C, but the reverse is not true. It is possible that B and C are both true, but in this case he argues that a jury should acquit, even though they know that they will be letting some guilty people go free. See also Lindley's paradox.

### Bayesian epistemology

Bayesian epistemology is a movement that advocates for Bayesian inference as a means of justifying the rules of inductive logic.

Karl Popper and David Miller have rejected the alleged rationality of Bayesianism, i.e. using Bayes' rule to make epistemological inferences:[22] it is prone to the same vicious circle as any other justificationist epistemology, because it presupposes what it attempts to justify. According to this view, a rational interpretation of Bayesian inference would see it merely as a probabilistic version of falsification, rejecting the belief, commonly held by Bayesians, that high likelihood achieved by a series of Bayesian updates would prove the hypothesis beyond any reasonable doubt, or even with likelihood greater than 0.

## Bayes and Bayesian inference

The problem considered by Bayes in Proposition 9 of his essay, "An Essay towards solving a Problem in the Doctrine of Chances", is the posterior distribution for the parameter a (the success rate) of the binomial distribution.

## History


The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem. However, it was Pierre-Simon Laplace (1749–1827) who introduced a general version of the theorem and used it to approach problems in celestial mechanics, medical statistics, reliability, and jurisprudence.[24] Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes[25]). After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.[25]

In the 20th century, the ideas of Laplace were further developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. In the objective or "non-informative" current, the statistical analysis depends only on the model assumed, the data analyzed,[26] and the method assigning the prior, which differs from one objective Bayesian to another. In the subjective or "informative" current, the specification of the prior depends on the belief (that is, propositions on which the analysis is prepared to act), which can summarize information from experts, previous studies, etc.

In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications.[27] Despite growth of Bayesian research, most undergraduate teaching is still based on frequentist statistics.[28] Nonetheless, Bayesian methods are widely accepted and used, such as for example in the field of machine learning.[29]

## Notes

1. Hacking (1967, Section 3, p. 316), Hacking (1988, p. 124)
2. Template:Cite web
3. van Fraassen, B. (1989) Laws and Symmetry, Oxford University Press. ISBN 0-19-824860-1
4. Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.;Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC. ISBN 978-1-4398-4095-5.
5. Larry Wasserman et alia, JASA 2000.
6. {{#invoke:citation/CS1|citation |CitationClass=book }}
7. Bickel & Doksum (2001, p. 32)
8. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
   • {{#invoke:Citation/CS1|citation |CitationClass=journal }}
   • {{#invoke:Citation/CS1|citation |CitationClass=journal }}
9. {{#invoke:citation/CS1|citation |CitationClass=book }} (see p. 309 of Chapter 6.7 "Admissibility", and pp. 17–18 of Chapter 1.8 "Complete Classes")
10. {{#invoke:citation/CS1|citation |CitationClass=book }} (From "Chapter 12 Posterior Distributions and Bayes Solutions", p. 324)
11. {{#invoke:citation/CS1|citation |CitationClass=book }} p. 432
12. {{#invoke:citation/CS1|citation |CitationClass=book }} p. 433
13. {{#invoke:citation/CS1|citation |CitationClass=book }}
14. Samuel Rathmanner and Marcus Hutter. "A Philosophical Treatise of Universal Induction". Entropy, 13(6):1076–1136, 2011.
15. "The Problem of Old Evidence", in §5 of "On Universal Prediction and Bayesian Confirmation", M. Hutter - Theoretical Computer Science, 2007 - Elsevier
16. "Raymond J. Solomonoff", Peter Gacs, Paul M. B. Vitanyi, 2011 cs.bu.edu
17. Dawid, A. P. and Mortera, J. (1996) "Coherent Analysis of Forensic Identification Evidence". Journal of the Royal Statistical Society, Series B, 58, 425–443.
18. Foreman, L. A.; Smith, A. F. M., and Evett, I. W. (1997). "Bayesian analysis of deoxyribonucleic acid profiling data in forensic identification applications (with discussion)". Journal of the Royal Statistical Society, Series A, 160, 429–469.
19. Robertson, B. and Vignaux, G. A. (1995) Interpreting Evidence: Evaluating Forensic Science in the Courtroom. John Wiley and Sons. Chichester. ISBN 978-0-471-96026-3
20. Dawid, A. P. (2001) "Bayes' Theorem and Weighing Evidence by Juries"; http://128.40.111.250/evidence/content/dawid-paper.pdf
21. Gardner-Medwin, A. (2005) "What Probability Should the Jury Address?". Significance, 2 (1), March 2005
22. David Miller: Critical Rationalism
23. Howson & Urbach (2005), Jaynes (2003)
24. {{#invoke:citation/CS1|citation |CitationClass=book }}
25. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
26. {{#invoke:citation/CS1|citation |CitationClass=book }}
27. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
28. Template:Cite paper
29. {{#invoke:citation/CS1|citation |CitationClass=book }}

## References

• Aster, Richard; Borchers, Brian, and Thurber, Clifford (2012). Parameter Estimation and Inverse Problems, Second Edition, Elsevier. ISBN 0123850487, ISBN 978-0123850485
• Box, G. E. P. and Tiao, G. C. (1973) Bayesian Inference in Statistical Analysis, Wiley, ISBN 0-471-57428-7

### Elementary

The following books are listed in ascending order of probabilistic sophistication:

• Stone, J. V. (2013). Bayes' Rule: A Tutorial Introduction to Bayesian Analysis. Sebtel Press, England.
• Bolstad, William M. (2007) Introduction to Bayesian Statistics: Second Edition, John Wiley ISBN 0-471-27020-2
• Lee, Peter M. Bayesian Statistics: An Introduction. Fourth Edition (2012), John Wiley ISBN 978-1-1183-3257-3
• DeGroot, Morris H., Optimal Statistical Decisions. Wiley Classics Library. 2004. (Originally published (1970) by McGraw-Hill.) ISBN 0-471-68029-X.
• Jaynes, E. T. (1998) Probability Theory: The Logic of Science.
• O'Hagan, A. and Forster, J. (2003) Kendall's Advanced Theory of Statistics, Volume 2B: Bayesian Inference. Arnold, New York. ISBN 0-340-52922-9.