# Behrens–Fisher problem

*Only approximate solutions are known.*

In statistics, the Behrens–Fisher problem, named after Walter Ulrich Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.

## Specification

One difficulty with discussing the Behrens–Fisher problem and its proposed solutions is that there are many different interpretations of what is meant by "the Behrens–Fisher problem". These differences involve not only what counts as a relevant solution, but even the basic statement of the context being considered.

### Context

Let X1, ..., Xn and Y1, ..., Ym be i.i.d. samples from two populations which both come from the same location-scale family of distributions. The scale parameters are assumed to be unknown and not necessarily equal, and the problem is to assess whether the location parameters can reasonably be treated as equal. Lehmann states that "the Behrens–Fisher problem" is used both for this general form of model when the family of distributions is arbitrary and for when the restriction to a normal distribution is made. While Lehmann discusses a number of approaches to the more general problem, mainly based on nonparametrics, most other sources appear to use "the Behrens–Fisher problem" to refer only to the case where the distribution is assumed to be normal: most of this article makes this assumption.

### Requirements of solutions

Solutions to the Behrens–Fisher problem have been presented from both a classical and a Bayesian inference point of view, and either type of solution would be notionally invalid judged from the other point of view. If consideration is restricted to classical statistical inference only, it is possible to seek solutions to the inference problem that are simple to apply in a practical sense, giving preference to this simplicity over any inaccuracy in the corresponding probability statements. Where exactness of the significance levels of statistical tests is required, there may be an additional requirement that the procedure should make maximum use of the statistical information in the dataset. It is well known that an exact test can be obtained by randomly discarding data from the larger dataset until the sample sizes are equal, assembling the data in pairs and taking differences, and then using an ordinary t-test to test for the mean difference being zero: clearly this would not be "optimal" in any sense.
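The exact-but-wasteful procedure described above can be sketched as follows. This is an illustrative implementation only (the function name and structure are not from any standard library): it discards surplus observations at random, pairs the remaining values, and applies a one-sample t-test to the differences.

```python
import math
import random
import statistics

def paired_difference_test(x, y, seed=0):
    """Sketch of the exact test described in the text: randomly discard
    data from the larger sample until the sizes match, pair the values,
    take differences, and compute an ordinary one-sample t statistic
    for the mean difference being zero.  Returns (t, degrees of freedom).
    """
    rng = random.Random(seed)
    n = min(len(x), len(y))
    x = rng.sample(list(x), n)  # randomly discard surplus observations
    y = rng.sample(list(y), n)
    d = [a - b for a, b in zip(x, y)]  # paired differences
    se = statistics.stdev(d) / math.sqrt(n)  # standard error of the mean difference
    t = statistics.mean(d) / se
    return t, n - 1
```

The resulting t statistic follows an exact Student's t distribution with n − 1 degrees of freedom under the null, but throws away information from the larger sample, which is why the text calls it non-optimal.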

### Welch's approximate t solution

A widely used method is that of B. L. Welch, who, like Fisher, was at University College London. The variance of the mean difference

${\bar {d}}={\bar {x}}_{1}-{\bar {x}}_{2}$

is

$s_{\bar {d}}^{2}={\frac {s_{1}^{2}}{n_{1}}}+{\frac {s_{2}^{2}}{n_{2}}}.$

Welch (1938) approximated the distribution of $s_{\bar {d}}^{2}$ by the Pearson Type III distribution (a scaled chi-squared distribution) whose first two moments agree with those of $s_{\bar {d}}^{2}$. This yields the following number of degrees of freedom (d.f.), which is generally non-integer:

$\nu \approx {(\gamma _{1}+\gamma _{2})^{2} \over \gamma _{1}^{2}/(n_{1}-1)+\gamma _{2}^{2}/(n_{2}-1)}\quad {\text{ where }}\gamma _{i}=\sigma _{i}^{2}/n_{i}.$

Under the null hypothesis of equal expectations, $\mu _{1}=\mu _{2}$, the distribution of the Behrens–Fisher statistic T, which also depends on the variance ratio $\sigma _{1}^{2}/\sigma _{2}^{2}$, could now be approximated by Student's t distribution with these ν degrees of freedom. But this ν contains the population variances $\sigma _{i}^{2}$, which are unknown. The following estimate replaces the population variances with the sample variances:

${\hat {\nu }}\approx {(g_{1}+g_{2})^{2} \over g_{1}^{2}/(n_{1}-1)+g_{2}^{2}/(n_{2}-1)}\quad {\text{ where }}g_{i}=s_{i}^{2}/n_{i}.$

This ${\hat {\nu }}$ is a random variable, and a t distribution with a random number of degrees of freedom does not exist. Nevertheless, the Behrens–Fisher statistic T can be compared with the corresponding quantile of Student's t distribution with this estimated number of degrees of freedom, ${\hat {\nu }}$, which is generally non-integer. In this way, the boundary between the acceptance and rejection regions of the test statistic T is calculated from the empirical variances $s_{i}^{2}$, in a way that is a smooth function of them.

This method also does not give exactly the nominal rate, but is generally not too far off. However, if the population variances are equal, or if the samples are rather small and the population variances can be assumed to be approximately equal, it is more accurate to use Student's t-test.
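Welch's procedure above can be sketched directly from the formulas: compute the statistic from the sample means and the terms $g_{i}=s_{i}^{2}/n_{i}$, then estimate the degrees of freedom ${\hat {\nu }}$ from the same quantities. This is a minimal illustration (the function name is not standard; library routines such as SciPy's `ttest_ind` with `equal_var=False` implement the same idea).

```python
import math
import statistics

def welch_t(x, y):
    """Welch's approximate t solution: returns the test statistic and the
    estimated (generally non-integer) degrees of freedom nu-hat, computed
    from g_i = s_i^2 / n_i as in the Welch-Satterthwaite formula."""
    n1, n2 = len(x), len(y)
    g1 = statistics.variance(x) / n1  # g_1 = s_1^2 / n_1
    g2 = statistics.variance(y) / n2  # g_2 = s_2^2 / n_2
    # t = (mean difference) / sqrt(estimated variance of the mean difference)
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(g1 + g2)
    nu_hat = (g1 + g2) ** 2 / (g1 ** 2 / (n1 - 1) + g2 ** 2 / (n2 - 1))
    return t, nu_hat
```

The statistic would then be compared against the quantile of Student's t distribution with `nu_hat` degrees of freedom; when the two samples have equal sizes and equal sample variances, `nu_hat` reduces to the pooled value $n_{1}+n_{2}-2$.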

### Other approaches

A number of different approaches to the general problem have been proposed, some of which claim to "solve" some version of the problem. Among these are:

• that of Chapman in 1950,
• that of Prokof’yev and Shishkin in 1974,
• that of Dudewicz and Ahmed in 1998.

In Dudewicz’s comparison of selected methods, the Dudewicz–Ahmed procedure was recommended for practical use.

## Variants

A minor variant of the Behrens–Fisher problem has been studied. In this instance the problem is, assuming that the two population-means are in fact the same, to make inferences about the common mean: for example, one could require a confidence interval for the common mean.

## Generalisations

The immediate generalisation of the problem involves multivariate normal distributions with unknown covariance matrices, and is known as the Multivariate Behrens–Fisher problem.