# Fisher's method

In statistics, **Fisher's method**, developed by and named for Ronald Fisher, is a data fusion or "meta-analysis" (analysis of analyses) technique for combining the results from several independent tests bearing upon the same overall hypothesis (*H*_{0}) as if in a single large test.

Fisher's method combines extreme-value probabilities from each test, P(results at least as extreme, assuming *H*_{0} true), called "p-values", into one test statistic (*X*^{2}) having a chi-square distribution using the formula

*X*^{2}_{2*k*} = −2 ∑_{i=1}^{k} ln(*p*_{i}),

where *p*_{i} is the p-value of the *i*-th test. Under *H*_{0}, each p-value is uniformly distributed on (0, 1), so −2 ln(*p*_{i}) follows a chi-square distribution with two degrees of freedom, and the sum of *k* such independent terms is chi-square with 2*k* degrees of freedom.

The p-value for *X*^{2} itself can then be interpolated from a chi-square table using 2*k* "degrees of freedom", where *k* is the number of tests being combined. As in any similar test, *H*_{0} is rejected for small p-values, usually < 0.05.
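The procedure above can be sketched in a few lines of Python. This is an illustrative implementation, not code from Fisher's paper; it uses only the standard library, exploiting the closed-form chi-square survival function available for even degrees of freedom (2*k* is always even here): P(*X*^{2} > *x*) = exp(−*x*/2) ∑_{j=0}^{k−1} (*x*/2)^{j}/*j*!.

```python
import math

def fisher_method(p_values):
    """Combine independent p-values using Fisher's method.

    Returns (x2, combined_p), where x2 = -2 * sum(ln p_i) follows a
    chi-square distribution with 2k degrees of freedom under H0.
    """
    k = len(p_values)
    x2 = -2.0 * sum(math.log(p) for p in p_values)
    # Chi-square survival function for even df = 2k has a closed form:
    # P(X > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = x2 / 2.0
    term, tail = 1.0, 1.0
    for j in range(1, k):
        term *= half / j
        tail += term
    combined_p = math.exp(-half) * tail
    return x2, combined_p

x2, p = fisher_method([0.01, 0.2, 0.3])
```

As a sanity check, combining a single p-value returns that p-value unchanged, since −2 ln(*p*) with 2 degrees of freedom inverts exactly.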

When the tests are not independent, the null distribution of *X*^{2} is more complicated. If the correlations between the test statistics are known, they can be used to form an approximation.

## References

- Fisher, R. A. (1948). "Combining independent tests of significance". *American Statistician*, 2(5), 30. (In response to Question 14)