Survivorship bias

In finance, survivorship bias is the tendency for failed companies to be excluded from performance studies because they no longer exist. It often skews study results higher, since only companies successful enough to survive to the end of the period are included.

For example, a mutual fund company's selection of funds today will include only those that have been successful in the past. Many losing funds are closed and merged into other funds to hide poor performance. This is how 90% of extant funds can truthfully claim to have performance in the first quartile of their peers: the other three quarters of funds have closed.
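The effect described above can be demonstrated with a minimal simulation. The closure rule below (dropping funds whose cumulative return falls below break-even at the halfway point) is a hypothetical stand-in for real fund closures, and all returns are drawn from the same distribution, so every fund has identical expected performance:

```python
import random

random.seed(0)

N_FUNDS = 1000
N_YEARS = 10

# Every fund's yearly return comes from the same distribution,
# so no fund is genuinely better than any other.
funds = [[random.gauss(0.05, 0.15) for _ in range(N_YEARS)]
         for _ in range(N_FUNDS)]

def growth(returns):
    """Cumulative growth of $1 invested across the given returns."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total

mean_all = sum(growth(f) for f in funds) / N_FUNDS

# Hypothetical closure rule: funds below break-even at the halfway
# point are closed and vanish from today's fund list.
survivors = [f for f in funds if growth(f[:N_YEARS // 2]) > 1.0]
mean_survivors = sum(growth(f) for f in survivors) / len(survivors)

print(f"mean growth, all funds:      {mean_all:.3f}")
print(f"mean growth, survivors only: {mean_survivors:.3f}")
```

Even though every fund has the same expected return, the survivors' average is noticeably higher, because the selection rule conditions on past success.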

In 1996 Elton, Gruber, & Blake showed that survivorship bias is larger in the small-fund sector than in large mutual funds (presumably because small funds have a high probability of folding). They estimate the size of the bias across the U.S. mutual fund industry as 0.9% per annum, where the bias is defined and measured as:
 * "Bias is defined as average α for surviving funds minus average α for all funds"
 * (Where α is the risk-adjusted return over the S&P 500, the standard measure of mutual fund out-performance.)
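Written out, the definition quoted above amounts to:

```latex
\text{bias} \;=\; \bar{\alpha}_{\text{surviving}} \;-\; \bar{\alpha}_{\text{all}}
```

where each fund's α is its risk-adjusted return in excess of the S&P 500, and the bars denote averages over the surviving funds and over all funds (including those that closed), respectively.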

As a general experimental flaw
Survivorship bias (or "survivor bias") is a statistical artifact in applications outside of finance as well: studies of the remaining population are fallaciously compared with the historic average, even though the survivors have unusual properties.

Most often, the unusual property in question is a track record of success (like the successful funds). For example, the parapsychology researcher Joseph Banks Rhine believed he had identified the few individuals among hundreds of potential subjects who had powers of ESP. His calculations were based on the improbability of these few subjects guessing the Zener cards shown to a partner by chance.

A major criticism which surfaced against his calculations was the possibility of unconscious survivor bias in subject selections. He was accused of failing to take into account the large effective size of his sample (all the people he didn't choose as 'strong telepaths' because they failed at an earlier testing stage). Had he done this he may have seen that from the large sample, one or two individuals will probably achieve the track record of success he had found purely by chance. (Similarly, many investors believe that chance is the main reason that most successful fund managers have the track records they do.)
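The criticism can be illustrated with a short simulation. The pool size and scoring below are hypothetical, but the card odds match a Zener deck (five symbols, so a 1-in-5 chance per guess). Every simulated subject guesses purely at random:

```python
import random

random.seed(1)

N_SUBJECTS = 1000   # hypothetical screening pool
N_TRIALS = 25       # one pass through a 25-card Zener deck
P_CHANCE = 1 / 5    # five symbols, so 1-in-5 odds per guess

# Nobody in this pool has ESP: every guess is random.
scores = [sum(random.random() < P_CHANCE for _ in range(N_TRIALS))
          for _ in range(N_SUBJECTS)]

# The expected score is 5 out of 25.  Selecting only the top
# performers makes a few subjects look far better than chance.
top_scores = sorted(scores, reverse=True)[:5]
print("top 5 scores out of 25:", top_scores)
```

For any single pre-chosen subject, a score of 10 or more correct has under a 2% probability, yet in a pool of 1,000 random guessers several such scores are expected by chance alone. Reporting only the top scorers, without the effective sample size, makes chance look like telepathy.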

Writing about the Rhine case, Martin Gardner explained that he did not think the experimenters had made such obvious statistical mistakes out of naivety, but rather by subtly disregarding some poor-scoring subjects. He said that without trickery of any kind, there would always be some people with improbable success, if a large enough sample were taken. To illustrate this, he speculated about what would happen if one hundred professors of psychology read Rhine's work and decided to make their own tests; he said that survivor bias would winnow out the typical failed experiments, but encourage the lucky successes to continue testing. He thought that the common null result (of no effect) would not be reported, but:
 * "Eventually, one experimenter remains whose subject has made high scores for six or seven successive sessions. Neither experimenter nor subject is aware of the other ninety-nine projects, and so both have a strong delusion that ESP is operating."

He concludes:
 * "The experimenter writes an enthusiastic paper, sends it to Rhine who publishes it in his magazine, and the readers are greatly impressed."

This effect of survivor bias also underlies "publication bias", which has begun to concern scientific journals. If enough scientists study a phenomenon, some will find statistically significant results purely by chance, and these are the experiments most likely to be submitted for publication. To combat this, some editors now call for the submission of 'negative' scientific findings, where "nothing happened."
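The mechanism can be sketched numerically. Each simulated "experiment" below tests whether a fair coin is biased, so the null hypothesis is true in every case and any rejection is a false positive; the rejection threshold is chosen to approximate a 5% significance level:

```python
import random

random.seed(2)

N_EXPERIMENTS = 1000  # many labs studying a non-existent effect
N_FLIPS = 100

def run_experiment():
    """Test a fair coin for bias; return True on a (false) positive."""
    heads = sum(random.random() < 0.5 for _ in range(N_FLIPS))
    # Reject if the count is more than ~2 standard deviations from
    # 50 heads -- roughly a 5% significance level for this test.
    return abs(heads - 50) > 10

false_positives = sum(run_experiment() for _ in range(N_EXPERIMENTS))
print(f"{false_positives} of {N_EXPERIMENTS} null experiments "
      f"look 'significant'")
```

If only the "significant" experiments are written up and submitted, the published literature records dozens of positive findings about an effect that does not exist, which is exactly the winnowing Gardner described.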