Publication bias


Publication bias arises from the tendency of researchers and editors to handle experimental results that are positive (they found something) differently from results that are negative (they found that something did not happen) or inconclusive.

Publication bias has been documented to occur in studies of medical interventions.[1] Publication bias, or the related outcome reporting bias (see below), may occur in 25%[2] to 60% of some types of articles.[3][4][5]

Publication bias may also occur in studies of diagnostic tests.[6] Publication bias may be more of a problem in diagnostic test research than in randomized controlled trials because studies of diagnostic tests can be secondary analyses of databases and do not have to be registered prior to publication.[7]

Definition

"Publication bias occurs when the publication of research results depends on their nature and direction."[8]

Positive results bias, a type of publication bias, occurs when authors are more likely to submit, or editors to accept, positive rather than null (negative or inconclusive) results.[9] A related term, "the file drawer problem", refers to the tendency for those negative or inconclusive results to remain hidden and unpublished.[10] Even a small number of studies lost in the file drawer can result in a significant bias.[11]
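
Rosenthal's file-drawer calculation can be illustrated with a short sketch. Assuming, as a simplification, that studies are combined with the Stouffer method of summed z-scores, a "fail-safe N" asks how many unpublished zero-effect studies would dilute the combined result below one-tailed significance. The function name and example z-scores below are illustrative, not taken from the cited paper.

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal-style fail-safe N (sketch): the number of unpublished
    null studies (z = 0) needed to drag the Stouffer combined z-score
    (sum of z / sqrt(number of studies)) below the one-tailed 5% level."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # Solve z_sum / sqrt(k + n) = z_alpha for n:
    return (z_sum / z_alpha) ** 2 - k

# Three published studies, each reporting z = 2.0: only about ten
# studies hidden in file drawers would undo the combined result.
print(fail_safe_n([2.0, 2.0, 2.0]))
```

A single study sitting exactly at the one-tailed threshold (z = 1.645) has a fail-safe N of zero, matching the intuition that it takes nothing at all to overturn a marginal result.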

Selective reporting bias, or outcome reporting bias, occurs when several outcomes within a trial are measured but are reported selectively depending on the strength and direction of the results.[12] Related coined terms are p-hacking[13] and HARKing (Hypothesizing After the Results are Known).[14]

Omitted-variable bias occurs when an adjusting variable that has its own effect on the dependent variable, and is correlated with the variable of interest, is excluded from a regression; the estimate for the variable of interest then absorbs part of the omitted variable's effect.[15]
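
The mechanism can be shown with a toy simulation (entirely illustrative; the variable names and coefficients are invented for this example). Here a confounder z both drives y and is correlated with x, so regressing y on x alone inflates the slope estimate:

```python
import random

def ols_slope(xs, ys):
    """Slope of a simple least-squares regression of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

rng = random.Random(0)
n = 20_000
x = [rng.gauss(0, 1) for _ in range(n)]
z = [0.8 * xi + rng.gauss(0, 0.5) for xi in x]   # confounder, correlated with x
y = [1.0 * xi + 1.0 * zi + rng.gauss(0, 1) for xi, zi in zip(x, z)]

# Omitting z biases the slope towards 1.0 + 1.0 * 0.8 = 1.8,
# not the true direct effect of 1.0.
print(ols_slope(x, y))
```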

For example, skeptics often argue that there is (or at least was) a strong publication bias in the field of parapsychology, producing a large file drawer of unpublished negative results.

Small study effect

The closely related small study effect is the observation that small studies tend to report more positive results.[16][17] It is a particular threat when the original studies in a meta-analysis enrol fewer than 50 patients each.[18]
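
A small simulation (illustrative; the sample sizes, effect size, and threshold are invented) shows why: when only significant results are published, small studies must overshoot the true effect to clear the significance bar, so their published estimates are inflated.

```python
import math
import random

def published_effects(n, true_effect=0.2, n_studies=5000, seed=1):
    """Effect estimates of simulated studies that clear the 5% bar.

    Each study estimates the effect as the mean of n observations with
    standard deviation 1, so it is 'significant' when the estimate
    exceeds 1.96 / sqrt(n)."""
    rng = random.Random(seed)
    threshold = 1.96 / math.sqrt(n)
    estimates = (sum(rng.gauss(true_effect, 1.0) for _ in range(n)) / n
                 for _ in range(n_studies))
    return [est for est in estimates if est > threshold]

small = published_effects(n=20)    # published small studies
large = published_effects(n=200)   # published large studies

# The small studies' published estimates overshoot the true effect
# (0.2) far more than the large studies' do.
print(sum(small) / len(small), sum(large) / len(large))
```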

Examples

Suppose that several studies of the influence of power lines on cancer are performed, and that a study is accepted for publication only if it shows a correlation at the 95% confidence level. If only the positive results reach publication, because negative results are simply shelved, readers cannot know how many studies were actually performed, so it is possible that every published result is a type I error.
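
The arithmetic of the example can be made concrete with a quick simulation (illustrative values): under the null hypothesis p-values are uniform, so about 5% of purely null studies will clear the publication bar, and every one of those published findings is a false positive.

```python
import random

rng = random.Random(42)
n_studies = 10_000

# No effect exists, so each study's p-value is uniform on [0, 1].
p_values = [rng.random() for _ in range(n_studies)]

# Journals accept only "significant" results (p < 0.05).
published = [p for p in p_values if p < 0.05]

# Roughly 5% of the null studies get published, and by construction
# every published result is a type I error.
print(len(published), len(published) / n_studies)
```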

Detection

P-curve analysis may[13] or may not[15] be able to detect p-hacking.

The caliper test may be able to detect publication bias.[19]
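
The idea behind the caliper test is simple: count test statistics falling just above versus just below the critical value; absent bias the two counts should be about equal, so a lopsided split is suspicious. A minimal sketch (the function name, band width, and one-sided binomial test are illustrative choices, not Gerber and Malhotra's exact procedure):

```python
from math import comb

def caliper_test(z_scores, z_crit=1.96, width=0.20):
    """Compare counts of z-statistics just above vs just below z_crit.

    Returns (above, below, p), where p is the one-sided binomial
    probability of at least `above` of the banded statistics landing
    above the threshold if the true split were 50/50."""
    above = sum(1 for z in z_scores if z_crit <= z < z_crit + width)
    below = sum(1 for z in z_scores if z_crit - width <= z < z_crit)
    n = above + below
    p = sum(comb(n, k) for k in range(above, n + 1)) / 2 ** n
    return above, below, p
```

When published z-statistics pile up just past 1.96, `above` dwarfs `below` and the p-value shrinks, flagging possible publication bias.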

Effect on meta-analysis

As a result, published studies may not be truly representative of all valid studies undertaken, and this bias may distort meta-analyses and systematic reviews of large numbers of studies, on which evidence-based medicine, for example, increasingly relies. The problem may be particularly significant when the research is sponsored by entities with a financial interest in achieving favourable results.

Those undertaking meta-analyses and systematic reviews need to take account of publication bias in the methods they use for identifying the studies to include in the review. Among other techniques to minimise the effects of publication bias, they may need to perform a thorough search for unpublished studies, and to use such analytical tools as a funnel plot to quantify the effects of bias.
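
One widely used funnel-plot diagnostic is Egger's regression test, which regresses each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero indicates funnel-plot asymmetry. The sketch below computes only the intercept, omitting the accompanying significance test; the function name is illustrative.

```python
def egger_intercept(effects, std_errors):
    """Intercept of an Egger-style regression of effect/SE on 1/SE.

    A value near zero is consistent with a symmetric funnel plot;
    a large intercept suggests small-study or publication bias."""
    y = [e / s for e, s in zip(effects, std_errors)]   # standardized effects
    x = [1.0 / s for s in std_errors]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx
```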

Possible examples

An example of probable publication bias is in the studies of glucosamine and chondroitin for treatment of osteoarthritis. In an initial meta-analysis, the authors noted evidence of publication bias during examination of the results.[20] A subsequent large randomized controlled trial[21] and meta-analyses including the large trial were negative.[22][23]

Another example is the selective publication of randomized controlled trials of antidepressants,[24] or of positive trials in general.[25]

One study[26] compared Chinese and non-Chinese studies of gene-disease associations and found that "Chinese studies in general reported a stronger gene-disease association and more frequently a statistically significant result"[27]. One possible interpretation of this result is selective publication (publication bias).

Ioannidis has inventoried factors that should alert readers to the risk of publication bias.[28]

Study registration

In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of research unless that research was registered in a public database from the start.[29] In this way, negative results should no longer be able to disappear.

References

  1. Dickersin K, Min YI, Meinert CL (1992). "Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards". JAMA. 267 (3): 374–8. PMID 1727960.
  2. Turner EH; et al. (2012-03-20). "Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database". PLoS Med. 9 (3): e1001189. doi:10.1371/journal.pmed.1001189. Retrieved 2012-03-21.
  3. Eyding D, Lelgemann M, Grouven U, Härter M, Kromp M, Kaiser T; et al. (2010). "Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials". BMJ. 341: c4737. doi:10.1136/bmj.c4737. PMID 20940209.
  4. Decullier E, Lhéritier V, Chapuis F (2005). "Fate of biomedical research protocols and publication bias in France: retrospective cohort study". BMJ. 331 (7507): 19. doi:10.1136/bmj.38488.385995.8F. PMC 558532. PMID 15967761.
  5. Chan AW, Krleza-Jerić K, Schmid I, Altman DG (2004). "Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research". CMAJ. 171 (7): 735–40. doi:10.1503/cmaj.1041086. PMC 517858. PMID 15451835.
  6. Owens DK, Holodniy M, Garber AM; et al. (1996). "Polymerase chain reaction for the diagnosis of HIV infection in adults. A meta-analysis with recommendations for clinical practice and study design". Ann. Intern. Med. 124 (9): 803–15. PMID 8610949.
  7. Irwig L, Macaskill P, Glasziou P, Fahey M (1995). "Meta-analytic methods for diagnostic test accuracy". J Clin Epidemiol. 48 (1): 119–30, discussion 131–2. PMID 7853038.
  8. Dickersin K (1990). "The existence of publication bias and risk factors for its occurrence". JAMA. 263 (10): 1385–1389. PMID 2406472.
  9. Sackett DL (1979). "Bias in analytic research". J Chronic Dis. 32 (1–2): 51–63. PMID 447779.
  10. Rosenthal R (1979). "The file drawer problem and tolerance for null results". Psychological Bulletin. 86 (3): 638–641.
  11. Scargle JD (2000). "Publication Bias: The "File-Drawer Problem" in Scientific Inference". Journal of Scientific Exploration. 14 (2): 94–106.
  12. Chang L, Dhruva SS, Chu J, Bero LA, Redberg RF (2015). "Selective reporting in trials of high risk cardiovascular devices: cross sectional comparison between premarket approval summaries and published reports". BMJ. 350: h2613. doi:10.1136/bmj.h2613.
  13. Simonsohn U, Nelson LD, Simmons JP (2014). "P-curve: a key to the file-drawer". J Exp Psychol Gen. 143 (2): 534–47. doi:10.1037/a0033242. PMID 23855496.
  14. Kerr NL (1998). "HARKing: hypothesizing after the results are known". Pers Soc Psychol Rev. 2 (3): 196–217. doi:10.1207/s15327957pspr0203_4. PMID 15647155.
  15. Bruns SB, Ioannidis JP (2016). "p-Curve and p-Hacking in Observational Research". PLoS One. 11 (2): e0149144. doi:10.1371/journal.pone.0149144. PMC 4757561. PMID 26886098.
  16. Nüesch E, Trelle S, Reichenbach S, Rutjes AW, Tschannen B, Altman DG; et al. (2010). "Small study effects in meta-analyses of osteoarthritis trials: meta-epidemiological study". BMJ. 341: c3515. doi:10.1136/bmj.c3515. PMC 2905513. PMID 20639294.
  17. Sterne JA, Egger M, Smith GD (2001). "Systematic reviews in health care: Investigating and dealing with publication and other biases in meta-analysis". BMJ. 323 (7304): 101–5. PMC 1120714. PMID 11451790.
  18. Richy F, Ethgen O, Bruyere O, Deceulaer F, Reginster J (2004). "From Sample Size to Effect-Size: Small Study Effect Investigation (SSEi)". The Internet Journal of Epidemiology. 1 (2).
  19. Gerber AS, Malhotra N (2008). "Publication bias in empirical sociological research: Do arbitrary significance levels distort published results?". Sociological Methods & Research. 37 (1): 3–30. doi:10.1177/0049124108318973.
  20. McAlindon TE, LaValley MP, Gulin JP, Felson DT (2000). "Glucosamine and chondroitin for treatment of osteoarthritis: a systematic quality assessment and meta-analysis". JAMA. 283 (11): 1469–75. PMID 10732937.
  21. Clegg DO, Reda DJ, Harris CL; et al. (2006). "Glucosamine, chondroitin sulfate, and the two in combination for painful knee osteoarthritis". N. Engl. J. Med. 354 (8): 795–808. doi:10.1056/NEJMoa052771. PMID 16495392.
  22. Vlad SC, LaValley MP, McAlindon TE, Felson DT (2007). "Glucosamine for pain in osteoarthritis: why do trial results differ?". Arthritis Rheum. 56 (7): 2267–77. doi:10.1002/art.22728. PMID 17599746.
  23. Reichenbach S, Sterchi R, Scherer M; et al. (2007). "Meta-analysis: chondroitin for osteoarthritis of the knee or hip". Ann. Intern. Med. 146 (8): 580–90. PMID 17438317.
  24. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008). "Selective publication of antidepressant trials and its influence on apparent efficacy". N. Engl. J. Med. 358 (3): 252–60. doi:10.1056/NEJMsa065779. PMID 18199864.
  25. Bourgeois FT, Murthy S, Mandl KD (2010). "Outcome reporting among drug trials registered in ClinicalTrials.gov". Ann Intern Med. 153 (3): 158–66. doi:10.1059/0003-4819-153-3-201008030-00006. PMID 20679560.
  26. Zhenglun Pan, Thomas A. Trikalinos, Fotini K. Kavvoura, Joseph Lau, John P.A. Ioannidis, "Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature". PLoS Medicine, 2(12):e334, 2005 December.
  27. Jin Ling Tang, "Selection Bias in Meta-Analyses of Gene-Disease Associations", PLoS Medicine, 2(12):e409, 2005 December.
  28. Ioannidis J (2005). "Why most published research findings are false". PLoS Med. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMID 16060722.
  29. The Washington Post (2004-09-10). "Medical journal editors take hard line on drug research". smh.com.au. Retrieved 2008-02-03.
