Dealing with Missing Studies

At this point, it is well known that many research findings get put away in a file drawer and never see the light of day (Kicinski et al. 2015; Rosenthal 1979). There are several reasons for this, some of which include:

  •   Getting results that contradict a hypothesis  
  •   Submitting papers with null results that get rejected
  •   Producing results that may compromise research funding

So, what does this mean? Many of the studies showing that treatment X helps with condition Y get published, while the other studies that found no effect of X on Y never do.

Why is this a problem? Because if we start making decisions on a clinical or population level based on biased findings, our decisions will rest on findings that do not reflect the true effect.

Luckily, we have developed some ways of detecting whether there is a file drawer problem, which falls under the umbrella term publication bias.

In this blog post, I would like to discuss some of these methods in the context of a meta-analysis, which I discussed in a previous blog post. The examples and discussion below are adapted from Borenstein’s book on meta-analysis (Borenstein et al. 2011), which is a dense read but well worth it. I would recommend reading this article first, since it is simplified and shorter.

Understanding Publication Bias

Anyway, back to publication bias. First, I want to make it clear that we are talking about unpublished studies that are missing due to bias, not randomness. Let me unpack this.

When you run a meta-analysis, you want to get as many studies as possible to increase the precision of your pooled estimate, but it is not very likely that you will capture all the studies that meet your inclusion criteria. Several may escape you just because of randomness (for example, you happened to miss a few studies because there are so many), and as a result, you will get wider confidence intervals. This is not publication bias.

However, if studies are missing from your meta-analysis because they were never published due to null results, that is a systematic reason they are missing and something that will throw off your point estimate.

Other systematic reasons for missing studies in meta-analyses may be:

  • Availability bias (studies that are easy to access are more likely to be included)
  • Language bias (studies that are in the reviewer's language are more likely to be included)
  • Familiarity bias (studies from the reviewer’s own discipline are more likely to be included)
  • Cost bias (studies that are less expensive to access or free are likely to be included)
  • Duplication bias (large, expensive studies tend to generate multiple publications and can be counted more than once)
  • Citation bias (significant results are more likely to be cited by others and included)

These are worth mentioning as other sources of bias, but here I want to discuss the file-drawer problem specifically: studies missing because of their results.

First, we want to see if there’s evidence of publication bias.

Detecting Evidence of Publication Bias 

The Sample Size + Effect Size Model

One of the most common models used to detect publication bias is based on sample sizes and effect sizes. The model assumes the following:

Large studies (which have large investment from stakeholders and researchers) are very likely to be published even if the results are not significant. So, there won’t be many missing studies in this category. Also, with such large samples, even tiny effect sizes reach statistical significance.

Moderately sized studies have sample sizes large enough that most produce significant findings, so only a few studies are lost here because many are significant and published.

Small studies will produce significant findings if the effects are large, and therefore, large effects are likely to be published, but small-medium effects are often not. So, several studies are likely to be lost here if they do not have large effects.

Caveat: It is possible that small studies DO have large effects for reasons unrelated to publication bias, like poor quality control, among other things. This is referred to as “small-study effects” (Sterne et al. 2001), and it is something we should keep in mind: what looks like publication bias may not be publication bias. All the methods below are based on assumptions about the effect sizes and the sample sizes.

The Forest Plot Method

A quick method to gauge publication bias is to look at a forest plot. It is not objective, but it gives you a sense of the data. First, organize all the studies by how large they are, i.e., by their weight. Then, eyeball whether the point estimates begin to shift in a particular direction as the weight increases or decreases.

For example, below, the studies are organized by size. Larger studies (more weight) are on the top, and smaller studies (less weight) are on the bottom. Notice that the risk ratios begin to shift towards the right (larger effects) as the studies get smaller. The larger studies, with more weight, have smaller point estimates closer to the null value. The smaller studies have larger point estimates. Based on our model, there may be *some* evidence of publication bias here.

[Figure: forest plot with studies organized from largest (top) to smallest (bottom). Notice the risk ratios get larger for the smaller studies.]
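If you want to build this kind of plot yourself, here is a minimal sketch using matplotlib. The log risk ratios and standard errors are hypothetical values, not the studies in the figure; the idea is simply to sort studies by their inverse-variance weights and draw them heaviest-first.

```python
# A minimal sketch (hypothetical data): sort studies by weight and draw a
# simple forest plot to eyeball whether small studies drift toward larger effects.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical log risk ratios and standard errors (smaller SE = larger study)
log_rr = np.array([0.05, 0.10, 0.08, 0.25, 0.40, 0.55])
se     = np.array([0.05, 0.08, 0.10, 0.20, 0.30, 0.35])

weights = 1 / se**2                        # inverse-variance weights
order = np.argsort(weights)[::-1]          # heaviest study first
log_rr, se = log_rr[order], se[order]

y = np.arange(len(log_rr))[::-1]           # heaviest study plotted at the top
plt.errorbar(log_rr, y, xerr=1.96 * se, fmt="o")
plt.axvline(0, linestyle="--")             # null value (log RR = 0)
plt.yticks(y, [f"Study {i + 1}" for i in range(len(log_rr))])
plt.xlabel("log risk ratio")
plt.title("Forest plot, studies ordered by weight")
plt.show()
```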

The Funnel Plot Method

Another subjective method is to use a funnel plot. Each point in a funnel plot is a study, and the points are distributed around the pooled mean because of random sampling error. If the plot is symmetric, the risk of publication bias may be low, but if there is substantial asymmetry, there may be reason to take it as an indication of publication bias (Light and Pillemer 1986).

There are obviously caveats here because asymmetry may not be a result of publication bias, and symmetry is not automatically indicative of lack of publication bias. Again, this is to give you a sense of the data.

In a funnel plot, the large studies are likely to be clustered around the top because they have small standard errors. We do not focus on this area because, based on our model, there usually aren’t many missing large studies.

Moderately sized studies are a bit harder to eyeball: some studies are missing, but the standard errors are not large, so the points are not dispersed widely enough to tell whether they cluster in one direction.

Small studies have large standard errors, are widely dispersed, and will cluster in one direction if there is some indication of bias. The funnel plot below shows some evidence of publication bias because, in the lower half of the graph, which includes small to medium-sized studies, several studies are clustered to the right of the mean.
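Here is a minimal sketch of a funnel plot, again with hypothetical data rather than the studies in the figure. It plots each study’s effect against its standard error on an inverted y-axis, with 95% limits around the fixed-effect pooled estimate forming the funnel.

```python
# A minimal sketch (hypothetical data): funnel plot of effect size vs. standard error,
# with 95% pseudo-confidence limits around the fixed-effect pooled estimate.
import numpy as np
import matplotlib.pyplot as plt

log_rr = np.array([0.05, 0.10, 0.08, 0.25, 0.40, 0.55, 0.02, 0.35])
se     = np.array([0.05, 0.08, 0.10, 0.20, 0.30, 0.35, 0.12, 0.25])

w = 1 / se**2
pooled = np.sum(w * log_rr) / np.sum(w)    # fixed-effect pooled estimate

se_grid = np.linspace(0.001, se.max() * 1.1, 100)
plt.plot(pooled - 1.96 * se_grid, se_grid, "k--")   # left funnel boundary
plt.plot(pooled + 1.96 * se_grid, se_grid, "k--")   # right funnel boundary
plt.scatter(log_rr, se)
plt.axvline(pooled, color="k")
plt.gca().invert_yaxis()                   # large studies (small SE) at the top
plt.xlabel("log risk ratio")
plt.ylabel("standard error")
plt.title("Funnel plot")
plt.show()
```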

Orwin’s Fail-Safe Number

Orwin’s fail-safe number is a method to calculate how many missing studies it would take to make the meta-analysis pooled estimate trivially small, i.e., no longer practically relevant (Orwin 1983).

It is a modification of Rosenthal’s fail-safe number, which asks the question, “How many missing null studies would need to be incorporated into this meta-analysis to make the pooled estimate insignificant?” (Rosenthal 1979).

Let me first explain this in the context of Rosenthal’s method and then move on to Orwin’s. Suppose a meta-analysis of 10 studies finds a significant overall effect, and it would take only 5 missing studies with null treatment effects to make the overall treatment effect insignificant. That is cause for concern: if such a small number of null studies really were missing, the pooled estimate was never robust in the first place.

Now, let’s say you did the calculation for another meta-analysis via Rosenthal’s method and it took 2,000 missing studies for the significant effect to become insignificant. That would give you more confidence in the overall treatment effect; it is a very robust effect.

Orwin’s fail-safe number is similar to Rosenthal’s except that, instead of focusing on significance, it focuses on effect sizes and asks, “How many missing studies would it take to make this pooled treatment effect no longer relevant?” Also, unlike Rosenthal’s method, which assumes that all the missing studies are null, Orwin’s method allows the missing studies to have an average effect other than zero, including effects of the opposite sign. Rosenthal’s method is hardly used anymore, but explaining it makes Orwin’s method easier to understand.
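To make the two calculations concrete, here is a minimal sketch of both fail-safe numbers. The z-scores, the criterion effect (the smallest effect you would still care about), and the assumed mean effect of the missing studies are all hypothetical inputs you would choose yourself; the function names are just for illustration.

```python
# A minimal sketch of the two fail-safe N calculations described above.
import numpy as np
from scipy.stats import norm

def rosenthal_failsafe_n(z_scores, alpha=0.05):
    """How many missing null (z = 0) studies would make the combined
    one-tailed Stouffer z-test non-significant?"""
    z_alpha = norm.ppf(1 - alpha)
    k = len(z_scores)
    return max(0.0, (np.sum(z_scores) / z_alpha) ** 2 - k)

def orwin_failsafe_n(d_mean, k, d_criterion, d_missing=0.0):
    """How many missing studies with mean effect d_missing would pull the
    pooled effect down to d_criterion (the smallest effect still worth caring about)?"""
    return k * (d_mean - d_criterion) / (d_criterion - d_missing)

# Hypothetical example: 10 studies with a mean standardized effect of 0.50
z = np.array([2.1, 1.9, 2.5, 1.7, 2.8, 2.2, 1.6, 2.0, 2.4, 1.8])
print(rosenthal_failsafe_n(z))                  # missing null studies to reach p > .05
print(orwin_failsafe_n(d_mean=0.50, k=10,
                       d_criterion=0.10,        # assumed "trivial" effect threshold
                       d_missing=0.0))          # assumed mean effect of missing studies
```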

How Much Did Publication Bias Affect the Results?

Okay, so there might be some evidence of publication bias. On its own, that is not very actionable. What we really want to know is how much publication bias has affected the overall pooled estimate. Below are some methods for estimating this.

Trim and Fill

The Trim and Fill method tries to produce an unbiased pooled effect by removing small studies with extreme effects (trim) and imputing the presumed missing studies in a mirror-image manner (fill) (Duval and Tweedie 2000). The beautiful thing about this method is that you can easily compare the funnel plots before and after the procedure to see how much the pooled estimate changes once the extreme studies are removed and the missing studies are imputed. It is always important to remember that the filled studies are simulated, not *REAL*, studies: a drastically different adjusted point estimate suggests the result is sensitive to publication bias, but it does not tell you with certainty that publication bias is what actually happened.

[Figure: funnel plot before trim and fill]

[Figure: funnel plot after trim and fill]
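For the curious, here is a simplified sketch of the trim-and-fill idea: a fixed-effect version using Duval and Tweedie’s L0 estimator, assuming the missing studies sit on the right side of the funnel. The data are hypothetical, and real implementations (for example, the trimfill() function in R’s metafor package) handle more estimators and random-effects models.

```python
# A simplified sketch of trim and fill (fixed-effect model, L0 estimator,
# assuming bias on the right side of the funnel). Hypothetical data.
import numpy as np

def pooled_fixed(y, se):
    w = 1 / se**2
    return np.sum(w * y) / np.sum(w)

def trim_and_fill(y, se, max_iter=50):
    y, se = np.asarray(y, float), np.asarray(se, float)
    n, k0 = len(y), 0
    for _ in range(max_iter):
        # Trim the k0 most extreme right-hand studies and re-pool
        keep = np.argsort(y)[: n - k0]
        mu = pooled_fixed(y[keep], se[keep])
        # Rank absolute deviations from the trimmed pooled estimate
        dev = y - mu
        ranks = np.argsort(np.argsort(np.abs(dev))) + 1
        t_n = ranks[dev > 0].sum()              # ranks of right-side studies
        k0_new = int(max(0, round((4 * t_n - n * (n + 1)) / (2 * n - 1))))
        if k0_new == k0:
            break
        k0 = k0_new
    # Fill: mirror the k0 trimmed studies around the adjusted pooled estimate
    trimmed = np.argsort(y)[n - k0:] if k0 > 0 else np.array([], dtype=int)
    y_all = np.concatenate([y, 2 * mu - y[trimmed]])
    se_all = np.concatenate([se, se[trimmed]])
    return pooled_fixed(y, se), pooled_fixed(y_all, se_all), k0

y  = [0.05, 0.10, 0.08, 0.25, 0.40, 0.55, 0.02, 0.35]   # hypothetical log risk ratios
se = [0.05, 0.08, 0.10, 0.20, 0.30, 0.35, 0.12, 0.25]
original, adjusted, n_missing = trim_and_fill(y, se)
print(f"original = {original:.3f}, adjusted = {adjusted:.3f}, imputed studies = {n_missing}")
```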

Cumulative Meta-Analysis

Cumulative meta-analysis is a technique where you add studies to your meta-analysis one by one and see the effect each addition has on your pooled estimate. In the image below, the largest studies are added first, at the top. The first row has one large study. The second row combines the first large study with a second large study. The third row adds another large study to the first two, and so on. What you are looking for is the effect of the small studies on the overall pooled estimate. At the bottom of the forest plot, as the small studies are added, the point estimate does not change drastically, which suggests that publication bias does not greatly affect the pooled estimate.

[Figure: cumulative forest plot showing the effect of adding studies one by one]
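A cumulative meta-analysis is also easy to compute by hand. The sketch below (hypothetical data, fixed-effect pooling) sorts the studies from largest to smallest and recomputes the pooled estimate each time a study is added.

```python
# A minimal sketch (hypothetical data): cumulative fixed-effect meta-analysis,
# adding studies one at a time from largest to smallest.
import numpy as np

log_rr = np.array([0.05, 0.10, 0.08, 0.25, 0.40, 0.55, 0.02, 0.35])
se     = np.array([0.05, 0.08, 0.10, 0.20, 0.30, 0.35, 0.12, 0.25])

order = np.argsort(se)                 # smallest SE = largest study first
log_rr, se = log_rr[order], se[order]
w = 1 / se**2

for k in range(1, len(log_rr) + 1):
    pooled = np.sum(w[:k] * log_rr[:k]) / np.sum(w[:k])
    print(f"first {k} studies: pooled log RR = {pooled:.3f}")
# If the estimate barely moves as the small studies come in, publication bias
# is unlikely to be driving the pooled result.
```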

That is today's lesson on publication bias. If you want to read more about meta-analyses, check out my blog post here on using different meta-analysis models.

References

  1. Kicinski M, Springate DA, Kontopantelis E. Publication bias in meta-analyses from the Cochrane Database of Systematic Reviews. Stat Med. 2015;34(20):2781-2793. doi:10.1002/sim.6525
  2. Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull. 1979;86(3):638. http://psycnet.apa.org/record/1979-27602-001.
  3. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-Analysis. John Wiley & Sons; 2011. 
  4. Sterne JAC, Egger M, Smith GD. Investigating and Dealing with Publication and Other Biases. In: Egger M, Smith GD, Altman DG, eds. Systematic Reviews in Health Care. London, UK: BMJ Publishing Group; 2001:189-208. doi:10.1002/9780470693926.ch11
  5. Light RJ, Pillemer DB. Summing Up: The Science of Reviewing Research. Cambridge, MA: Harvard University Press; 1984. Educ Res. 1986;15(8):16-17. doi:10.3102/0013189X015008016
  6. Orwin RG. A Fail-Safe N for Effect Size in Meta-Analysis. J Educ Behav Stat. 1983;8(2):157-159. doi:10.2307/1164923
  7. Duval S, Tweedie R. Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56(2):455-463. https://www.ncbi.nlm.nih.gov/pubmed/10877304.
