The PI’s Guide to Running a Bad Study

Before the study

  1. Don’t ask a medical librarian to do a systematic search of the literature to see what’s already been published.
  2. If you do happen to do a literature review, make sure to ask a research assistant to scour the depths of PubMed.
  3. Don’t consult a biostatistician before embarking on your courageous decision to run a trial. Who needs ’em?!
  4. Don’t pre-register your trial; it’s overrated. Plus, it would stop you from fishing the data (and you definitely want to fish).
  5. Don’t come up with a hypothesis now. You do this after you get your results. Trust me on this.
  6. Don’t do a power analysis/design analysis.
  • If you do happen to do a power analysis mistakenly, make sure to find the largest effect size possible that will give you 80% power and brag about that in your grant.
  • When looking for effect sizes to do a power analysis, make sure to use the published literature because who cares about publication bias, yeah?
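
(In case you’re wondering why cherry-picking the biggest effect size is so tempting, here’s a rough sketch of the standard normal-approximation sample-size formula for a two-sample comparison. The function name and defaults are my own; a real power analysis belongs to that biostatistician you didn’t call.)

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison
    (normal approximation) at standardized effect size d."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# A realistic, modest effect needs a real budget...
print(n_per_group(0.3))   # → 175 per group
# ...but grabbing the biggest published effect "solves" that.
print(n_per_group(0.8))   # → 25 per group
```

Since the required sample size scales with 1/d², inflating the assumed effect size shrinks the budget beautifully — right up until the trial fails to detect the effect that actually exists.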

Suspect that your intervention may not be that great when compared to anything? Here’s a guide to making sure you get excellent results.

  • Use a high dose of your intervention, and a low dose of the comparator (superiority).
  • Use a high dose of the comparator and a low dose of your intervention to make the comparator look toxic and yours look safer (safety).
  • Focus on the shortest follow-up endpoints, so that no difference between the two has time to emerge (equivalence).
  • Also, save your money and use a super small sample size to find no difference (equivalence).
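
(If you’re curious just how reliably a tiny trial “finds” no difference, here’s a back-of-the-envelope power calculation — the same normal approximation as above, with a helper function of my own invention.)

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a true
    standardized effect d (usual one-sided approximation of power)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return 1 - nd.cdf(z_crit - d * sqrt(n_per_group / 2))

# A genuine medium effect (d = 0.5) with only 5 patients per arm:
print(round(power_two_sample(0.5, 5), 2))   # → 0.12
```

With roughly 12% power, a real difference goes undetected almost 9 times out of 10 — which is exactly the point: absence of evidence, conveniently marketed as evidence of absence.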

During the study

  1. Forget about blinding your participants or randomizing them (you can adjust for the confounders later!).
  2. Are your participants dropping out horrendously? Forget ’em, burn their data and keep the data of those who stayed. Attrition bias is irrelevant, and only losers impute data.
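
(For the skeptics: here’s a deterministic toy example of what burning dropouts’ data buys you. The numbers are made up for illustration — both arms have identical outcomes, so the true effect is exactly zero — but the worst-off treatment patients drop out, and a complete-case analysis conjures an effect out of thin air.)

```python
from statistics import mean

# Identical outcomes in both arms: the true effect is zero.
control = list(range(1, 21))     # outcomes 1..20
treatment = list(range(1, 21))   # outcomes 1..20

print(mean(treatment) - mean(control))   # → 0.0 (the honest answer)

# The six worst-off treatment patients drop out; burn their data:
observed_treatment = [x for x in treatment if x > 6]
print(mean(observed_treatment) - mean(control))   # → 3.0 (a free "effect")
```

This is why intention-to-treat analysis and principled handling of missing data exist: a complete-case analysis with informative dropout can manufacture a treatment effect from nothing.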

Results — Statistical Analyses

  1. Between-group analyses are not useful; only use WITHIN-group analyses for maximal significance.
  2. Send your results over to a statistician to find the significant ones and delete the nonsignificant ones (if they don’t do this for you, tell them they suck).
  3. Thinking about reporting both intent-to-treat analyses and per-protocol analyses? Hahaha, you must not want to be in academia.
  4. Effect sizes and standard deviations aren’t relevant. Neither are confidence intervals or p-values (they’re cousins). Report that your results were significant. That’s it. Wanna go full Bayesian?
  5. Say you got a super large Bayes factor, now you’re loved amongst the subjective nerds.
  6. Transform your data as many times as necessary.
  7. Get rid of data points that look like outliers to you.
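
(A quick calculation of why deleting nonsignificant results works so well: run enough independent tests on pure noise and the family-wise false-positive rate, 1 − (1 − α)^k, climbs fast. The arithmetic below is standard; the framing is mine.)

```python
# Chance that at least one of k independent tests on pure noise
# comes out "significant" at alpha = 0.05:
alpha = 0.05
for k in (1, 5, 20):
    print(k, round(1 - (1 - alpha) ** k, 2))
# 1  → 0.05
# 5  → 0.23
# 20 → 0.64
```

Run twenty uncorrected comparisons and you have roughly a 64% chance of at least one false positive to report — no true effect required.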

Publishing Time

  1. Salami-slice the results and make multiple papers out of this one study.
  2. Submit the papers ONLY to predatory journals AKA illegitimate publishers. They’re looking out for you.
  3. Make your results seem better than they are in the press release and explain how your study is groundbreaking.
  4. Do as many interviews with science journalists as possible and use jargon that you don’t understand; this is key.

