The Story of Antidepressants and Bad Journalism

Setting the Narrative

Nearly a decade ago, a researcher by the name of Irving Kirsch conducted one of the most extensive meta-analyses of antidepressant trials. He compiled the *published* clinical trial data on antidepressants, and, to make sure he wasn't analyzing only the data that supported antidepressant effectiveness, he also invoked the Freedom of Information Act and dug up the *unpublished* pharmaceutical data.

Before starting this ambitious project, Kirsch, a researcher now at Harvard Medical School, was already one of the leading figures in placebo research. He proposed that people feel better when they receive placebos because they *expect* to feel better, and his work on expectancy has shaped placebo-controlled research to this day. Antidepressant research caught Kirsch's eye because antidepressants act on mood, a domain where placebo effects are known to be strong, so he decided to investigate whether the drugs were actually better than placebo, the latter being his area of expertise.

Irving Kirsch

In his first antidepressant project, published around 1998, he gathered the *published* clinical trial data from 19 controlled trials and combined the results. He wasn't surprised to see that placebos made people feel better. After all, people partly expect to get better, so why wouldn't they? What surprised him was how weak the antidepressant effect was. The drugs beat placebo by a statistically significant margin, but the margin was too small to be clinically relevant. More on this later.

Kirsch knew that relying on *published* clinical trial data would probably give him numbers that made antidepressants look better than they are. Why? It was well known that trial results unfavorable to the funders, or simply not newsworthy, often never get published at all; they get put away in a file drawer.

So Kirsch's first project, drawing only on published data, had access mostly to trials that were likely to make antidepressants look effective, and even then the drugs weren't much better than placebo. He suspected that most, if not all, of the benefit was the placebo effect. For his second and most controversial project, he wanted all the data. Even the unpublished data.

Kirsch was aware that before any clinical trial was conducted, it had to be registered with the FDA, so the agency held the data on every trial that had ever been run, even the unpublished ones. Kirsch pressed the FDA for this information using the Freedom of Information Act, and over the following decade he dug up the data on every antidepressant trial submitted to the agency. He was incredibly thorough. He didn't want to miss a single study.

After combining the results of all 35 published and unpublished clinical trials, Kirsch once again found that antidepressants were statistically better than placebo, but not by a clinically relevant margin.

To put some numbers on this, Kirsch found that antidepressants had an effect size of 0.32.

An effect size of this kind (also called a standardized mean difference, or Cohen's d) is calculated from group averages: take the mean of one group, such as the antidepressant group, subtract the mean of the other, such as the placebo group, and divide the difference by the standard deviation. By Cohen's conventional benchmarks, 0.2 is a small effect, 0.5 is medium, and 0.8 is large. Larger effect sizes mean stronger effects.
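The calculation is simple enough to sketch in a few lines of Python. The score arrays below are invented purely for illustration, not taken from any trial:

```python
# A minimal sketch of a standardized mean difference (Cohen's d) with a
# pooled standard deviation. The numbers are made up for illustration.
import statistics


def cohens_d(drug_scores, placebo_scores):
    """Difference in group means divided by the pooled standard deviation."""
    mean_diff = statistics.mean(drug_scores) - statistics.mean(placebo_scores)
    n1, n2 = len(drug_scores), len(placebo_scores)
    var1 = statistics.variance(drug_scores)    # sample variance (n - 1)
    var2 = statistics.variance(placebo_scores)
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return mean_diff / pooled_sd


# Hypothetical improvement scores for two small groups:
drug = [10, 12, 14]
placebo = [8, 10, 12]
print(cohens_d(drug, placebo))  # prints 1.0
```

Real meta-analyses compute this per trial and then pool the per-trial estimates with weights, but the core quantity is the same ratio.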

Again, Kirsch found an effect size of 0.32, close to what psychological research considers "small." Before Kirsch's projects, the UK's National Institute for Health and Care Excellence (NICE) had issued guidelines stating that effect sizes below 0.50 were not clinically relevant for depression. Why 0.50? What was the justification? Not much, actually. They seem simply to have picked a round, impressive-looking number and written it into their guidelines.

So, when Irving Kirsch conducted his large meta-analysis of published and unpublished data and found an effect size of 0.32, he concluded in his paper that antidepressants were statistically, but not clinically, significant, echoing the NICE guidelines.

When the paper was published and the press releases went out, journalists read its conclusions and ran headlines saying that antidepressants were "no better than placebo." This set the narrative around antidepressants for a long time, largely because everyone knew what a behemoth Kirsch's project was: it was thorough, and it said antidepressants weren't clinically significant. The results made news all over the world, and the study became one of the most controversial of its time because so many people were prescribed antidepressants.

2008 headlines

It was only later that NICE revised its guidelines on what constituted clinical significance, precisely because there hadn't been much evidence behind the original cutoff in the first place! The question is still debated to this day.

Unfortunately, the revision didn't receive nearly as much media attention. So, for a long time, the narrative remained that antidepressants were no better than placebo, because the legendary placebo researcher Irving Kirsch had said so in his paper, and because NICE's old guidelines had backed him up.

Lazy Journalism and Bad Headlines

A decade later, another researcher, Andrea Cipriani, teamed up with some of the world's top medical researchers and statisticians for another antidepressant project in the same spirit as Kirsch's, but even more thorough. Where Kirsch had pooled data from 35 trials, Cipriani's team pooled clinical trial data from more than 500. This project was a statistical behemoth.

network meta-analysis


Simply put, Cipriani's network meta-analysis, a design that compares many treatments at once by combining direct and indirect evidence across trials, found a standardized mean difference (an effect size based on means) of about 0.30, which it took to show that antidepressants were indeed more effective than placebo.

Cipriani's project also made headlines all over the world; see the images below. News headlines claimed this was definitive proof that antidepressants were indeed better than placebo. However, there is a touch of irony here.

2018 headlines

A decade earlier, Irving Kirsch found an effect size of 0.32, and the media reported that antidepressants were no better than placebo, because his paper said they weren't clinically better.

A decade later, Cipriani found an effect size (0.30) even LOWER than Kirsch's (0.32), and the media concluded this was definitive proof that antidepressants are better than placebo.


What can we take away from this? Nothing new. Just another shameful example showing that, beyond institutional press releases, most science journalists do not do enough digging. If journalists were more skeptical and more comfortable with uncertainty, they wouldn't write pieces claiming that antidepressants don't work, then have their colleagues write pieces a decade later claiming that antidepressants do work, even though the new results were essentially the same as the old (slightly lower, in fact!).

Science journalism is incredibly important, not only because it keeps people informed, but because it shapes policy. If policy is being made on inaccurate interpretations of science, then something needs to change: we have to get better at relaying scientific findings to the public.

Credit: I'd like to give due credit to Scott Alexander of Slate Star Codex and to Neuroskeptic, whose writing inspired this piece.
