Proving a cause-and-effect relationship isn’t easy. Causality is a complex subject, and there are thousands of texts on it, involving philosophical and mathematical arguments beyond my understanding. However, I understand enough to discuss how we arrive at cause-and-effect relationships in the sciences.
One of the first things often drilled into students in a research methods course is that correlation does not equal causation. It’s also taught that one of the best ways to show causality in the health and social sciences is with a randomized controlled trial (RCT). Why?
Simply put, there are a few reasons:
- you can compare what you’re testing against a control, to see whether it performs better or worse, and rule out effects like the placebo and nocebo effects or regression toward the mean
- randomization breaks spurious links with confounders (reducing selection bias)
- and the sequence of events is mapped out, so you know which variable came first and caused the other (addressing temporality).
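The randomization point can be sketched with a toy simulation (a minimal sketch; all probabilities are invented for illustration). An unmeasured confounder drives both treatment assignment and the outcome in the "observational" arm, while a coin flip assigns treatment in the "randomized" arm:

```python
import random

random.seed(0)

def simulate(randomize, n=100_000):
    """Return the naive risk difference (treated minus control)."""
    treated_outcomes, control_outcomes = [], []
    for _ in range(n):
        confounder = random.random() < 0.5        # an unmeasured trait
        if randomize:
            treated = random.random() < 0.5       # coin flip breaks the link
        else:
            # the confounder makes treatment more likely (selection bias)
            treated = random.random() < (0.8 if confounder else 0.2)
        # the outcome depends ONLY on the confounder, never on treatment
        outcome = random.random() < (0.6 if confounder else 0.1)
        (treated_outcomes if treated else control_outcomes).append(outcome)
    risk_treated = sum(treated_outcomes) / len(treated_outcomes)
    risk_control = sum(control_outcomes) / len(control_outcomes)
    return risk_treated - risk_control

print(f"observational risk difference: {simulate(randomize=False):.3f}")  # ~0.30, spurious
print(f"randomized risk difference:    {simulate(randomize=True):.3f}")   # ~0.00
```

Even though the treatment does nothing here, the observational comparison shows a large "effect" purely because of the confounder; randomization makes it vanish.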
These are hard to do with observational/epidemiological studies.
So, RCTs are one of the best ways to prove causality, and at some point in history, we determined that smoking causes lung cancer in humans. Does that mean we recruited participants, randomized them into groups, and made some of them smoke?
Uhhhh, nope. Although several unethical experiments on humans have been carried out in the past, this was not one of them. Thank god. Institutional Review Boards wouldn’t allow an investigation like that, where people could potentially be harmed to this degree.
Wait, if there are no RCTs testing the effects of cigarettes, then how did we arrive at the conclusion that smoking causes lung cancer in humans? In the mid-1900s, two researchers, Richard Doll and Austin Bradford Hill, produced data showing that smoking frequency was strongly correlated with lung cancer incidence. The data were compelling. Crazy stuff.
But again, these were associations from observational studies (case-control studies), not experiments. What if people who got lung cancer had genes that also made them more likely to smoke? What if they were already screwed and simply happened to be drawn to smoking? You couldn’t dismiss these possibilities with correlations alone.
This is actually what the legendary statistician, empiricist, and potato scientist Ronald A. Fisher argued at the time.
Fisher was a regular critic of these findings and of the surgeon general’s report that smoking causes lung cancer. He claimed that the conclusions were purely correlational, and funnily enough, that’s also what tobacco companies argued to defend their profitable product. And they weren’t wrong.
Alright, we’re in a dilemma. Randomized trials are the best way to establish causality, but you can’t run a randomized experiment to see whether smoking causes cancer, because that would be unethical. You’re limited to epidemiological/observational studies.
So, researchers had to work with what they had. And that’s what they did in the mid-1900s. There were already suspicions at the time, from studies in animal models, that consuming tobacco products caused harmful gene mutations. Specific compounds found in tobacco products had also been established as carcinogens, and researchers were already open to the idea that smoking wasn’t the most excellent habit. Unfortunately, rigorous human data were missing.
The results from the British Doctors’ Study (the study conducted by Richard Doll and Austin Bradford Hill) were just the nail in the coffin, because they provided evidence (although correlational) that smoking frequency increased the incidence of lung cancer.
One of the researchers on that original paper (Austin Bradford Hill) went on to establish the Bradford Hill criteria, a set of principles for assessing causality from epidemiological data.
Interestingly enough, he was also a pioneer of the randomized controlled trial! What a badass.
I’ve strayed. So, Hill established that you could assess causality from nine principles:
- Strength of the association
- Consistency
- Specificity
- Temporality
- Dose dependency (biological gradient)
- Biological mechanism (plausibility)
- Coherence
- Experiment
- Analogy
The relationship between smoking and lung cancer fit many of these criteria. There was a biological mechanism (DNA damage from carcinogens in tobacco products), the effect sizes of the associations were large, there was a dose-dependent relationship (groups that smoked more had a higher incidence of lung cancer), there was consistency among the data observed, and there was coherence (multiple lines of evidence supporting the relationship: animal models, cellular and molecular biology, epidemiology, etc.). Also, see this interesting paper on robust research needing multiple lines of evidence.
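Dose dependency in a case-control design is usually expressed with odds ratios relative to a non-smoking reference group. Here’s a minimal sketch with entirely made-up counts (NOT Doll and Hill’s actual data) showing how the odds ratio climbs across smoking-dose bands:

```python
# Hypothetical case-control counts per daily-cigarette band: (cases, controls).
# These numbers are invented purely to illustrate a dose-response gradient.
counts = {
    "0/day": (10, 100),
    "1-14":  (40, 80),
    "15-24": (80, 60),
    "25+":   (90, 40),
}

REF_CASES, REF_CONTROLS = counts["0/day"]  # non-smokers as the reference band

def odds_ratio(cases, controls):
    """Odds ratio of a dose band relative to the non-smoking reference."""
    return (cases / controls) / (REF_CASES / REF_CONTROLS)

for band, (cases, controls) in counts.items():
    print(f"{band:>6}: OR = {odds_ratio(cases, controls):.1f}")
```

A monotonically increasing odds ratio across dose bands is exactly the "biological gradient" Hill had in mind: more exposure, more disease.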
So that’s how we figured out smoking causes cancer. Even without experiments, the evidence from multiple lines was overwhelming.
Anyway, epidemiology has its limits. Temporality is always a problem, because it’s hard to figure out which variable caused the other, and not all confounding variables can be adjusted for in regressions (some statistical models are ridiculous and suffer from overadjustment bias). But if there are multiple lines of evidence suggesting a relationship, then there could be one.
And that is why epidemiology is not useless. Also #mendelianrandomizationisbadass
Also, if you’re fascinated by causality, I’d suggest reading Judea Pearl’s work on causal calculus (for example, The Book of Why). It’s fascinating to see how one can arrive at causality from correlations. Also consider looking into things like Bayesian networks.
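To make the idea concrete, here’s a minimal sketch of Pearl-style back-door adjustment on a toy three-node Bayesian network (Gene → Smoking, Gene → Cancer, Smoking → Cancer), where "Gene" plays the role of Fisher’s hypothesized confounder. All probabilities are invented for illustration:

```python
P_GENE = 0.3                        # P(gene)
P_SMOKE = {True: 0.7, False: 0.2}   # P(smoke | gene): the gene pushes people to smoke
P_CANCER = {                        # P(cancer | smoke, gene)
    (True, True): 0.30, (True, False): 0.15,
    (False, True): 0.10, (False, False): 0.02,
}

def p_cancer_given(smoke):
    """Observational P(cancer | smoke): confounded, computed via Bayes' rule."""
    p_smoke = P_SMOKE[True] * P_GENE + P_SMOKE[False] * (1 - P_GENE)
    if smoke:
        p_gene = P_SMOKE[True] * P_GENE / p_smoke
    else:
        p_gene = (1 - P_SMOKE[True]) * P_GENE / (1 - p_smoke)
    return P_CANCER[(smoke, True)] * p_gene + P_CANCER[(smoke, False)] * (1 - p_gene)

def p_cancer_do(smoke):
    """Interventional P(cancer | do(smoke)): back-door adjustment over the gene."""
    return P_CANCER[(smoke, True)] * P_GENE + P_CANCER[(smoke, False)] * (1 - P_GENE)

observed_gap = p_cancer_given(True) - p_cancer_given(False)
causal_gap = p_cancer_do(True) - p_cancer_do(False)
print(f"observed gap: {observed_gap:.3f}")  # inflated by the confounder
print(f"causal gap:   {causal_gap:.3f}")    # smaller, but still positive
```

In this toy setup, the raw association overstates the causal effect (because the gene drives both smoking and cancer), yet the adjusted, interventional effect is still positive, which mirrors the real story: confounding was a legitimate worry, and converging evidence was needed to rule it out.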
That’s all for today folks.