How Useful is Nutritional Epidemiology?

Nutritional epidemiology tends to produce the studies that generate the most buzz, but also the ones that draw the harshest criticism. Some folks will even go out of their way to say that the entire field produces findings that are mostly useless.

Here’s what one of the leading medical statisticians in the world had to say about nutritional epidemiology:

“Nutritional Epidemiology is a scandal. It should just go to the waste bin.” – John Ioannidis

Now, it’s important to note that this isn’t an argument against epidemiology as a whole, but against nutritional epidemiology specifically. It would be hard to argue that epidemiology produces useless findings. Remember, epidemiological evidence, along with multiple other lines of evidence, helped us establish causal relationships between smoking and lung cancer, LDL and heart disease, and Zika and birth defects.

And those who aren’t delusional know very well that randomized controlled trials (RCTs) cannot answer all of our questions. So, yes, epidemiology is useful, but back to the topic at hand. Critics of nutritional epidemiology mostly claim that the field isn’t very useful because its reported effects are:

- small, often too small to matter much at the individual level;
- unstable, shifting in size and even direction depending on which covariates a model adjusts for;
- confounded by subtle socio-economic, behavioral, and lifestyle factors; and
- built on unreliable, self-reported dietary information.

And I would have to mostly agree with the critics. I wouldn’t go so far as to say that the entire field is completely useless, but we should probably be skeptical of many of its findings, and we should avoid being swayed by the media hype that accompanies these studies. Here are some arguments against nutritional epidemiology.

“Nutritional epidemiology suffers from less precise assessments of exposure – due largely to unreliable, self-reported dietary information – and far greater potential for confounding. Dietary choices are associated with subtle socio-economic, behavioral and lifestyle factors, all potentially explaining the observed associations. This is why epidemiologic associations have traditionally been considered suitable only for hypothesis generation and insufficient as rationale for establishing policy.” – Gary Taubes & Nina Teicholz

I agree with Gary Taubes and Nina Teicholz. What a surprise! 

Why? Well, here’s what some of the evidence shows (thanks mainly to Vinay Prasad for highlighting the studies). In one meta-study, titled “Assessment of vibration of effects due to model specification can demonstrate the instability of observational associations”, John Ioannidis and colleagues showed that by adjusting for different combinations of covariates (the variables researchers typically include in multivariate regression models), you could make an effect flip direction (a type S error) or change in magnitude (a type M error).

Ioannidis downloaded 13 variables from the NHANES dataset that were linked to all-cause mortality, each with a substantial sample behind it (at least 1,000 participants and 100 deaths). From those 13 variables, he was able to produce 8,192 (2^13) different statistical models, all yielding different hazard ratios (HRs), as seen in the image below. The variables included age, smoking, BMI, hypertension, diabetes, cholesterol, alcohol consumption, education, income, sex, family history of heart disease, heart disease, and any cancer.
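To make the combinatorics concrete, here is a minimal sketch of vibration of effects on simulated data. Everything in it is made up: the covariates are generic noise, the effect sizes are arbitrary, and it uses a plain logistic model (odds ratios) instead of the paper’s survival models, just to keep the example dependency-light. Only the mechanics mirror the study: fit one model per covariate subset and watch the exposure’s estimate wander.

```python
from itertools import combinations

import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the NHANES analysis: one exposure, 13 covariates.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 13))                    # 13 adjustment covariates
exposure = rng.normal(size=n) + 0.5 * X[:, 0]   # exposure confounded with X0
logit = -1 + 0.1 * exposure + 0.6 * X[:, 0] - 0.4 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # binary outcome

odds_ratios = []
for k in range(14):                             # subset sizes 0..13
    for subset in combinations(range(13), k):   # 2**13 = 8192 subsets total
        cols = np.column_stack([exposure] + [X[:, j] for j in subset])
        fit = sm.Logit(y, sm.add_constant(cols)).fit(disp=0)
        odds_ratios.append(np.exp(fit.params[1]))  # exposure's odds ratio

odds_ratios = np.array(odds_ratios)             # (8,192 fits; takes a while)
print(f"{odds_ratios.size} models fitted")
print(f"OR range: {odds_ratios.min():.2f} to {odds_ratios.max():.2f}, "
      f"median {np.median(odds_ratios):.2f}")
```

Same data, same outcome, thousands of defensible models, and a spread of estimates to pick from.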


For one particular relationship, between vitamin D and all-cause mortality, Ioannidis reported that with no covariate adjustment, vitamin D yielded an impressive HR of 0.64. Wow. A 36% decrease. However, when all 13 covariates were included in the model, the HR rose to 0.75. Maybe some researcher in California decided to include five variables in a model, while another researcher in New York decided to include 10.
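As a quick aside on the arithmetic: an HR below 1 maps to a percent reduction in hazard. A hypothetical one-line helper makes the conversion used above explicit:

```python
def pct_reduction(hr: float) -> float:
    """Convert a hazard ratio below 1 into a percent reduction in hazard."""
    return (1 - hr) * 100

# HR 0.64 -> 36% lower hazard; HR 0.75 -> 25% lower hazard.
print(f"{pct_reduction(0.64):.0f}%, {pct_reduction(0.75):.0f}%")
```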

The fact that 8,192 statistical models are possible with just 13 covariates, and far more with a larger pool of candidates, is a significant concern once you consider the problem of multiple comparisons. It’s possible that when certain authors report their data, they are only reporting the results that passed their significance filter (p < 0.05) while discarding the thousands of other models that didn’t. Maybe this sort of reporting is intentional. Maybe it is a result of ignorance. It’s hard to know. Either way, many of the reported findings may just be false positives.
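A toy simulation makes the worry concrete. Everything below is pure noise, yet a p < 0.05 filter will still wave through roughly 5% of the comparisons:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n = 1000, 200
false_positives = 0
for _ in range(n_tests):
    diet = rng.normal(size=n)       # simulated dietary variable
    mortality = rng.normal(size=n)  # simulated outcome, truly unrelated
    r, p = stats.pearsonr(diet, mortality)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null associations passed p < 0.05")
# Expect ~50 "significant" findings despite zero real relationships.
```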

In another meta-study, Ioannidis showed that several foods had large reported associations with cancer, as shown below.

However, when several of the studies were pooled in meta-analyses, these large effects shrank, as shown below.



This makes sense: as you pool more and more estimates together, you tend to get closer to the real effect, and you dilute the outsized estimates that happened to cross the significance threshold. So, here again, we see the problems of multiplicity and publication bias in the field.
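Here is a hedged sketch of that shrinkage with made-up numbers: a modest true effect, many noisy study estimates, and a significance filter that inflates whatever gets through it:

```python
import numpy as np

rng = np.random.default_rng(2)
true_effect, se, n_studies = 0.10, 0.25, 500
estimates = rng.normal(true_effect, se, size=n_studies)  # one per study

# Only "positive, significant" studies survive the filter.
significant = estimates[estimates / se > 1.96]
print(f"mean of published (significant) estimates: {significant.mean():.2f}")
print(f"mean of all {n_studies} estimates:         {estimates.mean():.2f}")
# Pooling everything recovers ~0.10; the filtered subset sits far above it.
```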

Luckily, the constant push to preregister all data-analysis protocols may help attenuate the problem of selective reporting. Furthermore, Ioannidis and several other statisticians suggest that rather than focusing on model selection, it would be ideal to report all statistical models and look at the median and mean effect sizes and p-values across all of them. This would yield far more useful information than the results from a few associations.

A good rule of thumb when interpreting a nutritional epidemiological study is to read the findings carefully, look at the preregistration protocol, and keep in mind that while there might be some signal there, there could also be a lot of noise. Ideally, it would be great to corroborate any signal with an RCT, but this isn’t always possible. Regardless, it’s important to remember that even though small effects may not seem relevant, especially on a personal level, they are worth chasing: if they happen to be real, and policy changes are built on them, the outcomes could be large at the population level. But as my colleague Kevin points out below, jumping on the results of every nutritional epidemiological study as if it were groundbreaking is simply delusional.
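To see why small effects can still matter at scale, a quick back-of-envelope calculation (the figures are purely illustrative, not from any real dataset):

```python
# Illustrative only: none of these numbers come from real data.
population_deaths_per_year = 600_000   # hypothetical baseline deaths
relative_reduction = 0.03              # a "small" 3% effect, if real
averted = population_deaths_per_year * relative_reduction
print(f"deaths potentially averted per year: {averted:,.0f}")  # 18,000
```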

Also, I did write about a few nutritional epidemiological studies:

That’s all for today!


Leave some bamboo