Health reporting is shit, and we are all to blame: a resveratrol story

Why can’t we have nice things?

Today, I stumbled on a great example of how inadequate science reporting is.

It’s so bad, I had to write about it.

At the outset, I’d like to say I’m not attacking anyone in particular; it’s not the journalist’s fault, nor really the researchers (we’ll get back to them later). It’s everyone’s fault that we choose to consume news in a form that basically misleads us in every way.

That being said, the headline “Glass of red wine a day could help women to get pregnant says new research” is so wildly misleading that it’s hard to know where to start.

Wine, Resveratrol and PCOS

At the outset it’s important to note that none of this is studying the effects of red wine itself. This research is all looking at purified, extracted resveratrol. Red wine is bad for you, much like all alcoholic drinks; enjoy in moderation.

And whilst we love mice (they are both cute and a useful animal model for testing things), there is very little evidence that resveratrol is useful in humans. We’ve got some nice big population studies that say red wine drinkers live longer (sometimes), but there are literally thousands of differences between those who drink red wine and us normal folk, so it’s hard to tell whether it’s the red wine or whether they are just hybrid super-humans genetically engineered to enslave us all.

Red wine drinkers; so much like us, and yet probably space-aliens

In comes our study.

Polycystic Ovarian Syndrome (PCOS) is a nasty condition which causes symptoms ranging from disruption of periods to infertility, and is associated with a list of other diseases. It’s a difficult one because there are hundreds of potential causes, and we are still unsure how to treat or even manage the symptoms long-term. We do know that many of the problems caused by PCOS are due to an increase in ‘male’ hormones in the women who suffer from the condition.

Our researchers hypothesized that resveratrol would reduce the amount of male hormones produced by the women who were taking it, presumably leading to some sort of health benefit (although this wasn’t tested). They gave 34 women either resveratrol or a placebo pill for 3 months, and found that there was a small but statistically significant drop in hormone levels in women receiving resveratrol compared to women getting the placebo.

They also tested 22 other variables, and found a few significant correlations, particularly with markers for diabetes and obesity.

In other words, resveratrol helped control the hormone imbalance caused by PCOS.

Pictured; Science!

Don’t celebrate too soon

  1. Small sample size; they included 34 women. That’s tiny. It’s barely enough to detect even an enormous difference between the groups using statistics. It’s usually considered unethical to test treatments on so few people, because you put participants through all the burdens of a trial to get results that are too imprecise to be useful.
  2. Multiple variables; if you flip a coin 22 times, you’ll get some runs of heads just by chance. The way statistics works, if you test 22 things at the usual significance threshold, you’re more likely than not to get at least 1 ‘positive’ result purely by chance*.
  3. Short follow-up; 3 months doesn’t provide sufficient information to draw long-term conclusions. It’s just too short to know whether this is a statistical glitch or something more meaningful.
  4. Differences at baseline; with such a small group of individuals, it’s not surprising that there were several significant differences between the groups when they started (for example, placebo women had lower cholesterol). This makes comparison dicey, as the groups did not start on a level playing field**.
  5. Dropouts; 4 women dropped out of the study before it finished. It’s usually best practice to include them in the stats (a so-called intention-to-treat analysis), because we know that people who drop out of studies are likely to be worse off than people who stay in, and leaving them out of the analysis can bias the results. These researchers didn’t include the dropouts, which is particularly strange when you consider that they were obviously different from the women who stayed in: one dropped out because she got pregnant (very hard to do for sufferers of PCOS).
  6. Lack of clinical endpoints; this was very initial research. Proving that some blood test results are different between groups doesn’t say much about the symptoms of PCOS, in particular fertility issues.
  7. Only publishing relative changes; the oft-cited “23% fall” in testosterone in women taking resveratrol was actually a fairly small absolute decrease (from 0.53 down to 0.44 ng/ml).
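The multiple-testing problem in point 2 is easy to quantify. A minimal sketch (the 22-test count comes from the study described above; the 0.05 threshold and the Bonferroni correction are standard textbook illustrations, not the authors’ actual method):

```python
# Chance of at least one false-positive "significant" result when
# running many independent tests at the conventional 0.05 threshold.

def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    """P(at least one false positive) = 1 - P(no false positives)."""
    return 1 - (1 - alpha) ** n_tests

# 22 tests, as in the study described above:
print(round(familywise_error_rate(22), 2))   # -> 0.68

# A Bonferroni correction (one of the "clever statistical tricks"
# mentioned in the footnote) shrinks the per-test threshold instead:
print(round(0.05 / 22, 4))                   # -> 0.0023
```

So with 22 comparisons and no correction, there’s roughly a two-in-three chance of at least one spurious “significant” finding even if resveratrol does nothing at all.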

All of this is important because it’s really not easy to understand, and even harder to catch in practice. I spent years in university learning how to analyze studies, and I get caught out all the time. And the press release (which is usually what journalists read to get their information on published research) wasn’t helpful; it only quoted relative figures and barely touched on the drawbacks of this study.
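Point 7 above is just arithmetic, and worth making concrete. A minimal sketch using the rounded figures quoted above (the press-quoted 23% was presumably computed from unrounded data, so these rounded numbers give a slightly smaller relative drop):

```python
# Absolute vs relative change, using the rounded figures quoted above.
before, after = 0.53, 0.44            # testosterone in ng/ml (rounded)

absolute_drop = before - after        # small in absolute terms
relative_drop = absolute_drop / before

print(f"absolute: {absolute_drop:.2f} ng/ml")   # -> absolute: 0.09 ng/ml
print(f"relative: {relative_drop:.0%}")         # -> relative: 17%
```

The same change can be made to sound dramatic or trivial depending on which framing you publish, which is why reporting only the relative figure is misleading.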

This makes it almost inevitable that journalists will latch on to the easy story in the press release, and ignore the weak basis it relies on.

It’s also important to note that this is pre-published research. This means that it hasn’t even been peer-reviewed, our basic bar for the validity of scientific results. Pre-publication announcements overstate the results of studies on a regular basis.

I could go on. Despite being a randomized, controlled trial (RCT), supposedly the peak of academic evidence, these researchers made a number of questionable choices in their design***.

Research on wine that doesn’t involve drinking it is always a mistake

Silly science reporting

Having said that, it’s still a bit surprising that the journalists didn’t dig a bit deeper. Resveratrol in particular has a long and murky history of wild overstatements, dodgy pronouncements of benefit, and even actual research fraud. It’s a molecule that people have been declaring a panacea for 20 years, without a shred of reliable evidence to back up the claim.

And the reality is that this study proved very little. Extrapolating further is a waste of time, particularly given the small sample size.

Even if this study proved a clinical benefit for women taking resveratrol (and the jury is still wildly out on that one), the factors mentioned above would mean that there is little chance of these results being useful.

And the saddest part? The science reporting for this study was actually not that bad (in comparison). Most articles quoted the authors, gave details of the study (although no links sadly), and discussed it reasonably well. Some even mentioned the sample size! But despite this, the conclusions and headlines were mostly wildly overstated.

It’s depressing that the best science reporting still barely scratches the surface of what the research actually says.

If only we all cared a little more about what is actually true in the media. Journalists can only write what we consume. If we don’t care about accuracy in science reporting, we have only ourselves to blame when we are fed incorrect information.


There’s not much more you can really gather from this study.

The only thing we can say for certain?

More (and better) research is needed

  • *You can correct for this (i.e. do some clever statistical tricks), but they didn’t.
  • **For example, if I test 2 groups of people on how good they are at tennis whilst taking a drug, I’d need to make sure they started off the same. If one group had an average age of 5 and the other group 25, it would be surprising if there wasn’t a difference in their tennis ability.
  • ***Not a criticism of researchers; it’s often hard to conduct a ‘perfect’ trial in the real world.

Epidemiologist. Writer. Podcaster.