Water. It’s the stuff of life. It’s an amazing, wonderful, spectacular molecule that gives us everything from oceans to lakes, swimming pools to ponds, bathtubs to the good old gin and tonic.
We’d be lost without water.
Now, we all know that you need water to live. But recent news reports have come out saying that it’s not just important to have enough water to get by — no, apparently drinking lots of water can actually make your brain work better. According to media from around the world, drinking water can improve children’s ability to multitask, boosting their brainpower to superhuman levels.
The problem is, that’s not quite true. In fact, the science appears to say that you get no cognitive improvement from drinking lots of water at all.
Get ready for a story of suspicious science, media mayhem, industry inconsistency, and some very weird numbers.
Settle in, because this is a pretty long one.
The study that has everyone agog about the power of water was what’s known as a crossover trial, looking at hydration and brain power. Basically, the scientists got children aged 9–11 to either drink however much water they wanted (ad libitum), a little water (low), or a lot of water (high), for four days, and then tested how hydrated they were and how they performed on some tests of cognition.
Somewhat unsurprisingly, at the end of four days children who drank a lot of water were better hydrated than children who drank very little water, and a little bit more hydrated than those who were drinking as much as they wanted.
The thing that has the media in a tizzy, however, is that the scientists also apparently found that children who drank more water actually did better on cognitive tests.
Except, oddly enough, they didn’t find that at all.
So, first things first: the main results of this study. There were no differences in test scores (specifically accuracy and reaction time) on any of the three cognitive tests used between the three groups. Children did no better or worse no matter how much water they were given to drink.
Or, the exact opposite of what the news stories said.
Confused? I was, too. How did a basically null study get reported as massively positive? It turns out that the science was a lot murkier than you might’ve expected.
In short, the authors did quite a few analyses in their study. The headlines come from an analysis that basically looked at children within-groups. What this means is that they split up kids within their group of water drinking — low, ad libitum, high — by how dehydrated they appeared on urine tests, and then compared their test scores. Kids whose urine was darker in color did a bit worse on test scores when they were drinking the same amount of water, and so the news stories were born.
Now, that sounds a bit odd. The authors ran a lot of analyses; most of them found nothing, but the one that did manage to find a modest benefit was promoted far and wide.
At this point, it’s always worth looking at the numbers themselves.
The first port of call is the study’s pre-registration. A pre-registration is something that is very important for clinical trials and science in general: the researchers write down their study protocol — the hows and whys of their research — and put it out in public, so that anyone can come back later and make sure there is nothing strange in the published paper. This is designed to reduce the number of studies where researchers run dozens of analyses and only publish the ones that come out positive.
It’s like flipping a coin — as Derren Brown famously showed, if you only publish the positive results, you can quite easily flip 10 heads in a row as long as you’ve got time to spend most of the day filming yourself flipping coins. Similarly, if you can change which statistical tests you run without telling anyone, you can flip as many metaphorical coins as you like and only show the ones that are positive.
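You can see the coin-flipping problem in action with a quick simulation. The sketch below (purely illustrative — none of these numbers come from the study) runs 200 "subgroup analyses" on pure noise, where both groups are drawn from exactly the same distribution, and counts how many come out "statistically significant" anyway:

```python
import random
import statistics

random.seed(42)

def null_analysis(n=20):
    """Compare two groups drawn from the SAME distribution.

    Any 'effect' found here is noise by construction. Returns a rough
    z-style test statistic for the difference in means.
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(diff / se)

# Run 200 analyses where the true effect is zero, and count how many
# clear the usual significance bar (|z| > 1.96, roughly p < 0.05).
n_analyses = 200
hits = sum(null_analysis() > 1.96 for _ in range(n_analyses))
print(f"{hits} of {n_analyses} pure-noise analyses came out 'significant'")
```

You should see somewhere around 1 in 20 of these null analyses come up "significant" — which is exactly why publishing only the ones that did is so misleading.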
Usually, pre-registrations are a bit different to the published research. Things change, variables are re-named, you recruit a few more people than you were originally planning — all fine, as long as you acknowledge it in your paper. This time, however, the pre-registration is completely different to the published research, aside from the names of the investigators and a few other important points that we’ll get to.
There were fewer participants in the pre-registration, the study design was different, the outcome measures changed, the analysis changed, the ages of the kids shifted, and perhaps most importantly the exclusion criteria had some very key differences between the pre-registration — remember, what the scientists planned to do — and the final paper.
And if you look at the exclusions — the children whose results weren’t used in the final paper — there are some very strange decisions. The pre-registration doesn’t make any mention of excluding children for non-compliance (i.e. not drinking enough water when given it or drinking too much when given a little), but that’s what the scientists did. If nothing else, this makes the study a per-protocol analysis, which is known to be a big issue for research. It doesn’t say anywhere that kids who performed extremely badly on tests would be dropped from the analysis, but again that’s what it says in the paper. All in all, the scientists excluded up to 34% of the kids who participated in their study and had results, for a variety of stated reasons.
To give you an idea of why this is a problem, let’s look at outliers. The scientists reported that they dropped results from their analysis that were considered outliers — results too high or low to be believably true. In one case, this meant that more than 10% of children were excluded from an analysis, which could be a big problem — if 1 in 10 kids are ‘outliers’, then the term doesn’t really apply!
Sometimes, you can defend excluding outliers, but the decisions made in terms of exclusion in the study seem extremely arbitrary. For example, children who had 40% or fewer ‘correct’ answers were excluded from the final analyses — their results were chucked out — but there’s no reasoning at all for this. Given that at times this meant that more than 15% of the total number of participants were just ignored completely, it’s a huge potential source of bias that almost certainly impacted the results.
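To see why dropping low scorers is such a source of bias, here’s a small illustrative simulation. The 40% cutoff is the one described above; everything else (the score distribution, group sizes) is made up for the example. Both groups are drawn from the same distribution, so there is no real effect, yet excluding the "poor performers" from one arm inflates its average:

```python
import random
import statistics

random.seed(1)

# Two groups of test scores from the SAME distribution: no true effect.
# Mean 60, SD 15 are arbitrary illustrative numbers.
control = [random.gauss(60, 15) for _ in range(100)]
treated = [random.gauss(60, 15) for _ in range(100)]

# Apply an exclusion rule to one arm only: anyone scoring 40 or below
# is dropped as a 'poor performer' (the 40% cutoff from the article).
treated_kept = [s for s in treated if s > 40]

print(f"control mean: {statistics.mean(control):.1f}")
print(f"treated mean before exclusions: {statistics.mean(treated):.1f}")
print(f"treated mean after exclusions:  {statistics.mean(treated_kept):.1f}")
print(f"excluded: {len(treated) - len(treated_kept)} of {len(treated)}")
```

Because the excluded children are, by definition, the lowest scorers, the surviving group’s average goes up — a "benefit" conjured entirely out of who you choose to throw away.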
On top of all of this, it doesn’t appear that the published study was randomized, although the pre-registration says it was going to be. This is a problem because all of the children were doing repeated cognitive tests, and people get better at these tests over time. So, it might be that the results in the study don’t have anything to do with water and are more about children improving their test scores over time as they get used to the tests.
The study also wasn’t blinded. While it’s fair to say you can’t really blind kids to how much water they’re drinking, without even a randomized design it seems extremely likely that the expectations of kids and parents would’ve influenced the results in some way.
The problem with all of these issues, aside from the fact that they make the study itself seem very dodgy, is that they are all the sort of thing you’d expect to push the results in a positive direction. Arbitrarily excluding participants, per-protocol designs, poorly-explained dropout rates — these are choices that almost always make a study more likely to find a positive result, even if the truth is that there’s no difference (what’s known as a false positive).
All in all, the study made some very odd decisions, and most of these decisions were the kind that would usually lead to a more positive result. It’s hard to know why this was without being in the room when it happened, but there’s one thing about the research that makes the inconsistencies, perhaps, a bit less surprising.
You see, this study was industry-funded. Specifically, it was funded by Danone, who employ two of the study authors, and also make the popular bottled water brands Evian, Volvic, and more than a dozen others.
Now, as I’ve said many times, industry funding isn’t necessarily a problem for individual studies. Often, trials funded by the industry are actually of slightly better quality than those that aren’t.
However, it’s very handy for the sponsors of this trial that it was reported as positive. I wouldn’t be surprised if the headlines about water boosting your brain power get a lot more people buying a bottle of water, even though, as I’m sure you remember, that’s not what the study found anyway.
Ultimately, there’s no way to know why the study design had some odd inconsistencies. Maybe it was a happy accident that these seem to have resulted in something positive to report despite the study being largely negative. We may never know.
That being said, if there’s one thing we can take away from this study, it’s that there doesn’t appear to be any good evidence that drinking more water does anything at all for your cognitive ability*. The findings that were reported everywhere in the news were the least reliable numbers that the authors produced, in a study that appears to be plagued by inconsistencies and very strange decisions.
Drink water because it’s healthy. Drink it because hydration keeps us all alive.
But don’t buy a water bottle because you’ve been told that drinking liters a day is the key to success.
As long as you’re not thirsty, you’re probably fine.
You can now listen to Gid on the Sensationalist Science podcast for your weekly dose of scientific shenanigans and media muddling.
*Note: It’s actually pretty funny that this is the main finding, because you’d expect that intentionally dehydrating children would make them do a bit worse on tests, but that’s not what this study found.