In the age of COVID-19, scientific research has gone from the esoteric mutterings of geeks like me to the everyday parlance of pretty much everyone the world over. Instead of discussing the weather, or politics, suddenly you’ll find yourself dipping into seroprevalence results, or the potential benefits of anti-malarials for the treatment of viral disease.
Truly, 2020 is a weird time for us all.
But something keeps coming up in almost every discussion about COVID-19. It’s a pretty simple mistake to make, and very easy to understand — one of those things that we all intuitively think, because it just makes sense. I’m talking about adding up studies numerically, and counting that as the best evidence. Because more studies must mean better evidence, right?
Wrong. Totally, utterly wrong.
In fact, the opposite is true — often, when people simply tally up studies and present the total, they are making a bad faith case that something is true, despite a total lack of evidence that it is. You see, the number of studies is meaningless if the quality is low.
Let me explain.
A common idea when you’re looking at evidence is that higher numbers are always better. It’s true to say that low numbers can be a problem, of course, especially when you’re talking about individual studies, but when we’re talking about meta-research it’s a totally different story. The thing is, when we add up studies without considering how well done they are, we aren’t really doing the maths right.
A bad study is, all else being equal, worth pretty much nothing in terms of evidence.
I’ve talked about this in general terms before. Despite dozens of terrible studies into the topic, we still have very little idea if vitamin D works for coronavirus, for example. There have been more studies on hydroxychloroquine and COVID-19 than on practically every other drug and treatment put together, and it wasn’t until we got to see the results of some very large trials that we could really say that it probably doesn’t work.
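To see why piling up weak studies doesn’t help, here’s a toy simulation (every number in it is invented purely for illustration, not taken from any real trial). Suppose a treatment truly does nothing, but 100 small, poorly controlled studies all share the same systematic bias. Counting up the studies makes the treatment look great; one large, unbiased trial lands near the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.0   # the treatment genuinely does nothing
BIAS = 0.5          # hypothetical systematic bias shared by the weak studies

# 100 small, poorly controlled studies (n=20 each): every one inherits the bias,
# so each reported effect is centred on BIAS, not on the true effect.
small_estimates = [
    rng.normal(TRUE_EFFECT + BIAS, 1.0, size=20).mean() for _ in range(100)
]

# One large, well-run trial (n=5000) with no systematic bias.
large_estimate = rng.normal(TRUE_EFFECT, 1.0, size=5000).mean()

positive_small = sum(est > 0 for est in small_estimates)
print(f"'Positive' small studies: {positive_small}/100")
print(f"Large trial estimate:     {large_estimate:+.3f}")
```

Vote-counting says the evidence is roughly 100-to-1 in favour of the treatment, yet the 100 studies are just measuring the same bias 100 times over. Adding more of them tightens the estimate around the wrong answer; only the better-designed trial gets close to the truth.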
See, when you’re thinking about research, it’s good to think about questions, and answers. Good research is designed to provide a clear, definitive answer to a question. Often that question is quite complex, like “Which of these two treatments works better in people who are ventilated in ICU?”, but they can be pretty simple too. Sometimes the answer can be that we still don’t know, but at least you know something more about the topic than you did before.
Bad research is, very simply, not aimed at answering questions. You can take 100 studies that were not designed to provide you with a definitive answer to the question “does hydroxychloroquine work for COVID-19?” and end up having no idea if the answer is yes or no. Conversely, a single well-done study might tell you straight away what the answer is, which is pretty much what happened in real life.
And the thing is, this is not a critique of a specific type of research. You can have awful randomized controlled trials that completely fail to give you any useful information. You can conduct amazingly well-done observational research that gives you important information that you couldn’t get otherwise.
Unfortunately, it can be really quite hard to appraise study quality in a meaningful way.
So what can you, my reader, do with this information?
Well, firstly, I’d be very careful if someone is just throwing out study numbers as their evidence-base. It’s almost always a bad sign if someone says that their argument is supported by 100 studies, because it’s very rare that a question actually needs 100 studies to support it. What it usually means is that they have found 99 studies that don’t really support their point at all, and 1 useless piece of research that proves nothing.
There are many things that you want lots of, but studies just aren’t one of them.
More importantly, if you’re looking at a question to do with COVID-19, as we all must in these increasingly incomprehensible days, be very careful about the research that you’re relying on. Check your sources, and check them again. If at all possible, read the study that the story is based on, and don’t just skim the abstract. Go on Twitter to see what some experts think of the research, and try to get more than one opinion. It’s important to know quite a lot about the evidence that you’re relying on to make decisions, and even more so when every decision could potentially be life-or-death.
Ultimately, the simple fact is that this stuff isn’t always easy. Appraising study quality and determining whether it answers a question is a skill that can take years to develop. Knowing what the evidence as a whole says is something that can often take weeks of careful sifting through studies to see what they really say, rather than relying on second-hand analyses or abstracts to guide your views.
But most importantly, it is simply unscientific to argue that because a point is supported by a large number of studies it is true.
Science just doesn’t work that way, no matter what we might intuitively feel.