Who was better at predicting the course of the pandemic – experts or the public?


Early on in the pandemic, it seemed as if the media was asking anyone with potentially relevant expertise – scientists, doctors, statisticians – to tell us what was coming. These individuals were frequently asked to give off-the-cuff answers to questions about how bad the pandemic might get, even though there was little data to go on.

Expert predictions like these are important. They have the potential to shape public opinion and policy and to influence how events unfold. Yet they have often been disregarded, particularly on social media, where alternative predictions by non-experts (including misinformation about experts’ forecasts) spread easily.

Despite the potential influence of both expert and non-expert predictions on people’s responses to the pandemic, there’s been limited research on the accuracy of either – or indeed on the difference in accuracy between them. To address this gap, in April 2020 my colleagues and I conducted an experiment to find out whether experts really did have a better idea of what was on the way than the rest of us.

Having a sense of this could inform what sort of public role we want experts to play in a future pandemic. Likewise, it could suggest how much weight we should place on expert and non-expert predictions of how future disease outbreaks will unfold.

Higher accuracy, lower confidence

We asked 140 experts (epidemiologists, statisticians, mathematical modellers, virologists and clinicians) and 2,086 laypeople to give their best guesses on several questions about how the pandemic would progress.

We asked them how many people in the UK would have been infected with COVID-19 by the end of 2020, how many deaths there would have been in the UK over the same period, and how many people out of every 1,000 infected with the virus would have died, both in the UK and worldwide. Here’s how the two groups fared.

The experts’ best guesses were more accurate than laypeople’s on every question, but even the experts underestimated the total number of infections and deaths by a substantial margin. For example, the median estimate for the number of UK COVID-19 infections by the end of 2020 was 250,000 for non-experts and 4 million for experts. Calculations based on infection-fatality ratio research suggest the true count was closer to 6.4 million.
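
The 6.4 million figure quoted above rests on infection-fatality ratio (IFR) research. A calculation along these lines divides recorded deaths by an assumed IFR to infer how many infections must have occurred. The sketch below illustrates that arithmetic; the death count and IFR are placeholder values chosen for round numbers, not the figures used in the underlying research.

```python
# A minimal sketch of back-calculating infections from deaths and an assumed
# infection-fatality ratio (IFR): infections ≈ deaths / IFR.
# The inputs below are illustrative placeholders, not the study's actual figures.

def estimate_infections(deaths: float, ifr: float) -> float:
    """Estimate cumulative infections implied by cumulative deaths and an IFR."""
    return deaths / ifr

uk_deaths_end_2020 = 64_000   # hypothetical cumulative deaths
assumed_ifr = 0.01            # hypothetical IFR of 1% (10 deaths per 1,000 infections)

print(f"Implied infections: {estimate_infections(uk_deaths_end_2020, assumed_ifr):,.0f}")
# Implied infections: 6,400,000
```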

For each question, we also asked everyone to pick two numbers that they were 75% confident the true outcome would fall between. For example, someone might be 75% confident that between 100,000 and 1,000,000 UK residents would be infected by the end of the year. Someone who selects a narrower range – say, being 75% sure that between 200,000 and 250,000 people will be infected – is more confident about their prediction. Someone who selects a wider range is indicating that they are more uncertain.

If you are 75% sure that the true outcome will fall within the range you selected, you might reasonably hope to be correct 75% of the time. Unfortunately, our participants weren’t. Actual outcomes fell within laypeople’s ranges only between 8% and 20% of the time, depending on the question. For experts, actual outcomes fell within their ranges between 36% and 57% of the time.
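
This kind of check is usually called interval coverage, or calibration: count how often the true outcome landed inside each person’s stated range. The sketch below shows that scoring idea on made-up intervals and outcomes; it is an illustration of the concept, not the study’s data or code.

```python
# Sketch of scoring 75% intervals by their "coverage": the share of questions whose
# true outcome fell inside the stated (low, high) range. All numbers are made up.

def coverage(intervals, outcomes):
    """Fraction of outcomes falling within the corresponding (low, high) interval."""
    hits = sum(low <= outcome <= high for (low, high), outcome in zip(intervals, outcomes))
    return hits / len(outcomes)

# One hypothetical respondent's 75% ranges for four questions, and the true outcomes.
ranges   = [(100_000, 1_000_000), (20_000, 60_000), (2, 10), (3, 12)]
outcomes = [6_400_000, 73_000, 9, 6]

print(f"Coverage: {coverage(ranges, outcomes):.0%}")
# Coverage: 50% (well below the 75% a perfectly calibrated forecaster would achieve)
```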

In other words, experts were more accurate and less overconfident than laypeople, but still less accurate and more overconfident than we might hope.

Some notes of caution: our experts were individuals who held one of the occupations described at the beginning of this article and who responded to an announcement on social media. They aren’t necessarily representative of experts who spent the most time talking to the media or advising governments.

And our laypeople certainly weren’t practised in forecasting, unlike the experienced predictors on websites such as the Good Judgment Project and Metaculus, who may well have outperformed experts. Our lay sample matched the UK population in terms of age and gender, but may have differed in other ways. However, even when we restricted the comparison to those laypeople who scored well on a maths test, experts were still much more accurate and less overconfident.

Perhaps it’s not surprising that most people’s best guesses about the number of deaths and infections were off: predictions about emerging diseases are hard, and none of us has a crystal ball. We found that even experts weren’t particularly good at predicting the pandemic’s ultimate course and impact. But our level of confidence about our predictions is within our control – and the evidence suggests that most of us could stand to be a bit more humble.

For experts, this suggests that extra caution is warranted around making confident public predictions, so as to avoid prediction “reversals” that may undermine public trust in science. And for the public, when faced with predictions of how future disease outbreaks will unfold, we should not be surprised if the true situation turns out to be better or worse than predicted – particularly if those predictions come from non-experts.

Unfortunately, the continued threat of pandemics means that this research may continue to be relevant in the future. For example, the risk of a serious natural pandemic has been estimated at between 1% and 5% every year, and the risk of engineered pandemics may grow as synthetic biology improves, so long-term investments in general-purpose disease surveillance and response technologies seem likely to come in handy eventually. In the meantime, we must all learn to live with the fact that we don’t know how the future is going to unfold, and that no one can tell us for sure.
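
To see why even a small annual risk adds up, here is a rough illustration of how a 1% to 5% yearly probability compounds over a few decades, assuming (simplistically) that years are independent. The 30-year horizon is an arbitrary choice for illustration.

```python
# Rough illustration of how a small annual pandemic risk compounds over time,
# assuming independent years: P(at least one event in n years) = 1 - (1 - p)^n.
# The 30-year horizon is an arbitrary illustrative choice.

def prob_at_least_one(annual_risk: float, years: int) -> float:
    """Probability of at least one event over `years`, given a constant annual risk."""
    return 1 - (1 - annual_risk) ** years

for annual_risk in (0.01, 0.05):
    print(f"Annual risk {annual_risk:.0%}: chance over 30 years ≈ "
          f"{prob_at_least_one(annual_risk, 30):.0%}")
# Annual risk 1%: chance over 30 years ≈ 26%
# Annual risk 5%: chance over 30 years ≈ 79%
```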
