How good are we at predicting the pandemic?

Photograph: Victoria Jones/PA

Epidemiological models have been a source of continual controversy from the start of the pandemic, often blamed for fearmongering and inaccuracy. How well have they done?

Perhaps the most famous piece of modelling came from Neil Ferguson’s team at Imperial College London in March 2020, credited with prompting the full national lockdown. Unfortunately, it has been repeatedly claimed that the team predicted 510,000 deaths in Great Britain over two years, but that figure was a projection under the implausible scenario in which nothing at all was done about the virus. Their model was, if anything, rather optimistic: even in scenarios short of a full lockdown, it projected a maximum of fewer than 50,000 deaths in Great Britain, and the actual total has been far higher.
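
As a rough illustration of the difference a scenario makes, here is a toy SIR model in Python. Its parameters (population, infection fatality ratio, infectious period) are invented for this sketch, and it is not the Imperial College model; it merely shows how a “do nothing” projection dwarfs a suppressed one.

    # Illustrative only: made-up parameters, not the Imperial College model.
    def sir_deaths(r0, population=65_000_000, ifr=0.009, days=730, infectious_days=7):
        """Crude discrete-time SIR; returns approximate cumulative deaths."""
        gamma = 1 / infectious_days          # daily rate of leaving the infectious state
        beta = r0 * gamma                    # daily transmission rate
        s, i = population - 100.0, 100.0     # susceptible, infectious
        for _ in range(days):
            new_infections = beta * s * i / population
            recoveries = gamma * i
            s -= new_infections
            i += new_infections - recoveries
        # Deaths approximated as a fixed fraction of everyone ever infected.
        return ifr * (population - s)

    print(f"Unmitigated (R0 = 3.0): about {sir_deaths(3.0):,.0f} deaths")
    print(f"Suppressed  (R  = 0.9): about {sir_deaths(0.9):,.0f} deaths")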

In July 2020, a “reasonable worst-case scenario” predicted 85,000 UK Covid deaths up to 31 March 2021. This seemed pessimistic at the time but, partly owing to the unforeseen “Kent” variant, the truth turned out to be rather worse than the “worst case”, with about 95,000 deaths.

In contrast, when a second lockdown was being contemplated in October 2020, projections suggesting a possible peak of 4,000 deaths a day were leaked. Those outputs were never intended for release and had already been revised downwards.

Can human predictions do better? In early April 2020, DS and his colleagues asked 140 UK experts and more than 2,000 non-experts for quantitative predictions. The experts gave a median estimate of 30,000 Covid deaths by the end of the year, whereas the non-experts said 20,000. The truth was around 75,000, a value that fell within only a third of the experts’ prediction intervals and just 10% of the non-experts’. People were both far too optimistic and far too confident, a common finding.
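
The calibration check behind those percentages is straightforward: count how many respondents’ intervals contained the eventual truth. A minimal sketch, with invented intervals (only the 75,000 figure comes from the survey):

    actual = 75_000

    # Each respondent gave a (low, high) prediction interval; these are invented.
    expert_intervals = [(20_000, 60_000), (25_000, 90_000), (10_000, 40_000)]
    lay_intervals = [(5_000, 25_000), (10_000, 30_000), (12_000, 50_000)]

    def coverage(intervals, truth):
        """Fraction of (low, high) intervals containing the true value."""
        return sum(low <= truth <= high for low, high in intervals) / len(intervals)

    print(f"Experts' coverage:     {coverage(expert_intervals, actual):.0%}")
    print(f"Non-experts' coverage: {coverage(lay_intervals, actual):.0%}")
    # Well-calibrated 90% intervals should contain the truth about 90% of
    # the time; the survey found roughly 33% and 10%.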

In the words of the statistician George Box, “all models are wrong, but some are useful”. Epidemic models are always full of uncertainty, which flows from their simplified structure, from their assumptions and data inputs, and from the unpredictability of real life. Provided we view them as tools for understanding, they can still be useful.
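
One way modellers can convey that uncertainty is to rerun a simple model under many plausible assumptions and report a range rather than a single number. A minimal sketch, using the standard SIR final-size equation with assumed, purely illustrative ranges for the reproduction number and infection fatality ratio:

    import math
    import random

    def attack_rate(r0, iterations=200):
        """Solve the SIR final-size equation z = 1 - exp(-r0 * z) by fixed-point iteration."""
        z = 0.5
        for _ in range(iterations):
            z = 1 - math.exp(-r0 * z)
        return z

    random.seed(1)
    population = 65_000_000
    samples = sorted(
        attack_rate(random.uniform(1.1, 1.6))        # assumed range for R
        * random.uniform(0.005, 0.012) * population  # assumed IFR range
        for _ in range(10_000)
    )
    lo, mid, hi = samples[500], samples[5_000], samples[9_500]
    print(f"Projected deaths: {mid:,.0f} (90% range {lo:,.0f} to {hi:,.0f})")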

• David Spiegelhalter is chair of the Winton Centre for Risk and Evidence Communication at Cambridge. Anthony Masters is statistical ambassador for the Royal Statistical Society