
COVID misinformation is a health risk – tech companies need to remove harmful content, not tweak their algorithms


Many worldwide have now caught COVID. But during the pandemic many more are likely to have encountered something else that’s been spreading virally: misinformation. False information has plagued the COVID response, erroneously convincing people that the virus isn’t harmful, of the merits of various ineffective treatments, or of false dangers associated with vaccines.

Often, this misinformation spreads through social media. At its worst, it can kill people. The UK’s Royal Society, noting the scale of the problem, has made online information the subject of its latest report. This puts forward arguments for how to limit misinformation’s harms.

The report is an ambitious statement, covering everything from deepfake videos to conspiracy theories about water fluoridation. But its key coverage is of the COVID pandemic and – rightly – the question of how to tackle misinformation about COVID and vaccines.

Here, it makes some important recommendations. These include the need to better support factcheckers, to devote greater attention to the sharing of misinformation on private messaging platforms such as WhatsApp, and to encourage new approaches to online media literacy.

But the main recommendation – that social media companies shouldn’t be required to remove content that is legal but harmful, but be asked to tweak their algorithms to prevent the viral spread of misinformation – is too limited. It is also ill suited to public health communication about COVID. There’s good evidence that exposure to vaccine misinformation undermines the pandemic response, making people less likely to get jabbed and more likely to discourage others from being vaccinated, costing lives.

The basic – some would say insurmountable – problem with this recommendation is that it will make public health communication dependent on the goodwill and cooperation of profit-seeking companies. These businesses are poorly motivated to open up their data and processes, despite being crucial infrastructures of communication. Google search, YouTube and Meta (now the umbrella for Facebook, Facebook Messenger, Instagram and WhatsApp) have substantial market dominance in the UK. This is real power, despite these companies’ claims that they are merely “platforms”.


These companies’ business models depend heavily on direct control over the design and deployment of their own algorithms (the processes their platforms use to determine what content each user sees). This is because these algorithms are essential for harvesting mass behavioural data from users and selling access to that data to advertisers.

This fact creates problems for any regulator wanting to devise an effective regime for holding these companies to account. Who or what will be responsible for assessing how, or even if, their algorithms are prioritising and deprioritising content in such a way as to mitigate the spread of misinformation? Will this be left to the social media companies themselves? If not, how will this work? The companies’ algorithms are closely guarded commercial secrets. It is unlikely they will want to open them up to scrutiny by regulators.

Recent initiatives, such as Facebook’s hiring of factcheckers to identify and moderate misinformation on its platform, have not involved opening up algorithms. That has been off limits. As leading independent factchecker Full Fact has said: “Most internet companies are trying to use [artificial intelligence] to scale fact checking and none is doing so in a transparent way with independent assessment. This is a growing concern.”

Plus, tweaking algorithms will have no direct impact on misinformation circulating on private social media apps such as WhatsApp. The end-to-end encryption on these wildly popular services means shared news and information is beyond the reach of all automated methods of sorting content.

A better way forward

Requiring social media companies to instead remove harmful scientific misinformation would be a better solution than algorithmic tweaking. The key advantages are clarity and accountability.

Regulators, civil society groups and factcheckers can identify and measure the prevalence of misinformation, as they have done so far during the pandemic, despite constraints on access. They can then ask social media companies to remove harmful misinformation at the source, before it spreads across the platform and drifts out of public view on WhatsApp. They can show the world what the harmful content is and make a case for why it ought to be removed.


There are also ethical implications of knowingly allowing harmful health misinformation to circulate on social media, which again tips the balance in favour of removing bad content.

The Royal Society’s report argues that modifying algorithms is the best approach because it will restrict the circulation of harmful misinformation to small groups of people and avoid a backlash among people who already distrust science. Yet this seems to suggest that health misinformation is acceptable as long as it doesn’t spread beyond small groups. But how small do these groups need to be for the policy to be deemed a success?

Many people exposed to vaccine misinformation are not politically committed anti-vaxxers but instead go online to seek information, support and reassurance that vaccines are safe and effective. Removing harmful content is more likely to be successful in reducing the risk that such people will encounter misinformation that could seriously damage their health. This aim, above all, is what we should be prioritising.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Andrew Chadwick currently receives funding from the Leverhulme Trust (RPG-2020-019) and is a member of the Oxford Coronavirus Explanations, Attitudes and Narratives (OCEANS) project, which received funding from the University of Oxford COVID-19 Research Response Fund (0009519), the National Institute for Health Research (II-C7-0117-20001, BRC-1215-20005, and NIHR-RP-2014-05-003) and the Arts and Humanities Research Council (AH/V006819/1). The University of Oxford entered into a partnership with AstraZeneca for the development of a coronavirus vaccine. Andrew is an adviser (unpaid) to the Department for Digital, Culture, Media and Sport and is an advisory board member (unpaid) of Clean Up The Internet. The views in this article are his alone and not those of funders or affiliates.