
Social media’s selective historical memory is a human rights issue


Seventy-five years ago, prosecutors at the Nuremberg Trials that followed World War II made history in an unexpected way. They asked the court to dim the lights and enter into the record a new form of evidence to document human rights violations: a film.

For more than an hour, horrific footage of Nazi concentration camps, shot as the Allies liberated them, played in the courtroom. As the light bounced from the screen and landed on the defendants, it found them bewildered. These men, once the most feared members of the Nazi regime, were reduced to stammering, tremors, and tears. The film left everyone in the court stunned.

The prosecutors’ choice to move away from witness testimony and prioritise raw documentation to prove atrocities was not obvious. But it was filled with wisdom and intended to stand the test of time.

Today, when we watch the eight-minute, 46-second video of George Floyd’s last horrific moments on earth, or footage of an angry mob storming the US Capitol, we reflexively demand accountability. To get there, we are all walking on a path cleared by the prosecutors at Nuremberg.

To connect these key moments in history, I emailed Ben Ferencz, the last surviving prosecutor from Nuremberg, to get his thoughts on how digital videos compare to his use of film in that hallowed courtroom. “They are equally galvanising,” he said, “and should rightly be used to seek justice for victims, regardless of who the perpetrators of crimes against them might be.” For him, the legacy of Nuremberg was crystal clear: “No one is above the law, and the eyes of the world are watching.”

Ferencz’s sense of determination, at the young age of 100, is stirring. Yet I found that the certainty of his email slowly dissipated as I opened a new tab and stepped back into the morass of the internet.

Even as the internet and smartphones have expanded our collective power to capture, store, and share information, accountability in our digital age is often out of reach because our faith in the reliability of digital media is broken.

Trust in online media continues to plummet. Irrefutable images of chemical weapons attacks in Idlib, Syria; mass arrests on the streets of Minsk, Belarus; and police brutality against Black Lives Matter protesters across the United States are overwhelmed by hashtags and posts that cast doubt on their veracity. Today’s contrarians allege that these images are hoaxes, the product of fake news or vast conspiracies.

The CEOs of the major platforms, Facebook, Twitter, and Google, have been called before Congress twice in the last three months to explain what they are doing to fight misinformation and disinformation through content moderation. In attempting to moderate dangerous content, much of which is anti-Semitic, racist, or filled with hate speech, today’s platforms are doing critical work and, at first glance, seem to be the standard-bearers of Nuremberg’s most enduring lessons.

Yet that’s just too tidy a story.

In reality, content moderation is messy and filled with intractable challenges. These complexities raise a deeper question: should we even want content to be moderated by a single centralised authority, especially when one considers its methods? To deal with the scale of the problem, platforms have deployed automated moderation tools that are often too blunt for the task at hand. They make errors. And with the coronavirus pandemic clearing out offices, major platforms have warned users that automated takedowns would only increase, and with them the errors.

Prominent human rights organisations have sounded the alarm that algorithmic takedowns have resulted in the wholesale destruction of key evidence of human rights abuses and international crimes. Human Rights Watch noted that in 2020 nearly 11 per cent of the social media content it cited as key evidence in its work was taken down by algorithms. Groups from the Syrian Archive to Amnesty International and Witness have all reported unprecedented levels of content deletion. Black Lives Matter activists report that their accounts continue to be taken down or muted with no notice or explanation.

It is one thing to limit the spread of violent and dangerous content to millions of people. Yet it is something quite different to prohibit civil society members from responsibly preserving and analysing data to protect human rights and advance accountability.

Protection and preservation don’t have to be competing goals. But out of fear of competition, big tech platforms have made them incompatible. Platforms would rather have civil society wait in the lobby to ask permission to access public-domain content than risk more open policies that might let competitors leverage the platforms’ user data and compete with them.

The problem for human rights activists begins the moment users click the banal “I agree” button to accept a platform’s terms and conditions. Buried within some of these endless contracts are clauses that compel users to waive US federal fair-use doctrine (and common sense) and make it illegal to download content and preserve it in private archives for safekeeping.

The Supreme Court heard arguments this past November challenging the validity of these types of policies and will likely hear another key case later this year to make a final determination of whether scraping and archiving tools violate the Computer Fraud and Abuse Act.

Until then, by the strictest reading of the law, NGOs and human rights organisations that try to follow Nuremberg’s example and preserve evidence of human rights violations are at best vigilantes and at worst felons.

True accountability goes way beyond content moderation.

It requires a level of nuance and expertise to protect the safety and dignity of users that we cannot afford to entrust to the platforms alone. Human rights experts have offered many sophisticated proposals to get this done, but, in all cases, these solutions still require social media platforms to permit the experts to do their work.

Yet, over the last decade, a growing number of technologists have abandoned incremental approaches to the problem of Big Tech’s hegemony in favour of alternative platforms that shift the balance of power by decentralising the internet. Together, these new protocols and platforms are often called Web3.

At the Starling Lab, which I lead at the USC Shoah Foundation and Stanford’s Department of Electrical Engineering, we have evaluated and deployed a range of decentralised technologies to help restore trust in our digital media and advance the cause of human rights.

We’ve found many viable solutions and arrived at important innovations through advances in distributed cryptography. The latest generation of Web3 technology holds the promise that as you decentralise information, you also make it more secure and trustworthy.

For instance, solutions like the recently released Filecoin protocol, which was developed with contributions from our faculty and alumni, let users leverage distributed computing systems in which millions of participants help seal files and preserve their integrity. The more people who join the network and contribute their computing power, the stronger the seal becomes.
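
The underlying idea is content-addressing: a file is identified by a cryptographic fingerprint of its bytes, so any later tampering is detectable by anyone who holds that fingerprint. The sketch below is purely illustrative, using plain Python and SHA-256 rather than Filecoin’s actual identifiers or storage proofs, and the file name is hypothetical; it simply shows how independent parties could each recompute and compare a fingerprint to attest that a piece of evidence has not been altered.

```python
import hashlib
from pathlib import Path


def seal(path: str) -> str:
    """Compute a SHA-256 fingerprint of a file's contents.

    Illustrative only: networks such as Filecoin use their own content
    identifiers and cryptographic storage proofs, not this helper.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: str, expected_fingerprint: str) -> bool:
    """Re-hash the file and check it still matches the recorded seal."""
    return seal(path) == expected_fingerprint


if __name__ == "__main__":
    # Hypothetical evidence file, created here only for demonstration.
    evidence = Path("evidence.mp4")
    evidence.write_bytes(b"placeholder footage bytes")

    fingerprint = seal(str(evidence))
    print("sealed:", fingerprint)

    # Any researcher holding the fingerprint can later confirm
    # that the file has not been changed by a single byte.
    print("intact:", verify(str(evidence), fingerprint))
```

In a distributed network, many participants would hold copies of both the file and its fingerprint, so no single platform could quietly alter or delete the record.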

Consider a world in which human rights defenders upload evidence to a secure distributed storage network, safeguard their files by making copies of them on millions of mobile phones, and allow qualified researchers with different points of view and expertise to help verify the validity of their contents. Together these efforts create a web of trust that end-users can reference to come to their own conclusions. That’s what activism, historical archival work, content moderation, and accountability will look like in the twenty-first century.

By decentralising power, you ensure that even nascent civil rights movements have a chance to track and protect the information they need to make their case and help society understand the crimes that are least understood.

Nuremberg teaches that the path to accountability begins with preserving the primary records of history with a duty of care. The internet brings us an unparalleled power of documentation and the means to securely transmit information to the activists and experts who can act upon it. The question is whether we hold ourselves accountable to ensure they can.

By doing so, we don’t acquit or excuse big tech for its failings but instead pursue accountability to renew our trust in the internet itself. Web3 offers us tools and methods to do just that. The stakes have never been higher.

And as Ferencz reminds us: “the eyes of the world are watching.”

Jonathan Dotan is a fellow at Stanford University. He is the director and co-founder of the Starling Lab for Data Integrity, based at the USC Shoah Foundation and Stanford Department of Electrical Engineering
