Live updates: Facebook papers whistleblower Frances Haugen testifies at Parliament

Misinformation and extremism spreading unchecked. Hate speech sparking conflict and violence in the U.S. and abroad. Human traffickers sharing a platform with baby pictures and engagement announcements.

Despite Facebook's mission to bring people closer together, internal documents obtained by USA TODAY show the company knew that users were being driven apart by a wide range of dangerous and divisive content on its platforms.

The documents were part of the disclosures made to the Securities and Exchange Commission by Facebook whistleblower Frances Haugen. A consortium of news organizations, including USA TODAY, reviewed the redacted versions received by Congress.

The documents provide a rare glimpse into the internal decisions made at Facebook that affect nearly 3 billion users around the globe.

Concerned that Facebook was prioritizing profits over the well-being of its users, Haugen reviewed thousands of documents over several weeks before leaving the company in May. On Monday, she testified before a committee at the British Parliament.

►A tale of two accounts: Two experimental Facebook accounts show how the company helped divide America

►Facebook rebrand on the horizon? Shift to metaverse ignites rebranding plan, report says

The documents, some of which have been the subject of extensive reporting by The Wall Street Journal and The New York Times, detail company research showing that toxic and divisive content is prevalent in posts boosted by Facebook and shared widely by users.

Concerns about how Facebook operates and its impact on teens have united congressional leaders.

The company has responded to the leaked documents and the recent media attention by arguing that the reporting rests on a "false" premise.

"At the heart of these stories is a premise which is false. Yes, we're a business and we make profit, but the idea that we do so at the expense of people's safety or wellbeing misunderstands where our own commercial interests lie. The truth is we've invested $13 billion and have over 40,000 people to do one job: keep people safe on Facebook," the company said.

Frances Haugen testifies before British Parliament

During her testimony, Haugen said she is concerned about several aspects of Facebook, including the ranking of posts based on engagement, the lack of safety support for languages other than English, and the "false choices" Facebook presents by reducing discussions of how to act to a battle between transparency and privacy.

"Now is the most critical time to act," said Haugen, comparing Facebook's situation to an oil spill. "Right now the failures of Facebook are making it harder for us to regulate Facebook."

Haugen also discussed the influence of "Groups" in the spread of misinformation and polarizing content.

"Unquestionably, it's making hate worse," she said.

Haugen suggested solutions that would help curb the spread of misinformation and shift away from engagement-based ranking, such as returning to news feeds that are ordered chronologically.
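To make that distinction concrete, the sketch below contrasts engagement-based ordering with chronological ordering of a feed. It is a minimal, hypothetical illustration: the Post fields and the engagement weights are assumptions made for the example, not Facebook's actual ranking model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Post:
    author: str
    created_at: datetime
    likes: int
    comments: int
    reshares: int

def rank_by_engagement(posts: List[Post]) -> List[Post]:
    # Engagement-based ranking: posts that draw the most reactions rise to
    # the top, regardless of when they were published. Weights are arbitrary.
    return sorted(posts,
                  key=lambda p: p.likes + 2 * p.comments + 3 * p.reshares,
                  reverse=True)

def rank_chronologically(posts: List[Post]) -> List[Post]:
    # Chronological ranking: newest posts first, with no weighting by reactions.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

Under engagement-based ordering, an older post that provokes many reactions can outrank everything newer; under chronological ordering, recency alone decides what users see first.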

Frances Haugen gives evidence to members of the UK Parliament's Joint Committee on the draft Online Safety Bill at the Houses of Parliament in London. Haugen, a former Facebook employee turned whistleblower, has rocked the tech world with numerous blistering claims about her former employer since releasing thousands of pages of internal research documents she secretly copied before leaving her job in the company's civic integrity unit. She was in Westminster on October 25 speaking to members of Parliament about online safety.

However, Facebook has pushed back against changes that could hurt its bottom line, she said.

"They don't want to lose that growth," said Haugen. "They don't want 1% shorter sessions because that's 1% less revenue."

Haugen also addressed Facebook's Oversight Board, the body that makes decisions on content moderation for the platform. Haugen implored the board to seek more transparency in its relationship with Facebook.

Haugen said if Facebook can actively mislead its board, "I don't know what the purpose of the Oversight Board is."

Haugen 'deeply worried' about making Instagram safe for kids

Haugen, who spoke for more than an hour, said she is "deeply worried" about Facebook's ability to make its social app Instagram safe for kids.

Facebook had planned to release a version of the app for kids under 13, but postponed launch in September to work more with parents and lawmakers to address their concerns.

During her testimony, Haugen said that, unlike other platforms, Instagram is built for "social comparison," which can be worse for kids.

Haugen disputed Facebook's claim that it needs to launch a kids' version of Instagram because many users under 13 lie about their age. She suggested Facebook publish how it detects users under 13.

When asked why Facebook hasn't done anything to make Instagram safer for kids, she said the company knows "young users are the future of the platform and the earlier they get them the more likely they'll get them hooked."

Haugen: 'Mandatory regulation' needed

Haugen said Facebook needs more incentives for its employees to raise issues about the flaws of its platform. She told British lawmakers there are countless employees with ideas for making Facebook safer, but those ideas aren't amplified internally because they would slow the company's growth.

"This is a company that lionizes growth," she said.

Haugen called for "mandatory regulation" to help guide Facebook toward a safer platform.

Facebook's response

Haugen's comments before lawmakers in the U.S. and Britain as well as numerous media investigations have created the most intense scrutiny that Facebook has encountered since it launched in 2004.

CEO Mark Zuckerberg has repeatedly defended the company and its practices, sharing in an internal staff memo that "it's very important to me that everything we build is safe and good for kids."

Nick Clegg, Facebook's vice president of global affairs, echoed a similar sentiment in an extensive memo to staff on Saturday that was obtained by USA TODAY.

Clegg told staff that they "shouldn’t be surprised to find ourselves under this sort of intense scrutiny."

"I think most reasonable people would acknowledge social media is being held responsible for many issues that run much deeper in society – from climate change to polarization, from adolescent mental health to organized crime," Clegg said. "That is why we need lawmakers to help. It shouldn’t be left to private companies alone to decide where the line is drawn on societal issues.

On Sunday, Sen. Richard Blumenthal, D-Conn., chair of the Consumer Protection Subcommittee that held Haugen's testimony, told CNN that Facebook "ought to come clean and reveal everything."

The spread of misinformation

The documents reveal the internal discussions and scientific experimentation surrounding misinformation and harmful content being spread on Facebook.

A change to the algorithm that prioritizes what users see in their News Feed, rolled out in 2018, was supposed to encourage "meaningful social interactions" and strengthen bonds with friends and family.

Facebook researchers discovered the algorithm change was exacerbating the spread of misinformation and harmful content, and they actively experimented with ways to demote and contain that content, documents show.

►Who is Facebook whistleblower Frances Haugen: Everything you need to know

►From Facebook friend to romance scammer: Older Americans increasingly targeted amid COVID pandemic

News Feeds with violence and nudity

Facebook’s research found that users with low digital literacy skills were significantly more likely to see graphic violence and borderline nudity in their News Feed.

The people most harmed by the influx of disturbing posts were Black, elderly and low-income users, among other vulnerable groups, the research found. Facebook also conducted numerous in-depth interviews and in-home visits with 18 of these users over several months. The researchers found that exposure to disturbing content in their feeds made them less likely to use Facebook and exacerbated the trauma and hardships they were already experiencing.

Among the researchers’ findings: A 44-year-old in a precarious financial situation who followed Facebook pages that posted coupons and savings deals was bombarded with unknown users’ posts of financial scams. A person who had used a Facebook group for Narcotics Anonymous and totaled his car was shown alcoholic beverage ads and posts about cars for sale. Black people were consistently shown images of physical violence and police brutality.

By contrast, borderline hate posts appeared much more frequently in high-digital-literacy users’ feeds. Whereas low-digital-literacy users were unable to avoid nudity and graphic violence in their feeds, the research suggested that people with better digital skills used those skills to seek out hate-filled content more effectively.

Curbing harmful content

The documents show the company’s researchers tested various ways to reduce the amount of misinformation and harmful content served to Facebook users.

Tests included straightforward engineering fixes that would demote viral content that was negative, sensational, or meant to provoke outrage.

In April 2019, company officials debated dampening the virality of misinformation by demoting “deep reshares” of content where the poster is not a friend or follower of the original poster.

Facebook found that users encountering posts more than two reshares away from the original post were four times as likely to see misinformation.

By demoting that content, Facebook would be “easily scalable and could catch loads of misinfo,” wrote one employee. “While we don’t think it is a substitute for other approaches to tackle misinfo, it is comparatively simple to scale across languages and countries.”
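In concept, a "deep reshare" demotion applies a penalty to a post's ranking score once the reshare chain passes a depth threshold. The sketch below is a hypothetical illustration only: the two-reshare threshold comes from the research described above, but the field names and the demotion factor are assumptions for the example, and the documents do not describe Facebook's actual implementation.

```python
from typing import Dict, List

def demote_deep_reshares(posts: List[Dict],
                         depth_threshold: int = 2,
                         demotion_factor: float = 0.5) -> List[Dict]:
    """Down-rank posts more than `depth_threshold` reshares removed from the
    original poster. Field names and the demotion factor are illustrative
    assumptions, not Facebook's actual parameters."""
    ranked = []
    for post in posts:
        score = post["base_score"]
        if post["reshare_depth"] > depth_threshold:
            # Internal research found posts more than two reshares from the
            # original were four times as likely to contain misinformation,
            # so such "deep reshares" get a reduced ranking score.
            score *= demotion_factor
        ranked.append((score, post))
    # Higher adjusted scores surface first in the feed.
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in ranked]
```

Because a rule like this depends only on reshare depth rather than on the content itself, it is, as the employee noted, comparatively simple to apply across languages and countries.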

Other documents show Facebook deployed this change in several countries – including India, Ethiopia and Myanmar – in 2019, but it’s not clear whether Facebook stuck with this approach in these instances.

►Done with Facebook?: Here's how to deactivate or permanently delete your Facebook account

►'Profits before people': After Facebook whistleblower Frances Haugen argued her case, will Congress act?

How to moderate at-risk countries

Facebook knows of potential harms from content on its platform in at-risk countries but does not have effective moderation – either from its own artificial intelligence screening or from employees who review reports of potentially violating content, the documents show.

Another document, based on data from 2020, offered proposals to change the moderation of content in Arabic to “improve our ability to get ahead of dangerous events, PR fires, and integrity issues in high-priority At-Risk Countries, rather than playing catch up.”

A Facebook employee made several proposals, the records show, including hiring individuals from less-represented countries. Because dialects can vary by country or even region, the employee wrote, reviewers might not be equipped to handle reports from other dialects. While Moroccan and Syrian dialects were well represented among Facebook’s reviewers, Libyan, Saudi Arabian and Yemeni were not.

“With the size of the Arabic user base and potential severity of offline harm in almost every Arabic country – as every Arabic nation save Western Sahara is on the At-Risk Countries list and deals with such severe issues as terrorism and sex trafficking – it is surely of the highest importance to put more resources to the task of improving Arabic systems,” the employee wrote.

One document from late 2020 sampled more than 1,000 hate speech reports to Facebook in Afghanistan, finding deficiencies in everything from the accuracy of translation in local languages in its community standards to its reporting process. (Afghanistan was not listed among Facebook’s three tiers of at-risk countries in a document Haugen collected before her departure in May, which was before the United States' withdrawal.)

The report found that for one 30-day set of data, 98% of hate speech was removed reactively in response to reports, while just 2% was removed proactively by Facebook.

The document recommended Facebook allow employees in its Afghanistan market to review its classifiers to refine them and add new ones.

“This is particularly important given the significantly lower detection of Hate Speech contents by automation,” it said.

Platform enables human trafficking

Facebook found that its platform “enables all three stages of the human exploitation lifecycle” – recruitment, facilitation and exploitation – via complex real-world networks, according to internal documents.

Though Facebook’s public-facing community standards claim the company removes content that facilitates human exploitation, internal documents show it has failed to do so.

Facebook has investigated the issue for years, proposing policy and technical changes to help combat exploitation on its platforms, records show. But it’s unclear whether those changes were adopted. In at least one case, Facebook deactivated a tool that was proactively detecting exploitation, according to internal documents.

In October 2019, prompted by a BBC investigation, Apple threatened to remove Facebook and Instagram from its app store because it found it had content promoting domestic servitude, a crime in which a domestic worker is trapped in his or her employment, abused and either underpaid or not paid at all. An internal document shows Facebook had been aware of the issue prior to Apple’s warning.

In response to Apple’s threat, Facebook conducted a review and identified more than 300,000 pieces of potentially violating content on Facebook and Instagram, records show. It took action on 133,566 items and blocked violating hashtags.

Contributing: Mike Snider

This article originally appeared on USA TODAY: Facebook whistleblower updates: Frances Haugen testifies at Parliament