Digital media ethics and intermediary liability: How other countries have approached it

The new mandate from the Indian government under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which came into force on May 26, 2021, requires social media companies with over 5 million users in India not just to enable traceability of end-to-end encrypted messages, but also to establish local offices with senior officials to deal with law enforcement and user grievances.

They also have to alter their interfaces to clearly distinguish verified users from others, set up automated tools for content filtration, and inform users if their accounts have been blocked.

Intermediaries play a huge role in our daily use of the Internet: access to the Internet, browsing, e-commerce, and the publishing of content are all made possible by intermediaries. Although they are frequently referred to as Internet Service Providers (ISPs), online intermediaries comprise a variety of services, including Internet access providers, hosting providers, search engines, e-commerce platforms, and social networking platforms.

Intermediary liability is a legal concept which describes how liable an intermediary is for the content which it stores and transmits. Such content could potentially include hate speech, copyright infringement, or images of abuse.

There are three common approaches to intermediary liability in democratic countries outside the United States: the awareness or “actual knowledge” approach (Australia, India, Japan, and the Philippines), the notice and takedown approach (New Zealand and South Africa), and the “mere conduit” approach (EU, South Africa, and India).

These approaches are not mutually exclusive, and a number of countries apply a mix of them. In addition, some countries have enacted legislation that deals with intermediary liability for certain types of content (e.g., violent or sexual content or hate speech) or for the removal of content, similar to Section 230 in the United States, which the Biden administration is now considering for reform.

Moreover, the decentralised, global nature of the Internet means that there are many blurry lines as to what constitutes 'bad' content in different countries: one country may ban certain types of speech that another accepts.

Let us look at how countries around the world, from liberal democracies to authoritarian states, have approached this issue.

China: Law of Tort Liability

In China, intermediaries who have the requisite knowledge of infringing activity on their services are held jointly and severally liable. In addition to that general rule, Article 36 of the Law of Tort Liability (LTL) governs intermediary liability. Paragraph 3 of Article 36 restates the default position above, while paragraph 2 stipulates that when a person whose rights are being infringed notifies the intermediary of that fact and the intermediary then fails to expeditiously take the necessary measures, the intermediary is jointly and severally liable with the direct infringer for any further damage caused by its inaction.

Australia: Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act

The Australian government passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, which makes it illegal for social media platforms to fail to promptly remove abhorrent violent user material shared on their services. The legislation was a response to the March 2019 mosque attacks in Christchurch, New Zealand, which were live-streamed by the gunman on Facebook.

Two tourists in Christchurch stand opposite the Al Noor Mosque, the site of the 2019 attacks that killed 51 worshippers. The attack was live-streamed by the gunman on Facebook. (Photo by Adam Bradley/SOPA Images/LightRocket via Getty Images)

The law defines abhorrent violent material as material recording or streaming acts of terrorism, murder, attempted murder, torture, rape, or kidnapping. The offence is punishable by up to three years in prison for individuals and fines of up to 10% of a corporate body's annual turnover. The legislation creates a liability regime that is stricter than the notice-and-takedown regimes in place in the USA and Europe.

Germany: NetzDG

Germany adopted the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, or NetzDG) in October 2017. It requires tech companies to delete 'obviously illegal' content within 24 hours of being notified; other illegal content must be reviewed within a week of being reported, and then deleted. Non-compliance carries fines of up to €50 million, despite the guidelines resting on vague and ambiguous terms such as 'insult' or 'defamation'. In 2020, the Bundestag passed a reform to the NetzDG that requires social networks to report certain types of unlawful content to Germany's Federal Criminal Police Office. Since the adoption of the law, at least 13 countries, in addition to the European Commission, have adopted or proposed intermediary liability models broadly similar to it.
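
As a rough illustration of the two-tier timeline, here is a minimal Python sketch (a hypothetical helper, not drawn from any platform's actual tooling) that computes the review deadline a platform would face for a reported item:

```python
from datetime import datetime, timedelta

# Review windows under the NetzDG's two-tier rule: 'obviously illegal'
# content must be deleted within 24 hours of notification; other
# reported illegal content must be reviewed within a week.
REVIEW_WINDOWS = {
    "obviously_illegal": timedelta(hours=24),
    "illegal": timedelta(days=7),
}

def netzdg_deadline(reported_at: datetime, category: str) -> datetime:
    """Latest time by which reported content must be reviewed and,
    if confirmed illegal, deleted."""
    return reported_at + REVIEW_WINDOWS[category]

# A post flagged as obviously illegal at noon must be gone by noon
# the next day.
print(netzdg_deadline(datetime(2021, 6, 1, 12, 0), "obviously_illegal"))
```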

USA: Section 230 of Communications Decency Act and DMCA

In the US, Section 230 of the Communications Decency Act, 1996 provides immunity to online services, including intermediaries, from liability for the transmission of any third-party content. It specifically states that providers of an 'interactive computer service' will not be treated as publishers of third-party content. The section further provides that such online services may moderate and remove, in 'good faith', offensive or obscene third-party content. This has led to a generation of platform-specific 'community guidelines' and policies on content suitability on each such platform. This is a notable deviation from India's position on the regulation of intermediaries, as the IT Act does not provide for content moderation practices. In February 2020, the US Department of Justice held a day-long workshop to discuss ways in which Section 230 could be further amended; it is examining cases in which platforms have enabled the distribution of nonconsensual pornography, harassment, and child sexual abuse imagery.

The US also adopted the Digital Millennium Copyright Act (DMCA) in 1998. Section 512 of the DMCA sets out a procedure by which, under certain conditions, online service providers ("OSPs") are not liable for monetary relief for copyright infringement that occurs on their networks. One of those conditions is compliance with the notice-and-takedown procedure: upon notification of claimed infringement by a copyright owner, the OSP must remove the material from its network.
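
Reduced to a toy predicate (purely illustrative; the function and parameter names are invented for this sketch), the safe-harbour condition looks like this:

```python
def keeps_safe_harbour(received_notice: bool,
                       removed_expeditiously: bool) -> bool:
    """Toy model of the DMCA Section 512 notice-and-takedown condition:
    an OSP that has not been validly notified keeps its immunity from
    monetary liability (assuming the other statutory conditions hold);
    once notified, immunity turns on expeditious removal."""
    if not received_notice:
        return True
    return removed_expeditiously

# An OSP that ignores a valid notice loses the safe harbour.
assert keeps_safe_harbour(received_notice=True,
                          removed_expeditiously=False) is False
```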

EU: e-Commerce Directive

The e-Commerce Directive is the foundational legal framework for online services in the EU. Its limited liability regime establishes that internet companies cannot be held liable for illegal content or activity on their services unless they have 'actual knowledge' of it, and that they must remove such content or activity once they have been informed of its presence. In doing so, it limits the liability of service providers to instances where they have been properly informed of the presence of illegal content or activity and have not acted expeditiously to remove it. The limited liability regime benefits internet companies and users, as well as the internet ecosystem as a whole. Member States cannot impose any general content monitoring obligation on intermediaries.

United Arab Emirates: Cybercrime Law

In the UAE, online content is primarily regulated by the Cybercrime Law. Its provisions are widely drafted and prohibit the online publication of offensive content, including any content considered to "prejudice public morals" or to criticise the State or Islam.

The Cybercrime Law does not stipulate any specific information that a website operator must provide. Many of its content-related prohibitions impose liability both on the uploader of the content and on the owner and operator of the website displaying it.

An ISP may be required by the relevant UAE authorities to remove or block content. Under the Cybercrime Law, the ISP must comply with such a request; failing to do so may result in penalties such as imprisonment or a fine.

Russia: Sovereign Internet Law

In November 2019, Vladimir Putin’s government introduced new regulations that create a legal framework for centralised state management of the internet within Russia’s borders.

The Sovereign Internet Law, which the government says is designed to improve cybersecurity, affects all users of the Russian Internet, both individuals and legal entities. The law allows the government to block malicious traffic, activities, or sites; the stated aim is to protect the RuNet and keep it functioning in the event of a critical nationwide cyber threat.

To implement the law, Internet service providers are required to install deep packet inspection (DPI) tools that can identify the source of Internet traffic and filter content as required. Compliance is monitored by the Russian telecommunications authority, Roskomnadzor.
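
To give a flavour of what DPI involves at its simplest, here is an illustrative Python sketch using the scapy packet library; it only inspects and logs traffic against a made-up blocklist address (real DPI systems classify traffic at line rate and enforce blocking inside the network stack):

```python
# Illustrative only: inspects packets and logs matches against a blocklist.
# Requires scapy (pip install scapy) and root privileges to sniff traffic.
from scapy.all import IP, sniff

BLOCKLIST = {"203.0.113.7"}  # hypothetical blocked address (TEST-NET-3 range)

def inspect(packet):
    # Real DPI also looks inside payloads (protocols, hostnames, patterns);
    # this sketch matches on the destination IP alone.
    if IP in packet and packet[IP].dst in BLOCKLIST:
        print(f"would block: {packet[IP].src} -> {packet[IP].dst}")

sniff(prn=inspect, store=False, count=100)  # inspect the next 100 packets
```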

Japan: Provider Liability Limitation Act

The Diet passed the Provider Liability Limitation Act in 2001, which applies to both copyright and non-copyright claims. According to section 3(1), when someone's rights are infringed by a flow of information, intermediaries 'shall not be held liable for the damage caused unless it is technically feasible to take measures to prevent the transmission of infringing information to unspecified persons' and either of the following holds (the sketch after this list encodes the rule):

(1) the intermediary knew the fact of the infringement; or

(2) the intermediary knew of the existence of the relevant information and there were 'reasonable grounds' for it to know the fact of the infringement.
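
Put as a toy Python predicate (illustrative only; the function and parameter names are invented for this sketch), the rule reads:

```python
def liable_under_s31(technically_feasible: bool,
                     knew_infringement: bool,
                     knew_information: bool,
                     reasonable_grounds: bool) -> bool:
    """Toy encoding of s.3(1) of Japan's Provider Liability Limitation
    Act: an intermediary is liable only if preventing transmission was
    technically feasible AND it either knew of the infringement, or
    knew of the relevant information and had reasonable grounds to
    know of the infringement."""
    return technically_feasible and (
        knew_infringement or (knew_information and reasonable_grounds)
    )
```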

This is similar to the EU's e-Commerce Directive, under which an intermediary is exempt from liability for unlawful content it is unaware of, so long as it has neither actual nor apparent knowledge of it, and takes the content down once it acquires that knowledge.