As we examine global content regulation, each country presents a distinct approach: from Germany's Network Enforcement Act to Russia's sovereign internet laws, and from the UK's Online Safety Act to India's media rights challenges.
In an era defined by hybrid warfare and the unchecked proliferation of social media, the battle for minds has become as potent as any conventional conflict. Confronted with this challenge, nations are looking to states that have implemented strict measures to rein in unbridled media. From Germany's Network Enforcement Act to Russia's sovereign internet laws, the global community is scrutinizing diverse approaches to combating fake news, hate speech, and misinformation online. These legislative actions mark a critical juncture at which the balance between free expression and societal well-being is at stake.
Germany’s Network Enforcement Act
The Network Enforcement Act, informally referred to as the Facebook Act (Facebook-Gesetz), is a significant legislative initiative passed by the German Bundestag (the lower house of Germany's federal legislature) with the explicit objective of countering the proliferation of fake news, hate speech, and misinformation online. In force since October 1, 2017, the law imposes a legal obligation on social network providers to promptly remove unlawful content from their platforms. Non-compliance constitutes a regulatory offense that can incur substantial fines. Notably, the law has attracted international attention, prompting at least 13 countries, along with the European Commission, to adopt or consider intermediary liability frameworks akin to the provisions of the Network Enforcement Act.1
In response to escalating concerns over the dissemination of hate speech, German Justice Minister Heiko Maas established a specialized task force in 2015, drawing participation from major tech companies such as Google, Facebook, and Twitter.2 The initiative committed the companies to removing illegal posts within a stringent 24-hour window. A subsequent published review found sharply uneven performance. YouTube fared best, eliminating 90 percent of flagged content, with 82 percent of removals occurring within the stipulated timeframe. Facebook removed only 39 percent of reported content, with 33 percent of deletions achieved within the designated 24-hour period, while Twitter took down a mere 1 percent of reported posts promptly. These shortfalls helped persuade the government that voluntary commitments were insufficient, paving the way for binding legislation.
Following incidents such as a live-streamed attack on a synagogue and the shooting of a lawmaker, Berlin has proposed new measures requiring platforms like Facebook, Twitter, and Google to actively report illegal content to law enforcement. The move aims to expedite the response to harmful material, with plans for a dedicated federal police unit to handle submissions and a mandate that platforms provide the IP addresses of those responsible for spreading hateful content.3 The renewed effort underscores Germany's commitment to holding tech giants accountable for content shared on their platforms, setting a precedent for greater regulation of online speech in other countries.
Russia's Internet Governance
In November 2019, Russia implemented laws enabling centralized state control over its internet infrastructure, aiming to isolate its network and regulate information flow. Recent actions, like blocking Telegram, underscore its efforts to curb dissent and control communication.4
On March 4, 2022, Russia enacted laws imposing both administrative and criminal penalties for disseminating "false" information, with immediate effect.5 These laws, collectively known as the "March 2022 Amendments," expanded the Russian Criminal Code to criminalize various actions, including spreading knowingly false information about the Russian Armed Forces and obstructing military operations. The regulations apply to content shared on any platform, from the Bloomberg Terminal to TikTok.
For instance, Russian courts have fined individuals for disseminating "knowingly false information" about COVID-19 through digital platforms including WhatsApp and VKontakte, a Russian social media platform. The same penalties now extend to spreading false information about the Russian Armed Forces and state authorities.
The explanatory report accompanying the Russian bill referenced Germany's Network Enforcement Act, citing it as grounds for similar regulations in Russia. Additionally, Russia has established or proposed governmental entities with the authority to order intermediaries to remove illegal content without independent review or complaint mechanisms.6
During the Russian-Ukrainian conflict, Roskomnadzor, Russia's communications regulator, issued censorship orders to foreign platforms, including YouTube, alleging Western "information warfare." The crackdown extended to domestic media, resulting in outlet shutdowns and the exodus of some 150 journalists.7
Since 2017, Russia has expanded the number of agencies authorized to enforce content blocking and raised fines for entities, such as internet service providers (ISPs) and search engines, that resist content removal or offer ways to bypass censorship.8 In 2023, Russia saw a surge in criminal cases involving alleged cooperation with foreign intelligence agencies, and amendments signed by President Putin raised the maximum penalty for "high treason" to life imprisonment. The curtailment of anti-war speech continued, resulting in the conviction of at least 77 individuals, and prominent opposition figures received lengthy prison sentences, signaling a deepening crackdown on dissent.9
House of Lords Communications and Digital Committee Inquiry into Freedom of Expression Online
Since 2017, the Network Enforcement Act has played a central role in tackling online hate crimes, mandating that social networks swiftly remove illegal content and providing a streamlined process for fulfilling this obligation. Carsten Müller, a member of Germany's Bundestag, shared insights on the NetzDG with the committee, emphasizing its role in combating hate crime online while safeguarding freedom of expression. Müller addressed concerns about penalizing platforms over content removal and discussed the law's influence on regulation internationally.10
The tightening grip of Russian internet laws, from centralized control to recent penalties for spreading false information, reveals a global struggle between state power and digital freedom. As Russia looks to Germany's Network Enforcement Act for inspiration, the tension between combating illegal content and upholding civil liberties intensifies on the world stage.
UK's Online Safety Act 2023: Balancing Security and Privacy in the Digital Age
The United Kingdom's (UK's) Online Safety Act 2023 is intended to protect all internet users, with a particular focus on children. Years in the making, the legislation aspires to make the UK the safest place in the world to be online, placing tech giants under a new regime of obligations for shaping and policing their digital platforms.11
As proposed, the Online Safety Bill aimed to rid the digital realm of illegal and harmful content while upholding the right to free expression. Its mission was to shield internet users from fraudulent content and to protect children from the most harmful corners of the online world.12
In December 2020, the UK's broadcasting watchdog, the Office of Communications (Ofcom), fined Republic Bharat, the Hindi-language channel of Republic TV, GBP 20,000 for broadcasting hate speech against Pakistani people in a program aired the previous year. Ofcom's published decision highlighted comments made by the host, Arnab Goswami, and some guests that amounted to hate speech and derogatory treatment of Pakistani people. The show, "Poochta Hai Bharat," breached Ofcom's broadcasting rules, leading to the fine and sanctions on the licensee's broadcasting activities in the UK.13
Earlier proposals introduced by the Department for Digital, Culture, Media and Sport (DCMS) envisaged fines or blocks for websites failing to tackle "online harms" such as terrorism and child abuse. The plan included an independent watchdog that would write a "code of practice" for tech giants, with penalties for breaches.14
In April 2023, the UK's data watchdog, the Information Commissioner's Office (ICO), fined TikTok GBP 12.7 million for mishandling children's data. Despite its own rules prohibiting children under 13 from creating accounts, TikTok allowed approximately 1.4 million UK children in that age group to access its platform in 2020. The app has separately been barred from all UK government-owned phones.15
The UK's Online Safety Bill received royal assent on October 26, 2023, becoming law. Aimed at making the UK the "safest place in the world to be online," it introduces new obligations for tech firms to police their platforms, tackling issues from underage access to online pornography to terrorism-related content.16
Hate Crime and Public Order (Scotland) Bill
On March 11, 2021, the Scottish Parliament approved the Hate Crime and Public Order (Scotland) Bill, strengthening existing hate crime legislation. The Act broadens the scope of protected characteristics, introduces new offenses for stirring up hatred, and consolidates all hate crime laws into a single statute. It also mandates the annual publication of data on hate crimes and convictions. The Act targets hate crimes rooted in prejudice against characteristics such as age, disability, religion, sexual orientation, and transgender identity.17
In 2023, the UK faced a digital crime surge: while police recorded 5.5 million offenses excluding fraud and computer misuse, fraud and computer misuse offenses spiked by 15 percent to 1.2 million cases.18 Although there was a slight uptick in crimes resulting in charges or summonses, nearly 40 percent of cases were closed without a suspect being identified. With cases taking an average of 14 days to resolve, authorities are feeling the pressure of a rising workload.
The surge in digital crime underscores the urgent need for robust measures like the UK's Online Safety Act 2023. Despite challenges, the Act aims to strike a delicate balance between security and privacy, positioning the UK as a global leader in online safety.19
Recent Developments in Social Media Regulation and the Fight Against Disinformation in the USA
In the ever-evolving landscape of digital regulation and the fight against misinformation, the United States has been at the forefront of implementing new measures and addressing emerging challenges. From bipartisan actions targeting social media giants to the establishment of specialized centers to counter foreign influence, recent developments reflect a concerted effort to navigate the complexities of the digital age.
U.S. Senate Passes Bill Regulating TikTok
In April 2024, a bipartisan bill swiftly passed the Senate, forcing ByteDance to sell TikTok to a government-approved buyer within a year or face a U.S. ban. Lawmakers championed the bill over national security and data privacy worries, citing concerns that the Chinese government could access sensitive user data through TikTok, while TikTok vowed legal resistance on free speech grounds. Amid the escalating battle, potential buyers, including major tech firms, are scrambling to navigate the complexities of a sale, complicated by China's involvement.20
Deepfake Dilemma: U.S. and UK Respond with Legal Measures
In response to the widespread dissemination of fake images of Taylor Swift, U.S. lawmakers are advocating for laws specifically targeting the production of deepfake content. Although no federal regulations yet address the issue, some states have begun efforts to do so. Meanwhile, in the UK, the Online Safety Act 2023 has made the sharing of deepfake pornography illegal, a significant step toward combating digital manipulation.21
Establishment of FMIC
The Foreign Malign Influence Center (FMIC) was established on September 23, 2022, following congressional funding approval. Although its creation was initially undisclosed, it operates within the Office of the Director of National Intelligence (ODNI) and has the unique authority to coordinate efforts across the U.S. intelligence community to combat foreign influence, including disinformation campaigns. As per its mandate, the FMIC is empowered to counter foreign disinformation aimed at U.S. elections and public opinion.22
Biden Administration’s Social Media Crackdown
The Biden Administration, along with twelve Democratic state attorneys general, collaborated with social media platforms to purge accounts and quash undesirable narratives identified by the Center for Countering Digital Hate (CCDH). Through direct emails, the administration urged Facebook and other platforms to de-platform the so-called "Disinformation Dozen," a label coined by the CCDH.23
FTC's Crackdown on Meta's Privacy Violations
As of May 2023, the Federal Trade Commission (FTC) has taken action against social media platforms for deceptive privacy practices. Facebook settled with the FTC over allegations of misleading users about its privacy practices, resulting in a record-breaking USD 5 billion penalty. Now known as Meta, the company faces new accusations from the FTC of failing to adhere to the resulting 2020 data privacy settlement.24 The allegations include misleading representations about parental controls on Messenger and improper data access for app developers. If the claims are upheld, Meta could face additional restrictions and penalties, including a ban on profiting from data collected from users under 18 and heightened privacy requirements.
U.S. State Laws Target Social Media Content Moderation
In 2021, Florida and Texas passed laws targeting social media platforms' content moderation practices, particularly concerning perceived bias against conservative viewpoints. These laws allow lawsuits against platforms with over 50 million monthly U.S. users for removing political content, except in cases involving criminal activity or imminent harm.25
New York State Senate Bill 2023-S895A
As of May 2023, New York State Senate Bill S895A mandates that social media companies prominently display the terms of service for each platform they own or operate and submit reports on those terms to the attorney general; it also outlines remedies for violations.26
Tech Giants Ease Misinformation Rules Ahead of the 2024 Elections
As of May 2023, major tech firms are easing restrictions on COVID-19 and 2020 election misinformation, sparking concerns. YouTube now allows content alleging election fraud, and Meta reinstated Robert F. Kennedy Jr.'s Instagram account for his presidential bid. Experts stress the importance of fact-checking but note that industry layoffs may hinder enforcement. Platforms previously cracked down on hate speech, but under Elon Musk, Twitter is relaxing its rules. Political ad policies may also shift as firms like Twitter and Spotify consider reinstating them.27
China's Battle Against Online Disinformation
In March 2022, in response to a surge of online disinformation polarizing public opinion in China, an adviser to the Chinese government proposed fresh legislation prohibiting the creation and spread of false information on the internet. The call reflects growing concern about the impact of misinformation on societal harmony and underscores the perceived need for regulatory measures against the proliferation of fabricated content online.28
China has implemented stringent laws to tackle disinformation, with severe penalties for those who spread false information that significantly disrupts public order. Offenders can face imprisonment for up to seven years for disseminating disinformation through media platforms.29
Developments in Electronic Crime Enforcement
In 2023, China made significant strides in combating electronic crime, with a focus on data privacy violations. In July 2022, Didi Global, a major Chinese ride-hailing company, had already been fined RMB 8.026 billion (approximately USD 1.19 billion) for breaching privacy regulations. To address telecom and online fraud, a new law was enacted imposing stringent penalties on operators and service providers that fail to meet compliance standards: enterprises risk fines of up to RMB 5 million (approximately USD 736,750), while individuals face penalties of up to RMB 200,000 (about USD 29,470).30
Turkiye's Crackdown on Disinformation
In 2022, Turkiye's parliament passed a sweeping new law allowing the imprisonment, for up to three years, of those convicted of spreading disinformation. The bill, proposed by the ruling Justice and Development Party (AKP), tightens control over domestic journalism and social media activity, and journalists and press freedom organizations have condemned it. The legislation criminalizes the intentional dissemination of disinformation or "fake news" without clearly defining what constitutes disinformation, raising concerns about arbitrary enforcement. It also imposes harsher penalties for using anonymous accounts to spread alleged disinformation.31
Japan's New Rapid-Response Unit to Combat Disinformation
In response to the growing threat of disinformation campaigns, Japan's central government is establishing an organization within the Cabinet Secretariat dedicated to combating fake news and impersonator accounts. Chief Cabinet Secretary Hirokazu Matsuno announced the initiative, emphasizing the need to address the damage that the spread of false information does to security and universal values. The new body will focus on gathering and analyzing information about disinformation, enhancing external information dissemination, and collaborating with outside organizations. With disinformation posing a significant challenge in the digital era, the government aims to launch the rapid-response unit by fiscal 2024.32
This move comes as part of Japan's efforts to strengthen its response to information wars and safeguard against the spread of fake information, as outlined in the updated National Security Strategy. The new organization will collaborate with existing units within the Cabinet and other relevant ministries to bolster Japan's capabilities in combating disinformation effectively.
Malaysia’s Anti-Fake News Act 2018
Enacted on April 11, 2018, Malaysia's Anti-Fake News Act criminalizes the creation, dissemination, or circulation of fake news. Offenders face fines of up to MYR 500,000 (approximately USD 128,575), imprisonment of up to six years, or both, and continuing offenses may incur daily fines of up to MYR 3,000 (approximately USD 771).33
As the world navigates the complex realm of digital regulation amid the proliferation of social media and disinformation, the need for effective legislation has become imperative. The challenges posed by unchecked social media platforms, which provide fertile ground for anti-state elements and misinformation campaigns, demand a comprehensive and nuanced approach.
1. Gesley, Jenny. 2021. “Germany: Network Enforcement Act Amended to Better Fight Online Hate Speech.” Library of Congress, Washington, D.C. 20540 USA. July 6, 2021. https://www.loc.gov/item/global-legal-monitor/2021-07-06/germany-network-enforcement-act-amended-to-better-fight-online-hate-speech/.
2. “Germany to Force Facebook, Twitter to Delete Hate Speech.” 2017. DW. March 14, 2017. https://www.dw.com/en/germany-to-force-facebook-twitter-to-delete-hate-speech/a-37927085.
3. “Germany Lays down Marker for Online Hate Speech Laws.” 2019. Politico. October 30, 2019. https://www.politico.eu/article/germany-hate-speech-netzdg-angela-merkel-facebook-germany-twitter/.
4. “Deciphering Russia’s ‘Sovereign Internet Law’ | DGAP.” n.d. Dgap.org. https://dgap.org/en/research/publications/deciphering-russias-sovereign-internet-law.
5. "Guide to Understanding the Laws Relating to Fake News in Russia" (2022), ‘CPJ’. https://cpj.org/wp-content/uploads/2022/07/Guide-to-Understanding-the-Laws-Relating-to-Fake-News-in-Russia.pdf.
6. Mchangama, Jacob, and Joelle Fiss. 2019. “Germany’s Online Crackdowns Inspire the World’s Dictators.” Foreign Policy. November 6, 2019. https://foreignpolicy.com/2019/11/06/germany-online-crackdowns-inspired-the-worlds-dictators-russia-venezuela-india/.
7. Sherman, Justin. 2022. “Russia’s Internet Censor Is Also a Surveillance Machine.” Council on Foreign Relations. September 28, 2022. https://www.cfr.org/blog/russias-internet-censor-also-surveillance-machine.
8. Human Rights Watch. 2020. “Russia: Growing Internet Isolation, Control, Censorship.” Human Rights Watch. June 18, 2020. https://www.hrw.org/news/2020/06/18/russia-growing-internet-isolation-control-censorship.
9. Human Rights Watch. 2024. “Russia: Events of 2023.” Human Rights Watch. January 11, 2024. https://www.hrw.org/world-report/2024/country-chapters/russia.
10. Written evidence submitted by Carsten Müller, "Carsten Müller MP—supplementary written evidence (FEO0112)", March 16, 2021. https://committees.parliament.uk/writtenevidence/26054/html/.
11. Guest, Peter. 2023. “The UK’s Controversial Online Safety Act Is Now Law.” Wired. October 26, 2023. https://www.wired.com/story/the-uks-controversial-online-safety-act-is-now-law/.
12. “What You Need to Know about the UK’s Online Safety Bill.” n.d. Computerworld. Accessed April 25, 2024. https://www.computerworld.com/article/1615426/what-you-need-to-know-about-the-uks-online-safety-bill.html.
13. “UK Media Watchdog Fines Arnab’s Republic Bharat £20,000 for ‘Hate Speech against Pakistanis.’” n.d. The Wire. https://thewire.in/media/uk-govt-body-slaps-20000-fine-on-republic-bharat-for-hate-speech-against-pakistan.
14. Fox, Chris. 2019. “Websites to Be Fined over ‘Online Harms’ under New Proposals.” BBC News, April 8, 2019. https://www.bbc.com/news/technology-47826946.
15. Browne, Ryan. 2020. “Social Media Giants Face Big Fines and Blocked Sites under New UK Rules on Harmful Content.” CNBC. December 15, 2020. https://www.cnbc.com/2020/12/15/uk-online-harms-bill-tech-giants-face-big-fines-and-blocked-sites.html.
16. Porter, Jon. 2023. “The UK’s Controversial Online Safety Bill Finally Becomes Law.” The Verge. October 26, 2023. https://www.theverge.com/2023/10/26/23922397/uk-online-safety-bill-law-passed-royal-assent-moderation-regulation.
17. “Crime Prevention: Hate Crime-Government. Scotland.” n.d. www.gov.scot. https://www.gov.scot/policies/crime-prevention-and-reduction/hate-crime/.
18. Home Office. 2023. “Crime Outcomes in England and Wales 2022 to 2023.” GOV.UK. July 20, 2023. https://www.gov.uk/government/statistics/crime-outcomes-in-england-and-wales-2022-to-2023/crime-outcomes-in-england-and-wales-2022-to-2023.
19. Ibid.
20. “Congress Passes TikTok Sell-Or-Ban Bill, but Legal Battles Loom.” n.d. USA Today. https://www.usatoday.com/story/news/politics/2024/04/23/congress-passes-tiktok-ban-biden-china/73424172007/.
21. “Taylor Swift Deepfakes Spark Calls for US Legislation.” 2024. BBC News. January 26, 2024. https://www.bbc.com/news/technology-68110476.amp.
22. Klippenstein, Ken. 2023. “The Government Created a New Disinformation Office to Oversee All the Other Ones.” The Intercept. May 5, 2023. https://theintercept.com/2023/05/05/foreign-malign-influence-center-disinformation/.
23. Press. 2023. “America First Legal Launches Multi-Front Investigation into Government Collusion with Pro-Censorship UK-Based Nonprofit ‘Center for Countering Digital Hate.’” America First Legal. July 20, 2023. https://aflegal.org/america-first-legal-launches-multi-front-investigation-into-government-collusion-with-pro-censorship-uk-based-nonprofit-center-for-countering-digital-hate/.
24. Davis, Jessica. 2023. “FTC Says Facebook Broke Terms of USD 5B Data Privacy Settlement.” SC Media. May 3, 2023. https://www.scmagazine.com/news/ftc-facebook-broke-terms-5b-data-privacy-settlement.
25. “Florida.” n.d. CivicPlus. Accessed April 25, 2024. https://www.civicplus.com/social-media-archiving/florida/.
26. “NY State Senate Bill 2023-S895A.” n.d. www.nysenate.gov. Accessed April 25, 2024. https://www.nysenate.gov/legislation/bills/2023/S895/amendment/A.
27. “Big Tech’s Misinformation Policies for the 2024 Election.” 2023. Axios. June 6, 2023. https://www.axios.com/2023/06/06/big-tech-misinformation-policies-2024-election.
28. Kwan, Rhoda. 2022. “Chinese Government Adviser Calls for Law to Ban ‘Fake News.’” The Guardian. March 8, 2022. https://www.theguardian.com/world/2022/mar/08/chinese-government-adviser-calls-for-law-to-ban-fake-news.
29. “China Policies Affecting Disinformation - ADTAC Disinformation Inventory.” n.d. Inventory.adt.ac. Accessed April 25, 2024. https://inventory.adt.ac/wiki/China_Policies_Affecting_Disinformation.
30. “Biggest Data Breach Fines and Settlements Worldwide 2020.” n.d. Statista. https://www.statista.com/statistics/1170520/worldwide-data-breach-fines-settlements/.
31. Ibid.
32. “Japan Setting up Rapid-Response Unit to Counter Disinformation.” n.d. The Asahi Shimbun. https://www.asahi.com/ajw/articles/14824434.
33. Buchanan, Kelly. 2018. “Malaysia: Anti-Fake News Act Comes into Force.” Library of Congress, Washington, D.C. 20540 USA. April 19, 2018. https://www.loc.gov/item/global-legal-monitor/2018-04-19/malaysia-anti-fake-news-act-comes-into-force/.