In the midst of the recent Israel-Hamas war, Thierry Breton, the European Commissioner for the Internal Market, issued a warning to social media platforms such as Meta, TikTok, and X (formerly Twitter), urging them to be vigilant against disinformation and violent posts related to the conflict. Unlike the United States, where the First Amendment protects speech, European regulations such as the Digital Services Act hold platforms accountable for illegal online content. This article examines the potential impact of global regulations on social media platforms, focusing on the contrasting regulatory approaches of Europe and the United States.
Under the Digital Services Act, large online platforms are required to have robust procedures for removing hate speech and disinformation. Failure to comply can result in fines of up to 6% of a platform's global annual revenue. Breton's warning signals that the European Commission is closely monitoring platform activities. However, it remains uncertain how these regulations will affect content moderation both in Europe and globally. Social media companies have previously extended policies adopted to satisfy one jurisdiction's rules, such as the EU's GDPR, to users everywhere, but it is unclear whether they will apply the Digital Services Act's content moderation standards worldwide in the same way.
The First Amendment in the United States protects a wide range of speech, including abhorrent speech, and prohibits the government from stifling it. Consequently, the U.S. government's efforts to combat misinformation and to encourage moderation of election-related content have faced legal challenges. State attorneys general argued that the Biden administration's suggestions that social media companies moderate certain content violated the First Amendment, and a recent court ruling found that government coercion of content moderation likely did so, highlighting the limits on government interference with online platforms.
Unlike Europe, the United States has no legal definition of hate speech or disinformation under which such speech can be punished. While some speech may fall under the narrow exception for incitement to imminent lawless action, the First Amendment forecloses many of the provisions found in the Digital Services Act. Consequently, the U.S. government cannot exert the same pressure on social media platforms as European regulators can, since excessive coercion could itself be treated as unconstitutional government regulation of speech.
In the United States, government requests to social media platforms must be carefully worded to avoid being perceived as threats or enforcement actions. It is crucial for governments to emphasize that requests are not accompanied by penalties or enforcement actions to preserve the delicate balance between free expression and regulation. For instance, New York Attorney General Letitia James sent letters to social media platforms urging them to remove calls for violence and terrorist acts but refrained from threatening penalties for non-compliance.
The impact of global regulations on content moderation remains uncertain. While social media companies have navigated varied speech restrictions across countries, they may choose to apply Digital Services Act-compliant policies only within Europe. The tech industry has historically applied rules such as the GDPR more broadly than required, but platforms could instead leave it to individual users to customize their settings and filter out certain types of content according to personal preference.
The contrasting regulatory approaches of Europe and the United States have significant implications for social media platforms. While European regulations empower regulators to enforce content moderation standards and impose fines, the First Amendment in the United States limits government intervention. It is crucial for governments to balance the fight against hate speech and disinformation with the protection of free expression. Ultimately, the global nature of social media platforms necessitates ongoing dialogue between regulators, platforms, and users to navigate the complex landscape of online content moderation.