The regulatory divergence between the European Union and the United States over digital governance has been marked by escalating conflict, with the US government asserting that key EU legislation, particularly the Digital Services Act (DSA), poses a fundamental threat to American principles of free speech.
The concerns regarding the clash between the DSA and the United States’ First Amendment1 tradition emerged early in the legislative process. By April 2022, the New York Times was already outlining how the extraterritorial reach of the DSA could compel US tech platforms to moderate or remove content globally, thereby chilling First Amendment-protected speech within the United States itself2. In reality, however, do the provisions of the DSA genuinely extend beyond the EU’s borders, as the US suggests?
This initial tension escalated into a geopolitical dispute. In a coordinated diplomatic effort, the US government launched a lobbying campaign instructing diplomats to pressure EU and Member State officials to repeal or amend the DSA.3
The principle of sovereign regulatory autonomy was then challenged: the US administration simultaneously weighed imposing sanctions (likely visa restrictions) on European Union and Member State officials responsible for implementing the democratically passed law.4
This unprecedented contemplation of punitive measures against officials enforcing domestic legislation highlights the severity of the transatlantic struggle for global regulatory standard setting in the digital sphere.
The most crucial impact is the elevation of the critique from individual voices or lobbying efforts to a formal inquiry by the US government. The House Judiciary Committee is a powerful legislative body responsible for overseeing the administration of justice and is the primary committee for constitutional amendments. The publication of the Committee Report, titled “The Foreign Censorship Threat: How the European Union’s Digital Services Act Compels Global Censorship and Infringes on American Free Speech”5, and the holding of a high-profile hearing lasting over five hours on September 3, 20256, transform the issue from a “trade dispute” into a matter of national interest and a foreign threat to US constitutional rights and innovation. While the opposition is largely driven by Republicans7, the formal nature of the proceedings signals a serious challenge to the EU’s authority and provides a consolidated platform for US Big Tech companies to lobby against the regulation.
This analysis fits in the objectives of the DTU-GREITMA as it explores the strategic use of Single Market mechanisms to regulate a data-driven society and tests the EU’s aspiration to be a credible international actor promoting European standards in the complex intersection of fundamental rights, speech governance, and global cross-border interdependencies.
Hate speech, manipulation, misinformation, counterfeiting, cyberbullying… Such abuses increasingly pervade online content. To protect Europeans, the Digital Services Act (DSA) regulates the activities of platforms.
The DSA8 is a cornerstone of the European Union’s digital strategy, designed to establish a safer, more open digital single market focused on user rights and fair business competition. Adopted in October 2022 and applicable to the largest platforms since 2023, the DSA introduces a comprehensive regulatory framework for digital intermediary services operating within the EU, including social media, online marketplaces, and search engines. Its primary goals are to combat illegal and harmful content, including disinformation, while promoting a transparent online environment where fundamental user rights are protected. The law imposes obligations that scale with a platform’s size and systemic risk, with the strictest rules targeting Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), those with over 45 million monthly active users in the EU.9 Key requirements for these largest platforms include enhanced transparency regarding content moderation and algorithms, protection of minors, and stringent rules on targeted advertising. Non-compliance with the DSA can result in substantial penalties of up to 6% of a company’s annual global turnover.
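The two numerical thresholds discussed above, the 45 million monthly-active-user cutoff for VLOP/VLOSE designation and the penalty ceiling of 6% of annual global turnover, can be sketched in a few lines. The figures come from the regulation as cited; the function names and the example platform are illustrative only:

```python
# Minimal sketch (not part of the DSA text itself) of the two thresholds
# discussed above. Constants reflect the regulation; names are hypothetical.

VLOP_THRESHOLD_MAU = 45_000_000   # Art. 33 DSA: monthly active users in the EU
MAX_FINE_RATE = 0.06              # up to 6% of annual global turnover

def is_vlop(monthly_active_users: int) -> bool:
    """True if a platform meets the VLOP/VLOSE user threshold."""
    return monthly_active_users >= VLOP_THRESHOLD_MAU

def max_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a DSA non-compliance fine for a given turnover."""
    return annual_global_turnover_eur * MAX_FINE_RATE

# Illustrative platform: 50m EU users, EUR 100bn annual global turnover
print(is_vlop(50_000_000))          # qualifies as a VLOP
print(max_fine(100_000_000_000))    # fine capped at roughly EUR 6 billion
```

The point of the sketch is simply that the exposure scales with global turnover, not with EU revenue, which is part of why the largest US platforms treat the ceiling as a worldwide compliance concern.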
As seen above, this legislation has ignited a fierce regulatory and political dispute with the United States, whose tech giants are the regulation’s primary targets.
The core of the US opposition, voiced in a recent House Judiciary Committee report, frames the DSA as a “Foreign Censorship Threat” that infringes upon American Free Speech. While the DSA is founded on EU principles of protecting fundamental rights, the US views its provisions as mandatory content moderation that threatens the First Amendment tradition. This publication examines the substance of the US critique and argues that the dispute is not merely technical, but a fundamental collision between two opposing philosophies regarding freedom of expression and the role of systemic online governance.
The main claims of the report are the following:10
- The DSA compels American technology platforms to modify their worldwide terms of service and content moderation policies to align with EU mandates. Nonpublic evidence suggests that European Commission regulators explicitly expect platforms to adopt these EU obligations globally, leading to an extraterritorial regulatory effect.
- EU enforcement of the DSA is alleged to result in the censorship of protected political speech, including forms such as humor and satire. “Subpoenaed” documents indicate that European regulators are targeting and attempting to suppress core political discourse (even when the content is neither harmful nor illegal under existing EU or national law) specifically citing sensitive topics such as immigration and environmental debate.
- The report contends that private workshops held by the European Commission, such as the one in May 202511, reveal an overreach in the interpretation of key DSA terms, effectively broadening the scope of mandated platform action.
- The Committee concludes that the alleged censorship resulting from DSA implementation is not ideologically neutral, claiming that the content moderation actions are predominantly and disproportionately directed at the speech of political conservatives.
The US critique then centers on two main legal and political concerns, arguing that the DSA compels platforms to engage in over-removal that chills speech.
The DSA applies to services offered to recipients located in the Union, regardless of where the service provider is established.12 The US report alleges that this extraterritorial application forces US-based companies to choose between two undesirable options. The first is to develop complex, country-specific geoblocking and content moderation policies, which is costly; the second is to implement the strictest EU-required moderation policies globally to ensure legal compliance and streamline operations.
The US concern is that this “Brussels Effect”13 will de facto export the EU’s more restrictive speech standards to the US, leading to the suppression of content that would otherwise be constitutionally protected by the First Amendment.
Moreover, the report alleges that the DSA’s broad definitions of “illegal content” and the enforcement emphasis on countering “disinformation” create an incentive for platforms to be overly cautious in order to avoid fines of up to 6% of global turnover.
Additionally, and importantly, the House Committee’s report consistently frames the DSA and the EU through the contentious lens of “censorship”. This terminology, however, reflects a significant conceptual divergence rooted in the differing approaches to freedom of expression between the US First Amendment tradition (which itself has boundaries, though they are undeniably more permissive, and therefore less protective of those harmed, than the limits on freedom of expression in the EU) and the EU’s regulatory model. While the US emphasis is on near-absolute protection against state intervention, the DSA’s central objective is not content prohibition, but rather the creation of a safe and accountable online environment where democratic values and user rights are upheld. This focus on systemic risk mitigation necessarily raises the question of where the limits of permissible expression lie: specifically, whether the right to freedom of expression14 of users generating lawful but harmful content must be balanced against the fundamental rights of others to be protected from online harm15 (e.g., hate speech, discrimination, or abusive conduct, such as that targeting women in online spaces like e-sports and streaming platforms, or members of online minorities more generally). This perspective views platform moderation as a necessary measure to protect user safety and dignity, suggesting that the former’s freedom of expression terminates where the latter’s freedom begins.
The Committee claims the DSA’s impact on political speech is inherently biased, asserting that “The censorship is largely one-sided, almost uniformly targeting political conservatives.”16
However, this conclusion must be evaluated against the DSA’s legislative intent. The primary objective of the regulation is not the suppression of political ideas, but the mitigation of systemic risk, namely, the dissemination of disinformation and illegal content, irrespective of the political or non-political nature of the source. The report’s examples, such as the targeting of speech concerning the environment and immigration, highlight a core tension: when politically embedded speech contains verifiable falsehoods (e.g., in the context of climate change denial17) or promotes societal harm, the European framework requires platforms to act. From the EU’s perspective, regulating such content is a necessary step to protect democratic integrity and public health18. Thus, the perceived “targeting” of political discourse in the US report may be an effect of enforcing content standards against high-volume, impactful disinformation, a phenomenon currently prominent in certain political movements, rather than evidence of an intent to suppress a particular ideology.
Finally, the House Committee’s report expresses alarm that the DSA’s core provisions on systemic risks are merely a pretext for censorship, stating that “Tech companies are directed to identify ‘systemic risks’ present on their platforms, which are defined to include ‘misleading or deceptive content,’ ‘disinformation,’ ‘any actual or foreseeable negative effects on civil discourse and electoral processes,’ and ‘hate speech’.” The report frames these categories as mechanisms to suppress a “wide variety of speech.”19
However, this interpretation fundamentally overlooks the DSA’s core regulatory purpose. The inclusion of “misleading or deceptive content” and “negative effects on civil discourse” in the systemic risk framework does not automatically equate to censorship; rather, it reflects a legislative decision to hold platforms accountable for the societal harms that result from their business models.
The DSA targets disinformation and harmful content due to its potential to undermine democratic processes and inflict real-world harm on citizens, including vulnerable populations like minors. Given that social media now serves as one of the primary sources of information, the widespread dissemination of falsehoods constitutes a systemic threat to societal stability and informed public debate, a risk that extends equally to US citizens and global internet users. The Committee’s argument that the DSA is built on a belief that “government cannot trust citizens to freely decide what is true”20 (in other words, that EU governments distrust their citizens’ ability to discern truth from falsehood) is rooted in a maximalist faith in the self-regulating marketplace of ideas. However, this position fails to address the risk that citizens’ understanding of “truth” may be entirely manufactured by sophisticated, malicious disinformation campaigns. The EU’s regulation implicitly addresses this vulnerability, recognizing that unmitigated disinformation can lead to a de facto “fake truth”, necessitating regulatory intervention to protect the integrity of the information ecosystem.
To conclude, the central question is also who should define the limits of permissible digital expression, and where the right to freedom of expression must be balanced against the right to protection from harm. The EU’s framework operates on the principle that the freedom of expression is not absolute and must be balanced against other fundamental rights, such as human dignity, non-discrimination, and protection from online violence (e.g., severe misogyny or condition of minorities).21 From this viewpoint, a platform’s failure to mitigate speech that is demonstrably harmful, even if not strictly illegal, contributes to a hostile environment, thereby curtailing the freedom and participation rights of the victims.
The DSA’s mandate is thus to mitigate high-volume, impactful content that degrades the public sphere,22 as failure to act (i.e., allowing disinformation to flourish) can lead to real-world harm and undermine democratic processes. Therefore, characterizing the regulation of content that is misleading or harmful as “censorship” reflects an adherence to a maximalist interpretation of free speech that does not adequately account for the negative externalities of unmoderated digital platforms.23 The challenge remains the governance question: establishing a transparent and democratically accountable mechanism to define and limit such harmful speech without ceding content control to either unaccountable platforms or state power.
1 Which guarantees a freedom of expression far broader than that permitted under European limits on hate speech and illegal content
2 Satariano, A. (2022). E.U. Takes Aim at Social Media’s Harms With Landmark New Law. N.Y. TIMES.
3 Pamuk, H. (2025). Rubio orders US diplomats to launch lobbying blitz against Europe’s tech law. Reuters.
4 Pamuk, H. (2025). Trump administration weighs sanctions on officials implementing EU tech law, sources say. Reuters; see also Brussels Watch. (2025). Trump Administration Intensifies Lobbying Against EU Digital Services Act. Brussels Watch.
5 Committee on the Judiciary of the US House of Representatives. (2025). The Foreign Censorship Threat: How the European Union’s Digital Services Act Compels Global Censorship and Infringes on American Free Speech. Interim Staff Report.
6 House Judiciary Committee Republicans. (2025). Europe’s Threat to American Speech and Innovation. Hearing.
7 Republicans hold 55% of the Committee’s seats, see: https://judiciary.house.gov/about/membership
8 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act). (2022). OJ L 277.
9 Article 33 of the Digital Services Act.
10 Committee on the Judiciary of the US House of Representatives. (2025). The Foreign Censorship Threat: How the European Union’s Digital Services Act Compels Global Censorship and Infringes on American Free Speech. Interim Staff Report.
11 DSA Multi-Stakeholder Workshop Agenda (May 7, 2025). See Exhibit 1 of the Report.
12 Article 2.1 of the Digital Services Act.
13 The “Brussels Effect” is the unilateral mechanism by which the EU is able to externalize its regulations globally simply through the market power of the EU’s internal market. See: Bradford, A. (2019). The Brussels Effect: How the European Union Rules the World. Oxford Scholarship Online.
14 Article 11 of the Charter of Fundamental Rights of the European Union. OJ C 326.
15 A variety of articles are relevant here, namely Articles 1, 3, 7, 8, 21 and 24 of the Charter of Fundamental Rights of the European Union.
16 Committee on the Judiciary of the US House of Representatives. (2025). The Foreign Censorship Threat: How the European Union’s Digital Services Act Compels Global Censorship and Infringes on American Free Speech. Interim Staff Report, p. 4.
17 Noor, D., & Milman, O. (2025). Scientists slam Trump administration climate report as a ‘farce’ full of misinformation. The Guardian.
18 Recital 83 of the Digital Services Act.
19 Committee on the Judiciary of the US House of Representatives. (2025). The Foreign Censorship Threat: How the European Union’s Digital Services Act Compels Global Censorship and Infringes on American Free Speech. Interim Staff Report, p. 3.
20 Idem, p. 16.
21 See Enarsson, T. (2024). Navigating hate speech and content moderation under the DSA: insights from ECtHR case law. Information & Communications Technology Law, 33(3), 384-401.
22 Article 34 of the Digital Services Act: The DSA requires VLOPs/VLOSEs to assess and mitigate systemic risks related to the spread of harmful content, particularly content with foreseeable negative effects on civic discourse and electoral processes.
23 Caruso, C. (2025). Towards the Institutions of Freedom: The European Public Discourse in the Digital Era. German Law Journal, 26(1), 114-137. doi:10.1017/glj.2024.68.