The Future of Free Speech’s Comments on the U.S. AI Action Plan
We submitted a comment to the White House on its AI Action Plan. Here's why it should take free speech seriously.
On January 23, 2025, the Trump Administration issued an Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence.” It contends that the U.S. “must develop AI systems that are free from ideological bias or engineered social agendas” to maintain its status as a global leader in the field.
The Administration has invoked free speech as justification for revoking existing AI policies and directives, aligning with its broader claims about restoring free speech in America. This framing, however, raises legitimate concerns that the Administration may engage in jawboning or other forms of overreach to shape AI systems’ content moderation policies. These concerns are especially worth highlighting given the Administration’s inconsistent commitment to free speech principles.
Following the order, the White House sought public comments from industry, civil society, and other stakeholders to inform the Administration’s AI Action Plan, and The Future of Free Speech submitted a response. We believe it is crucial to demonstrate what robust First Amendment protections in AI policy should look like in practice.
Our recommendations emphasize the critical role of the First Amendment in AI governance. While AI presents new challenges, regulatory approaches must be grounded in free speech principles and avoid overreach that could stifle free expression.
We recommend that the AI Action Plan include the following steps to advance these goals:
1. Address the real harms of AI-generated content carefully and with respect for the First Amendment, avoiding broad risk-based obligations.
2. Avoid preemptive regulation of perceived risks of generative AI, particularly deepfakes in elections, absent substantial evidence of impact.
3. Prevent government jawboning of AI companies, that is, pressuring them to change content moderation policies in ways that violate their First Amendment rights.
4. Support open-source AI development to foster innovation, increase transparency, and reduce the risk of centralized censorship by a few dominant corporations.
1. Regulation should focus on real risks
Regulation of AI-generated content should be narrowly tailored to address specific, real harms – such as Child Sexual Abuse Material (CSAM), Non-Consensual Intimate Imagery (NCII), fraud, and extortion – without infringing on protected speech. Any restrictions should carefully account for the First Amendment, especially the limits it imposes on viewpoint- and content-based discrimination.
Overbroad regulations risk unintended censorship. A prime example is the TAKE IT DOWN Act, whose mandate of rapid takedowns for NCII could incentivize platforms to remove legitimate content, including satire and journalism, to avoid liability.
This is not to say that the TAKE IT DOWN Act is ill-intentioned. NCII and CSAM are real harms, and generative AI is no exception as a vector for them. However, as free speech advocates have argued, “laws intended to combat deepfake abuse could be co-opted to justify broader censorship of legally protected speech deemed ‘obscene,’ ‘indecent,’ or taboo by far-right conservatives, such as adult content, sexual education materials, and information about LGBTQ+ rights and history.” The Center for Democracy and Technology (CDT) has urged Congress to modify the bill to better protect free expression by limiting the scope of the notice-and-takedown mechanism and carving out necessary exemptions for legitimate speech.
Trump endorsed the TAKE IT DOWN Act while addressing a joint session of Congress. After confirming his intent to sign the bill into law, he stated, “I’m going to use that bill for myself too if you don’t mind because nobody gets treated worse than I do online, nobody.” His suggestion that he would use the bill against his critics deepens the free speech concerns: the bill’s broad language would not only chill free expression but could “be weaponized by federal enforcers to crack down on speech they dislike.”
Given the Administration’s record of taking action to censor speech it dislikes, these concerns about the TAKE IT DOWN Act are by no means far-fetched; they represent a real and present threat to free expression.
2. Avoid preemptive regulation of perceived AI risks
The AI Action Plan should avoid preemptively regulating ill-defined, perceived risks of generative AI, including the impact of deepfakes on elections. Over the last several years, legislation and media coverage have increasingly focused on the potential risks that AI-generated deepfakes and misinformation pose to elections. Yet policymakers have rushed to regulate despite a lack of evidence that such content meaningfully affects voter behavior: Princeton researchers and The Alan Turing Institute have found that AI-driven misinformation did not have a meaningful impact on U.S. and European elections. Regulating election-related deepfakes should require substantial evidence of impact and careful tailoring.
The U.S. has seen an influx of legislation regulating election-related manipulated media. However, as the Foundation for Individual Rights and Expression (FIRE) has noted, “some bills define ‘deepfake’ so broadly as to encompass content made or edited without the use of AI or content that doesn’t depict an identifiable person.” Such broad language restricts protected speech and can violate the First Amendment’s protection of satire, political commentary, and parody.
3. Stop government jawboning
Government intervention in AI governance should avoid broad, preemptive censorship, whether by attempting to define “neutrality” or by overreaching into the content policies of generative AI systems. Transparency about government requests to and communications with AI companies regarding their speech policies and product decisions would also greatly advance this goal. A commitment to the First Amendment requires the government to avoid both censorship and jawboning.
Government "jawboning"—the pressuring of private companies to change their content practices without formal regulation— violates the spirit of the First Amendment by circumventing constitutional protections against government censorship. The government must remain neutral toward private content decisions, regardless of which political viewpoints benefit.
Both Democratic and Republican administrations have engaged in this practice. During Biden’s presidency, officials pressured tech companies to moderate misinformation about COVID-19 and elections. While some of those concerns were reasonable, government involvement blurs the line between content moderation and state-imposed censorship. The Trump Administration’s criticism of the former administration’s free speech infringements, meanwhile, is often contradicted by its own approach to dissent and expression, exposing a double standard.
A January 2025 Executive Order on “Restoring Free Speech and Ending Federal Censorship” claims to protect the First Amendment. In practice, however, it has been wielded, along with other threats, to pressure AI companies not to moderate certain types of content, even when that content violates their own policies. What may seem like a win for free speech instead opens the door to concerning levels of government interference in private decision-making.
If AI companies, especially those developing foundational models, are pressured to adopt government-preferred moderation policies, the risks to free speech will be profound. AI-generated content, search results, and even chatbots could be manipulated to favor political narratives.
4. Foster open-source AI development
The AI Action Plan should consider the role that open-source models may play in fostering innovation, increasing transparency, and reducing the risk of censorship. One of the key threats to free speech in AI governance is the concentration of AI development in a few large corporations, which have the power to dictate what types of content their models can and cannot generate.
A concentrated AI market could stifle free speech. A smaller number of AI companies could mean that fewer people determine which content is acceptable, leading to less intellectual vitality and viewpoint diversity. This would be a particularly risky situation if a few closed-source systems ended up dominating the market and being used by most citizens and institutions.
Furthermore, there is a concern that AI companies will ideologically align with the preferred content moderation policies of whichever Administration is currently in power. Recently, OpenAI and Meta have publicly adjusted their policies to align with the Trump Administration’s commitment to ‘free speech.’
Open-source models—AI systems that are freely available to use, study, modify, and share—offer an exciting opportunity for free speech. By allowing developers, researchers, and civil society organizations to analyze and improve models, open-source AI systems increase accountability and reduce the risk of centralized censorship. Ensuring that AI remains open and accessible will preserve diverse perspectives, prevent undue corporate-government entanglement, and protect free expression in the rapidly evolving AI landscape.
Why This Matters Now
With the Trump Administration’s revocation of the Biden-Harris AI Executive Order, the U.S. is at a pivotal moment in shaping its AI policy for the years ahead. Given the Administration’s inconsistent commitment to free speech, it is essential to establish clear First Amendment protections in AI governance.
As the Trump Administration crafts its AI Action Plan, we urge policymakers to resist the temptation to implement broad, risk-based regulations that stifle speech and innovation. While AI presents genuine challenges, the best approach is one that adheres to constitutional principles and ensures that free expression remains protected in the digital age.
We welcome continued dialogue on this issue and encourage all stakeholders to advocate for policies that uphold both technological progress and civil liberties.
Isabelle Anzabi is a research associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.