.exe-pression: January 2025
A Monthly Newsletter on Freedom of Expression in the Age of AI
The Future of Free Speech is excited to announce our new monthly newsletter, “.exe-pression.” Each month, Senior Research Fellow Jordi Calvet-Bademunt and Research Associate for AI Policy Isabelle Anzabi will explore the latest developments in global AI policy affecting free speech.
TL;DR
In the United States, the Trump Administration reshaped the U.S. approach to AI policy by rescinding Biden’s Executive Order on AI and issuing a new Executive Order of its own. Meanwhile, Texas is considering risk-based legislation to govern AI.
The European Union already has risk-based legislation in place, and the European Commission recently released the second draft of the General-Purpose AI Code of Practice. The first measures of the AI Act took effect in early February.
South Korea passed a comprehensive AI regulatory framework. Brazil may be on track to follow suit, as its Senate has passed a similar bill.
The Chinese company DeepSeek sent shockwaves throughout the AI world by launching a model that performed on par with flagship U.S. models while raising concerns about censorship.
Main Stories
The U.S. President Revokes AI Executive Order, Reshaping the Approach to AI Policy
The U.S. is shifting its approach to AI governance. Immediately after his inauguration, President Trump rescinded the “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” that former President Biden issued in 2023. Just a few days later, President Trump issued a new Executive Order, “Removing Barriers to American Leadership in Artificial Intelligence,” which revokes existing policies that act as barriers to American AI innovation, without identifying which ones.
The Order highlights the objective of developing systems “free from ideological bias or engineered social agendas.” What this means in practice, and what it implies for free speech, remains to be seen.
The Executive Order requires a new AI action plan that sustains and enhances “America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” It also requires the removal of actions that are “inconsistent with, or present obstacles to” this objective.
European Commission Releases the Second Draft of the General-Purpose AI Code of Practice
The EU promulgated its AI framework, the AI Act, in June 2024. In December 2024, the European Commission released the second draft of the General-Purpose AI Code of Practice. Once approved, the Code will guide general-purpose AI providers in implementing the AI Act’s provisions on systemic risk and other requirements until harmonized standards are adopted in the coming years.
Over 1,000 people are developing the General-Purpose AI Code of Practice, a process led by five chairs supervising four working groups on issues like “risk identification and assessment” and “technical risk mitigation.” The final draft of the Code is expected to be presented in April 2025 and subsequently considered for approval. So far, the draft Code has raised concerns about how some of its provisions – like one aimed at mitigating the risk of harmful manipulation – could impact free speech.
To understand why free speech advocates should press for clear protections for freedom of expression in the implementation of the AI Act, you can read Jordi’s article: Safeguarding Freedom of Expression in the AI Era. To learn more about the draft General-Purpose AI Code of Practice, you can read the European Commission’s official press release.
South Korea Passes the AI Basic Act
South Korea’s National Assembly recently passed the “Act on the Development of Artificial Intelligence and the Establishment of Trust” (AI Basic Act), which will enter into force in January 2026. Following the European Union’s AI Act, the AI Basic Act establishes the world’s second comprehensive AI regulatory framework.
The law's stated purpose is to protect rights and dignity, enhance quality of life, and boost national competitiveness by establishing principles for AI development and trust. Notably, the AI Basic Act imposes transparency, safety, and reliability measures for “high-impact” AI. High-impact AI includes systems operating in 11 designated areas that could substantially affect or risk human life, physical safety, and fundamental rights.
The Act also requires companies to label generative AI outputs and to notify users when they are interacting with AI systems. As with the EU AI Act, free speech advocates should ensure that Korea’s AI Basic Act respects free speech and protects users’ right to access information.
To learn more about Korea’s new AI law, read IAPP’s brief: Analyzing South Korea's Framework Act on the Development of AI.
The Brazilian Senate Passes a Comprehensive AI Bill
Brazil could be the next country to establish an AI framework. In December 2024, the Brazilian Federal Senate passed Bill No. 2338/2023, which aims to regulate the development and use of artificial intelligence through a risk-based approach. The bill prohibits “excessive risk” AI systems and requires high-risk systems to undergo algorithmic risk assessments, human supervision, and additional transparency measures.
The bill provides penalties for violations and creates a central oversight authority, the National System for Artificial Intelligence and Governance. It also includes watermarking obligations and provisions aimed at protecting the fundamental rights of individuals and groups affected by AI systems. The Chamber of Deputies is now considering the bill.
To learn more about Brazil’s Bill, read Mattos Filho’s brief, Regulatory framework for artificial intelligence passes in Brazil’s Senate.
Texas Responsible AI Governance Act (TRAIGA) Introduced in the Texas Legislature
Texas is also considering a risk-based approach to governing AI. In December 2024, the Texas Responsible AI Governance Act was introduced in the Texas Legislature. It seeks to regulate the development and deployment of AI systems in the state. The bill would ban AI systems that pose an unacceptable risk and establish obligations for high-risk AI systems.
To learn more about Texas’ Bill, read Orrick’s brief, The Texas Responsible AI Governance Act: 5 Things to Know.
DeepSeek’s AI Model Rivals Western Flagship Models
DeepSeek, a Chinese AI startup, launched an open-source AI model that rivals industry-leading models developed in the West, including OpenAI’s. Its chatbot app quickly became one of the most downloaded apps in several markets, with millions of downloads.
DeepSeek claims the model cost only $6 million to train, though this figure has been disputed. Despite the model’s promising performance, it has raised concerns about censorship, data privacy, and national security. Chinese AI companies must comply with strict government oversight and regulations on “sensitive topics,” which raises questions about the model’s ability to provide unbiased responses.
To learn more about DeepSeek’s new model, read The New York Times’ brief: What to Know About DeepSeek and How It Is Upending AI. The Guardian analyzed how the model presented propaganda and censored sensitive topics related to China: We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan.
Links to Additional News
Industry:
OpenAI plans to transition from a non-profit to a for-profit structure to advance its mission
Stargate Project intends to invest $500 billion to build new U.S. AI infrastructure
Google pushes global agenda to educate workers and lawmakers on AI
Government:
The U.S. House of Representatives Bipartisan AI Task Force releases a report offering key principles, findings, and proposals on AI
New York Governor signs the Legislative Oversight of Automated Decision-making in Government Act (LOADinG Act)
The Pope issues the Guidelines on AI, establishing ethical principles for the development and use of AI in the Vatican City State
Biden-Harris Administration expands export controls for advanced computing chips and certain closed AI model weights
Research:
MLCommons’ AILuminate v1.0 benchmark release provides safety testing for general-purpose chat systems across twelve hazard categories
A new paper from Anthropic provides the first empirical example of an LLM engaging in alignment faking without having been trained or instructed to do so
A Stanford Law Review article proposes “speech certainty” as an approach to conceptualizing machine learning algorithms under existing First Amendment jurisprudence
The Future of Free Speech in Action
The Future of Free Speech will host a RightsCon 2025 panel on “T&S in Gen AI: Agreeing on Principles for Freedom and Safety.” The discussion will be held online on February 26 and will feature panelists Sarah Shirazyan (Meta, Stanford), David Evan Harris (UC Berkeley), and Sayash Kapoor (Princeton), with Jordi Calvet-Bademunt (The Future of Free Speech) moderating. You can register for the event here.
Jordi Calvet-Bademunt will be speaking at the 2025 Tennessee Campus Civic Summit on February 28 at the University of Tennessee Chattanooga, discussing the implications of AI for democracy. Attendance is in person, and you can register here.
Jordi Calvet-Bademunt is a Senior Research Fellow at The Future of Free Speech and a Visiting Scholar at Vanderbilt University. Isabelle Anzabi is a Research Associate at The Future of Free Speech, where she analyzes the intersections between AI policy and freedom of expression.