Another year, another session of AI overregulation
As lawmakers kick off the 2026 legislative session, a new and consequential phase in the conversation about free speech and artificial intelligence is already taking shape in statehouses across the country. Yet another crop of AI bills is set to dictate how people use machines to speak and communicate, raising fundamental constitutional questions about freedom of expression in this country.
The First Amendment applies to artificial intelligence in much the same way it applies to earlier expressive technologies. Like the printing press, the camera, the internet, and social media, AI is a tool people use to communicate ideas, access information, and generate knowledge. Regardless of the medium involved, our Constitution protects these forms of expression.
As lawmakers revisit AI policy in 2026, it bears repeating that existing law already addresses many of the harms they hope to target, including fraud, forgery, defamation, discrimination, and election interference, whether or not AI is involved. Fraud is still fraud, whether you use a pen or a keyboard, because liability properly attaches to the person who commits an unlawful act, not to the instrument used to commit it.
Many of the AI bills introduced or expected this year rely on regulatory approaches that raise serious First Amendment concerns. Some would require developers or users to attach disclaimers, labels, or other statements to lawful AI-generated expression, forcing them to serve as government mouthpieces for views they may not hold. FIRE has long opposed compelled speech in schools, on campuses, and online, and the same concerns apply to AI systems.
Election-related deepfake legislation remains a central focus in 2026. Over the past year, multiple states have introduced bills aimed at controlling AI-generated political content. But these laws often restrict core political speech, and courts have applied well-settled First Amendment jurisprudence to find them unconstitutional. For example, in Kohls v. Bonta, a federal district court struck down California’s election-related deepfake statute, holding its restrictions on AI-generated political content and accompanying disclosure requirements violated the First Amendment. The court emphasized that constitutional protections for political speech, including satire, parody, and criticism of public officials, apply even when new technologies are used to create that expression.
Another growing category of legislation seeks to restrict “chatbots,” or conversational AI, using frameworks borrowed from social media laws. These include blanket warning requirements telling users they are interacting with AI, sweeping in many ordinary, low-risk interactions where no warning is needed. Some proposals would categorically prohibit chatbots from being trained to provide “emotional support” to users, effectively imposing a direct and amorphous regulation on the tone and content of AI-generated responses. Other proposals require age or identity verification, either explicitly or as a practical matter, before a user may access the chatbot.
These kinds of constraints place the government between people and the information they have a constitutionally protected right to access. They censor lawful expression and burden the right to speak and listen anonymously. For these reasons, courts have repeatedly blocked similar restrictions when applied to social media users and platforms, and they are likely to reach the same result when the restrictions target AI.
Broad, overarching AI regulatory bills have also returned, with at least one state introducing such a proposal so far this cycle. First introduced in several states in 2025, these bills go well beyond narrow use cases, seeking to impose sprawling regulatory frameworks on AI developers, deployers, and users through expansive government oversight and sweeping liability for third-party uses of AI tools. When applied to expressive AI systems, these approaches raise serious First Amendment concerns, particularly when they involve compelled disclosures or interfere with editorial judgment in AI design.
Addressing real harms, including fraud, discrimination, and election interference, can be a legitimate legislative goal. But through FIRE’s decades of experience defending free expression, we’ve observed how expansive, vague, and preemptive regulation of expressive tools often chills lawful speech without effectively targeting misconduct. That risk is especially acute when laws incentivize AI developers to suppress lawful outputs, restrict model capabilities, or deny access to information in order to avoid regulatory exposure.
Rather than targeting political speech, imposing age gates on expressive tools, or mandating government-scripted disclosures, government officials should begin with the legal tools already available to them. Existing laws provide remedies for unlawful conduct and allow enforcement against bad actors without burdening protected expression or innovation. Where gaps truly exist, any legislative response should be narrow, precise, and focused on actionable conduct.