FIRE Statement on Free Speech and Social Media
It took centuries for individuals and governments to figure out how to live with and regulate the effects — good and bad — of the printing press, which connected millions of people in conversation with one another. Social media, in its relatively few years of existence, has added billions to the discourse. Social media companies now wield immense power to shape and inform important national debates, and some have gone too far in regulating the speech of their users.
FIRE is disturbed by calls for government action to force or pressure social media companies to censor. Bad laws are already on the books. FIRE sued the state of New York in 2022 to challenge its unconstitutional law requiring social media networks to address speech that could “vilify” or “humiliate” people based on protected class (such as race, gender, or religion).
At the same time, we must resist the impulse to invoke coercive state power to force social media platforms to host or publish content they may wish to restrict. Two states have already attempted to do exactly that. We address this new push for social media regulation in depth to explain why it is the wrong response to legitimate concerns about the major platforms’ content moderation.
Any government intrusion into platforms’ editorial discretion threatens the platforms’ own expressive rights under the First Amendment — and potentially that of other speakers. Solutions that reduce the scope and power of the First Amendment are likely to be no solutions at all, and their effects will almost certainly reverberate beyond social media.
Ultimately, FIRE believes cultural arguments for free expression on social media must prevail, and no government can pass a law to make that happen. Unless and until such advocacy proves ineffective, it is far preferable to urge social media companies to voluntarily protect free speech culture in their terms of service and in deed.
A choice between two NetChoices
Texas and Florida apparently take a different view. Their elected officials believe government regulation is the answer to social media platforms’ viewpoint-discriminatory practices. Each of these states recently passed laws prohibiting large social media platforms from refusing to host certain speech or users.
In NetChoice, LLC v. Moody, the U.S. Court of Appeals for the Eleventh Circuit held parts of Florida’s SB 7072 are likely unconstitutional. The law prohibits private social media platforms from banning political candidates or restricting content from candidates and media organizations. In its ruling, the court stated:
Social-media platforms exercise editorial judgment that is inherently expressive. When platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First-Amendment-protected activity.
[ . . . ]
Like parade organizers and cable operators, social-media companies are in the business of delivering curated compilations of speech created, in the first instance, by others. Just as the parade organizer exercises editorial judgment when it refuses to include in its lineup groups with whose messages it disagrees, and just as a cable operator might refuse to carry a channel that produces content it prefers not to disseminate, social-media platforms regularly make choices “not to propound a particular point of view.”
But in NetChoice, LLC v. Paxton, a divided panel of the U.S. Court of Appeals for the Fifth Circuit rejected a First Amendment challenge to Texas’ law, HB 20, which is most notable in prohibiting large private platforms from removing or restricting content based on viewpoint, thereby forcing them to distribute speech they do not wish to publish and/or with which they do not want to associate.
The Eleventh Circuit rightly observed that social media platforms are inherently expressive private enterprises. Like newspapers and magazines, they make decisions about what speech they do or do not want to publish and promote based on voluntarily adopted standards. When they ban speech, they send a message of disapproval of that speech. The majority opinion in Paxton, authored by Judge Andrew Oldham, says this is “censorship” unprotected by the First Amendment. As the Eleventh Circuit recognized in Moody, however, longstanding Supreme Court precedent establishes the First Amendment right of publishers to exercise editorial judgment in deciding what speech to platform, even when the speech (for example, an opinion piece) is that of a speaker unaffiliated with the publisher.
Not only does Judge Oldham’s majority opinion deny that editorial discretion is First Amendment-protected expression, it denies that the major social media platforms are even exercising such discretion. The majority emphasizes that social media platforms, unlike newspapers, “do not choose or select material before transmitting it: They engage in viewpoint-based censorship with respect to a tiny fraction of the expression they have already disseminated.”
First, this is factually incorrect. Platforms block users — preventing them from saying anything on the platform in the future — and use algorithms that both filter out content before it is posted and determine how other content is arranged and presented. Second, it is unclear how the timing of the exercise of editorial discretion is relevant. Hundreds of millions of posts go up on Facebook and Twitter every day. As Judge Leslie Southwick notes in his dissent: “Editorial discretion is exercised when it is sensible and, in many situations, even possible to do so. The First Amendment fits new contexts and new technologies as they arise.” And while social media companies might not take action against the vast majority of content on their platforms, the fact remains that they use standards (however inconsistently or unwisely) to control what speech they display and how it is displayed. The dissent rightly notes that existing First Amendment doctrine “protects the curating, moderating, or whatever else we call the Platforms’ interaction with what others are trying to say.”
Judge Oldham also opined that Texas reasonably classified the largest social media platforms as common carriers by virtue of their market dominance, though no other judge joined this analysis, which is therefore not controlling. A common carrier designation carries with it an obligation to provide members of the public with fair and equal access to the carrier’s services. Common carriage is a complex doctrine with a long history. Courts have traditionally applied it to transportation providers like ferries and railroads that serve the public indiscriminately, and later to communication services such as telegraph and telephone companies. But it would be a big leap (over the First Amendment) to extend the doctrine to private entities engaged in expressive activity, requiring them to adhere to viewpoint neutrality. After all, if common carriage is effectively the provision of service to the public on a nondiscriminatory basis, isn’t a platform’s refusal to serve certain users or host certain content, based on its own judgment, a deliberate and significant step away from common carriage? Moreover, to the extent that Judge Oldham (and those who agree with him) would impose common carriage obligations on social media platforms because they willingly serve virtually all of the public while blocking or removing only a relatively small proportion of user-generated content, that approach creates an incentive for platforms to be more rather than less selective in an effort to maintain private-speaker status and protections — the exact opposite of the goal of Texas’ law.
There is a stronger case for imposing viewpoint-neutrality obligations on certain online intermediaries and infrastructure companies — internet service providers, web hosting services, content delivery networks, domain registrars, payment processors, and the like — that enable websites to exist and users to access them. Unlike social media platforms that expressly reserve the right to publish or refuse to publish speech using their own editorial judgment (as outlined in their terms of service) — and which use algorithms and other means to curate content and drive engagement — these types of enterprises that provide mere transmission of user content or funds (in the case of payment services) do not exercise editorial discretion.
Social media companies may restrict or privilege speech in unwise, inconsistent, or illiberal ways, and may act in ways that run contrary to the decades of wisdom compiled in our First Amendment jurisprudence. These decisions may be troubling and worthy of criticism. But the First Amendment itself is a bulwark against the state telling private speakers or publishers: We don’t like the way you’re exercising your First Amendment rights, so you’d better do it differently. That protection applies both to a private actor publishing or hosting speech the government wants taken down, and a private actor refusing to publish or host speech the government wants platformed. The Eleventh Circuit rightly stated it’s not the government’s role to attempt to level the expressive playing field. And as Judge Southwick observed, the First Amendment protects a “wide-ranging, free-wheeling, unlimited variety of expression — ranging from the perfectly fair and reasonable to the impossibly biased and outrageous.” To upend this principle is to upend the First Amendment itself.
The consequences of weakening First Amendment protections for social media platforms
Those who support government control of large social media platforms’ content moderation should pause to consider the broader ramifications. While Texas’ law applies only to social media platforms with more than 50 million users, the Fifth Circuit’s application of First Amendment principles in Paxton could ultimately affect smaller, niche internet forums and speech-hosting websites that moderate content to keep conversations on topic or advance specific views — or even offline media. Think subreddits, Quora Spaces, blog comment sections, Discord channels, and TripAdvisor forums, just to name a few. (In fact, there is a plausible argument that Reddit, Discord, and even Wikipedia meet HB 20’s definition of “social media platform.”) The Electronic Frontier Foundation offers some disturbing hypotheticals:
For example, the Fifth Circuit’s holding could allow laws that require sites supporting people suffering from chronic fatigue syndrome to post comments from people who don’t believe this ailment is a real disease. Sites promoting open carry gun rights that disallow comments critical of gun rights would violate such laws. A site dedicated to remembering locals whose families were affected by the Holocaust could be forced to allow comments by Holocaust deniers. Platforms unable to withstand an attack of harassing comments from trolls could be forced offline altogether.
Under the Paxton majority’s reasoning, a wide variety of online forums “are now potentially under the thumb of the state, which could force them to serve its interests by calling the removal of opposing views ‘censorship.’” There is nothing in the Fifth Circuit’s decision that prevents Texas from passing a new law tomorrow that reaches smaller platforms. And those smaller platforms would be far less able than Twitter or Meta to absorb litigation and compliance costs that result from laws like Texas’ HB 20, potentially making the largest platforms even more dominant. People have a right to build and shape online communities to serve particular interests or views — which may entail excluding certain content or individuals — without government interference.
And what about bookstores, movie theaters, and other establishments that selectively distribute others’ speech? In general, these entities no more convey a coherent message through their speech distribution than do social media platforms. Under the Paxton majority’s reasoning, is a bookstore’s decision not to stock a book or to take it off the shelf, or a theater’s refusal to screen a particular movie, “censorship” that the state has the authority to regulate? Does a comedy club not have a First Amendment right to usher an open mic comedian off the stage because it disapproves of his material? Well-established First Amendment jurisprudence runs counter to the rule Paxton suggests.
One possible response is that the Fifth Circuit’s “content moderation = unprotected censorship” rationale applies only to speech-hosting platforms that achieve a certain size or share of the market. In other words, when a private platform becomes sufficiently large, popular, and dominant in the market, it loses constitutional rights. But that too is a radical reinterpretation of the First Amendment. And it still leaves large companies like Barnes & Noble and Amazon, which are arguably as dominant in their markets with as wide a reach as popular social media services, vulnerable to the government forcing them to display or distribute speech against their will.
Twitter, Facebook, YouTube, Instagram, TikTok, and other major social media platforms operate and compete in the free market with other platforms with potentially different — and better — approaches to content moderation. There is nothing inevitable about the big platforms’ dominance. To be sure, network effects and anticompetitive behavior may make it highly difficult for other platforms to gain a foothold in the market. But that is ideally answered with speech-agnostic structural reforms — which we discuss further below — not a full-frontal, prophylactic assault on editorial rights protected by the First Amendment.
Another justification for the Fifth Circuit’s decision might be that social media platforms do not face the space constraints confronting other media, so, unlike in the pages of a newspaper, there is plenty of room for everyone who wants to speak. Space constraints may be a relevant factor, but they are not dispositive. Whatever constraints exist, they do not change the inherently expressive nature of what social media companies are doing — publishing, excluding, promoting, deprioritizing, and otherwise curating speech on their privately owned platforms according to their own judgments about what type of content they want to display for the world. The space constraints rationale also presents a risk to the rights of other, smaller online forums and communities that similarly enjoy the enormous capacity of the internet. And while the internet’s capacity is theoretically unlimited, constraints do exist for websites even in that medium. Storage space is not infinite, nor is the amount of content that editorial staff (or even algorithms) can review, or the resources that online services have to dedicate to it. Social media users’ ability to consume content has its limits as well, given that no person has infinite time and attention to spare.
Free speech has a winning record
Modern First Amendment jurisprudence is commonly traced to Justice Oliver Wendell Holmes’ famous 1919 dissent in Abrams v. United States, in which he noted the historical tendency of humans to hold with unshakeable confidence beliefs later discredited or shown to be false. Holmes argued we are better off letting ideas freely collide than vesting some fallible authority with the power to decide, once and for all, which ideas are true or good and which are false or unspeakable:
But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution.
Of course, there is no guarantee the best ideas will always prevail over bad ones in free and open competition. But as FIRE President and CEO Greg Lukianoff put it, “open discussions where all opinions are aired are more likely than restricted discussions to lead away from error and toward better arguments and ideas.” (As Lukianoff also explains, the “marketplace of ideas” metaphor, which is particularly salient in the contexts of political debate and academic study, is not the only — or even necessarily the most important — justification for free speech. If we want to know the world as it really is, it is critical to know what other people actually think and why, especially if they hold false or pernicious beliefs. This is valuable information in and of itself.)
However imperfectly our society’s marketplace of ideas may function, the last century of First Amendment jurisprudence reflects a considered judgment that government intrusion into that marketplace generally leads to worse, not better, outcomes for free expression. The wisdom of that position has held true through more than one revolution in communication technology that radically altered the landscape of human expression. We should not so quickly assume it will not survive another.
Without a doubt, we should identify and object to social media companies’ policies and practices that erode our culture of free expression. But the First Amendment constrains government because government is an exceptional threat to freedom of speech. Freed from the constraints of the First Amendment, the government could pass laws that reach speech anywhere and everywhere. It could impose the harshest of penalties, including imprisonment. And political leaders would be free to act on dangerous incentives to muzzle their opponents, while those in the majority could easily suppress minority or unorthodox views. Responding to private censorship by giving the government more regulatory power over speech is not only cruelly ironic but creates a serious risk of abuse liable to make the situation even worse.
Freedom of internet speech
Speech does not receive less First Amendment protection in what the Supreme Court called “the vast democratic forums of the Internet.” In the late 1990s, the Court refused to deviate from longstanding constitutional principles when faced with the transformational change in human communication brought on by widespread internet use:
As a matter of constitutional tradition, in the absence of evidence to the contrary, we presume that governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it. The interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship.
Two decades later, following the rise of social media, the Court held fast to this position, calling the internet — and social media in particular — one of the “most important places” for the exchange of views. Acknowledging that, on a historical scale, the internet and social media are still new phenomena, the Court showed appropriate humility:
While we now may be coming to the realization that the Cyber Age is a revolution of historic proportions, we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be. The forces and directions of the Internet are so new, so protean, and so far reaching that courts must be conscious that what they say today might be obsolete tomorrow.
Given this uncertainty, the Court recognized the need to “exercise extreme caution before suggesting that the First Amendment provides scant protection” to speech in that medium.
Social media platforms’ decisions about what speech to host or amplify are an exercise of their First Amendment right to editorial discretion. As the Electronic Frontier Foundation observed, “That First Amendment right helps the internet grow and provide diverse forums for speech.”
How to combat social media companies’ overzealous content moderation
Of course, to object to government interference with social media platforms’ content moderation is not to simply accept the status quo. What these platforms can legally do is a different question from what they should do.
It is deeply concerning that a handful of companies have so much power over public conversation. Arbitrary and viewpoint-discriminatory decisions to block users and remove, downrank, or demonetize content — often with little or no explanation — can smother online discourse and prevent social media from reaching its full potential as a positive forum for debate, discussion, innovation, artistic expression, understanding, and human connection. FIRE doesn’t just fight for a strong First Amendment. We also want a culture of free speech to flourish. And private speech platforms can and do act in ways that impede that goal.
It’s important for citizens and advocacy groups to keep pressure on social media companies to harmonize their policies with expressive freedom. In early 2022, Greg Lukianoff outlined three key steps that social media platforms should take to advance free expression:
- Look to First Amendment law for guidance on implementing free speech-friendly policies. As private companies, social media platforms are under no legal obligation to enforce First Amendment free speech standards. However, it makes great sense to voluntarily borrow their wisdom. First Amendment law is the longest-sustained meditation on how to protect free speech in the real world. This body of law, honed over the course of a century, can provide practical guidance and real-world precedents for managing a platform.
- Eliminate viewpoint-discriminatory policies and practices. Viewpoint discrimination — singling out specific points of view for punishment while leaving others alone — is practically the definition of censorship. Banning or otherwise punishing speakers on the basis of their viewpoint not only chills speech but can intensify polarization. Private censorship may not be government action, but it is still censorship. Social media platforms should craft — and honor, with both practical and intellectual consistency — policies explicitly stating that no one will be banned or otherwise penalized merely for expressing an opinion.
- Use categories to clearly define sanctionable speech. American law takes a categorical approach to distinguishing protected from unprotected speech. By limiting what can be banned to categories of unprotected speech that jurisprudence has helped hone over the course of decades, this approach limits arbitrary censorship that can result from ad hoc balancing tests. Categories of unprotected speech in the law include incitement to imminent lawless action, defamation, obscenity, true threats, and speech that is materially part of criminal conduct. By reflecting categories of speech already existing in law, social media policies can gain clarity and enforcement consistency.
Of course, social media platforms are businesses. They need to attract users and make money — whether through subscription fees or advertising — to survive. That market pressure is itself a natural curb on wholesale viewpoint discrimination, and one superior to government intervention. Ideally, platforms that shun arbitrary and viewpoint-discriminatory content moderation will attract users and advertisers. If allowing a richer diversity of views and ideas on social media makes platforms fail, that is an indictment of our free speech culture. It means we need to do a better job explaining the many ways in which free expression and tolerance are critical to our pluralistic democracy.
With our newly expanded mission, FIRE will continue to monitor the state of free speech on social media. We are developing a rating system for major social media companies based on the extent to which they restrict speech — similar to the Spotlight Database we maintain for college and university speech policies — to inform users of the bargains they enter when engaging with each social media platform, and to ensure each platform is aware its adherence to free speech principles is being monitored and reported.
FIRE will also advocate for greater transparency in how social media companies moderate speech on their platforms, though government-imposed disclosure requirements can themselves raise First Amendment issues. Users deserve to know, with reasonable precision: what speech is off limits; whether platforms are implementing their policies fairly and consistently; the grounds for any takedown, deplatforming, or deprioritization they suffer, and how to appeal those decisions; and the extent of government involvement in content moderation decisions — which in some instances may violate users’ First Amendment rights. Better information about what platforms are doing puts us in a stronger position to fight censorship.
Various transparency provisions in the Florida and Texas laws involve measures that social media companies should be taking: disclosing how they moderate content, publishing an acceptable use policy that accurately and clearly describes what kind of content is allowed on the platform and how the policy is enforced, providing users with detailed notice of alleged policy violations, and periodically issuing reports with data on the platform’s content moderation activity. These are all good practices. However, government-mandated disclosure requirements that are viewpoint-discriminatory, overly burdensome, or unduly intrusive on editorial discretion may violate the First Amendment. (The Fifth Circuit in Paxton held the First Amendment did not protect the platforms from the Texas law’s disclosure requirements, and in Moody, the Eleventh Circuit held all but one of the Florida law’s disclosure provisions are likely constitutional.)
Certainly, no state should enact a transparency law with the goal of pressuring social media companies to remove more lawful content from their platforms. But a new California law attempts to do just that. The law requires social media companies to file semiannual reports with the state’s attorney general that publicly disclose their content moderation policies regarding categories such as “hate speech,” “misinformation,” and “extremism.” In a statement about the law, Gov. Gavin Newsom said, “California will not stand by as social media is weaponized to spread hate and disinformation that threaten our communities and foundational values as a country.” That may serve as a campaign slogan, but as an ethos of speech regulation — one that on its face encompasses constitutionally protected expression and leverages a term often used to target disfavored speech — it leaves much to be desired.
What else can be done?
Mike Masnick, editor of the Techdirt blog, makes the case for building decentralized protocols — “instructions and standards that anyone could then use to build a compatible interface,” similar to how email works — and moving away from centrally controlled private platforms:
Moving to a world where protocols and not proprietary platforms dominate would solve many issues currently facing the internet today. Rather than relying on a few giant platforms to police speech online, there could be widespread competition, in which anyone could design their own interfaces, filters, and additional services, allowing whichever ones work best to succeed, without having to resort to outright censorship for certain voices. It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.
In short, it would push the power and decision making out to the ends of the network, rather than keeping it centralized among a small group of very powerful companies.
To the extent the government does intervene in social media, it should do so in a way that does not involve regulation of speech. EFF advocates antitrust reforms, interoperability (allowing users to share content across platforms), and other steps to increase competition in the industry, freeing users from “Big Tech silos” without infringing any platform’s right to choose what speech it hosts.
FIRE is evaluating these ideas and the promise they hold for bolstering free speech on the internet consistent with the First Amendment.
Our current grappling with the effects of social media fits a historical pattern. Each disruptive new communications technology — from the printing press, to film, radio, and television, to the internet itself — tends to generate widespread fear and pushback, and to prompt those in power to try to control the information environment and undermine freedoms. But as the Supreme Court has recognized, “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears.”
There is no denying that the largest social media companies — hosts to billions of speakers worldwide — police speech on their private platforms in ways that should worry all of us. But social media is still a relatively new phenomenon. And concerns about the way platforms distort the marketplace of ideas are even newer. Our immediate response should not be to chip away at or re-envision the First Amendment to allow more government regulation of private actors’ expressive activity.
As we confront the problems of the digital speech age, we must look for solutions that preserve expressive freedom for all. We won’t do that by weakening the First Amendment.