
Comments of
The Foundation for Individual Rights and Expression
Ari Cohn[*]
In the Matter of
Request for Public Comments Regarding Technology Platform
Censorship
FTC-2025-0023
May 21, 2025
The Foundation for Individual Rights and Expression (FIRE) submits this comment in response to the Federal Trade Commission’s Request for Public Comments Regarding Technology Platform Censorship, Docket No. FTC-2025-0023 (Feb. 19, 2025) (the “RPC”).
FIRE is a nonprofit, nonpartisan organization that defends the rights of all Americans to free speech and free thought. Since 1999, FIRE has protected expressive rights on campuses nationwide, and in June 2022, FIRE expanded its advocacy beyond the university setting to defend First Amendment rights in society at large. The RPC implicates several of FIRE’s advocacy priorities, including: (a) supporting a free and open Internet where expression can flourish unhindered by the political whims of government officials; (b) fighting against governmental pressure aimed at the expressive decisions of private speakers; and (c) safeguarding against attempts to subvert the First Amendment through the inappropriate application of consumer protection law.
1. Introduction
The RPC seeks public comment on a bevy of questions about the alleged “censorship” practices of technology platforms, ostensibly for the purpose of investigating “potentially unfair or deceptive acts or practices, or potentially unfair methods of competition.” Describing the RPC’s purpose at the Free State Foundation’s annual policy conference, FTC Chairman Andrew Ferguson said: “I’m not looking for censorship qua censorship. I’m looking for exercises of market power that might reveal themselves in censorship.”
But the RPC’s language — along with Ferguson’s own statements — makes clear that its ultimate target is the content moderation practices and decisions of social media platforms; market power and consumer welfare are simply the justifications available to the FTC in its misadventure. The RPC comes close to admitting as much when it declares that “limit[ing] users’ ability to share their ideas or affiliations freely and openly” constitutes a consumer harm.
The problem with any FTC action that would burden the content moderation decisions of social media platforms is that those decisions are the platforms’ own constitutionally protected expression. Recasting those decisions as “censorship” or “consumer harm” does not erase the First Amendment’s protection of such editorial judgments from governmental interference.
2. The First Amendment Protects Content Moderation Decisions
The FTC is not the first entity to attempt to evade the First Amendment by asserting government control over content moderation practices. As a result, we have some clarity from the Supreme Court about the First Amendment’s application to online speech platforms.
Evaluating a pair of state laws from Florida and Texas prohibiting social media platforms from removing or reducing the accessibility of certain user-generated content, the Supreme Court reaffirmed that a platform’s decisions about what speech to host or not host are expressive in nature and constitutionally protected:
The individual messages may originate with third parties, but the larger offering is the platform's. It is the product of a wealth of choices about whether — and, if so, how — to convey posts having a certain content or viewpoint. Those choices rest on a set of beliefs about which messages are appropriate and which are not (or which are more appropriate and which less so). And in the aggregate they give the feed a particular expressive quality.
This is so regardless of whether these decisions result from an alleged lack of competition. But buried within the assertion that platform “censorship” is a sign of market dominance or anticompetitive behavior is a revealing set of assumptions.
Last month, FTC Chairman Ferguson addressed the 2025 Stigler Center Antitrust and Competition Conference, focusing almost exclusively on the RPC. He explained: “[I]f a social media platform can make its product less attractive through censorious practices . . . without a proportionate loss in its customer base, there are strong reasons to suspect that it is not operating in a competitive environment to the detriment of its users, which could be a violation of the antitrust or consumer protection laws . . . .”
Chairman Ferguson simply assumes that content moderation is a consumer harm in and of itself because it makes a social media platform objectively “less attractive.” In support of this assumption, he offered yet another one:
Consumers and content creators want a platform committed to free speech and open exchange of ideas. . . . Because users of social media prefer it as a form of free expression and, as such, a facilitator of a marketplace of ideas, high concentration in the social media space leading to censorious practices poses an identifiable harm to consumer welfare.
While certainly some users might prefer a social media platform to be maximally permissive about the speech it will host, many do not. Users also have very different ideas of what constitutes a commitment to free speech by social media platforms. Thankfully, the First Amendment allows platforms to answer those questions as they see fit, and then users have the freedom to choose which approaches they prefer.
Unfortunately, the RPC appears aimed at substituting the FTC’s judgment for that of platforms and users.
And in those same remarks, Chairman Ferguson effectively admits as much, saying:
To my mind, the best way to preserve some of the promise of social media as a digital marketplace of ideas is to acknowledge that the quality of its product depends in part on its commitment to free speech to its users.
And:
Fortunately, the solution [is] . . . to ensure that social media platforms meet consumer demand for a product that protects freedom of speech, thereby facilitating a genuine marketplace of ideas . . . .
Chairman Ferguson characterizes discrimination against certain views or ideas — the core editorial function of content moderation — as inherently a consumer harm. To remedy that harm, he proposes compelling platforms to “protect freedom of speech,” i.e., forbidding them from engaging in such “discrimination.” The conclusion is inescapable: through the RPC and subsequent agency actions, his goal is to further the same government interest in “balancing” the speech ecosystem that the Moody Court described as “not valid, let alone substantial.”
Ferguson believes social media platforms should exist primarily as an unfettered “digital marketplace of ideas.” As a normative matter, FIRE agrees: it is preferable for speech platforms to remain as open as possible. But this is not for the government to decide. Whatever the FTC thinks social media ought to be, it is not free to dragoon platforms into service of such ends against their own editorial judgment. Government is “of course right to want an expressive realm in which the public has access to a wide range of views. . . . But the way the First Amendment achieves that goal is by preventing the government” from interfering in private editorial decisions to correct a perceived bias against certain views or ideas by “balancing” the speech ecosystem.
3. Competition Concerns Do Not Vitiate the First Amendment
The courts have been clear for decades: Market power and lack of competition do not invest the FTC with the power to ignore the First Amendment and impose a “course correction” on speech.
Defending its law requiring newspapers to publish replies from political candidates criticized in their pages, Florida argued that media consolidation had “place[d] in a few hands the power to inform the American people and shape public opinion,” and that “entry into the marketplace of ideas served by the print media” was “almost impossible.” The Supreme Court acknowledged this reality, but held that regulatory intervention was the greater evil — and prohibited by the First Amendment:
The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.
These principles apply with equal force to social media platforms, and courts across the country have accordingly rejected arguments that social media is somehow different enough to warrant a departure from established First Amendment precedent.
4. The RPC Targets Editorial Decisions, Not Anticompetitive Conduct
The First Amendment does not immunize companies from general regulation of their business practices. But consumer protection and antitrust law do not shield regulation aimed at expressive decisions from First Amendment scrutiny.
In Lorain Journal Co. v. United States, the Supreme Court upheld an injunction against a newspaper that refused to carry advertisements from any business that did not boycott a rival radio station. Because the newspaper refused to carry ads based not on objection to their content, but rather as a way to starve out competition and monopolize trade, the Court rejected a First Amendment challenge to the injunction. In the decades since, courts have reiterated that while the First Amendment shields “legitimate [content] selection activities . . . it does not provide a cloak for activities whose primary motivation is the destruction of the competitive marketplace.”
Content moderation decisions are not primarily — or at all — motivated by a desire to undermine competition. The RPC itself frames content moderation decisions as a potential symptom of anticompetitive conduct, not the act that constitutes it. But even this framing is inaccurate.
Rather, content moderation is — as the Supreme Court found — editorial in nature. In fact, platforms moderate content to make their product more attractive, not less, by curating the user experience to match what the platform thinks users want (in addition to accounting for the platform’s values and brand safety concerns).
5. Platforms’ Lofty Free Speech Claims Are Not Actionable Consumer Deception
The RPC’s questions imply that the FTC is evaluating whether a platform’s failure to live up to its promises of free speech constitutes a deceptive or unfair trade practice. It does not.
Content moderation decisions are subjective and value-laden endeavors, imbued with opinions about social policy, the value of certain ideas, and how particular views should be classified — the opposite of the quantifiable and provable assertions that underlie deceptive marketing claims. As former FTC Chairman Joe Simons noted to lawmakers, the FTC’s authority extends to “commercial speech [like false advertising], not political content curation.”
Terms of service for social media platforms often reserve the right to remove content falling under subjective conceptual categories, like “hateful” or “abusive.” As a result, to assess whether a content policy has been “accurately” or “consistently” applied, the FTC would have to supplant the platform’s subjective judgment with the government’s own “official” determination of what terms like “hateful” or “abusive” mean. The First Amendment forbids that result for good reason: it would empower the government to act as the arbiter of opinion and taste and place a breathtaking amount of expression at the whims of whoever happens to wield political power at any given time.
FIRE is currently litigating this very issue before the U.S. Court of Appeals for the Second Circuit. In Volokh v. James, FIRE is challenging a New York law requiring social media platforms to develop and publish policies for responding to “hateful conduct” and to provide a mechanism for users to complain about the same. Our motion for a preliminary injunction argued that the First Amendment prohibits the government from substituting its judgments about what expression should be permitted for a platform’s own:
Labeling speech as “hateful” requires an inherently subjective judgment, as does determining whether speech serves to “vilify, humiliate, or incite violence.” The Online Hate Speech Act’s definition is inescapably subjective — one site’s reasoned criticism is another’s “vilification”; one site’s parody is another’s “humiliation” — and New York cannot compel social media networks to adopt it. . . . The definition of “hateful,” and the understanding of what speech is “vilifying,” “humiliating,” or “incites violence,” will vary from person to person . . .
The First Amendment empowers citizens to make these value judgments themselves, because speech that some might consider “hateful” appears in a wide variety of comedy, art, journalism, historical documentation, and commentary on matters of public concern.
The district court agreed, noting:
[T]he Hateful Conduct Law requires a social media network to endorse the state's message about “hateful conduct”. To be in compliance with the law's requirements, a social media network must make a “concise policy readily available and accessible on their website and application” detailing how the network will “respond and address the reports of incidents of hateful conduct on their platform.” Implicit in this language is that each social media network's definition of “hateful conduct” must be at least as inclusive as the definition set forth in the law itself. In other words, the social media network's policy must define “hateful conduct” as conduct which tends to “vilify, humiliate, or incite violence” “on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” A social media network that devises its own definition of “hateful conduct” would risk being in violation of the law and thus subject to its enforcement provision.
These principles are well-established and their application is clear.
In 2004, the political advocacy groups MoveOn and Common Cause asked the FTC to act against Fox News’ use of the “Fair and Balanced” slogan, arguing that it was false and misleading. Then-FTC Chairman Tim Muris appropriately replied: “There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.”
And in 2020, the nonprofit advocacy group Prager University argued in a lawsuit that YouTube violated its free speech rights by restricting access to some of its videos and limiting its advertising. PragerU claimed that, as a result, the platform’s statements that “everyone deserves to have a voice” and “people should be able to speak freely” constituted deceptive marketing. The U.S. Court of Appeals for the Ninth Circuit rejected this claim, holding that the platform’s statements were “impervious to being quantifiable” and, as a result, non-actionable.
Regulatory efforts to “hold platforms to their content policy promises” will meet the same result.
6. Conclusion
The FTC lacks authority to directly regulate or punish content moderation decisions. Justifying this inquiry, it has inverted the First Amendment, claiming that government intrusion is the mechanism by which free speech is protected. But that is not the only irony.
Asked about how government pressure relates to competition concerns, Chairman Ferguson explained that market concentration makes government jawboning easier and more effective: “If there’s a wider smattering of market participants, it’s harder for the government to get on the horn with everyone and pressure them equally.”
And notably, he described what he viewed as the “threat” communicated by the government defendants in Murthy v. Missouri as
the same threat that a government can always potentially inflict on any marketplace participant, which is, “We can make your life difficult.” The regulators can show up, they can audit, they can investigate, they can cost you a lot of money, and the path of least resistance is: “Do what we say.” . . . The “or else” is, “We have a tremendous array of investigative tools. Those tools are expensive when applied to you even if we don’t win at the end of the day, so knuckle under.”
But that’s precisely the extralegal pressure the FTC seeks to bring to bear here.
FIRE urges the FTC to leave editorial decisions in the hands of speakers, where they properly belong.
May 21, 2025
Respectfully submitted,
/s/ Ari Cohn
Ari Cohn
Lead Counsel, Tech Policy
Foundation for Individual Rights
and Expression
P.O. Box 40128
Philadelphia, PA 19106
(215) 717-3473
ari.cohn@thefire.org
Notes
[*] Ari Cohn is Lead Counsel for Tech Policy at the Foundation for Individual Rights and Expression.
[1] See, e.g., Foundation for Individual Rights and Expression, The Kids Online Safety Act gives government ‘dangerous powers’ over Americans’ expression (July 24, 2024), https://www.thefire.org/news/kids-online-safety-act-gives-government-dangerous-powers-over-americans-expression (opposing legislation in part because it would hand the government a ready-made tool for censoring online content it wishes to target); Foundation for Individual Rights and Expression, FIRE Statement on Free Speech and Social Media, https://www.thefire.org/research-learn/fire-statement-free-speech-and-social-media (explaining the dangers of government intrusion into social media platforms’ content decisions).
[2] See, e.g., Brief of Amici Curiae Foundation for Individual Rights and Expression, National Coalition Against Censorship, and First Amendment Lawyers Association in Support of Respondents and Affirmance, Murthy v. Missouri, No. 23-411, 603 U.S. 43 (2024), https://www.thefire.org/research-learn/amicus-brief-support-respondents-and-affirmation-murthy-v-missouri (asking the Supreme Court to affirm the lower courts’ holding that government officials violated the First Amendment by coercing private content moderation decisions).
[3] See, e.g., Trump v. Selzer: Donald Trump Sues Pollster J. Ann Selzer for ‘Consumer Fraud’ Over Iowa Poll, Foundation for Individual Rights and Expression, https://www.thefire.org/cases/trump-v-selzer-donald-trump-sues-pollster-j-ann-selzer-consumer-fraud-over-iowa-poll (case materials relating to FIRE’s defense of a pollster in a lawsuit alleging that publishing an outlier poll constituted consumer fraud under Iowa law).
[4] Jericho Casper, FTC’s Ferguson Says Tech Censorship Practices May Violate Antitrust Law, Broadband Breakfast (Mar. 25, 2025), https://broadbandbreakfast.com/ftcs-ferguson-says-tech-censorship-may-violate-antitrust-law.
[5] Moody v. NetChoice, LLC, 603 U.S. 707, 738 (2024).
[6] Transcript: FTC Chairman Andrew Ferguson Keynote, ProMarket (Apr. 17, 2025), https://www.promarket.org/2025/04/17/transcript-ftc-chair-andrew-ferguson-keynote.
[7] Id.
[8] See, e.g., David Rand & Cameron Martel, We need content moderation: Meta is out of step with public opinion, The Hill (Jan. 28, 2025), https://thehill.com/opinion/technology/5109667-sorry-zuckerberg-americans-actually-do-want-expert-content-moderation (discussing poll results showing 84 percent of Americans think platforms should moderate “misleading” content).
[9] See, e.g., Danielle K. Citron & Jonathon Penney, Empowering Speech by Moderating It, Daedalus (Summer 2024), https://www.amacad.org/publication/daedalus/empowering-speech-moderating-it (arguing that moderating abusive and hateful content benefits free speech by increasing the willingness of its targets to participate on social media).
[10] Transcript: FTC Chairman Andrew Ferguson Keynote, ProMarket (Apr. 17, 2025), https://www.promarket.org/2025/04/17/transcript-ftc-chair-andrew-ferguson-keynote.
[11] Transcript: FTC Chairman Andrew Ferguson Keynote Part II, ProMarket (Apr. 21, 2025), https://www.promarket.org/2025/04/21/transcript-ftc-chairman-andrew-ferguson-keynote-part-ii (“But I think it’s equally true that the average consumer does not want a platform where particular ideas are just categorically excluded.”).
[12] Moody, 603 U.S. at 740.
[13] Moody, 603 U.S. at 740.
[14] Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 250–51 (1974).
[15] Id. at 258. See also Denver Area Educational Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727, 813 (1996) (Thomas, J., concurring in part and dissenting in part) (“We also flatly rejected the argument that the newspaper’s alleged media monopoly could justify forcing the paper to speak in contravention of its own editorial discretion.”).
[16] Brown v. Ent. Merchants Ass'n, 564 U.S. 786, 790 (2011) (“[W]hatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press, like the First Amendment's command, do not vary when a new and different medium for communication appears.”) (internal citations and quotation marks omitted).
[17] See NetChoice, LLC v. Fitch, 738 F. Supp. 3d 753 (S.D. Miss. 2024) (preliminarily enjoining a law requiring age verification and parental consent for minors as likely violating the First Amendment), vacated and remanded on other grounds, 134 F.4th 799 (5th Cir. 2025); NetChoice, LLC v. Griffin, No. 5:23-cv-05105, 2023 WL 5660155 (W.D. Ark. Aug. 31, 2023) (holding that a law requiring social media platforms to age-verify users and obtain parental consent for minors likely violates the First Amendment); see also NetChoice, LLC v. Reyes, 748 F. Supp. 3d 1105, 1129 (D. Utah 2024) (noting that a Utah law appeared to infringe on the First Amendment rights of users by requiring all users to undergo age verification and restricting the communicative functions of minors’ accounts).
[18] 342 U.S. 143 (1951).
[19] Id. at 156.
[20] Sunbelt Television, Inc. v. Jones Intercable, Inc., 795 F. Supp. 333, 336 (C.D. Cal. 1992) (emphasis added).
[21] Leah Nylen, Trump Aides Interviewing Replacement for Embattled FTC Chair, Politico (August 28, 2020), https://www.politico.com/news/2020/08/28/trump-ftc-chair-simons-replacement-404479.
[22] Volokh v. James, 656 F. Supp. 3d 431, 441 (S.D.N.Y. 2023) (internal citations omitted).
[23] Memorandum of Law in Support of Plaintiffs’ Motion for Preliminary Injunction at 13–14, Volokh v. James, No. 1:22-cv-10195-ALC, 656 F. Supp. 3d 431 (S.D.N.Y. Dec. 6, 2022).
[24] Volokh v. James, 656 F. Supp. 3d 431, 441 (S.D.N.Y. 2023) (internal citations omitted).
[25] Petition for Initiation of Complaint Against Fox News Network, LLC for Deceptive Practices Under Section 5 of the FTC Act, MoveOn.org and Common Cause (July 19, 2004), https://web.archive.org/web/20040724155405/http://cdn.moveon.org/content/pdfs/ftc_filing.pdf.
[26] Statement of Federal Trade Commission Chairman Timothy J. Muris on the Complaint Filed Today by MoveOn.org (July 19, 2004), https://www.ftc.gov/news-events/press-releases/2004/07/statement-federaltrade-commission-chairman-timothy-j-muris.
[27] PragerU Takes Legal Action Against Google and YouTube for Discrimination, PragerU (2020), https://www.prageru.com/press-release/prageru-takes-legal-action-against-google-and-youtube-for-discrimination/.
[28] Id.
[29] Prager Univ. v. Google LLC, 951 F.3d 991, 1000 (9th Cir. 2020).
[30] Transcript: FTC Chairman Andrew Ferguson Keynote Part II, ProMarket (Apr. 21, 2025), https://www.promarket.org/2025/04/21/transcript-ftc-chairman-andrew-ferguson-keynote-part-ii/.
[31] Id.