A federal court held yesterday that a police department’s use of Facebook’s content filtering tools violated the First Amendment. The ruling could impact how public colleges and universities regulate online speech, too.
The United States District Court for the Eastern District of Arkansas ruled that the Arkansas State Police unlawfully used Facebook’s content moderation tools to censor speech on the department’s Facebook page. The agency set Facebook’s profanity filter (which deletes comments containing certain objectionable words) to the strongest available setting and blacklisted a custom set of words it selected, including “pig,” “copper,” and “jerk.”
“[B]ut people are free to say those words,” wrote Chief United States District Judge D.P. Marshall Jr., in the court’s opinion. “The First Amendment protects disrespectful language.”
That’s why government actors — like public universities — act unlawfully when they create public forums for speech on platforms like Facebook and Twitter, only to employ filters and similar features that censor content or block certain users. The Arkansas court’s reasoning about the actions of its State Police mirrors concerns FIRE raised after surveying public universities’ use of these same tools.
As our 2020 report, “No Comment: Public Universities’ Social Media Use and the First Amendment,” explained, Facebook has two comment-filtering mechanisms available for any user — including state actors:
The profanity filter. Facebook’s “profanity filter” automatically hides visitors’ posts if they contain words on one of two lists—one for the “medium” setting and one for the “strong” setting. The words on these lists are not publicly disclosed, but are composed of “the most commonly reported words and phrases marked offensive” by Facebook users. The profanity filter is turned off by default.
The customized blacklist. The “page moderation” filter allows an administrator to establish a custom list of blocked words. Like the profanity filter, this filter automatically hides posts or comments if they contain a phrase on the custom list.
Our survey found that most public universities and colleges use the “profanity filter,” electing to block words above and beyond what Facebook removes under its own policies. Like the Arkansas State Police, many public institutions also use the customized blacklist feature to squelch specific criticism of their institution — from the University of Kentucky’s auto-deletion of the words “filthy” and “chickens” after a controversy involving animal rights activists, to the University of North Carolina’s prohibition on mentions of “Silent Sam,” to Clemson University’s censorship of comments criticizing a professor who made disparaging remarks about Republicans.
The Arkansas ruling represents one of the first times a federal court has weighed in on these issues, applying longstanding First Amendment jurisprudence to the relatively new context of social media. The court held that the State Police’s filters ran afoul of the First Amendment in two ways.
First, the department’s use of Facebook’s profanity filtering system constituted an overbroad restriction on speech, because the department had no knowledge of, or control over, the large and ever-changing secret list of words Facebook restricts. The department also selected the most restrictive of Facebook’s filtering options:
The State Police doesn’t know what words it is actually blocking. This information is apparently unavailable. Insofar as the testimony disclosed, Facebook’s community standards might filter out some words even if the State Police turned the page’s profanity filter off. The Court understands that Facebook has its own baseline community standards and changes them regularly. The State Police can’t do anything about that. Facebook’s control of which words it alone will and will not tolerate, though, doesn’t free the State Police from complying with the First Amendment in the filtering decisions the agency can make. In these circumstances, if further study yields no additional information, then the State Police must consider turning the profanity filter off, or selecting a weak or medium setting, supplemented with a narrowly tailored list of obscenities that it wants to block. The Court leaves the specifics to the agency. The Court holds only that the State Police’s current filter choice is not narrow enough for this designated public forum.
Second, the agency’s blacklisting of the specific words it chose — “pig”, “pigs”, “copper”, and “jerk” — had “no plausible explanation … other than impermissible viewpoint discrimination.” As the court explained:
The slang terms “pig”, “pigs”, and “copper” can have an anti-police bent, but people are free to say those words. The First Amendment protects disrespectful language. And “jerk” has no place on any prohibited-words list, given the context of this page, the agency’s justification for having a filter, and the harmlessness of that word. Though some amount of filtering is fine in these circumstances, the State Police’s current list of specific words violates the First Amendment.
The court’s decision should serve as a warning that these tools, when wielded by public entities, likely violate the First Amendment. As FIRE explained in objecting to these practices by public universities, public colleges — and other government actors — silence protected speech when they employ social media platforms’ filtering tools to “quietly remove critical posts, transforming the Facebook pages into less of a forum and more of a vehicle for positive publicity.”
Yesterday’s ruling confirms FIRE’s analysis. Public colleges and universities should consider themselves on notice.