What’s caught in the wide net cast by hate speech policies?

Facebook has a right — and, according to many users, a responsibility — to police its content to protect users from harassment or threats. But does it judiciously exercise that power? Now that Facebook is deleting nearly 300,000 posts it marks as “hate speech” each month, it’s worth asking that question — and recent coverage detailing the way Facebook removes this content suggests that its policies, and similar ones employed in other forums, deserve closer scrutiny.

In June, ProPublica released a “trove of internal documents” revealing the ways in which Facebook trains its content reviewers to censor speech. There were some surprising revelations. For example, Facebook requires the removal of “hate speech” targeting “white men,” but not “black children.” Here’s how:

One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at “protected categories”—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
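To make that rule concrete, here is a minimal sketch (in Python) of the logic as ProPublica describes it: an attack triggers removal only when every trait of the targeted group falls into a protected category, so a single unprotected trait (an age, an occupation) turns the group into an unprotected “subset.” The category names and the is_protected_group helper below are illustrative assumptions, not Facebook’s actual code or terminology.

```python
# Illustrative sketch of the "protected categories vs. subsets" rule described by
# ProPublica -- not Facebook's actual code. The category names and the
# is_protected_group() helper are hypothetical, invented for this example.

PROTECTED_CATEGORIES = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def is_protected_group(trait_categories):
    """A group is shielded from removal only if every trait describing it maps
    to a protected category; any unprotected trait (age, occupation, behavior)
    demotes the group to an unprotected 'subset'."""
    return all(category in PROTECTED_CATEGORIES for category in trait_categories)

# The three groups from the training slide, by the categories of their traits:
print(is_protected_group({"race", "sex"}))        # "white men"      -> True
print(is_protected_group({"sex", "occupation"}))  # "female drivers" -> False
print(is_protected_group({"race", "age"}))        # "black children" -> False
```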

Facebook did not earn high praise for this news.

Then last month, The Washington Post focused on stories of individual Facebook users whose posts publicizing and decrying the racism they experienced earned them censorship and a stint in “Facebook jail,” in which users are temporarily locked out of their accounts as punishment for posts containing allegedly inappropriate content.

One user, Francie Latour, took to Facebook to share an encounter with a racist man in a grocery store who looked at her two children and said, “What the f--- is up with those f---ing n----r heads?” For Latour, Facebook offered a platform on which she could share her outrage; as she explained, “I couldn’t tolerate just sitting with it and being silent.” However, within 20 minutes, Facebook had deleted the post for violating its community standards. A day later, Facebook reposted it, citing its removal as an error, and offered no further explanation.

Zahra Billoo, executive director of the Council on American-Islamic Relations’ office for the San Francisco Bay area, shared a similar experience in November 2016, two weeks after Donald Trump’s election: Billoo posted a photo to four accounts — her personal page, her public page, CAIR’s national page, and a local CAIR chapter’s page — of a letter sent to a San Francisco mosque that read “He’s going to do to you Muslims what Hitler did to the Jews.” Facebook removed the post from two of the pages, locked Billoo out of her account for 24 hours after she reposted the censored image to her personal page, and then later restored the post to only one of the pages it had censored.

“How am I supposed to do my work of challenging hate if I can’t even share information showing that hate?” Billoo asked.

Last week, WIRED addressed the way Facebook’s hate speech policies affect the LGBTQ community, whose members sometimes face online censorship because of the slurs used against them.

“While these words are still too-often shouted as slurs, they’re also frequently ‘reclaimed’ by queer and transgender people as a means of self-expression,” WIRED explained. “However, Facebook’s algorithmic and human reviewers seem unable to accurately parse the context and intent of their usage.”

In one particularly outrageous example, Brooke Oliver, an attorney fighting Dykes on Bikes’ legal battle to reclaim the slur “dyke” in a trademark, was met with censorship from Facebook when she tried to post about her own case and victory. (For further discussion of trademarks, slurs, free speech, and Dykes on Bikes specifically, check out So to Speak: The Free Speech Podcast’s interview with The Slants, who earned a recent Supreme Court victory in a similar case.)

Likewise, a gay man reported being shut out of his Facebook account for seven days after posting an image of the 1970s lesbian magazine DYKE, two other LGBTQ users were banned or censored for calling themselves “faggots” or “tranny,” and two more claimed they were censored for discussing the comic strip “Dykes To Watch Out For” by graphic novelist Alison Bechdel.

At this point, Facebook’s censorship might not come as a shock to Bechdel, whose work has faced challenges on campus as well. In 2014, South Carolina Governor Nikki Haley approved a state budget containing a provision that punished two universities for offering LGBTQ content in their curricula, including Bechdel’s graphic novel “Fun Home.” And in 2015, a Crafton Hills College professor was asked by a complaining student and administrators to include trigger warnings on future syllabi after he chose to teach “Fun Home,” among other selections.

For its part, Facebook acknowledged that it often makes mistakes and attempted to address its treatment of “hate speech” in a recent “Hard Questions” discussion in its Newsroom. Facebook explained how it defines hate speech, but noted that the definition did not come easily:

Our current definition of hate speech is anything that directly attacks people based on what are known as their “protected characteristics” — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

There is no universally accepted answer for when something crosses the line. Although a number of countries have laws against hate speech, their definitions of it vary significantly.

The lesson to be learned from Facebook’s delete-first-and-ask-questions-later method? Regardless of intent, the policies created to combat hate often unwittingly hinder efforts to do so.

Even a cursory review of FIRE’s work shows that campus efforts to eliminate hate through censorship often have the same effect. In a particularly notable case from last December, a Winthrop University student took part in an anti-lynching art installation that featured small figures hanging from the trees outside Winthrop’s Tillman Hall alongside a sign reading “Tillman’s Legacy.” From the artist’s point of view, the display was meant to evoke the history and bigotry of South Carolina governor Benjamin Tillman, the hall’s namesake and the target of student protests. The display was removed immediately upon discovery, and the participating student was threatened with suspension or expulsion but was later cleared, thanks to FIRE’s help.

After the display’s discovery, Winthrop president Daniel Mahony emailed the campus, warned that there would be punishment for its creator, stated that this “clearly hurtful and threatening” installation would not be permitted on campus, and claimed its intent was “unclear” — even though its target had faced protests for months and the display contained an explanation that lynching was “Tillman’s Legacy.”

In March, posters satirizing anti-immigrant attitudes and asking Americans to do their “civic duty” and report “any and all illegal aliens” met the same fate at Gustavus Adolphus College: they were removed as hateful even though the student organization that created them had posted them with disclaimers explaining their intent.

That same month, administrators at the University of New Hampshire removed a student-led exhibit displaying instances of street harassment experienced by students, including phrases like “You look like sluts,” “I’ll buy you a drink if you suck me off,” and “Flash your boobs,” and allowed it to be reposted only after administrators scrubbed it of the words they found most offensive. The university justified the exhibit’s removal under its posting policy banning — you guessed it — hate speech.

In these recent cases, art displays were removed because of “hate” or “hurt” — despite the fact that they were created specifically to combat the same evils they were accused of spreading.

Additionally, in a study of colleges’ “bias response teams,” which often consist of police and/or administrators and which campuses employ in an effort to combat bias and hate, FIRE found that these teams were frequently asked to investigate students’ political speech. It’s not difficult to imagine that many people perceive their opponents’ political speech as falling under the “hate speech” umbrella and take advantage of the systems created to investigate and remove it.

Look at any hate speech policy, and you’re likely to find some speech swept within its confines that you believe doesn’t deserve censorship. Facebook — unlike public universities, which are bound by the First Amendment, or private universities that pledge to protect free speech — can censor any content it claims falls within its conception of “hate speech.” But it should think carefully first.

If we cannot discuss hateful words without censoring them, we cannot effectively draw attention to the fact that those words are used in public and private, or to the people who cast them. It’s difficult for society to confront ugliness within it if we silence those who are trying to prove that it exists.
